Cognitive Radio Networking and Security

With the rapid growth of new wireless devices and applications over the past decade, the demand for wireless radio spectrum is increasing relentlessly. The development of cognitive radio networking provides a framework for making the best possible use of limited spectrum resources, and it is revolutionizing the telecommunications industry. This book presents the fundamentals of designing, implementing, and deploying cognitive radio communication and networking systems. Uniquely, it focuses on game theory and its applications to various aspects of cognitive networking. It covers in detail the core aspects of cognitive radio, including cooperation, situational awareness, learning, and security mechanisms and strategies. In addition, it provides novel, state-of-the-art concepts and recent results. This is an ideal reference for researchers, students, and professionals in industry who need to learn the applications of game theory to cognitive networking.

K. J. RAY LIU is a Distinguished Scholar-Teacher at the University of Maryland, College Park. He is the recipient of numerous honors and awards including the 2009 IEEE Signal Processing Society Technical Achievement Award, IEEE Signal Processing Society Distinguished Lecturer, National Science Foundation Presidential Young Investigator, and various best-paper awards.

BEIBEI WANG is currently a Senior Systems Engineer with Corporate Research and Development, Qualcomm Incorporated. She received her Ph.D. from the University of Maryland, College Park in 2009. Her research interests include dynamic spectrum allocation and management in cognitive radio systems, cooperative communications, multimedia communications, game theory and learning, and network security.
Cognitive Radio Networking and Security
A Game-Theoretic View

K. J. RAY LIU
University of Maryland, College Park
BEIBEI WANG Qualcomm Incorporated
CAMBRIDGE UNIVERSITY PRESS
Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo, Delhi, Dubai, Tokyo

Cambridge University Press
The Edinburgh Building, Cambridge CB2 8RU, UK

Published in the United States of America by Cambridge University Press, New York

www.cambridge.org
Information on this title: www.cambridge.org/9780521762311

© Cambridge University Press 2011

This publication is in copyright. Subject to statutory exception and to the provision of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published in print format 2010

ISBN-13 978-0-511-90418-9 eBook (Adobe Reader)
ISBN-13 978-0-521-76231-1 Hardback
Cambridge University Press has no responsibility for the persistence or accuracy of urls for external or third-party internet websites referred to in this publication, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.
In memory of my great-grandmother Lang-Xiang Liu (Kane Koda), August 4, 1899–April 11, 1992, for the eternal loving bond transcending generations. I always miss you. – K. J. Ray Liu

To my parents, Liangyuan Wang and Shuqin Huang, for their unconditional love and support. – Beibei Wang
Contents

Preface

Part I Cognitive radio communications and cooperation

1 Introduction to cognitive radios
  1.1 Introduction
  1.2 Fundamentals
  1.3 Spectrum sensing and analysis
  1.4 Dynamic spectrum allocation and sharing
  1.5 Cognitive radio platforms

2 Game theory for cognitive radio networks
  2.1 Introduction
  2.2 Non-cooperative games and Nash equilibrium
  2.3 Economic games, auction games, and mechanism design
  2.4 Cooperative games
  2.5 Stochastic games
  2.6 Summary

3 Markov models for dynamic spectrum allocation
  3.1 Introduction
  3.2 The system model
  3.3 Primary-prioritized Markov models
  3.4 Primary-prioritized dynamic spectrum access
  3.5 Simulation results and analysis
  3.6 Summary and bibliographical notes

4 Repeated open spectrum sharing games
  4.1 Introduction
  4.2 The system model
  4.3 Repeated spectrum sharing games
  4.4 Cooperation with optimal detection
  4.5 Cheat-proof strategies
  4.6 Simulation results
  4.7 Summary and bibliographical notes

5 Pricing games for dynamic spectrum allocation
  5.1 Introduction
  5.2 The system model
  5.3 Pricing-game models
  5.4 Collusion-resistant dynamic spectrum allocation
  5.5 Simulation results
  5.6 Summary and bibliographical notes

6 A multi-winner cognitive spectrum auction game
  6.1 Introduction
  6.2 The system model
  6.3 One-band multi-winner auctions
  6.4 Multi-band multi-winner auctions
  6.5 Simulation results
  6.6 Summary

7 Evolutionary cooperative spectrum sensing games
  7.1 Introduction
  7.2 The system model and spectrum sensing game
  7.3 Evolutionary sensing games and strategy analysis
  7.4 Simulation results and analysis
  7.5 Summary and bibliographical notes

8 Anti-jamming stochastic games
  8.1 Introduction
  8.2 The system model
  8.3 Formulation of the stochastic anti-jamming game
  8.4 Solving optimal policies of the stochastic game
  8.5 Simulation results
  8.6 Summary and bibliographical notes

9 Opportunistic multiple access for cognitive networks
  9.1 Introduction
  9.2 Network and channel models
  9.3 Multiple relays for the primary network
  9.4 Opportunistic multiple access for secondary nodes
  9.5 Summary and bibliographical notes

Part II Resource awareness and learning

10 Reinforcement learning for energy-aware communications
  10.1 Introduction
  10.2 The Markov decision process and dynamic programming
  10.3 Reinforcement learning
  10.4 Throughput maximization in point-to-point communication
  10.5 Multi-node energy-aware optimization
  10.6 Discussion
  10.7 Summary and bibliographical notes

11 Repeated games and learning for packet forwarding
  11.1 Introduction
  11.2 The system model and design challenge
  11.3 The repeated-game framework and punishment analysis
  11.4 Self-learning algorithms
  11.5 Simulation results
  11.6 Summary and bibliographical notes

12 Dynamic pricing games for routing
  12.1 Introduction
  12.2 The system model
  12.3 Pricing game models
  12.4 Optimal dynamic pricing-based routing
  12.5 Simulation studies
  12.6 Summary and bibliographical notes

13 Connectivity-aware network lifetime optimization
  13.1 Introduction
  13.2 The system model and problem formulation
  13.3 Facts from spectral graph theory
  13.4 Keep-connect algorithms
  13.5 The upper bound on the energy consumption
  13.6 The distributed implementation and learning algorithm
  13.7 Simulation results
  13.8 Summary

14 Connectivity-aware network maintenance and repair
  14.1 Introduction
  14.2 The system model
  14.3 Network maintenance
  14.4 Lifetime-maximization strategies
  14.5 Network repair
  14.6 Simulation results
  14.7 Summary and bibliographical notes

Part III Securing mechanism and strategies

15 Trust modeling and evaluation
  15.1 Introduction
  15.2 The foundations of trust evaluation
  15.3 Attacks and protection
  15.4 Trust-management systems in ad hoc networks
  15.5 Simulations
  15.6 Summary and bibliographical notes

16 Defense against routing disruptions
  16.1 Introduction and background
  16.2 Assumptions and the system model
  16.3 Security mechanisms
  16.4 Security analysis
  16.5 Simulation methodology
  16.6 Performance evaluation
  16.7 Summary and bibliographical notes

17 Defense against traffic-injection attacks
  17.1 Introduction
  17.2 Traffic-injection attacks
  17.3 Defense mechanisms
  17.4 Theoretical analysis
  17.5 Centralized detection with decentralized implementation
  17.6 Simulation studies
  17.7 Summary and bibliographical notes

18 Stimulation of attack-resistant cooperation
  18.1 Introduction
  18.2 The system model and problem formulation
  18.3 System description
  18.4 Analysis under attacks
  18.5 Simulation studies
  18.6 Summary and bibliographical notes

19 Optimal strategies for stimulation of cooperation
  19.1 Introduction
  19.2 Optimal strategies in packet-forwarding games
  19.3 System description and the game model
  19.4 Attack-resistant and cheat-proof cooperation-stimulation strategies
  19.5 Strategy analysis under no attacks
  19.6 Strategy analysis under attacks
  19.7 Discussion
  19.8 Simulation studies
  19.9 Summary

20 Belief evaluation for cooperation enforcement
  20.1 Introduction
  20.2 The system model and game-theoretic formulation
  20.3 Vulnerability analysis
  20.4 A belief-evaluation framework
  20.5 Simulation studies
  20.6 Summary and bibliographical notes

21 Defense against insider attacks
  21.1 Introduction
  21.2 System description and the game model
  21.3 Defense strategies with statistical attacker detection
  21.4 Optimality analysis
  21.5 Performance evaluation
  21.6 Summary

22 Secure cooperation stimulation under noise and imperfect monitoring
  22.1 Introduction
  22.2 Design challenges and game description
  22.3 Attack-resistant cooperation stimulation
  22.4 Game-theoretic analysis and limitations
  22.5 Simulation studies
  22.6 Discussion
  22.7 Summary and bibliographical notes

References
Index
Preface
Recent increases in demand for cognitive radio technology have driven researchers and technologists to rethink the implications of traditional engineering designs and approaches to communications and networking. One issue is the traditional thinking that one should try to obtain more bandwidth, more resources, and more of everything, whereas we have come to realize that the problem is not that we lack bandwidth or resources; rather, the bandwidth/resource utilization rates are in many cases too low. For example, TV bandwidth utilization in the USA is currently less than 6%, a figure similar to that in most developed countries. So why keep seeking new bandwidth when spectrum is already a scarce commodity? Why not utilize the wasted resource more effectively? Another reconsideration is that the optimization tools and solutions employed in engineering problems are often too rigid, offering little flexibility, adaptation, or learning. The superhighway is a typical example: during rush hours, one direction is completely jammed with bumper-to-bumper cars, while the other direction carries few cars on a mostly empty four-lane roadway. Much the same is true of networking. Rigid, inflexible protocols and strategies often leave wasted resources that could otherwise be efficiently utilized by others. Traditional communication and networking paradigms have taken little or no situational information into consideration, offering no cognitive processing, reasoning, learning, or adaptation. Along the same lines, such awareness also drives us to seek an optimization tool that can better enhance cooperation and resolve conflict, with learning capability. In the past decade we have witnessed the concept of cognitive networking and communications offering a revolutionary perspective in the design of modern communication infrastructure.
By cognitive communications and networking we mean that a communication system is composed of elements that can dynamically adapt themselves to the varying conditions, resources, environments, and users through interacting, learning, and reasoning to evolve and reach better operating points or a better set of system parameters to enhance cooperation and resolve conflict, if any. Those factors can include awareness of channel conditions, energy efficiency, bandwidth availability, locations, spectrum usage, and the connectivity of a network, to name just a few. Such design with awareness of situations, resources, environments, and users forms the core concept of the emerging field of cognitive communications and networking. Many new ideas have thus been inspired and have blossomed.
Cognitive radio, a special case of cognitive networking, has received a lot of attention recently. In contrast to traditional radio, cognitive radio is an intelligent wireless communication system that is aware of its surrounding environment and can adaptively change its operating parameters on the basis of interactions with the environment and users. With cognitive radio technology, future wireless devices are envisioned to be able to sense and analyze their surrounding environment and user conditions, learn from the environmental variations, and adapt their operating parameters to achieve highly reliable communications and efficient utilization of the spectrum resources. In a cognitive network, nodes are intelligent and have the ability to observe, learn, and act to optimize their performance. Since nodes generally belong to different authorities and pursue different goals, fully cooperative behaviors cannot be taken for granted. Instead, nodes will cooperate with others only when cooperation can improve their own performance. Often nodes with such selfish behaviors are regarded as rational. Therefore, a key problem in cognitive networks is how to stimulate cooperation among selfish nodes. To address the interactions of the dynamics among conditions, resources, environments, and users, game theory has naturally become an important emerging tool that is ideal and essential in studying, modeling, and analyzing the cognitive interaction process. This is especially true because a rational user in a cognitive network often behaves selfishly to maximize his/her own utility or welfare. There is of course no surprise here, since game theory has been a core tool in the study of economics and business/social models, in particular in the understanding of cooperation, interaction, and conflict, via which strategies and mechanisms can be developed to offer flexible and adaptable solutions. 
In recent years, game theory has found a major engineering challenge in the emerging development of cognitive communications and networking. In a certain sense, what is taking place in cognitive communications and networking can be viewed as a kind of information game, where optimal policies, strategies, and protocols are developed from the signals/information that users obtain through the interaction, cooperation, or competition of communication/networking devices, rather than from the economic and financial games played in human society. Not only can traditional games be applied to various networking scenarios, but new games can also be developed, since wireless communication is interference-limited rather than quantity-limited, as is the case for most economic models. Therefore we are seeing a new era of information games emerging and unfolding. This book aims at providing comprehensive coverage of the fundamental issues of cooperation, learning, adaptation, and security that should be understood in the design, implementation, and deployment of cognitive communication and networking systems, with a focus on game-theoretic approaches. Most of the material stems from our research over the past decade pursuing the realization of cognitive communications and secure networking. A goal of the book is to provide a bridge between advanced research on the one hand and classroom learning and self-study on the other by offering an emphasis on systematic game-theoretic treatments of cognitive communications and networking. In particular, we partition the book into three parts.
In Part I, we address the issues relating to cognitive radio communications and user cooperation. The users in a cognitive network will be assumed to be rational when cooperating with others, i.e., they behave selfishly in maximizing their own interest. In Chapter 1 we provide an introductory overview and survey of cognitive radio technology and related technical issues, including spectrum sensing, dynamic spectrum sharing and allocation, and cognitive radio platforms and standards, followed by a tutorial on fundamentals of game theory for cognitive networking in Chapter 2. We then focus on each important component of cognitive radio technology with more detailed treatments. Chapter 3 introduces Markov models for efficient dynamic spectrum allocation. Chapter 4 considers repeated open spectrum sharing games with cheat-proof strategies. The concept of pricing games is studied in Chapter 5 for dynamic spectrum allocation. A multi-winner spectrum auction game is presented in Chapter 6 to address the interference-limited situation of wireless communications. An evolutionary cooperative spectrum sensing game is then introduced in Chapter 7 in order for the reader to understand the best strategy for cooperation and its evolution when the situation is changing. It is followed by discussion of a stochastic anti-jamming game to design the optimal adaptive defense strategies against cognitive malicious attackers in Chapter 8. Finally, the issue of opportunistic multiple access for cognitive networks with cooperation of relays is studied in Chapter 9. In Part II, the focus is on resource awareness and learning. The discussion is extended beyond the narrow definition of a cognitive radio to the general notion of cognitive wireless communications and networking. Various situational awareness and learning scenarios are considered. In Chapter 10, reinforcement learning for energy awareness is discussed. 
Chapter 11 considers a repeated-game framework and learning for cooperation enforcement. Dynamic pricing games for routing are studied in Chapter 12. A graph-theoretic connectivity-aware approach for network lifetime optimization is presented in Chapter 13, followed by the issues relating to graph-theoretic network maintenance and repair in Chapter 14. Because of the interactions and cooperation in cognitive networks, security becomes a major issue. Therefore Part III is dedicated to the consideration of securing mechanisms and strategies. Since there is as yet no consensus notion of a security paradigm in this arena, this part is organized around three main themes: trust modeling and evaluation, defense mechanisms and strategies, and game-theoretic analysis of security. Some users, the attackers, are assumed to be malicious, i.e., their goal is to damage the system's performance rather than to maximize their own interest. Since security in centralized systems is less of an issue, most of the chapters are formulated in terms of distributed ad hoc networking. First, information-theoretic trust models and an evaluation framework are presented in Chapter 15 for network security, followed by defenses against a series of attacks, such as routing-disruption attacks in Chapter 16 and traffic-injection attacks in Chapter 17. Attack-resistant mechanisms and optimal strategies for cooperation stimulation are considered in Chapters 18 and 19, respectively. Finally, statistical securing approaches for cooperation stimulation and enforcement under noise and imperfect monitoring are presented in the last three chapters, with Chapter 20 focusing on belief evaluation and vulnerability analysis,
Chapter 21 on defense against insider attacks, and Chapter 22 on secure cooperation stimulation. This book is intended to be a textbook or a reference book for graduate-level courses on wireless communications and networking that cover cognitive radios, game theory, and/or security. We hope that the comprehensive coverage of cognitive communications, networking, and security with a holistic treatment from the view of information games will make this book a useful resource for readers who want to understand this emerging technology, as well as for those who are conducting research and development in this field. This book could not have been made possible without the research contributions by the following people: Charles Clancy, Amr El-Sherif, Zhu Han, Ahmed Ibrahim, Zhu Ji, Charles Pandana, Karim Seddik, Yan Sun, Yongle Wu, and Wei Yu. We would also like to thank all the colleagues whose work enlightened our thoughts and research and made this book possible. We can only stand on the shoulders of giants. K. J. Ray Liu Beibei Wang
Part I
Cognitive radio communications and cooperation
1
Introduction to cognitive radios
With the rapid deployment of new wireless devices and applications, the last decade has witnessed a growing demand for wireless radio spectrum. However, the policy of fixed spectrum assignment creates a bottleneck to more efficient spectrum utilization: a great portion of the licensed spectrum is severely under-utilized. This inefficient usage of the limited spectrum resources has motivated regulatory bodies to review their policy and to seek innovative communication technologies that can exploit the wireless spectrum in a more intelligent and flexible way. The concept of cognitive radio was proposed to address the issue of spectrum efficiency and has been receiving increasing attention in recent years, since it equips wireless users with the capability to optimally adapt their operating parameters according to the interactions with the surrounding radio environment. There have been many significant developments in the past few years concerning cognitive radios. In this chapter, the fundamentals of cognitive radio technology, including the architecture of a cognitive radio network and its applications, are introduced. The existing works on spectrum sensing are reviewed, and important issues in dynamic spectrum allocation and sharing are discussed in detail. Finally, an overview of the implementation of cognitive radio platforms and of standards for cognitive radio technology is provided.
1.1 Introduction

The usage of radio spectrum resources and the regulation of radio emissions are coordinated by national regulatory bodies such as the Federal Communications Commission (FCC). The FCC assigns spectrum to licensed holders, also known as primary users, on a long-term basis for large geographical regions. However, a large portion of the assigned spectrum remains under-utilized, as illustrated in Figure 1.1 [114]. The inefficient usage of the limited spectrum necessitates the development of dynamic spectrum access techniques, where users who have no spectrum licenses, also known as secondary users, are allowed to use the temporarily unused licensed spectrum. In recent years, the FCC has been considering more flexible and comprehensive uses of the available spectrum [116], through the use of cognitive radio technology [284]. Cognitive radio is the key technology that enables next-generation (xG) communication networks, also known as dynamic spectrum access (DSA) networks, to utilize the spectrum more efficiently in an opportunistic fashion without interfering
Figure 1.1 Spectrum usage, from FCC Report [7]: measured amplitude (dBm) versus frequency (MHz), showing bands in heavy use alongside bands with only medium or sparse use, and segments with less than 6% occupancy.
with the primary users. It is defined as a radio that can change its transmitter parameters according to the interactions with the environment in which it operates [114]. It differs from conventional radio devices in that a cognitive radio can equip users with cognitive capability and reconfigurability [160] [7]. Cognitive capability refers to the ability to sense and gather information from the surrounding environment, such as information about the transmission frequency, bandwidth, power, modulation, etc. With this capability, secondary users can identify the best available spectrum. Reconfigurability refers to the ability to rapidly adapt the operational parameters according to the sensed information in order to achieve the optimal performance. By exploiting the spectrum in an opportunistic fashion, cognitive radio enables secondary users to sense which portions of the spectrum are available, select the best available channel, coordinate spectrum access with other users, and vacate the channel when a primary user reclaims the spectrum-usage right. Considering the more flexible and comprehensive use of the spectrum resources, especially when secondary users coexist with primary users, traditional spectrum-allocation schemes and spectrum-access protocols are no longer applicable. New approaches to spectrum management need to be developed to solve new challenges in research related to cognitive radio, specifically in spectrum sensing and dynamic spectrum sharing. Since primary users have priority in using the spectrum, when secondary users coexist with primary users, they have to perform real-time wideband monitoring of the licensed spectrum to be used. When secondary users are allowed to transmit data simultaneously with a primary user, the interference temperature limit should not be violated [65].
If secondary users are allowed to transmit only when the primary users are not using the spectrum, they need to be aware of the primary users’ reappearance through various detection techniques, such as energy detection, feature detection, matched filtering, and coherent detection. Owing to noise uncertainty, shadowing, and multipath effects, the
detection performance of single-user sensing is quite limited. Cooperative sensing has been considered effective in improving detection accuracy by taking advantage of spatial and multiuser diversity. In cooperative spectrum sensing, how to select proper users for sensing, how to fuse an individual user's decision and exchange information, and how to perform distributed spectrum sensing are issues worth studying. In order to fully utilize the spectrum resources, efficient dynamic spectrum allocation and sharing schemes are very important. Novel spectrum-access control protocols and control-channel management should be designed to accommodate the dynamic spectrum environment while avoiding collision with a primary user. When a primary user reappears in a licensed band, a good spectrum-handoff mechanism is required to provide secondary users with smooth frequency transition with low latency. In multi-hop cognitive wireless networks, intermediate cognitive nodes should intelligently support relaying information and routing over a set of dynamically changing channels. In order to manage the interference to the primary users and the mutual interference among themselves, secondary users' transmission power should be carefully controlled, and their competition for the spectrum resources should also be addressed. There have been many significant developments relating to cognitive radios in the past few years. In Section 1.2, we overview the fundamentals of cognitive radio technology, including the architecture of a cognitive radio network and its applications. In Section 1.3, we review existing works on spectrum sensing, including interference temperature, different types of detection techniques, and cooperative spectrum sensing. In Section 1.4 we discuss several important issues in dynamic spectrum allocation and sharing.
Finally, we present in Section 1.5 several cognitive radio platforms that have been developed in research institutes and industry, and standards on cognitive radio technology.
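The energy detection and hard-decision fusion ideas mentioned above can be sketched in a few lines. The following is a toy illustration, not an implementation from this book: the sample values, the threshold, and the choice of OR-rule fusion are all assumptions made for the example.

```python
import random

def energy_detect(samples, threshold):
    """Declare the band occupied when the average received power
    exceeds a threshold (a hypothetical helper for this sketch)."""
    avg_power = sum(s * s for s in samples) / len(samples)
    return avg_power > threshold

def fuse_or(decisions):
    """Hard-decision OR-rule fusion for cooperative sensing: the band
    is declared occupied if any cooperating user detects the primary."""
    return any(decisions)

# Toy data: noise-only samples versus noise plus a strong primary signal.
random.seed(1)
noise = [random.gauss(0.0, 1.0) for _ in range(1000)]
signal = [n + 2.0 for n in noise]

threshold = 2.0  # set above the unit noise power, for this example only
print(energy_detect(noise, threshold))   # False: spectrum white space
print(energy_detect(signal, threshold))  # True: primary user active
print(fuse_or([False, True, False]))     # True under the OR rule
```

In practice the threshold is derived from a target false-alarm probability under the noise statistics; the fixed value here only makes the toy decision visible.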
1.2 Fundamentals

1.2.1 Cognitive radio characteristics

Further increases in service quality and channel capacity in wireless networks are severely limited by the scarcity of energy and bandwidth, which are the two fundamental resources for communications. Therefore, researchers are currently focusing their attention on new communications and networking paradigms that can intelligently and efficiently utilize these scarce resources. Cognitive radio (CR) is one critical enabling technology for future communications and networking that can utilize the limited network resources in a more efficient and flexible way. It differs from traditional communication paradigms in that the radios/devices can adapt their operating parameters, such as transmission power, frequency, modulation type, etc., to the variations of the surrounding radio environment [114]. Before CRs adjust their operating mode to environment variations, they must first gain the necessary information from the radio environment. This characteristic is referred to as cognitive capability [160], which enables CR devices to be aware of the transmitted
waveform, radio-frequency (RF) spectrum, communication-network type/protocol, geographical information, locally available resources and services, user needs, security policy, and so on. After CR devices have gathered the needed information from the radio environment, they can dynamically change their transmission parameters according to the sensed environment variations and achieve optimal performance, which is referred to as reconfigurability [160]. For instance, the frequencies of available spectrum bands may keep changing, due to primary users’ transmission. Secondary users equipped with CR will know which portion of the spectrum is not occupied by sensing the spectrum, and tune their transmitting frequencies to the spectrum white space.
1.2.2 Cognitive radio functions

A typical duty cycle of CR, as illustrated in Figure 1.2, includes detecting spectrum white space, selecting the best frequency bands, coordinating spectrum access with other users, and vacating the frequency when a primary user appears. Such a cognitive cycle is supported by the following functions:

• spectrum sensing and analysis;
• spectrum management and handoff;
• spectrum allocation and sharing.

Through spectrum sensing and analysis, CR can detect the spectrum white space (see Figure 1.3), i.e., a portion of the frequency band that is not being used by the primary users, and utilize the spectrum. On the other hand, when primary users start using the licensed spectrum again, CR can detect their activity through sensing, so that no harmful interference is generated due to secondary users' transmission.
Figure 1.2 The cognitive cycle: sensing (real-time wideband monitoring of the radio environment), analysis (rapid characterization of the environment), reasoning (determining the best response strategy), and adaptation (transition to new operating parameters).
Figure 1.3 Spectrum white space: viewed over power, frequency, and time, the spectrum used by primary users leaves gaps (white space) that secondary users can access opportunistically.
After recognizing the spectrum white space by sensing, the spectrum management and handoff function of CR enables secondary users to choose the best frequency band and hop among multiple bands according to the time-varying channel characteristics to meet various quality-of-service (QoS) requirements [7]. For instance, when a primary user reclaims his/her frequency band, the secondary user using the licensed band can direct his/her transmission to other available frequencies, according to the channel capacity determined by the noise and interference levels, path loss, channel error rate, holding time, etc. In dynamic spectrum access, a secondary user may share the spectrum resources with primary users, other secondary users, or both. Hence, a good mechanism for spectrum allocation and sharing is critical in order to achieve high spectrum efficiency. Since primary users own the spectrum rights, when secondary users coexist in a licensed band with primary users, the interference level due to secondary spectrum usage should be limited by a certain threshold. When multiple secondary users share a frequency band, their access should be coordinated in order to alleviate collisions and interference.
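As a sketch of the band-selection step in spectrum management and handoff, the snippet below ranks candidate white-space bands by their estimated Shannon capacity C = B log2(1 + SNR). The band data and field names are hypothetical; a real handoff decision would also weigh path loss, channel error rate, and holding time, as noted above.

```python
import math

def best_band(bands):
    """Return the candidate band with the highest estimated Shannon
    capacity C = B * log2(1 + SNR). Field names are hypothetical."""
    def capacity(b):
        snr = b["signal_mw"] / (b["noise_mw"] + b["interference_mw"])
        return b["bandwidth_hz"] * math.log2(1.0 + snr)
    return max(bands, key=capacity)

# Two sensed bands: B is both wider and cleaner, so it should win.
bands = [
    {"name": "A", "bandwidth_hz": 5e6, "signal_mw": 1.0,
     "noise_mw": 0.1, "interference_mw": 0.4},
    {"name": "B", "bandwidth_hz": 10e6, "signal_mw": 1.0,
     "noise_mw": 0.1, "interference_mw": 0.1},
]
print(best_band(bands)["name"])  # B
```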
1.2.3
Network architecture and applications With the development of CR technologies, secondary users who have not been allocated spectrum-usage rights can utilize the temporally unused licensed bands owned by the primary users. Therefore, in a CR network architecture, the components include both a secondary network and a primary network, as shown in Figure 1.4. A secondary network is a network composed of a set of secondary users and one or more secondary base stations. Secondary users can access the licensed spectrum only when it is not occupied by a primary user. The opportunistic spectrum access of secondary users is usually coordinated by a secondary base station, which is a fixed infrastructure component serving as a hub of the secondary network. Both secondary users and secondary base stations are equipped with CR functions. If several secondary networks share one common spectrum band, their spectrum usage may be coordinated by a central network entity, called a spectrum broker [372]. The spectrum broker collects operation information from each secondary network, and allocates the network resources in such a way as to achieve efficient and fair spectrum sharing.
Introduction to cognitive radios
Figure 1.4 Dynamic spectrum sharing: a primary network (primary base station and primary users) coexists with secondary users accessing the spectrum opportunistically.
A primary network is composed of a set of primary users and one or more primary base stations. Primary users are authorized to use certain licensed spectrum bands under the coordination of primary base stations. Their transmission should not be interfered with by secondary networks. Primary users and primary base stations are in general not equipped with CR functions. Therefore, if a secondary network shares a licensed spectrum band with a primary network, besides detecting the spectrum white space and utilizing the best spectrum band, the secondary network is required to immediately detect the presence of a primary user and direct the secondary transmission to another available band so as to avoid interfering with primary transmission. Because CRs are able to sense, detect, and monitor the surrounding RF environment such as interference and access availability, and reconfigure their own operating characteristics to best match outside situations, cognitive communications can increase spectrum efficiency and support higher-bandwidth service. Moreover, the capability of real-time autonomous decisions for efficient spectrum sharing also reduces the burdens of centralized spectrum management. As a result, CRs can be employed in many applications. First, the capacity of military communications is limited by radio spectrum scarcity because static frequency assignments freeze bandwidth into unproductive applications, where a large amount of spectrum is idle. CR using dynamic spectrum access can alleviate the spectrum congestion through efficient allocation of bandwidth and flexible spectrum access [284]. Therefore, CR can provide military users with adaptive, seamless, and secure communications. Moreover, a CR network can also be implemented to enhance public safety and homeland security. A natural disaster or terrorist attack can destroy existing communication infrastructure, so an emergency network to aid the search and rescue effort becomes indispensable. 
Since a CR can recognize spectrum availability and reconfigure itself for much more efficient communication, this provides public-safety personnel with
dynamic spectrum selectivity and reliable broadband communication to minimize information delay. Moreover, CR can facilitate interoperability between various communication systems. Through adapting to the requirements and conditions of another network, the CR devices can support multiple service types, such as voice, data, video, etc. Another very promising application of CR is in the commercial markets for wireless technologies. Since CR can intelligently determine which communication channels are in use and automatically switch to an unoccupied channel, it provides additional bandwidth and versatility for rapidly growing data applications. Moreover, CR constantly scans the entire band to look for and avoid interference from other users; whenever a channel in use by CR is reclaimed by a primary user or interfered with by another secondary user, the CR will instantly select a free channel from its constantly updated free-channel list. The adaptive and dynamic channel switching can help avoid spectrum conflict and expensive redeployment. In addition, since CR can utilize a wide range of frequencies, some of which have excellent propagation characteristics, CR devices are less susceptible to fading related to growing foliage, buildings, terrain, and weather. A CR configuration can also support mobile applications at low cost. When frequency changes are needed due to conflict or interference, the CR frequency-management software will change the operating frequency automatically even without human intervention. Additionally, the radio software can change the service bandwidth remotely to accommodate new applications. As long as no end-user hardware needs to be updated, product upgrades or configuration changes can be completed simply by downloading newly released radio management software. Thus, CR is viewed as the key enabling technology for future mobile wireless services anywhere, anytime, and with any device.
1.3
Spectrum sensing and analysis

Through spectrum sensing, CR can obtain necessary observations about its surrounding radio environment, such as the presence of primary users and the appearance of spectrum holes. Only with this information can CR adapt its transmitting and receiving parameters, such as transmission power, frequency, modulation schemes, etc., in order to achieve efficient spectrum utilization. Therefore, spectrum sensing and analysis is the first critical step toward dynamic spectrum management. In this section, we will discuss three different aspects of spectrum sensing. First is the interference temperature model, which measures the interference level observed at a receiver and is used to protect licensed primary users from harmful interference due to unlicensed secondary users. Then we will talk about spectrum hole detection to determine additional available spectrum resources and compare several detection techniques. Finally, we will discuss cooperative sensing with the help of multiple users or relays.
1.3.1
Interference temperature

Secondary users do not have a license for using the spectrum, and can use the licensed spectrum only when they cause no harmful interference to primary users. This requires
10
Introduction to cognitive radios
secondary users to be equipped with CRs, which can detect primary users' appearance and decide which portion of the spectrum is available. Such a decision can be made according to various metrics. The traditional approach is to limit the transmitter power of interfering devices, i.e., the transmitted power should be no more than a prescribed noise floor at a certain distance from the transmitter. However, due to the increased mobility and variability of RF emitters, constraining the transmitter power becomes problematic, since unpredictable new sources of interference may appear. To address this issue, the FCC Spectrum Policy Task Force [115] has proposed a new metric for interference assessment, the interference temperature, to enforce an interference limit perceived by receivers. The interference temperature is a measure of the RF power available at a receiving antenna to be delivered to a receiver, reflecting the power generated by other emitters and noise sources [236]. More specifically, it is defined as the temperature equivalent to the RF power available at a receiving antenna per unit bandwidth [66], i.e.,

$$ T_I(f_c, B) = \frac{P_I(f_c, B)}{kB}, \qquad (1.1) $$
where $P_I(f_c, B)$ is the average interference power in watts centered at $f_c$, covering bandwidth $B$ measured in hertz, and Boltzmann's constant $k$ is $1.38 \times 10^{-23}$ J/K. With the concept of interference temperature, the FCC further established an interference-temperature limit, which provides a maximum amount of tolerable interference for a given frequency band at a particular location. Any unlicensed secondary transmitter using this band must guarantee that its transmission plus the existing noise and interference will not exceed the interference-temperature limit at a licensed receiver. Since any transmission in the licensed band is viewed to be harmful if it would increase the noise floor above the interference-temperature limit, it is necessary that the receiver have a reliable spectral estimate of the interference temperature. This requirement can be fulfilled by using the multitaper method to estimate the power spectrum of the interference temperature with a large number of sensors [160]. The multitaper method can solve the tradeoff between bias and variance of an estimator and provide a near-optimal estimation performance. The large number of sensors can account for the spatial variation of the RF energy from one location to another. A subspace-based method to gain knowledge of the quality and usage of a spectrum band has also been proposed [454], in which information about the interference temperature is obtained by eigenvalue decomposition. Given a particular frequency band in which the interference-temperature limit is not exceeded, that band could be made available for secondary usage. If a regulatory body sets an interference-temperature limit $T_L$ for a particular frequency band with bandwidth $B$, then the secondary transmitter has to keep the average interference below $kBT_L$.
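Definition (1.1) and the resulting power cap are simple enough to compute directly. The following minimal sketch (function names and the numerical values are illustrative, not from the text) converts a measured average interference power into an interference temperature, and computes the cap $kBT_L$ on average interference for a given limit $T_L$:

```python
K_BOLTZMANN = 1.38e-23  # Boltzmann's constant, J/K

def interference_temperature(p_interference_w, bandwidth_hz):
    """Equation (1.1): temperature equivalent of the average interference
    power P_I over bandwidth B, i.e. T_I = P_I / (k * B)."""
    return p_interference_w / (K_BOLTZMANN * bandwidth_hz)

def interference_power_cap(t_limit_k, bandwidth_hz):
    """Maximum average interference power k * B * T_L tolerable in the band."""
    return K_BOLTZMANN * bandwidth_hz * t_limit_k

# A 1-MHz band observing 1 pW of average interference power:
t_i = interference_temperature(1e-12, 1e6)
print(t_i)  # ~7.25e4 K
# Power cap for an interference-temperature limit T_L = 1e5 K:
print(interference_power_cap(1e5, 1e6))  # -> 1.38e-12 W
```

Note the linear dependence on bandwidth: the same interference power spread over a wider band corresponds to a lower interference temperature.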
Therefore, the interference temperature serves as a cap placed on the potential RF energy that could appear on that band, and there have been some previous studies on how to implement efficient spectrum allocation with the interference-temperature limit. In [66], two interpretations of the interference-temperature models were analyzed, since there is ambiguity over which signals are considered interference, and
which frequency $f_c$ and bandwidth $B$ to use. The first is the ideal interference-temperature model, in which interference is limited specifically to primary signals. Assume a secondary transmitter is operating with average power $P$ in a band $[f_c - B/2,\, f_c + B/2]$, which overlaps $n$ primary signals with frequency $f_i$ and bandwidth $B_i$. Then, the interference-temperature limit will ensure that

$$ T_I(f_i, B_i) + \frac{MP}{kB_i} \le T_L(f_i), \quad \forall\, 1 \le i \le n, \qquad (1.2) $$
where $M$ represents attenuation due to fading and path loss between the secondary transmitter and the primary receiver. However, it is generally very difficult to distinguish primary signals from secondary signals or measure $T_I$ in the presence of a primary signal, unless some a priori information is known about the primary signal. Therefore, a generalized model is considered, which requires no a priori knowledge of the RF environment and limits the secondary transmitter's parameters, since the information about the primary receivers is unknown. In the generalized model, the interference-temperature limit is applied to the entire frequency range, i.e.,

$$ T_I(f_c, B) + \frac{MP}{kB} \le T_L(f_c). \qquad (1.3) $$
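Rearranging (1.3) gives the largest average power the secondary transmitter may use, $P \le kB\,(T_L - T_I)/M$. The sketch below (function name and numbers are our own illustration) computes this cap from the current interference temperature, the limit, and the attenuation factor:

```python
K_BOLTZMANN = 1.38e-23  # Boltzmann's constant, J/K

def max_secondary_power(t_limit_k, t_interference_k, bandwidth_hz, attenuation):
    """Rearranging (1.3): the largest average power P such that
    T_I + M*P/(k*B) stays at or below the limit T_L."""
    headroom_k = t_limit_k - t_interference_k
    if headroom_k <= 0:
        return 0.0  # band already at or above the interference-temperature limit
    return K_BOLTZMANN * bandwidth_hz * headroom_k / attenuation

# T_L = 1e5 K, current T_I = 6e4 K, 1-MHz band, attenuation M = 1e-3:
print(max_secondary_power(1e5, 6e4, 1e6, 1e-3))  # -> 5.52e-10 W
```

The temperature "headroom" $T_L - T_I$ thus plays the role of a spendable interference budget; when the band is already at its limit, no secondary power is admissible.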
With the interference-temperature-limit constraints, secondary users can select the optimal operating frequency, bandwidth, and power to maximize their capacity. Spectrum shaping has been proposed to improve spectrum efficiency [84] in CR networks. More specifically, using interference fitting, a CR senses the shape of the interference power spectrum and creates spectra inversely shaped with respect to the current interference environment in order to take advantage of gaps between the noise floor and the cap of the interference-temperature limit. Another application of spectrum shaping is to create notched power spectra that allow the usage of noncontiguous spectrum segments and avoid primary signals. Dynamic spectrum access with QoS and interference-temperature constraints has been studied in [472]. The objective of the scheme is to maximize the total throughput of all secondary users in a network, constrained by a minimum QoS requirement and a total-received-power requirement at a specified measurement point. Within the framework of the interference-temperature model, the work in [399] presented cooperative algorithms for selecting the most appropriate channel for transmission in a cognitive mesh network. Each mesh node computes a set of channels available for transmission without violating the interference-temperature limit in its interference range, and then uses a per-hop link-cost metric and an end-to-end routing metric to select channels for each hop on the path. Traditional interference constraints are usually binary and inefficient, since they consider only pair-wise sets of users. Non-binary constraints in line with the interference-temperature model have been studied in [39], which considered the effects of multiple interference sources from across the network. Under these constraints, simultaneous spectrum assignment for a number of secondary transmitters is achievable to improve spectrum utilization, while ensuring that the primary receivers can maintain
a certain QoS. The interference-temperature dynamics in a CR network were investigated in [400] using a hidden Markov model (HMM). The HMM is trained with the observed interference-temperature values using the Baum–Welch procedure. The trained HMM is shown to be statistically stable, and can be used as a sequence generator for secondary nodes to predict the interference temperature of the channel in the future. Secondary users can further utilize the prediction to aid their channel selection for transmission. A comprehensive analysis has been presented in [67], which quantifies how interference-temperature limits should be selected and how those choices affect the range of licensed signals. Assuming a fixed transmit bandwidth that overlaps a single primary signal, [67] first determines the base interference temperature seen by a set of network nodes, and then quantifies the total network capacity achievable by the underlay secondary network. It is shown that the capacity achieved is a simple function of the number of nodes, the average bandwidth, and the fractional impact on the primary signal’s coverage area. On the basis of the capacity analysis, [67] introduces a PHY/MAC protocol, interference-temperature multiple access (ITMA), by first sensing the RF environment and then determining the bandwidth and power needed in order to achieve a desired capacity. However, as observed in [67], the capacity achievable from the interference-temperature model is low, compared with the amount of interference with primary users it can cause. It is also argued by other commenting parties of the FCC that the interference-temperature approach is not a workable concept and would result in increased interference in the frequency bands where it would be used. Therefore, in May 2007 the FCC terminated the work on rule making for implementing the interference-temperature model.
1.3.2
Spectrum sensing

Spectrum sensing enables the capability of a CR to measure, learn, and be aware of the radio's operating environment, such as the spectrum availability and interference status. When a certain frequency band is detected as not being used by the primary licensed user of the band at a particular time in a particular position, secondary users can utilize the spectrum, i.e., there exists a spectrum opportunity. Therefore, spectrum sensing can be performed in the time, frequency, and spatial domains. With the recent development of beamforming technology, multiple users can utilize the same channel/frequency at the same time in the same geographical location. Thus, if a primary user does not transmit in all directions, extra spectrum opportunities can be created for secondary users in the directions where the primary user is not operating, and spectrum sensing also needs to take the angles of arrival (AoAs) into account [270] [100] [477]. Primary users can also use their assigned bands by means of spread spectrum or frequency hopping, and then secondary users can transmit in the same band simultaneously without severely interfering with primary users as long as they adopt an orthogonal code with respect to the codes adopted by primary users [180] [426]. This creates spectrum opportunities in the code domain, but meanwhile requires detection of the codes used by primary users as well as multipath parameters.
Table 1.1. Summary of main spectrum-sensing techniques

Energy detector
  Test statistics: energy of the received signal samples.
  Advantages: easy to implement; does not require prior knowledge about primary signals.
  Disadvantages: high false-alarm rate due to noise uncertainty; very unreliable in low-SNR regimes; cannot differentiate a primary user from other signal sources.

Feature detector
  Test statistics: cyclic spectrum density function of the received signal, or matching of general features of the received signal to the already-known primary-signal characteristics.
  Advantages: more robust against noise uncertainty and better detection in low-SNR regimes than energy detection; can distinguish among different types of transmissions and primary systems.
  Disadvantages: specific features, e.g., cyclostationary features, must be associated with primary signals; particular features may need to be introduced, e.g., to OFDM-based communications.

Matched filtering and coherent detection
  Test statistics: projected received signal in the direction of the already-known primary signal or a certain waveform pattern.
  Advantages: more robust against noise uncertainty and better detection in low-SNR regimes than the feature detector; requires fewer signal samples to achieve good detection.
  Disadvantages: requires precise prior information about certain waveform patterns of primary signals; high complexity.
A wealth of literature on spectrum sensing focuses on primary transmitter detection based on the local measurements of secondary users, since detecting the primary users that are receiving data is in general very difficult. According to the a priori information they require and the resulting complexity and accuracy, spectrum-sensing techniques can be categorized into the following types, which are summarized in Table 1.1.
1.3.2.1
Energy detector

Energy detection is the most common type of spectrum sensing because it is easy to implement and requires no a priori knowledge about the primary signal. Assume the hypothesis model of the received signal is

$$ H_0: y(t) = n(t), $$
$$ H_1: y(t) = h\,x(t) + n(t), \qquad (1.4) $$
where x(t) is the primary user’s signal to be detected at the local receiver of a secondary user, n(t) is the additive white Gaussian noise (AWGN), and h is the channel gain from the primary user’s transmitter to the secondary user’s receiver. H0 is a null hypothesis, meaning that there is no primary user present in the band, while H1 means the primary user’s presence. The detection statistics of the energy detector can be defined as the average (or total) energy of N observed samples
$$ T = \frac{1}{N} \sum_{t=1}^{N} |y(t)|^2. \qquad (1.5) $$
The decision on whether the spectrum is being occupied by the primary user is made by comparing the detection statistics $T$ with a predetermined threshold $\lambda$. The performance of the detector is characterized by two probabilities: the probability of a false alarm $P_F$ and the probability of detection $P_D$. $P_F$ denotes the probability that the hypothesis test decides $H_1$ while it is actually $H_0$, i.e.,

$$ P_F = \Pr(T > \lambda \mid H_0). \qquad (1.6) $$

$P_D$ denotes the probability that the test correctly decides $H_1$, i.e.,

$$ P_D = \Pr(T > \lambda \mid H_1). \qquad (1.7) $$
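Equations (1.5)–(1.7) can be illustrated with a minimal Monte Carlo sketch. The assumptions here are ours, not from the text: unit-variance real Gaussian noise, a primary signal modeled as Gaussian with power equal to the SNR, and arbitrary example values for $N$, $\lambda$, and the number of trials.

```python
import random, math

def energy_statistic(samples):
    """Equation (1.5): average energy of N observed samples."""
    return sum(abs(y) ** 2 for y in samples) / len(samples)

def estimate_pf_pd(n_samples=50, snr=1.0, threshold=1.5, trials=2000, seed=1):
    """Monte Carlo estimate of P_F = Pr(T > lambda | H0) and
    P_D = Pr(T > lambda | H1), for unit-variance Gaussian noise."""
    rng = random.Random(seed)
    false_alarms = detections = 0
    for _ in range(trials):
        # H0 true: noise only.
        noise = [rng.gauss(0, 1) for _ in range(n_samples)]
        if energy_statistic(noise) > threshold:
            false_alarms += 1
        # H1 true: Gaussian "primary signal" plus noise.
        received = [rng.gauss(0, 1) + math.sqrt(snr) * rng.gauss(0, 1)
                    for _ in range(n_samples)]
        if energy_statistic(received) > threshold:
            detections += 1
    return false_alarms / trials, detections / trials

pf, pd = estimate_pf_pd()
# With SNR = 1 the received power doubles under H1, so a threshold between
# 1 and 2 separates the hypotheses well: P_F is small and P_D is near 1.
```

Lowering the threshold raises both $P_D$ and $P_F$; the choice of $\lambda$ trades one against the other, which motivates the threshold-design literature discussed next.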
A good detector should ensure a high detection probability $P_D$ and a low false-alarm probability $P_F$, or it should optimize the spectrum-usage efficiency (e.g., the QoS of a secondary-user network) while guaranteeing a certain level of primary-user protection. To this end, various approaches to improve the efficiency of energy-detector-based spectrum sensing have been proposed. Since the detection performance is very sensitive to the noise-power-estimate error [422], an adaptive noise-level-estimation approach is proposed in [316], where a multiple-signal classification algorithm is used to decouple the noise and signal subspaces and estimate the noise floor. A constant false-alarm-rate threshold is further computed to study the spectrum occupancy and its statistics. A well-chosen detection threshold can minimize spectrum-sensing error, provide the primary user with enough protection, and fully enhance spectrum utilization. In [441] the detection threshold is optimized iteratively to satisfy the requirement on the false-alarm probability. Threshold optimization subject to spectrum-sensing constraints is investigated in [320], where an optimal adaptive threshold level is developed by utilizing the spectrum-sensing error function. In [259], forward methods for energy detection are proposed, for which the noise power is unknown and is adaptively estimated. In order to find and localize narrowband signals, a localization algorithm based on double-thresholding (LAD) is proposed in [437], where the usage of two thresholds can provide signal separation and localization. The LAD method involves blind narrowband-signal detection, requiring neither information about the noise level nor prior knowledge of the narrowband signals. The LAD method with normalized thresholds can reduce computational complexity without performance loss, and the estimation of the number of narrowband signals is made more accurate by combining adjacent clusters.
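A common way to pick the threshold for a target false-alarm rate is a Gaussian approximation of the statistic (1.5) under $H_0$: for real Gaussian noise of variance $\sigma^2$, $T$ is approximately Normal with mean $\sigma^2$ and variance $2\sigma^4/N$, giving $\lambda = \sigma^2\,(1 + Q^{-1}(P_F)\sqrt{2/N})$. This is a generic constant-false-alarm-rate sketch of ours, not the specific method of any reference cited above, and it also makes the noise-power sensitivity explicit: an error in the $\sigma^2$ estimate shifts $\lambda$ and hence the achieved $P_F$.

```python
from statistics import NormalDist

def energy_threshold(noise_var, n_samples, target_pf):
    """Gaussian-approximation threshold for the average-energy statistic (1.5):
    under H0, T ~ Normal(sigma^2, 2*sigma^4/N) for real Gaussian noise, so
    lambda = sigma^2 * (1 + Q^{-1}(P_F) * sqrt(2/N))."""
    q_inv = NormalDist().inv_cdf(1.0 - target_pf)  # Q^{-1}(P_F)
    return noise_var * (1.0 + q_inv * (2.0 / n_samples) ** 0.5)

# Unit noise power, 1000 samples, 1% false-alarm target:
lam = energy_threshold(1.0, 1000, 0.01)
print(round(lam, 3))  # -> 1.104
```

A 1-dB overestimate of the noise power scales $\lambda$ by about 26%, which is exactly the kind of noise-uncertainty sensitivity that the adaptive noise-estimation schemes above address.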
The sensing-throughput tradeoff of energy detection is studied in [265], where the duration of the sensing period in a time slot is optimized to maximize the achievable throughput for the secondary users under the constraint that the primary users are sufficiently protected. A novel wideband spectrum-sensing technique based on energy detection is introduced in [355], which jointly detects the signal energy levels over multiple frequency bands in order to improve the opportunistic throughput of CRs and reduce their interference with the primary systems. The analysis in [443] shows that detection of narrowband transmission using energy
detection over multi-band OFDM is feasible, and can be further extended to cover more complex systems. Experimental studies on energy-detection-based spectrum access have also been conducted in the literature. [365] proposes spectrum-sensing algorithms for localizing transmitters in the same band by applying triangulation techniques based on sensed power at each sensor, and addresses the issue of how to find the spectral occupancy of multiple transmitters over a wide range of frequencies. In [150], actual measurements of the channel-access pattern in the 2.4-GHz ISM band are taken. This approach uses a vector signal analyzer to collect complex baseband data, and uses the collected data to statistically characterize the idle and busy periods of the channel. The performance of energy detection under various channel conditions is studied in [97], where the probability of detection under conditions of AWGN and fading channels is derived. Besides its low computational and implementation complexity and short detection time, there also exist some challenges in designing a good energy detector. First, the detection threshold depends on the noise power, which may change over time and hence is difficult to measure precisely in real time. In low-SNR regimes where the noise power is very high, reliable identification of a primary user is not even possible [424]. Moreover, an energy detector can only decide the primary user’s presence by comparing the received signal energy with a threshold; thus, it cannot differentiate the primary user from other unknown signal sources. Hence, it can trigger false alarms frequently.
1.3.2.2
Feature detectors

There are specific features associated with the information transmission of a primary user. For instance, the statistics of the transmitted signals in many communication paradigms are periodic because of the inherent periodicities such as the modulation rate, carrier frequency, etc. Such features are usually viewed as cyclostationary features, on the basis of which a detector can distinguish cyclostationary signals from stationary noise. In a more general sense, features can refer to any intrinsic characteristics associated with a primary user's transmission, as well as the cyclostationary features. For example, center frequencies and bandwidths [476] extracted from energy detection can also be used as reference features for classification and determining a primary user's presence. In this section, we will introduce cyclostationary feature detection followed by a generalized feature detection. Cyclostationary feature detection was first introduced in [132]. Since, in most communication systems, the transmitted signals are modulated signals coupled with sine-wave carriers, pulse trains, hopping sequences, or cyclic prefixes, while the additive noise is generally wide-sense stationary (WSS) with no correlation, cyclostationary feature detectors can be utilized to differentiate noise from primary users' signals [47] [318] [56] [376] [71] [353] [319] [144] [188] [222] [392] [397] [416] and distinguish among different types of transmissions and primary systems [250]. Features of the primary user's signal can be extracted by a cyclostationary detector to aid real-time detection of the primary user. For instance, in a spectrum-pooling system where
secondary users rent the temporarily used licensed bands, secondary users should monitor the channel-allocation information (CAI) continuously in order to vacate the frequency bands upon the appearance of a primary user. Exploiting the different cyclostationary properties of the primary and secondary signals [319] provides secondary users with immediate awareness of a primary user. In contrast to an energy detector, which uses time-domain signal energy as test statistics, a cyclostationary feature detector performs a transformation from the time domain into the frequency feature domain and then conducts a hypothesis test in the new domain. Specifically, define the cyclic autocorrelation function (CAF) of the received signal $y(t)$ by

$$ R_y^{\alpha}(\tau) = E[y(t+\tau)\, y^*(t-\tau)\, e^{j2\pi\alpha t}], \qquad (1.8) $$
where $E[\cdot]$ is the expectation operation, $*$ denotes complex conjugation, and $\alpha$ is the cyclic frequency. Since periodicity is a common property of wireless modulated signals, while noise is WSS, the CAF of the received signal also demonstrates periodicity when the primary signal is present. Thus, we can represent the CAF using its Fourier-series expansion, called the cyclic spectrum density (CSD) function, expressed as [132]

$$ S(f, \alpha) = \sum_{\tau=-\infty}^{\infty} R_y^{\alpha}(\tau)\, e^{-j2\pi f \tau}. \qquad (1.9) $$
The CSD function has peaks when the cyclic frequency $\alpha$ equals the fundamental frequencies of the transmitted signal $x(t)$, i.e., $\alpha = k/T_x$ with $T_x$ being the period of $x(t)$. Under hypothesis $H_0$, the CSD function does not have any peaks since the noise is not cyclostationary. A peak detector [141] or a generalized likelihood ratio test [318] [250] can be further used to distinguish between the two hypotheses. Different primary communication systems using different air interfaces (modulation, multiplexing, coding, etc.) can also be differentiated by their different properties of cyclostationarity. However, when OFDM becomes the air interface, as suggested by several wireless communication standards, identification of different systems may become problematic, since the features due to the nature of OFDM signaling are likely to be similar or even identical. To address this issue, particular features need to be introduced into OFDM-based communications. In [272], methods that assign different properties of cyclostationarity to different systems are considered. The OFDM signal is configured before transmission so that its CAF outputs peak at certain pre-chosen cycle frequencies, and the difference in these frequencies is used to distinguish among several systems operating within the same OFDM air interface. A similar approach is considered in [392]. Compared with energy detectors that are prone to high false-alarm probability due to noise uncertainty and cannot detect weak signals in noise, cyclostationary detectors are good alternatives because they can differentiate noise from primary users' signal and have better detection robustness in a low-SNR regime. However, the computational complexity and the significant amount of observation time required for adequate
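The peak behavior of the CAF can be checked numerically. The sketch below (our own illustration; signal, lag, and cyclic-frequency grid are arbitrary choices) forms the sample estimate of (1.8) at lag zero for a pure tone at $f_0 = 0.1$ cycles/sample: since $y^2(t)$ contains a component at $2f_0$, the CAF magnitude peaks at cyclic frequency $\alpha = 2f_0 = 0.2$ and is essentially zero at nearby off-cycle frequencies, exactly the signature a peak detector looks for.

```python
import cmath, math

def caf_estimate(y, alpha, tau=0):
    """Sample estimate of the cyclic autocorrelation (1.8) at lag tau:
    R_y^alpha(tau) ~ (1/N) * sum_t y(t+tau) * conj(y(t-tau)) * exp(j*2*pi*alpha*t)."""
    n = len(y)
    acc = 0j
    for t in range(tau, n - tau):
        acc += y[t + tau] * y[t - tau].conjugate() * cmath.exp(2j * math.pi * alpha * t)
    return acc / n

# A tone at f0 = 0.1 cycles/sample is cyclostationary with cyclic
# frequency alpha = 2*f0; stationary noise would show no such peak.
n, f0 = 1000, 0.1
tone = [complex(math.cos(2 * math.pi * f0 * t)) for t in range(n)]
for alpha in (0.15, 0.2, 0.25):
    print(alpha, round(abs(caf_estimate(tone, alpha)), 3))
# alpha = 0.2 yields |R| ~ 0.25; the off-cycle frequencies yield ~0.
```

In practice the CAF is evaluated over a grid of lags and cyclic frequencies and transformed via (1.9), but the lag-zero slice already shows why cyclostationary detection separates modulated signals from WSS noise.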
detection performance prevent wide use of this approach. A spectrum-sensing method based on maximum cyclic autocorrelation selection has been proposed in [267], where the peak and non-peak values of the cyclic autocorrelation function are compared to determine whether the primary signal is present or not. This method does not require noise-variance estimation, and is robust against noise uncertainty and interference signals. Frequency-selective fading and uncertain noise impair the robustness of cyclostationary signal detection in low-SNR environments. Run-time noise calibration has been considered in [424] and [423], in order to improve detector robustness. The method exploits the in-band measurements at frequencies where a pilot is absent to calibrate the noise statistics at the pilot frequencies. By combining neural networks for signal classification with cyclic spectral analysis, a more efficient and reliable classifier is developed in [120]. Since a large amount of processing is performed offline using neural networks, the online computation for signal classification is greatly reduced. Generalized feature detection refers to detection and classification that extracts feature information other than the cyclostationarity due to the modulated primary signals, such as the transmission technologies used by a primary user, the amount of energy and its distribution across different frequencies [432] [280], the channel bandwidth and its shape [348] [476], the power spectrum density (PSD) [356], the center frequency [476], the idle guard interval of OFDM [230], an FFT-type feature [256], etc. By matching the features extracted from the received signal to the a priori information about primary users' transmission characteristics, primary users can be identified.
Using the packet-length information extracted from the packet header [150], a continuous-time semi-Markov traffic model is developed that not only captures the WLAN's behavior but also helps in designing the optimal control policies for dynamic spectrum access. Location information of the primary signal is also an important feature that can be used to distinguish a primary user from other signal sources. In a primary-user-emulation attack, a malicious secondary user transmits signals whose characteristics emulate those of the primary signals. A transmitter verification scheme is proposed in [74] to secure trustworthy spectrum sensing based on location verification of the primary user. Different spectrum environments are analyzed and characterized in [269], which derives the closed-form probability distributions for the availability of a given frequency channel (narrowband and wideband) and energy appearing in the channel. Constructing a probability distribution of the spectrum environment enables researchers to perform spectrum analysis without requiring a large database and make provable assertions in various types of radio environments.
1.3.2.3
Matched filtering and coherent detection

If secondary users know information about a primary user's signal a priori, then the optimal detection method is matched filtering [349], since a matched filter can correlate the already-known primary signal with the received signal to detect the presence of the primary user and thus maximize the SNR in the presence of additive stochastic noise. The merit of matched filtering is the short time it requires to achieve a certain detection performance such as a low probability of missed detection and false alarms
Introduction to cognitive radios
[375] [458], since a matched filter needs fewer received-signal samples. However, the required number of signal samples also grows as the received SNR decreases, so there exists an SNR wall [424] for a matched filter as well. In addition, its implementation complexity and power consumption are too high [71], because the matched filter needs receivers for all signal types, and the corresponding receiver algorithms have to be executed. Matched filtering requires perfect knowledge of the primary user's signal, such as the operating frequency, bandwidth, modulation type and order, pulse shape, and packet format. If wrong information is used for matched filtering, the detection performance is degraded significantly.

On the other hand, most wireless communication systems exhibit certain patterns, such as pilot tones, preambles, midambles, and spreading codes, which are used to assist control, equalization, synchronization, and continuity, or for reference purposes. Even though perfect information about a primary user's signal might not be attainable, if a certain pattern is known from the received signals, coherent detection (or waveform-based sensing) can be used to decide whether a primary user is transmitting or not [404] [82] [274]. As an example, the procedure of coherent detection using a pilot pattern is explained as follows [404]. There are two hypotheses in coherent detection:

H0 : y(t) = n(t),
H1 : y(t) = √ε x_p(t) + √(1 − ε) x(t) + n(t),        (1.10)
where x_p(t) is a known pilot tone, ε is the fraction of energy allocated to the pilot tone, x(t) is the desired signal, which is assumed to be orthogonal to the pilot tone, and n(t) is additive white noise. The test statistic of the coherent detection is defined as the projected received signal in the pilot direction, i.e.,

T = (1/N) Σ_{t=1}^{N} y(t) x̂_p(t),        (1.11)
where x̂_p is a normalized unit vector in the direction of the pilot tone. As N increases, the test statistic T under hypothesis H1 becomes much greater than that under H0. By comparing T with a predetermined detection threshold, one can decide on the presence of a primary user.

Coherent detection can also be performed in the frequency domain [356]. One can express the binary hypothesis test using the PSD of the received signal, S_Y(ω), and distinguish between H0 and H1 by exploiting the unique spectral signature exhibited in S_X(ω). For instance, the PSD of the received signal can be estimated from a periodogram, which uses an n-point received signal to obtain the squared magnitudes of the n-point discrete-time Fourier transform, S_Y^(n)(k), k = 0, 1, ..., n − 1. If the n-point sampled PSD of the signal, S_X^(n)(k), is determined at the receiver by exploiting the a priori known spectral features, the presence of a TV signal can be detected [356] using the following test statistic:
1.3 Spectrum sensing and analysis
T_n = (1/N) Σ_{k=0}^{n−1} S_Y^(n)(k) S_X^(n)(k).        (1.12)
It is also shown that frequency-domain coherent detection with a priori known features can detect TV signals reliably from AWGN at very low SNR. Coherent detection is shown to be robust against noise uncertainty, and not limited by the SNR wall [404] as long as N is large enough. Moreover, coherent detection outperforms energy detection in the sensing convergence time [82] [413], because the sensing time of energy detection increases quadratically with the SNR reduction, while that of coherent detection increases only linearly [413]. However, information about the waveform patterns is a prerequisite for implementing coherent detection; the more precise information a coherent detector has, the better the sensing performance will be. Moreover, utilization of a wide range of spectrum requires a frequency synthesizer, which, however, generates a square-wave local oscillator signal containing many harmonics (termed harmonic images) in addition to the fundamental frequency and degrades the performance of the primary-user detection. The use of a frequency offset to decorrelate the desired signals from the harmonic images is proposed in [285] in order to reject the harmonic images and improve spectrum sensing.
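The pilot-projection statistic of (1.11) is easy to simulate. The sketch below uses an illustrative sinusoidal pilot, pilot-energy fraction, and signal construction (all assumptions, not taken from [404]) to show the statistic T rising well above its noise-only level when the pilot is present.

```python
import numpy as np

def coherent_statistic(y, pilot):
    # T = (1/N) * sum_t y(t) * x_hat_p(t), with x_hat_p the unit vector
    # in the pilot direction, as in (1.11).
    x_hat = pilot / np.linalg.norm(pilot)
    return float(np.dot(y, x_hat)) / len(y)

rng = np.random.default_rng(1)
N = 10_000
t = np.arange(N)
pilot = np.cos(2 * np.pi * 0.05 * t)         # known pilot pattern
data = np.cos(2 * np.pi * 0.13 * t + 1.0)    # desired signal, ~orthogonal to pilot
eps = 0.1                                    # fraction of energy in the pilot

y_h1 = np.sqrt(eps) * pilot + np.sqrt(1 - eps) * data + rng.standard_normal(N)
y_h0 = rng.standard_normal(N)

T1 = coherent_statistic(y_h1, pilot)   # pilot present: large statistic
T0 = coherent_statistic(y_h0, pilot)   # noise only: statistic near zero
```

Comparing T against a threshold placed between the two levels implements the decision rule described above; as the text notes, the gap between the H1 and H0 statistics widens as N grows.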
1.3.2.4 Other techniques

Several other spectrum-sensing techniques have been proposed in the recent literature; some of them are variations inspired by the above-mentioned sensing techniques.

Statistical-covariance-based sensing. Since the statistical covariance matrices of the received signal and noise are generally different, this difference is used in [503] [501] to differentiate the desired signal component from background noise. The eigenvalues of the covariance matrix of the received signal can also be used for primary detection [502]. The distribution of the ratio of the maximum eigenvalue to the minimum eigenvalue is characterized using random-matrix theory [425], from which the detection threshold can be determined. Simulations on detecting digital TV signals show that these methods based on statistical covariances are more robust against noise uncertainty, while requiring no a priori information about the signal, the channel, or the noise power.

Filter-based sensing. Application of a specific class of filter banks is proposed in [113] for spectrum sensing in CR systems. When filter banks are used for multicarrier communications in CR networks, spectrum sensing can be performed merely by measuring the signal power at the outputs of the subcarrier channels, with virtually no additional computational cost. The multitaper method [160] can also be thought of as a filter-bank spectrum estimation with multiple filter banks.

Fast sensing. By utilizing the theory of quickest detection, in which a statistical test is performed to detect the change of distribution in spectrum-usage observations as quickly as possible, agile and robust spectrum sensing is achieved in [251]. The unknown parameters after a primary user appears can be estimated using the proposed successive refinement, which combines both generalized-likelihood-ratio and parallel cumulative-sum tests. An efficient sensing sequence is developed in [239] to reduce
the delay due to spectrum-opportunity discovery. The probability that a frequency band is available at sensing, the sensing duration, and the channel capacity are the three factors that determine the sensing sequence.

Learning/reasoning-based sensing. An approach based on reinforcement learning for the detection of spectral resources in a multi-band CR scenario is investigated in [24], where the optimal detection strategy is obtained by solving a Markov decision process (MDP). A MAC-layer spectrum-sensing algorithm using knowledge-based reasoning is proposed in [446], where the optimal range of channels to sense finely is determined through proactive fast sensing and channel-quality information.

Measurement-based sensing and modeling. By collecting data over a long period of time at many base stations, [455] provides a unique analysis of cellular primary usage. The collected data are dissected along different dimensions to characterize the primary usage; it is found that a random-walk process can be used to model the aggregate cell capacity, while the commonly adopted exponential distribution is not a good model for call durations. With the aid of a spectrum observatory, [22] extends short-term spectrum-usage measurements to study the spectrum-usage trend over long periods, observe spectrum-usage patterns, and detect the positions of spectrum white space in the time and spatial domains. Such information can be of great help in developing good dynamic-access protocols and governing secondary systems.

Hough-transform-based sensing. The Hough transform, which has been studied in the image-processing literature for the detection of patterns in binary images, can be used for detecting patterns in primary-user signals, such as radar pulses [72], as long as the radio signals exhibit periodic patterns.
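The covariance-based eigenvalue idea discussed above can be sketched compactly: stack consecutive received samples into vectors, form the sample covariance matrix, and compare the ratio of its largest to smallest eigenvalue against a threshold. The smoothing factor and test signals below are illustrative; in [502] [425] the threshold itself is derived from random-matrix theory rather than fixed by hand.

```python
import numpy as np

def max_min_eigenvalue_ratio(y, L=8):
    # Stack L consecutive samples into column vectors, form the L x L
    # sample covariance matrix, and return lambda_max / lambda_min.
    n = len(y) - L + 1
    X = np.stack([y[i:i + L] for i in range(n)], axis=1)
    R = X @ X.T / n
    eig = np.linalg.eigvalsh(R)      # eigenvalues in ascending order
    return eig[-1] / eig[0]

rng = np.random.default_rng(2)
t = np.arange(5000)
noise_only = rng.standard_normal(5000)
signal = np.cos(2 * np.pi * 0.02 * t) + 0.5 * rng.standard_normal(5000)

ratio_h0 = max_min_eigenvalue_ratio(noise_only)  # near 1 for white noise
ratio_h1 = max_min_eigenvalue_ratio(signal)      # large for a correlated signal
```

Because only the eigenvalue ratio is tested, the decision needs no estimate of the noise power, which is exactly the robustness property claimed for these covariance-based methods.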
1.3.3 Cooperative sensing

The performance of spectrum sensing is limited by noise uncertainty, shadowing, and multipath fading effects. When the received primary SNR is too low, there exists an SNR wall, below which reliable spectrum detection is impossible even with a very long sensing time. If secondary users cannot detect the primary transmitter, yet the primary receiver is within the secondary users' transmission range, a hidden-primary-user problem occurs, and the primary user's transmission will be interfered with. By taking advantage of independent fading channels (i.e., spatial diversity) and multiuser diversity, cooperative spectrum sensing has been proposed to improve the reliability of spectrum sensing, increase the detection probability so as to better protect a primary user, and reduce the false-alarm rate so as to utilize the idle spectrum more efficiently.

In centralized cooperative spectrum sensing, a central controller, e.g., a secondary base station, collects local observations from multiple secondary users, decides the available spectrum channels using some decision-fusion rule, and informs the secondary users which channels to access. In distributed cooperative spectrum sensing, secondary users exchange their local detection results among themselves without requiring a backbone infrastructure, and hence at reduced cost. Relays can also be used in cooperative spectrum sensing, as in the cooperative sensing scheme proposed in [155], where the cognitive users operating in the same band help each other relay information using
an amplify-and-forward protocol. It is shown that the inherent network asymmetry can be exploited to increase the agility. An extension to multiuser networks is studied in [156], and a decentralized cooperation protocol is proposed to ensure an agility gain for a large network population.

There also exist several challenges in cooperative spectrum sensing. For instance, secondary users can be low-cost devices equipped with only a limited amount of power, so they cannot afford very complicated detection hardware or high computational complexity. In wideband cooperative sensing, multiple secondary users have to scan a wide range of spectrum channels and share their detection results. This results in a large amount of sensory-data exchange, high energy consumption, and reduced data throughput. If the spectrum environment is highly dynamic, the sensed information may even become stale owing to user mobility, channel fading, etc.
1.3.3.1 User selection

Owing to secondary users' different locations and channel conditions, it is shown in [341] that cooperation of all secondary users in spectrum sensing is not optimal; the optimum detection/false-alarm probability is achieved by limiting the cooperation to a group of users who have a relatively high SNR of the received primary signal. Since detecting a primary user costs battery power of secondary users, and shadow fading may be correlated for nearby secondary users, an optimal selection of secondary users for cooperative spectrum sensing is desirable. In [403], various algorithms based on different amounts of available information are proposed to select a proper set of sensors that experience uncorrelated shadow fading. A joint spatial–temporal sensing scheme for CR networks is proposed in [102], where secondary users collaboratively estimate the location and transmit power of the primary transmitter to determine their maximum allowable transmission power, and use the location information to decide which users should participate in collaborative sensing in order to minimize the correlation among the secondary users.

Performance evaluation of cooperative spectrum sensing over realistic propagation environments, i.e., correlated log-normal shadowing in both the sensing and the reporting channel, is investigated in [106]. This work also provides guidelines for selecting the optimal number of users in order to guarantee a certain detection performance in a practical radio environment. In a CR sensor network, individual sensor nodes may experience heterogeneous false-alarm and detection probabilities owing to their different locations, making it harder to determine the optimal number of cooperative nodes. Sensor clustering is proposed in [238], where the optimal cluster size is derived so as to place an upper bound on the variation of the average received signal strength within a cluster of sensor nodes.
Moreover, the sensor density is optimized so that the average distance between neighboring nodes is lower-bounded and their measurements are nearly independent, with little correlation.

If a secondary user cannot distinguish between the transmissions of a primary user and those of another secondary user, he will lose the opportunity to use the spectrum. It is shown in [404] that the presence/absence of possible interference from other secondary users is the main reason for the uncertainty in primary-user detection, and that coordinating
nearby secondary users can greatly reduce the noise uncertainty due to shadowing, fading, and multipath effects. A good degree of coordination should be chosen on the basis of the channel coherence times, bandwidths, and the complexity of the detectors.
1.3.3.2 Decision fusion

Various decision-fusion rules for cooperative spectrum sensing have been studied in the literature. A logical OR rule is used in [148] for combining multiple users' decisions for spectrum sensing in fading environments. Cooperative spectrum sensing using a counting rule is studied in [211], where sensing errors are minimized by choosing the optimal settings for both matched filtering and energy detection. It is shown in [506] that a half-voting rule is the optimal decision-fusion rule in cooperative sensing based on energy detection. Light-weight cooperation based on hard decisions is proposed in [297] for cooperative sensing to alleviate the sensitivity requirements on individual users. A linear-quadratic (LQ) strategy has been developed in [430] to combat the detrimental effects of correlation between different secondary users.

A good way to combine the received primary-signal samples in space and time optimally is to maximize the SNR of the local energy detectors; however, optimal combination requires information about the signal and the channel. Blindly combined energy detection is proposed in [505], which, without requiring such information or noise-power estimation, performs much better than an energy detector and is more robust against noise uncertainty. Hard decisions combined with the logical AND rule and soft decisions using the likelihood ratio test are proposed in [435] for use in collaborative detection of TV transmissions. It is shown that soft-decision combining for spectrum sensing yields more precise detection than hard-decision combining. Soft-decision combining for cooperative sensing based on energy detection is investigated in [287], and maximal-ratio combining (MRC) is proved to be near optimal in low-SNR regions and to reduce the SNR wall. A softened hard-combination scheme with two-bit overhead is further proposed, which achieves a good tradeoff between detection performance and complexity.
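Counting rules such as OR, AND, majority, and half-voting are easy to evaluate when the local decisions are independent: the global detection and false-alarm probabilities of a k-out-of-N rule follow from the binomial distribution. The per-user probabilities below are illustrative numbers, not values from the cited works.

```python
from math import comb

def k_out_of_n(p, n, k):
    # Probability that at least k of n independent local detectors,
    # each reporting '1' with probability p, report '1'.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n = 7
k = (n + 1) // 2             # half-voting rule (k = 1 gives OR, k = n gives AND)
Qd = k_out_of_n(0.7, n, k)   # global detection probability (local Pd = 0.7)
Qf = k_out_of_n(0.1, n, k)   # global false-alarm probability (local Pf = 0.1)
```

With these numbers the fused detector is simultaneously more sensitive (Qd > 0.7) and less false-alarm-prone (Qf < 0.1) than any single user, which illustrates the cooperation gain that the fusion-rule comparisons above quantify.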
In general, cooperative sensing is coordinated over a separate control channel, so a good cooperation scheme should be able to use a small bandwidth and little power for exchanging local detection results while maximizing the detection reliability. An efficient linear-cooperation framework for spectrum sensing is proposed in [354], where the global decision is a linear combination of the local statistics collected from individual nodes using energy detection. Compared with the likelihood ratio test, the proposed method has lower computational complexity, closed-form expressions for the detection and false-alarm probabilities, and comparable detection performance.

The performance of cooperative spectrum sensing depends on the correctness of the local sensing data reported by the secondary users. If malicious users enter a legitimate secondary network and compromise the secondary users, false detection results will be reported to the fusion center; this kind of attack is called a spectrum-sensing data-falsification (SSDF) attack [75]. In order to guarantee a satisfactory detection performance under SSDF attack, a weighted sequential probability ratio test (WSPRT) is proposed in [75], which incorporates a reputation-based mechanism into the
sequential probability ratio test. If a secondary user’s local detection result is identical to the final result after decision fusion, his/her reports will carry more weight in future decision fusion. The proposed WSPRT approach is more robust against SSDF attack than are commonly adopted decision-fusion rules, such as AND, OR, and majority rules [83].
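The reputation maintenance behind such a scheme can be sketched roughly as follows: a user whose local report agrees with the fused decision gains reputation (and hence future weight), while a disagreeing user loses it. The update step and data structures here are simplified illustrations, not the exact mechanism of [75].

```python
def update_reputations(reputations, local_decisions, final_decision, step=1):
    # Reward users whose local decision matches the fused result;
    # penalize (down to zero) those whose decision disagrees.
    for user, decision in local_decisions.items():
        current = reputations.get(user, 0)
        if decision == final_decision:
            reputations[user] = current + step
        else:
            reputations[user] = max(0, current - step)
    return reputations

# An honest user ('a') agrees with the fused decision each round; a potential
# SSDF attacker ('b') keeps reporting the opposite and loses influence.
reps = {}
for _ in range(3):
    reps = update_reputations(reps, {"a": 1, "b": 0}, final_decision=1)
```

Feeding these reputations back as weights in the fused statistic is what lets the sequential test discount persistently deviating reports.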
1.3.3.3 Efficient information sharing

In order to coordinate cooperation in spectrum sensing, a lot of information exchange among secondary users is needed, such as their locations, estimates of the primary user's location and power, which users should be clustered into a group, and which users should perform cooperative sensing at a particular time epoch. Such a large amount of information exchange brings considerable overhead to the secondary users, which necessitates efficient information sharing among them. The GUESS protocol, an incremental gossiping approach, is proposed in [4] as a means to coordinate the dissemination of spectrum-sensing results. It is shown that the proposed approach can reduce overhead because the amount of information exchange among secondary users is limited, and the method accommodates network alterations, such as node movement or node failure, and exhibits fast information convergence.

In order to reduce the bandwidth required by a large number of secondary users for reporting their sensing results, a censoring method with quantization is proposed in [411]. Only users with reliable information send their local observations, i.e., a one-bit decision 0 or 1, to the common receiver. It is shown that the sensing overhead is greatly reduced at the cost of only a little degradation in detection performance. A pipelined spectrum-sensing framework is proposed in [157], where spectrum sensing is conducted concurrently while secondary users are sending their detection reports. The proposed method alleviates sensing overhead by making use of the reporting time, provides more time for spectrum sensing, and thus improves the detection performance. A multi-threaded sequential probability ratio test is further proposed for data fusion in the pipelined framework. Random-matrix theory (RMT) is applied to cooperative spectrum sensing in [51].
The proposed method uses multiple secondary receivers to infer the structure of the primary signal using RMT, without requiring information about the noise statistics or variance. It is shown that the proposed method can estimate the spectrum occupancy reliably with only a small number of received primary-signal samples.
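The censoring idea can be pictured with a simple no-send region: a user transmits a one-bit decision only when its local statistic is confidently above or below the region, and stays silent otherwise. The two thresholds below are illustrative, not those derived in [411].

```python
def censored_report(statistic, lower, upper):
    # Report 1 (primary present) or 0 (primary absent) only when the local
    # statistic falls outside the censoring region (lower, upper); otherwise
    # return None, i.e., stay silent and save reporting bandwidth.
    if statistic >= upper:
        return 1
    if statistic <= lower:
        return 0
    return None

reports = [censored_report(s, lower=1.0, upper=3.0)
           for s in (0.4, 2.1, 3.7, 1.5)]   # -> [0, None, 1, None]
```

The fusion center then combines only the non-None reports, which is how the scheme trades a little detection performance for a large reduction in reporting overhead.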
1.3.3.4 Interference diversity

Traditional cooperative spectrum-sensing schemes usually utilize the shadowing/multipath diversity among multiple secondary users to enhance the detection reliability, whereas the potential presence of an unknown number of low-power and time-varying interference sources is actually the main reason for noise uncertainty [334]. Individual users have different local observations of those low-power interference sources; e.g., the arrival or departure of one interfering source merely causes a few nearby sensors to trigger false alarms, whereas a primary user's activity can be detected by many more secondary users far apart. Such interference diversity is utilized to
develop an event-based cooperative sensing scheme for detecting the primary user [334].
1.3.3.5 Distributed cooperative sensing

Cooperative spectrum sensing has been shown to be able to greatly improve the sensing performance in CR networks. However, if cognitive users belong to different service providers, they tend to contribute less to sensing in order to increase their own data throughput. A distributed cooperative spectrum-sensing scheme based on evolutionary game theory is proposed in [451] to answer the question of "how to collaborate" in multiuser decentralized CR networks. Using replicator dynamics, the evolutionary-game modeling provides an excellent means to address the strategic uncertainty that a user may face, by exploring various actions, adaptively learning during the strategic interactions, and approaching the best-response strategy under changing conditions and environments. The behavior dynamics and the optimal cooperation strategy of the secondary users are characterized. A distributed learning algorithm is further developed so that the secondary users approach the optimal strategy solely on the basis of their own payoff observations. The proposed game is demonstrated to achieve a higher system throughput than the fully cooperative scenario, in which all users contribute to sensing in every time slot.

Another form of throughput-enhancing cooperative spectrum sensing is proposed in [264], where secondary users share their decisions about the spectrum occupancy of the primary users, and thus have more opportunities to access idle spectrum with fewer collisions with primary users. The proposed scheme requires a common control channel, and can work in a distributed fashion.
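Replicator dynamics can be sketched in a few lines: the population share of each strategy grows in proportion to how much its payoff exceeds the population average. The two-strategy example below (contribute to sensing vs. free-ride) uses fixed illustrative payoffs, whereas in the evolutionary-game formulation of [451] the payoffs themselves depend on the population state.

```python
import numpy as np

def replicator_step(x, payoffs, eta=0.1):
    # Discrete-time replicator update: x_i <- x_i + eta * x_i * (u_i - u_bar),
    # renormalized so that x remains a probability distribution.
    payoffs = np.asarray(payoffs, dtype=float)
    avg = float(np.dot(x, payoffs))
    x = x + eta * x * (payoffs - avg)
    return x / x.sum()

# Strategy 0 = contribute to sensing, strategy 1 = free-ride; payoffs are
# hypothetical numbers under which contributing happens to dominate.
x = np.array([0.5, 0.5])
for _ in range(300):
    x = replicator_step(x, [2.0, 1.0])
```

Here the population converges to the dominant strategy; with state-dependent payoffs the same dynamics instead settle at the evolutionarily stable mix of contributors and free-riders.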
1.3.3.6 Experimental measurements

Cooperative sensing using energy detection has been implemented on a wireless testbed [81]. Experimental study has demonstrated the improvement in detection performance due to cooperation, such as the need for less sensing time and the achievement of a higher detection probability. A measurement setup for cooperative spectrum sensing, and experimental results obtained with it, are presented in [459]. It is shown by measurements that the cooperation gain increases with the distance between secondary users, and that cooperative sensing can alleviate the sensitivity requirements on a single secondary user. Moreover, since correlated spectrum measurements degrade the gain of cooperation, a robust metric for evaluating the correlations is developed.
1.4 Dynamic spectrum allocation and sharing

In the previous section, we discussed various detection techniques and how to perform efficient cooperative spectrum sensing in order to obtain an accurate estimate of the interference temperature and the spectrum-occupancy status. With the detection results, a secondary user will have an idea regarding which spectrum bands he/she could use. However, the availability and quality of a spectrum band may change rapidly with time owing to primary users' activity and competition from other secondary users. In order to
Table 1.2. Classification of spectrum-allocation and -sharing schemes

Classification criterion: Spectrum bands that secondary users are using
  Type 1: Open spectrum sharing (access unlicensed spectrum band only)
  Type 2: Hierarchical access/licensed spectrum sharing (also access licensed spectrum band)

Classification criterion: Access technology of licensed spectrum sharing
  Type 1: Spectrum underlay (secondary users transmit concurrently with primary users subject to interference constraints)
  Type 2: Spectrum overlay (secondary users use the licensed spectrum only when primary users are not transmitting)

Classification criterion: Network architecture
  Type 1: Centralized (a central entity controls and coordinates the spectrum allocation and access)
  Type 2: Distributed (each user makes his/her own decision on the spectrum-access strategy)

Classification criterion: Access behaviors
  Type 1: Cooperative (all secondary users work toward a common goal)
  Type 2: Noncooperative (different users have different objectives)
utilize the spectrum resources efficiently, secondary users need to be able to address issues such as when and how to use a spectrum band, how to coexist with primary users and other secondary users, and which spectrum band they should sense and access if the one currently in use becomes unavailable. Therefore, in this section, we review the existing approaches to spectrum allocation and sharing that answer these questions.

Before going into the details, we briefly discuss the classification of the current schemes for spectrum allocation and sharing. The existing schemes can be classified according to various criteria, as summarized in Table 1.2. The first classification is according to the spectrum bands that secondary users are using. Spectrum sharing among secondary users who access only the unlicensed spectrum band is referred to as open spectrum sharing. One example is the open spectrum sharing in the unlicensed industrial, scientific, and medical (ISM) band. In open spectrum sharing, since no users own spectrum licenses, all users have the same rights in using the unlicensed spectrum. Spectrum sharing among secondary users and primary users in licensed spectrum bands is referred to as the hierarchical access model [507] or licensed spectrum sharing. Primary users, who are usually not equipped with CRs, do not need to perform dynamic/opportunistic spectrum access, since they have priority in using the spectrum band. Whenever they reclaim the spectrum, secondary users have to adjust their operating parameters, such as power, frequency, and bandwidth, to avoid interrupting the primary users. Considering the access technology of the secondary users, licensed spectrum sharing can be further divided into two categories [7] [507].

(i) Spectrum underlay. In spectrum underlay, secondary users are allowed to transmit their data in the licensed spectrum band while primary users are also transmitting. The interference-temperature model is imposed on secondary users' transmission power so that the interference at a primary user's receiver is within the interference-temperature limit and primary users can deliver their packets to the
receiver successfully. Spread-spectrum techniques are usually adopted by secondary users to utilize fully the wide range of spectrum. However, owing to the constraints on transmission power, secondary users can achieve only short-range communication. If primary users transmit data all the time in a constant mode, spectrum underlay does not require secondary users to perform spectrum detection to find an available spectrum band.

(ii) Spectrum overlay. Spectrum overlay is also referred to as opportunistic spectrum access. Unlike in spectrum underlay, secondary users in spectrum overlay use the licensed spectrum only when primary users are not transmitting, so there is no interference-temperature limit imposed on secondary users' transmissions. Instead, secondary users need to sense the licensed frequency band and detect the spectrum white space, in order to avoid harmful interference with primary users.

The second classification [7] is according to the network architecture. When there exists a central entity that controls and coordinates the spectrum allocation and access of secondary users, the spectrum allocation is centralized. If there is no such central controller, perhaps because of the high cost of constructing an infrastructure or the ad hoc nature of the network, such as for emergency or military use, the spectrum sharing belongs to the category of distributed spectrum sharing. In distributed spectrum sharing, each user makes his own decision about his spectrum-access strategy, mainly on the basis of local observation of the spectrum dynamics.
On the other hand, it is not always the case that all secondary users belong to the same service provider; e.g., it is not the case for those who access the open spectrum band. Different users have different objectives, and hence they aim only at maximizing their own benefit from using the spectrum resources. Since users are no longer cooperating to achieve the same objective, this kind of spectrum sharing is noncooperative, and secondary users are selfish in that they pursue their own benefit. In order to give the reader more insight into how to design efficient spectrum allocation and sharing schemes, we next discuss several important issues in dynamic spectrum allocation and sharing.
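As a small numerical illustration of the underlay constraint discussed above, the interference-temperature model caps the interference power a primary receiver may see at kT_I·B, where k is Boltzmann's constant, T_I the interference-temperature limit, and B the bandwidth; dividing by the secondary-to-primary channel gain bounds the secondary transmit power. The temperature, bandwidth, and gain values below are illustrative assumptions.

```python
BOLTZMANN = 1.380649e-23  # J/K

def interference_power_limit(temp_limit_k, bandwidth_hz):
    # Maximum tolerable interference power (watts) at the primary receiver
    # for interference-temperature limit T_I and bandwidth B: P = k * T_I * B.
    return BOLTZMANN * temp_limit_k * bandwidth_hz

def max_secondary_tx_power(temp_limit_k, bandwidth_hz, channel_gain):
    # Largest secondary transmit power (watts) that keeps the interference
    # received over a channel with linear power gain 'channel_gain' within
    # the limit.
    return interference_power_limit(temp_limit_k, bandwidth_hz) / channel_gain

# Illustrative numbers: T_I = 10^4 K over a 6-MHz band, with 120 dB path loss.
p_limit = interference_power_limit(1e4, 6e6)
p_tx_max = max_secondary_tx_power(1e4, 6e6, channel_gain=1e-12)
```

The sub-watt budget that results is consistent with the observation above that underlay users are confined to short-range, low-power communication.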
1.4.1 Medium-access control in CR networks

Medium-access control refers to the policy that controls how a secondary user should access a licensed spectrum band. Various medium-access control protocols have been proposed in wireless networking, such as carrier-sense multiple access (CSMA) and slotted ALOHA. Owing to the new features of CR networks, such as the requirement for spectrum sensing and access to avoid collisions with a primary user, the dynamics in
spectrum availability, and adaptation to a changing environment, new medium-access protocols need to be designed to address the new challenges in CR networks.

A cognitive medium-access protocol with stochastic modeling is proposed in [152], which enhances the coexistence of CRs with WLAN systems on the basis of sensing and prediction. A continuous-time Markov chain is adopted to approximate the primary user's traffic, and a sense-before-transmit strategy constrains the interference generated toward the primary user. The CR's throughput is optimized by solving a constrained Markov decision process using linear programming. An implementation of the cognitive medium-access protocol is presented in [149]. A primary-prioritized Markov approach for dynamic spectrum access is proposed in [449], which models the interactions between the primary users and the secondary users as continuous-time Markov chains. By designing appropriate access probabilities for the secondary users, a good tradeoff between spectrum efficiency and fairness, and a higher throughput than with CSMA-based random access, can be achieved.

A cognitive MAC (C-MAC) protocol for distributed multi-channel wireless networks is introduced in [48]. Since the C-MAC operates over multiple channels, it is able to deal with the dynamics of channel availability due to primary users' activity. Beaconing is included in each frame so that users can exchange local information, negotiate channel usage, and avoid hidden-user problems. Distributed and dynamic coordination among nodes in different channels can be achieved using a rendezvous channel. A stochastic channel-selection algorithm based on learning automata is proposed in [381]. This dynamically adapts the probability of accessing one channel in real time and asymptotically converges to the optimal channel. It is shown that the probability of successful transmission is maximized using the proposed selection algorithm.
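The continuous-time Markov-chain traffic model used by such sense-before-transmit protocols can be illustrated with a two-state (idle/busy) chain: the channel leaves the idle state at the primary arrival rate and the busy state at the primary departure rate. The sketch below gives the probability that a channel sensed idle at time 0 is still idle t seconds later; the rates are illustrative, not values from the cited works.

```python
import math

def idle_probability(t, arrival_rate, departure_rate, p_idle0=1.0):
    # Two-state CTMC: transitions idle -> busy at 'arrival_rate' and
    # busy -> idle at 'departure_rate'. Returns P(channel idle at time t)
    # given the idle probability at t = 0.
    pi_idle = departure_rate / (arrival_rate + departure_rate)  # stationary
    decay = math.exp(-(arrival_rate + departure_rate) * t)
    return pi_idle + (p_idle0 - pi_idle) * decay

# A channel sensed idle now: how likely is it still idle 0.1 s later?
p = idle_probability(0.1, arrival_rate=1.0, departure_rate=2.0)
```

A MAC can compare such a prediction against an interference constraint to decide whether to transmit immediately or to sense again, which is the essence of the sense-before-transmit strategy described above.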
Opportunistic scheduling policies for CR networks using the technique of Lyapunov optimization are investigated in [429]. This technique maximizes the throughput of secondary users while upper-bounding the number of collisions with primary users. A MultiMAC protocol that can dynamically reconfigure MAC- and physical-layer properties for CR networks is proposed in [105]. This protocol, which is based on per-node and per-flow statistics, provides intelligent reconfiguration of the MAC and physical layers in response to environmental variations, and hence achieves the best performance while ensuring correct decoding of incoming frames using the proper MAC-layer algorithm. The impact of channel heterogeneity, such as heterogeneity in transmission ranges, data rates, etc., on network performance is identified in [243], which motivates the need to account for channel heterogeneity in designing higher-layer protocols. Considering the limited capability of spectrum sensing and the limited bandwidth, a hardware-constrained cognitive MAC is proposed in [218], which optimizes the spectrum-sensing decision by formulating sensing as an optimal-stopping problem. Using backward induction, the optimal sensing strategy is derived on the basis of observations of past rewards.

Secondary users also need to be aware of their surrounding environment when allocating and accessing the spectrum. Considering that each node's spectrum usage is unpredictable and unstable, the work in [89] proposes integrating interference-aware statistical admission control with stability-oriented spectrum allocation. The
nodes’ spectrum demand is regulated to allow efficient statistical multiplexing while the outage is minimized. Near-optimal algorithms are developed to solve the NP-hard spectrum-allocation problem. An efficient opportunistic access scheme should achieve a high data rate for secondary users while sufficiently protecting the primary user from harmful interference. Since secondary users operating in different frequency bands at different locations are constrained by different interference requirements, a good spectrum access scheme needs to take the interference heterogeneity into consideration. A distance-dependent MAC protocol is proposed in [389] to optimize the CR network throughput subject to a power-mask constraint to protect the primary user. The protocol adopts a probabilistic channel-assignment algorithm, which exploits the dependence of the signal’s attenuation model on the transmission distance while considering the local traffic profile. An idea of how to utilize location awareness to facilitate spectrum sharing between secondary and primary users is illustrated in [439]. With the development of discontiguous orthogonal frequency-division multiplexing, discontiguous spectrum access and spectrum aggregation have become possible, since spectrum fragments can be aggregated and further utilized. An aggregation-aware spectrum-assignment scheme is proposed in [90] to optimize the spectrum assignment when the available spectrum band is not contiguous. The modeling of opportunistic spectrum access under dynamic traffic patterns is usually based on queuing theory, which provides a systematic analytic tool for studying performance in terms of packet delay, buffer length, system throughput, and so on. A queuing analytic framework is developed in [362] to evaluate the performance of secondary users in a CR network, such as queuing delay and buffer statistics of secondary users’ packets.
A channel-allocation scheme based on the queuing analytic model is considered. This approach can guarantee a required statistical delay performance. The collision probability and overlapping time are introduced in [176] to evaluate the protection of a primary user. With constraints on sufficient primary-user protection, various spectrum access schemes using different sensing, backoff, and transmission mechanisms are presented, which reveal the impact of several important design criteria, such as sensing, packet-length distribution, back-off time, packet overhead, and grouping.
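As a toy counterpart of such queuing analyses (not the framework of [362] or [176]), a slotted simulation can estimate the queuing delay of secondary packets when service is available only in primary-idle slots; the arrival and busy probabilities below are hypothetical.

```python
import random
from collections import deque

def mean_secondary_delay(arrival_prob, busy_prob, slots, seed=1):
    """Slotted toy model: at most one secondary packet arrives per slot, and
    the head-of-line packet is served only in slots where the primary user is
    idle.  Returns the mean queuing delay (in slots) of the served packets."""
    rng = random.Random(seed)
    queue, delays = deque(), []
    for t in range(slots):
        if rng.random() < arrival_prob:
            queue.append(t)                      # remember the arrival slot
        if queue and rng.random() >= busy_prob:  # primary idle: serve one
            delays.append(t - queue.popleft())
    return sum(delays) / len(delays) if delays else float("inf")
```

Raising the primary-busy probability lengthens the mean delay; this kind of dependence is what the analytic models above quantify exactly.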
1.4.2 Spectrum handoff

When the current channel conditions become worse, or the primary user appears and reclaims his assigned channel, secondary users need to stop transmitting data and find other available channels in which to resume their transmission. This kind of handoff in CR networks is termed spectrum handoff [7]. Since the transmissions of secondary users are suspended during spectrum handoff, they will experience longer packet delay. Therefore, a good spectrum-handoff mechanism should provide secondary users with a smooth frequency shift with the least possible latency. A good way to alleviate the performance degradation due to long delay is to reserve a certain number of channels for potential spectrum handoff [508]. When secondary users need to switch to another frequency, they can immediately pick one channel from
the reserved bands. However, if a secondary user reserves too much bandwidth for spectrum handoff, the throughput may be unnecessarily low, because the primary user might not reclaim his licensed band very frequently. Therefore, there is a tradeoff in optimizing the channel reservation. Assuming the arrival and service processes are Poisson, the process of spectrum occupation is modeled as a continuous-time Markov chain [508], with which the probability of service disruption due to primary users’ appearance can be calculated. By optimizing the number of channels reserved for spectrum handoff, the blocking probability can be minimized and the secondary users’ throughput is maximized. A location-assisted handover algorithm is proposed in [44]. A set of candidate channels is maintained by a secondary base station. Secondary users equipped with location-estimation and sensing devices can report their locations back to the secondary base station. Whenever handoff becomes necessary, secondary users can switch their frequency to one of the candidate channels, depending on their locations. The algorithm can reduce the packet error rate due to primary users’ activity and channel impairments, alleviate signaling overhead in choosing new available channels in real time, and maintain effective handoff and seamless communication. In a multi-hop CR network, the question of how to design a spectrum-handoff mechanism becomes more complicated, because multiple links are involved. A joint spectrum handoff scheduling and routing (JSHR) protocol in multi-hop multi-radio CR networks is proposed in [117], which extends the spectrum handoff of a single link to that of multiple links. The JSHR problem is formulated so as to minimize the total handoff latency under the constraint on network connectivity. A distributed greedy algorithm is developed to solve the NP-hard problem, and a rerouting mechanism with spectrum-handoff scheduling is further designed to improve the network throughput.
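The reservation tradeoff can be illustrated with the classic guard-channel birth-death model, a generic sketch under Poisson arrivals and exponential holding times rather than the exact formulation of [508]: reserving more channels protects handoffs at the price of blocking more new secondary sessions.

```python
def guard_channel_probs(C, r, lam_new, lam_h, mu):
    """Stationary distribution of a birth-death chain for the guard-channel
    scheme: r of C channels are reserved for spectrum handoffs.  New sessions
    are admitted only while fewer than C - r channels are busy; handoffs may
    use any free channel.  Arrival rates lam_new and lam_h, service rate mu
    per channel.  Returns (new-session blocking prob, handoff dropping prob).
    """
    weights = [1.0]
    for n in range(C):
        lam = lam_h if n >= C - r else lam_new + lam_h
        weights.append(weights[-1] * lam / ((n + 1) * mu))
    total = sum(weights)
    pi = [w / total for w in weights]
    # new sessions are blocked once C - r channels are busy;
    # handoffs are dropped only when all C channels are busy
    return sum(pi[C - r:]), pi[C]

p_block, p_drop = guard_channel_probs(C=10, r=2, lam_new=3.0, lam_h=1.0, mu=1.0)
```

Sweeping r and weighting dropped handoffs more heavily than blocked new sessions reproduces the optimization described above.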
Three types of spectrum handoff for link maintenance have been studied in [438], including evaluation of the link-maintenance probability and effective throughput. Numerical results show that the probability of erroneous channel selection, the radio sensing time, and the number of handoff trials are important for spectrum-handoff design. In order to achieve reliable continuous communication among secondary users in the presence of random reclaims from a primary user, secondary users should select their channels from different licensed bands owned by different primary users [444]. Such multi-band spectrum diversity helps to reduce the impact of the appearance of a primary user and improve the reliability of secondary spectrum access. Moreover, through adding some redundancy to the payload data, secondary users’ transmission will be made robust against errors due to primary users’ corruption. Multi-band diversity has also been suggested in [244], where the multimedia content is distributed over multiple temporarily idle spectrum bands. However, the nature of multimedia content distribution requires packet-scheduling strategies that ensure the quality of the received data; this will complicate the system design in a secondary-user environment where the arrival of a primary user is unpredictable. Therefore, the authors of [244] propose the usage of digital fountain codes, which not only helps multimedia content distribution to secondary users without coordination among them, but also combats the packet loss due to a primary user’s activity and channel impairments. Optimal channel selection to meet the QoS requirement of multimedia streaming is discussed too. Luby transform
(LT) codes are proposed in [228] to compensate for the loss caused by the primary-user interference, and the optimal number of channels that maximizes the secondary users’ spectral efficiency, given fixed parameters of the LT code, is studied. It is also suggested that secondary users transmit concurrently with the primary users in the licensed band while mitigating interference to the primary users by coding techniques. A joint coding and scheduling method for cognitive multiple access is proposed in [68]. A successive interference decoder is utilized in the physical layer to mitigate the secondary user’s interference with the primary user, and thus the secondary user is allowed to share a channel with the primary user. A joint channel-aware and queue-aware scheduling protocol is proposed in the MAC layer to minimize the secondary user’s delay under given power constraints. With the proposed approach, secondary users do not have to wait until the end of the primary users’ duty cycle to start transmission, and thus this approach improves the spectrum utilization with reduced delay.
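A drastically simplified fountain-code sketch conveys the idea behind the digital-fountain and LT-code proposals of [244] and [228]; here packet degrees are drawn uniformly from 1 to 4 instead of from the robust-soliton distribution, and the block values and helper names are illustrative.

```python
import random

def lt_encode(blocks, n_packets, rng):
    """Fountain-style encoding: each packet XORs a random subset of source
    blocks.  Degrees are uniform on 1..4 for simplicity; real LT codes draw
    degrees from the robust-soliton distribution."""
    k = len(blocks)
    packets = []
    for _ in range(n_packets):
        idx = rng.sample(range(k), rng.randint(1, min(4, k)))
        val = 0
        for i in idx:
            val ^= blocks[i]
        packets.append([set(idx), val])
    return packets

def lt_decode(packets, k):
    """Peeling decoder: resolve degree-1 packets, substitute recovered blocks
    into the remaining packets, and repeat until no progress is possible."""
    known, progress = {}, True
    while progress and len(known) < k:
        progress = False
        for p in packets:
            idx, val = p
            for i in [i for i in idx if i in known]:
                idx.discard(i)      # strip already-recovered blocks
                val ^= known[i]
            p[1] = val
            if len(idx) == 1:
                (i,) = idx
                if i not in known:
                    known[i] = val
                    progress = True
    return [known.get(i) for i in range(k)]

def packets_needed(blocks, seed=1):
    """Send encoded packets one at a time until the receiver decodes all
    blocks; returns (packet count, recovered blocks)."""
    rng = random.Random(seed)
    packets = []
    while True:
        packets += lt_encode(blocks, 1, rng)
        out = lt_decode(packets, len(blocks))
        if None not in out:
            return len(packets), out
```

Because any sufficiently large set of encoded packets suffices, the receiver needs no coordination about which particular packets were lost to primary-user activity.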
1.4.3 Cognitive relaying

Cooperative relaying utilizing the broadcasting nature of wireless networks has been proposed in recent years [258] [257] as a means by which to improve the network performance through spatial and multiuser diversity. In cooperative relaying, relay nodes can forward a source node’s data to a destination. Combined with CR technology, cooperative relaying can offer a more significant performance gain, because cognitive relay nodes can forward a source node’s data by using the spectrum white space they have detected. In wireless networks, a source node’s traffic is in general bursty, so a cognitive relay can utilize the periods of silence of the source to enable cooperation. Motivated by this fact, the authors of [391] proposed a novel cognitive multiple-access strategy in the presence of a cooperating relay. Since the cognitive relay forwards data only when the source is not transmitting, no extra channel resources are allocated for cooperation at the relay, and hence the proposed protocols provide significant performance gains over conventional relaying strategies. By exploiting source burstiness, secondary users utilize primary users’ periods of silence to access the licensed spectrum and help primary users forward their data, which not only increases their channel access probabilities but also achieves a higher throughput [112]. A cognitive OFDM-based spectrum-pooling system is considered in [337]. The source node transmits data to the relay using a certain pool of subcarrier frequencies, and the relay forwards the received data to the destination node on a possibly different pool of OFDM subcarriers. By determining an optimum assignment of the subcarrier pools, the capacity of the spectrum-pooling relay system is maximized. A frequency-sharing multi-hop CR network is studied in [127]. By recognizing the radio environment in each relay node, the system can autonomously avoid transmission in an interference area.
In [219], an infrastructure-based secondary network architecture is proposed to leverage relay-assisted discontiguous OFDM for data transmission. A relay node that can bridge the source and the destination using its common channels between the two nodes will be selected. Relay selection and spectrum
allocation are jointly optimized, and a corresponding MAC protocol is proposed and implemented in a Universal Software Radio Peripheral-based testbed. There are also several other works that study the performance of cognitive relay networks, including the achievable region and outage probability. The achievable region for a two-source, two-destination Gaussian interference channel with a cognitive relay is studied in [407], where the cognitive relay has access to messages transmitted by both sources and assists them in relaying the messages to their respective destinations. A one-sided interference channel assisted by a cognitive relay is considered in [380], where the relay has a link only to the destination that observes interference. Good relay strategies are investigated. It is found from the achievable region that, under certain conditions, the relay uses most of its power in canceling out the interference instead of boosting the desired signal power. The outage performance of cognitive wireless relay networks is studied in [263]. A group of network clusters consisting of several unlicensed relay nodes helps the source node forward messages to the destination node by using the spectrum white space. A high-SNR approximation of the outage probability of the two-hop system is investigated, and it is found that full diversity is achieved only if each relay node successfully detects the spectrum hole. An intra-cluster cooperation scheme to improve the outage performance, in which an appropriate number of neighboring cognitive relay nodes inside a cluster collaborate with a desired cognitive relay node, is proposed too.
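The role of spectrum-hole detection in the outage result of [263] can be illustrated with a small Monte Carlo sketch (not the analysis of [263] itself): Rayleigh-faded two-hop links, best-relay selection, and an idealized per-relay detection probability.

```python
import random

def outage_prob(n_relays, p_detect, snr_avg, snr_th, trials=20000, seed=3):
    """Monte Carlo outage of a two-hop decode-and-forward cognitive relay
    cluster.  A relay may assist only if it detected the spectrum hole
    (probability p_detect); Rayleigh fading makes each hop's instantaneous
    SNR exponential with mean snr_avg, and the best relay is selected."""
    rng = random.Random(seed)
    outages = 0
    for _ in range(trials):
        best = 0.0
        for _ in range(n_relays):
            if rng.random() < p_detect:            # relay found the white space
                hop1 = rng.expovariate(1.0 / snr_avg)
                hop2 = rng.expovariate(1.0 / snr_avg)
                best = max(best, min(hop1, hop2))  # DF end-to-end SNR
        if best < snr_th:
            outages += 1
    return outages / trials
```

With p_detect = 0 no relay may transmit and the system is always in outage; adding relays lowers the outage probability only insofar as they detect the spectrum hole, echoing the diversity condition above.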
1.4.4 Spectrum sensing and access

Owing to energy and hardware constraints, a secondary user might not be able to sense the entire spectrum space and can access only a limited number of channels from those it has sensed. To optimize spectrum access while considering physical-layer spectrum sensing and the primary user’s traffic statistics, a decision-theoretic approach based on a partially observable Markov decision process (POMDP) is proposed in [509]. The proposed method is shown to be able to optimize secondary users’ performance, accommodate spectrum-sensing error, and protect primary users from harmful interference. A joint design and separation principle for opportunistic spectrum access using POMDP is proposed in [94]. The separation principle reveals the optimality of myopic policies for the spectrum-sensor design and access strategy, and reduces the complexity of the POMDP formulation by decoupling the design of the sensing strategy from the design of the access strategy. By exploiting the mixing time of the underlying Markov process of spectrum occupancy, a truncated MDP formulation is developed in [107], which provides a tradeoff between performance and computational complexity. DSA with perfect and imperfect sensing based on Markov-chain modeling is studied in [445]. Aspects of system performance such as airtime and blocking probabilities are evaluated, and the impact of the false-alarm and misdetection probabilities on DSA is analyzed. An extension of [509] that incorporates the secondary user’s residual energy and buffer state into the POMDP formulation for spectrum sensing and access is presented in [93] [95]. Monotonicity results are developed: first, the secondary user with data to transmit should sense a channel if and only if the conditional probability of
the channel being idle is above a certain threshold; second, the secondary user should transmit over an idle channel if and only if the channel fading is below a certain threshold. This observation can accelerate the decision making of a secondary user about the strategy for spectrum sensing and access. Owing to energy and hardware constraints, secondary users need to choose carefully which bands they should sense and access. Continuously accessing the channel with the highest estimated chance of availability may bring short-term gain, whereas exploration enables the secondary users to learn the statistical behavior of the primary traffic, with long-term gain. Therefore, cognitive medium access is modeled as a multi-armed-bandit problem in [247], and an efficient access strategy is developed that achieves a good balance between exploring the availability of other free bands and exploiting the opportunities that have been identified. A multi-cognitive-user scenario is also considered, which is modeled as a game. A similar idea has been proposed in [499]. By formulating the process of spectrum sensing and access as a restless multi-armed-bandit process, the authors of [499] studied the structure, optimality, and performance of the myopic sensing policy. It is shown that the myopic policy reduces channel selection to a round-robin procedure and alleviates the requirement of knowing the channel’s state-transition probabilities. The maximum throughput of a multi-channel opportunistic system using the myopic sensing policy and its scaling behavior with respect to the number of channels are also characterized.
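The exploration/exploitation tradeoff behind these bandit formulations can be illustrated with the standard UCB1 index policy, a generic bandit algorithm rather than the access strategies of [247] or [499]; the channel idle probabilities below are hypothetical.

```python
import math
import random

def ucb_channel_access(avail_probs, horizon, seed=0):
    """UCB1 policy for opportunistic channel access: each channel is an arm
    whose unknown Bernoulli parameter is its idle probability.  Returns how
    often each channel was accessed over the horizon."""
    rng = random.Random(seed)
    n = len(avail_probs)
    counts, rewards = [0] * n, [0.0] * n
    for t in range(1, horizon + 1):
        if t <= n:
            c = t - 1                       # initialization: try each arm once
        else:
            # empirical mean plus an exploration bonus that shrinks with use
            c = max(range(n),
                    key=lambda i: rewards[i] / counts[i]
                    + math.sqrt(2.0 * math.log(t) / counts[i]))
        counts[c] += 1
        if rng.random() < avail_probs[c]:   # channel found idle: reward 1
            rewards[c] += 1.0
    return counts

counts = ucb_channel_access([0.2, 0.8, 0.5], horizon=5000)
# The policy should concentrate its accesses on the best channel (index 1)
# while still sampling the others occasionally.
```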
1.4.5 Power control in CR networks

Power control is a common approach for alleviation of interference. In order to manage the interference among secondary users, or avoid harmful interference with primary users due to secondary spectrum usage, various power-control schemes to coordinate spectrum sharing are also considered in CR networks. Power control in opportunistic spectrum access (OSA) has been studied in [373], which models the packet transmission from source to destination in OSA as crossing a multi-lane highway. If a secondary user tries to use high transmission power to reach the destination in one hop, it has to wait until the primary user is inactive; on the other hand, it can take more advantage of the spectrum opportunities with lower transmission power while relying on the intermediate users on the path to the destination. The impact of transmission power on the occurrence of spectrum opportunities is investigated in [373], and it is shown that the optimal transmission power of secondary users decreases monotonically with the traffic load of the primary network. Dynamic programming has been used in designing an optimal power- and rate-control strategy, in order to maximize the long-term average rate of a secondary user [153], under constraints on the total energy budget of the secondary user and the interference from the secondary user with the primary user. The power-adaptation strategies that maximize the secondary user’s SNR and capacity under various constraints are studied in [387]. An opportunistic power-control strategy that enables the cognitive user to maximize its transmission rate while guaranteeing that the outage probability of the primary user is not degraded is proposed in [85]. In the proposed method, the cognitive user transmits with its maximum
power when it senses that the primary channel is already in outage. When the primary channel is not in outage, it transmits with a fraction of its maximum power that ensures successful transmission of the primary user. A collaborative spectrum-sensing scheme that considers signal strength, localization, and collaboration in the presence of multiple co-channel primary and secondary transmitters is proposed in [309]. The allowed maximum transmitter power of a secondary user in a given channel is determined using a distributed database containing co-channel transmitter information including location, error estimates, power, etc. In most power-control schemes for CR networks, there usually exist interference constraints that prohibit simultaneous transmission by users that are within each other’s transmission range. The interference constraints should be characterized clearly in order to reflect the interference relationship; in addition, the description should not be too complicated, otherwise a closed-form solution cannot be obtained easily. Therefore, a conflict graph is commonly adopted to describe the interference constraints among users, such that a node in the graph represents a user, and an edge between a pair of nodes represents the existence of interference. A multi-channel contention graph is proposed in [419] to characterize the interference in a protocol-interference model, on the basis of which the spectrum allocation and scheduling in CR networks can be jointly optimized. Most current works on DSA with interference constraints usually adopt the protocol model that simplifies interference constraints by presenting them as conflict graphs, which may suffer performance degradation due to incorrect interference estimation.
A systematic framework to produce conflict graphs on the basis of a physical interference model is presented in [481], which characterizes the cumulative effect of interference while making it possible to use graph theory to solve spectrum-allocation problems under physical interference constraints.
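A pairwise SINR check already shows how a conflict graph can be derived from a physical-interference model; the framework of [481] additionally handles the cumulative effect of many interferers, so the sketch below, with assumed path-loss and threshold parameters, is only an illustration.

```python
import math

def conflict_graph(links, p_tx, noise, sinr_min, alpha=3.0):
    """Build a conflict graph from a pairwise physical-interference check:
    two links conflict if either receiver's SINR falls below sinr_min when
    both transmit simultaneously (path loss d**-alpha; parameters assumed).
    links: list of ((tx_x, tx_y), (rx_x, rx_y)) coordinate pairs."""
    def gain(a, b):
        d = math.hypot(a[0] - b[0], a[1] - b[1])
        return d ** -alpha
    edges = set()
    for i in range(len(links)):
        for j in range(i + 1, len(links)):
            # check both directions: link i victim of j, and vice versa
            for (tx_s, rx_s), (tx_i, _) in ((links[i], links[j]),
                                            (links[j], links[i])):
                sinr = (p_tx * gain(tx_s, rx_s)
                        / (noise + p_tx * gain(tx_i, rx_s)))
                if sinr < sinr_min:
                    edges.add((min(i, j), max(i, j)))
    return edges
```

A node in the resulting graph is a link, and an independent set of nodes is a set of links that may be scheduled together, which is exactly how such graphs feed into spectrum-allocation problems.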
1.4.6 Control-channel management

Most DSA systems use a dedicated global control channel to coordinate the spectrum allocation. However, this assumption is not realistic in opportunistic spectrum access, since there might be no permanent channel available for secondary users. A distributed group-coordination solution is proposed in [513], where a common control channel is required only locally by the neighboring nodes sharing common channels. The concept of a segment is introduced in [35], where a group of nodes sharing common channels along a routing path coordinate the control-channel selection. A cluster-based approach is presented in [92], where a dynamic one-hop cluster is formed by users sharing common channels and the spectrum is managed by cluster heads. A distributed swarm-intelligence-based control-channel assignment scheme is proposed in [91], which selects local common control channels among a local group of secondary users according to the quality of the detected spectrum holes and the choices of the neighboring users. In CR networks, control signals for coordinating spectrum sharing are transmitted through a dedicated channel, namely the common control channel (CCC). However,
potential control channel saturation will degrade the network performance severely. An alternative MAC protocol without requiring a CCC for multi-hop CR networks is proposed in [221]. By dividing the time into fixed time intervals and having all users listen to a channel at the beginning of each slot, the proposed protocol ensures that control signals can be exchanged among users. Simulation results show that the protocol provides higher throughput than that for a CCC-based protocol.
1.4.7 Distributed spectrum sharing

In centralized spectrum allocation, a lot of information needs to be exchanged between the central controller and network users to coordinate their spectrum usage, and this results in a large amount of signaling overhead. Therefore, distributed spectrum sharing is preferred, where users can make their decisions on how to use the spectrum solely on the basis of local information. A distributed spectrum-management scheme is proposed in [88], where nodes take independent actions and share spectrum resources fairly. Five spectrum rules are presented to regulate node behavior. These rules are shown to achieve similar performance to that obtained with the explicit coordination approach while reducing the overhead due to information exchange. An adaptive approach to managing spectrum usage in dynamic spectrum-access networks is investigated in [87]. This approach achieves performance in spectrum assignment comparable to that of the conventional centralized approach, with less information exchange. Considering frequency agility and adaptive bandwidth, the concept of a time–spectrum block is introduced in [479], with which the spectrum-allocation problem is defined as the packing of time–spectrum blocks in a two-dimensional space. A distributed protocol is developed to solve the spectrum-allocation problem, which enables each node to dynamically choose the best time–spectrum block solely on the basis of local information. A biologically inspired spectrum-sharing algorithm based on the adaptive task-allocation model in insect colonies is introduced in [1]. The proposed algorithm enables secondary users to distributively determine the appropriate channels to use with no spectrum-handoff latency due to coordination, and achieves efficient spectrum sharing. A distributed resource-management algorithm that allows network nodes to exchange information and learn the actions of interfering nodes using a multi-agent learning approach is proposed in [406].
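The flavor of local-information spectrum sharing can be sketched with a simple distributed greedy rule (not any specific scheme above): each node repeatedly moves to the channel least used among its neighbors. Because each switch strictly reduces local interference, the dynamics form a potential game and converge.

```python
def distributed_channel_selection(adjacency, n_channels, rounds=50):
    """Each node greedily switches to the channel least used by its
    neighbors, using only local information; iterate until no node wants
    to switch.  adjacency[v] lists the neighbors of node v."""
    choice = [v % n_channels for v in range(len(adjacency))]  # arbitrary start
    for _ in range(rounds):
        changed = False
        for v, nbrs in enumerate(adjacency):
            load = [sum(1 for u in nbrs if choice[u] == c)
                    for c in range(n_channels)]
            best = load.index(min(load))
            if load[best] < load[choice[v]]:  # strict improvement only
                choice[v] = best
                changed = True
        if not changed:
            break
    return choice
```

On sparse interference graphs this tends to reach a conflict-free assignment with no central controller and no global signaling.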
1.4.8 Spectrum sharing games

Game theory is a well-developed mathematical tool that studies the intelligent behaviors of rational decision makers in strategic interactions, such as cooperation and competition. In dynamic spectrum sharing, secondary users compete for the limited spectrum resources. If they do not belong to the same network entity, secondary users aim only at maximizing their own benefit from utilizing the spectrum resources. Therefore, their strategies in dynamic spectrum sharing can be well analyzed via game-theoretic approaches [206].
A game-theoretic model that analyzes the behavior of cognitive users in distributed adaptive channel allocation is presented in [304]. Both cooperative and noncooperative scenarios are considered, and a no-regret learning approach is proposed. It is shown that a cooperation-based spectrum-sharing etiquette improves the overall performance at the cost of a higher overhead due to information exchange. In [110], a repeated-game approach for spectrum allocation is proposed, in which the spectrum-sharing strategy can be enforced using the Nash equilibrium of dynamic games. A mechanism design to suppress the cheating behavior of secondary users in open spectrum sharing by introducing a transfer function into the user’s utility is proposed in [466] [462]. The transfer function represents the payment that a user receives (or makes, if it is negative) on the basis of the private information he/she announces in the spectrum-sharing game. In the proposed mechanism, it is shown that users can attain the highest utility only by announcing their true private information. A random-access protocol based on continuous-time Markov models for dynamic spectrum access in open-spectrum wireless networks is investigated in [468], where the secondary users’ traffic arrival/departure process is assumed to be a Poisson random process. A distributed implementation of the protocol, which controls the secondary users’ access probability on the basis of a homo equalis model, is proposed to achieve airtime fairness among them. Spectrum sharing among one primary user and multiple secondary users is formulated as an oligopoly market competition in [305], and a Cournot game approach is proposed to obtain the spectrum allocation for secondary users, in which each secondary user’s strategy is chosen on the basis of pricing information obtained from the primary user.
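The Cournot competition of [305] can be illustrated with best-response dynamics under an assumed linear pricing model: user i requests bandwidth b_i, the primary user charges c + d * sum(b) per unit, and a unit of bandwidth is worth a to a secondary user. The closed-form symmetric equilibrium (a - c) / ((n + 1) d) serves as a check; the utility form and parameters are illustrative, not those of [305].

```python
def cournot_spectrum_shares(n_users, a, c, d, iters=200):
    """Sequential best-response dynamics for a linear Cournot spectrum-
    sharing game.  User i's utility is a*b_i - b_i*(c + d*sum(b)); its best
    response to the others' total demand B is (a - c - d*B) / (2*d), floored
    at zero.  The symmetric equilibrium is (a - c) / ((n_users + 1) * d)."""
    b = [0.0] * n_users
    for _ in range(iters):
        for i in range(n_users):
            others = sum(b) - b[i]
            b[i] = max(0.0, (a - c - d * others) / (2.0 * d))
    return b
```

Each user updates using only the total demand of the others, which mirrors how pricing information from the primary user suffices to drive the game toward equilibrium.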
The spectrum pricing problem when multiple primary users compete with each other to sell spectrum bands to secondary users is studied in [306], and a distributed algorithm to obtain the solution to the problem is presented. The game can achieve the highest total profit under a punishment mechanism that deters the primary users from deviating from the optimal solution. In dynamic secondary access, the accumulative amount of power from the secondary users should not violate the interference-temperature limit. With this constraint, a dynamic spectrum access optimization problem is formulated in [472] that can also guarantee a certain secondary QoS. A secondary spectrum-sharing potential game model is further proposed to solve the problem in a distributed fashion, using distributed sequential play and stochastic learning. A correlated equilibrium concept that can achieve better spectrum sharing performance than non-cooperative Nash equilibrium in terms of spectrum utilization efficiency and fairness is used in [186]. A no-regret learning algorithm is adopted to achieve the correlated equilibrium with proven convergence. A game-theoretic overview for dynamic spectrum sharing is provided in [206]. Auction mechanisms for spectrum sharing have also been proposed in [163]. Since users access the channel using spread spectrum signaling, they interfere with each other and have to allocate power carefully in order to utilize the spectrum more efficiently. Spectrum sharing among users is modeled as an auction, where the utility of each user is defined as a function of the received SINR. Considering the potential price of anarchy due to the non-cooperative nature of selfish users, the spectrum
manager charges each user a unit price for their received SINR or power. With the pricing introduced, the auction mechanism achieves the maximum social utility as well as maximal individual utility. An iterative bid-updating algorithm is also presented for the distributed implementation. A spectrum auction should consider carefully the local spectrum demand and spectrum availability in order to achieve high utilization. A real-time spectrum-auction framework is proposed in [134] to assign spectrum packages to proper wireless users under interference constraints. Different pricing models are considered in order to assess tradeoffs of revenue and fairness, and fast auction clearing algorithms are proposed to compute the revenue-maximizing prices and allocation. In [204] [207], a belief-assisted distributive pricing algorithm is proposed to achieve efficient dynamic spectrum allocation based on double-auction mechanisms, with collusion-resistant strategies that combat possible collusive behavior of users by using optimal reserve prices. A scalable multi-winner spectrum-auction scheme that awards one spectrum band to multiple secondary users with negligible mutual interference is proposed in [465]. Effective mechanisms to suppress dishonest/collusive behaviors are also considered, in case secondary users distort their valuations of spectrum resources and interference relationships. A truthful and computationally efficient spectrum auction is proposed in [495], which can support an eBay-like dynamic spectrum market and maintain truthfulness while maximizing spectrum utilization. A truthful double-auction mechanism is proposed in [512] to further increase spectrum efficiency by allowing spectrum reuse, since wireless users that do not interfere with each other can share the same spectrum bands.
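The truthfulness property these auction designs work to preserve is easiest to see in the classic single-band second-price (Vickrey) auction; the generic sketch below is not the mechanism of [134], [495], or [512].

```python
def vickrey_band_auction(bids):
    """Second-price sealed-bid auction for a single spectrum band: the
    highest bidder wins and pays the second-highest bid, which makes
    truthful bidding a dominant strategy.
    bids: dict mapping user -> bid.  Returns (winner, payment)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    payment = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, payment
```

Because the winner’s payment does not depend on his own bid, overbidding risks a loss-making win and underbidding risks losing a profitable band, so reporting the true valuation is a dominant strategy; the multi-winner and double-auction designs above extend this idea to interference-constrained spectrum reuse.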
1.4.9 Routing in CR networks

In traditional wireless networks, all network nodes are provided with a certain fixed spectrum band for use. For instance, WLAN uses the 2.4- and 5-GHz bands, and GSM uses the 900- and 1800-MHz bands. In dynamic spectrum access (DSA) networks, however, there may be no such pre-allocated spectrum that can be used by every node at any time, and the frequency spectrum that can be used for communication may vary from node to node. This new feature of DSA networks imposes even greater challenges on wireless networking, especially on routing. If two neighboring nodes do not have a common channel, or they have common channels but do not tune to the same frequency, then multi-hop communication will not be feasible. Thus, new routing algorithms are needed in order to accommodate the spectrum dynamics and ensure satisfactory network performance, such as high network capacity and throughput, short latency, and low packet loss. Owing to the heterogeneity of spectrum availability among nodes, the routing problem cannot be solved well without considering the spectrum allocation. The interdependence between route selection and spectrum management is studied in [467], where two design methodologies are compared. The first is a decoupled approach, in which route selection and spectrum management are performed independently in different protocol layers. The second approach is a collaborative design, in which some tasks of spectrum management are integrated into route selection in the network layer.
The network layer will select the packet route as well as decide a time schedule of conflict-free channel usage. Experimental results show that a well-provisioned collaborative design outperforms the decoupled design. In [474], the topology formation and routing in DSA networks are studied. DSA network nodes first identify spectrum opportunities by detection, and then the detected spectrum opportunities are associated with the radio interfaces of each node. A layered graph model to help assign the spectrum opportunities to the radio interfaces is proposed. Using the model, a routing path can be computed conveniently for each pair of nodes, which not only diversifies channel selection to prevent interference between adjacent hops along the path but also maximizes network connectivity. A MAC-layer configuration algorithm that enables nodes to dynamically discover the global network topology and node locations, and identify common channels for communication, is proposed in [241]. When a CR network can utilize multiple channels for parallel transmission, while the available channels vary with primary users’ activity, traditional routing metrics such as energy consumption, number of hops, congestion, etc., are not sufficient for correct routing decision making. New routing metrics are introduced in [241], such as the number of channel switches along a path, the frequency of channel switches on a link, and the switching delay. Routing strategies to find the best route according to these new metrics in a CR network are proposed. Other routing metrics that incorporate the primary usage pattern, CR link hold-time, and throughput loss of primary users due to interference are considered in [260]. The capacity (per unit of time) of the links, the available spectrum, the link-disruption probabilities, and the link propagation time between nodes are considered for choosing a proper route in [327]. A spectrum-aware on-demand routing protocol is proposed in [69] [70].
This protocol selects routes according to the switching delay between channels and the back-off delay within a channel. A local coordination scheme is further proposed in [480], in which the intersecting nodes perform data-flow redirection according to a cost evaluation of frequency-band switching and queuing delay. A probabilistic path-selection approach is proposed for multi-channel CR networks in [224]. The source node first computes the route that has the highest probability of satisfying a required demand, and then verifies whether the capacity of the potential path does indeed meet the demand. If not, extra channels are judiciously added to the links of the route until the augmented route satisfies the demand at a specified confidence level. The return of a primary user to a licensed band can also be viewed as a constraint on channel switching, since some channels will become locally unusable when the primary user appears. Considering the channel-switching constraints due to primary users' activity, the authors of [42] propose analytic models for channel assignment in a general multi-hop CR network, study the impact of the constraints on network performance, and investigate the connectivity and transport capacity of the network. An analytic model and optimization framework for spectrum sharing in CR networks is proposed in [191], which considers the constraints on routing, flow control, interference, and capacity.
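The channel-augmentation step of the probabilistic path selection in [224] can be sketched as below. This is an illustrative simplification, not the paper's actual algorithm: channel availabilities are assumed independent, the augmentation policy is a greedy one, and all function names are ours.

```python
def link_avail(channel_probs):
    """A link is usable if at least one of its assigned channels is free."""
    miss = 1.0
    for p in channel_probs:
        miss *= (1.0 - p)
    return 1.0 - miss

def route_avail(route):
    """End-to-end availability: every link on the route must be usable."""
    prob = 1.0
    for link in route:
        prob *= link_avail(link)
    return prob

def augment_route(route, spare, confidence):
    """Greedily add spare channels to the weakest link until the route
    satisfies the demand at the given confidence level or spares run out."""
    spare = sorted(spare, reverse=True)   # try the most reliable spares first
    while route_avail(route) < confidence and spare:
        weakest = min(route, key=link_avail)
        weakest.append(spare.pop(0))
    return route_avail(route)

# Two-link route, one channel per link, augmented from two spare channels:
final = augment_route([[0.6], [0.7]], spare=[0.8, 0.5], confidence=0.7)
```

Here both links end up with a second channel, lifting the end-to-end availability from 0.42 to about 0.78, above the 0.7 confidence target.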
Introduction to cognitive radios
A cognitive networking architecture and a preliminary prototype and experimental setup of a cognitive-network access point are presented in [295]. The access point can obtain spatial and temporal patterns of higher-layer network traffic by real-time monitoring, which can be used for routing in CR networks. A wireless mesh network (WMN) can be seen as a special type of wireless ad hoc network. A WMN usually consists of mesh clients (MCs), mesh routers (MRs), and gateways, which are organized in a mesh topology. The MCs are often laptops, mobile users, or other wireless devices, which direct their traffic to the respective MRs. The MRs, which form the backbone of the network, can be viewed as access points that forward the MCs' traffic over the backbone to and from the gateway in a multi-hop fashion. When one mesh node cannot function or communicate properly, the remaining nodes can still communicate with each other. Therefore, WMNs provide users with reliable communication and fault tolerance, as well as flexible network architectures and easy deployment. However, the network capacity will be reduced significantly when the node density per transmission channel increases and the network becomes congested. Hence, there is a strong need for rich spectrum resources to support the operation of WMNs, and opportunistic spectrum access (OSA) with CR has become an attractive solution [45]. Equipped with CR, the MCs can monitor the primary channels and identify the spectrum white space. The interference due to the mesh traffic at any frequency in any location can be estimated. An integer linear program is further formulated to solve the channel-assignment problem so that the MCs can fully utilize the idle licensed spectrum under certain interference constraints. The distributed approach for channel selection is scalable and also satisfies the interference requirement from primary users.
Moreover, the appearance of spectrum holes is highly dependent on location and time, and the available spectrum at each mesh node may be different. Two neighboring nodes cannot communicate with each other if they do not have a common channel or are not tuned to the same channel. Therefore, mesh nodes should have knowledge of the available spectrum frequencies, scheduling, and routing path so that they can communicate at minimal cost and without collisions. An optimal two-hop spectrum-scheduling scheme for cognitive WMNs is proposed in [475], where any pair within a two-hop neighborhood knows the spectrum allocation, collision-free scheduling, and minimal-cost routing path. QoS routing in a cognitive WMN with interference constraints and dynamic channel availability is studied in [193]. This is also based on an integer linear programming formulation. A distributed routing protocol is developed that can optimally select a route and allocate channels and time slots to satisfy the end-to-end bandwidth requirement. Owing to the heterogeneity of primary users' random behavior, if some node is severely affected by primary users' activity, that node should not be selected on a routing path. The approach proposed in [410] formulates a stochastic traffic-engineering problem to address the issue of how the mesh traffic in a multi-hop cognitive WMN should be routed. Channel assignment with route discovery in cognitive WMNs is also discussed in [130].
1.4.10 Security in CR networks

Owing to their new characteristics, such as the required awareness of the surrounding environment and internal state, reasoning and learning from observations and previous experience to recognize environment variations, adaptation to the environment, and coordination with other users/devices for better operation, CR networks face unique security challenges. In [40], awareness spoofing and its impact on different phases of the cognitive cycle have been studied. Through spoofing, malicious attackers can cause an erroneously perceived environment, introduce biases into the CR decision-making process, and manipulate secondary users' adaptation. In [76], the authors have investigated the primary-user emulation attack, where cognitive attackers mimic the primary signal to prevent secondary users from accessing the licensed spectrum. A localization-based defense mechanism is proposed. This verifies the source of the detected signals by observing the signal characteristics and estimating the source's location from the received signal energy. The authors of [75] investigated the spectrum-sensing data-falsification attack, and proposed a weighted sequential probability ratio test to alleviate the performance degradation due to sensing errors. In the proposed approach, individual sensing reports are compared with the final decision. Users whose reports are identical to the final decision will have high reputation values, and their reports will then carry more weight in future decision fusion. Several types of denial-of-service attacks in CR networks have been discussed in [38], such as spectrum-occupancy failures when secondary users are induced to interfere with primary users, policy failures that affect spectrum coordination, location failures, sensor failures, transmitter/receiver failures, compromised cooperative CRs, and common-control-channel attacks. Simple countermeasures are also discussed.
How to secure a CR network by understanding identity, earning and using trust for individual devices, and extending the usage of trust to networking has been discussed in [55].
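The reputation idea behind the defense in [75] can be illustrated with a toy decision-fusion loop. The linear reputation update and weighted vote below are simplifications of the weighted sequential probability ratio test, and all names and numbers are illustrative:

```python
def fuse(reports, reputation):
    """Reputation-weighted majority vote over binary sensing reports (1 = busy)."""
    score = sum(reputation[u] * (1 if r else -1) for u, r in reports.items())
    return 1 if score > 0 else 0

def update_reputation(reports, decision, reputation, step=1.0):
    """Reward users whose report matches the fused decision, penalize the rest."""
    for u, r in reports.items():
        reputation[u] = max(reputation[u] + (step if r == decision else -step), 0.0)

# A falsifier that always reports "busy" loses its influence within a few rounds,
# while honest users (who report the true channel state) gain weight.
reputation = {"a": 1.0, "b": 1.0, "c": 1.0, "mal": 1.0}
for truth in [0, 0, 1, 0]:
    reports = {"a": truth, "b": truth, "c": truth, "mal": 1}
    decision = fuse(reports, reputation)
    update_reputation(reports, decision, reputation)
```

After these rounds the falsifier's weight has collapsed to zero, so its reports no longer sway the fused decision.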
1.5 Cognitive radio platforms

Although many approaches have been proposed to improve the performance of spectrum sensing and dynamic spectrum access and sharing, most of them focus merely on theoretical modeling and analysis, and few have been verified in a practical system. Take spectrum sensing as an example. The primary users, who are usually not equipped with CR functionality, are concerned that the secondary users will harmfully interfere with their operation. This could happen if the secondary users cannot reliably detect a primary user and start transmitting while the primary user is active in the licensed band. Even if the secondary user has detected the primary user, it may fail to switch to some other available spectrum band fast enough and thus create harmful interference with the primary user's transmission. Therefore, CR platforms need to be developed as real-world testbeds that can verify the theoretical analysis. In this section, we will first review the existing testbeds/platforms developed by research institutes and industry, followed by a brief discussion of the standardization of CR techniques.
1.5.1 Berkeley Wireless Research Center

The feasibility of using CRs to efficiently utilize the spectrum resources without causing interference with the primary user cannot be justified unless it is shown in a real working system or testbed that the interference due to secondary users' activity is sufficiently low. Researchers at the University of California, Berkeley have proposed an experimental setup based on the Berkeley Emulation Engine 2 (BEE2) platform [277] to compare different sensing techniques and develop metrics and test cases so as to measure the sensing performance. Specifically, a good CR system should provide sufficient protection to the primary user, in the sense that the CR can detect the primary user within a very short time, reliably detect the primary user with a high detection probability and a low false-alarm probability, and vacate the spectrum quickly after a correct detection. These metrics impose certain requirements on a CR testbed, including the capability to support multiple radios, the ability to connect various front-ends to support different frequency ranges, the capability for physical/link-layer adaptation and fast information exchange for sensing and cooperation, and the capability to perform rapid prototyping. The BEE2 can meet these requirements and support the features needed for a CR testbed. The BEE2 board can connect up to 18 front-ends, which enables experiments with multiple primary users. It can also be used to perform complex signal processing with the aid of FPGAs, and the high-speed links between the FPGAs facilitate emulating cooperation among the secondary users. Using the BEE2 platform, spectrum sensing using energy detection and sensing with cooperation was tested by experiments in [81], which show the feasibility and practical performance limits of energy detection under real noise and interference in wireless environments.
The sensing time required to achieve the desired probabilities of detection and false alarm in a low-SNR regime was measured. The minimum detectable signal strength under receiver noise uncertainties and background interference was also investigated. The experiments also measured the improvements in sensing performance obtained through network cooperation, identified the location- and time-relevant threshold rule for hard-decision combining, and quantified the effects of spatial separation between radios in indoor environments. In [415], the feasibility of cyclostationary feature detection has been investigated. It is shown through experiments that cyclostationary feature detectors require tight synchronization between the sampling clock and the signal of interest for the cyclostationary features to be useful in low-SNR regimes.
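Energy detection with hard-decision combining, of the kind evaluated on the BEE2, can be sketched in a few lines. The Gaussian noise model, the threshold value, and the OR fusion rule below are illustrative choices for the sketch, not the parameters used in [81]:

```python
import random

def energy_detect(samples, threshold):
    """Declare the primary user present when the average energy exceeds the threshold."""
    return sum(s * s for s in samples) / len(samples) > threshold

def cooperative_or(local_decisions):
    """OR-rule hard combining: any single local detection raises a network-wide alarm."""
    return any(local_decisions)

rng = random.Random(42)
# Three secondary radios observing a primary signal of amplitude 2 in unit-power noise:
radios = [[2.0 + rng.gauss(0.0, 1.0) for _ in range(2000)] for _ in range(3)]
# A radio observing noise only (average energy near 1, well below the threshold):
noise_only = [rng.gauss(0.0, 1.0) for _ in range(2000)]

alarm = cooperative_or(energy_detect(r, threshold=2.0) for r in radios)
false_alarm = energy_detect(noise_only, threshold=2.0)
```

The OR rule trades false-alarm probability for detection probability: the network alarms whenever any one radio alarms, which is why the threshold choice studied in [81] matters.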
1.5.2 The Center for Wireless Telecommunications at Virginia Tech

A distributed genetic-algorithm-based CR engine is proposed in [367] [368]. The cognitive engine focuses on how to provide CR capability to the physical and MAC/data-link layers. The system is structured so that the cognitive engine can provide cognitive functionality that scales with primary users. Information about the radio-spectrum environment and the location of the users is used by the engine to better classify the environment and choose potential radio configurations. The cognitive system monitor
enables cross-layer cognition and adaptation by classifying the observed channel, matching channel behavior with operational goals, and passing the goals to a wireless-system genetic-algorithm adaptive-controller module to gradually optimize radio operation. The cognitive-engine framework is compared with the traditional adaptive-controller framework. It is shown that the cognitive engine can find the best tradeoff among a user's operational parameters in a changing environment, while the traditional adaptive controller can only increase or decrease the data rate, wasting usable bandwidth or power owing to its inability to learn. Using this CR engine, an experiment was conducted in [289] to demonstrate the benefits of CR through dynamic spectrum sharing. The experiment focuses on the unlicensed 5.8-GHz ISM band to compare the spectrum utilization of IEEE 802.11 a/g physical layers with the CR version of such a WLAN radio. In the CR OFDM PHY-layer model, the access point's channel is dynamically changed according to the location and interference level, and the subscribers can pick an access point according to the sensed SINR and load condition at each access point. By sensing the radio-spectrum environment and making real-time decisions on frequency, bandwidth, and waveform, the CR OFDM PHY layer can achieve an increase of 20 dB in SINR over the standard OFDM PHY layer. A coexistence experiment conducted in [315] studied the feasibility of coexistence of the primary users and secondary users in a common spectrum band. In the worst-case scenario with no guard bands between the primary users and secondary users, the primary users can be minimally affected if the secondary users' transmissions are properly modified. Issues involved in adapting CR technology to consumer markets are discussed in [21].
The pricing mechanism to trade the network resources will be integrated as part of the user-domain and modeling system in the CR engine, and the pricing mechanism should strike a good balance between resource efficiency and computational complexity. It is worth studying the interaction of the proposed pricing system with the cognitive engine, and whether the CR engine is able to produce good solutions within a reasonable period of time.
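The adaptation loop of a genetic-algorithm cognitive engine can be sketched as below. The two-dimensional parameter vector (power, rate), the fitness function trading data rate against transmit power, and the feasible bounds are purely illustrative assumptions, not the objective used in [367] [368]:

```python
import random

def fitness(params):
    """Hypothetical objective: reward data rate, penalize transmit power."""
    power, rate = params
    return rate - 0.5 * power

def evolve(pop, generations=50, rng=None):
    """Elitist GA: keep the better half each generation, refill with mutated crossovers."""
    rng = rng or random.Random(0)
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: len(pop) // 2]
        children = []
        while len(survivors) + len(children) < len(pop):
            a, b = rng.sample(survivors, 2)
            child = [(x + y) / 2 + rng.gauss(0.0, 0.1) for x, y in zip(a, b)]
            children.append([min(max(v, 0.0), 10.0) for v in child])  # feasible region
        pop = survivors + children
    return max(pop, key=fitness)

rng = random.Random(1)
population = [[rng.uniform(0, 10), rng.uniform(0, 10)] for _ in range(20)]
initial_best = fitness(max(population, key=fitness))
best = evolve([p[:] for p in population], rng=random.Random(1))
```

Because the better half always survives, the engine's best configuration never regresses between generations, which mirrors the gradual optimization of radio operation described above.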
1.5.3 WINLAB at Rutgers University

Researchers at Rutgers University have constructed an Open Access Research Testbed for Next-Generation Wireless Networks (ORBIT) [370] to perform experimentation on CR research. The ORBIT testbed has a two-tier architecture, consisting of an indoor radio-grid emulator for controlled experimentation and an outdoor field-trial network for end-user evaluation in real-world settings. Several of the key architectural issues for CR networks are discussed in [366], including spectrum agility and fast spectrum scanning over multiple frequency bands, fast PHY adaptation, the spectrum-etiquette protocol and dynamic spectrum coordination, flexible MAC-layer protocols, control and management protocols, and ad hoc group formation and cross-layer adaptation. A high-performance CR platform with integrated physical- and network-layer capabilities [363] based on the architectural foundation [366] is under development using the ORBIT testbed. The CR
prototype's architecture consists of several major elements: an agile RF front-end working over a range of frequencies, an FPGA-based software-defined radio (SDR) to support a variety of modulation waveforms, a packet-processing engine for protocol and routing functionality, and an embedded CPU core for control and management. The goal of the hardware design is to provide fast RF scanning capability, and the software will use the GNU software radio code base. This prototype is differentiated from other CR projects in that the design uses hardware accelerators to achieve programmability and high performance at each layer of the protocol stack. An experimental study on spectrum sensing for localizing transmitters using sensor nodes with CR capability has been presented in [365], where the sensor nodes can sense only a limited bandwidth at a time. Using triangulation techniques based on the detected power at each sensor, the experiments study how to localize a single transmitter and multiple asynchronous interfering transmitters that are transmitting in the same band, and how to find the spectral occupancy over a band of frequencies. It is shown through the experiments that energy-detection techniques are not sufficient to localize multiple transmitting sources.
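A power-weighted centroid is one simple way to turn per-sensor power readings into a position estimate. This is a generic illustration, not the method of [365], which uses triangulation; the sensor layout and power values below are assumptions:

```python
def weighted_centroid(readings):
    """readings: list of ((x, y), power) pairs from the sensing nodes.
    Estimate the transmitter position as the power-weighted centroid."""
    total = sum(p for _, p in readings)
    x = sum(pos[0] * p for pos, p in readings) / total
    y = sum(pos[1] * p for pos, p in readings) / total
    return x, y

# Four corner sensors with a transmitter at the center of the square:
# by symmetry all sensors measure the same power, so the estimate is exact.
readings = [((0, 0), 1.0), ((10, 0), 1.0), ((0, 10), 1.0), ((10, 10), 1.0)]
estimate = weighted_centroid(readings)
```

The weighted centroid degrades badly when two transmitters share the band, since their powers superpose at every sensor, which is consistent with the finding above that energy detection alone cannot localize multiple sources.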
1.5.4 Others

A real CR governed by a cognitive engine is proposed in [62], since most of the existing DSA protocols have been defined in such a way that they could not be directly implemented on a real CR. The cognitive engine provides the capability both to reason (i.e., AI planning) and to learn (i.e., machine learning). Reasoning helps decide the best action in a particular scenario given knowledge of how the actions will affect progress toward an objective, while learning helps gather more information about how a particular action will affect the overall system state by trying out the action. The work in [62] translates the basic semantics of DSA into the Action Description Language, and implements a primary-prioritized Markov spectrum-access algorithm [449] within the Open-Source Cognitive Radio, which allows spectrum sharing both in frequency and in time. The secondary users can utilize the white space in the time domain by transmitting during the idle periods between primary users' packet transmissions. In order to make the best use of the white space, a realistic yet tractable model needs to be established that can provide adequate prediction performance while achieving a balance between statistical accuracy and complexity. An experimental testbed is developed in [151] to gather empirical data on the channel statistics. The testbed consists of a wireless router and several workstations with WLAN adapter cards. A vector signal analyzer is used to capture the raw complex baseband data, and various sensing strategies (energy- and feature-based detection) can be evaluated on the same data. This not only provides insight for developing real-time implementations, but also confirms the data validity. A prototype of a CR-based sensor implementation with off-the-shelf IEEE 802.11 devices was built in [388]. The sensor prototype uses WLAN cards with a built-in Atheros chipset, while slightly modifying the Atheros device driver to assess key ideas
of spectrum sensing. Important issues in spectrum sensing have been explored, such as how to choose the energy-detection threshold, how to characterize secondary traffic, and how to schedule the sensing priority. The experimental results provide guidelines for implementing a spectrum sensor in real CR networks. Experimental CR research using commercial platforms is limited by their inability to provide full control of the RF, PHY, and MAC functionalities and to change the underlying framework. The prototype system designed in [494] can provide more flexibility and reconfigurability. A real-time MIMO OFDM testbed was developed to support a large number of permutations of physical-layer modes, which are defined by the MAC through an API interface. The header of each MAC-to-PHY transmission contains the configuration value for that specific packet, and thus the higher layers can control the type of the packet and its operational mode. On the other hand, the PHY can provide SNR, CRC results, and channel state information to upper layers and enable advanced protocols. Besides the intelligent spectral-allocation feature, the testbed also supports a greater range of data rates and high throughput. A future version of the prototype is expected to allow independent allocation of the RF transceiver chains and to intelligently determine the number of antennas used for transmission and sensing. A CR testbed system employing a wideband multi-resolution spectrum-sensing (MRSS) technique is proposed in [184]. The testbed employs a vector signal generator and a vector signal analyzer to provide a variety of built-in standard wireless signals, such as IEEE 802.16, IEEE 802.11 a/b/g, 3G wireless, and digital video broadcasting. The hardware-control programs were developed in a Matlab environment. The received signal is investigated with the MRSS hardware to identify its spectral-occupancy status. Specifically, a wavelet transform is employed in the MRSS technique.
By adapting the wavelet's pulse width and its carrier frequency, the spectral-usage status can be represented in a multi-resolution format. The MRSS technique is shown to be able to examine a wideband spectrum and detect many sophisticated signal formats in current and emerging wireless standards. Therefore, the testbed can likely provide a flexible and versatile environment for developing CR access schemes. The Kansas University Agile Radio (KUAR) platform is presented in [281] [282]. It has a very flexible RF front-end that can support a large center-frequency range as well as wide transmission bandwidths. The powerful on-board digital processing can support a variety of cognitive functions, such as implementing numerous modulation algorithms, MAC protocols, and adaptation mechanisms. The self-contained, small-form-factor radio unit enables convenient portability. Moreover, the KUAR platform is highly configurable in that it has a robust set of hardware and software tools that allow developers to work in their area without being encumbered by the other layers. The low-cost build cycle also facilitates broad distribution of the KUAR units to the CR research community. A virtual-SDR system using an Atheros platform has been proposed in [105]. The experimental platform adopts a MultiMAC framework, which can dynamically reconfigure MAC- and physical-layer properties in order to achieve the best performance while providing the correct MAC-layer algorithm to decode the data frames.
Therefore, the platform can respond quickly to changes in the radio environment and to new requirements when optimizing spectrum efficiency. A software-defined CR prototype has been developed in [159], which consists of a hardware platform and a software platform. The hardware platform consists of a multi-band antenna with a frequency range in the UHF band and 2–5 GHz, a multi-band RF front-end, an FPGA-based signal-processing unit, and a CPU. The software platform is composed of several managers that control spectrum sensing and reconfiguration by changing software packages, where the software configuration specifies the type of communication system. An adaptive wireless-network testbed based on the MIRAI Cognitive Radio Execution Framework is proposed in [201]. The physical layer accepts both virtual and real CR devices with interfaces to the software environment through a gateway plug-in. Therefore, the testbed provides scalability to more than 10 000 nodes by combining real CR devices and virtual nodes. It enables flexible configuration of any protocol and application, verifies protocols in the MAC layer, and allows for remote interaction with the testbed over the Internet. However, owing to the processing-power limitation of a single PC and the overhead of the standard communications between the CR devices and the PC, the types of experiments that can be done on the testbed are limited and real-time signal processing becomes difficult.
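The coarse-to-fine idea behind multi-resolution sensing such as MRSS can be illustrated with a recursive energy scan over a power-spectral-density vector. This is a deliberate simplification for illustration only; the actual MRSS technique in [184] uses wavelet transforms with adaptable pulse widths and carrier frequencies:

```python
def multires_scan(psd, threshold):
    """Coarse-to-fine occupancy scan: halve the band recursively, descending
    only into halves whose total energy exceeds the threshold."""
    occupied = []
    def scan(lo, hi):
        if sum(psd[lo:hi]) <= threshold:
            return                      # whole sub-band looks idle: prune it
        if hi - lo == 1:
            occupied.append(lo)         # finest resolution: mark the bin
            return
        mid = (lo + hi) // 2
        scan(lo, mid)
        scan(mid, hi)
    scan(0, len(psd))
    return occupied

# Eight-bin PSD with energy concentrated in bins 2 and 6:
busy_bins = multires_scan([0, 0, 5, 0, 0, 0, 3, 0], threshold=1.0)
```

The pruning step is what makes the scan cheap on a sparsely occupied wideband spectrum: idle sub-bands are dismissed at coarse resolution and never examined bin by bin.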
1.5.5 Industry

Since 2005, the Shared Spectrum Company has been conducting field tests to measure the spectrum-occupancy status [278] in various locations, including outdoor urban and rural locations and an indoor location. It has been found that there is significant spectrum white space, and an agile, dynamic spectrum-sharing (DSS) radio can provide high spectrum utilization. Motivated by observations from the measurements, the company started to design effective spectrum detectors and DSS radios. Two implementations of detectors with significantly different operating characteristics are studied in [398], which compares their probabilities of false alarm and detection, and analyzes how to obtain the detectors' threshold levels using data measured from the environments. Detection thresholds for safe operation in unoccupied TV bands without causing harmful interference with other authorized operations are examined in [298]. A policy-based network-management framework for controlling spectrum access is presented in [331], including a prototype implementation and demonstration. This approach can support easy reconfiguration and policy authoring, secure policy distribution, management and enforcement, automated policy synchronization, conflict resolution, and opportunity discovery. A field experiment with this framework is presented in [332], where the distributed, policy-driven system restricts spectrum access on the basis of spectral, temporal, and spatial context, while utilizing the available spectrum more fully than does traditional static spectrum access. The MITRE Corporation has developed a testbed, the adaptive spectrum radio (ASR) [180], to demonstrate the feasibility of the ASR concept. The ASR is expected to
be able to perform periodic estimation of the channel-occupancy status, periodically adapt the time-limited waveform on the basis of the channel-occupancy status, negotiate spectrum access, and measure the interference with the primary users. The ASR testbed uses commercial off-the-shelf products, and has been used to identify design and policy considerations. A CR testbed has been developed by Northrop Grumman in [343]. The primary hardware component of the testbed is an agile waveform transmitter/receiver that consists of D/A and A/D converters, and FPGAs to support multicarrier transmissions. The algorithm used by the testbed exploits spectrum opportunities to create waveforms with a low probability of interception and detection. Other SDR or CR platforms include the Vanu SDR [46] and GNU USRP boards [31].
1.5.6 Standards

IEEE 802.22 [49] is proposed to reuse the fallow TV spectrum without harmful interference to TV incumbents. A CR-based PHY and MAC for dynamic spectrum sharing of vacant TV channels is evaluated in [50], which studies spectrum sensing, coexistence of primary and secondary users, spectrum management, reliability and QoS, and their impact on the overall network performance. Dynamic frequency hopping (DFH) has recently been proposed in IEEE 802.22 [49], where sensing is performed on the intended next working channels in parallel with data transmission in the current working channel, and no interruption is required for sensing. Efficient and mutually interference-free spectrum usage can be achieved only if multiple users operating in DFH coordinate their hopping behavior, so the formation of DFH communities is proposed in [195], whereby neighboring secondary users form cooperating communities and coordinate their hopping patterns. The analysis in [336] quantifies the idle bandwidth in the current TV band assignments, and the statistical analysis shows that secondary users can operate on the discontiguous idle spectrum using OFDM. A feature-detector design for TV bands is studied in [99]. IEEE P1900 [273] is a new standard series focusing on next-generation radio and spectrum management. One important focus of the standard is to provide reconfigurable networks and terminals in a heterogeneous wireless environment, where multihoming-capable terminals enable users to operate multiple links simultaneously. The architectural building blocks include a network reconfiguration management (NRM) module that provides information about the environment, a terminal reconfiguration management (TRM) module that takes information from the NRM and determines the optimal radio resource-usage strategies, and a radio enabler of reconfiguration management that acts as a link between the NRM and the TRM.
2 Game theory for cognitive radio networks
Cognitive radio technology, a revolutionary communication paradigm that can utilize the existing wireless spectrum resources more efficiently, has been receiving growing attention in recent years. Because network users need to adapt their operating parameters to the dynamic environment, and may pursue different goals, traditional spectrum-sharing approaches based on a fully cooperative, static, and centralized network environment are no longer applicable. Instead, game theory has been recognized as an important tool for studying, modeling, and analyzing the cognitive interaction process. In this chapter, we introduce the most fundamental concepts of game theory, and explain in detail how these concepts can be leveraged in designing spectrum-sharing protocols, with an emphasis on state-of-the-art research contributions in cognitive radio networking. This chapter provides a comprehensive treatment of game theory with important applications in cognitive radio networks, and will aid the design of efficient, self-enforcing, and distributed spectrum-sharing schemes in future wireless networks.
2.1 Introduction

Cognitive radio technology [284] has emerged in recent years as a revolutionary communication paradigm, which can provide faster and more reliable wireless services by utilizing the existing spectrum band more efficiently [160] [7]. A notable difference between a cognitive radio network and traditional wireless networks is that users need to be aware of the dynamic environment and adaptively adjust their operating parameters on the basis of interactions with the environment and other users in the network. Traditional spectrum-sharing and management approaches, however, generally assume that all network users cooperate unconditionally in a static environment, and thus they are not applicable to a cognitive radio network. In a cognitive radio network, users are intelligent and have the ability to observe, learn, and act to optimize their performance. If they belong to different authorities and pursue different goals, e.g., compete for an open unlicensed band, fully cooperative behavior cannot be taken for granted. Instead, users will cooperate with others only if cooperation brings them more benefit. Moreover, the surrounding radio environment keeps changing, owing to the unreliable and broadcast nature of wireless channels, user mobility and dynamic topology, and traffic variations. In traditional spectrum sharing, even a small change in the radio environment will trigger the network controller to
re-allocate the spectrum resources, which results in a lot of communication overhead. To tackle the above challenges, game theory has naturally become an important tool that is ideal and essential for studying, modeling, and analyzing the cognitive interaction process, and for designing efficient, self-enforcing, distributed, and scalable spectrum-sharing schemes. Game theory is a mathematical tool that analyzes the strategic interactions among multiple decision makers. Its history dates back to the publication of the 1944 book Theory of Games and Economic Behavior by J. von Neumann and O. Morgenstern, which included the method for finding mutually consistent solutions for two-person zero-sum games and laid the foundation of game theory. The late 1940s saw the emergence of cooperative game theory, which analyzes optimal strategies for groups of individuals, assuming that they can enforce collaboration among themselves so as to jointly improve their positions in a game. In the early 1950s, J. Nash developed a new criterion, known as the Nash equilibrium, to characterize mutually consistent strategies of players. This concept is more general than the criterion proposed by von Neumann and Morgenstern, since it is applicable to non-zero-sum games, and it marked a major advance in the development of non-cooperative game theory. During the 1950s, many important concepts of game theory were developed, such as the core, extensive-form games, repeated games, and the Shapley value. Refinements of the Nash equilibrium and the concepts of incomplete information and Bayesian games were proposed in the 1960s. The application of game theory to biology, i.e., evolutionary game theory, was introduced by J. M. Smith in the 1970s, during which time the concepts of correlated equilibrium and common knowledge were introduced by R. Aumann.
Also during the 1960s, game theorists started to investigate a new branch of game theory, mechanism-design theory, which focuses on solution concepts for a class of private-information games. Nowadays, game theory is widely recognized as an important tool in many fields, such as the social sciences, biology, engineering, political science, international relations, and computer science, for understanding cooperation and conflict between individuals.

In cognitive radio networks, network users make intelligent decisions on their spectrum usage and operating parameters on the basis of the sensed spectrum dynamics and the actions adopted by other users. Furthermore, users who compete for spectrum resources may have no incentive to cooperate with one another and may instead behave selfishly. Therefore, it is natural to study the intelligent behaviors and interactions of selfish network users from a game-theoretic perspective. The importance of studying cognitive radio networks in a game-theoretic framework is multifold. First, by modeling dynamic spectrum sharing among network users (primary and secondary users) as games, network users' behaviors and actions can be analyzed in a formalized game structure, by means of which the theoretical achievements of game theory can be fully utilized. Second, game theory equips us with various optimality criteria for the spectrum-sharing problem. To be specific, the optimization of spectrum usage is generally a multi-objective optimization problem, which is very difficult to analyze and solve. Game theory provides us with well-defined equilibrium criteria to measure game optimality under various game settings. Third, non-cooperative
Figure 2.1 Four categories of game-theoretic spectrum-sharing approaches: non-cooperative games and Nash equilibrium; economic games, auction games, and mechanism design; cooperative games (coalitional and bargaining games); and stochastic games.
game theory, one of the most important branches of game theory, enables us to derive efficient distributed approaches to dynamic spectrum sharing using only local information. Such approaches become highly desirable when centralized control is not available or when flexible self-organized approaches are needed.

In this chapter, we aim to provide a comprehensive treatment of game theory oriented toward its recent applications to cognitive radio networks. Considering that game theory is still rarely taught in engineering or computer-science curricula, we assume that the reader has very little background in this area. Therefore, we start each section by introducing the most basic game-theoretic concepts, and then address how these concepts can be leveraged in designing efficient spectrum-sharing schemes from a network designer's perspective. We first discuss non-cooperative spectrum-sharing games in Section 2.2, since network users are mostly assumed to be selfish and to aim only at maximizing their own spectrum usage. We then turn, in Section 2.3, to the application of economic games and mechanism design to cognitive radio networks, including spectrum pricing and auctions, where spectrum resources are traded like exchangeable goods in a spectrum market. Cooperative spectrum-sharing games, in which network users have an agreement on how to utilize and distribute the spectrum resources, are discussed in Section 2.4, and stochastic spectrum-sharing games, in which network users adapt their strategies according to the changing environment and other users' strategies, are discussed in Section 2.5. An overview of this chapter is given in Figure 2.1.
2.2 Non-cooperative games and Nash equilibrium

Nash equilibrium is a key concept for understanding non-cooperative game theory. Given a game in which two or more players interactively make their decisions, it is natural to ask "What will the outcome of the game be?" The answer is given by the Nash equilibrium which, informally speaking, is an outcome in which every player plays his/her best strategy when taking the decision-making of the others into account. The next questions are then "Does a Nash equilibrium always exist in a game?" and "Is it unique?" We will show in Section 2.2.1 that the existence of Nash equilibria is quite general, but uniqueness has to be analyzed case by case. A Nash equilibrium tells us what the equilibrium outcome will be, but it does not answer the question "How can we get to the equilibrium?" This question is all the more important in the context of cognitive radio networks, where players may lack the global information needed to predict the equilibrium directly. Instead, they may start from an arbitrary strategy, update their strategies according to certain rules, and, it is to be hoped, converge to the equilibrium. Section 2.2.2 provides two specific conditions that guarantee convergence to a unique Nash equilibrium. When there exist multiple equilibria, one needs to select those equilibria that are superior to others. In Section 2.2.3, we discuss several equilibrium-selection criteria. Pareto optimality is defined to compare multi-dimensional payoff profiles, and an equilibrium that is not as good as others in the Pareto sense can be ignored. Moreover, some
Game theory for cognitive radio networks
Table 2.1. Components of games for cognitive radio networks

Open spectrum sharing
  Players: secondary users that compete for an unlicensed spectrum.
  Actions: transmission parameters, such as transmission power level, access rates, waveform, etc.
  Payoff: a nondecreasing function of the quality of service (QoS) obtained by utilizing the spectrum.

Licensed spectrum sharing (auction)
  Players: both primary and secondary users.
  Actions: secondary users decide which licensed bands they want to rent and how much they would pay for leasing them; primary users decide which secondary users they will lease each unused band to, and the charge.
  Payoff: monetary gains, e.g., revenue minus cost, from leasing the licensed spectrum.
refinement can be used to narrow down the set of game outcomes, e.g., by removing those involving non-credible actions or implausible beliefs. The evolutionary equilibrium is the equilibrium that is evolutionarily stable. In general, the Nash equilibrium of a non-cooperative game often suffers from excessive competition among the selfish players, so the outcome of the game is inefficient. Hence, we are eager to know the answer to the following question: "Can we go beyond a Nash equilibrium?" In Section 2.2.4, three approaches that can improve the efficiency of Nash equilibria are discussed, namely pricing, repeated-game formulation, and correlated equilibrium.
2.2.1 Nash equilibrium

Game theory is a mathematical tool that analyzes the strategic interactions among multiple decision makers. A strategic-form game model has three major components:

• a finite set of players, denoted by $\mathcal{N}$;
• a set of actions, denoted by $A_i$, for each player $i$; and
• a payoff/utility function, denoted by $u_i: A \to \mathbf{R}$, which measures the outcome for player $i$ determined by the actions of all players, $A = \times_{i \in \mathcal{N}} A_i$.

Given the above definition and notation, a strategic game is often denoted by $\langle \mathcal{N}, (A_i), (u_i)\rangle$. In cognitive radio networks, the competition and cooperation among the cognitive network users can be well modeled as a spectrum-sharing game. Table 2.1 gives an example that explains the game components in a cognitive radio network. In a non-cooperative spectrum-sharing game with rational network users, each user cares only about his/her own benefit and chooses the optimal strategy that maximizes his/her payoff function. The resulting outcome of the non-cooperative game is termed a Nash equilibrium (NE), the most commonly used solution concept in game theory.
Definition 2.2.1 A Nash equilibrium of a strategic game $\langle \mathcal{N}, (A_i), (u_i)\rangle$ is a profile $a^* \in A$ of actions such that for every player $i \in \mathcal{N}$ we have

$$u_i(a_i^*, a_{-i}^*) \ge u_i(a_i, a_{-i}^*), \qquad (2.1)$$

for all $a_i \in A_i$, where $a_i$ denotes the strategy of player $i$ and $a_{-i}$ denotes the strategies of all players other than player $i$.

The definition indicates that no player can improve his/her payoff by a unilateral deviation from the NE, given that the other players adopt the NE. In other words, the NE defines the best-response strategy of each player, as stated below:

$$a_i^* \in B_i(a_{-i}^*), \quad \text{for all } i \in \mathcal{N}, \qquad (2.2)$$

with the set-valued function $B_i$ defined as the best-response function of player $i$, i.e.,

$$B_i(a_{-i}) = \{a_i \in A_i : u_i(a_{-i}, a_i) \ge u_i(a_{-i}, a_i') \text{ for all } a_i' \in A_i\}. \qquad (2.3)$$

Given the definition of the NE, one is naturally interested in whether an NE exists for a certain game, so that its properties can be studied. On the basis of the fixed-point theorem, the following theorem has been shown [322].

Theorem 2.2.1 A strategic game $\langle \mathcal{N}, (A_i), (u_i)\rangle$ has a Nash equilibrium if, for all $i \in \mathcal{N}$, the action set $A_i$ of player $i$ is a nonempty compact convex subset of a Euclidean space, and the payoff function $u_i$ is continuous and quasi-concave on $A_i$.

In the above definition and notation, it is implicitly assumed that players adopt only deterministic strategies, also known as pure strategies. More often, the players' strategies are not deterministic but are governed by probabilistic rules. The mixed-strategy NE concept is designed to describe such scenarios, in which players' strategies are non-deterministic. Denote by $\Delta(A_i)$ the set of probability distributions over $A_i$; then each member of $\Delta(A_i)$ is a mixed strategy of player $i$. In general, the players adopt their mixed strategies independently of each other's decisions.
If we denote a mixed-strategy profile by $(\alpha_i)_{i \in \mathcal{N}}$, where $\alpha_i$ represents the probability distribution over the action set $A_i$, then the probability of the action profile $a = (a_i)_{i \in \mathcal{N}}$ is $\prod_{i \in \mathcal{N}} \alpha_i(a_i)$, and player $j$'s payoff under the strategy profile $(\alpha_i)_{i \in \mathcal{N}}$ is $\sum_{a \in A} \bigl(\prod_{i \in \mathcal{N}} \alpha_i(a_i)\bigr) u_j(a)$, if each $A_i$ is finite. The NE defined for strategic games in which players adopt pure strategies can then be naturally extended: a mixed-strategy Nash equilibrium of a strategic game is an NE in which the players adopt mixed strategies, following the above extension. Without providing a proof (interested readers can refer to [322]), we state the existence of a mixed-strategy NE in games where each player has a finite number of actions in the following theorem.

Theorem 2.2.2 Every finite strategic game has a mixed-strategy Nash equilibrium.
Table 2.2. An example of a chicken game

            D      C
    D      0,0    6,3
    C      3,6    5,5

(Rows: player 1's action; columns: player 2's action; entries: (player 1's payoff, player 2's payoff).)
We use the following example to explain how to derive the NE of a game. In Table 2.2, we list the payoffs of a two-player chicken game that can represent the competition between two users for access to an open spectrum band, where the action Dare (D) means accessing aggressively with a high rate and the action Chicken out (C) means accessing moderately with a low rate. If one user accesses the spectrum aggressively while the other does so moderately, the former will gain more. If both access aggressively (no cooperation), neither of them will gain, because collisions will be too frequent. If both access moderately, each will gain a much higher payoff than in the case of no cooperation. In this game, if one user is going to dare, it is better for the other to chicken out; if one user is going to chicken out, it is better for the other to dare. Therefore, there are two pure-strategy NEs in this game, namely (D,C) and (C,D). To calculate the mixed-strategy NE, we can assume that the probability distribution of player 1 (the row player) over $A_1 = \{D, C\}$ is $\alpha_1 = [x, 1-x]$, and that of player 2 (the column player) is $\alpha_2 = [y, 1-y]$. Then, the expected payoff of player 1 is

$$\bar{u}_1 = 0 \cdot x \cdot y + 3 \cdot (1-x) \cdot y + 6 \cdot x \cdot (1-y) + 5 \cdot (1-x) \cdot (1-y). \qquad (2.4)$$

At a mixed-strategy equilibrium, player 1 must be indifferent between his/her two actions, i.e., $\partial \bar{u}_1 / \partial x = 0$. Solving this equation, we obtain $y = 1/4$, and we further get $x = 1/4$ in a similar way. Thus, there is a mixed-strategy equilibrium in which each user dares with probability 1/4.
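As a quick sanity check, the equilibria above can be verified numerically. The following sketch (plain Python, with the payoff matrices transcribed from Table 2.2) enumerates the profiles from which no unilateral deviation helps, and confirms that $y = 1/4$ makes player 1 indifferent between daring and chickening out.

```python
from itertools import product

# Payoffs of the chicken game in Table 2.2: action 0 = Dare (D), 1 = Chicken out (C).
U1 = [[0, 6], [3, 5]]   # row player's payoffs
U2 = [[0, 3], [6, 5]]   # column player's payoffs

# Pure-strategy NE: profiles from which no unilateral deviation helps (Eq. (2.1)).
pure_ne = [(i, j) for i, j in product(range(2), range(2))
           if U1[i][j] == max(U1[k][j] for k in range(2))
           and U2[i][j] == max(U2[i][k] for k in range(2))]
print(pure_ne)          # [(0, 1), (1, 0)], i.e., (D, C) and (C, D)

# Mixed-strategy NE: when the opponent dares with probability y = 1/4,
# player 1 is indifferent between D and C.
y = 0.25
u_dare    = y * U1[0][0] + (1 - y) * U1[0][1]   # expected payoff of always daring
u_chicken = y * U1[1][0] + (1 - y) * U1[1][1]   # expected payoff of chickening out
print(u_dare, u_chicken)                        # 4.5 4.5
```

Both pure actions yield the same expected payoff of 4.5 against the mixed strategy, which is exactly the indifference condition used to derive the equilibrium.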
2.2.2 Uniqueness of equilibrium

Besides existence, uniqueness of an equilibrium is another desirable property. If we know there exists only one equilibrium, we can predict the equilibrium strategy of the players and the resulting performance of the cognitive radio network. By optimally tuning the design parameters of the game, it is possible to steer the behavior of the rational players toward efficient spectrum sharing at the equilibrium. Unlike existence, which can be established using the fixed-point theorem, uniqueness of an equilibrium holds only in several special cases. For instance, if the payoff function of each player is strictly concave in his/her own action and the feasible region is convex, the game has a unique equilibrium. In the following, we discuss two other special cases that guarantee the uniqueness of an equilibrium.
2.2.2.1 Potential games

The NE gives each player the best strategy, given that all the other players stick to their equilibrium strategies. The question, however, is how to find the Nash equilibrium, especially when the system is implemented in a distributed manner. One approach is to let the players adjust their strategies iteratively on the basis of accumulated observations as the game unfolds, in the hope that the process converges to some equilibrium point. Although convergence does not hold in general, the iteration does converge to the NE when the game has certain special structures. For example, when the game can be modeled as a potential game, convergence to the NE is guaranteed. The concept of potential games, proposed in [296], was first applied to cognitive radio networks in [303] and has been employed widely in the cognitive radio context since then.

Definition 2.2.2 A game $\langle \mathcal{N}, (A_i), (u_i)\rangle$ is a potential game if there is a potential function $P: A \to \mathbf{R}$ such that one of the following conditions holds. The game is an exact potential game if the first condition holds, and an ordinal potential game if the second condition holds.

(i) $P(a_i, a_{-i}) - P(a_i', a_{-i}) = u_i(a_i, a_{-i}) - u_i(a_i', a_{-i})$, for any $i \in \mathcal{N}$, $a \in A$, and $a_i' \in A_i$.
(ii) $\mathrm{sgn}\bigl[P(a_i, a_{-i}) - P(a_i', a_{-i})\bigr] = \mathrm{sgn}\bigl[u_i(a_i, a_{-i}) - u_i(a_i', a_{-i})\bigr]$, for any $i \in \mathcal{N}$, $a \in A$, and $a_i' \in A_i$, where $\mathrm{sgn}(\cdot)$ is the sign function.

From the definition, it is easy to see that each player's individual interest is aligned with the group's interest (i.e., the potential function): any player choosing a better strategy, given all other players' current strategies, necessarily increases the value of the potential function. In a finite potential game, sequential better-response updates by the players therefore terminate in finitely many steps at an NE that maximizes the potential function.
Several useful conditions for potential games have been established in [296] and [428]; we summarize them in the following theorem. These conditions can be used to prove that a game is a potential game, or to guide the design of one. The third condition is of particular interest, since it shows that a game is a potential game as long as the payoff functions have a certain symmetry property.

Theorem 2.2.3 A game $\langle \mathcal{N}, (A_i), (u_i)\rangle$ is an exact potential game with a potential function $P(\cdot)$:

(i) if and only if

$$\frac{\partial^2 u_i}{\partial a_i\, \partial a_j} = \frac{\partial^2 u_j}{\partial a_i\, \partial a_j}, \quad \text{for all } i, j \in \mathcal{N}, \qquad (2.5)$$

provided that each $A_i$ is an interval of real numbers and $u_i$ is continuously differentiable for all $i \in \mathcal{N}$;

(ii) if and only if there exist functions $P_0: A \to \mathbf{R}$ and $P_i: A_{-i} \to \mathbf{R}$ ($i \in \mathcal{N}$) such that

$$u_i(a_i, a_{-i}) = P_0(a_i, a_{-i}) + P_i(a_{-i}), \quad \text{for all } i \in \mathcal{N}, \qquad (2.6)$$

where $P(a_i, a_{-i}) = P_0(a_i, a_{-i})$;

(iii) if there exist functions $P_{ij}: A_i \times A_j \to \mathbf{R}$ and $P_i: A_i \to \mathbf{R}$ such that $P_{ij}(a_i, a_j) = P_{ji}(a_j, a_i)$ and

$$u_i(a) = \sum_{j \in \mathcal{N} \setminus \{i\}} P_{ij}(a_i, a_j) - P_i(a_i), \quad \text{for all } i \in \mathcal{N} \text{ and } a \in A. \qquad (2.7)$$

This is known as the bilateral symmetric game, with

$$P(a) = \sum_{i \in \mathcal{N}} \sum_{j=1}^{i-1} P_{ij}(a_i, a_j) - \sum_{i \in \mathcal{N}} P_i(a_i).$$
Potential games have been used widely in cognitive radio networks, such as in [314] [304] [292] [472] [310] [417] and [137]. For example, let us consider a few applications as follows.

• Waveform selection [314]. In this game, players distributively choose their signature waveforms $a_i \in A_i$ to reduce correlation. The signal-to-interference-and-noise ratio (SINR) of player $i$ is

$$\gamma_i = \frac{h_i p_i}{\sum_{j \ne i} h_{ji} p_j \rho(a_j, a_i) + n_i}, \qquad (2.8)$$

where $h_i$, $p_i$, and $n_i$ are the channel gain, power level, and noise variance for player $i$, $h_{ji}$ is the cross-channel gain from transmitter $j$ to receiver $i$, and $\rho(a_i, a_j)$ is the correlation when players $i$ and $j$ choose waveforms $a_i$ and $a_j$, respectively. According to [314], the payoff function is defined as some function of the SINR $\gamma_i$ minus a cost associated with the selected waveform,

$$u_i(a_i, a_{-i}) = f_i(\gamma_i(a_i, a_{-i})) - c_i(a_i), \qquad (2.9)$$

and the game is claimed to be a bilateral symmetric game (Theorem 2.2.3, condition (iii)) when certain conditions hold.

• Power control [314]. This game is similar to the previous one, except that the action space consists of all possible power levels and the cost is associated with the power level. The fixed waveforms result in a correlation $\rho_{ji}$ between players $i$ and $j$. When $f_i(\cdot)$ can be decomposed into one function of the numerator of the SINR and another function of its denominator (e.g., if $f_i(\cdot)$ is a logarithmic function), the payoff function can be written as

$$u_i(a_i, a_{-i}) = f_{i,1}(h_i a_i) - f_{i,2}\Bigl(\sum_{j \ne i} h_{ji} a_j \rho_{ji} + n_i\Bigr) - c_i(a_i). \qquad (2.10)$$

It is easy to
show that the first condition in Theorem 2.2.3 is satisfied for this game; hence it is a potential game.

• Channel allocation [304]. In this game, a player's strategy is to select one of multiple channels for transmission, and players in the same band interfere with each other. In order to reduce mutual interference, the payoff function is defined as the negative of the total interference, both received from other players and caused to other players, i.e.,

$$u_i(a_i, a_{-i}) = -\sum_{j=1, j \ne i}^{N} p_j h_{ji} 1(a_j = a_i) - \sum_{j=1, j \ne i}^{N} p_i h_{ij} 1(a_i = a_j), \qquad (2.11)$$

where the indicator function $1(a_j = a_i)$ implies that players $i$ and $j$ experience mutual interference only if they choose the same channel. Condition (iii) in Theorem 2.2.3 is satisfied when we define $P_{ij}(a_i, a_j) = -(p_j h_{ji} + p_i h_{ij})1(a_i = a_j)$ and $P_i(a_i) = 0$.
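To illustrate how better-response updates climb the potential function, here is a small self-contained simulation of the channel-allocation game of (2.11). The numbers of users and channels, the powers, and the random gains are illustrative toy values of our own choosing, not from the text; the convergence behavior, however, is exactly the finite-improvement property of exact potential games.

```python
import random

random.seed(1)
N, M = 6, 3                        # users and channels (illustrative toy sizes)
p = [1.0] * N                      # transmit powers (assumed equal)
h = [[random.random() for _ in range(N)] for _ in range(N)]   # h[j][i]: gain j -> i

def utility(i, a):
    """u_i of Eq. (2.11): negative interference received from and caused to co-channel users."""
    return -sum(p[j] * h[j][i] + p[i] * h[i][j]
                for j in range(N) if j != i and a[j] == a[i])

def potential(a):
    """Exact potential: negative sum of pairwise interference over co-channel pairs."""
    return -sum(p[j] * h[j][i] + p[i] * h[i][j]
                for i in range(N) for j in range(i) if a[i] == a[j])

a = [0] * N                        # start with every user on channel 0
improved = True
while improved:                    # sequential best responses
    improved = False
    for i in range(N):
        best = max(range(M), key=lambda c: utility(i, a[:i] + [c] + a[i + 1:]))
        if utility(i, a[:i] + [best] + a[i + 1:]) > utility(i, a):
            a[i] = best            # each switch strictly increases the potential
            improved = True
print(a, potential(a))             # converged channel assignment: an NE of the game
```

The loop exits only when no user can improve unilaterally, i.e., at a Nash equilibrium; because every switch raises the common potential, termination is guaranteed.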
2.2.2.2 Standard functions

The standard function was first introduced in [478] to aid power control in cellular networks. Assume that there are $N$ cellular users, $M$ base stations, and a common radio channel. Denote by $p_i$ the transmitted power of user $i$, by $h_{m,i}$ the channel gain from user $i$ to base station $m$, and by $n_m$ the noise power at base station $m$. Treating the interference at a base station as noise, we can express the SINR per unit transmitted power of user $i$ at base station $m$ as

$$\mu_{m,i} = \frac{h_{m,i}}{\sum_{j \ne i} h_{m,j} p_j + n_m}. \qquad (2.12)$$

When $k_i$ is user $i$'s assigned base station, to ensure an acceptable communication performance, the received SINR at base station $k_i$ should be no less than a certain level $\gamma_i$, i.e., $p_i \mu_{k_i,i}(\mathbf{p}) \ge \gamma_i$, where $\mathbf{p}$ denotes the power vector of the $N$ users. This constraint can be rewritten in terms of an interference function $I(\mathbf{p}) = (I_1(\mathbf{p}), \ldots, I_N(\mathbf{p}))$, with each $I_i(\mathbf{p})$ defined as below:

$$p_i \ge I_i(\mathbf{p}) = \frac{\gamma_i}{\mu_{k_i,i}(\mathbf{p})}. \qquad (2.13)$$

It has been shown that the interference function $I(\mathbf{p})$ defined above, as well as several other types of interference function, is a standard function, whose precise definition is given as follows [478].

Definition 2.2.3 A function $I(\mathbf{p})$ is standard if, for all $\mathbf{p} \ge 0$, the following properties are satisfied:

• positivity: $I(\mathbf{p}) > 0$;
• monotonicity: if $\mathbf{p} \ge \mathbf{p}'$, then $I(\mathbf{p}) \ge I(\mathbf{p}')$;
• scalability: for all $\alpha > 1$, $\alpha I(\mathbf{p}) > I(\alpha \mathbf{p})$.

Owing to the nice properties of the standard function, it was possible to propose a synchronous iterative power-control algorithm based on a standard function
$\mathbf{p} \leftarrow I(\mathbf{p})$, also called the standard power-control algorithm, with proved convergence to a unique fixed point [478].

Theorem 2.2.4 If $I(\mathbf{p})$ is feasible, then, for any initial power vector $\mathbf{p}$, the standard power-control algorithm converges to a unique fixed point $\mathbf{p}^*$.

Since the common radio channel is a shared medium, each user's transmission causes interference to the others, and the interference becomes increasingly severe at higher transmitted powers. On the other hand, selfish users try to pursue high utility by increasing their transmitted power. Therefore, conventional power control can be cast as a non-cooperative power-control game (NPG). If the best-response strategy is a standard function of the variable that represents the user's action, then the NPG has a unique equilibrium [394]. The idea of a standard function has been used in several previous works [288] [511]. For instance, [511] considers a cooperative cognitive radio network in which secondary users serve as cooperative relays for the primary users, so that they can have the opportunity to access the wireless channel. Each secondary user aims at maximizing a utility, defined as a function of his/her achievable rate minus the payment, by selecting the proper payment in the non-cooperative game. By proving that the best-response payment is a standard function, it is shown that the non-cooperative payment-selection game has a unique equilibrium.
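The fixed-point iteration of Theorem 2.2.4 can be simulated in a few lines. The sketch below assumes a single base station serving three users; the channel gains, noise power, and SINR targets are illustrative values of our own choosing (picked so that the targets are feasible), not parameters from the text.

```python
# Standard power-control iteration p <- I(p) for one base station and three users.
# Gains, noise, and SINR targets below are illustrative (chosen to be feasible).
h = [1.0, 0.8, 0.6]          # uplink channel gains
gamma = [0.2, 0.2, 0.2]      # SINR targets
noise = 0.1

def I(p):
    """Interference function of Eq. (2.13): minimum power meeting each SINR target."""
    return [gamma[i] * (sum(h[j] * p[j] for j in range(3) if j != i) + noise) / h[i]
            for i in range(3)]

p = [0.0, 0.0, 0.0]          # any starting point works (Theorem 2.2.4)
for _ in range(200):
    p = I(p)                 # converges to the unique fixed point

# At the fixed point, every user meets its SINR target with equality.
sinr = [h[i] * p[i] / (sum(h[j] * p[j] for j in range(3) if j != i) + noise)
        for i in range(3)]
print([round(s, 6) for s in sinr])   # [0.2, 0.2, 0.2]
```

Starting from the all-zero vector, the powers increase monotonically to the minimal feasible power vector, which is exactly the behavior the positivity, monotonicity, and scalability properties guarantee.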
2.2.3 Equilibrium selection

2.2.3.1 Pareto optimality

When there is more than one equilibrium in a game, it is natural to ask whether some equilibria outperform others, and whether there exists an optimal one. Because game theory deals with multi-objective optimization problems, it is not easy to define optimality in such scenarios. For example, when players have conflicting interests, an increase in one player's payoff may decrease the others' payoffs. One possibility for defining optimality is to compare the weighted sums of the individual payoffs, which reduces the multi-dimensional problem to a one-dimensional one. A more popular alternative is Pareto optimality which, informally speaking, designates a payoff profile such that no strategy can make at least one player better off without making any other player worse off.

Definition 2.2.4 Let $U \subseteq \mathbf{R}^N$ be a set. Then $u \in U$ is Pareto efficient if there is no $u' \in U$ for which $u_i' > u_i$ for all $i \in \mathcal{N}$; $u \in U$ is strongly Pareto efficient if there is no $u' \in U$ for which $u_i' \ge u_i$ for all $i \in \mathcal{N}$ and $u_i' > u_i$ for some $i \in \mathcal{N}$. The Pareto frontier is defined as the set of all $u \in U$ that are Pareto efficient.

Pareto efficiency, or Pareto optimality, has been widely used in game theory, as well as in economics, engineering, and the social sciences. If there is more than one equilibrium candidate, usually the optimal ones in the Pareto sense are preferred. For example,
in the repeated games that we will discuss later in this chapter, many equilibria may exist when certain strategies are applied. Out of the many possible choices, those on the Pareto frontier are superior to the others. In the bargaining game, which is also a topic of this chapter, Pareto optimality is used as an axiom to define the bargaining equilibrium. However, because of the selfish nature of the players in a non-cooperative game, an NE may be Pareto inefficient compared with the payoff profiles of other possible outcomes. Several methods for improving an inefficient equilibrium will be introduced later.
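For finite payoff sets, Definition 2.2.4 translates directly into code. The sketch below filters the strongly Pareto-efficient profiles, using the four payoff profiles of the chicken game in Table 2.2 as input.

```python
def pareto_frontier(profiles):
    """Strongly Pareto-efficient payoff profiles of a finite set (Definition 2.2.4)."""
    def dominates(u, v):
        # u weakly improves on v in every coordinate and strictly in at least one
        return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))
    return [u for u in profiles if not any(dominates(v, u) for v in profiles)]

# The four payoff profiles of the chicken game (Table 2.2):
outcomes = [(0, 0), (6, 3), (3, 6), (5, 5)]
print(pareto_frontier(outcomes))   # [(6, 3), (3, 6), (5, 5)]; (0, 0) is dominated
```

Note that both pure-strategy NEs, (6, 3) and (3, 6), lie on the Pareto frontier, while the mutual-daring outcome (0, 0) does not.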
2.2.3.2 Equilibrium refinement

The NE is an important concept, but in some senses it is a relatively weak criterion, especially when the game exhibits a more complex structure. There may exist multiple equilibria according to the Nash criterion, some of which are not desirable or reasonable outcomes, and it is then necessary to narrow down, or refine, the equilibrium solutions. As a simple example, when the game has a symmetric structure, we may sometimes be more interested in symmetric equilibria, in which every player adopts the same strategy. Taking the chicken game as an example, we know that there are two pure-strategy NEs and one mixed-strategy NE, but if symmetry of the equilibrium is required, only the mixed-strategy NE will be the outcome.

Things become more involved as the game becomes more complex. In the strategic-form game, it is assumed that all players move simultaneously; however, it is possible that players move sequentially and are informed of the previous moves. This is the extensive-form game, also known as the multi-stage game or the dynamic game. When the entire history is perfectly known to all players, the game is called a game with perfect information; otherwise, it is a game with imperfect information. Let us begin with an extensive-form game with perfect information, as shown by the tree in Figure 2.2. In this game, player 1 moves first, with possible actions L and R. After observing player 1's action, player 2 can choose an action l1 or r1 if player 1 took action L, and an action l2 or r2 if player 1 took action R. The payoff pair for each possible action history is given in Figure 2.2. At the beginning of the game, player 2 has four strategies, namely l1l2, l1r2, r1l2, and r1r2. For example, the strategy l1l2 implies that player 2 will choose l1 if player 1 chooses L, and l2 if player 1 chooses R. The other strategies are defined in the same way.
This game can then be recast as an equivalent strategic-form game, given in Table 2.3, where it is easy to verify that three Nash equilibria exist, i.e., {R, l1l2}, {L, l1r2}, and {R, r1l2}.

Figure 2.2 An extensive-form game with perfect information.
Table 2.3. The equivalent strategic-form game

           l1 l2    l1 r2    r1 l2    r1 r2
    L       1,3      1,3      2,0      2,0
    R       4,2      0,1      4,2      0,1
Figure 2.3 An extensive-form game with imperfect information.
Let us take a closer look at the equilibrium {R, r1l2}. This is an equilibrium because R is the best response to a "threat" made by player 2, who pledges to choose r1 if player 1 uses L, and l2 if player 1 uses R. However, this threat is not credible, because it is l1 rather than r1 that player 2 should choose for a higher payoff after observing that player 1 has chosen L. Therefore, although {R, r1l2} is a Nash equilibrium viewed from the beginning of the game, as the game progresses the strategy is no longer optimal in the subgame where player 1 has, for whatever reason, already chosen L. A similar analysis disqualifies {L, l1r2} as a reasonable equilibrium. The remaining one, {R, l1l2}, which induces an NE in every subgame of the original game, is called a subgame perfect equilibrium. For an extensive-form game with a finite number of stages, backward induction can be employed to obtain the subgame perfect equilibria. In the previous example, player 2's credible actions are l1 and l2 at stage 2, which reduces the possible outcomes of the game to (1, 3) and (4, 2). Then player 1's best response is R, and {R, l1l2} is the subgame perfect equilibrium.

When the game has imperfect information, even the concept of subgame perfection is not strong enough. The tree in Figure 2.3 represents a game in which play terminates if player 1 chooses R, and continues to the second stage otherwise. Player 2 cannot observe whether player 1 has chosen L or M, as indicated by the dotted line in Figure 2.3. In this case, {L, l} and {R, r} are two subgame perfect equilibria, but {R, r} is not a preferable one. Although player 2's action r is a credible threat, it is based on an implausible belief. When making a decision, player 2 is uncertain whether player 1 has played L or M, and action r is reasonable only if player 2 believes that player 1 played M with probability larger than 1/2.
However, from player 1's perspective, action M is dominated by action R, since it always yields a lower payoff regardless of player 2's action. Hence, given that player 1 did not choose R, he/she must have chosen L, which makes player 2's belief implausible. By contrast, the other equilibrium, {L, l}, passes the plausible-belief criterion, and is called a sequential equilibrium.
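The backward-induction argument for the perfect-information game of Figure 2.2 can be mechanized. The sketch below encodes the leaf payoffs (as reflected in Table 2.3) and recovers the subgame perfect equilibrium {R, l1l2}.

```python
# Leaf payoffs (player 1, player 2) of the game in Figure 2.2, keyed by history.
leaves = {('L', 'l1'): (1, 3), ('L', 'r1'): (2, 0),
          ('R', 'l2'): (4, 2), ('R', 'r2'): (0, 1)}

# Stage 2: in each subgame, player 2 keeps only the credible (payoff-maximizing) action.
p2_plan = {m1: max(m2s, key=lambda m2: leaves[(m1, m2)][1])
           for m1, m2s in {'L': ('l1', 'r1'), 'R': ('l2', 'r2')}.items()}

# Stage 1: player 1 best-responds to player 2's credible plan.
p1_move = max(('L', 'R'), key=lambda m1: leaves[(m1, p2_plan[m1])][0])
print(p1_move, p2_plan)   # R {'L': 'l1', 'R': 'l2'}: the SPE {R, l1 l2}
```

Because the non-credible actions r1 and r2 are pruned at stage 2, the threats supporting the other two Nash equilibria of Table 2.3 never survive the induction.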
Furthermore, when stability is an issue, other stronger refinements, such as trembling-hand equilibria and proper equilibria, are needed. Interested readers are referred to [322] for more details.
2.2.3.3 Evolutionary equilibrium

In the above, we discussed the concept of Pareto optimality, under which no player can improve his/her utility without hurting the others' benefits. The establishment of Pareto optimality usually relies on the assumptions that the players are fully aware of the game they are playing and of the others' past actions, and that the players are rational and willing to cooperate in their moves. More often, however, players have only limited information about the other players' strategies, or are even unaware of the game being played. In addition, not all players are rational and always follow their optimal strategies. Under these circumstances, will there still exist an NE? If so, how many? And if there is more than one equilibrium, which one will be selected if some players adopt out-of-equilibrium strategies? These questions can be answered by evolutionary game theory (EGT), with the evolutionary equilibrium as its core concept.

The idea of evolutionary games was inspired by the study of ecological biology. EGT differs from classical game theory in that it focuses on the dynamics of strategy change more than on the properties of strategy equilibria. It can tell us how a rational player should behave so as to approach a best strategy against a small number of players who do not follow the best strategy, and thus EGT can better handle unpredictable player behavior. More specifically, assume that the game is played by a set of homogeneous players, who have the same form of utility function $u_i(\cdot) = u(\cdot)$ and the same action space [442], and that the players are programmed to play pure strategies at each time. If, at time $t$, the number of players adopting pure strategy $a_i$ is $p_{a_i}(t)$, then the population size is $p(t) = \sum_{a_i \in A_i} p_{a_i}(t)$, and the population share playing strategy $a_i$ is $x_{a_i}(t) = p_{a_i}(t)/p(t)$. The replicator dynamics in continuous time can be written as

$$\dot{x}_{a_i} = [u(a_i, x_{-a_i}) - \bar{u}(x)]\, x_{a_i}, \qquad (2.14)$$

where $u(a_i, x_{-a_i})$ denotes the average payoff of the players using $a_i$, and $\bar{u}(x)$ denotes the average payoff of the entire population. Equation (2.14) means that the higher the payoff strategy $a_i$ achieves, the faster the population share using $a_i$ grows: the growth rate is proportional to the difference between $a_i$'s average payoff and the average payoff of the entire population. Under the replicator dynamics, players adapt their strategies and converge to the evolutionarily stable strategy (ESS). We provide the definition of the ESS for a two-player game as follows.

Definition 2.2.5 In a symmetric strategic game with two players, $G = \langle \{1, 2\}, (A, A), (u_i)\rangle$, where $u_1(a, a') = u_2(a', a) = u(a, a')$ for some utility function $u$, an evolutionarily stable strategy (ESS) of $G$ is an action $a^* \in A$ for which $u(a, a^*) \le u(a^*, a^*)$ for every $a \in A$, and $u(a, a) < u(a^*, a)$ for every best response $a \in A$ with $a \ne a^*$.
60
Game theory for cognitive radio networks
These conditions ensure that, as long as the fraction of mutants who play a is not too large, the average payoff of a will fall short of that of a*. Since strategies with a higher payoff value are expected to propagate faster, evolution will cause the players not to use the mutation strategy a, and instead use the ESS. Therefore, in a game with a few players taking out-of-equilibrium strategies, the equilibrium after the convergence of the replicator dynamics is the ESS. Actually, the first condition in Definition 2.2.5 says that the ESS must first be a Nash equilibrium, and the second condition can be viewed as a selection criterion that ensures the stability of the equilibrium under strategy mutation.

Since EGT with the replicator dynamics characterizes the change of the population sizes, we can apply EGT to cognitive radio networking, where it can provide guidelines for upgrading existing networking protocols and determining operating parameters related to new protocols, and thus achieve reconfigurability with respect to the time-varying radio environment with stability guaranteed. In [452], an evolutionary game model is proposed for cooperative spectrum sensing, in which selfish users tend to overhear the others' sensing results and contribute less to the common task. The behavior dynamics of secondary users are studied using the replicator-dynamics equations. In the distributed implementation derived from the replicator dynamics, users update their strategies by exploring different actions at each time, adaptively learning during the strategic interaction, and approaching the best-response strategy. Another evolutionary game-theoretic approach in cognitive radio networking is considered in [333], where sensor nodes act as players and interact in randomly drawn pairs in an impulse-radio UWB sensor network. Each player adapts the value of the pulse-repetition frequency upon observing the bit error rate of the other player in the interacting pair.
It is shown that, through the interaction–learning process, a certain QoS can be guaranteed.
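The replicator dynamics (2.14) and the convergence to an ESS can be illustrated with the classic Hawk-Dove game; the payoff values below (resource V = 2, fighting cost C = 4) are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Hawk-Dove payoff matrix (illustrative values: resource V = 2, fight cost C = 4).
# A[i, j] is the payoff to a player using pure strategy i against strategy j;
# strategy 0 = Hawk, strategy 1 = Dove.
V, C = 2.0, 4.0
A = np.array([[(V - C) / 2, V],
              [0.0, V / 2]])

x = np.array([0.9, 0.1])  # initial population shares (mostly Hawks)
dt = 0.01
for _ in range(20000):
    payoffs = A @ x                   # u(a_i, x_{-a_i}) for each pure strategy
    avg = x @ payoffs                 # population-average payoff
    x += dt * (payoffs - avg) * x     # Euler step of the replicator dynamics (2.14)
    x /= x.sum()                      # keep the shares on the simplex

# For V < C the ESS is mixed, with a Hawk share of V/C = 0.5.
print(x)
```

Starting from a Hawk-heavy population, the shares drift to the mixed ESS (0.5, 0.5), matching the stability interpretation of Definition 2.2.5.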
2.2.4 How to improve an inefficient NE

2.2.4.1 Pricing

From a network designer's point of view, he/she would like to achieve a satisfying social welfare, which can be defined as maximizing the sum of all users' payoff values (the utilitarian type), or maximizing the minimum payoff value among all users' payoffs (the egalitarian type). However, this conflicts with the users' selfish nature, if they do not work toward a common goal. In a cognitive radio network consisting of selfish users competing for spectrum resources, the social optimum is usually not achieved at the NE, since selfish users are interested only in their own benefit.

The price of anarchy is an important measure with which to study the optimality of the outcome of a non-cooperative game. It is the ratio between the worst possible NE and the social optimum that could be achieved only if a central authority were available. By studying bounds on the price of anarchy, we can gain a better understanding of the NE of non-cooperative games in cognitive radio networks. The price of anarchy is extensively studied for non-cooperative spectrum-sharing games in [166], in which the channel assignment for the access points (APs) is
studied for WiFi networks. The price of anarchy in this scenario represents the ratio between the number of APs assigned spectrum channels in the worst NE and the optimal number of covered APs if a central authority assigns the channels. The analysis of the NE in spectrum sharing games is performed by considering it as a maximal coloring problem. The theoretical bounds on the price of anarchy are derived for scenarios with various numbers of spectrum buyers and sellers. One interesting finding is that the price of anarchy is unbounded in general spectrum sharing games unless certain constraints are applied, such as the distribution of the users. Similarly, in [86], the price of anarchy is studied for spectrum assignment in a local-bargaining scenario. To improve the efficiency of the NE of non-cooperative games in cognitive radio networks, pricing can be introduced when designing the non-cooperative game, since selfish network users will be guided to a more efficient operating point [394]. Intuitively, pricing can be viewed as the cost of the services or resources a network user receives, or the cost of harm the user imposes on other users, in terms of performance degradation, revenue deduction, or interference. Since the selfish network users optimize only their own performance, their aggressive behavior will degrade the performance or QoS of all the other users in the network, and hence cause the system efficiency to deteriorate. Adopting an efficient pricing mechanism will make selfish users aware of the inefficient NE, encourage them to compete for the network resources more moderately and efficiently, and bring more benefit for all network users and a higher revenue for the entire network. Linear pricing, which increases monotonically with the transmit power of a user, has been adopted widely [288] [2], because of its simplicity of implementation and reasonable physical meaning. 
In [288], a network-user hierarchy model consisting of a spectrum manager, service provider, and end users for dynamic spectrum leasing is proposed for joint power control and spectrum allocation. When optimizing their payoff, the end users trade off the achievable data rate and the spectrum cost through transmission power control. With a proper pricing term, which is defined as a linear function of the spectrum access cost and transmission power, efficient power control can be achieved, which alleviates interference between end users; moreover, the revenue of the service provider is maximized. In [2], the service provider charges each user a certain amount of payment for each unit of transmitting power on the uplink channel in wideband cognitive radio networks for revenue maximization, while ensuring incentive compatibility for the users. In [450], the authors further point out that most existing pricing techniques, e.g., a linear pricing function with a fixed pricing factor for all users, can usually improve the equilibrium by pushing it closer to the Pareto-optimal frontier. However, they might not be (Pareto) optimal, and not suitable for distributed implementation, because they require global information. Therefore, a user-dependent linear pricing function that drives the NE close to the Pareto-optimal frontier is proposed [450], through analysis of the Karush–Kuhn–Tucker conditions. The optimal pricing factor for a link depends only on its neighborhood information, so the proposed spectrum management can be implemented in a distributed way.
More sophisticated nonlinear pricing functions can also be used, according to the specific problem setting and requirements. In an underlay spectrum-sharing problem [440], where secondary users transmit in the licensed spectrum concurrently with primary users, the secondary users' transmission is constrained by the interference-temperature limit (ITL),

\sum_{j=1}^{N} p_j h_{mj} \le Q_m^{\max},    (2.15)

where p_j denotes secondary user j's transmit power, h_{mj} denotes the channel gain from secondary user j to primary user m, and Q_m^{\max} denotes the ITL of primary user m. Thus, an exponential part, e^{\omega_m}, with \omega_m = \delta \bigl( \sum_{j=1}^{N} p_j h_{mj} - Q_m^{\max} \bigr), is introduced as a factor into the pricing function, together with another part representing the interference with other secondary users. When the ITL is violated, the utility function decreases dramatically. In this way, efficient secondary spectrum sharing is achieved, with sufficient protection of the primary transmission.

In the spectrum-sharing problem considered in [162], each wireless transmitter selects a single channel from multiple available channels, together with the transmission power. To mitigate the effects of interference externalities, users should exchange information that reflects interference levels. Such information is defined by interference "prices,"

\pi_k^{\varphi(k)} = \frac{\partial u_k \bigl( \gamma_k^{\varphi(k)}(\mathbf{p}^{\varphi(k)}) \bigr)}{\partial \sum_{j \ne k} p_j^{\varphi(k)} h_{jk}^{\varphi(k)}},    (2.16)

where u_k(\gamma_k^{\varphi(k)}(\mathbf{p}^{\varphi(k)})) represents user k's utility function, which is a concave and increasing function of the received SINR \gamma_k^{\varphi(k)}(\mathbf{p}^{\varphi(k)}), and \sum_{j \ne k} p_j^{\varphi(k)} h_{jk}^{\varphi(k)} represents the interference at receiver k. So the interference price in (2.16) indicates the marginal loss/increase in user k's utility if its received interference is increased/decreased by one unit. With this definition of an interference price, user k's new utility becomes the net benefit

u_k \bigl( \gamma_k^{\varphi(k)}(\mathbf{p}^{\varphi(k)}) \bigr) - p_k^{\varphi(k)} \sum_{j \ne k} \pi_j^{\varphi(k)} h_{kj}^{\varphi(k)}.    (2.17)
It is shown in [162] that the proposed algorithm considering an interference price always outperforms the heuristic algorithm with which each user just picks the best channel without exchanging interference prices, and the iterative water-filling algorithm with which users do not exchange any information.
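The prices in (2.16) depend on the chosen utility function. As an illustration only, the sketch below assumes the common concave utility u_k = log(1 + SINR_k) (an assumption, not stated in the text) and made-up powers and channel gains, computing each user's announced price and a net benefit in the spirit of (2.17).

```python
import math

# Illustrative-only sketch of interference prices in the spirit of (2.16) and
# (2.17), assuming the concave utility u_k = log(1 + SINR_k); the powers and
# channel gains below are made-up numbers, not from [162].
n0 = 0.1                         # noise power at each receiver
h = [[1.0, 0.2],
     [0.3, 1.0]]                 # h[j][k]: gain from transmitter j to receiver k
p = [0.5, 0.8]                   # transmit powers on a shared channel

def interference(k):
    return sum(p[j] * h[j][k] for j in range(2) if j != k)

def utility(k):
    return math.log(1 + p[k] * h[k][k] / (n0 + interference(k)))

def price(k):
    # Magnitude of d u_k / d(interference at receiver k), announced as pi_k.
    I, s = interference(k), p[k] * h[k][k]
    return s / ((n0 + I) * (n0 + I + s))

# Net benefit: own utility minus the priced harm caused at the other receiver.
for k in range(2):
    cost = p[k] * sum(price(j) * h[k][j] for j in range(2) if j != k)
    print(k, round(utility(k), 3), round(utility(k) - cost, 3))
```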
2.2.4.2 Repeated games and folk theorems

In order to model and analyze long-term interactions among players, we use the repeated-game model, in which the game is played for multiple rounds. A repeated game is a special form of an extensive-form game in which each stage is a repetition
of the same strategic-form game. The number of rounds may be finite or infinite, but usually the infinite case is the more interesting one. Because players care not only about the current payoff but also about future payoffs, and a player's current behavior can affect the other players' future behavior, cooperation and mutual trust among players can be established.

Definition 2.2.6 Let ⟨N, (A_i), (u_i)⟩ be a strategic game. A δ-discounted infinitely repeated game of ⟨N, (A_i), (u_i)⟩ (0 < δ < 1) is an extensive-form game with perfect information and simultaneous moves in which

• the set of players is N;
• for every value of t, the chosen action a^t may depend on the history (a^1, a^2, ..., a^{t-1});
• the set of actions available to any player i is A_i, regardless of the history;
• the payoff function for player i is the discounted average of the immediate payoffs from each round of the repeated game,

u_i(a^1, a^2, \ldots, a^t, \ldots) = (1 - \delta) \sum_{t=1}^{\infty} \delta^{t-1} u_i(a^t).

Note that the discount factor δ measures how much the players value future payoffs relative to the current payoff. The larger the value of δ, the more patient the players are. There are alternative ways to define an infinitely repeated game without the use of a discount factor, such as the limit-of-means infinitely repeated game and the overtaking infinitely repeated game, but they are rarely applied to cognitive radio networks.

The so-called "grim-trigger" strategy is a common approach to stimulating cooperation among selfish players. Initially, all players are in the cooperative stage, and they continue to cooperate with each other until someone deviates from cooperation. Then the game jumps to the punishment stage, in which the deviating player is punished by the other peers, and there is no cooperation forever after.
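Whether the grim-trigger threat actually deters deviation depends on the discount factor. A minimal numerical sketch, using a hypothetical prisoner's-dilemma stage game (cooperation pays 3, a one-shot deviation pays 5, and the punishment payoff is 1), evaluates the discounted average payoff defined above:

```python
def discounted_avg(head, delta, tail):
    """Discounted average (1 - delta) * sum_t delta^(t-1) * u(a^t) for a payoff
    stream that starts with the values in `head` and then pays `tail` forever."""
    total = sum((1 - delta) * delta**t * u for t, u in enumerate(head))
    return total + delta**len(head) * tail  # closed form for the constant tail

# Hypothetical prisoner's-dilemma payoffs: cooperate forever vs. deviate once
# and then receive the punishment payoff forever under the grim trigger.
COOP, TEMPT, PUNISH = 3.0, 5.0, 1.0

for delta in (0.4, 0.5, 0.6):
    stay = discounted_avg([], delta, tail=COOP)            # always 3.0
    deviate = discounted_avg([TEMPT], delta, tail=PUNISH)  # (1-d)*5 + d*1
    print(delta, stay, deviate, stay >= deviate)
```

For these numbers cooperation is sustainable exactly when δ ≥ 0.5, illustrating why a sufficiently patient player (large δ) will not deviate.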
A less harsh alternative, known as the "punish-and-forgive" strategy, is similar except that the punishment is of limited length: the deviation is forgiven and cooperation resumes after a long enough punishment. Because cooperation is often more beneficial, the threat of punishment will prevent players from deviating, and hence cooperation is maintained. This is formally established by the folk theorems, a family of theorems characterizing equilibria in repeated games. To begin with, we give some definitions.

Definition 2.2.7 Player i's minmax payoff in a strategic game ⟨N, (A_i), (u_i)⟩ is

\min_{a_{-i} \in A_{-i}} \max_{a_i \in A_i} u_i(a_i, a_{-i}).    (2.18)
The minmax payoff is the lowest payoff that the other players can force upon player i, and can be used as the threat of punishment. Another option is the threat to use the NE as punishment. It is known that the NE gives at least the minmax payoff to any player, so the Nash threat is usually weaker than the minmax threat.
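The computation in (2.18) is a one-liner for small games; the 2×2 payoff table below is a made-up example, not from the text.

```python
# Pure-strategy minmax payoff (2.18) for player 1 in a hypothetical 2x2 game.
# U1[i][j] is player 1's payoff when she plays action i and player 2 plays j.
U1 = [[3, 0],
      [5, 1]]

# Player 2 picks the column that minimizes player 1's best reply to it.
minmax = min(max(U1[i][j] for i in range(2)) for j in range(2))
print(minmax)  # → 1
```

Here the harshest punishment player 2 can force on player 1 is a payoff of 1 (by playing the second column), no matter how player 1 responds.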
Definition 2.2.8 A vector v ∈ R^N is a payoff profile of ⟨N, (A_i), (u_i)⟩ if there is an outcome a ∈ A = ×_{i∈N} A_i such that v = u(a). A vector v ∈ R^N is referred to as a feasible payoff profile of the game ⟨N, (A_i), (u_i)⟩ if it is a convex combination of payoff profiles of outcomes in A. Denote the minmax payoff of player i by v_i^0. A payoff profile v is (strictly) individually rational if v_i > v_i^0 for all i ∈ N.

Depending on the type of the equilibrium (the NE or the subgame perfect equilibrium), the length of punishment (grim-trigger or punish-and-forgive), the punishment payoff (the minmax threat or the Nash threat), and the criterion of the infinitely repeated game (δ-discounted or others), the folk theorems vary slightly from case to case. Here we present just one of them, namely the perfect grim-trigger-strategy folk theorem with Nash threats under the discounting criterion. The proof and other variants can be found in [358] and [322].

Theorem 2.2.5 For any feasible and strictly individually rational payoff profile v such that v_i > v_i^N for all i ∈ N, with v_i^N being the payoff of the stage-game Nash equilibrium, there exists \underline{\delta} ∈ (0, 1) such that, for all δ ∈ [\underline{\delta}, 1), there exists a repeated-game strategy profile which is a subgame perfect equilibrium of the repeated game and yields the expected payoff profile v.

In Figure 2.4, we illustrate the feasible utility region of a repeated game with two players, as can be envisioned in a simplified power-control problem for a Gaussian interference channel. Point D represents the case in which both users transmit with very high power levels and suffer from severe interference, points A and C represent the cases in which one user transmits with high power while the other uses low transmit power, and point B represents the case in which the two users cooperate by transmitting with low power levels to alleviate interference and improve utility.
If the game is played for only one round, the NE corresponds to point D and thus is very inefficient; however, if the game is played for multiple rounds, any point lying in the convex hull (the shaded area in Figure 2.4) is achievable, according to the folk theorems. In other words, the efficiency of the game can be greatly improved.

Figure 2.4 The feasible utility region of a repeated two-player game.
Other than the grim-trigger strategy and the punish-and-forgive strategy, "tit-for-tat" and "fictitious play" are also popular strategies in a repeated game. Both of them involve learning from opponents. When using the "tit-for-tat" strategy, a player chooses an action on the basis of the outcome of the very last stage of the game; for example, he/she decides to cooperate only when all the other players cooperated last time. If the "fictitious play" strategy is used, a player learns the empirical frequency of each action of the other players from the entire history of outcomes, and then chooses the best strategy accordingly, assuming that the opponents are playing stationary strategies.

Examples of cooperation enforcement and adaptive learning in repeated games can be found in the context of cognitive radio networks, e.g., [110] [466] [431] [409] [448] and so on. For instance, it is shown in [110] that any achievable rate in a Gaussian interference channel, in which multiple unlicensed users share the same band, can be obtained by piece-wise-constant power allocations in that band with the number of segments at most twice the number of users. The paper also shows that only pure-strategy Nash equilibria exist, and that, under certain circumstances, spreading power evenly over the whole band is the unique NE, which is often inefficient. A set of Pareto-optimal operating points is then made achievable by repeated-game modeling and punishment-based strategies. Extending this work to time-varying channels, [466] also applies the repeated-game framework to achieve better performance than the inefficient NE. In this paper, users exchange instantaneous channel-state information and cooperatively share the spectrum using the "punish-and-forgive" strategy.
2.2.4.3 Correlated equilibrium

In deriving the NE of a game, players are assumed to choose their strategies independently of the others' decisions. When they no longer do so, for instance by following the recommendation of a third party [170] [186] [286] [174], the efficiency of the game outcome can be significantly improved. In the following, we discuss the concept of correlated equilibrium, which arises when the recommended strategy further satisfies a certain property.

Correlated equilibrium is a more general equilibrium concept than the NE, in which players can observe the value of a public signal and choose their actions accordingly. When no player would deviate from the recommended strategy, given that the others also adopt the recommendation, the resulting outcome of the game is a correlated equilibrium.

Take the chicken game shown in Table 2.2 as an example. Assume now that both players observe a public signal from a third party, who draws one of three cards labeled (D,C), (C,D), and (C,C) uniformly (with probability 1/3 for each card). The players choose their actions according to the one assigned by the card drawn by the third party. If a player is assigned D, he/she will not deviate, since the payoff of 6 is the highest, assuming that the other player also plays the assigned action. If action C is assigned, either of the cards (C,D) and (C,C) may have been drawn, with equal probability. Thus, the expected payoff of deviating to action D is 0 · (1/2) + 6 · (1/2) = 3, while the expected payoff of action C is 3 · (1/2) + 5 · (1/2) = 4. So the player will not deviate from action
C, either. Clearly, no player has an incentive to deviate, and the resulting outcome forms a correlated equilibrium. Note that, in this equilibrium, the expected payoff of a player is 14/3, which is greater than that of the pure-strategy NE or the mixed-strategy NE, as can be calculated as in Section 2.2.1. Hence, a correlated equilibrium can be more system efficient than an NE. Having explained the chicken-game example, we now give the definition of correlated equilibrium [322].

Definition 2.2.9 A correlated equilibrium of a strategic game ⟨N, (A_i), (u_i)⟩ consists of

• a finite probability space (Ω, π), where Ω is a set of states and π is a probability measure on Ω,
• an information partition \mathcal{P}_i of Ω, for all i ∈ N, and
• a function σ_i: Ω → A_i, which represents player i's strategy and maps an observed state to an action, with σ_i(ω) = σ_i(ω') whenever ω, ω' ∈ P_i for some cell P_i ∈ \mathcal{P}_i,

such that

\sum_{\omega \in \Omega} \pi(\omega) u_i(\sigma_{-i}(\omega), \sigma_i(\omega)) \ge \sum_{\omega \in \Omega} \pi(\omega) u_i(\sigma_{-i}(\omega), \tau_i(\omega)),    (2.19)
for all i ∈ N and any strategy function τ_i(·).

From this definition, we know that the action σ_i(ω) in equilibrium, where state ω occurs with a positive probability, is optimal to player i, given that the other players' strategies also follow the correlated equilibrium. Since the equilibrium points are defined by the set of linear constraints in (2.19), there can exist multiple correlated-equilibrium points of a game. For instance, in the chicken game considered above, drawing the three cards (D,C), (C,D), and (C,C) with probabilities 0.2, 0.2, and 0.6, respectively, is another equilibrium. Moreover, the set of mixed-strategy NE is a subset of the set of correlated equilibria.

In order to adjust their strategies and converge to the set of correlated equilibria, players can track a set of "regret" values for the strategy update [178]. The regret value can be defined by

R_i^T(a_i, a_i') = \max\left\{ \frac{1}{T} \sum_{t \le T} \left[ u_i^t(a_i', a_{-i}) - u_i^t(a_i, a_{-i}) \right], 0 \right\},    (2.20)

where u_i^t(a_i', a_{-i}) represents the payoff that player i would have obtained at time t by taking action a_i' against the other players taking action a_{-i}. Therefore, the regret value denotes the increase in average payoff that player i would have obtained if she/he had adopted action a_i' every time in the past instead of action a_i. If the difference is smaller than 0, meaning that adopting action a_i brings a higher average payoff, then there is no regret, and thus the regret value is lower bounded by 0. According to the regret values, players can update their strategies by adjusting the probabilities of taking the various actions. If player i's action at time t is a_i^t = a_i, then the probability of taking an action a_i' ≠ a_i at time t + 1 is updated by

p_i^{t+1}(a_i') = \frac{1}{\mu} R_i^t(a_i, a_i');    (2.21)

otherwise, it is updated by

p_i^{t+1}(a_i) = 1 - \sum_{a_i' \ne a_i} p_i^{t+1}(a_i').    (2.22)
If all players learn their strategies according to (2.21) and (2.22), then their strategies will converge to the set of correlated equilibria almost surely, as time goes to infinity [178]. The concept of correlated equilibrium and regret learning has been used to design dynamic spectrum access protocols in [186] [286], where secondary users compete for access to spectrum white space. The users’ utility function is defined as the average throughput in [186], and in [286] a term representing performance degradation due to excess access and collisions is further included. Since the common history observed by all users can serve as a natural coordination device, users can pick their actions on the basis of observations about the past actions and payoff values, and achieve better coordination with a higher performance.
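The incentive check behind the chicken-game device, and the 14/3 expected payoff, can be reproduced mechanically; the payoff entries below are read off from the example in the text.

```python
from fractions import Fraction

# Row player's chicken-game payoffs, as used in the example in the text:
# u[(own action, other's action)], with 'D' = dare and 'C' = chicken.
u = {('D', 'D'): 0, ('D', 'C'): 6, ('C', 'D'): 3, ('C', 'C'): 5}

# The third party draws the cards (D,C), (C,D), (C,C) uniformly at random.
cards = [('D', 'C'), ('C', 'D'), ('C', 'C')]

# Incentive check in the spirit of (2.19): conditioned on the recommended
# action, obeying must pay at least as much as any deviation.
for rec in ('D', 'C'):
    consistent = [card for card in cards if card[0] == rec]
    obey = sum(u[(rec, other)] for _, other in consistent)
    for dev in ('D', 'C'):
        alt = sum(u[(dev, other)] for _, other in consistent)
        assert obey >= alt

# Expected payoff of following the recommendations.
expected = sum(Fraction(1, 3) * u[card] for card in cards)
print(expected)  # → 14/3
```

Both incentive constraints hold, confirming that the uniform card-drawing device is a correlated equilibrium with expected payoff 14/3 per player.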
2.3 Economic games, auction games, and mechanism design

Since game theory studies interaction between rational and intelligent players, it can be applied to the economic world to deal with how people interact with each other in the market. The marriage of game theory and economic models yields interesting games and fruitful theoretical results in microeconomics and auction theory. On the one hand, these can be regarded as applied branches of game theory that build on key game-theoretic concepts such as rationality and equilibria. Often, the players are sellers and buyers in the market (firms, individuals, and so on), the payoff functions are defined as the utility or revenue that players want to maximize, and the equilibrium strategies are of considerable interest. On the other hand, they are distinguished from fundamental game theory, not only because additional market constraints such as supply and demand curves and auction rules give insight into market structures, but also because they are fully developed fields with their own research concerns. In fact, research on the Cournot model, one of the market-equilibrium models, dates back to well before game theory came into existence as a field of its own. Hence, we devote a separate section to these economic games, so as to respect their distinct character and to highlight their intensive use in cognitive radio networks.

The application of economic games to cognitive radio networks is relevant for the following reasons. First, economic models are suitable for the scenario of the secondary spectrum market, where primary users are allowed to sell unused spectrum rights
to secondary users. Primary users, as sellers, have the incentive to trade temporarily unused spectrum for monetary gains, while secondary users, as buyers, may want to pay for spectrum resources for data transmission. The deal is made through pricing, auctions, or other means. Second, the applicability of these economic games is not confined to scenarios with explicit buyers and sellers, and the ideas behind them can be extended to cognitive radio scenarios other than secondary spectrum markets. One example is the Stackelberg game, which originally described an economic model but has been generalized to a strategic game consisting of a leader and a follower. More details and other examples will be discussed in this section. Third, because cognitive radio goes far beyond technology, and its success will rely heavily on the combination of technology, policy, and markets, it is extremely important to understand cognitive radio networks from the economic perspective and to develop effective procedures (e.g., auction mechanisms) to regulate the spectrum market.

To highlight the underlying economic features in these games, we will use p and q to refer to prices and quantities in this section. In general, p and q are interrelated in a given market, and their relation can be modeled by the demand curve and the supply curve. For example, at a given market price p, the amount of a good that buyers are willing to buy is q = D(p), whereas the amount that sellers are willing to sell is q = S(p). The functions D(·) and S(·) are known as the demand function and the supply function. Moreover, if the quantity q is fixed in the market, the price that buyers are willing to pay can be derived by evaluating the inverse demand function, i.e., p = D^{-1}(q); similarly, the price charged by sellers is given by the inverse supply function, i.e., p = S^{-1}(q).
Often, the demand function is a non-increasing function of p and the supply function is a non-decreasing function of p.
2.3.1 Oligopolistic competition

When the market is fully competitive, the market equilibrium, denoted by (p*, q*), is the intersection of the demand curve and the supply curve,

q^* = D(p^*) \quad \text{and} \quad q^* = S(p^*).    (2.23)

The other extreme is monopoly, in which only one firm has full control over the market for a product. Assuming that the cost associated with the quantity q is C(q), the firm can maximize its profit, which is revenue minus cost,

u(q) = q D^{-1}(q) - C(q),    (2.24)

by applying the first-order condition

\frac{\partial u(q)}{\partial q} = D^{-1}(q) + q \frac{\partial D^{-1}(q)}{\partial q} - \frac{\partial C(q)}{\partial q} = 0.    (2.25)
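For a concrete instance of (2.24)-(2.25), one can assume (hypothetically) linear inverse demand p = a − bq and linear cost C(q) = cq; the first-order condition then has a closed-form solution.

```python
# Worked instance of the monopoly FOC (2.25), assuming (hypothetically) linear
# inverse demand p = a - b*q and linear cost C(q) = c*q, so that
# u(q) = q*(a - b*q) - c*q and (2.25) reads a - 2*b*q - c = 0.
a, b, c = 10.0, 1.0, 2.0

q_star = (a - c) / (2 * b)   # closed-form solution of the first-order condition
p_star = a - b * q_star      # monopoly price from the inverse demand
profit = q_star * p_star - c * q_star
print(q_star, p_star, profit)  # → 4.0 6.0 16.0
```

Perturbing q slightly in either direction lowers the profit, confirming that the first-order condition picks out the maximum.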
The situation lying between full competition and no competition (monopoly), which is called oligopoly, is more complicated and interesting; in economics it is defined as a market with only a few firms and with substantial barriers to entry. Because the number of firms is limited, each one can influence the price and hence affect the other firms;
for example, their strategies are to decide the quantity or price of goods supplied to the market. The interaction and competition between firms can be modeled well by game theory, and several models were proposed long ago. These models share common attributes, including price-quantity relations, profit-maximizing goals, and first-order optimality, but they differ in terms of actions (quantities vs. prices), structures (simultaneous moves vs. sequential moves), or forms (competition vs. cooperation). In what follows, we give a brief summary of these games. We assume that there are only two competing firms (i.e., a duopoly) for convenience, but it is straightforward to generalize to the scenario with multiple firms.

In the Cournot game, the oligopoly firms choose their quantities q_1, q_2 independently and simultaneously. Because the market price depends on the total quantity, each firm's action directly affects the other's profit. The market price is determined by the inverse demand function p = D^{-1}(q_1 + q_2). Assume that the cost associated with a production quantity q_i is C_i(q_i), i = 1, 2, for the two firms. The utility function of each firm is revenue minus cost,

u_i(q_i) = q_i D^{-1}(q_1 + q_2) - C_i(q_i), \quad i = 1, 2.    (2.26)

Hence, the equilibrium of this game, (q_1^*, q_2^*), is the solution to the following equations derived from the first-order conditions:

\frac{\partial u_1(q_1)}{\partial q_1} = D^{-1}(q_1 + q_2) + q_1 \frac{\partial D^{-1}(q_1 + q_2)}{\partial q_1} - \frac{\partial C_1(q_1)}{\partial q_1} = 0,

\frac{\partial u_2(q_2)}{\partial q_2} = D^{-1}(q_1 + q_2) + q_2 \frac{\partial D^{-1}(q_1 + q_2)}{\partial q_2} - \frac{\partial C_2(q_2)}{\partial q_2} = 0.    (2.27)
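With a hypothetical linear model (inverse demand p = a − b(q1 + q2) and linear costs, an assumption made here for illustration), the system (2.27) can be solved in closed form.

```python
# Closed-form Cournot equilibrium from (2.27), assuming (hypothetically) linear
# inverse demand p = a - b*(q1 + q2) and linear costs C_i(q_i) = c_i*q_i. The
# two first-order conditions become
#   a - b*(2*q1 + q2) - c1 = 0  and  a - b*(q1 + 2*q2) - c2 = 0.
a, b = 12.0, 1.0
c1, c2 = 2.0, 4.0

q1 = (a - 2 * c1 + c2) / (3 * b)   # solving the 2x2 linear system
q2 = (a - 2 * c2 + c1) / (3 * b)
p = a - b * (q1 + q2)
print(q1, q2, p)  # → 4.0 2.0 6.0
```

Note that the lower-cost firm produces more at equilibrium, as one would expect.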
In the Bertrand game, the firms also decide their actions independently and simultaneously, but their decisions are the prices p_1 and p_2, and their production capacity is unlimited. Although it looks like the Cournot game, the outcome is significantly different. Since the firm with the lower price will occupy the entire market, the firms will keep reducing their prices until they hit a bottom line with zero profit; hence the equilibrium of this game is trivial. A modification of the game is to assume that each firm produces a somewhat differentiated product. The demand function for product 1 is D_1(p_1, p_2), which is a decreasing function of p_1 and often an increasing function of p_2. Similarly, we can define D_2(p_1, p_2). Then the equilibrium prices can be found through the first-order conditions that maximize the profits given by

u_i(p_i) = p_i D_i(p_1, p_2) - C_i(D_i(p_1, p_2)), \quad i = 1, 2.    (2.28)
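A quick instance of (2.28) under an assumed linear differentiated-demand model (not from the text) with zero production cost:

```python
# Bertrand duopoly with differentiated products, under an assumed linear demand
# D_i(p_i, p_j) = a - p_i + g*p_j with 0 < g < 1 and zero production cost.
# The FOC from (2.28) is a - 2*p_i + g*p_j = 0, so by symmetry the equilibrium
# price is p* = a / (2 - g).
a, g = 10.0, 0.5

p_sym = a / (2 - g)             # symmetric equilibrium price
demand = a - p_sym + g * p_sym  # quantity sold by each firm at equilibrium
profit = p_sym * demand         # zero cost assumed
print(p_sym, demand, profit)
```

Because the products are only partial substitutes (g < 1), the equilibrium price stays strictly positive, avoiding the zero-profit outcome of the undifferentiated Bertrand game.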
In the Stackelberg game, the firms still choose their quantities q_1, q_2 as in the Cournot game, but the two firms make their decisions sequentially rather than simultaneously. The firm that moves first is called the leader, and the other is called the follower. Without loss of generality, we assume that firm 1 is the leader in this game. Because firm 2 takes its action after firm 1 has announced the quantity q_1, the best response of firm 2 can be derived from

\frac{\partial u_2(q_2)}{\partial q_2} = D^{-1}(q_1 + q_2) + q_2 \frac{\partial D^{-1}(q_1 + q_2)}{\partial q_2} - \frac{\partial C_2(q_2)}{\partial q_2} = 0,    (2.29)
which is essentially a function of q_1. We denote it by q_2^*(q_1) to emphasize that it is firm 2's best response to the announced quantity q_1. Knowing that firm 2 will choose the quantity q_2^*(q_1), firm 1 can maximize its profit by setting q_1 according to the first-order condition

\frac{\partial u_1(q_1)}{\partial q_1} = \frac{\partial \left[ q_1 D^{-1}\bigl(q_1 + q_2^*(q_1)\bigr) - C_1(q_1) \right]}{\partial q_1} = 0.    (2.30)

This process is known as backward induction. If firm 1 chose the Cournot-equilibrium quantity, the best response of firm 2 would also be the Cournot-equilibrium quantity. Because the optimal quantity from (2.30) works better than (or at least as well as) the Cournot equilibrium, the leader gains an advantage from the asymmetric structure.

In the cartel-maintenance game, things are quite different, because the firms no longer compete with each other but instead cooperate. In general, they can reduce output, which leads to higher prices and higher profits for each firm. One example is OPEC, which manipulates the stability of the international oil price. Cartel maintenance, in order to enforce cooperation among selfish firms, can be modeled as a repeated game, as introduced earlier. From the firms' perspective, cooperation in the form of a cartel reduces competition and improves their profits, but in reality such action is harmful to economic systems and hence is forbidden by antitrust laws in many countries.

In what follows, we show some examples of how these microeconomic concepts inspire research in cognitive radio networks. Depending on the assumptions and structures of spectrum markets, different models can be applied. The spectrum market in [307] consists of one primary user and multiple secondary users who compete for spectrum resources. Secondary user i requests a quantity q_i for the allocated spectrum size, and the price is determined by the inverse supply function S^{-1}\left(\sum_{i \in N} q_i\right). This is essentially a Cournot game, but the players in the game are buyers instead of the sellers of the original setting.
With the inverse supply function in the paper specified as

$$S^{-1}\!\left(\sum_{i\in\mathcal{N}} q_i\right) = c_1 + c_2 \left(\sum_{i\in\mathcal{N}} q_i\right)^{c_3}, \qquad (2.31)$$

where c_1, c_2, and c_3 are non-negative constants and c_3 ≥ 1, the payoff is defined as

$$u_i(q_i) = u_i^0 q_i - q_i \left(c_1 + c_2 \Big(\sum_{j\in\mathcal{N}} q_j\Big)^{c_3}\right), \qquad (2.32)$$

where u_i^0 is the effective revenue per unit bandwidth for user i. The equilibrium can be derived from the first-order condition, i.e.,

$$u_i^0 - c_1 - c_2 \Big(\sum_{j\in\mathcal{N}} q_j^*\Big)^{c_3} - c_2 c_3\, q_i^* \Big(\sum_{j\in\mathcal{N}} q_j^*\Big)^{c_3 - 1} = 0. \qquad (2.33)$$
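Under a symmetric setting in which every user has the same effective revenue u_i^0 = u0, the first-order condition (2.33) reduces to a one-dimensional root-finding problem in the common quantity q. The sketch below, with arbitrary illustrative parameters (not from the text), solves it by bisection; its left-hand side is decreasing in q, so bisection applies.

```python
# Symmetric equilibrium of the spectrum market governed by (2.33),
# assuming identical users (u_i^0 = u0 for all i); the parameter values
# below are illustrative, not from the text.  With q_i* = q for all i,
# (2.33) becomes
#   u0 - c1 - c2*(N*q)**c3 - c2*c3*q*(N*q)**(c3 - 1) = 0.
u0, c1, c2, c3, N = 10.0, 1.0, 1.0, 1.0, 4

def foc(q):
    # left-hand side of the symmetric first-order condition
    return u0 - c1 - c2 * (N * q) ** c3 - c2 * c3 * q * (N * q) ** (c3 - 1)

lo, hi = 0.0, u0            # foc(lo) > 0 and foc(hi) < 0 for these parameters
for _ in range(100):        # bisection on the first-order condition
    mid = (lo + hi) / 2
    if foc(mid) > 0:
        lo = mid
    else:
        hi = mid

q_star = (lo + hi) / 2
print(round(q_star, 4))     # for c3 = 1 this is 9/5 = 1.8
```

For c3 = 1 the condition is linear, 9 − 5q = 0 here, so the bisection output can be checked by hand; the same code handles c3 > 1, where no closed form is available.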
2.3 Economic games, auction games, and mechanism design
Another spectrum market proposed in [306] consists of multiple competing primary users and one secondary-user network. This game falls into the category of Bertrand games, since the primary users adjust the price of spectrum resources. To avoid the triviality that we mentioned in the introduction of the Bertrand game, the spectrum resources cannot be identical, and the authors adopt a commonly used quadratic utility function [405] for the secondary-user network,

$$u = \sum_{i\in\mathcal{N}} u_i^0 q_i - \frac{1}{2}\left(\sum_{i\in\mathcal{N}} q_i^2 + 2\nu \sum_{i\neq j} q_i q_j\right) - \sum_{i\in\mathcal{N}} p_i q_i, \qquad (2.34)$$

leading to the linear demand functions

$$q_i = D_i(p) = \frac{\big(1 + (N-2)\nu\big)\big(u_i^0 - p_i\big) - \nu \sum_{j\neq i}\big(u_j^0 - p_j\big)}{(1-\nu)\big(1 + (N-1)\nu\big)}. \qquad (2.35)$$
Here, pi and qi are the price and quantity purchased from primary user i, u i0 is the effective revenue per unit bandwidth, and the parameter ν(−1 ≤ ν ≤ 1) reflects the cross elasticity of demand among different spectrum resources. Specifically, ν > 0 implies substitute products, that is, one spectrum band can be used in place of another, while ν < 0 implies complementary products, that is, one band has to be used together with another (like uplink and downlink). The value of ν measures the degrees of substitution or complementariness. In this model, the revenue is defined as the sum of monetary gains collected from the secondary network and the transmission gains of primary services, whereas the cost is defined as the performance loss to primary services due to spectrum transactions. Then, the equilibrium pricing is derived from the first-order condition. The structure of spectrum markets could be more complicated. For instance, [217] proposes a hierarchical model in which there are two levels of markets: in the upper level, a few wireless service providers buy some spectrum bands from spectrum holders, and in the lower level, they sell these bands to end users. Wireless service providers are the players in this game who decide not only the quantity bought from spectrum holders but also the price charged to end users. Therefore, this game is actually a combination of the Cournot game in the upper level with the inverse supply function (2.31) and the Bertrand game in the lower level with the demand function (2.35). The two levels are coupled in that the quantity sold to end users cannot exceed the quantity bought from license holders. The authors discuss four possible cases in the lower-level game due to quantity limitation, and conclude that only one equilibrium exists in the whole game. Another hierarchical market is proposed in [288], which considers both channel allocation and power allocation. 
In this model, the spectrum holder takes control of the upper-level market, and hence the market fits the monopoly model. In the lower-level game, service providers adjust the price of resources in the market, but the demand from end users comes from the equilibrium of a non-cooperative power-control game.

Just like in other non-cooperative games, the Nash equilibria in these games are often inefficient because of the competition among players. The difference between the equilibrium utility and the ideal maximum utility is known as the price of anarchy, which has been
introduced in Section 2.2.4.1. In [306], [307], and [217], the price of anarchy of the proposed spectrum market has been analyzed through theoretical derivation or demonstrated by simulation results. In addition, [306] and [171] show that the efficiency can be improved by enforcing cooperation among users, that is, by establishing a cartel.

In game models, it is common to assume that all players have full knowledge about each other. However, that is not always true in a realistic setting such as a cognitive radio network. For instance, one player may know nothing about other players' profits or current strategies. Therefore, to make those games implementable in spectrum markets, it is crucial to involve learning processes. The learning processes in [307], [306], [217], and [288] can be roughly classified into two categories. When information about strategies is available, players update their strategies with the best response to the other players' current strategies,

$$a_i(t+1) = B\big(a_{-i}(t)\big), \qquad (2.36)$$
where the action a may refer to the quantity or the price, depending on the market model. When only local information is available, a gradient-based update rule can be applied, i.e.,

$$a_i(t+1) = a_i(t) + \varepsilon\, \frac{\partial u_i(a(t))}{\partial a_i}, \qquad (2.37)$$
where ε is the learning rate and the partial derivative can be approximated from local observations. The convergence of the learning process has been analyzed using the Jacobian matrix; see, e.g., [306].

Although it was originally formulated as a game between two sellers of the same product, the Stackelberg game in a broad sense can refer to any two-stage game in which one player moves after the other has made a decision. The problem can be formulated as

$$\max_{a_1\in A_1,\, a_2\in A_2} u_1(a_1, a_2), \quad \text{s.t.}\;\; a_2 \in \arg\max_{a_2'\in A_2} u_2(a_1, a_2'), \qquad (2.38)$$
where player 1 is the leader and player 2 is the follower. Similarly to the Stackelberg game in an oligopoly market, the general Stackelberg game can also be solved by backward induction. A few applications to cognitive radio networks can be found in [401], [511], [109], and [14], among others. For instance, in [401], the Stackelberg game is employed to model and analyze the cooperation between a primary user and several secondary users, where the primary user trades some spectrum usage to secondary users in exchange for cooperative communications. Specifically, the primary user can choose to transmit for the entire time slot on its own, or to ask for the secondary users' cooperation by dividing one time slot into three fractions parameterized by τ1 and τ2 (0 ≤ τ1, τ2 ≤ 1). During the first (1 − τ1) fraction of the slot, the primary transmitter sends data to the secondary users; they then form a distributed antenna array and cooperatively transmit the information to the primary receiver during the following τ1τ2 fraction of the slot. As a reward, the secondary users involved in the cooperative communications are granted spectrum rights during
the remaining τ1(1 − τ2) fraction of the slot. The primary user chooses its strategy, including τ1, τ2, and the set of secondary users for cooperation, and then the selected secondary users choose their transmission powers according to the primary user's strategy. As the leader of the game, the primary user is aware of the secondary users' best response to any given strategy, and hence is able to choose the optimal strategy that maximizes its payoff. The cooperation structure in [511] is similar; the major difference is that the secondary users pay for spectrum opportunities in addition to relaying cooperative transmissions for the primary user. The implementation protocol and the utility functions change, but the underlying Stackelberg game remains the same.
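When both action sets in (2.38) are finite, backward induction can be carried out by plain enumeration: compute the follower's best response to every leader action, then let the leader pick the action whose anticipated outcome it likes best. The sketch below uses toy utility tables (arbitrary, not from the text) purely to illustrate the two-stage structure.

```python
# Backward-induction solver for the two-stage game (2.38) with finite
# action sets; the utility tables below are hypothetical toy functions.
A1 = [0, 1, 2]              # leader's action set (illustrative)
A2 = [0, 1, 2]              # follower's action set (illustrative)

def u1(a1, a2):
    return 3 * a1 - a1 * a1 + a2          # toy leader utility

def u2(a1, a2):
    return 2 * a2 - a2 * a2 + a1 * a2     # toy follower utility

def follower_best(a1):
    # stage 2: the follower best-responds to the observed leader action
    return max(A2, key=lambda a2: u2(a1, a2))

# stage 1: the leader anticipates the follower's best response
a1_star = max(A1, key=lambda a1: u1(a1, follower_best(a1)))
a2_star = follower_best(a1_star)
print(a1_star, a2_star)     # prints: 2 2
```

The same enumeration pattern extends to a leader facing several followers, as in the spectrum-leasing game of [401], as long as the followers' responses can be evaluated for each candidate leader strategy.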
2.3.2 Auction games

Auction theory is an applied branch of game theory that analyzes interactions in auction markets and studies the game-theoretic properties of those markets. An auction, conducted by an auctioneer, is a process of buying and selling products by eliciting bids from potential buyers (i.e., bidders) and deciding the auction outcome on the basis of the bids and the auction rules. The rules of the auction, or auction mechanisms, determine to whom the goods are allocated (i.e., the allocation rule) and what price the winners have to pay (i.e., the payment rule). As an efficient and important means of resource allocation, auctions have quite a long history and have been used widely for a variety of objects, including antiques, real estate, bonds, and spectrum resources. For example, the Federal Communications Commission (FCC) has used auctions to award spectrum since 1994, and the United States 700-MHz FCC wireless spectrum auction held in 2008, also known as Auction 73, generated 19.1 billion dollars in revenue by selling licenses in the 698–806-MHz band [43]. The spectrum-allocation problem in cognitive radio networks, although micro-scaled and short-term in nature compared with the FCC auctions, can also be settled by auctions.

Auctions are used precisely because the seller is uncertain about the values that bidders attach to the product. Depending on the scenario, the values of different bidders for the same product may be independent (the private-values model) or dependent (the interdependent-values model). Almost all the existing literature on auctions in cognitive radio networks concerns private values. Moreover, if the distribution of values is identical for all bidders, the bidders are symmetric. Last, it is common to assume a risk-neutral model, in which the bidders care only about the expected payoff, regardless of the variance (risk) of the payoff. There are many ways to classify auctions. We start with the four simple auctions.
• English auction: a sequential auction in which the price increases round by round from a low starting price until only one bidder is left, who wins the product and pays his/her bid.
• Dutch auction: a sequential auction in which the price decreases round by round from a high starting price until one bidder accepts the price, who wins the product and pays the price at acceptance.
• Second-price (sealed-bid) auction: an auction in which each bidder simultaneously submits a bid in a sealed envelope, and the highest bidder wins the product with payment equal to the second-highest bid.
• First-price (sealed-bid) auction: an auction in which each bidder simultaneously submits a bid in a sealed envelope, and the highest bidder wins the product with payment equal to his/her own bid.

Interestingly, the four simple auctions, albeit quite different at first glance, are equivalent in some senses under certain conditions. The main idea was established in the seminal work [433] by William Vickrey, a Nobel laureate in economics. It is summarized in the following theorem; interested readers are referred to [237] for more details.

Theorem 2.3.1
1. The Dutch auction is strategically equivalent to the first-price sealed-bid auction; that is, for every strategy in the first-price auction, there is an equivalent strategy in the Dutch auction, and vice versa.
2. Given the assumption of private values, the English auction is equivalent to the second-price sealed-bid auction.
3. Given symmetric and risk-neutral bidders and private values, all four auctions yield the same expected revenue for the seller. This is a special case of a more general revenue-equivalence theorem.

If the assumptions in Theorem 2.3.1 hold, it suffices to study or adopt only one of the four basic auction forms. Usually, the second-price auction is a favorite candidate, because the procedure is simple and, more importantly, the mechanism forces bidders to bid their true values, as stated in Theorem 2.3.2. In a second-price auction, bidder i, whose value of the product is v_i, submits a sealed bid b_i to the auctioneer. Then, the winner of the auction is arg max_{j∈N} b_j, and the payoffs are

$$u_i = \begin{cases} v_i - \max_{j \neq i} b_j, & \text{if } i = \arg\max_{j\in\mathcal{N}} b_j, \\[2pt] 0, & \text{otherwise.} \end{cases} \qquad (2.39)$$
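The payoff rule (2.39) is easy to check numerically. The sketch below implements it and compares truthful bidding against a handful of deviations for a few sample opposing-bid profiles; the value, bid grid, and profiles are all illustrative choices, not from the text.

```python
# Payoff rule (2.39) of a second-price sealed-bid auction, plus a
# brute-force comparison of truthful bidding against some deviations;
# the value, bid grid, and opposing-bid profiles are illustrative.
def second_price_payoff(i, bids, value_i):
    winner = max(range(len(bids)), key=lambda j: bids[j])
    if winner != i:
        return 0.0
    second = max(b for j, b in enumerate(bids) if j != i)
    return value_i - second            # v_i minus the second-highest bid

value = 7.0                            # bidder 0's private value
profiles = [[3.0, 5.0], [8.0, 2.0], [6.9, 7.1]]   # sample opposing bids
for opp in profiles:
    truthful = second_price_payoff(0, [value] + opp, value)
    for bid in [0.0, 2.0, 5.0, 9.0, 12.0]:
        deviating = second_price_payoff(0, [bid] + opp, value)
        assert truthful >= deviating   # truth-telling is never worse here
print("no profitable deviation found")
```

Overbidding only matters when it wins at a price above v_i (negative payoff), and underbidding only matters when it loses a profitable sale, which is exactly the intuition behind the weak-dominance result stated next.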
Theorem 2.3.2 In a second-price sealed-bid auction, it is a weakly dominant strategy to bid truthfully, i.e., b_i = v_i for all i ∈ N.

The seller plays a passive role in the auctions discussed so far, because his/her benefit has not been taken into consideration. When the seller wants to design an auction game whose NE yields the highest possible expected revenue, the result is called the optimal auction [300]. Assume that all bidders' values of the product are drawn i.i.d. from the same probability distribution, with probability density function f(v) and cumulative distribution function F(v). Then, an optimal auction may be constructed by adding a reserve price on top of a second-price auction. In this case, the seller reserves the right not to sell the product to any bidder if the highest bid is lower than the reserve price.
Theorem 2.3.3 Suppose that the values of all bidders are private and symmetric, and that the function T(v) = v − (1 − F(v))/f(v) is increasing. Then setting a reserve price equal to

$$b_0 = T^{-1}(v_0) \qquad (2.40)$$

in a second-price auction constitutes an optimal auction. The function T^{-1}(·) is the inverse function of T(·), and v_0 is the seller's value of the product.

In addition, setting a reserve price is also an effective measure against bidding-ring collusion, in which some or even all of the bidders collude not to overbid each other, so that the price is kept low.

An auction becomes more involved when more than one item is being sold simultaneously and bidders bid for "packages" of products instead of individual products. This is known as a combinatorial auction [78]. The second-price mechanism can be generalized to the Vickrey–Clarke–Groves (VCG) mechanism, which maintains the incentive to bid truthfully. The basic idea is that the allocation of products maximizes the social welfare, and each winner in the auction pays the opportunity cost that its presence introduces to all the other bidders. Beyond the basic types of auctions, there are other forms such as the clock auction, the proxy auction, the double auction, and so on. Furthermore, there are many practical concerns and variants in real-world auctions. We will not go into the details of these issues; instead, we focus on auction games in cognitive radio networks in what follows.

In [163], the study concerns SINR-auction and power-auction mechanisms subject to a constraint on the accumulated interference power, the so-called "interference temperature," at a measurement point, which must stay below the amount tolerable by the primary system. In this auction game, the resource for sale is not the spectrum band; instead, users compete for the portion of interference that they may cause to the primary system, because the interference is the "limited resource" in this auction. Another kind of auction has been used in [85], where spectrum-sensing effort, rather than monetary payment, is the price to pay for spectrum opportunities.
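Returning to the optimal-auction result, the reserve price b_0 = T^{-1}(v_0) in (2.40) can be computed numerically for any increasing T. As a worked example (illustrative assumptions, not from the text), for values uniform on [0, 1] we have T(v) = v − (1 − v)/1 = 2v − 1, so b_0 = (v_0 + 1)/2; the bisection below recovers this.

```python
# Optimal reserve price (2.40) for bidder values uniform on [0, 1]:
# T(v) = v - (1 - F(v))/f(v) = 2v - 1, hence b0 = T^{-1}(v0) = (v0 + 1)/2.
# Bisection inverts T numerically so the same routine works for any
# increasing T; the seller's value v0 is an arbitrary example.
def T(v):
    F, f = v, 1.0                 # uniform[0, 1] cdf and pdf
    return v - (1 - F) / f

def invert(T, target, lo=0.0, hi=1.0, iters=60):
    for _ in range(iters):        # bisection: valid since T is increasing
        mid = (lo + hi) / 2
        if T(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

v0 = 0.0                          # seller values the product at zero
b0 = invert(T, v0)
print(round(b0, 6))               # prints 0.5, matching (v0 + 1)/2
```

Note that the reserve is strictly above the seller's own value v_0 = 0: the seller deliberately risks not selling in order to raise the expected payment of the highest bidder.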
The auction still follows the form of first-price and second-price sealed-bid auctions. In the auction framework proposed in [134], users bid for a fraction of the band and the auction outcome has to satisfy an interference constraint. In this auction, each user has a piecewise-linear demand curve, and it is assumed that all users reveal their demand curves to the auctioneer truthfully. Because the corresponding revenue is a piecewise-quadratic function, the auctioneer can find the revenue-maximizing point under the constraint that the allocation is conflict-free.

The property of being cheat-proof is a major concern in auction design, and we have mentioned that the VCG mechanism is capable of enforcing truth-telling. However, the VCG mechanism sometimes suffers from high complexity and vulnerability to collusive attacks. In [495] and [220], system efficiency is traded for low complexity using the
greedy algorithm, while the authors carefully design the mechanism to guarantee that truth-telling remains a dominant strategy in the auction game. Because of the unique feature of wireless communications that spectrum can be reused by users who are geographically far apart, spectrum resources differ from other commercial commodities in that they are often interference-limited rather than quantity-limited. From this point of view, [465] establishes a new auction that does not exist in the economics literature. In this auction game, one spectrum band is simultaneously awarded to multiple users without interference, and the number of winners depends highly on the mutual interference among the secondary users. For this so-called "multi-winner auction," proper auction mechanisms are developed to eliminate user collusion and improve revenue, and near-optimal algorithms are further applied to reduce the complexity. When there are multiple sellers who also compete in selling the spectrum, the scenario can be modeled as a double auction. A truth-telling-enforced double-auction mechanism has been proposed in [512], and an anti-collusion double-auction mechanism has been developed in [207], where historical observations are employed to estimate users' private values.
2.3.3 Mechanism design

An auction is one of many possible ways of selling products. On reducing any particular selling format (e.g., an auction format) to its basics, we arrive at a fundamental question: what is the best way to allocate a product? This generalized allocation problem falls into the category of mechanism design, a field of game theory concerned with a class of private-information games. The 2007 Nobel Memorial Prize in Economic Sciences was awarded to Leonid Hurwicz, Eric Maskin, and Roger Myerson as the founders of mechanism-design theory.

The distinguishing feature of mechanism design is that the game structure is "designed" by a game designer called the "principal," who chooses a mechanism in his/her own interest. As in an auction, the players, called "agents," hold some information that is not known to the others, and the principal asks the agents for messages (like the bids in an auction) to elicit this private information. Hence, this is a game of incomplete information in which each agent's private information, formally known as the "type," is denoted by θ_i, a value drawn from a set Θ_i, for i ∈ N. On the basis of the messages from the agents, the principal makes an allocation decision d ∈ D, where D is the set of all potential decisions on how resources are allocated. Because the agents are not necessarily honest, incentives have to be given in terms of monetary gains, known as transfers. A transfer may be negative (as if one were paying a tax) or positive (as if one were receiving compensation). Then, agent i's utility is the benefit from the decision d plus a transfer, i.e., u_i = v_i(d, θ_i) + t_i, which may provide agents with an incentive to reveal their information truthfully. In summary, the basic insight of mechanism design is that resource constraints and incentive constraints are considered on an equal footing in an allocation problem with private information [301].
Definition 2.3.1 A mechanism defines a message space M_i for each agent i ∈ N and an allocation function (d, (t_i)_{i∈N}) : ×_{i∈N} M_i → D × R^N. For a vector of messages m ∈ ×_{i∈N} M_i, d(m) is the decision and t_i(m) is agent i's transfer.

For a given mechanism, each agent's strategy is a mapping from the individual type to a message, i.e., m_i : Θ_i → M_i, the agent being aware that his/her own utility depends on all the reported messages. Assume that the prior distribution of all agents' types is known. Then, every agent wants to maximize the expected payoff, and hence the Bayesian equilibrium, a concept similar to the NE, can be developed for this message game with incomplete information. The strategy profile m_i^*(θ_i), i ∈ N, is a Bayesian equilibrium if

$$\mathrm{E}\!\left[v_i\big(d(m^*(\theta)), \theta_i\big) + t_i\big(m^*(\theta)\big)\right] \ge \mathrm{E}\!\left[v_i\Big(d\big(m_i(\theta_i), m_{-i}^*(\theta_{-i})\big), \theta_i\Big) + t_i\big(m_i(\theta_i), m_{-i}^*(\theta_{-i})\big)\right] \qquad (2.41)$$

holds for all i ∈ N and all m_i(θ_i) ∈ M_i with m_i(θ_i) ≠ m_i^*(θ_i). Notice that the expectation E(·) is taken over the prior distribution given that agent i's type is θ_i. Because there are unlimited possibilities for choosing message spaces and allocation functions, analyzing the equilibrium and designing the mechanism seem extremely challenging. However, thanks to the equivalence established by the "revelation principle" in Theorem 2.3.4 below, we can restrict our attention to "direct" mechanisms in which the message space coincides with the type space, i.e., M_i = Θ_i, and all agents truthfully announce their types [123].

Theorem 2.3.4 For any Bayesian equilibrium supported by any general mechanism, there exists an equivalent direct mechanism with the same allocation and an equilibrium m_i(θ_i) = θ_i, ∀i ∈ N.
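As a concrete illustration of a direct mechanism with transfers, the sketch below implements the Clarke (pivot) payment rule, a special case of the VCG idea mentioned in the previous subsection: the decision maximizes reported social welfare, and each agent's transfer equals the externality it imposes on the others. The decision set and valuation tables are toy examples invented for this sketch.

```python
# A tiny direct mechanism: the decision maximizes reported welfare and
# transfers follow the Clarke (pivot) rule, a special case of VCG.
# The decision set D and the valuation tables are hypothetical toys.
D = ["A", "B"]                       # possible allocation decisions
true_vals = [                        # v_i(d, theta_i) as lookup tables
    {"A": 5.0, "B": 0.0},
    {"A": 0.0, "B": 3.0},
    {"A": 1.0, "B": 2.0},
]

def run(reports):
    welfare = lambda d: sum(r[d] for r in reports)
    d_star = max(D, key=welfare)     # welfare-maximizing decision
    transfers = []
    for i in range(len(reports)):
        others = lambda d: sum(r[d] for j, r in enumerate(reports) if j != i)
        # pay the externality imposed on the others (non-positive transfer)
        transfers.append(others(d_star) - max(others(d) for d in D))
    return d_star, transfers

# With truthful reports, agent 0's utility v_0(d) + t_0 is no worse than
# under any of the misreports tried below.
d, t = run(true_vals)
truthful_u0 = true_vals[0][d] + t[0]
for lie in [{"A": 0.0, "B": 9.0}, {"A": 2.0, "B": 2.0}]:
    d2, t2 = run([lie] + true_vals[1:])
    assert truthful_u0 >= true_vals[0][d2] + t2[0]
print(d, [round(x, 2) for x in t])   # prints: A [-4.0, 0.0, 0.0]
```

Only agent 0 is "pivotal" here (without it, decision B would win), so only agent 0 pays a nonzero transfer, which matches the opportunity-cost interpretation of VCG payments given earlier.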
Since the system designer wants to maximize the system utility, mechanism-based resource allocation is used to enforce users to represent their private parameters truthfully. A cheat-proof strategy for open spectrum sharing has been proposed based on the Bayesian mechanism design [466] [462]. In this work, a cooperative sharing is maintained by repeated game modeling, and users share the spectrum on the basis of their channel-state information. In order to provide users with an incentive to reveal their private information honestly, mechanism design has been employed to determine proper transfer functions.
2.4 Cooperative games

In this section, we discuss two important types of cooperative spectrum-sharing games, namely bargaining games and coalitional games, in which network users have an agreement on how to share the available spectrum resources fairly and efficiently.
2.4.1 Bargaining games

The bargaining game is an interesting kind of cooperative game in which individuals have the opportunity to reach a mutually beneficial agreement. In this game, individual players have conflicts of interest, and no agreement may be imposed on any individual without his/her approval. Although there are other models, such as the strategic approach with a specified bargaining procedure [371], we will focus on Nash's axiomatic model, established in Nash's seminal paper [308], because it has been widely applied to cognitive radio networks.

For convenience, we consider the two-player bargaining game N = {1, 2}, which can be extended to more players straightforwardly. For a certain agreement, player 1 receives utility u_1 and player 2 receives utility u_2. If the players fail to reach any agreement, they receive utilities u_1^0 and u_2^0, respectively. The set of all possible utility pairs is the feasible set, denoted by U.

Definition 2.4.1 A two-player bargaining problem is a pair (U, (u_1^0, u_2^0)), where U ⊂ R² is a compact and convex set, and there exists at least one utility pair (u_1, u_2) ∈ U such that u_1 > u_1^0 and u_2 > u_2^0. A bargaining solution is a function (u_1^*, u_2^*) = f(U, (u_1^0, u_2^0)) that assigns to a bargaining problem (U, (u_1^0, u_2^0)) a unique element of U.

The axioms imposed on the bargaining solution (u_1^*, u_2^*) are listed as follows [326].
• Individual rationality: u_1^* > u_1^0 and u_2^* > u_2^0.
• Feasibility: (u_1^*, u_2^*) ∈ U.
• Pareto efficiency: If (u_1, u_2), (u_1', u_2') ∈ U with u_1 < u_1' and u_2 < u_2', then f(U, (u_1^0, u_2^0)) ≠ (u_1, u_2).
• Symmetry: Suppose that a bargaining problem is symmetric, i.e., (u_1, u_2) ∈ U ⟺ (u_2, u_1) ∈ U and u_1^0 = u_2^0. Then u_1^* = u_2^*.
• Independence of irrelevant alternatives: If (u_1^*, u_2^*) ∈ U' ⊂ U, then f(U', (u_1^0, u_2^0)) = f(U, (u_1^0, u_2^0)) = (u_1^*, u_2^*).
• Independence of linear transformations: Let U' be obtained from U by the linear transformation u_1' = c_1 u_1 + c_2 and u_2' = c_3 u_2 + c_4, with c_1, c_3 > 0. Then f(U', (c_1 u_1^0 + c_2, c_3 u_2^0 + c_4)) = (c_1 u_1^* + c_2, c_3 u_2^* + c_4).
Theorem 2.4.1 There is a unique bargaining solution satisfying all the axioms above, which is given by

$$(u_1^*, u_2^*) = \arg\max_{(u_1, u_2)\in U,\; u_1 > u_1^0,\; u_2 > u_2^0} \big(u_1 - u_1^0\big)\big(u_2 - u_2^0\big). \qquad (2.42)$$

This is called the Nash bargaining solution (NBS). Figure 2.5 illustrates the feasible utility region of a two-player game. The shaded area U represents the feasible range of u_1 and u_2, and the NBS corresponds to the point (u_1^*, u_2^*) in Figure 2.5, where C_max is the largest value of (u_1 − u_1^0)(u_2 − u_2^0) over the

Figure 2.5 The NBS of a two-player game.

feasible set U. The meaning of the NBS is that, after the players have been assigned the minimal utility, the remaining welfare is divided between them in a ratio equal to the rate at which the utility can be transferred [169]. Some remarks are in order.

1. The NBS is well defined. Since the function (u_1 − u_1^0)(u_2 − u_2^0) is strictly quasi-concave on {(u_1, u_2) ∈ U : u_1 > u_1^0, u_2 > u_2^0}, a non-empty, compact, and convex set guaranteed by the definition of the bargaining problem, this maximization problem has a unique maximizer.
2. The proof of the theorem can be found, e.g., in [321]. The idea is first to show that the NBS satisfies these axioms, and then to show that it is the only choice satisfying all of them.
3. Moreover, a detailed discussion in [321] shows that no axiom is superfluous. In other words, removing any of the axioms will no longer guarantee the uniqueness of the bargaining solution.
4. It is straightforward to extend Theorem 2.4.1 to bargaining problems with more than two players. Specifically, the NBS for the bargaining problem with player set N is the solution to the following optimization problem:

$$\arg\max_{(u_1, u_2, \ldots)\in U,\; u_k > u_k^0\; \forall k\in\mathcal{N}} \;\prod_{k\in\mathcal{N}} \big(u_k - u_k^0\big). \qquad (2.43)$$
5. As shown in [169], when u_k^0 = 0, ∀k ∈ N, the NBS coincides with proportional fairness, which is one of the widely used criteria for resource allocation. In simpler terms, the NBS achieves some degree of fairness among cooperative players through bargaining.

In what follows, we give a brief summary of how bargaining games have been applied to cognitive radio networks, where cooperation among players is possible and fairness is an important concern. In [177], the NBS is directly applied to allocate frequency–time units in an efficient and fair way, after a learning process has first been applied to find the disagreement payoffs.
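The maximization in (2.42) can also be sketched numerically. The toy instance below (a linear Pareto frontier u_1 + u_2 = 10 with disagreement point (2, 1); all numbers are illustrative) sweeps the frontier with a grid search; on a linear frontier the NBS splits the surplus equally, so the search should land at u_1 = 5.5, u_2 = 4.5.

```python
# Grid search for the Nash bargaining solution (2.42) on a toy problem:
# Pareto frontier u1 + u2 = 10, disagreement point (d1, d2) = (2, 1).
# All parameters are illustrative, not from the text.
d1, d2, total = 2.0, 1.0, 10.0

best, best_pair = -1.0, None
steps = 10000
for k in range(steps + 1):
    u1 = total * k / steps              # sweep the frontier u1 + u2 = 10
    u2 = total - u1
    if u1 > d1 and u2 > d2:             # individual rationality
        prod = (u1 - d1) * (u2 - d2)    # the Nash product
        if prod > best:
            best, best_pair = prod, (u1, u2)

print(best_pair)   # (5.5, 4.5): each side gets half the surplus of 7
```

The total surplus over the disagreement point is 10 − (2 + 1) = 7, and the NBS awards 3.5 to each player on top of the disagreement utilities, illustrating the equal-surplus-split interpretation for transferable utility on a linear frontier.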
The symmetry axiom implies that all players are equal in the bargaining game; however, sometimes this is not true, because some players have priority over others. To accommodate this situation, one variant of the NBS offsets the disagreement point to some other payoff vector that implicitly incorporates the asymmetry among players. An alternative approach is to modify the objective function to ∏_{k∈N} (u_k − u_k^0)^{w_k}, with weights w_k reflecting the priorities of the players. For instance, in the power-allocation game consisting of primary users and secondary users in [9], different values of u_k^0 are set for primary users and secondary users, because primary users have priority in using spectrum resources in cognitive radio networks. In [352], with heterogeneous wireless users, the disagreement point in the NBS objective function is replaced by the threat made by individual players.

Moreover, finding the NBS requires global information, which is not always available. A distributed implementation is proposed in [86], where users adapt their spectrum assignment to approximate the optimal assignment through bargaining within local groups. Although this is not stated explicitly, it actually falls into the category of the NBS, because the objective is to maximize the total logarithmic user throughput, which is equivalent to maximizing the product of user payoffs. In this work, neighboring players adjust the spectrum-band assignment for better system performance through one-to-one or one-to-many bargaining. In addition, a theoretical lower bound is derived to guide the bargaining process.

A similar approach is implemented in [378], which iteratively updates the power-allocation strategy using only local information. In this game, players allocate power to channels and their payoffs are the corresponding capacities.
Given the assumption that players far away from each other cause negligible interference to one another, from a particular player's perspective the global objective separates into two parts: the product of the faraway players' payoffs and the product of the neighboring players' payoffs. Because the player's power-allocation strategy affects only the second term, maximizing the second term is equivalent to maximizing the global objective. Each player sequentially adjusts its strategy, and it is proved that the iterative process converges. Although it is not known for sure whether it converges to the NBS, simulation results show that the convergence point is close to the true NBS.

The concept of the NBS can also be applied to scenarios without explicit bargaining. For example, the NBS is employed to determine how to split the payment among several users in a cognitive spectrum auction in [465], where the auctioneer directly sets the NBS as the price for each player, and the players are willing to accept it because the NBS is an equilibrium. In this paper, the objective function is defined as the product of individual payoffs, which is similar to the NBS, but additional constraints have been introduced to eliminate collusive behavior in the auction.
2.4.2 Coalitional games

A coalitional game is another type of cooperative game. It describes how a set of players can cooperate with one another by forming cooperating groups and thus improve their payoffs in the game.
Denote the set of players by N, and a nonempty subset of N, i.e., a coalition, by S. Since the players in coalition S have agreed to cooperate, they can be viewed as one entity, associated with a value v(S) that represents the worth of coalition S. A coalitional game is then determined by N and v(S). Coalitional games of this kind are known as games with transferrable payoff, since the value v(S) is a total payoff that can be distributed in any way among the members of S, e.g., using an appropriate fairness rule. However, in some coalitional games it is difficult to assign a single real-number value to a coalition. Instead, each coalition S is characterized by an arbitrary set V(S) of consequences. Such games are known as coalitional games without transferrable payoff. In these games, the payoff that each player in S receives depends on the joint actions of the members of S, and V(S) becomes a set of payoff vectors, where each element x_i of a vector x ∈ V(S) represents player i's payoff as a member of S.

In coalitional games with or without transferrable payoff, the value of a coalition S may depend only on the members of S, unaffected by how the players outside S are partitioned. We say that such coalitional games are in characteristic-function form. Sometimes the value of S is also affected by how the players in N \ S are partitioned into various coalitions, and we say that those coalitional games are in partition-function form [418]. In partition-function-form games, the coalitional structure is denoted by a partition S of N, where S = {S_1, ..., S_K}, S_i ∩ S_j = ∅ for i ≠ j, and ∪_{i=1}^K S_i = N. The value of S_i ∈ S depends on the coalitional structure S, and can be denoted by v(S_i, S).

In characteristic-function-form coalitional games, cooperation by forming larger coalitions is beneficial for the players in terms of a higher payoff. This property is referred to as superadditivity.
For instance, in games with transferable payoff, superadditivity means

v(S_1 ∪ S_2) ≥ v(S_1) + v(S_2),   ∀S_1, S_2 ⊂ N, S_1 ∩ S_2 = ∅.    (2.44)
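As a quick illustration (the game below is a hypothetical example, not one from the text), a transferable-payoff game can be represented as a map from coalitions to values and checked against (2.44) by enumerating disjoint coalition pairs:

```python
from itertools import combinations, chain

def subsets(players):
    """All nonempty subsets of the player set."""
    return chain.from_iterable(combinations(players, r)
                               for r in range(1, len(players) + 1))

def is_superadditive(v, players):
    """Check v(S1 ∪ S2) >= v(S1) + v(S2) for all disjoint S1, S2 (eq. (2.44))."""
    for s1 in subsets(players):
        for s2 in subsets(players):
            if set(s1).isdisjoint(s2):
                union = frozenset(s1) | frozenset(s2)
                if v[union] < v[frozenset(s1)] + v[frozenset(s2)]:
                    return False
    return True

# Hypothetical 3-player majority-style game: a coalition earns 1 iff |S| >= 2.
players = (1, 2, 3)
v = {frozenset(s): (1.0 if len(s) >= 2 else 0.0) for s in subsets(players)}
print(is_superadditive(v, players))  # → True
```

The double enumeration is exponential in the number of players, which is acceptable only for small N.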
Therefore, forming larger coalitions from disjoint (smaller) coalitions brings at least the total payoff that the disjoint coalitions could obtain individually. Owing to this property, it is always beneficial for players in a superadditive game to form a coalition that contains all the players, i.e., a grand coalition. Since a grand coalition provides the highest total payoff for the players, it is the optimal solution preferred by rational players. Naturally, one may wonder whether a grand coalition is always achievable and stable. To answer this question, we first introduce the solution concept for coalitional games, the core [322]. The idea behind the core is similar to that behind a Nash equilibrium of a non-cooperative game: a strategy profile such that no player would deviate unilaterally to obtain a higher payoff. In a coalitional game, an outcome is stable if no coalition is willing to deviate and obtain an outcome that is better for all its members. Let ⟨N, v⟩ denote a coalitional game. For any payoff profile (x_i)_{i∈N} of real numbers and any coalition S, let x(S) = Σ_{i∈S} x_i. A vector (x_i)_{i∈S} is an S-feasible payoff vector if x(S) = v(S).
Game theory for cognitive radio networks
Definition 2.4.2 The core of the coalitional game ⟨N, v⟩ is the set of feasible payoff profiles (x_i)_{i∈N} for which there is no coalition S and S-feasible payoff vector (y_i)_{i∈S} such that y_i > x_i for all i ∈ S. In other words, no coalition S ⊂ N has an incentive to reject the proposed payoff profile in the core, deviate from the grand coalition, and form coalition S instead. Therefore, the definition of the core is equivalent to

C = {(x_i) : Σ_{i∈N} x_i = v(N), and Σ_{i∈S} x_i ≥ v(S), ∀S ⊆ N}.    (2.45)
As long as one can find a payoff allocation (x_i) that lies within the core, the grand coalition is a stable and optimal solution for the coalitional game. It can be seen that the core is the set of payoff profiles satisfying a system of weak linear inequalities, and is thus closed and convex. Moreover, we can find the core by solving a linear program

min_{(x_i)_{i∈N}} Σ_{i∈N} x_i,   s.t. Σ_{i∈N} x_i = v(N), Σ_{i∈S} x_i ≥ v(S), ∀S ⊆ N.    (2.46)
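Membership in the core, as characterized by (2.45), can be checked by enumerating all coalitions; the three-player game below is a hypothetical example (for a large player set one would instead solve the linear program (2.46)):

```python
from itertools import combinations

def in_core(x, v):
    """Check (2.45): sum_i x_i = v(N) and sum_{i in S} x_i >= v(S) for all S.

    x maps player -> payoff; v maps frozensets of players -> coalition value."""
    players = sorted(set().union(*v))     # player ids appearing in v's coalitions
    grand = frozenset(players)
    if abs(sum(x[i] for i in players) - v[grand]) > 1e-9:
        return False                      # not efficient
    for r in range(1, len(players)):
        for s in combinations(players, r):
            if sum(x[i] for i in s) < v[frozenset(s)] - 1e-9:
                return False              # coalition s would deviate
    return True

# Hypothetical 3-player game: v(S) = |S|**2, which is superadditive.
v = {frozenset(s): len(s) ** 2
     for r in range(1, 4) for s in combinations(range(3), r)}
print(in_core({0: 3, 1: 3, 2: 3}, v))  # → True: every pair gets 6 >= v(pair) = 4
print(in_core({0: 7, 1: 1, 2: 1}, v))  # → False: coalition {1, 2} gets 2 < 4
```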
The existence of the core depends on the feasibility of the linear program and is related to the property of balance of a game. A coalitional game with transferable payoff is called balanced if and only if the inequality

Σ_{S⊆N} λ_S v(S) ≤ v(N)    (2.47)
holds for every balanced collection of non-negative weights λ = (λ_S)_{S⊆N}, where a collection (λ_S)_{S⊆N} of numbers in [0, 1] is balanced if, for every player i, the sum of λ_S over all the coalitions that contain player i satisfies Σ_{S∋i} λ_S = 1. If we assume that any player i ∈ N has a single unit of time to distribute among all the coalitions of which he is a member, then, in order for a coalition S to be active for a fraction of time λ_S, all members of S must be active in S during λ_S, and the resulting payoff is λ_S v(S). The property of balance then means that there exists no feasible allocation of time that can yield a total payoff higher than that of the grand coalition, v(N); thus the grand coalition is optimal, indicating that there may exist a nonempty core. Without giving the detailed proof (the interested reader can refer to [322]), we present the result about the existence of a nonempty core in the following theorem [322].

Theorem 2.4.2 A coalitional game with transferable payoff has a nonempty core if and only if it is balanced.

If the property of balance does not hold, the core will be empty, and one will have trouble finding a suitable solution of the coalitional game. Thus, an alternative solution concept that always exists in a coalitional game is needed. Shapley proposed a solution concept, known as the Shapley value ψ, to assign a unique payoff value to each player in the game. In the following, we provide an axiomatic characterization of
the Shapley value, where ψ_i denotes the payoff assigned to player i according to the Shapley value.
• (Symmetry) If player i and player j are interchangeable in v, i.e., v(S ∪ {i}) = v(S ∪ {j}) for every coalition S that contains neither player i nor player j, then ψ_i(v) = ψ_j(v).
• (Dummy player) If player i is a dummy in v, i.e., v(S) = v(S ∪ {i}) for every coalition S, then ψ_i(v) = 0.
• (Additivity) For any two games u and v, define the game u + v by (u + v)(S) = u(S) + v(S); then ψ_i(u + v) = ψ_i(u) + ψ_i(v), for all i ∈ N.
• (Efficiency) Σ_{i∈N} ψ_i(v) = v(N).
The Shapley value is the only value that satisfies all the above axioms, and is usually calculated as the expected marginal contribution of player i upon joining the grand coalition, given by

ψ_i(v) = Σ_{S⊆N\{i}} [|S|!(|N| − |S| − 1)!/|N|!] [v(S ∪ {i}) − v(S)].    (2.48)
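A direct, if exponential-time, evaluation of (2.48) is straightforward; the game below is a hypothetical example (v(S) = |S|²), not one from the text:

```python
from itertools import combinations
from math import factorial

def shapley_value(v, players):
    """Shapley value via (2.48): weighted marginal contributions of each player."""
    n = len(players)
    psi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for r in range(n):  # coalitions S ⊆ N \ {i}, including the empty set
            for s in combinations(others, r):
                S = frozenset(s)
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += weight * (v[S | {i}] - v[S])
        psi[i] = total
    return psi

# Hypothetical 3-player game: v(S) = |S|**2, with v(∅) = 0.
players = (0, 1, 2)
v = {frozenset(s): len(s) ** 2
     for r in range(4) for s in combinations(players, r)}
psi = shapley_value(v, players)
print(psi)  # symmetric game → each player gets v(N)/3 = 3.0
```

In this symmetric game the symmetry and efficiency axioms already force an equal split of v(N) = 9, which the formula reproduces.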
In a cognitive radio network, cooperation among rational users can generally improve the network performance owing to the multiuser diversity and spatial diversity in a wireless environment. Therefore, coalitional game theory has been used to study user cooperation and design optimal, fair, and efficient collaboration strategies. In [299], spectrum sharing through receiver cooperation is studied in a coalitional game framework. The authors model the receiver cooperation in a Gaussian interference channel as a coalitional game with transferable payoff, where the value of the game is defined as the sum-rate achieved by jointly decoding all users in the coalition. It is shown that the grand coalition that maximizes the sum-rate is stable, and the rate allocation to members of a coalition is obtained through a bargaining-game model. Receiver cooperation by forming a linear multiuser detector is modeled as a game without transferable payoff, where the payoff of each player is the received SINR. In a high-SINR regime, the grand coalition is proved to be stable and sum-rate maximizing. The work in [384] models cooperative spectrum sensing among secondary users as a coalitional game without transferable payoff, and a distributed algorithm for coalition formation through merge and split operations is proposed. It is shown that the secondary users can self-organize into disjoint independent coalitions, and the detection probability is maximized while maintaining a certain false-alarm level.
2.5 Stochastic games

In the above, we have discussed the various aspects of game theory with their applications to cognitive radio networking, from non-cooperative spectrum competition to spectrum trading using market-equilibrium concepts and cooperative spectrum-sharing games. Generally speaking, in these games the players are assumed to face the same stage in the game at each time, meaning that the game and the players' strategies do
not depend on the current state of the network. However, this is not true for a cognitive radio network, where the spectrum opportunities and the surrounding radio environment keep changing over time. In order to study the cooperation and competition behaviors of cognitive users in a dynamic environment, we apply the theory of stochastic games, since this provides a better fit. A stochastic game [383] is a Markov decision process (MDP) [129] extended by considering the interactive competition among different agents. In a stochastic game G, there is a set of states, denoted by S, and a collection of action sets, A_1, . . ., A_{|N|}, one for each player in the game. The game is played in a sequence of stages. At the beginning of each stage the game is in some state. After the players have selected and executed their actions, the game then moves to a new random state with transition probability determined by the current state and one action from each player: T : S × A_1 × ··· × A_{|N|} → PD(S). Meanwhile, at each stage each player receives a payoff u_i : S × A_1 × ··· × A_{|N|} → R, which also depends on the current state and the chosen actions. The game is played continually for a number of stages, and each player attempts to maximize an objective function. The objective function can be defined as the expected sum of discounted payoffs over an infinite horizon, E[Σ_{j=0}^∞ γ^j u_{i,t+j}], where u_{i,t+j} is the reward received j steps into the future by player i and γ is the discount factor. It can also be defined as the expected sum of discounted payoffs over a finite time horizon, or the limit of the average reward. Since, in a cognitive radio network, data transmission is usually assumed to last for a sufficiently long time and to be sensitive to time delay (e.g., multimedia content), the most widely adopted form of objective function is the expected sum of discounted payoffs over an infinite horizon.
The solution, also called a policy, of a stochastic game is defined as a probability distribution over the action set at any state, π_i : S → PD(A_i), for all i ∈ N. Given the current state s^t at time t, if player i's policy π_i^t at time t is independent of the states and actions in all previous time slots, the policy π_i is said to be Markov. If the policy is further independent of time, it is said to be stationary. The stationary policy of the players in a stochastic game, i.e., their optimal strategies, can be obtained by value iteration according to Bellman's optimality condition. For example, in a two-player stochastic game with opposite objectives, denote by V(s) the expected reward (of player 1) under the optimal policy starting from state s, and by Q(s, a_1, a_2) the expected reward of player 1 for taking action a_1 against player 2 taking action a_2 from state s and continuing optimally thereafter [249]. Then, the optimal strategy for player 1 can be obtained from the following iterations,

V(s) = max_{π∈PD(A_1)} min_{a_2∈A_2} Σ_{a_1∈A_1} Q(s, a_1, a_2) π_{a_1},    (2.49)

Q(s, a_1, a_2) = u_1(s, a_1, a_2) + γ Σ_{s′∈S} T(s, a_1, a_2, s′) V(s′),    (2.50)

where π_{a_1} denotes the probability that player 1 takes action a_1 under mixed strategy π, and T(s, a_1, a_2, s′) denotes the transition probability from state s to s′ when player 1 takes a_1 and player 2 takes a_2.
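The iterations (2.49)–(2.50) can be sketched on a toy zero-sum game. For brevity, this sketch replaces the mixed-strategy max–min of (2.49) with a pure-strategy maximin (the general case solves a small matrix-game linear program at each state); all payoffs and transition probabilities below are made-up numbers:

```python
# Toy zero-sum stochastic game (hypothetical numbers): 2 states, 2 actions each.
# u1[s][a1][a2] is player 1's payoff; T[s][a1][a2] is the next-state distribution.
S, A1, A2, gamma = [0, 1], [0, 1], [0, 1], 0.9
u1 = {0: [[1.0, -1.0], [0.0, 0.5]], 1: [[0.2, 0.0], [-0.5, 1.0]]}
T = {0: [[[0.8, 0.2], [0.5, 0.5]], [[0.3, 0.7], [0.9, 0.1]]],
     1: [[[0.6, 0.4], [0.1, 0.9]], [[0.5, 0.5], [0.2, 0.8]]]}

V = {s: 0.0 for s in S}
for _ in range(500):                       # value iteration on (2.49)-(2.50)
    V_new = {}
    for s in S:
        # Q(s, a1, a2) = u1(s, a1, a2) + gamma * sum_s' T(s, a1, a2, s') V(s')
        Q = [[u1[s][a1][a2] + gamma * sum(T[s][a1][a2][sp] * V[sp] for sp in S)
              for a2 in A2] for a1 in A1]
        # Pure-strategy maximin: a simplification of the mixed-strategy
        # max-min in (2.49), which in general needs a small matrix-game LP.
        V_new[s] = max(min(Q[a1][a2] for a2 in A2) for a1 in A1)
    V = V_new
print({s: round(V[s], 3) for s in S})
```

Because the update is a γ-contraction, the values converge geometrically and are bounded by max|u_1|/(1 − γ).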
For several special kinds of stochastic games, a linear program can also be formulated to obtain the optimal policy, which is defined as the probability profile (π_i(s, a_1, . . ., a_{|N|}))_{i∈N}, for all s ∈ S and a_i ∈ A_i. Examples include single-controller discounted games, separable-reward state-independent-transition discounted games, and switching-controller discounted games. Interested readers are referred to [129] for more details. In the following, we use several example applications of stochastic game theory to cognitive radio networking to illustrate how to formulate a stochastic game for various problems and how to solve the game.
• Spectrum auction [128]. At each time slot, a central spectrum moderator auctions the currently available spectrum resources, and secondary users strategically bid for the resources. Since the secondary users need to cope with uncertainties from both the environment (e.g., channel availability and quality variations, packet arrivals from the source) and interactions with the other secondary users (e.g., resource allocation from the auction), the state of the stochastic game is composed of the buffer state and channel state, where the buffer state depends on the current spectrum-allocation status. The transition probability of the game can be derived, since packet arrival is assumed to be a Poisson process and the channel-state transition is modeled as a Markov chain. Strategic secondary users want to maximize the number of transmitted packets by choosing the optimal bidding strategy. To this end, an interactive learning algorithm is proposed, whereby the high-dimensional state space is decomposed and reduced to a simpler expression on the basis of conjectures from previous spectrum allocations, and state-transition probabilities are further estimated using past observations of transitions between different states.
In this way, secondary users can approximate the future reward and approach the optimal policy through value iteration.
• Transmission control [173]. The secondary users' rate-adaptation problem is formulated as a constrained zero-sum stochastic game. Under a TDMA assumption, the system state-transition probabilities depend only on the user who is transmitting, and thus the game falls into the category of a switching-controller game. The state of the transmission-control stochastic game comprises the channel state, the secondary users' buffer state, and the incoming traffic state; the action of each user is the transmission rate. The cost that the users try to minimize is composed of a transmission cost, which is a function of channel quality and transmission rate, and a holding cost, which is a function of the buffer state. It is shown that an NE exists in the transmission-control game, since it is a zero-sum game; moreover, a stochastic approximation algorithm is proposed to search for the NE.
• Anti-jamming defense [463]. In a cognitive radio network, there may exist cognitive attackers who can adapt their attacking strategy to the time-varying spectrum opportunities and secondary users' strategy. A dynamic security mechanism to alleviate the damage caused by cognitive attackers is investigated in [463] through stochastic-game modeling. The state of the anti-jamming game includes the spectrum availability, the channel quality, and the status of jammed channels observed at the current time slot.
The action of the secondary users reflects how many channels they should reserve for transmitting control and data messages and how to switch between the different channels. Since the secondary users and attackers have opposite objectives, the antijamming game is a zero-sum game, and the optimal policy of the secondary users is obtained by the minimax-Q learning algorithm based on (2.49) and (2.50).
2.6 Summary

In this chapter, we provided a comprehensive overview of game theory and its applications to research on cognitive radio networks. To this end, we classified state-of-the-art game-theoretic research contributions on cognitive radio networking into four categories, namely non-cooperative spectrum sharing, spectrum trading and mechanism design, cooperative spectrum sharing, and stochastic spectrum-sharing games. For each category, we explained the fundamental concepts and properties, and provided a detailed discussion of the methodologies for applying these games in the design of spectrum-sharing protocols. The in-depth theoretical analysis and overview of the most recent practical implementations in this chapter will aid the design of efficient, fair, and distributed spectrum management and allocation for next-generation wireless networks.
3 Markov models for dynamic spectrum allocation
In a dynamically changing spectrum environment, it is very important to consider the statistics of different users' spectrum access so as to achieve more efficient spectrum allocation. In this chapter, we study a primary-prioritized Markov approach for dynamic spectrum access through modeling the interactions between the primary and the secondary users as continuous-time Markov chains (CTMCs). Using the CTMC models, we derive optimal access probabilities for the secondary users that compensate for the throughput degradation due to interference among them; in this way, the secondary users' spectrum access is optimally coordinated and the spectrum dynamics are clearly captured. Therefore, a good tradeoff between spectrum efficiency and fairness can be achieved. The simulation results show that the primary-prioritized dynamic spectrum access approach under the criterion of proportional fairness achieves much higher throughput than do the CSMA-based random access approaches and the approach achieving max–min fairness. Moreover, it provides fair spectrum sharing among secondary users with only small performance degradation compared to the approach maximizing the overall average throughput.
3.1 Introduction

Efficiently and fairly sharing the spectrum among secondary users in order to fully utilize the limited spectrum resources is an important issue, especially when multiple dissimilar secondary users coexist in the same portion of the spectrum band. Although existing dynamic spectrum access schemes have successfully enhanced spectrum efficiency, most of them focus on spectrum allocation among secondary users in a static spectrum environment. Therefore, several fundamental challenges still remain to be addressed. First, the radio-spectrum environment is constantly changing. In conventional power control to manage mutual interference for a fixed number of secondary users, after each change in the number of contending secondary users, the network needs to completely re-optimize the power allocation for all users. This results in high complexity and significant overhead. Second, if a primary user appears in some specific portion of the spectrum, secondary users in that band need to adapt their transmission parameters to avoid interfering with the primary user. Furthermore, in addition to maximizing the overall spectrum utilization, a good spectrum-sharing scheme should also achieve fairness among dissimilar users. If multiple secondary users are allowed to
access the licensed spectrum, dynamically coordinating their access to alleviate mutual interference and avoid conflict with primary users should be carefully considered. Motivated by the preceding considerations, in this chapter we consider a primary-prioritized Markov approach for dynamic spectrum access. Specifically, we model the interactions between the primary users (legacy spectrum holders) and the secondary users (unlicensed users) as CTMCs, by which means we can capture the system's evolution dynamics, especially the effect of the primary user's activities on the secondary users. It has been shown in [65], [66] that, when unlicensed devices coexist with licensed devices in the same frequency and time simultaneously, the capacity achieved by unlicensed devices with reduced power is very low, while they still cause harmful interference with the licensed users. Therefore, in this chapter, we assume that, when primary users exist in some spectrum band, secondary users cannot operate in the same band simultaneously. Further, in order to coordinate secondary spectrum access in a fair and efficient manner, dynamic spectrum access under various criteria is considered on the basis of CTMC models. In the approach presented here, the spectrum access of different users is optimally coordinated through the modeling of secondary spectrum-access statistics to alleviate mutual interference. The benefits of the primary-prioritized Markov approach for dynamic spectrum access are numerous. First, the radio system's evolutionary behavior, including the primary user's activities, is thoroughly captured through CTMC modeling. Second, we consider various policies of spectrum access by employing various optimality criteria, among which we focus on the proportional-fair (PF) spectrum access approach to achieve the optimal tradeoff between spectrum utilization efficiency and fairness.
Third, the PF spectrum access approach can achieve better performance than the CSMA-based scheme, and can be generalized to spectrum sharing among multiple secondary users. The remainder of this chapter is organized as follows. The dynamic spectrum access system model is described in Section 3.2. The primary-prioritized Markov models are derived in Section 3.3, and dynamic spectrum access approaches based on these models are developed in Section 3.4. The simulation studies are provided in Section 3.5.
3.2 The system model

We consider dynamic spectrum access networks where multiple secondary users are allowed to access the temporarily unused licensed spectrum bands on an opportunistic basis, without conflicting or interfering with the primary spectrum holders' usage. Such scenarios can be envisioned in many applications. Considering the fact [114] that heavy spectrum utilization often takes place in unlicensed bands while licensed bands often experience low (e.g., TV bands) or medium (e.g., some cellular bands) utilization, IEEE 802.22 [196] considers reusing the fallow TV spectrum without causing any harmful interference to incumbents (e.g., the TV receivers). Moreover, with regard to more efficient utilization of some cellular bands, [317] considers sharing the spectrum
between a cellular communication system and wireless local-area network (WLAN) systems. In rural areas where there is little demand on the cellular communication system, the WLAN users can efficiently increase their data rates by sharing the spectrum. In order to take advantage of the temporarily unused spectrum holes in the licensed band, without loss of generality we consider a snapshot of the above spectrum access networks shown in Figure 3.1, where two secondary users and one primary user coexist, and the secondary users opportunistically utilize the spectrum holes in the licensed band. Note that the system diagram shown here serves only as an example model with which to gain more insight and the scenario with multiple secondary users will be studied in detail in the following section. The primary user, denoted by P, has a license to operate in the spectrum band. The offered traffic for primary user P is modeled with two random processes. The service request is modeled as a Poisson process with rate λP s−1 . The service duration (holding time) is negative-exponentially distributed with mean time 1/μP s, so the departure of user P’s traffic is another Poisson process with rate μP s−1 . The secondary users are denoted by A and B, and set S is defined as S = {A, B}. For each secondary user γ , where γ ∈ S, its service session is similarly characterized by two independent Poisson processes, with arrival rate λγ s−1 and departure rate μγ s−1 . They contend to access the spectrum when primary user P is not using the spectrum band. Since the primary user has a license to operate in the spectrum band, its access should not be affected by the operation of any other secondary user, and priority to access the spectrum is given to primary user P. We assume that the secondary users equipped with cognitive radios are capable of detecting the primary user’s activities, i.e., the appearance of the primary user in the spectrum band and its departure from the spectrum.
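The traffic model just described (Poisson session arrivals, exponentially distributed holding times) can be simulated directly; the rates used below are assumed for illustration, not taken from the text:

```python
import random

def simulate_user_traffic(lam, mu, horizon, seed=0):
    """Generate (arrival, departure) pairs for one user's service sessions:
    Poisson arrivals of rate lam (1/s), exponential holding times of mean 1/mu (s)."""
    rng = random.Random(seed)
    sessions, t = [], 0.0
    while True:
        t += rng.expovariate(lam)                 # exponential interarrival time
        if t > horizon:
            return sessions
        sessions.append((t, t + rng.expovariate(mu)))

# Assumed example rates (not from the text): arrivals at 0.1/s and mean
# holding time 1/0.2 = 5 s, simulated over a 10^4-second horizon.
sessions = simulate_user_traffic(lam=0.1, mu=0.2, horizon=1e4)
n = len(sessions)
mean_hold = sum(d - a for a, d in sessions) / n
print(n, round(mean_hold, 2))  # roughly 1000 sessions with mean holding near 5 s
```

Running one such generator per user (P, A, B) and merging the event times yields sample paths like the throughput-versus-time trace sketched in Figure 3.1.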
Figure 3.1 The system model (upper: system diagram; lower: throughput vs. time).
Furthermore, the secondary users' access is assumed to be controlled by a secondary management point so that they can distinguish whether the spectrum is occupied by the primary user or by secondary users. Therefore, when primary user P appears, the secondary users should adjust their transmission parameters, for instance reducing the transmit power or vacating the channels and trying to transfer their communications to other available bands. The interference-temperature model proposed by the FCC in [115] allows secondary users to transmit in licensed bands with carefully adjusted power, provided that the secondary users' transmission does not raise the interference temperature for that frequency band above the interference-temperature limit. Although remaining in the band with reduced power can provide better service continuity for the secondary users, the capacity they can achieve is very low [65], [66]. Therefore, in this chapter, we assume that, when primary user P appears, any secondary user should vacate the channel and the traffic currently being served is cut off. While primary user P is being served, any entry of the secondary users' traffic into the spectrum is denied until service of primary user P has finished. At the bottom of Figure 3.1, we show an example of system throughput versus time for dynamic spectrum access. First, user A accesses the spectrum band, followed by user B. During service of B, user A accesses the band again and shares the spectrum band with user B, which may result in less throughput to both user A and user B owing to their mutual interference. A while after service of user A has finished, primary user P accesses the band, and user B's service is interrupted. After user P has vacated the band, service of user B continues until its service duration ends. Afterwards, user A accesses the band, and its service is suspended when primary user P appears and resumed when service of P has finished, in the same way as for user B.
For any secondary user γ that operates in the spectrum band alone, its maximal data rate [79] can be represented by

r_1^γ = W log2(1 + pγ Gγγ / n0),    (3.1)

where W is the communication bandwidth, n0 is the power of the additive white Gaussian noise (AWGN), pγ is the transmission power for user γ, and Gγγ is the channel gain for user γ. The secondary users A and B are allowed to share the spectrum band. We assume that the transmitter of a secondary user can vary its data rate through a combination of adaptive modulation and coding, so the transmitter and receiver can employ the highest rate that permits reliable communication, given the signal-to-interference-plus-noise ratio (SINR). We assume that the secondary users use random Gaussian codebooks, so their transmitted signals can be treated as white Gaussian processes and the transmissions of other secondary users are treated as Gaussian noise. Then, the maximal rate of user γ when the secondary users share the spectrum can be represented by

r_2^γ = W log2(1 + pγ Gγγ / (n0 + Σ_{α≠γ} pα Gαγ)),    (3.2)

where α ≠ γ, α ∈ S, and Gαγ is the channel gain from user α's transmitter to user γ's receiver.
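Equations (3.1) and (3.2) can be evaluated directly; the bandwidth, powers, and gains below are assumed numbers for illustration only:

```python
from math import log2

def rate_alone(W, p, g_direct, n0):
    """Eq. (3.1): Shannon rate when user gamma occupies the band alone."""
    return W * log2(1 + p * g_direct / n0)

def rate_shared(W, p, g_direct, n0, interference):
    """Eq. (3.2): rate when other users' received powers sum to `interference`."""
    return W * log2(1 + p * g_direct / (n0 + interference))

# Assumed example numbers (not from the text): 1-MHz band, 100-mW transmit
# power, direct gain 1e-6, noise 1e-10 W, one interferer contributing 5e-8 W.
W, p, g, n0 = 1e6, 0.1, 1e-6, 1e-10
r1 = rate_alone(W, p, g, n0)
r2 = rate_shared(W, p, g, n0, interference=5e-8)
print(round(r1 / 1e6, 2), round(r2 / 1e6, 2))  # r2 < r1: interference cuts the rate
```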
3.3 Primary-prioritized Markov models

In this section, we derive primary-prioritized Markov models to capture the dynamics of spectrum access statistics for the primary user and the secondary users.
3.3.1 Primary-prioritized CTMC without queuing

3.3.1.1 CTMC without queuing

In dynamic spectrum access, where the secondary users opportunistically access the unused licensed spectrum, priority should be given to the primary user. That is, secondary users cannot operate in the same spectrum band as the primary user at the same time; when the primary user appears in the spectrum band, all secondary users in the same band should stop operating in the spectrum. Moreover, the arrivals and departures of different users' traffic are assumed to be independent Poisson processes. Therefore, we model the interactions between the secondary users and the primary user as a primary-prioritized CTMC. In the CTMC, when the secondary users contend to access the idle spectrum using CSMA, collisions occur only when their service requests arrive at exactly the same time. This case rarely happens for independent Poisson processes. Therefore, in the CTMC model we omit the collision state of the secondary users, and assume that their service durations always start from different time instances. If we assume that, when the primary user appears, there is no queuing of the interrupted service for the secondary users, then we can model the spectrum-access process as a five-state CTMC as shown in Figure 3.2. We denote this five-state Markov chain by "CTMC-5" for short, where state 0 means that no user operates in the spectrum, state γ means that user γ operates in the spectrum with γ ∈ {A, B, P}, and state 2 means that both user A and user B operate in the spectrum. Assume that at first the spectrum band is idle, i.e., CTMC-5 is in state 0. Secondary users contend to operate in the spectrum. Upon the first access attempt of some user, say
Figure 3.2 The rate diagram of CTMC with no queuing.
user A, CTMC-5 enters state A with transition rate λA s−1. If user A's service completes before any other user requests spectrum access, CTMC-5 then transits to state 0 with rate μA s−1. If user B's service request arrives before service of A has been completed, CTMC-5 transits to state 2 with rate λB s−1, where both secondary users share the spectrum. Once service of user B (or A) has been completed, CTMC-5 transits from state 2 to state A (or B), with rate μB (or μA) s−1. However, the primary user P may, once in a while, appear during the service of the secondary users, i.e., when CTMC-5 is in state A, B, or 2. At that time, the secondary users' traffic is dropped to avoid conflict with the primary user, and CTMC-5 transits to state P with rate λP s−1. While the primary user is operating in the spectrum band, no secondary user is given access to the spectrum. CTMC-5 transits to state 0 with rate μP s−1 only when service of P has been completed. The "flow-balance" (the rate at which transitions out of state si take place equals the rate at which transitions into state si take place) and normalization [242] equations governing the above system are given by

μA ΠA + μP ΠP + μB ΠB = (λA + λB + λP) Π0,    (3.3)
λA Π0 + μB Π2 = (μA + λP + λB) ΠA,    (3.4)
λP (Π0 + ΠA + Π2 + ΠB) = μP ΠP,    (3.5)
λB Π0 + μA Π2 = (μB + λP + λA) ΠB,    (3.6)
λB ΠA + λA ΠB = (μB + λP + μA) Π2,    (3.7)
Π0 + ΠA + ΠB + ΠP + Π2 = 1,    (3.8)
where Π_si represents the stationary probability of the system being in state si, si ∈ S = {0, A, B, P, 2}. The solutions to the above equations, i.e., the probabilities that the spectrum is occupied by the primary user P or the secondary users, are given by

ΠP = λP/(λP + μP),    (3.9)
ΠA = C1 λA [λB μB + (λP + μB)(λA + λP + μA + μB)],    (3.10)
ΠB = C1 λB [λA μA + (λP + μA)(λB + λP + μA + μB)],    (3.11)
Π2 = C1 λA λB (λA + λB + 2λP + μA + μB),    (3.12)
where, for simplicity, the coefficient C1 is defined as

C1 = (1 − ΠP)[(λA + μA + λP)(λB + μB + λP)(λA + μA + λB + μB + λP)]^{−1}.    (3.13)

One of the most important goals in spectrum sharing is efficient spectrum utilization, i.e., high throughput achieved by each secondary user through successful acquisition of a spectrum band. From a statistical point of view, the secondary users want to maximize their average throughput. Given the solutions of the steady-state probabilities, we
know that Π_si is the stationary probability that the system is in state si, so it can be thought of as the expected long-run fraction of the time that the CTMC spends in state si [242]:

Π_si = lim_{T→∞} (1/T) ∫_0^T Pr{S(t) = si} dt,    (3.14)

where S(t) is the state of the CTMC at time t. If we define

U_γ = lim_{T→∞} E[(1/T) ∫_0^T R_γ(S(t)) dt]    (3.15)
as the long-run expected average throughput for user γ, where R_γ(S(t)) is the throughput of user γ achieved in state S(t), we have

U_γ = lim_{T→∞} (1/T) ∫_0^T E(R_γ(S(t))) dt
    = lim_{T→∞} (1/T) ∫_0^T Σ_{si∈S} R_γ(si) Pr{S(t) = si} dt
    = Σ_{si∈S} R_γ(si) lim_{T→∞} (1/T) ∫_0^T Pr{S(t) = si} dt
    = Σ_{si∈S} R_γ(si) Π_si.    (3.16)
The interchanges of limits, integrals, sums, etc. are permitted as long as Σ_{si∈S} |R_γ(si)| Π_si < ∞. Thus, from CTMC-5, we can express the total average throughput for user γ as

U_γ = Πγ r_1^γ + Π2 r_2^γ,    (3.17)

where Πγ and Π2 are as solved in (3.10)–(3.12), and r_1^γ and r_2^γ are defined in (3.1) and (3.2), respectively. The first term on the right-hand side of (3.17) represents the throughput when user γ occupies the spectrum alone, and the second term represents the throughput when the two secondary users share the spectrum. Therefore, by using CTMC-5, we can not only capture the dynamic utilization of the unused licensed spectrum by the secondary users without conflicting with the primary user, but also study their stationary behaviors and quantify their spectrum utilization from a statistical point of view.
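As a numerical sanity check (all rates and throughputs below are assumed example values, not from the text), the closed forms (3.9)–(3.13) can be evaluated, verified against the flow-balance equations (3.3)–(3.8), and plugged into the throughput expression (3.17):

```python
# Assumed example rates (1/s), not from the text.
lA, lB, lP = 1.0, 2.0, 0.5   # arrival rates lambda_A, lambda_B, lambda_P
mA, mB, mP = 3.0, 4.0, 1.0   # departure rates mu_A, mu_B, mu_P

# Closed-form stationary probabilities (3.9)-(3.13).
PiP = lP / (lP + mP)
C1 = (1 - PiP) / ((lA + mA + lP) * (lB + mB + lP) * (lA + mA + lB + mB + lP))
PiA = C1 * lA * (lB * mB + (lP + mB) * (lA + lP + mA + mB))
PiB = C1 * lB * (lA * mA + (lP + mA) * (lB + lP + mA + mB))
Pi2 = C1 * lA * lB * (lA + lB + 2 * lP + mA + mB)
Pi0 = 1 - PiP - PiA - PiB - Pi2

# Sanity checks: the flow-balance equations (3.4) and (3.7) must hold.
assert abs(lA * Pi0 + mB * Pi2 - (mA + lP + lB) * PiA) < 1e-12
assert abs(lB * PiA + lA * PiB - (mB + lP + mA) * Pi2) < 1e-12

# Average throughput (3.17) for user A, with assumed rates r1, r2 in bps.
r1A, r2A = 10e6, 1.6e6
UA = PiA * r1A + Pi2 * r2A
print(round(PiA, 4), round(Pi2, 4), round(UA / 1e6, 3))
```

Solving the five balance equations numerically gives the same probabilities, which is a convenient cross-check when extending the chain to more states.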
3.3.1.2 Multiuser CTMC without queuing

The CTMC introduced above can also be generalized to model the scenario with more than two secondary users. Suppose the set of N secondary users is denoted by S = {1, . . ., N}; then the state space A consists of the 2^N + 1 combinations of the statuses of the primary user P and the secondary users:
Markov models for dynamic spectrum allocation
(ΦP, ΦS) ∈ A = {(1, [0, . . . , 0])} ∪ {(0, φS) : φS = [nN, . . . , n1] ∈ {0, 1}^N}, (3.18)
where the state (1, [0, . . . , 0]) represents the case when the primary user is in service in the spectrum band alone, and {(0, φS)} represents all 2^N states for which the primary user P is not in service and from zero up to N secondary users are in service.

For this generalized Markov model, the rate diagram can be drawn as an N-dimensional hypercube. Each vertex of the hypercube represents a state in {(0, φS)}; each edge connecting two vertices is bi-directional, and it represents the transition in which some secondary user begins or completes its service. The center of the hypercube represents the state (1, [0, . . . , 0]); a straight line from each vertex to the center represents the transition when the primary user P begins its service, and another line from the center to the state (0, [0, . . . , 0]) represents the transition when user P completes its service. An example rate diagram for a three-secondary-user CTMC is shown in Figure 3.3.

Figure 3.3 The rate diagram of the three-user CTMC without queuing.

The stationary probabilities can be obtained by solving the corresponding linear equations as follows.

• Notation. Let Si denote state (0, [nN, . . . , n1]), where nk ∈ {0, 1}, k = 1, . . . , N, and i = Σ_{j=1}^N 2^{j−1} nj; let S_{2^N} denote state (1, [0, . . . , 0]); and let qij = q{Si → Sj} denote the transition rate from state Si to Sj.
• Construct the generator matrix Q = [qij]:
(i) for Si = (0, [nN, . . . , nj, . . . , n1]), where i = 0, . . . , 2^N − 1 and j = 1, . . . , N, q{(0, [nN, . . . , nj, . . . , n1]) → (0, [nN, . . . , 1 − nj, . . . , n1])} = μj (if nj = 1) or λj (if nj = 0); q{Si → S_{2^N}} = λP; qii = −Σ_{j≠i} qij;
(ii) q{S_{2^N} → S0} = μP, q{S_{2^N} → S_{2^N}} = −μP.

Using this construction, the generator matrix corresponding to the three-user CTMC shown in Figure 3.3 can be written (we omit the diagonal elements for simplicity) as
    ⎡  ·    λ1   λ2   0    λ3   0    0    0    λP ⎤
    ⎢  μ1   ·    0    λ2   0    λ3   0    0    λP ⎥
    ⎢  μ2   0    ·    λ1   0    0    λ3   0    λP ⎥
    ⎢  0    μ2   μ1   ·    0    0    0    λ3   λP ⎥
Q = ⎢  μ3   0    0    0    ·    λ1   λ2   0    λP ⎥ ,  (3.19)
    ⎢  0    μ3   0    0    μ1   ·    0    λ2   λP ⎥
    ⎢  0    0    μ3   0    μ2   0    ·    λ1   λP ⎥
    ⎢  0    0    0    μ3   0    μ2   μ1   ·    λP ⎥
    ⎣  μP   0    0    0    0    0    0    0    ·  ⎦

where the rows and columns are ordered S0, . . . , S8 and "·" marks the omitted diagonal elements.
• Solve the stationary probability vector Π = [Π_{S0}, . . . , Π_{S_{2^N − 1}}, Π_{S_{2^N}}] from

Qaug Π^T = b, where Qaug = [Q^T; 1_{1×(2^N + 1)}] and b = [0_{(2^N + 1)×1}; 1]. (3.20)
For each secondary user γ, γ ∈ S = {1, . . . , N}, its average throughput consists of 2^{N−1} components, each of which represents the average throughput when user γ, together with from zero up to all of the other N − 1 secondary users, is in service. Since more secondary users contend for spectrum access, the contention in the generalized Markov model is heavier than in CTMC-5. As a result, each secondary user obtains a smaller share of spectrum access on average. Moreover, the interference also increases as more secondary users are introduced. Therefore, as the number of secondary users increases, the average throughput of each of them is reduced.
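The bulleted construction and the augmented system (3.20) translate directly into code. The sketch below builds Q for arbitrary N by flipping one user's bit per transition, then solves for Π; in place of stacking Q^T with a row of ones exactly as in (3.20), it equivalently replaces one redundant balance equation by the normalization constraint. The rate values are illustrative.

```python
def solve_linear(A, b):
    """Gauss-Jordan elimination with partial pivoting (small dense systems)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
    return [M[i][n] / M[i][i] for i in range(n)]

def build_generator(lams, mus, lam_P, mu_P):
    """Generator matrix of the N-user CTMC without queuing.

    State S_i, i < 2^N, is (0, [n_N,...,n_1]) with i = sum_j 2^(j-1) n_j;
    state S_{2^N} is (1, [0,...,0]), the primary user in service.
    """
    N = len(lams)
    dim = 2 ** N + 1
    Q = [[0.0] * dim for _ in range(dim)]
    for i in range(2 ** N):
        for j in range(N):
            k = i ^ (1 << j)                        # flip user (j+1)'s bit
            Q[i][k] = mus[j] if (i >> j) & 1 else lams[j]
        Q[i][2 ** N] = lam_P                        # primary user begins service
    Q[2 ** N][0] = mu_P                             # primary user completes service
    for i in range(dim):
        Q[i][i] = -sum(Q[i])
    return Q

lam_P, mu_P = 85.0, 100.0
Q = build_generator([80.0, 85.0, 90.0], [100.0] * 3, lam_P, mu_P)  # N = 3
n = len(Q)
M = [[Q[r][c] for r in range(n)] for c in range(n)]  # Q transposed
M[-1] = [1.0] * n                                    # normalization row
pi = solve_linear(M, [0.0] * (n - 1) + [1.0])
```

For N = 3 this reproduces the 9×9 structure of (3.19), and the stationary probability of the primary state again equals λP/(λP + μP), since every non-primary state jumps to it at the same rate λP.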
3.3.2 Primary-prioritized CTMC with queuing

3.3.2.1 CTMC with queuing

In CTMC-5, presented in Section 3.3.1.1, the service of the secondary users is forced to stop and be dropped when the primary user P appears in the spectrum band. After service of the primary user P has been completed, CTMC-5 will transit to the idle state. However, there may be some time interval wasted between when the system is in the idle state and when the next secondary user accesses the spectrum. In order to further increase spectrum utilization, queuing of the secondary users' service requests during the primary user's presence is considered. More specifically, when the spectrum is being occupied by secondary users, upon the appearance of the primary user, the secondary users should stop transmission, buffer their interrupted service sessions, and continue scanning the licensed band until it becomes available again. Also, if the primary user begins to operate in the previously idle spectrum, new service requests of
secondary users are also queued. In this chapter, we assume that there is one waiting room per secondary user, i.e., each user can buffer only a single service request; if a service request already exists in the queue, the secondary user directs subsequent service requests to other available licensed bands to avoid potential delay, a scenario that is beyond the scope of this chapter. By considering the above factors, we model spectrum access with queuing as an eight-state CTMC, denoted "CTMC-8." The rate diagram of CTMC-8 is shown in Figure 3.4. Compared with CTMC-5 and its dynamics, CTMC-8 introduces three additional states: (P, Aw), (P, Bw), and (P, (AB)w). State (P, γw) means that the primary user P is in service and secondary user γ is waiting, and state (P, (AB)w) means that P is in service and both secondary users are waiting. The transitions in CTMC-8 occur as follows. When the spectrum band is occupied by secondary user A, if A detects that the primary user P needs to acquire the spectrum band, it buffers the unfinished service session and keeps sensing the licensed band until the end of the primary user's service session, and CTMC-8 transits from state (0,A) to state (P, Aw) with rate λP s−1. If the primary user P finishes its service before B's access, CTMC-8 transits from state (P, Aw) back to state (0,A) with rate μP s−1. In contrast, if secondary user B requests access to the licensed spectrum before the primary user P has completed its service, then B also buffers its service session, and CTMC-8 transits to state (P, (AB)w) with rate λB s−1. In state (P, (AB)w), both A and B keep sensing the spectrum. Once P has vacated the channel, CTMC-8 transits to state (0,AB) with rate μP s−1, where A and B share the spectrum band.
Also, when CTMC-8 is in state (P,0), if secondary users attempt to access the spectrum, they will keep sensing the licensed band until the primary user vacates, and CTMC-8 transits to state (P, Aw) or state (P, Bw), with rate λA s−1 or λB s−1, respectively. The equations governing the above system and the corresponding solutions can be obtained in a similar way to that given in Section 3.3.1.1.
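The transition structure just described (and drawn in Figure 3.4) can be checked numerically. The sketch below encodes CTMC-8's twenty directed edges, solves the balance equations, and recovers the long-run fraction of time the primary user is in service; the rate values are illustrative placeholders.

```python
def solve_linear(A, b):
    """Gauss-Jordan elimination with partial pivoting (small dense systems)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
    return [M[i][n] / M[i][i] for i in range(n)]

lam_A, mu_A = 90.0, 100.0
lam_B, mu_B = 85.0, 100.0
lam_P, mu_P = 85.0, 100.0

# states of CTMC-8: (0,0), (0,A), (0,B), (0,AB), (P,0), (P,Aw), (P,Bw), (P,(AB)w)
I0, A, B, AB, P0, PAw, PBw, PABw = range(8)
edges = [
    (I0, A, lam_A), (I0, B, lam_B), (I0, P0, lam_P),
    (A, I0, mu_A),  (A, AB, lam_B),  (A, PAw, lam_P),    # A buffers its session
    (B, I0, mu_B),  (B, AB, lam_A),  (B, PBw, lam_P),
    (AB, B, mu_A),  (AB, A, mu_B),   (AB, PABw, lam_P),  # both users wait
    (P0, I0, mu_P), (P0, PAw, lam_A), (P0, PBw, lam_B),  # requests queue under P
    (PAw, A, mu_P), (PAw, PABw, lam_B),
    (PBw, B, mu_P), (PBw, PABw, lam_A),
    (PABw, AB, mu_P),                                    # both resume, sharing the band
]
Q = [[0.0] * 8 for _ in range(8)]
for s, t, rate in edges:
    Q[s][t] = rate
for s in range(8):
    Q[s][s] = -sum(Q[s])

M = [[Q[r][c] for r in range(8)] for c in range(8)]
M[-1] = [1.0] * 8
pi = solve_linear(M, [0.0] * 7 + [1.0])
primary_busy = pi[P0] + pi[PAw] + pi[PBw] + pi[PABw]
```

As in CTMC-5, the total stationary probability of the primary-in-service states equals λP/(λP + μP), because every non-primary state enters that set at rate λP and every state in it leaves at rate μP.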
Figure 3.4 The rate diagram of CTMC with queuing.
3.3.2.2 Multiuser CTMC with queuing

The CTMC with queuing can also be generalized to model the scenario with more than two secondary users. For the Markov chain with a set S = {1, . . . , N} of secondary users, the state space B consists of all 2^{N+1} possible combinations of the status of the primary user P and the secondary users:

(ΦP, ΦS) ∈ B = {(1, ψS^w)} ∪ {(0, ψS)}, ψS = [nN, . . . , n1] ∈ {0, 1}^N, (3.21)

where {(1, ψS^w)} represents all 2^N states in which the primary user is in service and from zero up to N secondary users are waiting, and {(0, ψS)} represents all 2^N states in which the primary user P is not in service and some of the N secondary users are in service. The rate diagram for this model can be drawn in a way similar to that given in Section 3.3.1.2, and the stationary probabilities can be obtained as follows.

• Notation. Let Si denote state (0, [nN, . . . , n1]), and Si^w denote state (1, [nN, . . . , n1]w).
• Construct the generator matrix Q = [qij]:
(i) for Si = (0, [nN, . . . , nj, . . . , n1]), where i = 0, . . . , 2^N − 1 and j = 1, . . . , N, q{(0, [nN, . . . , nj, . . . , n1]) → (0, [nN, . . . , 1 − nj, . . . , n1])} = μj (if nj = 1) or λj (if nj = 0); q{(1, [nN, . . . , nj, . . . , n1]w) → (1, [nN, . . . , 1 − nj, . . . , n1]w)} = λj (if nj = 0);
(ii) q{Si → Si^w} = λP; q{Si^w → Si} = μP; qii = −Σ_{j≠i} qij.
• Solve an equation array similar to (3.20).

As more secondary users contend for the spectrum, in addition to increased interference, more waiting time is also introduced; therefore, the average throughput of each secondary user is reduced.
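As a sketch of this construction, the generator for the queuing model doubles the hypercube: states i < 2^N encode (0, ψS) and states 2^N + i encode (1, ψS^w). While the primary user is in service, only arrivals (rates λj, requests joining the queue) change ψS; departures do not, since waiting users are not being served. The rate values in the structural checks below are illustrative.

```python
def build_generator_queue(lams, mus, lam_P, mu_P):
    """Generator of the N-user CTMC with queuing (2^(N+1) states).

    State i < 2^N encodes (0, psi); state 2^N + i encodes (1, psi_w).
    """
    N = len(lams)
    half = 2 ** N
    Q = [[0.0] * (2 * half) for _ in range(2 * half)]
    for i in range(half):
        for j in range(N):
            k = i ^ (1 << j)                         # flip user (j+1)'s bit
            if (i >> j) & 1:
                Q[i][k] = mus[j]                     # user completes service
            else:
                Q[i][k] = lams[j]                    # user begins service
                Q[half + i][half + k] = lams[j]      # request queued while P serves
        Q[i][half + i] = lam_P                       # P arrives: sessions buffered
        Q[half + i][i] = mu_P                        # P departs: buffered sessions resume
    for i in range(2 * half):
        Q[i][i] = -sum(Q[i])
    return Q

# N = 2 reproduces the state space of CTMC-8 (eight states)
Q = build_generator_queue([80.0, 85.0], [100.0, 100.0], 85.0, 100.0)
```

The stationary probabilities then follow from the same augmented linear system (3.20) used for the model without queuing.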
3.4 Primary-prioritized dynamic spectrum access

In this section, we first analyze the effect of the secondary users' behavior on system performance. Then, we study primary-prioritized dynamic spectrum access under various optimality criteria and compare it with CSMA-based random-access approaches.

In order to develop primary-prioritized dynamic spectrum access, it is important to first analyze the behavior of the secondary users. Since the secondary users contend for the spectrum, if they access it in a greedy manner, such that all of their injected traffic is admitted, the Markov chain is more likely to be in a state in which the spectrum is shared by more than one user. Hence, without control of very high arrival rates, the secondary users may suffer a throughput degradation due to interference. On the other hand, if the secondary users reduce their arrival rates too much in order to avoid interference, the average throughput may be unnecessarily low. Therefore, secondary-user spectrum access should be carefully controlled.
Figure 3.5 A modified CTMC with access control (no queuing).
In the dynamic spectrum access scheme, we introduce state-dependent spectrum-access probabilities for user A and user B; the resulting random-access process can be approximated by slightly modifying the original CTMCs. Without loss of generality, we take CTMC-5 as an example; the modified Markov chain is shown in Figure 3.5. It can be seen from Figure 3.5 that, when one secondary user, e.g., user B, already occupies the spectrum and the system is in state B, user A's spectrum-access requests are admitted with probability aA,1, where 0 ≤ aA,1 ≤ 1. Since on average one out of every 1/aA,1 access requests by user A is allowed while user B is in service, the chance of coexistence of the secondary users, and hence the mutual interference, can be reduced. Owing to the decomposition property of a Poisson random process [242], if each access request of user A has probability aA,1 of being admitted, then the sequence of access requests actually admitted is also a Poisson process, with parameter aA,1 λA s−1. Hence, the transition rate from state B to state 2 now becomes aA,1 λA s−1. It is also seen that user A's access requests are admitted with probability aA,2 when the spectrum is idle (i.e., the transition from state 0 to state A). However, there is no interference in state A, so, in order to obtain a high throughput, we assume that, when the spectrum is sensed to be idle, user A is allowed to access the spectrum with probability one, i.e., aA,2 = 1. In addition, it is expected that, if the mutual interference between the secondary users is high, aA,1 should be close to 0; if there is little mutual interference, aA,1 should be close to 1. User B's spectrum access is controlled in a similar way, because the CTMC is symmetric. Denote the access probabilities of user A and user B by the vectors aA = [aA,1, aA,2] and aB = [aB,1, aB,2], respectively.
Then, the optimization goal is to determine aA and aB such that the system performance is maximized, i.e.,

{aγ} = arg max_{0 ≤ aγ ≤ 1} U({aγ}), (3.22)

where γ ∈ {A, B}. Since a good spectrum-sharing scheme should not only efficiently utilize the spectrum resources but also provide fairness among different users, we first consider maximizing
the average throughput on the basis of the PF criterion [231] [169]. Thus, in (3.22), U(aA, aB) can be written as

UPF(aA, aB) = ∏_{γ∈S} Uγ(aA, aB). (3.23)

We also consider other criteria for comparison with PF, namely the maximal-throughput criterion

U(aA, aB) = Σ_{γ∈S} Uγ(aA, aB), (3.24)

and the max–min fairness criterion

U(aA, aB) = min_{γ∈S} Uγ(aA, aB). (3.25)
For the maximal-throughput optimization, the overall system throughput is maximized, but the users with the worst channel conditions may be starved of access. For the max–min fairness optimization, the performance of the secondary user with the worst channel condition is optimized, resulting in inferior overall system performance. In this chapter, we will demonstrate that PF dynamic spectrum access is preferred because it ensures more fairness than the maximal-throughput optimization while achieving better performance than the max–min fairness optimization. Specifically, the definition of PF is expressed as follows.

Definition. The throughput distribution is proportionally fair if any change in the distribution of throughput pairs results in the sum of the proportional changes of the throughput being non-positive [231], i.e.,

Σ_{γ∈S} [Uγ(aA, aB) − Uγ*(aA, aB)] / Uγ*(aA, aB) ≤ 0, (3.26)

where Uγ*(aA, aB) is the proportionally fair throughput distribution, and Uγ(aA, aB) is any other feasible throughput distribution for user γ.

It can be proved that the optimal solution Uγ*(aA, aB) defined in (3.26) can be obtained by solving (3.22) with U(aA, aB) defined in (3.23). The proof is sketched as follows. Since the natural logarithm (ln) is monotonic, maximizing the PF-based utility defined in (3.23) is equivalent to maximizing

Σ_{γ∈S} ln Uγ(aA, aB). (3.27)

Define Ũγ = ln Uγ; then the gradient of Ũγ at the PF utility Uγ* is

∂Ũγ/∂Uγ |_{Uγ*} = 1/Uγ*.

Since the PF utility optimizes (3.27), for a small feasible perturbation from the PF utility we can omit the higher-order terms in the Taylor series, apply a first-order Taylor approximation, and obtain the following condition:
Σ_γ (∂Ũγ/∂Uγ)|_{Uγ*} (Uγ − Uγ*) = Σ_γ (Uγ − Uγ*)/Uγ* ≤ 0. (3.28)
Since the feasible region for Uγ is a convex set and the logarithm function in (3.27) is strictly concave, (3.28) holds for any point deviating from the PF utility. Therefore, the definitions of the PF criterion in (3.23) and (3.26) are equivalent.

As mentioned earlier in this section, we assume aA,2 = aB,2 = 1, so the two access probabilities to be optimized are aA,1 and aB,1. Denoting them by aA and aB for simplicity, we can write Uγ as

Uγ(aA, aB) = Πγ(aA, aB) r1^γ + Π2(aA, aB) r2^γ, (3.29)

where

ΠA(aA, aB) = C1 λA[(λP + μB)(aA λA + λP + μA + μB) + aA λB μB],
ΠB(aA, aB) = C1 λB[(λP + μA)(aB λB + λP + μA + μB) + aB λA μA],
Π2(aA, aB) = C1 λA λB[aA(aB(λA + λB) + λP + μA) + aB(λP + μB)], (3.30)

with

C1 = (1 − ΠP){aA λA[aB λB(λA + λB + λP) + (λB + λP)(λP + μA) + (λB + λP + μA)μB + λA(λP + μB)] + (λP + μA + μB)[λA(λP + μB) + (λP + μA)(λB + λP + μB)] + aB λB[(λP + μA)(λB + λP + μB) + λA(λP + μA + μB)]}^{−1}. (3.31)

When 0 ≤ aγ ≤ 1, we have ΠA(aA, aB) ≥ 0, ΠB(aA, aB) ≥ 0, Π2(aA, aB) ≥ 0, and Uγ(aA, aB) ≥ 0. On taking the derivative of Uγ(aA, aB) with respect to aA, we can show that

∂UA(aA, aB)/∂aA > 0,   ∂UB(aA, aB)/∂aA < 0. (3.32)

So, when secondary user A is given more opportunity to access the frequency band, i.e., when aA increases, UA(aA, aB) grows while UB(aA, aB) shrinks, indicating that there is a tradeoff involved in choosing the optimal aA that maximizes UPF(aA, aB) = UA(aA, aB)UB(aA, aB). However, there are many variables in Uγ(aA, aB) and hence in the objective function UPF(aA, aB). In addition, the utility of each secondary user Uγ is a complicated function of the parameters {λγ, μγ, aγ} and the data rates r1^γ, r2^γ. Therefore, it is analytically difficult to establish concavity for arbitrary parameters. Nevertheless, given a specific set of parameters {λγ, μγ} and rates r1^γ, r2^γ, we can substitute their values into (3.30) and determine the concavity of UPF(aA, aB) by examining the Hessian matrix ∇²UPF(aA, aB) for 0 ≤ aA, aB ≤ 1. When the two eigenvalues of ∇²UPF(aA, aB) are not greater than zero, i.e., the Hessian matrix is negative semi-definite, UPF(aA, aB) is concave with respect to aA and aB, and the optimal access probabilities can be expressed as
aγ,i^{opt} = min{max{aγ,i*, 0}, 1}, (3.33)

where aγ,i* is the solution of the equations

∂UPF(aA, aB)/∂aγ,i = 0, ∀γ, i ∈ S. (3.34)

Table 3.1. Primary-prioritized dynamic spectrum access
1. Initially, the primary user P is operating in the spectrum band.
2. A secondary access point obtains the optimal access probabilities defined in (3.33) for the secondary users (other optimality criteria can also be implemented).
3. Once the primary user P is sensed to have completed its service, the secondary users start to access the spectrum band with the probabilities obtained in Step 2, depending on the various states.
4. When the primary user P reappears in the band, secondary users currently operating in the band vacate it.
5. If the secondary users still have uncompleted service, go back to Step 3; if the statistics of the secondary users' services or their locations change, go to Step 2.
If UPF(aA, aB) is not concave, we can check the points on the boundary of the feasible region of (aA, aB). For example, for some values of λγ and μγ, if r1^γ ≫ r2^γ, indicating heavy mutual interference, the function UPF(aA, aB) might not be concave; however, the optimal solution is then aγ = 0, to avoid interference. Another instance in which UPF(aA, aB) is not concave occurs when λγ ≪ μγ, and the optimal solution is aγ = 1.

We assume that there exists a secondary base station (BS) that can control the medium access of all the secondary users. The secondary users send periodic reports to the BS informing it about their service statistics and data rates. Using the information gathered from all the secondary users, the BS evaluates the spectrum utilization, computes the optimal access probabilities for the different states (i.e., when different sets of secondary users are in service), and sends the access probabilities to the secondary users. The primary-prioritized Markov approach to dynamic spectrum access formulated on the basis of the above discussion is summarized in Table 3.1.

Primary-prioritized dynamic spectrum access shares some characteristics with conventional medium-access-control (MAC) protocols, since they all target appropriate coordination of different users' access to the medium. For instance, IEEE 802.11 [198] employs a CSMA/CA mechanism. If the medium is sensed to be idle, a user transmits its packet; if the medium is sensed to be busy, the user may reschedule the retransmission of the packet according to some random back-off time distribution. These kinds of protocol are effective when the medium is not heavily loaded, since they allow users to transmit with minimum delay. However, under a heavy traffic load, there is always a chance that users' attempts conflict with each other. If the conflicting users keep waiting for an idle medium, their packets suffer significant delay and may expire.
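A numerical version of Step 2 in Table 3.1 can be sketched as follows: build the modified CTMC-5 of Figure 3.5 (with aA,2 = aB,2 = 1, so only aA,1 and aB,1 enter), solve it for the stationary probabilities rather than using the closed forms (3.30)–(3.31), and grid-search the PF objective UA·UB over the unit square instead of solving (3.34) analytically. All rates, including the rates standing in for r1^γ and r2^γ, are illustrative placeholders.

```python
def solve_linear(A, b):
    """Gauss-Jordan elimination with partial pivoting (small dense systems)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
    return [M[i][n] / M[i][i] for i in range(n)]

LAM_A, MU_A = 90.0, 100.0
LAM_B, MU_B = 85.0, 100.0
LAM_P, MU_P = 85.0, 100.0
R1, R2 = 1.2, 0.5                 # hypothetical rates in place of (3.1)-(3.2), Mbps

def throughputs(a_A, a_B):
    """U_A, U_B of the modified CTMC-5 with access probabilities
    a_A = a_{A,1}, a_B = a_{B,1} (admission while the other user is in service)."""
    IDLE, A, B, AB, P = range(5)
    Q = [[0.0] * 5 for _ in range(5)]
    for s, t, rate in [
        (IDLE, A, LAM_A), (IDLE, B, LAM_B), (IDLE, P, LAM_P),
        (A, IDLE, MU_A), (A, AB, a_B * LAM_B), (A, P, LAM_P),  # B thinned to a_B*lam_B
        (B, IDLE, MU_B), (B, AB, a_A * LAM_A), (B, P, LAM_P),  # A thinned to a_A*lam_A
        (AB, B, MU_A), (AB, A, MU_B), (AB, P, LAM_P),
        (P, IDLE, MU_P),
    ]:
        Q[s][t] = rate
    for s in range(5):
        Q[s][s] = -sum(Q[s])
    M = [[Q[r][c] for r in range(5)] for c in range(5)]
    M[4] = [1.0] * 5
    pi = solve_linear(M, [0.0, 0.0, 0.0, 0.0, 1.0])
    return pi[A] * R1 + pi[AB] * R2, pi[B] * R1 + pi[AB] * R2  # eq. (3.17)

def pf(ab):
    u_a, u_b = throughputs(*ab)
    return u_a * u_b                # PF objective (3.23) for two users

grid = [i / 20.0 for i in range(21)]
best = max(((a, b) for a in grid for b in grid), key=pf)
```

A finer grid, or a gradient-based solver on (3.34) when the Hessian check confirms concavity, would refine the answer; the grid search simply makes no concavity assumption.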
In primary-prioritized dynamic spectrum access, different secondary users are allowed to share the spectrum band simultaneously. This increases spectrum utilization for the following reasons. First, for independent Poisson processes, the service durations of different secondary users are generally not the same. For instance, in CTMC-5, even though user B begins operating in the spectrum band right after user A, it is possible that user A completes its service much earlier than user B. After user B has been admitted to the spectrum band, the two secondary users share the spectrum only for a very short time; once A has finished its service, the Markov chain transits to the state in which B operates in the spectrum alone and no interference exists. Using CSMA protocols, however, user B is forced to retransmit its packet after a random back-off time, which might not be short. Therefore, using the approaches presented here, the spectrum can be utilized more efficiently. Furthermore, in the schemes presented here, optimal access probabilities are employed to carefully control the coexistence of the secondary users; in this way, the interference is maintained at a low level.

Also, in a mobile network, the radio-spectrum environment is dynamic. With global optimization approaches specific to a fixed environment, for instance conventional power control to manage the mutual interference among a fixed number of secondary users, after each change in the number of contending secondary users the network needs to completely re-optimize the power allocation for all users. This results in high complexity and much overhead, especially when there are frequent service requests and the service durations are short.
In the approach considered in this chapter, controlling the access probabilities for secondary users means that there is no need to perform delicate power control to manage the interference, and computational complexity is reduced while the average throughput is maximized in the long run. In order to achieve optimal dynamic spectrum access, a certain overhead is needed. More specifically, the overhead mainly comes from controlling and sensing the access of primary users. To optimally coordinate the access of the secondary users, necessary measurements need to be taken, such as the throughput and arrival/departure rates for different secondary users. On the other hand, detecting a primary user’s presence relies mainly on observations by the secondary users and the necessary spectral analysis.
3.5 Simulation results and analysis

In this section, we first compare the performance of CTMC-8 implementations with various optimization goals (maximal throughput, max–min, and PF). Then we compare the performance of CTMC-8, CTMC-5, and nonpersistent-CSMA-based random access. Finally, we show the throughput gain of spectrum sharing among more than two secondary users relative to the case without access control. The parameters in the simulations are chosen as follows. We set the bandwidth of the licensed spectrum to W = 200 kHz, the transmission power of each secondary user to pγ = 2 mW, the noise power to n0 = 10^{−15} W, and the propagation-loss exponent to 3.6. The departure rates μA, μB, and μP are set to 100 s−1. According to [266], in the spectrum band allocated to cellular phone and Specialized Mobile
Radio (SMR), the fraction of time that the spectrum is being used by primary users in an urban environment has been measured to be approximately 45%. Thus, when μP is 100 s−1 , we set the arrival rate of the primary user λP = 85 s−1 . The arrival rate of secondary user B is λB = 85 s−1 , and we vary λA from 70 to 100 s−1 . In the simulation results, we use “Max-Thr” to denote the maximal-throughput criterion and “Max-Min” to denote the max–min fairness criterion.
3.5.1 CTMC-8 for the symmetric-interference case

In the first set of simulations, we test the case in which the two secondary users experience symmetric interference. The transmitter of user A is at (0 m, 0 m), and its receiver is at (200 m, 0 m). The transmitter of user B is at (200 m, 460 m), and its receiver is at (0 m, 460 m). From their symmetric locations, we know from (3.1) and (3.2) that r1^B = r1^A > r2^B = r2^A. In Figure 3.6, we show the optimal access probability versus λA for each secondary user when the other secondary user is transmitting, i.e., the access probability associated with the transition from state (0, γ) to (0,AB) in CTMC-8 (see Figure 3.4). Since CTMC-8 is symmetric for the two users, when λA < λB = 85 s−1, user A will have a smaller time share than user B if there is no access-probability control. Furthermore, because r1^B = r1^A > r2^B = r2^A, from the definition of the average throughput in (3.17), user A will experience a lower average throughput than user B. In order to provide more fairness, the PF and max–min optimizations assign user B a zero access probability and user A a higher access probability than user B's when λA < λB = 85 s−1. With increasing λA, the difference between the
Figure 3.6 Access probability vs. λA (symmetric interference, λB = 85 s−1).
two users' time shares becomes smaller, so the access probability of user A decreases and becomes equal to B's access probability when λA = λB. When λA > λB, user B is assigned a higher access probability to compensate for its smaller time share, while user A's access requests are denied. However, when λA > λB = 85 s−1, λA is much higher and the probability of state A is also higher, so, in order to reduce the mutual interference, the growth of B's access probability is not symmetric to the decrease of A's. Owing to the mutual interference, the maximal-throughput optimization assigns zero access probability to both users when the other user is in service.

In Figure 3.7, we show the throughput Uγ for each user. Max–min fairness optimization provides absolute fairness to both users: the two Uγ are identical and increase as λA increases. In the PF optimization, when λA < λB, we have UA < UB. As λA becomes higher, UA increases; however, as shown in Figure 3.6, user A's access probability decreases as λA increases until λA = λB = 85 s−1, so the mutual interference is managed and UB also increases. When λA = λB = 85 s−1, UA = UB, since the secondary users are identical in terms of both channel conditions and service requests. As λA further increases, UA > UB and UA keeps increasing; since user B's access probability increases for λA > λB = 85 s−1 (see Figure 3.6), UB also increases. For the maximal-throughput optimization, as seen from Figure 3.6, the access probabilities of the two users are both zero, indicating that they are not allowed to transmit simultaneously, so UA keeps increasing as λA increases, while UB drops quickly, which is unfair.

In Figure 3.8, we show the effect of λP on the average access probability. In this set of simulations, λB is still set to 85 s−1 and we vary λA from 70 to 100 s−1. We know from Figure 3.6 that the user with the higher access rate has a zero access probability when the other user is in service.
Therefore, we demonstrate only the nonzero access probability of the user with a lower access rate, i.e., we show user A’s access probability
Figure 3.7 Average throughput vs. λA (symmetric interference, λB = 85 s−1).
Figure 3.8 Access probability for various values of λP (λB = 85 s−1). Left panel: access probability of A vs. λA; right panel: access probability of B vs. λA (λP = 90, 80, 70 s−1).
when λA < λB = 85 s−1 and user B’s access probability when λA > λB = 85 s−1 . In Figure 3.8, we compare the values of the average access probability when λP is chosen from {90, 80, 70} s−1 . We know that, as λP increases, the competition between the secondary users becomes more severe. In order to reduce mutual interference, when λA is a fixed value, both users’ access probabilities decrease as λP becomes larger.
3.5.2 CTMC-8 for the asymmetric-interference case

In the second set of simulations, the transmitter of user A is at (0 m, 0 m), and its receiver is at (200 m, 0 m). The transmitter of user B is at (185 m, 460 m), and its receiver is at (15 m, 460 m). Under these settings, we have r1^B > r1^A > r2^B > r2^A from (3.1) and (3.2), so the interference is asymmetric. In Figure 3.9, we show the optimal access probability versus λA for each secondary user when the other is transmitting. Since user A has a worse channel condition than user B, for the maximal-throughput optimization user A's access probability is 0 (user A's requests are always rejected) when user B is in service, which is unfair. For the PF or max–min fairness optimization, when λA < λB = 85 s−1, user A's access probability is 1 (user A's requests are always admitted), while only some, not all, of B's requests are admitted, owing to fairness concerns. When λA is a little greater than λB, unlike in the symmetric-interference case, user A's access probability is still 1 and higher than B's access probability, because user A has a worse channel condition than user B. When λA exceeds 90 s−1, the probability of coexistence is so high that the access probabilities of both users drop, to avoid interference.
Figure 3.9 Access probability vs. λA (asymmetric interference, λB = 85 s−1).

Figure 3.10 Average throughput vs. λA (asymmetric interference, λB = 85 s−1).
In Figure 3.10, we show the average throughput for each secondary user. We know from Figure 3.9 that, in the maximal-throughput optimization, user A’s access probability is 0 and user B’s access probability is 1; therefore, UB is much greater than UA . The PF optimization greatly reduces the throughput difference between the two users, with only a small loss of total throughput.
3.5.3 Comparison with a CSMA-based scheme

In Figure 3.11, we show the overall throughput of PF dynamic spectrum access for CTMC-8 and CTMC-5, and the overall throughput of a CSMA-based scheme [240]. The transmitters of both secondary users are uniformly located in a 200 m × 200 m square area, the transmitter–receiver distance of each pair is uniformly distributed in the range [100 m, 200 m], and the other parameters are the same as in the previous setting. We choose the slotted version of nonpersistent CSMA to avoid frequent collisions, assuming that the secondary users experience severe contention for the licensed spectrum, and the slot size is 0.005. So, when primary user P is absent and one secondary user γ is transmitting, a later-arriving secondary user senses the spectrum once every 0.005/μγ s until the licensed spectrum becomes available again. We can see that PF access for both CTMCs gives better performance than the CSMA-based scheme as λA increases. This is because, in CSMA, the two secondary users cannot utilize the spectrum at the same time. Thus, even though interference exists when the secondary users share the spectrum, by allowing spectrum sharing between them and optimally controlling their access probabilities, a performance gain can still be achieved. As λA increases, the overall throughput of PF access for both CTMCs increases, whereas the throughput of the CSMA-based scheme decreases. When λA = 100 s−1, CTMC-5 achieves about a 50% throughput gain over CSMA, and CTMC-8 achieves more than a 95% throughput gain. This shows that the PF access approach has a greater capability than CSMA to accommodate more traffic. Moreover, the spectrum efficiency of CTMC-8 is higher than that of CTMC-5, owing to the queuing of interrupted services.
Figure 3.11 Overall throughput for CTMC-5, CTMC-8, and CSMA.
Markov models for dynamic spectrum allocation
3.5.4 Comparison with a uniform-access-probability scheme

In [447], we considered a uniform access probability for each secondary user, no matter what state the CTMC is in. However, when the licensed spectrum is idle, a uniform access probability may restrain full spectrum utilization. Moreover, the interference condition of a secondary user varies as different subsets of secondary users share the spectrum, so optimizing a single access probability may result in a sub-optimal solution. In this subsection, we conduct simulations to compare the scheme in this chapter with the one in [447]. In the comparison, we adopt the PF method; the transmission power, the request/service rates, and the locations of the secondary users are all uniformly distributed in proper intervals, and we average over 1000 independent experiments. The histogram of the performance gain (UPF) is shown in Figure 3.12. We see that the scheme in this chapter, with state-dependent access probabilities, achieves on average a 24% higher system throughput than the uniform-access-probability scheme in [447].
3.5.5 Spectrum sharing among multiple secondary users

Spectrum access with multiple secondary users can also be optimally controlled, with the access probabilities obtained by numerical search algorithms. The transmitter–receiver pair of each user is uniformly distributed in a 200 m × 200 m square area, and the transmission power is uniformly chosen between 1 mW and 3 mW. In Figure 3.13, we compare the total throughput of PF spectrum access with that of spectrum access without access control (i.e., all service requests are admitted with probability one). By
Figure 3.12 The histogram of throughput improvement (UPF) (x-axis: performance gain (%); y-axis: number of experiments).
Figure 3.13 Overall throughput for multiple secondary users (x-axis: number of secondary users; y-axis: total throughput (kbps); curves: PF and no control).
optimizing the access probabilities, the PF scheme achieves a 17% higher throughput on average, since the interference is successfully alleviated. We also see that, as the number of competing secondary users increases, the average throughput of each user drops considerably, since the competition for the spectrum becomes much heavier and each user obtains a smaller share of the spectrum.
3.6 Summary and bibliographical notes

In this chapter, we presented a primary-prioritized Markov approach for dynamic spectrum access. We modeled the interactions between the primary users and the secondary users as continuous-time Markov chains, and optimized the state-dependent access probabilities of the secondary users, so that the spectrum resources can be shared efficiently and fairly by the secondary users in an opportunistic way without interrupting the primary usage. The simulation results show that spectrum access with the PF criterion can achieve up to a 95% performance gain over a CSMA-based random-access approach, and also achieves the optimal tradeoff between efficient spectrum utilization and fairness. In the current work, the spectrum access is coordinated by a secondary management point. A distributed algorithm that finds the best access pattern with less measurement overhead and signaling would be interesting to address in future work. Secondary users could distributively adapt their access probabilities according to observations of their own data throughput, and the approach in this chapter would then serve as an upper bound on the performance of such distributed algorithms.
There have been several efforts addressing the issue of how to share the limited spectrum resources efficiently and fairly, on a negotiated/pricing basis [166] [86] [492] [110] [163] [134] [206] [204] [372] or on an opportunistic basis [468] [227]. In [166], game theory was employed to resolve channel conflicts distributively, by associating the Nash equilibrium with a maximal-coloring problem for spectrum sharing. A local bargaining mechanism was considered in [86] to distributively optimize the efficiency of spectrum allocation and maintain bargaining fairness among secondary users. Rule-based approaches that regulate users' spectrum access in order to trade off fairness and utilization against communication costs and algorithmic complexity were considered in [492]. In [110], the authors developed a repeated-game approach, in which the spectrum sharing strategy could be enforced using the Nash equilibrium of dynamic games. In [163], auctions were proposed for sharing spectrum among multiple users such that the interference remained below a certain level. A real-time spectrum auction framework was proposed in [134] to achieve a conflict-free allocation that maximizes auction revenue and spectrum utilization. In [206] [204], belief-assisted dynamic pricing was used to optimize the overall spectrum efficiency while basing the participation incentives of the selfish users on double-auction rules. A centralized spectrum server coordinating the transmissions of a group of wireless links sharing a common spectrum was considered in [372]. Recently, attention has been drawn to opportunistic spectrum sharing. In [468], a distributed random access protocol was presented to achieve airtime fairness between dissimilar secondary users in open-spectrum wireless networks, without considering the primary users' activities.
The work in [227] examined the impact of secondary users’ access patterns on the blocking probability and on the improvement in spectrum utilization achievable with statistical multiplexing, and a feasible spectrum sharing scheme was developed.
4 Repeated open spectrum sharing games
In dynamic spectrum access, users who are competing with each other for spectrum may have no incentive to cooperate, and they may even exchange false private information about their channel conditions in order to gain more access to the spectrum. In this chapter, we present a repeated spectrum sharing game with cheat-proof strategies. In a punishment-based repeated game, users have an incentive to share the spectrum in a cooperative way; and, through mechanism-design-based and statistics-based approaches, user honesty is further enforced. Specific cooperation rules have been developed on the basis of maximum-total-throughput and proportional-fairness criteria. Simulation results show that the scheme presented here can greatly improve the spectrum efficiency by alleviating mutual interference.
4.1 Introduction

In order to achieve more flexible spectrum access in long-run scenarios, we need to address the following challenges. First, in self-organized spectrum sharing, there is no central authority to coordinate the spectrum access of different users. Thus, the spectrum access scheme should be able to adapt distributively to the spectrum dynamics, e.g., channel variations, with only local observations. Moreover, users competing for the open spectrum may have no incentive to cooperate with each other, and they may even exchange false private information about their channel conditions in order to gain more access to the spectrum. Therefore, cheat-proof spectrum sharing schemes should be developed in order to maintain the efficiency of the spectrum usage. Motivated by the preceding considerations, in this chapter we present a cheat-proof etiquette for unlicensed spectrum sharing by modeling the distributed spectrum access as a repeated game. In this game, punishment will be triggered if any user deviates from cooperation, and hence users are forced to access the spectrum cooperatively. We consider two sharing rules, which are based on the maximum-total-throughput and proportional-fairness criteria, respectively. Accordingly, two cheat-proof strategies are developed: one provides players with the incentive to be honest on the basis of mechanism-design theory [123], whereas the other makes cheating nearly unprofitable by statistical approaches. Therefore, the competing users are enforced to cooperate with each other honestly. The simulation results show that such schemes can greatly improve the spectrum efficiency under mutual interference.
The remainder of this chapter is organized as follows. In Section 4.2, the system model for open spectrum sharing is described. In Section 4.3, we develop a punishment-based repeated game for open spectrum sharing. The specific design of cooperation rules and misbehavior detection are discussed in Section 4.4. In Section 4.5, we develop two cheat-proof strategies for the spectrum sharing rules. Simulation results are shown in Section 4.6.
4.2 The system model

We consider a situation in which K groups of unlicensed users coexist in the same area and compete for the same unlicensed spectrum band, as shown in Figure 4.1. The users within the same group attempt to communicate with each other, and their usage of the spectrum will introduce interference with other groups. For simplicity, we assume that each group consists of a single transmitter–receiver pair, and that all the pairs are fully loaded, i.e., they always have data to transmit. At time slot n, all pairs are trying to occupy the spectrum, and the received signal at the ith receiver $y_i[n]$ can be expressed as

$$y_i[n] = \sum_{j=1}^{K} h_{ji}[n]\, x_j[n] + w_i[n], \quad i = 1, 2, \ldots, K, \tag{4.1}$$
where $x_j[n]$ is the transmitted information of the jth pair, $h_{ji}[n]$ ($j = 1, 2, \ldots, K$; $i = 1, 2, \ldots, K$) represents the channel gain from the jth transmitter to the ith receiver, and $w_i[n]$ is the white noise at the ith receiver. In the rest of the chapter, the time index n will be omitted wherever no ambiguity is caused. We assume that the channels are Rayleigh fading, i.e., $h_{ji} \sim \mathcal{CN}\left(0, \sigma_{ji}^2\right)$, and that distinct $h_{ji}$ are statistically independent. The channels are assumed to remain constant during one time slot, and to change independently from slot to slot. The noise is independently identically distributed (i.i.d.) with $w_i \sim \mathcal{CN}(0, N_0)$, where $N_0$ is the noise power. Being limited by
Figure 4.1 An illustration of open spectrum sharing.
the instrumental capability, the transmission power of the ith user cannot exceed his/her own peak power constraint $P_i^M$, i.e., $|x_i[n]|^2 \le P_i^M$ at any time slot n. Usually, there is no powerful central unit to coordinate the spectrum access in the unlicensed band, and different coexisting systems do not share a common goal of helping each other voluntarily. It is reasonable to assume that each transmitter–receiver pair is selfish: pursuing self-interest is the only goal of the wireless users. Such selfish behaviors can be well analyzed by game theory. Therefore, we choose to model the spectrum sharing game as follows:

Players: the K transmitter–receiver pairs.
Actions: each player can choose a transmission power level $p_i$ in $\left[0, P_i^M\right]$.
Payoffs: $R_i(p_1, p_2, \ldots, p_K)$, the gain of transmission achieved by the ith player after the power levels $p_1, p_2, \ldots, p_K$ have been chosen by the individual players.

In general, the gain of transmission is a non-negative increasing function of the data throughput. For simplicity, we assume that all the players share the same valuation model, in which the gain of transmission equals the data throughput; the results can easily be extended to cases with different valuation models. The averaged payoff of the ith player can be approximated by

$$R_i(p_1, p_2, \ldots, p_K) = \log_2\left(1 + \frac{p_i\, |h_{ii}|^2}{N_0 + \sum_{j \ne i} p_j\, |h_{ji}|^2}\right) \tag{4.2}$$

when the mutual interference is treated as Gaussian noise, e.g., when the code-division multiple-access (CDMA) technique is employed.
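As a concrete illustration, the payoff (4.2) can be evaluated directly from a power profile and a channel-gain matrix. The helper below is a minimal sketch; the function and variable names are ours, not from the text:

```python
import math

def payoff(i, p, h, n0):
    """R_i in (4.2): throughput of pair i when the mutual interference
    from all other pairs is treated as Gaussian noise.
    p[j] is the transmit power of pair j, h[j][i] is the channel gain
    h_ji from transmitter j to receiver i, and n0 is the noise power."""
    interference = sum(p[j] * abs(h[j][i]) ** 2
                       for j in range(len(p)) if j != i)
    sinr = p[i] * abs(h[i][i]) ** 2 / (n0 + interference)
    return math.log2(1 + sinr)
```

For instance, with two interference-free pairs of unit power, unit channel gain, and unit noise power, each pair obtains log2(2) = 1 bit/s/Hz; adding cross-gains lowers the SINR and hence the payoff.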
4.3 Repeated spectrum sharing games

In this section, we find the equilibria of the spectrum sharing game. We assume that all the players are selfish and none is malicious. In other words, players aim to maximize their own interest, but will not jeopardize others or even the entire system at their own cost. Because all the selfish players try to access the unlicensed spectrum as much as possible, severe competition often leads to strong mutual interference and low spectrum efficiency. However, since wireless systems coexist over a long period of time, the spectrum sharing game will be played multiple times, during which the undue competition could be resolved by mutual trust and cooperation. We consider a punishment-based repeated game to boost cooperation among the competing players.
4.3.1 The one-shot game

First, we look into the one-shot game, in which players are myopic and care only about the current payoff. The vector of power levels $\left(p_1^*, p_2^*, \ldots, p_K^*\right)$ is called a Nash equilibrium if and only if, for all $i = 1, 2, \ldots, K$ and all possible power-level choices $p_i \in \left[0, P_i^M\right]$,
$$R_i\left(p_1^*, p_2^*, \ldots, p_i^*, \ldots, p_K^*\right) \ge R_i\left(p_1^*, p_2^*, \ldots, p_i, \ldots, p_K^*\right) \tag{4.3}$$

always holds. The Nash equilibrium, from which no individual would have the incentive to deviate, provides a stable point at which the system resides. For this one-shot spectrum sharing game, the equilibrium occurs when every pair transmits at the highest power level, as shown in the following proposition.

Theorem 4.3.1 The only Nash equilibrium for this one-shot game is $\left(P_1^M, P_2^M, \ldots, P_K^M\right)$.
PROOF. First, we show that $\left(P_1^M, P_2^M, \ldots, P_K^M\right)$ is a Nash equilibrium. According to the definition of the payoff (4.2), when $p_1, p_2, \ldots, p_{i-1}, p_{i+1}, \ldots, p_K$ are fixed, and hence the interference power is fixed, the ith player's payoff $R_i(p_1, p_2, \ldots, p_K)$ grows as the power level $p_i$ increases. Therefore, for any player i, deviating from $P_i^M$ to any lower value will decrease the payoff, which makes $\left(P_1^M, P_2^M, \ldots, P_K^M\right)$ a Nash equilibrium. Next, we show by contradiction that no other equilibria exist. Assume that $\left(p_1^*, p_2^*, \ldots, p_K^*\right)$ is an equilibrium other than $\left(P_1^M, P_2^M, \ldots, P_K^M\right)$, which means that at least one entry is different, say $p_i^* \ne P_i^M$. However, this player can always gain a higher payoff by deviating from $p_i^*$ to $P_i^M$, which violates the definition of a Nash equilibrium.

When the channel states are fixed, substituting the equilibrium strategy $p_i = P_i^M$ for all i into (4.2) yields

$$R_i^S(h_{1i}, h_{2i}, \ldots, h_{Ki}) = \log_2\left(1 + \frac{P_i^M\, |h_{ii}|^2}{N_0 + \sum_{j \ne i} P_j^M\, |h_{ji}|^2}\right), \tag{4.4}$$

where the superscript S stands for "selfish." This is indeed the only possible outcome of the one-shot game with selfish players. Furthermore, when channel fading is taken into account, the expected payoff can be calculated by averaging over all channel realizations,

$$r_i^S = E_{\{h_{ji},\ j=1,\ldots,K\}}\left[R_i^S(h_{1i}, h_{2i}, \ldots, h_{Ki})\right]. \tag{4.5}$$

In this chapter, a payoff denoted by an upper-case letter is the utility under a specific channel realization, whereas a payoff denoted by a lower-case letter is the utility averaged over all channel realizations. Theorem 4.3.1 implies that the common open spectrum is excessively exploited owing to the lack of cooperation among the selfish players. In order to maximize their own profit, all the players always occupy the spectrum with maximum transmission power, which, in turn, makes everyone suffer from strong mutual interference.
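Theorem 4.3.1 can also be checked numerically: because each player's payoff is increasing in his/her own power when the opponents' powers are fixed, a search over any player's power level always returns the peak constraint. The sketch below uses our own illustrative names and parameter values:

```python
import math

def payoff(i, p, h, n0):
    # One-shot payoff R_i of (4.2) under power profile p.
    interf = sum(p[j] * abs(h[j][i]) ** 2 for j in range(len(p)) if j != i)
    return math.log2(1 + p[i] * abs(h[i][i]) ** 2 / (n0 + interf))

def best_response(i, p, h, n0, p_max, grid=100):
    # Best power of player i against fixed opponents' powers,
    # searched over a uniform grid of [0, p_max].
    candidates = [p_max * k / grid for k in range(grid + 1)]
    return max(candidates,
               key=lambda pi: payoff(i, p[:i] + [pi] + p[i + 1:], h, n0))
```

Whatever the opponents transmit, `best_response` returns `p_max`, so the all-maximum-power profile is the unique fixed point, in line with the theorem.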
If the players can somehow share the spectrum in a more cooperative and regulated fashion, everyone will be better off because interference has been greatly reduced. Since spectrum sharing lasts over quite a long period of time, it can be seen as a game played for numerous rounds, in which cooperation is made possible by the establishment of individual reputation and mutual trust.
4.3.2 The repeated game

In open spectrum sharing, players cannot be "forced" to cooperate with each other; instead, they must be self-enforced to participate in cooperation. We consider a punishment-based repeated game to provide players with the incentive for cooperation. First of all, we have to define the payoff for the repeated game. Since players view the multiple rounds of the game as a whole, the payoff of a repeated game is defined as the sum of payoffs discounted over time,

$$U_i = (1 - \delta) \sum_{n=0}^{+\infty} \delta^n R_i[n], \tag{4.6}$$
where $R_i[n]$ is player i's payoff at the nth time slot, and $\delta$ ($0 < \delta < 1$) is the discount factor. The closer $\delta$ is to 1, the more patient the player. Because players value not only the current payoff but also the rewards in the future, they have to constrain their behavior in the present in order to keep a good credit history; otherwise, a bad reputation may cost even more in the future. In general, if players do not cooperate with each other, the only reasonable choice is the one-shot-game Nash equilibrium, with the expected payoff $r_i^S$ given in (4.5). However, if all the players follow some predetermined rules to share the spectrum, higher expected one-slot payoffs $r_i^C$ (C stands for "cooperation") may be achieved, i.e., $r_i^C > r_i^S$ for $i = 1, 2, \ldots, K$. For example, the cooperation rule may require that only a few players access the spectrum simultaneously, so that mutual interference is greatly reduced. Nevertheless, without any commitment, selfish players always want to deviate from cooperation. One player can take advantage of the others by transmitting in time slots during which he/she is not supposed to, and the instantaneous payoff in one specific slot is a random variable denoted by $R_i^D$ (D stands for "deviation").
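For intuition, the discounted payoff (4.6) has a simple closed form when the random per-slot payoffs are replaced by their means. The sketch below (our own construction, with illustrative names) evaluates the expected discounted payoff of a player who cooperates until slot t0, deviates once, and then receives the one-shot payoff in every slot afterwards:

```python
def discounted_payoff_with_deviation(delta, r_coop, r_selfish, r_dev, t0):
    """Expected value of (4.6) in means: r_coop per slot before the
    deviation slot t0, a one-time deviation gain r_dev at t0, and the
    one-shot payoff r_selfish in every slot afterwards.
    Each term is a geometric-series sum of (1-delta)*delta^n * r."""
    coop_part = (1 - delta ** t0) * r_coop            # slots 0 .. t0-1
    dev_part = (1 - delta) * delta ** t0 * r_dev      # slot t0
    punish_part = delta ** (t0 + 1) * r_selfish       # slots t0+1, t0+2, ...
    return coop_part + dev_part + punish_part
```

As δ approaches 1, the value tends to `r_selfish`: the one-time deviation gain becomes asymptotically negligible, while the punishment payoff dominates. For small δ, in contrast, the early deviation gain can outweigh the future loss.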
Although cooperation is not a stable equilibrium of the one-shot game, cooperation enforced by the threat of punishment is an equilibrium of the repeated game. Specifically, every player states the threat to the others: if anyone deviates from cooperation, there will be no more cooperation forever. Such a threat, also known as the "trigger" punishment, deters deviation and helps maintain cooperation. For example, assume that player i hesitates over whether to deviate or not. Denote the discounted payoff with deviation by $U_i^D$, and that without deviation by $U_i^C$. As shown by the following proposition, these payoffs converge to constants regardless of the channel realizations. Then, for the sake of the player's own benefit, it is better not to deviate as long as $r_i^C > r_i^S$.

Theorem 4.3.2 As $\delta \to 1$, $U_i^D$ converges to $r_i^S$ almost surely, and $U_i^C$ converges to $r_i^C$ almost surely.

PROOF. First, we show that, as $\delta \to 1$, the discounted payoff defined in (4.6) is asymptotically equivalent to the average of the one-time payoffs. By switching the order of operations, we obtain
$$\lim_{\delta \to 1} U_i = \lim_{\delta \to 1} \lim_{N \to +\infty} \frac{1-\delta}{1-\delta^{N+1}} \sum_{n=0}^{N} \delta^n R_i[n] = \lim_{N \to +\infty} \sum_{n=0}^{N} \left( \lim_{\delta \to 1} \frac{\delta^n - \delta^{n+1}}{1-\delta^{N+1}} \right) R_i[n] = \lim_{N \to +\infty} \sum_{n=0}^{N} \frac{1}{N+1} R_i[n], \tag{4.7}$$
where the last equality holds according to l'Hôpital's rule. Assume that player i deviates at time slot $T_0$. Then the payoffs $\{R_i[n],\ n = 0, 1, \ldots, T_0 - 1\}$ are i.i.d. random variables with mean $r_i^C$, whose randomness comes from the i.i.d. channel variations. Similarly, the payoffs $\{R_i[n],\ n = T_0 + 1, T_0 + 2, \ldots\}$ are i.i.d. random variables with mean $r_i^S$. Deviating provides a benefit only at time slot $T_0$. According to the strong law of large numbers [146], the payoff $U_i^D$ converges to its mean $r_i^S$ almost surely. On the other hand, if no deviation ever happens, the repeated game always stays in the cooperative stage. By the same argument, $U_i^C$ converges to $r_i^C$ almost surely.

Because the selfish players always choose the strategy that maximizes their own payoffs, they will maintain cooperation if $U_i^C = r_i^C > U_i^D = r_i^S$; that is, all players are self-enforced to cooperate in the repeated spectrum-sharing game because of the punishment after any deviation. Nevertheless, such a harsh threat is neither efficient nor necessary. Note that not only does the deviating player get punished, but the other ("good") players also suffer from the punishment. For example, if one player deviates by mistake, or punishment is triggered by mistake, there will be no further cooperation, which results in a lower efficiency for all players. We have to recall the purpose of the punishment: its aim is to prevent deviating behavior from happening, rather than to take revenge after a deviation. As long as the punishment is long enough to negate the reward from a one-time deviation, no player has an incentive to deviate. The new strategy, called "punish-and-forgive," is stated as follows: the game starts from the cooperative stage, and will stay in the cooperative stage until some deviation happens.
Then the game jumps into the punishment stage for the next T − 1 time slots, before the misbehavior is forgiven and cooperation resumes from the Tth time slot; T is called the duration of punishment. In the cooperative stage, every player shares the spectrum cooperatively according to the agreement, whereas, in the punishment stage, players occupy the spectrum non-cooperatively, as they would in the one-shot game. The following proposition shows that cooperation is a subgame-perfect equilibrium, which ensures Nash optimality for subgames starting from any round of the whole game.

Theorem 4.3.3 Provided that $r_i^C > r_i^S$ for all $i = 1, 2, \ldots, K$, there exists $\bar{\delta} < 1$ such that, for a sufficiently large discount factor $\delta > \bar{\delta}$, the game has a subgame-perfect equilibrium with discounted utility $r_i^C$, if all players adopt the "punish-and-forgive" strategy.
PROOF. Because $\left(P_1^M, P_2^M, \ldots, P_K^M\right)$ is a Nash equilibrium of the one-shot game, transmitting with the power vector $\left(P_1^M, P_2^M, \ldots, P_K^M\right)$ in every time slot is one of the Nash equilibria of the repeated game. The proof of this proposition then follows the folk theorem with Nash threats [324]. The theorem states that the "punish-and-forgive" strategy yields a subgame-perfect equilibrium for sufficiently patient players (i.e., $\delta$ close to 1), whenever the game has a pure one-shot Nash equilibrium. Using the one-shot Nash equilibrium strategy as punishment and cooperating otherwise will force all the players to cooperate.

The parameter T can be determined by analyzing the incentives of the players. For example, we investigate under what condition player i will lose the motivation to deviate at time slot $T_0$. Although cooperation guarantees an average payoff $r_i^C$ in each time slot, the worst-case instantaneous payoff could be 0. In contrast, deviation yields an instantaneous payoff in that slot. Assume that the maximal profit obtained from deviation is $R_i^D$. If player i chooses to deviate, the punishment stage will last for the next T − 1 slots; otherwise, cooperation will always be maintained. Thus, the expected payoffs with and without deviation are bounded by

$$u_i^D = E\left[U_i^D\right] \le (1-\delta) \cdot \left( \sum_{n=0}^{T_0-1} \delta^n r_i^C + \delta^{T_0} R_i^D + \sum_{n=T_0+1}^{T_0+T-1} \delta^n r_i^S + \sum_{n=T_0+T}^{+\infty} \delta^n r_i^C \right) \tag{4.8}$$

and

$$u_i^C = E\left[U_i^C\right] \ge (1-\delta) \cdot \left( \sum_{n=0}^{T_0-1} \delta^n r_i^C + 0 + \sum_{n=T_0+1}^{+\infty} \delta^n r_i^C \right), \tag{4.9}$$
respectively. From a selfish player's point of view, the action with the higher payoff is clearly the better choice. T should be large enough to deter players from deviating, such that $u_i^C > u_i^D$ for all $i = 1, 2, \ldots, K$. The necessary condition on T can then be solved as

$$T > \max_i \frac{\log\left(\delta - \dfrac{(1-\delta) R_i^D}{r_i^C - r_i^S}\right)}{\log \delta}, \tag{4.10}$$

which can be further approximated by

$$T > \max_i \frac{R_i^D}{r_i^C - r_i^S} + 1, \tag{4.11}$$

by application of l'Hôpital's rule when $\delta$ is close to 1. If the tendency to deviate is stronger (i.e., $R_i^D/\left(r_i^C - r_i^S\right)$ is larger), the punishment should be harsher (a longer duration of punishment) to prevent the deviating behavior.
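The condition (4.10) and its approximation (4.11) translate directly into code. The sketch below is our own helper with illustrative parameter values; it returns the smallest integer punishment duration that deters deviation:

```python
import math

def min_punishment_duration(delta, r_coop, r_selfish, r_dev_max):
    """Smallest integer T satisfying (4.10), together with the
    delta -> 1 approximation (4.11). Requires r_coop > r_selfish and
    delta large enough that the log argument stays positive."""
    a = r_dev_max / (r_coop - r_selfish)
    arg = delta - (1 - delta) * a
    if arg <= 0:
        raise ValueError("discount factor too small to deter deviation")
    exact_bound = math.log(arg) / math.log(delta)   # right side of (4.10)
    approx_bound = a + 1                            # right side of (4.11)
    return math.floor(exact_bound) + 1, approx_bound
```

For example, with $r_i^C = 2$, $r_i^S = 1$, $R_i^D = 5$, and $\delta = 0.99$, the exact bound in (4.10) is about 6.16, so T = 7 slots of punishment suffice, while the approximation (4.11) gives the bound 6.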
4.4 Cooperation with optimal detection

In this section, we discuss the specific design of the cooperation rules for spectrum sharing, as well as the method by which deviation is detected. When designing the rules, we assume that players can exchange information over a common control channel. On the basis of such information, each individual can independently determine who is eligible to transmit in the current time slot according to the cooperation rule, and thus the scheme does not require a central management unit. Cooperative spectrum sharing can be designed in the following way: in one time slot, only a few players with small mutual interference access the spectrum simultaneously. In the extreme case, only one player is allowed to occupy the spectrum during one time slot, and mutual interference is completely prevented. In this chapter, we limit our attention to such orthogonal channel allocation for the following reasons; more general cooperation rules will be studied later.

• It is quite simple, and its performance is good in environments where the interference level is medium to high, as illustrated by the simulation results. This is the case when wireless users are concentrated in a small area, e.g., a cluster of users inside an office building or within a coffee house.
• If several players were allowed to access the spectrum simultaneously, they would have to negotiate how much power each one could use. With orthogonal assignment, however, the player who gets the exclusive right to occupy the channel will use the maximum power. Therefore, the action space reduces from a continuum of power levels to a binary choice (either 0 or $P_i^M$), which simplifies the problem.
• In order to decide who can access the spectrum, information such as the channel gains is needed. If multiple players were allowed to transmit in one time slot, knowledge of the whole channel-state matrix $\{h_{ji},\ j = 1, 2, \ldots, K,\ i = 1, 2, \ldots, K\}$ would be necessary in order to decide which players can be grouped together; the total amount of exchanged information would be $O(K^2)$. In contrast, the orthogonal assignment is interference-free, and only the diagonal entries $\{h_{ii},\ i = 1, 2, \ldots, K\}$ have to be exchanged, so the overhead reduces to $O(K)$.
• The orthogonal allocation also facilitates the detection of deviating behavior. In general, the detector is required to catch a deviation by distinguishing ineligible players from the players allowed to access the spectrum. This detection becomes much easier in the case of orthogonal assignment: the only eligible player in the current time slot will declare an event of deviation and trigger the punishment once he/she finds that someone else is also active in the unlicensed band.

The slot structure for spectrum sharing is shown in Figure 4.2. Every slot is divided into three phases: during the first phase, each player broadcasts information, such as channel gains, to the others; in the second phase, each player collects all the necessary information and decides whether or not to access the spectrum, according to the cooperation rule; then the eligible player occupies the spectrum during the third phase of the slot. If the channel does not change too rapidly, the length of a slot can be designed to be long enough to make the overhead (the first and second phases) negligible. Since it
Figure 4.2 Slot structure for spectrum sharing. Phase I: exchange information; phase II: make a decision; phase III: transmit and detect.
is necessary to detect potential deviating behavior and punish it correspondingly, the eligible player cannot transmit all the time during the third phase. Instead, the player sometimes has to suspend his/her own transmission and listen to the channel to catch deviators. The eligible player thus both transmits and detects during the third phase: a portion of the time is reserved for detection, while the rest can be used for transmission. When detection will be performed within the slot is kept secret by each individual; otherwise, the other players might take advantage by deviating when the detector is not operating. Finally, if detection shows that someone is deviating, an alert message is delivered in the first phase of the next time slot.
4.4.1 Cooperation criteria

There are numerous cooperation rules for deciding which player has exclusive priority to access the channel, such as time-division multiple access (TDMA). Out of the many possible choices, the cooperation rules must be reasonable and optimal under some criteria, such as the maximum-total-throughput (MTT) criterion [175] and the proportional-fairness criterion [235]. Given a cooperation rule d, player i has an expected discounted payoff $r_i^{C_d}$. Denote by D the set of all possible cooperation rules. The MTT criterion aims to improve the overall system performance by maximizing the sum of the individual payoffs,

$$d_{\text{Max}} = \arg\max_{d \in D} \sum_{i=1}^{K} r_i^{C_d}, \tag{4.12}$$
whereas the proportional-fairness criterion is known to maximize their product,

$$d_{\text{PF}} = \arg\max_{d \in D} \prod_{i=1}^{K} r_i^{C_d}. \tag{4.13}$$
The rule based on the MTT criterion is quite straightforward. In order to maximize the total throughput, each time slot should be assigned to the player who makes the best use of it. Denote by $g_i[n] = P_i^M |h_{ii}[n]|^2$ the instantaneous received signal power of the ith player at time slot n; the $\{g_i[n]\}$ are i.i.d. exponentially distributed random variables with mean $P_i^M \sigma_{ii}^2$, according to the assumption about $\{h_{ii}[n]\}$. The allocation rule is to assign the channel to the player with the highest instantaneous received signal power, i.e.,
$$d_1(g_1, g_2, \ldots, g_K) = \arg\max_i g_i. \tag{4.14}$$

Since only the information for the current time slot is necessary, and the same rule applies to every time slot, the time index n has been omitted. The expected payoff is

$$r_i^{C_1} = \int_0^{+\infty} \log_2\left(1 + \frac{g_i}{N_0}\right) \Pr\left(g_i > \max_{j \ne i} g_j\right) f(g_i)\, \mathrm{d}g_i, \tag{4.15}$$

where $f(g_i) = \left(1/\left(P_i^M \sigma_{ii}^2\right)\right) \exp\left(-g_i/\left(P_i^M \sigma_{ii}^2\right)\right)$ is the probability density function of the random variable $g_i$, and Pr(·) denotes the probability that the statement within the parentheses holds true. The MTT criterion is optimal from the system designer's perspective; however, in a heterogeneous situation in which some players always have better channels than others, the players operating under poor channel conditions may have little chance to access the spectrum. To address this fairness problem, another rule is considered, which allocates the spectrum according to the normalized channel gain $\bar{g}_i = g_i / E[g_i]$ instead of the absolute values,

$$d_2(\bar{g}_1, \bar{g}_2, \ldots, \bar{g}_K) = \arg\max_i \bar{g}_i. \tag{4.16}$$
Note that all the $\{\bar{g}_i,\ i = 1, 2, \ldots, K\}$ are exponentially distributed random variables with mean 1, the symmetry of which implies that every player has an equal chance (1/K) of accessing the spectrum.

Theorem 4.4.1 The closed-form payoff under the rule (4.16) can be represented as follows:

$$r_i^{C_2} = \int_0^{+\infty} \log_2\left(1 + \frac{P_i^M \sigma_{ii}^2\, \bar{g}}{N_0}\right) \exp(-\bar{g}) \left(1 - \exp(-\bar{g})\right)^{K-1} \mathrm{d}\bar{g}. \tag{4.17}$$

PROOF. The probability distribution function of each $\bar{g}_i$ is $F(\bar{g}_i) = 1 - \exp(-\bar{g}_i)$. Invoking order statistics [104], the maximum among the K i.i.d. random variables $\{\bar{g}_i,\ i = 1, 2, \ldots, K\}$ has the distribution function $F_M(\bar{g}) = \left(1 - \exp(-\bar{g})\right)^K$. Since each player can be the one with the largest $\bar{g}_i$ with probability 1/K, owing to symmetry, the expected payoff is

$$r_i^{C_2} = \frac{1}{K} \int_0^{+\infty} \log_2\left(1 + \frac{P_i^M \sigma_{ii}^2\, \bar{g}}{N_0}\right) \mathrm{d}F_M(\bar{g}). \tag{4.18}$$

Substituting $F_M(\bar{g})$ yields the form of the payoff in (4.17).

The following remarks are in order.
• The rule (4.16) is an approximation to the proportional-fairness criterion (4.13). $g_i$ can be decomposed into a fixed component $E[g_i]$ and a fading component $\bar{g}_i$. When the channel is constant without fading, i.e., $g_i = E[g_i]$, the proportional-fairness problem becomes
$$\max_{\{\omega_i\}} \prod_{i=1}^{K} \omega_i \log_2\left(1 + \frac{E[g_i]}{N_0}\right) \quad \text{s.t.} \quad \sum_{i=1}^{K} \omega_i \le 1, \tag{4.19}$$
where $\omega_i$ is the probability that the ith player occupies the channel. The optimal solution is $\omega_i = 1/K$ for every i, which means that an equal share is proportionally fair. On the other hand, when only the fading part is considered, since $\bar{g}_i$ is completely symmetric across players, assigning the resources to the player with the largest $\bar{g}_i$ maximizes the product of the payoffs. These two aspects suggest that rule (4.16) is a good approximation that requires only the information for the current time slot; we will refer to it as the approximated proportional-fairness (APF) rule in the rest of the chapter.
• The rule (4.16) can be extended to a more general case that allocates the band according to the weighted normalized channel gain $\pi_i \bar{g}_i$, where $\pi_i$ is a weight factor reflecting a player's priority in heterogeneous applications.
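The two allocation rules, and the closed form (4.17), are easy to probe by simulation. The sketch below is our own setup with illustrative parameter values: it estimates each player's share of slots under the MTT rule (4.14) and the APF rule (4.16), and evaluates (4.17) by simple numerical quadrature:

```python
import math, random

def slot_shares(rule, mean_gains, trials=100000, seed=7):
    # Fraction of slots won by each player; rule(g, means) -> winner index.
    rng = random.Random(seed)
    wins = [0] * len(mean_gains)
    for _ in range(trials):
        g = [rng.expovariate(1.0 / m) for m in mean_gains]  # exponential g_i
        wins[rule(g, mean_gains)] += 1
    return [w / trials for w in wins]

# Rule (4.14): highest absolute gain. Rule (4.16): highest normalized gain.
mtt = lambda g, means: max(range(len(g)), key=lambda i: g[i])
apf = lambda g, means: max(range(len(g)), key=lambda i: g[i] / means[i])

def apf_payoff(k, snr, steps=20000, g_max=20.0):
    # Quadrature of (4.17), with snr standing for P_i^M * sigma_ii^2 / N_0.
    dg = g_max / steps
    return sum(math.log2(1 + snr * (s + 0.5) * dg)
               * math.exp(-(s + 0.5) * dg)
               * (1 - math.exp(-(s + 0.5) * dg)) ** (k - 1) * dg
               for s in range(steps))
```

With mean gains [1.0, 4.0], MTT hands the stronger player roughly 80% of the slots, while APF splits them close to 50/50, illustrating the fairness behavior discussed above.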
4.4.2 Optimal detection

The punishment-based spectrum sharing game can provide all players with the incentive to obey the rules, since deviation is deterred by the threat of punishment. Detection of deviating behavior is necessary in order to ensure that the threat is credible; otherwise, selfish players will tend to deviate, knowing that their misbehavior will not be caught. Because only one player may occupy the spectrum during one time slot according to the cooperation rules, if that player finds that any other player is transmitting, the system is triggered into the punishment phase. There are several ways to detect whether the spectrum resources are occupied by others; in this chapter we assume that a player can listen to the channel from time to time using an energy detector [97]. Detectors are generally imperfect, and some detection errors are inevitable. There is the possibility that the detector believes someone else to be using the channel although in fact nobody is. By triggering the game into the punishment phase by mistake, this false-alarm event reduces the system efficiency, and hence the probability of false alarms should be kept as low as possible. Generally speaking, the performance of the detector can be improved by increasing the detection time. Nevertheless, a player cannot transmit and detect at the same time, because one cannot easily distinguish one's own signal from other players' signals in the same spectrum. Therefore, there is a tradeoff between transmission and detection: the more time one spends on detection, the less time one reserves for data transmission. Assume that all the other parameters, such as the length of one time slot, have been fixed. Then, the question is how much time within a slot should be used for detection. Let α denote the fraction of the time used for detection, T_s the length of one slot, and W_s the bandwidth, and assume that an energy detector with a threshold of λ is used.
Then the false-alarm probability is [97]

$$ \xi(\alpha) = \frac{\Gamma(\alpha T_s W_s, \alpha\lambda/2)}{\Gamma(\alpha T_s W_s)}, \qquad (4.20) $$
Repeated open spectrum sharing games
where Γ(·) and Γ(·, ·) are the gamma function and the incomplete gamma function, respectively.

We have shown that the expected discounted payoff u_i equals r_i^C without considering the detection error. When the imperfection of the detector is taken into account, the modified discounted payoff, denoted by ǔ_i(α), will depend on α. The expected throughput from the current time slot is (1 − α)r_i^C, since only the remaining (1 − α) fraction of the duration can be employed for transmission. The system will jump into the punishment stage with probability ξ(α) owing to the false-alarm event, and stay in the cooperative stage with probability 1 − ξ(α). If the system stays in the cooperative stage, the expected payoff in the future is ǔ_i(α) discounted by one time unit; otherwise, the expected throughput in each time slot is r_i^S until cooperation resumes from the Tth slot onward, which yields the payoff ǔ_i(α) discounted by T time units. Overall, the modified discounted utility should satisfy

$$ \check{u}_i(\alpha) = (1-\delta)(1-\alpha)r_i^C + (1-\xi(\alpha))\,\delta\,\check{u}_i(\alpha) + \xi(\alpha)\left[(1-\delta)\sum_{n=1}^{T-1} \delta^n r_i^S + \delta^T \check{u}_i(\alpha)\right], \qquad (4.21) $$
from which ǔ_i(α) can be solved as

$$ \check{u}_i(\alpha) = \frac{(1-\delta)(1-\alpha)r_i^C + (\delta - \delta^T)\,\xi(\alpha)\, r_i^S}{1 - \delta + (\delta - \delta^T)\,\xi(\alpha)}. \qquad (4.22) $$
Note that the discounted payoff ǔ_i(α) is a convex combination of (1 − α)r_i^C and r_i^S, and thus r_i^S < ǔ_i(α) < r_i^C for all 0 < α < 1 − r_i^S/r_i^C. Therefore, the imperfection of the detector will reduce the utility from r_i^C to a smaller value ǔ_i(α). However, ǔ_i(α) is always larger than r_i^S, which means that the players still have the incentive to join in this repeated game and cooperate.

The optimal α* that maximizes the modified discounted payoff (4.22) can be found from the first-order condition

$$ \frac{\partial \check{u}_i(\alpha)}{\partial \alpha} = 0. \qquad (4.23) $$

Equivalently, α* is the solution to the following equation:

$$ \left[1 - \delta + (\delta - \delta^T)\,\xi(\alpha)\right] r_i^C + \left[(1-\alpha)r_i^C - r_i^S\right](\delta - \delta^T)\,\xi'(\alpha) = 0, \qquad (4.24) $$
where ξ′(α) is the derivative of ξ(α) with respect to α. Note that, by replacing r_i^C by ǔ_i(α*), the impact of imperfect detection is incorporated into the game and requires no further consideration.
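The tradeoff in (4.20)–(4.24) is easy to explore numerically. The sketch below evaluates ξ(α) with a standard series/continued-fraction routine (Numerical Recipes style) for the regularized upper incomplete gamma function Q(a, x) = Γ(a, x)/Γ(a), and then grid-searches (4.22) for α*. All parameter values (r_i^C = 2.6, r_i^S = 1.0, δ = 0.99, T = 50, T_sW_s = 100, λ = 360) are hypothetical.

```python
import math

def _gamma_p_series(a, x, eps=1e-12, itmax=500):
    # Power series for the regularized lower incomplete gamma P(a, x); use when x < a + 1.
    ap, term, total = a, 1.0 / a, 1.0 / a
    for _ in range(itmax):
        ap += 1.0
        term *= x / ap
        total += term
        if abs(term) < abs(total) * eps:
            break
    return total * math.exp(-x + a * math.log(x) - math.lgamma(a))

def _gamma_q_contfrac(a, x, eps=1e-12, itmax=500):
    # Lentz continued fraction for the regularized upper incomplete gamma Q(a, x); x >= a + 1.
    tiny = 1e-300
    b = x + 1.0 - a
    c, d, h = 1.0 / tiny, 1.0 / b, 1.0 / b
    for i in range(1, itmax):
        an = -i * (i - a)
        b += 2.0
        d = an * d + b
        if abs(d) < tiny:
            d = tiny
        c = b + an / c
        if abs(c) < tiny:
            c = tiny
        d = 1.0 / d
        delta = d * c
        h *= delta
        if abs(delta - 1.0) < eps:
            break
    return h * math.exp(-x + a * math.log(x) - math.lgamma(a))

def gamma_q(a, x):
    """Regularized upper incomplete gamma Q(a, x) = Gamma(a, x) / Gamma(a)."""
    if x <= 0.0:
        return 1.0
    return 1.0 - _gamma_p_series(a, x) if x < a + 1.0 else _gamma_q_contfrac(a, x)

def false_alarm(alpha, TsWs, lam):
    # Equation (4.20).
    return gamma_q(alpha * TsWs, alpha * lam / 2.0)

def utility(alpha, rC, rS, delta, T, TsWs, lam):
    # Equation (4.22): modified discounted payoff under imperfect detection.
    xi = false_alarm(alpha, TsWs, lam)
    return ((1 - delta) * (1 - alpha) * rC + (delta - delta ** T) * xi * rS) / \
           (1 - delta + (delta - delta ** T) * xi)

PARAMS = dict(rC=2.6, rS=1.0, delta=0.99, T=50, TsWs=100.0, lam=360.0)
alphas = [i / 1000.0 for i in range(1, 300)]
alpha_star = max(alphas, key=lambda a: utility(a, **PARAMS))
```

The resulting utility curve is low for very small α (frequent false alarms) and decreases again for large α (wasted transmission time), with an interior maximizer α*, matching the qualitative tradeoff described above.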
4.5 Cheat-proof strategies

The repeated game discussed so far is based on the assumption of complete and perfect information. Nevertheless, this information, such as the power constraints and channel gains, is actually private information of each individual player, and thus there is no guarantee that players will reveal their private information honestly to others. If cheating is profitable, selfish players will cheat in order to get a higher payoff. Since the cooperation rules discussed here always favor players with good channel conditions, selfish players will tend to exaggerate their situations in order to acquire more opportunities to occupy the spectrum. Therefore, enforcing truth-telling is a crucial problem, since distorted information would undermine the repeated game. In [110], a delicate scheme is designed to verify whether the information provided by an individual player has been revealed honestly. However, the method is complex and difficult to implement, especially for time-varying channels. With the allocation rules considered here, much simpler strategies can be employed to induce truth-telling. When the MTT rule is used for spectrum sharing, we design a mechanism that makes players self-enforced to reveal their true private information; when the APF rule is adopted, a scheme based on statistical properties is developed to discourage players from cheating.
4.5.1 Mechanism-design-based strategy

Since the MTT sharing rule assigns the spectrum resources to the player who claims the highest instantaneous received signal power, players tend to exaggerate their claimed values. To circumvent the difficulty of telling whether the exchanged information has been distorted, a better way is to make players self-enforced to tell the truth. Mechanism design is employed to provide players with incentives to be honest. To be specific, the players claiming high values are asked to pay a tax, and the amount of the tax increases as the claimed value increases, whereas the players reporting low values receive some monetary compensation. This is called a "transfer" in Bayesian mechanism-design theory [123]. When the transfer of a player is negative, he/she has to pay others; otherwise, he/she receives compensation from others. Because players care not only about the gain of data transmission but also about their monetary balance, the overall payoff is the gain of transmission plus the transfer. In other words, on introducing transfer functions, the spectrum sharing game actually becomes a new game with the original payoffs replaced by the overall payoffs. By appropriately designing the transfer function, the players can get the highest payoff only when they claim their true private values.

According to the cooperative-allocation rule, the private information {g_1, g_2, . . . , g_K} has to be exchanged among players. Assume that, within one time slot, {g̃_1, g̃_2, . . . , g̃_K} is a realization of the random variables {g_1, g_2, . . . , g_K}. Observing his/her own private information, the ith player will claim ĝ_i to others, which need not necessarily be the same as the true value g̃_i. All of the players claim the information simultaneously. Since {ĝ_1, ĝ_2, . . . , ĝ_K} is common knowledge but {g̃_1, g̃_2, . . . , g̃_K} is not, the allocation decision and transfer calculation have to be based on the claimed rather than the true values. In the MTT spectrum sharing game, the player with index d_1(ĝ_1, ĝ_2, . . . , ĝ_K) defined in (4.14) can access the channel, and thus the data throughput for the current time slot can be written in a compact form:
$$ R_i(\tilde{g}_i, d_1(\hat{g}_1, \hat{g}_2, \ldots, \hat{g}_K)) = \begin{cases} \log_2(1 + \tilde{g}_i/N_0) & \text{if } d_1(\hat{g}_1, \hat{g}_2, \ldots, \hat{g}_K) = i, \\ 0 & \text{otherwise.} \end{cases} \qquad (4.25) $$

The transfer of the ith player in the cheat-proof strategy is defined as

$$ t_i(\hat{g}_1, \hat{g}_2, \ldots, \hat{g}_K) = \Omega_i(\hat{g}_i) - \frac{1}{K-1}\sum_{j=1,\, j\neq i}^{K} \Omega_j(\hat{g}_j), \qquad (4.26) $$

where

$$ \Omega_i(\hat{g}_i) = E\!\left[\left.\sum_{j=1,\, j\neq i}^{K} R_j(g_j, d_1(g_1, g_2, \ldots, g_K))\,\right|\, g_i = \hat{g}_i\right]. \qquad (4.27) $$
Note that the expectation is taken over all realizations of {g_1, g_2, . . . , g_K} except g_i, since the player has no knowledge of the others' gains in the current time slot when deciding what to claim. Ω_i(ĝ_i) is the sum of all other players' expected data throughput given that player i claims a value ĝ_i. Intuitively, if user i claims a higher ĝ_i, he/she will gain a greater chance to access the spectrum, and all the other players will have a smaller spectrum share. However, the higher payment may negate the additional gain from the extra spectrum access obtained through bragging. In contrast, if user i claims a smaller ĝ_i, this user will receive some compensation at the cost of less chance to occupy the spectrum. Therefore, there is an equilibrium at which each user reports his/her true private information. A rigorous proof is provided in the following theorem.

Theorem 4.5.1 In the mechanism discussed here, there is an equilibrium such that each player reports his/her true private information, i.e., ĝ_i = g̃_i, i = 1, 2, . . . , K.

PROOF. To prove the equilibrium, it suffices to show that, for any i ∈ {1, 2, . . . , K}, if all players except player i reveal their private information without distortion, the best response of player i is also to report the true private information. Without loss of generality, we assume that player 2 through player K report true values ĝ_i = g̃_i, i = 2, 3, . . . , K. Then, the expected overall payoff of player 1 is the expected data throughput plus the transfer; the expectation is taken over all realizations of {g_2, g_3, . . . , g_K} throughout the proof. When claiming ĝ_1, player 1 gets the expected overall payoff

$$ r_1^t(\hat{g}_1) = E\left[ R_1(\tilde{g}_1, d_1(\hat{g}_1, g_2, \ldots, g_K)) \right] + t_1(\hat{g}_1, \hat{g}_2, \ldots, \hat{g}_K) $$
$$ = E\left[ R_1(\tilde{g}_1, d_1(\hat{g}_1, g_2, \ldots, g_K)) + \sum_{j=2}^{K} R_j(g_j, d_1(\hat{g}_1, g_2, \ldots, g_K)) \right] - \frac{1}{K-1}\sum_{j=2}^{K} \Omega_j(\hat{g}_j). \qquad (4.28) $$
From the analysis of incentive compatibility, player 1 will claim a distorted value ĝ_1 instead of g̃_1 if and only if reporting ĝ_1 results in a higher payoff, i.e., r_1^t(g̃_1) < r_1^t(ĝ_1), or, equivalently,
$$ E\left[ R_1(\tilde{g}_1, d_1(\tilde{g}_1, g_2, \ldots, g_K)) + \sum_{j=2}^{K} R_j(g_j, d_1(\tilde{g}_1, g_2, \ldots, g_K)) \right] < E\left[ R_1(\tilde{g}_1, d_1(\hat{g}_1, g_2, \ldots, g_K)) + \sum_{j=2}^{K} R_j(g_j, d_1(\hat{g}_1, g_2, \ldots, g_K)) \right]. \qquad (4.29) $$
Note that the MTT rule maximizes the total throughput; that is, for any realization of {g_2, g_3, . . . , g_K}, we have Σ_{i=1}^{K} R_i(g̃_i, d_1(g̃_1, g̃_2, . . . , g̃_K)) > Σ_{i=1}^{K} R_i(g̃_i, d^o) for any other possible allocation strategy d^o. After taking the expectation, we have

$$ E\left[ R_1(\tilde{g}_1, d_1(\tilde{g}_1, g_2, \ldots, g_K)) + \sum_{j=2}^{K} R_j(g_j, d_1(\tilde{g}_1, g_2, \ldots, g_K)) \right] > E\left[ R_1(\tilde{g}_1, d^o) + \sum_{j=2}^{K} R_j(g_j, d^o) \right] \quad \text{for any } d^o, \qquad (4.30) $$
which contradicts (4.29). Therefore, player 1 is self-enforced to report the true value, i.e., ĝ_1 = g̃_1. Hence, in equilibrium, all players will reveal their true private information.

The theorem shows that adopting the mechanism-based strategy with the transfer function defined in (4.26) gives every player the incentive to reveal true private information to others. For the homogeneous case in which P_i^M = P and h_ii ∼ CN(0, 1) for all i, the transfer function can be further simplified into the following form by application of order statistics:

$$ t_i(\hat{g}_1, \hat{g}_2, \ldots, \hat{g}_K) = \sum_{j=1}^{K} \int_{\hat{g}_i/P}^{\hat{g}_j/P} \log_2\left(1 + \frac{Pg}{N_0}\right) \exp(-g)\left[1 - \exp(-g)\right]^{K-2} dg. \qquad (4.31) $$

Moreover, with the transfer functions, the summation of all players' payments/incomes adds up to 0 in any time slot:

$$ \sum_{i=1}^{K} t_i(\hat{g}_1, \hat{g}_2, \ldots, \hat{g}_K) = \sum_{i=1}^{K}\left( \Omega_i(\hat{g}_i) - \frac{1}{K-1}\sum_{j=1,\, j\neq i}^{K} \Omega_j(\hat{g}_j) \right) = \sum_{i=1}^{K} \Omega_i(\hat{g}_i) - \sum_{j=1}^{K} \Omega_j(\hat{g}_j) = 0. \qquad (4.32) $$
This means that the monetary transfer occurs only within the community of cooperative players, with neither a surplus nor a deficit at any time. This property is very suitable for the open-spectrum-sharing scenario. The well-known Vickrey–Clarke–Groves (VCG) mechanism can also enforce truth-telling, but it cannot keep the budget balanced. If the VCG mechanism were used, at each slot some players would have to pay a third party outside the community (e.g., a spectrum manager), which goes against the intention of an unlicensed band. Furthermore, having to pay for the band may make players less willing to access the spectrum. Although the VCG mechanism is a good choice for auctions in licensed spectrum, for the unlicensed band, where the goal is increasing spectrum efficiency and enforcing truth-telling rather than making money out of the spectrum resources, the mechanism discussed above is more appropriate.
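The transfer rule (4.26)–(4.27) and its budget-balance property (4.32) can be sketched as follows. Ω_i is estimated by Monte Carlo under the homogeneous model (i.i.d. Exp(1) normalized gains, N_0 = 1), and the claimed values 0.4, 0.8, and 1.1 are illustrative only.

```python
import math
import random

def throughput(g, winner, i, N0=1.0):
    # Equation (4.25): only the player picked by the MTT rule gets positive throughput.
    return math.log2(1.0 + g / N0) if winner == i else 0.0

def omega(i, g_hat_i, K, samples=20000, seed=0):
    """Monte Carlo estimate of (4.27): the expected total throughput of all
    players j != i, given that player i's claimed gain is fixed at g_hat_i."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        g = [rng.expovariate(1.0) for _ in range(K)]
        g[i] = g_hat_i
        winner = max(range(K), key=g.__getitem__)
        total += sum(throughput(g[j], winner, j) for j in range(K) if j != i)
    return total / samples

def transfers(g_hat):
    # Equation (4.26): tax high claims, compensate low claims.
    K = len(g_hat)
    om = [omega(i, g_hat[i], K) for i in range(K)]
    return [om[i] - sum(om[j] for j in range(K) if j != i) / (K - 1) for i in range(K)]

t = transfers([0.4, 0.8, 1.1])
```

By construction the transfers sum to zero (budget balance, as in (4.32)), and the player claiming the largest gain pays the most, which is exactly the tax that discourages exaggeration.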
4.5.2 Statistics-based strategy

For the APF rule, every player reports the normalized channel gain, and the player with the highest reported value gains access to the spectrum. Since the normalized gains are all exponentially distributed with mean 1, if the true values are reported, the symmetry will result in an equal share of the time slots in the long run, i.e., each player will have 1/K fractional access to the spectrum. If player i occupies the spectrum for more than (1/K + ε) of the total time, where ε is a predetermined threshold, it is highly likely that he/she has cheated. Consequently, selfish players, in order not to be caught cheating, can access only up to (1/K + ε) of all the time slots even if they distort their private information. Thus, the statistics-based cheat-proof strategy for the APF spectrum-sharing rule can be designed as follows. Everyone keeps a record of the spectrum usage in the past. If any player is found to be overusing the spectrum, i.e., transmitting for more than (1/K + ε) of the entire time, that player will be marked as a cheater and thus be punished. In this way, the profit derivable from cheating, defined as the ratio of the extra usage over the normal usage, is greatly limited.

Theorem 4.5.2 The profit derivable from cheating is bounded when the statistics-based strategy is employed; furthermore, the profit approaches 0 as n → ∞.

PROOF. The worst case is that the cheater gets exactly (1/K + ε) of the total resources without being caught. The profit derivable from cheating is then at most ε/(1/K) = Kε, which is bounded. Moreover, the threshold ε can shrink with time; to make this explicit, we use ε[n] to denote the threshold at slot n. In a particular time slot, the event that a particular player accesses the spectrum is a Bernoulli-distributed random variable with mean 1/K. Then, the n-slot averaged access rate of a player is the average of n i.i.d. Bernoulli random variables, since the channel fading is independent from slot to slot. According to the central limit theorem [146], the average access rate converges in distribution to a Gaussian random variable with mean 1/K, whose variance decays with rate 1/n. To keep the same false-alarm rate, ε[n] can be chosen to decrease with rate 1/√n. Then, the upper bound of the cheating profit, Kε[n], will decay to 0 as n → ∞.

Therefore, we can conclude that the benefit to the cheater, or, equivalently, the harm to the others, is quite limited. As a result, this statistics-based strategy is cheat-proof.
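A small simulation illustrates the statistics-based test; the parameters (K = 4, n = 20 000 slots, a 3σ threshold ε[n] ∝ 1/√n) are hypothetical. An honest player's long-run share stays near 1/K, while a player who inflates his/her reported gains overshoots the (1/K + ε) bound and is flagged.

```python
import random

def access_shares(K, slots, cheat_boost=1.0, cheater=0, seed=7):
    """Each slot, the largest *claimed* normalized gain wins;
    a cheater multiplies his/her claim by cheat_boost > 1."""
    rng = random.Random(seed)
    wins = [0] * K
    for _ in range(slots):
        claims = [rng.expovariate(1.0) for _ in range(K)]
        claims[cheater] *= cheat_boost
        wins[max(range(K), key=claims.__getitem__)] += 1
    return [w / slots for w in wins]

K, n = 4, 20000
p = 1.0 / K
eps = 3.0 * (p * (1.0 - p) / n) ** 0.5   # 3-sigma threshold, shrinking as 1/sqrt(n)

honest = access_shares(K, n)
cheating = access_shares(K, n, cheat_boost=3.0)
flagged = [share > p + eps for share in cheating]
```

With these numbers the cheater's share trips the threshold by a wide margin, while the bound Kε[n] on the undetectable cheating profit shrinks toward zero as n grows, as stated in Theorem 4.5.2.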
4.6 Simulation results

In this section, we conduct numerical simulations to evaluate the spectrum-sharing game with cheat-proof strategies. We first look into the simplest case with two players (K = 2) to gain some insight. We assume that the two players are homogeneous with P_1^M = P_2^M = P, {h_11, h_22} ∼ CN(0, 1), and {h_12, h_21} ∼ CN(0, γ), where γ = E[|h_12|²]/E[|h_11|²] reflects the relative strength of the interference over the desired signal power; we call it the interference level. The prerequisite for the players to join the game is that each individual can obtain more profit by cooperating, i.e., r_i^C > r_i^S; however, cooperation is unnecessary in the extreme case when there is no mutual interference (γ = 0). Therefore, we want to know at what interference level γ the presented cooperation is profitable. Figure 4.3 shows the cooperation payoff r_i^C and the non-cooperation payoff r_i^S versus γ when the average SNR is P/N_0 = 15 dB. Since the two rules are equivalent in the homogeneous case, only the MTT rule is demonstrated. Under cooperative spectrum sharing, since only one player gets the transmission opportunity in each time slot, the expected payoff is independent of the strength of interference, and thus is a horizontal line in the figure. The non-cooperation payoff drops significantly as the interference strength increases. From Figure 4.3, we can see that the payoff associated with cooperation is larger than that for non-cooperation over a wide range of the interference level (γ > 0.15). Therefore, cooperation is profitable in a medium-to-high-interference environment, which is typical for an urban area with a high user density.
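The comparison in Figure 4.3 can be reproduced by Monte Carlo. The sketch below assumes that the selfish (non-cooperative) payoff treats the other player's signal as noise and that, under cooperation, the player with the larger direct gain transmits alone; the 15-dB SNR matches the setting above, while the sample size is an arbitrary choice.

```python
import math
import random

def two_player_payoffs(gamma, snr=10 ** 1.5, slots=100000, seed=3):
    """Player 1's average payoff with (r_coop) and without (r_selfish)
    cooperation, for Rayleigh-faded gains and interference level gamma."""
    rng = random.Random(seed)
    r_coop = r_selfish = 0.0
    for _ in range(slots):
        g11 = rng.expovariate(1.0)           # |h_11|^2, desired link of player 1
        g22 = rng.expovariate(1.0)           # |h_22|^2, desired link of player 2
        g12 = gamma * rng.expovariate(1.0)   # |h_12|^2, interference seen by player 1
        r_selfish += math.log2(1.0 + snr * g11 / (1.0 + snr * g12))
        if g11 > g22:                        # cooperative rule: best channel transmits alone
            r_coop += math.log2(1.0 + snr * g11)
    return r_coop / slots, r_selfish / slots

rc_high, rs_high = two_player_payoffs(gamma=1.0)   # strong mutual interference
rc_zero, rs_zero = two_player_payoffs(gamma=0.0)   # no mutual interference
```

At γ = 1 cooperation clearly wins, while at γ = 0 time-sharing the band is pure loss; the cooperative payoff itself does not depend on γ, reproducing the crossover behavior described for Figure 4.3.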
Figure 4.3 Comparison of payoffs when the players share the spectrum either cooperatively or non-cooperatively. (Payoffs r_i^C and r_i^S plotted versus the interference level γ.)
Figure 4.4 An illustration of the punishment-based repeated game. (Payoffs versus the time index n, showing the deviation at slot 150 followed by the punishment stage.)
In Figure 4.4, we illustrate the idea of the punishment-based repeated game. Assume that player 1 deviates from cooperation at slot 150, and that the duration of the punishment stage is T = 150. According to the "punish-and-forgive" strategy, the game stays in the punishment stage from time slot 151 to time slot 300. Figure 4.4 shows a result averaged over 100 independent runs. We can see that, although the player gets a high payoff at time slot 150 by deviating, the temporary profit is negated during the punishment stage. Hence, considering the consequence of deviation, players have no incentive to deviate.

The effect of δ is demonstrated in Figure 4.5. We assume that all players have the same tendency toward deviation, i.e., τ_i = τ for all i, where τ_i = R_i^D/(r_i^C − r_i^S). We evaluate the effect of δ when τ equals 5, 10, and 20. Given a fixed δ, any punishment duration T above the curve can deter deviation, and the duration should be longer for larger τ, since with larger τ players have a greater incentive to deviate. In addition, when δ is close to 1, the minimal duration goes to τ + 1 as in (4.11), and δ has to be larger than δ̄ = τ/(1 + τ) to guarantee that punishment can prevent players from deviating (δ > δ̄ is necessary in order to make (4.10) valid). In other words, in situations where players are impatient (δ is far from 1) and the tendency to deviate is strong, it is impossible to maintain cooperation using the "punish-and-forgive" strategy with repeated-game modeling.

Figure 4.5 Effect of the discount factor δ on the punishment duration T. (Minimum duration T_min versus δ for τ = 5, 10, and 20.)

Now, we take imperfect detection into consideration. Figure 4.6 shows how a player's discounted utility ǔ(α) is affected by α when an energy detector with a fixed detection threshold is employed. We can see that, when the detection time is short, the utility is quite low owing to the high false-alarm rate. On the other hand, when the detection time is too long, a significant portion of the transmission opportunity is wasted. Therefore, α should be carefully designed to achieve the optimal tradeoff that maximizes the utility.

Figure 4.6 Effect of the length of detection on the discounted utility. (Modified discounted utility versus the portion α of the time slot used for detection.)

Next, we show the payoffs of the cooperation rules in a heterogeneous environment. By heterogeneity, we mean that different players may differ in terms of their power constraints, averaged direct-channel gains {h_ii, i = 1, 2, . . . , K}, averaged cross-channel gains {h_ij, i ≠ j}, or some combination of them. Here we illustrate the results only for the case in which the power constraints differ, since other types of asymmetry yield similar results. In the simulation, we fix the power constraint of player 1 and increase P_2^M, the power constraint of player 2. The payoffs with the MTT and APF rules are shown in Figure 4.7, where "1" and "2" refer to the payoffs of player 1 and player 2, respectively. As benchmarks, the payoffs accrued without cooperation and the payoffs obtained using the max–min fairness criterion (another resource-allocation criterion, which sacrifices efficiency for absolute fairness; see [231]), denoted by "NOC" and "MMF," respectively, are provided. Since player 2 has more power to transmit data, he/she can be seen as the strong player in this heterogeneous environment. As can be seen from Figure 4.7, both the MTT rule and the APF rule outperform the non-cooperation case, which means that players have an incentive to cooperate no matter which rule is used. Furthermore, the MTT rule favors the strong player in order to maximize the system efficiency, whereas the APF rule achieves a tradeoff between efficiency and fairness. The MMF curves show that the strong user is inhibited in order to reach the absolute level of fairness, which might conflict with selfish users' interests.

Figure 4.7 The payoffs under a heterogeneous setting with different cooperation rules. (Payoffs of players 1 and 2 versus P_2^M/P_1^M for the NOC, MTT, APF, and MMF schemes.)

We also conduct simulations for spectrum sharing with more than two users. In Figure 4.8, the cooperation gain, which is characterized by the ratio r_i^C/r_i^S, is plotted versus the number of players K. We assume a homogeneous environment with a fixed interference level γ = 1. Since the allocation rules reap multiuser diversity gains, the cooperation gain increases as more players become involved in the sharing game.
Figure 4.8 The cooperation gain in a K-player spectrum-sharing game. (Cooperation gain r_i^C/r_i^S versus the number of players K.)
Figure 4.9 The expected overall payoffs for various claimed values. (Expected overall payoff versus the claimed private value for three players with true values 0.4, 0.8, and 1.1.)
Finally, we examine the mechanism-design-based cheat-proof strategy. We assume a three-user spectrum-sharing game with the MTT rule. In one specific time slot, for example, the true private values are g̃_1 = 0.4, g̃_2 = 0.8, and g̃_3 = 1.1. In Figure 4.9, the expected overall payoffs (throughput plus transfer) versus the claimed values are shown for each player, given that the other two are honest. From Figure 4.9, we can see that the payoff is maximized only if the player honestly claims his/her true information. Therefore, players are self-enforced to tell the truth with the mechanism presented here.
4.7 Summary and bibliographical notes

We have discussed a spectrum-sharing scheme with cheat-proof strategies to improve the efficiency of open spectrum sharing. The spectrum-sharing problem is modeled as a repeated game in which any deviation from cooperation triggers punishment. We present two cooperation rules that take both efficiency and fairness into consideration, and optimize the detection time to alleviate the impact of imperfect detection of selfish behavior. Moreover, we consider two cheat-proof strategies, based on mechanism design and on the properties of the channel statistics, that enforce truthful reporting of channel information by selfish users. Simulation results show that the scheme efficiently improves spectrum usage by alleviating the mutual interference.

In [306], the authors proposed a repeated-game approach to increase efficiency when multiple primary users sell their bands. The repeated game was also employed to model the open-spectrum-sharing problem in [110], under the assumption that the channels are time-invariant. In [63], iterative waterfilling power allocation was proposed for Gaussian interference channels with frequency-selective fading, and convergence was discussed in [393] under the more general assumption that users update their powers asynchronously. Some practical difficulties of the iterative waterfilling method were circumvented in [164] by exchanging an "interference price" that takes mutual interference into consideration. Interested readers can refer to [466].
5 Pricing games for dynamic spectrum allocation

In a cognitive radio network, collusion among selfish users may have seriously deleterious effects on the efficiency of dynamic spectrum sharing. The network users' behaviors and dynamics need to be taken into consideration for efficient and robust spectrum allocation. In this chapter, we model spectrum allocation in wireless networks with multiple selfish legacy spectrum holders and unlicensed users as multi-stage dynamic games. In order to combat user collusion, we present a pricing-based collusion-resistant approach to dynamic spectrum allocation that optimizes overall spectrum efficiency while both preserving the selfish users' incentives to participate and combating possible user collusion. The simulation results show that the scheme achieves a high efficiency of spectrum usage even in the presence of severe user collusion.
5.1 Introduction

Traditional network-wide spectrum assignment is carried out by a central server, namely a spectrum broker. Distributed spectrum-allocation approaches that enable efficient spectrum sharing solely on the basis of local observations have recently been studied. From the economic point of view, the deregulation of spectrum use further encourages market mechanisms for implementing efficient spectrum allocation. Because of the spectrum dynamics and the lack of a centralized authority, spectrum allocation needs to adapt distributively, on the basis of locally observed information, to the dynamics of wireless networks caused by node mobility, channel variations, or varying wireless traffic. From the game-theoretic point of view, first of all, spectrum allocation needs to be studied in a multi-stage dynamic-game framework instead of via a static-game approach. Second, with the emerging applications of mobile ad hoc networks envisioned for civilian usage, the users may be selfish and aim to maximize their own interests. In dynamic spectrum-allocation scenarios, the selfish users' cheating behaviors need to be well handled in order to have a robust spectrum-allocation scheme. The collusive behavior of selfish users [123] [237], which is a prevalent threat to efficient dynamic spectrum allocation, has generally been overlooked and needs to be studied extensively. Therefore, novel spectrum-allocation approaches considering the spectrum dynamics and countermeasures to the users' collusive behaviors need to be developed.

Considering a general network scenario in which multiple primary users (legacy spectrum holders) and secondary users (unlicensed users) coexist, primary users attempt to sell unused spectrum resources to secondary users for monetary gains, while secondary users try to acquire spectrum-usage permissions from primary users to achieve certain communication goals, which generally introduce reward payoffs for them. In this chapter, we model spectrum allocation as a multi-stage dynamic game and discuss an efficient, distributive, pricing-based collusion-resistant dynamic spectrum-allocation approach to optimize the overall spectrum efficiency while retaining the participation incentives of the users on the basis of double-auction rules.

The main points of this chapter are as follows. First, by modeling spectrum sharing as a multi-stage dynamic pricing game, we are able to coordinate the spectrum allocation among primary and secondary users through a bilateral pricing process to maximize the utilities of both primary and secondary users according to the spectrum dynamics. More importantly, our study further focuses on countermeasures to selfish users' collusive behaviors. A distributed collusion-resistant dynamic pricing approach with optimal reserve prices is designed to achieve efficient spectrum allocation while combating user collusion. The pricing overhead can be greatly decreased by introducing a belief function for each user to help decision making. Moreover, the Nash bargaining solution is applied to derive the performance lower bound of the scheme in the presence of user collusion, and the time diversity of the spectrum resources is further exploited by considering the budget constraints of the secondary users.

The remainder of this chapter is organized as follows. The system model for dynamic spectrum allocation is described in Section 5.2. In Section 5.3, we formulate the spectrum allocation as pricing games based on the system model. In Section 5.4, a collusion-resistant dynamic pricing approach is considered in order to achieve efficient spectrum allocation while combating user collusion. The simulation studies are provided in Section 5.5. Finally, Section 5.6 summarizes this chapter.
5.2 The system model

We consider wireless networks where multiple primary users and secondary users operate simultaneously, which may represent various network scenarios. For instance, the primary users can be spectrum brokers connected to the core network while the secondary users are base stations equipped with cognitive radio technologies; or the primary users are the access points of a mesh network and the secondary users are mobile devices. On the one hand, considering that the authorized spectrum of the primary users might not be fully utilized over time, they prefer to lease the unused channels to the secondary users for monetary gains. On the other hand, since the unlicensed spectrum is becoming more and more crowded, the secondary users may try to lease unused channels from the primary users, in exchange for leasing payments, to obtain more communication gains.

In our system model, we assume that all users are selfish and rational; that is, their objectives are to maximize their own utilities, not to cause damage to other users. However, users are allowed to cheat whenever they believe cheating behaviors can help them to increase their utilities. Note that not only is cheating by a single selfish user possible, but also collusive cheating among several selfish users poses a threat to efficient spectrum allocation. Generally speaking, in order to acquire spectrum licenses from regulatory bodies such as the FCC, the primary users have to pay certain operating costs. As for the secondary users, in order to gain the rewards of achieving certain communication goals, they want to utilize more spectrum resources.

Specifically, we consider the collection of the available spectrum from all primary users as a spectrum pool, which in total consists of N non-overlapping channels. Assume that there are J primary users and K secondary users, indicated by the sets P = {p_1, p_2, . . . , p_J} and S = {s_1, s_2, . . . , s_K}, respectively. We represent the channels authorized to primary user p_i using a vector A_i = {a_i^j}_{j∈{1,2,...,n_i}}, where a_i^j represents the channel index in the spectrum pool and n_i is the total number of channels belonging to user p_i. Define A as the set of all channels in the spectrum pool. Moreover, denote the acquisition costs of user p_i's channels by the vector C_i = {c_i^{a_i^j}}_{j∈{1,2,...,n_i}}, where the jth element represents the acquisition cost of the jth channel in A_i. For simplicity, we write c_i^{a_i^j} as c_i^j. We use C to denote the set of the costs of all spectrum channels. As for secondary user s_i, we define her/his payoff vector as V_i = {v_i^j}_{j∈{1,2,...,N}}, where the jth element is the reward payoff if this user successfully leases the jth channel in the spectrum pool.
5.3 Pricing-game models
5.3.1 Game settings for dynamic spectrum allocation

On the basis of the system model in the previous section, we are able to define the utility functions of the players in our dynamic game. Specifically, if primary user $p_i$ reaches agreements for the leasing of all or part of her/his channels to secondary users, the utility function of this primary user can be written as follows:

$$U_{p_i}\left(\phi_{A_i}, \alpha_i^{A_i}\right) = \sum_{j=1}^{n_i} \left(\phi_{a_i^j} - c_i^{a_i^j}\right) \alpha_i^{a_i^j}, \qquad (5.1)$$

where $\phi_{A_i} = \{\phi_{a_i^j}\}_{j \in \{1,2,\ldots,n_i\}}$ and $\phi_{a_i^j}$ is the payment that user $p_i$ obtains from the secondary user by leasing the channel $a_i^j$ in the spectrum pool. Note that $\alpha_i^{A_i} = \{\alpha_i^{a_i^j}\}_{j \in \{1,2,\ldots,n_i\}}$ and $\alpha_i^{a_i^j} \in \{0, 1\}$, which indicates whether the $j$th channel of user $p_i$ has been allocated to a secondary user or not. For simplicity, we write $\alpha_i^{a_i^j}$ as $\alpha_i^j$. Similarly, the utility function of secondary user $s_i$ can be modeled as follows:

$$U_{s_i}\left(\phi_{\mathcal{A}}, \beta_i^{\mathcal{A}}\right) = \sum_{j=1}^{N} \left(v_i^j - \phi_j\right) \beta_i^j, \qquad (5.2)$$
Pricing games for dynamic spectrum allocation
j j where φA = {φ j } j∈{1,2,...,N } , βiA = βi j∈{1,2,...,N } . Note that βi ∈ {0, 1} illustrates whether secondary user si successfully leases the jth channel in the spectrum pool or not. From the above discussion, we can see that the players may have conflicting interests. Specifically, the primary users want to earn as much payment as possible by leasing the unused channels and the secondary users aim to accomplish their communication goals by providing the least possible payment for leasing the channels. Moreover, the spectrum allocation involves multiple channels over time. Therefore, the users involved in the spectrum allocation process construct a multi-stage non-cooperative pricing game [322] [123]. Moreover, the selfish users will not reveal their private information to others unless some mechanisms have been applied to guarantee that it is not harmful to disclose the private information. In general, such a non-cooperative game with incomplete information is complex and difficult to study because the players do not know the perfect strategy profile of others. Nevertheless, within our game setting, the well-developed auction theory [237] can be applied to formulate and analyze our pricing game. In auction games, according to an explicit set of rules, the principles (auctioneers) determine resource allocation and prices on the basis of bids from the agents (bidders). In our spectrum allocation pricing game, the primary users (principles) attempt to sell the unused channels to the secondary users and the secondary users (bidders) compete with each other to buy permission to use primary users’ channels. In our pricing game, multiple primary and secondary users coexist, which indicates the relevance of the double-auction scenario. 
This means that not only the secondary users but also the primary users need to compete with each other to make beneficial transactions possible, by eliciting the participants' willingness to pay in the form of bids or asking prices. The most important property of the double-auction mechanism is its high efficiency, exemplified by the New York Stock Exchange (NYSE) and the Chicago Mercantile Exchange (CME). However, in our spectrum allocation games, collusive cheating behaviors can cause serious deterioration of the game outcomes when participants form cartels to suppress competition among users and distort the supply and demand relationships. Moreover, owing to the dynamic nature of spectrum resources, it becomes more difficult for the network users to discern whether spectrum pricing variations arise from user collusion or from changing spectrum dynamics. Also, unlike in the NYSE or CME, the existence of powerful centralized authorities cannot be pre-assumed, and the bandwidth of control channels is very limited. Therefore, we aim to develop an efficient pricing approach for spectrum allocation that not only combats collusion among users but also adapts to the spectrum dynamics distributively.
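To make the stage payoffs concrete, the utilities (5.1) and (5.2) can be sketched in code as follows. This is a minimal sketch; the function and variable names are ours, not the book's notation.

```python
# Sketch of the stage utilities (5.1) and (5.2); names are illustrative only.

def primary_utility(channels, phi, cost, alpha):
    """U_{p_i} in (5.1): sum of (payment - acquisition cost) over leased channels."""
    return sum((phi[ch] - cost[ch]) * alpha[ch] for ch in channels)

def secondary_utility(num_channels, phi, reward, beta):
    """U_{s_i} in (5.2): sum of (reward payoff - payment) over leased channels."""
    return sum((reward[j] - phi[j]) * beta[j] for j in range(num_channels))

# Toy example: one primary user owning channels 0 and 1; only channel 0 is leased.
phi = {0: 5.0, 1: 3.0}       # agreed payments per channel
cost = {0: 2.0, 1: 4.0}      # acquisition costs per channel
alpha = {0: 1, 1: 0}         # allocation indicators
print(primary_utility([0, 1], phi, cost, alpha))   # (5-2)*1 + (3-4)*0 = 3.0
```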
5.3.2 Static pricing games and competitive equilibrium

Assume that the channels available from the primary users are leased for usage for a certain time period $T$. Also, we assume that the cost of the primary users and reward
payoffs of the secondary users remain unchanged over this period. Before this spectrum-sharing period, we define a trading period $\tau$, within which the users exchange their bids and asking prices to reach agreements regarding spectrum usage. The time period $T + \tau$ is considered as one stage in our pricing game. We first study the interactions of the players in static pricing games. Note that the users' goals are to maximize their own payoff functions. For the primary users, the optimization problem can be written as follows:

$$O(p_i) = \max_{\phi_{A_i},\, \alpha_i^{A_i}} U_{p_i}\left(\phi_{A_i}, \alpha_i^{A_i}\right), \quad \forall i \in \{1,2,\ldots,J\}$$
$$\text{s.t. } U_{\hat{s}_{a_i^j}}\left(\{\phi_{-a_i^j}, \phi_{a_i^j}\}, \beta_i^{\mathcal{A}}\right) \ge U_{\hat{s}_{a_i^j}}\left(\{\phi_{-a_i^j}, \tilde{\phi}_{a_i^j}\}, \beta_i^{\mathcal{A}}\right), \quad \hat{s}_{a_i^j} \ne 0,\ a_i^j \in A_i, \qquad (5.3)$$

where $\tilde{\phi}_{a_i^j}$ is any feasible payment and $\phi_{-a_i^j}$ is the payment vector excluding the element of the payment for the channel $a_i^j$. Note that $\hat{s}_{a_i^j}$ is defined as follows:

$$\hat{s}_{a_i^j} = \begin{cases} s_k & \text{if } \beta_k^{a_i^j} = 1, \\ 0 & \text{if } \beta_k^{a_i^j} = 0, \end{cases} \quad \forall k \in \{1,2,\ldots,K\}. \qquad (5.4)$$
Thus, (5.3) is the incentive-compatible constraint [237]. It means that the secondary users have an incentive to provide the optimal payment because they cannot achieve extra gains by unilaterally cheating against the primary users. Similarly, the optimization problem for the secondary users can be written as follows:

$$O(s_i) = \max_{\phi_{\mathcal{A}},\, \beta_i^{\mathcal{A}}} U_{s_i}\left(\phi_{\mathcal{A}}, \beta_i^{\mathcal{A}}\right), \quad \forall i \in \{1,2,\ldots,K\}$$
$$\text{s.t. } U_{\hat{p}_j}\left(\{\phi_{-j}, \phi_j\}, \beta_i^{\mathcal{A}}\right) \ge U_{\hat{p}_j}\left(\{\phi_{-j}, \tilde{\phi}_j\}, \beta_i^{\mathcal{A}}\right), \quad \hat{p}_j \ne 0,\ \beta_i^j = 1, \qquad (5.5)$$

where $\hat{p}_j$ is defined as

$$\hat{p}_j = \begin{cases} p_k & \text{if } \beta_i^j = 1,\ j \in A_k,\ \alpha_k^j = 1, \\ 0 & \text{otherwise}, \end{cases} \quad \forall k \in \{1,2,\ldots,J\}. \qquad (5.6)$$
Note that $\hat{p}_j$ represents the primary user who has leased the $j$th channel to secondary user $s_i$. Similarly, (5.5) is the incentive-compatible constraint for the primary users, which guarantees that the primary users will give permission for usage of their channels to the secondary users so that they can receive the optimal payments.

From (5.3) and (5.5), we can see that, in order to obtain the optimal allocation and payments, a multi-objective optimization problem needs to be solved, which becomes extremely complicated because our game setting involves incomplete information. Thus, in order to make this problem tractable, we analyze it from the game-theoretic point of view. Considering the double-auction scenario of our pricing game, competitive equilibrium (CE) [136] is a well-known theoretical prediction of the outcomes. It is the price at which the number of buyers willing to buy is equal to the number of sellers willing to sell. Alternatively, CE can also be interpreted as the situation in which supply and demand match. We depict the supply and demand functions of spectrum resources in Figure 5.1, which plots the price (acquisition costs or reward payoffs) against the quantity (the number of channels); the supply and demand curves intersect at the competitive equilibrium. Note that CE has also been proved to be Pareto optimal in stationary double-auction scenarios.

Figure 5.1 Supply and demand functions.
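The CE just described can be computed directly from the two value lists. The following sketch (our own construction, ignoring boundary ties) sorts acquisition costs ascending as the supply curve and reward payoffs descending as the demand curve, and counts trades while a buyer still outbids a seller:

```python
def competitive_equilibrium(costs, payoffs):
    """Return the clearing quantity and a clearing-price interval at which
    the number of sellers willing to sell equals the number of buyers
    willing to buy (a sketch; boundary ties are not treated specially)."""
    supply = sorted(costs)                   # ascending acquisition costs
    demand = sorted(payoffs, reverse=True)   # descending reward payoffs
    q = 0
    while q < min(len(supply), len(demand)) and demand[q] >= supply[q]:
        q += 1
    if q == 0:
        return 0, None
    lo, hi = supply[q - 1], demand[q - 1]
    if q < len(demand):
        lo = max(lo, demand[q])   # price must exclude the (q+1)th buyer
    if q < len(supply):
        hi = min(hi, supply[q])   # ... and the (q+1)th seller
    return q, (lo, hi)

# Four channels at costs 1,3,5,7; four buyers with payoffs 8,6,4,2.
print(competitive_equilibrium([1, 3, 5, 7], [8, 6, 4, 2]))   # (2, (4, 5))
```

At any price between 4 and 5, exactly two sellers (costs 1 and 3) and two buyers (payoffs 8 and 6) are willing to trade, which is the CE of this toy market.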
5.3.3 Multi-stage dynamic pricing games for spectrum allocation

Considering the spectrum dynamics due to mobility, channel variations, or variations in wireless traffic, the secondary users' reward payoffs and the primary users' costs may change over time or spectrum. Thus, $c_i^j$ and $v_i^j$ need to be considered as random variables in dynamic scenarios. Without loss of generality, we assume homogeneous game settings for the statistics of $c_i^j$ and $v_i^j$, which satisfy the probability density functions (PDFs) $f_c(c)$ and $f_v(v)$, respectively. Therefore, considering dynamic network conditions, we further model the spectrum sharing as a multi-stage dynamic pricing game. Let $\gamma$ be the discount factor of our multi-stage pricing game. Using (5.3) and (5.5), the objective functions for the primary users and secondary users can be rewritten as follows:

$$\tilde{O}(p_i) = \max_{\phi_{A_i,t},\, \alpha_{i,t}^{A_i}} E_{c^j, v^j}\left[\sum_{t=1}^{\infty} \gamma^t \cdot U_{p_i,t}\left(\phi_{A_i,t}, \alpha_{i,t}^{A_i}\right)\right], \qquad (5.7)$$

$$\tilde{O}(s_i) = \max_{\phi_{\mathcal{A},t},\, \beta_{i,t}^{\mathcal{A}}} E_{c^j, v^j}\left[\sum_{t=1}^{\infty} \gamma^t \cdot U_{s_i,t}\left(\phi_{\mathcal{A},t}, \beta_{i,t}^{\mathcal{A}}\right)\right], \qquad (5.8)$$
where the subscript $t$ indicates the $t$th stage of the multi-stage game. Generally speaking, there may exist overall constraints on spectrum sharing, such as each secondary user's total budget for leasing spectrum resources or each primary user's total available spectrum supply. Under these constraints, the above problem can be further solved using a dynamic programming process as in [214] [204]. However, the major difficulty of robust and efficient spectrum allocation lies in how to efficiently combat the selfish users' cheating behaviors, especially user collusion, on the basis of local information alone.
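For a fixed strategy, the discounted objectives (5.7) and (5.8) reduce to a discounted sum of realized stage utilities; a one-function sketch:

```python
def discounted_objective(stage_utilities, gamma):
    """Discounted sum of stage utilities, sum over t of gamma^t * U_t,
    with t starting at 1 as in (5.7)/(5.8)."""
    return sum(gamma ** t * u for t, u in enumerate(stage_utilities, start=1))

# With gamma = 0.5 and a unit utility in each of three stages:
print(discounted_objective([1.0, 1.0, 1.0], 0.5))   # 0.5 + 0.25 + 0.125 = 0.875
```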
5.4 Collusion-resistant dynamic spectrum allocation

In this section, we first discuss the impact of user collusion on auction-based approaches to dynamic spectrum allocation. Then, we study collusion-resistant dynamic spectrum allocation for two simplified scenarios: (1) multiple secondary users and one primary user (MSOP); and (2) one secondary user and multiple primary users (OSMP). In Sections 5.4.3 and 5.4.4 we extend our study to a more general spectrum allocation scenario with multiple primary users and multiple secondary users (MSMP). Note that the collusion-resistant spectrum allocation in this section is studied under the multi-stage pricing-game framework discussed in the previous section.
5.4.1 User collusion in auction-based spectrum allocation

In order to have a robust dynamic spectrum allocation mechanism in wireless networks with selfish users, the cheating behaviors of selfish users need to be well studied and counteracted. Otherwise, the spectrum allocation mechanism may become unsustainable and could lead to unpredictable outcomes. On the one hand, spectrum allocation can generally be regarded as being similar to generic medium access control (MAC) problems in existing systems and studied from the perspective of wireless resource allocation. On the other hand, efficient spectrum allocation can be achieved by studying it from the perspective of the driving economic forces and mechanisms. Therefore, the unique nature of dynamic spectrum allocation imposes new challenges on mechanism design against cheating behaviors. Basically, all the cheating behaviors related to MAC problems in wireless systems still threaten the functionality of spectrum-sharing mechanisms. More importantly, wireless spectrum has become a scarce resource with huge economic potential, which can be exploited only through efficient pricing-based market designs. The cheating threats against these market designs thus make robust dynamic spectrum access an even more complicated problem. Since cheating behaviors on MAC protocols can still be dealt with using traditional countermeasures, and the auction mechanism is incentive-compatible for each single user, we focus our study on efficient collusion-resistant dynamic spectrum-allocation mechanisms.

Although incentive-compatibility can be assured in most auction-based dynamic spectrum allocation approaches, such as the optimal auction or the Vickrey auction, which indicates that no selfish user will cheat on the auction mechanism unilaterally, one
prevalent cheating behavior, bidding collusion among users, has generally been overlooked. To be specific, the bidders (or sellers) act collusively and engage in bid rigging with a view to obtaining lower prices (or higher prices). The resulting arrangement is called a bidding ring. In the scenarios of auction-based spectrum allocation, the operation of a bidding ring among the primary users (or secondary users) increases their utilities by collusively leasing the spectrum channels at a higher price (or at a lower price). Considering the spectrum dynamics caused by variations in wireless channels, user mobility, or varying wireless traffic, it becomes difficult to tell whether a price variation arises from possible bidding collusion or from the varying demand and supply of spectrum resources. Hence, traditional auction-based spectrum allocation mechanisms become vulnerable and unstable in the presence of collusive behaviors. In Figures 5.2 and 5.3, we illustrate snapshots of pricing-based dynamic spectrum access networks without user collusion and with user collusion, respectively. Here, we consider the primary base station as the primary user and the
Figure 5.2 The situation of no collusion in pricing-based dynamic spectrum allocation.
Figure 5.3 User collusion in pricing-based dynamic spectrum allocation.
unlicensed mobile users as the secondary users. When there is no user collusion, as in Figure 5.2, the pricing interactions between the primary user and the secondary users lead to efficient spectrum allocation. When there exist several bidding rings, as in Figure 5.3, each bidding ring elicits only one effective bid for spectrum resources, which distorts the supply and demand of spectrum resources and yields inefficient spectrum allocation. Further, in the extreme case in which all secondary users collude with each other, an arbitrarily low bid price becomes eligible. Thus, collusion-resistant dynamic spectrum allocation is important for efficient next-generation wireless networking.

In the traditional scenarios with an open ascending price, i.e., the English auction (or reverse English auction), there is one seller and multiple buyers (or one buyer and multiple sellers). In order to combat a bidding ring, the seller (or buyer) can enhance her/his utility by setting a proper reserve price on the basis of the size of the bidding ring, i.e., the number of collusive users, and the statistics of each user's true value. However, in our scenarios of dynamic spectrum allocation, with multiple primary and secondary users having only local information, either the number of collusive users is not known or determination of the reserve price becomes very complicated, given limited and imperfect information. Therefore, the question of how to design efficient collusion-resistant dynamic spectrum-allocation mechanisms becomes pressing and crucial.
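The effect of a bidding ring on an ascending auction can be illustrated with a tiny sketch (the numbers are hypothetical): each ring contributes only one effective bid, so the second-highest effective bid, which is the price the seller would receive, drops.

```python
def effective_bids(rings):
    """Each bidding ring is represented only by its highest-payoff member;
    the other colluders submit non-serious bids, so one effective bid per ring."""
    return [max(ring) for ring in rings]

# Six secondary users with payoffs 9, 8, 6, 5, 2, 1 form three bidding rings.
# Without collusion the second-price auction would clear at 8; with the rings
# below, only the bids 9, 6, 5 survive and the clearing price falls to 6.
print(sorted(effective_bids([[9, 8], [6, 2], [5, 1]]), reverse=True))   # [9, 6, 5]
```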
5.4.2 MSOP and OSMP scenarios

In this section, we develop dynamic spectrum allocation mechanisms that are robust against user collusion in MSOP and OSMP scenarios. Considering that we have one primary user and multiple secondary users in a snapshot of wireless networks, we assume the system model of Section 5.2, except that only one primary user $p_i$ is available for providing spectrum-leasing services. The standard open ascending-price auction is chosen for the secondary users to compete for the spectrum resources, which is theoretically equivalent to a sealed-bid second-price auction. Here, the presence of collusion among secondary users may generate extra utilities for the collusive users by suppressing competition for spectrum resources. Owing to the network dynamics and imperfect available information, the primary user cannot make a credible assumption about the presence of collusion or the number of collusive users, and there do not exist trustworthy anti-cartel authorities in the network. Therefore, the only instrument giving the primary user possible leverage against collusion is the option of setting an optimal reserve price. In the rest of this section, we first derive the theoretical optimal reserve price for the spectrum allocation game. Then, by considering the properties of our spectrum allocation game, such as the unknown number of collusive users and imperfect/local bidding information, a collusion-resistant dynamic spectrum allocation mechanism is developed to efficiently allocate spectrum resources while combating collusive cheating behaviors.

Specifically, we assume that the $K$ secondary users are divided into $K_r$ bidding rings and that the size of the $k$th bidding ring is $m_k$. Note that $\sum_{k=1}^{K_r} m_k = K$, $m_k \ge 1$. Basically, the collusion among the secondary users within each bidding ring does not affect
the strategies of users who do not participate in the bidding ring. Further, the bidding ring can be represented by the collusive secondary user with the highest reward payoff. The other collusive users submit only non-serious bids at or below the reserve price, which substantially limits the competition among secondary users. Thus, instead of $K$ effective competing secondary users, only $K_r$ effective users should be considered to be bidding for spectrum resources. Assume that the equivalent reward payoff of the $k$th bidding ring is $\nu_{m_k}^{a_i^j}$, namely the highest reward payoff among the $m_k$ collusive users for the $a_i^j$th channel in the spectrum pool. Thus, the payoff vector for the effective users can be represented as $(\nu_1^{a_i^j}, \nu_2^{a_i^j}, \ldots, \nu_{K_r}^{a_i^j})$. Note that, for simplicity, we omit the superscript $a_i^j$ in the following if the spectrum assignment is being considered for just one specific channel. Further, let the highest and second-highest reward payoffs among all effective secondary users be $v^{(1)}$ and $v^{(2)}$, respectively.

In order to combat the collusive behaviors of secondary users, the primary user needs to set a reserve price, which means that her/his spectrum resources will not be sold for less than this price. Considering the theoretical equivalence of the open ascending-price auction and the second-price auction, we study the optimal reserve price in the second-price auction setting of our spectrum-allocation game. Let the optimal reserve price be $\phi_{r,p_i}$. Then, the spectrum channel can be leased by $p_i$ if and only if $v^{(1)} > \phi_{r,p_i}$. Moreover, if $v^{(2)} > \phi_{r,p_i}$, the spectrum channel is leased for $v^{(2)}$; otherwise, it is leased at the reserve price $\phi_{r,p_i}$. Let $F_{v^{(1)}}(x)$ and $F_{v^{(2)}}(x)$ denote the cumulative distribution functions (CDFs) of $v^{(1)}$ and $v^{(2)}$, respectively, and let $f_{v^{(1)}}(x)$ and $f_{v^{(2)}}(x)$ denote the corresponding PDFs. Now, the expected utility gain obtained by the primary user with reserve price $\phi_{r,p_i}$ from leasing her/his $j$th channel can be written as

$$E_{V_i, c_i^{a_i^j}}\left[U_{p_i}\left(a_i^j, \phi_{r,p_i}\right)\right] = \left(\phi_{r,p_i} - E\left[c_i^{a_i^j}\right]\right)\left(F_{v^{(2)}}\left(\phi_{r,p_i}\right) - F_{v^{(1)}}\left(\phi_{r,p_i}\right)\right) + \int_{\phi_{r,p_i}}^{M} \left(z - E\left[c_i^{a_i^j}\right]\right) f_{v^{(2)}}(z)\, dz, \qquad (5.9)$$

where $M$ represents the largest possible $v_i^j$. Note that the first term on the right-hand side (RHS) of (5.9) represents the utility when the spectrum channel is leased at the reserve price. This happens if $v^{(1)} > \phi_{r,p_i}$ but $v^{(2)} < \phi_{r,p_i}$, because the channel is then leased not at the second-highest bid but at the reserve price; $F_{v^{(2)}}(\phi_{r,p_i}) - F_{v^{(1)}}(\phi_{r,p_i})$ is the probability that this event happens. The second term on the RHS of (5.9) represents the expected utility when $v^{(2)} \ge \phi_{r,p_i}$. Assuming that an interior maximum exists for (5.9), the optimal reserve price $\phi^*_{r,p_i}$ satisfies the following first-order condition of (5.9):

$$F_{v^{(2)}}\left(\phi^*_{r,p_i}\right) - F_{v^{(1)}}\left(\phi^*_{r,p_i}\right) - \left(\phi^*_{r,p_i} - E\left[c_i^{a_i^j}\right]\right) f_{v^{(1)}}\left(\phi^*_{r,p_i}\right) = 0. \qquad (5.10)$$

Thus the optimal reserve price can be determined from (5.10) if statistical descriptions of $v^{(1)}$ and $v^{(2)}$ are available.
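When the CDFs of $v^{(1)}$ and $v^{(2)}$ are known, the optimal reserve price can also be found numerically by maximizing (5.9) directly over a grid rather than solving (5.10) in closed form. A sketch (our own construction), assuming i.i.d. uniform reward payoffs and zero expected cost for the example:

```python
import numpy as np

def optimal_reserve(F1, F2, c, M=1.0, grid=10001):
    """Numerically maximize the expected utility (5.9) over reserve prices.
    F1, F2: CDFs of the highest/second-highest effective payoffs; c: expected cost."""
    z = np.linspace(0.0, M, grid)
    f2 = np.gradient(F2(z), z)                   # numerical PDF of v(2)
    integrand = (z - c) * f2
    # Cumulative trapezoid from 0, then flipped to get the tail integral from z to M.
    cum = np.concatenate(([0.0],
                          np.cumsum((integrand[1:] + integrand[:-1]) / 2 * np.diff(z))))
    tail = cum[-1] - cum
    u = (z - c) * (F2(z) - F1(z)) + tail         # expected utility (5.9)
    return float(z[np.argmax(u)])

# Example: K_r = 2 effective bidders with i.i.d. Uniform(0,1) payoffs, zero cost.
n = 2
F1 = lambda x: x ** n                                # CDF of the maximum
F2 = lambda x: n * x ** (n - 1) - (n - 1) * x ** n   # CDF of the second highest
print(round(optimal_reserve(F1, F2, c=0.0), 2))      # classic result: 0.5
```

For uniform values and zero seller cost, (5.10) reduces to $n r^{n-1}(1 - 2r) = 0$, so the grid search recovers the familiar reserve price of 0.5.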
Similarly, in the scenarios of OSMP, if we let the lowest and second-lowest acquisition costs among all effective primary users be $c^{(1)}$ and $c^{(2)}$, respectively, the expected utility gain obtained by secondary user $s_i$ with reserve price $\phi_{r,s_i}$ from leasing a channel from the primary users can be written as

$$E_{V_i, \mathcal{C}}\left[U_{s_i}\left(\phi_{r,s_i}\right)\right] = \left(E\left[v_i^j\right] - \phi_{r,s_i}\right)\left(F_{c^{(1)}}\left(\phi_{r,s_i}\right) - F_{c^{(2)}}\left(\phi_{r,s_i}\right)\right) + \int_0^{\phi_{r,s_i}} \left(E\left[v_i^j\right] - z\right) f_{c^{(2)}}(z)\, dz. \qquad (5.11)$$

Correspondingly, if an interior maximum exists for (5.11), the first-order condition of (5.11) can be obtained as follows:

$$F_{c^{(2)}}\left(\phi^*_{r,s_i}\right) - F_{c^{(1)}}\left(\phi^*_{r,s_i}\right) + \left(E\left[v_i^j\right] - \phi^*_{r,s_i}\right) f_{c^{(1)}}\left(\phi^*_{r,s_i}\right) = 0. \qquad (5.12)$$

However, in general scenarios of spectrum allocation, each user operates solely on the basis of her/his local information, and there might be no anti-cartel authorities. Thus, the number of collusive users and the number of bidding rings are unknown to each user. Consequently, even though the statistics of each user's reward payoff are available or can be estimated under homogeneous settings, the order statistics [328] of $v^{(2)}$ and $c^{(2)}$ cannot be derived without knowledge of the number of collusive users. The question of how to obtain the optimal reserve prices under the constraints of our spectrum allocation game therefore remains open.

Since our pricing game belongs to the category of non-cooperative games with incomplete information [322], the players need to build up certain beliefs regarding other players' future possible strategies to assist their decision making. In order to obtain the optimal reserve prices from (5.10) and (5.12) for robust spectrum allocation, we first derive the belief functions for primary and secondary users in the scenarios of MSOP and OSMP, respectively. Considering that there are multiple players with private information in the pricing game, and that it is the bid/asking prices that directly affect the outcome of the game, it is more efficient to define common belief functions on the basis of the publicly observed bid/asking prices rather than generating a specific belief about every other player's private information. Hence, we consider the primary/secondary users' beliefs as the ratio of their asking/bid prices being accepted at different price levels. Let $x$ and $y$ be the asking price of the primary users and the bid price of the secondary users, respectively.
At each time, the ratio of offers from primary users at $x$ that have been accepted can be written as follows:

$$\tilde{r}_p(x) = \frac{\mu_A(x)}{\mu(x)}, \qquad (5.13)$$

where $\mu(x)$ and $\mu_A(x)$ are the number of offers at asking price $x$ and the number of accepted offers at asking price $x$, respectively. Similarly, at each time during the dynamic spectrum sharing, the ratio of bids from secondary users at $y$ that have been accepted is

$$\tilde{r}_s(y) = \frac{\eta_A(y)}{\eta(y)}, \qquad (5.14)$$
where $\eta(y)$ and $\eta_A(y)$ are the number of bids at $y$ and the number of accepted bids at $y$, respectively. Usually, $\tilde{r}_p(x)$ and $\tilde{r}_s(y)$ can be estimated accurately if large numbers of buyers and sellers participate in the pricing at the same time. However, in our pricing game, only a relatively small number of players will be involved in the spectrum sharing at a specific time. The beliefs $\tilde{r}_p(x)$ and $\tilde{r}_s(y)$ cannot then be obtained in practice, so we need to use the historical bid/asking-price information to build up empirical belief values. In MSOP scenarios, we have the following observations: if a bid $\tilde{y} > y$ is rejected, a bid at $y$ will also be rejected; and if a bid $\tilde{y} < y$ is accepted, a bid at $y$ will also be accepted. On the basis of these observations, the secondary users' beliefs can be further defined as follows using the past bidding information.

Definition 5.4.1 Secondary users' beliefs: for each potential bid at $y$, define

$$\breve{r}_s(y) = \begin{cases} 0 & y = 0, \\ \dfrac{\sum_{w \le y} \eta_A(w)}{\sum_{w \le y} \eta_A(w) + \sum_{w \ge y} \eta_R(w)} & y \in (0, M), \\ 1 & y \ge M, \end{cases} \qquad (5.15)$$

where $\eta_R(w)$ is the number of bids at $w$ that have been rejected and $M$ is a value large enough that bids greater than $M$ will definitely be accepted. Also, it is intuitive that a bid at 0 will not be accepted by any primary user.

In OSMP scenarios, the primary users' beliefs can be similarly derived as follows using past asking-price information.

Definition 5.4.2 Primary users' beliefs: for each potential offer at $x$, define

$$\breve{r}_p(x) = \begin{cases} 1 & x = 0, \\ \dfrac{\sum_{w \ge x} \mu_A(w)}{\sum_{w \ge x} \mu_A(w) + \sum_{w \le x} \mu_R(w)} & x \in (0, M), \\ 0 & x \ge M, \end{cases} \qquad (5.16)$$

where $\mu_R(w)$ is the number of offers at $w$ that have been rejected. Also, it is intuitive that an offer at 0 will definitely be accepted, since no cost is introduced. Since it is too costly to build up beliefs for every possible bid or offer price, we can update the beliefs only at some fixed prices and use interpolation to obtain the belief function over the price space. Considering the characteristics of an open ascending auction in MSOP scenarios, the secondary user with the highest reward payoff does not need to bid her/his true value to win the auction. Instead, she/he need bid only at the second-highest possible payoff in order to have all other secondary users drop out of the auction. Therefore, the secondary users' belief function (5.15) actually represents the CDF of $v^{(2)}$. Similarly, we can obtain the CDF of $c^{(2)}$ on the basis of the primary users' belief function (5.16) as $1 - \breve{r}_p(x)$ [328].
Further, since the total number of active secondary users and the statistics of the reward payoff for each user are generally available, the CDF of $v^{(1)}$ in MSOP scenarios can be easily obtained using the order statistics:

$$F_{v^{(1)}}(x) = \prod_{i \in \{1,2,\ldots,K\}} F_{v_i}(x). \qquad (5.17)$$

Also, the CDF of $c^{(1)}$ in OSMP scenarios can similarly be obtained:

$$F_{c^{(1)}}(y) = 1 - \prod_{i \in \{1,2,\ldots,J\}} \left(1 - F_{c_i}(y)\right). \qquad (5.18)$$
Therefore, the optimal reserve price for the primary user to combat user collusion in MSOP scenarios can be obtained from (5.10) using (5.15) and (5.17). Moreover, as for OSMP scenarios, the optimal reserve price for the secondary user can be obtained from (5.12) using (5.16) and (5.18).
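Given the per-user CDFs, (5.17) and (5.18) are one-line products; a sketch:

```python
import math

def F_v1(x, user_cdfs):
    """(5.17): CDF of the highest reward payoff v(1) among independent users."""
    return math.prod(F(x) for F in user_cdfs)

def F_c1(y, user_cdfs):
    """(5.18): CDF of the lowest acquisition cost c(1) among independent users."""
    return 1.0 - math.prod(1.0 - F(y) for F in user_cdfs)

uniform = lambda x: min(max(x, 0.0), 1.0)   # CDF of Uniform(0,1)
# Three i.i.d. Uniform(0,1) users: P(max <= 0.5) = 0.125, P(min <= 0.5) = 0.875.
print(F_v1(0.5, [uniform] * 3), F_c1(0.5, [uniform] * 3))   # 0.125 0.875
```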
5.4.3 MSMP scenarios

In the general scenarios of MSMP, efficient collusion-resistant spectrum allocation needs to be carried out among multiple primary users and secondary users, while considering the various patterns of collusion arising on both sides of the spectrum market. The situation becomes highly complicated and difficult to analyze. We first derive a collusion-resistant dynamic spectrum allocation mechanism for MSMP scenarios on the basis of the results for the OSMP/MSOP scenarios. Then, a lower bound is developed to measure the performance of the mechanism by considering the extreme case of all-inclusive collusion within the sets of primary users and secondary users.

Before we derive the collusion-resistant dynamic spectrum allocation mechanism, let us discuss several challenges that arise in MSMP scenarios. First, collusion may occur not only among the primary users but also among the secondary users, and the outcomes of the spectrum-allocation game are determined by the collusive behaviors on both sides of the spectrum market. Second, collusion highly distorts the true supply and demand of spectrum resources, so the spectrum allocation efficiency deteriorates. This is because, except for the primary user with the lowest acquisition cost and the secondary user with the highest reward payoff, the supply or demand of the other users in the bidding rings will no longer be elicited through the bidding process. Also, the dynamic nature of spectrum resources requires that the countermeasures against collusion be able to adapt easily to the spectrum dynamics while using only limited resources, such as control-channel bandwidth and implementation complexity.

Consider an important property of the bidding ring in our game settings, namely that the collusive behaviors within a bidding ring will not affect the strategies of the users who are not in the bidding ring.
This means that, for instance, a primary user’s optimal reserve price is determined solely by the spectrum-demand statistics and will not be affected by the collusive behaviors of other primary users. Similar arguments can be applied to the secondary users. Therefore, an efficient collusion-resistant dynamic
spectrum allocation approach in MSMP scenarios can be derived on the basis of the results of the above discussion of the OSMP and MSOP scenarios.

First, the beliefs of the primary users and secondary users need to be redefined according to the characteristics of a double auction. We have the following new observations in MSMP scenarios.

• If a bid $\tilde{y} > x$ is made, an asking price at $x$ will also be accepted.
• If an offer $\tilde{x} < y$ is made, a bid at $y$ will also be accepted.

On the basis of the above observations, the users' beliefs in MSMP scenarios can be further refined as follows using the past bid/asking-price information.

Definition 5.4.3 Primary users' beliefs: for each potential asking price at $x$, define

$$\hat{r}_p(x) = \begin{cases} 1 & x = 0, \\ \dfrac{\sum_{w \ge x} \mu_A(w) + \sum_{w \ge x} \eta(w)}{\sum_{w \ge x} \mu_A(w) + \sum_{w \ge x} \eta(w) + \sum_{w \le x} \mu_R(w)} & x \in (0, M), \\ 0 & x \ge M. \end{cases} \qquad (5.19)$$

Definition 5.4.4 Secondary users' beliefs: for each potential bid at $y$, define

$$\hat{r}_s(y) = \begin{cases} 0 & y = 0, \\ \dfrac{\sum_{w \le y} \eta_A(w) + \sum_{w \le y} \mu(w)}{\sum_{w \le y} \eta_A(w) + \sum_{w \le y} \mu(w) + \sum_{w \ge y} \eta_R(w)} & y \in (0, M), \\ 1 & y \ge M. \end{cases} \qquad (5.20)$$
By using the above belief functions and the order statistics of $v^{(1)}$ and $c^{(1)}$, given the numbers of active primary and secondary users, the optimal reserve prices for primary user $p_i$ and secondary user $s_i$ in MSMP scenarios can be obtained as $\phi^*_{r,p_i}$ and $\phi^*_{r,s_i}$, respectively.

Before we develop the collusion-resistant dynamic spectrum allocation algorithm, we first look at the spread-reduction rule (SRR) of double-auction mechanisms. Generally, before the double-auction pricing game converges to the equilibrium, there may exist a gap between the highest bid and the lowest asking price, which is called the spread of the double auction. The SRR states that any permissible asking price must be lower than the current lowest asking price, i.e., the outstanding asking price [136]; each new asking price then either results in an agreed transaction or becomes the new outstanding asking price. A similar argument applies to bids. Defining the current outstanding asking price and bid as $x_o$ and $y_o$, respectively, we let $\bar{r}_p(x) = \hat{r}_p(x) \cdot I_{[0,x_o)}(x)$ for each $x$ and $\bar{r}_s(y) = \hat{r}_s(y) \cdot I_{(y_o,M]}(y)$ for each $y$, which are modified belief functions that take the SRR into account. Note that $I_{(a,b)}(x)$ is defined as

$$I_{(a,b)}(x) = \begin{cases} 1 & \text{if } x \in (a, b), \\ 0 & \text{otherwise.} \end{cases} \qquad (5.21)$$
Table 5.1. Collusion-resistant dynamic spectrum allocation

1. Initialize the users' beliefs and bids/asking prices
   • The primary users initialize their asking prices as large values close to $M$ and their beliefs as small positive values less than 1
   • The secondary users initialize their bids as small values close to 0 and their beliefs as small positive values less than 1
2. Belief update based on local information
   • Update the primary and secondary users' beliefs using (5.19) and (5.20), respectively
3. Optimal reserve prices for primary and secondary users
   • Update the primary users' optimal reserve prices $\phi^*_{r,p_i}$ using (5.10), (5.17), and (5.19)
   • Update the secondary users' optimal reserve prices $\phi^*_{r,s_i}$ using (5.12), (5.18), and (5.20)
4. Update optimal bid/asking price
   • Obtain the optimal asking price for each primary user by solving (5.22) given $\phi^*_{r,p_i}$
   • Obtain the optimal bid for each secondary user by solving (5.23) given $\phi^*_{r,s_i}$
5. Update leasing agreement and spectrum pool
   • If the outstanding bid is greater than or equal to the outstanding asking price, a leasing agreement between the corresponding users will be signed
   • Update the spectrum pool by removing the assigned channel
6. Iteration
   • If the spectrum pool is not empty, go back to Step 2
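The iteration in Table 5.1 can be sketched as a loop. The callback functions below are hypothetical stand-ins for the belief updates and price computations of Steps 2 to 4; they are ours, not the book's.

```python
def allocate(spectrum_pool, primaries, secondaries, update_beliefs,
             optimal_ask, optimal_bid, max_rounds=1000):
    """Skeleton of Table 5.1. The callbacks update beliefs and return each
    user's optimal asking/bid price; an agreement is signed whenever the
    outstanding bid meets the outstanding asking price (Step 5)."""
    agreements = []
    for _ in range(max_rounds):
        if not spectrum_pool:                            # Step 6: pool empty
            break
        update_beliefs(primaries, secondaries)           # Steps 2-3
        asks = {p: optimal_ask(p) for p in primaries}    # Step 4, via (5.22)
        bids = {s: optimal_bid(s) for s in secondaries}  # Step 4, via (5.23)
        seller = min(asks, key=asks.get)                 # outstanding asking price
        buyer = max(bids, key=bids.get)                  # outstanding bid
        if bids[buyer] >= asks[seller]:                  # Step 5: sign agreement
            channel = spectrum_pool.pop()
            agreements.append((seller, buyer, channel, bids[buyer]))
    return agreements

# Toy run with fixed prices: one channel changes hands at the bid of 5.0.
deals = allocate([0], ["p1"], ["s1"], lambda *a: None,
                 lambda p: 3.0, lambda s: 5.0)
print(deals)   # [('p1', 's1', 0, 5.0)]
```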
Using the belief function $\bar{r}_p(x)$, the payoff maximization for selling the $i$th primary user's $j$th channel can be written as

$$\max_{x \in (y_o, x_o)} E[U_{p_i}(x, j)], \quad (5.22)$$

where $U_{p_i}(x, j)$ represents the payoff introduced by allocating the $j$th channel when the asking price is $x$; then $E[U_{p_i}(x, j)] = (x - c_i^j)\,\bar{r}_p(x)$, $x > \phi^*_{r,p_i}$. Similarly, for the secondary user $s_i$, the payoff maximization for leasing the $j$th channel in the spectrum pool can be written as

$$\max_{y \in (y_o, x_o)} E[U_{s_i}(y, j)], \quad (5.23)$$

where $U_{s_i}(y, j)$ represents the payoff introduced by leasing the $j$th channel in the spectrum pool when the bid is $y$; then $E[U_{s_i}(y, j)] = (v_i^j - y)\,\bar{r}_s(y)$, $y < \phi^*_{r,s_i}$.

Therefore, by solving the optimization problem for each effective primary and secondary user using (5.22) and (5.23), respectively, the optimal decisions regarding spectrum allocation at every stage can be made conditional on dynamic spectrum demand and supply. Note that, when a leasing agreement for a specific spectrum channel is achieved for a pair of primary and secondary users, the order statistics of $v_{(1)}$ and $c_{(1)}$ need to be updated, as do the optimal reserve prices for achieving the next leasing agreement. Table 5.1 illustrates our collusion-resistant dynamic pricing algorithm for spectrum allocation on the basis of the above considerations. Note that using the belief functions helps to decrease the pricing overhead greatly for collusion-resistant auction-based spectrum allocation.
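Since (5.22) and (5.23) are one-dimensional maximizations over the spread $(y_o, x_o)$, they can be approximated by a simple grid search. In the sketch below, the belief shapes, reserve prices, costs, and valuations are all hypothetical placeholders; only the objective forms $(x - c_i)\bar{r}_p(x)$ and $(v_i - y)\bar{r}_s(y)$ and the SRR masking follow the text.

```python
# Grid-search sketch for the expected-payoff maximizations (5.22) and (5.23).
# Belief shapes and all parameter values are illustrative assumptions.
M = 100.0                  # assumed price upper bound
x_o, y_o = 60.0, 20.0      # outstanding asking price and outstanding bid
c_i, v_i = 25.0, 55.0      # primary user's cost and secondary user's valuation
phi_p, phi_s = 30.0, 50.0  # assumed optimal reserve prices

def bar_r_p(x):            # belief that an ask x is accepted, masked by the SRR
    return max(0.0, 1.0 - x / M) if 0.0 <= x < x_o else 0.0

def bar_r_s(y):            # belief that a bid y is accepted, masked by the SRR
    return min(1.0, y / M) if y_o < y <= M else 0.0

grid = [y_o + (x_o - y_o) * k / 1000 for k in range(1, 1000)]
# (5.22): ask maximizing (x - c_i) * bar_r_p(x), restricted to x > reserve price
best_ask = max((x for x in grid if x > phi_p), key=lambda x: (x - c_i) * bar_r_p(x))
# (5.23): bid maximizing (v_i - y) * bar_r_s(y), restricted to y < reserve price
best_bid = max((y for y in grid if y < phi_s), key=lambda y: (v_i - y) * bar_r_s(y))
print(round(best_ask, 2), round(best_bid, 2))
```

With these made-up numbers, the optimal ask is pushed against the outstanding ask $x_o$ while the optimal bid settles strictly inside the spread.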
Pricing games for dynamic spectrum allocation
5.4.4 Performance lower bounds for MSMP scenarios

In order to measure the performance of the collusion-resistant dynamic spectrum allocation mechanism, we derive its performance lower bound in the presence of user collusion in the following. An efficient spectrum allocation scheme can be achieved by balancing the supply and demand of spectrum resources, as shown in Figure 5.1. Thus, it is straightforward to see that the most inefficient spectrum allocation occurs when all the supply and demand information is concealed by the collusive behaviors of selfish users, which happens when two all-inclusive collusion sets are formed, one among the primary users and one among the secondary users. In this situation, the spectrum allocation game becomes a bargaining game between two players, i.e., the primary user $p_{(1)}$ with the lowest acquisition cost $c_{(1)}$ and the secondary user $s_{(1)}$ with the highest reward payoff $v_{(1)}$. By studying this extreme case, the lower bound of the collusion-resistant scheme can be obtained.

Generally speaking, the primary user $p_{(1)}$ and the secondary user $s_{(1)}$ value a spectrum channel differently, so that a surplus is created. The objective of the bargaining game is to determine how the primary and secondary users agree to divide this surplus. Considering that our bargaining game involves only two players, let the minimal utilities that the users may obtain during the bargaining process be $\underline{U}_{p_{(1)}}$ and $\underline{U}_{s_{(1)}}$ for user $p_{(1)}$ and user $s_{(1)}$, respectively, and let $\underline{U} = (\underline{U}_{p_{(1)}}, \underline{U}_{s_{(1)}})$. Assume $\bar{S}$ to be a closed and convex subset of $\mathbb{R}^2$ that represents the set of feasible utilities the users can achieve if they cooperate with each other. Thus, our bargaining game between primary user $p_{(1)}$ and secondary user $s_{(1)}$ can be represented by $(\bar{S}, \underline{U})$. Moreover, denote a bargaining solution to $(\bar{S}, \underline{U})$ by $\varphi(\bar{S}, \underline{U}) = (U^b_{p_{(1)}}, U^b_{s_{(1)}})$. Among all possible bargaining outcomes, the Nash bargaining solution [326] provides a unique and fair Pareto-optimal outcome when the bargaining solution satisfies the following six axioms.

• Individual rationality: $(U^b_{p_{(1)}}, U^b_{s_{(1)}}) \geq (\underline{U}_{p_{(1)}}, \underline{U}_{s_{(1)}})$.
• Feasibility: $(U^b_{p_{(1)}}, U^b_{s_{(1)}}) \in \bar{S}$.
• Pareto optimality: if $(U_p, U_s) \in \bar{S}$ and $(U_p, U_s) \geq (U^b_{p_{(1)}}, U^b_{s_{(1)}})$, then $(U_p, U_s) = (U^b_{p_{(1)}}, U^b_{s_{(1)}})$.
• Independence of irrelevant alternatives: if $(U^b_{p_{(1)}}, U^b_{s_{(1)}}) \in \tilde{S} \subset \bar{S}$ and $(U^b_{p_{(1)}}, U^b_{s_{(1)}}) = \varphi(\bar{S}, \underline{U})$, then $(U^b_{p_{(1)}}, U^b_{s_{(1)}}) = \varphi(\tilde{S}, \underline{U})$.
• Independence of linear transformations: for any linear transformation $\psi$, $\varphi(\psi(\bar{S}), \psi(\underline{U})) = \psi(U^b_{p_{(1)}}, U^b_{s_{(1)}})$.
• Symmetry: if $\bar{S}$ is invariant under all exchanges of agents and $\underline{U}_{p_{(1)}} = \underline{U}_{s_{(1)}}$, then $U^b_{p_{(1)}} = U^b_{s_{(1)}}$.

Noting that the above axioms generally hold for our bargaining game $(\bar{S}, \underline{U})$, the corresponding Nash bargaining solution can be represented as follows:
$$\max_{\phi_b} \; E_{c_{(1)}, v_{(1)}}\big[U_{p_{(1)}}(\phi_b, c_{(1)}) \cdot U_{s_{(1)}}(\phi_b, v_{(1)})\big]$$
$$\text{s.t.} \quad G(U_{p_{(1)}}, U_{s_{(1)}}) \leq \tilde{U}, \qquad U_{p_{(1)}}, U_{s_{(1)}} \geq 0, \quad (5.24)$$

where $U_{p_{(1)}}(\phi_b, c_{(1)}) = \phi_b - c_{(1)}$ and $U_{s_{(1)}}(\phi_b, v_{(1)}) = v_{(1)} - \phi_b$. The two constraints give the feasible sets of $U_{p_{(1)}}$ and $U_{s_{(1)}}$. Note that, by virtue of the definition of linear utility functions for the users, the first constraint can be simplified as $U_{p_{(1)}} + U_{s_{(1)}} \leq v_{(1)} - c_{(1)}$. Therefore, the lower bound of the spectrum efficiency in the presence of user collusion can be obtained by solving (5.24). Moreover, after a leasing agreement between a primary user and a secondary user has been achieved, spectrum allocation continues by solving (5.24) with updated statistics of $v_{(1)}$ and $c_{(1)}$.
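For these linear utilities, the Nash product $(\phi_b - c_{(1)})(v_{(1)} - \phi_b)$ peaks at the midpoint $\phi_b^* = (c_{(1)} + v_{(1)})/2$, so the surplus $v_{(1)} - c_{(1)}$ is split equally. A quick numerical check, with purely illustrative values for the lowest cost and highest valuation:

```python
# Numerical check that the Nash bargaining product for the linear utilities
# U_p = phi_b - c1 and U_s = v1 - phi_b peaks at the midpoint (c1 + v1) / 2.
c1, v1 = 12.0, 38.0   # illustrative lowest cost c_(1) and highest valuation v_(1)

grid = [c1 + (v1 - c1) * k / 10000 for k in range(10001)]
phi_star = max(grid, key=lambda p: (p - c1) * (v1 - p))

print(phi_star)                       # the midpoint of [c1, v1]
print(phi_star - c1, v1 - phi_star)   # each side gets half the surplus
```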
5.4.5 Dynamic pricing with budget constraints

Now, on the basis of the belief-assisted dynamic pricing algorithm, we further consider the optimal spectrum allocation when each secondary user is constrained by a total monetary budget for leasing spectrum usage. Note that the spectrum allocation problem can be solved similarly when there exist overall constraints for primary users. Considering the budget constraints of secondary users, we rewrite their optimization objectives as follows:

$$\hat{O}(s_i) = \max_{\phi_{A,t},\, \beta^A_{i,t},\, \psi_i} E_{c_i^j, v_i^j}\left[\sum_{t=1}^{\infty} \gamma^t \cdot U_{s_i,t}\big(\phi_{A,t}, \beta^A_{i,t}, \tilde{\psi}_{i,t}\big)\right]$$
$$\text{s.t.} \quad U_{\hat{p}_j,t}(\{\phi_{-j,t}, \phi_{j,t}\}) \geq U_{\hat{p}_j,t}(\{\phi_{-j,t}, \tilde{\phi}_{j,t}\}), \qquad \sum_{t=1}^{\infty} \psi_{i,t} \leq B_i, \quad (5.25)$$
where $\psi_i = \{\psi_{i,t}\}_{t \in \{1,2,\ldots,\infty\}}$, $\psi_{i,t}$ is the total monetary payment used during the $t$th stage for the $i$th secondary user leasing the channels, and $B_i$ is the $i$th secondary user's total budget. Note that $\tilde{\psi}_{i,t} = B_i - \sum_{\tau=1}^{t-1} \psi_{i,\tau}$, which is the residual budget at the $t$th stage and can be considered as a state parameter. Hence, the first and second constraints in (5.25) are the incentive-compatibility constraint and the total budget constraint, respectively.

Since it is difficult to solve (5.25) directly, we study the dynamic-programming approach to simplify the multi-stage optimization problem. Define the value function $Q_{s_i,t}(\tilde{\psi}_i)$ as the $i$th secondary user's maximum expected payoff obtainable from periods $t, t+1, \ldots, \infty$ given that the remaining monetary budget is $\tilde{\psi}_i$. On simplifying (5.25) using the Bellman equation [19], we have the maximal expected payoff $Q_{s_i,t}(\tilde{\psi}_i)$ written as follows:

$$Q_{s_i,t}(\tilde{\psi}_i) = \max_{\phi_{A,t},\, \beta^A_{i,t},\, \psi_i} \left\{ E_{c_i^j, v_i^j}\Big[U_{s_i,t}\big(\phi_{A,t}, \beta^A_{i,t}, \tilde{\psi}_i\big)\Big] + \gamma \cdot Q_{s_i,t+1}\big(\tilde{\psi}_i - \psi_{i,t}\big) \right\}$$
$$\text{s.t.} \quad U_{\hat{p}_j,t}(\{\phi_{-j,t}, \phi_{j,t}\}) \geq U_{\hat{p}_j,t}(\{\phi_{-j,t}, \tilde{\phi}_{j,t}\}). \quad (5.26)$$

The boundary conditions for the above dynamic-programming problem are

$$Q_{s_i,\infty}(\tilde{\psi}_i) = 0, \quad \tilde{\psi}_i \in (0, B_i]. \quad (5.27)$$
Note that the first term on the RHS of (5.26) represents the payoff at the current stage, and the second term on the RHS of (5.26) represents the future payoff obtained after the $t$th stage given the budget state $\tilde{\psi}_i - \psi_{i,t}$. Further, by the principle of optimality [19], the spectrum-sharing configuration $(\phi_{A,t}, \beta^A_{i,t}, \psi_i)$ that achieves the maximum in (5.26) given $\tilde{\psi}_i$, $t$, and the statistics of $c_i^j$ and $v_i^j$ is also the optimal solution for the overall optimization problem (5.25).

In order to obtain $Q_{s_i,t}(\tilde{\psi}_i)$, the maximal payoff of one stage needs to be derived first for different residual budget values $\tilde{\psi}_i$. The difference between the current payoff function in (5.25) and the one-stage payoff function in (5.5) lies in the fact that the budget constraint affects the outcomes of the pricing game. For instance, even though both the primary users and the secondary users can achieve higher payoffs by assigning a channel to user $s_i$, user $s_i$ might not have a large enough budget to lease this channel. Thus, the algorithm in Table 5.1 cannot be applied directly here for optimal spectrum sharing; we need to modify the bid-update step as follows: user $s_i$ updates his/her bid to $\min\{\tilde{\psi}_i, y\}$, where $y$ is obtained from (5.23). Note that it is highly complicated to derive a closed-form solution for the one-stage payoff function in (5.25). Thus, we use simulation to approximate it for different residual budget values, which proceeds as follows. Generate a large number of samples of the secondary and primary users with reward payoffs and costs satisfying $f_v(v)$ and $f_c(c)$, respectively. Using the above modified version of the algorithm in Table 5.1, calculate the average one-stage payoffs for different $\tilde{\psi}$ on the basis of the outcomes of the spectrum-allocation samples.

Using the numerical results for the one-stage payoff function, we then derive $Q_{s_i,t}(\tilde{\psi}_i)$ using dynamic-programming methods. Considering infinite spectrum-allocation stages, the maximum payoff $Q_{s_i,t}(\tilde{\psi}_i)$ in (5.26) can be written as

$$Q^*_{s_i}(\tilde{\psi}_i) = \max_{\phi_{A,t},\, \beta^A_{i,t},\, \psi_i} \left\{ E_{c_i^j, v_i^j}\Big[U_{s_i,t}\big(\phi_{A,t}, \beta^A_{i,t}, \tilde{\psi}_i\big)\Big] + \gamma \cdot Q^*_{s_i}\big(\tilde{\psi}_i - \psi_{i,t}\big) \right\}, \quad (5.28)$$

or, equivalently, $Q^*_{s_i} = T Q^*_{s_i}$, where $T$ is the operator updating $Q^*_{s_i}$ using (5.28). Let $S$ be the feasible set of the state parameter. The convergence proposition of the dynamic-programming algorithm can be applied here: for any bounded function $Q: S \rightarrow \mathbb{R}$, the optimal payoff function satisfies $Q^*(x) = \lim_{p\rightarrow\infty} (T^p Q)(x)$, $\forall x \in S$. Since $Q_{s_i}(\tilde{\psi}_i)$ is bounded in our algorithm, we are able to apply the value-iteration method to approximate the optimal $Q_{s_i}(\tilde{\psi}_i)$. This method proceeds as follows. Start from some initial function $Q^0_{s_i}(\tilde{\psi}_i) = g(x)$, where the superscript stands for the iteration number. Then, iteratively update $Q_{s_i}(\tilde{\psi}_i)$ by letting $Q^{p+1}_{s_i}(\tilde{\psi}_i) = T Q^p_{s_i}(\tilde{\psi}_i)$. The iteration process continues until $|Q^{p+1}_{s_i}(\tilde{\psi}_i) - Q^p_{s_i}(\tilde{\psi}_i)| \leq \epsilon$ for all $\tilde{\psi}_i \in S$, where $\epsilon$ is the error bound for $Q^*_{s_i}(\tilde{\psi}_i)$.

Intuitively, the basic idea behind our dynamic pricing approach for spectrum allocation with budget constraints can be explained as follows. Considering the overall budget constraints, the users make their spectrum-sharing decisions on the basis of not only their current payoffs but also their expected future payoffs. Specifically, if the competition
for spectrum resources is high at the current stage, users prefer to save their monetary budgets for future usage, which will yield higher overall payoffs for the users. Therefore, by using the dynamic pricing approach presented here, the spectrum allocation can be optimized not only in the space and frequency domains but also in the time domain.
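The value iteration described above can be sketched on a toy discretized budget space. The one-stage payoff $u(\cdot)$ below is a made-up concave placeholder standing in for the simulated one-stage payoff function, and the parameters are illustrative; only the Bellman update and the stopping rule $|Q^{p+1} - Q^p| \leq \epsilon$ follow the text.

```python
# Value-iteration sketch for Q*(budget) as in (5.28): iterate Q <- TQ until
# successive iterates differ by less than epsilon everywhere.
import math

gamma, eps = 0.99, 1e-6
budgets = range(0, 51)                      # discretized residual budgets

def u(spend):
    # Hypothetical one-stage payoff; stands in for the simulated payoff function.
    return math.sqrt(spend)

Q = {b: 0.0 for b in budgets}               # initial guess Q^0 = 0
while True:
    # Bellman operator T: choose how much of the residual budget to spend now
    Q_new = {b: max(u(s) + gamma * Q[b - s] for s in range(b + 1))
             for b in budgets}
    if max(abs(Q_new[b] - Q[b]) for b in budgets) <= eps:
        Q = Q_new
        break
    Q = Q_new

print(round(Q[50], 2))   # approximate optimal total payoff with budget 50
```

As expected from the discussion, a user with a larger residual budget obtains a higher optimal expected payoff, and spreading spending over stages beats exhausting the budget at once.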
5.5 Simulation results

In this section, we evaluate the performance of the belief-assisted dynamic spectrum sharing approach in wireless networks. Considering a wireless network covering an area of 100 × 100, we simulate $J$ primary users by randomly placing them in the network. These primary users can be base stations serving different wireless network operators or different access points in a mesh network. Here we assume that the primary users' locations are fixed and their unused channels are available to secondary users within a distance of 50. Then, we randomly deploy $K$ secondary users in the network; these are assumed to be mobile devices. The mobility of the secondary users is modeled using a simplified random-waypoint model [209], in which we assume that the "thinking time" at each waypoint is close to the effective duration of one channel-leasing agreement, the waypoints are uniformly distributed within a distance of 10, and the traveling time is much smaller than the "thinking time." Let the cost of an available channel in the spectrum pool be uniformly distributed in [10, 30] and the reward payoff of leasing one channel be uniformly distributed in [20, 40]. If a channel is not available to some secondary users, let the corresponding reward payoffs of this channel be 0. Note that $J = 5$, and $10^3$ pricing stages have been simulated. Let $n_i = 4$, $\forall i \in \{1, 2, \ldots, J\}$, and $\gamma = 0.99$.

In Figure 5.4, we compare the total utilities of the competitive equilibrium (CE), our dynamic pricing scheme with reserve prices, and our dynamic pricing scheme without reserve prices for various levels of user collusion. It can be seen from Figure 5.4 that, when there is no collusion, the dynamic pricing scheme without reserve prices is able to achieve a performance similar to the theoretical CE outcomes. Moreover, in the presence of collusion, the scheme with reserve prices achieves much higher total utilities than does the scheme without reserve prices.
Note that the total utility increases when the number of secondary users increases. This is because competition among more secondary users helps to increase the spectrum efficiency. However, under the scenarios of user collusion, the performance gap between the scheme with reserve prices and the CE becomes greater when the number of secondary users increases. The reason is that, in the scheme with reserve prices, one needs to set stricter reserve prices in order to combat severe collusion when there are more secondary users. Further, the lower bound of the collusion-resistant scheme shown in Figure 5.4 provides an efficient measurement of the maximal possible performance loss due to collusion.

Figure 5.4 Comparison of the total utilities (×10²), versus the number of secondary users, of the CE, a pricing scheme without reserve prices, and the discussed scheme with various levels of collusion (curves: competitive equilibrium without user collusion; dynamic pricing without reserve prices when there is no collusion; the Nash bargaining solution with all-inclusive collusion; the proposed scheme with 25% and with 80% colluders; and pricing without reserve prices with 80% colluders).

Now we study the overhead of the scheme using the average number of bids and offers for each stage. In Figure 5.5, the overheads of the scheme with or without reserve prices are compared with those of the traditional continuous double auction when the same total utility is achieved. Assume the minimal bid/asking-price step δ of the continuous double auction to be 0.01. It can be seen from Figure 5.5 that our approach substantially decreases the overhead associated with pricing communication, either with or without collusion. Note that, while decreasing the overhead, this approach may introduce extra complexity because of the need to update the beliefs and optimal reserve prices.

Then, we study the effect of collusion on dynamic spectrum allocation when each secondary user is constrained by his/her monetary budget as in [204], in which each secondary user needs to allocate the budget optimally among multiple pricing stages. For comparison, we define a static scheme in which the secondary users make their spectrum leasing decisions without considering their budget limits. Without loss of generality, we assume that the budget constraints are the same for all secondary users. By applying the scheme with reserve prices to the dynamic-programming approach in [204] considering budget constraints, we are similarly able to obtain the performance of the collusion-resistant scheme when optimal spectrum allocation over time needs to be considered. In Figure 5.6, we compare the total utilities of the scheme with those of the static scheme for various budget constraints when collusion is present. Note that the collusion-resistant scheme is applied to both dynamic and static pricing considering budget constraints. It can be seen from Figure 5.6 that, in the presence of user collusion, the scheme with reserve prices achieves significant performance gains over the static scheme when the budget constraints are taken into consideration. That's
Figure 5.5 Comparison of the overhead (average number of offers and bids for each stage, versus the number of secondary users) between the discussed scheme and the continuous double-auction scheme (δ = 0.01), with no collusion and with 25% colluders.
Figure 5.6 Comparison of the total payoff (×10²) of the discussed dynamic pricing scheme with that of the static scheme, versus the budget constraint for each secondary user (×10³), with and without 80% user collusion.
because the performance loss due to the setting of reserve prices can be partly offset by exploiting the time diversity of spectrum resources among multiple sharing stages.
5.6 Summary and bibliographical notes

Dynamic spectrum allocation is a promising approach for enhancing spectrum efficiency in wireless networks. However, collusion among selfish users has severely deleterious effects on the efficiency of spectrum sharing. In this chapter, we model dynamic spectrum allocation as a multi-stage pricing game and present a collusion-resistant dynamic pricing approach by which to maximize the users' utilities while combating their collusive behaviors using the derived optimal reserve prices. Further, the lower bound of the scheme presented here is analyzed using the Nash bargaining solution. Simulation results show that the scheme can achieve high spectrum efficiency with only limited overhead in various situations of collusion among users.

Researchers have already started to study dynamic spectrum access via pricing and auction mechanisms [163] [166] [202] [232] [204] [205]. In [163], the authors considered an auction-based mechanism to efficiently share spectrum among the users in interference-limited systems. In [166], the price of anarchy was analyzed for spectrum sharing in WiFi networks. A demand-responsive pricing framework was studied in [202] to maximize the profits of legacy spectrum operators while considering the users' response model to the operators' pricing strategy. In [232], the authors considered a multi-unit sealed-bid auction for efficient spectrum allocation. In [204], a belief-assisted distributive pricing algorithm was proposed as a means by which to achieve efficient dynamic spectrum allocation. Interested readers can refer to [207].
6
A multi-winner cognitive spectrum auction game
Spectrum auction is one important approach for dynamic spectrum allocation, in which secondary users lease some unused bands from primary users. However, spectrum auctions differ from existing auctions studied by economists, because spectrum resources are interference-limited rather than quantity-limited, and it is possible to award one band to multiple secondary users with negligible mutual interference. To accommodate this special feature of wireless communications, in this chapter we present a novel multi-winner spectrum auction game that does not exist in the auction literature. Since secondary users may be selfish in nature and tend to be dishonest in pursuit of higher profits, we develop effective mechanisms to suppress their dishonest/collusive behaviors when secondary users distort their valuations about spectrum resources and interference relationships. Moreover, in order to make the game scalable when the size of the problem grows, the semi-definite programming (SDP) relaxation is applied to reduce the complexity significantly. Finally, simulation results are presented in order to evaluate the auction mechanisms, and to demonstrate the reduction in complexity.
6.1 Introduction

With the development of cognitive radio technologies, dynamic spectrum access has become a promising approach, which allows unlicensed users (secondary users) dynamic and opportunistic access to the licensed bands owned by legacy spectrum holders (primary users) in either a non-cooperative fashion or a cooperative fashion. In non-cooperative dynamic spectrum sharing, secondary users' existence is transparent to primary users, and secondary users frequently have to sense the radio environment to detect the presence of primary users. Whenever they find a spectrum opportunity when the primary user is absent, secondary users are allowed to occupy the spectrum; but they must immediately vacate the band when the primary user reappears. However, imperfect spectrum sensing may lead to missed spectrum opportunities as well as collisions with primary users. To circumvent these difficulties, an alternative is the cooperative approach, whereby spectrum opportunities are announced by primary users rather than discovered by secondary users. Since primary users have an incentive to trade their temporarily unused bands for monetary gains and secondary users want to lease some bands for data transmission, they may negotiate the price for a short-term
lease. With the emerging applications of mobile ad hoc networks envisioned in civilian usage, it is reasonable to assume that secondary users are selfish and aim at maximizing their own interests because they do not serve a common goal or belong to a single authority. Since they are operated by humans or service providers, they are also capable of acting intelligently. The spectrum resource is quite different from other commodities in that it is interference-limited rather than quantity-limited, because it is reusable by wireless users who are geographically far apart. In some application scenarios where secondary users need to communicate only within a short range, such as a wireless personal area network (WPAN) centered around a person’s workspace, the transmission power is quite low, and hence even users separated by moderate distances can simultaneously access the same band. In this case, allowing multiple winners to lease the band is an option that is consented to by everyone: primary users get higher revenue, secondary users get more chances to access the spectrum, and the spectrum-usage efficiency is boosted also from the system designer’s perspective. To the best of our knowledge, no such auction exists in the literature, and we coin the name multi-winner auction to highlight the special features of the new auction game, in which auction outcomes (e.g., the number of winners) highly depend on the geographical locations of the wireless users. Moreover, since secondary users are selfish by nature, they may misrepresent their private information in order to gain a higher payoff. Therefore, proper mechanisms have to be developed to provide incentives to reveal true private information. Although the Vickrey–Clarke–Groves (VCG) mechanism is a possible choice for enforcing that users bid their true valuations [78], it is also well known to suffer from several drawbacks such as low revenue [8] [101]. 
Since auction rules significantly affect bidding strategies, it is of essential importance to develop new auction mechanisms to overcome the disadvantages. In addition, mechanisms to be developed should take into consideration the collusive behavior of selfish users, which is a prevalent threat to efficient spectrum utilization but has generally been overlooked [207]. Driven by their pursuit of higher payoffs, a clique of secondary users may cheat together, and sometimes they may even have a more facilitated way to exchange information for collusion if they belong to the same service provider. Furthermore, by awarding the same band to multiple buyers simultaneously under interference constraints, the multi-winner auction makes possible new kinds of collusion [464], besides the bidding-ring collusion discussed in the previous chapter. Emerging kinds of collusion will be discussed in detail later in this chapter, and effective countermeasures against them have to be developed. To make the spectrum auction scheme feasible, it must be easy to implement, and must be scalable when more and more users are incorporated into the auction game. However, as we analyze later in this chapter, the optimal resource allocation that maximizes the system utility in the auction is an NP-complete problem whose exact solution needs a processing time that increases exponentially with the size of the problem, and hence the computational complexity becomes too formidable for this scheme to be
practical when the number of users is large. By applying the SDP relaxation [254] to the original problem, a tight upper bound can be obtained in polynomial time.

The rest of this chapter is organized as follows. In Section 6.2, the model for a multi-winner cognitive spectrum auction is described, and several kinds of collusion are illustrated. In Section 6.3, we develop auction mechanisms that not only yield high revenue but also prevent collusion, and employ the SDP relaxation to make the scheme implementable and scalable. The one-band auction game is generalized to a multi-band auction in Section 6.4. Simulation results are presented in Section 6.5, and Section 6.6 summarizes the chapter.

We employ the following notation. $A \in \mathcal{M}^{m \times n}$ means that $A$ is a matrix with dimension $m \times n$, and $b \in \mathcal{M}^{m \times 1}$ indicates that $b$ is a column vector with length $m$. We denote their entries as $A_{ij}$ and $b_i$, respectively. The trace of a matrix $A$ is denoted by $\mathrm{tr}(A)$, and its rank is denoted by $\mathrm{rank}(A)$. The 2-norm of a vector $b$ is denoted by $|b|_2$. The all-zero, all-one, and identity matrices are denoted by $O$, $\mathbf{1}$, and $I$, respectively, and their dimensions are given in the subscript when there is room for confusion. $S \in \mathcal{S}^n$ means that $S$ is an $n \times n$ real symmetric matrix, and $S \succeq O$ implies that $S$ is positive semi-definite. The Kronecker product of two matrices $A$ and $B$ is denoted by

$$A \otimes B = \begin{bmatrix} A_{11}B & A_{12}B & \cdots & A_{1n}B \\ A_{21}B & A_{22}B & \cdots & A_{2n}B \\ \vdots & \vdots & \ddots & \vdots \\ A_{m1}B & A_{m2}B & \cdots & A_{mn}B \end{bmatrix}. \quad (6.1)$$
Denote $b_{-i} = [b_1, b_2, \ldots, b_{i-1}, b_{i+1}, \ldots, b_m]^T$ as a new vector with the $i$th entry of $b$ excluded. Similarly, if $W$ is a set of indices, $b_{-W}$ implies that all the entries whose indices fall in $W$ are removed. $|W|$ denotes the cardinality of a set $W$. For two sets $W_1$ and $W_2$, the set difference is defined as $W_1 \setminus W_2 = \{x \mid x \in W_1 \text{ and } x \notin W_2\}$.
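As a quick sanity check of the Kronecker-product definition (6.1), a plain-Python sketch (the example matrices are arbitrary):

```python
# Sanity check of the Kronecker-product definition (6.1):
# entry (i*p + k, j*q + l) of A (x) B equals A[i][j] * B[k][l].
def kron(A, B):
    m, n = len(A), len(A[0])
    p, q = len(B), len(B[0])
    return [[A[i][j] * B[k][l] for j in range(n) for l in range(q)]
            for i in range(m) for k in range(p)]

A = [[1, 2], [3, 4]]     # arbitrary example matrices
B = [[0, 1], [1, 0]]
for row in kron(A, B):
    print(row)
```

Each block of the result is one entry of $A$ scaled onto a full copy of $B$, matching the block layout in (6.1).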
6.2 The system model

We consider a cognitive radio network where $N$ secondary users coexist with $M$ primary users, and primary users seek to lease their unused bands to secondary users for monetary gains. We model it as an auction where the sellers are the primary users, the buyers are the secondary users, and the auctioneer is a spectrum broker who helps coordinate the auction. Assume that there is a common channel to exchange necessary information and a central bank to circulate money in the community. For simplicity, we assume that each primary user owns one band exclusively, and each secondary user needs only one band. In this chapter, we first consider the auction with a single band ($M = 1$), and later extend it to a multi-band auction.

The system designer determines a fixed leasing period $T$ according to channel dynamics and overhead considerations; that is, the duration should be short enough to make spectrum access flexible but not too short, since then the overhead of the auction would become problematic. At the beginning of each leasing period, if a primary
user decides not to use his/her own licensed band for the next duration of $T$, he/she will notify the spectrum broker of the intention to sell the spectrum rights. Meanwhile, the potential buyers submit their sealed bids $\mathbf{b} = [b_1, b_2, \ldots, b_N]^T$ to the spectrum broker simultaneously, where $b_i$ is the bid made by user $i$. According to the bids and channel availability, the broker decides both the allocation $\mathbf{x} = [x_1, x_2, \ldots, x_N]^T$ and the prices $\mathbf{p} = [p_1, p_2, \ldots, p_N]^T$, where $x_i = 1$ means that secondary user $i$ wins some band, $x_i = 0$ otherwise, and $p_i$ is the price of the band for the $i$th secondary user. Differently from the case of a sequential auction that lasts for multiple rounds, in a sealed-bid auction where buyers submit their bids simultaneously only once, the "pay-as-bid" strategy cannot enforce truth-telling. Hence, the price $p_i$ for secondary user $i$ is upper bounded by his/her bid $b_i$, and determined by the auction mechanism. Alternatively, we can define the set of winners as $W \subseteq \{1, 2, \ldots, N\}$, where $i \in W$ if and only if $x_i = 1$. Assuming that user $i$ gains value $v_i$ from transmitting information in the leased band, his/her reward is

$$r_i = v_i x_i - p_i, \quad i = 1, 2, \ldots, N. \quad (6.2)$$
Given all users' valuations $\mathbf{v} = [v_1, v_2, \ldots, v_N]^T$, the system utility, or the social welfare, can be represented by

$$U_{\mathbf{v}}(\mathbf{x}) = \sum_{i=1}^{N} v_i x_i = \sum_{i \in W} v_i. \quad (6.3)$$
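A minimal sketch of the reward (6.2) and social welfare (6.3) computations; the valuations and prices below are made-up numbers chosen to mimic a winner set W = {2, 4, 6}, and they illustrate that the welfare is independent of the prices:

```python
# Rewards (6.2) and social welfare (6.3) for an illustrative auction outcome.
v = [10.0, 7.0, 5.0, 8.0, 6.0, 9.0]   # hypothetical valuations v_i
p = [0.0, 4.0, 0.0, 5.0, 0.0, 6.0]    # prices charged (losers pay nothing)
x = [0, 1, 0, 1, 0, 1]                # x_i = 1 iff user i wins a band

rewards = [v[i] * x[i] - p[i] for i in range(len(v))]
welfare = sum(v[i] * x[i] for i in range(len(v)))   # does not depend on p
print(rewards, welfare)
```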
The social welfare measures the system-wide utility created by the transaction of commodities in the auction. Since the prices paid by buyers and the revenue gained by sellers cancel each other out, the social welfare does not depend on the prices $\{p_i\}$.

Since the multi-winner auction awards the band simultaneously to several secondary users according to their mutual interference, interference plays an important role in the auction. There are several models for wireless interference, such as the protocol model and the physical model. In this chapter, we will mainly focus on the well-known protocol model [138], which is simpler to understand, in order to highlight our contributions regarding auction mechanisms. With the protocol model employed, the mutual interference in Figure 6.1(a), where $N = 6$ secondary cognitive base stations compete for the spectrum lease, can be well captured by a conflict graph (Figure 6.1(b)) or, equivalently, by an $N \times N$ adjacency matrix $C$ (Figure 6.1(c)). By collecting reports from secondary users about their locations or their neighbors, the spectrum broker keeps the matrix $C$ updated, even if the interference constraints change from time to time because of the slow movement of secondary users. When $C_{ij} = 1$, user $i$ and user $j$ cannot access the same band simultaneously, and, if they do, neither of them gains, owing to collision. Therefore, the interference constraint is $x_i + x_j \leq 1$ if $C_{ij} = 1$. However, our method can also be extended to the physical model in [17], which describes interference in a more accurate way but is more complicated. Under the physical model, only transmissions with the received signal-to-interference-and-noise ratio (SINR) exceeding some threshold $\beta$ are considered successful,
Figure 6.1 Illustration of the interference structure in a cognitive spectrum auction: (a) physical model with six secondary users; (b) graph representation; and (c) matrix representation, with adjacency matrix

$$C = \begin{bmatrix} 0 & 1 & 1 & 1 & 0 & 1 \\ 1 & 0 & 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 & 0 & 0 \end{bmatrix}.$$
i.e., $g_{ii} P \big/ \big(\sum_{j \neq i} g_{ji} P x_j + Z_i\big) \geq \beta$, where $g_{ji}$ represents the channel gain from the $j$th user's transmitter to the $i$th user's receiver, $Z_i$ is the noise at receiver $i$, and we assume that all users use the same power $P$. By neglecting the noise term when interference is the dominant factor in the system, the condition for simultaneous transmissions without impairment by mutual interference can be further reduced to $\sum_{j=1}^{N} \alpha_{ji} x_j \leq 0$ if $x_i = 1$, where we define $\alpha_{ii} = -1$ and $\alpha_{ji} = \beta g_{ji}/g_{ii}$, $i \neq j$. We will briefly discuss auction mechanisms under the physical model at the end of Section 6.3.

The special feature that multiple winners can share one band results in new kinds of potential collusion. In the following, we illustrate them by simple examples in the situation of Figure 6.1, assuming that users {2, 4, 6} are supposed to be winners in the absence of collusion.
• Loser collusion. A group of losers without mutual interference, e.g., users {1, 5}, cannot win the band because the band is worth less to them than it is to the winners. However, by collusively raising their bids beyond their valuations, the group may beat the winners and win the band instead. If the prices charged to them are still lower than
A multi-winner cognitive spectrum auction game
their true valuations even though they overbid, they will receive positive payoffs from this kind of collusion. Hence, auction mechanisms should prevent overbidding colluders from gaining profits.

• Sublease collusion. After the auction, some of the winners may decide not to access the spectrum but instead to sublease it to other secondary users who failed to win the band in the auction. For instance, users {2, 6} may negotiate a price with the potential buyers {3, 5} and sublease the band as long as all of them agree on the sublease price. Since this amounts to earning extra profits effortlessly, such collusion takes away some of the benefits that should be credited to the primary user.

• Kick-out collusion. Several users belonging to the same interest group attempt to manipulate the auction outcome by misrepresenting mutual interference. Assume that users {4, 5, 6} form such a group. Then user 4 and user 6 can claim to experience mutual interference with user 2, i.e., \hat{C}_{42} = 1 and \hat{C}_{62} = 1, to kick user 2 out and welcome their ally, user 5, into the set of winners.
6.3 One-band multi-winner auctions

In defining the rules for winner determination and price determination, mechanism design plays an important role in an auction, since it greatly affects the auction outcome as well as user behavior. For example, the widely employed VCG mechanism ensures that the maximum system utility is attained and enforces that all buyers bid their true valuations in the absence of collusion, i.e., b_i = v_i (i = 1, 2, \ldots, N). Although the VCG mechanism could be applied to the multi-winner auction, serious drawbacks such as low revenue and vulnerability to collusion make it less attractive, as shown in [464] through specific examples of cognitive spectrum auctions. Therefore, we need to develop suitable mechanisms for the multi-winner auction that guarantee system efficiency, yield high revenue, prevent potential collusion, and are easy to implement.
6.3.1 The optimal allocation

Because the goal of dynamic spectrum access is to improve the efficiency of spectrum utilization, the auction mechanisms should be designed such that the social welfare is maximized; that is, the band is awarded to the secondary users who value it the most. In a cognitive spectrum auction, only users without mutual interference can be awarded the band simultaneously, and we group them together as virtual bidders, whose valuations equal the sums of the individual valuations. Taking Figure 6.1 as an example, there are 17 virtual bidders, such as {1}, {1, 5}, {4, 5, 6}, and so on; on the other hand, combinations like {1, 3} and {2, 5, 6} are not virtual bidders, owing to interference. In order to achieve full efficiency, the virtual bidder with the highest bid wins the band. It is unnecessary to list all the virtual bidders explicitly; instead, the optimal allocation x can be determined by the following N-variable binary integer programming (BIP) problem:
U_v^* = \max_x \sum_{i=1}^{N} v_i x_i,
s.t. x_i + x_j \le 1, \forall i, j if C_{ij} = 1 (interference constraints),
     x_i = 0 or 1, i = 1, 2, \ldots, N,   (6.4)

where the interference constraints require that secondary users with mutual interference not be assigned the band simultaneously.
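To make the winner determination concrete, the BIP in (6.4) can be solved by exhaustive search when N is small. The sketch below (in Python) uses the 6-user conflict matrix from Figure 6.1; the valuations and the function name are ours, chosen purely for illustration, and a real spectrum broker would use a proper integer-programming solver instead.

```python
from itertools import combinations

# Conflict matrix C from Figure 6.1: C[i][j] = 1 means users i and j
# interfere and cannot share the band (users are 0-indexed here).
C = [[0, 1, 1, 1, 0, 1],
     [1, 0, 1, 0, 1, 0],
     [1, 1, 0, 0, 0, 1],
     [1, 0, 0, 0, 0, 0],
     [0, 1, 0, 0, 0, 0],
     [1, 0, 1, 0, 0, 0]]

def optimal_allocation(v, C):
    """Exhaustively solve the BIP (6.4): return (U_v*, winner set W)."""
    n = len(v)
    best_value, best_set = 0.0, ()
    for r in range(1, n + 1):
        for subset in combinations(range(n), r):
            # keep only conflict-free subsets (virtual bidders)
            if all(C[i][j] == 0 for i, j in combinations(subset, 2)):
                value = sum(v[i] for i in subset)
                if value > best_value:
                    best_value, best_set = value, subset
    return best_value, set(best_set)

v = [22.0, 25.0, 21.0, 28.0, 24.0, 27.0]   # illustrative valuations
U_star, W = optimal_allocation(v, C)
```

With these hypothetical valuations, the conflict-free set {2, 4, 6} (indices {1, 3, 5} here) maximizes the total valuation, matching the winner set assumed in Section 6.2.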
6.3.2 Collusion-resistant pricing strategies

After introducing the concept of virtual bidders, the multi-winner spectrum auction becomes similar to a single-winner auction, and hence it is possible to employ the second-price strategy. In a second-price auction, the bidder with the highest bid wins the commodity and pays an amount of money equal to the second-highest bid. It is well known that submitting a bid equal to one's true valuation is the dominant strategy, since a buyer may end up paying more than the commodity is actually worth if he/she submits a bid higher than the true valuation, and may lose the opportunity to win if he/she submits a lower one. Applying the second-price mechanism to the auction consisting of virtual bidders means that the virtual bidder with the highest bid wins the band (ties are broken randomly if two virtual bidders have the same valuation), and pays the highest bid made by a virtual bidder consisting only of losers. This can be done by solving two optimal-allocation problems in succession. First, we solve (6.4) to determine the set of winners W, or the virtual winner. Then, we remove all the winners W from the system and solve the optimization problem again to calculate the maximum utility, denoted by U_{v,-W}^*, which is the amount of money that the virtual winner has to pay.

We have to point out that the new pricing strategy sacrifices the enforcement of truth-telling a little for higher revenue and more robustness against collusion; however, since the pricing strategy is quite similar to the second-price mechanism, in which users bid their true valuations, we expect that users will not shade their bids much below their true valuations. Thus, we neglect the difference between b_i and v_i in the following analysis, in order to focus on the revenue and robustness aspects of the new mechanisms.

The remaining problem is splitting the payment U_{v,-W}^* among the secondary users within the virtual winner. This is quite similar to a Nash bargaining game, in which each selfish player proposes his/her own payment during a bargaining process such that the total payment equals U_{v,-W}^*, and it is well known that the Nash bargaining solution (NBS), which maximizes the product of the individual payoffs, is an equilibrium. In the proposed auction, no individual bargaining is necessary; instead, the spectrum broker directly sets the NBS prices for each winner, and everyone is ready to accept them since they are equilibrium prices. The pricing strategy is the solution to the following optimization problem:

\max_{p_i \in [0, v_i], i \in W} \prod_{i \in W} (v_i - p_i), s.t. \sum_{i \in W} p_i = U_{v,-W}^*.   (6.5)
Proposition 6.3.1 User i has to pay the price

p_i = \max\{v_i - \rho, 0\}, for i \in W,   (6.6)

where \rho is chosen such that \sum_{i \in W} p_i = U_{v,-W}^*. In particular, if \hat{p}_i = v_i - (U_v^* - U_{v,-W}^*)/|W| \ge 0 for every i, then p_i = \hat{p}_i is the solution.

PROOF. Letting q_i = v_i - p_i for i \in W and using the fact that \sum_{i \in W} v_i = U_v^*, the optimization problem (6.5) is equivalent to the following convex optimization problem:

\min_{q_i \in [0, v_i], i \in W} \sum_{i \in W} (-\ln q_i), s.t. \sum_{i \in W} q_i = U_v^* - U_{v,-W}^* \triangleq \Delta U.   (6.7)

After introducing the Lagrange multipliers \lambda and \mu_i \ge 0, i \in W, the Lagrangian function is

L(q, \lambda, \mu) = -\sum_{i \in W} \ln q_i + \lambda (\sum_{i \in W} q_i - \Delta U) + \sum_{i \in W} \mu_i (q_i - v_i),   (6.8)

from which the Karush–Kuhn–Tucker (KKT) conditions [41] can be obtained as follows:

q_i = 1/(\lambda + \mu_i), \mu_i \ge 0, \mu_i (q_i - v_i) = 0, \sum_{i \in W} q_i = \Delta U.   (6.9)

Define \rho = 1/\lambda. For those i such that q_i = v_i, q_i = 1/(\lambda + \mu_i) \le 1/\lambda = \rho; for the other i, such that q_i < v_i, the third condition implies \mu_i = 0, and thus q_i = 1/(\lambda + \mu_i) = \rho. Therefore, the solution is

q_i = \min(v_i, \rho),   (6.10)

with \rho chosen such that the last condition in (6.9) is satisfied. Finally, p_i = v_i - q_i yields (6.6). In particular, if \Delta U/|W| \le \min_i v_i, then \rho = \Delta U/|W| and p_i = v_i - \rho is the solution.

It can be seen that the payment is split in such a way that the profits are shared among the winners as equally as possible. Unlike the VCG pricing strategy, which may sometimes yield low or even zero revenue, this pricing strategy always guarantees that the seller receives revenue of at least U_{v,-W}^*. Moreover, if some losers collude to beat the winners by raising their bids, they will have to pay more than U_{v,-W}^*; this payment already exceeds what the band is actually worth to them, and, as a result, loser collusion is completely eliminated. Nevertheless, users can still benefit from sublease collusion, and hence we call the pricing strategy in (6.5) the partially collusion-resistant pricing strategy.

In order to find a fully collusion-resistant pricing strategy, we have to analyze how sublease collusion takes place, and add more constraints accordingly. It happens when a subset of the winners W_C \subseteq W subleases the band to a subset of the losers L_C \subseteq L, where L = {1, 2, \ldots, N} \ W denotes the set of all losers. The necessary condition for sublease collusion is \sum_{i \in W_C} p_i < \sum_{i \in L_C} v_i, so that they can find a sublease
price in between that is acceptable to both parties. Given any colluding-winner subset W_C \subseteq W, the potential users who may be interested in subleasing the band should have no mutual interference with the remaining winners W \ W_C; otherwise, the band turns out to be unusable. Denote the set of all such potential users by L(W \ W_C), i.e.,

L(W \ W_C) = {i \in L | C_{ij} = 0, \forall j \in W \ W_C}.

Therefore, as long as prices are set such that \sum_{i \in W_C} p_i \ge \max_{L_C \subseteq L(W \ W_C)} \sum_{i \in L_C} v_i, there will be no sublease collusion. Note that \max_{L_C \subseteq L(W \ W_C)} \sum_{i \in L_C} v_i is the maximum system utility U_{v,L(W \ W_C)}^*, which can be obtained by solving the optimal-allocation problem within the user set L(W \ W_C); thus, the optimal collusion-resistant pricing strategy is the solution to the following problem:

\max_{p_i \in [0, v_i], i \in W} \prod_{i \in W} (v_i - p_i), s.t. \sum_{i \in W_C} p_i \ge U_{v,L(W \ W_C)}^*, \forall W_C \subseteq W.   (6.11)
When W_C = W, the constraint reduces to \sum_{i \in W} p_i \ge U_{v,-W}^*, which incorporates the constraint in (6.5) as a special case. There are 2^{|W|} - 1 constraints in total, because each corresponds to a subset W_C \subseteq W other than W_C = \emptyset. From another perspective, this actually takes into consideration virtual bidders consisting of both winners and losers, in contrast to the previous pricing strategy, in which only those consisting of losers are considered.
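The two-step pricing of Section 6.3.2 can be sketched as follows: first compute U_{v,-W}^* by re-running the optimal allocation over the losers only, then split that payment among the winners according to Proposition 6.3.1, finding \rho by bisection. The valuations, the winner set, and all helper names below are illustrative assumptions, not part of the chapter's specification.

```python
from itertools import combinations

# Conflict matrix and illustrative valuations for the Figure 6.1 example.
C = [[0, 1, 1, 1, 0, 1],
     [1, 0, 1, 0, 1, 0],
     [1, 1, 0, 0, 0, 1],
     [1, 0, 0, 0, 0, 0],
     [0, 1, 0, 0, 0, 0],
     [1, 0, 1, 0, 0, 0]]
v = [22.0, 25.0, 21.0, 28.0, 24.0, 27.0]

def max_utility(users, v, C):
    """Highest total valuation over conflict-free subsets of `users`."""
    best = 0.0
    users = list(users)
    for r in range(1, len(users) + 1):
        for s in combinations(users, r):
            if all(C[i][j] == 0 for i, j in combinations(s, 2)):
                best = max(best, sum(v[i] for i in s))
    return best

def nbs_prices(W, v, C):
    """Proposition 6.3.1: p_i = max(v_i - rho, 0), with rho set by
    bisection so that the winners' payments sum to U*_{v,-W}."""
    losers = [i for i in range(len(v)) if i not in W]
    target = max_utility(losers, v, C)        # U*_{v,-W}
    lo, hi = 0.0, max(v[i] for i in W)
    for _ in range(100):
        rho = (lo + hi) / 2.0
        paid = sum(max(v[i] - rho, 0.0) for i in W)
        if paid > target:
            lo = rho                           # rho too small: overpaying
        else:
            hi = rho
    return {i: max(v[i] - rho, 0.0) for i in W}

W = {1, 3, 5}          # winners {2, 4, 6} of the example, 0-indexed
prices = nbs_prices(W, v, C)
```

Here the losers' best conflict-free coalition is {1, 5} with value 46, so the three winners jointly pay 46 and keep equal profits, as Proposition 6.3.1 prescribes.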
6.3.3 Interference matrix disclosure

So far, our auction mechanism has been based on the assumption that the underlying interference matrix C reflects the true mutual interference relationships between secondary users. However, since C comes from the secondary users' own reports, it is quite possible that selfish users manipulate this information, just as they may do with their bids. If cheating could help a loser become a winner, or help a winner pay less, selfish users would have an incentive to do so, which would compromise the efficiency of the spectrum auction. Such cheating may happen individually or collusively. Therefore, we have to carefully consider whether users have an incentive to deviate and, if so, how to fix the potential problem. Here, we assume that there are no malicious users who are determined to harm others or even the whole system.

In order to obtain the matrix C, the spectrum broker has to collect information from secondary users. Secondary users may report their locations in terms of coordinates, and the spectrum broker calculates the matrix according to their distances. In this way, secondary users do not have much freedom to fake an interference relationship in their own favor. Alternatively, secondary users may directly inform the spectrum broker about who their neighbors are, in which case they are able to manipulate the matrix, either by concealing an existing interference relationship or by fabricating one that does not exist.

When secondary users have little information about others, they will misrepresent the interference relationships only if they do not get punished for doing so, even in the worst case. Assume that user j lies about C_{jk}. When users j and k do not mutually
interfere, i.e., C_{jk} = 0, but user j claims \hat{C}_{jk} = 1, he/she may lose an opportunity of being a winner, since an extra interference constraint has been added; on the other hand, if C_{jk} = 1 but user j claims \hat{C}_{jk} = 0, user j may end up winning the band together with user k, but the band cannot be used at all because of strong interference. In short, the worst-case analysis suggests that secondary users have no incentive to cheat whenever information is limited.

When secondary users somehow have more information about others, they may distort the information in a more intelligent way; that is, they can choose when and how to cheat. Nevertheless, by investigating whether user j is better off by misrepresenting C_{jk}, we show that truth-telling is an equilibrium from which no individual has an incentive to deviate unilaterally. We discuss all possible situations in what follows.

(i) Under the condition that user j is supposed to be a loser.
(a) Claim \hat{C}_{jk} = 1 against the truth C_{jk} = 0. By doing this, user j actually introduces an additional interference constraint against himself/herself, but, since user j is already a loser, nothing would change.
(b) Claim \hat{C}_{jk} = 0 against the truth C_{jk} = 1. Removing a constraint possibly helps user j become a winner, but, in that case, user k is also one of the winners. Then user j has to pay for a band that turns out to be unusable owing to strong mutual interference with user k. This is unacceptable to user j.

(ii) Under the condition that user j is supposed to be a winner.
(a) Claim \hat{C}_{jk} = 0 against the truth C_{jk} = 1. If user j is the only one among the winners that experiences interference with user k, this claim would bring user k into the set of winners, which would in turn make user j suffer from mutual interference.
(b) Claim \hat{C}_{jk} = 1 against the truth C_{jk} = 0. If user k is not a winner, doing this would change nothing. If user k is indeed a winner, user j takes the risk of throwing himself/herself out of the set of winners. Even if user j has enough information to ensure that he/she can still be a winner, kicking out user k does not necessarily make user j pay less.

In sum, no individual has an incentive to cheat, even when there is enough information to make intelligent cheating possible. A similar analysis can be applied to the situation in which a group of secondary users is able to distort the information collusively, and we find that kick-out collusion, as defined in Section 6.2, is the only way in which colluders gain an advantage. If the channels are symmetric, i.e., C_{jk} = C_{kj} always holds, we can apply the following conservative rule: the spectrum broker sets C_{jk} to 1 only when both user j and user k confirm that they have mutual interference. With this rule applied, colluding users cannot unilaterally fabricate an interference relationship with an innocent user who is honest, and they lose their incentive to cheat because their efforts are in vain. If the channels are asymmetric, however, the spectrum broker will have to ask secondary users to report their locations whenever there is a discrepancy between the reported C_{jk} and C_{kj}.
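A minimal sketch of the conservative rule, assuming each user submits a full row of claimed interference relationships (this report format is our assumption):

```python
def conservative_matrix(reports):
    """Conservative rule: C[j][k] = 1 only when BOTH user j and user k
    report mutual interference (reports[j][k] is user j's claim about k)."""
    n = len(reports)
    return [[1 if reports[j][k] == 1 and reports[k][j] == 1 else 0
             for k in range(n)] for j in range(n)]

# User 0 falsely claims interference with user 1; user 1 reports honestly,
# so the fabricated link is discarded, while the mutually confirmed link
# between users 1 and 2 is kept.
reports = [[0, 1, 0],
           [0, 0, 1],
           [0, 1, 0]]
C = conservative_matrix(reports)
```

By construction the resulting matrix is symmetric, so a single dishonest report can never create an interference entry on its own.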
In sum, we have examined secondary users' incentives to lie about the underlying interference relationships, and conclude that no single user or group of users has an incentive to cheat, individually or collusively, when the spectrum broker employs the conservative rule to determine the interference matrix C from secondary users' reports, under the condition of symmetric channels.
6.3.4 Complexity issues

We have to examine the complexity of the mechanism to see whether it is scalable when more users are involved in the auction game. Since fully collusion-resistant pricing is a convex optimization problem once the linear inequality constraints are known, it can be solved efficiently by numerical methods such as the interior-point method [41]. However, one optimal-allocation problem has to be solved to find the set of winners W, and another 2^{|W|} - 1 problems have to be solved to obtain the U_{v,L(W \ W_C)}^* used in the constraints. Unfortunately, the optimal-allocation problem can be seen as the maximum weighted independent-set problem [145] in graph theory, which is known to be NP-complete in general^1 even for the simplest case with v_i = 1 for all i [226]. Since the computational complexity becomes formidable when the number of users N is large, the auction mechanism seems unscalable. Therefore, near-optimal approximations with polynomial complexity are of great interest.

Lemma 6.3.1 On defining \mu_v = [\sqrt{v_1}, \sqrt{v_2}, \ldots, \sqrt{v_N}]^T, the optimal-allocation problem (6.4) with x^* as its optimizer is equivalent to the following optimization problem:

\hat{U}_v^* = \max_y (\mu_v^T y)^2, s.t. y_i y_j = 0, \forall i, j if C_{ij} = 1, |y|^2 = 1,   (6.12)

whose optimizer y^* is given by y_i^* = c \sqrt{v_i} x_i^*, where c is a normalization constant such that |y^*|^2 = 1.

PROOF. Assume that W^* and V^* are the supports of the optimizers x^* and y^*, respectively, i.e., i \in W^* if and only if x_i^* \ne 0, and i \in V^* if and only if y_i^* \ne 0. Note that the constraint x_i + x_j \le 1 can also be written as x_i x_j = 0 for binary integers. Define y_i = \sqrt{v_i} x_i^* / \sqrt{\sum_{k \in W^*} v_k}, whose norm |y|^2 equals 1; moreover, for i, j such that C_{ij} = 1, y_i y_j = (\sqrt{v_i v_j} / \sum_{k \in W^*} v_k) x_i^* x_j^* = 0. Satisfying both constraints, y is in the feasible set, which should yield a value not exceeding the optimum,

\hat{U}_v^* \ge (\mu_v^T y)^2 = (\sum_{i \in W^*} v_i)^2 / \sum_{k \in W^*} v_k = \sum_{i \in W^*} v_i = U_v^*.   (6.13)
1 It is true except in some special cases, e.g., when the graph is perfect. The graph corresponding to our optimal-allocation problem does not possess those special properties.
On the other hand, knowing that y^* is the optimizer, we can confine the problem to V^* as follows:

\max_{y_i, i \in V^*} (\sum_{i \in V^*} \sqrt{v_i} y_i)^2, s.t. \sum_{i \in V^*} y_i^2 = 1.   (6.14)

According to the Cauchy–Schwarz inequality, (\sum_{i \in V^*} \sqrt{v_i} y_i)^2 \le (\sum_{i \in V^*} v_i)(\sum_{i \in V^*} y_i^2) = \sum_{i \in V^*} v_i, where the equality holds when y_i^* = c \sqrt{v_i} (i \in V^*) for some constant c. Furthermore, it is impossible to find i, j \in V^* such that C_{ij} = 1; otherwise, y_i^* y_j^* \ne 0 would violate the constraint. This implies that V^* is also a compatible group of users without interference, and we have

\hat{U}_v^* = \sum_{i \in V^*} v_i \le U_v^*.   (6.15)

On comparing (6.13) with (6.15), we conclude that \hat{U}_v^* = U_v^*, and the optimizers are related by y_i^* = c \sqrt{v_i} x_i^* with the normalization factor c.

The optimal allocation is no longer an integer programming problem, but it is still difficult to solve because of the non-convex feasible set. To make it numerically solvable in polynomial time, the SDP relaxation can be applied, which enlarges the feasible set to a cone of positive semi-definite matrices (a convex set) by removing some constraints [254]. To this end, let S = y y^T, i.e., S_{ij} = y_i y_j. The objective function in (6.12) becomes \mu_v^T S \mu_v, and the two constraints turn out to be S_{ij} = 0, \forall i, j if C_{ij} = 1, and tr(S) = 1, respectively. The problem has to be optimized over {S \in S^N | S = y y^T, y \in R^{N \times 1}}, or, equivalently, {S \in S^N | S \succeq O, rank(S) = 1}. On discarding the rank requirement while keeping only the constraint of positive semi-definiteness, we arrive at the following convex optimization problem:

\vartheta(C, v) = \max_{S \succeq O} \mu_v^T S \mu_v, s.t. tr(S) = 1, S_{ij} = 0, \forall i, j if C_{ij} = 1,   (6.16)

which is also known as the theta number [252], [142] in graph theory. With the feasible set enlarged by relaxing a constraint to a necessary condition, the new optimization problem provides an upper bound on the original one: if the optimizer S^* can be decomposed as S^* = y^* y^{*T}, which means that S^* falls within the original feasible set, y^* will be the exact solution to (6.12); otherwise, \vartheta(C, v) is an unattainable upper bound. Fortunately, we verify by simulation that the near-optimal algorithm with relaxation performs well: in our problem setting, it gives the exact solution most of the time (>90%), and even for the unattainable cases the bound is considerably tight, since the average gap is within 5%.

Since the upper bound is quite tight, we can replace U_{v,L(W \ W_C)}^* in (6.11) by its corresponding bound without significantly changing the prices charged to each winner; but we have to find the optimizer x^* when deciding who the winners are. To this end, we examine whether S^* can be decomposed as the outer product of a vector with itself, and, if the answer is yes, we can get y^* and then map it back to x^*. A much simpler way is to let x_i^* = 1 if S_{ii}^* \ne 0, since Lemma 6.3.1 tells us that x_i^* = 0 if and only if
y_i^* = 0, and then check whether this allocation is interference-free.^2 Most of the time, we will end up with the exact solution; for failed trials, we can resort to other sub-optimal algorithms, such as the greedy algorithm, to find a sub-optimal allocation, or simply solve the original problem directly.
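The recovery step can be sketched as follows. Here we build a rank-1 S* by hand from the known optimizer of the Figure 6.1 example, purely to exercise the thresholding-and-checking logic; in practice S* would come from an SDP solver, and the tolerance plays the role of the 10^{-5}-style threshold suggested in the footnote.

```python
import math

C = [[0, 1, 1, 1, 0, 1],
     [1, 0, 1, 0, 1, 0],
     [1, 1, 0, 0, 0, 1],
     [1, 0, 0, 0, 0, 0],
     [0, 1, 0, 0, 0, 0],
     [1, 0, 1, 0, 0, 0]]
v = [22.0, 25.0, 21.0, 28.0, 24.0, 27.0]

# Hand-built rank-1 S* = y* y*^T from the known winners {2, 4, 6}
# (0-indexed {1, 3, 5}); normally S* is returned by the SDP solver.
x_star = [0, 1, 0, 1, 0, 1]
y = [math.sqrt(v[i]) * x_star[i] for i in range(6)]
norm = math.sqrt(sum(yi * yi for yi in y))
y = [yi / norm for yi in y]
S = [[y[i] * y[j] for j in range(6)] for i in range(6)]

def recover_allocation(S, C, tol=1e-5):
    """Set x_i = 1 whenever S_ii is numerically nonzero, then verify the
    recovered set is interference-free under the protocol model."""
    n = len(S)
    x = [1 if abs(S[i][i]) > tol else 0 for i in range(n)]
    ok = all(not (x[i] and x[j] and C[i][j] == 1)
             for i in range(n) for j in range(i + 1, n))
    return x, ok

x_rec, interference_free = recover_allocation(S, C)
```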
6.3.5 The physical interference model

In this subsection, we extend our auction mechanism to the situation where the physical model is employed to describe mutual interference. Now, the optimal allocation becomes social-welfare maximization under physical interference constraints,

U_v^* = \max_x \sum_{i=1}^{N} v_i x_i,
s.t. \sum_{j=1}^{N} \alpha_{ji} x_j \le 0, if x_i = 1,
     x_i = 0 or 1, i = 1, 2, \ldots, N.   (6.17)
Recall that the \alpha terms are defined in Section 6.2 as \alpha_{ii} = -1 and \alpha_{ji} = \beta g_{ji}/g_{ii} for i \ne j, which basically depend on channel gains. Thus, the optimal allocation remains much the same except that the protocol interference constraints are replaced by physical interference constraints. Pricing strategies are similar, too. Nevertheless, the SDP relaxation is a bit more difficult because the constraints are more complicated than those exerted by the protocol model. First, we replace the constraint "\sum_{j=1}^{N} \alpha_{ji} x_j \le 0 if x_i = 1" by the equivalent but compact form x_i \sum_{j=1}^{N} \alpha_{ji} x_j \le 0, which holds because x_i is a binary integer variable. Then, we can apply similar approaches, i.e., y_i = c \sqrt{v_i} x_i and S_{ji} = y_j y_i, and finally obtain the following relaxed optimization problem:

\max_{S \succeq O} \mu_v^T S \mu_v, s.t. tr(S) = 1, S_{ji} = 0, \forall i, j if \alpha_{ji} > 1, \sum_{j=1}^{N} (\alpha_{ji}/\sqrt{v_j v_i}) S_{ji} \le 0, i = 1, 2, \ldots, N.   (6.18)

Note that, when \alpha_{ji} > 1, i.e., user j causes strong interference to user i, user i cannot transmit simultaneously with user j: if x_i = x_j = 1, we would have \sum_{j=1}^{N} \alpha_{ji} x_j \ge \alpha_{ji} x_j + \alpha_{ii} x_i = \alpha_{ji} - 1 > 0, violating the constraint. Hence, the corresponding constraint is quite similar to that under the protocol model. Moreover, compared with (6.16), the SDP relaxation under the physical model (6.18) incorporates additional constraints reflecting the accumulation of interference power.

Additional difficulties arise when we recover the optimal allocation vector x^* from the optimizer S^*. The reason is that, under the protocol model, we are able to map y^* back to x^* on the basis of the converse part of Lemma 6.3.1, but this no longer

2 In practice, because S^* is solved by some numerical method, the entries in S^* need not be strictly equal to 0. Thus, we can set S_{ii}^* = 0 whenever |S_{ii}^*| is smaller than a particular threshold, say 10^{-5}.
holds under the physical model. However, we can still exploit hints from y^* to construct a near-optimal \hat{x} using the greedy algorithm. Specifically, we sort the values y_i^*/\sqrt{v_i} in descending order, and denote the sorted index by [n(1), n(2), \ldots, n(N)]; for example, n(1) = \arg\max_i y_i^*/\sqrt{v_i}. After initializing \hat{x} as an all-zero vector and setting \hat{x}_{n(1)} = 1, we set \hat{x}_{n(2)} = 1 if the resultant \hat{x} still satisfies the physical interference constraints, and \hat{x}_{n(2)} = 0 otherwise. We determine the binary values \hat{x}_{n(k)} (k = 3, 4, \ldots, N) one by one in the same way, and finally obtain the vector \hat{x}, which is used as the allocation vector.
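The greedy recovery just described can be sketched as below, assuming the score vector y_i^*/\sqrt{v_i} is already available (here replaced by hypothetical scores) and that the \alpha coefficients are given:

```python
def feasible(active, alpha):
    """Physical-model check: sum_j alpha[j][i] <= 0 for every active
    receiver i, with alpha[i][i] = -1 by definition."""
    return all(sum(alpha[j][i] for j in active) <= 0 for i in active)

def greedy_allocation(scores, alpha):
    """Admit users in descending score order, keeping each one only if the
    physical interference constraints still hold for the whole active set."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    active = []
    for i in order:
        if feasible(active + [i], alpha):
            active.append(i)
    x = [0] * len(scores)
    for i in active:
        x[i] = 1
    return x

# Hypothetical normalized coefficients alpha_ji = beta*g_ji/g_ii and
# scores standing in for y_i*/sqrt(v_i).
alpha = [[-1.0, 0.3, 0.2],
         [0.3, -1.0, 0.9],
         [0.2, 0.9, -1.0]]
scores = [3.0, 2.0, 1.0]
x_hat = greedy_allocation(scores, alpha)
```

In this toy instance, users 1 and 2 are admitted, while user 3 is rejected because the accumulated interference 0.2 + 0.9 at its receiver exceeds the budget of 1.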
6.4 Multi-band multi-winner auctions

It is more interesting to study the case in which M primary users want to lease their unused bands, or a single primary user divides the band into M sub-bands for lease. In other words, there are M bands (M > 1) available for secondary users to lease. In this section, we extend our existing results to the multi-band auction.
6.4.1 Multi-band auction mechanisms

Since there are usually many secondary users competing for the spectrum resources, it would be unfair if some users could access several bands while others are starved of access. In addition, if each secondary user is equipped with a single radio, this physical limitation makes it impossible to access several bands simultaneously. Therefore, we require that each user lease at most one band, and we further assume that secondary users do not care which band they get, i.e., any band's value is v_i to user i.

On extending the one-band auction to a more general multi-band one, we have to find the counterpart of the auction mechanism, including the optimal allocation and the pricing strategies. Since there are M sets of winners W^1, W^2, \ldots, W^M, we define M vectors x^1, x^2, \ldots, x^M correspondingly, where x_i^m = 1 indicates that user i wins the mth band. Including the additional constraint that each user cannot lease more than one band, we have the M-band optimal allocation as follows:

U_v^* = \max_{x^1, x^2, \ldots, x^M} \sum_{m=1}^{M} \sum_{i=1}^{N} v_i x_i^m,
s.t. x_i^m + x_j^m \le 1, \forall i, j if C_{ij} = 1, \forall m,
     \sum_{m=1}^{M} x_i^m \le 1, \forall i,
     x_i^m = 0 or 1, i = 1, 2, \ldots, N; m = 1, 2, \ldots, M.   (6.19)
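For small instances, (6.19) can again be solved exhaustively by assigning each user to one of the M bands or to none. The sketch below reuses the Figure 6.1 conflict matrix with M = 2 and illustrative valuations; it is a brute-force check, not the SDP-based method developed next.

```python
from itertools import product

C = [[0, 1, 1, 1, 0, 1],
     [1, 0, 1, 0, 1, 0],
     [1, 1, 0, 0, 0, 1],
     [1, 0, 0, 0, 0, 0],
     [0, 1, 0, 0, 0, 0],
     [1, 0, 1, 0, 0, 0]]
v = [22.0, 25.0, 21.0, 28.0, 24.0, 27.0]
M = 2   # number of bands

def multiband_allocation(v, C, M):
    """Exhaustively solve (6.19): each user gets one of the M bands or
    none (encoded as band index M), maximizing the total valuation
    subject to per-band interference constraints."""
    n = len(v)
    best_value, best = 0.0, None
    for bands in product(range(M + 1), repeat=n):
        ok = all(not (bands[i] == bands[j] and bands[i] < M
                      and C[i][j] == 1)
                 for i in range(n) for j in range(i + 1, n))
        if ok:
            value = sum(v[i] for i in range(n) if bands[i] < M)
            if value > best_value:
                best_value, best = value, bands
    return best_value, best

U_multi, bands = multiband_allocation(v, C, M)
```

With two bands, the best assignment serves five of the six users, for example {2, 4, 6} on one band and {1, 5} on the other; all six cannot be served simultaneously because users 1, 2, and 3 interfere pairwise.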
In the multi-band auction, the set of losers becomes L = {1, 2, \ldots, N} \ \bigcup_{j=1}^{M} W^j instead. Similarly to the single-band partially collusion-resistant pricing strategy, the winners of the mth band have to pay the highest rejected bid from the losers, and the payment is split according to the NBS equilibrium,
\max_{p_i \in [0, v_i], i \in W^m} \prod_{i \in W^m} (v_i - p_i), s.t. \sum_{i \in W^m} p_i = U_{v, -\bigcup_{j=1}^{M} W^j}^*.   (6.20)
The single-band fully collusion-resistant pricing strategy can be generalized too; for instance, the prices for the mth band are determined by

\max_{p_i \in [0, v_i], i \in W^m} \prod_{i \in W^m} (v_i - p_i), s.t. \sum_{i \in W_C} p_i \ge U_{v, L(W^m \ W_C)}^*, \forall W_C \subseteq W^m.   (6.21)
When M = 1, the two pricing strategies reduce to the single-band case.
6.4.2 The SDP relaxation for the multi-band auction

In order to apply the SDP relaxation to the multi-band optimal allocation, we first introduce the auxiliary variables \chi = [x^{1T}, x^{2T}, \ldots, x^{MT}]^T and \nu = 1_{M \times 1} \otimes v. Notice that, for binary integers, the constraint \sum_{m=1}^{M} x_i^m \le 1 is equivalent to x_i^m + x_i^k \le 1, \forall m \ne k. Hence, the optimal allocation (6.19) can be written in the equivalent form

\max_\chi \sum_{i=1}^{MN} \nu_i \chi_i, s.t. \chi_i + \chi_j \le 1, \forall i, j if \Gamma_{ij} = 1, \chi_i = 0 or 1, i = 1, 2, \ldots, MN,   (6.22)

where

\Gamma =
[ C  I  \cdots  I
  I  C  \cdots  I
  \vdots  \vdots  \ddots  \vdots
  I  I  \cdots  C ]   (6.23)

is the effective interference matrix. Viewed as a single-band auction with MN users, the optimal allocation is upper bounded by the theta number,

\vartheta(\Gamma, \nu) = \max_{S \succeq O} \mu_\nu^T S \mu_\nu, s.t. tr(S) = 1, S_{ij} = 0, \forall i, j if \Gamma_{ij} = 1.   (6.24)
This problem is optimized over S \in S^{MN}, which has (1/2)MN(MN + 1) degrees of freedom; however, there is some redundancy resulting from the structural symmetry of the problem: according to our assumption that all bands are equivalent, interchanging the winners of band m and band k makes no difference. In general, if \chi = [x^{1T}, x^{2T}, \ldots, x^{MT}]^T is an optimal solution, then, for any permutation \pi, \chi(\pi) = [x^{\pi(1)T}, x^{\pi(2)T}, \ldots, x^{\pi(M)T}]^T will also be an optimal solution. Similarly to the one-to-one mapping in the one-band optimal allocation proved above in Lemma 6.3.1, we define \eta = c[\sqrt{\nu_1}\chi_1, \sqrt{\nu_2}\chi_2, \ldots, \sqrt{\nu_{MN}}\chi_{MN}]^T = [y^{1T}, y^{2T}, \ldots, y^{MT}]^T, with c chosen such that |\eta|^2 = 1. Owing to symmetry, if S = \eta\eta^T is a solution to (6.24), so is S(\pi) = \eta(\pi)\eta^T(\pi), where \eta(\pi) = [y^{\pi(1)T}, y^{\pi(2)T}, \ldots, y^{\pi(M)T}]^T. The average over all M! permutations is
\bar{S} = (1/M!) \sum_\pi S(\pi) = (1/M)
[ S_D  S_F  \cdots  S_F
  S_F  S_D  \cdots  S_F
  \vdots  \vdots  \ddots  \vdots
  S_F  S_F  \cdots  S_D ],   (6.25)

where the diagonal blocks are

S_D = \sum_{m=1}^{M} y^m (y^m)^T,   (6.26)

and the off-diagonal blocks are

S_F = (1/(M-1)) \sum_{m=1}^{M} \sum_{k=1, k \ne m}^{M} y^m (y^k)^T.   (6.27)
As shown by the following proposition, we need optimize only over the two smaller matrices S_D and S_F rather than the large-dimension matrix S in (6.24). Just as in the one-band case, the idea is to relax the specific structures of the matrices S_D and S_F into necessary conditions that they must satisfy in terms of positive semi-definiteness.

Proposition 6.4.1 The multi-band optimal allocation (6.19) can be relaxed to the following optimization problem:

\max_{S_D, S_F} \mu_v^T (S_D + (M - 1) S_F) \mu_v   (6.28)
s.t. tr(S_D) = 1,   (6.29)
     (S_D)_{ij} = 0, \forall i, j if C_{ij} = 1,   (6.30)
     (S_F)_{ii} = 0, \forall i,   (6.31)
     S_D \succeq O, S_D - S_F \succeq O,   (6.32)
     S_D + (M - 1) S_F \succeq O.   (6.33)
PROOF. The objective function is \mu_\nu^T \bar{S} \mu_\nu. Since \bar{S} = (1/M)(I_{M \times M} \otimes S_D + (1_{M \times M} - I_{M \times M}) \otimes S_F) and \mu_\nu = 1_{M \times 1} \otimes \mu_v, we apply the properties of Kronecker products [36],

\mu_\nu^T \bar{S} \mu_\nu = (1/M)[1_{M \times 1}^T I_{M \times M} 1_{M \times 1} \otimes \mu_v^T S_D \mu_v + 1_{M \times 1}^T (1_{M \times M} - I_{M \times M}) 1_{M \times 1} \otimes \mu_v^T S_F \mu_v] = \mu_v^T (S_D + (M - 1) S_F) \mu_v.   (6.34)

Because |\eta|^2 = 1, we have 1 = \eta^T \eta = \sum_{m=1}^{M} (y^m)^T y^m = \sum_{m=1}^{M} tr(y^m (y^m)^T) = tr(S_D), and hence we obtain constraint (6.29). Moreover, the interference constraints in the original problem (6.19) imply that y_i^m y_j^m = 0 if C_{ij} = 1 and y_i^m y_i^k = 0, \forall m \ne k, according to the relationship between x and y established in Lemma 6.3.1. Hence, \forall i, j such that C_{ij} = 1,
(S_D)_{ij} = \sum_{m=1}^{M} y_i^m y_j^m = 0, which is constraint (6.30). Constraint (6.31) can be proved in a similar way.

The final step is the SDP relaxation. To show that a matrix S \in S^N is positive semi-definite, we prove by definition that z^T S z \ge 0 for any vector z \in R^{N \times 1}. Given any vector z, define the scalars z^m = z^T y^m, m = 1, 2, \ldots, M. Then,

z^T S_D z = \sum_{m=1}^{M} (z^m)^2 \ge 0.   (6.35)

In addition,

z^T (S_D - S_F) z = \sum_{m=1}^{M} (z^m)^2 - (1/(M-1)) \sum_{m=1}^{M} \sum_{k=1, k \ne m}^{M} z^m z^k
= \sum_{m=1}^{M} (z^m)^2 - (1/(M-1)) [(\sum_{m=1}^{M} z^m)^2 - \sum_{m=1}^{M} (z^m)^2]
= (1/(M-1)) [M \sum_{m=1}^{M} (z^m)^2 - (\sum_{m=1}^{M} z^m)^2] \ge 0,   (6.36)

where M \sum_{m=1}^{M} (z^m)^2 \ge (\sum_{m=1}^{M} z^m)^2 follows from the Cauchy–Schwarz inequality. Also,

z^T (S_D + (M - 1) S_F) z = \sum_{m=1}^{M} (z^m)^2 + \sum_{m=1}^{M} \sum_{k=1, k \ne m}^{M} z^m z^k = (\sum_{m=1}^{M} z^m)^2 \ge 0.   (6.37)

Therefore, the matrices S_D, S_D - S_F, and S_D + (M - 1) S_F are all positive semi-definite.

We verify by simulation that this upper bound is also tight and hence can be employed in our auction to reduce complexity. Since the new optimization problem is optimized over two symmetric matrices S_D, S_F \in S^N, the total number of degrees of freedom is N(N + 1), which is significantly smaller than that of the direct relaxation, (1/2)MN(MN + 1). Roughly speaking, the number of degrees of freedom, an important factor affecting the computational complexity, is reduced from O(M^2 N^2) to O(N^2).
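The three identities used in the proof can be checked numerically. The sketch below builds S_D and S_F as in (6.26) and (6.27) from random vectors y^1, ..., y^M (stand-ins for the per-band optimizers, which suffice for the identities) and verifies z^T S_D z = \sum_m (z^m)^2 and z^T (S_D + (M-1) S_F) z = (\sum_m z^m)^2 for a random z:

```python
import random

random.seed(7)
M, N = 3, 5
# Hypothetical per-band vectors y^1, ..., y^M.
Y = [[random.uniform(-1.0, 1.0) for _ in range(N)] for _ in range(M)]

def outer_sum(pairs):
    """Sum of outer products y y'^T over the given (y, y') pairs."""
    S = [[0.0] * N for _ in range(N)]
    for y, yp in pairs:
        for i in range(N):
            for j in range(N):
                S[i][j] += y[i] * yp[j]
    return S

# S_D = sum_m y^m (y^m)^T; S_F = (1/(M-1)) sum_{m != k} y^m (y^k)^T.
S_D = outer_sum([(Y[m], Y[m]) for m in range(M)])
S_F = [[s / (M - 1) for s in row] for row in
       outer_sum([(Y[m], Y[k]) for m in range(M)
                  for k in range(M) if k != m])]

def quad(S, z):
    """Quadratic form z^T S z."""
    return sum(z[i] * S[i][j] * z[j] for i in range(N) for j in range(N))

z = [random.uniform(-1.0, 1.0) for _ in range(N)]
zm = [sum(z[i] * Y[m][i] for i in range(N)) for m in range(M)]  # z^m

lhs1 = quad(S_D, z)                             # equals sum_m (z^m)^2
S_minus = [[S_D[i][j] - S_F[i][j] for j in range(N)] for i in range(N)]
lhs2 = quad(S_minus, z)                         # >= 0 by Cauchy-Schwarz
S_plus = [[S_D[i][j] + (M - 1) * S_F[i][j] for j in range(N)]
          for i in range(N)]
lhs3 = quad(S_plus, z)                          # equals (sum_m z^m)^2
```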
6.5 Simulation results

In this section, we evaluate the performance of the collusion-resistant multi-winner spectrum-auction mechanisms by computer experiments. Consider an area of 1000 m × 1000 m, in which N secondary users are uniformly distributed. Assume that each secondary user is a cognitive base station with an R_I-meter coverage radius and that, according to
the protocol model, two users at least 2R_I meters apart can share the same band without mutual interference. We use two values of R_I: R_I = 150 m for a light-interference network, and R_I = 350 m for a heavy-interference network. The valuations of different users {v_1, v_2, \ldots, v_N} are assumed to be i.i.d. random variables uniformly distributed in [20, 30]. We consider a one-band auction, i.e., M = 1.

Figure 6.2 shows the seller's revenue versus the number of secondary users when various auction mechanisms are employed. The result is averaged over 100 independent runs, in which the locations and valuations of the N secondary users are generated randomly with a uniform distribution. As shown in Figure 6.2, directly applying the second-price scheme under-utilizes the spectrum resources, and the VCG mechanism also suffers from low revenue. The collusion-resistant methods, however, significantly improve the primary user's revenue, e.g., by nearly 15% compared with the VCG outcome when R_I = 350 m, and by 30% when R_I = 150 m. This means that the algorithms perform better when more secondary users are allowed to lease the band simultaneously.

Moreover, the auction mechanisms can effectively combat collusion. We use the percentage of the system utility taken away by colluders to represent the vulnerability to sublease colluding attacks. Figure 6.3 demonstrates the results from 100 independent runs. For example, when R_I = 150 m and there are 20% colluders, colluders may steal up to 10% of the system utility with the VCG pricing mechanism,
120
Seller Revenue
100
80
RI = 150 m
60
RI = 350 m 40
20
Figure 6.2
8
10
12 14 16 Number of Secondary Users
18
The seller’s revenue when various auction mechanisms are employed.
20
173
6.5 Simulation results
Percentage of System Utility Taken by Collusion (%)
100
100 VCG pricing, average
90
Partially collusion−resistant pricing, average
80
80
Partially collusion−resistant pricing, worst case Fully collusion−resistant pricing
70
70 60
60
RI = 350 m
RI = 150 m 50
50
40
40
30
30
20
20
10
10
0 20 Figure 6.3
90
VCG pricing, worst case
40
60
0 80 100 20 40 Percentage of Colluders (%)
60
80
100
Normalized collusion gains under various auction mechanisms versus the percentage of colluders in a spectrum auction with N = 20 secondary users.
and much greater profits could be taken away by colluders if more secondary users became colluders. To protect the primary user’s interests, collusion-resistant mechanisms can be applied. As shown in Figure 6.3, the partially collusion-resistant pricing strategy might be not as good as the VCG mechanism on average under some circumstances because it cannot completely remove sublease collusion, but it makes the worst-case colluding gains drop considerably; for instance, when RI = 150 m and all users are able to collude, more than half of the system’s utility could be taken away if the VCG pricing were used, but only 22% with the partially collusion-resistant pricing method. The fully collusion-resistant pricing strategy, as expected, completely eliminates collusion, and hence is an ideal choice when there is a risk of sublease collusion. The performance of the near-optimal algorithm is presented in Figure 6.4. As shown by the simulation results, the near-optimal algorithm can yield the exact solution in more than 90% of the total number of runs. Even when the near-optimal algorithm fails to return the exact solution, it can still yield a tight upper bound with an average difference of less than 5%; to show the robustness of the algorithm, we further provide the 90% confidence intervals (i.e., the range within which 90% of the data fall), which show that the gap between the near-optimal solution and the exact solution is within 10%.
174
A multi-winner cognitive spectrum auction game
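As a toy illustration of the protocol-model constraint and the one-band winner determination discussed above, the following sketch (not from the book; the positions, valuations, instance size, and brute-force search are illustrative assumptions) marks two users as compatible when they are at least 2RI apart and enumerates subsets to find a revenue-maximizing winner set:

```python
# Sketch (not from the book): protocol-model compatibility and brute-force
# one-band winner determination on a tiny random instance.
from itertools import combinations
from math import hypot
from random import seed, uniform

seed(1)
RI = 350.0                                    # coverage radius in meters
users = [(uniform(0, 1000), uniform(0, 1000)) for _ in range(8)]
values = [uniform(20, 30) for _ in users]     # valuations i.i.d. in [20, 30]

def compatible(group):
    """Two users can share the band iff they are at least 2*RI apart."""
    return all(hypot(users[a][0] - users[b][0], users[a][1] - users[b][1])
               >= 2 * RI for a, b in combinations(group, 2))

# Enumerate all subsets (feasible only for tiny instances) and pick the
# winner set maximizing total valuation.
best_set, best_val = (), 0.0
for r in range(1, len(users) + 1):
    for group in combinations(range(len(users)), r):
        v = sum(values[i] for i in group)
        if v > best_val and compatible(group):
            best_val = v
            best_set = group
```

At realistic network sizes this enumeration is exponential, which is exactly why the chapter resorts to the SDP relaxation to obtain a near-optimal allocation in polynomial time.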
Figure 6.4 The percentage of total trials for which the near-optimal algorithm yields the exact solution (upper), and the average difference with 90% confidence intervals (i.e., from 5% to 95%) between the near-optimal solution and the exact solution for the failed trials (middle and lower, for RI = 150 m and 350 m, respectively), as the number of users grows from 10 to 40.
Finally, we show the reduction of complexity in terms of processing time when optimization is done in MATLAB.³ In Figure 6.5, the processing time of solving the optimal-allocation problem is compared with that of solving the near-optimal-allocation problem in 100 independent runs, when RI = 350 m and the number of users is N = 20, 30, and 40. With increasing N, the time taken to find the optimal solution increases dramatically, whereas the time taken to find a near-optimal solution using the SDP relaxation increases only slightly. Moreover, the processing time of the optimal algorithm varies considerably in different realizations, but the processing time with the SDP relaxation shows only a small variation.

Figure 6.5 Sampled processing time of the optimal allocation and the near-optimal allocation with the SDP relaxation, when the number of users is N = 20, 30, and 40.

Next, we show that, for multi-band allocation, applying relaxation with a reduced dimension (given in Proposition 6.4.1) can further reduce the complexity of the straightforward relaxation. We fix N = 40, increase the number of bands M from 2 to 6, and present the average processing time over 100 independent runs in Figure 6.6. As shown in Figure 6.6, the complexity results in terms of processing time are consistent with the analysis in the previous section; that is, solving the near-optimal allocation using reduced-dimension relaxation is more efficient when M is large.

Figure 6.6 The average processing time of the straightforward SDP relaxation and of the SDP relaxation with a reduced dimension, as the number of bands M increases from 2 to 6.

³ To solve the SDP problem, CVX software [135], a MATLAB-based software for disciplined convex optimization, is employed.
6.6
Summary

In this chapter, we present a novel multi-winner auction game for the spectrum-auction scenario in cognitive radio networks, in which secondary users can lease temporarily unused bands from primary users. Since this kind of auction is absent from the existing literature, where the auctioned commodities are usually quantity-limited, suitable auction mechanisms are developed to guarantee full efficiency of spectrum utilization, yield higher revenue to primary users, and help eliminate user collusion. To make the scheme scalable, the SDP relaxation is applied to obtain a near-optimal solution in polynomial time. Moreover, we extend the one-band auction mechanism to the multi-band case. Simulation results are presented to demonstrate the performance and complexity of the auction mechanisms. For related references, interested readers can refer to [465].
7
Evolutionary cooperative spectrum sensing games
Cooperative spectrum sensing has been shown to greatly improve the sensing performance in cognitive radio networks. However, if cognitive users belong to different service providers, they tend to contribute less to sensing in order to increase their own throughput. In this chapter, we discuss an evolutionary game framework that answers the question of "how to collaborate" in multiuser decentralized cooperative spectrum sensing. Evolutionary game theory provides an excellent means to address the strategic uncertainty that a user/player may face, by exploring different actions, adaptively learning during the strategic interactions, and approaching the best response strategy under changing conditions and environments using replicator dynamics. We derive the behavior dynamics and the evolutionarily stable strategy (ESS) of the secondary users. We then prove that the dynamics converge to the ESS, which makes possible a decentralized implementation of the proposed sensing game. Employing these dynamics, we further develop a distributed learning algorithm so that the secondary users approach the ESS solely on the basis of their own payoff observations. Simulation results show that the average throughput achieved in the proposed cooperative sensing game is higher than that in the case in which secondary users sense the primary user individually without cooperation. The proposed game is demonstrated to converge to the ESS, and to achieve a higher system throughput than that of the fully cooperative scenario, in which all users contribute to sensing in every time slot.
7.1
Introduction

Since primary users should be carefully protected from interference due to secondary users' operation, spectrum sensing has become an essential function of cognitive radio devices. Cooperative spectrum sensing, with the help of relay nodes and multiuser collaborative sensing, has recently been shown to greatly improve the sensing performance. However, in most of the existing cooperative spectrum-sensing schemes [148] [297] [139] [341] [265] [434] [140], a fully cooperative scenario is assumed: in every time slot, all secondary users voluntarily contribute to sensing and fuse their detection results at a central controller (e.g., a secondary base station), which makes a final decision. However, sensing the primary band consumes a certain amount of energy and time, which could otherwise be used for data transmission, so it might not be optimal to have
all users participate in sensing in every time slot in order to guarantee a certain system performance. Moreover, with the emerging applications of mobile ad hoc networks envisioned in civilian usage, the secondary users may be selfish and not serve a common goal. If multiple secondary users occupy different sub-bands of one primary user and can overhear the others' sensing outcomes, they tend to take advantage of the others and wait for the others to sense, so as to reserve more time for their own data transmission. Therefore, it is of great importance to study the dynamic cooperative behaviors of selfish users in a competing environment while simultaneously boosting the system performance.

In this chapter, we model cooperative spectrum sensing as an evolutionary game, where the payoff is defined as the throughput of a secondary user. Evolutionary games have previously been applied to modeling networking problems, such as resource-sharing mechanisms in peer-to-peer networks [510] and congestion control [456], using behavioral experiments. In this chapter, we incorporate practical multiuser effects and constraints into the spectrum sensing game. The secondary users want to perform a common task, i.e., given a required detection probability to protect the primary user from interference, sense the primary band collaboratively for the sake of getting a high throughput by sharing the sensing cost. The users who do not take part in cooperative sensing can overhear the sensing results and thus have more time for their own data transmission. However, if no user spends time sensing the primary user, all of them may get a very low throughput. Therefore, secondary users need to try different strategies at each time slot and learn the best strategy from their strategic interactions using the methodology of understanding-by-building.
In order to study the evolution of secondary users' strategies and answer the question of how they should cooperate in the evolutionary spectrum sensing game, we analyze the process of secondary users updating their strategy profile with replicator dynamics equations [124], since a rational player should choose a strategy more often if that strategy brings a relatively higher payoff. We derive the ESS of the game, and prove the convergence to the ESS through analyzing the users' behavior dynamics. Then we extend our observation to a more general game with heterogeneous users, analyze the properties of the ESSs, and develop a distributed learning algorithm so that the secondary users approach the ESS with just their own payoff history. Simulation results show that, as the number of secondary users and the cost of sensing increase, users tend to have less incentive to contribute to cooperative sensing. However, in general they can still achieve a higher average throughput in the spectrum sensing game than with single-user sensing. Furthermore, the cooperative sensing game achieves a higher total throughput than asking all users to contribute to sensing at every time slot.

The rest of this chapter is organized as follows. In Section 7.2, we present the system model and formulate multiuser cooperative spectrum sensing as a game. In Section 7.3, we introduce the background on evolutionary game theory, analyze the behavior dynamics and the ESS of the game, and develop a distributed learning algorithm for the ESS. Simulation results are shown in Section 7.4. Finally, Section 7.5 summarizes the chapter.
7.2
The system model and spectrum sensing game
7.2.1
The hypothesis of channel sensing
When a secondary user is sensing the licensed spectrum channel in a cognitive radio network, the received signal r(t) from the detection has two hypotheses, namely when the primary user is present or absent, denoted by H1 and H0, respectively. Then, r(t) can be written as

r(t) = hs(t) + w(t), if H1,
r(t) = w(t),         if H0.    (7.1)

In (7.1), h is the gain of the channel from the primary user's transmitter to the secondary user's receiver, which is assumed to exhibit slow flat fading; s(t) is the signal of the primary user, which is assumed to be an i.i.d. random process with mean zero and variance σs²; and w(t) is additive white Gaussian noise (AWGN) with mean zero and variance σw². Here s(t) and w(t) are assumed to be mutually independent. Assuming that we use an energy detector to sense the licensed spectrum, the test statistic T(r) is defined as

T(r) = (1/N) Σ_{t=1}^{N} |r(t)|²,    (7.2)
where N is the number of samples collected. The performance of licensed spectrum sensing is characterized by two probabilities. The probability of detection, PD, represents the probability of detecting the presence of the primary user under hypothesis H1. The false-alarm probability, PF, is the probability of detecting the primary user's presence under hypothesis H0. The higher PD, the better protection the primary user will receive; the lower PF, the more spectrum access the secondary user will obtain. If the noise term w(t) is assumed to be circularly symmetric complex Gaussian (CSCG), then, using the central limit theorem, the PDF of the test statistic T(r) under H0 can be approximated by a Gaussian distribution N(σw², σw⁴/N). Then, the false-alarm probability, PF, is given by

PF(λ) = Q((λ/σw² − 1)√N),    (7.3)

where λ is the threshold of the energy detector, and Q(·) denotes the complementary distribution function of the standard Gaussian, i.e.,

Q(x) = (1/√(2π)) ∫_x^∞ exp(−t²/2) dt.

Similarly, if we assume that the primary signal is a complex PSK signal, then, under hypothesis H1, the PDF of T(r) can be approximated by a Gaussian distribution N((γ + 1)σw², (2γ + 1)σw⁴/N), where γ = |h|²σs²/σw² denotes the received SNR of the primary user under H1. Then, the probability of detection PD can be approximated by
PD(λ) = Q((λ/σw² − γ − 1)√(N/(2γ + 1))).    (7.4)
Given a target detection probability P̄D, the threshold λ can be derived, and the false-alarm probability PF can be further rewritten as

PF(P̄D, N, γ) = Q(√(2γ + 1) Q⁻¹(P̄D) + √N γ),    (7.5)

where Q⁻¹(·) denotes the inverse function of Q(·).
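The closed forms (7.3)–(7.5) can be checked numerically. The sketch below is an illustration, not from the book: it assumes σw² = 1 and example values for P̄D, N, and γ, and verifies that inverting (7.4) for the threshold λ and substituting into (7.3) reproduces (7.5):

```python
# Sketch (not from the book): numerical check of Eqs. (7.3)-(7.5) with
# sigma_w^2 = 1; pd_bar, N, and gamma are assumed example values.
from math import erfc, sqrt

def Q(x):
    """Tail probability of the standard Gaussian."""
    return 0.5 * erfc(x / sqrt(2.0))

def Qinv(p):
    """Inverse of Q(.) by bisection; Q is strictly decreasing."""
    lo, hi = -40.0, 40.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if Q(mid) > p else (lo, mid)
    return 0.5 * (lo + hi)

def pf_from_threshold(lam, N):            # Eq. (7.3)
    return Q((lam - 1.0) * sqrt(N))

def pd_from_threshold(lam, gamma, N):     # Eq. (7.4)
    return Q((lam - gamma - 1.0) * sqrt(N / (2.0 * gamma + 1.0)))

def pf_given_pd(pd_bar, N, gamma):        # Eq. (7.5)
    return Q(sqrt(2.0 * gamma + 1.0) * Qinv(pd_bar) + sqrt(N) * gamma)

# Derive the detector threshold lambda from the target P_D by inverting
# (7.4); plugging it back into (7.3) must reproduce the closed form (7.5).
pd_bar, N, gamma = 0.95, 1000, 0.05
lam = Qinv(pd_bar) * sqrt((2.0 * gamma + 1.0) / N) + gamma + 1.0
```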
7.2.2
The throughput of a secondary user

While sensing the primary user's activity, a secondary user cannot simultaneously perform data transmission. If we denote the sampling frequency by fs and the frame duration by T, then the time duration for data transmission is given by T − δ(N), where δ(N) = N/fs represents the time spent sensing. When the primary user is absent, in those time slots where no false alarm is generated, the average throughput of a secondary user is

RH0(N) = ((T − δ(N))/T)(1 − PF)CH0,    (7.6)

where CH0 represents the data rate of the secondary user under H0. When the primary user is present, and not detected by the secondary user, the average throughput of a secondary user is

RH1(N) = ((T − δ(N))/T)(1 − PD)CH1,    (7.7)

where CH1 represents the data rate of the secondary user under H1. If we denote by PH0 the probability that the primary user is absent, then the total throughput of a secondary user is

R(N) = PH0 RH0(N) + (1 − PH0)RH1(N).    (7.8)
In dynamic spectrum access, it is required that the secondary users’ operation should not conflict or interfere with the primary users, and PD should be unity in the ideal case. According to (7.5), however, PF is then also equal to unity, and the total throughput of a secondary user (7.8) is zero, which is impractical. Hence, a primary user who allows secondary spectrum access usually predetermines a target detection probability P¯D very close to unity [265], under which we assume that secondary spectrum access will be prohibited as a punishment. Then, from the secondary user’s perspective, he/she wants to maximize his/her total throughput (7.8), given that PD ≥ P¯D . Since the target detection probability P¯D is required by the primary user to be very close to unity, and we usually have CH1 < CH0 due to the interference from the primary user with the secondary user, the second term in (7.8) is much smaller than the first term and can be omitted. Therefore, (7.8) can be approximated by
R̃(N) ≈ PH0 RH0(N) = PH0 ((T − δ(N))/T)(1 − PF)CH0.    (7.9)
We know from (7.5) that, given a target detection probability P̄D, PF is a decreasing function of N. As a secondary user reduces N (or δ(N)) in the hope of having more time for data transmission, PF will increase. This indicates that there is a tradeoff for the secondary user in choosing an optimal N that maximizes the throughput R̃(N). In order to reduce both PF and N, i.e., keep a low false-alarm probability PF with a smaller N, a good choice for a secondary user is to cooperatively sense the spectrum with the other secondary users in the same licensed band.
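The tradeoff can be made concrete with a small sweep over N. The sketch below is illustrative, not from the book (the frame length, sampling rate, PH0, CH0, SNR, and target P̄D are assumed values); it evaluates (7.9) using the closed form (7.5) and locates an interior throughput-maximizing N:

```python
# Sketch (not from the book): the sensing-throughput tradeoff of Eq. (7.9).
from math import erfc, sqrt

def Q(x):
    return 0.5 * erfc(x / sqrt(2.0))

def Qinv(p):
    lo, hi = -40.0, 40.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if Q(mid) > p else (lo, mid)
    return 0.5 * (lo + hi)

def throughput(N, pd_bar=0.95, gamma=0.05, p_h0=0.8, c_h0=1.0, T=0.1, fs=1e5):
    """R~(N) of Eq. (7.9), with P_F from the closed form (7.5)."""
    pf = Q(sqrt(2.0 * gamma + 1.0) * Qinv(pd_bar) + sqrt(N) * gamma)
    delta = N / fs                      # sensing time delta(N) = N / fs
    return p_h0 * (T - delta) / T * (1.0 - pf) * c_h0

rates = {N: throughput(N) for N in range(100, 10000, 100)}
best_N = max(rates, key=rates.get)      # interior maximizer of the tradeoff
```

Too few samples inflate PF, while too many eat into the transmission time T − δ(N), so the maximum lies strictly between the extremes.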
7.2.3
Spectrum sensing games

A diagram of a cognitive radio network in which multiple secondary users are allowed to access one licensed spectrum band is shown in Figure 7.1. Here we assume that the secondary users within each other's transmission range can exchange their sensory data about primary-user detection. The cooperative spectrum sensing scheme is illustrated in Figure 7.2. We assume that the entire licensed band is divided into K
Figure 7.1 The system model.

Figure 7.2 Cooperative spectrum sensing (in each slot of length T, sensing and signaling over the sub-bands s1, . . . , sK are followed by data transmission).
sub-bands, and that each secondary user operates exclusively in one of the K sub-bands when the primary user is absent. Transmission time is slotted into intervals of length T. Before each data transmission, the secondary users need to sense the primary user's activity. Since, on becoming active, the primary user will operate in all the sub-bands, the secondary users within each other's transmission range can jointly sense the primary user's presence, and exchange their sensing results via a narrowband signaling channel, as shown in Figure 7.2. In this way, each of them can spend less time detecting while enjoying a low false-alarm probability PF via some decision fusion rule [83], and the spectrum sensing cost (δ(N)) can be shared by whoever is willing to contribute (C). However, depending on their locations and the quality of the received primary signal, it might not be optimal to have all secondary users participate in spectrum sensing at every time slot in order to guarantee a certain level of system performance. Moreover, having all secondary users cooperate in sensing may be difficult if the users do not serve a common authority, and instead act selfishly to maximize their own throughput. In this case, once a secondary user is able to overhear the detection results from the other users, he/she can take advantage of that by refusing to take part in spectrum sensing, which is called denying (D). Although each secondary user in the cognitive radio network still achieves the same false-alarm probability PF, the users who refuse to join in cooperative sensing have more time for their own data transmission. However, the secondary users get a very low throughput if no one senses the spectrum and each of them waits in the hope that someone else will do the job. Therefore, we can model spectrum sensing as a non-cooperative game. The players of the game are the secondary users, denoted S = {s1, . . . , sK}.
Each player sk has the same action/strategy space, denoted A = {C, D}, where "C" represents the pure strategy contribute and "D" represents the pure strategy refuse to contribute (denying). The payoff function is defined as the throughput of the secondary user. Assume that the secondary users contributing to cooperative sensing form a set, denoted Sc = {s1, . . . , sJ}. Denote the false-alarm probability of the cooperative sensing among members of the set Sc, with fusion rule "RULE" and a target detection probability P̄D, by PF^Sc = PF(P̄D, N, {γi}_{i∈Sc}, RULE). Then the payoff for a contributor sj ∈ Sc can be defined as

ŨC,sj = PH0 (1 − δ(N)/(|Sc|T)) (1 − PF^Sc) Csj, if |Sc| ∈ [1, K],    (7.10)

where |Sc|, i.e., the cardinality of the set Sc, represents the number of contributors, and Csj is the data rate for user sj under hypothesis H0. Therefore, if user sj chooses to cooperate, then he/she will share the sensing time with the other cooperative users, and the cost is divided equally among all the cooperative users. In (7.10), we assume that the spectrum sensing cost is equally divided among all the contributors; otherwise, there may be a fairness issue. The payoff for a user si ∉ Sc, who selects strategy D, is then given by

ŨD,si = PH0 (1 − PF^Sc) Csi, if |Sc| ∈ [1, K − 1],    (7.11)
since si will not spend time sensing. Therefore, if user sj chooses not to contribute to sensing, he/she will rely on the contributors' decision, have more time for data transmission, and can expect a higher throughput. If no secondary user contributes to sensing and each of them waits for the others to sense, i.e., |Sc| = 0, from (7.5) we know that lim_{N→0} PF = 1, especially in the low-received-SNR regime and under a high-P̄D requirement. In this case, the payoff for a denier becomes

ŨD,si = 0, if |Sc| = 0.    (7.12)
The decision fusion rule can be selected to be the logical-OR rule, the logical-AND rule, or the majority rule. In this chapter, we use the majority rule to derive PF^Sc, though the other fusion rules could be analyzed similarly. Denote the detection and false-alarm probabilities for a contributor sj ∈ Sc by PD,sj and PF,sj, respectively. Then, under the majority rule we have

PD = Pr[at least half of the users in Sc report H1 | H1],    (7.13)

and

PF = Pr[at least half of the users in Sc report H1 | H0].    (7.14)
Hence, given a P̄D for the set Sc, each individual user's target detection probability P̄D,sj can be obtained by solving the following equation:

P̄D = Σ_{k=⌈(1+|Sc|)/2⌉}^{|Sc|} (|Sc| choose k) (P̄D,sj)^k (1 − P̄D,sj)^{|Sc|−k},    (7.15)

where we assume that each contributor sj ∈ Sc takes equal responsibility in making the final decision on account of fairness concerns, and therefore P̄D,sj is identical for all sj. Then, from (7.5) we can write PF,sj as

PF,sj = Q(√(2γsj + 1) Q⁻¹(P̄D,sj) + √(N/|Sc|) γsj),    (7.16)

and can further obtain PF^Sc by substituting (7.16) into (7.14). Since secondary users try to maximize their own payoff values, i.e., the average throughput, given the three possible outcomes in (7.10)–(7.12), the selfish users' behaviors are highly unpredictable. Contributing to cooperative sensing can provide a stable throughput, but the stable throughput is achieved at the cost of less time for data transmission; being a free-rider may save more time for useful data transmission, but the secondary users also face the risk of having no one sense the spectrum and thus might get zero throughput. Therefore, how should a selfish but rational secondary user collaborate with other selfish users in cooperative spectrum sensing? Is it best always to contribute to sensing, always to free-ride, or neither? In the next section, we will answer this question by analyzing the dynamics of rational secondary users' behavior and derive the equilibrium strategy, with the aid of evolutionary game theory.
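For homogeneous contributors, (7.15) can be inverted numerically and the fused false-alarm probability obtained from (7.16) and (7.14). The following sketch is illustrative, not from the book (assumed values for P̄D, |Sc|, N, and γ; "at least half" is implemented as k ≥ ⌈(1 + |Sc|)/2⌉):

```python
# Sketch (not from the book): majority-rule fusion of Eqs. (7.13)-(7.16)
# for homogeneous contributors with equal SNR.
from math import comb, erfc, sqrt

def Q(x):
    return 0.5 * erfc(x / sqrt(2.0))

def Qinv(p):
    lo, hi = -40.0, 40.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if Q(mid) > p else (lo, mid)
    return 0.5 * (lo + hi)

def majority(p, n):
    """Probability that at least ceil((1+n)/2) of n i.i.d. reports fire."""
    k0 = (1 + n + 1) // 2   # integer form of ceil((1 + n) / 2)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k0, n + 1))

def individual_pd(pd_bar, n):
    """Solve Eq. (7.15) for each contributor's target P_D by bisection."""
    lo, hi = 0.0, 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if majority(mid, n) < pd_bar else (lo, mid)
    return 0.5 * (lo + hi)

pd_bar, n, N, gamma = 0.95, 5, 3000, 0.05        # assumed example values
pd_j = individual_pd(pd_bar, n)                  # per-user target, Eq. (7.15)
pf_j = Q(sqrt(2*gamma + 1) * Qinv(pd_j) + sqrt(N / n) * gamma)  # Eq. (7.16)
pf_fused = majority(pf_j, n)                     # P_F^Sc via Eq. (7.14)
```

Note that each contributor's target detection probability comes out below the fused target, and the fused false alarm below the individual one, which is exactly the benefit of sharing the sensing task.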
7.3
Evolutionary sensing games and strategy analysis

In this section, we first introduce the concept of an evolutionarily stable strategy (ESS), and then use replicator dynamics to model and analyze the behavior dynamics of the secondary users in the sensing game.
7.3.1
Evolutionarily stable strategy

Evolutionary game theory provides a good means to address the strategic uncertainty that a player faces in a game by taking out-of-equilibrium behavior into account, learning during the strategic interactions, and approaching a robust equilibrium strategy. One such equilibrium strategy concept that is widely adopted in evolutionary game theory is the evolutionarily stable strategy (ESS) [395], which is "a strategy such that, if all members of the population adopt it, then no mutant strategy could invade the population under the influence of natural selection." Let us define the expected payoff as the individual fitness, and use π(p, p̂) to denote the payoff of an individual using strategy p against another individual using strategy p̂. Then, we have the following formal definition of an ESS.

Definition 7.3.1 A strategy p∗ is an ESS if and only if, for all p ≠ p∗,

(i) π(p, p∗) ≤ π(p∗, p∗) (equilibrium condition);
(ii) if π(p, p∗) = π(p∗, p∗), then π(p, p) < π(p∗, p) (stability condition).

Condition (i) states that p∗ is a best response strategy to itself, and is hence a Nash equilibrium (NE). Condition (ii) is interpreted as a stability condition. Suppose that the incumbents play p∗, and some mutants play p. Then conditions (i) and (ii) ensure that, as long as the fraction of mutants playing p is not too large, the average payoff to p will fall short of that to p∗. Since strategies with a higher fitness value are expected to propagate faster in a population through strategic interactions, evolution will cause the population using the mutation strategy p to decrease until the entire population uses strategy p∗.
Hence, a stable strategy that is robust against mutants using different strategies is especially preferred. Therefore, we consider the use of evolutionary game theory [442] to analyze the behavior dynamics of the players and further derive the ESS as the secondary users’ optimal collaboration strategy in cooperative spectrum sensing.
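Definition 7.3.1 can be checked numerically on a small example. The sketch below is not from the book: it uses a hypothetical 2 × 2 Hawk–Dove payoff matrix (V = 2, C = 4), whose mixed ESS is known to be p∗ = V/C = 0.5, and tests both conditions over a grid of mutant strategies:

```python
# Sketch (not from the book): checking the two ESS conditions of
# Definition 7.3.1 for a hypothetical Hawk-Dove game with V=2, C=4.
A = [[-1.0, 2.0],   # payoff of Hawk vs. (Hawk, Dove): (V-C)/2, V
     [0.0, 1.0]]    # payoff of Dove vs. (Hawk, Dove): 0, V/2

def payoff(p, q):
    """pi(p, q): the p-mixer's expected payoff against a q-mixer."""
    return (p * q * A[0][0] + p * (1 - q) * A[0][1]
            + (1 - p) * q * A[1][0] + (1 - p) * (1 - q) * A[1][1])

p_star = 0.5
mutants = [i / 100.0 for i in range(101) if i != 50]

# (i) Equilibrium: no mutant does better than p* against p*.
equilibrium = all(payoff(p, p_star) <= payoff(p_star, p_star) + 1e-12
                  for p in mutants)
# (ii) Stability: wherever a mutant ties against p*, the incumbent must
# beat the mutant inside a mutant population.
stability = all(payoff(p_star, p) > payoff(p, p) for p in mutants
                if abs(payoff(p, p_star) - payoff(p_star, p_star)) < 1e-12)
```

In this game every mutant ties against p∗, so it is the stability condition (ii) that singles out p∗ = 0.5 as evolutionarily stable.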
7.3.2
Evolution dynamics of the sensing game

When a set of rational players are uncertain of each other's actions and utilities, they will try different strategies in every play and learn from the strategic interactions using
the methodology of understanding-by-building. During the process, the percentage (or population share) of players using a certain pure strategy may change. Such a population evolution is characterized by replicator dynamics in evolutionary game theory. Specifically, consider a population of homogeneous individuals with identical data rate Csi and received primary SNR γi. The players adopt the same set of pure strategies A. Since all players have the same Csi and γi, payoffs for playing a particular strategy depend only on the other strategies employed, not on who is playing them. Therefore, all players have the same payoff function U. At time t, let p_{ai}(t) ≥ 0 be the number of individuals who are currently using pure strategy ai ∈ A, and let p(t) = Σ_{ai∈A} p_{ai}(t) > 0 be the total population. Then the associated population state is defined as the vector x(t) = {x_{a1}(t), . . . , x_{a|A|}(t)}, where x_{ai}(t) is defined as the population share x_{ai}(t) = p_{ai}(t)/p(t). By application of replicator dynamics, at time t the evolution dynamics of x_{ai}(t) is given by the following differential equation:

ẋ_{ai} = ε[Ū(ai, x_{−ai}) − Ū(x)]x_{ai},    (7.17)

where Ū(ai, x_{−ai}) is the average payoff of the individuals using pure strategy ai, x_{−ai} is the set of population shares of the members who use pure strategies other than ai, Ū(x) is the average payoff of the whole population, and ε is some positive number representing the time scale. The intuition behind (7.17) is as follows: if strategy ai results in a higher payoff than the average level, the population share using ai will grow, and the growth rate ẋ_{ai}/x_{ai} is proportional to the difference between strategy ai's current payoff and the current average payoff of the entire population. By analogy, we can view x_{ai}(t) as the probability that one player adopts pure strategy ai, and x(t) can be equivalently viewed as a mixed strategy for that player. If a pure strategy ai brings a higher payoff than the mixed strategy, strategy ai will be adopted more frequently, and thus x_{ai}(t), the probability of adopting ai, will increase. The rate of the probability increase ẋ_{ai} is proportional to the difference between pure strategy ai's payoff and the payoff achieved by using the mixed strategy.

For the spectrum sensing game with heterogeneous players, whose γi and/or Csi differ from each other, denote the probability that user sj adopts strategy h ∈ A at time t by x_{h,sj}(t). Then the time evolution of x_{h,sj}(t) is governed by the following dynamics equation:

ẋ_{h,sj} = [Ū_{sj}(h, x_{−sj}) − Ū_{sj}(x)]x_{h,sj},    (7.18)

where Ū_{sj}(h, x_{−sj}) is the average payoff for player sj using pure strategy h, x_{−sj} is the set of strategies adopted by players other than sj, and Ū_{sj}(x) is sj's average payoff using the mixed strategy x_{sj}. Equation (7.18) indicates that, if player sj achieves a higher payoff by using pure strategy h than by using his/her mixed strategy x_{sj}, strategy h will be adopted more frequently; the probability of using h will increase, and the growth rate of x_{h,sj} is proportional to the excess of strategy h's payoff over the payoff of the mixed strategy Ū_{sj}(x).
7.3.3
Analysis of sensing games with homogeneous players

A strategy is an ESS if and only if it is asymptotically stable with respect to the replicator dynamics [442] [374]. Therefore, we can derive the ESS of the spectrum-sensing game by proving its asymptotic stability. In this subsection, we study the ESS of games with homogeneous players; we will discuss the heterogeneous case in the next section. As shown in Figure 7.1, the players of the sensing game are secondary users within each other's transmission range. If the transmission range is small, we can approximately consider all the received γsj to be very similar to each other. Since the γsj are usually very low, in order to guarantee a low PF given a target P̄D, the number of sampled signals N should be large. Under these assumptions, we can approximately view PF^Sc as the same for different Sc, denoted by P̂F. Further assume that all users have the same data rate, i.e., Csi = C for all si ∈ S. Then, the payoff functions defined in (7.10)–(7.12) become

UC(J) = U0(1 − τ/J), if J ∈ [1, K],    (7.19)

and

UD(J) = U0, if J ∈ [1, K − 1],
UD(J) = 0,  if J = 0.    (7.20)
where U0 = PH0(1 − P̂F)C denotes the throughput achieved by a free-rider who relies on the contributors' sensing outcomes, J = |Sc| denotes the number of contributors, and τ = δ(N)/T denotes the fraction of the entire sensing time shared by all contributors over the duration of a time slot. It can be seen from (7.19) and (7.20) that, when there is more than one contributor, if a player chooses to contribute to sensing, the payoff UC(J) is in general smaller than a free-rider's payoff UD(J), due to the sensing cost τ/J. However, in the worst case, when no one contributes to sensing (J = 0), the payoff UD(J) is the smallest. Since the secondary users are homogeneous players, (7.17) can be applied to this special case because all players have the same evolution dynamics and equilibrium strategy. Denote by x the probability that one secondary user contributes to spectrum sensing. Since the average payoff for pure strategy C is the payoff of a player choosing C against another K − 1 players who contribute to sensing with probability x, the average payoff for pure strategy C, ŪC(x), can be expressed as ŪC(x) = Σ_{j=0}^{K−1} UC(j + 1)Pr(j), where Pr(j) denotes the probability that there are in total j contributors among the K − 1 other players. Because

Pr(j) = (K−1 choose j) x^j (1 − x)^{K−1−j},

we can obtain ŪC(x) as

ŪC(x) = Σ_{j=0}^{K−1} (K−1 choose j) x^j (1 − x)^{K−1−j} UC(j + 1).    (7.21)
7.3 Evolutionary sensing games and strategy analysis
187
Similarly, the average payoff for pure strategy D is given by

$$\bar{U}_D(x) = \sum_{j=0}^{K-1} \binom{K-1}{j} x^j (1-x)^{K-1-j}\, U_D(j). \qquad (7.22)$$

Since the average payoff is $\bar{U}(x) = x\bar{U}_C(x) + (1-x)\bar{U}_D(x)$, (7.17) becomes

$$\dot{x} = x(1-x)\left[\bar{U}_C(x) - \bar{U}_D(x)\right]. \qquad (7.23)$$
In equilibrium $x^*$, no player will deviate from the optimal strategy, indicating that $\dot{x}^* = 0$, and we obtain $x^* = 0$, or $1$, or the solution of $\bar{U}_C(x^*) = \bar{U}_D(x^*)$. On subtracting $\bar{U}_D(x)$ from $\bar{U}_C(x)$ we get

$$
\begin{aligned}
\bar{U}_C(x) - \bar{U}_D(x) &= \sum_{j=0}^{K-1} \binom{K-1}{j} x^j (1-x)^{K-1-j}\left[U_C(j+1) - U_D(j)\right] \\
&= \sum_{j=1}^{K-1} \binom{K-1}{j} x^j (1-x)^{K-1-j}\left[U_0\left(1 - \frac{\tau}{j+1}\right) - U_0\right] + M_t \\
&= -U_0\tau \sum_{j=1}^{K-1} \frac{1}{j+1}\binom{K-1}{j} x^j (1-x)^{K-1-j} + M_t \\
&= -\frac{\tau U_0}{Kx} \sum_{j=1}^{K-1} \frac{K!}{(j+1)!(K-j-1)!}\, x^{j+1} (1-x)^{K-j-1} + M_t \\
&= -\frac{\tau U_0}{Kx} \sum_{j=2}^{K} \binom{K}{j} x^j (1-x)^{K-j} + M_t \\
&= \frac{\tau U_0}{Kx}\left[(1-x)^K + Kx(1-x)^{K-1} - 1\right] + M_t \\
&= \frac{U_0\left[\tau(1-x)^K + Kx(1-x)^{K-1} - \tau\right]}{Kx},
\end{aligned}
\qquad (7.24)
$$
with $M_t = (1-x)^{K-1} U_0 (1-\tau)$ collecting the $j = 0$ term. By using l'Hôpital's rule, we know that

$$
\lim_{x \to 0}\left[\bar{U}_C(x) - \bar{U}_D(x)\right]
= \lim_{x \to 0} \frac{U_0\left[-K\tau(1-x)^{K-1} + K(1-x)^{K-1} - Kx(K-1)(1-x)^{K-2}\right]}{K}
= U_0(1-\tau) > 0. \qquad (7.25)
$$
Thus, $x = 0$ is not a solution to the equation $\bar{U}_C(x) - \bar{U}_D(x) = 0$, and the solution must satisfy

$$\tau(1-x^*)^K + K x^*(1-x^*)^{K-1} - \tau = 0. \qquad (7.26)$$
Therefore, we can solve the $K$th-order equation (7.26) to get the remaining equilibrium, besides $x^* = 0$ and $x^* = 1$.
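Equation (7.26) has no closed-form root for general $K$, but it can be solved numerically. The sketch below is illustrative (the parameter values are our own choices): it uses the fact, established in Theorem 7.3.1 below, that $f(x) = \tau(1-x)^K + Kx(1-x)^{K-1} - \tau$ increases on $(0, (1-\tau)/(K-\tau))$ and decreases afterwards with $f(1) = -\tau < 0$, so the unique root in $(0, 1)$ can be bracketed between the peak of $f$ and 1.

```python
def ess_root(K, tau, iters=200):
    """Bisection for the unique root of (7.26) in (0, 1)."""
    f = lambda x: tau*(1-x)**K + K*x*(1-x)**(K-1) - tau
    lo = (1-tau)/(K-tau)   # f peaks here, with f(lo) > 0 for 0 < tau < 1
    hi = 1.0               # f(1) = -tau < 0
    for _ in range(iters):
        mid = 0.5*(lo+hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo+hi)

# Sanity check: for K = 2 and tau = 0.5, (7.26) reduces to x(1 - 1.5x) = 0,
# so the interior root is x* = 2/3.
```

Consistent with the homogeneous-game analysis, the root shrinks as $K$ grows: with more players, each user contributes with lower probability.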
188
Evolutionary cooperative spectrum sensing games
Next we show that the dynamics defined in (7.17) converge to the above-mentioned equilibria, which are asymptotically stable and hence the ESS. Note that the variable in (7.17) is the probability that a user chooses strategy $a_i \in \{C, D\}$, so we need to guarantee that $x_C(t) + x_D(t) = 1$ in the dynamic process. We show this in the following proposition.

Proposition 7.3.1 The sum of the probability that a secondary user chooses strategy C and the probability that this secondary user chooses D is equal to unity in the replicator dynamics of a symmetric sensing game.

PROOF. Summing $\dot{x}_{a_i}$ in (7.17) over $a_i$ yields

$$\dot{x}_C + \dot{x}_D = x_C\bar{U}(C, x_D) + x_D\bar{U}(D, x_C) - (x_C + x_D)\bar{U}(x). \qquad (7.27)$$

Since $\bar{U}(x) = x_C\bar{U}(C, x_D) + x_D\bar{U}(D, x_C)$, and initially a user chooses $x_C + x_D = 1$, (7.27) is reduced to $\dot{x}_C + \dot{x}_D = 0$. Therefore, $x_C(t) + x_D(t) = 1$ holds at any $t$ during the dynamic process. A similar conclusion also holds in an asymmetric game.

In order to prove that the replicator dynamics converge to the equilibrium, we first show that all non-equilibrium strategies of the sensing game will be eliminated during the dynamic process. It suffices to prove that (7.17) is a myopic adjustment dynamic [124].

Definition 7.3.2 A system is a myopic adjustment dynamic if

$$\sum_{h \in A} \bar{U}_{s_j}(h, x_{-s_j})\,\dot{x}_{h,s_j} \geq 0, \quad \forall s_j \in S. \qquad (7.28)$$
Inequality (7.28) indicates that the average utility of a player will not decrease in a myopic adjustment dynamic system. We then prove that the dynamics (7.17) satisfy Definition 7.3.2.

Proposition 7.3.2 The replicator dynamics (7.17) are myopic adjustment dynamics.

PROOF. On substituting (7.17) into (7.28), we get

$$
\sum_{a_i \in A} \dot{x}_{a_i}\bar{U}(a_i, x_{-a_i})
= \sum_{a_i \in A} \bar{U}(a_i, x_{-a_i})\left[\bar{U}(a_i, x_{-a_i}) - \bar{U}(x)\right]x_{a_i}
= \sum_{a_i \in A} x_{a_i}\bar{U}^2(a_i, x_{-a_i}) - \left[\sum_{a_i \in A} x_{a_i}\bar{U}(a_i, x_{-a_i})\right]^2. \qquad (7.29)
$$

From Jensen's inequality, we know that (7.29) is non-negative, which completes the proof. In addition, we can show that (7.28) also holds for a game with heterogeneous players in a similar way.
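The last expression in (7.29) is the variance of the payoff $\bar{U}(a_i, x_{-a_i})$ under the mixed strategy $x$, which is non-negative exactly as Jensen's inequality asserts. A quick numerical spot-check is easy to write; this is illustrative only, with randomly drawn payoffs and normalized mixed strategies of our own choosing:

```python
import random

def weighted_variance(u, w):
    # E[u^2] - (E[u])^2 under weights w summing to 1, cf. the last line of (7.29)
    mean = sum(wi*ui for wi, ui in zip(w, u))
    return sum(wi*ui*ui for wi, ui in zip(w, u)) - mean*mean

rng = random.Random(0)
worst = float("inf")
for _ in range(1000):
    u = [rng.uniform(-5.0, 5.0) for _ in range(3)]   # arbitrary payoff values
    w = [rng.random() for _ in range(3)]
    s = sum(w)
    w = [wi/s for wi in w]                           # normalize to a mixed strategy
    worst = min(worst, weighted_variance(u, w))
# worst stays non-negative (up to floating-point rounding)
```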
In the following theorem, we show that the replicator dynamics in (7.17) converge to the ESS.

Theorem 7.3.1 Starting from any interior point $x \in (0, 1)$, the replicator dynamics defined in (7.17) converge to the ESS $x^*$. Specifically, when $\tau = 1$, the replicator dynamics converge to $x^* = 0$; when $\tau = 0$, the replicator dynamics converge to $x^* = 1$; when $0 < \tau < 1$, the replicator dynamics converge to the solution of (7.26).

PROOF. From the simplified dynamics (7.23), we know that the sign of $\dot{x}(t)$ is determined by the sign of $\bar{U}_C(x) - \bar{U}_D(x)$, given $x \in (0, 1)$ and hence $x(1-x) > 0$. $\bar{U}_C(x)$ and $\bar{U}_D(x)$ are simplified to the following:

$$
\begin{aligned}
\bar{U}_C(x) &= U_0 - U_0(1-x)^{K-1}\tau - U_0 \sum_{j=1}^{K-1} \frac{\tau}{j+1}\binom{K-1}{j} x^j (1-x)^{K-j-1}, \\
\bar{U}_D(x) &= U_0 - U_0(1-x)^{K-1}.
\end{aligned}
\qquad (7.30)
$$

Furthermore, the difference $\bar{U}_C(x) - \bar{U}_D(x)$ is calculated using (7.24) as

$$\bar{U}_C(x) - \bar{U}_D(x) = \frac{U_0\left[\tau(1-x)^K + Kx(1-x)^{K-1} - \tau\right]}{Kx}. \qquad (7.31)$$

Depending on the value of the parameter $\tau$, we prove the theorem in three different cases.

Case I ($\tau = 1$). From (7.30) we know that $\bar{U}_C(x) < \bar{U}_D(x)$, $\mathrm{d}x/\mathrm{d}t < 0$, and the replicator dynamics converge to $x^* = 0$.

Case II ($\tau = 0$). From (7.30) we have $\bar{U}_C(x) > \bar{U}_D(x)$, $\mathrm{d}x/\mathrm{d}t > 0$, and the replicator dynamics converge to $x^* = 1$.

Case III ($0 < \tau < 1$). Define $\Delta(x) = \bar{U}_C(x) - \bar{U}_D(x) = [U_0/(Kx)]\, f(x)$, with $f(x) = \tau(1-x)^K + Kx(1-x)^{K-1} - \tau$. When $x \to 0$, using l'Hôpital's rule, we know from (7.31) that $\lim_{x\to 0} \Delta(x) = (1-\tau)U_0 > 0$. When $x \to 1$, $\lim_{x\to 1} \Delta(x) = -\tau U_0/K < 0$. Since $\Delta(x)$ is positive near 0, negative at 1, and a continuous function of $x$ in $(0, 1)$, $\Delta(x)$ must have at least one intersection with the $x$-axis, i.e., $\exists \tilde{x}$ such that $\Delta(\tilde{x}) = 0$. If there is only one such $\tilde{x}$, then we can infer that $\Delta(x) > 0$ when $x < \tilde{x}$, and $\Delta(x) < 0$ when $x > \tilde{x}$. Since $\Delta(x)$ has the same sign as $f(x)$ when $0 < x < 1$, it suffices to prove that there exists only one solution in $(0, 1)$ to the equation $f(x) = 0$. On taking the derivative of $f(x)$ with respect to $x$, we get

$$\frac{\mathrm{d}f(x)}{\mathrm{d}x} = K(1-x)^{K-2}\left[(1-\tau) - (K-\tau)x\right]. \qquad (7.32)$$

When $x = (1-\tau)/(K-\tau)$, $\mathrm{d}f(x)/\mathrm{d}x = 0$. By observing (7.32) we find that $f(x)$ is increasing when $0 < x < (1-\tau)/(K-\tau)$ with $f(0) = 0$, while it is decreasing when $(1-\tau)/(K-\tau) < x < 1$ with $f(1) = -\tau < 0$. This means that the equation $f(x) = 0$ has only one root $x^*$ in $(0, 1)$, which is the equilibrium solved in (7.26). When $0 < x < x^*$, $f(x) > 0$; and when $x^* < x < 1$, $f(x) < 0$. Since $\Delta(x)$ has the same sign as $f(x)$, we can conclude that for $0 < x < x^*$, $\Delta(x) > 0$, i.e., $\mathrm{d}x/\mathrm{d}t > 0$;
Table 7.1. The payoff table of a two-user sensing game

            C                                        D
C           D1 A(1 − τ/2),  D2 A(1 − τ/2)            D1 B1(1 − τ),  D2 B1
D           D1 B2,  D2 B2(1 − τ)                     0,  0
for $x^* < x < 1$, $\bar{U}_C(x) - \bar{U}_D(x) < 0$, i.e., $\mathrm{d}x/\mathrm{d}t < 0$. Thus, the replicator dynamics converge to the equilibrium $x^*$. Therefore, we have proved the convergence of the replicator dynamics to the ESS $x^*$.

In practice, the time spent in sensing should be a positive value that is smaller than the duration of a time slot, i.e., we have $0 < \delta(N) < T$ and $0 < \tau = \delta(N)/T < 1$. Therefore, the optimal strategy for the secondary users is to contribute to sensing with probability $x^*$, where $x^*$ is the solution of (7.26).
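Theorem 7.3.1 is easy to verify numerically. The sketch below is illustrative (the values $U_0 = 1$, $K = 5$, $\tau = 0.5$, the Euler step, and the starting points are our own choices): it integrates (7.23), with the difference $\bar{U}_C - \bar{U}_D$ taken in the closed form (7.31), and checks that the limit satisfies (7.26) from both a low and a high starting point.

```python
def replicator_limit(K, tau, x0, U0=1.0, step=0.5, iters=20000):
    """Euler-integrate the replicator dynamics (7.23), using (7.31)."""
    x = x0
    for _ in range(iters):
        diff = U0*(tau*(1-x)**K + K*x*(1-x)**(K-1) - tau)/(K*x)
        x += step * x*(1-x)*diff
    return x

# Starting below and above the equilibrium, for K = 5 and tau = 0.5:
x_lo = replicator_limit(5, 0.5, x0=0.05)
x_hi = replicator_limit(5, 0.5, x0=0.95)

# Residual of (7.26) at the limit point: should vanish at the interior ESS.
resid = 0.5*(1 - x_lo)**5 + 5*x_lo*(1 - x_lo)**4 - 0.5
```

Both trajectories meet at the same interior point, matching the claim that the ESS attracts every interior starting condition.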
7.3.4
Analysis of sensing games with heterogeneous players

For games with heterogeneous players, it is generally very difficult to represent $\bar{U}_{s_j}(h, x_{-s_j})$ in a compact form and to obtain the ESS directly in closed form by solving (7.18). Therefore, we first analyze a two-user game to gain some insight, and then generalize the observations to a multiuser game.
7.3.4.1
Two-player games When there are two secondary users in the cognitive radio network, i.e., S = {s1 , s2 }, according to equations (7.10)–(7.12) we can write the payoff matrix as in Table 7.1,
where for simplicity we define $A = 1 - P_F^{S_c}$, with $S_c = \{s_1, s_2\}$, $B_i = 1 - P_{F,s_i}$, $D_i = P_{H_0} C_i$, and $\tau = \delta(N)/T$. Let us denote by $x_1$ and $x_2$ the probabilities that user 1 and user 2, respectively, take action C. Then we have the expected payoff $\bar{U}_{s_1}(C, x_2)$ when user 1 chooses to contribute to sensing as

$$\bar{U}_{s_1}(C, x_2) = D_1 A\left(1 - \frac{\tau}{2}\right) x_2 + D_1 B_1 (1-\tau)(1 - x_2), \qquad (7.33)$$

and the expected payoff $\bar{U}_{s_1}(x)$ as

$$\bar{U}_{s_1}(x) = D_1 A\left(1 - \frac{\tau}{2}\right) x_1 x_2 + D_1 B_1 (1-\tau)\, x_1 (1 - x_2) + D_1 B_2 (1 - x_1) x_2. \qquad (7.34)$$

Thus we get the replicator-dynamics equation of user 1 according to (7.18) as

$$\dot{x}_1 = x_1 (1 - x_1) D_1 \left[B_1(1-\tau) - E_1 x_2\right], \qquad (7.35)$$

where $E_1 = B_2 + B_1(1-\tau) - A(1 - \tau/2)$. Similarly, the replicator-dynamics equation of user 2 is written as
$$\dot{x}_2 = x_2 (1 - x_2) D_2 \left[B_2(1-\tau) - E_2 x_1\right], \qquad (7.36)$$
where $E_2 = B_1 + B_2(1-\tau) - A(1 - \tau/2)$.

At equilibrium we know that $\dot{x}_1 = 0$ and $\dot{x}_2 = 0$; then from (7.35) and (7.36) we get five equilibria: $(0, 0)$, $(0, 1)$, $(1, 0)$, $(1, 1)$, and the mixed-strategy equilibrium

$$\left(\frac{B_2(1-\tau)}{E_2},\ \frac{B_1(1-\tau)}{E_1}\right).$$

According to [77], if an equilibrium of the replicator-dynamics equations is a locally asymptotically stable point in a dynamic system, it is an ESS. So we can view (7.35) and (7.36) as a nonlinear dynamic system and judge whether the five equilibria are ESSs by analyzing the Jacobian matrix. By taking partial derivatives of (7.35) and (7.36), we obtain the Jacobian matrix as

$$
J_m = \begin{bmatrix}
(1 - 2x_1) D_1 E_{11} & -x_1(1 - x_1) D_1 E_1 \\
-x_2(1 - x_2) D_2 E_2 & (1 - 2x_2) D_2 E_{22}
\end{bmatrix}, \qquad (7.37)
$$

where $E_{11} = B_1(1-\tau) - E_1 x_2$ and $E_{22} = B_2(1-\tau) - E_2 x_1$. Asymptotic stability requires that $\det(J_m) > 0$ and $\mathrm{tr}(J_m) < 0$. By substituting the five equilibria into (7.37), we can obtain the ESSs for various values of $A$, $B_1$, and $B_2$, and conclude the following optimal collaboration strategy for a cooperative sensing game with two heterogeneous players.

(i) When $B_2 < A(1 - \tau/2) < B_1$, there is one ESS, $(1, 0)$, and the strategy profile that user 1 and user 2 adopt converges to (C, D);
(ii) When $B_1 < A(1 - \tau/2) < B_2$, there is one ESS, $(0, 1)$, and the strategy profile converges to (D, C);
(iii) When $A(1 - \tau/2) > B_1$ and $A(1 - \tau/2) > B_2$, there is one ESS, $(1, 1)$, and the strategy profile converges to (C, C);
(iv) When $A(1 - \tau/2) < B_1$ and $A(1 - \tau/2) < B_2$, there are two ESSs, $(1, 0)$ and $(0, 1)$, and the strategy profile converges to (C, D) or (D, C) depending on the initial strategy profile.

In order to explain the above-mentioned conclusions and generalize them to a multi-player game, we next analyze the properties of the mixed-strategy equilibrium, although it is not an ESS. Let us take the derivative of $x_1^* = B_2(1-\tau)/E_2$ with respect to the performance of a detector ($A$, $B_2$) and the sensing cost $\tau$. We get

$$\frac{\partial x_1^*}{\partial A} = \frac{B_2(1 - \tau/2)(1-\tau)}{E_2^2} > 0, \qquad (7.38)$$
$$\frac{\partial x_1^*}{\partial B_2} = \frac{\left[A(1 - \tau/2) - B_1\right](1-\tau)}{E_2^2} < 0, \qquad (7.39)$$

and

$$\frac{\partial x_1^*}{\partial \tau} = \frac{(A/2 - B_1) B_2}{E_2^2} < 0. \qquad (7.40)$$
Inequality (7.39) holds because A(1 − τ/2) − B1 < 0; otherwise x1∗ = B2 (1 − τ )/ E 2 > 1, which is impractical. Inequality (7.40) holds because, in practical applications, we have PF,si < 0.5, Bi = 1 − PF,si > 0.5, and A < 1; therefore, A/2 < Bi , and ∂ x1∗ /∂τ < 0. From (7.38) we know that, when cooperative sensing brings a greater gain, i.e., as A increases, x1∗ (and x2∗ ) increases. This is why, when A(1 − τ/2) > Bi , i = 1, 2, the strategy profile converges to (C, C). From (7.39) we find that the incentive of a secondary user si contributing to cooperative sensing decreases as the other user s j ’s detection performance increases. This is because, when user si learns through repeated interactions that s j has a better B j , si tends not to sense the spectrum and enjoys a free ride. Then s j has to sense the spectrum; otherwise, he is at risk of having no one sense and receiving a very low expected payoff. That is why, when A(1 − τ/2) < B1 (or A(1 − τ/2) < B2 ), the strategy profile converges to (C, D) (or (D, C)). When the sensing cost (τ ) becomes higher, the secondary users will be more reluctant to contribute to cooperative sensing and x1∗ decreases, as shown in (7.40).
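The regimes above can be reproduced by iterating the two coupled replicator equations (7.35) and (7.36). The sketch below is illustrative: the numerical values of $A$, $B_1$, $B_2$, $\tau$, the Euler step, and $D_1 = D_2 = 1$ are our own choices, not parameters from the chapter.

```python
def two_user_replicator(A, B1, B2, tau=0.3, D1=1.0, D2=1.0,
                        x1=0.5, x2=0.5, step=0.1, iters=20000):
    """Euler-integrate the coupled replicator dynamics (7.35)-(7.36)."""
    E1 = B2 + B1*(1-tau) - A*(1-tau/2)
    E2 = B1 + B2*(1-tau) - A*(1-tau/2)
    for _ in range(iters):
        dx1 = x1*(1-x1)*D1*(B1*(1-tau) - E1*x2)
        dx2 = x2*(1-x2)*D2*(B2*(1-tau) - E2*x1)
        x1 += step*dx1
        x2 += step*dx2
    return x1, x2

# Case (i): B2 < A(1 - tau/2) < B1, here A(1 - tau/2) = 0.8075,
# so user 1 ends up sensing while user 2 free-rides: (C, D).
x1, x2 = two_user_replicator(A=0.95, B1=0.9, B2=0.7)

# Case (iii): A(1 - tau/2) exceeds both B1 and B2, so both users sense: (C, C).
y1, y2 = two_user_replicator(A=0.95, B1=0.6, B2=0.6)
```

In the first run the trajectory approaches $(1, 0)$, and in the second it approaches $(1, 1)$, matching the Jacobian-based classification.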
7.3.4.2
Multi-player games From the above-mentioned observation, we can infer that, if some user si has a better detection performance Bi , the other users tend to take advantage of si . If there are more than two users in the sensing game, the strategy of the users with worse Bi (and γi ) will converge to D. Using replicator dynamics, users with better detection performance tend to contribute to spectrum sensing (i.e., choose C), because they are aware of the low throughput if no one senses the spectrum. Similarly, if the secondary users have different data rates, the user with a lower rate Cs j tends to take advantage of those with higher rates (i.e., they choose D), since the latter suffer relatively heavier losses if no one contributes to sensing and they have to become more active in sensing. It is possible to select a proper subset of secondary users in cooperative sensing so as to optimize detection performance [341]. However, it is necessary to assume that the information about the received SNRs (γi ) is available at the secondary base station. In the evolutionary game framework discussed in this chapter, the secondary users can learn the ESS by using replicator dynamics with just their own payoff history. Therefore, it is suitable for distributed implementation when there exists no secondary base station and the secondary users behave selfishly. In the next section we discuss a distributed learning algorithm and further justify the convergence with computer simulations.
7.3.5
A learning algorithm for the ESS In the cooperative sensing games with multiple players outlined above, we have shown that the ESS is solvable. However, solving the equilibrium requires knowledge of the utility function as well as exchange of private information (e.g., γs j and Cs j ) and strategies adopted by the other users. This results in a lot of communication overhead. Therefore, a distributed learning algorithm that gradually converges to the ESS without too much information exchange is preferred.
From (7.18), we can derive the strategy adjustment for the secondary user as follows. Denote the pure strategy taken by user $s_j$ at time $t$ by $A_{s_j}(t)$. Define an indicator function $1^h_{s_j}(t)$ as

$$1^h_{s_j}(t) = \begin{cases} 1, & \text{if } A_{s_j}(t) = h, \\ 0, & \text{if } A_{s_j}(t) \neq h. \end{cases} \qquad (7.41)$$

At some interval $mT$, we can approximate $\bar{U}_{s_j}(h, x_{-s_j})$ by

$$\bar{U}_{s_j}(h, x_{-s_j}) \doteq \frac{\sum_{0 \le t \le mT} \tilde{U}_{s_j}(A_{s_j}(t), A_{-s_j}(t))\, 1^h_{s_j}(t)}{\sum_{0 \le t \le mT} 1^h_{s_j}(t)}, \qquad (7.42)$$

where $\tilde{U}_{s_j}(A_{s_j}(t), A_{-s_j}(t))$ is the payoff value for $s_j$ determined by (7.10)–(7.12). The numerator on the right-hand side of (7.42) denotes the cumulative payoff of user $s_j$ when $s_j$ chooses pure strategy $h$ from time 0 to $mT$, while the denominator denotes the cumulative total of the number of times strategy $h$ has been adopted by user $s_j$ during this time period. Hence, (7.42) can be used to approximate $\bar{U}_{s_j}(h, x_{-s_j})$, and the approximation becomes more precise as $m \to \infty$. Similarly, $\bar{U}_{s_j}(x)$ can be approximated by the average payoff of user $s_j$ from time 0 to $mT$,

$$\bar{U}_{s_j}(x) \doteq \frac{1}{m} \sum_{0 \le t \le mT} \tilde{U}_{s_j}(A_{s_j}(t), A_{-s_j}(t)). \qquad (7.43)$$

Then, the derivative $\dot{x}_{h,s_j}(mT)$ can be approximated by substituting the estimates (7.42) and (7.43) into (7.18). Therefore, the probability of user $s_j$ taking action $h$ can be adjusted to

$$x_{h,s_j}((m+1)T) = x_{h,s_j}(mT) + \eta_{s_j}\left[\bar{U}_{s_j}(h, x_{-s_j}) - \bar{U}_{s_j}(x)\right] x_{h,s_j}(mT), \qquad (7.44)$$

with $\eta_{s_j}$ being the step size of adjustment chosen by $s_j$. Equation (7.44) can be viewed as a discrete-time replicator-dynamic system. It has been shown in [187] that, if a steady state is hyperbolic and asymptotically stable under continuous-time dynamics, then it is asymptotically stable for sufficiently small time periods in the corresponding discrete-time dynamics. Since the ESS is the asymptotically stable point in the continuous-time replicator dynamics and is also hyperbolic [124], if a player knows precise information about $\dot{x}_{h,s_j}$, adapting strategies according to (7.44) can give convergence to an ESS. With the learning algorithm, users will try different strategies in every time slot, accumulate information about the average payoff values on the basis of (7.42) and (7.43), calculate the probability change of each strategy using (7.18), and adapt their actions toward an equilibrium. The procedures of the learning algorithm are summarized in Table 7.2.

By summarizing the above learning algorithm and analysis in this section, we can arrive at the following cooperation strategy in decentralized cooperative spectrum sensing.
Table 7.2. A learning algorithm for the ESS

1. Initialization:
   • for $\forall s_j$, choose a proper step size $\eta_{s_j}$
   • for $\forall s_j$, $h \in A$, let $x(h, s_j) \leftarrow 1/|A|$
2. During a period of $m$ slots, in each slot, each user $s_j$
   • chooses an action $h$ with probability $x(h, s_j)$
   • receives a payoff determined by (7.10)–(7.12)
   • records the indicator function value by (7.41)
3. Each user $s_j$ approximates $\bar{U}_{s_j}(h, x_{-s_j})$ and $\bar{U}_{s_j}(x)$ by (7.42) and (7.43), respectively
4. Each user $s_j$ updates the probability of each action by (7.44)
5. Go to Step 2 until convergence to a stable equilibrium occurs
Denote the probability of contributing to sensing for user $s_i \in S$ by $x_{c,s_i}$; then the following strategy will be used by $s_i$.
• If starting with a high $x_{c,s_i}$, $s_i$ will rely more on the others and reduce $x_{c,s_i}$ until further reduction of $x_{c,s_i}$ decreases his throughput or $x_{c,s_i}$ approaches 0.
• If starting with a low $x_{c,s_i}$, $s_i$ will gradually increase $x_{c,s_i}$ until any further increase of $x_{c,s_i}$ decreases his throughput or $x_{c,s_i}$ approaches 1.
• $s_i$ shall reduce $x_{c,s_i}$ by taking advantage of those users with better detection performance or higher data rates.
• $s_i$ shall increase $x_{c,s_i}$ if cooperation with more users can bring a better detection performance than that in the case of single-user sensing without cooperation.
In the next section, we will demonstrate the convergence of the distributed learning algorithm to the ESS through simulations.
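The procedure in Table 7.2 can be sketched for the homogeneous game with the payoffs (7.19)–(7.20). This is a simplified illustration, not the chapter's exact implementation: payoff statistics are reset in each period of $m$ slots instead of being accumulated from time 0, the probabilities are clipped away from the boundary as a practical safeguard, and the values of $K$, $\tau$, $U_0$, $\eta$, and $m$ are our own choices.

```python
import random

def learn_ess(K=4, tau=0.5, U0=1.0, eta=0.5, m=200, periods=300, seed=1):
    """Distributed learning of the ESS (simplified sketch of Table 7.2)."""
    rng = random.Random(seed)
    x = [0.5]*K                                   # Step 1: uniform initial mixed strategies
    for _ in range(periods):                      # Steps 2-5
        pay = [[0.0, 0.0] for _ in range(K)]      # cumulative payoff per action [D, C]
        cnt = [[0, 0] for _ in range(K)]          # times each action was taken
        for _ in range(m):
            act = [1 if rng.random() < x[i] else 0 for i in range(K)]  # 1 = contribute (C)
            J = sum(act)
            for i in range(K):
                if act[i]:
                    u = U0*(1 - tau/J)            # contributor payoff, (7.19)
                else:
                    u = U0 if J >= 1 else 0.0     # free-rider payoff, (7.20)
                pay[i][act[i]] += u
                cnt[i][act[i]] += 1
        for i in range(K):
            avg = (pay[i][0] + pay[i][1])/m                 # estimate of the average payoff, cf. (7.43)
            uc = pay[i][1]/cnt[i][1] if cnt[i][1] else avg  # estimate for pure strategy C, cf. (7.42)
            x[i] += eta*(uc - avg)*x[i]                     # update (7.44)
            x[i] = min(max(x[i], 0.01), 0.99)               # keep the strategy mixed

    return x

xs = learn_ess()
```

With $K = 4$ and $\tau = 0.5$, the learned probabilities hover around the interior root of (7.26), roughly 0.33, using only each user's own payoff history.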
7.4
Simulation results and analysis The parameters used in the simulation are as follows. We assume that the primary signal is a baseband QPSK modulated signal, the sampling frequency is $f_s = 1$ MHz, and the frame duration is $T = 20$ ms. The probability that the primary user is inactive is set as $P_{H_0} = 0.9$, and the required target detection probability $\bar{P}_D$ is 0.95. The noise is assumed to be a zero-mean CSCG process. The distance between the cognitive radio network and the primary base station is very large, so the received $\gamma_{s_j}$ are in the low-SNR regime, with an average value of $-12$ dB.
7.4.1
A sensing game with homogeneous players We first illustrate the ESS of the secondary users in a homogeneous K -user sensing game as in Section 7.3.3, where the data rate is C = 1 Mbps. In Figure 7.3(a),
Figure 7.3 The ESS and average throughput vs. τ, for K = 2, ..., 8 users. (a) Probability of being a contributor; (b) average throughput per user, with the single-user-sensing curve for comparison.
we show the equilibrium probability of being a contributor x ∗ . The x-axis represents τ = δ(N )/T , the ratio of the sensing time over the frame duration. From Figure 7.3(a), we can see that x ∗ decreases as τ increases. For the same τ , x ∗ decreases as the number of secondary users increases. This indicates that the incentive of contributing to
cooperative sensing drops as the cost of sensing increases and more users exist in the network. This is because the players tend to wait for someone else to sense the spectrum and can then enjoy a free ride, when they are faced with a high sensing cost and there are more counterpart players. In Figure 7.3(b), we show the average throughput per user when all users adopt the equilibrium strategy. We see that there is a tradeoff between the cost of sensing and the throughput for an arbitrary number of users, and the optimal value of τ is around 0.25. For comparison, we also plot the throughput for a single user sensing (the dotted line “single”), for which the optimal value of τ is around 0.15. Although the cost of sensing increases, we see that, as more users share the sensing cost, the average throughput per user still increases, and the average throughput values for the cooperative sensing game are higher than that of the single-user sensing case.
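The tradeoff of Figure 7.3(b) can be sketched numerically. The dependence of $\hat{P}_F$ on the number of samples $N = \tau T f_s$ is not given in this excerpt, so the sketch below assumes the standard energy-detection expression for CSCG noise from the sensing-throughput-tradeoff literature, $\hat{P}_F = Q\big(\sqrt{2\gamma+1}\,Q^{-1}(\bar{P}_D) + \sqrt{N}\gamma\big)$; the per-user throughput at the interior ESS is then $\bar{U}_D(x^*) = U_0[1 - (1-x^*)^{K-1}]$ with $U_0 = P_{H_0}(1-\hat{P}_F)C$. Parameter values follow Section 7.4 ($T = 20$ ms, $f_s = 1$ MHz, $\bar{P}_D = 0.95$, $P_{H_0} = 0.9$, $\gamma = -12$ dB, $C = 1$ Mbps); $K = 4$ is our own choice.

```python
import math

def qfunc(x):
    # Gaussian tail probability Q(x)
    return 0.5*math.erfc(x/math.sqrt(2))

def qinv(p, lo=-10.0, hi=10.0):
    # bisection inverse of the (decreasing) Q function
    for _ in range(200):
        mid = 0.5*(lo+hi)
        if qfunc(mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo+hi)

def ess(K, tau):
    # unique root of (7.26) in (0, 1), bracketed between the peak of f and 1
    f = lambda x: tau*(1-x)**K + K*x*(1-x)**(K-1) - tau
    lo, hi = (1-tau)/(K-tau), 1.0
    for _ in range(200):
        mid = 0.5*(lo+hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo+hi)

def throughput_per_user(tau, K=4, T=0.02, fs=1e6, gamma=10**(-1.2),
                        PD=0.95, PH0=0.9, C=1.0):
    N = tau*T*fs
    PF = qfunc(math.sqrt(2*gamma+1)*qinv(PD) + math.sqrt(N)*gamma)  # assumed model
    U0 = PH0*(1-PF)*C
    x = ess(K, tau)
    return U0*(1 - (1-x)**(K-1))   # average payoff per user at the interior ESS

taus = [i/100 for i in range(2, 99)]
thr = [throughput_per_user(t) for t in taus]
best_tau = taus[thr.index(max(thr))]
```

Under these assumptions the curve rises and then falls, peaking at a moderate sensing overhead (`best_tau` roughly 0.2), which reproduces the qualitative tradeoff: a very short sensing time inflates the false-alarm rate, while a very long one leaves little time for transmission.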
7.4.2
Convergence of the dynamics In Figure 7.4, we show the replicator dynamics of the game with homogeneous users, where τ = 0.5. We observe in Figure 7.4(a) that, starting from a high initial probability of cooperation, all users gradually reduce their degree of cooperation, because being a free-rider more often saves more time for one’s own data transmission and brings a higher throughput. However, too low a degree of cooperation greatly increases the chance of having no user contribute to sensing, so the users become more cooperative starting from a low initial probability of cooperation as shown in Figure 7.4(b). It takes fewer than 20 iterations to attain the equilibrium by choosing a proper step size ηsi = 3. In Figure 7.5, we show the replicator dynamics for the game with three heterogeneous players, using the learning algorithm discussed in Section 7.3.5. We choose τ = 0.5, γ1 = −14 dB, γ2 = −10 dB, and γ3 = −10 dB. As expected, starting from a low initial probability of cooperation, the users tend to increase the degree of cooperation. During the iterations, the users with a worse γi (user 1) learn that listening to the detection results from the users with a better γi can bring a higher throughput. Hence, user 1’s strategy converges to D in the long run, while the users with better detection performance (user 2 and user 3) have to sense the spectrum to guarantee their own throughput.
7.4.3
Comparison of ESS and full cooperation In Figure 7.6, we compare the total throughput of a three-user sensing game using their ESS and the total throughput when the users always participate in cooperative sensing and share the sensing cost, i.e., xsi = 1. For the first four comparisons we assume a homogeneous setting, where γi of each user takes a value from {−13, −14, −15, −16} dB, respectively. For the last four, a heterogeneous setting is assumed, where γ1 equals {−12, −13, −14, −15} dB, respectively, and γ2 and γ3 are kept the same as in the homogeneous setting. We find in Figure 7.6 that using the ESS provides better performance than that in the case of all secondary users cooperating in sensing at every time
Figure 7.4 Behavior dynamics of a homogeneous K-user sensing game, for K = 2, ..., 8. (a) initial x = 0.8; (b) initial x = 0.2.
slot. This is because, under the ESS, users can take turns to jointly complete the common task, and on average contribute less time to sensing and enjoy a higher throughput. This indicates that, to guarantee a certain detection performance, it is not necessary to force all users to contribute in every time slot, and the ESS can achieve a satisfying system performance even when there exist selfish users.
Figure 7.5 Behavior dynamics of a heterogeneous three-user sensing game.
Figure 7.6 Comparison of ESS and full cooperation.
7.5
Summary and bibliographical notes

Cooperative spectrum sensing with multiple secondary users has been shown to achieve a better detection performance than does single-user sensing without cooperation. However, how to collaborate in cooperative spectrum sensing over decentralized cognitive radio networks is still an open problem, since selfish users are not willing to contribute their energy/time to sensing. In this chapter, we present an evolutionary game-theoretic framework within which to develop the best cooperation strategy for cooperative sensing with selfish users. Using replicator dynamics, users can try different strategies and learn a better strategy through strategic interactions. We study the behavior dynamics of secondary users, derive and analyze the properties of the ESSs, and further study a distributed learning algorithm that helps the secondary users to approach the ESS with just their own payoff history. From simulation results we find that the game provides a better performance than having all secondary users sense at every time slot, in terms of total throughput. Moreover, the average throughput per user in the sensing game is higher than that in the single-user-sensing case without user cooperation. For related references, readers can refer to [452]. In [148], the authors proposed collaborative spectrum sensing to combat shadowing/fading effects. The work in [297] proposed light-weight cooperation in sensing based on hard decisions to reduce the sensitivity requirements. The authors of [139] showed that cooperative sensing can reduce the detection time of the primary user and increase the overall agility. How to choose proper secondary users for cooperation was investigated in [341]. The authors of [265] studied the design of the sensing-slot duration to maximize secondary users' throughput under certain constraints. Two energy-based cooperative detection methods using weighted combining were analyzed in [434].
Spatial diversity in multiuser networks was exploited in [140] to improve the spectrum-sensing capabilities of centralized cognitive radio networks.
8
Anti-jamming stochastic games
Various spectrum management schemes have been proposed in recent years to improve the spectrum utilization in cognitive radio networks. However, few of them have considered the existence of cognitive attackers who can adapt their attacking strategy to the time-varying spectrum environment and the secondary users’ strategy. In this chapter, we investigate the security mechanism when secondary users are facing a jamming attack, and consider a stochastic game framework for anti-jamming defense. At each stage of the game, secondary users observe the spectrum availability, the channel quality, and the attackers’ strategy from the status of jammed channels. According to this observation, they will decide how many channels they should reserve for transmitting control and data messages and how to switch between the different channels. Using minimax-Q learning, secondary users can gradually learn the optimal policy, which maximizes the expected sum of discounted payoffs defined as the spectrum-efficient throughput. The optimal stationary policy in the anti-jamming game is shown to achieve much better performance than the policy obtained from myopic learning, which maximizes only each stage’s payoff, and a random defense strategy, since it successfully accommodates the environment dynamics and the strategic behavior of the cognitive attackers.
8.1
Introduction In order to utilize the spectrum resources efficiently, various spectrum management approaches have been considered in the literature and in previous chapters. Most of them are based on the assumption that the users aim only at maximizing the spectrum utilization, either in a cooperative way where all users are coordinated by the same network controller and serve a common goal, or in a selfish manner where the autonomous secondary users want to maximize their own benefit. However, this assumption does not hold when the secondary users are in a hostile environment, where there exist malicious attackers whose objective is to cause damage to the legitimate users and prevent the spectrum from being utilized efficiently. Therefore, how to secure spectrum sharing is of critical importance to the wide deployment of cognitive radio technology. In this chapter, we focus on a class of powerful attacks, the jamming attack, in a cognitive radio network and consider a stochastic game framework for anti-jamming defense
that can accommodate the dynamic spectrum opportunity, channel quality, and changes in strategy by both the secondary users and the attackers. Most existing anti-jamming defense schemes are not directly applicable to cognitive radio networks, since the spectrum availability keeps changing with the primary users returning to and vacating the licensed bands. For instance, the work in [234] used error-correcting codes (n, m) to ensure reliable data communication with a high throughput. However, this approach requires that at each time there are at least n channels available, which might not be the case if many licensed bands are occupied by primary users. Moreover, in most works it is assumed that the attackers adopt a fixed strategy that will not change with time. However, if the attackers are also equipped with cognitive radio technology, it is highly likely that they will adapt their attacking strategy according to the environment dynamics as well as the secondary users' strategy. Therefore, in this chapter, we model the strategic and dynamic competition between the secondary users and the cognitive attackers as a zero-sum stochastic game. In order to ensure reliable transmission, we consider it necessary to reserve multiple channels for transmitting control messages, and the control channels should be switched with the data channels from time to time, depending on the attackers' strategy. We define the spectrum availability, the channel quality, and the observation of the attackers' action as the state of the game. The secondary users' action is defined as how many control or data channels they should reserve and how to switch between the control and data channels, and their objective is to maximize the spectrum-efficient throughput, defined as the ratio between the expected achievable throughput and the total number of active channels used for transmitting control and data messages.
Using the minimax-Q learning algorithm, the secondary users can implement the optimal policy, with proven convergence. Simulation results show that, when the channel quality is not good, the secondary users should reserve a lot of data channels and a few control channels to improve the throughput. As the channel quality becomes better, they should reserve more control channels to ensure reliability of communication. When the channel quality further increases, the secondary users should become more conservative by reserving fewer data channels to improve the spectrum-efficient throughput. In the states when some control or data channels are observed to be jammed, the secondary users should adopt a mixed strategy to avoid being severely jammed next time. When more than one licensed band is available, the attackers' decision making becomes more difficult, and the secondary users can take more aggressive action by having more data channels. It is also shown that the secondary users can achieve a higher payoff using the stationary policy learned from the minimax-Q learning than by using myopic learning and a random strategy. The remainder of the chapter is organized as follows. In Section 8.2, we introduce the system model about the secondary user network and the anti-jamming defense. In Section 8.3, we formulate the anti-jamming defense as a stochastic game by defining the states, actions, objective functions, and state transition rules. In Section 8.4, we obtain the optimal policy of the secondary user network using the minimax-Q learning algorithm. In Section 8.5 we present the simulation results, followed by conclusions in Section 8.6.
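The core of the minimax-Q idea can be illustrated with a toy example: a single-state zero-sum game in which the defender mixes over two actions against an attacker choosing a column. Minimax-Q replaces the max in ordinary Q-learning with the value of the matrix game formed by the current Q-values. Everything below (the stage-game matrices, the discount factor, the two-action restriction, and the use of exact value iteration instead of sampled updates) is our own illustrative choice; the chapter's actual state and action spaces are defined in Section 8.3.

```python
def matrix_game_value(Q):
    """Value, and maximizer's mixed strategy p over its two rows, of the
    zero-sum matrix game Q: v(p) = min_o [p*Q[0][o] + (1-p)*Q[1][o]].
    v is concave piecewise-linear in p, so ternary search finds its maximum."""
    def v(p):
        return min(p*a + (1-p)*b for a, b in zip(Q[0], Q[1]))
    lo, hi = 0.0, 1.0
    for _ in range(200):
        m1, m2 = lo + (hi-lo)/3, hi - (hi-lo)/3
        if v(m1) < v(m2):
            lo = m1
        else:
            hi = m2
    p = 0.5*(lo+hi)
    return p, v(p)

def minimax_value(R, gamma=0.8, iters=500):
    """Minimax value iteration for a single-state zero-sum stochastic game:
    Q(a, o) <- R(a, o) + gamma*V;  V <- value of the matrix game Q."""
    V = 0.0
    for _ in range(iters):
        Q = [[r + gamma*V for r in row] for row in R]
        _, V = matrix_game_value(Q)
    return V

# Matching pennies: the stage-game value is 0, so the discounted value is 0 too,
# and the equilibrium strategy mixes 50/50.
pennies = [[1.0, -1.0], [-1.0, 1.0]]
```

A learning-based minimax-Q agent replaces the exact reward matrix with sampled rewards and a decaying learning rate, but converges to the same fixed point; the 50/50 mix in the pennies example is the analogue of the mixed channel-switching strategies described above.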
8.2
The system model In this section, we present the model assumptions about the secondary user network and the anti-jamming defense against the malicious attackers.
8.2.1 The secondary-user network

In this chapter, we consider a dynamic spectrum access network where multiple secondary users equipped with cognitive radio are allowed to access temporarily unused licensed spectrum channels that belong to multiple primary users. There is a secondary base station in the network, which coordinates the spectrum usage of all secondary users. In order to avoid conflict or harmful interference with the primary users, the secondary users need to listen to the spectrum before every attempt at transmission. We assume that the secondary network is a time-slotted system, and that, at the beginning of each time slot, secondary users need to reserve a certain time to detect the presence of a primary user. Various detection techniques are available, such as energy detection, or feature detection if the secondary users know some prior information about the primary users' signal. In cooperative spectrum sharing such as a spectrum auction, secondary users can avoid harmful interference by listening to the primary users' announcement about whether they would share the licensed channels with the secondary users. To simplify analysis, we assume perfect sensing or cooperative spectrum sharing in this chapter. Therefore, the secondary user network can take every opportunity to utilize the currently unused licensed spectrum, and will vacate the spectrum whenever a primary user reclaims the spectrum rights.

Owing to the primary users' activity and channel variations, the spectrum availability and quality keep changing. In order to coordinate the spectrum usage and achieve efficient spectrum utilization, necessary control messages need to be exchanged between the secondary base station and the secondary users through dedicated control channels. Control channels serve as a medium that can support high-level network functionality, such as access control, channel assignment, and spectrum handoff.
If the control messages are not correctly received by the secondary users or base station, certain network functions will be impaired.
8.2.2 Anti-jamming defense in cognitive radio networks

Radio jamming is a denial-of-service (DoS) attack intended to disrupt communications at the physical and link layers of a wireless network. By keeping the wireless spectrum busy, e.g., constantly injecting packets into a shared spectrum [302], a jamming attacker can prevent legitimate users from accessing an open spectrum band. Another type of jamming is to transmit packets in the vicinity of a victim [460] [473], so that the SNR deteriorates greatly and no data can be received correctly. In a cognitive radio network, malicious attackers can launch jamming attacks to prevent efficient utilization of the spectrum opportunities. In this chapter, we assume that
the attackers will not jam the licensed bands when the primary users are active, either because there may be a very heavy penalty on the attackers if their identity becomes known by the primary users, or because the attackers cannot get close to the primary users. Moreover, due to the limitation on the number of antennas and/or the total power, we assume that the attackers can jam at most \(\bar{N}\) channels in each time slot. Then, the objective of the attackers is to cause the greatest possible damage to the secondary-user network with their limited jamming capability. Given the limited jamming capability, the attackers can adopt an attacking strategy that targets as many data channels as possible in order to reduce the gain the secondary user network obtains by transmitting data. On the other hand, if the number of control channels is smaller than \(\bar{N}\), while the number of data channels is greater than \(\bar{N}\), the attackers can try to target the control channels in order to make the attack even more powerful. If the secondary user network adopts a fixed channel-assignment scheme for transmitting data and control messages, a cognitive attacker can easily capture such a pattern, distinguish between the data channels and control channels, and target only the data or control channels and hence cause the greatest possible damage. Therefore, secondary users need to perform channel hopping/switching to alleviate the potential damage due to a fixed channel-assignment schedule. As shown in Figure 8.1, the channels that are used for transmitting data/control messages in this time slot might no longer be data/control channels in the next time slot. By introducing
Figure 8.1 Anti-jamming defense. (Figure: frequency-time grid of two licensed bands, l = 1 and l = 2, showing data, control, idle, and jammed channels; P marks primary-user occupancy.)
randomness into their channel assignment, secondary users render their access pattern more unpredictable. Then, the attackers also have to strategically change the channels they attack with time. Therefore, channel hopping is more resistant to a jamming attack than is a fixed channel assignment. When designing the channel-hopping mechanism in a cognitive radio network, the secondary users need to take the following facts into consideration.

• There is a tradeoff in choosing a proper number of control channels. The functionality of the secondary user network relies heavily upon the correct reception of control messages. Thus, it is more reliable to transmit duplicate control messages in multiple channels (i.e., control channels). However, if the secondary user network reserves too many control channels, the number of channels in which data messages are transmitted (i.e., data channels) will be small, and the gain achievable through utilizing the licensed spectrum will be unnecessarily low. Therefore, a good selection should balance the risk of no control message being successfully received against the gain of transmitting data messages. To make the defense mechanism more general, we assume that the secondary user network can choose to transmit nothing in some channels even when the licensed band is available. This is because, when the secondary base station believes that it has reserved enough data or control channels under a very severe jamming attack, allocating more channels for transmitting messages can only result in a waste of energy, and it will be better to leave some channels idle if energy consumption is a concern of the secondary user network.

• The channel-hopping mechanism must be adaptive with respect to the attackers' strategy.
This is because the attackers may also be equipped with cognitive radio technology and may adjust their strategies on the basis of observations about the spectrum environment dynamics and the secondary users' strategy. Thus, the secondary users cannot assume in advance that the attackers will adopt a fixed attack strategy. Instead, they need to build a stochastic model that captures the dynamic strategy adjustment of the attackers, as well as the spectrum environment variations.

According to the above assumptions about the system model and the jamming attack, the secondary users aim at maximizing the spectrum utilization with carefully designed channel-switching schedules, while the malicious attackers want to minimize the secondary users' gain by strategic jamming. Therefore, they have opposite objectives, and their dynamic interactions can be well modeled as a non-cooperative (zero-sum) game. Since we assume that the spectrum access of all the secondary users is coordinated by the secondary base station, and the malicious users work together to cause the greatest possible damage to the secondary users, we can view all the secondary users in the network as one player, and all the attackers as another player. Moreover, considering that the spectrum opportunity, the channel quality, and the strategies of both the secondary users and the malicious attackers change with time, the non-cooperative game should be considered in a stochastic setting, i.e., the dynamic anti-jamming defense in the secondary user network should be formulated as a stochastic game.
8.3 Formulation of the stochastic anti-jamming game

Before we go into the details of the formulation of the stochastic anti-jamming game, let us first introduce the stochastic game to get a general idea. A stochastic game [383] is an extension of a Markov decision process (MDP) [129] that considers the interactive competition among different agents. In a stochastic game G, there is a set of states, denoted by S, and a collection of action sets, \(A_1, \ldots, A_k\), one for each player in the game. The game is played in a sequence of stages. At the beginning of each stage the game is in some state. After the players have selected and executed their actions, the game then moves to a new random state with transition probability determined by the current state and one action from each player: \(T : S \times A_1 \times \cdots \times A_k \to PD(S)\). Meanwhile, at each stage each player receives a payoff \(R_i : S \times A_1 \times \cdots \times A_k \to \mathbb{R}\), which also depends on the current state and the chosen actions. The game is played continually for a number of stages, and each player attempts to maximize his/her expected sum of discounted payoffs, \(E\big[\sum_{j=0}^{\infty} \gamma^j r_{i,t+j}\big]\), where \(r_{i,t+j}\) is the reward received j steps into the future by player i and \(\gamma\) is the discount factor.

After introducing the concepts of a stochastic game, we next formulate the anti-jamming game by defining each component of the game.
8.3.1 States and actions

We consider a spectrum pooling system, such that the secondary user network can use the temporarily unused spectrum bands that belong to L primary users. Since the bandwidths of different licensed bands may be different, we assume that each licensed band is divided into a set of adjacent channels with the same bandwidth. Then, there are \(N_l\) channels in primary user l's band, and we assume that all of them will be occupied/released when primary user l reclaims/vacates the band. We denote primary user l's state in the lth band at time t by \(P_l^t\), where \(P_l^t = 1\) means that primary user l is active at time t, and \(P_l^t = 0\) means that primary user l will not use the licensed band at time t and the secondary users can access the channels in the lth band. According to some empirical studies on the primary users' access pattern [152], the states \(P_l^t\) can be modeled by a two-state Markov chain, where the transition probabilities are denoted by \(p_l^{1\to1} = p(P_l^{t+1} = 1 \mid P_l^t = 1)\) and \(p_l^{0\to1} = p(P_l^{t+1} = 1 \mid P_l^t = 0)\).

The secondary user network will achieve a certain gain by utilizing the spectrum opportunity in the licensed bands. The gain can be defined as a function of the data throughput, packet loss, delay, or some other proper quality-of-service (QoS) measure, and is often an increasing function of the channel quality. Owing to the channel variations in each licensed band, the channel quality may change from one time slot to another, so the gain associated with utilizing a licensed band also changes over time. We assume that the gain for each channel within the same licensed band l is identical at any time t, and that it can take any value from a set of discrete values, i.e., \(g_l^t \in \{q_1, q_2, \ldots, q_n\}\). Since the channel quality (in terms of SNR) is often modeled as
a finite-state Markov chain (FSMC) [498], the dynamics of the lth licensed band's gain \(g_l^t\) can also be expressed by an FSMC. Note that the gain achievable by utilizing the licensed bands also depends on the primary users' status, i.e., when the primary user is active in the lth band (\(P_l^t = 1\)), the secondary users are not allowed access to band l, and thus \(g_l^t = 0\). So the state of the FSMC should capture the joint dynamics of both the primary users' access and the channel quality, which can be denoted by \((P_l^t, g_l^t)\). The transition probability of the FSMC with states \((P_l^t, g_l^t)\) can be derived as follows. When the lth licensed band is not available for two consecutive time slots, the transition depends only on the primary users' access pattern, so we have

\[ p\big(P_l^{t+1} = 1, g_l^{t+1} = 0 \mid P_l^t = 1, g_l^t = 0\big) = p_l^{1\to1}. \quad (8.1) \]

When the lth band becomes available with gain \(q_n\) at time t + 1, we have

\[ p\big(P_l^{t+1} = 0, g_l^{t+1} = q_n \mid P_l^t = 1, g_l^t = 0\big) = \big(1 - p_l^{1\to1}\big) p_{g_l}^{0\to n}, \quad (8.2) \]

where \(p_{g_l}^{0\to n}\) denotes the probability that the gain of band l is \(q_n\) at time t + 1, given that \(P_l^t = 1\) and \(P_l^{t+1} = 0\). When the lth band is available for two consecutive time slots, the state transition probability is

\[ p\big(P_l^{t+1} = 0, g_l^{t+1} = q_n \mid P_l^t = 0, g_l^t = q_m\big) = \big(1 - p_l^{0\to1}\big) p_{g_l}^{m\to n}, \quad (8.3) \]

where \(p_{g_l}^{m\to n}\) is the probability that the gain transits from \(q_m\) at time t to \(q_n\) at time t + 1. Finally, when the lth band becomes unavailable from time t to time t + 1, the transition probability is

\[ p\big(P_l^{t+1} = 1, g_l^{t+1} = 0 \mid P_l^t = 0, g_l^t = q_m\big) = p_l^{0\to1}, \quad (8.4) \]

since the transition does not depend on the gain \(g_l^t\) at time t.

In the above, we have discussed the dynamics of primary users' returning to/vacating the licensed bands and the gains associated with utilizing the licensed spectrum. Clearly, these dynamics will affect the secondary users' decisions about how to allocate the channels for transmitting control and data messages. For instance, in order to obtain higher utilization of the spectrum opportunities, the secondary users tend to allocate channels with higher gains as data channels and those with lower gains as control channels. However, their channel allocation decisions should also depend on the observations about the malicious attackers' strategies, which can be conjectured from knowledge of which channels are jammed by the attackers. Thus, the secondary users should maintain a record of which channels have been jammed by the attackers and what types of messages have been transmitted in the jammed channels. Since the channels within the same licensed band are assumed to have the same gain, what matters to the secondary users is only the number and the type of the jammed channels. On the basis of these assumptions, the observations of the secondary user network are denoted by \((J_{l,C}^t, J_{l,D}^t)\), where \(J_{l,C}^t\) and \(J_{l,D}^t\) denote the numbers of control and data channels that are observed to be jammed in the lth band at time slot t, and \(l \in \{1, 2, \ldots, L\}\). Such
observations can be obtained when the secondary users do not receive confirmation of message receipt from the receiver. The secondary users cannot tell whether an idle channel gets jammed or not, since no messages are transmitted in those channels. Thus, the number of idle channels that get jammed is not an observation of the secondary users, and will not be considered in the state of the stochastic game. In summary, the state of the stochastic anti-jamming game at time t is defined by \(s^t = (s_1^t, s_2^t, \ldots, s_L^t)\), where \(s_l^t = (P_l^t, g_l^t, J_{l,C}^t, J_{l,D}^t)\) denotes the state associated with the lth band.

After observing the state at each stage, both the secondary users and the attackers will choose their actions for the current time slot. The secondary users might no longer choose the previously jammed channels as control or data channels if they believe that the attackers will stay in the jammed channels until they detect no activity of the secondary users. On the other hand, if the attackers believe that the secondary users will hop away from the jammed channels, they will choose the previously unattacked channels to jam; then, for the secondary users, remaining in the previously jammed channels may be a better choice. When facing such uncertainty about each other's strategy, both the secondary users and the attackers should adopt a randomized strategy. The secondary users will still transmit control or data messages in some of the previously jammed channels in case the attackers choose to jam the previously unattacked channels, and start transmitting in some of the previously unattacked channels in case the attackers keep jamming the previously jammed channels for a while. Similarly, the attackers will keep jamming some of the previously jammed channels and start to jam the channels that were not jammed in the previous time slot.
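As a concrete illustration, the joint band-state transition in (8.1)-(8.4) can be sketched in Python. This is our own sketch, not code from the book; the function name, the gain-index encoding (g = None for an occupied band, otherwise an index into {q_1, ..., q_n}), and the probability tables are assumptions made for the example.

```python
# Sketch (not from the book) of the transition probability of the joint
# band state (P_l, g_l) from Eqs. (8.1)-(8.4). Gains are indexed 0..n-1
# for q_1..q_n; g = None encodes g_l = 0 (band occupied by the primary user).
def joint_transition(P, g, P_next, g_next, p11, p01, pg_from_busy, pg):
    """p11 = p_l^{1->1}, p01 = p_l^{0->1};
    pg_from_busy[n] = p_{g_l}^{0->n}: gain is q_n right after the band frees up;
    pg[m][n]        = p_{g_l}^{m->n}: gain moves from q_m to q_n while free."""
    if P == 1 and P_next == 1:                 # (8.1): band stays busy
        return p11
    if P == 1 and P_next == 0:                 # (8.2): becomes free with gain q_n
        return (1.0 - p11) * pg_from_busy[g_next]
    if P == 0 and P_next == 0:                 # (8.3): stays free, gain q_m -> q_n
        return (1.0 - p01) * pg[g][g_next]
    return p01                                 # (8.4): becomes busy again
```

Summing the returned probabilities over all next states \((P', g')\) yields 1 for any current state, which is a quick sanity check on the probability tables.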
In addition, as discussed in Section 8.2, the secondary users may need to perform channel switching to make their channel access pattern more unpredictable to the attackers and alleviate the potential damage due to jamming. Thus, at every time slot the secondary users can switch a control channel to a data channel or an idle channel, and vice versa. If so, when there are \(N_l\) channels in each licensed band l, the secondary users will have \(3^{N_l}\) different actions to choose from for the lth band and \(\prod_{l=1}^{L} 3^{N_l}\) actions in total. This will complicate the decision making of the secondary users. To have the decision making computable in a reasonable time, we formulate the action set for both players as follows. Note that more complicated action modeling will only affect the performance, while not affecting the stochastic anti-jamming game framework.

Mathematically, the actions of the secondary users are defined as \(a^t = (a_1^t, a_2^t, \ldots, a_L^t)\), with \(a_l^t = (a_{l,C_1}^t, a_{l,D_1}^t, a_{l,C_2}^t, a_{l,D_2}^t)\), where action \(a_{l,C_1}^t\) (or \(a_{l,D_1}^t\)) means that the secondary network will transmit control (or data) messages in \(a_{l,C_1}^t\) (or \(a_{l,D_1}^t\)) channels uniformly selected from the previously unattacked channels, and action \(a_{l,C_2}^t\) (or \(a_{l,D_2}^t\)) means that the secondary network will transmit control (or data) messages in \(a_{l,C_2}^t\) (or \(a_{l,D_2}^t\)) channels uniformly selected from the previously jammed channels. Similarly, the actions of the attackers are defined as \(a_J^t = (a_{1,J}^t, a_{2,J}^t, \ldots, a_{L,J}^t)\), with \(a_{l,J}^t = (a_{l,J_1}^t, a_{l,J_2}^t)\), where action \(a_{l,J_1}^t\) (or \(a_{l,J_2}^t\)) means that the attackers will jam \(a_{l,J_1}^t\) (or \(a_{l,J_2}^t\)) channels uniformly selected from the previously unattacked (or attacked)
channels at the current time t. It can be seen that the above choice of actions has modeled the players’ uncertainty about each other’s strategy on the jammed and un-jammed channels, as well as the need for channel switching.
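To make the action model concrete, the sketch below (our own illustration; the helper name and interface are hypothetical) realizes one per-band action \((a_{l,C_1}^t, a_{l,D_1}^t, a_{l,C_2}^t, a_{l,D_2}^t)\) as an actual channel assignment, drawing uniformly from the previously un-jammed and previously jammed channel pools as the model prescribes.

```python
import random

# Illustrative only: realize an action a_l = (aC1, aD1, aC2, aD2) as a
# concrete channel assignment, drawing uniformly from the previously
# un-jammed and previously jammed channel sets.
def realize_action(unjammed, jammed, aC1, aD1, aC2, aD2, rng=random):
    assert aC1 + aD1 <= len(unjammed) and aC2 + aD2 <= len(jammed)
    pick1 = rng.sample(unjammed, aC1 + aD1)   # from previously un-jammed pool
    pick2 = rng.sample(jammed, aC2 + aD2)     # from previously jammed pool
    control = pick1[:aC1] + pick2[:aC2]
    data = pick1[aC1:] + pick2[aC2:]
    return control, data                      # all remaining channels stay idle
```

Channels not selected are left idle, matching the option of transmitting nothing in some channels even when the licensed band is available.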
8.3.2 State transitions and stage payoff

With the state and action space defined, we next discuss the state transition rule. We assume that the players choose their actions in each band independently, so the transition probability can be expressed by

\[ p\big(s^{t+1} \mid s^t, a^t, a_J^t\big) = \prod_{l=1}^{L} p\big(s_l^{t+1} \mid s_l^t, a_l^t, a_{l,J}^t\big). \quad (8.5) \]

Since the dynamics of the primary users' activity and the channel variations are supposed to be independent of the players' actions, the transition probability \(p(s_l^{t+1} \mid s_l^t, a_l^t, a_{l,J}^t)\) can be further separated into two parts, i.e.,

\[ p\big(s_l^{t+1} \mid s_l^t, a_l^t, a_{l,J}^t\big) = p\big(J_{l,C}^{t+1}, J_{l,D}^{t+1} \mid J_{l,C}^t, J_{l,D}^t, a_l^t, a_{l,J}^t\big) \times p\big(P_l^{t+1}, g_l^{t+1} \mid P_l^t, g_l^t\big), \quad (8.6) \]

where the first term on the right-hand side of (8.6) represents the transition probability of the number of jammed control and data channels, and the second term represents the transition of the primary-user status and the channel condition. Since the second term has been derived in (8.1)-(8.4), we need only derive the first term for different cases.

Case 1: \(P_l^t = 1\). As discussed in Section 8.2, we assume that the attackers will not jam the licensed bands when the primary users are active; then, when the lth band is occupied by the primary user at time slot t, i.e., \(P_l^t = 1\), the action of the attackers will be \(a_{l,J}^t = (0, 0)\), and the state variables \(J_{l,C}^{t+1}\) and \(J_{l,D}^{t+1}\) will be 0. Therefore, when \(P_l^t = 1\), we have

\[ p\big(s_l^{t+1} \mid s_l^t, a_l^t, a_{l,J}^t\big) = p\big(P_l^{t+1}, g_l^{t+1} \mid P_l^t, g_l^t\big), \quad \text{if } J_{l,C}^{t+1} = 0 \text{ and } J_{l,D}^{t+1} = 0. \quad (8.7) \]

Case 2: \(P_l^t = 0\). When the lth band is available to the secondary users, according to the observations \(J_{l,C}^t\) and \(J_{l,D}^t\) at time t about the jammed-channel status in the previous time slot, the secondary network will choose an action \(a_l^t = (a_{l,C_1}^t, a_{l,D_1}^t, a_{l,C_2}^t, a_{l,D_2}^t)\), and the attackers choose an action \(a_{l,J}^t = (a_{l,J_1}^t, a_{l,J_2}^t)\).
Since the jammed control (or data) channels at the next time slot t + 1 include those control (or data) channels that the secondary network has selected both from among the previously un-jammed channels and from among the previously jammed channels, when deriving the transition probability \(p(J_{l,C}^{t+1}, J_{l,D}^{t+1} \mid J_{l,C}^t, J_{l,D}^t, a_l^t, a_{l,J}^t)\) we need to consider all possible pairs of \((n_{C_1}, n_{C_2})\) and \((n_{D_1}, n_{D_2})\), where \(n_{C_1}\) (or \(n_{D_1}\)) denotes the number of jammed control (or data) channels that had previously not been jammed and \(n_{C_2}\) (or \(n_{D_2}\)) denotes the number of jammed control (or data) channels that had previously been jammed, with
\(n_{C_1} + n_{C_2} = J_{l,C}^{t+1}\) and \(n_{D_1} + n_{D_2} = J_{l,D}^{t+1}\). Given that the secondary users uniformly choose \(a_{l,C_1}^t\) (or \(a_{l,D_1}^t\)) channels as control (or data) channels from among the un-jammed \(N_l - J_{l,C}^t - J_{l,D}^t\) channels, and the attackers uniformly jam \(a_{l,J_1}^t\) channels, the probability that \(n_{C_1}\) control channels and \(n_{D_1}\) data channels are jammed at time t can be written as

\[ p\big(n_{C_1}, n_{D_1} \mid J_{l,C}^t, J_{l,D}^t, a_l^t, a_{l,J}^t\big) = \frac{\binom{a_{l,C_1}^t}{n_{C_1}} \binom{a_{l,D_1}^t}{n_{D_1}} \binom{N_{l,1}^t - a_{l,C_1}^t - a_{l,D_1}^t}{a_{l,J_1}^t - n_{C_1} - n_{D_1}}}{\binom{N_{l,1}^t}{a_{l,J_1}^t}}, \quad (8.8) \]

where \(N_{l,1}^t = N_l - J_{l,C}^t - J_{l,D}^t\). Similarly, the transition probability of \(n_{C_2}\) and \(n_{D_2}\) is expressed as

\[ p\big(n_{C_2}, n_{D_2} \mid J_{l,C}^t, J_{l,D}^t, a_l^t, a_{l,J}^t\big) = \frac{\binom{a_{l,C_2}^t}{n_{C_2}} \binom{a_{l,D_2}^t}{n_{D_2}} \binom{N_{l,2}^t - a_{l,C_2}^t - a_{l,D_2}^t}{a_{l,J_2}^t - n_{C_2} - n_{D_2}}}{\binom{N_{l,2}^t}{a_{l,J_2}^t}}, \quad (8.9) \]

where \(N_{l,2}^t = J_{l,C}^t + J_{l,D}^t\) denotes the number of jammed channels. Then, the transition probability of \(J_{l,C}^t\) and \(J_{l,D}^t\) becomes

\[ p\big(J_{l,C}^{t+1}, J_{l,D}^{t+1} \mid J_{l,C}^t, J_{l,D}^t, a_l^t, a_{l,J}^t\big) = \sum_{n_{C_1}+n_{C_2}=J_{l,C}^{t+1}} \; \sum_{n_{D_1}+n_{D_2}=J_{l,D}^{t+1}} p\big(n_{C_1}, n_{D_1} \mid J_{l,C}^t, J_{l,D}^t, a_l^t, a_{l,J}^t\big) \times p\big(n_{C_2}, n_{D_2} \mid J_{l,C}^t, J_{l,D}^t, a_l^t, a_{l,J}^t\big). \quad (8.10) \]
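Numerically, (8.8)-(8.10) are products of hypergeometric terms. A sketch of the computation (our own; the function names and interface are assumed) uses Python's math.comb, which returns 0 when k > n:

```python
from math import comb

# Sketch of Eqs. (8.8)-(8.10) for one band (our own illustration).
# hit_prob: probability that a uniform jamming of aJ out of N channels hits
# exactly nC of the aC control channels and nD of the aD data channels.
def hit_prob(nC, nD, aC, aD, aJ, N):
    rest = aJ - nC - nD                       # jammed channels hitting neither
    if rest < 0 or aJ > N:
        return 0.0
    return comb(aC, nC) * comb(aD, nD) * comb(N - aC - aD, rest) / comb(N, aJ)

# (8.10): transition of the jammed-channel counts (J_C, J_D) -> (JC_next, JD_next),
# splitting the hits between the un-jammed pool (size N1) and jammed pool (N2).
def jam_transition(JC_next, JD_next, JC, JD, Nl, aC1, aD1, aC2, aD2, aJ1, aJ2):
    N1, N2 = Nl - JC - JD, JC + JD
    total = 0.0
    for nC in range(JC_next + 1):
        for nD in range(JD_next + 1):
            total += (hit_prob(nC, nD, aC1, aD1, aJ1, N1)            # (8.8)
                      * hit_prob(JC_next - nC, JD_next - nD,
                                 aC2, aD2, aJ2, N2))                 # (8.9)
    return total
```

As a sanity check, the probabilities over all reachable \((J_{l,C}^{t+1}, J_{l,D}^{t+1})\) pairs sum to one.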
By substituting (8.3), (8.4), and (8.10) into (8.6), we can obtain the state transition probability.

After the secondary users and the attackers have chosen their actions, the secondary users will transmit control and data messages in the selected channels, and the attackers will jam their selected channels. In order to coordinate the spectrum access and simplify operation, we assume that the same control messages are transmitted in all the control channels, and that one correct copy of the control information at time t is sufficient for coordinating the spectrum management in the next time slot t + 1. The gain of a channel can be achieved only when it is used for transmitting data messages and at least one control channel is not jammed by the attackers. Considering that it costs energy for the secondary users to transmit control and data messages and they may be energy-constrained, the objective of the secondary users is to achieve the highest possible gain with a limited energy. Therefore, the stage payoff of the secondary users can be defined as the expected gain per active channel. Another interpretation of the stage payoff is that the secondary users want to maximize the spectrum-efficient gain.
Under these assumptions, the stage payoff can be expressed as

\[ r\big(s^t, a^t, a_J^t\big) = T\big(s^t, a^t, a_J^t\big) \times \Big(1 - p^{\mathrm{block}}\big(s^t, a^t, a_J^t\big)\Big), \quad (8.11) \]

where \(T(s^t, a^t, a_J^t)\) denotes the expected spectrum-efficient gain when not all control channels are jammed, and \(p^{\mathrm{block}}(s^t, a^t, a_J^t)\) denotes the probability that all control channels in all L bands are jammed.

As explained in Section 8.3.1, we assume that the attackers uniformly select \(a_{l,J_1}^t\) channels from the previous \(N_{l,1}^t\) unattacked channels to jam, and select \(a_{l,J_2}^t\) channels from the previous \(N_{l,2}^t\) attacked channels to jam. Then, the probability that a channel will not be jammed at time t can be represented by \(1 - a_{l,J_1}^t / N_{l,1}^t\) and \(1 - a_{l,J_2}^t / N_{l,2}^t\), respectively. Given the gain of the channels \(g_l^t\), the expected gain of using band l is \(\big[a_{l,D_1}^t \big(1 - a_{l,J_1}^t / N_{l,1}^t\big) + a_{l,D_2}^t \big(1 - a_{l,J_2}^t / N_{l,2}^t\big)\big] g_l^t\). Then, we can express \(T(s^t, a^t, a_J^t)\) as

\[ T\big(s^t, a^t, a_J^t\big) = \frac{\sum_{l=1}^{L} \Big[ a_{l,D_1}^t \Big(1 - \frac{a_{l,J_1}^t}{N_{l,1}^t}\Big) + a_{l,D_2}^t \Big(1 - \frac{a_{l,J_2}^t}{N_{l,2}^t}\Big) \Big] g_l^t}{\sum_{l=1}^{L} \big( a_{l,C_1}^t + a_{l,D_1}^t + a_{l,C_2}^t + a_{l,D_2}^t \big)}, \quad (8.12) \]

where the denominator denotes the total number of control and data channels. Thus, (8.12) reflects the spectrum-efficient gain.

Only when all the control channels in each licensed band l are jammed can the secondary network be blocked. Therefore, the blocking probability \(p^{\mathrm{block}}(s^t, a^t, a_J^t)\) can be expressed as

\[ p^{\mathrm{block}}\big(s^t, a^t, a_J^t\big) = \prod_{l=1}^{L} \frac{\binom{a_{l,C_1}^t}{a_{l,C_1}^t} \binom{N_{l,1}^t - a_{l,C_1}^t}{a_{l,J_1}^t - a_{l,C_1}^t}}{\binom{N_{l,1}^t}{a_{l,J_1}^t}} \times \frac{\binom{a_{l,C_2}^t}{a_{l,C_2}^t} \binom{N_{l,2}^t - a_{l,C_2}^t}{a_{l,J_2}^t - a_{l,C_2}^t}}{\binom{N_{l,2}^t}{a_{l,J_2}^t}} = \prod_{l=1}^{L} \frac{\binom{N_{l,1}^t - a_{l,C_1}^t}{a_{l,J_1}^t - a_{l,C_1}^t}}{\binom{N_{l,1}^t}{a_{l,J_1}^t}} \times \frac{\binom{N_{l,2}^t - a_{l,C_2}^t}{a_{l,J_2}^t - a_{l,C_2}^t}}{\binom{N_{l,2}^t}{a_{l,J_2}^t}}, \quad (8.13) \]

where the first (or second) term in the product represents the probability that all the control channels uniformly selected from the previously un-jammed (or jammed) channels in the lth band are jammed at time t. By substituting (8.12) and (8.13) back into (8.11), we can obtain the stage payoff for the secondary users, and the attackers' payoff is the negative of (8.11).
8.4 Solving optimal policies of the stochastic game

On the basis of the formulation of the stochastic anti-jamming game in the previous section, in this section we discuss how to obtain the optimal strategy, i.e., the optimal defending policy of the secondary users.

In general, the secondary users have a long sequence of data to transmit, and the energy of the attackers is sufficiently great that they can afford to jam the secondary network for a long time, given that the number of channels jammed at each stage will not exceed \(\bar{N}\). Thus, we can assume that the anti-jamming game is played for an infinite number of stages. Moreover, the secondary users treat the payoff in different stages differently, and a recent payoff counts for more than a payoff that will be received in the distant future. Then, the secondary users' objective is to derive an optimal policy that maximizes the expected sum of discounted payoffs

\[ \max \; E\Bigg[ \sum_{t=0}^{\infty} \gamma^t r\big(s^t, a^t, a_J^t\big) \Bigg], \quad (8.14) \]

where \(\gamma\) is the discount factor of the secondary-user network. At time t, when the game has been played for t - 1 stages, the objective of the secondary network becomes

\[ \max \; E\Bigg[ \sum_{j=0}^{\infty} \gamma^j r\big(s^{t+j}, a^{t+j}, a_J^{t+j}\big) \Bigg], \quad (8.15) \]
which is based on the history of the states and actions in all of the previous t - 1 stages.

A policy in the stochastic game refers to a probability distribution over the action set at any state. The policy of the secondary network is denoted by \(\pi : S \to PD(A)\), and the policy of the attackers by \(\pi_J : S \to PD(A_J)\), where \(s^t \in S\), \(a^t \in A\), and \(a_J^t \in A_J\). Given the current state \(s^t\), if the defending policy \(\pi^t\) (or jamming policy \(\pi_J^t\)) at time t is independent of the states and actions in all previous time slots, the policy \(\pi\) (or \(\pi_J\)) is said to be Markov. If the policy is further independent of time, i.e., \(\pi^{t_1} = \pi^{t_2}\) whenever \(s^{t_1} = s^{t_2}\), the policy is said to be stationary.

Since the payoff \(r(s^t, a^t, a_J^t)\) of the secondary users at each stage is determined not only by their own action, but also by the attackers' action, when maximizing the objective function defined in (8.14) or (8.15) the secondary users need to consider the possible choices of the attackers. The attackers have the opposite objective to the secondary users, and aim at minimizing the secondary users' payoff. Therefore, a reasonable policy for the secondary network should be optimal in the sense that it maximizes the secondary users' payoff in the worst case of jamming, i.e., the minimax solution of the zero-sum game. It is a conservative strategy and provides the lower-bound performance under a jamming attack. It is known [249] that every stochastic game has a nonempty set of optimal policies, and at least one of them is stationary. Since the game between the secondary network and the attackers is a zero-sum game, the equilibrium of each stage game is the unique minimax equilibrium, and thus the optimal policy will also be unique for each player. It is straightforward to solve for the equilibrium of each stage game, since, given a
state, the secondary users can explicitly figure out all their own possible actions and the actions that can be chosen by the attackers, construct the payoff table, and calculate the minimax solution by linear programming. However, solving for the optimal policy of the stochastic anti-jamming game is much more complicated than solving the stage game. As shown in (8.15), rewritten as

\[ \max \; E\Bigg[ r\big(s^t, a^t, a_J^t\big) + \sum_{j=1}^{\infty} \gamma^j r\big(s^{t+j}, a^{t+j}, a_J^{t+j}\big) \Bigg], \quad (8.16) \]
at each time t when the secondary users want to maximize the objective function, they need to consider not only the immediate expected payoff at the current stage (the first term of (8.16)), but also the expected sum of discounted future payoffs (the second term of (8.16)). The current policy has an implicit impact on the payoff that can be achieved in the future, since different policies at the current time t result in different future states at times t + 1, t + 2, . . . , and thus in different future payoff values. Therefore, we need to solve for the optimal policy in the stochastic game using a value-iteration-based approach, while calculating a minimax solution in each iteration. Such a method is called minimax-Q learning [249], which will be discussed in detail in the following.

Minimax-Q learning is an extension of the Q-learning algorithm to a zero-sum game setting. Thus, we start by introducing the Q-learning method. Q-learning is a type of reinforcement learning [37] method that approaches the optimal policy in an MDP by learning an action-value function. The objective function of an MDP can be defined similarly to (8.14), which is defined for a stochastic game, except that there is only one player. Then, the value \(V^{\pi}(s)\) of a state s under a policy \(\pi\), starting from state s, can be defined as

\[ V^{\pi}(s) = E\Bigg[ \sum_{t=0}^{\infty} \gamma^t r\big(s^t, a^t\big) \,\Big|\, s^0 = s; \pi \Bigg]. \quad (8.17) \]
Owing to the recursive nature of the value function, (8.17) can be further rewritten as

\[ V^{\pi}(s) = E\big\{ r\big(s^0, a^0\big) \,\big|\, s^0 = s; \pi \big\} + \gamma E\Bigg[ \sum_{t=1}^{\infty} \gamma^{t-1} r\big(s^t, a^t\big) \,\Big|\, s^0 = s; \pi \Bigg] \]
\[ = r\big(s, \pi(s)\big) + \gamma \sum_{s'} p\big(s' \mid s, \pi(s)\big) E\Bigg[ \sum_{t=1}^{\infty} \gamma^{t-1} r\big(s^t, a^t\big) \,\Big|\, s^1 = s'; \pi \Bigg] \]
\[ = r\big(s, \pi(s)\big) + \gamma \sum_{s'} p\big(s' \mid s, \pi(s)\big) V^{\pi}(s'). \quad (8.18) \]
The quality of a state-action pair under policy \(\pi\), also termed a Q-function, \(Q^{\pi}(s, a)\), is defined as the expected discounted payoff achieved by a policy that takes action a at state s and follows policy \(\pi\) thereafter, i.e.,

\[ Q^{\pi}(s, a) = r(s, a) + \gamma \sum_{s'} p\big(s' \mid s, a\big) V^{\pi}(s'). \quad (8.19) \]
At each state s, the optimal policy \(\pi^*\) always chooses the action that results in the highest Q-value, i.e.,

\[ V^{\pi^*}(s) = \max_{a \in A} Q(s, a). \quad (8.20) \]
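Equations (8.19) and (8.20) together define one sweep of value iteration. The toy below (a hypothetical 2-state, 2-action MDP of our own, not from the chapter) iterates the two equations until the values settle.

```python
# Toy illustration of Eqs. (8.19)-(8.20): value iteration on a hypothetical
# 2-state, 2-action MDP (not an example from the chapter).
def value_iteration(states, actions, r, p, gamma=0.9, iters=500):
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        Q = {(s, a): r[s][a] + gamma * sum(p[s][a][s2] * V[s2] for s2 in states)
             for s in states for a in actions}                    # (8.19)
        V = {s: max(Q[s, a] for a in actions) for s in states}    # (8.20)
    return V, Q

# Hypothetical MDP: action 1 in state 0 pays 1 and stays there; all else pays 0.
states, actions = [0, 1], [0, 1]
r = {0: {0: 0.0, 1: 1.0}, 1: {0: 0.0, 1: 0.0}}
p = {0: {0: {0: 1, 1: 0}, 1: {0: 1, 1: 0}},
     1: {0: {0: 1, 1: 0}, 1: {0: 0, 1: 1}}}
V, Q = value_iteration(states, actions, r, p)
```

With gamma = 0.9, repeatedly taking the rewarding self-loop in state 0 gives V(0) = 1/(1 - gamma) = 10, and moving from state 1 to state 0 gives V(1) = gamma V(0) = 9, which the iteration recovers.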
Then, (8.19) and (8.20) constitute the core procedure of the Q-learning method, and the value iteration will finally converge to the true values of Q and V for each state, which will be sufficient for the player to choose the optimal action. Since a stochastic game is an extension of an MDP, a value iteration for finding the optimal policy of the secondary the anti-jamming game can be similarly users in derived. Here, the Q-function Q st , at , atJ at stage t is defined as the expected discounted payoff when the secondary users take action at , the attackers take action atJ , and both of them follow their stationary policies thereafter. Since the Q-function is essentially an estimate of the expected total discounted payoff which evolves over time, in order to maximize the worst-case performance, at each stage the secondary users payoff of a matrix game, where at ∈ A and atJ ∈ AJ . should treat Q st , at , atJ as the Given the payoff Q st , at , atJ of the game, the secondary users can find the minimax equilibrium and update the Q-value with the value of the game [249]. Therefore, the value of a state in the anti-jamming game becomes V (st ) = max min Q st , at , atJ π(at ), (8.21) t t π(a ) πJ (aJ ) t a ∈A where Q st , at , atJ is updated by Q st , at , atJ = r st , at , atJ + γ p st+1 |st , at , atJ V (st+1 ). st+1
(8.22)
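Before turning to the game setting, the single-agent recursion in (8.19) and (8.20) can be sketched directly in code. The toy MDP below (two states, two actions) uses made-up transition probabilities and payoffs purely for illustration:

```python
import numpy as np

# Toy MDP, purely illustrative: P[s, a, s'] transition probabilities, R[s, a] payoffs
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
R = np.array([[1.0, 0.0], [0.0, 2.0]])
gamma = 0.8

V = np.zeros(2)
for _ in range(500):
    Q = R + gamma * (P @ V)   # (8.19): Q(s, a) = r(s, a) + γ Σ_{s'} p(s'|s, a) V(s')
    V = Q.max(axis=1)         # (8.20): V(s) = max_{a} Q(s, a)
# V now satisfies the Bellman optimality condition to numerical precision
```

Since the update is a γ-contraction, 500 iterations bring V within negligible distance of the fixed point.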
Since the value function (8.20) in Q-learning is replaced by the minimax solution (8.21), the learning algorithm is called minimax-Q learning. In order to update the Q-value by (8.22), the secondary users need to know the state transition probability. Although the transition probability can be calculated by (8.5), the attackers' action a_J^t, ∀t, cannot be observed by the secondary users directly. It is possible for the secondary users to estimate the state transition from empirical observations [128]; however, owing to the high dimensionality of the state and action space, it takes a very long time for the estimate to converge to the true transition probability. Even though the secondary users could "merge" several states into one, like a quantization, to reduce the dimensionality of the state and action space and make the estimate converge faster, how to merge the states is still an open question. As discussed in [255] [194], for zero-sum stochastic games in which the players have opposite objectives, we can approach the optimal policy by means of a modified value iteration without explicit use of the state transition probability. Specifically, the Q-function is updated according to

Q(s^t, a^t, a_J^t) = (1 − α^t) Q(s^t, a^t, a_J^t) + α^t [ r(s^t, a^t, a_J^t) + γ V(s^{t+1}) ],   (8.23)
Anti-jamming stochastic games
Table 8.1. Minimax-Q learning for the anti-jamming stochastic game

1. At state s^t, t = 0, 1, . . .:
   • if state s^t has not been observed previously, add s^t to s_hist,
     – generate the action set A(s^t) of the secondary users and A_J(s^t) of the attackers;
     – initialize Q(s^t, a, a_J) ← 1, for all a ∈ A(s^t), a_J ∈ A_J(s^t);
     – initialize V(s^t) ← 1;
     – initialize π(s^t, a) ← 1/|A(s^t)|, for all a ∈ A(s^t);
   • otherwise, use the previously generated A(s^t), A_J(s^t), Q(s^t, a, a_J), V(s^t), and π(s^t).
2. Choose an action a^t at time t:
   • with probability p_exp, return an action uniformly at random;
   • otherwise, return action a^t with probability π(s^t, a) under the current state s^t.
3. Learn. Assume that the attackers take action a_J^t; after receiving reward r(s^t, a^t, a_J^t) for moving from state s^t to s^{t+1} by taking action a^t:
   • update the Q-function Q(s^t, a^t, a_J^t) according to (8.23);
   • update the optimal strategy π*(s^t, a) by π*(s^t) ← arg max_{π(s^t)} min_{π_J(s^t)} Σ_a π(s^t, a) Q(s^t, a, a_J);
   • update V(s^t) ← min_{π_J(s^t)} Σ_a π*(s^t, a) Q(s^t, a, a_J);
   • update α^{t+1} ← α^t · μ.
   Go to Step 1 until convergence occurs.
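The procedure of Table 8.1 can be sketched as a short program. The state space, state transitions, and stage payoff below are made-up placeholders (not the chapter's actual anti-jamming game), and the inner matrix game is solved with SciPy's linear-programming routine; the sketch is meant only to show the shape of the minimax-Q loop:

```python
import numpy as np
from scipy.optimize import linprog

def solve_game(Q):
    """Value and maximin strategy of the zero-sum matrix game Q
    (rows: attacker actions, columns: secondary-user actions)."""
    nJ, nA = Q.shape
    # variables x = [pi(a) for all a, z]; maximize z <=> minimize -z
    res = linprog(np.r_[np.zeros(nA), -1.0],
                  A_ub=np.c_[-Q, np.ones(nJ)], b_ub=np.zeros(nJ),  # z <= (Q pi)_j
                  A_eq=np.r_[np.ones(nA), 0.0].reshape(1, -1), b_eq=[1.0],
                  bounds=[(0, 1)] * nA + [(None, None)])
    pi = np.clip(res.x[:nA], 0.0, None)
    return pi / pi.sum(), res.x[-1]

rng = np.random.default_rng(1)
nS, nA, nJ = 3, 2, 2                       # toy sizes, hypothetical
Qtab = np.ones((nS, nJ, nA))               # initialize Q <- 1, V <- 1 (Table 8.1)
V = np.ones(nS)
alpha, mu, gamma, p_exp = 1.0, 0.995, 0.9, 0.1

s = 0
for t in range(300):
    pi, _ = solve_game(Qtab[s])
    # Step 2: explore with probability p_exp, otherwise draw from pi(s)
    a = rng.integers(nA) if rng.random() < p_exp else rng.choice(nA, p=pi)
    aJ = rng.integers(nJ)                  # attacker action, observed after the stage
    s_next = rng.integers(nS)              # placeholder state transition
    r = 1.0 if a != aJ else -1.0           # placeholder zero-sum stage payoff
    # Step 3: update Q by (8.23), then V by the minimax solution (8.21)
    Qtab[s, aJ, a] = (1 - alpha) * Qtab[s, aJ, a] + alpha * (r + gamma * V[s_next])
    _, V[s] = solve_game(Qtab[s])
    alpha *= mu                            # decay the learning rate
    s = s_next
```

With the real game, the action sets would be generated per state as in Step 1, and the loop would run until π(s^t) stops changing.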
where α^t denotes the learning rate, which decays over time as α^{t+1} = μα^t with 0 < μ < 1, and V(s^{t+1}) is obtained by use of (8.21). In the modified update in (8.23), the current value of a state, V(s^{t+1}), is used as an approximation of the true expected discounted future payoff, and this approximation is improved during the value iteration; the estimate of Q(s^t, a^t, a_J^t) is updated by mixing the previous Q-value with a correction from the new estimate at a learning rate α^t that decays slowly over time. It is shown in [255] that the minimax-Q learning approach converges to the true Q and V values, and hence the optimal policy, as long as each action is tried infinitely many times in every state. The minimax-Q learning procedure by which the secondary users obtain the optimal policy is summarized in Table 8.1.

Since no secondary user (or attacker) will transmit in (or jam) a licensed band when the primary user is active, when the primary users' status differs across states, the corresponding action spaces of the players in these states also differ. Thus, the action space depends on the state. At the beginning of each stage t, the secondary users check whether they have observed state s^t before: if not, they add s^t to the observation history s_hist, and initialize the variables used in the learning algorithm, Q, V, and the policy π(s^t, a). If s^t already exists in the history s_hist, the secondary users simply retrieve the corresponding action sets and function values. Then, the secondary users choose an action a^t: with a certain probability p_exp, they explore the entire action space A(s^t) and return an action uniformly at random; with probability 1 − p_exp, they take an action a^t drawn according to the current π(s^t). After the attackers have taken action a_J^t, the secondary users receive the reward, and the game transits to the next state s^{t+1}.
The secondary users then update the Q and V function values, update the policy π(s^t) at state s^t, and decay the learning rate.
The value iteration will continue until π(s^t) approaches the optimal policy, and we will demonstrate the convergence of the minimax-Q learning in the simulation results.

Note that, in order to obtain the value of a state V(s^t), the secondary users need to solve for the equilibrium of a matrix game whose payoff is Q(s^t, a, a_J), for all a ∈ A(s^t) and a_J ∈ A_J(s^t). Assume that the attackers form the row player, whose strategy is denoted by the vector π_J(s^t), and the secondary users form the column player, whose strategy is denoted by the vector π(s^t). Then the value of the game can be expressed as

max_{π(s^t)} min_{π_J(s^t)} π_J(s^t)^T Q(s^t, a, a_J) π(s^t),                             (8.24)

which cannot be solved directly. If we assume that the secondary users' strategy π(s^t) is fixed, then the problem in (8.24) becomes

min_{π_J(s^t)} π_J(s^t)^T Q(s^t, a, a_J) π(s^t).                                          (8.25)

Since Q(s^t, a, a_J)π(s^t) is just a vector, and π_J(s^t) is a probability distribution, the solution of (8.25) is equivalent to min_i [Q(s^t, a, a_J)π(s^t)]_i, i.e., finding the minimal element of Q(s^t, a, a_J)π(s^t). The problem in (8.24) then simplifies to

max_{π(s^t)} min_i [Q(s^t, a, a_J)π(s^t)]_i.                                              (8.26)

On defining z = min_i [Q(s^t, a, a_J)π(s^t)]_i, we have [Q(s^t, a, a_J)π(s^t)]_i ≥ z for every i. Therefore, the original problem (8.24) becomes

max_{π(s^t)} z,   s.t. [Q(s^t, a, a_J)π(s^t)]_i ≥ z,  π(s^t) ≥ 0,  1^T π(s^t) = 1,        (8.27)

where π(s^t) ≥ 0 means that each probability element in π(s^t) must be non-negative. By treating the objective z also as a variable, (8.27) can be turned into the following:

max_{π̃} 0_aug^T π̃,   s.t. Q̃ π̃ ≤ 0,  π(s^t) ≥ 0,  1_aug^T π̃ = 1,                        (8.28)

where π̃ = [π(s^t)^T, z]^T, Q̃ = [O 1] − [Q(s^t, a, a_J) 0], 1_aug^T = [1^T 0], and 0_aug^T = [0^T 1]. Problem (8.28) is a linear program, so the secondary users can easily obtain the value of the game z from the optimizer π̃.
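The construction in (8.27)–(8.28) maps directly onto an off-the-shelf LP solver. The sketch below builds the augmented quantities of (8.28) and solves them with SciPy's linprog; the 2×2 payoff matrix is a made-up example, not one of the chapter's Q-matrices:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical payoff matrix Q(s^t, a, aJ): rows index the attacker's
# actions aJ, columns the secondary users' actions a.
Q = np.array([[1.0, -1.0],
              [-1.0, 1.0]])
nJ, nA = Q.shape

# Augmented quantities of (8.28), with variables pi_tilde = [pi(s^t); z]
Q_tilde = np.hstack([-Q, np.ones((nJ, 1))])   # [O 1] - [Q 0];  Q_tilde pi_tilde <= 0
one_aug = np.r_[np.ones(nA), 0.0]             # 1_aug^T pi_tilde = 1
zero_aug = np.r_[np.zeros(nA), 1.0]           # objective 0_aug^T pi_tilde = z

# linprog minimizes, so negate the objective to maximize z
res = linprog(-zero_aug, A_ub=Q_tilde, b_ub=np.zeros(nJ),
              A_eq=one_aug.reshape(1, -1), b_eq=[1.0],
              bounds=[(0, 1)] * nA + [(None, None)])
pi, z = res.x[:nA], res.x[-1]
# For this symmetric game the maximin strategy is (0.5, 0.5) with value 0
```

The optimizer returns both the value of the game z and the secondary users' maximin strategy π(s^t) in one solve, which is exactly what Step 3 of Table 8.1 requires.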
8.5 Simulation results
In this section, we conduct simulations to evaluate the secondary user network's performance under a jamming attack. We first demonstrate the convergence of the minimax-Q learning algorithm, and analyze the strategies of the secondary users and attackers for
several typical states. Then, we compare the levels of performance achievable when the secondary users adopt various strategies. For illustrative purposes, we focus on examples with only one or two licensed bands to provide more insight; however, similar policies can be observed when more licensed bands are available.
8.5.1 Convergence and strategy analysis

8.5.1.1 Anti-jamming defense in one licensed band
We first study the case in which there is only one licensed band available to the secondary users, i.e., L = 1. There are eight channels in the licensed band, from among which the attackers can choose at most four channels to jam at each time. The gain of utilizing each channel in the licensed band, g_l^t, can take any value from {1, 6, 11}, and the transition probability of the gain from any q_j to q_i is p_{g_l}^{j→1} = p_{g_l}^{j→2} = 0.4, p_{g_l}^{j→3} = 0.2, for j = 1, 2, 3, as well as for j = 0 when the primary user becomes inactive. The transition probabilities for the primary user's access are given by p_l^{1→1} = 0.5 and p_l^{0→1} = 0.5. We first study the strategy of the secondary users and the attackers for those states in which the primary user is inactive and no channels were observed to have been successfully jammed in the previous stage. Recall that the state of the anti-jamming stochastic game with L = 1 is denoted by s^t = (P_1^t, g_1^t, J_{1,C}^t, J_{1,D}^t), where J_{1,C}^t and J_{1,D}^t represent the numbers of jammed control and data channels observed from the previous stage; three such states are (0, 1, 0, 0), (0, 6, 0, 0), and (0, 11, 0, 0). We show the learning curve of the secondary users' strategy in these states in the left column of Figure 8.2, and the learning curve of the attackers' strategy in the right column. We see from Figure 8.2 that, using the minimax-Q learning, the strategies of the secondary users and the attackers both converge within fewer than 400 iterations, and the optimal strategy for each player is a pure strategy. Recall that the action of the secondary users on the lth band is denoted by (a_{l,C1}^t, a_{l,D1}^t, a_{l,C2}^t, a_{l,D2}^t), and the action of the attackers by (a_{l,J1}^t, a_{l,J2}^t).
Then, in Figures 8.2(a) and (b), for state (0, 1, 0, 0), we see that the optimal strategy of the secondary users finally converges to (2, 6, 0, 0), meaning that the secondary users uniformly choose two channels as control channels and six channels as data channels; the attackers' optimal strategy converges to (3, 0), meaning that they uniformly choose three channels to jam. This is because the gain of each channel in this state is only 1, and the secondary users choose to reserve many channels for transmitting data messages and only a few for control messages, in the hope of obtaining a higher gain, at the higher risk of having all the control channels jammed. When the gain increases to 6 per channel, as shown in Figures 8.2(c) and (d), the secondary users become more risk-averse, reserving five control channels and three data channels, and the attackers become more aggressive, attacking the maximal number of channels they can. This is because the gain of each channel is higher, and the secondary users want to ensure a certain gain by securing at least one control channel from being jammed. When the gain further increases to 11 (Figures 8.2(e) and (f)), the secondary users become even more conservative, having only two data channels and
three control channels. This is because the objective of the secondary users is defined as the spectrum-efficient gain in (8.12), and leaving more channels idle will probably increase the payoff.

[Figure 8.2. Learning curves of the secondary users (left column) and the attackers (right column): (a) action (2, 6, 0, 0) and (b) action (3, 0) at state (0, 1, 0, 0); (c) action (5, 3, 0, 0) and (d) action (4, 0) at state (0, 6, 0, 0); (e) action (3, 2, 0, 0) and (f) action (4, 0) at state (0, 11, 0, 0).]

Next, we observe how the players' strategies change when some of the state variables differ, for instance when some control or data channels were jammed by the attackers in the previous stage. We choose only two states for illustration, state (0, 6, 2, 0) and state (0, 6, 0, 2), to compare with the strategy at state (0, 6, 0, 0). In Figure 8.3, we demonstrate the learning curves of the secondary users and the attackers at state (0, 6, 2, 0), where two control channels were jammed in the previous stage. We see that both players' strategies converge within 50 iterations, and the optimal policies of both players at this state are mixed strategies. Since the attackers successfully jammed two control channels in the previous stage, it is highly likely that most of the remaining un-jammed channels are data channels. Thus, the attackers tend to jam the previously un-jammed channels with a relatively high probability, as shown by actions (1, 0), (2, 0), (3, 0), and (2, 1) in Figure 8.3(b), the total probability of which is very high at the beginning. Then, the secondary users tend to reserve most of the previously jammed channels as data channels, as shown by those actions for which a_{l,D2}^t ≥ 1, with a total probability greater than 0.9, and reserve only a few of the previously un-jammed channels as data channels, as shown by actions for which a_{l,D1}^t ≤ 3, with a total probability greater than 0.8. Moreover, since the attackers will attack fewer than three channels from among the previously un-jammed channels, the secondary users reserve only at most three control channels there to ensure reliable communications. The attackers generally jam fewer than four channels.
If they choose to jam four channels, the secondary users, facing a high chance of being attacked, will leave more channels idle. This may in turn increase the secondary users' expected payoff, and thus the attackers jam at most three channels.

Both players' strategies at state (0, 6, 0, 2) are shown in Figure 8.4. Since two data channels were successfully jammed in the previous stage, the secondary users tend to reserve at most one of the previously jammed channels as a data channel in order to avoid being "second jammed", as shown by actions (5, 0, 1, 1) and (5, 1, 1, 0) with a total probability greater than 0.7. Considering that the attackers will probably attack the previously un-jammed channels, the secondary users reserve most un-jammed channels as control channels to ensure reliability, again as shown by actions (5, 0, 1, 1) and (5, 1, 1, 0), for which five un-jammed channels are selected as control channels. In response to the secondary users' strategy, the attackers keep attacking the previously jammed channels, as shown by actions (0, 2), (1, 2), and (2, 2), for which a_{l,J2} = 2, with a total probability greater than 0.94. On comparing Figures 8.4 and 8.3, we find that, when the attackers successfully jam some data channels, more information about the secondary users' strategy (on locating the data channels) is revealed, the damage of the jamming attack is more severe, and the secondary users have to reserve more channels for control use, which leads to a reduced payoff.
[Figure 8.3. Learning curves of secondary users and attackers at state (0, 6, 2, 0): (a) the secondary users; (b) the attackers.]

[Figure 8.4. Learning curves of secondary users and attackers at state (0, 6, 0, 2): (a) the secondary users; (b) the attackers.]

8.5.1.2 Anti-jamming defense in two licensed bands
We now discuss the strategy of the secondary users and attackers when there are two licensed bands available, i.e., L = 2. There are four channels within each band, and the gain of the channels in each band still takes a value from {1, 6, 11}, with the same transition probability as in the one-band case. The transition probability for the primary
user's access for the first band is p_1^{1→1} = p_1^{0→1} = 0.5, while the transition probability for the second band is p_2^{1→1} = p_2^{0→1} = 0.2, meaning that the probability of the second band being available is higher than that of the first band. The attackers can jam at most four channels at each time. To compare with the one-band case, we first study the strategy of both players at state ((0, 6, 0, 0), (0, 6, 0, 0)), where both bands are available, with gains g_1^t = g_2^t = 6, and no control or data channels were jammed in the previous stage. We show the learning curves of both players in Figure 8.5, where the number below each plot denotes the index of the action shown in that plot. We see that the secondary users' strategy converges to the optimal policy within 800 iterations, while the attackers' strategy converges within 400 iterations. Under the optimal policy, the secondary users mostly take action ((1, 3, 0, 0), (1, 3, 0, 0)), indexed as 104, action ((0, 2, 0, 0), (2, 2, 0, 0)), indexed as 27, action ((2, 0, 0, 0), (1, 3, 0, 0)), indexed as 119, and action ((2, 2, 0, 0), (1, 1, 0, 0)), indexed as 147; the attackers mostly take action ((0, 0), (3, 0)), indexed as 3, action ((0, 0), (4, 0)), indexed as 4, and action ((4, 0), (0, 0)), indexed as 14. Since the availability of the second band is higher, the attackers tend to jam the channels in the second band (actions 3 and 4 have a total probability of 0.7). But there is still a chance that they will attack the first band, indicating that the attackers' strategy is random. Compared with the equivalent state (0, 6, 0, 0) in the one-band case, where the secondary users' policy is (5, 3, 0, 0), the secondary users' policy in the two-band case is more aggressive, as can be seen from the fact that the secondary users assign more data channels and fewer control channels in total.
This is because, when there are two available bands, the attackers' strategy becomes more random, and thus an aggressive policy can bring a higher gain to the secondary users. Then, we study the strategy at state ((0, 1, 0, 0), (0, 6, 0, 0)), where g_1^t = 1 and g_2^t = 6. The learning curves are shown in Figure 8.6. Since the second band has a higher gain and is also more likely to be available in the next slot, the attackers tend to jam the second band, as can be seen from the probability of action ((0, 0), (3, 0)), indexed as 3, and action ((0, 0), (4, 0)), indexed as 4. In response to the attackers' strategy, the secondary users tend to reserve more control channels in the first band, since it is less likely to be attacked, and more data channels in the second band, since it has a higher gain for each channel, as can be seen from the probability of action ((2, 1, 0, 0), (1, 1, 0, 0)), indexed as 132, and action ((3, 0, 0, 0), (0, 4, 0, 0)), indexed as 160.
[Figure 8.5. Learning curves at state ((0, 6, 0, 0), (0, 6, 0, 0)) when L = 2: (a) the secondary users; (b) the attackers.]

8.5.2 Comparison of different strategies
We also compare the performance of the secondary users when they adopt the optimal policy obtained from minimax-Q learning with the performance obtained using other policies, in order to evaluate the stochastic anti-jamming game and the learning algorithm. We assume that the attackers use their optimal stationary policy, trained against secondary users who adopt minimax-Q learning. We then consider the following three scenarios with different strategies for the secondary users.
• The secondary users adopt the stationary policy obtained by minimax-Q learning (denoted by “optimal”).
• The secondary users adopt a stationary policy obtained by myopic learning. By myopic, we mean that they care more about the immediate payoff than about the future payoffs. In the myopic policy considered here, we assume that the secondary users ignore the effect of their current action on the future payoffs, so it is the extreme case in which γ = 0 (denoted by “myopic”).
• The secondary users adopt a fixed strategy that draws an action uniformly from the action space A(s^t) for each s^t (denoted by “fixed”).

[Figure 8.6. Learning curves at state ((0, 1, 0, 0), (0, 6, 0, 0)) when L = 2: (a) the secondary users; (b) the attackers.]

In Figure 8.7, we compare the accumulated average payoffs of these strategies at each iteration t′, calculated using
r̄(t′) = (1/t′) Σ_{t=1}^{t′} r(s(t), a(t), a_J(t)).                                      (8.29)

[Figure 8.7. Average payoffs of different strategies (“fixed”, “myopic”, “optimal”).]

[Figure 8.8. Summed discounted payoffs of different strategies (“fixed”, “myopic”, “optimal”).]
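The accumulated average payoff in (8.29) is a plain running mean of the stage payoffs; a minimal sketch (the payoff sequence here is random placeholder data, not the simulated game's payoffs):

```python
import random

random.seed(0)
payoffs = [random.uniform(0, 1) for _ in range(8000)]  # r(s(t), a(t), aJ(t)) samples

avg = []
total = 0.0
for tp, r in enumerate(payoffs, start=1):
    total += r
    avg.append(total / tp)   # accumulated average payoff r̄(t') after t' stages
```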
We see that, since the “optimal” strategy and the “myopic” strategy maximize the worst-case performance, whereas the “fixed” strategy merely picks an action uniformly regardless of the attackers' strategy, the former two strategies achieve a higher average payoff than the “fixed” strategy. Moreover, as shown in Figure 8.8, since the “optimal” strategy also considers the future payoff when optimizing the strategy at
the current stage, the “optimal” strategy achieves the highest sum of discounted payoffs (15% more than the “myopic” strategy and 42% more than the “fixed” strategy). Therefore, when the secondary users face a group of intelligent attackers that can adapt their strategy to the environment dynamics and the opponent's strategy, adopting minimax-Q learning in the stochastic anti-jamming game model achieves the best performance.
8.6 Summary and bibliographical notes
In this chapter, we have studied the design of an anti-jamming defense mechanism in a cognitive radio network. Considering that the spectrum environment is time-varying, and that the cognitive attackers can use an adaptive strategy, we model the interactions between the secondary users and the attackers as a stochastic zero-sum game. The secondary users adapt their strategy for reserving and switching between control and data channels according to their observations of the spectrum availability, channel quality, and attackers' actions. Simulation results show that the optimal policy obtained from minimax-Q learning in the stochastic game achieves much better performance, in terms of spectrum-efficient throughput, than both the myopic learning policy, which merely maximizes the payoff at each stage without considering the environment dynamics and the attackers' cognitive capability, and a random defense policy. The stochastic game framework can be generalized to model various defense mechanisms in other layers of a cognitive radio network, since it can model well the different dynamics due to the environment as well as the cognitive attackers.

Malicious attackers can launch various types of attacks in different layers of a cognitive radio network. In [76], the authors studied the primary-user-emulation attack, in which the cognitive attackers mimic the primary signal to prevent secondary users from accessing the licensed spectrum. The authors investigated a localization-based defense mechanism that verifies the source of the detected signals by observing the characteristics of the signal and estimating its location. The work in [75] investigated the spectrum sensing data-falsification attack, and considered a weighted sequential probability ratio test to alleviate the performance degradation due to sensing error.
Other possible security issues such as denial-of-service attacks in cognitive radio networks are discussed in [38] and [55]. However, the authors of most of these works [38] [55] provide only qualitative analysis of the countermeasures, and have not considered the real dynamics in the spectrum environment and the cognitive attackers’ capability to adjust their attacking strategy. Jamming attacks in wireless networking have been studied extensively, and existing anti-jamming solutions include physical-layer defenses, such as directional antennas [312] and spread spectrum [350], link-layer defenses such as channel hopping [302] [154] [461] [234], and network-layer defenses, such as spatial retreats [473].
9 Opportunistic multiple access for cognitive networks
In this chapter, opportunistic multiple access to the under-utilized channel resources is investigated. By exploiting source burstiness, secondary cognitive nodes utilize primary nodes’ periods of silence to access the channel and transmit their packets. Cognitive relays could also make use of these silent periods to offer spatial diversity without incurring losses of bandwidth efficiency. First, we consider the cognitive cooperation protocol and discuss two different relay-assignment schemes. A comparison of the two schemes is carried out through a maximum stable throughput analysis of the network. Then, secondary nodes’ access to the remaining idle channel resources is considered. Queueing-theoretic analysis and numerical results reveal that, despite the fact that relays occupy part of the idle resources to provide cooperation, secondary nodes surprisingly achieve higher throughput in the presence of relays. The rationale is that relays help primary nodes empty their queues at faster rates; therefore, secondary nodes observe increased channel access opportunities. Moreover, the scenario where secondary nodes present themselves as relays to the primary network is also presented. By working as relays, secondary nodes further help primary nodes empty their queues, hence increasing their channel access opportunities and achieving higher throughput, as is revealed by our analytical and numerical results.
9.1 Introduction
The scarcity of energy and bandwidth, the two fundamental resources for communications, imposes severe limitations on the development of communications networks in terms of capacity and performance. Among the new technologies that have emerged recently in the effort to intelligently and efficiently utilize these scarce resources are cooperative communications and cognitive spectrum sharing. Both technologies have shown great potential for enhancing the performance of wireless networks and meeting the demands of future wireless applications. In cooperative communications, a portion of the channel resources is assigned to one or more relays for cooperation. These relays cooperate with a source node to help forward its data to a destination. This can achieve spatial diversity because the data are transmitted via spatially independent channels, but also results in some loss of bandwidth efficiency because of the channel resources assigned to the relays to perform their task.
On the other hand, cognitive radio prescribes the coexistence of licensed (or primary) and unlicensed (secondary or cognitive) radio nodes on the same bandwidth. While the first group is allowed to access the spectrum at any time, the second seeks opportunities for transmission by exploiting the idle periods of primary nodes. It is clear that cooperative communications and cognitive radios are closely related problems in the sense that the available unused or under-used channel resources can be utilized to improve the primary system’s performance via cooperation, or can be shared by a secondary system to transmit new information. Despite this fact, these two problems have been studied independently. In this chapter, we address the issue of exploiting the under-utilized channel resources by using both cognitive cooperative relays and cognitive secondary nodes. Our main focus is on how this coexistence of primary relays and secondary nodes affects the performance of both primary and secondary networks. At a first glance one might jump to the conclusion that, since relays are part of the primary network and thus have higher priority over secondary nodes, the primary network will benefit from cooperation while secondary nodes will suffer from reduced channel access opportunities. We prove that this argument is not correct, and that, even in a situation with interfering relays and secondary transmissions, both networks will benefit from the presence of relays in terms of maximum stable throughput. First, we consider the uplink of a TDMA network as the primary network, and study how cognitive relays can exploit the empty time slots to offer help to the primary nodes. In [391], this problem was studied in a network with a single relay. Here we consider the effect of multiple relays and address the issue of how relays share the empty time slots among themselves. 
Furthermore, two relay-selection criteria are considered, namely the nearest neighbor and the maximum success probability, and their performance in terms of maximum stable throughput is thoroughly investigated. Then we consider the issue of secondary nodes also trying to exploit empty time slots in the primary network. While most of the research on cognitive radios has focused on the dynamic spectrum sharing aspect of the problem, this chapter focuses on the opportunistic multiple access aspect of the problem in the TDMA framework. To gain access to the channel, secondary nodes sense the channel for any primary activity, namely transmissions by either primary nodes or relay nodes. In order to have an upper and lower bound on the system’s performance, we study two different scenarios. The first is when secondary nodes have the ability to perfectly sense relays’ transmissions, and thus access the channel when all primary nodes’ and relay nodes’ queues are empty. In the second scenario secondary nodes cannot sense relays’ transmissions at all. Since the cognitive principle is based on the idea that the presence of the secondary system should be transparent to the primary system (in our case both primary and relay nodes), appropriate countermeasures should be adopted at the secondary nodes to minimize interference with relay transmissions. The stability region is characterized for these two scenarios and compared with that in the case in which the primary network does not employ relays. Analytic and numerical results reveal that, although relays occupy part of the empty time slots that would have been available to secondary nodes, it is always beneficial both to primary and to secondary nodes that the maximum possible number of relays be employed. On the one hand, relays help the primary network
Opportunistic multiple access for cognitive networks
achieve higher stable throughput by offering different reliable paths along which packets can reach the destination. On the other hand, relays will help primary nodes empty their queues at much faster rates, thus providing secondary nodes with more opportunities to transmit their own information. It is interesting to note that, even when secondary nodes interfere with relays’ transmissions, there is a significant improvement in both primary and secondary throughput due to this fast rate of emptying the queues. Given the observation that cooperation in the primary network helps create opportunities for secondary nodes, we also consider the possibility that secondary nodes work as relays and complement the operation of the primary relays. The rationale is that having packets relayed by the secondary nodes can further help empty primary nodes’ queues and decrease the load of the primary relays, therefore creating transmitting opportunities for the secondary nodes. Through our analysis and numerical results we try to answer the following question: can relaying of primary packets by secondary nodes increase the stable throughput of the secondary network, especially when secondary transmissions are interfering with the primary relays? The rest of the chapter is organized as follows. The network and channel models used are described in Section 9.2. The cognitive cooperative protocol and relay-selection criteria are presented and analyzed in Section 9.3. Then, secondary nodes are introduced into the system, and the interaction between relays and secondary nodes, and secondary nodes’ relaying capability are studied in Section 9.4. Finally, we present conclusions regarding this work in Section 9.5.
9.2
Network and channel models
We consider the uplink of a TDMA cellular network as the primary network. The primary network consists of M_p source nodes numbered 1, 2, ..., M_p communicating with a base station (BS), denoted by d, located at the center of the cell, as illustrated in Figure 9.1. As part
Figure 9.1
The network's queuing model, showing the base station and the primary, relay, and secondary nodes.
of the primary network, M_r cognitive relay nodes numbered 1, 2, ..., M_r are deployed to help primary nodes forward their packets to the base station. The relay nodes will exploit the under-utilized channel resources (time slots in this case) to forward primary packets without incurring any loss in bandwidth efficiency. A secondary group consisting of M_s nodes numbered 1, 2, ..., M_s forms an ad hoc network, and tries to exploit the unutilized channel resources to communicate its own data packets using slotted ALOHA as its MAC protocol. We consider a circular cell of radius R. The BS is located at the center of the cell, and the different nodes are uniformly distributed within the cell area. Let \mathcal{M}_p = \{1, 2, \ldots, M_p\} denote the set of primary nodes, \mathcal{M}_r = \{1, 2, \ldots, M_r\} the set of relay nodes, and \mathcal{M}_s = \{1, 2, \ldots, M_s\} the set of secondary nodes.
9.2.1
The channel model
The wireless channel between a node and its destination is modeled as a Rayleigh flat-fading channel with additive white Gaussian noise. The signal received at a receiving node j from a transmitting node i at time t can be modeled as

y_{ij}^t = \sqrt{G_i \rho_{ij}^{-\gamma}}\, h_{ij}^t x_i^t + n_{ij}^t, \qquad (9.1)

where G_i is the transmit power, assumed to be the same for all nodes, ρ_ij denotes the distance between the two nodes, γ is the path-loss exponent, and h_{ij}^t is the channel fading coefficient between nodes i and j at time t, which is modeled as an i.i.d. zero-mean, circularly symmetric complex Gaussian random process with unit variance. The term x_i^t denotes the transmitted packet with average unit power, and the n_{ij}^t denote i.i.d. additive white Gaussian noise processes with zero mean and variance N_0. Since the arrivals, the channel gains, and the additive noise processes are all assumed to be stationary, we can drop the index t without loss of generality. Success and failure of packet reception are characterized by outage events and the outage probability. Outage in the link between nodes i and j is defined as the event that the received SNR falls below a certain threshold β:

\mathrm{SNR}_{ij} = \frac{\rho_{ij}^{-\gamma} G_i |h_{ij}|^2}{N_0} \le \beta. \qquad (9.2)
The SNR threshold β is determined according to the application and the transmitter/receiver structure. If the received SNR is higher than the threshold β, the receiver is assumed to be able to decode the received message with negligible probability of error. Using the channel model (9.1), the outage probability can be calculated as follows:

\Pr\{\mathrm{SNR}_{ij} < \beta\} = P_{ij}^{o} = 1 - \exp\left(-\frac{\beta N_0}{G_i \rho_{ij}^{-\gamma}}\right). \qquad (9.3)

Since the secondary ad hoc network adopts slotted ALOHA as its MAC protocol, there will be situations in which more than one packet transmission occurs simultaneously. In these situations, the outage event in the link between nodes i and j
is defined as the event that the received signal-to-interference-plus-noise ratio (SINR) falls below a certain threshold β:

\mathrm{SINR}_{ij} = \frac{G_i \rho_{ij}^{-\gamma} |h_{ij}|^2}{N_0 + \sum_{k \in \mathcal{M}} G_k \rho_{kj}^{-\gamma} |h_{kj}|^2} \le \beta, \qquad (9.4)

where \mathcal{M} denotes the set of nodes transmitting simultaneously with node i and interfering with it. Given the channel model (9.1), it can be shown that the outage probability conditioned on the set of interfering nodes \mathcal{M} is given by

\Pr\{\mathrm{SINR}_{ij} < \beta\} = P_{ij}^{o}(\mathcal{M}) = \left(\prod_{n \in \mathcal{M}} \frac{1}{\rho_{nj}^{-\gamma} G_n}\right) \sum_{k \in \mathcal{M}} \frac{\rho_{kj}^{-\gamma} G_k}{\prod_{l \in \mathcal{M} \setminus k} \left(\frac{1}{\rho_{lj}^{-\gamma} G_l} - \frac{1}{\rho_{kj}^{-\gamma} G_k}\right)} \left[1 - \frac{\exp\left(-\frac{N_0 \beta}{\rho_{ij}^{-\gamma} G_i}\right)}{1 + \frac{\beta \rho_{kj}^{-\gamma} G_k}{\rho_{ij}^{-\gamma} G_i}}\right]. \qquad (9.5)
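As a numerical sanity check on the outage model, the closed form (9.3) can be compared against a Monte Carlo draw of the exponential fading gain, and the conditional outage with interference can be evaluated through a compact product form, 1 − e^{−βN0/σ_i} ∏_{k∈M} (1 + βσ_k/σ_i)^{−1} with σ_m = G ρ_{mj}^{−γ}, which is algebraically equivalent to the partial-fraction expression (9.5) under Rayleigh fading. This is an illustrative sketch (the function names are our own; the parameter values are borrowed from the numerical setup of Section 9.3.4):

```python
import math
import random

# Illustrative parameters (from the numerical setup in Section 9.3.4).
G, GAMMA, N0 = 0.1, 3.7, 1e-11
BETA = 10 ** 3.5                      # 35 dB SNR threshold

def outage_prob(rho):
    """Eq. (9.3): Rayleigh-link outage probability with no interference."""
    return 1.0 - math.exp(-BETA * N0 / (G * rho ** (-GAMMA)))

def sinr_outage_prob(rho_ij, intf_rhos):
    """Outage given interferers (product form equivalent to eq. (9.5));
    intf_rhos lists the interferer-to-receiver distances rho_kj."""
    sigma_i = G * rho_ij ** (-GAMMA)              # mean power of the desired link
    prod = 1.0
    for rho_kj in intf_rhos:
        prod *= 1.0 / (1.0 + BETA * (G * rho_kj ** (-GAMMA)) / sigma_i)
    return 1.0 - math.exp(-BETA * N0 / sigma_i) * prod

def outage_prob_mc(rho, trials=100_000, seed=1):
    """Monte Carlo check of (9.3): |h|^2 ~ Exp(1), outage when SNR < BETA."""
    rng = random.Random(seed)
    scale = G * rho ** (-GAMMA) / N0              # SNR = scale * |h|^2
    return sum(scale * rng.expovariate(1.0) < BETA for _ in range(trials)) / trials
```

With no interferers the product form collapses to (9.3), and adding interferers can only increase the outage probability.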
9.2.2
The queuing model
Each primary, relay, or secondary node has an infinite buffer for storing fixed-length packets. The channel is slotted in time, and the slot duration equals the packet transmission time. The arrivals at the ith primary node's queue (i ∈ \mathcal{M}_p) and at the jth secondary node's queue (j ∈ \mathcal{M}_s) are Bernoulli random variables, i.i.d. from slot to slot, with means λ_i^p and λ_j^s, respectively. Hence, the vector Λ = (λ_1^p, …, λ_{M_p}^p, λ_1^s, …, λ_{M_s}^s) denotes the average arrival rates. Arrival processes are assumed to be independent from one node to another. Primary users access the channel by dividing the channel resources, time in this case, among them; hence, each node is allocated a fraction of the time. Let ω_p = (ω_1^p, ω_2^p, …, ω_{M_p}^p) denote a resource-sharing vector, where ω_i^p ≥ 0 is the fraction of time allocated to node i ∈ \mathcal{M}_p, or, equivalently, the probability that node i is allocated the whole time slot [229]. The set of all feasible resource-sharing vectors is specified as follows:

\Omega_p = \left\{ \omega_p = (\omega_1^p, \omega_2^p, \ldots, \omega_{M_p}^p) \in \mathbb{R}_+^{M_p} : \sum_{i \in \mathcal{M}_p} \omega_i^p \le 1 \right\}. \qquad (9.6)
In a communication network, the stability of the network's queues is a fundamental performance measure. The system is called stable for a given pair (Λ, ω_p) of arrival-rate and resource-sharing vectors if all the queues are stable, i.e., the primary nodes', secondary nodes', and relays' queues are all stable. If any queue is unstable, then the whole
system is considered unstable. For an irreducible and aperiodic Markov chain with a countable number of states, the chain is stable if and only if there is a positive probability of each queue being empty, i.e.,

\lim_{t \to \infty} \Pr\{Q_i(t) = 0\} > 0. \qquad (9.7)
(For a rigorous definition of stability under more general scenarios see [412] and [359].) If the arrival and departure processes of a queuing system are strictly stationary, then one can apply Loynes’ theorem to check for stability conditions [253]. This theorem states that, if the arrival and departure processes of a queuing system are strictly stationary, and the average arrival rate is less than the average departure rate, then the queue is stable; if the average arrival rate is greater than the average departure rate, then the queue is unstable.
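Loynes' criterion is easy to visualize with a toy slotted-queue simulation (our own illustration, not part of the chapter's model): Bernoulli(λ) arrivals and a per-slot service that succeeds with probability μ. When λ < μ the queue is often empty and stays short; when λ > μ it drifts upward at rate λ − μ.

```python
import random

def simulate_queue(lam, mu, slots=20_000, seed=7):
    """Slotted queue with Bernoulli(lam) arrivals and Bernoulli(mu) service.
    Returns (final queue length, fraction of slots the queue was empty)."""
    rng = random.Random(seed)
    q = 0
    empty = 0
    for _ in range(slots):
        if q == 0:
            empty += 1
        if q > 0 and rng.random() < mu:   # one departure if service succeeds
            q -= 1
        if rng.random() < lam:            # one arrival
            q += 1
    return q, empty / slots

if __name__ == "__main__":
    print(simulate_queue(0.2, 0.5))  # lam < mu: stable
    print(simulate_queue(0.8, 0.3))  # lam > mu: unstable
```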
9.3
Multiple relays for the primary network In a TDMA system without relays, if a node does not have a packet to transmit, its time slot remains idle, i.e., channel resources are wasted. The possibility of utilizing these wasted channel resources to provide some sort of spatial diversity and increased reliability to the TDMA system by employing a single cooperative relay node was considered in [391]. Here we consider the case of a network with multiple relays. We assume that relays can sense the communication channel to detect empty time slots. This assumption is reasonable for the orthogonal multiple-access scheme used, since there is no interference, and the relay can employ coherent or feature detectors that have a high detection probability. In the presence of interference, knowledge of the interference structure can help in the detection; however, this is beyond the scope of this chapter. The second assumption we make is that the errors and delay in packet-acknowledgment feedback are negligible, which is reasonable for short-length ACK/NACK packets since low-rate codes can be employed in the feedback channel.
9.3.1
The cooperation protocol First, we describe the relays’ cooperation protocol. For the purpose of protocol description and analysis we will assume that the relay-selection phase has already taken place, and that every primary node has assigned to it the best relay from the group of available relays. Note that every primary node gets help from only one relay, but a relay might help more than one primary node. Later in this section we will discuss two different relay-selection criteria and compare them. • At the beginning of a time slot, a node transmits the packet at the head of its queue (if any) to the destination. Owing to the broadcast nature of the wireless channel, relays can listen to the packets transmitted by the nodes to the BS. • If the packet is not received correctly by the BS, a NACK message is fed back from the BS declaring the packet’s failure. If the relay assigned to the packet owner was
Figure 9.2
The time-slot structure (a sensing period followed by a data period), showing the sensing period used by the relays to detect the presence of a primary user.
able to decode the packet correctly, it stores the packet in its queue and sends back an ACK message to declare successful reception of the packet at the relay. We assume that the ACK/NACK feedback is immediate and error-free.
• The node drops the packet from its queue if it is correctly received by either the BS or the relay (i.e., an ACK is received from either the BS or the relay).
• At the beginning of each time slot, a primary node transmits a beacon signal prior to transmitting its data packet, as shown in Figure 9.2. This beacon signal enables relay nodes to reliably sense the presence of a primary user. On the basis of the beacon-sensing outcome, a relay decides either to listen to the primary transmission or, if there is no primary transmission, to transmit a packet from its own queue.
• Relays distribute the available time slots among themselves in a TDMA fashion. Therefore, if a time slot is detected to be empty, this free time slot is assigned to relay i ∈ \mathcal{M}_r with probability ω_i^r. As in TDMA networks, ω_r = (ω_1^r, ω_2^r, …, ω_{M_r}^r) denotes a resource-sharing vector, and the set of all feasible resource-sharing vectors is specified as follows:

\Omega_r = \left\{ \omega_r = (\omega_1^r, \omega_2^r, \ldots, \omega_{M_r}^r) \in \mathbb{R}_+^{M_r} : \sum_{i \in \mathcal{M}_r} \omega_i^r \le 1 \right\}. \qquad (9.8)
• Relay i then transmits the packet at the head of its queue. • We assume that there is enough guard time at the beginning of each time slot to enable sensing, and that channel sensing is error-free.
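The TDMA-style sharing of detected idle slots can be sketched as a weighted random assignment, each idle slot going to relay i with probability ω_i^r (any unallocated probability mass leaves the slot unassigned). The function name and weights below are illustrative assumptions of ours:

```python
import random
from collections import Counter

def assign_idle_slots(omega_r, n_slots, seed=3):
    """Assign each detected idle slot to one relay, relay i being chosen
    with probability omega_r[i]; weights must sum to at most 1, and any
    remainder leaves the slot unassigned (returned as index -1)."""
    rng = random.Random(seed)
    relays = list(range(len(omega_r))) + [-1]
    weights = list(omega_r) + [max(0.0, 1.0 - sum(omega_r))]
    return [rng.choices(relays, weights=weights)[0] for _ in range(n_slots)]

if __name__ == "__main__":
    slots = assign_idle_slots([0.5, 0.25, 0.25], 10_000)
    print(Counter(slots))
```

Over many idle slots the empirical share of slots each relay receives approaches its ω_i^r, which is what the stability analysis below relies on.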
9.3.2
Maximum stable throughput analysis
In this section we characterize the maximum stable throughput region of the cooperative protocol and compare it against the maximum stable throughput of TDMA without cooperation. For the whole system to be stable, all queues therein should be stable. Hence, the stability region of the network is the intersection of the stability regions of the source nodes' queues and the relay nodes' queues. First, consider the stability region of the system defined by the source nodes' queues. A source node succeeds in transmitting a packet if either the BS or its assigned relay receives the packet successfully. Therefore, the success probability of node i can be calculated as

P_i = \Pr\{\bar O_{id} \cup \bar O_{ir_i}\} = \left(1 - P_{id}^o\right) + \left(1 - P_{ir_i}^o\right) - \left(1 - P_{id}^o\right)\left(1 - P_{ir_i}^o\right), \qquad (9.9)
where \bar O_{ij} denotes the complement of the event that the channel between node i and receiver j ∈ {r_i, d} (r_i denotes node i's relay, and d the BS) is in outage, i.e., the event that the packet was received successfully. If source node i has no relay assigned to it, its success probability is then given by

P_i = \Pr\{\bar O_{id}\} = 1 - P_{id}^o. \qquad (9.10)

Since each queue i ∈ \mathcal{M}_p behaves exactly as in a TDMA system with success probability determined by (9.9) or (9.10), it follows from Loynes' theorem that the primary nodes' stability region R_p is defined as

R_p = \left\{ (\lambda_1^p, \ldots, \lambda_{M_p}^p) \in \mathbb{R}_+^{M_p} : \lambda_i^p < \omega_i^p P_i, \ \forall i \in \mathcal{M}_p, \ (\omega_1^p, \ldots, \omega_{M_p}^p) \in \Omega_p \right\}, \qquad (9.11)

which can be easily shown to be equivalent to

R_p = \left\{ (\lambda_1^p, \ldots, \lambda_{M_p}^p) \in \mathbb{R}_+^{M_p} : \sum_{i \in \mathcal{M}_p} \frac{\lambda_i^p}{P_i} \le 1 \right\}. \qquad (9.12)
Next we consider the stability of the relays' queues. In order to apply Loynes' theorem, it is required that the arrival and service processes of the relays' queues be stationary. Let Q_j^t denote the size of the jth (j ∈ \mathcal{M}_r) relay queue at time t; then its evolution can be modeled as

Q_j^{t+1} = \left(Q_j^t - Y_j^t\right)^+ + X_j^t, \qquad (9.13)

where X_j^t represents the number of arrivals in time slot t and Y_j^t denotes the possibility of serving a packet in this time slot from the jth relay queue (Y_j^t takes values in {0, 1}). The function (·)^+ is defined as (x)^+ = max(x, 0). Now we establish the stationarity of the arrival and service processes. If the source nodes' queues are stable, then by definition the departure processes from these nodes are stationary. A packet departing from a node's queue is stored in the relay's queue (i.e., counted as an arrival) if the following two events happen simultaneously: the node–destination channel is in outage and the node–relay channel is not in outage. Hence, the process of arrival in the queue can be modeled as follows:

X_j^t = \sum_{i \in S_j} 1\left[ \{Q_i^t \neq 0\} \cap A_i^t \cap O_{id}^t \cap \bar O_{ij}^t \right], \qquad (9.14)

where 1[·] is the indicator function and A_i^t denotes the event that slot t is assigned to source node i. {Q_i^t ≠ 0} denotes the event that node i's queue is not empty, i.e., the node has a packet to transmit, and, according to Little's theorem [25], it has probability λ_i^p/(ω_i^p P_i), where P_i is node i's success probability as defined in (9.9) and (9.10). Finally, S_j denotes the set of source nodes that relay j is assigned to help. The random processes involved in the above expression are all stationary; hence, the arrival process for the relay is stationary. The average arrival rate in the relay's queue can be computed as
\lambda_j^r = E\left[X_j^t\right] = \sum_{i \in S_j} \lambda_i^p \frac{P_{id}^o \left(1 - P_{ij}^o\right)}{P_i}. \qquad (9.15)
Similarly, we establish the stationarity of the service process of the jth relay queue. The service process of the relay queue depends by definition on the empty slots made available by the primary nodes, and on the channel from relay to destination not being in outage. Assuming the source nodes' queues to be stable, they offer stationary empty slots (a stationary service process) to the relay. Also the channel statistics are stationary; hence, the relay's service process is stationary. The service process of the jth relay's queue can be modeled as

Y_j^t = \sum_{i \in \mathcal{M}_p} 1\left[ \{Q_i^t = 0\} \cap A_i^t \cap \bar O_{jd}^t \cap U_j^t \right], \qquad (9.16)
where U_j^t is the event that the current idle time slot is assigned to relay j to service its queue, which has probability ω_j^r according to the TDMA resource-sharing policy employed by the relays. The average service rate of the relay can then be determined from the following equation:

\mu_j^r = E\left[Y_j^t\right] = \left(1 - \sum_{i \in \mathcal{M}_p} \frac{\lambda_i^p}{P_i}\right) \left(1 - P_{jd}^o\right) \omega_j^r. \qquad (9.17)
Using Loynes' theorem, the stability condition for the jth relay queue is λ_j^r < μ_j^r. The stability region R_r of the system comprised of the relays' queues is then defined as

R_r = \left\{ (\lambda_1^p, \ldots, \lambda_{M_p}^p) \in \mathbb{R}_+^{M_p} : \lambda_j^r < \mu_j^r, \ \forall j \in \mathcal{M}_r, \ (\omega_1^r, \ldots, \omega_{M_r}^r) \in \Omega_r \right\}, \qquad (9.18)

which can easily be shown to be equivalent to

R_r = \left\{ (\lambda_1^p, \ldots, \lambda_{M_p}^p) \in \mathbb{R}_+^{M_p} : \sum_{j \in \mathcal{M}_r} \frac{\sum_{i \in S_j} \lambda_i^p \frac{P_{id}^o \left(1 - P_{ij}^o\right)}{P_i}}{\left(1 - \sum_{i \in \mathcal{M}_p} \frac{\lambda_i^p}{P_i}\right)\left(1 - P_{jd}^o\right)} \le 1 \right\}. \qquad (9.19)
Finally, the maximum stable throughput region of the complete system defined by the source nodes' and relays' queues is given by the intersection of the maximum stable throughput regions of the source nodes' queues and the relays' queues, R_p ∩ R_r, which can be shown to be equal to R_r.
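Conditions (9.12) and (9.19) reduce to simple inequalities that can be checked numerically for a given arrival-rate vector. The sketch below uses our own helper names and illustrative numbers; in practice the outage probabilities would come from (9.3), and P_i = 1 − P_id^o P_ij^o is just the inclusion–exclusion form of (9.9):

```python
def in_Rp(lam, P):
    """Primary stability region, eq. (9.12): sum_i lam_i / P_i <= 1."""
    return sum(l / p for l, p in zip(lam, P)) <= 1.0

def in_Rr(lam, P, P_id_o, P_ij_o, P_jd_o, helped):
    """Relay stability region, eq. (9.19).
    helped[j] lists the primary nodes assigned to relay j;
    P_ij_o[i] is the outage of the link from primary i to its relay."""
    slack = 1.0 - sum(l / p for l, p in zip(lam, P))   # fraction of idle slots
    if slack <= 0.0:
        return False
    total = 0.0
    for j, S_j in enumerate(helped):
        arrival = sum(lam[i] * P_id_o[i] * (1.0 - P_ij_o[i]) / P[i] for i in S_j)
        total += arrival / (slack * (1.0 - P_jd_o[j]))
    return total <= 1.0

if __name__ == "__main__":
    # Two symmetric primaries helped by one relay (illustrative numbers).
    P_id_o, P_ij_o, P_jd_o = [0.6, 0.6], [0.1, 0.1], [0.2]
    P = [1.0 - a * b for a, b in zip(P_id_o, P_ij_o)]  # 1 - P_id^o * P_ij^o, from (9.9)
    print(in_Rr([0.2, 0.2], P, P_id_o, P_ij_o, P_jd_o, [[0, 1]]))
    print(in_Rr([0.35, 0.35], P, P_id_o, P_ij_o, P_jd_o, [[0, 1]]))
```

Since the overall region equals R_r, the relay condition is the binding one in such a check.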
9.3.3
Relay selection Up to this point it has been assumed that for each source node there is a relay selected from the pool of available relays to provide help to that node. In this section we discuss two different relay-assignment criteria and compare their performance in terms of maximum stable throughput.
9.3.3.1
Nearest neighbor
It is noted from (9.9) that the probability of a successful transmission by a source node (i.e., correct reception by either the destination or the relay) is an increasing function of the success probability of the source–relay link. This probability is in turn a decreasing function of the distance between the source node and the relay node (9.2). Therefore, in order to maximize the source nodes' service rates, each source node should get help from its closest relay. Although this criterion for relay selection maximizes the source nodes' service rates, it suffers from some performance degradation if the success probability of the source–destination link is better than that of the relay–destination link. This is a result of the following observation. It is well known that the stability region of TDMA is determined by a hyperplane. It is noted from (9.19) that the stability region of the cooperative protocol is also bounded by a hyperplane. Therefore, it suffices to compare the intersections of these hyperplanes with the coordinate axes when comparing the two stability regions. Considering the ith source node, this intersection for the cooperative protocol is equal to

\lambda_i^{p*}(\text{Coop}) = \frac{P_i \left(1 - P_{jd}^o\right)}{P_{id}^o \left(1 - P_{ij}^o\right) + \left(1 - P_{jd}^o\right)}, \qquad (9.20)

where it was assumed that relay node j is assigned to source node i. The corresponding value for TDMA is given by

\lambda_i^{p*}(\text{TDMA}) = 1 - P_{id}^o. \qquad (9.21)

For source node i to benefit from the relay's help, we should have λ_i^{p*}(Coop) > λ_i^{p*}(TDMA). Using (9.20) and (9.21), this condition is equivalent to P_{jd}^o < P_{id}^o, i.e., the relay–destination link has a larger success probability than does the source–destination link. This is intuitive, since one cannot expect to gain any benefit by moving packets from a queue with a certain service rate to a queue with a lower service rate. On the basis of this observation, the relay-selection criterion is modified such that a source node selects its nearest-neighbor relay from the group of relays that are closer to the destination than the source node itself is. For the implementation of such a selection criterion, we assume that each user can learn its distance from the destination through, for example, calculating the average received power. Then, through a simple distributed protocol, each node sends out a "Hello" message searching for its nearest neighbors. This can be done using time-of-arrival (TOA) estimation, for example; see [402] and [377]. Each source node then selects the nearest-neighbor relay node that is closer to the destination than the source node itself is.
9.3.3.2
Maximum success probability The nearest-neighbor criterion considers only the maximization of the source node’s service rate, and makes sure that there will be no performance degradation due to the selection of an ill-positioned relay.
In order to maximize the network's stability region, it is necessary to consider also the service rates of the relays in the relay-selection process. Intuitively, it is beneficial (from a stability point of view) to favor relays with higher service rates over ones with lower service rates. To take the relay–BS link into consideration, we consider the following criterion, whereby source node i selects a relay according to

\arg\max_{j \in \mathcal{M}_r} \left(1 - P_{ij}^o\right)\left(1 - P_{jd}^o\right), \quad \text{s.t.} \ P_{jd}^o < P_{id}^o, \qquad (9.22)

i.e., node i selects the relay that maximizes the overall packet-transmission success probability over both the source–relay and relay–BS links, under the constraint that the relay–BS link has a higher success probability than the source–BS link. Using the definition of the outage probability (9.2), it can be shown that the relay-selection criterion of (9.22) is equivalent to

\arg\min_{j \in \mathcal{M}_r} \rho_{ij} + \rho_{jd}, \quad \text{s.t.} \ \rho_{jd} < \rho_{id}, \qquad (9.23)

where ρ_ij is the distance between source node i and relay node j, and ρ_jd is the distance between relay node j and the destination. Therefore, the maximum-success-probability criterion reduces to a minimization of the sum of the source–relay and relay–BS distances. It is noteworthy that, for a given source node, the optimal relay location is at the midpoint of the line between the source node and the destination. This selection criterion can be implemented using the same distributed protocol as was described above.
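Both selection rules can be sketched directly from node coordinates. This is an illustrative sketch with our own names; the destination (BS) is placed at the origin, and the candidate set keeps only relays closer to the destination than the source is:

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nearest_neighbor(src, relays, dest=(0.0, 0.0)):
    """Closest relay among those closer to the destination than the source is."""
    ok = [r for r in relays if dist(r, dest) < dist(src, dest)]
    return min(ok, key=lambda r: dist(src, r)) if ok else None

def max_success(src, relays, dest=(0.0, 0.0)):
    """Eq. (9.23): minimize rho_ij + rho_jd over the same candidate set."""
    ok = [r for r in relays if dist(r, dest) < dist(src, dest)]
    return min(ok, key=lambda r: dist(src, r) + dist(r, dest)) if ok else None

if __name__ == "__main__":
    src = (160.0, 0.0)
    relays = [(150.0, 40.0), (80.0, 0.0), (190.0, 10.0)]
    print(nearest_neighbor(src, relays), max_success(src, relays))
```

In the demo the two criteria disagree: the nearest-neighbor rule picks the closest admissible relay, while the maximum-success-probability rule picks the relay near the source–destination midpoint.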
9.3.4
Numerical results
To gain insight into the analysis, we consider the following scenario. M_p = 20 source nodes and M_r = 1 to 20 relay nodes are deployed uniformly in a circular cell of radius R = 200 m; the BS is located at the center of the cell. The path-loss exponent is γ = 3.7 and the SNR threshold is β = 35 dB. The transmitted signal power is G = 100 mW, and the noise power is N_0 = 10^{-11}. For ease of illustration we assume that the primary network is symmetric, i.e., λ_i^p = λ_p/M_p for i ∈ \mathcal{M}_p. Figure 9.3 compares the maximum stable throughput of the cooperative and non-cooperative networks as a function of the number of relays in the network. Furthermore, it shows a performance comparison of the two relay-selection schemes. It is clear that the cooperative protocol outperforms its non-cooperative counterpart; even with a single relay (which of course is not helping all the nodes) a 25% increase in throughput is achieved. As the number of relays increases we notice a fast increase in throughput; for example, with five relays the throughput is increased by 128%. Increasing the number of relays to ten leads to a 167% increase in throughput. This is mainly because increasing the number of relays increases the number of source nodes that are getting help from these relays, hence leading to higher throughput. From Figure 9.3, it can be seen that the "maximum-success-probability" relay-selection criterion outperforms the "nearest-neighbor" criterion by a margin of 3% to 4% on average. Furthermore, it is to be noted that the gap between the two criteria increases with increasing number of relays. This is due to the fact that, with an increased
Figure 9.3
Aggregate maximum stable throughput vs. number of relays, for M_p = 20 primary nodes (No Relay, Nearest Neighbor, and Max. Success Prob. schemes).
relay density in the network, there will be a higher probability that a source node finds a relay at or near the optimal relay position corresponding to that source node. While the “maximum-success-probability” criterion will be able to select the relay at the optimal (or near-optimal) location, the “nearest-neighbor” criterion will always pick the closest relay to the source node.
9.4
Opportunistic multiple access for secondary nodes In the previous section, the problem of utilizing the idle channel resources to enable cognitive relays to help source nodes forward their packets was considered. Aside from being used by relays, these idle channel resources could be used by a group of secondary (unlicensed) nodes to transmit their own data packets. Therefore, the use of these idle channel resources (time slots, in our network) offers either diversity to the primary nodes through the group of relays, or multiplexing through the group of secondary nodes that send new information over the channel. In this section, we study the effect of sharing the idle time slots between relays and secondary nodes on the performance of both primary and secondary networks. Mainly, we investigate how the secondary network’s throughput is affected when some of the idle channel resources are used by the relays and how the primary network’s throughput is affected when secondary transmissions interfere with relay transmissions. Furthermore, we study the possibility that secondary nodes work as relays for the primary network. By working as relays, the secondary nodes aim at creating more transmission opportunities for themselves by helping primary nodes empty their queues at a faster rate.
The secondary network consists of M_s nodes forming an ad hoc network, in which nodes are grouped into source–destination pairs, each source node communicating with its associated destination node. To access the channel, secondary nodes use the beacon sent by primary nodes at the beginning of each time slot, as shown in Figure 9.2, to detect primary activity. As with the relays, we assume that the primary-user-detection process is error-free. To share the idle time slots among the nodes of the secondary network, secondary nodes employ slotted ALOHA as their MAC protocol. Therefore, whenever an idle slot is detected, secondary nodes with nonempty queues attempt to transmit their packets with channel-access probability α_s. It is clear that relays and secondary nodes make use of the primary beacon signal at the start of a time slot to detect the presence of a primary user prior to attempting transmission. Because of that, and because of timing differences, situations may arise in which secondary nodes are unable to detect relays' transmissions. In such situations, secondary transmissions will interfere with relays' transmissions. To take this interference into consideration, we study two extreme cases. The first is when the secondary nodes are never able to detect relays' transmissions, and thus always interfere with the relays. The second is when secondary nodes always detect relays' presence successfully, so that there is no interference at all. The study of these two cases enables us to find inner and outer bounds on the maximum stable throughput region of the network.
9.4.1
Maximum stable throughput analysis
9.4.1.1
Case I: maximum interference
Since secondary nodes are always interfering with relay nodes, the system of relay and secondary nodes' queues is interacting. In other words, the service process of a given queue depends on the state of all the other queues, i.e., on whether they are empty or not. Studying the stability conditions for interacting queues is a difficult problem that has been addressed for ALOHA systems. The concept of dominant systems was introduced and employed in [359] to help find bounds on the stability region of ALOHA with a collision channel. The dominant system was defined by allowing a set of terminals with no packets to continue transmitting dummy packets. In this manner, the queues in the dominant system stochastically dominate the queues in the original system: with the same initial queue sizes in the original and dominant systems, the queue sizes in the dominant system are never smaller than those in the original system. To study the stability of the interacting system of queues consisting of the relay and secondary nodes' queues, we make use of the dominant-system approach to decouple the interaction between the queues. We define the dominant system as follows:
• Arrivals at each queue in the dominant system are the same as in the original system.
• Time slots assigned to primary node i ∈ \mathcal{M}_p in the two systems are identical.
• The outcomes of the "coin tossing" (which determines the transmission attempts of the relay and secondary nodes) in every slot are the same.
• Channel realizations for the two systems are identical.
• The noise generated at the receiving ends of the two systems is identical.
• In the dominant system, secondary nodes attempt to transmit dummy packets when their queues are empty.
Because the service processes of primary queues both in the original system and in the dominant system are independent of the state of any other queue in the network, primary queues evolve identically in the two systems, given identical initial queue sizes. On the other hand, given identical initial queue sizes for the original system and the dominant system, the relay and secondary queues in the dominant system are always no shorter than those in the original one. This follows because relay nodes suffer from an increased collision probability, and thus longer queues, in the dominant system compared with the original one, since secondary nodes always have a packet to transmit (possibly a dummy packet). This implies that relay nodes' queues empty faster in the original system, and therefore relays experience a lower probability of collision than in the dominant system and, as a result, will have shorter queues. Consequently, the stability conditions for the dominant system are sufficient for the stability of the original system.
First, we consider the arrival and departure processes of relay queues. The process of arrival of packets in relay queues depends only on primary queues and on the primary–relay and primary–destination channel conditions. Therefore, the average arrival rate for the relay is still given by (9.15). The service process of the relay's queue depends by definition on the empty slots available from primary nodes and on the channel from relay to destination not being in outage. The service process of the jth relay queue can then be modeled as

Y_j^t = \sum_{i \in \mathcal{M}_p} 1\left[ \{Q_i^t = 0\} \cap A_i^t \cap U_j^t \cap \bar O_{jd}^t(S) \right], \qquad (9.24)

where \bar O_{jd}^t(S) is the event that the relay–BS link is not in outage, given the set S of simultaneously transmitting secondary nodes. The average service rate of the jth relay is given by

\mu_j^r = E\left[Y_j^t\right] = \left(1 - \sum_{i \in \mathcal{M}_p} \frac{\lambda_i^p}{P_i}\right) \left(1 - P_{jd}^{o,\mathrm{avg}}\right) \omega_j^r, \qquad (9.25)

where P_{jd}^{o,\mathrm{avg}} is the outage probability of the relay–BS link averaged over all possible combinations of simultaneously transmitting secondary nodes, and is given by

P_{jd}^{o,\mathrm{avg}} = \sum_{n=1}^{M_s} \alpha_s^n (1 - \alpha_s)^{M_s - n} \sum_{i=1}^{|S_n|} P_{jd}^o\left(S_n^i\right) + (1 - \alpha_s)^{M_s} P_{jd}^o, \qquad (9.26)

where S_n^i is the ith element of the set S_n of all possible combinations of n out of M_s secondary nodes, which has cardinality |S_n| = \binom{M_s}{n}. The outage probabilities P_{jd}^o and P_{jd}^o(S_n^i) are given by (9.3) and (9.5), respectively.
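The averaging in (9.26) can be evaluated by enumerating subsets of secondary nodes with itertools.combinations. The sketch below is illustrative (our own names and parameters) and uses a compact product form of the conditional outage probability, which is algebraically equivalent to (9.5) under Rayleigh fading:

```python
import math
from itertools import combinations

G, GAMMA, N0, BETA = 0.1, 3.7, 1e-11, 10 ** 3.5   # illustrative parameters

def cond_outage(sigma_j, intf_sigmas):
    """Relay-BS outage given interferer mean received powers
    (product form equivalent to eq. (9.5))."""
    prod = 1.0
    for s in intf_sigmas:
        prod *= 1.0 / (1.0 + BETA * s / sigma_j)
    return 1.0 - math.exp(-BETA * N0 / sigma_j) * prod

def avg_outage(sigma_j, sec_sigmas, alpha_s):
    """Eq. (9.26): outage averaged over the random set of transmitting
    secondary nodes, each active independently with probability alpha_s."""
    Ms = len(sec_sigmas)
    total = (1.0 - alpha_s) ** Ms * cond_outage(sigma_j, [])
    for n in range(1, Ms + 1):
        weight = alpha_s ** n * (1.0 - alpha_s) ** (Ms - n)
        for combo in combinations(sec_sigmas, n):   # the |S_n| = C(Ms, n) subsets
            total += weight * cond_outage(sigma_j, combo)
    return total
```

Setting α_s = 0 recovers the interference-free outage of (9.3), and increasing α_s can only raise the average outage, since interference shifts weight toward larger interfering sets.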
Next, we consider the service processes of the secondary queues. The service process of a secondary node depends on the idle time slots left unused by the primary nodes and on the state of the relays' queues. Therefore, the service process of the kth secondary node can be modeled as

Y_k^t = \sum_{i \in \mathcal{M}_p} \sum_{j \in \mathcal{M}_r} 1\Big[ \{Q_i^t = 0\} \cap A_i^t \cap U_j^t \cap P_s^t \cap \Big( \big(\{Q_j^t = 0\} \cap \bar O_{kd}^t(S)\big) \cup \big(\{Q_j^t \neq 0\} \cap \bar O_{kd}^t(S \cup j)\big) \Big) \Big], \qquad (9.27)

which is the event that the primary node to which the current time slot has been assigned has an empty queue, the kth secondary node has permission to transmit (the event P_s^t), and the secondary–destination link is not in outage, given the set S of transmitting secondary nodes and the cases in which the jth relay queue is either empty or not. Assuming the primary and relay nodes' queues to be stable, they offer stationary empty slots. Also the channel statistics are stationary; hence, the secondary service process is stationary. The average secondary service rate is then given by

\mu_k^s = E\left[Y_k^t\right] = \alpha_s \left(1 - \sum_{i \in \mathcal{M}_p} \frac{\lambda_i^p}{P_i}\right) \sum_{j \in \mathcal{M}_r} \omega_j^r \sum_{n=0}^{M_s - 1} \alpha_s^n (1 - \alpha_s)^{M_s - 1 - n} \sum_{l=1}^{\binom{M_s - 1}{n}} \left[ \left(1 - \frac{\lambda_j^r}{\mu_j^r}\right) \big(1 - P_{kd}^o(S_n^l)\big) + \frac{\lambda_j^r}{\mu_j^r} \big(1 - P_{kd}^o(S_n^l \cup j)\big) \right], \qquad (9.28)

where S_n^l is now the lth of the \binom{M_s - 1}{n} possible combinations of n out of the other M_s − 1 secondary nodes, and λ_j^r/μ_j^r is the probability that the jth relay queue is not empty. Using Loynes' theorem and (9.15), (9.25), and (9.28), the maximum aggregate stable throughput of the secondary network is defined by the following optimization problem:

\max_{\alpha_s} \sum_{k \in \mathcal{M}_s} \mu_k^s(\alpha_s), \quad \text{s.t.} \ \lambda_j^r < \mu_j^r, \ \forall j \in \mathcal{M}_r. \qquad (9.29)
This optimization problem requires only a one-dimensional search and can be solved by standard methods [20].

Up to this point we have proved only the sufficient conditions for stability. To prove the necessary conditions we follow an argument similar to that used in [359] and [311] for ALOHA systems to prove the indistinguishability of the dominant and original systems at saturation. Consider the dominant system, in which secondary nodes transmit dummy packets. If, along some realization of the secondary queues of nonzero probability, the secondary queues never empty, then the original system and the dominant system are indistinguishable. Thus, with a particular initial condition, if the secondary queues never empty with nonzero probability in the dominant system (i.e., it is unstable), then the secondary queues in the original system must be unstable as well. This means that the boundary of the stability region of the dominant system is also the boundary of the stability region of the original system. Thus, the conditions for stability of the dominant system are both sufficient and necessary for the stability of the original system.
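The one-dimensional search over the access probability α_s can be sketched as a simple grid search; here `mu_s`, `mu_r`, and `lam_r` are hypothetical stand-ins for the rate expressions of (9.16), (9.25), and (9.28):

```python
def max_stable_throughput(mu_s, mu_r, lam_r, grid=1000):
    """Solve the one-dimensional problem (9.29) by grid search over the
    secondary access probability alpha_s in [0, 1].

    mu_s(alpha) -- aggregate secondary service rate  sum_k mu_k^s(alpha)
    mu_r(alpha) -- list of relay service rates mu_j^r(alpha)
    lam_r       -- list of relay arrival rates lambda_j^r
    (all hypothetical stand-ins for the closed-form rate expressions)
    """
    best_alpha, best_val = 0.0, 0.0
    for i in range(grid + 1):
        a = i / grid
        # Loynes' stability condition must hold for every relay queue.
        if all(lr < mr for lr, mr in zip(lam_r, mu_r(a))):
            val = mu_s(a)
            if val > best_val:
                best_alpha, best_val = a, val
    return best_alpha, best_val
```

For example, with a collision-limited toy model mu_s(a) = a(1 − a) and a single relay whose service rate shrinks with the secondary load, the search returns the unconstrained optimum when the relay constraint is slack, and a boundary point when it binds.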
9.4 Opportunistic multiple access for secondary nodes
9.4.1.2 Case II: no interference

Here we consider the case in which the secondary nodes are always able to successfully detect the relays' transmissions; the relay nodes therefore experience no interference from secondary transmissions. Since the primary and relay nodes are not affected by the presence of secondary nodes in the network (assuming perfect detection of idle and busy time slots), the stability region of the primary network is defined by the relay nodes' stability region (9.19), as discussed in the previous section. Through a dominant system similar to the one used for case I, we can study the stability of the secondary network. The service process of a secondary node depends on the idle time slots unused by the primary nodes and on the state of the relays' queues. Therefore, the service process of the kth secondary node can be modeled as

Y_k^t = \mathbf{1}\Big[\bigcup_{i\in\mathcal{M}_p}\big(\{Q_i^t=0\}\cap A_i^t\big)\cap\bigcap_{j\in\mathcal{M}_r}\{Q_j^t=0\}\cap P_s^k\cap O_{kd}^t(S)\Big],    (9.30)

which is the event that both the primary and the relay nodes to which the current time slot has been assigned have empty queues, the kth secondary node has permission to transmit, and the secondary–destination link is not in outage (given the set of transmitting secondary nodes). Assuming that the primary and relay nodes' queues are stable, they offer stationary empty slots. The channel statistics are also stationary; hence, the secondary service process is stationary. The average secondary service rate is then given by

\mu_k^s = E\big[Y_k^t\big] = \alpha_s\Big(1-\sum_{i\in\mathcal{M}_p}\frac{\lambda_i^p}{P_i}\Big)\prod_{j\in\mathcal{M}_r}\Big(1-\frac{\lambda_j^r}{\mu_j^r}\Big)\sum_{n\in\mathcal{M}_s\setminus k}\alpha_s^{n}(1-\alpha_s)^{M_s-1-n}\sum_{l=1}^{|S_n|}\big(1-P_{kd}^{o}(S_n^l)\big).    (9.31)
Using Loynes' theorem and (9.16), (9.25), and (9.31), the maximum stable throughput of the secondary network is defined by the following optimization problem:

\max_{\alpha_s}\ \sum_{k\in\mathcal{M}_s}\mu_k^s(\alpha_s),\quad\text{s.t.}\ \lambda_j^r<\mu_j^r,\ \forall j\in\mathcal{M}_r.    (9.32)

9.4.1.3 Numerical results

To study the sharing of idle time slots between relay and secondary nodes, and the effect of this sharing on the maximum stable throughput of both the primary and the secondary networks, we consider a network with Mp = 20 primary nodes, Ms = 10 secondary nodes, and Mr = 1, 5, and 10 relay nodes. The maximum-success-probability criterion is used for relay selection since it offers better performance.

Figure 9.4 depicts the stability region of the system consisting of the primary and secondary nodes' and relays' queues. It is clear that increasing the number of relays in
[Figure 9.4 — The stability region with various numbers of relays (no relays, 1 relay, 5 relays, 10 relays); λs versus λp. Mp = 20 primary nodes and Ms = 10 secondary nodes.]
the network leads to a significant improvement in the overall stability region, not merely in the primary nodes' stability; e.g., for λp = 0.2 we see an increase of around 460% in secondary throughput when ten relays are used. Although increasing the number of relays might be expected to consume more and more of the idle time slots, and hence to reduce the secondary nodes' chances of accessing the channel, the relays help the primary nodes empty their queues at a faster rate, and hence provide the secondary nodes with more opportunities to access the channel. We conclude that, when the secondary nodes are able to detect both primary and relay transmissions, it is always better to have the maximum possible number of relays, since this maximizes the throughput of both the primary and the secondary network.

Figure 9.5 depicts the stability region for a system with Mp = 20 primary users, Ms = 10 secondary users, and Mr = 1, 5, and 10 relay nodes. Note that, for low values of the aggregate primary arrival rate, the maximum stable throughput achievable by the secondary nodes is reduced by about 25% when a single relay is used compared with the case in which there are no relays. This gap in throughput decreases significantly as the number of relays is increased to ten. Although increasing the number of relays leads to an increased level of interference with the secondary nodes, and hence to further degradation of the secondary throughput, those relays help the primary nodes empty their queues at a much faster rate, which offers the secondary nodes more free time slots to access. Clearly, the gains from increasing the number of relays in the system outweigh the losses due to increased interference. These gains are even more significant for high values of the primary arrival rate, at which the secondary nodes achieve much higher throughput even in the case of increased interference.
[Figure 9.5 — The stability region for interfering relays and secondary nodes (no relays, 1 relay, 5 relays, 10 relays); λs versus λp. Mp = 20 primary nodes and Ms = 10 secondary nodes.]
9.4.2 Secondary nodes with relaying capability

Here we investigate the possibility of secondary nodes acting as relays for the primary network. The rationale for adding relaying capability to the secondary nodes is the following. As discussed in the previous section, increasing the number of relays in the network increases the probability that a primary node finds a relay close to the optimal location, thereby maximizing the service rates of the primary nodes and helping them empty their queues faster. Moreover, as the number of relays in the network increases, the arrival rate at each individual relay should on average decrease, which leads to an average decrease in the number of time slots required to empty the relays' queues. Clearly, higher primary service rates and lower required service rates for the relays create transmission opportunities for the secondary nodes. This increased number of time slots available to the secondary nodes has, of course, to be shared between the transmission of their own packets and the transmission of relayed packets. Assessing the benefits of this modified structure is therefore not trivial.

During the relay-selection process, secondary nodes present themselves as relay nodes, reporting their distances to the primary BS. Therefore, primary nodes observe an augmented pool of relays to choose from. We do not assume here that the secondary nodes are considered part of the group of primary relays; the secondary nodes access the channel in the same manner as that discussed in the previous section, i.e., employing slotted ALOHA with the possibility of interfering with the primary relays' transmissions. Each secondary node maintains two queues, one containing its own packets and the other collecting packets received from primary transmissions to be relayed to the primary BS. Whenever a secondary node has access to an idle slot,
it transmits a packet from its relaying queue with probability αr, or a packet from its own-packets queue with probability 1 − αr. Moreover, if one of the queues is empty, the node services the other queue with probability 1. From the point of view of the primary nodes, as far as relay operation is concerned, secondary nodes appear exactly as primary relays. Therefore, the average service rate of the primary nodes is given by (9.9), and the average arrival rate in the kth secondary relay queue is given by

\lambda_k^{sr} = \sum_{i\in S_k}\lambda_i^{p}\,\frac{P_{id}^{o}\big(1-P_{ik}^{o}\big)}{P_i},    (9.33)

where S_k denotes the set of primary nodes that have node k assigned as relay. By following an analysis similar to that in the previous section, and assuming that the node's own-packets queue contains dummy packets when it is empty, it can be shown that the average service rate of the kth secondary relaying queue is given by
\mu_k^{sr} = \mu_k^{s}\,\alpha_r,    (9.34)
where \mu_k^{s} is given by (9.28), with the difference that the destination is now the primary BS rather than another secondary node. The average service rate of the kth secondary node's own queue is given by

\mu_k^{ss} = \mu_k^{s}\Big(1-\alpha_r\frac{\lambda_k^{sr}}{\mu_k^{sr}}\Big),    (9.35)

which involves the event that either the node's relay queue is empty or the node's own-packet queue gets service (with probability 1 − αr). On substituting (9.34) into (9.35), and under the condition that the node's relay queue is stable (i.e., \lambda_k^{sr} < \mu_k^{sr}), the average service rate of the kth secondary node's own queue becomes

\mu_k^{ss} = \mu_k^{s}\Big(1-\frac{\lambda_k^{sr}}{\mu_k^{s}}\Big),    (9.36)

which is independent of the probability αr. This result might seem counter-intuitive, but from queueing theory [457] it is known that, for a stable queue, the average departure rate is equal to the average arrival rate. Therefore, given an arrival rate \lambda_k^{sr} at the node's relay queue and an αr that stabilizes that queue, the node's relay queue will always occupy the same average number of idle time slots. Consequently, the node's own-packets queue will always have the same average channel-access opportunities, irrespective of the value of αr. Thus, we can use αr = 1 to ensure the maximum service rate for the relay queue. Finally, the maximum aggregate stable throughput of the secondary network is defined by the following optimization problem:

\max_{\alpha_s}\ \sum_{k\in\mathcal{M}_s}\mu_k^{ss}(\alpha_s),\quad\text{s.t.}\ \lambda_j^{r}<\mu_j^{r}\ \forall j\in\mathcal{M}_r,\quad \lambda_i^{sr}<\mu_i^{sr}\ \forall i\in\mathcal{M}_s.    (9.37)
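The αr-independence behind (9.36) can be checked with a minimal slotted Monte Carlo sketch (all rates here are hypothetical, and the own-packets queue is assumed saturated, as in the dominant system):

```python
import random

def own_queue_service_rate(mu_s, lam_sr, alpha_r, slots=200_000, seed=1):
    """Monte Carlo check of (9.36): with a saturated own-packets queue, the
    own-queue service rate equals mu_s - lam_sr independently of alpha_r,
    provided the relaying queue is stable (lam_sr < alpha_r * mu_s).
    """
    rng = random.Random(seed)
    relay_q = 0        # relaying-queue occupancy
    own_served = 0     # own packets transmitted
    for _ in range(slots):
        if rng.random() < lam_sr:          # packet to relay for a primary node
            relay_q += 1
        if rng.random() < mu_s:            # node wins an idle slot
            if relay_q > 0 and rng.random() < alpha_r:
                relay_q -= 1               # serve the relaying queue
            else:
                own_served += 1            # serve the node's own queue
    return own_served / slots
```

Running this with, say, mu_s = 0.5 and lam_sr = 0.1 yields an own-queue rate close to 0.4 whether αr = 0.5 or αr = 1, matching the departure-rate-equals-arrival-rate argument above.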
[Figure 9.6 — The stability region for networks with and without secondary relaying capability; λs versus λp. Mp = 20 primary nodes and Ms = 10 secondary nodes.]
The first constraint ensures the stability of the primary relays' queues, while the second ensures the stability of the secondary relays' queues.

Figure 9.6 compares networks with and without secondary relaying capability. The network consists of Mp = 20 primary users, Ms = 10 secondary users, and Mr = 5 relay nodes. Note that enabling the secondary nodes to work as relays, complementing the primary relays, increases the network's stability region by an average of 33%. Since the secondary nodes work as relays and complement the primary relays' operation, there is an increased probability that primary source nodes can find relays at near-optimal positions to cooperate with. Therefore, primary nodes exhibit increased service rates and empty their queues faster. Moreover, as the total number of relays in the network increases, the arrival rates at the primary relays decrease, so the primary relays utilize a smaller portion of the idle time slots to empty their queues. As a result, the secondary nodes have access to more idle time slots with which to service both their relaying queues and their own data queues, leading to an overall increase in the network's stability.
9.5 Summary and bibliographical notes

In this chapter, the exploitation of idle channel resources in a TDMA network was investigated. First, the cognitive use of these idle channel resources by a group of relay nodes that help the source nodes was studied. Two different relay-selection criteria, namely the nearest neighbor and the maximum success probability, were considered and analyzed.
Stability analysis reveals that the use of cognitive relays can increase the maximum stable throughput of the network by up to 167%. Then the problem of sharing the idle channel resources between the group of relays and a group of secondary nodes trying to transmit new information over the network was considered. Two different scenarios were studied in detail. The first is when the secondary nodes can sense relay transmissions and organize their access to the channel accordingly; the second is when the secondary nodes interfere with the relays' transmissions in the idle time slots. These two scenarios form inner and outer bounds on the network's actual maximum-stable-throughput region. Numerical results reveal that, in both scenarios, it is beneficial to both the primary and the secondary network that the maximum possible number of relays is always used, because the gain to both networks due to cooperation outweighs the losses that might occur owing to interference between the relays' and secondary nodes' transmissions. Moreover, the ability of the secondary nodes to operate as relays was considered; the results reveal that a further increase in the maximum stable throughput can be achieved through the use of secondary relays.

In order to alleviate the loss of bandwidth efficiency caused by the channel resources assigned to the relays for their forwarding task, the authors of [391] developed a cognitive multiple-access protocol. The protocol exploits source burstiness to enable cooperation among different nodes in a TDMA network during silent periods. In other words, a cooperative relay detects and utilizes empty time slots in the TDMA frame to retransmit failed packets. Therefore, no extra channel resources are allocated for cooperation and the system suffers no bandwidth losses.
The authors analyzed the protocol's performance from a maximum-stable-throughput point of view, and their results revealed significant performance gains over conventional cooperation strategies. There have also been several previous investigations of efficient spectrum sharing in cognitive radio networks. In [103] and [213] the cognitive radio problem was investigated from an information-theoretic standpoint, where the cognitive transmitter is assumed to transmit at the same time and in the same bandwidth as the primary link, being able to mitigate its interference with the primary through complex precoding techniques based on perfect prior knowledge of the signal transmitted by the primary. Centralized and decentralized MAC-layer protocols intended to minimize the secondary nodes' interference with primary transmissions were studied in [509] and [468] by modeling the radio channel as either busy (i.e., the primary user is active) or available (i.e., the primary user is idle) according to a Markov chain.
Part II
Resource awareness and learning
10
Reinforcement learning for energy-aware communications
This chapter considers the problem of average throughput maximization relative to the total energy consumed in packetized sensor communications. A near-optimal transmission strategy that chooses the optimal modulation level and transmit power while adapting to the incoming traffic rate, buffer condition, and channel condition is presented. Many approaches require the state transition probability, which may be hard to obtain in a practical situation. This motivates the use of a class of learning algorithms, called reinforcement learning (RL), to obtain the near-optimal policy in point-to-point communication and a good transmission strategy in multi-node scenarios. For comparison purposes, stochastic models are developed to obtain the optimal strategy in point-to-point communication. We show that the learned policy is close to the optimal policy. We further extend the algorithm to solve the optimization problem in a multi-node scenario by independent learning. We compare the learned policy with a simple policy, whereby the agent chooses the highest possible modulation and selects the transmit power that achieves a predefined signal-to-interference ratio (SIR) given one particular modulation. The learning algorithm achieves more than twice the throughput per unit energy of the simple policy, particularly in the high-packet-arrival-rate regime. Besides its good performance, the RL algorithm results in a simple, systematic, self-organized, and distributed way to decide the transmission strategy.
10.1 Introduction

Recent advances in micro-electro-mechanical-system (MEMS) technology and wireless communications have made possible the large-scale deployment of wireless sensor networks (WSNs), which consist of small, low-cost sensors with powerful processing and networking capabilities. WSNs have found several important applications, such as battlefield surveillance, health-care monitoring, habitat monitoring, maintenance of modern highways, and management of future manufacturing systems. Owing to this broad range of potential applications, the WSN has been identified as one of today's most important technologies. A crucial characteristic of a WSN is that it should have a very long network lifespan, since human intervention for energy-supply replenishment might not be possible in many applications. A long network lifespan also implies very low energy consumption in each sensor.
The traditional low-power design that focuses mainly on circuits and systems has been shown to be inadequate in WSN applications. The stringent energy requirement of the sensor calls for highly energy-efficient resource allocation. Such energy-efficient resource allocation requires energy awareness, whereby the communication system reconfigures the transmission parameters of different communication layers according to the environment. The communication system's environment includes several aspects, such as the sensor's communication-channel condition, the sensor's buffer condition, and the energy left in each sensor node. Such cross-layer optimization can be realized in several ways. One practical solution is to employ an intelligent control agent that interacts with the different communication layers and dynamically reconfigures each layer's parameters. Several attempts to design resource-allocation protocols for WSNs are based on existing wireless resource-allocation methods. However, in practice, realistic probability models might not be available when the optimization is carried out. This motivates us to develop and investigate an optimization scheme that learns the optimal policy without knowing the probability model. In this chapter, we focus on average throughput maximization relative to the total energy consumed in packetized wireless sensor communications from an optimal-control point of view. We consider point-to-point communication and multi-node scenarios. In both cases, we assume that an intelligent control agent resides in the transmitter and decides the right action in the right situation. We utilize the RL algorithm to solve online optimization problems. In point-to-point communication, the communication takes place between one transmitter and one receiver. Before the transmission, the transmitter observes the number of packets in its buffer and the channel gain from the previous transmission.
Given this knowledge, the objective of the intelligent control agent is to find the best modulation level and transmission power to maximize the long-term average throughput relative to the total energy consumed. The long-term average throughput relative to the total energy consumed is obtained by averaging the throughput per unit energy at every transmission. The total energy consumed at every transmission consists of the transmission energy and the buffer processing cost. Clearly, the buffer in the transmitter is affected by the agent's decision. In this scenario, we compare the optimal policy with the policy learned by an RL algorithm and show that the RL algorithm obtains a near-optimal control policy. Moreover, we also compare the learned policy with a policy whereby the control agent chooses the highest possible modulation and uses the transmit power that achieves a predefined SIR given one particular modulation. We refer to this policy as the simple policy. We demonstrate that the learned policy obtains more than twice the throughput per unit energy of the simple policy, especially in the high-mean-packet-arrival-rate region. In contrast to point-to-point communication, in a multi-node scenario we consider N transmitters simultaneously communicating with one receiver. In multi-node scenarios the channel link experienced by one node depends on the transmission power (decision) employed by the other nodes. Hence, the optimal (equilibrium) solution generally depends on the policy employed by the other nodes. We extend the RL algorithm to solve the multi-node problem. We let every node independently learn its transmission
strategy on the basis of its buffer condition and the measured channel interference. Similarly, we compare the independently learned policy with the simple policy whereby each node chooses the highest modulation level and selects the transmit power level to achieve a predefined SIR at a given modulation. The modified RL algorithm provides a significant improvement in average throughput relative to the total energy consumed. The main points of this chapter are as follows. We present an optimization framework that generally captures several parameters from different communication layers and develop practical algorithms that are based on the RL algorithm to learn a near-optimal control policy in point-to-point communication and a good transmission strategy in multi-node scenarios. The optimization scheme is simple, inherently distributed, and self-organized. The rest of this chapter is organized as follows. In the next section, we review the MDP and its optimal solution. The RL algorithm is introduced in Section 10.3. In Section 10.4, we formulate the throughput maximization relative to the total energy consumed in a point-to-point communication scenario. The extension of this formulation to a multi-node scenario is presented in Section 10.5. A discussion of the applicability of the algorithms to WSNs is given in Section 10.6.
10.2 The Markov decision process and dynamic programming

An MDP [64] is defined as an (S, A, P, R) tuple, where S is the state space that contains all possible states of the system, A is the set of all possible control actions at each state, P is a transition function S × A × S → [0, 1], and R is a reward function S × A → R. The transition function defines a probability distribution over the next state as a function of the current state and the agent's action, i.e., [P]_{s_k,s_{k+1}}(a_k) = P_{s_k,s_{k+1}}(a_k) specifies the transition probability from state s_k ∈ S to state s_{k+1} ∈ S under the control action a_k ∈ A. Here, the notation [A]_{i,j} denotes the element in the ith row and jth column of matrix A. The transition probability function P describes the dynamics of the environment in response to the agent's current decision. The reward function specifies the reward incurred at state s_k ∈ S under control action a_k ∈ A.

The interaction between the agent and the environment in an MDP is illustrated in Figure 10.1. At time k, the control agent detects s_k ∈ S and decides an action a_k ∈ A. The decision a_k causes the state to evolve from s_k to s_{k+1} with probability P_{s_k,s_{k+1}}(a_k), and some reward R(s_k, a_k) is obtained.

[Figure 10.1 — Interaction between agent and environment in an MDP: the control agent issues action a_k to the environment, which returns the next state s_{k+1} and the reward R(s_k, a_k).]
The MDP's solution consists of finding the decision policy π : S → A that maximizes the objective function. Typical objective functions are the expected discounted reward, the expected total reward, and the average reward per stage [19]. Since we are interested in maximizing the long-term average throughput relative to the total energy consumed in packetized sensor communication, we focus on the average reward per stage, which is represented as

\rho^{\pi}(s_0) = \lim_{n\to\infty}\frac{1}{n}\,E^{\pi}\!\left[\sum_{k=0}^{n-1}R\big(s_k,\pi(s_k)\big)\right],\quad s_k\in S,\ \pi(s_k)\in A,    (10.1)

where ρ^π(s_0) is the average reward obtained using decision policy π when the initial state is s_0. The objective function (10.1) exactly describes the average throughput per unit energy that we want to maximize. We note that the expectation in (10.1) is the conditional expectation given one particular policy. The optimal policy is the decision rule that maximizes the average reward per stage ρ^π(s_k) over all possible policies π. When the Markov chain resulting from applying every stationary policy is recurrent or ergodic, it is well known that the optimal average reward per stage is independent of the initial state s_0 [19] [351]. Moreover, there exists an optimal stationary policy that satisfies Bellman's equation [19] for all s ∈ S:

\rho^{*}+h^{*}(s) = \max_{a\in A(s)}\left[R(s,a)+\sum_{s'=1}^{|S|}P_{s,s'}(a)\,h^{*}(s')\right],    (10.2)

where ρ* is the optimal average reward per stage and h*(s) is known as the optimal relative state-value function for each state s. For any stationary policy π, the corresponding average reward ρ^π and relative state value h^π(s) satisfy the following relation for all s ∈ S:

\rho^{\pi}+h^{\pi}(s) = R\big(s,\pi(s)\big)+\sum_{s'=1}^{|S|}P_{s,s'}\big(\pi(s)\big)\,h^{\pi}(s').    (10.3)
We note that several computational approaches that solve the Bellman optimality equation are known as dynamic programming (DP) approaches. In the next section, we introduce the RL algorithm and explain the relation between the update equations in the algorithm and the Bellman optimality equation (10.2).
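For a small MDP with a known model, the Bellman equation (10.2) can be solved numerically by relative value iteration, one of the DP approaches mentioned above. A minimal sketch (the state/action layout is illustrative):

```python
def relative_value_iteration(P, R, iters=2000, ref=0):
    """Solve the average-reward Bellman equation (10.2) by relative value
    iteration on a small MDP with a known model.

    P[a][s][s2] -- transition probability from state s to s2 under action a
    R[s][a]     -- immediate reward
    Returns the optimal average reward per stage rho* and a greedy policy.
    """
    n_s, n_a = len(R), len(R[0])
    h = [0.0] * n_s
    for _ in range(iters):
        # Q(s,a) = R(s,a) + sum_s2 P[a][s][s2] * h(s2)  -- RHS of (10.2)
        Q = [[R[s][a] + sum(P[a][s][s2] * h[s2] for s2 in range(n_s))
              for a in range(n_a)] for s in range(n_s)]
        h_new = [max(row) for row in Q]
        rho = h_new[ref]                  # subtracting a reference keeps h bounded
        h = [v - rho for v in h_new]
    policy = [max(range(n_a), key=lambda a: Q[s][a]) for s in range(n_s)]
    return rho, policy
```

On a toy two-state chain in which staying in state 1 pays reward 1, the iteration converges to rho* = 1 with the policy "move to state 1 and stay there".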
10.3 Reinforcement learning

The RL algorithm is a popular paradigm for solving learning-control MDPs. In RL, an agent learns to make optimal decisions by experiencing the rewards received as the results of its actions. Moreover, the agent does not require an explicit model of the environment. Hence, RL is useful when the agent has little knowledge of the environment.
An excellent tutorial on RL algorithms can be found in [37]. The essence of RL algorithms is to update the relative state-value function h(s) and ρ in (10.2) using incremental averaging. In the following, we develop the update equations of an RL algorithm step by step and show their connection with Bellman's equation. Define the operator

B\big(h^{\pi}(s)\big) = R\big(s,\pi(s)\big)+\sum_{s'=1}^{|S|}P_{s,s'}\big(\pi(s)\big)h^{\pi}(s').

The relation (10.3) can then be expressed as

h_{k+1}^{\pi}(s_k) = B\big(h^{\pi}(s_k)\big)-\rho_k,\qquad \rho_{k+1} = B\big(h^{\pi}(s_k)\big)-h_k^{\pi}(s_k).    (10.4)

The RL algorithm eliminates the need for knowledge of the state transition probability by replacing the operator B(·) with B'(h^{\pi}(s)) = R(s,\pi(s)) + h^{\pi}(s'), where s' is the next state occurring in the sample path. Obviously, the next state s' occurs with probability P_{s,s'}(\pi(s)). The RL algorithm learns the state-value function as

h_{k+1}^{\pi}(s_k) = (1-\alpha_k)h_k^{\pi}(s_k)+\alpha_k\big[B'\big(h_k^{\pi}(s_k)\big)-\rho_k\big] = h_k^{\pi}(s_k)+\alpha_k\big[R\big(s_k,\pi(s_k)\big)+h_k^{\pi}(s_{k+1})-h_k^{\pi}(s_k)-\rho_k^{\pi}\big].    (10.5)

Similarly, the average reward ρ is updated as

\rho_{k+1}^{\pi} = \rho_k^{\pi}+\beta_k\big[R\big(s_k,\pi(s_k)\big)+h_k^{\pi}(s_{k+1})-h_k^{\pi}(s_k)-\rho_k^{\pi}\big].    (10.6)

We note that α_k and β_k determine the weighting of the current and future estimates of the state-value function and the average reward. The term R(s_k,\pi(s_k))+h_k^{\pi}(s_{k+1})-h_k^{\pi}(s_k)-\rho_k^{\pi} is often referred to as the temporal difference [37], i.e., the error between the current estimate and the next. This temporal difference (error) guides the learning process; α_k and β_k determine the learning rates for the differential state-value function and the average reward. Since the Bellman equation chooses the action that optimizes the right-hand side (RHS) of (10.2), there should be some function related to the decision made in each iteration. The RL algorithm chooses the decision/action according to the Gibbs softmax method, i.e., action a_k is chosen in state s_k with probability

\Pr(a_k=a\,|\,s_k=s) = \frac{e^{p(s,a)}}{\sum_b e^{p(s,b)}}.

Whenever an action a_k is chosen in state s_k, the preference metric p(s_k, a_k) is updated as

p(s_k,a_k) = p(s_k,a_k)+\epsilon_k\big[R(s_k,a_k)+h_k(s_{k+1})-h_k(s_k)-\rho_k\big],    (10.7)

where ε_k determines the learning rate for the preference metric. The preference-metric update equation has the following interpretation. The algorithm is typically initialized with p(s, a) = 0, ∀s ∈ S, ∀a ∈ A, which implies that initially the algorithm chooses every action uniformly. As the iterations proceed, an action that increases the relative state-value function h_{k+1}(s_k) is prioritized by increasing the preference metric for choosing that particular action (10.7). In contrast, an action that results in a smaller relative state-value function (a negative temporal difference) is penalized by reducing its preference metric. In this sense, the set of equations (10.5)–(10.7) chooses the action that maximizes the RHS of (10.2); hence, the RL algorithm resembles the Bellman optimality equation. The set of equations (10.5)–(10.7) is also known as the actor–critic (AC) algorithm, which is presented in full in Section 10.4.2.
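The updates (10.5)–(10.7) with Gibbs softmax action selection can be sketched as a tabular actor–critic agent (the learning rates and the toy reward below are illustrative, not the chapter's sensor model):

```python
import math
import random

class ActorCritic:
    """Tabular average-reward actor-critic: h is the critic (10.5), rho the
    average-reward estimate (10.6), and p the actor preferences (10.7)."""

    def __init__(self, n_states, n_actions, alpha=0.1, beta=0.05, eps=0.5, seed=0):
        self.h = [0.0] * n_states
        self.p = [[0.0] * n_actions for _ in range(n_states)]
        self.rho = 0.0
        self.alpha, self.beta, self.eps = alpha, beta, eps
        self.rng = random.Random(seed)

    def act(self, s):
        # Gibbs softmax: Pr(a|s) = exp(p(s,a)) / sum_b exp(p(s,b))
        w = [math.exp(v) for v in self.p[s]]
        r, acc, total = self.rng.random(), 0.0, sum(w)
        for a, wa in enumerate(w):
            acc += wa / total
            if r <= acc:
                return a
        return len(w) - 1

    def update(self, s, a, reward, s_next):
        # temporal difference: R + h(s') - h(s) - rho
        delta = reward + self.h[s_next] - self.h[s] - self.rho
        self.h[s] += self.alpha * delta      # critic update (10.5)
        self.rho += self.beta * delta        # average-reward update (10.6)
        self.p[s][a] += self.eps * delta     # actor update (10.7)
```

In a one-state toy problem where action 1 pays reward 1 and action 0 pays 0, the preference for action 1 grows and the average-reward estimate approaches 1, mirroring the interpretation given above.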
[Figure 10.2 — Interaction of nodes in a distributed control setting: each transmitter i contains an intelligent optimal control agent that, given its buffer occupancy n_{b,k}^i, selects the modulation level m_k^i (BPSK–16PSK) and the transmit power p_{t,k}^i applied at the power amplifier (PA); the receiver feeds back ACK and SIR information to the transmitters.]
In the following sections, we formulate the average throughput maximization relative to the total energy consumed in WSNs as an MDP for point-to-point communication and the multi-node scenario. In both scenarios, we consider time-slotted packet transmission. The interaction between the communicating nodes for both scenarios can generally be illustrated as in Figure 10.2. Point-to-point communication can be considered as the special case in which only one transmitter and one receiver are participating in the communication. We will refer to this illustration when explaining the interaction between the optimal control agent and the environment.
10.4 Throughput maximization in point-to-point communication

We study the average throughput maximization relative to the total energy consumed, considering the channel condition, the transmitter buffer, the modulation, and the transmit power. In the following, we first present the reward function and the AC algorithm used to learn the near-optimal policy. In order to compare the learned strategy with the optimal solution, we then present the models that constitute the MDP in point-to-point communication; these models are required for solving the Bellman optimality equation. We note that the optimal-control framework does not depend on any particular model used in our formulation. Hence, more accurate models, should such be discovered, can be employed without changing the optimization framework.
10.4.1
Reward functions

Several utility functions or reward functions have been used in the context of power-control schemes. For the application of WSNs, the energy consumption, throughput,
and delay are all very critical parameters. We certainly do not want to minimize the energy consumption at the cost of an unacceptable throughput or infinite delay. Hence, we employ the number of successfully transmitted packets relative to the total energy consumed as our objective function. To enforce bounded-delay transmission, we incorporate the buffer processing cost/energy into the total energy, which is the summation of the transmission energy and the buffer processing cost/energy. Including the buffer processing cost/energy minimizes the possibility of buffer overflow, which can be interpreted as enforcing the quality of service (QoS). Suppose that the transmitter sends a packet consisting of L_b information bits, and let the number of bits in one packet after adding error-decoding code be L bits. The transmission rate is R bits/s. Figure 10.2 illustrates this scenario, where only one transmitter–receiver pair is communicating. We assume that the receiver feeds back its current received channel gain γ to the transmitter before the next transmission. Let m and p_t denote the modulation level and the transmission power. Also let S(Γ(γ, p_t), m) denote the probability of successful packet reception, where Γ(γ, p_t) is the targeted SIR. Let K denote the number of transmissions required to successfully transmit a packet. Assuming that each transmission is statistically independent, K is a geometric random variable with mass function

P_K(k) = S(\Gamma(\gamma, p_t), m)\,[1 - S(\Gamma(\gamma, p_t), m)]^{k-1}.   (10.8)
The time duration for each transmission is L/(Rm) seconds and the total retransmission time becomes K L/(Rm) seconds. When the transmitted power is p_t watts, the energy consumed per packet transmission is E[K] p_t L/(Rm), where E[·] denotes expectation. In one packet, the useful information portion is only L_b/L. Hence, the utility function becomes

\frac{\text{Good Packets}}{\text{Transmit Energy}} = \frac{L_b}{L} \cdot \frac{R \cdot m \cdot S(\Gamma(\gamma, p_t), m)}{L \cdot p_t},   (10.9)
where the unit of the utility function is packets per joule. Let p_b denote the buffer processing cost/energy. The buffer processing cost is assumed to be a monotonically increasing function of the number of packets in the buffer n_b, i.e. p_b = f(n_b). Thus, the reward function is expressed as

R((n_b, \gamma), (m, p_t)) =
\begin{cases}
\dfrac{L_b \cdot R \cdot m \cdot S(\Gamma(\gamma, p_t), m)}{L^2 \cdot (p_t + f(n_b))} \times 10^{-3} & \text{if } n_b \neq 0 \text{ and } p_t \neq 0, \\
0 & \text{otherwise},
\end{cases}   (10.10)

where (n_b, γ) is the aggregate state and (m, p_t) is the action that can be taken by the control agent. The reward function is interpreted as the number of good received packets relative to the total energy consumed. We note that the reward function is equal to zero if there is no packet in the buffer or no transmission occurs (the transmit power is zero). Also, adding the buffer processing cost means that the control agent will gradually become more aggressive as the buffer fills. Hence, the probability of buffer overflow will be decreased.
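As a concrete illustration, the reward computation in (10.10) can be sketched in Python. This is a minimal sketch, not part of the original formulation: the SIR map and the MPSK packet-success model follow the single-node parameters given later in Table 10.2 and equation (10.13), and the function names and default values are illustrative.

```python
import math

def target_sir(gamma_lin, p_t, W=10e6, R=100e3, A_t=1.916e-14, sigma2=5e-15):
    # Targeted SIR (10.13): channel gain x processing gain (W/R) x attenuated
    # transmit power over the thermal-noise variance.
    return gamma_lin * (W / R) * p_t * A_t / sigma2

def packet_success(sir, m, L=80):
    # S = (1 - P)^L with the MPSK bit-error approximation
    # P(sir, m) = erfc(sqrt(sir) * sin(pi / 2**m)) from Table 10.2.
    ber = min(math.erfc(math.sqrt(sir) * math.sin(math.pi / 2 ** m)), 1.0)
    return (1.0 - ber) ** L

def reward(n_b, gamma_lin, m, p_t, Lb=64, L=80, R=100e3,
           f=lambda n: 0.05 * (n + 4)):
    # Reward (10.10): good packets per unit energy (scaled by 1e-3); zero when
    # the buffer is empty or no transmission occurs.
    if n_b == 0 or p_t == 0:
        return 0.0
    S = packet_success(target_sir(gamma_lin, p_t), m, L)
    return Lb * R * m * S / (L ** 2 * (p_t + f(n_b))) * 1e-3
```

Note how the buffer cost f(n_b) enters the denominator: for the same channel gain, a fuller buffer raises the total energy charged per slot, pushing the agent toward more aggressive transmission.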
Figure 10.3 Actor–critic architecture. [The actor maps the state to a policy and issues actions to the environment; the critic maintains the value function and feeds the temporal-difference error, computed from the reward, back to the actor.]
10.4.2
Near-optimal solution using the actor–critic algorithm

In this section, we present the complete AC algorithm developed in Section 10.3 to solve an MDP with an average reward per stage. The architecture of an AC algorithm is shown in Figure 10.3. As we can infer from its name, the AC algorithm consists of two major parts, the actor and the critic. The policy structure is known as the actor, because it decides the action, and the estimated state-value function is known as the critic, since it generates a temporal difference (error) that criticizes the actions made by the actor. The complete AC algorithm is shown in Table 10.1. The AC algorithm uses the state-value function update and the average reward update as in (10.5) and (10.6). The actor selects the decision according to the Gibbs softmax method. In parallel with the discussion in Section 10.3, the Gibbs softmax method (10.7) acts as the actor, and the temporal difference serves as the critic (10.5) and (10.6). In the Gibbs softmax method, the actor chooses the action according to the conditional probability π(s_k, a_k) = Pr(a_k | s_k). The higher π(s_k, a_k) is, the more likely it is that a_k will be chosen. The algorithm starts with equal π(s_k, a_k) for every action. Therefore, the actor has an equal probability of choosing any of the available actions at the initial stage. This stage is often referred to as an exploration stage. As in all RL algorithms, the AC algorithm needs to balance the exploration and exploitation steps in the learning process: the exploitation step is used to apply the average-reward-maximizing decision found so far, and the exploration step is used to try out all possible decisions.
10.4.3
The optimal dynamic programming solution

As described in Section 10.2, the solution of the Bellman optimality equation requires knowledge of the state transition probability. Before describing the optimal solution, we present models of each of the components that constitute the MDP system in a point-to-point scenario as follows.
Table 10.1. The actor–critic algorithm

Initialize α, β, κ, k = 0, h(s_k) = 0 ∀s_k ∈ S, and ρ_k = 0.
Set the preference function p(s, a) = 0, ∀s ∈ S, ∀a ∈ A(s). Set s_0 arbitrarily.
Loop for k = 0, 1, 2, . . .
1. Choose a_k in s_k according to the Gibbs softmax method:
   π(s_k, a_k) = Pr(a_k = a | s_k = s) = e^{p(s,a)} / Σ_b e^{p(s,b)}.
2. Get the reward from the current decision and observe the next state s_{k+1}: r = R(s_k, a_k).
3. Evaluate the temporal difference (error): δ = r + h(s_{k+1}) − h(s_k) − ρ_k.
4. Update the relative state-value function and the average reward per state:
   h(s_k) = h(s_k) + αδ, ρ_{k+1} = ρ_k + βδ.
5. Update the actor preference: p(s, a) = p(s, a) + κδ.
End loop.
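A minimal Python sketch of Table 10.1 is given below for a generic environment exposed as a `step(state, action)` function. All names are illustrative; in particular, the actor step size is written `kappa` here (the symbol for this learning rate was lost in typesetting), and the environment interface is an assumption, not part of the original formulation.

```python
import math
import random
from collections import defaultdict

def actor_critic(step, actions, s0, n_iter=20000,
                 alpha=0.01, beta=0.0001, kappa=0.01, seed=0):
    """Average-reward actor-critic (Table 10.1).

    step(s, a) -> (reward, next_state) plays the role of the environment;
    kappa is the actor step size for the preference update.
    """
    rng = random.Random(seed)
    h = defaultdict(float)    # relative state-value function h(s)
    p = defaultdict(float)    # actor preferences p[(s, a)]
    rho = 0.0                 # average-reward estimate
    s = s0
    for _ in range(n_iter):
        # 1. Gibbs softmax selection: pi(s, a) = e^p(s,a) / sum_b e^p(s,b)
        mx = max(p[(s, a)] for a in actions)
        w = [math.exp(p[(s, a)] - mx) for a in actions]
        r_, acc, a = rng.random() * sum(w), 0.0, actions[-1]
        for ai, wi in zip(actions, w):
            acc += wi
            if r_ <= acc:
                a = ai
                break
        # 2. Reward from the current decision; observe the next state
        r, s_next = step(s, a)
        # 3. Temporal-difference error
        delta = r + h[s_next] - h[s] - rho
        # 4. Critic updates: relative state value and average reward
        h[s] += alpha * delta
        rho += beta * delta
        # 5. Actor preference update
        p[(s, a)] += kappa * delta
        s = s_next
    return p, h, rho
```

On a toy problem where one action always pays more than another, the preference for the better action grows and ρ converges toward the attainable average reward.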
10.4.3.1
The finite-state Markov channel (FSMC)

In point-to-point communication, the wireless channel dynamics can be modeled using a finite-state Markov channel (FSMC). The FSMC approach for wireless channels is to partition the received SNR or the equivalent channel gain into a finite number of intervals. Suppose that the channel gain is partitioned into K intervals with boundaries Γ_0 = 0 < Γ_1 < ··· < Γ_K. The channel gain is said to be in state k if it lies between Γ_{k−1} and Γ_k. In the packet-transmission system, the channel transition occurs at the time-slot boundary, and the channel gain is constant during one time slot of transmission. Furthermore, a channel transition occurs only from a given state to its two adjacent states, as in Figure 10.4. The state transition probability completely specifies the dynamics of the channel and is determined as follows.

1. Steady-state probabilities:

\pi_k = \int_{\Gamma_{k-1}}^{\Gamma_k} p(\gamma)\,d\gamma, \quad k = 1, \ldots, K.   (10.11)

In a Rayleigh fading channel, γ is exponentially distributed with probability density function p(γ) = (1/γ_0) exp(−γ/γ_0), where γ_0 is the average channel gain.

2. State transition probabilities:

p_c(k, k+1) = N(\Gamma_{k+1}) T_p / \pi_k, \quad k = 1, \ldots, K-1,
p_c(k, k-1) = N(\Gamma_k) T_p / \pi_k, \quad k = 2, \ldots, K,   (10.12)

where N(·) is the level-crossing function given by N(\Gamma) = \sqrt{2\pi\Gamma/\gamma_0}\, f_D \exp(-\Gamma/\gamma_0), T_p is the packet transmission time, and f_D is the maximum Doppler frequency.
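The FSMC quantities above can be computed directly for a Rayleigh channel. In this sketch the steady-state probabilities are denoted `pi`, and the interior partition boundaries passed in are illustrative (the chapter quantizes the gain in 2-dB steps, but the exact boundary placement is an assumption here).

```python
import math

def fsmc_params(boundaries, gamma0, f_D, T_p):
    """Rayleigh-fading FSMC quantities per (10.11)-(10.12).

    boundaries: interior partition thresholds [G_1, ..., G_{K-1}] on a linear
    scale (G_0 = 0 and G_K = infinity are appended here). Returns steady-state
    probabilities pi_k and the adjacent transition probabilities p_c(k, k+1)
    and p_c(k, k-1).
    """
    G = [0.0] + list(boundaries) + [math.inf]
    K = len(G) - 1
    # Exponential CDF of the channel gain (Rayleigh fading)
    cdf = lambda x: 1.0 if math.isinf(x) else 1.0 - math.exp(-x / gamma0)
    pi = [cdf(G[k + 1]) - cdf(G[k]) for k in range(K)]                  # (10.11)
    # Level-crossing rate N(G) = sqrt(2*pi*G/gamma0) * f_D * exp(-G/gamma0)
    N = lambda g: math.sqrt(2 * math.pi * g / gamma0) * f_D * math.exp(-g / gamma0)
    up = [N(G[k + 1]) * T_p / pi[k] for k in range(K - 1)]              # p_c(k, k+1)
    down = [N(G[k]) * T_p / pi[k] for k in range(1, K)]                 # p_c(k, k-1)
    return pi, up, down
```

With f_D = 50 Hz and T_p = 0.8 ms as in Table 10.2, the adjacent-state transition probabilities come out well below 1, consistent with the slow-fading assumption behind the FSMC model.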
Figure 10.4 An FSMC with K states. [Transitions occur only between adjacent states: from state k the possible transitions are p_{k,k−1}, p_{k,k}, and p_{k,k+1}.]

10.4.3.2
Construction of the state-transition probability

We construct the system state as the aggregate of the number of packets in the buffer, n_b, and the channel gain, γ, that is, s ≡ (n_b, γ). The control space consists of the modulation level and transmit power, i.e. a ≡ (m, p_t). The state transition probability maps (n_b, γ) × (n_b, γ) × (m, p_t) → [0, 1]. In particular, the state transition probability depends on the probability of packet arrival, the channel transition probability, and the probability of successful packet transmission. We model the packet arrival process as a Poisson process with mean packet arrival rate μ. The channel is modeled as an FSMC and the channel-gain state transition probability is calculated according to Section 10.4.3.1. The probability of successful packet transmission, S(Γ(γ, p_t), m), depends on the targeted SIR, Γ(γ, p_t), which is represented as

\Gamma(\gamma, p_t) = \gamma \times \frac{W}{R} \times \frac{p_t \times A_t}{\sigma^2},   (10.13)
where γ, modeled as the FSMC, is the variation in channel gain between the transmitter and the receiver, W denotes the total bandwidth of the transmission, R is the transmission rate (W/R is also known as the processing gain in the CDMA literature), A_t ∝ 1/d^4 is the attenuation factor resulting from the path loss, d is the distance between the transmitter and the receiver, and σ² is the variance of the thermal noise. Denote the number of packet arrivals as n_a = 0, 1, . . ., the probability of packet arrival as p_a(n_a), the probability of successful packet transmission as S(Γ(γ, p_t), m), and the channel-transition probability as p_c(γ_k, γ_{k+1}). Here, p_c(γ_k, γ_{k+1}) indicates the transition probability of the channel gain from state γ_k at time instant k to state γ_{k+1} at the next time instant. Suppose that the current state is s_k = (n_{b,k}, γ_k), where n_{b,k} is the number of packets in the buffer at time k and γ_k is the channel gain that is fed back. The action taken at time k is a_k = (m_k, p_{t,k}), where m_k and p_{t,k} denote the modulation level and the transmit power employed at time k. Assuming that the events of packet arrival, successful transmission, and channel transition are all mutually independent, the corresponding system state transition probability is determined as follows.

1. Transmission failure: s_{k+1} = (n_{b,k} + n_a, γ_{k+1}),

P_{s_k, s_{k+1}}(a_k) = p_a(n_a)\,(1 - S(\Gamma, m_k))\, p_c(\gamma_k, \gamma_{k+1}).   (10.14)
Table 10.2. Single-node simulation parameters

Packet size: L_b = 64, L = 80
System parameters: W = 10 MHz, R = 100 kbits/s, T_p = 0.8 ms, σ² = 5 × 10⁻¹⁵ W
Channel gain: f_D = 50 Hz, γ ∈ {−8, −6, ..., 8} dB, A_t = 1.916 × 10⁻¹⁴
Buffer cost: f(n_b) = 0.05(n_b + 4) if n_b ≠ max(n_b); max(n_b) = 7, f(max(n_b)) = 3
Modulation level: m = 1, 2, 3, 4 (BPSK, QPSK, 8PSK, 16PSK)
Packet success probability: S(Γ(γ, p_t), m) = (1 − P(Γ(γ, p_t), m))^L, P(Γ, m) = erfc(√Γ sin(π/2^m))
Transmit power: p_t = [0, 0.2, ..., 2] W
SNR range: Γ = [0, 1, ..., 24] dB
2. Successful transmission: s_{k+1} = (n_{b,k} + n_a − m_k, γ_{k+1}),

P_{s_k, s_{k+1}}(a_k) = p_a(n_a)\, S(\Gamma, m_k)\, p_c(\gamma_k, \gamma_{k+1}).   (10.15)
The formulation of the MDP has the following interpretation. Before transmission of a packet, the transmitter is in some state (obtained from the previous history of transmission, i.e. the buffer content and channel condition; see Figure 10.2). The transmitter uses this information to determine what modulation and transmit power should be used to maximize the average throughput relative to the total energy consumed. At the end of a packet transmission, the transmitter obtains feedback information from the receiver containing the quantized channel gain and ACK/NACK. The quantized channel gain is used to track the channel evolution γ_k. The ACK/NACK is used to update the buffer content. When an ACK signal is received, the transmitter will send the following packet at the next transmission time. Otherwise, it retransmits the packet. The number of successfully transmitted packets relative to the energy consumed during one transmission time is recorded as the reward, R(s_k, a_k) = R((n_{b,k}, γ_k), (m_k, p_{t,k})).
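The two-branch construction in (10.14)–(10.15) can be sketched as a single function returning P_{s,s'}(a). The helper signatures (arrival pmf, channel matrix, success probability) are placeholders standing in for the models of Section 10.4.3.1 and Table 10.2.

```python
import math

def transition_prob(s, s_next, a, p_a, p_c, succ):
    """System state transition probability per (10.14)-(10.15).

    s = (n_b, g): buffer occupancy and channel-state index; a = (m, p_t);
    p_a(n): packet-arrival pmf; p_c[g][g2]: channel transition matrix;
    succ(g, m, p_t): packet-success probability. All helpers are placeholders.
    """
    (n_b, g), (n_b2, g2), (m, p_t) = s, s_next, a
    S = succ(g, m, p_t)
    prob = 0.0
    n_a = n_b2 - n_b           # arrivals consistent with a failed transmission
    if n_a >= 0:
        prob += p_a(n_a) * (1.0 - S) * p_c[g][g2]     # (10.14)
    n_a = n_b2 - n_b + m       # arrivals consistent with a successful one
    if n_a >= 0:
        prob += p_a(n_a) * S * p_c[g][g2]             # (10.15)
    return prob
```

With Poisson arrivals and a stochastic channel matrix, summing this function over all next states recovers 1, as a transition kernel must (ignoring any buffer truncation).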
10.4.4
Numerical results

In this section, we construct the simulation using the parameters shown in Table 10.2. We note that the total number of states is 72 and the total number of actions is 44. Given the MDP state transition probability, the optimal solution of the posed MDP problem is solved numerically using the policy-iteration method [19]. We compare the optimal solution with the policy learned by the AC algorithm. The AC algorithm parameters are α = 0.01, β = 0.0001, and κ = 0.01. For comparison purposes, we also simulate a simple policy, whereby the transmitter tries to transmit at the highest throughput (modulation) possible, while maintaining a predefined link SNR for a given modulation. Specifically, the transmitter chooses BPSK to transmit when there is only
Figure 10.5 The performance of the learning algorithm. [Top panel: learned throughput, learned average throughput, and optimal average throughput (packets/millijoule) versus the number of packet transmissions in the point-to-point scenario, average arrival load 1.0. Bottom panel: the tracking capability of the learning algorithm.]
one packet in the buffer, and it chooses QPSK, 8PSK, and 16PSK to transmit when there are two packets, three packets, and four or more packets in the queue, respectively. For each modulation, the transmitter selects the transmit power to achieve a fixed predefined SNR. We use (6, 10, 15, 20) dB as the predefined link SNRs for BPSK to 16PSK, respectively. These predefined SNRs can achieve a probability greater than 80% of correct packet reception. Figure 10.5 shows the average throughput learned by the AC algorithm and the optimal throughput when μ = 2.0. It is obvious that the learned throughput is asymptotically very close to the optimal one. Moreover, owing to the selection of the small, constant learning parameters α, β, and κ (as opposed to learning parameters whose magnitude decreases with time), the AC algorithm has the ability to track variations in the governing probability, as demonstrated in Figure 10.5. In Figure 10.5, the mean packet-arrival rate varies as μ = (0.5, 1.0, 1.5, 2.0, 1.0). On the basis of the sample realization, the AC algorithm adjusts the learned policy by adapting to the different packet arrival rates. The capability of the AC algorithm to obtain the near-optimal policy and track variations in the governing probability is due to the fact that the algorithm explores all the possible decisions and selects the throughput-maximizing policy. This exploration is achieved by the initial stage of the Gibbs softmax method used in the actor part of the algorithm. The corresponding optimal and learned policies for μ = 2.0 are shown in Figure 10.6. In these figures, the channel is better when the channel gain is larger, and the buffer
Figure 10.6 Learned and optimal policies, packet-arrival load μ = 2.0. [Four surface plots: optimal and near-optimal transmit-power selection (watts) and modulation-level selection, each plotted against the buffer content and the channel gain (dB).]
content indicates the number of packets in the buffer. For the same buffer content, the optimal policy tends to use higher modulation levels when the channel is good and lower modulation levels when the channel is bad. The agent also tends to select higher power levels when the channel is bad in order to guarantee an acceptable throughput. At the same channel gain, as more packets are queued in the buffer, the agent becomes more aggressive and attempts a higher modulation and power level. This effect is due to the inclusion of the buffer processing cost in the reward function, which causes the agent to balance the transmission energy and the buffer processing cost/energy to obtain the maximum average throughput relative to the total energy expended. Moreover, the optimal DP and the near-optimal AC solutions jointly decide the best modulation and transmit power to maximize the average throughput relative to the energy expended. Figure 10.7 shows the throughput achieved by the optimal policy, the AC learned policy, and the simple policy described in the previous paragraph for various packet-arrival rates. It is obvious that the policy learned by the AC algorithm is very close to the optimal policy. Compared with the simple policy, the AC algorithm obtains two to three times the throughput relative to the total energy expended. Hence it is
Figure 10.7 Average throughput vs. packet arrival rate for load μ = 2.0. [Optimal, learned, and simple-policy average throughputs (packets/millijoule) plotted against the mean packet-arrival rate.]
a higher-energy-efficiency scheme. It is important to point out that the optimal solution might not be feasible in practical applications, since the optimal solution requires knowledge of the channel transition probability and packet arrival probability. The AC algorithm and simple-policy algorithm do not require any knowledge of the governing probability, but the AC algorithm is still able to obtain a near-optimal average throughput.
10.5
Multi-node energy-aware optimization

In this section, we extend the throughput maximization relative to the total energy consumed in point-to-point communication to the multi-node scenario shown in Figure 10.2, where multiple transmitters send packets to one receiver. The main difference between the two scenarios is the channel model. In point-to-point communication, the channel gain evolves according to the FSMC model and is unresponsive to the selection of the transmit power. The transmitter exercises all the available actions (modulation level and transmission power) to obtain the highest throughput per unit energy, while adapting to the channel variation, packet arrival rate, and buffer condition. In the multi-node scenario, the interference in one link depends on the power transmitted from other nodes. In fact, the channel experienced by one particular node depends on its previous decision and the other nodes' decisions. When one node increases its power level, it increases the interference experienced by the other nodes. This event may trigger the other nodes to increase their transmission power, which may in turn increase the channel interference experienced by the original node.
In the following, we first describe the channel model and problem formulation for multi-node communication. We develop an extension of the AC algorithm for the multi-node problem and evaluate its performance by simulations. Similarly to the point-to-point case, we compare the learned policy with the simple policy, whereby each transmitter chooses the highest modulation possible, while maintaining a predefined link SIR for a particular modulation.
10.5.1
The channel model for multi-node communication and problem formulation

The channel model in the multi-node scenario captures the interaction dynamics between nodes. This interaction is described in terms of the received SIR of each node. Suppose that there are N nodes that want to communicate simultaneously with the receiver. The SIR of link i can be expressed as [143]

\Gamma^i(p_t^1, \ldots, p_t^N) = \frac{W}{R} \cdot \frac{A_t^i\, p_t^i}{\sum_{j \neq i} A_t^j\, p_t^j + \sigma^2},   (10.16)
where W and R are the system bandwidth and transmission rate, p_t^i is the transmission power employed by node i, A_t^i is the path loss corresponding to link i, and σ² is the variance of the thermal noise. The path loss A_t^i depends on the distance between transmitter i and the receiver, that is, A_t^i = c/(d^i)^4, where d^i is the distance between transmitter i and the receiver. Equivalently, (10.16) can be written in dB as

\Gamma^i(p_t^1, \ldots, p_t^N)\big|_{\mathrm{dB}} = 10 \log_{10}\!\left(\frac{W}{R}\, p_t^i\right) - \eta^i,   (10.17)

where \eta^i = 10 \log_{10}\!\left[\left(\sum_{j \neq i} A_t^j\, p_t^j + \sigma^2\right) / A_t^i\right] is the equivalent interference of link i. The power transmitted by the other nodes influences the link quality of node i through the relation (10.17). The MDP formulation of the multi-node scenario is similar to the formulation of the point-to-point communication case. We point out the differences as follows. At time instant k, the system state at node i is s_k^i ≡ (n_{b,k}^i, η_k^i), where n_{b,k}^i and η_k^i are the number of packets in the transmitter's buffer and the quantized equivalent link interference experienced by node i, and the action space is a_k^i ≡ (m_k^i, p_{t,k}^i), where m_k^i and p_{t,k}^i denote the modulation level and transmission power employed by node i at time instant k, respectively. We assume that transmitting node i receives the quantized estimation of its link quality from the receiver through an error-free channel; hence the transmitting node knows the history of the quantized interference of its link, η^i. This implies that the control agent can fully observe its corresponding state, s^i ≡ (n_b^i, η^i). Having observed its system state s_k^i ≡ (n_{b,k}^i, η_k^i) at time instant k, every node exercises all the possible actions a_k^i ≡ (m_k^i, p_{t,k}^i) to maximize its average throughput per unit energy while adapting to the incoming traffic, buffer condition, and link quality. The reward obtained is represented as
R^i\big((n_b^i, \eta^i), (m^i, p_t^i)\big) =
\begin{cases}
\dfrac{L_b^i \cdot R \cdot m^i \cdot S(\Gamma^i, m^i)}{L^2 \cdot \big(p_t^i + f^i(n_b^i)\big)} \times 10^{-3} & \text{if } n_b^i \neq 0 \text{ and } p_t^i \neq 0, \\
0 & \text{otherwise},
\end{cases}   (10.18)
where the superscript i denotes node i. The actions taken by every node in the current transmission affect the link quality in the next transmission through (10.16). The objective of every node is to maximize its average throughput relative to the total energy consumed.
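The two equivalent SIR forms (10.16) and (10.17) can be cross-checked numerically. In this sketch the path-loss model follows Table 10.3 and the remaining defaults are borrowed from Table 10.2; the function names are illustrative.

```python
import math

def sir_linear(i, p, A, W=10e6, R=100e3, sigma2=5e-15):
    # Direct form (10.16): processing gain times received power over
    # interference-plus-noise.
    return (W / R) * A[i] * p[i] / (
        sum(A[j] * p[j] for j in range(len(p)) if j != i) + sigma2)

def sir_db(i, p, A, W=10e6, R=100e3, sigma2=5e-15):
    # Equivalent dB form (10.17), with eta_i the equivalent interference
    # of link i.
    eta = 10 * math.log10(
        (sum(A[j] * p[j] for j in range(len(p)) if j != i) + sigma2) / A[i])
    return 10 * math.log10((W / R) * p[i]) - eta
```

With the three-node distances of Table 10.3, the nearest node sees the highest SIR for equal transmit powers, which is why it achieves the highest throughput per unit energy in the simulations below.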
10.5.2
Extension of reinforcement learning

To solve the posed multi-node problem, we extend the single-agent AC algorithm in Table 10.1 so that each agent learns its policy independently. In this independent learning, each agent learns its transmission strategy under the assumption that the agent itself is the only agent influencing the evolution of its state. Each agent uses only its local state information to make its decision; that is, the agent does not take into account the states, actions, and rewards involved in other agents' decision-making processes. Although independent AC learning might not be optimal, it has several advantages. First, since no global information (the states and decisions of other agents) is used in the learning process, less control handshaking is required (nodes need not exchange their state information and the decisions employed). Second, the extension of the AC algorithm has the same computational complexity as in the single-agent scenario. Each agent is required only to update the relative state-value function, average reward, and actor preference function for the states and actions it experiences (Table 10.1). In this extension of single-agent RL, the AC algorithm is applied directly to each node in the multi-node scenario. Before the transmission of a packet, every node uses the Gibbs softmax method to make its decision on the basis of its current local state. At the end of the packet transmission, each node observes the ACK/NACK signal and receives feedback information from the receiver containing the channel link quality of the previous transmission. Also, each node observes the packet arrivals and records the reward (throughput per unit energy). The agent uses this information to update its state, state-value function, and learned average reward as in Table 10.1. The above procedure is repeated throughout the transmission.
We note that the resulting RL algorithm is inherently distributed, since every node applies the AC algorithm and makes its decision on the basis of its local information. For comparison purposes, we simulate a policy whereby each node tries to transmit as high a throughput as possible with a predefined SIR for one particular modulation. As before, we refer to this policy as the simple policy.
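As a stateless toy illustration of this independent operation (a deliberate simplification: the full scheme also tracks the buffer and quantized interference as state), each node below keeps only its own preferences and average-reward estimate and never exchanges information. The `local_reward(i, joint_action)` callback stands in for the interference coupling between nodes; all names are hypothetical.

```python
import math
import random

def independent_ac(n_nodes, n_actions, local_reward, n_iter=3000,
                   kappa=0.1, beta=0.01, seed=0):
    """Independent (stateless) actor-critic learners: per-node Gibbs softmax
    over local preferences and a local average-reward estimate rho; the only
    coupling between nodes is through local_reward(i, joint_action)."""
    rng = random.Random(seed)
    prefs = [[0.0] * n_actions for _ in range(n_nodes)]
    rho = [0.0] * n_nodes
    for _ in range(n_iter):
        joint = []
        for i in range(n_nodes):
            # Gibbs softmax action selection from local preferences only
            mx = max(prefs[i])
            w = [math.exp(q - mx) for q in prefs[i]]
            r, acc, choice = rng.random() * sum(w), 0.0, n_actions - 1
            for a, wa in enumerate(w):
                acc += wa
                if r <= acc:
                    choice = a
                    break
            joint.append(choice)
        for i, a in enumerate(joint):
            delta = local_reward(i, joint) - rho[i]   # TD error (stateless)
            rho[i] += beta * delta                    # critic: average reward
            prefs[i][a] += kappa * delta              # actor: preference update
    return prefs, rho
```

Each node's update touches only its own tables, so the per-node cost is identical to the single-agent case, mirroring the complexity argument above.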
10.5.3
Simulation results: multi-node scenarios

In this section, we assess the performance of the independent AC algorithm in multi-node scenarios. We simulate a multi-node system with three nodes communicating with one receiver. The transmitting nodes are located 320 m, 460 m, and 570 m away from the receiver (Table 10.3). Node 1 is the nearest node to the receiver and node 3 is the node
Table 10.3. Simulation parameters

Channel model: d = [320, 460, 570] m, A_t^i = 0.097/(d^i)^4
Buffer cost: f(n_b) = 0.05(n_b + 4) if n_b ≠ max(n_b); max(n_b) = 15, f(max(n_b)) = 3
Modulation level: m = 1, 2, 3, 4 (BPSK, QPSK, 8PSK, 16PSK)
Transmit power: p_t = [0, 0.2, ..., 2] W
SIR range: Γ = [0, 1, ..., 24] dB
Quantized interference: η = [−16, −15, ..., 14] dB
Figure 10.8 Learned and simple-policy throughputs, for average packet-arrival load μ = 2.0. [Top panel: throughput (packets/millijoule) learned by each of the three nodes versus the number of packet transmissions. Bottom panel: throughput of each node under the simple policy.]
furthest from the receiver. Most of the simulation parameters are similar to those in Table 10.2, except for those shown in Table 10.3. The total number of states in each agent is 496 and the total number of actions is 44. The AC algorithm is initialized with α = 0.05, β = 0.0005, and κ = 0.01. Figure 10.8 shows the average throughput learned by the AC algorithm and the simple policy for packet-arrival rate μ = 2.0. It is obvious that, under both policies, a node nearer to the receiver effectively achieves a higher packet throughput per unit energy, since it requires less energy to achieve the same throughput. Figure 10.9 shows the throughput achieved by the AC learned policy and the simple policy for
Figure 10.9 Average throughput vs. packet-arrival rate for load μ = 2.0. [Learned and simple-policy throughputs (packets/millijoule) for nodes 1–3 plotted against the mean packet-arrival rate.]
various packet arrival rates. From Figure 10.9, it can be seen that the two policies achieve similar throughputs when the packet arrival rate is low (μ ≤ 1). However, when the packet arrival rate becomes large, the AC algorithm achieves a higher throughput. In particular, the AC algorithm achieves 1.5 times the throughput for node 1 when μ = 3.0, and 6.3 and 7.1 times the throughput for node 2 when μ = 3.0 and node 3 when μ = 2.0, respectively. We note that, under the simple policy, node 3 is not able to transmit anything for packet arrival rates beyond μ = 2.0. In this situation, the energy in node 3 is completely wasted, without the ability to transmit anything. Using the AC algorithm, each node in the network is able to achieve a higher throughput per unit energy over a broad range of packet arrival rates. This is due to the fact that the AC algorithm has the ability to explore policies other than the greedy policy by adapting to the channel condition and packet arrival rate. The greedy policy will obviously result in the total breakdown of the network.
10.6
Discussion

One of the major advantages of the RL algorithm is its capability to learn the environment with very little information. This property is very suitable for WSN applications, where each node might not have exact knowledge of its environment. Moreover, the wireless environment in WSNs tends to vary, for many practical reasons. Having the learning capability, the algorithm decides the best transmission mode by adapting to the variation in the environment. Since very little information is required in the learning process, the algorithm needs to explore all the possible actions/decisions and determine the best action
according to the current environment/state. From (10.5)–(10.7), one observes that the learning algorithm uses an iterative averaging method to learn/estimate the value function h, the average reward ρ, and the preference metric. Intuitively, as the numbers of actions and states become larger, more data are required to refine the accuracy of the estimates. As a consequence, more time is required to experience and explore all possible decisions. In short, the convergence time of the algorithm depends heavily on the number of actions and states in the learning system: the larger the state and action spaces are, the more time will be required for the learning. As discussed in the previous paragraph, the number of states and actions affects the time required for the learning process. In general, the number of states and actions in the learning system is closely related to the type of application. In particular, when the state is a sample of a physical quantity such as interference, a larger number of states results in a more accurate solution. On the other hand, the number of actions in the system reflects the degree of reconfiguration in the system. Therefore, the states and actions in the system should, on the one hand, be chosen carefully to accurately model the physical situation; on the other hand, an excessively large number of states and actions makes the learning algorithm slow. In our problem, the aggregate of the number of packets queued in the buffer and the interference level constitutes the state space, and the action space consists of the power and modulation. The determination of these parameters may be dictated by the accuracy of the model and the cost of deploying the sensors. One obvious way to keep the number of states and actions small is to use a small buffer length, a small transmit-power range, and limited modulation levels.
In this way, the resulting number of states and actions can be kept small enough to make the convergence fast, as will be demonstrated below. To demonstrate that a smaller number of states and actions can indeed shorten the learning stage, we perform another simulation, in which seven nodes communicate simultaneously with one receiver. The nodes are located (320, 460, 570, 660, 740, 810, 880) m from the receiver. Two modulation levels, BPSK and QPSK, are available, and the transmit-power levels are [0, 0.5, 1] W. The buffer length is equal to 4. The quantized interference has eight levels. In this simulation, each agent has 6 actions and 40 states. The resulting learned throughput per unit energy and the throughput obtained from the simple policy are shown in Figure 10.10. Obviously, by reducing the number of states and actions, the learning time of the algorithm is also reduced. The learned policy outperforms the simple policy, providing (1.001, 1.1100, 1.1324, 1.7565, 2.6799, 3.2403, 3.0946) times the achievable throughput for nodes 1 to 7, respectively. Another important property of the learning algorithm is that the transient learning period occurs only once, during the initial warming-up stage of the sensor. Moreover, the algorithm efficiently captures the history information when learning the state value and average reward function; hence, it is not necessary for each node to record the entire transmission history. After the learning stage, the algorithm is able to use the history efficiently to make good decisions.
Figure 10.10 Learned and simple-policy throughputs per unit energy for packet-arrival load μ = 1.5. The learned policy achieves (1.001, 1.1100, 1.1324, 1.7565, 2.6799, 3.2403, 3.0946) times the throughput per unit energy of the simple policy, for nodes 1 to 7, respectively. [Top panel: throughput (packets/millijoule) of the seven nodes under the learned policy versus the number of packet transmissions. Bottom panel: throughput under the simple policy.]
10.7
Summary and bibliographical notes

We formulate the maximization of the average throughput relative to the total energy consumed in packetized wireless sensor communications, for both point-to-point and multi-node scenarios, and utilize the RL algorithm to solve the resulting problem. To evaluate the performance of the learning algorithm, we obtain the optimal solution for point-to-point communication and compare it with the learned policy; the performance of the learned policy is very close to that of the optimal one. Compared with the simple policy, the learned policy obtains more than twice the throughput. We note that neither the simple policy nor the learning algorithm needs the state-transition probabilities: both use only the feedback information to make decisions. We also extend the learning scheme to the multi-node scenario, in which the algorithm achieves 1.5 to 6 times the throughput per unit energy of the simple policy, particularly for high mean packet-arrival rates. The results also indicate that the scheme with learning capability provides a simple, systematic, self-organized, and distributed algorithm with which to achieve highly effective resource management in WSNs. Furthermore, the optimization scheme can serve as a framework within which
to perform cross-layer optimization and to study collaboration and competitive interaction in sensor-network communications.

In [28], a power-control scheme for wireless packet networks is formulated using dynamic programming (DP). The extension of this work to multi-modal power control is investigated in [29]. In these two schemes, the power control follows a threshold policy that balances the buffer content against the channel interference. The DP formulation for power control with imperfect channel estimation is addressed in [165], where it is shown that the DP solution outperforms the fixed-SIR approach. Jointly optimized bit-rate and delay control for packet wireless networks has also been studied within the DP framework in [364]. Most of the literature assumes knowledge of the exact probability model, and the optimal solution is obtained by solving Bellman's optimality equation [19]. Several utility or reward functions have been used in the context of power-control schemes: in [28] and [29], the transmit power and the cost incurred in the buffer are the objective functions to be minimized, while in [394] the number of information bits successfully transmitted per joule is the objective function. More information can be found in [340].
11
Repeated games and learning for packet forwarding
In wireless ad hoc networks, autonomous nodes are reluctant to forward others' packets because of the nodes' limited energy. However, such selfishness and non-cooperation degrade both the system's efficiency and the nodes' performance. Moreover, distributed nodes with only local information might not know the cooperation point, even if they are willing to cooperate. Hence, it is crucial to design a distributed mechanism for enforcing and learning cooperation among the greedy nodes in packet forwarding. In this chapter, we consider a self-learning repeated-game framework to overcome this problem and achieve the design goal. We employ the self-transmission efficiency as the utility function of an individual autonomous node, defined as the ratio of the power used for self packet transmission to the total power used for self packet transmission and packet forwarding. We then present a framework to search for good cooperation points and maintain cooperation among selfish nodes. The framework has two steps. First, an adaptive repeated-game scheme is designed to ensure cooperation among nodes for the current cooperative packet-forwarding probabilities. Second, self-learning algorithms are employed to find better cooperation probabilities that are feasible and benefit all nodes. We then discuss three learning schemes for different information structures, namely learning with perfect observability, learning through flooding, and learning through utility prediction. Starting from non-cooperation, the above two steps are employed iteratively, so that a better cooperating point can be achieved and maintained in each iteration.
11.1
Introduction

Some wireless networks, such as ad hoc networks, consist of autonomous nodes without centralized control. In such autonomous networks, the nodes might not be willing to fully cooperate to accomplish the network task. Specifically, for the packet-forwarding problem, forwarding the others' packets consumes a node's limited battery resources, so it might not be in a node's best interest to forward others' arriving packets. However, non-cooperatively refusing to forward others' packets severely affects the network's functionality and impairs the nodes' own benefits. Hence, it is crucial to design a mechanism to enforce cooperation among greedy nodes. In addition, randomly located nodes with only local information might not know how to cooperate, even if they are willing to cooperate.
Since in some wireless networks it is difficult to implement a virtual payment system, because of practical implementation challenges such as the enormous signaling required, in this chapter we concentrate on the second type of approach and design a mechanism such that cooperation can be enforced in a distributed way. In addition, unlike previous works that assumed the nodes know the cooperation points or the other nodes' behaviors, we argue that randomly deployed nodes with local information might not know how to cooperate even if they are willing to do so. Motivated by these facts, we present a self-learning repeated-game framework for enforcing and learning cooperation. We define self-transmission as the transmission of a node's own packets, and quantify a node's utility as its self-transmission efficiency, the ratio of the power for successful self-transmission to the total power used for self-transmission and packet forwarding. The goal of each node is to maximize its long-term average efficiency. Using this utility function, a distributed self-learning repeated-game framework is considered to ensure cooperation among autonomous nodes. The framework consists of two steps. First, the repeated game enforces cooperation in packet forwarding: any cooperation equilibrium that is more efficient than the Nash equilibrium (NE) of the one-stage game can be sustained, because the repeated game allows nodes to take the history of their opponents' actions and reactions into account when making decisions, and any deviation triggers future punishment by the other nodes. The second step utilizes a learning algorithm to achieve the desired efficient cooperation equilibrium. We consider three learning algorithms for different information structures, namely learning with perfect observability, learning through flooding, and learning through utility prediction.

Starting from the non-cooperation point, the two steps are applied iteratively; a better cooperation point is discovered and maintained in each iteration, until no more-efficient cooperation point can be achieved. The simulation results show that our framework is able to enforce cooperation among selfish nodes. Moreover, compared with the optimal solution obtained by a centralized system with global information, the learning algorithms achieve similar performance in a symmetric network and, depending on the learning algorithm and the information structure, near-optimal solutions in a random network.

The rest of the chapter is organized as follows. In Section 11.2, we give the system model and explain the design challenge. In Section 11.3, we present and analyze the repeated-game framework for packet forwarding under different information structures. In Section 11.4, we construct self-learning algorithms corresponding to the different information structures in detail. In Section 11.5, we evaluate the performance of the scheme using extensive simulations. Finally, a summary is presented in Section 11.6.
11.2
The system model and design challenge

We consider a network with N nodes. Each node is battery-powered and has a transmit-power constraint. This implies that only nodes within the transmission range are
[Figure 11.1 shows the time-slotted transmission: slots n + k, n + k + 1, ..., n + K, n + K + 1 alternate between a "learning" stage and a "maintain cooperation" stage.]

Figure 11.1 Time-slotted transmission with two alternating stages.
neighbors. Packet delivery typically requires more than one hop. In each hop, we assume that transmission occurs in a time-slotted manner, as illustrated in Figure 11.1. The source, the relays (intermediate nodes), and the destination constitute an active route. We assume an end-to-end mechanism that enables a source node to know whether a packet has been delivered successfully. The source node can observe whether there is a packet drop on one particular active path, but it might not know where the packet was dropped. Finally, we assume that the routing decision has already been made before the packet-forwarding probabilities are optimized.¹

Let us denote the set of sources and destinations as {S_i, D_i}, for i = 1, 2, ..., M, where M represents the number of source–destination pairs active in the network. Suppose that the shortest path for each source–destination pair has been discovered, and denote the route/path as R_i = (S_i, f_{R_i}^1, f_{R_i}^2, ..., f_{R_i}^n, D_i), where S_i denotes the source node, D_i the destination node, and f_{R_i}^1, f_{R_i}^2, ..., f_{R_i}^n the set of intermediate/relay nodes; thus there are n + 1 hops from the source node to the destination node. Let V = {R_i : i = 1, ..., M} be the set of routes corresponding to all source–destination pairs, and let V_j^s = {R_i : S(R_i) = j, i = 1, ..., M} denote the set of routes on which node j is the source, where S(R_i) represents the source of route R_i. The power expended by node i in transmitting its own packets is

    P_s^{(i)} = \sum_{r \in V_i^s} \mu_{S(r)} \cdot K \cdot d(S(r), n(S(r), r))^{\gamma},    (11.1)
where μ_{S(r)} is the transmission rate of source node S(r), K is the transmission constant, d(i, j) is the distance between nodes i and j, n(i, r) denotes the neighbor of node i on route r, and γ is the transmission path-loss coefficient. For the link from node i to its next hop n(i, r) on route r, K · d(i, n(i, r))^γ is the power per bit required for reliable, successful transmission. We note that (11.1) can also be interpreted as the average signal power required for successful transmission at a certain rate μ_{S(r)}; transmission failures due to channel fading are thus accounted for by the transmission constant K.

Let α_i, for i = 1, ..., N, be the packet-forwarding probability of node i. Here, we use the same packet-forwarding probability for every source–destination pair for the following reasons. First, given the assumed greediness of the nodes, there is no reason for a particular node to forward some packets on some routes and reject forwarding other packets on other routes. Second, the use of different packet-forwarding probabilities on

¹ We note that it is always possible for nodes to carry out manipulation in the routing layer. However, that is beyond the scope of this chapter. For more information, please refer to [408].
different routes would only complicate the detection of deviations by a node and would not change the optimization framework discussed in this chapter. So, in this first step of analyzing the problem, we assume the same forwarding probability on every route; the case in which nodes use different packet-forwarding probabilities for different routes is left for future work.

Clearly, the probability of successful transmission from node i to its destination depends on the forwarding probabilities employed by the intermediate nodes, and it can be represented as

    P_{Tx,r}^{i} = \prod_{j \in (r \setminus \{S(r)=i,\, D(r)\})} \alpha_j,    (11.2)
where D(r) is the destination of route r and (r \ {S(r) = i, D(r)}) is the set of nodes on route r excluding the source and the destination. Let us define the good power consumed by transmitting node i, P_{s,good}^{(i)}, as the product of the power used for transmitting node i's own packets and the probability of successful transmission from node i to its destination:

    P_{s,good}^{(i)} = \sum_{r \in V_i^s} \mu_{S(r)} \cdot K \cdot d(S(r), n(S(r), r))^{\gamma} \, P_{Tx,r}^{i}.    (11.3)
Moreover, let W_j be the set of routes on which node j is a forwarding node. The power used to forward others' packets is given by

    P_f^{(i)} = \alpha_i \cdot K \cdot \sum_{r \in W_i} d(i, n(i, r))^{\gamma} \, \mu_{S(r)} \, P_{F,r}^{i},    (11.4)

where P_{F,r}^{i} is the probability that node i receives a packet to forward along route r, and \sum_{r \in W_i} \mu_{S(r)} P_{F,r}^{i} is the total rate at which node i receives packets for forwarding. The probability that node i receives the forwarded packet along route r is represented as

    P_{F,r}^{i} = \prod_{j \in \{f_r^1, f_r^2, \ldots, f_r^{m-1}\}} \alpha_j,    (11.5)
where r = (S(r), f_r^1, ..., f_r^{m−1}, f_r^m = i, ..., f_r^n, D(r)) is the (n + 1)-hop route from source S(r) to destination D(r), and the mth forwarding node f_r^m is node i. P_{F,r}^{i} depends on the packet-forwarding probabilities of the nodes on route r before node i. We refer to the task of transmitting the node's own information as self-transmission and the task of relaying others' packets as packet forwarding. We focus on maximizing the self-transmission efficiency, which is defined as the ratio of the successful self-transmission power (good power) to the total power used for self-transmission and packet forwarding. Therefore, the stage utility function for node i can be represented as

    U^{(i)}(\alpha_i, \alpha_{-i}) = \frac{P_{s,good}^{(i)}}{P_s^{(i)} + P_f^{(i)}},    (11.6)
where α_i is node i's packet-forwarding probability and α_{−i} = (α_1, ..., α_{i−1}, α_{i+1}, ..., α_N)^T are the other nodes' forwarding probabilities. On putting (11.1), (11.3), and (11.4) into (11.6), we obtain

    U^{(i)} = \frac{\sum_{r \in V_i^s} \mu_{S(r)} \, d(S(r), n(S(r), r))^{\gamma} \prod_{j \in (r \setminus \{S(r)=i,\, D(r)\})} \alpha_j}{\sum_{r \in V_i^s} \mu_{S(r)} \, d(S(r), n(S(r), r))^{\gamma} + \alpha_i \sum_{r \in W_i} d(i, n(i, r))^{\gamma} \, \mu_{S(r)} \prod_{j \in \{f_r^1, \ldots, f_r^{m-1}\}} \alpha_j}.    (11.7)

Since the power for successful self-transmission depends on the packet forwarding performed by the other nodes, the self-transmission efficiency captures the tradeoff between the power used for transmitting the node's own information and the power used for forwarding the other nodes' packets. The problem in packet forwarding arises because autonomous nodes, such as those in ad hoc networks, have their own authority to decide whether to forward the incoming packets. Under this scenario, it is very natural to assume that each node selfishly optimizes its own utility function. Following (11.7), node i selects α_i in order to maximize the transmission efficiency U^{(i)}(α_i, α_{−i}). This implies that node i will selfishly minimize P_f^{(i)}, the portion of energy used to forward others' packets. In the game-theory literature, the Nash equilibrium (NE) is a well-known concept, which states that in equilibrium every node selects the best response to the other nodes' strategies. The formal definition of the NE is as follows.

Definition 11.2.1 Define the feasible range as Ω = [0, 1]. The Nash equilibrium (α_1^*, ..., α_N^*)^T is defined by

    U^{(i)}(\alpha_i^*, \alpha_{-i}^*) \ge U^{(i)}(\alpha_i, \alpha_{-i}^*), \quad \forall i, \; \forall \alpha_i \in \Omega,    (11.8)

i.e., given that all nodes play the NE, no node can improve its utility by unilaterally changing its own packet-forwarding probability. Here α_{−i}^* = (α_1^*, ..., α_{i−1}^*, α_{i+1}^*, ..., α_N^*)^T.

Unfortunately, the NE for the packet-forwarding game described by (11.7) is α_i^* = 0, ∀i. This can be verified by finding the forwarding probability α_i ∈ [0, 1] that unilaterally maximizes U^{(i)}. To maximize the transmission efficiency of node i, the node can only make the forwarding energy P_f^{(i)} as small as possible. This is equivalent to setting α_i as small as possible, since the probability of successful transmission of its own packets in (11.2) depends only on the other nodes' willingness to forward the packets.
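To make (11.1)–(11.7) concrete, the following sketch evaluates the stage utility for a symmetric six-node ring in which each node sources two 2-hop routes, one clockwise and one counterclockwise, with μ_{S(r)} = 1, K = 1, and unit distances. The route set is our assumed configuration, not one stated here by the text; under it, (11.7) reduces to U^{(i)} = (α_{i−1} + α_{i+1})/(2 + 2α_i), and lowering α_i unilaterally raises U^{(i)}, consistent with the all-zero NE.

```python
import random

N = 6
# Assumed flow set: each node sends 2-hop traffic both ways round the ring.
routes = ([(i, (i + 1) % N, (i + 2) % N) for i in range(N)]
          + [(i, (i - 1) % N, (i - 2) % N) for i in range(N)])
MU = K = 1.0   # unit rates and transmission constant
GAMMA = 2.0    # path-loss exponent (irrelevant here since all distances are 1)

def d(i, j):
    return 1.0  # unit distances in the symmetric example

def utility(i, alpha):
    """Stage utility (11.6): good self-transmission power over total power."""
    p_s = p_good = p_f = 0.0
    for r in routes:
        src, relays, dst = r[0], list(r[1:-1]), r[-1]
        if src == i:
            hop = MU * K * d(src, relays[0] if relays else dst) ** GAMMA  # (11.1)
            p_s += hop
            p_tx = 1.0
            for j in relays:          # (11.2): end-to-end success probability
                p_tx *= alpha[j]
            p_good += hop * p_tx      # (11.3)
        if i in relays:
            m = relays.index(i)
            p_fwd = 1.0
            for j in relays[:m]:      # (11.5): packet actually reaches node i
                p_fwd *= alpha[j]
            nxt = relays[m + 1] if m + 1 < len(relays) else dst
            p_f += alpha[i] * K * d(i, nxt) ** GAMMA * MU * p_fwd  # (11.4)
    return p_good / (p_s + p_f)

random.seed(1)
alpha = [random.random() for _ in range(N)]
vals = [utility(i, alpha) for i in range(N)]
closed = [(alpha[i - 1] + alpha[(i + 1) % N]) / (2 + 2 * alpha[i])
          for i in range(N)]
print([round(v, 4) for v in vals])
```

On this topology the general expression matches the closed form exactly, and setting α_0 = 0 while the others keep forwarding strictly increases node 0's utility, illustrating why unilateral greed drives every node to zero forwarding.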
By greedily dropping its packet-forwarding probability, node i reduces the transmission power it spends on forwarding others' packets, thereby increasing its instantaneous efficiency. However, if all nodes play the same strategy, every node obtains zero efficiency, i.e., U^{(i)}(α_1^*, ..., α_N^*) = 0, ∀i, and the network breaks down. Hence, playing the NE is inefficient not only from the network's point of view but also for each node's own benefit. It is very important to emphasize that the inefficiency of the NE is independent of the utility function in (11.7). This inefficiency is merely the result
of greedy optimization done unilaterally by each of the nodes. In the next two sections, we present a self-learning repeated-game framework and show how cooperation can be enforced using our scheme.
11.3
The repeated-game framework and punishment analysis

As demonstrated in Section 11.2, the packet-forwarding game has α_i^* = 0, ∀i, as its unique NE if the game is played only once. This implies that none of the nodes in the network will cooperate in forwarding the packets. In practice, nodes typically participate in the packet-forwarding game for a certain duration of time, and this is more suitably modeled as a repeated game (a game that is played multiple times). If the game never ends, it is called an infinitely repeated game, which is what we use in this chapter. In fact, the repeated game need not be infinite: the important point is that the nodes/players do not know when the game ends, in which sense the properties of the infinitely repeated game remain valid. In this chapter, we employ the normalized average discounted utility with discounting factor δ, given by

    \bar{U}_{\infty}^{(i)} = \lim_{t' \to \infty} \bar{U}_{t'}^{(i)} = (1 - \delta) \sum_{t=1}^{\infty} \delta^{t-1} \, U^{(i)}(\vec{\alpha}(t)),    (11.9)
where \vec{α}(t) = (α_1, ..., α_N)^T, U^{(i)}(\vec{α}(t)) is the utility of node i in the stage game (11.7) played at time t, and \bar{U}_{t'}^{(i)} is the normalized average discounted utility from time 1 to time t'. Unlike the one-time game, the repeated game allows a strategy to be contingent on the past moves, which creates reputation and retribution effects, so that cooperation can be sustained [123] [225] [125]. We also note that the utilities in (11.7) and (11.9) are indeed heterogeneous in the sense that they carry information about the channel, routing, and node behaviors; that is, they reflect different energy consumptions according to the different distances, rates, and routes between nodes.
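As a quick sanity check on (11.9), a constant stage utility U gives the closed form \bar{U}_t = (1 − δ^t)U, which tends to U as t → ∞. A minimal sketch (δ and the horizons are illustrative choices of ours):

```python
def avg_discounted(utils, delta):
    """Normalized average discounted utility, as in (11.9), over a finite horizon."""
    return (1 - delta) * sum(delta ** (t - 1) * u for t, u in enumerate(utils, 1))

delta = 0.9
u6 = avg_discounted([0.5] * 6, delta)        # six stages at constant utility 0.5
u_inf = avg_discounted([0.5] * 5000, delta)  # long horizon approximates (11.9)
print(round(u6, 4), round(u_inf, 4))         # 0.2343 0.5
```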
11.3.1
Design of punishment scheme under perfect observability

In this subsection, we analyze a class of punishment policies under the assumption of perfect observability. Perfect observability means that each node is able to observe the actions taken by the other nodes throughout the history of the game. This implies that each node knows which node drops a packet and is aware of the identities of the other nodes. This condition allows every node to detect any defection by other nodes, and also to know whether any node fails to follow the rules of the game. Perfect observability is the ideal case and serves as a performance upper bound; in the next subsection, this assumption is relaxed to the more practical situation in which an individual node has only limited local information.

Let us denote the NE of the one-stage forwarding game as \vec{α}^* = (α_1^*, ..., α_N^*)^T, and the corresponding utilities as (v_1^*, ..., v_N^*)^T = (U^{(1)}(\vec{α}^*), ..., U^{(N)}(\vec{α}^*))^T. We also denote
    U = \{(v_1, \ldots, v_N) \,|\, \exists \vec{\alpha} \in \Omega^N \text{ s.t. } (v_1, \ldots, v_N) = (U^{(1)}(\vec{\alpha}), \ldots, U^{(N)}(\vec{\alpha}))\},    (11.10)
    V = \text{convex hull of } U,    (11.11)
    V^{\dagger} = \{(v_1, \ldots, v_N) \in V \,|\, v_i > v_i^*, \; \forall i\}.    (11.12)
We note that V consists of all feasible utilities, and V† consists of the feasible utilities that Pareto-dominate the one-stage NE; this set is also known as the individually rational utility set. The Pareto-dominant utilities are all utilities that are strictly better than the one-stage NE. From the game-theory literature, the existence of equilibria that Pareto-dominate the one-stage NE is given by the folk theorem [125].

Theorem 11.3.1 (Folk theorem) Assume that the dimensionality of V† is N. Then, for any (v_1, ..., v_N) in V†, there exists \underline{δ} ∈ (0, 1) such that, for all δ ∈ (\underline{δ}, 1), there exists an equilibrium of the infinitely repeated game with discounting factor δ in which player i's average utility is v_i.

Before applying the folk theorem to the packet-forwarding game, it is useful to recall the notion of a dependency graph. Given the routing algorithm and the source–destination pairs, the dependency graph is the directed graph constructed as follows. It has the same number of nodes as the network, and, when node i sends packets to node j via nodes f_1, ..., f_n, there are directed edges from node i to nodes f_1, ..., f_n. The resulting directed graph describes the nodes' dependency in performing the packet-forwarding task. Let us define deg_in(i) and deg_out(i) as the numbers of edges going into and coming out of node i, respectively. Obviously, deg_in(i) indicates the number of nodes whose packets are forwarded by node i, and deg_out(i) is the number of nodes that help forward node i's packets. Using the notation of the corresponding dependency graph, the application of the folk theorem to the packet-forwarding game is stated as follows.
Theorem 11.3.2 (Existence of Pareto-dominant forwarding equilibria under perfect observability) We assume the following conditions: (1) the game is perfectly observable; (2) the corresponding dependency graph satisfies the condition

    \deg_{out}(i) > 0, \quad \forall i;    (11.13)

(3) V† has full dimensionality, i.e., the space formed by all points in V† has dimensionality N. Then, for any (v_1, ..., v_N) ∈ V†, there exists \underline{δ} ∈ (0, 1) such that, for all δ ∈ (\underline{δ}, 1), there exists an equilibrium of the infinitely repeated game in which node i's average utility is v_i.
PROOF. Let \vec{α} = (α_1, ..., α_N)^T be the joint strategy that results in the utilities (U^{(1)}(\vec{α}), ..., U^{(N)}(\vec{α})). The full-dimensionality condition ensures that the set (U^{(1)}(\vec{α}), ..., U^{(j−1)}(\vec{α}), U^{(j)}(\vec{α}) − ε, U^{(j+1)}(\vec{α}), ..., U^{(N)}(\vec{α})), for any ε > 0, is in V†. Let node i's maximum utility be \bar{v}_i = max_{\vec{α}} U^{(i)}(\vec{α}), ∀i; this maximum is obtained when all nodes try to maximize node i's utility. Let the cooperating utility be v_i = U^{(i)}(\vec{α}) ∈ V†, ∀i; the cooperating utilities are obtained when all nodes play the agreed packet-forwarding probabilities. Let the maximum utility that node i can obtain while being punished be \underline{v}_i = max_{α_i} min_{α_{−i}} U^{(i)}(\vec{α}), and denote by w_i^j the utility of node i while punishing node j. We note that, from (11.7), the max–min utility \underline{v}_i coincides with the one-stage NE. If there exist ε and punishment periods T_i for each node i such that
(11.14)
then the following rules ensure that any individually rational utilities can be enforced. (i) Condition I. All nodes play cooperation strategies if there is no deviation in the last stages. After any deviations go to Condition II (suppose that node j deviates). (ii) Condition II. Nodes that can punish the deviating node (node j) play the punishing strategies for the punishment period. The rest of the nodes keep playing cooperating strategies. If there is any deviation in Condition II, restart Condition II and punish the deviating node. If any punishing node does not play punishment in the punishment period, the other nodes will punish that particular node during the punishment period. Otherwise, after the end of the punishment period, go to Condition III. (iii) Condition III. Play the strategy that results in utility U (1) , . . . , U ( j−1) , U ( j) − ε, U ( j+1) , . . . , U (N ) . If there is any deviation in Condition III, start Condition II and punish the deviating node. First, the cooperating strategy is the strategy that all nodes agree upon. In contrast, the strategy of the punishing node i is the strategy that results in max–min utility in node α ). In the following, we show that, under the proposition’s i, v i = maxαi minα−i U (i) (. assumptions, • the average efficiency gained by the deviating node is smaller than the cooperating efficiency, and • the average efficiency gained by the punishing node that does not play the punishment strategy in the punishment stage is worse than the efficiency gained by that node when it conforms to the punishing strategy. If node j deviates in Condition I and then conforms, it receives at most v j when it deviates, v j for T j periods when it is punished, and (U ( j) − ε) after it conforms to the cooperative strategy. The average discounted deviation utility can be expressed as: δ(1 − δ T j ) δ T j +1 ( j) ( j) Uˆ ∞ = v j + vj + (U − ε). 1−δ 1−δ
(11.15)
If the node conforms throughout the game, it obtains the average discounted utility [1/(1 − δ)]U^{(j)}, so the gain from deviation is

    \Delta U^{(j)} = \hat{U}_{\infty}^{(j)} - \frac{1}{1-\delta}\, U^{(j)} < \bar{v}_j + \frac{\delta(1 - \delta^{T_j})}{1 - \delta}\, \underline{v}_j - \frac{1 - \delta^{T_j+1}}{1 - \delta}\, (U^{(j)} - \varepsilon).    (11.16)

We note that \underline{v}_j coincides with the one-stage NE, which is \underline{v}_j = 0, ∀j. As δ → 1, (1 − δ^{T_j+1})/(1 − δ) tends to 1 + T_j. Under condition (11.14), the deviation gain in (11.16) is therefore strictly less than zero. This indicates that the average cooperating efficiency is strictly larger than the deviation efficiency; hence, no rational node will deviate from the cooperation point. If the punished node deviates again during the punishment period, the punishment period (Condition II) restarts and the punishment experienced by the punished node is lengthened. Deviation in the punishment period thus merely postpones receipt by the punished node of the strictly better utility U^{(j)} − ε of Condition III, so it is better not to deviate in the punishment stage.

On the other hand, if a punishing node i does not play the punishing strategy during the punishment of node j, node i receives at most

    \hat{U}_{\infty}^{(i)} = \bar{v}_i + \frac{\delta(1 - \delta^{T})}{1 - \delta}\, \underline{v}_i + \frac{\delta^{T+1}}{1 - \delta}\, (U^{(i)} - \varepsilon).    (11.17)

However, if node i conforms to the punishment strategy, it will receive at least

    \tilde{U}_{\infty}^{(i)} = \frac{1 - \delta^{T}}{1 - \delta}\, w_i^j + \frac{\delta^{T+1}}{1 - \delta}\, U^{(i)}.    (11.18)

Here w_i^j is the utility for node i of punishing node j. Therefore, node i's reward for carrying out the punishment is (11.18) minus (11.17):

    \tilde{U}_{\infty}^{(i)} - \hat{U}_{\infty}^{(i)} = \frac{1 - \delta^{T}}{1 - \delta}\, (w_i^j - \delta \underline{v}_i) - \bar{v}_i + \frac{\delta^{T+1}\, \varepsilon}{1 - \delta}.    (11.19)

Using \underline{v}_i = 0, ∀i, and letting δ → 1, expression (11.19) becomes

    \tilde{U}_{\infty}^{(i)} - \hat{U}_{\infty}^{(i)} = T \cdot w_i^j - \bar{v}_i + \frac{\varepsilon}{1 - \delta}.    (11.20)
By selecting δ close to unity, this expression can always be made larger than zero. As a result, the punishing node always conforms to the punishment strategy in the punishment stage. The same argument that no node deviates under Condition I shows that no node deviates under Condition III. Therefore, we conclude that deviation is not profitable under any of the conditions.

The proof above rests on two conditions. First, it assumes that there always exist nodes that can punish the deviating nodes; this is guaranteed by the assumption deg_out(i) > 0 on the corresponding dependency graph. Second, nodes must be able to identify which node is defecting and which node fails to carry out the punishment; this is guaranteed by the assumption of perfect observability. The strategy of punishing those who misbehave and those who do not punish the misbehaving nodes can be an effective strategy for coping with a collusion attack.

[Figure 11.2 shows the original six-node ring graph with its traffic flows, the corresponding dependency graph with deg_in(i) = deg_out(i) = 2 for all i, and, for each node, plots of the utility U^{(i)} (starred curve, left axis) and the forwarding probability α_i (squared curve, right axis) over six rounds of the game.]

Figure 11.2 An example of the punishment scheme under perfect observability.

Now let us consider the following example to understand the punishment behavior. We assume μ_{S(r)} = 1, K = 1, and d(i, j) = 1. The resulting utilities are shown in Figure 11.2. Each node has the one-stage utility
    U^{(i)} = \frac{\alpha_{\mathrm{mod}(i-2,6)+1} + \alpha_{\mathrm{mod}(i,6)+1}}{2 + 2\alpha_i}.    (11.21)
On appropriately selecting the discounting factor δ = 0.9 and the punishment period T = 2, all nodes are made better off by cooperating in packet forwarding, i.e., by setting α_i = 1, ∀i. If all nodes conform to the cooperative strategy, the six-stage normalized average discounted utilities defined in (11.9) are \bar{U}_6^{(i)} = 0.2343, ∀i. In Figure 11.2, we plot the utility functions and forwarding probabilities of all nodes: the x-axis denotes the round of the game, the left y-axis the node's utility (the starred curve), and the right y-axis the forwarding probability (the squared curve). In Figure 11.2, node 1 deviates in the second round of the game by setting its forwarding probability to zero; at this time, node 1's utility changes from 0.5 to 1. As a consequence, node 2 and node 6 punish node 1 in the following T = 2 stages by setting their forwarding probabilities to zero. In
the third round of the game, node 1 has to return to cooperation; otherwise the punishment from the others restarts and its average discounted utility is lowered further. After the punishment, all nodes return to the cooperative forwarding probabilities (as shown in Figure 11.2). The resulting six-stage normalized average utilities are \bar{U}_6^{(1)} = 0.2023, \bar{U}_6^{(2)} = \bar{U}_6^{(6)} = 0.2887, \bar{U}_6^{(3)} = \bar{U}_6^{(5)} = 0.1958, and \bar{U}_6^{(4)} = 0.2343, so node 1 gains less utility by deviating than by cooperating. Moreover, if node 2 and node 6 both fail to punish node 1, they will themselves be punished by the other nodes during the following T periods of the game; the resulting normalized average utilities are \bar{U}_6^{(1)} = 0.3485, \bar{U}_6^{(2)} = \bar{U}_6^{(6)} = 0.1425, \bar{U}_6^{(3)} = \bar{U}_6^{(5)} = 0.3035, and \bar{U}_6^{(4)} = 0.165. Therefore, node 2 and node 6 will carry out the punishment, since otherwise they will in turn be punished and receive less utility. The same argument prevents nodes from deviating from the punishment strategy. We note that in this example the corresponding dependency graph has deg_in(i) = deg_out(i) = 2, ∀i; therefore, punishing nodes are always available whenever any node deviates.

Finally, we discuss the discounting factor δ, which represents the importance of the future. When the discounting factor is small, the future is less important. This can cause the pathological situation in which the instantaneous deviation gain of the defecting node exceeds any future punishment by the other nodes; it is then advantageous for a node to deviate rather than cooperate, and it becomes very hard (if not impossible) to encourage all nodes to cooperate. We also note that the selfish nodes are better off choosing δ approaching unity.
If a node chooses δ close to zero, implying that the future is not important to it, it will ask other nodes to transmit its own packets at the very beginning of the game and stop forwarding others' packets afterwards. This will invoke punishment from its neighboring nodes, which will then not forward that particular node's packets, so that node will effectively be excluded from the network. Therefore, it is advantageous for the nodes in the network to choose δ approaching unity.
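The deviation-versus-cooperation comparison above can be reproduced with a short sketch. We assume the normalization $\bar U_T = (1-\delta)\sum_{t=0}^{T-1}\delta^t u_t$ for the T-stage normalized average discounted utility and node 1's stage utilities from the example (0.5 when cooperating, 1 in the deviation round, 0 while being punished); the function name below is ours.

```python
def normalized_avg_discounted_utility(stage_utils, delta):
    """(1 - delta) * sum_t delta^t * u_t over a finite horizon."""
    return (1 - delta) * sum(delta ** t * u for t, u in enumerate(stage_utils))

delta = 0.9
cooperate = [0.5] * 6                       # conform in every round
deviate = [0.5, 1.0, 0.0, 0.0, 0.5, 0.5]    # deviate in round 2, punished for T = 2 rounds

u_coop = normalized_avg_discounted_utility(cooperate, delta)  # ~0.2343
u_dev = normalized_avg_discounted_utility(deviate, delta)     # ~0.2023
assert u_dev < u_coop  # with delta near 1, deviation does not pay
```

Under this normalization the sketch reproduces the values 0.2343 (cooperation) and 0.2023 (node 1's utility after deviation and punishment) quoted above.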
11.3.2 The design of a punishment scheme under imperfect local observability

We have shown that, under the assumption of perfect observability, the packet-forwarding game combined with the punishment scheme can achieve any Pareto-dominant efficient outcome. However, perfect observability may be difficult to implement in ad hoc networks, owing to the enormous overheads and signaling. Therefore, we try to relax the condition of perfect observability in this subsection. There are many difficulties in removing the assumption of perfect observability. Suppose that each node observes only its own history of the stage utility function. In this situation, the node knows nothing about what has been going on in the rest of the network. The node knows only about deviations by nodes on which it relies for packet forwarding; it cannot detect deviations in the remainder of the network, even though it may be the one that can punish the deviating node. Therefore, it is impossible to implement the folk theorem in this information-limited situation. Moreover, nodes might not know whether the system is in the punishment stage or not. As soon as one of the nodes sees a deviation, it
11.3 The repeated-game framework and punishment analysis
starts the punishment period. This will quickly trigger another punishment stage by other nodes, since the nodes cannot tell whether the change in stage efficiency is caused by the punishment stage or by the deviating node. As a result, the defection spreads like an epidemic and cooperation breaks down throughout the whole network. This is known as the contagious equilibrium [225]. Indeed, the only equilibrium in this situation is the one-stage NE. The main reason for the contagious equilibrium is that the nodes have inconsistent beliefs about the state of the system: they do not know whether the system is currently in the punishment stage, the deviation state, or the end of the punishment stage. Therefore, any mistake in invoking the punishment stage can cause the contagious equilibrium. The lack of consistent knowledge of the system state can be mitigated by communication between nodes. Suppose that each node observes only a subset of the other nodes' behaviors. Communication is introduced by assuming that each node makes a public announcement about the behaviors of the nodes it observes. This can be implemented by having the nodes exchange, through broadcasting, the behaviors of the nodes they observe. The intersection of these announcements can be used to identify the deviating node. At the end of each stage, the nodes report either that no node has deviated or the identity of the deviating node. Since these announcements can be exchanged at a relatively low frequency and only with the related nodes, the communication overheads are limited. Under this assumption of local observability, the following theorem, inspired by the folk theorem for private monitoring with communication, is considered.

Theorem 11.3.3 Suppose that V† has dimensionality N (full dimensionality), where N is the number of nodes in the network. If every node i is monitored by at least two other nodes, then the following hold.

1. If node i participates in routes that have only two hops, then $\deg_{\text{in}}(i) \ge 2$ is sufficient.
2. If node i participates in routes among which one of the routes has only two hops, then $\deg_{\text{in}}(i) \ge 3$ is sufficient.
3. If node i participates in routes that have more than two hops, then $\deg_{\text{in}}(i) \ge 4$ is sufficient.

Also, there always exists a node that can punish the deviating node, i.e.,

$$\deg_{\text{out}}(i) > 0, \quad \forall i. \qquad (11.22)$$
Moreover, the monitoring nodes can exchange their observations. Then, for every v in the interior of V†, there exists $\underline\delta \in (0, 1)$ such that, for all $\delta \in (\underline\delta, 1)$, $v = (v_1, \ldots, v_N)$ is an equilibrium of the infinitely repeated game in which node i's average utility is $v_i$.

PROOF. Suppose that there exist ε, δ, and punishment periods $T_i$ such that (11.14) holds and

$$\sum_{t=0}^{2\max_i\{T_i\}-1} \delta^t \max_i \max_{(\alpha,\alpha')} \left[ v_i(\alpha) - v_i(\alpha') \right] < \sum_{t=\max_i\{T_i\}}^{\infty} \delta^t \varepsilon, \qquad (11.23)$$
then the following rules of the game (Conditions I to III) achieve equilibrium when $\deg_{\text{in}}(i) = 2$, ∀i.

Condition I. When there is no announcement of a deviating node.
(a) If the previous stage is in the cooperating state, continue the cooperating state.
(b) If the nodes played in the previous stage the strategy $(U^{(1)}, \ldots, U^{(k-1)}, U^{(k)} - \varepsilon, U^{(k+1)}, \ldots, U^{(N)})$ for some $k \in \{1, \ldots, N\}$, continue the previous state.
(c) If the previous stage is in the state of punishing node k and the punishment has not ended, then continue the punishment. Otherwise, switch to the strategy that results in $(U^{(1)}, \ldots, U^{(k-1)}, U^{(k)} - \varepsilon, U^{(k+1)}, \ldots, U^{(N)})$.

Condition II. When node j is incriminated by both of its monitors $j_1$ and $j_2$.
(a) If the previous stage's strategy was punishing node j, implementing $(U^{(1)}, \ldots, U^{(j-1)}, U^{(j)} - \varepsilon, U^{(j+1)}, \ldots, U^{(N)})$, implementing $(U^{(1)}, \ldots, U^{(j)} - \varepsilon, \ldots, U^{(l)} - \varepsilon, \ldots, U^{(N)})$ for some $l \ne j$, or implementing $(U^{(1)}, \ldots, U^{(l)} + \varepsilon, \ldots, U^{(j)} - \varepsilon, \ldots, U^{(N)})$ for some $l \ne j$, then start the punishment stage for punishing node j.
(b) If the previous stage's strategy was punishing node $j_1$, then switch to the strategy that results in $(U^{(1)}, \ldots, U^{(j_2)} + \varepsilon, \ldots, U^{(j)} - \varepsilon, \ldots, U^{(N)})$. A similar rule, increasing node $j_1$'s utility by ε, applies when node $j_2$ was being punished in the previous stage.

Condition III. When there are inconsistent announcements by node $j_1$ and node $j_2$. We note that inconsistent announcements arise when there are at least two announcements of a deviating node, but the deviating nodes specified in the announcements differ.
(a) If the previous state is punishing node $j_1$ or node $j_2$, then restart the punishment stage.
(b) Otherwise, implement $(U^{(1)}, \ldots, U^{(j_1)} - \varepsilon, \ldots, U^{(j_2)} - \varepsilon, \ldots, U^{(N)})$.
In the above rules, we consider three different conditions: when there is no announcement of a deviating node (Condition I), when the announcements are consistent (Condition II), and when the announcements are inconsistent (Condition III). We then discuss the different strategies for the different states within each condition. We note that only the nodes whose packets are forwarded by node j have the potential to detect a deviation by node j. The above game rules ensure that, if every node in the network is monitored by at least two other nodes and there always exist nodes to punish the deviating node, then any v ∈ V† can be realized.
If both monitors (node $j_1$ and node $j_2$) of node j incriminate node j, then node j is punished in a manner similar to the punishment in Theorem 11.3.1. The deviator (node j) is punished for a certain period of time if the previous state is one of the following: punishing node j (in which case the punishment stage is restarted); having finished punishing node j (i.e., the state with utilities $(U^{(1)}, \ldots, U^{(j-1)}, U^{(j)} - \varepsilon, U^{(j+1)}, \ldots, U^{(N)})$); penalizing nodes that made inconsistent announcements (i.e., the state with utilities $(U^{(1)}, \ldots, U^{(k)} - \varepsilon, \ldots, U^{(l)} - \varepsilon, \ldots, U^{(N)})$, where nodes k and l are the nodes that previously made inconsistent announcements); or the state with utilities $(U^{(1)}, \ldots, U^{(l)} + \varepsilon, \ldots, U^{(k)} - \varepsilon, \ldots, U^{(N)})$. In all these states, the deviator (node j) is punished for a certain period of time (Condition II(a)). However, if the previous state is the state of punishing node $j_1$, then the system switches to the strategy that results in $(U^{(1)}, \ldots, U^{(j_2)} + \varepsilon, \ldots, U^{(j)} - \varepsilon, \ldots, U^{(N)})$ (Condition II(b)). This strategy gives an additional incentive ($U^{(j_2)} + \varepsilon$) for node $j_2$ to punish node j. Obviously, node $j_1$ has an incentive to announce whether node j deviates, since this announcement will end node $j_1$'s punishment. Because of the possibility of early termination of the punishment period, node $j_1$ also has an incentive to wrongly incriminate node j; this is prevented by Condition III(a). Condition II(b) is also used to avoid the situation in which node $j_2$ lies in its announcement even though it observes that node j deviates. This will become clear as we discuss Condition III. Next, we consider the case in which there are incompatible announcements. We note that incompatible announcements imply that two nodes, or two groups of nodes, make different announcements about the deviation.
These announcements can take the following forms: either node j is incriminated by only one of the nodes (or one group of nodes), or two different nodes are incriminated by two other nodes (or two other groups of nodes). When there are incompatible announcements about node j (Condition III) and the previous state is not the state of punishing node $j_1$ or $j_2$, the nodes that make the incompatible announcements are penalized and receive utility $U^{(j_i)} - \varepsilon$ for i = 1, 2 (Condition III(b)). In the case when node $j_1$ was being punished in the previous stage, Condition III(a) prevents node $j_1$ from falsely accusing node j. Conditions III(a) and III(b) are sufficient to prevent lying in announcements. However, including Condition III(a) creates a situation in which node $j_2$ enjoys punishing node $j_1$. This means that, when node $j_1$ is being punished and node j has really deviated, node $j_2$ has an incentive to lie in its announcement and announce that no node is deviating. This problem is solved by Condition II(b), which gives node $j_2$ an additional reward for telling the truth and punishing node j. Moreover, (11.23) implies that this additional reward for node $j_2$ outweighs the benefit of punishing node $j_1$. Thus (11.23) can be thought of as the incentive for the monitoring nodes to punish the deviating node when the announcements are inconsistent. The previous arguments ensure that, if every node in the network is monitored by at least two other nodes, then any feasible v ∈ V† can be realized. Next, we analyze the three cases listed in Theorem 11.3.3. In the first case, if all routes in which node i participates have only two hops and $\deg_{\text{in}}(i) \ge 2$, then every node can be perfectly monitored by two or more nodes. It is obvious that the above game rules can be applied
directly. In the second case, when node i participates in routes among which one route has exactly two hops and $\deg_{\text{in}}(i) \ge 3$, both the announcement from the source of the two-hop route and the aggregate announcements from the sources of the rest of the routes serve as the final announcements. We note that the intersection of the aggregate announcements will incriminate a certain node. The node that does not tell the truth can be determined by a majority-voting method. Finally, for the case in which node i participates in routes that have more than two hops and $\deg_{\text{in}}(i) \ge 4$, the sources can form two groups and use the previous game rule. The lying node will be detected using majority voting. In summary, any potential deviation in a network satisfying the conditions of Theorem 11.3.3 can be detected. Moreover, the game rules guarantee that any feasible rational utilities can be enforced. We note that, from the announcement forwarder's perspective, it faces two scenarios: either the announcement contains negative information about the forwarder itself, or it contains negative information about other nodes. In the first case, the forwarding node might not forward the announcement; however, even if that node does not forward the announcement, there is only a small probability that the announcement does not reach the whole network, as illustrated in Figure 11.3. Moreover, the condition that every node is monitored by at least two nodes makes the case illustrated even less probable. In the second case, the forwarding nodes do not have any immediate gain from not forwarding the announcement, i.e., the forwarder is indifferent regarding forwarding the announcement. However, it is advantageous for the forwarding nodes to forward the truthful announcement in order to catch and punish the deviating node; otherwise, the forwarding nodes may themselves become victims of the deviation in the future.
Moreover, the announcement consumes much less energy than does the packet transmission itself. Hence, by "indifferent" we mean that each node is better off making a truthful announcement, which consumes just a small portion of the transmission energy, rather than facing a bigger loss arising from the deviating node's deviation. The analyses for the different information structures in Sections 11.3.1 and 11.3.2 guarantee that any individually rational utilities can be enforced under certain conditions. However, the individual distributed nodes need to know how to cooperate, i.e., what the good packet-forwarding probabilities are. In the next section, we describe learning algorithms that can be employed to achieve better utilities.
Figure 11.3 Suppose that the victim node, S, is at the edge of the network and every transmission coming from node S should go through node f. Suppose that node f deviates and blocks the announcement from S. Node S can increase its transmission power to bypass node f and broadcast the announcement.
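The announcement-intersection step used to identify the deviating node can be sketched as follows; the suspect sets are illustrative, and the function name is ours.

```python
def identify_deviator(announcements):
    """Intersect the suspect sets announced by the monitoring nodes.
    A unique common suspect is the consistently incriminated node;
    anything else counts as inconsistent announcements (Condition III)."""
    common = set.intersection(*(set(a) for a in announcements))
    return common.pop() if len(common) == 1 else None

# Three observers each announce the set of nodes they suspect of deviating:
assert identify_deviator([{2, 3, 4}, {3, 4, 5}, {1, 2, 3}]) == 3
# Conflicting announcements yield no unique suspect:
assert identify_deviator([{2, 3}, {4, 5}]) is None
```

With more than two monitors, the same intersection also supports the majority-voting idea used in the proof: a monitor whose suspect set disagrees with the unique common suspect of the others is the one lying.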
11.4 Self-learning algorithms

From Section 11.3, any Pareto-dominant solution better than the one-stage NE can be sustained. However, the analysis does not explicitly determine which cooperation point is to be sustained. In fact, the system can be optimized with respect to different cooperating points, depending on the system designer's choices. For instance, the system can be designed to maximize the weighted sum of the average utilities of the infinitely repeated game as follows:

$$U^{\text{sys}} = \sum_{i=1}^{N} w(i)\, U_\infty^{(i)}, \qquad \sum_{i=1}^{N} w(i) = 1. \qquad (11.24)$$
In particular, when w(i) = 1/N, ∀i, maximization of the average utility per node is usually employed in network optimization:

$$U^{\text{sys}} = \frac{1}{N} \sum_{i=1}^{N} U_\infty^{(i)}. \qquad (11.25)$$
We use (11.25) as an example, but we emphasize that any system objective function can be incorporated into the learning algorithm in a similar way. From an individual node's point of view, as long as cooperation generates a better utility than non-cooperation, the autonomous node will participate. Moreover, any optimization other than the system optimization will be detected by the other nodes as a deviation; consequently, punishment can be invoked in the future. The basic idea of the learning algorithm is to search iteratively for a good cooperating forwarding probability. Similarly to the punishment design, we consider learning schemes for different types of information availability, namely perfect observability and local observability. In parallel with the system model in Section 11.2, we consider time-slotted transmission that interleaves the learning mode and the cooperation-maintenance mode, as shown in Figure 11.1. In the learning mode, the nodes search for better cooperating points. In the cooperation-maintenance mode, nodes monitor the actions of other nodes and apply punishment if there is any deviation. In the learning mode, the nodes have no incentive to deviate, since they do not yet know whether they can benefit and do not want to miss the chance of obtaining better utilities. It is also worth mentioning that, if a node deviates just before a learning period, it will still be punished in the following cooperation-maintenance period, so the assumption of an infinitely repeated game remains valid for this time-slotted transmission system.
11.4.1 Self-learning under perfect observability

Under the information structure of perfect observability, every node is able to detect the deviation of any defecting node and to observe which nodes help forward others' packets. This implies that every node is able to perfectly predict the average efficiencies of the other nodes and optimize the cooperating point on the basis of the system criterion
Table 11.1. The self-learning repeated-game algorithm under perfect observability

For node i: given $\alpha_{-i}$, small increment β, and minimum forwarding probability $\alpha_{\min}$.
Iteration: t = 1, 2, ...
  Calculate $\nabla U^{\text{sys}}(\boldsymbol{\alpha}(t-1))$
  Calculate $\boldsymbol{\alpha}(t) = \boldsymbol{\alpha}(t-1) - \beta\, \nabla U^{\text{sys}}(\boldsymbol{\alpha}(t-1))$
  Select $\alpha_i(t) = \min\{\max\{[\boldsymbol{\alpha}(t)]_i, \alpha_{\min}\}, 1\}$
(11.25). The basic idea of the learning algorithm is to use steepest-descent-like iterations. All nodes predict the average efficiencies of the others and the corresponding gradients. The detailed algorithm is shown in Table 11.1. Learning with perfect observability assumes perfect knowledge of the utility functions of all nodes in the network, and provides the best solution that any learning algorithm can achieve.
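One iteration of Table 11.1 amounts to a gradient step on the system objective followed by clipping each forwarding probability to $[\alpha_{\min}, 1]$. The sketch below uses a toy quadratic objective of our own choosing, so the gradient function passed in is an assumption, not the chapter's utility.

```python
import numpy as np

def perfect_obs_step(alpha, grad_usys, beta=0.05, alpha_min=0.1):
    """Table 11.1: gradient step on the system objective, then projection
    of each forwarding probability onto [alpha_min, 1]."""
    return np.clip(alpha - beta * grad_usys(alpha), alpha_min, 1.0)

# Toy objective sum_i (alpha_i - 0.8)^2, minimized at alpha_i = 0.8 (hypothetical):
grad = lambda a: 2.0 * (a - 0.8)
alpha = np.full(4, 0.5)
for _ in range(200):
    alpha = perfect_obs_step(alpha, grad)
# alpha has converged close to 0.8 for every node
```

Because every node sees the same global information, all nodes compute identical iterates, so no extra coordination is needed beyond the common step size β.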
11.4.2 Self-learning under local observability

In this subsection, we focus on learning algorithms for the information structure available under local observability. Under this condition, the nodes might not have complete information about the exact utilities of the other nodes. On the basis of this information structure, we develop two learning algorithms. The first algorithm is called learning through flooding. The second algorithm predicts the other nodes' stage efficiencies on the basis of the flows that go through the predicting node; we call it learning through utility prediction.
11.4.2.1 Learning through flooding

The basic idea of the learning algorithm is as follows. Since the only information a node can observe is the effect of changing its forwarding probability on its own utility function, the best way for the nodes to learn the packet-forwarding probability is to gradually increase the probability and monitor whether the utility function improves. If the utility improves, the new forwarding probability is employed; otherwise, the old forwarding probability is kept. The algorithm lets all nodes change their packet-forwarding probabilities simultaneously. This can be done by flooding the instruction for changing the packet-forwarding probability. After the packet-forwarding probability has been changed, the effect propagates throughout the network. All nodes wait for a period of time until the network becomes stable. At the end of this period, the nodes obtain their new utilities. If the utilities are better than the original ones, then the new packet-forwarding probabilities are employed; otherwise, the old ones are kept. We note that the increment in packet-forwarding probability is proportional to the increase in the utility function: nodes with higher increments in their utility functions increase their forwarding probabilities more than nodes with lower utility increments do. Here, we introduce the normalization factor $U^{(i),t-1}(\alpha_i^{t-1})$ (the
Table 11.2. The self-learning repeated-game algorithm (flooding)

Initialization: t = 0: $\alpha_i^t = \alpha_0$, ∀i. Choose small increments ξ, η.
Iteration: t = 1, 2, ...
  Calculate $U^{(i),t-1}(\alpha_i^{t-1})$ and $U^{(i),t-1}(\alpha_i^{t-1} + \xi)$
  Calculate $\Delta U^{(i),t-1} = U^{(i),t-1}(\alpha_i^{t-1} + \xi) - U^{(i),t-1}(\alpha_i^{t-1})$
  For each i such that $\Delta U^{(i),t-1} > 0$:
    $\alpha_i^t = \alpha_i^{t-1} + \eta\, \Delta U^{(i),t-1} / U^{(i),t-1}(\alpha_i^{t-1})$
    $\alpha_i^t = \max(\min(\alpha_i^t, 1), \alpha_{\min})$
  End when there is no improvement.
  Keep monitoring for deviation; start the punishment scheme if there is a deviation.
utility before changing the forwarding probability) in order to keep the updates of the forwarding probability bounded. The increment in forwarding probability depends on a small increment constant η and the normalization factor. The above process is repeated until no further improvement can be made. The detailed algorithm is shown in Table 11.2. We note that the time taken for the network to become stable is defined as the time until none of the nodes observes fluctuations in its utility function as a result of the flooding/changing of forwarding probabilities in the previous round. In practice, this waiting time can be either predefined or adjusted online as follows. Depending on the size of the network, a waiting period is set in each node. If a node observes that its utility function fluctuates for longer than the preset period of time, that node can propose a prolongation of the preset time in the next round of flooding; otherwise, the old preset waiting time is employed. When a node receives requests to prolong the waiting time, it sets the maximum of the broadcast waiting times and its own waiting time as the current waiting time. Hence, nodes will wait until the effect of changing the forwarding probability has propagated through the whole network before the next flooding (change of forwarding probability) happens. A maximum delay can also be set to keep the delay time bounded.
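One flooding round from Table 11.2 can be sketched as below. The per-node utility function passed in is a toy placeholder of our own (in the chapter it comes from the network model), and the update is normalized by the utility before the change, as in the table.

```python
def flooding_step(alpha, utility, xi=0.001, eta=1.0, alpha_min=0.1):
    """One round of Table 11.2: each node probes its own utility at
    alpha_i + xi and, if the utility improves, raises alpha_i by the
    gain normalized by U^(i),t-1(alpha_i^(t-1))."""
    new_alpha = list(alpha)
    for i, a in enumerate(alpha):
        base = utility(i, alpha)
        probed = list(alpha)
        probed[i] = a + xi
        gain = utility(i, probed) - base
        if gain > 0 and base > 0:
            step = eta * gain / base
            new_alpha[i] = max(min(a + step, 1.0), alpha_min)
    return new_alpha

# Toy per-node utility, increasing in the node's own forwarding probability:
toy_utility = lambda i, alpha: alpha[i] * (2.0 - alpha[i])
alpha1 = flooding_step([0.5, 0.5, 0.5], toy_utility)
# every probability moved up, and all stay within [alpha_min, 1]
```

The probe increment ξ here plays the role of the flooded instruction: all nodes perturb simultaneously, then keep or discard the change based on their own observed utility.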
11.4.2.2 Learning with utility prediction

In the second approach, we observe that some of the routing information can be used to learn the system optimal solution (11.25). We assume that the routing decision has been made before the packet-forwarding task is performed. For instance, in the algorithm for route discovery using dynamic source routing (DSR) without route caching in [209], the entire selected route is included in the packet header during packet transmission. The intermediate nodes use the route (in the packet header) to determine to whom the packet will be forwarded. Therefore, it is clear that the transmitting node knows which nodes the packet goes through, the relaying nodes know where the packet comes from and where it heads, and the receiving node knows where the packet comes from. The nodes use this information to predict the utilities of the other nodes. We note that, because not
Figure 11.4 An example of learning with utility prediction. ($U_i^{(j)}$ denotes the utility of node j as predicted by node i.)
all nodes are involved in all of the flows in the network, the utility prediction might not be perfectly accurate. However, from the simulation results, it can be seen that the performance degradation is minimal, since only the nearby nodes matter. The utility prediction is illustrated using the example shown in Figure 11.4, assuming $\mu_{S(r)} = 1$, K = 1, and d(i, j) = 1. We denote by $U_i^{(j)}$ the utility of node j predicted by node i. From Figure 11.4, node 1 receives flows from node 3 and node 4, and node 4 receives flows from node 1 and node 2. It is obvious that the flow from node 2 to node 4 is not perceived by node 1. Hence, the utilities of node 2 and node 3 predicted by node 1 are not accurate. Similarly, the flow from node 3 to node 1 is not perceived by node 4; therefore, $U_4^{(2)}$ and $U_4^{(3)}$ are not accurate. The accuracy of the prediction depends on the flows: if all flows involving node i pass through node j, then $U_j^{(i)}$ will be accurate, and vice versa, as illustrated in Figure 11.4. However, as we show by simulations, the inaccuracy in the prediction does not affect the results of the optimization much. Since the objective of the optimization is to achieve the system optimal solution (11.25), the best node i can do is to find the solution that minimizes the total average predicted utility function, i.e.,

$$\min \frac{1}{N} \sum_{j=1}^{N} U_i^{(j)}\left(\hat\alpha_i^{(1)}, \ldots, \hat\alpha_i^{(N)}\right), \quad \text{s.t. } \alpha_{\min} \le \hat\alpha_i^{(j)} \le 1, \ \forall j, \qquad (11.26)$$
where $\hat\alpha_i^{(j)}$ is the packet-forwarding probability that node j should employ, as predicted by node i. The algorithm is shown in detail in Table 11.3. It imitates the steepest-descent algorithm on the basis of the predicted utility: every node finds the gradient of the predicted utility and optimizes the predicted system utility (11.26). After having obtained $\{\hat\alpha^{(i)}\}$, each node sets its own packet-forwarding probability as $\alpha_i^t = \hat\alpha_i^{(i)}$. We note that the optimization problem (11.26) can be dealt with in a distributed manner, since the optimization does not require global
Table 11.3. The self-learning repeated-game algorithm with utility prediction

Initialization: t = 0: $\alpha_j^{(i),t} = \alpha_0$, ∀i, j. Choose small increment ζ.
Iteration: t = 1, 2, ...
  For each node j = 1, ..., N:
    Calculate $\left(\nabla_j^{(1)}, \ldots, \nabla_j^{(N)}\right) = \left(\dfrac{\partial \frac{1}{N}\sum_{n=1}^{N} U_j^{(n)}}{\partial \hat\alpha_j^{(1),t}}, \ldots, \dfrac{\partial \frac{1}{N}\sum_{n=1}^{N} U_j^{(n)}}{\partial \hat\alpha_j^{(N),t}}\right)$
    Calculate $\alpha_j^{(i),t} = \alpha_j^{(i),t-1} + \zeta\, \nabla_j^{(i)}$
    Set $\alpha_j^{(i),t} = \max\left(\min\left(\alpha_j^{(i),t}, 1\right), \alpha_{\min}\right)$
  End when there is no improvement, and return $\alpha_j^{(i)} = \alpha_j^{(i),t}$, ∀i, j.
  Keep monitoring the deviation, and go to the punishment scheme whenever there is a deviation.
knowledge of the utility function. Each node performs the optimization on the basis of its own prediction and sets its packet-forwarding probability according to the optimized predicted average utility. Finally, we discuss how to handle the mobility of nodes. We note that the scheme will work well under moderate node mobility, when the neighbors of each node do not change very often. Under this condition, the long-term relationship between nodes can be established by means of the repeated game and reputation announcements, as described in Section 11.3. As a result, cooperation can be learned and enforced. Obviously, the long-term relationship may be hard to establish if a node deviates in one part of the network, moves quickly to another part of the network, deviates again, and so on. In this case, there are two possible solutions. The first is to perform a background check when the node moves to a new place, before it is allowed to transmit. If the nearby nodes can share the announcement, then the node's new neighbors can obtain the announcement from its previous neighbors, and the new neighbors will know the reputation of this new node. The analogy in real life is when someone applies for a new job and the prospective employer asks for references from the applicant's previous employers; the previous and prospective employers can work together in a distributed manner. In the literature, this idea is implemented in trust establishment for ad hoc networks, such as in [408]. The second solution is to increase the sampling rate of the learning algorithm. As long as node mobility does not change the relationships between neighboring nodes drastically, the effect of mobility on the learning algorithm can be mitigated by imposing a more frequent learning period on the slotted transmission, as in Figure 11.1. This case is similar to tracking a non-stationary channel: the faster the channel changes, the more frequently the training sequence must be transmitted.
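A single node's update from Table 11.3 can be sketched with a numerical gradient of its averaged predicted utilities. The predicted-utility functions below are toy stand-ins (in the chapter they come from the flows observed by the node), and we follow the table's ascent sign; for a cost-style objective as in (11.26), the step would be negated.

```python
import numpy as np

def utility_prediction_step(alpha_hat, predicted_utils, zeta=0.05, alpha_min=0.1):
    """Table 11.3 at one node: numerical gradient of the average predicted
    utility, one step of size zeta, projection onto [alpha_min, 1]."""
    def objective(a):
        return np.mean([u(a) for u in predicted_utils])
    h = 1e-6
    grad = np.zeros_like(alpha_hat)
    for k in range(len(alpha_hat)):
        e = np.zeros_like(alpha_hat)
        e[k] = h
        grad[k] = (objective(alpha_hat + e) - objective(alpha_hat - e)) / (2 * h)
    return np.clip(alpha_hat + zeta * grad, alpha_min, 1.0)

# Toy predicted utilities, each maximized when the j-th probability is 0.7:
predicted = [lambda a, j=j: -(a[j] - 0.7) ** 2 for j in range(3)]
alpha_hat = np.full(3, 0.5)
for _ in range(600):
    alpha_hat = utility_prediction_step(alpha_hat, predicted)
# alpha_hat converges near 0.7 for every entry
```

After convergence, the node would adopt its own entry of the predicted vector as its forwarding probability, mirroring $\alpha_i^t = \hat\alpha_i^{(i)}$ in the text.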
11.5 Simulation results

To investigate the effectiveness of the framework, we perform simulations with the following settings. We generate two networks with 25 nodes: the ring-25 network and the random-25 network. The ring-25 network consists of 25 nodes arranged in a circle of radius 1000 m. The random-25 network consists of 25 nodes uniformly distributed within an area of 1000 m × 1000 m. We define a maximum distance $d_{\max}$ such that two nodes are connected if the distance between them is less than $d_{\max}$; this maximum distance is selected to ensure connectivity of the whole network. In the ring-N network, the angular separation between two neighboring nodes is 2π/N, and the distance between two neighboring nodes is $2r\sin[2\pi/(2N)]$, where r is the radius of the circle. In particular, the maximum distance for the ring-25 network can be calculated as $2000\sin(2\pi/50)$ m ≈ 250.7 m. In the random-25 network, the maximum distance between two nodes is set to 350 m to ensure connectivity of the whole network with high probability. We also define the flows as source–destination (SD) pairs. We assume that the routing decision has been made before the packet-forwarding optimization is performed; shortest-path routing is employed in the simulations. In the random-25 network, we vary the number of SD pairs. When there are traffic flows from all nodes to all other nodes, we call this traffic a dense flow, which implies that each node has packets destined for all the other nodes in the network. Obviously, the dense flow has N × (N − 1) SD pairs in an N-node network. When the total flow is less than the dense flow, the SD pairs are determined randomly. In the ring-25 network, the number of SD pairs is defined as follows: K · N SD pairs are obtained when every node i sends packets to nodes {mod(i + 2, 25), ..., mod(i + K + 1, 25)}.
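The ring-network connectivity distance quoted above follows directly from the geometry; a quick check (function name ours):

```python
import math

def ring_neighbor_distance(n_nodes, radius):
    """Distance between adjacent nodes on a ring: 2 r sin(2*pi / (2N))."""
    return 2.0 * radius * math.sin(math.pi / n_nodes)

d_max = ring_neighbor_distance(25, 1000.0)  # ring-25 network, r = 1000 m
# d_max is approximately 250.7 m, the maximum distance used for the ring-25 network
```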
For instance, 25 SD pairs are obtained when every node i transmits packets to node mod(i + 2, 25); 50 SD pairs are obtained when every node i sends packets to nodes {mod(i + 2, 25), mod(i + 3, 25)}; and so on. The rest of the simulation parameters are as follows: the transmission rate of source i is $\mu_i = 1$, ∀i, the transmission constant K = 1, and the distance-attenuation coefficient γ = 4. We compare three learning algorithms according to the availability of information. The parameters for the learning algorithms are β = 0.05, ξ = 0.001, η = 1.0, and ζ = 0.05. The minimum forwarding probability is set to $\alpha_{\min} = 0.1$ and the maximum forwarding probability to $\alpha_{\max} = 1$. Finally, all algorithms are initialized with $\alpha_0 = 0.5$, ∀i. We note that, in the following simulations, we employ the average efficiency per node defined in (11.25) as our performance metric. Figure 11.5(a) shows the average efficiency of the deviating node in the ring-25 network when the number of SD pairs is 75, with discount factor δ = 0.9. In Figure 11.5(a), node 3 deviates at time instant 10. This deviation causes the stage efficiencies of nodes 1, 2, and 25 to become lower. From the route, nodes 1, 2, and 25 suspect that the nodes in the sets {2, 3, 4}, {3, 4, 5}, and {1, 2, 3}, respectively, are deviating. The nodes in the network know that node 3 is consistently incriminated for deviation and start the punishment stage. (Here, the punishment period is set to 3.) The punishment scheme results in a lower average stage efficiency, as shown in Figure 11.5(a), from which it can be seen that the average efficiency without deviation is better than the average
Figure 11.5 Punishment in repeated games in the ring network and the random network. (a) $U^{(3)}$ in the ring-25 network; (b) $U^{(10)}$ in a random network with 16 nodes. Each panel shows the stage efficiency, the average efficiency with deviation, and the average efficiency without deviation.
efficiency with deviation. It is clear that it is advantageous for node 3 to conform to the previously agreed cooperation point. As a result, no node wants to deviate, since the deviation results in worse average efficiency. Similarly, Figure 11.5(b) shows the average utilities of a deviating node and other nodes in a random network with 16
nodes with discount factor 0.9. At time instant 11, node 10 in the network deviates. At the next time instant, all related nodes that detect deviation exchange their lists of the incriminated nodes. The consistently incriminated node (in this case node 10) is punished for a certain period of time (here, the duration is 8). From Figure 11.5(b), it is clear that node 10 will have a higher average efficiency when it conforms. Thus, Figures 11.5(a) and 11.5(b) show that the repeated game can enforce cooperation among autonomous greedy nodes. Figures 11.6 and 11.7 show the learning curves for the self-learning repeated-game scheme for the ring-25 network and the random-25 network, respectively. In these figures, we compare the optimal solution, learning with perfect observability, learning with flooding, and learning with utility prediction. In Figure 11.6, all of the algorithms achieve the system-optimal value with 100, 200, and 275 SD pairs. Learning with perfect observability and learning with utility prediction have approximately the same convergence speed. Learning with flooding converges more slowly, since it uses a trial-and-error method to find the better forwarding probabilities. This unguided optimization, despite requiring minimal information, has an inferior convergence speed. Figure 11.7 shows the learning curves of the algorithms presented here for the random-25 network with different numbers of SD pairs. One can observe that learning with utility prediction achieves a very similar efficiency per node to those for
Figure 11.6 The learned average efficiency per node for various algorithms and traffic loads in the ring network (ring-25 with 100, 200, and 275 SD pairs). For each traffic load, the curves compare learning with perfect observability, the optimal solution, learning with flooding, and learning with utility prediction.
Figure 11.7 The learned average efficiency per node for various algorithms and traffic loads in the random network (random-25 with 100 and 600 SD pairs). For each traffic load, the curves compare learning with perfect observability, the optimal solution, learning with flooding, and learning with utility prediction.
the optimal solution and learning with perfect observation. In contrast, learning with flooding achieves an inferior efficiency per node. Figure 11.8(a) shows the learned average efficiency per node for the various algorithms with different traffic flows in the ring-25 network. The efficiency decreases as the number of SD pairs increases. This can be explained as follows. Because of the symmetric nature of the utility functions, the local optimal forwarding probabilities for all nodes are the same. It can easily be shown that the local optimal forwarding probability in the ring-25 network is unity for all nodes.² Therefore, the larger the number of SD pairs, the more packets a node needs to forward and the higher the value of the denominator of the stage utility function in (11.7). As a result, the average efficiency per node decreases as the number of SD pairs increases. By a simple calculation, it can be shown that the average efficiency per node decays as

(N_SD/N) / [(N_SD/N + 0.5)(N_SD/N + 1)(N_SD/N)],

where N_SD is the number of SD pairs. In Figure 11.8(a), all of the learning algorithms perform similarly for the different numbers of SD pairs.

² This is not true in the random network in general.
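As a rough numerical check of this decay expression (the expression is reconstructed from the text's layout, so treat the absolute values as approximate), it can be tabulated for the simulated traffic loads:

```python
# Evaluate the claimed decay of the average efficiency per node in the
# ring-25 network as the number of SD pairs grows. The expression below
# follows the formula in the text; the absolute values are illustrative.
N = 25  # number of nodes in the ring network

def predicted_efficiency(n_sd, n=N):
    x = n_sd / n
    return x / ((x + 0.5) * (x + 1.0) * x)

for n_sd in (100, 200, 275):
    print(f"N_SD = {n_sd:3d}: predicted efficiency {predicted_efficiency(n_sd):.4f}")
```

Whatever the exact constants, the prediction is monotonically decreasing in the traffic load, which is the trend visible in Figure 11.8(a).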
Figure 11.8 The average efficiency per node for various algorithms and traffic loads in the ring network and the random network. (a) Ring network: average efficiency per node versus the number of source–destination pairs; (b) random network. The curves compare learning with perfect observation, the optimal solution, learning using the flooding algorithm, and learning using utility prediction.
Figure 11.8(b) shows the achievable efficiency per node after the learning algorithms have converged for different numbers of SD pairs in the random-25 network. We observe that learning with utility prediction achieves a very similar efficiency to those of learning with perfect observation and the optimal solution. Learning with flooding achieves a lower efficiency per node, but still achieves a much better efficiency than the Nash equilibrium. On average, learning with utility prediction achieves around 99.2% of the efficiency achieved by the optimal solution. In contrast, learning with flooding achieves more than 73.18% of the optimality.
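The trial-and-error character of learning with flooding can be sketched for a single node (the utility function, step size, and acceptance rule below are hypothetical stand-ins, not the book's exact algorithm):

```python
import random

random.seed(0)  # deterministic run for illustration

def flooding_step(alpha, utility, step=0.05, alpha_min=0.1, alpha_max=1.0):
    """One trial-and-error update of a node's forwarding probability:
    perturb it in a random direction and keep the change only if the
    observed utility improves."""
    candidate = min(alpha_max, max(alpha_min, alpha + random.choice((-step, step))))
    return candidate if utility(candidate) > utility(alpha) else alpha

# Toy concave utility peaking at alpha = 0.8 (illustrative only).
u = lambda a: -(a - 0.8) ** 2

alpha = 0.5  # same initial forwarding probability as in the simulations
for _ in range(200):
    alpha = flooding_step(alpha, u)
print(alpha)  # ends close to the toy optimum 0.8
```

Because each accepted move must strictly improve the node's own utility, the search is greedy and unguided, which is why it can stall at locally good but globally inferior operating points in asymmetric networks.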
Table 11.4. Normalized average efficiency per node for various numbers of nodes in the random network with dense traffic

Number of nodes                                       9       16       25       36       49       64       81
Average efficiency per node (optimal solution)     0.7438   0.7581   0.5930   0.5574   0.5629   0.5316   0.4916
Normalized learning with perfect observability     99.63%   99.91%   99.39%   100%     100%     100%     99.94%
Normalized learning using flooding                 84.79%   71.45%   72.81%   65.36%   68.56%   58.21%   59.40%
Normalized learning using utility prediction       100%     97.91%   98.98%   99.27%   96.59%   99.88%   96.70%
On comparing Figures 11.8(a) and 11.8(b), we can see that learning with flooding performs well in the ring-25 network but poorly in the random-25 network. The reason for this phenomenon is that, in the ring-25 network, the utilities of all nodes are symmetric, and optimizing the system criterion (11.25) results in the same average efficiency for each node. Since learning with flooding tries to increase a node's efficiency by changing its own forwarding probability synchronously, this iteration will finally reach the point at which all nodes' efficiencies are the same due to the symmetric structure of the network. This solution is coincidentally the same as the solution of the optimization of the system criterion (11.25). In contrast to the ring-25 network, the utility functions for each node are highly asymmetric in the random-25 network. In this case, the node that first reaches a better solution will not change its forwarding probability, even though changing its forwarding probability would result in a slightly lower efficiency for that particular node but increase the other nodes' efficiencies significantly. Owing to this greedy and unguided optimization, learning with flooding achieves an inferior average efficiency per node to that obtained with learning using utility prediction, which exploits routing information to guide the learning and hence performs better. Next, we investigate the performance of the learning algorithms in the case of dense flow with various numbers of nodes in the random network. Table 11.4 shows the average efficiency per node (11.25) for various sizes of the network normalized with respect to the average efficiency obtained by the optimal solution. We can observe that, as the number of nodes increases, the optimal average efficiency per node decreases.
This is because the total power required for self-transmission and packet-forwarding increases much faster than does the power for successful self-transmission as the number of nodes increases. Therefore, the stage utility for each node (11.7) decreases as the number of nodes increases in the case of dense flow. As a result, the average efficiency per node decreases as the number of nodes increases. We also observe that learning with utility prediction achieves 96%–100% of the average efficiency per node achieved by the optimal solution for various sizes of the network. On the other hand, learning with flooding
achieves 60%–85% of the average efficiency obtained by the optimal solution. We note that learning using flooding achieves a lower efficiency when the number of nodes is larger; this is due to the unguided optimization. As the number of nodes becomes larger, it becomes more probable that one will get into a situation in which only a small portion of nodes will exhibit high efficiency and the rest will have very low efficiencies. In contrast, the performance of learning using utility prediction slightly decreases but achieves a very similar performance to that obtained with learning with perfect observability for various sizes of the network, as shown in Table 11.4. The decrease occurs because, as the number of nodes becomes larger, the utility prediction becomes less accurate.
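The normalization used in Table 11.4 is simply each scheme's converged average efficiency divided by the optimal solution's value for the same network size. A small sketch (the raw flooding efficiencies below are back-computed from the table's percentages, purely for illustration):

```python
# Reproduce the normalization of Table 11.4: achieved average efficiency
# per node divided by the optimal solution's value.
optimal = {9: 0.7438, 16: 0.7581, 25: 0.5930}
# Raw flooding efficiencies back-computed from the table (illustrative).
flooding = {9: 0.6307, 16: 0.5417, 25: 0.4318}

for n in sorted(optimal):
    pct = 100.0 * flooding[n] / optimal[n]
    print(f"{n} nodes: flooding at {pct:.2f}% of optimal")
```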
11.6 Summary and bibliographical notes

In this chapter, we consider a distributed mechanism for enforcing and learning the cooperation points among selfish nodes in wireless networks. A repeated-game framework to enforce cooperation and learning algorithms to search for better cooperation points are presented. From the analysis and simulations, we show that the framework provides a very effective means by which to enforce cooperation among greedy/selfish nodes. In practice, selfish nodes with local information might not know how to cooperate even though they are willing to do so. We also present learning algorithms to guide the distributed nodes to find better cooperating points. Depending on the information structure, learning by flooding and learning with utility prediction achieve 60%–85% and 96%–100%, respectively, of the efficiency that is obtained by the optimal solution with global information and centralized optimization. For more information, interested readers can refer to [338]. The packet-forwarding problem in ad hoc networks has been studied extensively in the literature. The fact that nodes act selfishly to optimize their own performances has motivated many researchers to apply game theory [326] [123] in solving this problem. Broadly speaking, the approaches used to facilitate packet forwarding can be categorized into two types. The first type of method makes use of virtual payment; virtual-currency, pricing, and credit-based methods [58] [493] fall into this type. The second type of approach relies on personal and community enforcement to maintain the long-term relationship among nodes: cooperation is sustained because defection against one node causes personal retaliation or sanction by others. This second approach includes the following works. Marti et al. [283] propose mechanisms called watchdog and pathrater to identify the misbehaving nodes and deflect the traffic around them.
Buchegger and Le Boudec [30] define protocols based on a reputation system. Altman et al. [6] consider a punishment policy to show cooperation among participating nodes. In [185], Han et al. propose learning repeated-game approaches to enforce cooperation and obtain better cooperation solutions. Some other works using game theory in solving communication problems can be found in [245], [279], and [168].
12 Dynamic pricing games for routing
In self-organized mobile ad hoc networks (MANETs), where each user is its own authority, fully cooperative behaviors, such as unconditionally forwarding packets for each other or honestly revealing private information, cannot be directly assumed. The pricing mechanism is one way to provide incentives for the users to act cooperatively by awarding some payment for cooperative behaviors. In this chapter, we consider efficient routing in self-organized MANETs and model it as multi-stage dynamic pricing games. A game-theoretic framework for dynamic pricing-based routing in MANETs is considered to maximize the sender/receiver's payoff by exploiting the dynamic nature of MANETs. Meanwhile, the forwarding incentives of the relay nodes can also be maintained by optimally pricing their packet-forwarding services on the basis of auction rules and introducing a cartel-maintenance enforcing mechanism. The simulation results illustrate that the dynamic pricing-based routing approach provides significant performance gains over the existing static pricing approaches.
12.1 Introduction

In recent years, MANETs have received much attention due to their potential applications and the proliferation of mobile devices [330] [420]. In general, MANETs are wireless multi-hop networks formed by a set of mobile nodes without requiring centralized administration or fixed network infrastructure, in which nodes can communicate with other nodes located beyond their direct-transmission ranges through cooperatively forwarding packets for each other. In traditional crisis or military situations, the nodes in a MANET usually belong to the same authority and work in a fully cooperative way of unconditionally forwarding packets for each other to achieve their common goals. Recently, MANETs have also been envisioned in civilian applications, where nodes typically do not belong to a single authority and might not pursue a common goal. Furthermore, such a network could be completely self-organizing, such that the network would be run solely by the operation of the end users. Consequently, fully cooperative behaviors cannot be directly assumed, since the nodes are selfish and aim to maximize their own interests. We refer to such networks as self-organized MANETs. In analyzing the selfish behaviors of network users for efficient self-organized wireless networking, game-theoretic study is important for understanding and analyzing the interactions among intelligent users. Generally speaking, game theory models strategic
interactions among agents using formalized incentive structures. It not only provides game models for efficient self-enforcing distributed design but also derives well-defined equilibrium criteria to measure the optimality of game outcomes for various scenarios. Although there have been some extensive studies of cooperation stimulation in self-organized MANETs for packet forwarding among selfish users, cooperation stimulation for more advanced and sophisticated networking functionalities has not been fully addressed. One important fundamental issue that needs to be studied further is the question of how to carry out efficient self-organized routing in the dynamic scenarios of MANETs with selfish users. The routing process is built upon successful packet forwarding among the nodes, but the self-organized routing is much more complicated than packet forwarding for several reasons. First, the routing in ad hoc networks involves many selfish nodes at the same time for multi-hop packet forwarding, and the behaviors of the selfish nodes may be correlated. Moreover, in MANETs, there usually exist multiple possible routes from the source to the destination. Furthermore, due to mobility, the available routes between the sources and the destinations may change frequently. In this chapter, we refer to path diversity as the fact that in general there exist multiple routes between any two nodes, with each route having different characteristics, such as the number of hops, cost (or requested price), and valid time. We refer to time diversity as the fact that, due to the mobility, dynamic topology, and traffic variations, the routes between two nodes will keep changing over time. In order to achieve efficient routing in self-organized MANETs, a comprehensive study considering the above aspects needs to be carried out. In addition, to have optimal pricing-based routing, both the path diversity and the time diversity of MANETs should be exploited.
Specifically, the source (here we assume that the source makes payments to the forwarding nodes) is responsible for exploiting the path diversity, such as by introducing competition among the multiple available routes through an auction, to minimize the payment needed at the current stage. Each node also needs to exploit the time diversity to maximize its overall payoff over time. In each stage the source adaptively decides the number of packets being transmitted according to the price it needs to pay, which is determined by the current routing conditions. For instance, when the routing conditions are good (i.e., the cost to transmit a packet is low), more packets should be transmitted in the current stage; otherwise, fewer or no packets should be transmitted in the current stage. In this chapter, we model the routing process as multi-stage dynamic games and consider an optimal pricing-based approach to dynamically maximize the sender/receiver’s payoff over multiple routing stages considering the dynamic nature of MANETs, meanwhile keeping the forwarding incentives of the relay nodes by optimally pricing their packet-forwarding actions on the basis of auction rules. The main points of this chapter are as follows. • First, by modeling the pricing-based routing as a dynamic game, the senders are able to exploit the time diversity in MANETs to increase their payoffs by adaptively
allocating the packets to be transmitted among different stages. Considering the mobility of the nodes, the possible routes for each source–destination pair are changing dynamically over time. According to the path diversity, the sender will pay a lower price for transmitting packets when there are more potential routes. Thus, the criterion for allocation can be developed on the basis of the fact that the sender prefers to send more packets in the stage with lower costs. The cartel-maintenance mechanism is introduced to ensure cooperation within each route.
• Second, an optimal dynamic programming approach is considered in order to implement efficient multi-stage pricing for self-organized MANETs. Specifically, the Bellman equation is used to formulate and analyze the above dynamic programming problem by considering the optimization goal in terms of two parts: current payoffs and future opportunity payoffs. A simple allocation algorithm is developed and its optimality is proved on the basis of the auction structure and routing dynamics.
• Third, the path diversity of MANETs is exploited using the optimal auction mechanism in each stage. The application of the optimal auction [237] makes it possible to study separately the optimal-allocation problem and the mechanism design of the auction protocol on the basis of the well-known revenue-equivalence theorem [237], which simplifies the dynamic algorithm while preserving optimality.

The remainder of this chapter is organized as follows. The system model of self-organized MANETs is explained in Section 12.2. In Section 12.3, we formulate the pricing process as dynamic games based on the system model. In Section 12.4, the optimal dynamic auction framework is presented for the optimal pricing and allocation of the multi-stage packet transmission. In Section 12.5, extensive simulations are conducted to study the performance of the approach presented here.
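The Bellman decomposition mentioned above, current payoff plus expected future opportunity payoff, can be sketched with a toy dynamic program (the three-stage horizon, the i.i.d. uniform per-packet costs, and all parameter names are hypothetical placeholders, not the chapter's actual model):

```python
# Toy Bellman recursion for multi-stage packet allocation:
#   V_t(backlog, cost) = max_k [ k * (g - cost) + E[ V_{t+1} ] ].
from functools import lru_cache

STAGES = 3                 # hypothetical number of routing stages
COSTS = (0.2, 0.5, 0.8)    # possible per-packet prices, assumed equally likely
G = 1.0                    # marginal profit per delivered packet
MAX_PER_STAGE = 4          # hypothetical bandwidth constraint per stage

@lru_cache(maxsize=None)
def value(stage, backlog, cost_idx):
    """Expected payoff from `stage` onward, given the remaining backlog
    and the per-packet cost realized at this stage."""
    if stage == STAGES or backlog == 0:
        return 0.0
    cost = COSTS[cost_idx]
    best = float("-inf")
    for k in range(min(backlog, MAX_PER_STAGE) + 1):
        current = k * (G - cost)  # current payoff for sending k packets now
        # Future opportunity payoff, averaged over next-stage cost draws.
        future = sum(value(stage + 1, backlog - k, c)
                     for c in range(len(COSTS))) / len(COSTS)
        best = max(best, current + future)
    return best
```

Here value(0, 4, 0) exceeds value(0, 4, 2): with a backlog of four packets, the sender transmits more when the current per-packet cost is low and defers when it is high, which is the allocation behavior described above.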
12.2 The system model

In this chapter we consider self-organized MANETs in which nodes belong to different authorities and have different goals. We assume that each node is equipped with a battery with limited power supply, can move freely inside a certain area, and communicates with other nodes through wireless connections. For each node, packets are scheduled to be generated and delivered to certain destinations, with each packet having a specific delay constraint, that is, if a packet cannot reach the destination within its delay constraint, it will become useless. In our system model, we assume that all nodes are selfish and rational, that is, their objectives are to maximize their own payoff, not to cause damage to other nodes. However, nodes are allowed to cheat whenever they believe that cheating behaviors can help them to increase their payoff. Since nodes are selfish and forwarding packets on behalf of others will incur some cost, without necessary compensation, nodes have no incentive to forward packets for others. In our system model, we assume that, if a packet can be successfully delivered to its destination, then the source and/or the destination of the packet can get some benefits, and, when a node forwards packets for others,
it will ask for some compensation, such as virtual money or credits [493], from the requesters to at least cover its cost. In our system model, to simplify our illustration, we assume that the source of a packet makes payments to the intermediate nodes that have forwarded the packet for it. However, it can also easily be extended to handle the situation in which the destinations pay. Like in [493], we assume that there exist some bank-like centralized management points, whose only function is to handle the billing information, such as performing credit transfers among nodes on the basis of information submitted by these nodes. Each node needs to contact these central banking points only periodically or aperiodically. In general, due to the multi-hop nature of ad hoc networks, when a node wants to send a packet to a certain destination, a sequence of nodes needs to be requested to help forward this packet. We refer to the sequence of (ordered) nodes as a route, the intermediate nodes on a route as relay nodes, and the procedure by which to discover a route as route discovery. The routing protocols are important for MANETs in order to establish communication sessions for each source–destination pair. Here, we consider on-demand (or reactive) routing protocols for ad hoc networks, in which a node attempts to establish a route to some destination only when it needs to send packets to that destination. Since on-demand routing protocols are able to handle many changes of node connectivity due to the node’s mobility, they perform better than periodic (or proactive) routing protocols in many situations [32] [208] [347] by virtue of their having much lower overheads. In MANETs, due to the mobility, nodes need to carry out route discovery frequently. 
We refer to the interval between two consecutive route-discovery procedures as a routing stage, and assume that, for each source–destination pair, the selected route between them will remain unchanged during a particular routing stage. Furthermore, to simplify our analysis, we assume that, for each source–destination pair, the routes discovered in different routing stages are independent. After route discovery has been carried out in each stage, multiple forwarding routes can be exploited between the source and the destination. Assume that there are ℓ possible routes and let v_{i,j} be the forwarding cost of the jth node on the ith route, which is also referred to as the node type in this chapter. Considering the possibility of node mobility in MANETs, ℓ and v_{i,j} are no longer fixed values, and can be modeled as random variables. Let the probability mass function (PMF) of ℓ be f̃(ℓ) and the corresponding cumulative mass function (CMF) be F̃(ℓ). v_{i,j} is characterized by its probability density function (PDF) f̂_{i,j} and its cumulative distribution function (CDF) F̂_{i,j}. Define the cost vector of the ith route as v_i = {v_{i,1}, v_{i,2}, . . . , v_{i,h_i}}, where h_i is the number of forwarding nodes on the ith route. Thus, we have the total cost on the ith route, r_i = Σ_{j=1}^{h_i} v_{i,j}, which is also a random variable. Let the PDF and CDF of r_i be f_i and F_i, respectively. Figure 12.1 illustrates our system model by showing a network snapshot of pricing-based multi-hop routing for a source–destination pair. It can be seen from Figure 12.1 that there are three routing candidates with different numbers of hops and routing costs (such as energy-related forwarding costs) between the source and destination. Each route will bid as one entity for providing the packet-forwarding service for the source–destination pair at this routing stage. Then, the source will choose the route with the
Figure 12.1 Pricing-based routing in self-organized MANETs: a source–destination pair with relay nodes on three candidate routes, each route asking a price ($) for its packet-forwarding service.
Figure 12.2 Dynamic pricing-based routing considering time diversity: the lowest cost of the routing candidates and the number of packets transmitted in each stage, shown over the routing stages.
lowest bid to transmit the packets. The price that the source pays to the selected route may either be equivalent to the cost incurred or include a premium on top of the true forwarding cost. Note that the asking prices from each route and the payment from the source may vary according to the pricing mechanisms applied. Further, the payment that the source provides to the selected route needs to be shared among the nodes on the selected route in such a way that no node on the selected route has an incentive to deviate from the equilibrium strategy. Considering the network dynamics due to the node mobility, dynamic topology, or channel fading, the number of available routes, the
number of required hops, and the forwarding costs will change over time. In Figure 12.2, we consider a dynamic scenario and illustrate the relationship between the number of packets to be transmitted and the lowest cost of the available routes at each stage. In order to maximize its payoff by utilizing time diversity, the source tends to transmit more packets when the cost is lower and fewer packets when the cost is higher. The optimal relationship between them will be derived in later sections.
12.3 Pricing game models

In this chapter, we model the process of establishing a route between a source and a destination node as a game. The players of the game are the network nodes. With respect to a given communication session, any node can play only one of the following roles: sender, relay node, or destination. In self-organized MANETs, each node's objective is to maximize its own benefits. Specifically, from the sender's point of view, he/she aims to transmit packets with the least possible payments; from the relay nodes' point of view, they want not only to earn enough payment to cover their forwarding cost but also to gain as much extra payment as possible; while from the network designers' point of view, they prefer that the network throughput and/or lifetime be maximized. Therefore, the source–destination pair and the nodes on the possible forwarding routes constitute a non-cooperative pricing game. Since the selfish nodes belong to different authorities, the nodes have only the information about themselves and will not reveal their own types to others unless efficient mechanisms have been applied to guarantee that truth-telling does not harm their interests. Generally, such a non-cooperative game with imperfect information is complex and difficult to study because the players do not know the perfect strategy profiles of others. However, on the basis of our game setting, the well-developed auction theory can be applied to analyze and formulate the pricing game. Auction games belong to a special class of games with incomplete information known as games of mechanism design, in which there is a "principal" who would like to condition his actions on some information that is privately known by the other players, called "agents." In an auction, the principal (auctioneer) determines resource allocation and prices on the basis of bids from the agents (bidders) according to an explicit set of rules.
In the pricing game, the source can be viewed as the principal, who attempts to buy the forwarding services from the candidates of the forwarding routes. The possible forwarding routes are the bidders who compete with each other for the privilege of serving the source node, by which means they may gain extra payments for future use. In order to maximize their own interests, the selfish forwarding nodes will not reveal their private information, i.e., the actual forwarding costs, to others. They compete for the forwarding request by indicating their willingness to accept particular payments for forwarding in the form of bids. Thus, because of the path diversity of MANETs, the sender is able to lower its forwarding payment by taking advantage of the competition among the routing candidates on the basis of the auction rules. It is important to note
that, instead of considering each node as a bidder, we consider each route as a bidder, which has the following advantages.
• First, by considering the nodes on the same forwarding route as one entity, the sender can fully exploit the path diversity to maximize its own payoffs by lowering the bidding premium for ensuring truth-telling for each node on the route.
• Second, since it has been proved in [504] that there does not exist a forwarding-dominant protocol for ad hoc pricing games, we analyze pricing-based routing in a two-step approach: we first study the payoff-maximization allocation by considering the route-based bidding, and then derive the truth-telling profit-sharing among the nodes on the selected route on the basis of repeated game theory.
• Moreover, less bidding information is required for a route-based approach than in [3] since each route is considered as only one bidding entity.

In this section, we first consider the static pricing game (SPG), which is played only once for the fixed topology. Then, the dynamic pricing game (DPG) is studied and formulated considering playing the pricing game for multiple stages.
12.3.1 The static pricing game

In this subsection, we study the static pricing game model. By taking advantage of the auction approach, our goal is to maximize the profits of the source–destination communication pair for transmitting packets while keeping the forwarding incentives of the forwarding routes. Specifically, we consider an auction mechanism (Q, M) that consists of a pair of functions Q : D → P and M : D → R^N, where D is the set of announced bids and P is the set of probability distributions over the set of routes L. Note that Q_i(d) is the probability that the ith route candidate will be selected for forwarding and M_i(d) is the expected payment for the ith route, where d is the vector of bidding strategies for all routes, i.e., d = {d_1, d_2, . . . , d_ℓ} ∈ D. Then, the payoff function of the ith forwarding route can be represented as follows:

U_i(d_i, d_{−i}) = M_i(d_i, d_{−i}) − Q_i(d_i, d_{−i}) · r_i.    (12.1)
Before studying the equilibria of the auction game, we first define the direct revelation mechanism as the mechanism in which each route bids its true cost, d_i = r_i. The revelation principle [237] states that, given any feasible auction mechanism, there exists an equivalent feasible direct revelation mechanism that gives to the auctioneer and all bidders the same expected payoffs as in the given mechanism. Thus, we can replace the bids d by the cost vector of the routes, i.e., r = {r_1, r_2, . . . , r_ℓ}, without changing the outcome and the allocation rule of the auction game. Therefore, the equilibrium of the SPG can be obtained by solving the following optimization problem to maximize the sender's payoff while providing incentives for the forwarding routes:
max_{Q,M}  E_{ℓ,r} [ g · Σ_{i=1}^{ℓ} Q_i(r) − Σ_{i=1}^{ℓ} M_i(r) ]    (12.2)

s.t.  U_i(r_i, d_{−i}) ≥ U_i(d_i, d_{−i}),  ∀ d_i ∈ D,
      Q_i(r) ∈ {0, 1},   Σ_{i=1}^{ℓ} Q_i(r) ≤ 1,
where the first constraint is also referred to as the incentive-compatibility (IC) constraint, which ensures that the users report their true types, and g is the marginal profit of transmitting one packet. Note that {0, 1} represents a set having two elements, 0 and 1.
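The IC constraint can be illustrated numerically. Under a reverse second-price rule (the lowest bidder wins and is paid the lowest competing bid; a simplified stand-in for the chapter's optimal mechanism), the payoff U_i = M_i − Q_i · r_i of (12.1) is maximized by bidding the true cost. The numbers below are hypothetical:

```python
def payoff(true_cost, bid, other_bids):
    """U_i = M_i - Q_i * r_i for route i under a reverse second-price rule."""
    if bid <= min(other_bids):              # Q_i = 1: route i wins
        return min(other_bids) - true_cost  # M_i = lowest competing bid
    return 0.0                              # Q_i = 0: no payment, no cost

others = [2.5, 3.0]  # competing routes' bids
truthful = payoff(2.0, 2.0, others)
print(truthful)  # 0.5

# No deviation from truthful bidding improves route i's payoff.
assert all(payoff(2.0, b, others) <= truthful
           for b in (1.0, 1.5, 2.2, 2.6, 4.0))
```

Underbidding never raises the payment received, and overbidding only risks losing a profitable contract, which is exactly what the IC constraint in (12.2) requires.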
12.3.2 The dynamic pricing game

Considering the dynamic nature of MANETs, the network topology may change over time owing to the mobility of the nodes. Thus, route discovery needs to be performed frequently. Moreover, at different routing stages there may exist different numbers of available routes with different numbers of hops. It is important for each source–destination pair to decide its transmission and payment behavior at each stage according to the route conditions. Therefore, the pricing game under such dynamic conditions can no longer be modeled as a static game. Game theorists use the concept of dynamic games to model such multi-stage games and to analyze the long-run behaviors of players. In dynamic games, the strategies of the players depend not only on the opponents' current strategies but also on the past outcomes of the game and the possible future actions of other players. Our pricing game for MANETs falls exactly into this category, and in this chapter we focus on studying the dynamic pricing game (DPG).

Intuitively, the sender prefers to transmit more packets when more routing candidates are available and the number of hops is small. This is because, considering the application of auction protocols in each stage, the sender has a higher probability of getting the service at a lower price when there are more bidders (routes) with lower type values. Moreover, the practical constraints in MANETs, such as the delay constraint on packet transmission and the bandwidth constraint on the maximal number of packets that can be transmitted within a unit time duration, need to be considered in the DPG. Therefore, in order to maximize its profit, the source–destination pair needs not only to optimally allocate the packets to the routes within one time period but also to schedule the packets over all periods.
In our DPG, it is important to note that the optimal packet-transmission strategy of each source–destination pair is affected both by the past plays and by the possible future outcomes. Generally speaking, the packet-transmission decision is made by comparing the current transmission profit with future opportunity profits. Also, owing to the delay and bandwidth constraints, the past transmission plays affect current decision making. Capturing these dynamics becomes the key to finding the optimal solution of our DPG. Let L_t denote any realization of the route number at the tth stage and r_t be a realization of the types of all routing candidates at the tth stage. The probabilistic structures of L_t and r_t are different for different ad hoc networking scenarios. By choosing an appropriate stage duration, the dependence of the routing dynamics on the stages can be made negligible. Consider a T-period dynamic game, in which the overall payoff-maximization problem for the source–destination pair can be formulated as follows:
$$\max_{Q,k_t} \; \sum_{t=1}^{T} \beta^t \, \mathbb{E}_{L_t,\mathbf{r}_t}\!\left[ G(\mathbf{K}_t) \sum_{i=1}^{L_t} Q_i - k_t \sum_{i=1}^{L_t} M_i(\mathbf{r}_t) \right]$$
$$\text{s.t.} \quad U_{i,t}(r_{i,t}, \mathbf{d}_{-i,t}) \ge U_{i,t}(d_{i,t}, \mathbf{d}_{-i,t}), \ \forall d_{i,t} \in D,$$
$$Q_i \in \{0,1\}, \quad \sum_{i=1}^{L_t} Q_i \le 1, \quad k_t \le B, \quad \sum_{t=1}^{T} k_t = M, \qquad (12.3)$$
where k_t is the number of packets transmitted in the tth stage and K_t is the vector of the numbers of packets transmitted in the first T − t + 1 stages, which can be represented as K_t = {k_T, k_{T−1}, ..., k_t}. Note that a smaller t in this chapter stands for a later time stage. Here, G(K_t) is the profit that the sender gains in the tth stage, which might not only depend on how many packets are transmitted in the current stage, i.e., k_t, but also be affected by how many packets have been transmitted in previous stages, K_{t+1}. In view of rate–distortion theory [79], we assume that the profit function is concave in k_t; for example, the marginal profit of transmitting one more packet when many packets have already been transmitted should be limited. Also, the subscript t indicates the tth routing stage, and β is the discount factor for multi-stage games. For different applications, β needs to be chosen differently: for real-time applications such as video streaming or voice, β is a smaller value less than unity, so that the payoff at the current stage contributes most to the overall payoff; for non-real-time applications such as data transmission, β can be chosen very close to unity, so that the overall payoff is almost evenly affected by the payoff at each stage. Note that T and B are the delay constraint and the bandwidth constraint, respectively, and M is the total number of packets to be transmitted within T stages.

The DPG formulation (12.3) extends the optimal-pricing problem to the time dimension, by which means one can exploit the potential time diversity of the self-organized ad hoc network, given its dynamic nature. For example, if the current routing condition is not good, the user can hold its transmission for the future, in the hope that the routing cost may become lower. Thus, what appears to be a current payoff loss can be compensated for by a much higher future payoff, so that the overall payoff is optimized.
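The value of waiting can be illustrated by a small two-stage numeric sketch (the numbers and the two-route uniform-cost model are hypothetical, not from the text): a sender holding one packet compares the payoff of transmitting at today's realized cost against the discounted expected payoff of waiting for tomorrow's lowest-cost draw.

```python
import random

random.seed(0)

g = 1.0           # marginal profit per packet (hypothetical)
beta = 0.95       # discount factor
cost_today = 0.8  # realized route cost in the current stage (hypothetical)

# Expected payoff of waiting one stage: tomorrow's lowest route cost is
# the minimum of two uniform(0, 1) draws (two candidate routes).
samples = [min(random.random(), random.random()) for _ in range(100_000)]
wait_value = beta * (g - sum(samples) / len(samples))

transmit_now = g - cost_today
print(f"transmit now: {transmit_now:.3f}, wait: {wait_value:.3f}")
# Waiting dominates here: E[min of two U(0,1)] = 1/3, so the wait value is
# roughly 0.95 * (2/3), well above the immediate payoff of 0.2.
```

Here the realized cost 0.8 is high relative to the expected future minimum, so deferring the transmission wins, exactly the time-diversity effect described above.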
It is worth mentioning that directly solving the nonlinear integer programming problem is very difficult. This is because not only does the current routing realization affect the allocation decision, but also the past play and allocation decision influence the feasible actions and payoff functions in the current period.
12.4 Optimal dynamic pricing-based routing

In order to achieve efficient self-organized routing in the DPG, given the dynamic nature of MANETs, we consider the optimal pricing-based routing approach in this section. First, the optimal auction mechanism is considered for maximizing the payoff of the source–destination pair while keeping the forwarding incentives of the relaying nodes. Then, the dynamic multi-stage game is formulated further using the optimal auction and a dynamic-programming approach. Finally, the mechanism design and the profit sharing among the nodes on the selected route are considered.
12.4.1 The optimal auction for static pricing-based routing

In Section 12.3.1, we formulated the static pricing game based on auction principles as the optimization problem (12.2). Here, we further utilize the results of the optimal auction [300] to simplify the optimization problem. From [300], we know that, by considering the optimal auction, the sender's expected total payoff can be expressed only in terms of the allocation Q, independently of the payment to each route candidate. Specifically, the optimization problem (12.2) can be rewritten as follows:

$$\max_{Q} \; \mathbb{E}_{L,\mathbf{r}}\!\left[\, g \sum_{i=1}^{L} Q_i(\mathbf{r}) - \sum_{i=1}^{L} J_i(r_i) Q_i(\mathbf{r}) \right]$$
$$\text{s.t.} \quad Q_i(\mathbf{r}) \in \{0,1\}, \qquad \sum_{i=1}^{L} Q_i(\mathbf{r}) \le 1, \qquad (12.4)$$
where J_i(r_i) = r_i + 1/ρ(r_i), and ρ(r_i) = f_i(r_i)/F_i(r_i) is the hazard-rate [300] function associated with the distribution of the routing cost. Note that J_i(r_i) is also called the virtual type of the ith player. It has been proved in [300] that the solution of the above optimization also satisfies the incentive-compatibility constraint. The assumptions for the above formulation are rather general: (1) F is continuous and strictly increasing; and (2) the allocations Q_i(r_i, r_{−i}) are increasing in r_i. From (12.4) and the revenue-equivalence theorem, it follows that all mechanisms resulting in the same allocations Q for each realization of r yield the same expected payoff. Thus, in order to obtain the optimal pricing strategies, the mechanism-design process proceeds in two steps:
• first, find the optimal allocation Q(r);
• second, find an implementable mechanism that produces Q for each realization r.

Using the optimal auction approach for pricing, the payoff-maximizing allocation for the sender is to choose the route with the minimal virtual type J(r_i) when g − J(r_i) ≥ 0; otherwise, the sender will not transmit the packet, since doing so would yield a negative payoff and violate his individual rationality. Therefore, if we assume that J(v) is strictly increasing in v, we can define v* = max{v : g − J(v) ≥ 0} as the reserve price of the sender, which is the largest payment he/she can offer for transmitting a packet. Note
that the distributions with increasing J(v) include the uniform, normal, logistic, and exponential distributions, among others.

On the basis of the above discussion, we find that the static pricing game is not efficient when the current routing realization has a high cost. Considering the dynamic properties of MANETs, a more efficient pricing mechanism can be achieved by studying the problem as a multi-stage game and optimally allocating the packet transmissions over multiple time periods.
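The static allocation rule can be sketched concretely. Assuming uniform(0, 1) route costs (so F(r) = r, f(r) = 1, and the virtual type is J(r) = r + F(r)/f(r) = 2r, with reserve price v* = g/2), a minimal sketch of the winner selection is as follows; the function names are illustrative, not from the text:

```python
def virtual_type(r, F, f):
    """Virtual type J(r) = r + F(r)/f(r) of a route with cost r."""
    return r + F(r) / f(r)

# Uniform(0, 1) route costs: F(r) = r, f(r) = 1, so J(r) = 2r.
F = lambda r: r
f = lambda r: 1.0

g = 0.9  # marginal profit of one packet (hypothetical)

def allocate(costs):
    """Optimal static allocation: pick the route with the smallest J(r),
    and transmit only if g - J(r) >= 0 (cost below the reserve g/2 here)."""
    i, r = min(enumerate(costs), key=lambda p: virtual_type(p[1], F, f))
    return i if g - virtual_type(r, F, f) >= 0 else None

print(allocate([0.6, 0.3, 0.7]))  # route 1 wins: J(0.3) = 0.6 <= 0.9
print(allocate([0.6, 0.5, 0.7]))  # None: cheapest J(0.5) = 1.0 > 0.9
```

For uniform costs the reserve price is v* = g/2 = 0.45, so in the second call even the cheapest route (cost 0.5) is rejected, which is exactly the inefficiency of the static game that the multi-stage formulation addresses.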
12.4.2 The optimal dynamic auction for dynamic pricing-based routing

Applying the optimal-auction results to the DPG model formulated in Section 12.3.2, we now discuss the optimal dynamic auction framework for pricing in self-organized MANETs. Since it is difficult to solve (12.3) directly, we adopt a dynamic-programming approach to simplify the multi-stage optimization problem. Define the value function V_t(x) as the maximum expected profit obtainable from periods t, t − 1, ..., 1, given that there are x packets to be transmitted within the remaining time periods. On applying the optimal-auction result (12.4) stage by stage and using the Bellman equation, the maximal expected profit V_t(x) can be written as follows:

$$V_t(x) = \mathbb{E}_{L_t,\mathbf{r}}\!\left[ \max_{Q,k_t} \left\{ G(\mathbf{K}_t) \sum_{i=1}^{L_t} Q_i - k_t \sum_{i=1}^{L_t} J(r_i) Q_i + \beta V_{t-1}(x-k_t) \right\} \right]$$
$$\text{s.t.} \quad Q_i(\mathbf{r}) \in \{0,1\}, \qquad \sum_{i=1}^{L_t} Q_i(\mathbf{r}) = 1, \qquad k_t \le B. \qquad (12.5)$$
Moreover, the boundary conditions for the above dynamic-programming problem are

$$V_0(x) = 0, \qquad x = 1, \ldots, M. \qquad (12.6)$$
Recall that we have the delay constraint T on the maximal allowed number of time stages and the bandwidth constraint B on the maximal number of packets that can be transmitted in each stage. On the basis of the principle of optimality in [19], an allocation Q that achieves the maximum in (12.5) given x, t, and r is also the optimal solution for the overall optimization problem (12.3). Note that the above formulation is similar to that of the multi-unit sequential auction [436] studied by economists.

First, note that, from (12.5) and the monotonicity of J(·), it is clear that, if the sender transmits k packets within one time period, these packets should all be awarded to the forwarding route with the lowest cost r_i. Therefore, define

$$R_t(k) = \max_{Q} \left\{ G(\mathbf{K}_t) \sum_{i=1}^{L_t} Q_i(\mathbf{r}) - k \sum_{i=1}^{L_t} J(r_i) Q_i(\mathbf{r}) \right\}$$
$$\text{s.t.} \quad Q_i(\mathbf{r}) \in \{0,1\}, \qquad \sum_{i=1}^{L_t} Q_i(\mathbf{r}) = 1, \qquad (12.7)$$
which can be solved and written as

$$R_t(k) = \begin{cases} 0, & k = 0, \\ G(k, \mathbf{K}_{t+1}) - k \cdot J(r_{(1)}), & k > 0, \end{cases} \qquad (12.8)$$

where r_{(1)} denotes the lowest cost among the forwarding routes. Thus, the dynamic optimization objective (12.5) can be rewritten in terms of R_t(k) as follows:

$$V_t(x) = \mathbb{E}_{L_t,\mathbf{r}}\!\left[ \max_{0 \le k_t \le \min\{B,x\}} \{ R_t(k_t) + \beta V_{t-1}(x-k_t) \} \right], \qquad (12.9)$$
which is also subject to (12.6). Let k*_t(x) denote the optimal solution above, which is the optimal number of packets to be transmitted on the winning route at the tth stage given remaining capacity x. Letting ΔR_t(i) ≡ R_t(i) − R_t(i − 1) and ΔV_t(i) ≡ V_t(i) − V_t(i − 1), we can rewrite the maximal expected profit V_t(x) as

$$V_t(x) = \mathbb{E}_{L_t,\mathbf{r}}\!\left[ \max_{0 \le k_t \le \min\{B,x\}} \sum_{i=1}^{k_t} \big[ \Delta R_t(i) - \beta\, \Delta V_{t-1}(x-i+1) \big] \right] + \beta V_{t-1}(x). \qquad (12.10)$$

The above formulation will help us to simplify the optimal dynamic pricing problem. Then, in order to solve the dynamic pricing problem (12.5)–(12.6), we first introduce the following lemmas based on (12.10).

Lemma 12.4.1 If ΔV_{t−1}(x) ≥ ΔV_{t−1}(x + 1), then k*_t(x) ≤ k*_t(x + 1) ≤ k*_t(x) + 1, ∀x ≥ 0.

PROOF. We study the left-hand-side inequality first. If k*_t(x) = 0, the inequality holds. If k*_t(x) > 0, then, considering the assumption ΔV_{t−1}(x) ≥ ΔV_{t−1}(x + 1), the optimal allocation k*_t(x + 1) may be higher because of the additional packet in the queue. Hence, k*_t(x + 1) ≥ k*_t(x).

As far as the right-hand-side (RHS) inequality is concerned, we prove it by contradiction. Assume that k*_t(x + 1) ≥ k*_t(x) + 2. From (12.8), we know that ΔR_t(k) decreases with its argument. Further, from (12.10) and the assumption of this lemma, ΔV_{t−1}(x) ≥ ΔV_{t−1}(x + 1), we obtain that achieving the optimal k for the tth stage in (12.10) is equivalent to finding the maximal k satisfying the following inequality:

$$\Delta R_t(k) > \beta\, \Delta V_{t-1}(x-k+1). \qquad (12.11)$$
Therefore, given the optimal k*_t(x + 1), we have

$$\Delta R_t(m) > \beta\, \Delta V_{t-1}(x+1-m+1), \qquad \text{for } m = 1, 2, \ldots, k^*_t(x+1). \qquad (12.12)$$

Since we assume k*_t(x + 1) ≥ k*_t(x) + 2, letting m = k*_t(x) + 2 in (12.12) we obtain

$$\Delta R_t\big(k^*_t(x)+2\big) > \beta\, \Delta V_{t-1}\big(x+1-(k^*_t(x)+2)+1\big) = \beta\, \Delta V_{t-1}\big(x-(k^*_t(x)+1)+1\big). \qquad (12.13)$$
Since ΔR_t(k) decreases with k, (12.13) can be further written as

$$\Delta R_t\big(k^*_t(x)+1\big) \ge \Delta R_t\big(k^*_t(x)+2\big) > \beta\, \Delta V_{t-1}\big(x-(k^*_t(x)+1)+1\big). \qquad (12.14)$$
Considering the optimality criterion (12.11), k*_t(x) should be the largest number of packets satisfying (12.11). Therefore, (12.14) contradicts the optimality of k*_t(x), and the RHS inequality is proved.

It can be seen from the proof of Lemma 12.4.1 that the optimal allocation of packet transmission over multiple stages can be determined under the condition ΔV_{t−1}(x) ≥ ΔV_{t−1}(x + 1). In the following lemma we prove that this condition holds for all t.

Lemma 12.4.2 ΔV_t(x) decreases with x for any fixed t and increases with t for any fixed x.

PROOF. First, we prove by induction that ΔV_t(x) decreases with x at any fixed time period t. For t = 0, the lemma obviously holds, since V_0(x) = 0 for all x. Assume the inductive hypothesis for period t − 1, namely ΔV_{t−1}(x) ≥ ΔV_{t−1}(x + 1); we show that ΔV_t(x) then also decreases. Consider a realization of L_t routes with cost vector r = (r_1, r_2, ..., r_{L_t}). Define the inner maximized term in (12.9) as follows:

$$U_t(x, L_t, \mathbf{r}) = \max_{0 \le k \le \min\{B,x\}} \{ R_t(k) + \beta V_{t-1}(x-k) \}. \qquad (12.15)$$

Define the difference function as

$$\Delta U_t(x, L_t, \mathbf{r}) = U_t(x, L_t, \mathbf{r}) - U_t(x-1, L_t, \mathbf{r}). \qquad (12.16)$$

Thus ΔV_t(x) can be obtained as

$$\Delta V_t(x) = \mathbb{E}_{L_t,\mathbf{r}}[\Delta U_t(x, L_t, \mathbf{r})]. \qquad (12.17)$$

For simplicity, and without loss of generality, we omit the arguments L_t, r in ΔU_t(x, L_t, r) and simply write ΔU_t(x). Moreover, it can be seen from (12.17) that, to prove that ΔV_t(x) decreases with x, it is sufficient to prove that ΔU_t(x) decreases with x. Using the inductive hypothesis and Lemma 12.4.1, we have the constraint on k*_t(x + 1)

$$k^*_t(x) \le k^*_t(x+1) \le k^*_t(x) + 1. \qquad (12.18)$$

Given this constraint, we then study the value of ΔU_t(x + 1) for the two possible outcomes, k*_t(x + 1) = k*_t(x) and k*_t(x + 1) = k*_t(x) + 1.
(1) If k*_t(x + 1) = k*_t(x), then ΔU_t(x + 1) = β ΔV_{t−1}(x − k*_t(x) + 1) from (12.15) and (12.16). Also, from the optimality condition of k in (12.11), we know that

$$\Delta R_t\big(k^*_t(x+1)+1\big) \le \beta\, \Delta V_{t-1}\big(x+1-(k^*_t(x+1)+1)+1\big). \qquad (12.19)$$

Considering k*_t(x + 1) = k*_t(x), (12.19) can be rewritten as

$$\Delta R_t\big(k^*_t(x)+1\big) \le \beta\, \Delta V_{t-1}\big(x-k^*_t(x)+1\big). \qquad (12.20)$$

(2) Similarly, if k*_t(x + 1) = k*_t(x) + 1, then ΔU_t(x + 1) = ΔR_t(k*_t(x) + 1) from (12.15) and (12.16), and

$$\Delta R_t\big(k^*_t(x)+1\big) > \beta\, \Delta V_{t-1}\big(x-k^*_t(x)+1\big). \qquad (12.21)$$

Thus, it can be concluded from the above two cases that ΔU_t(x + 1) satisfies

$$\Delta U_t(x+1) = \max\big\{ \Delta R_t(k^*_t(x)+1),\ \beta\, \Delta V_{t-1}(x-k^*_t(x)+1) \big\}. \qquad (12.22)$$

Consider now ΔU_t(x + 1) and ΔU_t(x) and compare their values. Given the constraint on k*_t(x) imposed by Lemma 12.4.1, the value of ΔU_t(x + 1) in (12.22), and the fact that ΔR_t(m) and ΔV_{t−1}(m) decrease with their arguments, we have

$$\Delta U_t(x) = \max\big\{ \Delta R_t(k^*_t(x-1)+1),\ \beta\, \Delta V_{t-1}(x-1-k^*_t(x-1)+1) \big\} \ge \max\big\{ \Delta R_t(k^*_t(x)+1),\ \beta\, \Delta V_{t-1}(x-k^*_t(x)+1) \big\} = \Delta U_t(x+1). \qquad (12.23)$$

Therefore, the first part of Lemma 12.4.2 is proved by the above discussion.

Next, we show that ΔV_t(x) increases with t for any fixed x. Similarly, it suffices to prove the statement for a particular realization L_t, r. Following the results in (12.22), we get that

$$\Delta U_t(x) \ge \beta\, \Delta V_{t-1}(x-k^*_t(x)), \qquad (12.24)$$

and, from the fact that ΔV_{t−1}(·) is decreasing, we have

$$\Delta U_t(x) \ge \beta\, \Delta V_{t-1}(x). \qquad (12.25)$$

Since taking the expectation with respect to L_t, r on both sides of (12.25) does not affect the inequality, we prove

$$\Delta V_t(x) \ge \Delta V_{t-1}(x). \qquad (12.26)$$
The idea of this lemma can also be illustrated intuitively as follows. At any fixed time period, the marginal benefit ΔV_t(x) of each additional packet declines, because the future possible routes are limited; therefore, the chance of transmitting the additional packet at a low price also decreases. Similarly, for any given remaining packet number x, the marginal benefit of an additional packet increases with t, because more possible future routes are available when more time periods remain;
therefore, the chance of getting a higher marginal benefit increases. Also, Lemma 12.4.2 removes the need for the assumption of Lemma 12.4.1, so we always have k*_t(x) ≤ k*_t(x + 1) ≤ k*_t(x) + 1, ∀x ≥ 0.

Considering Lemmas 12.4.1 and 12.4.2, the optimal allocation of packet transmission for the dynamic auction framework can be characterized by the following theorem.

Theorem 12.4.1 For any realization (L_t, r) at the tth stage, the optimal number of packets to transmit in state (x, t) is given by

$$k^*_t(x) = \begin{cases} \max\{1 \le k \le \min\{x,B\} : \Delta R_t(k) > \beta\, \Delta V_{t-1}(x-k+1)\} & \text{if } \Delta R_t(1) > \beta\, \Delta V_{t-1}(x), \\ 0 & \text{otherwise.} \end{cases} \qquad (12.27)$$

Moreover, it is optimal to allocate these k*_t(x) packets to the route with the lowest cost r_i.

PROOF. V_t(x) is the summation of two terms in (12.10). Since the second term is fixed given x, the optimal k*_t maximizing the first term needs to be studied. From the definition (12.8), ΔR_t(·) decreases with its argument. Also, ΔV_{t−1}(·) decreases with its argument, according to Lemma 12.4.2. Thus, ΔR_t(k) − β ΔV_{t−1}(x − k + 1) also decreases monotonically with k. Therefore, the optimal allocation at the tth time period with x packets in the queue, k*_t(x), is the largest k for which this difference is positive.

Theorem 12.4.1 shows how the source node should allocate packets to the different time periods. The basic idea is to allocate packets progressively to the route with the smallest realized virtual type J(r_{(1)}) until the marginal benefit ΔR_t(i) drops below the discounted marginal opportunity cost β ΔV_{t−1}(x − i + 1). In order to apply the optimal allocation strategy of Theorem 12.4.1, we first need to know the expected profit function V_t(x), ∀t, x. For a finite number of time periods T in (12.5), the optimal dynamic program proceeds backward using the Bellman equation [19] to obtain V_t(x). Owing to the randomness of the route number and its type, it is difficult to obtain a closed-form expression for V_t(x).
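The threshold rule of Theorem 12.4.1 can be sketched directly; in the sketch below, `dR` and `dV_prev` are hypothetical stand-ins for the marginal functions ΔR_t and ΔV_{t−1} (a concave stage profit G(k) = √k and a toy marginal-value function are assumed, neither taken from the text):

```python
import math

def k_star(x, B, beta, dR, dV_prev):
    """Threshold allocation rule: the largest k <= min(x, B) with
    dR(k) > beta * dV_prev(x - k + 1), or 0 if even the first packet
    is not worth sending now.
    dR(k):      marginal stage profit of the k-th packet.
    dV_prev(y): marginal future value of the y-th remaining packet."""
    k = 0
    for i in range(1, min(x, B) + 1):
        if dR(i) > beta * dV_prev(x - i + 1):
            k = i
        else:
            break  # both sides are monotone, so we can stop early
    return k

# Hypothetical concave stage profit G(k) = sqrt(k); winning virtual cost 0.2.
J1 = 0.2
dR = lambda k: (math.sqrt(k) - math.sqrt(k - 1)) - J1  # marginal R_t
dV_prev = lambda y: 0.1 / y                            # toy marginal value
print(k_star(x=5, B=4, beta=0.9, dR=dR, dV_prev=dV_prev))
```

The early exit in the loop is licensed by the monotonicity established in the proof: the difference ΔR_t(k) − β ΔV_{t−1}(x − k + 1) decreases in k, so the first failure is final.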
Thus, we use simulation to approximate the values of V_t(x) for different t and x, which proceeds as follows. Start from routing stage 0. For each stage t, generate N samples of the number of available routes and their types, which follow the PDFs f_L(L) and f_i(r_i), respectively. For each realization and for each pair of values (x, t), calculate k*_t(x) using Theorem 12.4.1. By using the conclusion of Lemma 12.4.1, we simplify the computation of k*_t(x) and need only O(NM) operations to calculate V_t(x) for all x at a fixed time period t. Therefore, O(NMT) operations are required for the whole algorithm. Note that the computation of V_t(x) can be done offline, which does not increase the complexity of finding the optimal allocation for each realization.

We then study the expected profit function for an infinite number of routing stages. Such a scenario gives an upper bound on the expected profit, because the source node can wait until low-cost routes become available for transmission. For an infinite horizon, the maximal profit V_t(x) in (12.5) can be rewritten as
$$V^*(x) = \mathbb{E}_{L,\mathbf{r}}\!\left[ \max_{Q,k} \left\{ \sum_{i=1}^{L} \big( G(\mathbf{K}) - k \cdot J(r_i) \big) Q_i(\mathbf{r}) + \beta V^*(x-k) \right\} \right], \qquad (12.28)$$
or, equivalently, V* = T V*, where T is the operator updating V* using (12.28). Assuming that S is the feasible set of states, the convergence proposition of the dynamic-programming algorithm [19] states that, for any bounded function V : S → ℝ, the optimal profit function satisfies V*(x) = lim_{p→∞}(T^p V)(x), ∀x ∈ S. Since V(x) is bounded in our algorithm, we are able to apply the value-iteration method to approximate the optimal V(x), which proceeds as follows. Start from some initial function V^0(x) = g(x), where the superscript stands for the iteration number. Then, iteratively update V(x) by letting V^{p+1}(x) = (T V^p)(x). The iteration process ends when |V^{p+1}(x) − V^p(x)| ≤ ε for all x, where ε is the error bound on V*(x).
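The value-iteration loop just described can be sketched compactly under simplifying assumptions not in the text: the winning route's virtual cost J is drawn from a small stage-independent discrete distribution, and the stage profit is a hypothetical concave function. All names are illustrative.

```python
import math

def value_iteration(M, B, beta, g_profit, j_dist, eps=1e-6):
    """Approximate the infinite-horizon value V*(x), x = 0..M, by iterating
    V <- T V until the sup-norm change is below eps.
    g_profit(k): profit of sending k packets in one stage (concave).
    j_dist: list of (prob, J) pairs for the winning route's virtual cost."""
    V = [0.0] * (M + 1)
    while True:
        V_new = [0.0] * (M + 1)
        for x in range(M + 1):
            total = 0.0
            for p, J in j_dist:
                # k = 0 means "wait": collect only the discounted future value.
                best = max(g_profit(k) - k * J + beta * V[x - k]
                           for k in range(0, min(B, x) + 1))
                total += p * best
            V_new[x] = total
        if max(abs(a - b) for a, b in zip(V, V_new)) < eps:
            return V_new
        V = V_new

V = value_iteration(M=4, B=2, beta=0.9,
                    g_profit=lambda k: math.sqrt(k),
                    j_dist=[(0.5, 0.2), (0.5, 0.8)])
print([round(v, 3) for v in V])  # nondecreasing in x
```

The stopping test mirrors the sup-norm criterion |V^{p+1}(x) − V^p(x)| ≤ ε above, and convergence follows because T is a β-contraction.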
12.4.3 Mechanism design

In the previous section, we developed the optimal dynamic pricing-based routing approach. Next, our task is to find auction mechanisms that achieve the derived optimal strategy. Many forms of auction can be applied to achieve it; given the truth-telling property of the second-price auction, we focus on this mechanism. In a traditional second-price auction, the bidder with the highest bid wins the item and pays the second-highest bid for it. Here, the source node is trying to find the route with the lowest cost, which implies the application of a reverse second-price auction: the source node allocates the packet transmission to the route with the lowest payment bid and actually pays the second-lowest bid to the selected route. Moreover, the auction mechanism can be implemented in many forms, such as open auctions and sealed-bid auctions. Open auctions allow the bidders to submit bids many times until finally only one bidder stays in the game; in sealed-bid auctions, the bidders submit their bids only once. Since sealed-bid auctions require less side information and hence save wireless resources, we analyze the sealed-bid second-price auction for our optimal allocation policy.

It is important to note that a straightforward application of the reverse second-price auction cannot guarantee the truth-telling property of the bidders. Let J̃_t(r) = G(1, K_{t+1}) − J(r) and r̃_t = J̃_t^{−1}(ΔV_{t−1}(x_t)), where x_t is the number of packets remaining to be transmitted from the tth stage. Considering the scenario in which the lowest cost of the routes satisfies r^t_{(1)} > r̃_t, it can be seen from Theorem 12.4.1 that no packet will be assigned for forwarding within the current time period. Hence, the route with the lowest cost may have an incentive to bid below its true cost so as to satisfy the threshold constraint. In this way, this route will win the packet and gain a positive payoff, because the sender awards the transmission at the second-lowest bid.
But the expected profit of the sender will then decrease, according to (12.10). Therefore, we need to modify the second-price mechanism by using r̃_t as the reserve price for every stage, which is the highest price that the sender agrees to pay for transmitting one packet within the current time period. Specifically, given the submitted bid vector d_t = {d_{1,t}, d_{2,t}, ..., d_{L_t,t}}, the
sender allocates the packet to the route with the lowest bid below the reserve price, and the selected route receives the payment min{d_{(2)}, r̃_t}, where d_{(2)} is the second-lowest bid of the forwarding routes. Note that the mechanism developed above can prevent an individual route from misreporting its cost. However, in the presence of collusion among the routes, it might not be able to maintain the truth-telling property. This problem can be addressed in two ways. First, the greediness of the selfish routes helps to prevent collusion. Suppose that two routes collude to increase their profits; the collusion requires the two routes to act and to share the extra gain cooperatively, but their greediness means that the cooperative game cannot be carried out between them, and the non-cooperative behaviors will eventually lead to an inefficient outcome and break the collusion. Second, in our scheme, the sender can discourage collusion among the routes by setting a higher reserve price. The collusive behavior of bidders is also referred to as a bidding ring in auction theory. The use of an optimal reserve price to combat bidder collusion is analyzed in [237], and that analysis can be applied directly to our scheme for handling route collusion.
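The modified mechanism can be sketched as follows; `reverse_second_price` is an illustrative name, and the winner's payment is capped at the reserve, on the standard procurement-auction convention (the sender never pays above the reserve):

```python
def reverse_second_price(bids, reserve):
    """Reverse (procurement) second-price auction with a reserve price:
    the lowest bid below the reserve wins; the winner is paid the smaller
    of the second-lowest bid and the reserve price."""
    order = sorted(range(len(bids)), key=lambda i: bids[i])
    winner = order[0]
    if bids[winner] > reserve:
        return None, 0.0          # no bid beats the reserve: no transmission
    runner_up = bids[order[1]] if len(bids) > 1 else reserve
    return winner, min(runner_up, reserve)

print(reverse_second_price([0.5, 0.3, 0.9], reserve=0.6))  # (1, 0.5)
print(reverse_second_price([0.7, 0.8], reserve=0.6))       # (None, 0.0)
```

Truthfulness holds because a route's own bid affects only whether it wins, never the payment it receives, while the reserve removes the incentive to underbid just to clear the transmission threshold.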
12.4.4 Profit sharing among the nodes on a selected route

In the above sections, we developed the optimal dynamic routing approach through multi-stage pricing in MANETs and designed a second-price auction mechanism with reserve prices to ensure the truth-telling property of each route. So far, however, we have considered each route as a single entity. The remaining problem is how to share the forwarding profit of a route, defined as in (12.1), among the forwarding nodes on that route. Although the mechanism can ensure the truth-telling of each route as one bidder, cooperation among the nodes on one route cannot be pre-assumed, and truth-telling mechanisms need to be designed further for the profit-sharing problem. In this part, we first prove that no dominant truth-telling strategy exists for the nodes on a selected multi-hop forwarding route in static profit-sharing scenarios. Then, we design truth-telling profit-sharing mechanisms to enforce cooperation among the nodes on the selected route in dynamic scenarios.

Since the nodes on the same forwarding route belong to their own authorities, they act greedily to obtain greater shares of the total profit that the route gains, which forms a static profit-sharing game (SPSG). The players in the profit-sharing game are the nodes on the same forwarding route. The payoff of each node is the profit it obtains through its packet-forwarding efforts, represented as P_{i,j} for the jth node on the ith route. The action of the jth node on the ith discovered route can be represented as {α_{i,j}, v̂_{i,j}}, where α_{i,j} is the percentage of the total profit that this node receives for its packet-forwarding efforts and v̂_{i,j} is the forwarding cost that it reports when the route-based pricing is performed. Note that v̂_{i,j} need not be the true forwarding cost, and our aim is to design mechanisms that enforce truth-telling behaviors.
Assume that the number of hops on the ith route is h_i. Let the profit-sharing vector of the ith route be α_i = {α_{i,1}, α_{i,2}, ..., α_{i,h_i}}, where Σ_{j=1}^{h_i} α_{i,j} = 1. Denote the reported cost vector of the nodes on the ith route as v̂_i = {v̂_{i,1}, v̂_{i,2}, ..., v̂_{i,h_i}}. Recall
that the type vector of the nodes on the ith route is defined as v_i = {v_{i,1}, v_{i,2}, ..., v_{i,h_i}} and that the PDF of v_i is f̂_i, which we assume to be identical for all nodes without loss of generality. We then study the existence of dominant truth-telling strategies in the following theorem.

Theorem 12.4.2 There exists no dominant truth-telling strategy {α_i, v̂_i} in the static profit-sharing game.

PROOF. We prove this theorem by contradiction. Assume that α*_i is a dominant truth-telling profit-sharing strategy in the static profit-sharing game, which means that, by using α*_i, every forwarding node's dominant strategy on the ith route is to report its true type (or cost). Equivalently, if the jth node reports a higher cost, v̂_{i,j} = v_{i,j} + ε, than its true type v_{i,j} while the other nodes report their true values, the jth node will get a lower profit. In order to test the dominance of the strategy α*_i, we need to calculate and compare the node's profit when it cheats with that when it does not.

First, the total profit of the ith route is obtained; then we study the profit of each node. With our second-price mechanism, and considering (12.1), the total profit of the ith route can be represented as follows:

$$U_i(\hat{r}_i) = \mathrm{Prob}\big(\hat{r}_i < r_{(1)}(\mathbf{r}_{-i})\big) \cdot \Big( \mathbb{E}_{\mathbf{r}_{-i}}\big[ r_{(1)}(\mathbf{r}_{-i}) \,\big|\, \hat{r}_i < r_{(1)}(\mathbf{r}_{-i}) \big] - \hat{r}_i \Big), \qquad (12.29)$$

where r̂_i is the bidding cost of the ith route, which the ith route believes to be the true cost (but which need not be, if some node on the ith route cheats by reporting a higher type value), and r_{(1)}(r_{−i}) represents the lowest cost among all routes except the ith route. Without loss of generality, we assume the PDF of r_i to be identical to f for all routes. Using the results of order statistics [346], the conditional expectation of the payment is

$$\mathbb{E}_{\mathbf{r}_{-i}}\big[ r_{(1)}(\mathbf{r}_{-i}) \,\big|\, \hat{r}_i < r_{(1)}(\mathbf{r}_{-i}) \big] = \hat{r}_i + \frac{1}{[1-F(\hat{r}_i)]^{L-1}} \int_{\hat{r}_i}^{\infty} [1-F(x)]^{L-1}\, dx. \qquad (12.30)$$

We note that the probability of winning the auction for the ith route is

$$\mathrm{Prob}\big(\hat{r}_i < r_{(1)}(\mathbf{r}_{-i})\big) = [1-F(\hat{r}_i)]^{L-1}. \qquad (12.31)$$

On substituting (12.30) and (12.31) into (12.29), the total profit can be written as

$$U_i(\hat{r}_i) = \int_{\hat{r}_i}^{\infty} [1-F(x)]^{L-1}\, dx. \qquad (12.32)$$
Then, using the profit-sharing strategy α*_i, the profit of the jth node on the ith route can be calculated. We consider two cases: (a) the node reports its true type v_{i,j}; and (b) the node cheats and reports a higher value v̂_{i,j} = v_{i,j} + ε. For case (a), the profit of the jth node on the ith route is

$$U_{i,j}(v_{i,j}) = \alpha^*_{i,j} \cdot U_i(r_i) = \alpha^*_{i,j} \int_{r_i}^{\infty} [1-F(x)]^{L-1}\, dx. \qquad (12.33)$$
For case (b), the profit includes the cheating profit from reporting an extra cost ε and the allocated share of the profit of the ith route, which can be written as

$$U_{i,j}(\hat{v}_{i,j}) = \epsilon \cdot \mathrm{Prob}\big(\hat{r}_i < r_{(1)}(\mathbf{r}_{-i})\big) + \alpha^*_{i,j} \cdot U_i(\hat{r}_i) = \epsilon\, [1-F(r_i+\epsilon)]^{L-1} + \alpha^*_{i,j} \int_{r_i+\epsilon}^{\infty} [1-F(x)]^{L-1}\, dx. \qquad (12.34)$$

On subtracting (12.33) from (12.34), we have

$$U_{i,j}(\hat{v}_{i,j}) - U_{i,j}(v_{i,j}) = [1-F(r_i+\epsilon)]^{L-1} \times \left( \epsilon - \alpha^*_{i,j}\, \frac{\int_{r_i}^{r_i+\epsilon} [1-F(x)]^{L-1}\, dx}{[1-F(r_i+\epsilon)]^{L-1}} \right). \qquad (12.35)$$

From the mean-value theorem, we know that there exists some λ ∈ [0, 1] satisfying

$$\frac{\int_{r_i}^{r_i+\epsilon} [1-F(x)]^{L-1}\, dx}{[1-F(r_i+\epsilon)]^{L-1}} = \epsilon \cdot \left( \frac{1-F(r_i+\lambda\epsilon)}{1-F(r_i+\epsilon)} \right)^{L-1}. \qquad (12.36)$$

For simplicity, let

$$\Phi(\epsilon) = \left( \frac{1-F(r_i+\lambda\epsilon)}{1-F(r_i+\epsilon)} \right)^{L-1}, \qquad (12.37)$$

which is a decreasing function of ε with the limit

$$\lim_{\epsilon \to 0} \Phi(\epsilon) = 1. \qquad (12.38)$$

Thus, there always exists a positive value δ such that Φ(ε) < 1/α*_{i,j} when ε < δ. Further, by putting (12.36) into (12.35), we obtain

$$U_{i,j}(\hat{v}_{i,j}) - U_{i,j}(v_{i,j}) = \epsilon\, [1-F(r_i+\epsilon)]^{L-1} \big[ 1 - \alpha^*_{i,j}\, \Phi(\epsilon) \big]. \qquad (12.39)$$

Therefore, there exists δ such that, for ε < δ, U_{i,j}(v̂_{i,j}) − U_{i,j}(v_{i,j}) > 0, which contradicts the assumption that α*_{i,j} is a dominant truth-telling strategy. Since such a contradiction holds for any α*_{i,j}, we finally prove that there exists no cheat-proof strategy for the profit-sharing game.

Since there is no dominant truth-telling strategy in static profit-sharing games, as Theorem 12.4.2 shows, it is necessary to design certain mechanisms to enforce cooperation among the forwarding nodes on a particular forwarding route. There are many ways to design such mechanisms. For instance, an intuitive idea is to provide an over-payment to the nodes on the winning route as compensation for their cooperative behavior; the over-payment should exceed the gain the nodes can obtain by cheating. But who is responsible for the over-payment? It is not reasonable to ask the sender to contribute the compensation payment, because in this case the sender may have an incentive to switch his/her transmission to a route with a higher true cost that asks for less over-payment; it is also rational for such a route to require a smaller over-payment, which may give its nodes a positive payoff instead of losing the auction with zero payoff.
Therefore, a more practical way is to let the central bank periodically compensate the forwarding nodes with some payments. The amount of the over-payment can be decided on the basis of the Vickrey–Clarke–Groves (VCG) mechanism [237] [3], which pays each node the difference between the routing cost without this node and the other nodes' routing cost in the presence of this node. It is important to note that the application of the VCG mechanism here does not conflict with our dynamic pricing mechanism: they are carried out separately, by the central bank and by the sender, to ensure the cooperation of forwarding nodes on one route and to maximize the total profit of the sender, respectively. However, the over-payment method still requires some information about the overall topology and forwarding costs, which might not be available in dynamic scenarios. In order to have enforceable truth-telling mechanisms, it is reasonable to model the profit-sharing interactions as a repeated game for each route. Generally speaking, repeated games belong to the family of dynamic games, in which a similar static game is played many times. The overall payoff in a repeated game is represented as a normalized discounted summation of the payoffs of the stage games. A strategy in the repeated game is a complete plan of action that defines the players' actions in every stage game. At the end of each stage, all the players can observe the outcome of the stage game and decide their future actions using the history of plays. The repeated profit-sharing game (RPSG) can be defined as follows.

Definition 12.4.1  Let Γ be a static profit-sharing game and β be a discount factor. The T-period repeated profit-sharing game, denoted by Γ(T, β), consists of the game Γ repeated T times. The repeated-game payoff is given by

\[
P_{i,j} = \sum_{t=0}^{T-1} \beta^{t} P_{i,j}^{t}, \tag{12.40}
\]
where P_{i,j}^t denotes the payoff of the jth node on the ith route in period t. If T goes to infinity, then Γ(∞, β) is referred to as the infinitely repeated game. Note that the Nash equilibrium [123] [322] is an important concept for measuring the outcome of the SPSG: it is a set of strategies, one for each player, such that no selfish player has an incentive to unilaterally change his/her action. However, the selfishness of players will result in inefficient non-cooperative Nash equilibria in static games. As for dynamic games, the subgame perfect equilibrium (SPE) can be used to study the game outcomes. This is an equilibrium such that the users' strategies constitute a Nash equilibrium in every subgame [123] [322] of the original game. In the RPSG, since the game is not played only once, the players are able to make decisions conditioned on past moves for better outcomes, thus allowing for reputation effects and retribution. Therefore, in order to measure the outcome of the RPSG, we apply the folk theorems [123] [96] for infinitely repeated games to obtain the following theorem.
Theorem 12.4.3  In the RPSG, there exists a discount factor β̂ < 1 such that any feasible and individually rational payoff can be enforced by an equilibrium for any discount factor β ∈ (β̂, 1).

The above theorem illustrates that feasible profit-sharing outcomes can be enforced in the RPSG when no dominant strategy is available. However, it does not answer the question of how the feasible profit-sharing outcomes can be enforced, that is, how to design the enforcing mechanisms in the RPSG. First, we define two strategies: the cooperative strategy and the non-cooperative strategy. In the cooperative strategy, the node reports its true forwarding cost; in the non-cooperative strategy, the node reports a very high forwarding cost so that the route containing this node will not be selected for packet forwarding. We summarize the following cartel-maintenance profit-sharing (CAMP) mechanism to enforce truth-telling strategies for the RPSG.
(1) Each node on the selected route plays the cooperative strategy in the first stage.
(2) If the cooperative strategy is played in stage t and U_i = Σ_{j=1}^{h_i} P_{i,j} ≥ Ũ, each node plays the cooperative strategy in stage t + 1.
(3) If the cooperative strategy is played in stage t and U_i < Ũ, each node switches to a punishment phase for T − 1 stages, in which the non-cooperative strategy is played regardless of the outcomes realized. At the Tth period, each node switches back to the cooperative strategy.
Note that Ũ is the cartel-maintenance threshold. As in [96] [345] [171], the optimal Ũ and T can be obtained using the routing statistics. The CAMP mechanism uses the non-cooperative punishment launched by all nodes to deter any deviation from the cooperative strategy. Specifically, although deviating may benefit a node in the current stage, its payoff will be decreased by more in future stages. Using the CAMP mechanism, truth-telling profit sharing is enforceable among the nodes on the selected route.
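The deterrence logic of the CAMP trigger can be checked with a minimal numerical sketch. All values below (per-stage payoffs, the discount factor, and the punishment length) are hypothetical illustrations, not taken from the analysis: a node that cheats once gains in that stage, but then sits through T − 1 punishment stages, so its discounted payoff (12.40) falls below that of always cooperating.

```python
# Illustrative check that CAMP-style punishment deters a one-shot deviation.
# All payoff numbers are hypothetical; they are not taken from the text.

def discounted_payoff(stage_payoffs, beta):
    """Discounted sum of stage payoffs, as in the repeated-game payoff (12.40)."""
    return sum((beta ** t) * p for t, p in enumerate(stage_payoffs))

def camp_stream(deviate_at, horizon, coop=1.0, cheat=1.5, punish=0.0, T=5):
    """Stage payoffs of one node under the CAMP trigger rule.

    If the node deviates in some stage, all nodes play the non-cooperative
    strategy for the next T - 1 stages, then return to cooperation.
    """
    payoffs, t = [], 0
    while t < horizon:
        if t == deviate_at:
            payoffs.append(cheat)                              # one-stage cheating gain
            payoffs.extend([punish] * min(T - 1, horizon - t - 1))
            t += T                                             # punishment phase
        else:
            payoffs.append(coop)
            t += 1
    return payoffs[:horizon]

beta, horizon = 0.9, 50
v_coop = discounted_payoff(camp_stream(deviate_at=None, horizon=horizon), beta)
v_dev = discounted_payoff(camp_stream(deviate_at=0, horizon=horizon), beta)
print(v_coop > v_dev)  # True: the punishment outweighs the one-stage gain
```

With these numbers, cooperating forever earns about 9.95, while a deviation in the first stage earns about 7.35, so truth-telling is the better plan for any node that weighs future stages with β = 0.9.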
On the basis of Theorem 12.4.3, we can enforce any feasible profit-sharing strategy, such as equal sharing or proportional sharing according to the effort of each node.
12.5
Simulation studies
In this section, we evaluate the performance of the dynamic pricing approach in multi-hop ad hoc networks. We consider an ad hoc network in which N nodes are randomly deployed inside a rectangular region of 10γ m × 10γ m according to a two-dimensional uniform distribution, with the maximal transmission range γ = 100 m for each node. Let λ = Nπ/100 denote the normalized node density, that is, the average number of neighbors of each node in the network. Each node moves according to the random waypoint model [209]: a node starts at a random position, waits for a duration called the pause time, then randomly chooses a new location and moves toward the new
Table 12.1. Simulation parameters

Node density: 10, 20, 30
Minimum velocity (v_min): 10 m/s
Maximum velocity (v_max): 30 m/s
Average pause time: 100 s
Dimensions of space: 1000 m × 1000 m
Maximum transmission range: 100 m
Average packet inter-arrival time: 1 s
Data packet size: 1024 bytes
Link bandwidth: 8 Mbps
location with a velocity chosen uniformly between v_min and v_max. When it arrives at the new location, it waits for another random pause time and then repeats the process. The physical layer assumes that two nodes can communicate directly with each other successfully only if they are within each other's transmission range. The MAC layer simulates the IEEE 802.11 distributed coordination function (DCF) with a four-way handshaking mechanism [197]. Table 12.1 lists all of the simulation parameters. Note that each source–destination pair is formed by randomly picking two nodes in the network. Furthermore, multiple routes with different numbers of hops may exist for each source–destination pair. Since the routes with the fewest hops have much higher probabilities of achieving lower costs, without loss of generality we consider only the fewest-hop routes as bidding routes in the optimal dynamic auction framework. Because of the mobility of each node, its forwarding cost is no longer a fixed value, and we assume that its PDF f̂(v) follows the uniform distribution U[u̲, ū] with mean μ and variance σ². Thus, by the central limit theorem [346], the cost of an h-hop route can be approximated by a normal distribution with mean h · μ and variance h · σ². In our simulation, we first study the dynamics of MANETs and then illustrate the performance of our framework for various network settings. To study the dynamics of MANETs, we first conduct simulations examining the number of hops on the fewest-hop route for source–destination pairs. Let h̄(n_i, n_j) = ⌈dist(n_i, n_j)/γ⌉ denote the minimum number of hops needed to traverse from node n_i to node n_j, where dist(n_i, n_j) denotes the physical distance between nodes n_i and n_j, and let h̃(n_i, n_j) denote the number of hops on the actual fewest-hop route between the two nodes. Note that we simulate 10^6 samples of topologies to study the dynamics of MANETs.
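The normal approximation of the h-hop route cost can be verified with a short Monte Carlo sketch. The per-hop cost range [10, 15] matches the simulation's assumed uniform cost distribution; the hop count and sample size are illustrative choices.

```python
# Monte Carlo check of the normal approximation for an h-hop route cost:
# the sum of h i.i.d. uniform per-hop costs has mean h*mu and variance h*sigma^2.
import random
import statistics

random.seed(1)
u_lo, u_hi, h = 10.0, 15.0, 4
mu = (u_lo + u_hi) / 2.0             # per-hop mean, 12.5
var = (u_hi - u_lo) ** 2 / 12.0      # per-hop variance, ~2.083

route_costs = [sum(random.uniform(u_lo, u_hi) for _ in range(h))
               for _ in range(20000)]

print(statistics.mean(route_costs))      # close to h * mu = 50
print(statistics.variance(route_costs))  # close to h * var, about 8.33
```

By the central limit theorem the empirical histogram of `route_costs` also approaches the N(h·μ, h·σ²) density as h grows, which is what justifies treating route bids as normally distributed in the auction analysis.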
First, Figure 12.3 shows the approximated cumulative probability mass function (CMF) of the difference between h̃(n_i, n_j) and h̄(n_i, n_j) for various node densities. From these results, the average number of hops on the fewest-hop route from node n_i to n_j can be approximated using dist(n_i, n_j), γ, and the corresponding CMF of the hop-number difference. It can also be seen from Figure 12.3 that a lower node density results in a larger number of hops on the fewest-hop routes, since the number of neighboring nodes available for packet forwarding is limited in such situations. Second, we study the time and path diversity of MANETs by finding the maximum number of fewest-hop routes for each source–destination pair. Note that
Figure 12.3  The cumulative probability mass function of the hop-number difference between h(u, v) and r(u, v) for node densities λ = 10, 20, and 30.
there may exist scenarios in which a node lies on multiple fewest-hop forwarding routes for the same source–destination pair. For simplicity, we assume that, during the route-discovery phase, the destination randomly picks one such route as the routing candidate and feeds the routing information of node-disjoint fewest-hop routes back to the source. Figure 12.4 shows the CMF of the number of fewest-hop routes for various numbers of hops when the node density is 10. The results for node densities of 20 and 30 are shown in Figures 12.5 and 12.6, respectively. It can be seen from these figures that, when the node density increases, the probability of having more routes for each source–destination pair becomes much higher. This also indicates that a higher order of path diversity can be exploited when each node has more neighbors. Moreover, the probability of finding additional routes decreases as the number of hops grows, since the path diversity of a multi-hop route is limited by the forwarding node with the worst neighboring situation. Therefore, the number of routing candidates and their types can be approximated using the above results. In the following, we consider the performance of three different schemes: our scheme with a finite time horizon, our scheme with an infinite time horizon, and the fixed-allocation scheme. Note that an infinite time horizon cannot be achieved in real applications; however, it can serve as an upper bound for measuring the performance of our scheme. The fixed scheme allocates a fixed number of packets to each stage while also using the optimal auction at each stage. We assume that the cheat-proof profit-sharing mechanisms are in place to ensure the cooperation of the forwarding nodes on the same route. Let the benefit function be G(k) = g · k, where g is the benefit from successfully transmitting one packet. The simulation parameters are set as
Figure 12.4  The cumulative probability mass function of the number of fewest-hop routes (two- to six-hop routes) when the node density is 10.
Figure 12.5  The cumulative probability mass function of the number of fewest-hop routes (two- to six-hop routes) when the node density is 20.
T = 20, M = 100, and B = 10. Let g = 60, u̲ = 10, and ū = 15. In Figure 12.7, we compare the overall profits of the three schemes for various node densities. The concavity of the simulated value functions of our scheme matches the theoretical statement in Lemma 12.4.2.
Figure 12.6  The cumulative probability mass function of the number of fewest-hop routes (two- to six-hop routes) when the node density is 30.
Figure 12.7  The overall profits of our scheme with a finite time horizon, our scheme with an infinite time horizon, and the fixed scheme, for node densities λ = 10, 20, and 30.
It can be seen from Figure 12.7 that our scheme achieves significant performance gains over the fixed scheme, which come mainly from the time diversity exploited by the dynamic approach. We observe that the performance gap between the two schemes becomes larger when the node density decreases. Thus, in order to increase the profit in situations of low node densities, it becomes much more important to exploit the time
diversity. Also, the total profit of our scheme increases as the node density increases, owing to the higher order of path diversity. Moreover, since the performance gap between the schemes with finite and infinite time horizons is small, only a few routing stages are required in order to exploit the time diversity.

Figure 12.8  The average profits of our scheme with a finite time horizon, our scheme with an infinite time horizon, and the fixed scheme.
Figure 12.9  The overall profits of our scheme with different numbers of packets to be transmitted (M = 40, 60, 80, 100) when the node density is 10.
Figure 12.10  The overall profits of our scheme with different numbers of time stages (T = 10, 15, 20) when the node density is 10.
In Figure 12.8, the average profits of the three schemes for various node densities are shown. This figure shows that the average profit of transmitting one packet decreases as the number of packets to be transmitted increases, because the packets must share the limited routing resources from both time diversity and path diversity. When the node density is 30, the average profit degrades much more slowly than in the other cases, since the potential for utilizing both time diversity and path diversity is high. The overall profits of our scheme with a finite time horizon for various total numbers of packets, for a node density of 10, are compared in Figure 12.9. This figure shows that the overall profit increases with the number of routing stages owing to the time diversity; saturation behavior can also be observed when more stages are used. In Figure 12.10, the overall profits for various numbers of time stages are compared. Given the limited routing resources, the overall profit saturates when the number of packets is high.
12.6
Summary and bibliographical notes In this chapter, we study how to conduct efficient pricing-based routing in self-organized MANETs by assuming that packet forwarding will impose a cost on the relay node and that successful transmission brings benefits to the source–destination pairs. Considering the dynamic nature of MANETs, we model the routing procedure as a multi-stage pricing game and develop an optimal dynamic pricing-based routing approach to maximize the payoff of the source–destination pair while keeping the forwarding incentives of the relay nodes on the selected routes by optimally pricing their packet-forwarding services through the auction protocol. It is important to notice that not only the path diversity but also the time diversity in MANETs can be exploited by our dynamic pricing-based
approach. Also, the optimal dynamic auction algorithm is developed to achieve the optimal allocation of packets to be transmitted, which provides the corresponding pricing rules while taking into consideration the nodes' mobility and the routing dynamics. Extensive simulations have been conducted to study the performance of this approach. The results illustrate that the dynamic pricing game approach achieves significant performance gains over the existing static routing approaches. Considering self-organized MANETs, two types of game-theoretic approaches have been proposed to analyze the interactions among network users and stimulate cooperation for self-organized networking: reputation-based methods and pricing-based methods. In reputation-based methods, such as those presented in [283] [15] [290] [215] [483], a node determines whether it should forward packets for other nodes, or request other nodes to forward packets for it, on the basis of their past behaviors. In such schemes, by continually monitoring packet-forwarding activities, reputation effects and retribution make it possible to detect misbehaving nodes and isolate them from the rest of the network, thus forcing the selfish nodes to behave cooperatively in order to obtain better payoffs. In pricing-based methods, such as those in [493] [214], a selfish node will forward packets for other nodes only if it can get a properly priced payment from those requesters as compensation. Several approaches have been developed to exploit the path diversity during the routing process in self-organized MANETs, such as those in [3] [111] [504].
On the basis of ideas concerning auction-like pricing and routing protocols for the Internet [126] [313], the authors of [3] [111] [504] have introduced auction-like methods for cost-efficient and truthful routing in MANETs, in which the sender-centric Vickrey auction is adopted to discover the most cost-efficient routes; its incentive compatibility ensures truthful routing among the nodes. Router-based auction approaches [73] [158] have also been proposed to encourage packet forwarding in MANETs, whereby each router constitutes an auction market instead of submitting bids to the sender. In addition, a strategy-proof pricing algorithm for truthful multicast routing has been proposed in [453]. Although the existing pricing-based approaches in [3] [111] [504] have achieved some success in cost-efficient and incentive-compatible routing for MANETs with selfish nodes, most of them assume that the network topology is fixed or that the routes between the sources and the destinations are known and predetermined. Further, none of the existing approaches has addressed how to exploit time diversity for efficient routing. More information can be found in [216].
13
Connectivity-aware network lifetime optimization
In this chapter, we consider a class of energy-aware routing algorithms that explicitly take into account the connectivity of the remaining sensor network for lifetime improvement. In typical sensor-network deployments, some nodes may be more important than other nodes because the failure of these nodes causes network disintegration, which results in early termination of information delivery. To mitigate this problem, we consider a class of routing algorithms called keep-connect algorithms, which use computable measures of network connectivity in determining how to route packets. Such algorithms embed the importance of the nodes in the routing cost/metric. The importance of a node is characterized by the algebraic connectivity of the remaining graph when that node fails. We prove several properties of the routing algorithm, including the energy-consumption upper bound. Using extensive simulations, we demonstrate that the algorithm achieves significant performance improvement compared with the existing routing algorithms. More importantly, we show that it is more robust in terms of algebraic network connectivity for lifetime improvement than the existing algorithms. Finally, we present a distributed implementation of the algorithm.
13.1
Introduction
Advances in low-power integrated-circuit devices and communications technologies have enabled the deployment of low-cost, low-power sensors that can be integrated to form a sensor network. This type of network has vastly important applications, e.g., from battlefield surveillance systems to modern highway and industry monitoring systems, and from emergency rescue systems to early forest-fire detection and sophisticated earthquake early-detection systems. Having such a broad range of applications, the sensor network is becoming an integral part of human lives, and it has been identified as one of today's most important technologies. There are many important characteristics of a sensor network. First, the sensor nodes are typically deployed in an area with high redundancy, and each sensor node has limited energy, so it is prone to failure. In order to accomplish the mission, it is essential for the sensor nodes to collaborate. Second, the nodes in a sensor network typically stay in their original places of deployment for their entire lifetime. Hence, it
is very important to keep the remaining network always connected, since disintegrated clusters of nodes are useless for information gathering. Moreover, due to the immobility of the nodes, it might not be possible to reorganize the remaining nodes to create a new connected network. Owing to these characteristics of sensor networks, the design of routing algorithms becomes very different from that of typical ad hoc networks in the following respect. Instead of minimizing the hop count and delivery delay in the network, the routing algorithms in the sensor networks focus more on extending the limited battery lifetime of the nodes. Furthermore, most of the existing energy-aware routing algorithms use the time until the first node in the network dies as the definition of the network lifetime [80] [421]. Since, in many practical sensor applications, the death of the first node might not influence the information collection task, we argue that the network lifetime should be defined as the time until there is no route from any particular source to any particular destination. In other words, the network lifetime should be defined as the time until the network becomes disconnected/disintegrated. Using this definition as the network lifetime, the network connectivity becomes an important criterion to be explicitly considered in the routing algorithm. To be precise, in this chapter we employ the notion of the algebraic connectivity of a graph in spectral graph theory to quantify the importance of a node. In particular, the importance of a node is quantified by the Fiedler value of the remaining graph when that particular node fails. One property of the Fiedler value is that the larger it is, the more connected the graph will be. For this reason, the Fiedler value is also referred to as the algebraic connectivity of a graph. 
By considering the nodes' importance from the graph-connectivity perspective in the routing design, nodes with higher importance are retained in the network, and therefore the connectivity of the remaining network is maintained for as long as possible. To embed the nodes' importance in the routing cost, we present a class of algorithms called keep-connect algorithms to solve the problem posed. Such an algorithm has several advantages. First, it is an online algorithm: it does not need to know the sequences of messages to be routed ahead of time. This characteristic is important, since the pattern of information generation is typically not known a priori in a sensor network. Second, the algorithm is efficient at maintaining the connectivity of the remaining network. Finally, it is flexible and can be used together with any existing energy-aware routing algorithm that employs distributed Bellman–Ford/Dijkstra algorithms in its implementation. This chapter is organized as follows. We first give the system description and problem formulation in Section 13.2. Several important facts from spectral graph theory are briefly outlined in Section 13.3. In Section 13.4, the keep-connect algorithms are explained. The upper bound on the energy consumption is proved in Section 13.5. In Section 13.6, we present one possible distributed implementation of the algorithm. Its effectiveness is verified through simulations in Section 13.7.
13.2
The system model and problem formulation In this section, we present the network model, review several definitions of the sensor network lifetime, and explain the problem formulation.
13.2.1
The network model
A wireless sensor network can be modeled as an undirected simple finite graph G(V, E), where V = {v_1, . . . , v_n} is the set of nodes in the network, E is the set of all links/edges, |V| = n is the number of vertices in the graph, and |E| = m is the number of edges in the graph. The undirected nature of the graph implies that all the links in the network are bi-directional, i.e., the fact that node v_i is able to reach node v_j implies the converse. The simple nature of the graph implies that no node has a self-loop and that no two nodes are connected by multiple edges. The finite nature of the graph implies that the cardinalities of the node and edge sets are finite. The link (v_i, v_j) implies that node v_j ∈ S_{v_i} can be directly reached by node v_i with a certain transmit-power level within the predefined dynamic range, where S_{v_i} is the set of nodes that can be directly reached by node v_i. We assume that every node v_i has initial battery energy E_i, for all i ∈ {1, . . . , n}. The energy consumption for packet transmission from node v_i to node v_j is proportional to d(i, j)^α, where d(i, j) is the distance between node v_i and node v_j. The path-loss exponent α depends on the transmission environment [357] and typically ranges from 2 to 4. In this chapter, we assume α = 2 for free-space propagation; we will also discuss the performance of our algorithm when α = 4 in Section 13.7. When the energy in a node is exhausted, we say that the node has failed.
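One consequence of the d(i, j)^α energy model is that, for α ≥ 2, several short hops can cost less total transmission energy than one long hop. The sketch below (with hypothetical node coordinates, not from the text) uses Dijkstra's algorithm with link weight d^α to find the minimum-energy route:

```python
import heapq
import math

def min_energy_path(coords, alpha, rng, src, dst):
    """Dijkstra over links shorter than the transmission range rng,
    with link energy cost d(i, j) ** alpha."""
    nodes = list(coords)
    dist = {v: math.inf for v in nodes}
    prev = {}
    dist[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                       # stale heap entry
        for v in nodes:
            if v == u:
                continue
            duv = math.dist(coords[u], coords[v])
            if duv > rng:
                continue                   # out of transmission range
            nd = d + duv ** alpha
            if nd < dist[v]:
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, v = [dst], dst
    while v != src:                        # backtrack the shortest-path tree
        v = prev[v]
        path.append(v)
    return path[::-1], dist[dst]

# Three collinear nodes: relaying through node 1 halves the energy for alpha = 2.
coords = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (2.0, 0.0)}
path, cost = min_energy_path(coords, alpha=2, rng=2.5, src=0, dst=2)
print(path, cost)  # [0, 1, 2] with cost 2.0, versus 4.0 for the direct link
```

For α = 4 the advantage of relaying is even larger (2.0 versus 16.0 on the same layout), which is why energy-aware routing in dense deployments tends to favor many short hops.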
13.2.2
Definitions of network lifetime
Depending on the application of the wireless sensor network, there are many definitions of the network lifetime. The network lifetime can be defined as the time until the first node/sensor in the network fails; in contrast, it can also be defined as the time until all nodes fail. A more general definition of the network lifetime is min{t_1, t_2, t_3}, where t_1 is the time it takes for the cardinality of the largest connected component to drop below c_1 · n(t), with n(t) the number of live nodes at time t; t_2 is the time it takes for n(t) to drop below c_2 · n(0); and t_3 is the time it takes for the area covered to drop below c_3 · A, where A is the area covered by the initial deployment of the sensors. In the above definition, c_1, c_2, and c_3 are predefined constants between zero and unity. It is well known that network connectivity is a very important characteristic of ad hoc/sensor networks; therefore, it should be taken into account in the definition of the network lifetime and in the algorithm design. In sensor-network applications, the time until the first node/sensor fails might not serve as a good definition of the network lifetime, since the failure of the first node/sensor does not terminate information delivery/collection.
In contrast, network disintegration typically causes a severe impact on information delivery. This motivates us to employ the time until the remaining network becomes disconnected as our definition of network lifetime. Using this definition, we argue that it is crucial to consider the network connectivity in designing an energy-aware routing algorithm.
13.2.3
Problem formulation
The problem of maximizing the minimum residual energy of nodes in the network has been studied extensively [80] [246] [390]. The time until the first node in the network dies can be found using the following linear program:

\[
\begin{aligned}
\max\;& T \\
\text{s.t.}\;& f^{(c)}(i,j) \ge 0, && \forall i \in N,\ \forall j \in S_i,\ \forall c \in C, \\
& \sum_{c\in C}\sum_{j\in S_i} e_t(i,j)\, f^{(c)}(i,j) + \sum_{c\in C}\sum_{j:\,i\in S_j} e_r(j,i)\, f^{(c)}(j,i) \le E_i, && \forall i \in N, \\
& \sum_{j:\,i\in S_j} f^{(c)}(j,i) + T\, Q_i^{(c)} = \sum_{j\in S_i} f^{(c)}(i,j), && \forall i \in N,\ \forall c \in C,
\end{aligned}
\tag{13.1}
\]

where f^{(c)}(i, j) is the amount of information of commodity c transmitted from node v_i to its neighbor v_j ∈ S_i until time T, and S_i is the set of v_i's neighboring nodes. Each commodity c ∈ C has a certain source node O^{(c)} and destination node D^{(c)}, where O^{(c)} is the origin/source of commodity c and D^{(c)} is the destination/drain of commodity c. e_t(i, j) (respectively, e_r(j, i)) is the energy required to guarantee successful transmission (respectively, reception) of a unit of information from node v_i to node v_j, and Q_i^{(c)} denotes the information-generation rate of commodity c at node v_i. The first constraint indicates that the amount of information transmitted between any two nodes is non-negative. The second constraint states that, for every node, the total energy used for packet transmission and reception must not exceed the node's battery capacity. The last constraint expresses flow conservation at each node in the network; we note that Q_i^{(c)} > 0 for v_i ∈ O^{(c)}, Q_i^{(c)} < 0 for v_i ∈ D^{(c)}, and Q_i^{(c)} = 0 otherwise. The notation j : i ∈ S_j denotes summation over all nodes v_j that have node v_i as a neighbor; therefore, Σ_{j:i∈S_j} f^{(c)}(j, i) denotes the total inflow to node v_i until time T. Similarly, Σ_{j∈S_i} f^{(c)}(i, j) denotes the total outflow from node v_i until time T. The flow conservation simply states that the total flow coming into node v_i plus the information generated at node v_i equals the total flow going out of node v_i.
We note that the above formulation involves two assumptions. First, the formulation requires knowledge of all commodities when performing the optimization; that is, the information-generation rates of the sources of all commodities throughout the whole duration of the network lifetime are required in order to solve the linear program. Second, the above formulation does not reflect the sequences of the commodities. We know that the performance of an online routing algorithm depends on the sequences
We know that the performance of an online routing algorithm depends on the sequences
of the commodities [246]. By sequences of commodities, we mean the order in which the traffic from different commodities appears in the network. These two assumptions imply that the above formulation is suitable only for offline optimization, which is of limited use in a practical scenario. In practice, the routing decision has to be made online, upon packet arrival, and it will be very hard, if not impossible, to know the information-generation rates of all commodities a priori. A qualitative performance comparison of online and offline routing algorithms is given in [246]. The authors show that there is no online algorithm for message routing with a constant competitiveness ratio in terms of network lifetime, where the competitiveness ratio is defined as the ratio of the solution of the online algorithm to the optimal offline solution. This implies that the performance of an online algorithm is in general worse than that of the offline algorithm. Moreover, the formulation in (13.1) captures only the time until the first node dies. For these reasons, we focus on designing a robust online algorithm that takes into account the connectivity of the remaining network when making the routing decision. The robustness of the scheme comes from the fact that, when the information-generation rates are not known a priori and the routing decision is made on the fly, employing the connectivity weight in the routing decision keeps the remaining network connected for as long as possible.
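For intuition about the offline program (13.1), consider the special case of a single commodity routed along a fixed chain of nodes. Flow conservation then forces every link on the chain to carry T · Q units, so each node's energy constraint caps T, and the lifetime is simply the minimum of the per-node caps. The sketch below uses illustrative numbers (the energy values e_t, e_r and the battery levels E_i are hypothetical):

```python
def chain_lifetime(E, e_t, e_r, Q=1.0):
    """Max T for one commodity on a chain (source = node 0, sink = last node).

    Flow conservation in (13.1) forces f(i, i+1) = T * Q on every link, so the
    energy constraint of node i becomes (per-unit drain) * T * Q <= E[i]:
    the source pays e_t per unit, interior nodes pay e_t + e_r, the sink pays e_r.
    """
    n = len(E)
    caps = []
    for i in range(n):
        drain = (e_t if i < n - 1 else 0.0) + (e_r if i > 0 else 0.0)
        caps.append(E[i] / (drain * Q))
    return min(caps)

# The interior node pays for both reception and transmission, so it is the bottleneck.
print(chain_lifetime(E=[10.0, 9.0, 10.0], e_t=2.0, e_r=1.0))  # 3.0
```

Even in this tiny case the solution is determined by a single bottleneck node, which foreshadows why preserving "important" nodes matters for the lifetime of the whole network.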
13.3
Facts from spectral graph theory In this section, we briefly summarize some important facts from spectral graph theory. We merely state the lemmas since they provide insight for understanding the scheme, and we will use some of these lemmas to prove several properties of the scheme. The complete proofs of the lemmas can be found in the references cited above.
13.3.1
Eigenvalues of a Laplacian matrix In this subsection, we briefly discuss the definition of a Laplacian matrix, its eigenvalues, and the relationship between the eigenvalues of Laplacian matrix and the connectivity of the associated graph. The following notation will be used throughout the chapter: G(V, E) is the graph with set of vertices V and set of edges E. We recall that the number of vertices is |V | = n and the number of edges is |E| = m. Moreover, we define G −vi as a graph resulting from removing vertex vi and all its adjacent edges from the original graph G. In the rest of the chapter, we will use vertex and node interchangeably. Similarly, we will use link and edge interchangeably. The Laplacian matrix associated with a graph is defined as follows. Definition 13.3.1 (Laplacian matrix associated with a graph) In a graph G(V, E), the Laplacian matrix associated with a graph, L(G), is an n by n matrix defined as follows:
Connectivity-aware network lifetime optimization
\[
L(i, j) =
\begin{cases}
d_{v_i} & \text{if } i = j, \\
-1 & \text{if } (v_i, v_j) \in E, \\
0 & \text{otherwise},
\end{cases}
\tag{13.2}
\]
where i, j ∈ {1, . . ., n} are the indices of the nodes. Equivalently, the Laplacian matrix L(G) can be expressed as
\[
L(G) = T(G) - A(G), \tag{13.3}
\]
where T(G) is an n by n diagonal matrix associated with graph G whose (i, i)th entry is d_{v_i}, the number of neighboring nodes of v_i, and A(G) is the n by n adjacency matrix associated with graph G. The eigenvalues of the Laplacian matrix L(G), λ0(G) ≤ λ1(G) ≤ · · · ≤ λn−1(G), are usually referred to as the graph spectrum. The following lemma describes the relationship between the graph spectrum and the connectivity of a graph [60].

Lemma 13.3.1 Denote by 0 = λ0(G) ≤ λ1(G) ≤ · · · ≤ λn−1(G) the eigenvalues of the Laplacian matrix L(G). If G is connected, then λ1(G) > 0. Moreover, if λi(G) = 0 and λi+1(G) ≠ 0, then G has exactly i + 1 disjoint connected components.

It is easy to observe that every row of the Laplacian matrix L(G) sums to zero; therefore, L(G) has an eigenvalue 0 with the corresponding eigenvector (1, . . ., 1)^T. Moreover, since L(G) is real, symmetric, and positive semi-definite, all the eigenvalues of L(G) are real and non-negative, and the smallest eigenvalue of L(G) is zero. The above lemma indicates that, if G is connected (there exists a path from any node vi to any other node vj, where i ≠ j), then the Laplacian matrix L(G) has 0 as a simple eigenvalue (the eigenvalue 0 has multiplicity 1). Conversely, if the eigenvalue 0 of L(G) has multiplicity n, then there are n disconnected components. Since we always work with a connected graph, we will focus on the second-smallest eigenvalue of L(G) for the rest of this chapter.
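As a quick numerical illustration of Lemma 13.3.1, the following sketch (using numpy; the 4-node graphs are illustrative) builds the Laplacian from an edge list and counts connected components as the multiplicity of the zero eigenvalue:

```python
import numpy as np

def laplacian(n, edges):
    """Build the Laplacian L = T - A of an undirected graph, as in (13.2)/(13.3)."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, j] -= 1.0          # adjacency part A(G)
        L[j, i] -= 1.0
        L[i, i] += 1.0          # degree part T(G)
        L[j, j] += 1.0
    return L

def num_components(L, tol=1e-9):
    """Lemma 13.3.1: the multiplicity of eigenvalue 0 equals the number of components."""
    eig = np.linalg.eigvalsh(L)  # real eigenvalues, sorted ascending; L is symmetric PSD
    return int(np.sum(eig < tol))

# A 4-node path is connected; dropping its middle edge leaves 2 components.
path = [(0, 1), (1, 2), (2, 3)]
print(num_components(laplacian(4, path)))              # 1
print(num_components(laplacian(4, [(0, 1), (2, 3)])))  # 2
```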
13.3.2 The Fiedler value and vector

Denote the eigenvalues of the Laplacian matrix L(G) associated with G(V, E) as λ0(G), . . ., λn−1(G) and the corresponding eigenvectors as ν0(G), . . ., νn−1(G). Obviously, ν0(G) = e = (1, . . ., 1)^T. Suppose that the graph G(V, E) is connected (the second-smallest eigenvalue is strictly larger than zero, λ1(G) > 0). This second-smallest eigenvalue can be represented (using the Courant–Fischer theorem [167]) as
\[
\lambda_1(G) = \min_{x^T x = 1,\; x^T \nu_0(G) = 0} x^T L(G) x. \tag{13.4}
\]
The second-smallest eigenvalue of the Laplacian matrix is often referred to as the algebraic connectivity of the graph G [122]. It is also called the Fiedler value of a graph. The reason for calling the second-smallest eigenvalue the algebraic connectivity of a graph G comes from the following lemmas [122].

Lemma 13.3.2 If G1 and G2 are edge-disjoint graphs with the same vertices, then λ1(G1) + λ1(G2) ≤ λ1(G1 ∪ G2).

Lemma 13.3.3 The Fiedler value λ1(G) is non-decreasing for graphs with the same set of vertices, i.e., λ1(G1) ≤ λ1(G) if G1(V, E1), G(V, E), and E1 ⊆ E.

We observe that G and G1 have the same number of vertices. Since G1 has fewer edges than G and E1 ⊆ E, G1 is less connected than G, and, from Lemma 13.3.3, the Fiedler value corresponding to G1 is smaller than that corresponding to G, λ1(G1) ≤ λ1(G). It is in this sense that the Fiedler value represents the degree of connectivity in a graph. Finally, the relation between the Fiedler value of a graph obtained by removing a vertex and all its adjacent edges and the Fiedler value of the original graph is given by the following lemma.

Lemma 13.3.4 Let G−vi be the graph obtained by removing vertex vi and all its adjacent edges from G. Then λ1(G−vi) ≥ λ1(G) − 1.

The following two lemmas give upper and lower bounds on the Fiedler value. These lemmas will be used to prove some properties of the algorithm in Section 13.5.

Lemma 13.3.5 Let G(V, E) be a graph, let dvi be the degree of node vi, and let |V| = n be the number of vertices. Then
\[
\lambda_1(G) \le \frac{n}{n-1} \min_{v_i} d_{v_i}. \tag{13.5}
\]

Lemma 13.3.6 Let ε(G) be the edge connectivity of the graph G (the minimum number of edges whose removal would disconnect G). Then
\[
\lambda_1(G) \ge 2\varepsilon(G)\left(1 - \cos\frac{\pi}{n}\right), \tag{13.6}
\]
where n is the number of vertices, |V| = n.
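Lemma 13.3.3 is easy to check numerically. The sketch below (numpy; the choice of K5 and a spanning cycle is illustrative) compares the Fiedler value of the complete graph with that of a subgraph on the same vertices:

```python
import numpy as np

def fiedler_value(n, edges):
    """Second-smallest Laplacian eigenvalue: the algebraic connectivity."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, j] -= 1; L[j, i] -= 1
        L[i, i] += 1; L[j, j] += 1
    return np.linalg.eigvalsh(L)[1]

# Lemma 13.3.3: removing edges cannot increase the Fiedler value.
n = 5
complete = [(i, j) for i in range(n) for j in range(i + 1, n)]
cycle = [(i, (i + 1) % n) for i in range(n)]   # subset of the complete graph's edges

print(round(fiedler_value(n, complete), 6))  # 5.0 (for K_n, lambda_1 = n)
print(round(fiedler_value(n, cycle), 3))     # 1.382, smaller, as the lemma predicts
```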
13.4 Keep-connect algorithms

In this section, we use the facts from spectral graph theory described in the previous section to develop online routing algorithms that maximize the network lifetime, defined as the time until the network becomes disconnected. Before describing the routing algorithms, let us first consider how to measure the degree of connectivity of the remaining graph in the routing metric. We quantify the connectivity of the remaining graph in terms of the Fiedler value.
Table 13.1. The keep-connect algorithm using the Fiedler value

Let G(V, E) be the original graph. Define the graph obtained after removing node vi and all its adjacent edges as G−vi({V − vi}, E−vi). Denote by ϵ a small threshold.
1. Initialization: set the nodes' weights to zero, W(vi) = 0, ∀vi ∈ V.
2. For each node vi:
   (a) form the Laplacian matrix L(G−vi) of the graph as in (13.2);
   (b) find the Fiedler value λ1(G−vi) as in (13.4);
   (c) set the weight of the node as W(vi) = 1/λ1(G−vi) if λ1(G−vi) > ϵ; else, set W(vi) = 1/ϵ.
Recall from Section 13.3.2 that the Fiedler value qualitatively represents the connectivity of a graph, in the sense that the larger the Fiedler value, the more connected the graph. The degree of connectivity of the remaining graph can thus be quantified by the Fiedler value of the graph that results from removing a particular node and all its adjacent edges from the original graph. We design the connectivity weight of each node as 1/λ1(G−vi), where G−vi denotes the graph resulting from removing node vi and all its adjacent edges from the original graph G. In this way, a node whose removal would cause a severe reduction in the remaining algebraic network connectivity will be avoided when making the routing decision, since λ1(G−vi) measures the degree of connectivity of the graph after node vi and all its adjacent edges have been removed. We note that, if vi is an articulation node (i.e., a node whose removal would result in a disconnected network), then λ1(G−vi) theoretically equals zero. This would cause a numerical problem when we embed the weight in the routing algorithm. To avoid it, we introduce a small threshold ϵ: if λ1(G−vi) ≤ ϵ, we set λ1(G−vi) = ϵ. The details of the algorithm are given in Table 13.1. In short, the keep-connect algorithm avoids using articulation nodes in order to keep the remaining network connected. We set the threshold to ϵ = 10^−5 throughout this chapter. We note that, in order to maintain the remaining algebraic connectivity of the network for as long as possible, the route metric should reflect the effect of selecting one particular route on the remaining (algebraic) connectivity of the network. In other words, the severity of the effect of selecting one particular route on the connectivity measure of the network should be quantified.
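The weight computation of Table 13.1 can be sketched as follows (numpy; the "bowtie" graph is an illustrative example in which node 2 is an articulation node and therefore receives the capped weight 1/ϵ):

```python
import numpy as np

EPS = 1e-5  # the threshold from the text

def keep_connect_weights(n, edges):
    """Table 13.1: W(v) = 1/lambda_1(G - v), capped at 1/EPS for articulation nodes."""
    W = np.zeros(n)
    for v in range(n):
        keep = [u for u in range(n) if u != v]
        idx = {u: k for k, u in enumerate(keep)}
        L = np.zeros((n - 1, n - 1))
        for i, j in edges:
            if v in (i, j):
                continue  # drop all edges adjacent to the removed node v
            a, b = idx[i], idx[j]
            L[a, b] -= 1; L[b, a] -= 1
            L[a, a] += 1; L[b, b] += 1
        lam1 = np.linalg.eigvalsh(L)[1]  # Fiedler value of G - v
        W[v] = 1.0 / lam1 if lam1 > EPS else 1.0 / EPS
    return W

# Two triangles sharing node 2: removing node 2 disconnects the graph.
bowtie = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4)]
W = keep_connect_weights(5, bowtie)
print(W[2] == 1.0 / EPS)                             # True: articulation node
print(all(W[v] < 1.0 / EPS for v in (0, 1, 3, 4)))   # True: the rest are safe
```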
Since the route metric is the sum of the costs of the links that constitute the route, the link cost should also reflect the reduction in the remaining connectivity of the network. In the following subsections, we present appropriate modifications of existing routing algorithms that incorporate the connectivity weight into the routing metrics.
13.4.1 Routing algorithms

In this subsection, we first review the ideas behind existing routing algorithms and then modify them to include the keep-connect algorithm. Minimum-hop routing is
usually used in ad hoc or wire-line networks to minimize the packet-delivery delay. The routing algorithm chooses the route with the minimum number of hops (MH) [18]; the cost of each link is 1. To make this algorithm reflect the remaining connectivity of the network, we embed the connectivity weight in the link cost between two nodes, so that it becomes c(u, w) = ½[W(u)^y + W(w)^y]. We call this algorithm minimum hop while keeping the connectivity (MHKC(y)). Both the MH and the MHKC routing algorithms can be computed using the standard Dijkstra algorithm. The max–min residual-energy (MMRE) algorithm [421] selects the route that maximizes the minimum residual energy of the nodes on the route. The link cost for MMRE is
\[
c(u, w) = \min\{E_u(t), E_w(t)\}, \tag{13.7}
\]
where E_u(t) and E_w(t) represent the residual energy at time t of the transmitting node u and the receiving node w, respectively. Another variant of MMRE maximizes the minimum link cost, represented as
\[
c(u, w) = E_u(t) + E_w(t). \tag{13.8}
\]
The max–min route can be implemented using a modified Dijkstra algorithm [421]. In the same way as before, we modify this variant of MMRE by embedding the connectivity into the link cost as
\[
c(u, w) = E_u(t) W(u)^y + E_w(t) W(w)^y, \tag{13.9}
\]
where W(u) and W(w) are the connectivity weights of node u and node w, respectively. We name this algorithm MMREKC(y). It makes the routing decision on the basis of both the remaining energy of the nodes and the connectivity criterion. We note that maximizing the minimum residual energy of the nodes is not the same as avoiding articulation nodes; using MMREKC(y), one hopes to balance avoiding nodes with small residual energy and avoiding articulation nodes. If we set the residual energy of the nodes in MMREKC(y) to unity, we obtain the max–min remaining-connectivity (MMKC(y)) algorithm. We note that MMKC is similar to MHKC except that MMKC uses the modified Dijkstra algorithm whereas MHKC uses the standard Dijkstra algorithm. The MMKC algorithm merely avoids overusing articulation nodes. Precisely, the routing algorithms that use a connectivity criterion avoid using nodes whose failure would result in a severe reduction in the remaining connectivity. In all the keep-connect algorithms, y determines how much the connectivity weight affects the routing cost. Since the routing metrics of MMREKC(y) and MHKC(y) try to achieve two different objectives simultaneously (e.g., both a minimum number of hops and connectivity in the MHKC algorithm), the variable y controls the importance of the connectivity criterion in the routing metric. The link costs of the existing algorithms and their modifications are summarized in Table 13.2. The connectivity weight W(·) in these algorithms is calculated using the keep-connect algorithm (Table 13.1). During the routing computation, if there is more than one node of weight 1/ϵ, the route metric is determined by the link costs of the remaining nodes on the route. If there is a tie in the route metric, any tie-breaker rule will suffice.
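To make the MHKC(y) metric concrete, here is a small sketch (pure Python; the 5-node topology and the weight values are illustrative, with node 2 given a large connectivity weight as if it were nearly an articulation node):

```python
import heapq

def dijkstra(n, adj, cost, src, dst):
    """Standard Dijkstra over link costs c(u, w); returns (route_cost, route)."""
    dist, prev, done = {src: 0.0}, {}, set()
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:
            continue
        done.add(u)
        if u == dst:
            break
        for w in adj[u]:
            nd = d + cost(u, w)
            if nd < dist.get(w, float("inf")):
                dist[w] = nd
                prev[w] = u
                heapq.heappush(pq, (nd, w))
    route, u = [dst], dst
    while u != src:
        u = prev[u]
        route.append(u)
    return dist[dst], route[::-1]

# MHKC(y): c(u, w) = [W(u)^y + W(w)^y] / 2, with illustrative weights W.
W = [1.0, 1.0, 100.0, 1.0, 1.0]
adj = {0: [1, 2], 1: [0, 3], 2: [0, 4], 3: [1, 4], 4: [2, 3]}
y = 1
mhkc = lambda u, w: 0.5 * (W[u] ** y + W[w] ** y)
_, route = dijkstra(5, adj, mhkc, 0, 4)
print(route)   # [0, 1, 3, 4] -- avoids the heavily weighted node 2
```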
Table 13.2. Link costs for several routing algorithms with/without a connectivity criterion

Algorithm / method                    Link cost
MH / standard Dijkstra                c(u, w) = 1
MHKC(y) / standard Dijkstra           c(u, w) = ½[W(u)^y + W(w)^y]
MMRE / modified Dijkstra              c(u, w) = min{E_u(t), E_w(t)}
Variant of MMRE / modified Dijkstra   c(u, w) = E_u(t) + E_w(t)
MMREKC(y) / modified Dijkstra         c(u, w) = E_u(t)W(u)^y + E_w(t)W(w)^y
MMKC(y) / modified Dijkstra           c(u, w) = W(u)^y + W(w)^y
Table 13.3. MTEKC(y)

1. For any source–destination pair, find the minimum-total-energy path with edge cost e_t(vi, vj)W(vi)^y + e_r(vi, vj)W(vj)^y for vi ∈ V, vj ∈ S_vi, where e_t(vi, vj) and e_r(vi, vj) are the transmit and receive energies for delivering a packet from node vi to node vj, S_vi denotes the set of neighbors of node vi, and W(vi) is the weight of node vi.
2. If a node dies, recompute the alive nodes' weights using the keep-connect algorithm and redo step 1.
13.4.2 Minimum total energy while keeping connectivity (MTEKC) routing

In the previous subsection, the routing metrics were determined either by the number of hops (minimum-hop routing, MH) or by the minimum residual energy of the nodes (MMRE), with or without the connectivity criterion. The minimum-total-energy (MTE) algorithm minimizes the total energy used for packet transmission and reception along the route [421]. By embedding the connectivity weights of the nodes into the MTE algorithm, we obtain MTEKC(y). In particular, the link cost of MTEKC(y) is
\[
c(u, w) = e_t(u, w) W(u)^y + e_r(u, w) W(w)^y, \tag{13.10}
\]
where e_t(u, w) and e_r(u, w) are the transmit and receive energies for delivering a packet from node u to node w. The MTEKC(y) algorithm minimizes the total transmit energy while trying to keep the remaining network as connected as possible. The complete algorithm is shown in Table 13.3. One purpose of introducing the variable y is to limit the energy consumption of MTEKC. In principle, the algorithm can never achieve both the most energy-efficient route and the most robust connectivity; introducing y provides a way to trade off the two objectives. This is particularly true when the network is large, as we will show when proving the bound on the energy consumption of the MTEKC algorithm in Section 13.5; this bound can easily be controlled by the variable y. As with the previous keep-connect algorithms, when there is more than one node of weight 1/ϵ, the rest
of the nodes on the candidate routes determine which overall routing metric is smaller. In a tie situation, any tie-breaker rule will suffice. It is important to emphasize that the MTE algorithm is less effective than MMRE-type algorithms only when one takes the death of the first node as the definition of the network lifetime. Since we define the network lifetime as the time until the network becomes disconnected, the above argument does not necessarily hold in general. In fact, this phenomenon can be observed in [421]: after more than one node has died, the expiration time (network lifetime) for MTE is higher than that for MMRE. We will discuss this phenomenon further in Section 13.7. In the simulations, we typically observe that more than one node fails before the network becomes disconnected (cf. Figure 13.4 later).
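As a toy illustration of this tradeoff (pure Python; the topology, energies, and weights are invented for the example, and the receive energy e_r is taken as zero for brevity), compare the plain MTE route with the MTEKC(y = 1) route:

```python
import heapq

def best_route(adj, cost, s, t):
    """Dijkstra over weighted link costs; returns the minimum-cost route s -> t."""
    pq, prev, seen = [(0.0, s, None)], {}, set()
    while pq:
        d, u, p = heapq.heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        prev[u] = p
        for w, e in adj[u].items():
            if w not in seen:
                heapq.heappush(pq, (d + cost(u, w, e), w, u))
    path, u = [], t
    while u is not None:
        path.append(u)
        u = prev[u]
    return path[::-1]

# Illustrative transmit energies e_t(u, w): the route through node 2 is cheap.
et = {0: {1: 2.0, 2: 1.0}, 1: {0: 2.0, 3: 2.0}, 2: {0: 1.0, 4: 1.0},
      3: {1: 2.0, 4: 2.0}, 4: {2: 1.0, 3: 2.0}}
W = [1.0, 1.0, 50.0, 1.0, 1.0]   # keep-connect weights; node 2 is critical

mte = lambda u, w, e: e                  # y = 0 reduces MTEKC to plain MTE
mtekc = lambda u, w, e: e * W[u] ** 1    # MTEKC(y = 1) with e_r = 0

print(best_route(et, mte, 0, 4))    # [0, 2, 4]: cheapest in pure energy
print(best_route(et, mtekc, 0, 4))  # [0, 1, 3, 4]: pays more energy, spares node 2
```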
13.5 The upper bound on the energy consumption

In this section, we derive an upper bound on the energy consumption of the MTEKC(y) algorithm. It is well known [246] that there is a tradeoff between the energy consumption along the route and avoiding overuse of popular nodes when maximizing the lifetime or the total number of packets delivered before the network becomes disconnected. Similarly, there is a tradeoff between energy efficiency and connectivity robustness; hence, it is important to bound the energy consumption. We first derive some properties and then employ them to prove the upper bound. For simplicity, we include only the transmit energy between two nodes in calculating the energy of a route. We denote by r* the minimum-total-energy route connecting a fixed source node v0 and destination node vd. Equivalently, the MTE route is represented as
\[
r^* = \arg\min_{r \in R(v_0, v_d)} \sum_{i=1}^{d(r)-1} e_t(v_i, v_{i+1}), \tag{13.11}
\]
where R(v0, vd) is the set of all routes connecting source node v0 and destination node vd, and d(r) is the number of hops on the route. Furthermore, we denote by r† the MTEKC(y) route obtained using the Fiedler value; this route satisfies
\[
r^\dagger = \arg\min_{r \in R(v_0, v_d)} \sum_{i=1}^{d(r)-1} \frac{e_t(v_i, v_{i+1})}{\lambda_1(G_{-v_i})^y}. \tag{13.12}
\]
In the following, we give some simple lemmas on the lower and upper bounds of the metric used in the keep-connect algorithm.

Lemma 13.5.1 [The lower bound of the MTEKC(y) metric] For each route, the MTEKC(y) employing the Fiedler-value metric has the following property:
\[
\sum_{i=0}^{d-1} e_t(v_i, v_{i+1}) W(v_i)^y \ge \left( \frac{n-2}{(n-1)\min_i d_{v_i}(G)} \right)^y \sum_{i=0}^{d-1} e_t(v_i, v_{i+1})
\ge \left( \frac{(n-2)n}{2(n-1)m} \right)^y \sum_{i=0}^{d-1} e_t(v_i, v_{i+1}), \tag{13.13}
\]
where d_{v_i} is the degree of node v_i in the graph, n is the number of vertices in the graph, and m is the number of edges in the graph.

PROOF. To prove these inequalities, we require the upper bound on the Fiedler value stated in Lemma 13.3.5. Consider G(V, E) and let d_{v_i}(G) be the degree of node v_i in graph G. Then,
\[
0 < \lambda_1(G) \le \frac{n}{n-1} \min_i d_{v_i}(G). \tag{13.14}
\]
Now, consider the graph G_{-v_i} obtained from graph G by removing node v_i and all edges connected to node v_i. Obviously,
\[
\lambda_1(G_{-v_i}) \le \frac{n-1}{n-2} \min_i d_{v_i}(G_{-v_i}) \le \frac{n-1}{n-2} \min_i d_{v_i}(G), \tag{13.15}
\]
since the minimum degree of graph G_{-v_i} is smaller than or equal to the minimum degree of graph G. Now, using the keep-connect algorithm with the Fiedler value, we have
\[
\sum_{i=0}^{d-1} e_t(v_i, v_{i+1}) W(v_i)^y = \sum_{i=0}^{d-1} \frac{e_t(v_i, v_{i+1})}{\lambda_1(G_{-v_i})^y} \ge \left( \frac{n-2}{(n-1)\min_i d_{v_i}(G)} \right)^y \sum_{i=0}^{d-1} e_t(v_i, v_{i+1}). \tag{13.16}
\]
Since \(n \cdot \min_i d_{v_i} \le \sum_i d_{v_i} = 2m\), we have
\[
\left( \frac{1}{\min_i d_{v_i}(G)} \right)^y \ge \left( \frac{n}{2m} \right)^y. \tag{13.17}
\]
Then, we obtain the second inequality
\[
\left( \frac{n-2}{(n-1)\min_i d_{v_i}(G)} \right)^y \sum_{i=0}^{d-1} e_t(v_i, v_{i+1}) \ge \left( \frac{(n-2)n}{2(n-1)m} \right)^y \sum_{i=0}^{d-1} e_t(v_i, v_{i+1}). \tag{13.18}
\]
Lemma 13.5.2 [The upper bound of the MTEKC(y) metric] For each route, the MTEKC(y) employing the Fiedler-value metric has the following property:
\[
\sum_{i=0}^{d-1} e_t(v_i, v_{i+1}) W(v_i)^y \le \left( \frac{1}{2(\varepsilon(G)-1)[1-\cos(\pi/(n-1))]} \right)^y \sum_{i=0}^{d-1} e_t(v_i, v_{i+1}), \tag{13.19}
\]
where ε(G) is the edge-cut or edge connectivity of the graph, defined as the minimal number of edges whose removal would result in a disconnected graph, and n is the number of vertices in the graph.

PROOF. As in Lemma 13.5.1, we use the lower bound on the Fiedler value from Lemma 13.3.6. Consider G(V, E) and let ε(G) be the edge-cut of the graph. Then, λ1(G) ≥ 2ε(G)[1 − cos(π/n)]. Now, consider the graph G_{-v_i} obtained from graph G by removing node v_i and all edges connected to node v_i. From the lower bound on the Fiedler value, we have
\[
\lambda_1(G_{-v_i}) \ge 2\varepsilon(G_{-v_i})[1 - \cos(\pi/(n-1))]. \tag{13.20}
\]
Since ε(G_{-v_i}) ≥ ε(G) − 1,
\[
\lambda_1(G_{-v_i}) \ge 2[\varepsilon(G)-1][1 - \cos(\pi/(n-1))]. \tag{13.21}
\]
By following a proof similar to that of Lemma 13.5.1, we can obtain
\[
\sum_{i=0}^{d-1} e_t(v_i, v_{i+1}) W(v_i)^y = \sum_{i=0}^{d-1} \frac{e_t(v_i, v_{i+1})}{\lambda_1(G_{-v_i})^y} \le \left( \frac{1}{2[\varepsilon(G)-1][1-\cos(\pi/(n-1))]} \right)^y \sum_{i=0}^{d-1} e_t(v_i, v_{i+1}).
\]
Lemma 13.5.3 [Complete graph] For a complete graph, the MTEKC(y) route employing the Fiedler-value metric is the same as the MTE route.

PROOF. From the definitions of the MTE route and the MTEKC(y) route with the Fiedler value, we have the following inequalities:
\[
\sum_{i=1}^{d(r^*)-1} e_t(v_i, v_{i+1}) \le \sum_{i=1}^{d(r^\dagger)-1} e_t(v_i, v_{i+1}), \tag{13.22}
\]
\[
\sum_{i=1}^{d(r^\dagger)-1} \frac{e_t(v_i, v_{i+1})}{\lambda_1(G_{-v_i})^y} \le \sum_{i=1}^{d(r^*)-1} \frac{e_t(v_i, v_{i+1})}{\lambda_1(G_{-v_i})^y}. \tag{13.23}
\]
We note that removing one node from a complete graph with n nodes results in another complete graph with n − 1 nodes; therefore, λ1(G_{-v_i}) = n − 1. On simplifying (13.23), we have
\[
\sum_{i=1}^{d(r^\dagger)-1} e_t(v_i, v_{i+1}) \le \sum_{i=1}^{d(r^*)-1} e_t(v_i, v_{i+1}). \tag{13.24}
\]
On combining this with (13.22), we have
\[
\sum_{i=1}^{d(r^\dagger)-1} e_t(v_i, v_{i+1}) = \sum_{i=1}^{d(r^*)-1} e_t(v_i, v_{i+1}). \tag{13.25}
\]
Since it is impossible for two different routes between the fixed source node v0 and destination node vd to have the same total transmit energy in a random network deployment, we conclude that r* = r†.

Now we are ready to develop the upper bound on the energy consumed by the MTEKC(y) algorithm with the Fiedler value. The following theorem gives the upper bound on the consumed energy.

Theorem 13.5.1 The energy consumed by MTEKC(y) using the Fiedler value satisfies the following upper bound:
\[
\sum_{i=1}^{d(r^\dagger)-1} e_t(v_i, v_{i+1}) \le \left( \frac{(n-1)m}{n(n-2)(\varepsilon(G)-1)[1-\cos(\pi/(n-1))]} \right)^y \sum_{i=1}^{d(r^*)-1} e_t(v_i, v_{i+1}). \tag{13.26}
\]

PROOF. From inequality (13.23), Lemma 13.5.1, and Lemma 13.5.2, we have
\[
\left( \frac{(n-2)n}{2(n-1)m} \right)^y \sum_{i=1}^{d(r^\dagger)-1} e_t(v_i, v_{i+1}) \le \sum_{i=1}^{d(r^\dagger)-1} \frac{e_t(v_i, v_{i+1})}{\lambda_1(G_{-v_i})^y}, \tag{13.27}
\]
\[
\sum_{i=1}^{d(r^*)-1} \frac{e_t(v_i, v_{i+1})}{\lambda_1(G_{-v_i})^y} \le \left( \frac{1}{2(\varepsilon(G)-1)[1-\cos(\pi/(n-1))]} \right)^y \sum_{i=1}^{d(r^*)-1} e_t(v_i, v_{i+1}). \tag{13.28}
\]
On combining the above two inequalities and (13.23), we have
\[
\left( \frac{(n-2)n}{2(n-1)m} \right)^y \sum_{i=1}^{d(r^\dagger)-1} e_t(v_i, v_{i+1}) \le \left( \frac{1}{2(\varepsilon(G)-1)[1-\cos(\pi/(n-1))]} \right)^y \sum_{i=1}^{d(r^*)-1} e_t(v_i, v_{i+1}), \tag{13.29}
\]
and hence
\[
\sum_{i=1}^{d(r^\dagger)-1} e_t(v_i, v_{i+1}) \le \left( \frac{m(n-1)}{n(n-2)(\varepsilon(G)-1)[1-\cos(\pi/(n-1))]} \right)^y \sum_{i=1}^{d(r^*)-1} e_t(v_i, v_{i+1}). \tag{13.30}
\]

The above theorem gives the upper bound on the energy consumed along the MTEKC(y) route compared with the minimum total energy for routing the packet. The following theorem bounds the ratio of the energy consumed by MTEKC(y) using the Fiedler value to the minimum total energy when the number of nodes becomes large.

Theorem 13.5.2 Suppose that the generated network satisfies m = a1 · n and ε(G) − 1 = a2, where a1 and a2 are constants. Then the upper bound on the ratio of the energy consumed can be presented as follows:
\[
\frac{\sum_{i=1}^{d(r^\dagger)-1} e_t(v_i, v_{i+1})}{\sum_{i=1}^{d(r^*)-1} e_t(v_i, v_{i+1})} = O\big((n^2)^y\big). \tag{13.31}
\]

PROOF. Using the assumptions of the theorem, we have
\[
\frac{\sum_{i=1}^{d(r^\dagger)-1} e_t(v_i, v_{i+1})}{\sum_{i=1}^{d(r^*)-1} e_t(v_i, v_{i+1})} \le \left( \frac{a_1 n(n-1)}{a_2 n(n-2)[1-\cos(\pi/(n-1))]} \right)^y \le C \left( \frac{n(n-1)[1+\cos(\pi/(n-1))]}{n(n-2)\sin^2(\pi/(n-1))} \right)^y, \tag{13.32}
\]
where C = (a1/a2)^y. As n → ∞, we have
\[
\frac{\sum_{i=1}^{d(r^\dagger)-1} e_t(v_i, v_{i+1})}{\sum_{i=1}^{d(r^*)-1} e_t(v_i, v_{i+1})} \le C \left( \frac{(n-1)^2}{\pi^2} \right)^y, \tag{13.33}
\]
where we have used the small-angle approximation sin(θ) ≈ θ for θ ≪ 1. Hence, we have (13.31).

From this theorem, we see that, when the network is very large, the energy ratio grows at most as a quadratic function of the number of nodes, compared with the minimum energy used to route a packet. This energy ratio can easily be controlled by the parameter y: for instance, if y = 1/2, then the ratio of the energy consumed grows as a linear function of the number of nodes in the network. In the extreme case, setting y = O(1/n) makes the algorithm approach MTE as n → ∞.
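The bounds of Lemmas 13.5.1 and 13.5.2 can be checked numerically. The sketch below (numpy) uses a 6-node cycle, for which every vertex removal leaves a path and the edge connectivity is ε(G) = 2; the 3-hop route and the unit transmit energies are illustrative:

```python
import numpy as np

def fiedler_after_removal(n, edges, v):
    """lambda_1(G - v): Fiedler value after removing node v and its edges."""
    keep = [u for u in range(n) if u != v]
    idx = {u: k for k, u in enumerate(keep)}
    L = np.zeros((n - 1, n - 1))
    for i, j in edges:
        if v in (i, j):
            continue
        a, b = idx[i], idx[j]
        L[a, b] -= 1; L[b, a] -= 1
        L[a, a] += 1; L[b, b] += 1
    return np.linalg.eigvalsh(L)[1]

n, y = 6, 1
cycle = [(i, (i + 1) % n) for i in range(n)]
m, eps_G = len(cycle), 2          # a cycle has edge connectivity 2
et = 1.0                          # unit transmit energy per hop

# MTEKC(y) metric of the 3-hop route 0-1-2-3, with W(v) = 1/lambda_1(G - v):
metric = sum(et / fiedler_after_removal(n, cycle, v) ** y for v in (0, 1, 2))
total_et = 3 * et

lower = ((n - 2) * n / (2 * (n - 1) * m)) ** y * total_et                         # (13.13)
upper = (1 / (2 * (eps_G - 1) * (1 - np.cos(np.pi / (n - 1))))) ** y * total_et   # (13.19)
print(lower <= metric <= upper + 1e-9)   # True; for a cycle, (13.19) is tight
```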
13.6 The distributed implementation and learning algorithm

In this section, we present a distributed implementation of the robust connectivity routing algorithm. The method is based on distributed reinforcement-learning routing; we note that reinforcement learning has been shown to be an effective online decision-making procedure in sensor-network applications. The resulting algorithm can be characterized as a version of the distributed Bellman–Ford algorithm that performs its path-relaxation step asynchronously and online, with the edge cost defined as the weighted energy required to transmit a packet over that hop. Each node learns the routing decision by itself and maintains the best packet-delivery cost to its destinations. In particular, each node vi maintains a table of Q-values Q_{vi}(vj, vd) for vj ∈ S_{vi}, where S_{vi} is the set of node vi's neighbors and vd is the destination. Q_{vi}(vj, vd) represents node vi's best estimate of the cost that a packet would incur to reach its destination node vd from node vi when the packet is sent via node vi's neighbor vj. Whenever node vi wants to send packets to node vd, it simply sends them to the neighbor that satisfies
\[
v_j^* = \arg\min_{v_j \in S_{v_i}} Q_{v_i}(v_j, v_d). \tag{13.34}
\]
The values in the Q-tables are exchanged between node vi and node vj whenever a packet is sent from node vi to node vj, and vice versa. The exchange mechanism is illustrated in Figure 13.1. Whenever node vi transmits a packet P to node vj, node vj feeds back
\[
Q_{v_j}(v_k^*, v_d) = \min_{v_k \in S_{v_j}} Q_{v_j}(v_k, v_d) \tag{13.35}
\]
to node vi, as shown in Figure 13.1. We note that Q_{vj}(vk*, vd) represents the best estimated cost incurred when a packet is sent from node vj to the destination node vd. Node vi uses this value to update its own Q-value as follows:
\[
Q_{v_i}(v_j, v_d) = (1-\delta) Q_{v_i}(v_j, v_d) + \delta \left[ Q_{v_j}(v_k^*, v_d) + c(v_i, v_j) \right], \tag{13.36}
\]
where c(vi, vj) is the cost of sending a packet from node vi to node vj, and δ ∈ [0, 1] is the learning rate of the algorithm. It is important to point out that the storage requirement of the distributed algorithm is O(d_{vi}(n − 1)), since each node is required to store at most d_{vi}(n − 1) Q-values, where d_{vi} is the number of neighbors of node vi. Moreover, the distributed algorithm has O(1) computational complexity per packet transmission, since, for every packet transmission, the sender and the receiver update their corresponding Q-values according to (13.36).

Figure 13.1 Exchange and update of the Q-value.

The cost of sending a packet between node vi and node vj is related to the energy consumed in sending the packet. In particular, the costs of sending a packet for the MTE and MTEKC routing algorithms are
[MTE]: c(vi, vj) = e_t(vi, vj) + e_r(vi, vj),
[MTEKC(y)]: c(vi, vj) = e_t(vi, vj)W(vi)^y + e_r(vi, vj)W(vj)^y.
For MTE, Q_{vi}(vj, vd) represents the total energy consumption needed to guarantee delivery of a packet from node vi to node vd via node vi's neighbor vj. In contrast, Q_{vi}(vj, vd) in MTEKC represents the total energy consumption in delivering a packet from node vi to vd via vj while considering the connectivity of the remaining network. The procedure for implementing the MTEKC is summarized in Table 13.4. We note that, when δ = 1.0, the algorithm reduces to the distributed Bellman–Ford iterations.

Table 13.4. Distributed asynchronous MTEKC(y)

1. Network initialization: forward the adjacent neighbors' information to all nodes in the network.
2. Each node computes its own connectivity weight.
3. Whenever node vi sends a packet to node vj:
   (a) node vj informs vi of its minimum cost for transmitting a packet to the destination vd, Q_{vj}(vk*, vd);
   (b) node vi updates its metric as Q_{vi}(vj, vd) = (1 − δ)Q_{vi}(vj, vd) + δ[Q_{vj}(vk*, vd) + c(vi, vj)];
   (c) node vi leaves its other estimates unchanged.
4. When a node dies, the neighboring nodes flood this information to the rest of the network and repeat step 2.
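A single update step of (13.35)–(13.36) can be sketched as follows (plain Python; the node labels, costs, and δ value are illustrative):

```python
def q_update(Q_i, j, d, Q_j_best, cost_ij, delta=0.5):
    """One step of (13.36): Q_i[(j, d)] <- (1 - delta)*Q_i[(j, d)] + delta*(Q_j_best + c(i, j))."""
    Q_i[(j, d)] = (1 - delta) * Q_i[(j, d)] + delta * (Q_j_best + cost_ij)
    return Q_i[(j, d)]

# Node i's current estimate of reaching destination 9 via neighbor 1:
Q_i = {(1, 9): 10.0}
# Neighbor j reports its best estimated cost to the destination, as in (13.35):
Q_j = {(2, 9): 4.0, (3, 9): 6.0}
Q_j_best = min(Q_j.values())

print(q_update(Q_i, 1, 9, Q_j_best, cost_ij=2.0))   # 8.0
```

With delta = 1.0, the update becomes a plain distributed Bellman–Ford relaxation step, consistent with the remark in the text.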
13.6.1 Improvement on the distributed algorithm

From the previous description, it is obvious that the quality of the routing decision depends on the accuracy of the Q-table: the more accurately the Q-table represents the network state, the better the routing decision will be. Moreover, the quality of the Q-table depends on how frequently it is updated. In the distributed scheme described in the previous subsection, a Q-value is updated whenever a packet is transmitted from the transmitter to the receiver. This distributed algorithm can be enhanced by improving the accuracy of the Q-table: Q-values are updated not only when there is a packet transmission between two nodes, but also by having nodes periodically send their neighboring nodes small packets containing a request to update Q_{vi}(vj, vd). This can be done straightforwardly by updating Q_{vi}(N(vi), vd) as if there were packets to be sent from node vi to its neighbors N(vi). This approach is similar to the use of a packet radio organization packet (PROP) [212].
We note that the overhead incurred by each request to update Q_{vi}(vj, vd) is only Q_{vj}(vk*, vd) as in (13.35), which is fed back by node vj. This overhead depends on how many bits are used to represent the Q-value; typically, 8 bits will be sufficient. It is also important to note that such a small packet consumes negligible energy compared with the energy for packet transmission; therefore, these small packets can be sent more frequently to improve the learning speed of the Q-table. In a real application, there is a tradeoff between how fast the link costs change and how frequently the small packets should be exchanged. We will show in the simulations that this improvement enables the distributed implementation to achieve a near-centralized solution.
13.6.2 Distributed weight computation and scalability

During network initialization, each node powers up and starts to broadcast a control/organization packet, similarly to the DARPA packet radio network protocol [212]. This packet contains information about the neighboring nodes (nodes that are connected to or reachable from this node) and is broadcast to all of the node's neighbors. The nodes that are one hop away from the origin node also ripple this information outward in a tiered fashion by appending their own neighbor information, similarly to the method of building a tier table in [212]. Therefore, all the nodes obtain a correct view of the network topology; precisely, the final topology view consists of the adjacency matrix of the graph. Before performing the routing task, each node calculates its connectivity weight on the basis of this topology view, as in Table 13.1. Since nodes stay in their original places throughout their entire lifetime in the sensor network, flooding of information occurs rarely: it happens only when nodes join the network or when nodes fail, and we assume that such events rarely happen after network initialization. In practical networks, maintaining the algebraic connectivity of the whole of a large network might not be necessary. Depending on the particular application, a sensor network can typically be decomposed into several smaller subnetworks interconnected by access points (APs). These APs need not have power limitations, and each AP is responsible for its own subnetwork. In order to exchange information between nodes within each subnetwork, it is very important to maintain the connectivity within each subnetwork; it is within these smaller subnetworks that the keep-connect algorithm is applied. In this way, the neighboring nodes' information is disseminated only over the smaller subnetworks, and the improved distributed algorithm can be implemented on the fly.
Using this hierarchical structure, the routing across subnetworks can be handled via the APs, and the method can be used in this way in a very large network where the connectivity of the whole network is maintained via a combination of local connectivity and the APs.
13.7 Simulation results

We simulate the routing algorithms in a packet-level discrete-event simulator. The simulator initially deploys the nodes in the network. All events are time-stamped and queued.
The most current event is dequeued and a task is performed according to the type of the event. There are three types of events: packet-arrival, reporting, and sending events. In a packet-arrival event, packets are injected from the source node toward the destination node; we assume that packet arrivals follow a Poisson process with mean rate μ. The reporting event occurs periodically to retrieve simulation statistics such as the average delivery delay per packet, the average number of hops per packet transmission, the energy consumed per packet delivered, and the number of packets delivered during the reporting interval. All events that are neither packet-arrival events nor reporting events are sending events; in a sending event, a packet is sent to the next hop, which is determined by the routing algorithm used. Whenever a packet arrives at a node, it is queued in the node's buffer and will be sent at the next transmission time. Whenever a packet reaches its destination, the number of packets delivered is incremented and the event associated with that packet is freed. The channel between two adjacent nodes in the network is modeled as a fading process that attenuates the transmit signal in proportion to d^{−α}, where d is the distance. We note that the main focus of this chapter is on static sensor networks, in which the effect of fast fading is negligible, since the Doppler shift experienced by the receivers is practically zero. Therefore, this model is general enough to describe both free-space propagation and the two-ray ground propagation model, which has typically been used in many ad hoc simulators [470]. In this section, we first generate 10 uniform random networks in an area of 100 m by 100 m with 36 nodes; the networks have an average number of edges per node from 4.4 to 7.5. All nodes in the network initially have the same amount of energy, and the source and destination are selected uniformly from the alive nodes.
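The event loop described above can be sketched with a timestamp-keyed priority queue (plain Python; the rate, horizon, and event types are illustrative placeholders for the simulator's actual handlers):

```python
import heapq
import random

random.seed(1)
mu = 2.0                     # mean packet-arrival rate
horizon = 10.0               # simulated time span
events = []                  # priority queue keyed by timestamp

# Poisson arrivals: exponential inter-arrival gaps with mean 1/mu.
t = 0.0
while True:
    t += random.expovariate(mu)
    if t >= horizon:
        break
    heapq.heappush(events, (t, "arrival"))
heapq.heappush(events, (5.0, "report"))   # a periodic reporting event

# Dequeue the most current event each iteration; real handlers would route
# packets, record statistics, or schedule the next-hop sending event here.
processed = []
while events:
    stamp, kind = heapq.heappop(events)
    processed.append(stamp)
print(processed == sorted(processed))   # True: events are handled in time order
```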
We use the network lifetime (the time before the remaining network becomes disconnected), the packet-delivery time, the average transmit energy per packet, and the total number of packets delivered before the remaining network becomes disconnected as our performance metrics. In Figure 13.2, we compare the performance of the normalized max–min residual connectivity (MMRC) routing, normalized minimum-hop (MH) routing, and normalized minimum-hop while keeping connectivity (MHKC(y = 1)) routing. All metrics are normalized with respect to the performance of the MTE algorithm. From Figure 13.2, we observe that all the algorithms have lower network lifetimes than the MTE algorithm. This also verifies that MTE is not less effective than an MMRE-type algorithm when one defines the network lifetime as the time before the remaining network becomes disconnected, as discussed in Section 13.4. This is partly because MH, MHKC, and MMRE do not use the transmission energy to guide the routing decision, so the resulting routes are not energy efficient. Indeed, it can be observed from Figure 13.2 that all these algorithms consume 20% to 90% more energy per packet. On the other hand, the MH and MHKC algorithms take about 50% less time to deliver one packet; this is because MH-based algorithms select the route with the minimum number of hops from source to destination, so the resulting packet-delivery delay is the smallest. Since choosing the most energy-efficient route is very important, we focus on MTE-type algorithms for the rest of the simulation section.

Next, we simulate both a two-dimensional (2D) Poisson network and a uniform random network. The nodes in the 2D Poisson network are deployed/generated on the basis of a 2D Poisson process,
Connectivity-aware network lifetime optimization
Figure 13.2 A comparison of normalized metrics for MMRC, MH, and MHKC(y = 1) w.r.t. the MTE algorithm, when packet arrival takes place as a Poisson process with mean μ = 1.0. [Four sub-plots show the normalized network lifetime, packet-delivery time, transmit energy per packet, and total delivered packets over network realizations 1–10.]
whereas the nodes in a uniform random network are generated on the basis of a uniform distribution. We also consider two types of source–destination pairs, namely converge-cast traffic and random traffic. Converge-cast traffic refers to the pattern in which sources are generated from alive nodes toward a fixed destination (sink); random traffic refers to the pattern in which the source and destination are selected randomly from among the nodes that have not failed. We generate five networks whose nodes are drawn from a 2D Poisson point process with on average 100 nodes in an area of 100 m by 100 m. The transmit energy per packet between two nodes equals 3 × 10^{−3} d^2, where d is the distance between the nodes. The maximum transmit power equals 1.4 W per packet; the receive power equals 0.7 W per packet. This implies that the farthest node that can be reached by a node is about 21.602 m away. The initial energy of every node is 10 000.0 energy units. We also show the algebraic network connectivity of the original graphs in Table 13.5.

Figure 13.3 shows the simulation results for normalized MTEKC(y = 1) w.r.t. the MTE algorithm for the networks generated. As before, Figure 13.3 shows the performance metrics normalized with respect to the MTE algorithm. These metrics are collected from the initialization of the network until the network becomes disconnected. The x-axes of the sub-figures denote the network number shown in Table 13.5. From Figure 13.3, we observe that the improvement associated with the keep-connect algorithm w.r.t. MTE is about 20% for the load μ = 1.0 in network 2. In summary, one can observe that the algorithm achieves an increase of 10% to 53% in the network lifetime over a broad range of network loads. Similarly, the algorithm achieves an improvement of around 10% to 36% in the total number of packets delivered.
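As a quick sanity check on the numbers quoted above, the maximum reach follows from setting the d²-law transmit energy equal to the 1.4 W power cap:

```python
import math

ENERGY_COEFF = 3e-3   # transmit energy per packet = 3e-3 * d^2
P_MAX = 1.4           # maximum transmit power per packet (W)

# 3e-3 * d^2 = 1.4  =>  d = sqrt(1.4 / 3e-3) ~= 21.602 m,
# matching the farthest-reachable-node distance quoted in the text.
max_range = math.sqrt(P_MAX / ENERGY_COEFF)
```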
We note that the keep-connect algorithm is about 20% less energy efficient than
13.7 Simulation results
Table 13.5. Simulation parameters set 2

Network                          1        2        3        4        5
Nodes                            102      101      115      124      86
Algebraic network connectivity   0.3994   0.5679   0.7282   0.5543   0.1141

Figure 13.3 A comparison of normalized metrics for MTEKC(y = 1) w.r.t. MTE for various packet-arrival rates (loads μ = 1.0, 2.0, 3.0). [Four sub-plots show the normalized network lifetime, routing time, TxRx energy, and successfully delivered packets over network realizations 1–5.]
MTE. Besides achieving a longer lifetime and a larger total number of delivered packets, the algorithm is also more robust in terms of algebraic network connectivity. This can be seen from Figure 13.4, which shows the decrease in algebraic network connectivity whenever a node dies. The x-axis is the number of failed nodes before the network becomes disconnected and the y-axis is the algebraic network connectivity. We note that zero algebraic network connectivity implies a disconnected network. As shown in Figure 13.4, typically more than one node dies before the network becomes disconnected. We also simulated converge-cast traffic in network 1 of Table 13.5 and ran the simulation for 100 realizations. We randomly selected the destination to be node 94 and assumed that it has infinite energy. Figure 13.5 shows that the algorithm presented here consistently outperforms the MTE algorithm.
Figure 13.4 The robustness of the algorithm in terms of network connectivity. [Algebraic network connectivity vs. number of dead nodes for MTE and MTEKCFiedler in networks 1 and 2.]
Figure 13.5 Comparison of normalized metrics for MTEKC(y = 1) (MTEKCFiedler(1)) w.r.t. MTE for converge-cast traffic, μ = 1. [Two sub-plots show the network lifetime and the number of delivered packets over 100 network realizations for MTE and MTEKCFiedler(1).]
Figure 13.6 Performance in terms of the transmitted and received energy, routing time, and the number of packets delivered for the improved distributed solution when μ = 1.0. [Three sub-plots compare the centralized and improved distributed solutions over simulation time.]
Next, we show the performance of the distributed implementation presented in Section 13.6 in Figure 13.6. The distributed solution sends around 10 small packets in between sending the actual packets to facilitate better learning. If a node dies, each node increases the sending of small packets so as to quickly learn the correct route. The implementation is straightforward. The learning process for the distributed solution for the converge-cast traffic in network 1 is shown in Figure 13.6, which shows that the improved distributed solution nearly achieves the performance of the centralized solution.

Finally, in order to acquire statistics concerning the improvement associated with the keep-connect algorithm, we study the effect of the routing algorithm on a random network in which nodes are placed uniformly at random in an area of 100 m by 100 m with random traffic. For each number of nodes (from 100 to 200 nodes), we generated 20 networks, and averaged the normalized network lifetime for MTEKC(y = 1, 2, 3) with respect to the MTE algorithm. The results are plotted in Figures 13.7 to 13.9. In these figures we show both the mean, m, and the unbiased standard deviation, s, of the performance metrics by means of error bars:

m = (1/N) Σ_{n=1}^{N} x_n,    s = √[ (1/(N − 1)) Σ_{n=1}^{N} (x_n − m)² ],
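The mean and unbiased standard deviation used for the error bars can be computed directly; the sample values below are illustrative only.

```python
import math

def error_bar_stats(samples):
    """Mean and unbiased (N - 1 denominator) standard deviation for error bars."""
    n = len(samples)
    m = sum(samples) / n
    s = math.sqrt(sum((x - m) ** 2 for x in samples) / (n - 1))
    return m, s

# Example with N = 20 normalized-lifetime values (made-up numbers).
values = [1.0 + 0.01 * k for k in range(20)]
m, s = error_bar_stats(values)
```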
Figure 13.7 Normalized performance in a random network with random traffic when the packet-arrival load is μ = 1.0 and α = 2.0. [Normalized lifetime and normalized number of nodes failed before disconnection vs. number of nodes, for y = 1, 2, 3.]

Figure 13.8 Normalized performance in a random network with random traffic when the packet-arrival load is μ = 3.0 and α = 2.0.
where N = 20 is the number of network realizations and x_n denotes the normalized network lifetime and the normalized number of nodes that can fail before the network becomes disconnected in the left and right sub-figures of Figures 13.7 to 13.9, respectively. Figure 13.7 shows the case in which the attenuation is α = 2.0 and the load is μ = 1.0; for contrast, Figure 13.9 shows the case with α = 4.0 and μ = 1.0. Comparing these figures, we conclude that the algorithm is not sensitive to the choice of the attenuation coefficient α. All these figures show that the algorithm achieves an improvement in performance of about 10% to 20% compared with the MTE algorithm. Moreover, the algorithm is more robust in terms of algebraic network connectivity, which may be a much more important criterion in network protocol design: as these figures show, a greater number of nodes can fail before the network becomes disconnected under the algorithm considered in this chapter.
Figure 13.9 Normalized performance in a random network with random traffic when the packet-arrival load is μ = 1.0 and α = 4.0. [Normalized lifetime and normalized number of nodes failed before the network is disconnected vs. number of nodes, for y = 1, 2, 3.]
13.8 Summary

In this chapter, we employ the time before the network becomes disconnected as our definition of the network lifetime. Using this definition, we present a class of routing algorithms, called keep-connect algorithms, that use computable measures of network connectivity in determining how to route packets. The algorithms embed the importance of a node into the routing decision, where the importance of a node is quantified by the Fiedler value of the network that remains if that particular node dies. The keep-connect algorithm achieves a network lifetime (time before the network becomes disconnected) and total number of packets delivered that are 10%–20% better than those of the MTE algorithm, for low to high arrival rates. More importantly, such an algorithm is more robust in terms of algebraic network connectivity. We also present a distributed implementation that nearly achieves the performance of the centralized solution. Interested readers can refer to [342].
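As an illustration of ranking nodes by the Fiedler value of the remaining network, the sketch below builds a weighted Laplacian, extracts its second-smallest eigenvalue with a small Jacobi eigen-solver, and scores each node by the connectivity left behind if it dies. The four-node graph, unit weights, and the solver itself are illustrative choices, not the chapter's actual implementation.

```python
import math

def laplacian(n, edges):
    """Laplacian of an undirected graph given as (i, j, weight) edges."""
    L = [[0.0] * n for _ in range(n)]
    for i, j, w in edges:
        L[i][i] += w
        L[j][j] += w
        L[i][j] -= w
        L[j][i] -= w
    return L

def eigenvalues_sym(A, max_rot=200):
    """Eigenvalues of a small symmetric matrix via classical Jacobi rotations."""
    n = len(A)
    A = [row[:] for row in A]
    for _ in range(max_rot):
        # pick the largest off-diagonal entry as the pivot
        p, q, big = 0, 1, 0.0
        for i in range(n):
            for j in range(i + 1, n):
                if abs(A[i][j]) > big:
                    p, q, big = i, j, abs(A[i][j])
        if big < 1e-12:
            break
        theta = (A[q][q] - A[p][p]) / (2.0 * A[p][q])
        t = math.copysign(1.0, theta) / (abs(theta) + math.sqrt(theta * theta + 1.0))
        c = 1.0 / math.sqrt(t * t + 1.0)
        s = t * c
        G = [[float(i == j) for j in range(n)] for i in range(n)]
        G[p][p] = G[q][q] = c
        G[p][q] = s
        G[q][p] = -s
        GA = [[sum(G[k][i] * A[k][j] for k in range(n)) for j in range(n)]
              for i in range(n)]                       # G^T A
        A = [[sum(GA[i][k] * G[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]                        # (G^T A) G
    return sorted(A[i][i] for i in range(n))

def fiedler(n, edges):
    return eigenvalues_sym(laplacian(n, edges))[1]

def node_importance(n, edges, v):
    """Fiedler value of the network that remains after node v dies."""
    keep = [u for u in range(n) if u != v]
    relabel = {u: k for k, u in enumerate(keep)}
    rest = [(relabel[i], relabel[j], w) for i, j, w in edges if v not in (i, j)]
    return fiedler(n - 1, rest)

# 4-node example: a triangle 0-1-2 with a pendant node 3 hanging off node 2.
edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0), (2, 3, 1.0)]
# Losing cut vertex 2 disconnects the network (remaining Fiedler value 0),
# so node 2 is the most important node under this measure.
scores = {v: node_importance(4, edges, v) for v in range(4)}
```

A lower score means the node is more critical: a keep-connect router would avoid draining the energy of the lowest-scoring nodes.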
14 Connectivity-aware network maintenance and repair
In this chapter we address the problem of network maintenance, in which we aim to maximize the lifetime of a sensor network by adding a set of relays to it. The network lifetime is defined as the time until the network becomes disconnected. The Fiedler value, which is the algebraic connectivity of a graph, is used as an indicator of the network’s health. The network-maintenance problem is formulated as a semi-definite programming (SDP) optimization problem that can be solved efficiently in polynomial time. First, we present a network maintenance algorithm that obtains the SDP-based locations for a given set of relays. Second, we study a routing algorithm, namely the weighted minimum-power routing (WMPR) algorithm, that significantly increases the network lifetime due to the efficient utilization of the deployed relays. Third, we consider an adaptive network maintenance algorithm that relocates the deployed relays on the basis of the network health indicator. Further, we study the effect of two different transmission scenarios, with and without interference, on the network maintenance algorithm. Finally, we consider the network repair problem, in which we find the minimum number of relays together with their SDP-based locations needed in order to reconnect a disconnected network. We then present an iterative network repair algorithm that utilizes the network maintenance algorithm.
14.1 Introduction

There has been much interest in wireless sensor networks due to their various areas of application, such as battlefield surveillance systems, target tracking, and industrial monitoring systems. A sensor network consists of a large number of sensor nodes, which are deployed in a particular area to measure certain phenomena such as temperature and pressure. These sensors send their measured data to a central processing unit (information sink), which collects the data and makes decisions accordingly. Sensors often have a limited energy supply. Hence efficient utilization of the sensors' limited energy, and consequently extending the network lifetime, is one of the design challenges in wireless sensor networks. The network lifetime, as in the previous chapter, is defined as the time until the network becomes disconnected. The network is considered connected if there is a path, possibly a multi-hop one, from each sensor to the central processing unit. In various applications, sensors are deployed randomly in the field and there is not much control
over the specific location of each sensor. When relays are available, it could be possible to deploy relays in some particular locations to enhance the network’s performance and extend its lifetime. An example is that a low-altitude unmanned air vehicle (UAV) can perform as a relay that can be deployed in particular locations. Throughout this work, we assume that the deployed relays have the same capability as that of the sensors. In particular, the relays forward the received data without any processing operations. Deploying a set of relays in a wireless sensor network is one of the main approaches employed to extend the network lifetime. More precisely, relays can forward the sensors’ data and hence they contribute to reducing the transmission power required by many sensors per transmission, which can extend the lifetime of these sensors. However, the problem of finding the optimum locations of these relays has been shown to be NP-hard. Therefore, there is a need to find a heuristic algorithm that can find good locations for the available set of relays in polynomial time. This problem is referred to in the literature as the network maintenance problem. In wireless sensor networks, after having deployed the sensors for a while, some sensors may lose their available energy, which affects each sensor’s ability to send its own data as well as forward the other sensors’ data. This affects the network connectivity and may result in the network becoming disconnected. In this case, there is a need to determine the minimum number of relays together with their optimum locations that are needed in order to reconnect this network. Similarly to the network maintenance problem, this problem is NP-complete and there is a need for a heuristic algorithm to solve this problem in polynomial time. This problem is referred to as the network repair problem. In this chapter, we address the network maintenance and network repair problems in wireless sensor networks. 
We consider various cross-layer algorithms for relay deployment and data routing, which are jointly designed across the physical and network layers. First, we present an efficient network maintenance algorithm that finds heuristic locations for an available set of relays to extend the network lifetime. The network connectivity and consequently the network lifetime are quantified via the Fiedler value, which is the algebraic connectivity of the network graph. The Fiedler value is equal to the second-smallest eigenvalue of the Laplacian matrix representing the network graph. The network maintenance algorithm aims at formulating the network lifetime problem as an SDP optimization problem that can be solved in polynomial time. Building upon the network maintenance algorithm, we consider a routing algorithm, namely the WMPR algorithm, that can extend the network lifetime whenever the deployed relays have higher initial energy than that of the existing sensors. The WMPR algorithm assigns to the sensors weights that are different from those of the relays. It tends to use the relays more often and hence balance the network load among the existing sensors and relays, which results in a longer network lifetime. Furthermore, we present an adaptive network maintenance algorithm that increases the network lifetime by relocating the relays depending on the network status. We consider the Fiedler value of the remaining network as a good network-health indicator. Finally, we consider an iterative network repair algorithm that finds the minimum number of relays together with their locations needed in order to reconnect a disconnected network.
The network maintenance algorithms are applied in two different transmission scenarios depending on the MAC protocol employed. First, we consider a zero-interference scenario in which each node is assigned an orthogonal channel and hence there is no interference among the nodes. Second, we consider an interference-based scenario in which a set of nodes is allowed to send simultaneously and hence nodes cause interference with each other. We show that the transmission power required by each sensor per packet transmitted is higher in the interference-based scenario than in the zero-interference scenario. Therefore, in a limited-energy network setup, where network lifetime is of great concern, a zero-interference transmission scenario should be favorably considered as the means by which to extend the network lifetime. Finally, the unique aspects of this chapter are summarized as follows. First, the topology model is based on some of the physical-layer parameters. More precisely, the graph edges are constructed on the basis of the desired bit error rate, maximum transmission power of the sensors, noise variance, and Rayleigh fading channel model parameters. This helps in cross-layer design of relay deployment and data-routing schemes. Second, the Fiedler value, which is a good measure of the connectivity, is considered as the network-health indicator. Third, the main relay-deployment algorithm has less complexity, because it is based on an SDP formulation, which can be solved in polynomial time. The remainder of this chapter is organized as follows. In Section 14.2, we describe the system model and present a brief recapitulation on the algebraic connectivity of a graph. We formulate the network maintenance problem and describe the algorithm in Section 14.3. We build upon that algorithm and consider strategies to increase the network lifetime in Section 14.4. In Section 14.5, we address the network repair problem and describe the algorithm. 
In Section 14.6, we present some simulation results to demonstrate the performance.
14.2 The system model

Most of the literature considers the time until the first sensor dies, i.e., runs out of energy, as the network lifetime. In sensor networks, sensors are usually deployed in large numbers and each area is often covered by more than one sensor. Therefore, there is a strong correlation among the sensors' information, and the death of one sensor might not prevent the others from sending their measurements to the central unit. Thus, as in the previous chapter, we consider the time until the network becomes disconnected as the network lifetime. In this section, we first describe the wireless sensor network model. Second, we derive the transmission power required in order to achieve a particular quality of service (QoS), namely the bit error rate. Finally, we briefly review some concepts from spectral graph theory.

A wireless sensor network can be modeled as an undirected weighted simple finite graph G(V, E), where V = {v_1, v_2, . . . , v_n} is the set of all nodes (sensors) and E is the set of all edges (links). An undirected graph implies that all the links in the network are bi-directional; hence, if node v_i can reach node v_j then the converse is also true.
A simple graph means that there is no self-loop at any node and there are no multiple edges connecting two nodes. Finally, a finite graph implies that the cardinality of the sets V and E is finite. Let n and m denote the numbers of nodes and edges in the graph, respectively, i.e., |V| = n and |E| = m, where |·| denotes the cardinality of the given set.

Without loss of generality, we assume that a binary phase-shift keying (BPSK) modulation scheme is employed for the transmission between any two nodes. BPSK is chosen primarily because the data rate in most sensor-network applications is relatively low, and BPSK modulation is an intuitive choice for such applications. We point out that the algorithms can easily be applied with other modulation types as well. Let d_{i,j} denote the distance between two nodes {v_i, v_j} ∈ V and let α denote the path-loss exponent. The channel between two nodes {v_i, v_j} ∈ V, denoted by h_{i,j}, is modeled as a complex Gaussian random variable with zero mean and variance equal to d_{i,j}^{−α}, i.e., h_{i,j} ∼ CN(0, d_{i,j}^{−α}). The channel gain |h_{i,j}| follows a Rayleigh fading model. Furthermore, the squared channel gain |h_{i,j}|² is an exponential random variable with mean d_{i,j}^{−α}, i.e., its probability density function (pdf) is p(|h_{i,j}|²) = d_{i,j}^α exp(−|h_{i,j}|² d_{i,j}^α). The noise in each transmission is modeled as a Gaussian random variable with zero mean and variance N_0. Without loss of generality, we assume the zero-interference transmission scenario,¹ in which sensors transmit their data over orthogonal channels, whether in the time or in the frequency domain. For instance, we consider the time-division multiple-access (TDMA) scenario. The transmission from node v_i to node v_j can be modeled as

y_j = √(P_i) h_{i,j} x_i + n_j,   (14.1)

where x_i is the transmitted symbol with unit energy, i.e., |x_i|² = 1. In (14.1), P_i is the transmitted power, y_j is the received symbol, and n_j is the additive noise term.
The probability of bit error, or bit error rate (BER), can be calculated as

ε = (1/2) [1 − √( γ_{i,j} / (1 + γ_{i,j}) )],   (14.2)

where γ_{i,j} = P_i d_{i,j}^{−α} / N_0 denotes the average SNR. The transmission power of node v_i required to achieve a desired average BER of ε_o over link (v_i, v_j) can be calculated from (14.2) as

P_i^o = d_{i,j}^α N_0 (1 − 2ε_o)² / [1 − (1 − 2ε_o)²],   (14.3)
which is the required transmission power for the zero-interference transmission scenario. We assume that each node v_i ∈ V can transmit with power 0 ≤ P_i ≤ P_max, where P_max denotes the maximum transmission power of each node. Also, we assume that the

¹ The transmission scenario that takes the interference effect into consideration is a simple extension of the zero-interference scenario, and it will be addressed in Section 14.6.1.
noise variance and the desired BER are constant for all transmissions in the network. Therefore, an undirected weighted edge (v_i, v_j) exists if P_i^o ≤ P_max, where P_i^o is calculated as in (14.3). Furthermore, the weight of an edge l connecting v_i and v_j, denoted by w_{i,j} or w_l, is a function of the transmitted power P_i^o that depends on the routing scheme considered, as will be discussed in Section 14.4.1.

For an edge l, 1 ≤ l ≤ m, connecting nodes {v_i, v_j} ∈ V, define the edge vector a_l ∈ R^n, where the ith and jth elements are given by a_{l,i} = 1 and a_{l,j} = −1, respectively, and the rest are zero. The incidence matrix A ∈ R^{n×m} of the graph G is the matrix whose lth column is a_l. The weight vector w ∈ R^m is defined as w = [w_1, w_2, . . . , w_m]^T, where T denotes transposition. The Laplacian matrix L ∈ R^{n×n} is defined as

L = A diag(w) A^T = Σ_{l=1}^{m} w_l a_l a_l^T,   (14.4)
where diag(w) ∈ R^{m×m} is the diagonal matrix formed from w. The diagonal entry L_{i,i} = Σ_{j∈N(i)} w_{i,j}, where N(i) is the set of neighboring nodes of node v_i that have a direct edge with node v_i. L_{i,j} = −w_{i,j} if (v_i, v_j) ∈ E; otherwise L_{i,j} = 0. Since all the weights are non-negative, the Laplacian matrix is positive semi-definite, which is expressed as L ⪰ 0. In addition, the smallest eigenvalue is zero, i.e., λ_1(L) = 0. The second-smallest eigenvalue of L, λ_2(L), is the algebraic connectivity of the graph G [122] [133] [294] [34]. It is called the Fiedler value and it measures how connected the graph is, for the following main reasons. First, λ_2(L) > 0 if and only if G is connected, and the multiplicity of the zero eigenvalue is equal to the number of connected sub-graphs. Second, λ_2(L) is monotonically increasing in the edge set, i.e.,

if G_1 = (V, E_1), G_2 = (V, E_2), and E_1 ⊆ E_2, then λ_2(L_1) ≤ λ_2(L_2),   (14.5)

where L_q denotes the Laplacian matrix of the graph G_q for q = 1, 2. As mentioned previously, the smallest eigenvalue of the Laplacian matrix is λ_1(L) = 0, and its corresponding eigenvector is the all-ones vector 1 ∈ R^n, since the sum of the elements in each row (column) is zero. Let y ∈ R^n be the eigenvector corresponding to λ_2(L), which has unit norm ||y||_2 = 1 and is orthogonal to the all-ones vector, i.e., 1^T y = 0. Since L y = λ_2 y, we have y^T L y = λ_2 y^T y = λ_2. Therefore, the Fiedler value can be expressed as the smallest eigenvalue satisfying these conditions, i.e.,

λ_2(L) = inf_y { y^T L y : ||y||_2 = 1, 1^T y = 0 }.   (14.6)
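Pulling the link model and the graph model together, the sketch below instantiates (14.3) and the edge rule P_i^o ≤ P_max to build the Laplacian of a small topology from node coordinates. The parameter values (α, N_0, ε_o, P_max) and the node positions are illustrative assumptions, and using the required power itself as the edge weight is just one admissible choice of w.

```python
import math

ALPHA = 2.0   # path-loss exponent (assumed)
N0 = 1e-5     # noise variance (assumed)
EPS_O = 1e-3  # desired average BER (assumed)
P_MAX = 1.4   # maximum transmission power (assumed)

def required_power(d):
    """Power needed for BER = EPS_O over a link of length d, per eq. (14.3)."""
    g = (1.0 - 2.0 * EPS_O) ** 2
    return d ** ALPHA * N0 * g / (1.0 - g)

def laplacian_from_positions(pos):
    """Laplacian of the graph whose edges are the links with P^o <= P_MAX.

    The edge weight is taken to be the required power itself (one possible
    choice of w; the chapter leaves the weight to the routing scheme)."""
    n = len(pos)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            p = required_power(math.dist(pos[i], pos[j]))
            if p <= P_MAX:
                L[i][i] += p
                L[j][j] += p
                L[i][j] -= p
                L[j][i] -= p
    return L

# Three mutually reachable nodes plus one node too far away to reach (isolated).
pos = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (60.0, 60.0)]
L = laplacian_from_positions(pos)
```

With these numbers the zero row/column for the distant node reflects the multiplicity argument above: the graph has two connected components, so the zero eigenvalue has multiplicity two and the Fiedler value is zero.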
In this chapter, the network lifetime is defined as the time until the network becomes disconnected, which happens when there is no communication path from any existing sensor to the central unit. Consequently, the network dies (becomes disconnected) if there is no communication path between any two living nodes including the central unit. Therefore, there is a direct relation between keeping the network connected for as long as possible and maximizing the network lifetime, as was shown in the previous chapter. As discussed before, the Fiedler value defines the algebraic connectivity of the
graph and is a good measure of how connected the graph is. Intuitively, the higher the Fiedler value is, the more edges exist between the nodes, the longer the network can live without being disconnected, and thus the longer the network lifetime is. On that basis, we consider the Fiedler value as a quantitative measure of the network lifetime. In Section 14.6, we will validate this direct relation between the Fiedler value and the network lifetime.
14.3 Network maintenance

The network maintenance problem can be stated as follows. Given a wireless network deployed in a g × g square area and represented by the graph G_b = (V_b, E_b), as well as a set of K relays, what are the optimum locations for placing the relays in order to maximize the Fiedler value of the resulting network? Intuitively, adding a relay to the network may result in connecting two or more sensors that hitherto were not connected. Because this relay can be within the transmission range of these sensors, it can forward data from one sensor to the other. Therefore, adding a relay may result in adding one or more edges to the original graph. Let E_c(K) denote the set of edges resulting from adding a candidate set of K relays. Thus, the network maintenance problem can be formulated as

max_{E_c(K)} λ_2( L(E_b ∪ E_c(K)) ).   (14.7)
Since each relay can be deployed anywhere in the network, the location of each relay is a continuous variable in the region [0, g] × [0, g]. It has been shown in [192] that this problem is NP-hard. In the following subsection, we explain a heuristic algorithm to solve this problem.
14.3.1 The SDP-based network maintenance algorithm

The algorithm to solve the network maintenance problem in (14.7) can be described as follows. First, we divide the g × g network area into n_c equal square regions, each of width h; thus, n_c = (g/h)². We represent each region by a relay deployed at its center. The problem can then be viewed as having a set of n_c candidate relays (hence the subscript c), from which we want to choose the optimum K relays. This optimization problem can be formulated as

max λ_2(L(x))   s.t. 1^T x = K, x ∈ {0, 1}^{n_c},   (14.8)

where

L(x) = L_b + Σ_{l=1}^{n_c} x_l A_l diag(w_l) A_l^T   (14.9)
and 1 ∈ R^{n_c} is the all-ones vector. We note that the optimization vector in (14.8) is the vector x ∈ R^{n_c}. The ith element of x, denoted by x_i, is either 1 or 0, which corresponds to whether this relay
should be chosen or not, respectively. In (14.9), L_b is the Laplacian matrix of the base graph. In addition, A_l and w_l are the incidence matrix and weight vector resulting from adding relay l to the original graph. Assuming that adding relay l results in I_l edges between the original n nodes in the base network, the matrix A_l can be formed as A_l = [a_{l_1}, a_{l_2}, . . . , a_{l_{I_l}}], where a_{l_z} ∈ R^n, z = 1, 2, . . . , I_l, represents an edge between two original nodes. Similarly, w_l = [w_{l_1}, w_{l_2}, . . . , w_{l_{I_l}}]. We point out that the effect of adding relays appears only in the edge set E, not in the node set V. The weight of a constructed edge equals the sum of the weights of the edges connecting the relay with the two sensors. Finally, the constraint 1^T x = K in (14.8) indicates that the number of relays chosen is K.

An exhaustive search to solve (14.8) would require computing λ_2(L) for C(n_c, K) different Laplacian matrices, which demands a huge amount of computation for large n_c. Therefore, we need an efficient way to solve (14.8). The optimization problem (14.8) can be thought of as a general version of the one considered in [133]. By relaxing the Boolean constraint x ∈ {0, 1}^{n_c} to the linear constraint x ∈ [0, 1]^{n_c}, we can represent the optimization problem in (14.8) as

max λ_2(L(x))   s.t. 1^T x = K, 0 ≤ x ≤ 1.   (14.10)

We note that the optimal value of the relaxed problem in (14.10) is an upper bound on the optimal value of the original problem (14.8), since the relaxation has a larger feasible set. Similarly to (14.6), the Fiedler value of L(x) can be expressed as

λ_2(L(x)) = inf_y { y^T L(x) y : ||y||_2 = 1, 1^T y = 0 }.   (14.11)
It can be shown that λ_2(L(x)) in (14.11) is the point-wise infimum of a family of linear functions of x. Hence, it is a concave function of x. In addition, the relaxed constraints are linear in x. Therefore, the optimization problem in (14.10) is a convex optimization problem [41]. Furthermore, the convex optimization problem in (14.10) is equivalent to the following SDP optimization problem [133] [34]:

max s   s.t. s (I − (1/n) 1 1^T) ⪯ L(x), 1^T x = K, 0 ≤ x ≤ 1,   (14.12)

where I ∈ R^{n×n} is the identity matrix and B ⪯ A denotes that A − B is a positive semi-definite matrix. By solving the SDP optimization problem in (14.12) efficiently using any standard SDP solver, such as the SDPA-M software package [118], the optimization variable x is obtained. Then, we use a heuristic to obtain a near-optimal Boolean solution from the SDP solution. In this chapter, we consider a simple heuristic: set the largest K elements of the vector x to 1 and the rest to 0. The Boolean vector obtained is the solution of the original problem in (14.8). This procedure will be repeated a few times, and each repetition is referred to as a level. As indicated earlier, each location x_k, k = 1, 2, . . . , K, represents a square region of width h. Choosing x_k = 1 implies that the kth region is more significant, in terms of the connectivity of the whole network, than the ones that have not been chosen.
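The rounding step, keeping the K largest entries of the relaxed solution, is straightforward to implement; the relaxed vector below is a hypothetical SDP output, not a solved instance.

```python
def round_relaxed_solution(x, K):
    """Keep the K largest entries of the relaxed SDP solution x, zero the rest."""
    order = sorted(range(len(x)), key=lambda i: x[i], reverse=True)
    chosen = set(order[:K])
    return [1 if i in chosen else 0 for i in range(len(x))]

# Hypothetical relaxed solution over n_c = 6 candidate regions, K = 2 relays.
x_relaxed = [0.05, 0.90, 0.10, 0.75, 0.15, 0.05]
x_bool = round_relaxed_solution(x_relaxed, 2)
# Regions 1 and 3 are selected: x_bool == [0, 1, 0, 1, 0, 0]
```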
Table 14.1. The network-maintenance algorithm
Let Gb = (Vb, Eb) be the original graph, L(K) be the Laplacian matrix of the resulting graph after adding the available K relays, and λ2,t(L(K)) be the Fiedler value at the tth level (iteration).
(i) Initialization: set t = 1 and λ2,0(L(K)) = λ2(Lb), where Lb is the Laplacian matrix of Gb.
(ii) Divide the network area into nc equal square regions. Each region is represented by a relay at its center.
(iii) Solve the optimization problem in (14.12) and obtain the best K < nc relays among the nc relays defined in (ii). Denote the solutions as xk, k = 1, 2, . . . , K.
(iv) Calculate λ2,t(L(K)), which is the Fiedler value of the resulting graph, by constructing the Laplacian matrix of the resulting graph.
(v) While λ2,t(L(K)) > λ2,t−1(L(K)):
(a) increment the level index as t = t + 1;
(b) for each solution xk, divide the kth region into nc equal square regions and obtain the best location for this relay, which can be solved using (14.12) by setting K = 1;
(c) calculate λ2,t(L(K)) of the resulting graph.
End while
(vi) The obtained solutions xk, k = 1, 2, . . . , K, represent the required locations of the relays.
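The steps in Table 14.1 can be sketched as follows. This is an illustrative stand-in, not the authors' implementation: it assumes unit edge weights, a disc connectivity model with a made-up range, and replaces the SDP step of (14.12) with brute-force scoring of candidate cells by the Fiedler value they yield (one relay at a time); the refinement levels of step (v) would rerun the same scoring on each chosen cell's subdivision.

```python
import numpy as np
from itertools import combinations

RANGE = 2.5  # assumed transmission range; not a parameter from the text

def laplacian(points, extra_edges=()):
    """Unit-weight Laplacian of the disc graph on `points`, plus extra edges."""
    n = len(points)
    L = np.zeros((n, n))
    edges = [(i, j, 1.0) for i, j in combinations(range(n), 2)
             if np.linalg.norm(points[i] - points[j]) <= RANGE]
    for i, j, w in edges + list(extra_edges):
        L[i, i] += w; L[j, j] += w; L[i, j] -= w; L[j, i] -= w
    return L

def relay_edges(points, relay):
    """A relay adds an edge between every pair of sensors that can reach it;
    the new edge weight is the sum of the two relay-link weights (here 1 + 1)."""
    reach = [i for i, p in enumerate(points) if np.linalg.norm(p - relay) <= RANGE]
    return [(i, j, 2.0) for i, j in combinations(reach, 2)]

def fiedler(points, relays):
    extra = [e for r in relays for e in relay_edges(points, r)]
    return np.linalg.eigvalsh(laplacian(points, extra))[1]

def place_relays(points, K, side, grid=5):
    """One level of Table 14.1: divide the side x side area into grid^2 cells,
    score each cell centre (one relay at a time), and keep the best K."""
    h = side / grid
    centres = [np.array([(a + 0.5) * h, (b + 0.5) * h])
               for a in range(grid) for b in range(grid)]
    return sorted(centres, key=lambda c: fiedler(points, [c]), reverse=True)[:K]
```

With K = 1 and two sensor clusters that are out of range of each other, the chosen cell lands between the clusters and lifts λ2 above zero.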
In order to improve the current solution, we repeat the same procedure by dividing each kth region into nc smaller areas and representing each area by a relay at its center. Then, we find the best of these nc candidate locations and deploy the relay there. This problem is the same as the one in (14.12) on setting K = 1. The same procedure is repeated for each region k, 1 ≤ k ≤ K, obtained in the first level. The network maintenance algorithm applies a finite number of levels until there is no further improvement in the resulting Fiedler value. Table 14.1 summarizes the implementation of the network maintenance algorithm. We also discuss the complexity of the network maintenance algorithm. The interior-point algorithms for solving SDP optimization problems have been shown to run in polynomial time. Thus, the network maintenance algorithm, which applies a small number of levels, each requiring the solution of an SDP optimization problem, has polynomial complexity in time. Finally, we point out that the network maintenance algorithm is also suitable for the kinds of applications in which the relays can only be deployed at a given set of candidate locations [293]. In the next section, we consider various strategies to increase the efficiency of the relays deployed.
14.4
Lifetime-maximization strategies In this section, we build upon the network maintenance algorithm described in Table 14.1 and present two strategies that can extend the network lifetime. First, we consider the WMPR algorithm, which efficiently utilizes the relays deployed in a wireless network. Second, we study an adaptive network maintenance algorithm that relocates the relays on the basis of the status of the network.
14.4.1
The weighted minimum-power routing (WMPR) algorithm In this subsection, first we explain the conventional minimum-power routing (MPR) algorithm and then we present the WMPR algorithm. The MPR algorithm constructs the minimum-power route from each sensor to the central unit by utilizing the conventional Dijkstra shortest-path algorithm [25]. The cost (weight) of a link (vi, vj) is given by

wi,j|MPR = Pio + Pr, (14.13)

where Pio is the transmission power given in (14.3) and Pr denotes the receiver processing power, which is assumed to be fixed for all of the nodes. In (14.13), it is obvious that the MPR algorithm does not differentiate between the original sensors and the deployed relays while constructing the minimum-power route. In most applications, it is very possible that the few relays deployed have higher initial energy than that of the many existing sensors. Intuitively, to make the network live longer, the relays should be utilized more often than the sensors. Consequently, the loads of the sensors and relays will be proportional to their energies, which results in a more balanced network. The WMPR algorithm achieves this balance by assigning weights to the sensors and the relays, and the cost of each link depends on these weights. Therefore, the weight of the link (vi, vj) is given by

wi,j|WMPR = ei Pio + ej Pr, (14.14)
where ei denotes the weight of node vi . Assigning the relays a smaller weight than that of the sensors means that the network becomes more balanced and the network lifetime is increased. In summary, the WMPR algorithm utilizes Dijkstra’s shortest-path algorithm to compute the route from each sensor to the central unit using (14.14) as the link cost. More importantly, the weights of the relays should be smaller than those of the sensors. Figure 14.1 depicts a sensor network of n = 20 nodes deployed randomly in a 6 × 6 area. The central unit is located at the center of the network and we assume that K = 1 relay is available. The location of the relay is determined via the network maintenance algorithm, which is depicted in Table 14.1. Each routing algorithm, either the MPR or the WMPR, constructs a tree connecting all the nodes together that has the minimum weight between each two nodes. In Figure 14.1(a), the relay is treated in a similar fashion to the sensors in the MPR-based constructed routing tree. On the other hand, Figure 14.1(b) shows that most sensors in the WMPR-based constructed routing tree tend to send their packets to the relay rather than to the neighboring sensors. As will be shown in Section 14.6, the WMPR algorithm achieves a higher lifetime gain than that achieved by the MPR algorithm when the relays deployed have more initial energy than do the sensors. Finally, we shall point out that many of the lifetime-maximization routing algorithms can be modified in a similar way to that of the WMPR algorithm.
Figure 14.1
Examples of routing trees for n = 20 sensors deployed randomly in a 6 × 6 square field: (a) an MPR-based constructed routing tree and (b) a WMPR-based constructed routing tree.
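Both routing rules are ordinary Dijkstra runs differing only in the link cost, so one routine covers both. The sketch below is our illustration (the node names, topology, and power values are made up): it uses the WMPR cost of (14.14), and the MPR cost of (14.13) is recovered by setting every node weight ei = 1.

```python
import heapq

def wmpr_route(nodes, links, weights, source, sink, Pr=1e-4):
    """Dijkstra with the WMPR link cost w_{i,j} = e_i * P_i^o + e_j * P_r
    of (14.14); weights[v] is the node weight e_v (relays < sensors)."""
    dist = {v: float('inf') for v in nodes}
    prev = {}
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, Pio in links.get(u, []):
            cost = weights[u] * Pio + weights[v] * Pr
            if d + cost < dist[v]:
                dist[v] = d + cost
                prev[v] = u
                heapq.heappush(heap, (d + cost, v))
    # reconstruct the route by walking back from the sink
    path, v = [sink], sink
    while v != source:
        v = prev[v]
        path.append(v)
    return path[::-1]

# Toy topology (hypothetical): two sensors, one relay, one central unit.
links = {'s1': [('r', 0.01), ('s2', 0.01)],
         's2': [('c', 0.01)],
         'r':  [('c', 0.01)]}
weights = {'s1': 1.0, 's2': 1.0, 'r': 0.1, 'c': 1.0}  # relay weighted low
route = wmpr_route(['s1', 's2', 'r', 'c'], links, weights, 's1', 'c')
```

Because the relay's weight is small, the route prefers the relay hop, mirroring Figure 14.1(b).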
14.4.2
An adaptive network maintenance algorithm
In this subsection, we consider the possibility of relocating the deployed relays. In the fixed network maintenance strategy, as described in Table 14.1, each relay is deployed in a particular place and stays there until the network dies. Intuitively, the network lifetime can be increased by adaptively relocating the relays on the basis of the status of the network. Such a scheme can be implemented via low-altitude UAVs or movable robots depending on the network environment. For instance, we can utilize one UAV or more, which can fly to the obtained relay locations to improve the connectivity of the ground network. At each location, the UAV acts exactly like a fixed relay connecting a set of sensors through multi-hop relaying. The adaptive network maintenance algorithm is implemented as follows. First, the initial locations of the deployed relays are determined using the network maintenance algorithm described in Table 14.1. Whenever a node dies, the Fiedler value of the remaining network is calculated. If it is less than a certain threshold, then the network is likely to become disconnected soon. Therefore, the deployment algorithm is run again and the new relay locations are obtained. Finally, each relay is relocated to its new location, if that is different from its current one. Application of the algorithm is repeated until the network becomes disconnected. In what follows, we present an example to illustrate how effective the adaptive network maintenance algorithm can be. Consider a wireless sensor network of n = 20 nodes deployed randomly in a 6 × 6 square area. We assume that only K = 1 relay is available. Whenever a node sends a packet, its remaining energy is decreased by the amount of the transmission energy, and the node dies when it has no remaining energy. In addition, the Fiedler-value threshold is chosen to be 0.03. Figure 14.2 depicts the Fiedler value of the network as a function of the number of dead nodes utilizing the MPR algorithm.
Figure 14.2
The Fiedler value (network health indicator) vs. the number of dead nodes, for n = 20 sensors deployed randomly in a 6 × 6 square field. Effects of adaptive and fixed network-maintenance algorithms are illustrated.
The original network becomes disconnected after the death of eight nodes. Adding a fixed relay increases the network lifetime, resulting in a network-lifetime gain of 31%. The network-lifetime gain due to adding K
relays is defined as G(K ) = (T (K ) − T (0))/T (0), where T (K ) is the network lifetime on deploying K relays. For K = 1 relay, the adaptive network maintenance algorithm achieves a lifetime gain of 70%. This example shows that the adaptive network maintenance algorithm can significantly increase the network lifetime. We clarify that these lifetime gains are specific to that particular example and do not correspond to the average results. The average results for the various network maintenance strategies are provided in Section 14.6. It is worth noting that Figure 14.2 shows that the Fiedler value of the living network can be thought of as a health indicator of the network. If the network health is below a certain threshold, then the network is in danger of being disconnected. Thus, a network maintenance strategy, either fixed or adaptive, should be implemented. However, if the network becomes disconnected, then intuitively we can consider reconnecting the network again by deploying the minimum possible number of relays. This is the network repair problem, which is discussed in the following section.
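The adaptive rule of Section 14.4.2 is essentially a monitoring loop around the Fiedler value. The sketch below (our illustration, with a unit-weight disc-connectivity model and made-up parameters) replays a sequence of node deaths and records the points at which the health indicator drops below the threshold — exactly where the Table 14.1 placement would be rerun and the relays relocated.

```python
import numpy as np
from itertools import combinations

def fiedler(points, rng):
    """Fiedler value of the unit-weight disc graph on `points`."""
    n = len(points)
    L = np.zeros((n, n))
    for i, j in combinations(range(n), 2):
        if np.linalg.norm(points[i] - points[j]) <= rng:
            L[i, i] += 1; L[j, j] += 1; L[i, j] -= 1; L[j, i] -= 1
    return np.linalg.eigvalsh(L)[1]

def adaptive_triggers(points, deaths, rng, threshold):
    """Replay node deaths; report how many nodes were dead each time the
    health indicator dropped below the threshold (i.e., each time the
    relays would be redeployed by rerunning the Table 14.1 placement)."""
    alive = list(points)
    triggers = []
    for victim in deaths:
        alive.pop(victim)
        lam2 = fiedler(alive, rng)
        if lam2 < 1e-9:            # network already disconnected: stop
            break
        if lam2 < threshold:
            triggers.append(len(points) - len(alive))
    return triggers
```

On a 5-node line with spacing 1 and range 1.1, killing an end node leaves a 4-node path with λ2 = 2 − √2 ≈ 0.586, so a threshold of 0.6 fires a redeployment while 0.5 does not.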
14.5
Network repair In this section, we consider the network repair problem. In particular, the network is initially disconnected and we need to find the minimum number of relays and their optimum locations in order to reconnect the network. Let a disconnected base network deployed in a g × g square area be represented by the graph G b = (Vb , E b ). Hence, λ2 (L(E b )) = 0. The network repair problem can be formulated as
Table 14.2. The network repair algorithm
Let Gb = (Vb, Eb) be the original graph, L(K) be the Laplacian matrix of the resulting graph after adding the available K relays, and λ2(L(K)) be its Fiedler value.
(i) Initialization: set K = 0.
(ii) While λ2(L(K)) ≤ δ:
(a) increment the number of relays as K = K + 1;
(b) implement the network maintenance algorithm, which is given in Table 14.1, utilizing K candidate relays;
(c) calculate λ2(L(K)) of the resulting graph.
End while
(iii) The obtained K is the minimum number of relays required.
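Table 14.2 translates almost line for line into code. In this illustrative sketch (ours, not the authors'), a fixed list of candidate locations stands in for the Table 14.1 placement step, which would instead choose each batch of K locations by solving (14.12), and each added relay is treated as an extra graph node rather than being folded into sensor-to-sensor edges.

```python
import numpy as np
from itertools import combinations

def fiedler(points, rng):
    """Fiedler value of the unit-weight disc graph on `points`."""
    n = len(points)
    L = np.zeros((n, n))
    for i, j in combinations(range(n), 2):
        if np.linalg.norm(np.subtract(points[i], points[j])) <= rng:
            L[i, i] += 1; L[j, j] += 1; L[i, j] -= 1; L[j, i] -= 1
    return np.linalg.eigvalsh(L)[1]

def network_repair(sensors, candidates, rng, delta=0.0):
    """Increment K until the repaired graph satisfies lambda_2 > delta
    (a small tolerance absorbs numerical zeros of a disconnected graph)."""
    K = 0
    while fiedler(sensors + candidates[:K], rng) <= delta + 1e-9:
        K += 1
        if K > len(candidates):
            raise RuntimeError("candidate set cannot reconnect the network")
    return K
```

With two sensor clusters six units apart and a range of 2.5, one mid-point relay is not enough and the loop settles on K = 2.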
min K, s.t. λ2(L(Eb ∪ Ec(K))) > δ, (14.15)
where δ > 0 is referred to as the connectivity threshold and reflects the degree of desired robustness of the network connectivity, and Ec(K) denotes the set of edges resulting from adding a candidate set of K relays. We note that, as δ increases, the number of relays required to satisfy the connectivity constraint in (14.15) increases. It was shown in [248] that the network repair problem is NP-complete, and hence we consider a heuristic algorithm to solve it. We utilize the solution for the network-maintenance problem in solving the network repair problem. More precisely, we present an iterative network repair algorithm, which is implemented as follows. First, we assume that K = 1 relay suffices to reconnect the network. Second, we solve the network maintenance problem in Table 14.1 to find the location for that relay. If the Fiedler value of the resulting network is strictly greater than zero then the network is reconnected and the algorithm stops. Otherwise, the number of candidate relays is incremented by one and the algorithm is applied again. Table 14.2 summarizes the network repair algorithm. Similarly to the network maintenance algorithm, the network repair algorithm runs in polynomial time.
14.6
Simulation results In this section, we present some simulation results to show the performance of the algorithms. We consider n = 20 nodes deployed randomly in a 6 × 6 square area and the central unit is assumed to be at the center of the network. Data generated at the sensors follow a Poisson process with rate 10 packets per unit time, and the path-loss exponent is α = 2. The desired BER for transmissions over any link is εo = 10^−4, the noise variance is N0 = −20 dBm, the maximum power is Pmax = 0.15 units, the receiver processing power is Pr = 10^−4 units, and the initial energy of every sensor is 0.1 unit. The number of candidate-relay locations utilized in the network maintenance algorithm, which is given in Table 14.1, is chosen to be nc = 25 locations. The SDPA-M software
package [118] has been utilized to solve the SDP optimization problem in (14.12). The following results are averaged over 1000 independent network realizations.
Figure 14.3
The average Fiedler value vs. the added number of relays, for n = 20 distributed randomly in a 6 × 6 square field. The effect of deploying relays is illustrated.
Figure 14.3 depicts the increase of the Fiedler value as the number of relays added increases. For comparison purposes, we also plot the effect of randomly adding relays. As shown, the random addition performs poorly compared with the presented algorithm. In Section 14.3, we have chosen the Fiedler value as an intuitive and good measure of the network lifetime, the maximization of which is our main objective. Figure 14.4 depicts the gain in network lifetime as a function of the number of relays added. The network-lifetime gain due to adding K relays is defined as

GT(K) = (T(K) − TMPR(0)) / TMPR(0), (14.16)
where T(K) is the network lifetime after deploying K relays and TMPR(0) denotes the network lifetime of the original network utilizing the MPR algorithm. As shown, the SDP-based network maintenance algorithm achieves a significant network-lifetime gain as the number of relays added increases, which is a direct consequence of increasing the Fiedler value as shown previously in Figure 14.3. At K = 4, employing the MPR algorithm, the network maintenance algorithm achieves a lifetime gain of 105.8%, whereas random deployment achieves a lifetime gain of 40.09%. In Figure 14.4, we also illustrate the impact of the adaptive network maintenance algorithm on the network-lifetime gain. At K = 4 relays, the lifetime gain jumps to 132.1% for the MPR algorithm. We also compare the performance with that obtained
with the exhaustive-search scheme. For practical implementation of the exhaustive-search scheme, the optimum locations for a given set of relays are determined consecutively, i.e., one relay at a time. We implemented the exhaustive-search scheme by dividing the network area into many small regions, with each region represented by a relay at its center. The optimum location for the first relay is determined by calculating the lifetime for all the possible locations and choosing the one that results in the maximum lifetime. Given the updated network including the first relay, we find the optimum location for the second relay via the same exhaustive-search scheme. Application of this algorithm is repeated until all the relays have been deployed. In Figure 14.4, we show the network-lifetime gain for the exhaustive-search case utilizing the MPR algorithm.
Figure 14.4
The average gain in network lifetime vs. the number of relays added, for n = 20 distributed randomly in a 6 × 6 square field. The effect of deploying relays is illustrated.
As indicated in Section 14.4.1, the WMPR algorithm should intuitively outperform the MPR algorithm when relays have higher initial energy than the sensors. We set the weights of the deployed relays to 0.1, while the weights of the original sensors are set to 1. Therefore, sensors tend to send their data to the deployed relays rather than to the neighboring sensors. In addition, the relays' energy is set to be 10 times that of the sensors. As a result, the WMPR algorithm achieves a higher gain than that achieved by the MPR algorithm, as shown in Figure 14.5. At K = 4, the WMPR and MPR algorithms achieve network-lifetime gains of 278.8% and 262.7%, respectively. In Figure 14.5, we notice that the difference between the WMPR and the MPR performance curves increases as the number of relays increases. Intuitively, the WMPR algorithm utilizes the relays more frequently than the MPR algorithm. Hence it achieves a higher lifetime gain by increasing the relays' initial energy.
Figure 14.5
The average network-lifetime gain vs. the number of relays added, for n = 20 distributed randomly in a 6 × 6 square field. The effect of increasing the relays’ initial energy by a factor of 10 is illustrated.
We also consider a larger sensor network of n = 50 nodes deployed randomly in a 15 × 15 square area. Figure 14.6 shows the network-lifetime gain. At K = 15, employing the MPR algorithm, the network maintenance algorithm presented here achieves a lifetime gain of 113.6%, whereas random deployment achieves a lifetime gain of 40.7%. In Figure 14.6, we also illustrate the impact of the adaptive network maintenance algorithm on the network-lifetime gain. At K = 15 relays, the lifetime gain jumps to 119.7% for the MPR algorithm.
14.6.1
Interference-based transmission scenario In this subsection, we consider a different transmission scenario in which some of the sensors are allowed to send their data simultaneously over the same channel. We assume that node vi is sending its data to node vj and the total number of simultaneous transmissions is s. The received symbol can be modeled as

yj = √(Pi) hi,j xi + Σ_{k=1, k≠i}^{s} √(Pk) hk,j xk + nj. (14.17)
Figure 14.6
The average network-lifetime gain vs. the number of relays added, for n = 50 distributed randomly in a 15 × 15 square field. The effect of deploying relays is illustrated.
Let mj = Σ_{k≠i} √(Pk) hk,j xk + nj denote the random variable representing the summation of the noise and interference terms. For a large enough number of simultaneous transmissions, mj can be modeled, via the central limit theorem, as a complex Gaussian random variable with zero mean and variance N0 + Σ_{k≠i} Pk d_{k,j}^{−α}, i.e.,
mj ∼ CN(0, N0 + Σ_{k≠i} Pk d_{k,j}^{−α}). This is a reasonable assumption since the number of sensors deployed in a sensor network is often large. Thus, (14.17) can be written as (14.1) with a different noise term, which is mj. Consequently, and similarly to (14.3), the power required in order to achieve a desired BER of εo is given by

Pio = d_{i,j}^{α} (N0 + Σ_{k≠i} Pk d_{k,j}^{−α}) (1 − 2εo)² / (1 − (1 − 2εo)²). (14.18)
In (14.18), it is obvious that the transmission power required by each node depends on the transmission powers of the other nodes sending simultaneously over the same channel. We obtain an approximated power expression by first approximating (14.18) as follows. At low BER, it can be easily shown that

Pi ≈ (N0 + Σ_{k≠i} Pk d_{k,j}^{−α}) / (4εo d_{i,j}^{−α}). (14.19)
The transmission power can be determined through a power-control problem, which can be formulated as the following optimization problem:
min Σ_i Pi, s.t. (N0 + Σ_{k≠i} Pk d_{k,j}^{−α}) / (4 Pi d_{i,j}^{−α}) ≤ εo, (14.20)
Let p ∈ R^s be the power vector, containing the transmission powers Pi, that needs to be calculated. Hence, (14.20) can be formulated in matrix form as

min Σ_i Pi, s.t. (I − (1/(4εo)) F) p ≥ u, (14.21)
where I ∈ R^{s×s} is the identity matrix and the ith element of the vector u ∈ R^s is ui = N0 / (4εo d_{i,j}^{−α}). With respect to F ∈ R^{s×s}, Fi,j = 0 if i = j and Fi,j = (dk,j / di,j)^{−α} elsewhere. If the spectral radius of F, which is its largest eigenvalue, is less than 4εo, then the minimum power set is given [361] [360] by

po = (I − (1/(4εo)) F)^{−1} u. (14.22)
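Once F and u are assembled, (14.22) is a single linear solve. The sketch below is our illustration (the gain matrix and the numbers in the example are made up): G[i, k] collects the path gains d_{k,j}^{−α}, so diag(G) gives the own-link gains, the normalised off-diagonal entries form F, and ui = N0/(4εo Gii) matches the definition above; the spectral-radius feasibility condition is checked explicitly.

```python
import numpy as np

def min_power_vector(G, N0, eps):
    """Solve the power control of (14.20)-(14.22).
    G[i, k] is the path gain from transmitter k to the receiver of link i
    (G[i, i] is link i's own gain); returns the minimum powers meeting the
    BER target eps on every link."""
    s = G.shape[0]
    F = G / np.diag(G)[:, None]          # cross gains normalised per link
    np.fill_diagonal(F, 0.0)
    if np.max(np.abs(np.linalg.eigvals(F))) >= 4 * eps:
        raise ValueError("infeasible: spectral radius of F exceeds 4*eps")
    u = N0 / (4 * eps * np.diag(G))
    return np.linalg.solve(np.eye(s) - F / (4 * eps), u)

# Two far-apart links (hypothetical numbers): the weak cross gains keep the
# spectral radius below 4*eps, so the closed form (14.22) is feasible.
G = np.array([[1.0, 2.5e-5],
              [2.5e-5, 1.0]])
p = min_power_vector(G, N0=1e-6, eps=1e-4)
```

At the solution each link meets the BER constraint of (14.20) with equality, which is why (14.22) gives the minimum total power.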
At low BER, it can be shown that the zero-interference transmission power required in (14.3) can be approximated as Pi ≈ N0 / (4εo d_{i,j}^{−α}). On comparing this power with that required for the interference-based transmission scenario given in (14.19), it is obvious that the interference-based transmission scenario requires more transmission power per node than the zero-interference scenario for the same desired BER. Therefore, nodes will lose their energy at a faster rate in the interference-based transmission scenario. Consequently, the network lifetime is shorter in the interference-based transmission scenario. Therefore, if the limited lifetime of batteries is a concern in a sensor network, it is recommended to have orthogonal transmissions between the nodes in order to maximize the network lifetime. We consider a network of n = 10 nodes deployed randomly in a 4 × 4 area. All the nodes operate in half-duplex mode, i.e., no node is allowed to transmit and receive at the same time. In addition, nodes sending their data to the same destination are not allowed to transmit at the same time, since this would require a more complex receiver such as a successive-interference-cancelation (SIC) decoder, which a simple sensor node might not have. The route from each sensor to the central unit is determined on the basis of the zero-interference transmission powers given in (14.3). Then the transmission powers are modified according to (14.22) to represent the interference-based case. In addition to the network lifetime, the number of packets delivered from all the sensors to the central unit before the network dies is an important measure of the network performance. Figure 14.7 depicts the number of delivered packets versus the number of relays added for the zero-interference and interference-based transmission scenarios utilizing the MPR algorithm.
First, it is shown for the interference-based scenario that the number of packets delivered slightly increases as the number of relays added increases. Generally, there are two main factors affecting the net result of the interference-based scenario whenever relays are deployed. Deploying relays increases the number of packets delivered because of the relaying together with the extra energy that the deployed relays have. So, adding more relays increases the number of packets delivered, as shown previously for the zero-interference transmission scenario. On the other hand, deploying relays causes interference with the other existing nodes and forces each existing
node to raise its transmission power to overcome the interference effect of the recently added relays. Thus, deploying relays will cause nodes to die faster and consequently will decrease the number of packets delivered. This is the main reason why the network-lifetime gains in the zero-interference transmission scenario are higher than those in the interference-based scenarios. We note that the net result of these two factors will determine the performance of the interference-based network maintenance algorithms.
Figure 14.7
The average number of packets delivered vs. the number of relays added, for n = 10 distributed randomly in a 4 × 4 square field.
14.6.2
Network repair We consider the network repair problem when the network is originally disconnected. In Figure 14.8, we show the average number of added relays required in order to reconnect a disconnected network, assuming δ = 0 in (14.15). n sensors are randomly distributed in a 6 × 6 square area. The maximum transmission power of any node is Pmax = 0.07. It is shown that, for a disconnected network of n = 25 nodes deployed randomly in a 6 × 6 area, the average number of relays added is 4. For n < 15, Figure 14.8 shows that the average number of relays added increases as n increases. This is because, for small n, it is more likely that the added sensors will be deployed in new regions where there are very few or no sensors. Thus, more relays need to be deployed to connect these added sensors. On the other hand, as n increases beyond n = 15, the average number of added relays decreases. This is intuitive because, as the number of sensors increases to produce a moderate state, the network becomes more balanced, i.e., the sensors are uniformly deployed throughout the whole area. Beyond this moderate state, increasing the number of sensors keeps filling the gaps in the network. Consequently, the average number of relays needed decreases as n increases.
Figure 14.8
The average minimum number of added relays required in order to reconnect a network vs. the number of sensors in the network.
14.7
Summary and bibliographical notes In this chapter, we have addressed the problems of network maintenance and network repair in wireless sensor networks. We have considered the Fiedler value, which is the algebraic connectivity of a graph, as a network-health indicator. First, we consider a network maintenance algorithm, which finds the locations for an available set of relays that result in the maximum possible Fiedler value. This algorithm finds the location through a small number of levels. In each level, the network maintenance problem is formulated as an SDP optimization problem, which can be solved using the available standard SDP solvers. In a sensor network of n = 50 sensors deployed in a 15 × 15 area, the network lifetime is increased by 113.6% due to the addition of 15 relays. Second, we present an adaptive network maintenance algorithm, whereby the relays’ locations can be changed depending on the network-health indicator. We have shown that a lifetime gain of 119.7% is achieved due to the adaptive network-maintenance algorithm. Third, we study the WMPR algorithm, which balances the load of the network among the sensors and the relays. We also illustrate that, in sensor networks where sensors have limited energy supplies, nodes should transmit their data over orthogonal channels with no interference from the other nodes. Finally, we discuss an iterative network repair algorithm, which finds the minimum number of relays needed to connect a disconnected network. Interested readers can refer to [200]. Numerous network maintenance algorithms [192] [189] [199] [469] have recently been investigated. In [192], the problem of providing additional energy to the existing sensors together with deploying additional relays in two-tier wireless sensor networks
was considered. It was shown that the problem of joint design of energy provision and relay node placement can be formulated as a mixed-integer nonlinear programming problem, which is NP-hard in general. A relay-deployment algorithm that maximizes the minimum sensor lifetime by exploiting cooperative diversity was presented in [189]. In [469], a joint design of relay deployment and transmission power control was considered in order to maximize the network lifetime. In that work, there is no solution specifying how to deploy the relays at particular locations; instead the probability distribution of the relays’ location is quantified. More precisely, the relay density should be higher near the central unit. There have been recent works that considered the connectivity in wireless sensor networks [293] [471] [190]. In [293], the problem of adding relays to improve the connectivity of multi-hop wireless networks was addressed. A set of designated points was given and the available relays had to be deployed in a smaller set of these designated points. The set of relay locations was determined by testing all the designated points and choosing the combination which resulted in a higher connectivity measure. Obviously, this scheme becomes very complex as the network size increases. In [471], three random deployment strategies, namely, connectivity-oriented, lifetime-oriented, and hybrid-oriented, were developed. However, there is no explicit optimization problem for maximizing the network lifetime in that work. A mathematical approach to positioning and flying a UAV over a wireless ad hoc network in order to optimize the network’s connectivity for a better QoS and coverage was considered in [190]. Authors of several works have considered the network repair problem, in which the objective is to find the minimum number of relays needed in order to have a connected graph. 
This is the same as the problem of finding a Steiner minimum tree with the minimum number of Steiner points and bounded edge length, defined in [261], which is NP-hard. Several approximate algorithms have been developed to solve it in [248] [52] [53] [262]. For instance, the algorithm in [262] first computes the minimum spanning tree (MST) of the given graph, then it adds relays on the MST edges that did not exist in the original graph. The connectivity improvement using Delaunay triangulation in [248] involves constructing a Delaunay triangulation in the disconnected network and deploying nodes in certain triangles according to several criteria. The network repair problem has been generalized to k-connectivity, in the sense of both edge and vertex connectivity, in [233].
Part III
Securing mechanism and strategies
15
Trust modeling and evaluation
The performance of distributed networks depends on collaboration among distributed entities. To enhance security in distributed networks, such as ad hoc networks, it is important to evaluate the trustworthiness of participating entities since trust is the major driving force for collaboration. In this chapter, we present a framework to quantitatively measure trust, model trust propagation, and defend trust-evaluation systems against malicious attacks. In particular, we address the fundamental understanding of trust, quantitative trust metrics, mathematical properties of trust, dynamic properties of trust, and trust models. The attacks against trust evaluation are identified and defense techniques are developed.
15.1
Introduction The fields of computing and communications are progressively heading toward systems of distributed entities. In the migration from traditional architectures to more distributed architectures, one of the most important challenges is security. Currently, the networking community is working on introducing traditional security services, such as confidentiality and authentication, into distributed networks including ad hoc networks and sensor networks. However, it has also recently been recognized that new tools, beyond conventional security services, need to be developed in order to defend these distributed networks from misbehavior and attacks that may be launched by selfish and malicious entities. In fact, the very challenge of securing distributed networks comes from the distributed nature of these networks – there is an inherent reliance on collaboration between network participants in order to achieve the planned functionalities. Collaboration is productive only if all participants operate in an honest manner. Therefore, establishing and quantifying trust, which is the driving force for collaboration, is very important for securing distributed networks. There are three primary aspects associated with evaluating trust in distributed networks. First, the ability to evaluate trust offers an incentive for good behavior. Creating an expectation that entities will "remember" one's behavior will cause network participants to act more responsibly. Second, trust evaluation provides a prediction of one's future behavior. This prediction can assist in decision-making. It provides a means for good entities to avoid working with less trustworthy parties. Malicious users, whose behavior has caused them to be recognized as having low trustworthiness, will have less
Trust modeling and evaluation
ability to interfere with network operations. Third, the results of trust evaluation can be directly applied to detect selfish and malicious entities in the network.

Research on the subject of trust in computer networks has been performed extensively for a wide range of applications, including public-key authentication, electronic commerce, peer-to-peer networks, and ad hoc and sensor networks. However, many challenges still need to be addressed.

Trust definition. Although definitions and classifications of trust have been borrowed from the social-science literature, there is no clear consensus on the definition of trust in computer networks. Trust has been interpreted as reputation, trusting opinion, probability [131], etc.

Trust metrics. As a natural consequence of the confusion in the definition of trust, it has been evaluated in very different ways. Currently, it is very difficult to compare or validate these trust metrics because a fundamental question has not been well understood: what is the physical meaning of trust? Unlike in social networks, where trust is often a subjective concept, trust metrics for computer networks need to have clear physical meanings, in order to establish the connection between trust metrics and observations (trust evidence) and to justify the policies and rules that govern calculations performed upon trust values.

Quantitative trust models. Many trust models have been developed to model the transit of trust through third parties. For example, the simplest method is to sum the numbers of positive ratings and negative ratings separately and keep a total score as the positive score minus the negative score. This method is used in eBay's reputation forum. Although a variety of trust models is available, it is still not well understood what fundamental rules trust models must follow. In the absence of a good answer to this essential question, the design of trust models is still at the empirical stage.
Security. Trust evaluation is obviously an attractive target for adversaries. Besides well-known straightforward attacks such as providing dishonest recommendations, some sophisticated attacks can undermine the whole trust-evaluation process. In addition, providing trust recommendations may violate the privacy of individuals. Currently, security and privacy issues have not received enough attention in the design of trust-evaluation systems.

In this chapter, we address the four major challenges discussed above and develop a systematic framework for trust evaluation in distributed networks.

• We exploit the definitions of trust in the sociology, economics, political science, and psychology literature. By investigating correlations and differences between the establishment of trust in the social context and that in networking, we clarify the concept of trust in distributed networks and develop trust metrics.
• We develop fundamental axioms that address the basic rules for establishing trust through a third party (concatenation propagation) and through recommendations from multiple sources (multipath propagation).
• The vulnerabilities of trust/reputation systems are extensively studied and protection strategies are considered. Some of the vulnerabilities have not been recognized in existing works.
• Finally, we develop a systematic framework for trust evaluation in distributed networks. To demonstrate the usage of this framework, we implement it in an ad hoc network to assist secure routing and malicious-node detection. Extensive simulations are performed to demonstrate the effectiveness of the trust-evaluation methods and attack/anti-attack schemes presented here.

The rest of the chapter is organized as follows. Section 15.2 presents our understanding of trust, including trust definition, trust metrics, basic axioms for trust propagation, and trust models. Section 15.3 presents attacks and protection techniques for trust-evaluation systems. In Section 15.4, a systematic trust-management framework is introduced and applied in ad hoc networks to assist route selection and malicious-node detection. Simulation results are shown in Section 15.5, followed by a summary in Section 15.6.
15.2 The foundations of trust evaluation
15.2.1 Trust concepts in social networks and computer networks

In order to gain insight into the meaning of trust, we start from the definitions of trust commonly adopted in social science. In [275], after having examined definitions of trust in 60 research articles and books, the authors identified six representative trust constructs, as illustrated in Figure 15.1.

Figure 15.1 Relationships among trust constructs (arrows indicate relationships and mediated relationships): trusting behavior, trusting intention, trusting beliefs, situational decision to trust, disposition to trust, system trust, and belief-formation processes.

In social networks, trust can refer to a behavior, such that one person voluntarily depends on another person in a specific situation. Trust can also be an intention, that is, one party is willing to depend on the other party. For social interactions, trust intention and trust behavior are built upon four constructs: trusting belief, system trust, situational decision trust, and dispositional trust. Among them, the most important one is trusting belief, that is, one believes that the other person is willing and able to act in one's best interests. This belief is built upon a belief-formation process. In addition, system trust means that the proper impersonal structures are in place to ensure a successful future endeavor. Here, impersonal structures can be regulations that
provide structural assurance. Dispositional trust refers to the fact that people develop general expectations about the trustworthiness of other people over the course of their lives. Situational decision trust applies to the circumstances under which the benefits of trust outweigh the possible negative outcomes of trusting behavior.

The relationship among computing devices is much simpler than that among human beings, and the concept of trust in computer networks does not have all six perspectives. Trust behavior, trust intention, situational decision trust, and dispositional trust are not applicable to networking. Only trusting belief and system trust, which are built upon a belief-formation process, are relevant to the concept of trust in computer networks. In this chapter, these three modules are collectively referred to as trust management. As illustrated in Figure 15.2, the outcome of trust management is provided to decision-making functions, which make decisions on the basis of trust evaluation as well as other application-related conditions. Further, system trust can be interpreted as a special type of belief, whereby an entity believes that the network will operate as it is designed. Therefore, the most appropriate interpretation of trust in computer networks is belief: one entity believes that the other entity will act in a certain way, or believes that the network will operate in a certain way. This is our basic understanding of trust in computer networks.
15.2.2 Notation of trust

Trust is established between two parties for a specific action. In particular, one party trusts the other party to perform an action. In our work, the first party is referred to as the subject and the second party as the agent. We introduce the notation {subject : agent, action} to represent a trust relationship. The concepts of subject, agent, and action can have broader meanings. For example, an ad hoc mobile node trusts that the network has the capability to revoke the majority of malicious nodes. The base station trusts that the sensors around location (x, y) can successfully report explosion events. In general, we have the following.

• Subject – usually represents one entity; can be a group of entities.
• Agent – one entity, a group of entities, or even the network.
• Action – an action performed (or a property possessed) by the agent.

Figure 15.2 Trust constructs in computer networks: trusting beliefs, system trust, and belief-formation processes feed application-related decision-making, together with application-related conditions.
15.2.3 Uncertainty is a measure of trust

The trust concept in computer networks is belief. However, how should one quantitatively evaluate the level of trust? We argue that the uncertainty in belief is a measure of trust. Here are three special cases.

1. When the subject believes that the agent will perform the action for sure, the subject fully trusts the agent and there is no uncertainty.
2. When the subject believes that the agent will surely not perform the action, the subject fully distrusts the agent and there is no uncertainty either.
3. When the subject has no idea about the agent at all, there is the maximum amount of uncertainty and the subject has no trust in the agent.

Indeed, trust is built upon how certain one is about whether some actions will be carried out or not. Therefore trust metrics should describe the level of uncertainty in trust relationships.
15.2.4 Trust metrics

How should one measure uncertainty? Information theory states that entropy is a natural measure of uncertainty. We would like to define a trust metric that is based on entropy. This metric should give trust value 1 in the first special case, −1 in the second special case, and 0 in the third special case. Let T{subject : agent, action} denote the trust value of a trust relationship and P{subject : agent, action} denote the probability that the agent will perform the action in the subject's point of view. In this chapter, the entropy-based trust value is defined as

    T = 1 − H(p),   for 0.5 ≤ p ≤ 1,
    T = H(p) − 1,   for 0 ≤ p < 0.5,                (15.1)

where T = T{subject : agent, action}, p = P{subject : agent, action}, and H(p) = −p log2(p) − (1 − p) log2(1 − p) is the entropy function [79]. This definition considers both trust and distrust. In general, the trust value is positive when the agent is more likely to perform the action (p > 0.5), and is negative when the agent is more likely not to perform the action (p < 0.5).

This definition also tells us that the trust value is not a linear function of the probability. This can be seen from a simple example. In the first case, let the probability increase from 0.5 to 0.509. In the second case, let the probability increase from 0.99 to 0.999. The probability value increases by the same amount in both cases, but the trust value increases by 0.000 23 in the first case and 0.07 in the second case. This agrees with the intuition that the agent should gain more additional trust in the second case.

Trust is not an isolated concept. As pointed out in [275], many belief-formation processes may generate belief as well as the confidence of belief. Confidence is an important concept because it can differentiate between a trust relationship established through long-term experience and one arising from only a few interactions. Trust and confidence are closely related. In practice, the probability that the agent will perform
378
Trust modeling and evaluation
the action in the subject’s point of view, i.e., p = P{subject : agent, action} is often obtained through estimation. While the belief/trust is determined by the mean value of the estimated probability, the confidence is determined by the variance of the estimation.
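As a concrete illustration, the entropy-based metric of (15.1) can be computed directly from an estimated probability. The sketch below (in Python; the probe values are the ones used in the non-linearity example above) is one possible rendering:

```python
import math

def entropy(p: float) -> float:
    """Binary entropy H(p) = -p log2(p) - (1 - p) log2(1 - p)."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def trust_value(p: float) -> float:
    """Entropy-based trust value of (15.1): 1 - H(p) for p >= 0.5, H(p) - 1 otherwise."""
    return 1 - entropy(p) if p >= 0.5 else entropy(p) - 1

# The three special cases: full trust, full distrust, maximum uncertainty.
print(trust_value(1.0), trust_value(0.0), trust_value(0.5))   # 1.0 -1.0 0.0

# Non-linearity: the same probability increment yields a much larger
# trust gain near p = 1 than near p = 0.5.
print(trust_value(0.509) - trust_value(0.5))    # ~0.00023
print(trust_value(0.999) - trust_value(0.99))   # ~0.07
```

The piecewise definition makes the metric symmetric about p = 0.5, so distrust is measured with exactly the same entropy scale as trust.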
15.2.5 Fundamental axioms of trust

A trust relationship can be established in two ways: through direct observations and through recommendations. When direct observations are available, the subject can estimate the probability value and then calculate the trust value. When the subject does not have a direct interaction with the agent, it can also establish trust through trust propagation. There is a need to establish the fundamental axioms that govern the basic rules of trust propagation.

Assume that A and B have established {A : B, action_r}, and B and C have established {B : C, action}. Then, {A : C, action} can be established if the following two conditions are satisfied.

1. action_r is to make recommendations about other nodes performing action.
2. The trust value of {A : B, action_r} is positive.

The first condition is necessary because the entities that perform the action do not necessarily make correct recommendations. The second condition is necessary because untrustworthy entities' recommendations could be totally uncorrelated with the truth. The enemy's enemy is not necessarily a friend. Thus, the best strategy is not to take recommendations from untrustworthy parties. When the above two conditions are satisfied, we recognize three axioms that originate from the understanding of uncertainty.

Axiom 1: Concatenation propagation of trust does not increase trust. When the subject establishes a trust relationship with the agent through the recommendation of a third party, the trust value between the subject and the agent should not be more than either the trust value between the subject and the recommender or the trust value between the recommender and the agent. The mathematical representation of Axiom 1 is

    |TAC| ≤ min(|RAB|, |TBC|),                (15.2)

where TAC = T{A : C, action}, RAB = T{A : B, action_r}, and TBC = T{B : C, action}. As shown in Figure 15.3, the trust relationship can be represented by a directional graph, in which the weight of an edge is the trust value. The style of a line represents the type of the action: dashed lines represent making recommendations and solid lines represent performing the action. Axiom 1 is similar to the data-processing theorem in information theory [79]: entropy cannot be reduced via data processing.

Figure 15.3 Transit of trust along a chain: A trusts recommender B with RAB, and B trusts C with TBC.
Figure 15.4 Combining trust recommendations: (a) A establishes trust in C through a single path A–B–C; (b) A establishes trust in C through two analogous paths A–B–C and A–D–C.

Figure 15.5 Sharing entities on transit paths: (a) multiple recommendations pass through a shared (correlated) source; (b) the recommendations come from independent sources.
Axiom 2: Multipath propagation of trust does not reduce trust. If the subject receives the same recommendation for the agent from multiple sources, the trust value should be no less than that in the case when the subject receives fewer recommendations. In particular, as illustrated in Figure 15.4, A establishes trust with C through one concatenation path (Figure 15.4(a)), and A establishes trust with C through two analogous trust paths (Figure 15.4(b)). Let TAC = T{A : C, action} denote the trust value established in Figure 15.4(a) and T′AC denote the trust value established in Figure 15.4(b). The mathematical representation of Axiom 2 is

    T′AC ≥ TAC ≥ 0, for R1 > 0 and T2 ≥ 0;
    T′AC ≤ TAC ≤ 0, for R1 > 0 and T2 < 0,

where R1 = T{A : B, making recommendation} = T{A : D, making recommendation} and T2 = T{B : C, action} = T{D : C, action}. Axiom 2 states that the subject will be more certain about the agent, or at least maintain the same level of certainty, if the subject obtains an extra recommendation that agrees with the subject's current opinion. Notice that Axiom 2 holds only if multiple sources generate the same recommendations. Combining conflicting recommendations is a problem that can by nature generate different trust values under different trust models.

Axiom 3: Trust based on multiple recommendations from a single source should not be higher than that derived from independent sources. When the trust relationship is established jointly through concatenation and multipath trust propagation, it is possible to have multiple recommendations from a single source, as shown in Figure 15.5(a). Here, let T′AC denote the trust value established in Figure 15.5(a), and TAC = T{A : C, action} denote the trust value established in Figure 15.5(b). For the particular case shown in Figure 15.5, Axiom 3 says that

    TAC ≥ T′AC ≥ 0, if T′AC ≥ 0;
    TAC ≤ T′AC ≤ 0, if T′AC < 0,

where R1, R2, and R3 are all positive. Axiom 3 states that recommendations from independent sources can reduce uncertainty more effectively than can recommendations from correlated sources.
As a summary, the above three basic axioms address different aspects of trust relationships. Axiom 1 states the rule for concatenation trust propagation. Axiom 2 describes the rule for multipath trust propagation. Axiom 3 addresses correlation among recommendations.
15.2.6 Trust models

The methods for calculating trust via concatenation and multipath propagation are referred to as trust models. Trust models should satisfy all of the axioms. In this section, we introduce entropy-based and probability-based trust models.

15.2.6.1 The entropy-based model

The entropy-based model takes trust values as defined in (15.1) as its input. This model considers only the trust value, not the confidence. For concatenation trust propagation as shown in Figure 15.3, node B observes the behavior of node C and makes a recommendation to node A as TBC = T{B : C, action}. Node A trusts node B with T{A : B, making recommendation} = RAB. To satisfy Axiom 1, one way to calculate TABC = T{A : C, action} is

    TABC = RAB TBC.                (15.3)
Note that, if node B has no idea about node C (i.e., TBC = 0) or if node A has no idea about node B (i.e., RAB = 0), the trust between A and C is zero, i.e., TABC = 0.

For multipath trust propagation, let RAB = T{A : B, making recommendation}, TBC = T{B : C, action}, RAD = T{A : D, making recommendation}, and TDC = T{D : C, action}. Thus, A can establish trust with C through two paths: A–B–C and A–D–C. To combine the trust established through the different paths, we use maximal-ratio combining as

    T{A : C, action} = w1 (RAB TBC) + w2 (RAD TDC),                (15.4)

where

    w1 = RAB/(RAB + RAD) and w2 = RAD/(RAB + RAD).                (15.5)

In this model, if any path has the trust value 0, this path will not affect the final result. From (15.3) and (15.4), it is not difficult to prove that this model satisfies all three axioms.
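The concatenation rule (15.3) and the maximal-ratio combination (15.4)–(15.5) are straightforward to implement. The sketch below is one possible rendering; the guard on non-positive recommendation trust reflects the necessary conditions for propagation stated above, and the multipath function is written for an arbitrary number of paths, which is an extrapolation of the two-path formula:

```python
def concatenate(r_ab: float, t_bc: float) -> float:
    """Concatenation propagation (15.3): A's trust in C via recommender B.
    Since both inputs lie in [-1, 1], |result| <= min(|r_ab|, |t_bc|) (Axiom 1)."""
    if r_ab <= 0:
        return 0.0  # recommendations propagate only from positively trusted recommenders
    return r_ab * t_bc

def combine_paths(paths):
    """Multipath combination (15.4)-(15.5): maximal-ratio combining of
    (recommendation trust, action trust) pairs, weighted by recommendation trust."""
    paths = [(r, t) for r, t in paths if r > 0]
    total = sum(r for r, _ in paths)
    if total == 0:
        return 0.0
    return sum((r / total) * (r * t) for r, t in paths)

# Two recommenders with equal recommendation trust and the same opinion of C
# give the same value as one of them alone, consistent with Axiom 2's bound.
one_path = combine_paths([(0.8, 0.6)])
two_paths = combine_paths([(0.8, 0.6), (0.8, 0.6)])
assert two_paths == one_path
```

Note that because this model carries no confidence information, agreeing paths cannot raise the value above a single path; the probability-based model below does capture that effect through the variance.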
15.2.6.2 The probability-based model

In the probability-based model, we calculate concatenation and multipath trust propagation using the probability values of the trust relationships. The probability values can then easily be transformed back to trust values using (15.1). This model considers both the mean and the variance, i.e., both trust and confidence.
The concatenation propagation model

We first investigate the concatenation trust propagation in Figure 15.3. Define the following notation.

• Random variable P is the probability that C will perform the action. In A's opinion, the trust value T{A : C, action} is determined by E(P) and the confidence is determined by Var(P).
• Random variable X is binary. X = 1 means that B provides honest recommendations; otherwise, X = 0.
• Random variable Θ is the probability that X = 1, i.e., Pr(X = 1|Θ = θ) = θ and Pr(X = 0|Θ = θ) = 1 − θ. In A's opinion, P{A : B, making recommendation} = pAB = E(Θ), and Var(Θ) = σAB.
• B provides a recommendation about C as follows. The mean value of P{B : C, action} is pBC, and the variance of P{B : C, action} is σBC.

To obtain E(P) and Var(P), the first step is to derive the pdf of P. It is obvious that

    f(P = p) = ∫₀¹ f(P = p, Θ = θ) dθ,                (15.6)

    f(P = p, Θ = θ) = Σ_{x=0,1} f(P = p, X = x, Θ = θ),                (15.7)

    f(P = p, X = x, Θ = θ) = f(P = p|X = x, Θ = θ) f(X = x|Θ = θ) f(Θ = θ).                (15.8)

Since A's opinion about C depends solely on whether B makes honest recommendations and what B says, it is reasonable to assume that f(P = p|X = x, Θ = θ) = f(P = p|X = x). From (15.8) and (15.7), we can see that

    f(P = p, Θ = θ) = θ f(P = p|X = 1) f(Θ = θ) + (1 − θ) f(P = p|X = 0) f(Θ = θ).                (15.9)
From (15.6) and (15.9), we can derive that

    f(P = p) = E(Θ) f(P = p|X = 1) + (1 − E(Θ)) f(P = p|X = 0).                (15.10)

Using (15.10) and the fact that E(Θ) = pAB, we obtain

    E(P) = pAB · pC|X=1 + (1 − pAB) pC|X=0,                (15.11)

where pC|X=1 = E(P|X = 1) and pC|X=0 = E(P|X = 0). Although A does not know pC|X=1, it is reasonable for A to assume that pC|X=1 = pBC. Then, (15.11) becomes

    E(P) = pAB · pBC + (1 − pAB) pC|X=0.                (15.12)

From Axiom 1, it is easy to see that E(P) should be 0.5 when pAB is 0.5. By using pAB = 0.5 and E(P) = 0.5 in (15.12), we can show that pC|X=0 = 1 − pBC. Therefore, we calculate E(P) as
    E(P) = pAB pBC + (1 − pAB)(1 − pBC).                (15.13)
Using similar methods, Var(P) is expressed as

    Var(P) = ∫₀¹ p² f(P = p) dp − E(P)²
           = pAB σBC + (1 − pAB) σC|X=0 + pAB (1 − pAB)(pBC − pC|X=0)²,                (15.14)

where σC|X=0 = Var(P|X = 0) and pC|X=0 = 1 − pBC as in (15.13). The choice of σC|X=0 depends on the specific application scenario. For example, if we assume that P is uniformly distributed in [0, 1], we can choose σC|X=0 to be the maximum possible variance, i.e. 1/12. If we assume that the pdf of P is a beta function with mean m = pC|X=0, we can choose

    σC|X=0 = m(1 − m)²/(2 − m)   for m ≥ 0.5,
    σC|X=0 = m²(1 − m)/(1 + m)   for m < 0.5.                (15.15)

The expression in (15.15) is the maximum variance for a given mean m in beta distributions. As a summary, the probability-based concatenation model is expressed in (15.13) and (15.14).
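One way to sketch the concatenation model of (15.13)–(15.15), assuming the beta-prior choice of σC|X=0 from (15.15) (all numerical inputs below are hypothetical):

```python
def beta_max_variance(m: float) -> float:
    """Maximum variance of a beta distribution with mean m, as in (15.15)."""
    if m >= 0.5:
        return m * (1 - m) ** 2 / (2 - m)
    return m ** 2 * (1 - m) / (1 + m)

def concatenate_prob(p_ab: float, p_bc: float, var_bc: float):
    """Probability-based concatenation (15.13)-(15.14): A's mean/variance
    estimate of P{A : C, action}, given B's recommendation (p_bc, var_bc)
    and A's recommendation trust in B, p_ab = E(Theta)."""
    p_c0 = 1 - p_bc                      # pC|X=0 = 1 - pBC (a dishonest B implies the opposite)
    mean = p_ab * p_bc + (1 - p_ab) * p_c0               # (15.13)
    var = (p_ab * var_bc
           + (1 - p_ab) * beta_max_variance(p_c0)
           + p_ab * (1 - p_ab) * (p_bc - p_c0) ** 2)     # (15.14)
    return mean, var

# A fully uncertain recommender (p_ab = 0.5) yields E(P) = 0.5, as Axiom 1 requires;
# a fully trusted recommender passes B's opinion through unchanged.
m, v = concatenate_prob(0.5, 0.9, 0.01)
```

Transforming the resulting mean back through (15.1) then gives the trust value, while the variance quantifies the confidence.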
The multipath propagation model

Beta functions have been used in several schemes to address the problem of multipath trust propagation. In this section, we first briefly review the beta-function model and then generalize its usage.

Assume that A can establish trust with C through two paths: A–B–C and A–D–C. Let rec1 represent B's recommendation and how much A trusts B, while rec2 represents D's recommendation and how much A trusts D. First, when only rec1 is available, A uses the Bayesian model and obtains

    f(P = p|rec1) = Pr(rec1|P = p) · f0(P = p) / ∫ Pr(rec1|P = p) · f0(P = p) dp,                (15.16)

where f0(P = p) is the prior knowledge of P. When A does not have previous knowledge of P, we assume that f0(P = p) is a uniform distribution on [0, 1]. Thus,

    f(P = p|rec1) = Pr(rec1|P = p) / ∫ Pr(rec1|P = p) dp.                (15.17)

Next, A obtains more information about C through the second path as rec2. We use the Bayesian model again and replace the prior knowledge by f(P = p|rec1) as

    f(P = p|rec2, rec1) = Pr(rec2|P = p) · f(P = p|rec1) / ∫ Pr(rec2|P = p) · f(P = p|rec1) dp.                (15.18)
If we assume that Pr(rec1|P = p) and Pr(rec2|P = p) are beta functions, i.e.,

    Pr(rec1|P = p) = B(α1, β1),                (15.19)
    Pr(rec2|P = p) = B(α2, β2),                (15.20)

then it can be proved that (15.18) is still a beta function, B(α1 + α2 − 1, β1 + β2 − 1). The beta distribution is

    B(α, β) = [Γ(α + β)/(Γ(α)Γ(β))] p^(α−1) (1 − p)^(β−1).                (15.21)
The beta-function model is often used in scenarios in which the subject has collected binary opinions/observations about the agent [147] [16]. For example, entity A receives in total S positive feedback and F negative feedback about entity C. In another example, entity A observes that C has performed the action successfully S times in S + F trials. In these cases, the probability Pr(observation|P = p) can be approximated as B(S + 1, F + 1).

Next, we generalize the usage of the beta-function model to non-binary opinions/observations. It is known that the beta distribution B(α, β) has mean and variance

    m = α/(α + β) and v = αβ/[(α + β)²(α + β + 1)].                (15.22)

Thus, the parameters α and β are determined from the mean and variance as

    α = m[m(1 − m)/v − 1] and β = (1 − m)[m(1 − m)/v − 1].                (15.23)
In the case of multipath trust propagation, let A establish trust and confidence represented by the mean value m1 and variance v1 through the first path, and trust and confidence represented by the mean value m2 and variance v2 through the second path. Using (15.23), (m1, v1) is converted into (α1, β1) and (m2, v2) is converted into (α2, β2). Then, a pair of new parameters (α, β) is calculated as α = α1 + α2 − 1 and β = β1 + β2 − 1. After combining the two paths, the new mean value and variance are calculated from (α, β) using (15.22).
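This multipath combination can be sketched by converting (mean, variance) pairs to beta parameters and back according to (15.22)–(15.23); the sample opinions below are hypothetical:

```python
def mv_to_ab(m: float, v: float):
    """(15.23): beta parameters from mean m and variance v (requires v < m(1 - m))."""
    k = m * (1 - m) / v - 1
    return m * k, (1 - m) * k

def ab_to_mv(a: float, b: float):
    """(15.22): mean and variance of a beta distribution B(a, b)."""
    return a / (a + b), a * b / ((a + b) ** 2 * (a + b + 1))

def combine_two_paths(m1, v1, m2, v2):
    """Multipath propagation: Bayesian fusion of two beta opinions,
    with B(a1, b1) and B(a2, b2) combining into B(a1 + a2 - 1, b1 + b2 - 1)."""
    a1, b1 = mv_to_ab(m1, v1)
    a2, b2 = mv_to_ab(m2, v2)
    return ab_to_mv(a1 + a2 - 1, b1 + b2 - 1)

# Two agreeing opinions reinforce each other: the combined variance shrinks,
# i.e., the subject's confidence grows, in line with Axiom 2.
m, v = combine_two_paths(0.8, 0.01, 0.8, 0.01)
```

For instance, two opinions with mean 0.8 and variance 0.01 each map to B(12, 3); fusing them gives B(23, 5), whose variance is roughly half the original.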
15.3 Attacks and protection

As we will show in the simulation section, trust management can effectively improve network performance and detect malicious entities. Therefore, trust management itself is an attractive target for attackers. Besides some well-known attacks, such as bad-mouthing attacks, we will identify new attacks and develop defense methods in this section.
15.3.1 Bad-mouthing attacks

As long as recommendations are taken into consideration, malicious parties can provide dishonest recommendations to smear good parties and/or boost the trust values of malicious peers. This type of attack, referred to as a bad-mouthing attack, is the most straightforward attack and has been discussed with regard to many existing trust-management and reputation systems.
In our work, the defense against the bad-mouthing attack has three perspectives. First, the action trust and the recommendation trust records are maintained separately. Only entities that have provided good recommendations previously can earn high recommendation trust. Second, recommendation trust plays an important role in the trust-propagation process. The necessary conditions of trust propagation state that only recommendations from entities with positive trust values can propagate. In addition, the three fundamental axioms limit the recommendation power of entities with low recommendation trust. Third, besides the action trust, the recommendation trust is treated as an additional dimension in the malicious-entity-detection process. As a result, if a node has low recommendation trust, its recommendations will have minor influence on good nodes’ decision making, and it might be detected as malicious and expelled from the network. The consequences of a bad-mouthing attack and the effectiveness of the defense strategy will be demonstrated in Section 15.5.
15.3.2 On–off attack

In an on–off attack, malicious entities behave well and badly in alternation, hoping that they can remain undetected while causing damage. This attack exploits the dynamic properties of trust through time-domain inconsistent behavior. We first discuss the dynamic properties of trust and then demonstrate the nature of this attack.

Trust is a dynamic event. A good entity may be compromised and turned into a malicious one, while an incompetent entity may become competent due to environmental changes. In wireless networks, for example, a mobile node may experience a bad channel condition at a certain location and have a low trust value associated with forwarding packets. After it moves to a new location where the channel condition is good, some mechanism should be in place for it to recover its trust value. In order to track these dynamics, an observation made a long time ago should not carry the same weight as one made recently. The most commonly used technique that addresses this issue is to introduce a forgetting factor. That is, performing K good actions at time t1 is equivalent to performing Kβ^(t2−t1) good actions at time t2, where β (0 < β ≤ 1) is often referred to as the forgetting factor. In the existing schemes, using a fixed forgetting factor has been taken for granted. We discover, however, that a fixed forgetting factor can facilitate on–off attacks on trust management.

Let's demonstrate such an attack through a simple example. Assume that an attacker behaves according to the following four stages: the attacker (1) initially behaves well 100 times, (2) then behaves badly 100 times, (3) then stops doing anything for a while, and (4) then behaves well again. Figure 15.6 shows how the trust value of this attacker changes. The horizontal axis is the number of good behaviors minus the number of bad behaviors, while the vertical axis is the estimated probability value.
The probability value is estimated as (S + 1)/(S + F + 2), where S is the number of good behaviors and F is the number of bad behaviors. This calculation is based on the beta-function model introduced in Section 15.2.6.2. We observe the following.
Figure 15.6 Trust-value changes upon entities' inconsistent behavior with a fixed forgetting factor. The estimated probability value is plotted against the number of good behaviors minus the number of bad behaviors, for β = 1 and β = 0.0001, with the four stages of the attack marked.
1. When the system does not forget, i.e., β = 1, this attacker has a positive trust value in stage (2). That is, this attacker can have good trust values even after he has performed many bad actions. When one uses a large forgetting factor, the trust value might not represent the latest status of the entity. As a consequence, the malicious node could cause a large amount of damage in a stage similar to stage (2).
2. When one uses a small forgetting factor, the attacker's trust value drops rapidly after he starts behaving badly in stage (2). However, he can regain trust by simply waiting in stage (3), since the system will forget his bad behaviors quickly.

From the attacker's point of view, he can take advantage of the system one way or another, no matter what forgetting factor one chooses. To defend against on–off attacks, we consider a scheme that is inspired by a social phenomenon: while it takes long-term interaction and consistently good behavior to build up a good reputation, just a few bad actions can ruin it. This implies that humans remember bad behaviors for a longer time than they do good behaviors. We mimic this social phenomenon by introducing an adaptive forgetting scheme. Instead of using a fixed forgetting factor, β is a function of the current trust value. For example, we can choose

    β = 1 − p, where p = P{subject : agent, action},                (15.24)

or

    β = β1 for p ≥ 0.5 and β = β2 for p < 0.5,                (15.25)
Figure 15.7 Trust-value changes upon entities' inconsistent behavior with an adaptive forgetting factor (the four attack stages are marked).
where 0 < β1 < β2 ≤ 1. Figure 15.7 demonstrates the trust-value changes for these two adaptive forgetting schemes. The dashed line represents the case using (15.24), and the solid line represents the case using (15.25) with β1 = 0.01 and β2 = 0.99. Figure 15.7 clearly shows the advantages of the adaptive forgetting scheme: the trust value can keep up with an entity's current status after the entity has turned bad, and an entity can recover its trust value after some bad behavior, but this recovery requires many good actions.
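The fixed and adaptive schemes can be compared with a short simulation of the first three stages of the on–off attack. The sketch below uses the beta estimate (S + 1)/(S + F + 2) and applies the forgetting factor once per action/time unit; the specific parameter values are illustrative, not the chapter's actual simulation settings:

```python
def run(beta_of_p, stages):
    """Simulate an attacker's behavior stages; each stage is (length, kind)
    with kind 'good', 'bad', or 'idle'. S and F are discounted by the
    forgetting factor at every step; returns the final probability estimate."""
    S = F = 0.0
    p = 0.5
    for length, kind in stages:
        for _ in range(length):
            b = beta_of_p(p)
            S, F = b * S, b * F              # forget old observations
            if kind == 'good':
                S += 1
            elif kind == 'bad':
                F += 1
            p = (S + 1) / (S + F + 2)        # beta-model estimate of p
    return p

# Stages (1)-(3) of the on-off attack: behave well, behave badly, then wait.
first_three = [(100, 'good'), (100, 'bad'), (100, 'idle')]
p_fixed = run(lambda p: 0.0001, first_three)                      # small fixed factor
p_adapt = run(lambda p: 0.01 if p >= 0.5 else 0.99, first_three)  # scheme (15.25)
# After the waiting stage, the small fixed factor has wiped the bad history
# (p is back near 0.5), while the adaptive scheme still remembers it (p stays low).
```

Running this reproduces the qualitative contrast of Figures 15.6 and 15.7: under the adaptive scheme the attacker cannot whitewash his record simply by waiting.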
15.3.3
Conflicting-behavior attack While an attacker can behave inconsistently in the time domain, he can also behave inconsistently in the user domain. In particular, malicious entities can impair good nodes’ recommendation trust by performing differently from different peers. This attack is referred to as a conflicting-behavior attack. For example, the attackers can always behave well to one group of users and behave badly to another group of users. Thus, these two groups develop conflicting opinions about the malicious users. Users in the first group obtain recommendations from the other group, but those recommendations will not agree with the first group’s own observations. As a consequence, the users in one group will assign low recommendation trust to the users in the other group. Figure 15.8 demonstrates this attack through a simple example in an ad hoc network. The system is set up as follows. In each time interval, which is n time units long, each node randomly selects another node to transmit packets. Assume that node A selects node X. If node A has not previously interacted with node X or the trust
Figure 15.8 Recommendation trust when malicious users attack half of the good users.
value T{A : X, forward packet} is smaller than a threshold, node A will ask all other nodes for recommendations about X. Then, node A asks X to forward n packets. In this example, we assume that A can observe how many packets X has forwarded. Next, A updates its trust record. The trust-updating procedure will be described in detail in Section 15.4. In this example, there are in total 20 nodes. If a malicious node decides to attack node A, it drops the packets from A with a packet-dropping ratio randomly selected between 0 and 40%. Two attackers, users 2 and 3, launch a conflicting-behavior attack by dropping the packets of users 1, 2, . . . , 10 but not those of users 11, 12, . . . , 20. In Figure 15.8, the element on the ith row and jth column represents the recommendation trust of the jth user in the ith user's record. The lighter the shading, the higher the trust. We can see that nodes 1–10 will give low recommendation trust values to nodes 11–20, and vice versa.
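A toy computation illustrates how such conflicting observations translate into low cross-group recommendation trust. The group sizes follow the example above; the forwarding ratios, agreement tolerance, and function names are our own illustrative choices, not the simulation's parameters.

```python
ATTACKERS = {2, 3}
GROUP_A = set(range(1, 11))    # attacked by the malicious nodes
GROUP_B = set(range(11, 21))   # served honestly by the malicious nodes

def observed_forward_ratio(observer, target):
    """Packet-forwarding ratio that 'observer' directly sees from 'target'."""
    if target in ATTACKERS and observer in GROUP_A:
        return 0.7             # attackers drop some of group A's packets
    return 1.0

def recommendation(recommender, target):
    # honest nodes recommend exactly what they themselves observed
    return observed_forward_ratio(recommender, target)

def recommendation_trust(observer, recommender, targets, tol=0.2):
    """Fraction of recommendations that agree with the observer's own view."""
    agree = sum(abs(recommendation(recommender, t)
                    - observed_forward_ratio(observer, t)) <= tol
                for t in targets)
    return agree / len(targets)
```

When a group-A node judges recommendations about the attackers, recommenders from its own group agree with its observations, while group-B recommenders do not; each group therefore assigns the other group low recommendation trust, matching the block structure visible in Figure 15.8.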
15.3.4
Sybil attacks and newcomer attacks
If a malicious node can create several faked IDs, the trust-management system suffers from a sybil attack. The faked IDs can share, or even take, the blame that should be attributed to the malicious node. Here is an example of a sybil attack. In an ad hoc network, node A sends packets to node D through a path A–B–C–D. With the sybil attack, B creates a faked ID B′ and makes the route look like A–B–B′–C–D from A, C, and D's viewpoint. Node B can achieve this by manipulating route-discovery messages, communicating with A using the ID B and communicating with C using the ID B′. When packets are dropped by B, B could make B′ take the blame if B is ever suspected of dropping packets. Obviously, B can also create multiple faked IDs. If a malicious node can easily register as a new user, the trust management suffers from a newcomer attack. Here, malicious nodes can easily remove their bad history by registering as new users. Newcomer attacks can significantly reduce the effectiveness of trust management.
The defense against sybil attacks and newcomer attacks does not rely on the design of trust management, but rather involves the authentication schemes. Authentication is the first line of defense that makes registering a new ID or a faked ID difficult. In this chapter, we simply note these two attacks and will not discuss them in depth.
15.4
Trust-management systems in ad hoc networks
15.4.1
Design of trust-management systems
In the current literature, many works use heuristic trust metrics to address one or a few perspectives of trust management for specific applications. There are few works focusing on establishing generic trust models or providing a complete picture of trust management through a survey. However, the existing works do not adequately address two important perspectives of trust management in distributed computer networks. The first is the networking of specific elements, such as how to request and obtain recommendations; the second is attacks and protection mechanisms. In this chapter, we design a comprehensive framework of trust management for distributed networks, as illustrated in Figure 15.9. This framework contains five basic building blocks. A trust record is constructed through the process of trust establishment, which builds direct trust values from observations and indirect trust values from recommendations, and updated by the process of record maintenance, which assigns initial trust values and addresses dynamic properties of trust.

Figure 15.9 A trust-management system for distributed computer networks.

Trust-request management serves as the interface between applications that request trust values and trust management. It also handles the requests for trust recommendations. In addition, malicious-node detection is performed on the basis of the trust record, and its output also affects some entries in the trust record. This framework can be used in a variety of applications, such as ad hoc networks, peer-to-peer networks, and sensor networks. To demonstrate its usage, we present the implementation of such a framework in mobile ad hoc networks.
15.4.2
Applications in ad hoc networks
In ad hoc networks, securing routing protocols is one of the fundamental challenges. While many secure routing schemes focus on preventing attackers from entering the network through secure key distribution/authentication and secure neighbor discovery, trust management can guard routing even if malicious nodes have gained access to the network. In this section, we demonstrate the usage of trust management in ad hoc networks to secure routing protocols. For ad hoc routing, we investigate the trust values associated with two actions: forwarding packets and making recommendations. Broadly speaking, each node maintains a trust record associated with these two actions. When a node (the source) wants to establish a route to another node (the destination), the source first tries to find multiple routes to the destination. Then the source tries to find the packet-forwarding trustworthiness of the nodes on the routes, from its own trust record or through requesting recommendations. Finally, the source selects a trustworthy route along which to transmit data. After the transmission, the source node updates the trust records on the basis of its observation of route quality. The trust records are also used for malicious-node detection. All of the above is achieved in a distributed manner.
15.4.2.1
Obtaining trust recommendations
Requesting trust recommendations in ad hoc networks often occurs in circumstances in which communication channels between arbitrary entities are not available. In this section, we briefly introduce the procedures for requesting trust recommendations and processing trust-recommendation requests. We assume that node A wants to establish trust relationships with a set of nodes B = {B1, B2, . . .} about an action act, and A does not have a valid trust record with {Bi, ∀i}. Node A first checks its trust record and selects a set of nodes, denoted by Ẑ, that have recommendation trust values larger than a certain threshold value. Although A needs only recommendations from Ẑ in order to calculate the trust values of B, A may ask for recommendations from a larger set of nodes, denoted by Z, for two reasons. First, node A does not necessarily want to reveal information about whom it trusts, because malicious nodes may take advantage of this information. Second, if node A establishes trust with B through direct interaction later, node A can use the recommendations it has collected previously to update the recommendation trust of the nodes in Z. Thus, Z should contain not only the nodes in Ẑ but also the nodes with which A wants to update/establish recommendation trust. Next, node A sends a
trust-recommendation-request (TRR) message to its neighbors that are within node A's transmission range. The TRR message should contain the IDs of the nodes in set B and in set Z. In order to reduce overhead, the TRR message also contains the maximal length of trust transit chains, denoted by Max_transit, and a time-to-live (TTL). Node A waits for time TTL for replies. In addition, a transmit path is used to record the delivery history of the TRR message, such that the nodes which receive the TRR message can send their recommendations back to A. Upon receiving an unexpired TRR message, the nodes that are not in Z simply forward the TRR message to their neighbors; the nodes in Z either send trust values back to A or ask their trusted recommenders for further recommendations. In addition, the nodes in Z might not respond to the TRR message if they do not want to reveal their trust records to A when, for example, they believe that A is malicious. In particular, suppose that node X is in Z. When X receives an unexpired TRR message, if X has a trust relationship with some of {Bi}, X sends its recommendation back to A. If X does not have a trust relationship with some of {Bi}, X generates a new TRR message by replacing Z with the recommenders trusted by X and reducing the value of Max_transit by one. If Max_transit > 0, the revised TRR message is sent to X's neighbors. X also sends A the corresponding recommendation trust values needed for A to establish trust-propagation paths. If the original TRR message has not expired, X will also forward the original TRR message to its neighbors. By doing so, trust concatenations can be constructed. The major overhead of requesting trust recommendations comes from transmitting TRR messages in the network, which increases exponentially with Max_transit. Fortunately, Max_transit should be a small number owing to Axiom 1, which implies that only short trust transit chains are useful.
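The forwarding rules for a node in Z can be sketched as follows. This is a simplified, hypothetical rendering: the message fields, helper logic, and trust threshold are our own choices, and TTL expiry and actual radio delivery are omitted.

```python
from dataclasses import dataclass

@dataclass
class TRR:
    targets: frozenset       # set B: nodes whose trust A wants to establish
    recommenders: frozenset  # set Z: nodes asked to provide recommendations
    max_transit: int         # remaining trust-transit-chain budget
    path: tuple = ()         # delivery history, so replies can reach A

def process_trr(node, action_trust, recom_trust, trr, replies, trust_thresh=0.8):
    """Return the list of TRR messages that 'node' re-broadcasts to neighbors."""
    if node not in trr.recommenders:
        return [trr]                       # not in Z: forward unchanged
    out = []
    known = {b: action_trust[b] for b in trr.targets if b in action_trust}
    if known:
        replies.append((node, known))      # send recommendations back to A
    missing = trr.targets - frozenset(known)
    if missing and trr.max_transit > 1:
        # ask this node's own trusted recommenders, shrinking the budget
        trusted = frozenset(n for n, v in recom_trust.items() if v >= trust_thresh)
        out.append(TRR(frozenset(missing), trusted,
                       trr.max_transit - 1, trr.path + (node,)))
    out.append(trr)                        # original TRR keeps propagating
    return out
```

In this sketch, a node in Z that knows some of the targets replies with those trust values, spawns a reduced-budget TRR for the targets it does not know, and still forwards the original message, mirroring the behavior described above.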
15.4.2.2
Trust-record maintenance and updating
In this study, the trust relationship {A : C, forward packet} is established on the basis of whether C forwarded packets for A or not. Assume that C has forwarded k packets out of a total of N packets. Node A will calculate P{A : C, forward packet} = (k + 1)/(N + 2). The observations of the k and N values are made through a light-weight self-evaluation mechanism, which allows the source node to collect packet-forwarding statistics and to validate the statistics through a consistency check. More details of this mechanism are presented in [490]. In addition, before any interaction takes place, A sets the initial trust values using k = 0 and N = 0. Next, we present the procedure for updating trust records. Assume that node A would like to ask node C to transmit packets, while A does not have a trust relationship with node C.
Before data transmission
• Node A receives the recommendation from node B, and node B says that T{B : C, forward packet} = T_BC. Also, A has previously established T{A : B, make recommendation}. Then, A calculates T^r_AC = T{A : C, forward packet} on the basis of trust-propagation models.
After data transmission
• Node A observes that C forwards k packets out of a total of N packets.
• A calculates T{A : C, forward packet} based on observations as T^a_AC, and updates its trust record.
• If |T^a_AC − T^r_AC| ≤ threshold, node A believes that B has made one good recommendation. Otherwise, node A believes that B has made one bad recommendation. Then, A can update the recommendation trust of B accordingly.
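Combining the direct-trust formula (k + 1)/(N + 2) with the recommendation check above gives a compact update routine. This is a sketch; the function names and the agreement threshold are illustrative assumptions.

```python
def direct_trust(k, n):
    """P{A : C, forward packet} from k forwarded out of n packets: (k+1)/(n+2).

    With no interaction yet (k = n = 0) this gives the neutral prior 0.5,
    which is how A initializes trust before any interaction takes place.
    """
    return (k + 1) / (n + 2)

def rate_recommender(t_recommended, k, n, threshold=0.1):
    """After transmission, judge B's recommendation about C.

    t_recommended is T^r_AC derived from B's report; the observed value is
    T^a_AC = (k+1)/(n+2). The recommendation counts as 'good' if the two
    agree within the threshold.
    """
    t_observed = direct_trust(k, n)
    return abs(t_observed - t_recommended) <= threshold
```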
15.4.2.3
Some implementation details
• Route discovery: A performs on-demand routing to find several possible routes to destination D.
• Route selection: among all possible routes, node A would like to choose the route that has the best possible quality. Let {n_i, ∀i} represent the nodes on a particular route R, and let p_i represent P{A : n_i, forward packet}, where A is the source. The quality of route R is calculated as ∏_i p_i.
• Malicious-node detection: assume that the malicious-node-detection algorithm considers M trust relationships {A : B, act_i}, for i = 1, 2, . . . , M. The mean value and variance associated with {A : B, act_i} are denoted by m_i and v_i, respectively. First, we convert (m_i, v_i) into (α_i, β_i) using (15.23). Then, we calculate p^G_AB = P{A : B, be a good node} as p^G_AB = α/(α + β), where α = Σ_i w_i(α_i − 1) + 1 and β = Σ_i w_i(β_i − 1) + 1. Here, {w_i} is a set of weights with w_i ≤ 1. Finally, if p^G_AB is smaller than a certain threshold, A detects B as malicious.
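The route-quality product and the weighted fusion for p^G_AB translate directly into code. This is a sketch; the example weights and the detection threshold are illustrative, not values from the text.

```python
from math import prod

def route_quality(p_forward):
    """Quality of route R: the product of P{A : n_i, forward packet} over its nodes."""
    return prod(p_forward)

def is_malicious(ab_pairs, weights, thresh=0.4):
    """Fuse per-action (alpha_i, beta_i) pairs into p^G_AB = alpha/(alpha+beta).

    alpha = sum_i w_i(alpha_i - 1) + 1 and beta = sum_i w_i(beta_i - 1) + 1,
    as in the text; B is flagged when p^G_AB falls below the threshold.
    """
    alpha = sum(w * (a - 1) for w, (a, _) in zip(weights, ab_pairs)) + 1
    beta = sum(w * (b - 1) for w, (_, b) in zip(weights, ab_pairs)) + 1
    return alpha / (alpha + beta) < thresh
```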
15.5
Simulations An event-driven simulator was built to simulate mobile ad hoc networks. The physical layer uses a fixed-transmission-range model, in which two nodes can directly communicate with each other only if they are within a certain transmission range. The MAC-layer protocol simulates the IEEE 802.11 distributed coordination function (DCF) [197]. DSR is used as the underlying routing protocol. We use a rectangular space of size 1000 m by 1000 m. The network size is about 50 nodes, and the maximum transmission range is 300 m. There are 50 traffic pairs randomly generated for each simulation. For each traffic pair, the packet arrival time is modeled as a Poisson process, and the average packet inter-arrival time is 1 s. The size of each data packet after encryption is 512 bytes. Among all the ROUTE REQUESTs with the same ID received by node A, node A will only broadcast the first request if it is not the destination, and will send back at most five ROUTE REPLYs if it is the destination. The maximum number of hops on a route is restricted to be 10. Each node moves randomly according to the random-waypoint model [209] with a slight modification. A node starts at a random position, waits for a duration called the pause time that is modeled as a random variable with an exponential distribution, then randomly chooses a new location and moves toward the new location with a velocity uniformly chosen between 0 and
vmax = 10 m/s. When it arrives at the new location, it waits for another random pause time and repeats the process. The average pause time is 300 s. In this section, we first show the advantages of trust management in improving network throughput and malicious-node detection, and then demonstrate the effects of several attack/anti-attack methods presented in Section 15.3.
15.5.1
Effects of trust management
In Figure 15.10, three scenarios are compared: (1) a baseline system that does not utilize trust management, with no malicious attackers; (2) a baseline system with five attackers who randomly drop about 90% of the packets passing through them; and (3) the system with trust management and five attackers. Here, we use the probability-based trust model. Figure 15.10 shows for these three scenarios the percentages of the packets that are successfully transmitted, representing the network throughput, as a function of time. Three observations can be made. First, the network throughput can be significantly degraded by malicious attackers. Second, after using trust management, the network performance can be recovered because trust management enables the route-selection process to avoid less trustworthy nodes. Third, when the simulation time increases, trust management can bring the performance close to that in the scenario where no attackers are present, since more and more accurate trust records are built over time. We introduce a metric MDP to describe the malicious-node-detection performance. Let D_i denote the number of good nodes that have detected that node n_i is malicious, M denote the set of malicious nodes, and G denote the set of good nodes. Then, MDP is defined as Σ_{i: n_i ∈ M} D_i / |M|, which represents the average detection rate. Similarly, we can define another metric as Σ_{i: n_i ∈ G} D_i / |G|, which describes the false-alarm rate.
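Both metrics are simple averages over the relevant node set; a sketch, with a hypothetical detectors_of mapping from each node to the set of good nodes that have flagged it:

```python
def average_detections(detectors_of, node_set):
    """Mean of D_i over node_set, where detectors_of[n] is the set of good
    nodes that have flagged node n as malicious (absent nodes count as 0)."""
    return sum(len(detectors_of.get(n, ())) for n in node_set) / len(node_set)

# MDP         = average_detections(detectors_of, malicious_nodes)
# false alarm = average_detections(detectors_of, good_nodes)
```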
Figure 15.10 Network throughput with/without trust management.
Figure 15.11 The effectiveness of malicious-node detection with/without recommendations.
For all simulations in this section, we choose the detection threshold such that the falsealarm rate is approximately 0. Thus, we only show MDP as the performance index. Figure 15.11 shows the MDP for three cases. In case 1, only direct packet-forwarding trust information is used to detect malicious nodes. In case 2, both direct and indirect packet-forwarding trust information is used to detect malicious nodes. In case 3, direct and indirect packet-forwarding trust and direct recommendation trust are used. Recall that direct trust records are built upon the observations, whereas indirect trust records are built upon the recommendations. As we expected, the detection rate is higher when indirect information and recommendation trust information are used. This means that the recommendation mechanism improves the performance of malicious-node detection.
15.5.2
Bad-mouthing attack The influence of a bad-mouthing attack is demonstrated in Figure 15.12, which shows the network throughput when attackers launch only a gray-hole attack (i.e., dropping packets) and when attackers launch both gray-hole and bad-mouthing attacks. Here, both direct and indirect packet-forwarding trust are used in the route-selection process. We can see that the bad-mouthing attack leads to a drop in performance since indirect trust information can be inaccurate. However, this performance drop is small because our trust-management system already has defense mechanisms embedded, as discussed in Section 15.3.1. To defeat a bad-mouthing attack, the best strategy is to use recommendation trust in the detection process. As illustrated in Figure 15.13, when using the direct recommendation trust in the detection process, the MDP is significantly improved, compared with the case using only packet-forwarding trust.
Figure 15.12 The effects of bad-mouthing attack when route selection uses both direct and indirect packet-forwarding trust information (50 good nodes and 5 bad nodes).

Figure 15.13 A comparison of malicious-node-detection strategies under bad-mouthing attack (50 good nodes and 5 bad nodes).
15.5.3
On–off attack
For the on–off attack, we would like to compare four scenarios: (1) no on–off attack, i.e., attacking all the time; (2) with the on–off attack, using forgetting factor 1 to defend; (3) with the on–off attack, using forgetting factor 0.001 to defend; and (4) with the on–off attack, using the adaptive forgetting scheme to defend. In the last
Figure 15.14 The effect of on–off attack and different forgetting schemes.
scenario, we use (15.25) in the adaptive forgetting scheme. In those experiments, when attackers are "on," they randomly choose a packet-dropping ratio between 40% and 80%. First, Figure 15.14 shows the consequences of the on–off attack. With the on–off attack, the MDP values are close to 0 because attackers change their behavior when their trust values drop close to the detection threshold. Meanwhile, the network throughput is higher when the attackers launch on–off attacks than when they attack all the time. Next, we show the tradeoff between the network throughput and the trust values of the attackers in Figure 15.15. The vertical axis is the average packet-forwarding trust of malicious nodes, and the horizontal axis is the network throughput. On comparing the three forgetting schemes (i.e., scenarios (2)–(4)), we can see that, given the same network throughput, the adaptive forgetting scheme is the best because it results in the lowest trust values for attackers.
15.5.4
Conflicting-behavior attack As discussed in Section 15.3.3, conflicting-behavior attacks can cause deterioration of the recommendation trust of good nodes. How about the recommendation trust of bad nodes?
Figure 15.15 A comparison between adaptive forgetting and fixed forgetting.
The attackers have four strategies that they can implement to provide recommendations to others. Assume that the attackers will drop packets for a subset of users, denoted by A, and will not drop packets for the rest of the users, denoted by B. The attackers can provide
(R1) no recommendations to subgroup A, and honest recommendations to subgroup B;
(R2) no recommendations to subgroup A, and no recommendations to subgroup B;
(R3) bad recommendations to subgroup A, and no recommendations to subgroup B;
(R4) bad recommendations to subgroup A, and honest recommendations to subgroup B.
What is the best strategy for the attackers to make the conflicting-behavior attack more effective? We have performed extensive simulations for the above four recommendation scenarios. Owing to limitations of space, the simulation curves are not included in this chapter, and we merely summarize the observations. First of all, in (R1) and (R4), the attackers can in fact help the network performance by providing good recommendations, especially when the attack percentage is low and attacks occur at the beginning of the simulation (when most good nodes have not established reliable recommendation trust with others). In (R1), malicious nodes usually have higher recommendation trust than good nodes. Thus, it is harmful to use the recommendation trust in the malicious-node-detection algorithm. A similar phenomenon occurs in (R4) when the attack percentage is low.
Figure 15.16 A conflicting-behavior attack reduces the advantage of using recommendation trust in the detection process.
In (R3), malicious nodes always have much lower recommendation trust than good nodes. Thus, the conflicting-behavior attack can be easily defeated as long as the threshold in the malicious-node-detection algorithm is properly chosen. A similar phenomenon occurs in (R4) when the attack percentage is high. In summary, if the attackers do not want to help the network by providing honest recommendations and do not want to be detected easily, the best strategy for providing recommendations is (R2). Figure 15.16 shows the MDP values versus the percentage of users who are attacked by the malicious nodes when (R2) is adopted. The data are for simulation time 1500. In Figure 15.16, the detection scheme that uses direct and indirect packet-forwarding trust performs better than the one using packet-forwarding trust and recommendation trust. In addition, the difference between the two detection schemes in terms of MDP is not very large. In practice, when a conflicting-behavior attack is suspected, one should not use recommendation trust in the detection algorithm. When it is not clear what types of attack are being launched, using recommendation trust in malicious-node detection is still a good idea because of its obvious advantages in defeating other types of attacks.
15.6
Summary and bibliographical notes This chapter presents a framework for trust evaluation in distributed networks. We address the concept of trust in computer networks, develop trust metrics with clear physical meanings, develop fundamental axioms of the mathematical properties of trust, and build trust models that govern trust propagation through third parties. Further,
we present attack methods that can reduce the effectiveness of trust evaluation and discuss the protection schemes. In particular, we focus on bad-mouthing attacks, on–off attacks, and conflicting-behavior attacks. Then, a systematic trust-management system is designed, with specific consideration of distributed implementation. In this work, the usage of the framework is demonstrated in ad hoc networks to assist route selection and malicious-node detection. The simulation results show that the trust-evaluation system can improve network throughput as well as help malicious-node detection. Simulations are also performed to investigate various malicious attacks. The main observations are summarized as follows. For a bad-mouthing attack, the most effective method of malicious-node detection is to use both packet-forwarding trust and recommendation trust. To defeat an on–off attack, the adaptive forgetting scheme developed in this chapter is better than using fixed forgetting factors. From the attackers' point of view, they would not provide recommendations, in order to make the conflicting-behavior attack effective. When a conflicting-behavior attack is launched, using recommendation trust in malicious-node detection can reduce the detection rate. Interested readers can refer to [408] [386] [385] for more information. Some schemes employ linguistic descriptions of trust relationships, such as PGP [497], PolicyMaker [23], the distributed trust model [11], trust policy language [179], and the SPKI/SDSI public-key infrastructure [54]. In some other schemes, continuous or discrete numerical values are assigned to measure the level of trustworthiness. For example, in [271], an entity's opinion about the trustworthiness of a certificate is described by a continuous value in [0, 1]. In [414], a 2-tuple in [0, 1]^2 describes the trust opinion.
In [210], the metric is a triplet in [0, 1]^3, where the elements in the triplet represent belief, disbelief, and uncertainty, respectively. In [11], discrete integer numbers are used. In [210], an algebra, called subjective logics, is used to assess trust values based on the triplet representation of trust. In [268], fuzzy logic provides rules for reasoning with linguistic trust metrics. In the context of the "Web of Trust," many trust models are built upon a graph in which the resources/entities are nodes and trust relationships are edges, such as in [369] [271]. Then, simple mathematical criteria, such as the minimum, maximum, and weighted average, are used to calculate unknown trust values through concatenation and multipath trust propagation. In [203] [147] [16], a Bayesian model is used to take binary ratings as input and compute reputation scores by statistically updating beta probability density functions.
16
Defense against routing disruptions
In this chapter, we introduce a set of mechanisms to protect mobile ad hoc networks against routing-disruption attacks launched by inside attackers. First, each node launches a route-traffic observer to monitor the behavior of each valid route in its route cache, and to collect the packet-forwarding statistics submitted by the nodes on this route. Second, since malicious nodes may submit false reports, each node also keeps cheating records for other nodes; if a node is detected as dishonest, it will be excluded from future routes, and the other nodes will stop forwarding packets for it. Third, each node will try to build friendship with other nodes to speed up malicious-node detection. Fourth, route diversity will be explored by each node in order to discover multiple routes to the destination, which can increase the chance of defeating malicious nodes that aim to prevent good routes from being discovered. In addition, adaptive route rediscovery will be applied to determine when new routes should be discovered. The resulting scheme can handle various attacks and introduces little overhead into the existing protocols. Both analysis and simulation studies have confirmed the effectiveness of the defense mechanisms.
16.1
Introduction and background A mobile ad hoc network is a group of mobile nodes not requiring centralized administration or a fixed network infrastructure, in which nodes can communicate with other nodes beyond their direct transmission ranges through cooperatively forwarding packets for each other. One underlying assumption is that they communicate through wireless connections. Since ad hoc networks can be easily and inexpensively set up as needed, they have a wide range of applications, such as military exercises, disaster rescue, and mine-site operations. Before mobile ad hoc networks can be successfully deployed, security concerns must be addressed. However, due to mobility and their ad hoc nature, protecting mobile ad hoc networks is particularly hard: the wireless links are usually fragile, with a high ratio of broken links; nodes lacking sufficient physical protection can be easily captured, compromised, and hijacked; the sporadic nature of connectivity and the dynamically changing topology may cause frequent route updates; the lack of centralized monitoring or management points causes further deterioration of the situation. Attackers can easily launch a variety of attacks ranging from passive eavesdropping to active interfering.
During the last decade, extensive studies have been conducted on routing in mobile ad hoc networks, and have resulted in several mature routing protocols [61] [329] [209] [325]. However, in order to work properly, these protocols need trusted working environments, which are not always available. In many situations, the environments may be adversarial. For example, some nodes may be selfish or malicious, or may have been compromised by attackers. In the literature, many schemes have been considered to secure ad hoc network routing protocols. However, most schemes focus on preventing attackers from entering the network through secure key distribution/authentication and secure neighbor discovery, such as those in [181] [335] [379] [491] [182] [183] [172]. These schemes are not effective in situations in which malicious nodes have entered the network, or some nodes in the network have been compromised. In this chapter, we consider the scenario that all nodes in the network belong to the same authority and pursue common goals, and present a set of integrated mechanisms to defend against routing-disruption attacks launched by inside attackers. Under this scenario, we can categorize the nodes into two classes: good and malicious. Good nodes will try their best to forward packets for others, that is, they are fully cooperative, whereas malicious nodes may manipulate routing messages, (selectively) drop data packets, and frame other good nodes, with the objective of degrading the network performance and consuming valuable network resources. We use “HADOF” (the acronym of Honesty, Adaptivity, Diversity, Observer, and Friendship) to refer to the set of mechanisms to defend against routing-disruption attacks, which in brief works in the following way. Each node launches a route-traffic observer to monitor the behavior of each valid route in its route cache, and to collect the packet-forwarding statistics submitted by the nodes on those routes. 
Since the mechanism for submission of reports does not rely on monitoring neighbors’ forwarding activities, it is much more energy efficient than the watchdog mechanism. Since malicious nodes may submit false reports, each node also keeps a cheating-record database that indicates whether some nodes are dishonest or have been suspected to be dishonest. If a node is detected as cheating, then this node will be excluded from future routes. Furthermore, other nodes will stop forwarding packets originated from this cheating node as punishment. In many situations, if malicious nodes are smart, it is hard to find concrete evidence to prove that they are cheating. To address this issue and speed up malicious-node detection, each node can also build friendship with other nodes that it trusts. The next two mechanisms are to explore the route diversity and the dynamic nature of mobile ad hoc networks. Since there may exist more than one route from a source to a destination, the source can try to find multiple routes to the destination, and adaptively determine which route should be used. By exploring route diversity, we expect that the frequency of route discovery can be reduced, and the case that malicious nodes try to prevent good routes from being discovered can be better handled. Owing to node mobility and dynamically changing traffic pattern, a route that was good before need not necessarily be good currently. Instead of waiting for all the routes in the route cache to become invalid, adaptive route rediscovery tries to trade the route-discovery overhead
off against route quality by dynamically determining when new route discoveries should be initiated.

The rest of this chapter is organized as follows. In Section 16.2 we outline our assumptions. Section 16.3 describes HADOF in detail. Section 16.4 analyzes the security of HADOF. The simulation methodology and performance metrics are described in Section 16.5. Section 16.6 presents the simulation results and performance evaluation.
16.2 Assumptions and the system model
16.2.1 Physical- and MAC-layer assumptions

We assume that nodes can move freely inside a certain area, and communicate with each other through wireless connections. We assume that the links are bi-directional, but not necessarily symmetric. That is, if node A is capable of transmitting data to node B directly, then node B is also capable of transmitting data to A directly, though the two directions may have different bandwidths. This assumption holds in most wireless communication systems. In this chapter, neighbor refers to the fact that two nodes are within each other's transmission range, and can directly communicate with each other.

We assume that the MAC-layer protocol supports an acknowledgment (ACK) mechanism. That is, if node A has sent a packet to node B, and B has successfully received it, then node B needs to notify A of its reception immediately.
16.2.2 Dynamic source routing

We adopt DSR [209] as the underlying routing protocol, which is an on-demand source-routing protocol for mobile ad hoc networks. On-demand routing means that routes are discovered at the time when a source wishes to send a packet to a destination and no existing routes are known by the source. Source routing means that, when sending a packet, the source lists in the packet header the complete sequence of nodes through which the packet is to traverse. There are two basic operations in DSR: route discovery and route maintenance.

In DSR, when a source S wishes to send packets to a destination D but does not know any routes to D, S will initiate route discovery by broadcasting a ROUTE REQUEST packet, specifying the destination D and a unique ID. When a node receives a ROUTE REQUEST not targeting it, it first checks whether this request has been seen before. If yes, it will discard the packet; otherwise, it will append its own address to the REQUEST and rebroadcast it. When the REQUEST arrives at D, D sends a ROUTE REPLY packet back to S, including the list of accumulated addresses (nodes). A source may receive multiple ROUTE REPLY messages from the destination, and can cache these routes in its route cache.

Route maintenance handles link breakages. If a node detects that the link to the next hop is broken when it tries to send a packet, it will send a ROUTE ERROR packet back to the source to notify it of this link breakage. The source then removes the route having
Defense against routing disruptions
this broken link from its route cache. For sending subsequent packets to the destination, the source will choose another route in its route cache, or will initiate route discovery anew when no route exists.
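The REQUEST-handling logic described above can be sketched as follows. This is a simplified illustrative model, not a DSR implementation: the `Node` class, its `seen` set, and the return conventions are our own, and real DSR also handles reply routing, caching, and ROUTE ERROR processing.

```python
# Minimal sketch of DSR ROUTE REQUEST handling (illustrative only).
class Node:
    def __init__(self, addr):
        self.addr = addr
        self.seen = set()   # (source, request_id) pairs already processed

    def handle_request(self, source, request_id, target, path):
        """Return ('drop', None) for a duplicate REQUEST,
        ('reply', route) if this node is the target, or
        ('forward', new_path) if the REQUEST should be rebroadcast."""
        key = (source, request_id)
        if key in self.seen:
            return ('drop', None)              # duplicate: discard
        self.seen.add(key)
        if self.addr == target:
            # Destination: ROUTE REPLY carries the accumulated route
            return ('reply', [source] + path + [target])
        # Intermediate node: append own address and rebroadcast
        return ('forward', path + [self.addr])
```

For example, a REQUEST from S that traverses A and reaches D yields the route `['S', 'A', 'D']`, and a second copy of the same REQUEST arriving at A is dropped.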
16.2.3 Attacks and node-behavior assumptions

Since we consider the scenario in which all nodes belong to the same authority and pursue common goals, without loss of generality we assume that nodes are either good or malicious. Malicious nodes can launch a variety of attacks in almost all layers of mobile ad hoc networks. For example, an attacker can use a jammer to interfere with transmissions in the physical layer. It can also attack the MAC layer by exploiting the vulnerabilities of existing protocols. Defense against attacks launched in the physical and MAC layers is outside the scope of this chapter; we focus on security issues in the network layer.

Two types of attacks have been widely used against the network layer in ad hoc networks: resource consumption and routing disruption. Resource-consumption attacks are those in which the attackers inject extra packets into the network in an attempt to consume valuable network resources. Routing-disruption attacks, which are the focus of this chapter, are those in which attackers attempt to cause legitimate data packets to be routed in dysfunctional ways, and consequently cause packets to be dropped or extra network resources to be consumed. Some examples of routing-disruption attacks are black-hole, gray-hole, wormhole, rushing, and frame-up attacks.

The attackers can create a wormhole through collusion in the network to short-circuit the normal flow of routing packets, or can apply a rushing attack to disseminate ROUTE REQUESTs quickly through the network. By creating a wormhole or applying rushing attacks, the attackers can prevent good routes from being discovered, and increase their chance of being on discovered routes. Once an attacker is on a certain route, it can create a black hole by dropping all the packets passing through it, or can create a gray hole by selectively dropping some of the packets passing through it. If the protocol has a mechanism to track malicious behavior, an attacker can also frame good nodes.
In addition, an attacker can modify the packets passing through it, which has an effect similar to that of dropping packets, but is slightly more severe because more network resources are wasted when the downstream nodes on the route continue forwarding the corrupted packet.
16.2.4 Security and key-setup assumptions

We assume that each node has a public/private key pair, and that there is a tight coupling between a node's public key and its address, such as deriving the IP address of the node from its public key using the methods described in [323] [276]. We also assume that a node can know or authenticate other nodes' public keys, but no node will disclose its private key to others unless it has been compromised. We do not assume that nodes trust each other, since some nodes may be malicious or may have been compromised. However, if some trust relationship exists, we will take advantage of it.
We assume that all the nodes in the network are legitimate, that is, they have been authorized to enter the network and have certified public keys. Attackers without certified public keys can be excluded from routes through the enforcement of key authentication. We assume that, if two nodes set up communication between them, they must have built a trust relationship, and they trust the information reported by each other. This trust can be built outside of the context of the network (e.g., between friends), or through certain authentication mechanisms after the network has been set up.

To keep the transmitted content confidential and intact, each source encrypts and signs every packet it sends. Since the source and the destination trust each other, they can create a temporary shared secret key to encrypt the communication and use an efficient hash chain to authenticate it. For each intermediate node on the route, authentication is activated only when the destination has detected abnormal corruption of data packets, which indicates that some malicious nodes are present on the route.
16.3 Security mechanisms

Before describing the details of HADOF, we first introduce some notation in Table 16.1. In this chapter, we use S to denote the source and D to denote the destination. Also, traffic pair refers to a pair of nodes (S, D) communicating with each other directly or indirectly. On the basis of our assumptions, S and D trust each other.
16.3.1 Route-traffic observers

Each node launches a route-traffic observer (RTO) to periodically collect the traffic statistics of each valid route in its route cache. A valid route is a route for which the node has not received a link-breakage report. At the end of each predetermined interval, the RTO examines each traffic pair (S, D) and each route Ri to D in S's route cache that has been used in this interval. In particular, the RTO collects RNcur(A, S, Ri) and FNcur(A, S, Ri) reported by each node A on this route. This can be done by letting D periodically send back an agent packet to collect such information, or by letting each node periodically report its own statistics to S. For each node A known by S, S's RTO also keeps a record of RNtot(A, S) and FNtot(A, S). To reduce the overhead, the RTO of S will request reports from the intermediate nodes of a route only when S realizes, on the basis of the reports submitted by D, that some packets have been dropped on this route in this interval.

After the RTO has finished collecting packet-forwarding statistics, it recalculates the expected quality of those routes that have been used in this interval. In general, route quality is affected by many factors, such as the forwarding history of each node on the route, the number of hops, the current traffic load, and the traffic distribution. Before defining the expected route-quality metric, we first define the expected packet-delivery ratio of A for S, P(A, S), as follows:
Table 16.1. Notations for each traffic pair

S: the source
D: the destination
Ri: the ith available route from S to D in S's route cache
Li: the number of intermediate nodes on the route Ri
FNcur(A, S, Ri): the number of packets originated from S and forwarded by A via route Ri in this interval
RNcur(A, S, Ri): the number of packets originated from S and received by A via route Ri in this interval
FNtot(A, S): the total number of packets originated from S and forwarded by A
RNtot(A, S): the total number of packets originated from S and received by A
Pcur(A, S, Ri): FNcur(A, S, Ri)/RNcur(A, S, Ri), the packet-delivery ratio of A for S via route Ri in this interval
Pavg(A, S): FNtot(A, S)/RNtot(A, S), the overall packet-delivery ratio of A for S
H(A, S): A's honesty score from S's point of view
P(A, S) = (1 − β)Pavg(A, S) + βPcur(A, S, Ri).   (16.1)

That is, P(A, S) is a weighted average of Pcur(A, S, Ri) and Pavg(A, S), and β is used to adjust the weighting. The intuition behind this is that, when predicting a node's future performance, we consider not only the node's current performance, but also its past history. It is easy to see that P(A, S) lies between 0 and 1. In HADOF, the expected route quality Q(Ri) for route Ri is calculated as follows:

Q(Ri) = ∏_{A∈Ri} P(A, S) · H(A, S) − λ · Li,   (16.2)

where H(A, S) is A's honesty score in S's view, indicating the degree to which A is suspected. H(A, S) ranges from 0 to 1, with 1 indicating an honest node and 0 a malicious one. The criteria for calculating H(A, S) are presented in Section 16.3.2. In (16.2), a small positive value λ is introduced to account for the effect of the number of hops. As a result, if two routes have the same value for the product on the right-hand side of (16.2), the route with fewer hops is favored. The intuition behind this is that we expect a route with fewer hops to have less influence on the network. In HADOF, the values of P(S, S), P(D, S), and H(S, S) will always be 1, since a source trusts itself and the corresponding destination.
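Equations (16.1) and (16.2) can be sketched in code as follows. The function names and data layout are illustrative; the default values of β and λ are the ones used in the simulations later (Table 16.2).

```python
# Sketch of the route-quality computation of (16.1)-(16.2); illustrative only.
def expected_delivery_ratio(p_avg, p_cur, beta=0.6):
    """P(A, S) = (1 - beta) * Pavg(A, S) + beta * Pcur(A, S, Ri)."""
    return (1 - beta) * p_avg + beta * p_cur

def route_quality(nodes, lam=0.02):
    """Q(Ri) = prod over intermediate nodes A of P(A, S) * H(A, S) - lam * Li.
    `nodes` is a list of (P, H) pairs, one per intermediate node."""
    q = 1.0
    for p, h in nodes:
        q *= p * h          # product of delivery ratio times honesty score
    return q - lam * len(nodes)   # penalize longer routes
```

For instance, a two-hop route whose intermediate nodes have P·H values 1.0 and 0.8 has quality 0.8 − 0.02·2 = 0.76.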
16.3.2 Cheating records and honesty scores

When S's RTO collects packet-forwarding statistics, malicious nodes may submit false reports. For example, a malicious node may report a smaller RN value and a larger FN value to cheat the source and frame its neighbors. To address this, each source keeps a cheating-record (CR) database to track whether some nodes have ever submitted, or been suspected of submitting, false reports to it. S will mark a node as malicious if S has enough evidence to believe that the node has submitted false reports.

Initially, S assumes that all nodes are honest, and sets the honesty score H(A, S) for each node A to 1. After each report collection, which is performed periodically, S tries to detect whether some nodes on a route are cheating by checking the consistency of the received reports. For example, in Figure 16.1, both A and B are on the route R, with A immediately ahead of B. An instance of cheating behavior is detected if S finds that FNcur(A, S, R) ≠ RNcur(B, S, R). If one of them (A or B) is trusted by S (e.g., that node is S itself or D), then the other node can be marked as cheating by S, and the honesty score of the cheating node will be set to 0. Otherwise, S can only suspect that at least one of them is cheating. In this case, the honesty scores of both nodes are updated as

[Figure 16.1 Detection of cheating behavior. A route from source S through intermediate nodes A, B, and C to destination D.]

H(A, S) = αH(A, S),   (16.3)
H(B, S) = αH(B, S),   (16.4)

where 0 < α < 1 indicates the degree of punishment. In addition, if FNcur(A, S, R) > RNcur(B, S, R), S will reset the value of FNcur(A, S, R) using RNcur(B, S, R), reset the value of RNcur(B, S, R) using FNcur(B, S, R), and recalculate FNtot(A, S) and RNtot(B, S) using the updated values. Since FNcur(A, S, R) < RNcur(B, S, R) does not make sense, we do not consider this situation.

Once a node has been detected cheating, punishment should be applied to it. In HADOF, when S detects that a node B is malicious, S will put B on its blacklist (which is equivalent to setting H(B, S) to 0), stop forwarding any packets originating from B, and refuse to be on the same route as B in the future.

Next we introduce a mechanism to recover the honesty scores of nodes that have been framed by malicious nodes. We again use the example in Figure 16.1 to illustrate this mechanism. When S finds that the reports submitted by A and B conflict, that is, FNcur(A, S, R) > RNcur(B, S, R), besides decreasing A's honesty score, S also increases the count of possible framing attacks launched by B against A, and records the difference between FNcur(A, S, R) and RNcur(B, S, R). Similarly, S does the same for B. If later S detects that B is a cheating node, S checks how many nodes have been framed by B and, for each such node, how many times. Assuming that A has been framed by B m times, S will recover A's honesty score as follows:
H(A, S) = H(A, S)/α^m,   (16.5)
which is always bounded by 1. Meanwhile, S also needs to increase FNtot(A, S) or decrease RNtot(A, S) to correct the inaccuracy caused by the framing attacks launched by B.
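The honesty-score bookkeeping of (16.3)–(16.5) can be sketched as follows. This is a minimal sketch; the dictionary representation and function names are ours, and α = 0.9 is the value used in the simulations.

```python
# Sketch of the honesty-score updates of Section 16.3.2 (illustrative only).
ALPHA = 0.9   # punishment factor, 0 < alpha < 1

def suspect(honesty, a, b):
    """Reports of A and B conflict and neither node is trusted by S:
    discount both honesty scores, per (16.3)-(16.4)."""
    honesty[a] *= ALPHA
    honesty[b] *= ALPHA

def recover(honesty, a, m):
    """A was framed m times by a node later proven malicious:
    restore A's score per (16.5), bounded above by 1."""
    honesty[a] = min(1.0, honesty[a] / ALPHA ** m)
```

A node framed once and later cleared thus returns exactly to its pre-framing score.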
16.3.3 Friendship

Since a malicious node knows the source and destination of each route that it is on, to avoid being detected it will frame only those of its neighbors that are neither the source nor the destination. Therefore, even when the CR database has been activated, the activity of malicious nodes can only be suspected, but cannot be proved, by the source. This can be mitigated by taking advantage of existing trust relationships. Each node maintains a private list of trusted nodes that it considers to be honest. Now, if B submits false reports to S to frame A, while S trusts A, B can be detected by S immediately, and H(B, S) will be set to 0.
16.3.4 Route diversity

Since there may exist more than one route from a source to a destination, it is usually beneficial to discover multiple routes. In [10] [339], the authors have shown that using multiple routes can reduce the route-discovery frequency. In this chapter, we investigate how route diversity can be used to defend against routing-disruption attacks.

In DSR, discovering multiple routes from a source to a destination is straightforward. Let MaxRouteNum be the maximum number of ROUTE REPLY messages that the destination can send back for route requests with the same request ID. By varying MaxRouteNum, we can discover different numbers of routes. By exploring route diversity, we have a better chance of defeating attackers who aim to prevent good routes from being found. Meanwhile, since multiple routes may exist, the source can always use the route with the best quality according to certain criteria.

Whenever a new route R is discovered, for each node A on this route, FNcur(A, S, R) and RNcur(A, S, R) should be initialized to 0. Since this route has never been used before, its expected quality can be calculated as

Q(R) = ∏_{A∈R} Pavg(A, S) · H(A, S) − λ · L.   (16.6)

The difference between (16.6) and (16.2) lies in the fact that only the past histories of the nodes on the route are used in (16.6). Since there may exist multiple routes to D in S's route cache, S needs to decide which route should be used. One possibility is to always use the one with the best expected quality. However, this might not be the best choice. For example, the quality of a route may degrade dramatically after it has carried a lot of traffic. In this chapter, the following procedure is used to distribute traffic among multiple routes, and adaptively determine which route should be used. Let Q_threshold be a predetermined quality threshold, and let R1, ..., RK be the K routes with expected quality higher than Q_threshold. Once S wants
to send a packet to D, S randomly picks a route among them. The probability that route Ri (1 ≤ i ≤ K) will be picked is determined as

Pr(Ri) = Q(Ri) / (Q(R1) + · · · + Q(RK)).   (16.7)
If no route has an expected quality higher than Q threshold , the route with the highest expected quality will be selected.
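The selection rule of (16.7), together with the fallback to the best route, can be sketched as follows. The names are illustrative, and the linear scan over cumulative weights is just one of several equivalent ways to sample from the distribution.

```python
import random

# Sketch of the route-selection rule of Section 16.3.4 (illustrative only).
def pick_route(routes, q_threshold=0.8, rng=random):
    """`routes` maps route id -> expected quality Q(Ri).
    Pick a route above the threshold with probability proportional
    to its quality (16.7); else fall back to the best route."""
    good = {r: q for r, q in routes.items() if q > q_threshold}
    if not good:
        # No route above the threshold: use the best one
        # (and, per Section 16.3.5, initiate a new route discovery).
        return max(routes, key=routes.get)
    total = sum(good.values())
    x = rng.uniform(0, total)
    for r, q in good.items():
        x -= q
        if x <= 0:
            return r
    return r   # guard against floating-point rounding
```

With only one route above the threshold, that route is always chosen; with none, the best available route is returned while a new discovery would be triggered.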
16.3.5 Adaptive route rediscovery

Owing to mobility and the dynamically changing traffic patterns, some routes may become invalid after a while, or their quality may change. Usually, a new route discovery is initiated by S when there exist no available routes from S to D. In this chapter, we use an adaptive route-rediscovery mechanism to determine when a new route discovery should be initiated: if S wants to send packets to D, and there exist no routes to D with quality higher than Q_threshold in S's route cache, then S initiates a new route discovery.
16.3.6 Implementation of HADOF

We have implemented HADOF on top of DSR. The implementation includes two major procedures: packet sending, and updating of traffic statistics and cheating records. The packet-sending procedure is described in Figure 16.2. When S wants to send a packet to D, S first checks its route cache to find whether there exist valid routes to D. If there exist
[Figure 16.2 The packet-sending procedure. S finds all routes to D in its route cache with expected quality higher than Q_threshold; if any exist, one is picked according to the procedure in Section 16.3.4 and used to send the packet; otherwise S initiates a new route discovery and uses the route with the highest expected quality.]
[Figure 16.3 Updating/maintaining traffic statistics and cheating records. At each update time, if valid routes exist in S's route cache for the traffic pair (S, D), the traffic statistics and honesty scores are updated as described in Sections 16.3.1 and 16.3.2; if no valid route exists, or no route has quality higher than Q_threshold, S initiates a new route discovery to D.]
no valid routes, S initiates a new route discovery with the destination being D. If there exist some valid routes, but none has expected quality higher than Q_threshold, S picks the route with the best expected quality and initiates a new route discovery. Otherwise, S randomly picks one route according to the procedure described in Section 16.3.4.

The procedure for updating/maintaining traffic statistics and cheating records is described in Figure 16.3. The source S periodically calls this procedure to collect traffic statistics for each route that has been used in the interval. Using the mechanisms described in Sections 16.3.1 and 16.3.2, S updates the expected route qualities and cheating records. If necessary, a new route discovery is initiated when the conditions described in Section 16.3.5 are satisfied.
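The periodic-update decision logic can be sketched as follows. This is a minimal sketch: the `pair` dictionary and the `update_statistics` callback are stand-ins for whatever per-traffic-pair state a real implementation keeps.

```python
# Sketch of the periodic update procedure of Figure 16.3 (illustrative only).
def periodic_update(pair, q_threshold=0.8):
    """Return 'rediscover' if a new route discovery should be initiated
    for this traffic pair, else 'ok'. `pair` carries the valid routes
    (route id -> expected quality) and an update callback."""
    if not pair['routes']:                 # no valid route at all
        return 'rediscover'
    pair['update_statistics']()            # Sections 16.3.1 and 16.3.2
    if max(pair['routes'].values()) <= q_threshold:
        return 'rediscover'                # all routes below the threshold
    return 'ok'
```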
16.4 Security analysis

This section analyzes the security aspects of HADOF in terms of defending against various routing-disruption attacks. Throughout this section, we will use Figure 16.4 as a simple example to illustrate the various situations.
[Figure 16.4 A simple example. A network containing a route S → A → B → C → D, together with additional nodes E, F, G, V, and W.]
Black-hole and gray-hole attacks

In HADOF, the source can quickly detect a gray hole or black hole on the basis of the reports it has collected and the past records of each node. Without loss of generality, assume that B has created a gray hole on the route "SABCD" in Figure 16.4. On the basis of the reports submitted by A, B, C, and D, S can learn that some of them have dropped packets. Node B is detected by S as creating a black/gray hole if Pavg(B, S) and Pcur(B, S, "SABCD") are low, and RN(B, S) is larger than a pre-defined threshold, where a relatively large RN(B, S) is required to make sure that this is not a transient phenomenon.
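This detection rule can be written as a simple predicate. The threshold values below are illustrative; the text leaves them as design parameters.

```python
# Sketch of the gray-/black-hole test: flag node B only when both its
# average and current delivery ratios are low AND enough packets have been
# routed through it (thresholds are illustrative, not from the text).
def is_dropping(p_avg, p_cur, rn_total,
                ratio_threshold=0.5, min_packets=50):
    return (p_avg < ratio_threshold
            and p_cur < ratio_threshold
            and rn_total > min_packets)
```

The packet-count condition prevents a node from being flagged on the strength of a few unlucky transmissions.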
Framing attacks without collusion

Besides dropping packets, a malicious node can also submit false reports to cheat the source and frame its neighbors. For example, on the route "SABCD," if B is malicious, B can submit a smaller RN value to frame A and a larger FN value to frame C. In HADOF, a source can detect framing attacks by checking the consistency of the reports it has collected. We again use the route "SABCD" as an example, and assume that the malicious nodes work alone. If B has reported a larger FNcur(B, S, R) to frame C, S can detect this by finding FNcur(B, S, R) > RNcur(C, S, R), where R denotes the route "SABCD."

Now we analyze the possible consequences of this framing. First, B cannot increase its Pcur(B, S, R) and P(B, S), since S will use RNcur(C, S, R) to replace FNcur(B, S, R). Second, B can only make S suspect C; it cannot make S believe that C is malicious. Third, if C is trusted by S, then B can be detected immediately, and will be excluded from any route originating from S in the future. Fourth, B's own honesty score will be decreased. Therefore, B can cause only limited damage by framing others, and has to take the risk of being detected as malicious, especially when friendship has been introduced.
Framing attacks with collusion

Next we show that collusion cannot make framing attacks more damaging. We again use the route "SABCD" as an example. In the first case, the malicious nodes are neighbors of each other, for example, B and C. Without loss of generality, we can view them as one node B′, with RNcur(B′, S, R) = RNcur(B, S, R) and FNcur(B′, S, R) = FNcur(C, S, R). That is, B and C together have the same effect as B′ working alone, and the only difference is that they can release one node by sacrificing the other, that is, by letting it take all the blame. In the second case, the malicious
nodes are not neighbors of each other. For example, A and C are malicious and work together to frame B. It can be seen that the effect of A and C jointly framing B is the same as that of A and C framing B independently. Thus we conclude that in HADOF collusion cannot further improve the capability of framing attacks.
Rushing attacks

In rushing attacks, an attacker increases its chance of being on a route by disseminating ROUTE REQUESTs quickly and suppressing any later legitimate ROUTE REQUESTs [183]. For example, in Figure 16.4, if V can broadcast the ROUTE REQUESTs originating from S more quickly than A and E, then all the ROUTE REQUESTs broadcast by A and E will be ignored. The direct consequence is that V appears on all the routes returned by D. Later, V can drop packets and frame its neighbors.

Now we show how rushing attacks can be handled using HADOF. If S detects that no routes to D in its route cache work well, it will check whether these routes share a critical node through which all packets sent from S to D pass. In this example, the critical node is V. If V has a low Pavg(V, S) and a low H(V, S), S has reason to suspect that V has launched rushing attacks. S then initiates a new route discovery and explicitly excludes V from the discovered routes.
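The critical-node check can be sketched as a set intersection over the cached routes. The function is illustrative; routes are represented as node lists from source to destination.

```python
# Sketch of the critical-node check used against rushing attacks: a node
# other than S and D that appears on every cached route to D is suspect.
def critical_nodes(routes, source, dest):
    """`routes` is a list of node lists, each from source to dest."""
    if not routes:
        return set()
    common = set(routes[0])
    for r in routes[1:]:
        common &= set(r)           # keep only nodes on every route
    return common - {source, dest}
```

For the example above, both discovered routes pass through V, so V is the unique critical node.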
Wormhole attacks

A pair of attackers can create a wormhole in the network via a private network connection to disrupt routing by short-circuiting the normal flow of routing packets. For example, in Figure 16.4, if W and V are attackers and have created a wormhole between them, V can quickly forward any ROUTE REQUESTs it receives to W, and let W broadcast them. There are two variations, depending on whether V and W append their addresses to the REQUESTs. If they append their addresses, they act like rushing attackers, and the method discussed above can be used to handle them. The situation becomes more severe if they do not append their addresses. For example, W and V can make S believe that D is its neighbor, and later V can create a black hole to drop all the packets originating from S and targeting D. In HADOF, if S finds that no routes returned by D are valid, or S has not received any acknowledgment from D, S has reason to suspect that there exists a wormhole between S and D. S then activates secure neighbor-discovery techniques, such as those in [182] [183], to prevent attackers from creating wormholes.

In summary, HADOF can handle various routing-disruption attacks, such as gray-hole, black-hole, framing, rushing, and wormhole attacks, very well, and is collusion-resistant.
16.5 Simulation methodology

16.5.1 Simulator and simulation parameters

In our simulations, we use an event-driven simulator to simulate mobile ad hoc networks. The physical layer assumes a fixed-transmission-range model, such that two
Table 16.2. Simulation parameters

Number of nodes: 100
Maximum velocity (vmax): 20 m/s
Dimensions of space: 1000 m × 1000 m
Maximum transmission range: 250 m
Number of traffic pairs: 20
Average packet inter-arrival time: 0.04–0.2 s
Data-packet payload size: 512 bytes
Link bandwidth: 1 Mbps
MaxRouteNum: 5
MaxHopNum: 10
α: 0.9
β: 0.6
λ: 0.02
Q_threshold: 0.8
Update interval: 1 s
nodes can directly communicate with each other successfully only if they are in each other's transmission range. The MAC-layer protocol simulates the IEEE 802.11 distributed coordination function (DCF) [197]. DSR is used as the underlying routing protocol. The simulation parameters are listed in Table 16.2.

We use a rectangular space of size 1000 m × 1000 m. The total number of nodes is 100, and the maximum transmission range is 250 m. For each simulation, 20 traffic pairs are randomly generated. For each traffic pair, the packet arrival is modeled as a Poisson process, and the average packet inter-arrival time is uniformly chosen between 0.04 s and 0.2 s, so that each traffic pair injects a different traffic load into the network; we expect this to model reality better than using the same inter-arrival time for all the traffic pairs. The size of each data packet after encryption is 512 bytes, and the link bandwidth is 1 Mbps.

We vary the total number of malicious nodes among the 100 nodes from 5 to 20. In our implementation, the malicious nodes submit false reports only when they have dropped packets and the false reports cannot easily be detected. For example, a malicious node will not submit false reports to frame the sources or the destinations.

In the simulations, each node moves randomly according to a random waypoint model [209]: a node starts at a random position, waits for a duration called the pause time, modeled as a random variable with an exponential distribution, then randomly chooses a new location and moves toward it with a velocity uniformly chosen between 0 and vmax. When it arrives at the new location, it waits for another random pause time and then repeats the process. Two average pause times are used: 0 s and 50 s. An average pause time of 0 s represents a high-mobility case in which nodes keep moving, whereas an average pause time of 50 s represents a moderate-mobility case.
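One leg of this random waypoint model can be sketched as follows. The function is illustrative; the defaults correspond to the moderate-mobility configuration (mean pause 50 s) and the area and vmax of Table 16.2.

```python
import random

# Sketch of one leg of the random waypoint mobility model used in the
# simulations: exponential pause, uniform destination, uniform speed.
def next_waypoint(rng, area=(1000.0, 1000.0), v_max=20.0, mean_pause=50.0):
    """Return (pause_time, destination, speed) for the next movement leg."""
    pause = rng.expovariate(1.0 / mean_pause) if mean_pause > 0 else 0.0
    dest = (rng.uniform(0, area[0]), rng.uniform(0, area[1]))
    speed = rng.uniform(0, v_max)
    return pause, dest, speed
```

Setting `mean_pause=0` reproduces the high-mobility case in which nodes never stop moving.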
16.5.2 The baseline system and watchdog

In our simulations, the baseline system is implemented as follows: the basic DSR described in Section 16.2.2 is used, and, for each route discovery, only one route is returned. No adaptive route rediscovery is used, and no malicious-node-detection mechanisms are applied. It is expected that the baseline system will perform badly in most situations.

For comparison, the mechanism proposed in [283] has also been implemented. This includes two major components: watchdog and pathrater. To make watchdog work properly, we have modified the MAC-layer protocol to ensure the following property: after node B has received a packet from node A and needs to forward it to node C, B can start forwarding only if both A and C are idle and ready to receive packets. When using watchdog, a node reports to the source when another node refuses to forward more than a certain number of packets for it; in our implementation, we set this threshold to five. In addition, each route discovery initiated by source S returns at most five routes, and the route with the best quality (calculated using pathrater) is used. When the route in use becomes invalid due to link breakage, instead of using the routes in S's route cache, S initiates a new route discovery; the reason is that, with very high probability, those cached routes might also fail or perform badly owing to mobility and traffic dynamics. The SSR (send extra route request) extension has also been implemented.
16.5.3 Performance metrics

The following metrics are used to evaluate the performance of HADOF.

• The packet drop ratio: the percentage of data packets sent but not received by the destinations, which equals 1 minus the end-to-end throughput.
• The overhead: in this chapter, we consider the routing overhead, the energy-consumption overhead, the encryption overhead, and the complexity overhead. Given a traffic pattern, the routing overhead indicates how many route discoveries have been initiated by the sources. The energy-consumption overhead denotes how much extra energy needs to be consumed. To maintain the confidentiality and integrity of the transmitted content, extra cryptographic operations are needed, and the encryption overhead describes how many extra cryptographic operations these mechanisms require. The complexity overhead accounts for the extra storage and computation needed to apply these mechanisms.
16.6 Performance evaluation

We use "baseline" to denote the baseline system, "watchdog" to denote the system based on watchdog and pathrater, and "HADOF" to denote the system based on HADOF. We set up different node-movement patterns for each simulation by changing the average pause time and the seed of the random-number generator. By varying the number of
malicious nodes and the average pause time, we get different configurations. For each configuration, the results are averaged over 25 rounds of simulations, in which for each round we change the random seed to get different movement and traffic patterns. To make a fair comparison, for each configuration and each round of simulation, the same movement and traffic patterns were used by all three systems.
16.6.1 Packet-drop-ratio comparisons

We compare the packet drop ratios of the three systems under different scenarios. First, we compare the packet drop ratios under gray-hole attacks only, that is, when no nodes submit false reports. Second, we compare the packet drop ratios under both gray-hole and framing attacks, in which some malicious nodes drop packets and frame their neighbors when possible. Third, we show how the friendship mechanism can mitigate the effects of framing attacks.
16.6.1.1
Gray-hole attacks
In our simulations, we vary the number of malicious nodes from 5 to 20. The gray hole is implemented in such a way that each malicious node drops half of the packets passing through it. The simulation results obtained under different configurations are plotted in Figure 16.5. From these results we can see that HADOF outperforms watchdog in all situations. For example, under the configuration of pause time 50 s and 20 malicious nodes, the packet drop ratio of baseline is more than 40%; watchdog reduces the packet drop ratio to 22%, while for HADOF the packet drop ratio is only 16%; that is, more than 33% improvement is obtained over watchdog under this configuration. Under the configuration of pause time 50 s and five malicious nodes, HADOF provides more than 55% improvement over watchdog.
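The gray-hole behavior used in these simulations (each malicious relay drops half of the packets passing through it) can be sketched as follows; modeling "half" deterministically as every other packet is our simplification, and the simulator may well draw at random instead:

```python
# Sketch of a gray-hole relay: it silently drops half of the packets it is
# asked to forward. Class and method names are illustrative, not the
# chapter's.

class GrayHoleRelay:
    def __init__(self):
        self.seen = 0  # packets handed to this relay so far

    def forward(self, packet):
        """Return the packet if forwarded, or None if silently dropped."""
        self.seen += 1
        # Drop every second packet -> a 50% drop rate.
        return None if self.seen % 2 == 0 else packet
```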
16.6.1.2
Gray-hole plus frame-up attacks
We investigate the packet drop ratio under both gray-hole and framing attacks. In HADOF, the only way for a malicious node to frame a good node is to make the source suspect that the good node is cheating. To achieve this, a malicious node can report a smaller RN number than the actual value to frame the node ahead of it on the route, and/or report a larger FN number than the actual value to frame the node just following it on the route. However, the malicious node can never make the source believe that a good node is cheating, since the malicious node cannot create solid evidence. In watchdog, there are various ways for a malicious node to frame good ones. For example, if node A has sent a packet to B and asked B to forward it to C, B has many ways to make A believe that it has sent the packet to C even though B did not actually send the packet, or B intentionally caused the transmission to fail. As discussed in [283], many reasons can cause a misbehaving node not to be detected, such as ambiguous collisions, receiver collisions, limited transmission power, false misbehavior, collusion, and partial dropping. In our simulations, we implement framing attacks only through receiver collisions. That is, B will forward the packet to C only when it knows that C cannot correctly receive it (e.g., C is transmitting data to another node, or receiving data from another node). Since A can tell only whether B has sent the packet to C, but cannot tell whether C has received it successfully, B can easily frame its neighbors.

Figure 16.5 Packet-drop-ratio comparisons under gray-hole attacks (packet drop ratio versus time for baseline, watchdog, and HADOF; average pause times 0 s and 50 s; 5, 10, and 20 malicious nodes).

Figure 16.6 shows the simulation results with the configurations of 20 malicious nodes, half of them applying framing attacks. First, we can see that the degradation of HADOF caused by framing attacks is limited. Second, we see that framing degrades the performance of both systems, but affects watchdog more than HADOF. Meanwhile, it is important to point out that we have shown the best-case results for watchdog, because we have made many assumptions that favor watchdog, such as an absence of collusion attacks, only receiver collisions, and a perfect MAC protocol. For HADOF, no extra assumptions are needed except for those listed in Section 16.2.
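The receiver-collision framing trick can be captured in a few lines; this toy model (all names ours) shows why an overhearing-based monitor credits B even though the packet is actually lost:

```python
# Toy model of receiver-collision framing: malicious relay B transmits only
# when next hop C is busy and thus cannot correctly receive, while observer A,
# which can only overhear B's transmission, still credits B with forwarding.

def framing_relay_transmits(c_busy: bool) -> bool:
    """Malicious B chooses to transmit exactly when C cannot receive."""
    return c_busy

def a_credits_b(b_transmitted: bool) -> bool:
    """A's overhearing-based verdict: it sees B transmit, nothing more."""
    return b_transmitted

def c_receives(b_transmitted: bool, c_busy: bool) -> bool:
    """Actual delivery requires both a transmission and a free receiver."""
    return b_transmitted and not c_busy
```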
Figure 16.6 Effects of framing attacks (packet drop ratio versus time; average pause time 50 s; 20 malicious nodes).

Figure 16.7 Effects of a friendship mechanism (packet drop ratio versus time for gray hole + framing with no friends, gray hole + framing with 20 friends, and only gray hole with no friends; average pause time 50 s; 20 malicious nodes).

16.6.1.3
The effectiveness of friendship
In the previous simulations, friendship was not introduced and the source trusts only the destination. Next we show the results after introducing a friendship mechanism to combat framing attacks. We conduct simulations for the situation in which each source has 20 friends, randomly chosen from among all good nodes in the network. Figure 16.7 shows the simulation results using HADOF with the configuration of average pause time 50 s and 20 malicious nodes, half of them launching both gray-hole and framing attacks, and half of them launching only gray-hole attacks. From these results we can see that the effects of framing attacks can be overcome once trustworthiness has been established among a certain number of users. For example, with 20 friends, the packet drop ratio (15%) is even lower than that in the situation in which no framing attacks are launched (16%).
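One way to read the friendship mechanism is that a source only acts on reports from nodes it already trusts (the destination plus its friend set), so framing reports from unknown intermediaries carry no weight. A heavily hedged sketch of that filtering step, with all names ours:

```python
# Illustrative filter: keep only reports whose reporter the source trusts.
# The chapter does not prescribe this exact data layout; it is an assumption
# made for the sketch.

def credible_reports(reports, destination, friends):
    """Return reports coming from the destination or a friend."""
    trusted = set(friends) | {destination}
    return [r for r in reports if r["reporter"] in trusted]
```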
Figure 16.8 A comparison of route-discovery overhead for baseline, watchdog, and HADOF (number of route discoveries versus time; average pause time 50 s; 20 malicious nodes).
16.6.2
Overhead comparisons
Routing-discovery overhead
For each simulation, we have counted the total number of route discoveries initiated by all the sources. Figure 16.8 shows the results under the configuration of average pause time 50 s and 20 malicious nodes, with only gray-hole attacks. From these results we can see that, although HADOF needs to initiate route discoveries proactively, it still has the lowest route-discovery overhead. In the baseline system, only one route is returned for each route discovery, which explains why baseline needs to initiate more route discoveries; this also verifies the effectiveness of path diversity. Surprisingly, watchdog has the highest route-discovery overhead, which stems from its high false-alarm ratio, since a new route discovery is initiated once no route has an average reputation larger than 0.
Energy-consumption overhead
One major drawback of watchdog is that it consumes much more energy than HADOF, because each node has to keep monitoring its neighbors’ transmission activities. We use Figure 16.1 to illustrate why watchdog needs to consume extra energy. For example, after B has sent a packet to C and asked C to forward the packet to D, B has to keep listening to C’s transmission. If C is a malicious node, C can launch resource-consumption attacks to drain B’s energy by delaying the forwarding of packets for B. Even if C is a good node, B still needs to consume extra energy to receive, decode, and compare the packets transmitted by C with the packets stored in B’s buffer. This consumes a lot of extra energy. By requiring nodes to keep monitoring their neighbors, watchdog not only reduces network capacity, but also consumes extra energy. HADOF has no such drawbacks.
Encryption overhead
As we discussed in Section 16.2, all packets should be encrypted and signed to ensure the confidentiality and integrity of data; otherwise, outside attackers can easily intercept messages through eavesdropping. Compared with the baseline system, HADOF introduces some encryption overhead, which comes from encrypting the reports. In most situations only the destination needs to submit reports, and the source and the destination already share a secret key for data encryption. Thus, the reports from the destination can simply be encrypted with this secret key, which introduces little overhead. In addition, if the amount of data for reporting packet-forwarding statistics is much less than the total amount of data, which is generally true, the overhead of encrypting reports from intermediate nodes on the route becomes negligible compared with the data-encryption overhead.
Complexity overhead
In HADOF, each source needs to run a route-traffic observer to maintain and update traffic statistics, and to keep records of cheating behavior. However, both can be implemented using simple data structures and consume little memory. The computation overhead comes from updating traffic statistics, route quality, and cheating records; these operations introduce little computational burden. In watchdog, each node also needs to keep a reputation database and to calculate route quality. Moreover, each node in watchdog needs an extra buffer to store the packets that it has asked its neighbors to forward but whose forwarding has not yet been confirmed, which consumes a lot of extra memory and may introduce extra computational overhead because of the need to compare packets.
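The claim that the source's bookkeeping fits in simple structures can be illustrated with a hypothetical sketch; all field names are ours, and the chapter does not prescribe this layout:

```python
# Illustrative per-source bookkeeping: dictionary-backed counters for traffic
# statistics and cheating records, cheap in both memory and update cost.
from collections import defaultdict

class RouteTrafficObserver:
    def __init__(self):
        self.forwarded = defaultdict(int)  # node -> packets forwarded for us
        self.dropped = defaultdict(int)    # node -> packets believed dropped
        self.cheats = defaultdict(int)     # node -> detected cheating events

    def record(self, node, forwarded=0, dropped=0, cheated=False):
        self.forwarded[node] += forwarded
        self.dropped[node] += dropped
        if cheated:
            self.cheats[node] += 1

    def forwarding_ratio(self, node):
        total = self.forwarded[node] + self.dropped[node]
        return self.forwarded[node] / total if total else 1.0
```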
16.7
Summary and bibliographical notes
Mobile ad hoc networks have attracted a lot of attention from military, industrial, and academic perspectives. However, before mobile ad hoc networks can be successfully deployed, their security issues have to be resolved. In this chapter we presented HADOF to defend against routing-disruption attacks launched by inside attackers, which can be implemented on top of existing source-routing protocols. HADOF is capable of adaptively adjusting routing strategies according to the network dynamics and nodes’ past records and current performance. It can handle various attacks launched by malicious nodes, such as black-hole, gray-hole, framing, rushing, and wormhole attacks. Moreover, HADOF introduces little overhead into the existing routing protocols, and is fully distributed. Our extensive simulation studies have also confirmed the effectiveness of HADOF. For example, in the presence of 20 malicious nodes, with each launching a gray-hole attack by selectively dropping half of the packets passing through it, and with half of them also launching framing attacks, the system without protection schemes has a packet drop ratio of 40%, the system using watchdog and pathrater can reduce the packet
drop ratio to at most 26%, while the system using HADOF can reduce the packet drop ratio to only 15%. The simulation results have also shown that HADOF introduces little routing-discovery, encryption, and complexity overhead. More information can be found in [490]. To secure the ad hoc network, the first step is to prevent attackers from entering the network through secure key distribution/authentication and secure neighbor discovery, such as in [496] [181] [335] [379] [491] [182] [183] [172]. However, these schemes cannot work well when attackers have entered the network. To defend against inside attackers, schemes based on monitoring packet-forwarding activities have been shown to be promising solutions [283] [500] [15] [27] [493] [396]. Papadimitratos and Haas [335] proposed a secure routing protocol for mobile ad hoc networks that guarantees the discovery of correct connectivity information over an unknown network in the presence of malicious nodes. However, it is still vulnerable to several attacks, such as rushing and wormhole attacks. Sanzgiri et al. [379] considered a scenario in which nodes authenticate routing information coming from their neighbors while not all the nodes on the route will be authenticated by the sender and the receiver. However, this scheme cannot handle compromised nodes. Hu, Perrig, and Johnson [181] proposed Ariadne, a secure on-demand ad hoc network routing protocol, which can prevent attackers or compromised nodes from tampering with uncompromised routes that consist (entirely) of uncompromised nodes. The authors of [182] [183] described how to defend against rushing attacks through secure neighbor discovery and how to apply packet leashes to defend against wormhole attacks. Later, Capkun and Hubaux investigated secure routing in ad hoc networks in which security associations exist only for a subset of all pairs of nodes [59]. However, none of the above schemes can handle inside attackers well. 
To defend against attackers that have entered the network and can be on a discovered route, a reputation system based on monitoring traffic in the network can be used. Initial work using these mechanisms is presented by Marti et al. in [283], in which they considered the case in which nodes agree to forward packets but fail to do so, and proposed two tools that can be applied upon source-routing protocols: watchdog and pathrater. However, their scheme suffers from many problems. First, watchdog requires the promiscuous mode of the wireless interface, which is not always available. Second, since nodes using watchdog have to keep receiving packets from their neighbors, the network capacity may be reduced and a lot of energy will be wasted. Third, watchdog cannot distinguish between malicious behavior and misbehavior caused by a temporary network malfunction, such as collision or network congestion. Therefore, watchdog suffers from a lot of false alarms. Fourth, pathrater defines the route quality as the average reputation of the nodes on the route, which in general is not the best metric. Another major problem, which has also been assessed in their work [283], is that their schemes are not collusion-resistant, and are also vulnerable to attacks that aim to frame innocent nodes. In [15] [290], the authors extended the ideas in [283], and allowed the reputation to propagate throughout the network. However, since they still rely on watchdog,
the schemes in [15] [290] also suffer from the same types of problems as those afflicting [283]. Furthermore, once reputation can propagate, selfish or malicious nodes can collude to frame other nodes. In [27] [493] [396], the authors consider the scenario in which nodes are selfish, and may be unwilling to forward packets to the benefit of other nodes. They proposed schemes to stimulate cooperation among nodes on the basis of a credit system or game theory. However, those schemes cannot handle malicious nodes in the network whose objective is to maximize the damage they cause to the network, instead of maximizing their own benefits obtained from the network.
17
Defense against traffic-injection attacks
Since in ad hoc networks nodes need to cooperatively forward packets for each other, without the necessary countermeasures such networks are extremely vulnerable to traffic-injection attacks, especially those launched by insider attackers. Injecting an overwhelming amount of traffic into the network can easily cause network congestion and decrease the network lifetime. In this chapter we focus on traffic-injection attacks launched by insider attackers. After investigating the possible types of traffic-injection attacks, we present two sets of defense mechanisms to combat such attacks. The first set of defense mechanisms is fully distributed, whereas the second is centralized with decentralized implementation. The detection performance of each of the mechanisms is also formally analyzed. Both theoretical analysis and experimental studies have demonstrated that, with such defense mechanisms in place, there is hardly any gain to be obtained, from the attackers’ point of view, by launching traffic-injection attacks.
17.1
Introduction
In this chapter, we study a class of powerful attacks: traffic-injection attacks. Specifically, attackers inject an overwhelming amount of traffic into the network in an attempt to consume valuable network resources and consequently degrade the network performance. Since, in ad hoc networks, nodes need to cooperatively forward packets for other nodes, such networks are extremely vulnerable to traffic-injection attacks, especially those launched by insider attackers. Roughly speaking, traffic-injection attacks can be classified into two types: query-flooding attacks and injecting-data-packets attacks (IDPAs). Owing to the changing topology or traffic pattern, nodes in ad hoc networks may need to frequently update their routes, which may require broadcasting route-query messages. Attackers can then broadcast query messages with a very high frequency to consume valuable network resources. We call such attacks query-flooding attacks. Besides query-flooding attacks, attackers can also inject an overwhelming amount of data packets into the network in order to request other nodes to forward them. When other nodes process and forward these packets, their resources (e.g., energy) are wasted. We call such attacks IDPAs. Since in general the size of a data packet is much larger than the size of a route-query message, and the injection rate of data packets is usually much higher than the injection
rate of route-query messages, the damage that can be caused by IDPAs is usually more severe than that caused by query-flooding attacks. To defend against query-flooding attacks, we can limit the number of queries that each node can initiate. Although this may degrade the network performance somewhat, it can effectively limit the damage that query-flooding attacks can cause. On the other hand, if nodes in the network cannot know other nodes’ data-packet-injection rates, it becomes extremely hard or even impossible to detect IDPAs. In this work we focus on the scenario in which nodes in the network belong to the same authority and pursue some common goals. Therefore each node’s traffic-injection pattern can usually be estimated by at least a subset of nodes in the network, such as the sinks in ad hoc sensor networks. In this chapter we first consider a set of fully distributed defense mechanisms that can effectively detect IDPAs. Such mechanisms can work well even when attackers use advanced transmission techniques, such as directional antennas, to avoid being detected. We then derive theoretical upper bounds for the probability that attackers can successfully launch IDPAs without being detected. The results show that, from the attackers’ point of view, the best IDPA strategy is to conform to their legitimate data-packet-injection rates. In other words, the best attacking strategy is not to launch IDPAs. To decrease the storage overhead and further increase the attacker-detection performance, we then present a centralized defense mechanism with decentralized implementation. This is achieved by letting some nodes under strong protection perform attacker detection. Besides IDPAs, query-flooding attacks have also been studied, and the tradeoff between limiting the query rate and the system performance has been explored. The rest of the chapter is organized as follows.
Section 17.2 describes the system model and investigates the possible types of traffic-injection attack. Section 17.3 describes the fully distributed defense mechanisms. The theoretical analysis of the distributed defense mechanisms is presented in Section 17.4. In Section 17.5, a centralized detection mechanism with decentralized implementation is described. To confirm the effectiveness of the mechanisms, we conducted extensive simulation experiments, which are presented in Section 17.6.
17.2
Traffic-injection attacks In this chapter we focus on ad hoc networks with nodes belonging to the same authority that pursue some common goals. Nodes in such networks can be classified into two types: good and malicious. Good nodes will unconditionally help other good nodes to achieve the common goals, whereas malicious nodes will try to degrade the network performance as much as possible. Each node is equipped with a battery with limited power supply, communicates with other nodes through wireless connections, and can move freely inside a certain area. We focus on the most general scenario, namely that in which good nodes use omnidirectional transmission techniques. However, in our setting, attackers are allowed to use directional transmission techniques, such as directional antennas or adaptive beamforming, to improve their attacking capability.
According to the common system goal, each node may be required to generate a sequence of packets to be delivered to certain destinations. For example, in wireless ad hoc sensor networks, each node may need to periodically (or aperiodically) send the sensed information back to the data sinks. We call a source–destination pair legitimate if this pair is required by the common system goals. For each legitimate source–destination pair (s, d) in the network, we assume that the number of packets required to be delivered by this pair up to time t is N_{s,d}(t). In general, the exact value of N_{s,d}(t) may not be known a priori by the other nodes in the network. To overcome this difficulty, in this chapter we assume that an upper bound on N_{s,d}(t), denoted by f_{s,d}(t), can be estimated by some other nodes in the network. From now on, f_{s,d}(t) will be referred to as the upper bound of the traffic-injection rate associated with the source–destination pair (s, d). In this chapter we mainly focus on insider attackers. That is, all nodes in the network are legitimate, whether good or malicious. To handle outside attackers, access control and secret communication channels can usually work well. We assume that each node has a public/private key pair, and that a node can know or authenticate other nodes’ public keys. However, no node will disclose its private key to the others unless it has been compromised. To maintain confidentiality and integrity, each packet may be encrypted and signed by its sender when necessary. Without loss of generality, we simply assume that all data packets are of the same size. As mentioned before, in this chapter our focus is on defending against traffic-injection attacks, or, more specifically, IDPAs and query-flooding attacks. We first consider the possible ways in which an IDPA can be launched by attackers s and d, with s being the source and d being the destination.
The simplest way, which is called simple IDPA, is that s picks a route R to d and injects an overwhelming amount of packets into the network, with the injection rate being much higher than the legitimate upper bound f_{s,d}(t). In the second way, which is called long-route IDPA, the source s picks a very long route along which to inject data packets into the network. For example, as in Figure 17.1, s can pick the route “s → w → c → b → a → h → e → f → g → d” to send packets from s to d, while keeping the number of packets injected below the legitimate upper bound f_{s,d}(t). By acting in this way, s and d can achieve the same effect as increasing the traffic-injection rate.

Figure 17.1 An example of a long-route attack.

In the third and most advanced way, which is called multiple-route IDPA, the source s picks multiple routes to d and simultaneously injects traffic into the network via these routes. For example, as shown in Figure 17.2, s uses four routes “s → a1 → · · · → b1 → d,” “s → a2 → · · · → b2 → d,” “s → a3 → · · · → b3 → d,” and “s → a4 → · · · → b4 → d” to inject packets into the network. In this way, the traffic can be distributed among multiple routes such that for each route the packet-injection rate is no higher than the legitimate upper bound f_{s,d}(t), even though the total number of packets injected can be much higher than f_{s,d}(t). Moreover, the attackers can also take advantage of advanced transmission techniques, such as directional antennas and beamforming, to avoid being detected.

Figure 17.2 An example of a multiple-route attack.

Besides injecting data packets, attackers can also inject an overwhelming amount of query messages into the network to request other nodes to forward them, which is called a query-flooding attack. The advantage of query-flooding attacks lies in the fact that, for each query, more nodes in the network will be involved in processing and forwarding packets than in the case of injecting data packets. Although a query message is usually much smaller than a data packet, when the query frequency is very high, query-flooding attacks can still cause severe damage to the network.
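All three IDPA variants are ultimately caught by comparing a pair's observed injections against the estimated bound f_{s,d}(t). A hedged sketch of that check; the linear bound below is purely illustrative, since the chapter only assumes that some estimable bound exists:

```python
# Illustrative legitimacy check against the per-pair bound f_{s,d}(t).

def f_upper_bound(rate_per_s: float, t: float) -> float:
    """Assumed example of f_{s,d}(t): at most rate_per_s packets/second by time t."""
    return rate_per_s * t

def within_legitimate_bound(n_injected: int, rate_per_s: float, t: float) -> bool:
    """True if the observed injection count does not exceed f_{s,d}(t)."""
    return n_injected <= f_upper_bound(rate_per_s, t)
```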
17.3
Defense mechanisms
In general, to detect whether a node has launched traffic-injection attacks, the detectors have to base their assessment on what they have observed (either directly or indirectly). For example, a node can be marked as launching traffic-injection attacks only if some other nodes have observed that it has injected too much traffic (more than the legitimate bound), or that it has sent traffic to illegitimate destinations. Therefore, the following mechanisms will be required by any defense system that combats traffic-injection attacks.
• A robust packet-delivery mechanism such that, for each packet injected by a node, this node cannot deny that the packet is from it and no other node can generate the same packet without colluding with it. This is addressed in Section 17.3.1.
• A robust traffic-monitoring mechanism to count the number of packets injected into the network by each node. This is addressed in Section 17.3.2.
• A robust detection mechanism to detect traffic-injection attacks on the basis of the observed information. This is addressed in Section 17.3.3.
17.3.1
Route discovery and packet delivery
Since source routing has been widely used in mobile ad hoc networks, and can greatly facilitate the detection of attackers, in this chapter we focus on source routing. Specifically, we adopt dynamic source routing (DSR) from [209] as the underlying routing protocol with which to perform route discovery and maintenance. To defend against possible routing-related attacks, the following security enhancements will be incorporated into the baseline DSR protocol.
• When a node s initiates a route discovery to destination d, besides the source–destination pair, the route-request packet should also include a unique ID associated with this request and the sequence number of the last data packet that s has sent to d. In this chapter, the following format is used for each route-request packet: {s, d, id_s(s, d), seq_s(s, d), sign_s(s, d, id_s(s, d), seq_s(s, d))}. Here id_s(s, d) is the sequence number of this route-request packet, which has an initial value of 1 and is increased by 1 after each route request issued by the pair (s, d); seq_s(s, d) is the sequence number of the last data packet that the pair (s, d) has injected into the network; and sign_s(s, d, id_s(s, d), seq_s(s, d)) is the signature generated by s over the message {s, d, id_s(s, d), seq_s(s, d)}.
• When a good node x receives a route-request packet with s being the source and d being the destination, x first checks whether the following conditions are satisfied: (i) the source–destination pair (s, d) is legitimate; (ii) all signatures are valid; (iii) id_x(s, d) < id_s(s, d), where id_x(s, d) is the largest route-request sequence number corresponding to the source–destination pair (s, d) that x has observed before; (iv) seq_x(s, d) ≤ seq_s(s, d), where seq_x(s, d) is the largest data-packet sequence number corresponding to the pair (s, d) that x has observed before; (v) no node appended to the route-request packet has been detected as malicious by x; (vi) fewer than L_maxhop relay nodes have been appended to the query packet, where L_maxhop is a system parameter indicating the maximum number of relays that any route is allowed to have; and (vii) x has not forwarded any route request for the source–destination pair (s, d) within the last T_x(s, d) interval, where T_x(s, d) is the minimum route-request forwarding interval specified by x, indicating that x will not forward more than one route request for (s, d) within any T_x(s, d) interval. If all of the above conditions are satisfied, we call such a route request a valid request. In this case, x will assign the value of id_s(s, d) to id_x(s, d), assign the value of seq_s(s, d) to seq_x(s, d), append its own address to the route-request packet, sign the whole packet, and rebroadcast the updated route request. If only the first four conditions are satisfied, x will simply update the values of id_x(s, d) and seq_x(s, d)
using id_s(s, d) and seq_s(s, d). In all other situations, x will just discard this route request and perform the necessary attacker detection. Assuming that request is the received valid route-request message that x has decided to forward, the following format is used for x to append its own address: {request, x, sign_x(request, x)}. Once a source has decided to send a packet to a certain destination using a certain route, a data-packet-delivery transaction should be initiated. The data-packet-delivery mechanism works as follows. Suppose that node s is to send a packet to destination d through the route R with the payload msg and the sequence number seq_s(s, d). s first generates two signatures, sig_h and sig_b, with sig_h generated over the message {R, seq_s(s, d)} and sig_b generated over the message {R, seq_s(s, d), MD(msg)}, where MD(·) is a digest function such as SHA-1 [382]. The format of the packet to be sent is as follows:

{R, seq_s(s, d), sig_h, msg, sig_b}.   (17.1)
We refer to {R, seq_s(s, d), sig_h} as the header of the packet, and to {msg, sig_b} as the body of the packet. Next, s transmits this packet to the next node on route R, and increases the value of seq_s(s, d) by 1. The advantage of generating two signatures will be explained later. When a node x detects that a certain packet is to be transmitted by a certain node y, x first decodes and checks the header of the packet. Assuming that {R, seq_s(s, d), sig_h} is the header of the transmitted packet, x needs to continue receiving and decoding the body of the packet only if all of the following conditions are satisfied:
(i) the signature sig_h is valid;
(ii) x belongs to the route R and is the target of this transmission;
(iii) no nodes on route R have been detected as malicious by x;
(iv) seq_s(s, d) > seq_x(s, d);
(v) route R has no more than L_maxhop relays; and
(vi) x has agreed to participate on this route before and the route has not expired, where each route is assigned an expiration time.
If all of the above conditions are satisfied, x will continue receiving and decoding the body of the packet, assuming it is {msg, sig_b}. If the signature sig_b is valid, x will forward the packet to the next node on the route, and update the value of seq_x(s, d) using seq_s(s, d).
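A hedged sketch of the two-signature format in (17.1) and the cheap header check it enables: sig_h covers {R, seq_s(s, d)}, while sig_b additionally covers a digest of the payload, so a relay can verify the short header before spending energy on the body. The chapter assumes public-key signatures; here an HMAC over an assumed shared key stands in for them, and all names are illustrative:

```python
import hashlib
import hmac

def _sign(key: bytes, *parts: bytes) -> str:
    # Stand-in for the source's signature: HMAC-SHA256 over the given fields.
    mac = hmac.new(key, digestmod=hashlib.sha256)
    for part in parts:
        mac.update(part)
    return mac.hexdigest()

def make_data_packet(route, seq, msg: bytes, key: bytes):
    """Build {R, seq_s(s,d), sig_h, msg, sig_b} as in (17.1); MD is SHA-1 per the text."""
    r = "/".join(route).encode()
    sig_h = _sign(key, r, str(seq).encode())
    sig_b = _sign(key, r, str(seq).encode(), hashlib.sha1(msg).digest())
    header = {"R": route, "seq": seq, "sig_h": sig_h}
    body = {"msg": msg, "sig_b": sig_b}
    return header, body

def header_valid(header, key: bytes) -> bool:
    """The cheap check a relay runs before deciding to receive the body."""
    r = "/".join(header["R"]).encode()
    expected = _sign(key, r, str(header["seq"]).encode())
    return hmac.compare_digest(expected, header["sig_h"])
```

A relay would run header_valid (together with conditions (ii)-(vi)) on the decoded header, and only then receive the body and check it against sig_b.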
17.3.2
Traffic monitoring
Traffic monitoring is an indispensable component in detecting possible traffic-injection attacks. In this chapter, each node keeps monitoring its neighbors’ transmission activities using the header-watcher mechanism. Specifically, when a node x detects that a neighbor y is transmitting a data packet, no matter whether
x is the receiver of this transmission or not, x will try to receive and decode the packet header sent by y. Actually, this is needed in most wireless networks: without decoding the header, a node cannot know whether a packet is targeted at it. The uniqueness of the header-watcher mechanism lies in the fact that each node also checks the validity of the signature on the packet header. If the signature of the packet header is valid, x will put the packet header into the set List(s, d, x) in x’s records, which will be used later to detect whether s has launched traffic-injection attacks. Unlike the “watchdog” mechanism introduced in [283], which requires a node to buffer all the packets that it has sent or forwarded and to keep monitoring its neighbors’ transmission activities in order to check whether those packets have been forwarded, the “header-watcher” mechanism considered in this chapter merely requires a node to monitor the packet headers in its neighborhood. Since only packet headers need to be received and decoded, and since the header of a packet is much shorter than the body, a lot of effort can be saved compared with the watchdog mechanism, which requires receiving, decoding, and comparing whole packets. In general, if all packet headers received by node x were recorded, then the longer x stayed in the network, the more storage would be required. Actually, in our scheme, for each legitimate source–destination pair (s, d), only those packet headers received after the last valid route request issued by (s, d) need to be recorded by x. In other words, only those packet headers whose sequence numbers are larger than the sequence number broadcast by s in its last valid route-request packet need to be recorded. With this modification, the storage requirement becomes very small and does not increase with x’s staying time in the network.
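The bounded header store described above can be sketched as a small structure (a hypothetical minimal implementation; names ours): each valid route request raises a per-pair floor, and only headers with sequence numbers above the floor are kept.

```python
# Sketch of the header-watcher's bounded storage: per legitimate pair (s, d),
# keep only headers whose seq exceeds the seq announced in the pair's last
# valid route request.

class HeaderWatcher:
    def __init__(self):
        self.lists = {}   # (s, d) -> list of recorded headers (List(s, d, x))
        self.floor = {}   # (s, d) -> seq from the last valid route request

    def on_route_request(self, pair, seq):
        """A valid route request resets the floor and prunes stale headers."""
        self.floor[pair] = seq
        self.lists[pair] = [h for h in self.lists.get(pair, []) if h["seq"] > seq]

    def on_header(self, pair, header):
        """Record an overheard, signature-checked header if it is fresh."""
        if header["seq"] > self.floor.get(pair, -1):
            self.lists.setdefault(pair, []).append(header)
```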
In Section 17.5, we will also show how to modify the schemes to further decrease the storage requirement.
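As a concrete illustration, the record-keeping just described might be sketched as follows. The class and method names are hypothetical, not from the chapter, and signature verification is abstracted as a caller-supplied callback:

```python
# Hypothetical sketch of the header-watcher bookkeeping: keep, per traffic
# pair (s, d), only those verified packet headers whose sequence numbers
# exceed the pair's last valid route-request sequence number.

class HeaderWatcher:
    def __init__(self, verify_signature):
        self.verify = verify_signature     # callback: True if sig_h is valid
        self.last_request_seq = {}         # (s, d) -> seq of last valid route request
        self.records = {}                  # (s, d) -> {seq: (route, sig_h)}

    def on_route_request(self, s, d, seq):
        """A new valid route request resets the window: headers with sequence
        numbers at or below `seq` need no longer be kept."""
        self.last_request_seq[(s, d)] = seq
        kept = {q: h for q, h in self.records.get((s, d), {}).items() if q > seq}
        self.records[(s, d)] = kept

    def on_header(self, s, d, route, seq, sig_h):
        """Record an overheard packet header if its signature checks out and
        it falls after the pair's last valid route request."""
        if not self.verify(route, seq, sig_h):
            return False
        if seq <= self.last_request_seq.get((s, d), -1):
            return False                   # outside the monitored window
        self.records.setdefault((s, d), {})[seq] = (route, sig_h)
        return True
```

The pruning in `on_route_request` is what keeps the storage requirement bounded regardless of how long the node stays in the network.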
17.3.3
Detection of traffic-injection attacks
In this chapter each good node in the network will perform detection of traffic-injection attacks on the basis of what it has observed. Specifically, for each source–destination pair (s, d) with List (s, d, x) being non-empty in a good node x's records, the following detection rules will be used by x to check whether s has launched traffic-injection attacks:
• Rule 1: x will mark s as malicious if List (s, d, x) is not empty and the source–destination pair (s, d) is illegitimate.
• Rule 2: x will mark s as malicious if x has received a request issued by an illegitimate source–destination pair (s, d).
• Rule 3: for any packet header {R, seqs (s, d), sigh} in List (s, d, x), x will mark s as malicious if route R has more than L maxhop relays.
• Rule 4: x will mark s as malicious if x has detected that there exist two valid packet headers {R, seqs (s, d), sigh} and {R′, seq′s (s, d), sig′h} in the set List (s, d, x) with seqs (s, d) = seq′s (s, d) but R ≠ R′.
• Rule 5: let seqmax (s, d) be the largest sequence number corresponding to the source–destination pair (s, d) at time t (i.e., seqmax (s, d) = f s,d (t) at time t); then x will mark s as malicious if x has detected that there exists a sequence number seqs (s, d) in List (s, d, x) with seqs (s, d) > seqmax (s, d).
The first two rules imply that only legitimate source–destination pairs can inject packets into the network. Rule 3 implies that no route should have more than L maxhop relays. Rule 4 handles multiple-route IDPAs. Rule 5 handles attackers who inject more packets than they should. In summary, rules 4 and 5 prevent attackers from injecting more packets than they are allowed to by associating each packet with a unique sequence number. That is, no two packets from the same traffic pair should have the same sequence number, and the sequence number has to increase monotonically. Once x has detected that s is launching traffic-injection attacks, x will also inform the other nodes in the network by broadcasting an ALERT message that includes evidence such as the corresponding packet headers. When other good nodes have received the ALERT message, after necessary verification (i.e., checking that the signatures are valid), they will also mark s as malicious. Next we analyze the effects of possible impersonation attacks that can be launched by attackers. Under the mechanisms considered in this chapter, in order to impersonate a good node s that has not been compromised, an attacker m has to first record the packets that s has transmitted, and then later forward or broadcast these packets. Specifically, there are two situations.
• Situation 1: m recorded a query packet issued by s and rebroadcast it later. However, since this query packet has already been seen by all other nodes in the network owing to the flooding nature of the query message, no node will further process this query packet.
• Situation 2: m recorded a data packet issued by s and forwarded it later. However, since nodes on the route associated with this data packet will process this packet at most once, forwarding of this packet at time t1 by m cannot cause damage to other nodes.
In summary, impersonation attacks cannot cause further damage to good nodes in the network. Furthermore, it can readily be checked that, as long as s is good and has not been compromised, the probability that x will mark s as malicious is 0. That is, the false-alarm ratio of the above detection rules is 0.
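The five rules can be sketched as a single check run by x over its records. The function below is an illustrative simplification, not the chapter's exact procedure: names and data layout are assumptions, Rules 1 and 2 are collapsed into one legitimacy test, and signature verification is assumed to have already happened:

```python
# Illustrative check of detection rules 1-5 from a single node x's viewpoint.
# `headers` plays the role of List(s, d, x); legitimacy, L_maxhop, and the
# bound seq_max(s, d) = f_{s,d}(t) are supplied by the caller.

def is_source_malicious(headers, pair_is_legitimate, l_maxhop, seq_max):
    """headers: list of (route, seq) tuples whose signatures already verified.
    Returns (True, rule) on the first rule that fires, else (False, None)."""
    if headers and not pair_is_legitimate:
        return True, "rule1/2"             # traffic from an illegitimate pair
    seen = {}                              # seq -> route, for rule 4
    for route, seq in headers:
        if len(route) - 2 > l_maxhop:      # route includes s and d themselves
            return True, "rule3"           # too many relays
        if seq in seen and seen[seq] != route:
            return True, "rule4"           # same seq on two different routes
        seen[seq] = route
        if seq > seq_max:
            return True, "rule5"           # seq beyond the allowed bound
    return False, None
```

For example, two verified headers carrying the same sequence number on two different routes trigger rule 4, the multiple-route IDPA case.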
17.3.4
Overhead analysis
Now we analyze the overhead associated with the above defense mechanisms. According to the above description, since each good node uses solely its own observations to conduct attacker detection, there is no extra communication overhead. The computation overhead mainly comes from generating and verifying the signatures for each sent and received packet; specifically, it comes from generating and verifying the signatures for packet headers. Compared with the packet body, the length
of a packet header is much smaller, therefore the extra computation overhead is also small. Meanwhile, when applying rules 4 and 5 to perform attacker detection, a node also needs to go through the header records it has stored, which may incur some extra computational overhead. However, since the list of records is usually small, this extra computational overhead is not significant. Now we analyze the storage overhead. The main drawback of the above defense mechanism lies in the fact that it requires some extra memory, while in some mobile nodes storage may be a precious resource. Each good node needs to store the set of legitimate source–destination traffic pairs as well as the upper bounds of their traffic-injection rates. Meanwhile, for each source–destination pair, it also needs to store the set of packet headers received since this node pair's last valid route request. If there are too many legitimate source–destination pairs, the storage overhead can be huge. However, in many applications of ad hoc networks, such as wireless ad hoc sensor networks, the number of legitimate source–destination pairs is usually limited. Further, in mobile ad hoc networks, route requests need to be issued very frequently, therefore the number of packet headers that each node needs to store is also limited. In Section 17.5 we will discuss how to further reduce the storage overhead by proposing some centralized detection mechanisms with decentralized implementation.
17.4
Theoretical analysis
According to the secure route-discovery procedure described in Section 17.3.1, a good node x will forward at most one route request in any time interval Tx (s, d) for any legitimate source–destination (SD) pair (s, d), and will not forward route requests for any illegitimate SD pairs; therefore the total damage that can be caused by attackers launching query-flooding attacks is bounded. Next we analyze the effects of IDPAs. Assume that node s is malicious and tries to launch an IDPA with d being the destination of the packets injected by s. To avoid immediate detection of node s as being malicious, the SD pair (s, d) must be legitimate and d must be malicious too; otherwise s can easily be detected by d as malicious. According to Section 17.2, there are three possible ways to launch an IDPA: a simple IDPA, a long-route IDPA, and a multiple-route IDPA. We first consider the simple IDPA. According to Section 17.3.1, in order for good nodes to forward packets for s, s has to increase the sequence number seqs (s, d) by 1 after each packet delivery. Unless all nodes on the selected route are malicious, which makes no sense, the good nodes on route R can easily detect that s is launching an IDPA by comparing the received packets' sequence numbers with f s,d (t) defined in Section 17.3.3. That is, when they launch a simple IDPA, the attackers can be detected immediately and can cause only negligible damage. If s launches a long-route IDPA, since many more good nodes will be involved, s can cause more damage than by launching a simple IDPA. However, as described in Section 17.3.1, the maximum allowable number of hops per route is bounded by L maxhop, and good nodes will drop all packets whose associated number of hops is greater than L maxhop. Therefore the damage is upper-bounded by f s,d (t) L maxhop.
Finally we consider the multiple-route IDPA. For the attacker to avoid being detected immediately, the packet-injection rate for each route must conform to f s,d (t), and the selected routes must be node-disjoint, that is, no selected routes should share any common good node except for s and d, otherwise, if a good node x lies on more than one route from s to d, one can easily detect whether s and d have launched a multiple-route IDPA. Meanwhile, packets passing through the same route should have different sequence numbers in order for good nodes on the route to forward them. Depending on whether s allows packets on different routes to share the same sequence numbers and what transmission techniques s will use, there are three cases. • Case 1: s does not allow packets on different routes to share the same sequence numbers. Since seqs (s, d) ≤ f s,d (t) is required in order to let s avoid being detected immediately, in this case s achieves no extra gain compared with launching a simple IDPA. • Case 2: s allows packets on different routes to share the same sequence numbers, and transmits packets omnidirectionally. Since s’s neighbors will keep monitoring s’s transmission of packets, they can easily detect that some packets sent by s through different routes have the same sequence number, which indicates that s is launching an IDPA. Therefore, if s can only transmit packets omnidirectionally, s should not launch a multiple-route IDPA. • Case 3: s allows packets on different routes to use the same sequence numbers, and can transmit packets using directional transmission techniques. Since now s’s neighbors cannot receive those of s’s transmissions which are not targeted on them, they have little chance of directly detecting that s is launching an IDPA. However, since good nodes in the network use omnidirectional transmission techniques, the probability that s can successfully launch a multiple-route IDPA without being detected still approaches 0, as will be shown next. 
Next we derive the upper bounds for the probability that s is able to successfully pick n node-disjoint routes to inject data packets without being detected immediately, as illustrated in Case 3. We consider the most general situation that the destination d does not know the exact locations of those nodes within its transmission range, and all d’s neighbors are good nodes. Given a node x and a certain area S, we say that x is randomly deployed inside S according to the two-dimensional (2D) uniform distribution if for any subarea S1 ⊂ S we have P(x ∈ S1 |x ∈ S, S1 ⊂ S) = S1 /S. Then we have the following theorem. Theorem 17.4.1 Suppose that N good nodes are independently deployed inside a large area of extent S according to the 2D uniform distribution. Suppose that all of these N nodes use omnidirectional transmission techniques and r is their common maximum transmission distance. Suppose that the SD pair (s, d) colludes to launch an IDPA with s using a directional transmission technique and s and d not knowing the exact location of the nodes inside d’s receiving range (which is r ). If the defending mechanisms described in Section 17.3 are used by good nodes, then the probability P(n, r, N ) that the two
attackers can successfully pick n node-disjoint routes to launch a multiple-route IDPA without being detected immediately is upper-bounded by

P(n, r, N) ≤ Σ_{k=n}^{N} P1(k, N) (3√3/(4π))^(n(n−1)/2) [n (3√3/(4π))^(n−1)]^(k−n),    (17.2)
where P1(k, N) is defined as follows:

P1(k, N) = C(N, k) (πr²/S)^k (1 − πr²/S)^(N−k),    (17.3)

where C(N, k) denotes the binomial coefficient "N choose k".
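Under assumed parameters, the bound (17.2)-(17.3) is straightforward to evaluate numerically. The snippet below is a sketch; the choices of N and of the density πr²/S are illustrative only:

```python
# Numerical evaluation of the upper bound (17.2)-(17.3). The constant
# 3*sqrt(3)/(4*pi) is the geometric probability established in Lemma 17.4.4.
import math

def p1(k, N, q):
    """(17.3) with q = pi*r^2/S: probability that exactly k of N nodes
    fall inside the destination's receiving range Cd."""
    return math.comb(N, k) * q**k * (1 - q)**(N - k)

def bound(n, N, q):
    """(17.2): upper bound on the attackers' success probability when
    picking n node-disjoint routes."""
    c = 3 * math.sqrt(3) / (4 * math.pi)
    total = 0.0
    for k in range(n, N + 1):
        total += p1(k, N, q) * c**(n * (n - 1) // 2) * (n * c**(n - 1))**(k - n)
    return total
```

Because both geometric factors are below 1, the bound decays very quickly in n, matching the later observation that the curves for n = 4 and n = 5 are nearly zero.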
Before proving Theorem 17.4.1, we first prove the following lemmas.

Lemma 17.4.1 Assume that N nodes are independently deployed inside an area of extent S according to the 2D uniform distribution. For any node x inside subarea S1 ⊂ S and for any subarea S2 ⊂ S1, we have

P(x ∈ S2 | x ∈ S1, S2 ⊂ S1 ⊂ S) = S2/S1.    (17.4)
PROOF.

P(x ∈ S2 | x ∈ S1, S2 ⊂ S1 ⊂ S) = P(x ∈ S2, x ∈ S1 | S2 ⊂ S1 ⊂ S) / P(x ∈ S1 | S2 ⊂ S1 ⊂ S)
= P(x ∈ S2 | S2 ⊂ S) / P(x ∈ S1 | S1 ⊂ S) = S2/S1.
That is, the conditional distribution of x in S1 is independent of S, which is also the 2D uniform distribution. Lemma 17.4.2 Assume that nodes x and y are independently deployed inside a certain area S according to the 2D uniform distribution. Given x ∈ S1 ⊂ S and y ∈ S1 ⊂ S, and given any subareas Sx ⊂ S1 and S y ⊂ S1 , we have P(x ∈ Sx , y ∈ S y |x ∈ S1 , y ∈ S1 , Sx ⊂ S1 , S y ⊂ S1 ) = P(x ∈ Sx |x ∈ S1 , Sx ⊂ S1 )P(y ∈ S y |y ∈ S1 , S y ⊂ S1 ).
(17.5)
PROOF. Since the deployment of x and that of y are independent of each other, we have P(x ∈ Sx, y ∈ Sy | x ∈ S1, y ∈ S1, Sx ⊂ S1, Sy ⊂ S1) = P(x ∈ Sx | x ∈ S1, Sx ⊂ S1, y ∈ Sy ⊂ S1) × P(y ∈ Sy | y ∈ S1, Sy ⊂ S1, x ∈ S1, Sx ⊂ S1) = P(x ∈ Sx | x ∈ S1, Sx ⊂ S1) P(y ∈ Sy | y ∈ S1, Sy ⊂ S1). That is, the distribution of x and that of y inside S1 are independent of each other.
Lemma 17.4.3 Let S be a circular area with center o and radius R. Assume that node x lies within S and P(x ∈ S1 | x ∈ S, S1 ⊂ S) = S1/S. Let d(x) denote the random variable of the distance from x to o; then

P(d(x) = r | x ∈ S) = 2r/R² for 0 ≤ r ≤ R, and 0 for r > R.    (17.6)

PROOF. For any 0 < r ≤ R, we have

P(d(x) = r | x ∈ S) = lim_{ε→0} [πr²/(πR²) − π(r − ε)²/(πR²)] / ε = 2r/R².    (17.7)

For any r > R, we have x ∉ S, which implies P(d(x) = r | x ∈ S) = 0.
Lemma 17.4.4 Let S be a circular area with center o and radius R. Given two nodes a and b independently deployed within S according to the 2D uniform distribution, we have

P(|ab| > R | a ∈ S, b ∈ S) = 3√3/(4π),    (17.8)

where |ab| denotes the distance between a and b.

PROOF. We use Figure 17.3 to help illustrate the proof. Let r denote the distance from a to o, let Co denote the circle with center o and radius R, and let Ca denote the circle with center a and radius R. Let c and d be the intersecting points between the two circles Co and Ca, and let α = ∠coa = ∠doa. Let SI(r) denote the area of the intersection of the two circles Co and Ca with |oa| = r, and let SII(r) denote the area of S minus SI(r). Then we have

P(|ab| > R | a ∈ S, b ∈ S) = ∫_0^R (2r/R²) (SII(r)/S) dr,    (17.9)

where (17.9) comes from Lemma 17.4.3. We first calculate SI(r):

SI(r) = 2 [ R² arccos(r/(2R)) − (r/2) √(R² − (r/2)²) ],    (17.10)

Figure 17.3 Illustration for proving Lemma 17.4.4.
where α = arccos[r/(2R)]. Then SII(r) can be calculated as

SII(r) = πR² − SI(r) = R² [π − 2 arccos(r/(2R))] + r √(R² − (r/2)²).    (17.11)
On substituting (17.11) into (17.9) and integrating, we have P(|ab| > R | a ∈ S, b ∈ S) = 3√3/(4π).

Lemma 17.4.5 Assume that n nodes A = {a1, . . . , an} are independently deployed inside a circular area S according to the 2D uniform distribution with R being the radius; then we have

P(|ai aj| > R : ∀ai, aj ∈ A) ≤ P(|a1 a2| > R)^(n(n−1)/2).    (17.12)
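As a side check, the constant 3√3/(4π) ≈ 0.4135 established in Lemma 17.4.4 can be verified with a quick Monte Carlo experiment; drawing the radius as R√u for uniform u reproduces the density 2r/R² of Lemma 17.4.3. Parameter choices (trial count, seed) are illustrative:

```python
# Monte Carlo sanity check of Lemma 17.4.4: for two points drawn uniformly
# from a disk of radius R, P(|ab| > R) = 3*sqrt(3)/(4*pi) ~ 0.4135.
import math
import random

def estimate(trials=200_000, R=1.0, seed=1):
    rng = random.Random(seed)

    def point():
        # Uniform point in the disk: radius density 2r/R^2 (Lemma 17.4.3).
        r = R * math.sqrt(rng.random())
        t = rng.uniform(0, 2 * math.pi)
        return r * math.cos(t), r * math.sin(t)

    hits = 0
    for _ in range(trials):
        (x1, y1), (x2, y2) = point(), point()
        if math.hypot(x1 - x2, y1 - y2) > R:
            hits += 1
    return hits / trials
```

With 200 000 trials the estimate lands well within one percentage point of the closed-form value.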
PROOF.

P(|ai aj| > R : ∀ai, aj ∈ A)
= P(|a1 a2| > R, . . . , |a1 an| > R, . . . , |a(n−1) an| > R)
= P(|a1 a2| > R | |a1 a3| > R, . . . , |a(n−1) an| > R) × P(|a1 a3| > R, . . . , |a(n−1) an| > R)
= P(|a1 a2| > R | |a1 ai| > R, |a2 ai| > R : ∀3 ≤ i ≤ n) × P(|a1 a3| > R, . . . , |a(n−1) an| > R).

Given |a1 ai| > R and |a2 ai| > R for any 3 ≤ i ≤ n, we can draw a circle with center ai and radius R. To conform to the statement that ∀ai, aj ∈ A, |ai aj| > R, neither a1 nor a2 can lie inside the area of intersection of this circle and the circle having o as its center. That is, a1 and a2 are now restricted to lying within an area S′ ⊂ S, which is smaller than S. So the probability that |a1 a2| is larger than R under such restrictions will become smaller than it would be without such restrictions. That is,

P(|a1 a2| > R | |a1 ai| > R, |a2 ai| > R : ∀3 ≤ i ≤ n) ≤ P(|a1 a2| > R).    (17.13)

Following the same arguments, we have

P(|ai aj| > R : ∀ai, aj ∈ A) ≤ ∏_{1≤i<j≤n} P(|ai aj| > R).    (17.14)
Since there are in total n(n − 1)/2 items in the product, and the nodes in A are symmetric, we can conclude that (17.12) holds.

Lemma 17.4.6 Assume that n + m nodes {a1, . . . , an, b1, . . . , bm} are independently deployed inside a circular area S according to the 2D uniform distribution with R being the radius. Let A = {a1, . . . , an} and B = {b1, . . . , bm}; then we have

P(|ai bl| > R or |aj bl| > R : ∀ai, aj ∈ A, bl ∈ B, i ≠ j) ≤ [n P(|a1 b1| > R)^(n−1)]^m.    (17.15)
PROOF. Let Ai = A − {ai}. Given any b ∈ B, saying that |ai b| > R or |aj b| > R : ∀ai, aj ∈ A, ai ≠ aj is equivalent to saying that there exists at least one Ai with |xb| > R for any x ∈ Ai; that is,

P(|ai b| > R or |aj b| > R : ∀ai, aj ∈ A, ai ≠ aj)
= P((|xb| > R : ∀x ∈ A1) or . . . or (|xb| > R : ∀x ∈ An))
≤ Σ_{i=1}^{n} P(|xb| > R : ∀x ∈ Ai)
= n P(|xb| > R : ∀x ∈ A1) ≤ n P(|a1 b| > R)^(n−1).

Owing to the symmetry and independence of the m nodes in B, we can conclude that (17.15) holds.

Now Theorem 17.4.1 can be proved as follows.

PROOF. Let Cd denote the circle with center d and radius r. For s and d to be able to successfully pick n node-disjoint routes to launch a multiple-route IDPA without being detected immediately, they need to pick at least n distinct nodes inside Cd, one for each route, to act as the last intermediate nodes on these routes. Since s and d do not know the exact locations of the nodes inside Cd, these n nodes can only be randomly selected. It is easy to see that the following three necessary conditions must be satisfied in order for the attackers to succeed.

C1. There exist at least n nodes inside Cd, otherwise s and d can never have n node-disjoint routes between them.

C2. Given that there are k ≥ n nodes inside Cd, and that s and d are to randomly select n nodes among them to act as the last intermediate nodes for these n node-disjoint routes, then, for any two nodes among the n nodes selected by s and d, no node should lie within the other node's transmission range. Otherwise, if any two of the n nodes lie within each other's transmission range, they can easily detect that s is launching a multiple-route IDPA.

C3. Given that the n nodes have been selected by s and d, there should exist no other good nodes (nodes excluding the selected n good nodes) that can simultaneously lie within any two of these n nodes' transmission ranges. Otherwise, if there exists one such node, it can easily detect that s is launching a multiple-route IDPA.
Let P1(k, N) denote the probability that there are k nodes inside Cd, let P2(n, r, k) denote the probability that condition C2 can be satisfied given that the n nodes are randomly selected among k ≥ n nodes inside Cd, and let P3(n, r, k, N) denote the probability that condition C3 can be satisfied given that there are k ≥ n nodes inside Cd and the n nodes have been determined by s and d. It is easy to see that

P(n, r, N) ≤ Σ_{k=n}^{N} P1(k, N) P2(n, r, k) P3(n, r, k, N).    (17.16)
Since nodes are independently deployed inside S according to the 2D uniform distribution, we immediately have

P1(k, N) = C(N, k) (πr²/S)^k (1 − πr²/S)^(N−k).    (17.17)

Given that k nodes lie within Cd, according to Lemmas 17.4.1 and 17.4.2, this is equivalent to saying that these k nodes are independently deployed inside Cd according to the 2D uniform distribution. According to Lemmas 17.4.4 and 17.4.5, we have

P2(n, r, k) ≤ (3√3/(4π))^(n(n−1)/2).    (17.18)

To simplify the analysis, we consider a modified version of condition C3: given any two nodes among the selected n nodes, there should exist no other good node inside Cd but not belonging to these n nodes that can simultaneously lie within these two nodes' transmission ranges. That is, only a small subset of the applicable nodes need be considered. Let P3′(n, r, k, N) denote the probability that the modified condition C3 can be satisfied given that there are k ≥ n nodes inside Cd and the n nodes have been determined by s and d; then we must have P3(n, r, k, N) ≤ P3′(n, r, k, N). According to Lemmas 17.4.4 and 17.4.6, the probability that the modified condition C3 can be satisfied is upper-bounded by

P3′(n, r, k, N) ≤ [n (3√3/(4π))^(n−1)]^(k−n).    (17.19)
By combining the above results, we can conclude that both (17.2) and Theorem 17.4.1 hold.

Theorem 17.4.2 The probability that two colluding attackers s and d can successfully pick six or more node-disjoint routes to launch a multiple-route IDPA without being detected immediately is 0.

PROOF. For the attackers s and d (assuming that s is the source and d is the destination) to simultaneously pick six routes along which to launch a multiple-route IDPA, they need to pick six nodes within d's receiving range, that is, the circular area Cd with center d and radius r. Let A = {a1, a2, a3, a4, a5, a6} denote the set of six nodes selected by s and d that lie inside Cd. One necessary condition for the attackers to succeed is that, for any ai, aj ∈ A with aj ≠ ai, we must have |ai aj| > r. Now we show that this is not achievable. If there exist ai, aj ∈ A with ∠ai d aj = 0, then we must have |ai aj| ≤ r. Next we need only consider the situations in which, for any ai, aj ∈ A, ∠ai d aj ≠ 0. For each node ai ∈ A, we draw a radial line originating from d and passing through ai, and let ai′ be the point of intersection between the radial line d ai and the circumference of the circle Cd. Any two radial lines will partition the circular area Cd into two sectors. We say that a sector is singleton if none of the nodes in A lie inside this
sector (including the arc but excluding the two radial lines). It is easy to see that the six nodes will partition the circle into six singleton sectors. To satisfy the above necessary condition, the angle of each singleton sector should be more than π/3: if the angle of a singleton sector is no more than π/3, let ai be the node on one side of this sector, and aj be the node on the other side of this sector; then, for any point x that lies within the segment d ai and any point y that lies within the segment d aj, we must have |xy| ≤ r. Since we have six singleton sectors, and each singleton sector has an angle of more than π/3, the summed angle is more than 2π, which contradicts the fact that the total angle around the center of a circle is 2π. Given this conclusion, it is trivial to show that having more than six routes is also not achievable. We have also evaluated through experiments the upper bounds of the success ratio for two colluding attackers s and d to launch a multiple-route IDPA with s using a directional transmission technique. Given a rectangular area of 20r × 20r, we put d at the center of the area. At each round of the experiment, we independently deploy 400r²ρ nodes inside the area according to the 2D uniform distribution and randomly pick n nodes inside d's receiving range, where ρ is referred to as the node density. We see that (s, d) may succeed only if all of the three necessary conditions presented in the proof of Theorem 17.4.1 are satisfied. For each configuration of route number n and node density ρ, 10^7 experiments have been conducted, and the upper bounds have been obtained as the ratio of the total number of successful attacks to the total number of experiments.
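A condensed version of this experiment can be sketched as follows. The trial count and the density convention used here are scaled-down assumptions for illustration, not the chapter's exact 10^7-trial setup:

```python
# Sketch of the experiment: deploy nodes uniformly in a square, put d at the
# center, randomly pick n nodes inside d's range, and count how often the
# necessary conditions C1-C3 of Theorem 17.4.1 all hold.
import math
import random

def success_ratio(n, rho, r=1.0, side=20.0, trials=2000, seed=7):
    rng = random.Random(seed)
    # rho = normalized density: average number of nodes in an area pi*r^2.
    num_nodes = int(side * side * rho / (math.pi * r * r))
    wins = 0
    for _ in range(trials):
        nodes = [(rng.uniform(-side / 2, side / 2), rng.uniform(-side / 2, side / 2))
                 for _ in range(num_nodes)]
        inside = [p for p in nodes if math.hypot(*p) <= r]   # d sits at the origin
        if len(inside) < n:
            continue                                         # C1 fails
        picked = rng.sample(inside, n)
        others = [p for p in nodes if p not in picked]
        # C2: the n selected nodes must be pairwise out of each other's range.
        if any(math.hypot(a[0] - b[0], a[1] - b[1]) <= r
               for i, a in enumerate(picked) for b in picked[i + 1:]):
            continue
        # C3: no other node may hear two of the selected nodes at once.
        if any(sum(math.hypot(a[0] - q[0], a[1] - q[1]) <= r for a in picked) >= 2
               for q in others):
            continue
        wins += 1
    return wins / trials
```

For n = 6 the C2 check can never pass, which is exactly the geometric impossibility proved in Theorem 17.4.2.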
Both experimental and theoretical upper bounds are plotted in Figure 17.4, where "theor" denotes the theoretical upper bounds obtained using (17.2), "exp" denotes the experimental upper bounds obtained through the experiments described above, and "n" denotes the number of node-disjoint routes to be picked by the malicious SD pair (s, d).

Figure 17.4 Upper bounds of attackers' success probability (attacker's success ratio versus normalized node density, for n = 2, 3, 4, 5, theoretical and experimental).
In Figure 17.4, the normalized node density is defined as the average number of nodes inside an area of πr². Since both the theoretical and the experimental upper bounds corresponding to n = 4 and n = 5 are almost equal to 0 for all node densities illustrated (e.g., for n = 4, all values are less than 2 × 10^−3), the four curves associated with n = 4, 5 almost overlap into a single curve, which is the lowest curve illustrated in Figure 17.4. For n = 2, 3, we can see that the success ratio initially increases with increasing node density until it arrives at a peak, whereupon it decreases with further increase in node density, which is consistent with (17.2). The reason is as follows: with increasing node density, the probability P1 that condition C1 can be satisfied increases monotonically from 0 to 1, the probability P2 that condition C2 can be satisfied remains unchanged, and the probability P3 that condition C3 can be satisfied decreases monotonically from 1 to 0; when ρ is small, the value of P1 dominates the bound, whereas, when ρ is large, the value of P3 dominates the bound. From Figure 17.4 we can also see that there exist gaps between the theoretical and the experimental results. The reason is that, when we calculate the probability of condition C3 being satisfied, only a subset of the applicable nodes has been considered, which makes the theoretical upper bounds a little looser (higher) than the experimental upper bounds. In the above experiments we have assumed that all packets can be successfully received as long as the distance between the transmitter and the receiver is no more than the transmission range r. However, in reality, the channel is usually lossy, and not all packets can be successfully received. This can be taken advantage of by the attackers to increase their success probability.
Figure 17.5 illustrates the experimental results concerning the attackers' success probability for launching a packet-injection attack via two node-disjoint routes under the condition of lossy channels. Specifically, each curve corresponds to a certain packet-loss ratio (10%, 20%, 30%, 40%, and 50%).

Figure 17.5 Upper bounds of attackers' success probability for launching a packet-injection attack via two node-disjoint routes under the condition of lossy channels (attacker's success ratio versus normalized node density).

From these results we can see that lossy channels
can certainly increase the attackers' success probability. However, we can also see that, even when half of the packets have been lost, the maximum possible success ratio is still no more than 50%, even for two node-disjoint routes. The above upper bounds are evaluated on the basis of a fixed topology, that is, the set of links E(t) stays unchanged for all values of the time index t. However, owing to node mobility, E(t) will change over time, so s needs to update routes frequently. Then, after several route updates, the probability that s still has not been detected as malicious will be very small. For example, assuming that each route update is independent, after five route updates, even for n = 2, the probability that s has not been detected as malicious is less than 0.06%. That is, an attacker has a negligible chance of avoiding detection. In summary, when the malicious SD pair (s, d) tries to launch an IDPA, to avoid being detected and to maximize the damage, the optimal strategy is to use only one route to inject data packets while conforming to both the maximum hop number L maxhop and the legitimate rate λs,d, which is equivalent to saying that the optimal strategy is not to launch an IDPA at all.
17.5
Centralized detection with decentralized implementation
The defense system described in Section 17.3 is fully distributed. However, the drawback of this system is that it may have a relatively high storage complexity. Meanwhile, each node needs to have prior knowledge of the set of legitimate traffic pairs, which might not be available to all nodes in general. Next we describe a modified version of the defense system. In the modified version, instead of performing attacker detection by itself, each good node will report the observed information to certain nodes, which we call centralized detectors. The centralized detectors will then perform attacker detection on the basis of the traffic information collected. In general, the centralized detectors will be under stronger protection than normal nodes and may have more powerful computational capability and more storage. The detailed description of the modified defense system is as follows. First, the route-discovery and packet-delivery procedure is the same as that described in Section 17.3.1. Second, the monitoring mechanism is still the header-watcher mechanism as described in Section 17.3.2. To reduce the storage overhead, we make the following modification: instead of storing all valid packet headers that it has read, in most cases a good node need not store any packet headers locally, but only the three-tuple (traffic pair, sequence number, route) associated with each valid packet header that has been read. A good node needs to record a whole packet header only if it has been requested to do so by the detectors, as will be explained next. Furthermore, instead of reporting each item of packet-header information separately, each good node will report the packet-header information that has been read in a batch mode, that is, each report consists of many items of packet-header information.
Assume that in the previous fully distributed mechanism a good node needs to store n packet headers, each of l bytes (l is usually more than 100 bytes for a route request with 10 relays, considering the extra signatures); then, in the modified defense
mechanism, it needs to store only n × l̃ bytes, where l̃ is usually much smaller than l. For example, for a route with 10 relays, each node ID uses 8 bits and the sequence number uses 32 bits, so l̃ is only 14 bytes. Further, normal nodes do not need to know which source–destination pairs are legitimate or their legitimate traffic-injection rates. The centralized detectors perform the job of detecting traffic-injection attacks by applying detection rules similar to those described in Section 17.3.3. The major difference lies in the fact that, when a centralized detector performs detection, there are usually two steps involved. In the first step, the detector checks whether a node has injected two packets with the same sequence number, or whether a sequence number is larger than a specified upper bound, solely on the basis of the partial packet-header information that has been collected, that is, without checking the packet-header signatures. If either of the two conditions has been satisfied, the detector will then request those nodes which reported such information to submit full packet headers. That is, the centralized detector needs solid evidence in order to mark a node as an attacker. Now we use an example to illustrate the modified detection procedure. Assume that node a has reported a sequence number seq1 and route R1 associated with traffic pair (s, d), and node b has reported a sequence number seq2 and route R2 associated with traffic pair (s, d). After the centralized detector has received these reports, it may find that seq1 = seq2 but R1 ≠ R2. Then the detector has reason to suspect that s has launched traffic-injection attacks. When this happens, the detector will ask nodes a and b to report the full packet headers next time, so that it can collect concrete evidence on the basis of which to charge s. From the above description we can see that, although the detection is performed in a centralized manner, the monitoring is still fully distributed.
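The detector's first step might look as follows in code. The report layout and function name are assumptions for illustration, and only the duplicate-sequence-number condition is shown; the sequence-number-upper-bound check would be analogous:

```python
# Illustrative first-step check at a centralized detector: scan reported
# (reporter, pair, seq, route) tuples for a duplicate sequence number on two
# different routes, and name the reporting nodes from which to request full
# packet headers as concrete evidence.

def find_conflicts(reports):
    """reports: list of (reporter, (s, d), seq, route).
    Returns list of (pair, seq, reporters) needing full-header evidence."""
    seen = {}                              # (pair, seq) -> (route, reporter)
    requests = []
    for reporter, pair, seq, route in reports:
        key = (pair, seq)
        if key in seen and seen[key][0] != route:
            requests.append((pair, seq, [seen[key][1], reporter]))
        else:
            seen[key] = (route, reporter)
    return requests
```

In the example from the text, reports from nodes a and b carrying the same sequence number on routes R1 ≠ R2 would produce one evidence request naming a and b.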
Now we analyze the detection performance of the modified defense system. It is easy to see that either a simple IDPA or a long-route IDPA can easily be detected. Meanwhile, for a multiple-route IDPA, requiring packets sent via different routes to use different sequence numbers produces no gain from the attacker's point of view, and allowing packets sent via different routes to use the same sequence number will be detected immediately when an omnidirectional transmission technique is used. Now we focus on the remaining scenario, in which the attacker allows packets sent via different routes to use the same sequence number and uses a directional transmission technique to avoid being detected. Given that an attacker s picks n node-disjoint routes to simultaneously inject packets and packets on different routes share the same set of sequence numbers, as long as at least two nodes on the selected routes are good, it is easy to check that there is zero probability that s can avoid being detected. In other words, attackers have no chance of launching an IDPA without being detected. That is to say, under the modified defense mechanism, the attackers' success probability is much lower than that under the previous fully distributed defense mechanism, which is the major advantage of the modified mechanism. Compared with the fully distributed defense system described in Section 17.3, the storage overhead of the modified defense system can be dramatically reduced, but some extra communication overhead is introduced owing to the fact that each node needs to
report to the centralized detector. However, since each report is very small compared with a data packet, the extra communication overhead is negligible. For example, if the average packet size is 1000 bytes and the report size is 20 bytes, then the overall increase in traffic is only 2%. If memory is more precious than communication resources, the modified detection scheme should be preferred. Until now we have assumed that each good node keeps listening to all the packet transmissions in its neighborhood. Next we show how to further decrease the overhead by letting nodes selectively listen to packet transmissions, with negligible degradation of the detection performance. Specifically, each node can selectively listen to its neighbors' transmissions with a certain probability p, which we call probabilistic monitoring. That is, when a packet-transmission event happens in a good node's neighborhood, there is a probability p that this node will monitor the transmission and report the observation to the centralized detector. Now, when an attacker has injected n packets with the same sequence number via n node-disjoint routes (where n > 1), the attacker is detected once at least two of the n transmissions are monitored, so it can avoid detection with probability no more than p(n) = (1 − p)^n + np(1 − p)^{n−1}. Furthermore, after the attacker has injected k packets, the probability that it will not be detected decreases to p(n)^k, which goes to 0 with increasing k. By applying probabilistic monitoring, the communication overhead can be further decreased by a fraction 1 − p, while the detection performance suffers only negligible degradation. One possible drawback of such a centralized detection mechanism is that the detector itself can also become the attackers' target. Besides increasing the protection level, one can also increase the number of centralized detectors.
For example, if there are two detectors in the network, then, even if one of them has been compromised, the other should still work well. In this case, each node can either submit reports to both detectors, or each time randomly pick one to which to submit reports, where the latter case is equivalent to halving p.
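The escape-probability bound can be sketched numerically. The closed form below follows the reconstruction above (detection once at least two of the n same-sequence-number packets are observed); the function names are illustrative.

```python
def escape_prob(p, n):
    # probability that at most one of the n injected copies is monitored:
    # (1-p)^n + n*p*(1-p)^(n-1)
    return (1 - p) ** n + n * p * (1 - p) ** (n - 1)

def escape_after_k(p, n, k):
    # probability of remaining undetected after k independent injections
    return escape_prob(p, n) ** k

p, n = 0.3, 3
print(round(escape_prob(p, n), 4))         # 0.784
print(round(escape_after_k(p, n, 10), 4))  # 0.0877
```

Even with a modest monitoring probability, repeated injections drive the attacker's escape probability toward zero geometrically.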
17.6 Simulation studies

In our simulations, nodes are randomly deployed inside a rectangular area, and each node moves according to the modified random waypoint model in [489]: a node starts at a random position, waits for a duration called the pause time that is modeled as an exponentially distributed random variable, and then randomly chooses a new location and moves toward it with a velocity uniformly chosen between vmin and vmax. The physical layer assumes that two nodes can communicate directly with each other successfully only if they are within each other's transmission range. The MAC-layer protocol simulates the IEEE 802.11 distributed coordination function (DCF) with a four-way handshaking mechanism [197]. Some of the simulation parameters are listed in Table 17.1. In the simulations, 50 good nodes are selected as the packet generators, and each will randomly pick a good node to which to send packets; therefore, the total number of source–destination pairs is 50. Each malicious node will also randomly pick another malicious node as the destination to which to inject packets. All source–destination
Table 17.1. Simulation parameters

Number of good nodes: 100
Number of malicious nodes: 0–50
Minimum velocity (vmin): 2 m/s
Maximum velocity (vmax): 10 m/s
Average pause time: 300 s
Dimensions of space: 1500 m × 1500 m
Maximum transmission range: 300 m
Average packet inter-arrival time: 1 s
Data-packet size: 1024 bytes
Link bandwidth: 1 Mbps
pairs (either good or malicious) are set to be legitimate, and, for each pair, packets are generated according to a Poisson process with a pre-specified traffic rate known by all nodes, such that the average packet inter-arrival time is 1 s. We set f_{s,d}(t) to be t + 3 for any source–destination pair (s, d). Malicious nodes that launch traffic-injection attacks will increase the average packet-injection rate by a factor of 10. All data packets are of the same size, and on average each route-request packet is of size 100 bytes. In our simulations, each configuration has been run for 20 independent rounds using different random seeds, and the results are averaged over all 20 rounds. For each round, the simulation time is set to 5000 s. We use the average energy efficiency and end-to-end throughput as metrics to measure the network performance. Here the average energy efficiency is defined as the total number of good nodes' successfully delivered packets divided by the total amount of energy spent by all good nodes, and the end-to-end throughput is defined as the total number of good nodes' successfully delivered packets divided by the total number of good nodes' packets that need to be sent. When we calculate the energy efficiency, only transmission energy consumption is considered. One reason is that transmission energy consumption plays a major role in overall energy consumption; another is that receiving energy consumption may vary dramatically among communication systems owing to their different implementations. We assume that the transmission energy needed per data packet is normalized to 1. We first investigate the tradeoff between limiting the route-request rate and system performance, although the performance also depends on other factors such as the mobility pattern, the number of nodes in the network, and the average number of hops per route.
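The two performance metrics just defined can be sketched as simple ratios (an illustrative helper; the sample numbers are hypothetical, not simulation results from the text):

```python
def avg_energy_efficiency(delivered_good_pkts, total_good_energy):
    # successfully delivered good-node packets per unit of good-node energy
    return delivered_good_pkts / total_good_energy

def end_to_end_throughput(delivered_good_pkts, scheduled_good_pkts):
    # fraction of good nodes' packets that were successfully delivered
    return delivered_good_pkts / scheduled_good_pkts

print(avg_energy_efficiency(4000, 20000))  # 0.2
print(end_to_end_throughput(4000, 5000))   # 0.8
```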
To better illustrate the tradeoff between limiting the route-request rate and system performance, the other parameters are held fixed. However, similar results can also be obtained when these parameters are varied. Figure 17.6 illustrates the tradeoff between limiting the route-request rate and network performance. In this set of simulations, all malicious nodes inject only route-request packets and do not inject any data packets or launch routing-disruption attacks. We assume that all good nodes have the same minimum route-request forwarding interval, denoted by Tmin, while all malicious nodes set their route-request rate to be 1 per second.

Figure 17.6 Limiting the route-request rate vs. system performance: (a) average energy efficiency and (b) end-to-end throughput vs. the minimum query-forwarding interval (seconds), for 10, 20, and 30 attackers.

From Figure 17.6(a) we can see that, as Tmin increases from 1 to 80 s, the energy efficiency of good nodes also increases, and it stays almost unchanged as Tmin increases from 80 to 160 s. The reason is that, when Tmin is small, attackers can waste good nodes' energy by injecting a large number of route-request packets and requesting others to forward them. Figure 17.6(b) shows that, as Tmin increases from 1 s to
20 s, the end-to-end throughput of good nodes remains almost unchanged, while, as Tmin increases from 80 s to 160 s, the end-to-end throughput of good nodes drops almost linearly. These results also motivate us to choose Tmin to be 40 s in the following simulations.

Figure 17.7 Effects of IDPA under different configurations: (a) average energy efficiency and (b) end-to-end throughput vs. the number of attackers (0–50), comparing IDPA under no defense, the general IDPA strategy, and the optimal IDPA strategy.

Figure 17.7 shows the simulation results under various types of IDPA. Here "IDPA under no defense" denotes that attackers just launch simple IDPAs and the underlying
system has not deployed any defense mechanism. "General IDPA strategy" denotes that attackers launch IDPAs while the mechanisms described in Section 17.3 are deployed, where both multiple-route IDPAs and long-route IDPAs have been simulated. Specifically, the half of the attackers who launch multiple-route IDPAs will try to pick as many node-disjoint routes as possible along which to inject packets, though on each route they will conform to the legitimate traffic-injection rate. The other half of the attackers, who launch long-route IDPAs, will try to pick routes that are as long as possible along which to inject traffic. "Optimal IDPA strategy" denotes that attackers use only one route to inject data packets, conforming both to the maximum number of hops L_maxhop = 10 and to the legitimate maximum packet-injection rate, with the mechanisms described in Section 17.3 deployed. In other words, the "optimal IDPA strategy" can also be regarded as no IDPA at all. From Figure 17.7(a) we can see that, when there is no defense mechanism against IDPA, even a simple IDPA can dramatically degrade the energy efficiency of good nodes. When the defense mechanisms described in Section 17.3 are employed, from the attackers' point of view, launching an IDPA provides no gain in terms of decreasing the energy efficiency of good nodes. However, if attackers apply the optimal IDPA strategy, they can still degrade the energy efficiency of good nodes. From Figure 17.7(b) we can see that, without the necessary defense mechanisms, as the number of attackers increases, even a simple IDPA can dramatically degrade the end-to-end throughput of good nodes owing to the congestion the attackers cause. When the defense mechanisms described in Section 17.3 are employed, launching IDPAs has hardly any effect on the performance in terms of good nodes' end-to-end throughput.
17.7 Summary and bibliographical notes

In this chapter we study the possible traffic-injection attacks that can be launched in mobile ad hoc networks, and present a set of mechanisms to defend against such attacks. Both query-flooding attacks and attacks involving injecting general data packets are investigated. Furthermore, for the latter type of attack, the situations in which attackers use some advanced transmission techniques, such as directional antennas or beamforming, to avoid being detected are also considered. Two sets of defense mechanisms are presented: one is fully distributed, while the other is centralized with decentralized implementation. The theoretical analysis has shown that, when these mechanisms are used, the best strategy for attackers is not to launch traffic-injection attacks. The results from extensive simulation studies agree with the theoretical analysis. Some related references can be found in [484] [486]. Regarding how to handle traffic-injection attacks in ad hoc networks where nodes belong to different authorities and pursue different goals, interested readers are advised to refer to [483].
18 Stimulation of attack-resistant cooperation
In autonomous ad hoc networks, nodes usually belong to different authorities and pursue different goals. In order to maximize their own performance, nodes in such networks tend to be selfish and are not willing to forward packets for the benefit of other nodes. Meanwhile, some nodes might behave maliciously and try to disrupt the network and waste other nodes' resources. In this chapter, we present an attack-resilient cooperation-stimulation (ARCS) system for autonomous ad hoc networks to stimulate cooperation among selfish nodes and defend against malicious attacks. In the ARCS system, the damage that can be caused by malicious nodes can be bounded, cooperation among selfish nodes can be enforced, and fairness among nodes can also be achieved. Both theoretical analysis and simulation results are presented to demonstrate the effectiveness of the ARCS system. Another key property of the ARCS system is that it is completely self-organizing and fully distributed, and does not require any tamper-proof hardware or central management points.
18.1 Introduction

In emergency or military situations, nodes in an ad hoc network usually belong to the same authority and have a common goal. To maximize the overall system performance, nodes usually work in a fully cooperative way, and will unconditionally forward packets for each other. Emerging applications of ad hoc networks are now being envisioned also for civilian usage. In such applications, nodes typically do not belong to a single authority and need not pursue a common goal. Consequently, fully cooperative behaviors such as unconditionally forwarding packets for others cannot be directly assumed. On the contrary, in order to save limited resources, such as battery power, nodes may tend to be "selfish." Before ad hoc networks can be successfully deployed in autonomous ways, the issues of cooperation stimulation and security must be resolved. One possible way to stimulate cooperation among selfish nodes is to use payment-based methods. Most of the schemes consider only nodes' selfish behavior, although in many situations nodes can be malicious. Another possible way to stimulate cooperation is to employ reputation-based schemes. However, these schemes suffer from some problems. First, many attacks can lead to malicious behavior not being detected in these systems, and malicious nodes
can easily propagate false information to frame others. Second, these schemes can isolate misbehaving nodes, but cannot actually punish them, and malicious nodes can still utilize the valuable network resources even after being suspected or detected. In this chapter we consider scenarios in which there exist both selfish nodes and malicious nodes in autonomous ad hoc networks. The objective of selfish nodes is to maximize the benefits they can get from the network, while the objective of malicious nodes is to maximize the damage they can cause to the network. Since no central management points are available, selfish nodes need to adaptively and autonomously adjust their strategies according to the environment. Accordingly, we present an ARCS system for autonomous ad hoc networks that provides mechanisms to stimulate cooperation among selfish nodes in adversarial environments. Besides maintaining fairness among selfish nodes and being robust against various attacks, another key property of the ARCS system is that it does not require any tamper-proof hardware or central management point, which is very suitable for autonomous ad hoc networks. Both analysis and simulation confirm the effectiveness of the ARCS system. The rest of the chapter is organized as follows. Section 18.2 describes the system model and formulates the problem. Section 18.3 describes the ARCS system presented here. Section 18.4 presents the performance analysis of the system under various attacks. Simulation studies are presented in Section 18.5. Finally, Section 18.6 concludes this chapter.
18.2 The system model and problem formulation

In this chapter we consider autonomous ad hoc networks in which nodes belong to different authorities and have different goals. We assume that each node is equipped with a battery with a limited power supply, and may act as a service provider: packets are scheduled to be generated and delivered to certain destinations, with each packet having a specific delay constraint. If a packet can be successfully delivered to its destination within the specified delay constraint, the source of the packet will get some payoff; otherwise, it will be penalized. According to their objectives, the nodes in such networks can be classified into two types: selfish and malicious. The objective of selfish nodes is to maximize the payoff they can get using their limited resources, and the objective of malicious nodes is to maximize the damage they can cause to the network. Since energy is usually the most stringent and valuable resource for battery-supplied nodes in ad hoc networks, we restrict the resource constraint to be energy. However, the schemes presented here are also applicable to other types of resource constraint. For each node in the network, the energy consumption may come from many aspects, such as processing, transmitting, and receiving packets. In this chapter, we focus on the energy consumed in communication-related activities. We focus on the situation in which all nodes in the network are legitimate, no matter whether they are selfish or malicious. To prevent illegitimate nodes from entering the network, some existing schemes can be used as the first line of defense, such as those in [496] [181] [182] [183].
Next we examine the possible attacks that can be launched in such networks. We say that a route R = "R0 R1 . . . RM" is valid at time t if, for any 0 ≤ i < M, Ri and Ri+1 are within each other's transmission range. We say that a link (Ri, Ri+1) is broken at time t if Ri and Ri+1 are not within each other's transmission range. It is easy to see that, at time t, a packet can be successfully delivered from its source S to its destination D through the route R = "R0 R1 . . . RM" (R0 = S and RM = D) within the delay constraint τ if and only if all of the following conditions are satisfied.
(i) R is a valid route at time t, and no links on route R will break during the transmission.
(ii) No errors will be introduced into the packet during the transmission.
(iii) No nodes on route R will drop the packet during the transmission.
(iv) The total transmission time is less than τ.
In order to degrade the network performance, attackers can either directly break the ongoing communications or try to waste other nodes' valuable resources. In general, the possible attacks that can be used by attackers in ad hoc networks can be roughly categorized as follows.
A1. Emulate link breakage. When a node Ri wants to transmit a packet to the next node Ri+1 on a certain route R, if Ri+1 is malicious, Ri+1 can simply remain silent to let Ri believe that Ri+1 is out of Ri's transmission range, which violates condition (i).
A2. Drop/modify/delay packets. Dropping a packet violates condition (iii), modifying a packet violates condition (ii), and delaying a packet violates condition (iv).
A3. Prevent good routes from being discovered. Such attacks can either violate condition (i) or increase the attackers' chance of being on the discovered routes and then launching various attacks such as A1 and A2. Two examples are wormhole and rushing attacks [183] [182].
A4. Inject traffic. Malicious nodes can inject an overwhelming number of packets to overload the network and consume other nodes' valuable energy. When other nodes forward these packets but cannot get payback from the attackers, the energy consumed is wasted.
A5. Collusion attack. Attackers can work together in order to improve their attacking capability.
A6. Slander attack. Attackers can also try to spread false accusations about other nodes.
Before formulating the problem, we first introduce some notation, as listed in Table 18.1. We assume that all data packets are of the same size, and that the transmitting power is the same for all nodes. We use "packet-delivery transaction" to denote sending a packet from its source to its destination. We say that a transaction is "successful" if the packet has successfully reached its destination within its delay constraint; otherwise, the transaction is "unsuccessful."
447
18.2 The system model and problem formulation
Table 18.1. Notation used in the problem formulation

E: The amount of energy needed to transmit and receive a data packet and a receipt
ES,max: S's total available energy when it enters the network
αS: The payoff that S can get for each successfully delivered data packet, with S being the source
βS: The penalty that S will receive for each unsuccessfully delivered data packet, with S being the source
ES: The total energy that S has spent until now
NS,succ: The total number of successful data-packet deliveries until now, with S being the source
NS,fail: The total number of unsuccessful data-packet deliveries until now, with S being the source
ES,waste: The amount of selfish nodes' energy that has been wasted until now due to S's malicious behavior
ES,contribute: The amount of S's energy that it has spent until now on successfully transmitting packets for others
For each node S, if it is selfish, its total profit Profit(S) is defined as follows:

Profit(S) = αS NS,succ − βS NS,fail.   (18.1)

Then the objective of each selfish node S can be formulated as follows:

max Profit(S)   s.t.   ES ≤ ES,max.   (18.2)

If S is malicious, then the total damage DS that S has caused to other nodes until the current moment is calculated as

DS = ES,waste − ES,contribute.   (18.3)

Since in the current system model malicious nodes are allowed to collude, in this chapter we formulate only the overall objective of malicious nodes, which is as follows:

max Σ_{S malicious} DS.   (18.4)
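The objectives (18.1)-(18.4) can be sketched as follows; the function names are illustrative and the variables follow the notation in Table 18.1.

```python
def profit(alpha_s, beta_s, n_succ, n_fail):
    # (18.1): payoff for successful deliveries minus penalty for failures
    return alpha_s * n_succ - beta_s * n_fail

def damage(e_waste, e_contribute):
    # (18.3): selfish nodes' energy wasted minus energy contributed by S
    return e_waste - e_contribute

def total_damage(malicious):
    # (18.4): overall objective of (possibly colluding) malicious nodes,
    # given a list of (E_waste, E_contribute) pairs
    return sum(damage(e_w, e_c) for e_w, e_c in malicious)

print(profit(alpha_s=2.0, beta_s=1.0, n_succ=100, n_fail=10))  # 190.0
print(total_damage([(50.0, 10.0), (30.0, 5.0)]))               # 65.0
```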
We assume that each node has a public key and a private key. We also assume that a node can know or authenticate the other nodes’ public keys, but no node will disclose its private key to the others unless it has been compromised. We do not assume that nodes trust each other. To keep the confidentiality and integrity of the content, all packets should be encrypted and signed by their senders when necessary. We assume that the acknowledgment mechanism is supported by the network link layer. That is, if node A has transmitted a packet to node B and B has successfully received it, then node B needs to immediately notify A of the reception through link-level acknowledgment.
Table 18.2. Records kept by node S Credit(A, S) Debit(A, S) Wby (A, S) Wto (A, S) L Bwith (A, S) Blacklist(S) Blacklist(A, S)
18.3
The energy that A has spent until now on successfully forwarding packets for S The energy that S has spent until now on successfully forwarding packets for A The wasted energy that A has caused to S until now The wasted energy that S has caused to A until now The wasted energy caused to S until now due to link breakages between A and S The set of nodes that S believes are malicious and S does not want to work together with The subset of A’s blacklist known by S until now
System description This section presents the ARCS system for autonomous ad hoc networks. In the ARCS system, each node S keeps a set of records indicating the interactions with other nodes, as listed in Table 18.2. In a nutshell, when a node has a packet scheduled to be sent, it first checks whether this packet should be sent and which route should be used. When an intermediate node on the selected route receives a packet-forwarding request, it will check whether it should forward the packet. Once a node has successfully forwarded a packet on behalf of another node, it will request a receipt from the next node on the route and submit this receipt to the source of the packet to claim credit. After a packetdelivery transaction has finished, all participating nodes will update their own records to reflect the changed relationships with other nodes and to detect possible malicious behavior. For each selfish node S, all the records listed in Table 18.2 will be set initially to 0 when S first enters the network.
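The per-node state in Table 18.2 can be sketched as a small data structure, initialized to zero as required when a node first enters the network (class and field names are illustrative):

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class PeerRecord:
    credit: float = 0.0    # Credit(A, S)
    debit: float = 0.0     # Debit(A, S)
    w_by: float = 0.0      # Wby(A, S)
    w_to: float = 0.0      # Wto(A, S)
    lb_with: float = 0.0   # LBwith(A, S)

@dataclass
class NodeState:
    # per-peer records, created lazily with all-zero defaults
    records: dict = field(default_factory=lambda: defaultdict(PeerRecord))
    blacklist: set = field(default_factory=set)          # Blacklist(S)
    peer_blacklists: dict = field(default_factory=dict)  # Blacklist(A, S)

s = NodeState()
s.records['A'].debit += 1.0  # S forwarded one packet's worth of energy for A
print(s.records['A'].debit)  # 1.0
```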
18.3.1 The cooperation degree

In [98], Dawkins illustrates that reciprocal altruism is beneficial for every ecological system when favors are granted simultaneously, and gives an example explaining the survival chances of birds, which cannot clean their own heads, that groom parasites off each other's heads. In that example, Dawkins divides the birds into three categories: suckers, which always help; cheats, which ask other birds to groom parasites off their heads but never help others; and grudgers, which start out being helpful to every bird but refuse to help those birds that do not return the favor. Simulation studies have shown that both cheats and suckers eventually become extinct, and only grudgers win over time. Such cooperative behaviors are also developed at length in [12] [13]. In order to best utilize their limited resources, selfish nodes in autonomous ad hoc networks should also act like the grudgers. In the ARCS system, each selfish node S keeps track of the balance B(A, S) with any other node A known by S, which is defined as

B(A, S) = (Debit(A, S) − Wto(A, S)) − (Credit(A, S) − Wby(A, S)).   (18.5)
That is, B(A, S) is the difference between what S has contributed to A and what A has contributed to S, from S's point of view. If B(A, S) is positive, it can be viewed as the relative damage that A has caused to S; otherwise, it is the relative help that S has received from A. Besides keeping track of the balance, each node S also sets a threshold Bthreshold(A, S) for each known node A in the network, which we call the cooperation degree. A necessary condition for S to help A (e.g., by forwarding a packet for A) is

B(A, S) < Bthreshold(A, S).   (18.6)
Setting Bthreshold (A, S) to be ∞ means that S will always help A no matter what A has done, as the suckers do in the example. Setting Bthreshold (A, S) to be −∞ means that S will never help A, like the cheats in the example. In the ARCS system, each selfish node will set Bthreshold (A, S) to a relatively small positive value, which means that S is initially helpful to A, and will keep being helpful to A unless the relative damage that A has caused to S has exceeded Bthreshold (A, S), as the grudgers do in the example where they set the threshold to be 1 for any other bird. By specifying positive cooperation degrees, cooperation among selfish nodes can be enforced, while by letting the cooperation degrees be relatively small, the possible damage that can be caused by malicious nodes can be bounded.
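The balance (18.5) and the help condition (18.6) can be sketched as a small helper (hypothetical function names; the fields follow Table 18.2):

```python
def balance(credit, debit, w_by, w_to):
    # B(A, S) = (Debit(A,S) - Wto(A,S)) - (Credit(A,S) - Wby(A,S))
    return (debit - w_to) - (credit - w_by)

def willing_to_help(credit, debit, w_by, w_to, b_threshold):
    # S keeps helping A only while the relative damage from A stays
    # below the (small, positive) cooperation degree
    return balance(credit, debit, w_by, w_to) < b_threshold

# A has forwarded 5 energy units for S, S has forwarded 3 for A, no waste:
print(balance(credit=5.0, debit=3.0, w_by=0.0, w_to=0.0))    # -2.0
print(willing_to_help(5.0, 3.0, 0.0, 0.0, b_threshold=4.0))  # True
```

A finite positive threshold is what makes a node a "grudger": initially helpful, but refusing service once a peer's net damage exceeds the cooperation degree.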
18.3.2 Route selection

In the ARCS system, source routing is used; that is, when sending a packet, the source lists in the packet header the complete sequence of nodes that the packet is to traverse. Owing to insufficient balance, malicious behavior, and possible node mobility, not all packet-delivery transactions can succeed. When a node has a packet scheduled to be sent, it needs to decide whether it should start the packet-delivery transaction and which route should be used. In the ARCS system, each route is assigned an expiry time, determined by the intermediate nodes during the route-discovery procedure, after which the route becomes invalid. Assume that S has a packet scheduled to be sent to D, and route R = "R0 R1 . . . RM" is a valid route known by S with R0 = S and RM = D, where M is the number of hops. Let Pdrop(Ri, S) denote the probability of node Ri dropping S's packet, and let Pdelivery(R, S) denote the probability that a packet can be successfully delivered from S to D through route R at the current moment. S then calculates Pdelivery(R, S) as follows:

Pdelivery(R, S) = 0,   if (∃ Ri ∈ R) B(Ri, S) < −Bthreshold(S, Ri);
Pdelivery(R, S) = 0,   if (∃ Ri, Rj ∈ R) Ri ∈ Blacklist(Rj, S);
Pdelivery(R, S) = ∏_{i=1}^{M−1} (1 − Pdrop(Ri, S)),   otherwise.   (18.7)
That is, a packet-delivery transaction has no chance of succeeding unless S has a large enough balance to request help from all intermediate nodes on the route and no node
has been marked as malicious by any other node on the route. Once a valid route R with nonzero Pdelivery(R, S) is used by S to send a packet, the expected energy consumption can be calculated as

Eavg(R, S) = E M Pdelivery(R, S) + Efail(R, S),   where
Efail(R, S) = Σ_{n=1}^{M−1} nE [∏_{k=1}^{n−1} (1 − Pdrop(Rk, S))] Pdrop(Rn, S),   (18.8)
and the expected profit of S is

Profit(R, S) = αS Pdelivery(R, S) − βS (1 − Pdelivery(R, S)).   (18.9)
Let Q(R, S) be the expected profit per unit energy when S uses R to send a packet to D at the current moment, referred to as the expected energy efficiency. That is,

Q(R, S) = Profit(R, S) / Eavg(R, S).   (18.10)
Then, in the ARCS system, which route should be selected is decided as follows. Among all routes R known by S that can be used to reach D, route R ∗ will be selected if and only if Pdelivery (R ∗ , S) > 0 and Q(R ∗ , S) ≥ Q(R, S) for any other R ∈ R. The above decision is optimal in the sense that no other known routes can provide better expected energy efficiency than route R ∗ . Since the value of Pdrop (Ri , S) is usually not known accurately, in the ARCS system, Pdrop (Ri , S) is estimated as the ratio between the number of S’s failed transactions caused by Ri and S’s total number of transactions passing Ri . After the route with the highest expected energy efficiency has been found by the sender S, supposing that it is route R ∗ , in the next step S should decide whether it should use R ∗ to start a data-packet-delivery transaction. If the route quality is too low, simply dropping the packet without trying may be a better choice. Let Q avg (S) be S’s average energy efficiency over the past: Q avg (S) =
(αS NS,succ − βS NS,fail) / ES.   (18.11)
Then in the ARCS system, the following packet-delivery decision rule is used: S will use route R∗ to start a data-packet-delivery transaction if and only if the following condition holds:

Profit(R∗, S) ≥ Qavg(S) Eavg(R∗, S) − βS.   (18.12)
The left-hand side of (18.12) is the expected profit when S uses R ∗ to start a packetdelivery transaction, and the right-hand side of (18.12) is the predicted profit associated with simply dropping the packet without trying, where βS is the penalty due to dropping a packet and Q avg (S)E avg (R ∗ , S) is the gain that S predicts to get with energy E avg (R ∗ , S) on the basis of its past performance. If Q avg (S) is stationary over time, the above decision is optimal in the sense that S’s total profit can be maximized under the energy constraint.
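The decision rule (18.11)-(18.12) can be sketched as follows (hypothetical helper names; the sample inputs are illustrative):

```python
def q_avg(alpha_s, beta_s, n_succ, n_fail, e_spent):
    # (18.11): S's average energy efficiency over the past
    return (alpha_s * n_succ - beta_s * n_fail) / e_spent

def should_send(profit_r, e_avg_r, q_hist, beta_s):
    # (18.12): send via R* only if its expected profit is at least the
    # predicted profit of simply dropping the packet
    return profit_r >= q_hist * e_avg_r - beta_s

q_hist = q_avg(alpha_s=2.0, beta_s=1.0, n_succ=100, n_fail=10, e_spent=950.0)
print(round(q_hist, 2))  # 0.2
print(should_send(profit_r=1.16, e_avg_r=2.62, q_hist=q_hist, beta_s=1.0))  # True
```

Note how the drop penalty βS lowers the bar for sending: even a low-quality route may be preferable to taking a guaranteed penalty.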
Table 18.3. Notation used in the data-packet-delivery protocols

signS(m): Node S generates a signature based on the message m
verifyS(m, s): Other nodes verify whether s is the signature generated by node S on the basis of the message m
v ← m: Assign the value of m to the variable v
MD(): A message-digest function, such as SHA-1 [382]
seqS(S, D): The sequence number of the current packet being processed, with S being the source and D being the destination

18.3.3 The data-packet-delivery protocol

In the ARCS system, data-packet delivery consists of two stages: forwarding the data packet and submitting receipts. In the first stage, the data packet is delivered from its source to its destination. In the second stage, each participating node on the route submits a receipt to the source to claim credit. Table 18.3 lists some notation to be used.
18.3.3.1 Forwarding the data packet

Suppose that node S is to send a packet with payload m and sequence number seqS(S, D) to destination D through route R. The sender S first computes a signature s = signS(MD(m), R, seqS(S, D)). Next, S transmits the packet (m, R, seqS(S, D), s) to the next node on the route, increases seqS(S, D) by 1, and waits for receipts to be returned by the following nodes on route R. Once a selfish node A has received the packet (m, R, seqS(S, D), s), A first checks whether it is the destination of the packet. If it is, after the necessary verification, A returns a receipt to the previous node on the route to confirm the successful delivery; otherwise, A checks whether it should forward the packet. A is willing to forward the packet if and only if all of the following conditions are satisfied:
(1) A is on route R;
(2) seqS(S, D) > seqA(S, D), where seqA(S, D) is the sequence number of the last packet that A has forwarded, with S being the source and D being the destination;
(3) the signature is valid;
(4) B(S, A) < Bthreshold(S, A); and
(5) no node on route R has been marked as malicious by A.
Once A has successfully forwarded the packet (m, R, seqS(S, D), s) to the next node on route R, it will specify a time to wait for a receipt to be returned by the next node before confirming the successful transmission, which A will use to claim credit from S. In the ARCS system, a selfish node sets its waiting time to the value of Tlink multiplied by the number of hops following this node, where Tlink is a relatively small interval that accounts for the necessary processing and waiting time (e.g., the time needed for channel contention) per hop. Since in general the waiting time is small enough, we can assume that, if a node can return a receipt to its previous node in time, the two nodes will still be connected. The protocol execution of each participating selfish node in this stage is described in Protocol 1.
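The five forwarding conditions can be sketched as a single predicate (a hypothetical helper; signature verification is abstracted into a boolean):

```python
def should_forward(node, route, seq, last_seq, sig_valid,
                   balance_s, b_threshold, blacklist):
    return (node in route                  # (1) A is on route R
            and seq > last_seq             # (2) fresh sequence number
            and sig_valid                  # (3) the signature is valid
            and balance_s < b_threshold    # (4) B(S, A) < Bthreshold(S, A)
            and not any(r in blacklist for r in route))  # (5) no blacklisted node

route = ('S', 'A', 'B', 'D')
print(should_forward('A', route, seq=8, last_seq=7, sig_valid=True,
                     balance_s=1.0, b_threshold=4.0, blacklist=set()))  # True
```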
452
Stimulation of attack-resistant cooperation
Protocol 1 Forwarding the data packet
A is the current node, S is the sender, D is the destination. (m, R, seqS (S, D), s) is the received data packet from A's previous node if A ≠ S; otherwise, (m, R, seqS (S, D), s) is the data packet generated by A.
if (A = S) then
    S forwards (m, R, seqS (S, D), s) to the next node, increases seqS (S, D) by 1, and waits for receipts to be returned.
else
    if ((A = D) and (verifyS ((m, R, seqS (S, D)), s) = true) and (seqS (S, D) > seqA (S, D))) then
        A assigns the value of seqS (S, D) to seqA (S, D), and returns a receipt to the previous node.
    else if ((A ∉ R) or (verifyS ((m, R, seqS (S, D)), s) ≠ true) or (seqS (S, D) ≤ seqA (S, D)) or (∃Ri ∈ R, Ri ∈ Blacklist(A))) then
        A simply drops this packet.
    else if ((B(S, A) ≥ Bthreshold (S, A)) or (the link to A's next node is broken)) then
        A drops the packet, and returns a receipt to the previous node which also includes the reason for dropping the packet.
    else
        A assigns the value of seqS (S, D) to seqA (S, D), forwards (m, R, seqS (S, D), s) to the next node, and waits for a receipt to be returned by the next node.
    end if
end if
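The per-node decision in Protocol 1 can be sketched in Python as follows. This is an illustrative fragment, not part of the ARCS specification: the names (Node, should_forward, receipt_wait_time) are ours, and signature verification is abstracted into a boolean.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """Hypothetical per-node state consulted by the forwarding decision."""
    name: str
    last_seq: dict = field(default_factory=dict)      # seq_A(S, D) per (source, dest)
    balance: dict = field(default_factory=dict)       # B(S, A) per source
    b_threshold: dict = field(default_factory=dict)   # B_threshold(S, A) per source
    blacklist: set = field(default_factory=set)

def should_forward(a: Node, route: list, src: str, dst: str,
                   seq: int, sig_valid: bool) -> bool:
    """Conditions (1)-(5): A is on the route, the sequence number is fresh,
    the signature is valid, the balance is below the cooperation degree,
    and no node on the route is blacklisted by A."""
    return (a.name in route
            and seq > a.last_seq.get((src, dst), -1)
            and sig_valid
            and a.balance.get(src, 0) < a.b_threshold.get(src, float("inf"))
            and not any(r in a.blacklist for r in route))

def receipt_wait_time(route: list, a: str, t_link: float) -> float:
    """Waiting time for a receipt: T_link times the number of hops after A."""
    return t_link * (len(route) - 1 - route.index(a))
```

For example, on the route S-A-B-D, node A waits 2 · Tlink for a receipt before concluding that the downstream transmission failed.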
18.3.3.2
Submitting receipts In autonomous ad hoc networks, nodes might not be willing to forward packets on behalf of other nodes. So after a node (e.g., A) has forwarded a packet (m, R, seqS (S, D), s) for another node (e.g., S), A will try to claim corresponding credit from S, which A can use later to request S to return the favor. To claim credit from S, A needs to submit necessary evidence to convince S that it has successfully forwarded packets for S. In the ARCS system, in order for A to show that it has successfully forwarded a packet for S, A need only submit a valid receipt generated by any node following A on the route (e.g., B) indicating that B has successfully received the packet. One possible format of such a receipt is {MD(m), R, seqS (S, D), B, signB (MD(m), R, seqS (S, D), B)}. That is, the receipt consists of the message {MD(m), R, seqS (S, D), B} and the signature generated by node B on the basis of this message. For each selfish node, if it has dropped the packet or cannot get a receipt from the next node in time, or the receipt received is invalid, it will generate a receipt by itself and return it to the previous node; otherwise, it will simply send the received receipt back to the previous node on the route. The protocol execution of each participating selfish node in this stage is described in Protocol 2.
453
18.3 System description
Protocol 2 Submitting receipts
A is the current node, and (MD(m), R, seqS (S, D), B, s) is the receipt to be processed for the successfully received packet.
if ((A = D) or (no valid receipts have been returned by the next node after waiting for enough time)) then
    s ← signA (MD(m), R, seqS (S, D), A).
    Send the receipt {MD(m), R, seqS (S, D), A, s} to A's previous node on R.
else
    receipt = {MD(m), R, seqS (S, D), B, s}, which is the returned receipt from the next node on the route.
    if (verifyB ((MD(m), R, seqS (S, D), B), s) = true) then
        Send receipt to A's previous node on R.
    else
        s ← signA (MD(m), R, seqS (S, D), A).
        Send the receipt {MD(m), R, seqS (S, D), A, s} to A's previous node on R.
    end if
end if
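The receipt format {MD(m), R, seqS (S, D), B, signB (· · ·)} can be illustrated with the sketch below. An HMAC stands in for node B's public-key signature purely for illustration, and the function names are ours, not ARCS notation.

```python
import hashlib
import hmac

def md(m: bytes) -> bytes:
    """Message digest MD(m); SHA-256 is an illustrative choice."""
    return hashlib.sha256(m).digest()

def _receipt_msg(digest: bytes, route: tuple, seq: int, node: str) -> bytes:
    # Serialize the signed message {MD(m), R, seq, B}.
    return digest + repr(route).encode() + str(seq).encode() + node.encode()

def make_receipt(m: bytes, route: tuple, seq: int, node: str, key: bytes) -> dict:
    """Build a receipt {MD(m), R, seq, B, sign_B(MD(m), R, seq, B)}."""
    digest = md(m)
    sig = hmac.new(key, _receipt_msg(digest, route, seq, node), hashlib.sha256).digest()
    return {"md": digest, "route": route, "seq": seq, "node": node, "sig": sig}

def verify_receipt(r: dict, key: bytes) -> bool:
    """Recompute the signature over the receipt's fields and compare."""
    expected = hmac.new(key, _receipt_msg(r["md"], r["route"], r["seq"], r["node"]),
                        hashlib.sha256).digest()
    return hmac.compare_digest(r["sig"], expected)
```

Any tampering with the route, sequence number, or signer field invalidates the receipt, which is what lets S trust a receipt as evidence of forwarding.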
[Figure 18.1 Updating records. Route: S (source) · · · A → M → B · · · D (destination); the two marked segments span n hops and m hops.]

18.3.4

Updating records
In the ARCS system, after a packet-delivery transaction has finished, no matter whether it is successful or not, each participating node will update its records to keep track of the changing relationships with other nodes and to detect possible malicious behavior. Next we use Figure 18.1 to illustrate the record-updating procedure, where S is the initiator of this transaction, D is the destination, and R = "S · · · A M B · · · D" is the associated route. For the sender S, depending on the situation, it updates its records as follows. • Case 1. S has received a valid receipt signed by D, which means that this transaction has succeeded. Then, for each intermediate node X, S updates Credit(X, S) as follows: Credit(X, S) = Credit(X, S) + E.
(18.13)
• Case 2. S has successfully sent a packet to the next node, but cannot receive a receipt in time. In this case, letting X be S’s next node, S then updates its records as follows: Wby (X, S) = Wby (X, S) + E,
(18.14)
Blacklist(S) = Blacklist(S) ∪ {X}.
(18.15)
That is, refusing to return a receipt will be regarded as malicious behavior. • Case 3. S has received a valid receipt that is signed not by D, but by an intermediate node (e.g., M), which means that either M has dropped the packet or a returned receipt has been dropped by a certain node following M (including M) on the route in the receipt-submitting stage. In this case, for each intermediate node X between S and M, S still updates Credit(X, S) using (18.13). Since node M’s transmission cannot be verified by S, S has enough evidence to suspect that the packet has been dropped by M. To reflect this suspicion, S updates Wby (M, S) as follows: Wby (M, S) = Wby (M, S) + n E,
(18.16)
where n E accounts for the amount of energy that has been wasted in this transaction, with n being the number of hops between S and M. If a transaction fails, S also keeps a record of (MD(m), R, seqS (S, D), s) for this transaction as well as a copy of the returned receipt if one exists. Each intermediate node (e.g., node M in Figure 18.1) that has participated in the transaction, if it is selfish, updates its records as follows. • Case 1. M has successfully sent the packet to node B, and has got a receipt from B to confirm the transmission. In this case, M need only update Debit(S, M) as follows: Debit(S, M) = Debit(S, M) + E.
(18.17)
• Case 2. M has successfully sent the packet to node B, but cannot get a valid receipt from B. In this case, M updates its records as follows:
Wto (S, M) = Wto (S, M) + n E,
Wby (B, M) = Wby (B, M) + (n + 1)E,
Blacklist(M) = Blacklist(M) ∪ {B}.
(18.18)
• Case 3. M has dropped the packet due to a link breakage between M and B. Although this packet dropping is not M's fault, since M cannot prove it to S, M will take the responsibility. However, since this link breakage may have been caused by S, who has selected a bad route, or by B, who tries to emulate link breakage to attack M, M should also record this link breakage. In this case, M updates its records as follows:
Wto (S, M) = Wto (S, M) + n E,
LBwith (B, M) = LBwith (B, M) + n E,
LBwith (S, M) = LBwith (S, M) + n E.
(18.19)
In the ARCS system, each selfish node (e.g., M) will also set a threshold LBthreshold (S, M) with respect to any other node (e.g., S) to indicate the damage that M can tolerate due to link breakages between M and S. If LBwith (B, M) exceeds LBthreshold (B, M), B will be put on M's blacklist. Similarly, if LBwith (S, M) exceeds LBthreshold (S, M), S will be put on M's blacklist.
• Case 4. M has dropped the packet because the condition in (18.6) is not satisfied or some nodes on R are on M's blacklist. In this case M does not need to update its records. After finishing updating its records, M will also keep a copy of the submitted receipt for possible future usage, such as resolving inconsistent record updates, as will be described in Section 18.3.6. From the above update procedure we can see that a selfish node should always return a receipt to confirm a successful packet reception, since refusing to return a receipt is regarded as malicious behavior and cannot provide any gain.
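The sender-side updates for Cases 1-3 above can be condensed into one function. The sketch below is illustrative only: E is the per-hop energy cost, and the dictionaries mirror the Credit, Wby, and Blacklist records of the text.

```python
E = 1.0  # per-hop energy unit (illustrative)

def sender_update(credit: dict, w_by: dict, blacklist: set,
                  route: list, receipt_signer):
    """Update S's records after a transaction on route = [S, ..., D].
    receipt_signer is the node that signed the returned receipt, or None
    if no receipt was returned in time."""
    if receipt_signer == route[-1]:          # Case 1: signed by D, success
        for x in route[1:-1]:                # credit every intermediate node
            credit[x] = credit.get(x, 0) + E            # (18.13)
    elif receipt_signer is None:             # Case 2: no receipt returned
        x = route[1]                         # S's next node
        w_by[x] = w_by.get(x, 0) + E                    # (18.14)
        blacklist.add(x)                                # (18.15)
    else:                                    # Case 3: signed by intermediate M
        m = route.index(receipt_signer)      # n = hops between S and M
        for x in route[1:m]:                 # credit nodes between S and M
            credit[x] = credit.get(x, 0) + E            # (18.13)
        w_by[receipt_signer] = w_by.get(receipt_signer, 0) + m * E  # (18.16)
```

Note how the suspected dropper M in Case 3 is charged for all the energy wasted upstream of it, not just its own hop.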
18.3.5
Secure-route discovery In the ARCS system, DSR [209] is used as the underlying routing protocol to perform route discovery. It is an on-demand source-routing protocol. However, without security measures, the routing protocol itself can easily become a target of attacks. For example, malicious nodes can inject an overwhelming amount of route-request packets into the network. In the ARCS system, besides necessary identity authentication, the following security enhancements have also been incorporated into the route-discovery protocol. 1. When node S initiates a route discovery, it will also append its blacklist to the routerequest packet. After an intermediate node A has received the request packet, it will update its own record Blacklist (S, A) using the received blacklist. 2. When an intermediate node A receives a route-request packet which originates from S and A is not this request’s destination, A first checks the following conditions: (1) A has never seen this request before; (2) A is not on S’s blacklist; (3) B(S, A) < Bthreshold (S, A); (4) no nodes that have been appended to the request packet are on A’s blacklist; (5) A has not forwarded any request for S during the last Tinterval (S, A), where Tinterval (S, A) is the minimum interval specified by A to indicate that A will forward at most one route request for S during each Tinterval (S, A). A will broadcast the request if and only if all of the above conditions can be satisfied; otherwise, A will discard the request. 3. While a discovered route is being returned to the requester S, each intermediate node A on the route appends the following information to the returned route: the subset of its blacklist that is not known by S, the value of Bthreshold (S, A) if this is not known by S, the value of Debit(S, A), and node A’s expected staying time at the current position. 
After S has received the route, for each node A on the discovered route, it updates the corresponding blacklist Blacklist(A, S) and the value of Bthreshold (S, A), determines the expiry time of this route, which can be approximated as the expected minimum staying time among all nodes on the route, and checks the consistency between Debit(S, A) and Credit(A, S).
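The five request-forwarding conditions in step 2 above can be sketched as one predicate; the dictionary fields below are our own naming for illustration, not ARCS data structures.

```python
import time

def should_broadcast_request(a: dict, req: dict, now: float = None) -> bool:
    """Return True iff intermediate node A should rebroadcast a route
    request from S, per conditions (1)-(5) of the secure-route-discovery
    protocol."""
    now = time.monotonic() if now is None else now
    s = req["source"]
    return (req["id"] not in a["seen_requests"]                       # (1) never seen before
            and a["name"] not in req["source_blacklist"]              # (2) not on S's blacklist
            and a["balance"].get(s, 0) < a["b_threshold"].get(s, float("inf"))  # (3)
            and not any(n in a["blacklist"] for n in req["appended_nodes"])     # (4)
            and now - a["last_forward"].get(s, float("-inf"))
                >= a["t_interval"].get(s, 0))                         # (5) one per T_interval
```

Condition (5) is what bounds route-request injection: A relays at most one request per source per Tinterval(S, A).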
18.3.6
Resolving inconsistent record updates In some situations, after a node (e.g., A) has successfully forwarded a packet for another node (e.g., S) and has sent a receipt back to S, the value of Credit(A, S) might not be
increased immediately by S due to some intermediate node having dropped the receipt returned by A. In this case, the value of Debit(S, A) will be larger than the value of Credit(A, S), which we refer to as an inconsistent record update. As a consequence, S may refuse to forward packets for A even though the actual value of B(A, S) is still less than Bthreshold (A, S), or S may continue requesting A to forward packets for it when the true value of B(S, A) has exceeded Bthreshold (S, A). Next we describe how the problem of inconsistent record updates is resolved in the ARCS system. In the route-discovery stage, after route R has been returned to S, S will check whether there exists any inconsistency. If S finds that a node A on route R has reported a value of Debit(S, A) that is larger than the value of Credit(A, S), then, for calculating the route quality, S should use the value of Debit(S, A) to temporarily substitute for the value of Credit(A, S). In the packet-delivery stage, when route R is picked by S to send packets, for each intermediate node A on route R, the value of Credit(A, S) will also be appended to the payload of the data packet. When A receives an appended value of Credit(A, S) from S, and finds Credit(A, S) < Debit(S, A), A will submit those receipts that are targeted on S but have not been confirmed by S to claim corresponding credits. We say that a receipt received by A at time t1 and targeted on S has been confirmed if there existed at least one moment t2 > t1 before now at which A and S had agreed that Credit(A, S) = Debit(S, A). Once S has received an unconfirmed receipt returned by A, S will check whether there is a failed transaction record associated with this receipt. If no such record exists, either the receipt has been faked, or the corresponding credit has been issued to A. 
If there exists such a record, let B be the node which signed the receipt associated with this transaction record; that is, all nodes between S and B have already been credited by S. Let C be the node which signed the receipt submitted by A. If B is in front of C on the route, S should use the new receipt signed by C to replace the previous receipt signed by B and, for each intermediate node X between B and C on the route, S should update Credit(X, S) using (18.13); in addition, if C is not the destination of the associated packet, S should update Wby (C, S) using (18.16).
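S's handling of an unconfirmed receipt can be sketched as follows; the function and argument names are ours, and E is the per-hop energy cost.

```python
def resolve_unconfirmed_receipt(route: list, old_signer: str, new_signer: str,
                                credit: dict, w_by: dict, e: float, dest: str) -> bool:
    """If the new receipt's signer C sits further along the route than the
    previously credited signer B, credit every node between B and C per
    (18.13) and, if C is not the destination, charge C per (18.16) with
    n equal to the number of hops from S to C.  Illustrative sketch."""
    b, c = route.index(old_signer), route.index(new_signer)
    if c <= b:
        return False                       # nothing new to credit
    for x in route[b + 1:c]:               # intermediate nodes between B and C
        credit[x] = credit.get(x, 0) + e
    if new_signer != dest:
        w_by[new_signer] = w_by.get(new_signer, 0) + c * e
    return True
```

A receipt signed at or before the already-credited node carries no new evidence, so it is ignored.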
18.3.7
Parameter selection In the ARCS system, for each selfish node S, it is necessary to specify three types of threshold regarding any other node A in the network: the cooperation degree Bthreshold (A, S), the maximum tolerable damage due to link breakage LBthreshold (A, S), and the minimum route-request-forwarding interval Tinterval (A, S), which are determined in the following way. For each known node A, S initially sets Tinterval (A, S) to a moderate value, such as a value equal to its own average pause time. For as long as it remains within the network, S will keep estimating a good route-discovery frequency for itself, and will set Tinterval (A, S) to the inverse of its own route-discovery frequency. Similarly, S initially sets all link-breakage thresholds using a (relatively small) constant value LBinit, and keeps estimating its own average link-breakage ratio over time, denoted by PS,LB. For each node A, let Ntrans (A, S) be the total number of transactions that simultaneously
involve S and A, with A being either S's next node or the initiator of the transactions. Then S may set
LBthreshold (A, S) = Lhop · PS,LB · Ntrans (A, S) · E + LBinit,
(18.20)
where Lhop is the average number of hops per route. For Bthreshold (A, S), if favors can be granted immediately, a small value (for example 1, as for grudgers in the ecological example) can work perfectly. However, in many situations favors cannot be granted immediately. For example, after S has helped A several times, S might not get a similar amount of help from A because S does not currently need help from A or because A has moved. Many factors can affect the selection of Bthreshold (A, S); some of them are unknown to S, such as other nodes' traffic patterns and behaviors, and some are unpredictable, such as mobility, which makes selecting an optimal value for Bthreshold (A, S) hard or impossible. However, our simulation studies in Section 18.5 have shown that in most situations a relatively small constant value can achieve a good tradeoff between energy efficiency and robustness against attacks.
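Equation (18.20) itself is a one-line computation; the sketch below shows it with example values (a 4-hop average route and a 2% link-breakage ratio are illustrative choices).

```python
def lb_threshold(l_hop: float, p_lb: float, n_trans: int,
                 e: float, lb_init: float) -> float:
    """LB_threshold(A, S) = L_hop * P_{S,LB} * N_trans(A, S) * E + LB_init  (18.20).
    l_hop: average hops per route; p_lb: estimated link-breakage ratio;
    n_trans: transactions involving both nodes; e: per-hop energy cost."""
    return l_hop * p_lb * n_trans * e + lb_init
```

With Lhop = 4, PS,LB = 0.02, Ntrans = 50, and LBinit = 20E, this gives a threshold of 24E: the allowance grows with how often the two nodes transact, on top of a small constant slack.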
18.4
Analysis under attacks In this section we analyze the performance of the ARCS system under the following types of attacks: packet dropping, emulating link breakage, injecting traffic, collusion, and slander. Since attacks that prevent good routes from being discovered are mainly used to increase the attackers’ chance of being on the discovered routes, they can be regarded as types of packet-dropping attacks or attacks emulating link breakage, and will not be analyzed separately. Similarly, attacks involving modifying or delaying packets can also be regarded as specific types of packet-dropping attack, and will not be analyzed separately. The results show that the damage that can be caused by malicious nodes is bounded, and the system is collusion-resistant.
18.4.1
Packet-dropping attacks In the ARCS system, malicious nodes can waste other nodes’ energy by dropping their packets, which can happen during either stage, i.e., either forwarding data packets or submitting receipts. We use Figure 18.2 as an example to study the possible packetdropping attacks that can be launched by malicious node M. Depending on the stage during which M drops packets and whether M will return receipts, there are four possible attacking scenarios.
[Figure 18.2 Packet-dropping attacks. Route: S (source) · · · A → M (malicious node) → B · · · D (destination), with n hops from S to M.]
• Scenario 1: M drops a packet during the data-packet-forwarding stage, but creates a receipt to send back to A to confirm successful receipt from A. In this scenario, when S gets the receipt, S will increase Wby (M, S) by n E, which equals the total amount of energy that has been wasted by M. That is, in this scenario, the damage caused by M has been recorded by S and needs to be compensated for by M later if M still wants to get help from S. • Scenario 2: M drops a packet during the data-packet-forwarding stage, and refuses to return a receipt to A. In this scenario, although A will be mistakenly charged by S, which increases Wby (A, S) by (n − 1)E, A will mark M as malicious and will stop working with M in future. That is, M can never get help from A and cause damage to A in the future. • Scenario 3: M drops the receipt returned by B, but creates a receipt to send back to A. In this scenario, M will be charged n E by S, but the nodes after M which have successfully forwarded the packet will not be credited by S immediately. That is, by taking some charge (here n E), M can cause inconsistent updating of records. However, as described in Section 18.3.6, this inconsistency can be easily resolved and will not cause further damage. That is, M can cause only temporary inconsistency of records with the extra payment of (n + 1)E. • Scenario 4: M drops the receipt returned by B, and refuses to return a receipt to A. This scenario is similar to scenario 3 with the only difference being that in this scenario A will be mistakenly charged by S, but M will be marked as malicious by A and cannot do any further damage to A in the future.
From the above analysis we can see that, when a malicious node M launches packetdropping attacks, either it will be marked as malicious by some nodes, or the damage caused by it will be recorded by other nodes. Since, for each node A, the maximum possible damage that can be caused by M is bounded by Bthreshold (M, A), the total damage that M can cause is also bounded.
18.4.2
Attacks emulating link breakage Malicious nodes can also launch attacks emulating link breakage to waste other nodes' energy. For example, in Figure 18.2, when node A has received a request from S to forward a packet to M, M can just keep silent to let A believe that the link between A and M is broken. By emulating link breakage, M can cause a transaction to fail and waste other nodes' energy. In the ARCS system, each selfish node handles possible attacks emulating link breakage as follows. For each known node M, S keeps a record LBwith (M, S) to remember the damage that has been caused by link breakages between M and S, and, if LBwith (M, S) exceeds the threshold LBthreshold (M, S), S will mark M as malicious and will never work with M again. That is, the damage that can be caused to S by a malicious node M launching attacks emulating link breakage is bounded by LBthreshold (M, S).
18.4.3
Traffic-injection attacks Besides dropping packets, attackers can also inject an excessive amount of traffic to overload the network and to consume other nodes’ valuable energy. Two types of packets can be injected: general data packets and route-request packets. In the ARCS system, according to the route-discovery protocol, the number of route-request packets that can be injected by each node is bounded by 1 in each time interval Tinterval . For general data packets, since an intermediate node A will stop forwarding packets for node M if B(M, A) > Bthreshold (M, A), the maximum damage that can be caused to node A by node M launching traffic-injection attacks with general data is bounded by Bthreshold (M, A). In summary, the maximum damage that can be caused by a malicious node M to node A by launching traffic-injection attacks is bounded.
18.4.4
Collusion attacks In order to increase their attacking capability, malicious nodes may choose to collude. Next we show that in the ARCS system collusion among malicious nodes cannot cause more damage to the network than that caused by their working alone, that is, the ARCS system is collusion-resistant. First, it is easy to see that two nodes colluding to launch extra traffic-injection attacks cannot increase the damage due to the existence of a balance threshold (cooperation degree), and two nodes colluding to launch attacks emulating link breakage makes no sense, since each link-breakage event has only two participants. Next we consider two malicious nodes colluding to launch packet-dropping attacks. Given a packet-delivery transaction, we first consider the case in which the two colluding nodes are neighbors of each other. For example, as in Figure 18.2, assume that M and B collude. When M drops the packet, M can still get (or generate by itself, since M may know B's private key) the receipt showing that M has successfully forwarded the packet. However, this cannot increase their total attacking capability, since B needs to take the charge for the damage caused by this packet dropping. That is, in this case M is released from the charge by sacrificing B. If two colluding nodes are not neighbors of each other, the only way in which they can collude is that one node drops the data packet during the data-packet-forwarding stage, and the other node drops the receipt during the receipt-submitting stage, as shown in Figure 18.3, where node C drops the data packet and node M drops the receipt. With this type of collusion, if C has returned a receipt to its previous node, C will not be charged by S temporarily, and all the nodes between M and C cannot get credits from
[Figure 18.3 Collusion attacks. Route: S (source) · · · A → M → B · · · C · · · D (destination); M drops the receipt and C drops the data packet; the marked segments span n hops and m hops.]
S immediately. For node M, if M returns a receipt to A, S will increase Wby (M, S) by n E, and if M refuses to return a receipt to A, M will be marked as malicious by A. That is, in this case, a temporarily inconsistent updating of records can be caused, but the colluding nodes will be overcharged by n E. However, according to Section 18.3.6, the inconsistency can easily be resolved.
18.4.5
Slander attacks In ARCS, each node can propagate its blacklist to the network, which may give attackers chances to slander the others. Next we show that, instead of causing damage, such attacks can even benefit selfish nodes in some situations. Suppose that an attacker M tells the others that node X is malicious. For any selfish node S, this information will be used only when S wants to calculate a route's probability of successful packet delivery and X is on this route. In this situation, the probability of successful packet delivery for this route will be calculated as 0 according to (18.7), and this route will not be used by S, which is just one of the goals of secure-route discovery: preventing attackers from being on the discovered route. In all other situations, such information will not affect S's decision.
Theorem 18.4.1 Assume that in the ARCS system there are L selfish nodes {S1, . . . , SL} and K malicious nodes {M1, . . . , MK}. Let Bthreshold (Mk, Sl), LBthreshold (Mk, Sl), and Tinterval (Mk, Sl) be the cooperation degree, the link-breakage threshold, and the minimum route-request-forwarding interval that Sl sets for Mk, respectively. Let TSl be node Sl's staying time in the system, and let Erequest (which is far less than E) be the energy consumed per route request forwarded. Then the total damage that can be caused by all the malicious nodes is bounded by
Damage ≤ Σ_{k=1}^{K} Σ_{l=1}^{L} [ Bthreshold (Mk, Sl) + LBthreshold (Mk, Sl) + (TSl / Tinterval (Mk, Sl)) Erequest ].
(18.21)
PROOF. See the above analysis.
From Theorem 18.4.1 we can see that the damage that can be caused by malicious nodes is bounded, which is determined by the thresholds specified by each selfish node.
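The bound (18.21) can be evaluated directly once the thresholds are fixed; the sketch below is illustrative (the dictionary-based bookkeeping and names are our own choice).

```python
def damage_bound(b_thr: dict, lb_thr: dict, t_int: dict,
                 t_stay: dict, e_request: float) -> float:
    """Upper bound (18.21) on total damage: for every (malicious k, selfish l)
    pair, sum the cooperation degree, the link-breakage threshold, and the
    route-request energy (T_{S_l} / T_interval(M_k, S_l)) * E_request.
    b_thr, lb_thr, t_int are keyed by (k, l); t_stay by l."""
    return sum(b_thr[k, l] + lb_thr[k, l] + t_stay[l] / t_int[k, l] * e_request
               for (k, l) in b_thr)
```

Because every term is a per-pair constant chosen by the selfish node itself, the bound does not grow with the attackers' activity, only with the tolerances the victims set.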
18.5
Simulation studies
18.5.1
The simulation configuration In our simulations, nodes are randomly deployed inside a rectangular space. Each node moves randomly according to the random waypoint model [209]: a node starts at a
Table 18.4. Simulation parameters

Total number of nodes                100
Number of malicious nodes            0–50
Maximum velocity (vmax)              10 m/s
Average pause time                   100 s
Dimensions of space                  1000 m × 1000 m
Maximum transmission range           250 m
Average packet inter-arrival time    2 s
Data-packet size                     1024 bytes
Link bandwidth                       1 Mbps
random position, waits for a duration called the pause time, then randomly chooses a new location and moves toward the new location with a velocity uniformly chosen between 0 and vmax. When it arrives at the new location, it waits for another random pause time and then repeats the process. The physical layer assumes a fixed-transmission-range model, such that two nodes can directly communicate with each other successfully only if they are within each other's transmission range. The MAC-layer protocol simulates the IEEE 802.11 distributed coordination function (DCF) with a four-way handshaking mechanism [197]. Some simulation parameters are listed in Table 18.4. In the simulations, each selfish node acts as a service provider that randomly picks another selfish node as the receiver, and packets are scheduled to be generated according to a Poisson process. Similarly, each malicious node also randomly picks another malicious node as the receiver to which to send packets. The total number of malicious nodes varies from 0 to 50. Among those malicious nodes, 1/3 launch packet-dropping attacks, dropping all packets passing through them whose sources are not malicious; 1/3 launch attacks emulating link breakage, emulating a link breakage whenever they receive a packet-forwarding request from a selfish node; and 1/3 launch traffic-injection attacks. For each selfish or malicious node that does not launch traffic-injection attacks, the average packet inter-arrival time is 2 s, whereas for those malicious nodes which do launch traffic-injection attacks, the average packet inter-arrival time is 0.1 s. In the simulations, all data packets are of the same size.
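The mobility model described above can be sketched as follows. This is an illustrative implementation; in particular the exponential pause-time distribution (mean 100 s) is our assumption, since the text specifies only a random pause time with a given average.

```python
import random

def random_waypoint(steps: int, width: float = 1000.0, height: float = 1000.0,
                    v_max: float = 10.0, pause_mean: float = 100.0,
                    dt: float = 1.0, rng: random.Random = None) -> list:
    """One node's random-waypoint trajectory: pause, pick a uniform random
    waypoint, move toward it at a speed uniform in (0, v_max], repeat."""
    rng = rng or random.Random(0)
    x, y = rng.uniform(0, width), rng.uniform(0, height)
    traj = [(x, y)]
    pause = rng.expovariate(1.0 / pause_mean)  # pause-time law is assumed
    target, v = None, 0.0
    for _ in range(steps):
        if pause > 0:
            pause -= dt                        # waiting at the current position
        else:
            if target is None:                 # choose a new waypoint and speed
                target = (rng.uniform(0, width), rng.uniform(0, height))
                v = rng.uniform(1e-9, v_max)
            dx, dy = target[0] - x, target[1] - y
            dist = (dx * dx + dy * dy) ** 0.5
            if dist <= v * dt:                 # arrived: pause again
                x, y = target
                target = None
                pause = rng.expovariate(1.0 / pause_mean)
            else:
                x, y = x + dx / dist * v * dt, y + dy / dist * v * dt
        traj.append((x, y))
    return traj
```

Positions stay inside the rectangle and per-step displacement never exceeds vmax · dt, matching the fixed-transmission-range physical layer's assumptions.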
Depending on the selfish nodes’ forwarding decision, three systems have been implemented in the simulations: the ARCS system, which we call “ARCS”; the ARCS system without balance constraint (i.e., the cooperation degree is set to be infinity for all selfish nodes), which we call “ARCS-NBC”; and a fully cooperative system, which we call “FULL-COOP.” In ARCS, all selfish nodes behave in the way described in Section 18.3. In ARCS-NBC, the same strategies as in ARCS have been used to detect the launching of packet-dropping attacks and attacks emulating link breakage, but now (18.6) is not a necessary condition for the forwarding of packets for other nodes, and a selfish node will unconditionally forward packets for those nodes which have not been marked as malicious by it. In FULL-COOP, all selfish nodes will unconditionally forward packets
462
Stimulation of attack-resistant cooperation
for other nodes, and no mechanisms for detection and punishment of malicious nodes have been used. In all three systems, the same route-discovery procedure as described in Section 18.3.5 is used. We use {S1, . . . , SL} to denote the L selfish nodes and use {M1, . . . , MK} to denote the K malicious nodes in the network. In this section the following performance metrics are used.
• Energy efficiency of selfish nodes, which is the total profit gained by all selfish nodes divided by the total energy spent by all selfish nodes.
• Average damage received per selfish node, which is the total damage received by all selfish nodes divided by the total number of selfish nodes, that is,
Davg = (1/L) Σ_{i=1}^{L} [ Σ_{l=1}^{L} B(Sl, Si) + Σ_{k=1}^{K} B(Mk, Si) ].
(18.22)
• Balance variation of selfish nodes, which is the standard deviation of selfish nodes' overall balance under the assumption that Σ_{l=1}^{L} B(Sl) = 0, that is,
Variation = sqrt( (1/L) Σ_{l=1}^{L} B(Sl)^2 ).
(18.23)
By assuming Σ_{l=1}^{L} B(Sl) = 0, the effects caused by malicious nodes have been incorporated, which will make Σ_{l=1}^{L} B(Sl) deviate from 0. This metric also reflects the fairness for selfish nodes, where Variation = 0 implies absolute fairness, and an increase in Variation implies an increase in possible unfairness for selfish nodes.
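The two metrics (18.22) and (18.23) can be computed as below (illustrative Python; B maps an ordered node pair to the corresponding balance, and the function names are ours).

```python
def average_damage(B: dict, selfish: list, malicious: list) -> float:
    """D_avg (18.22): mean over selfish nodes S_i of the damage received,
    summing B(S_l, S_i) over selfish l and B(M_k, S_i) over malicious k."""
    total = sum(B.get((n, si), 0.0)
                for si in selfish
                for n in list(selfish) + list(malicious))
    return total / len(selfish)

def balance_variation(overall: list) -> float:
    """Variation (18.23): sqrt((1/L) * sum_l B(S_l)^2), the standard
    deviation of overall balances under the assumption sum_l B(S_l) = 0."""
    return (sum(b * b for b in overall) / len(overall)) ** 0.5
```

Missing pairs default to a balance of 0, which corresponds to nodes that never interacted.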
18.5.2
Simulation results In our simulations, each configuration has been run for 10 independent rounds using different random seeds, and the results are averaged over all the rounds. In the simulations, we set αS = 1, βS = 0.5, and Tinterval = 100 s for each selfish node S, which is equal to the average pause time. The run time for each round is 5000 s. For each selfish node, the link-breakage ratio is estimated through its own experience, which is the ratio between the total number of link breakages it has experienced with itself being the transmitter and the total number of transmissions it has tried. Figure 18.4 shows the values of the link-breakage ratio estimated by each node, which shows that all nodes have almost the same link-breakage ratio (here 2%). Figure 18.5 shows a performance comparison among the three systems, ARCS, ARCS-NBC, and FULL-COOP, where, in ARCS, Bthreshold is set to 60E, and the value of L Bthreshold is set according to (18.20) with L Binit = 20E. Experiments using other values of Bthreshold have also been conducted, showing that 60E can achieve a good tradeoff between performance and possible damage (demonstrated in Figure 18.7 later). From the comparisons of selfish nodes’ energy efficiency (Figure 18.5(a)) we can see that ARCS has a much higher efficiency than ARCS-NBC and FULL-COOP when
[Figure 18.4 Self-estimated link-breakage ratio: average link-breakage ratio versus node index.]

[Figure 18.5 A performance comparison of the three systems (FULL-COOP, ARCS-NBC, and ARCS) versus the number of malicious nodes: (a) energy efficiency, (b) balance variation, (c) average damage (in E).]
there exist malicious nodes. When only selfish nodes exist, ARCS-NBC and FULL-COOP have the same efficiency, since they work in the same way, and both have slightly higher efficiency than ARCS at the cost of a higher balance variation of selfish nodes, which is shown in Figure 18.5(b). The balance-variation comparison shows that ARCS has a much lower balance variation than the other two systems, which remains almost unchanged with increasing number of malicious nodes, whereas, for the other two systems, the balance variation increases linearly and dramatically with increasing number of malicious nodes. This comparison also implies lower unfairness for selfish nodes in the ARCS system. The average-damage comparison (Figure 18.5(c)) shows that the damage that can be caused by malicious nodes is much lower in ARCS than in the other two systems, and increases very slowly with increasing number of malicious nodes. From the results shown in Figure 18.5 we can also see that, although ARCS-NBC has gained a lot of improvement over FULL-COOP by introducing mechanisms to detect packet-dropping attacks and attacks emulating link breakage, its performance is still much worse than that of ARCS. The reason is that ARCS-NBC cannot detect and punish those malicious nodes which launch traffic-injection attacks, so a large amount of energy has been wasted in forwarding packets for those nodes. Figure 18.6 illustrates the different effects of traffic-injection attacks in the three systems. The vertical axis shows the percentage damage to the network caused by traffic-injection attacks. From these results we can see that, in ARCS, only about 40% of the damage is caused by traffic-injection attacks; in FULL-COOP this percentage increases to around 80%, while in ARCS-NBC the percentage increases to more than 90%, although the overall damage caused by all malicious nodes to the selfish nodes in ARCS-NBC is less than that in FULL-COOP.
In other words, in Figure 18.5(c), the gap between the results corresponding to ARCS and the results corresponding to ARCS-NBC is caused by traffic-injection attacks, whereas the gap between the results corresponding to ARCS-NBC and the results corresponding to FULL-COOP is caused by packet-dropping/emulating-link-breakage attacks. These results explain why ARCS-NBC has much worse performance than ARCS, and clearly show how necessary it is to introduce mechanisms to defend against traffic-injection attacks.

Figure 18.6 Effects of traffic-injection attacks in the three systems.

Figure 18.7 A performance comparison of the ARCS system under different cooperation degrees: (a) energy efficiency, (b) balance variation, and (c) average damage.

Next we evaluate the ARCS system under different cooperation degrees, with all other parameters kept unchanged. Figure 18.7 shows the performance of the ARCS system as the cooperation degree is varied from 10E to 160E. From Figure 18.7(a) we can see that, when the cooperation degree is 40E or more, the energy efficiency remains almost constant. However, Figures 18.7(b) and (c) show that, with increasing cooperation degree, both the balance variation of selfish nodes and the average damage per selfish node increase. This can be explained by reference to Figure 18.8, which shows that, with increasing cooperation degree, the percentage damage caused by traffic-injection attacks also increases. That is, the higher the cooperation degree, the greater the vulnerability to traffic-injection attacks. These results suggest that a relatively small cooperation degree (for example, 40E) is enough to achieve good performance for selfish nodes, in terms of high energy efficiency, low unfairness, and a small amount of damage.
Figure 18.8 Effects of traffic-injection attacks under different cooperation degrees in the ARCS system.
18.6 Summary and bibliographical notes

In this chapter we have considered the issues of cooperation stimulation and security in autonomous ad hoc networks, and presented an attack-resistant cooperation-stimulation (ARCS) system to stimulate cooperation among selfish nodes and defend against various attacks launched by malicious nodes. In the ARCS system, each node can adaptively and autonomously adjust its own strategy according to the changing environment. The analysis has shown that, in the ARCS system, the damage that can be caused by malicious nodes is bounded, and cooperation among selfish nodes can be enforced by introducing a positive cooperation degree. At the same time, the ARCS system maintains good fairness among selfish nodes. The simulation results agree with the analysis. Another key property of the ARCS system is that it is fully distributed and completely self-organizing, and requires neither tamper-proof hardware nor central management points. Some related references can be found in [483] [485].

In [26], a cooperation-stimulation approach was developed that uses a virtual currency, called nuglets, as payment for packet forwarding; it was later improved in [27] using credit counters. However, tamper-proof hardware is required in each node to count the credits. In [493], Sprite was presented to stimulate cooperation. It removes the requirement for tamper-proof hardware, but requires a centralized credit-clearance service that is trusted by all nodes. In [3], a pricing-based truthful and cost-efficient routing protocol for mobile ad hoc networks was proposed, and a similar approach was presented in [504]. Although these schemes can effectively stimulate cooperation among selfish nodes, the requirement for tamper-proof hardware or a central billing service greatly limits their applications. In [283], the first reputation-based system for ad hoc networks was presented in order to mitigate nodes' misbehavior. In this approach, each node launches a "watchdog" to monitor its neighbors' packet-forwarding activities and to make sure that these neighbors have forwarded the packets according to its requests. Following [283], CORE was developed to enforce cooperation among selfish nodes [290], and CONFIDANT was proposed to detect and isolate misbehaving nodes and thus make it unattractive to deny cooperation [15].
19 Optimal strategies for stimulation of cooperation
In this chapter we present a joint analysis of cooperation stimulation and security in autonomous mobile ad hoc networks under a game-theoretic framework. We first investigate a simple yet illuminating two-player packet-forwarding game, and derive the optimal and cheat-proof packet-forwarding strategies. We then investigate the secure-routing and packet-forwarding game for autonomous ad hoc networks in noisy and hostile environments, and derive a set of reputation-based cheat-proof and attack-resistant cooperation-stimulation strategies. When analyzing the cooperation strategies, besides Nash equilibrium, other optimality criteria, such as Pareto optimality, subgame perfection, fairness, and cheat-proofing, are also considered. Both analysis and simulation studies show that the strategies discussed here can effectively stimulate cooperation among selfish nodes in autonomous mobile ad hoc networks under noise and attacks, and that the damage that can be caused by attackers is bounded and limited.
19.1 Introduction

Node cooperation is a very important issue in order for ad hoc networks to be successfully deployed in an autonomous way. In addition to the many schemes that have been studied to stimulate node cooperation in ad hoc networks, ARCS was considered in the previous chapter to simultaneously stimulate cooperation among selfish nodes and defend against various attacks. Meanwhile, efforts have also been made toward mathematically analyzing cooperation in autonomous ad hoc networks in a game-theoretic framework, such as in [396] [427] [57] [291] [5] [121].

In this chapter we focus on designing reputation-based cooperation-stimulation strategies for autonomous mobile ad hoc networks under a game-theoretic framework. However, there are several major differences that distinguish this chapter from the existing work. First, instead of focusing only on selfish behavior, our analysis also incorporates the effect of possible malicious behavior. Second, we address these issues under more realistic scenarios by considering error-prone communication channels, whereas most existing cooperation schemes work well only in ideal environments. Third, we fully explore possible cheating behavior and derive cheat-proof strategies, whereas most existing work requires nodes to honestly
report their private information. Fourth, in our analysis, besides Nash equilibrium, other optimality criteria, such as Pareto optimality, cheat-proofing, and fairness, have also been applied.

We first study a simple yet illuminating two-player packet-forwarding game and investigate its Nash equilibria. Since this game usually has multiple equilibria, we then investigate how to apply extra optimality criteria, such as subgame perfection, Pareto optimality, fairness, and cheat-proofing, to further refine the obtained Nash-equilibrium solutions. Finally, a unique Nash-equilibrium solution is derived, which suggests that a node should not help its opponent more than its opponent has helped it. The analysis is then extended to handle multi-node scenarios in noisy and hostile environments by modeling the dynamic interactions between nodes as a secure-routing and packet-forwarding game. By taking into consideration the differences between the two-node case and the multi-node case, we also derive attack-resistant and cheat-proof cooperation-stimulation strategies for autonomous mobile ad hoc networks. Both analysis and simulation studies show that the strategies can effectively stimulate cooperation under noise and attacks, and that the damage that can be caused by attackers is bounded and limited.

The rest of this chapter is organized as follows. Section 19.2 studies the two-player packet-forwarding game. Section 19.3 describes the system and the game model. Cooperation-stimulation strategies are presented in Section 19.4, with analysis under no attacks in Section 19.5 and analysis under attacks in Section 19.6. Discussion of the strategies is presented in Section 19.7. In Section 19.8, extensive simulations are conducted to evaluate the strategies under various scenarios.
19.2 Optimal strategies in packet-forwarding games

19.2.1 The game model

In this chapter we focus on the most basic networking function, namely packet forwarding. We first study a simple yet illuminating two-player packet-forwarding game. In this game there are two players, denoted by N = {1, 2}. Each player needs its opponent to forward a certain number of packets in each stage. For each player i, the cost of forwarding a packet is ci, and the gain it can get for each packet that its opponent has forwarded for it is gi. To simplify the illustration, we assume that all packets are of the same size. Here the cost can be the energy consumed, and the gain is usually application-specific. It is reasonable to assume that gi ≥ ci, and that there exists a cmax with ci ≤ cmax. Let Bi be the number of packets that player i will request its opponent to forward in each stage. Here Bi, ci, and gi are player i's private information, which is not known to its opponent unless player i reports it (either honestly or dishonestly).

In each stage, let A1 = {0, 1, . . ., B2} denote the set of actions that player 1 can take and let A2 = {0, 1, . . ., B1} denote the set of actions that player 2 can take. That is, ai ∈ Ai denotes that player i will forward ai packets for its opponent in this stage. We refer to an action profile a = (a1, a2) as an outcome and denote the set A1 × A2 of
outcomes by A. Then in each stage the players' payoffs, given that the action profile a is taken, are calculated as follows:

\[
u_1(a) = a_2 \times g_1 - a_1 \times c_1, \qquad u_2(a) = a_1 \times g_2 - a_2 \times c_2. \tag{19.1}
\]
That is, the payoff of a player is the difference between the total gain it obtained with the help of its opponent and the total cost it incurred in helping its opponent. We refer to u(a) = (u1(a), u2(a)) as the payoff profile associated with the action profile a.

It is easy to check that, if this game is played only once, the only Nash equilibrium (NE) is a* = (0, 0). According to the backward-induction principle [322], this is also true when the stage game is played a finite number of times with the game-termination time known to both players. Therefore, in such scenarios, for each player, the only optimal strategy is to always play non-cooperatively. However, in most situations these two players may interact many times and neither can know exactly when the opponent will quit the game. Next we show that, under such a more realistic setting, besides the non-cooperative strategy, cooperative strategies can also be obtained.

Let G denote the repeated version of the above one-stage packet-forwarding game. Let si denote player i's behavior strategy, and let s = (s1, s2) denote the strategy profile. Next we consider the following two utility functions:

\[
U_i(s) = \lim_{T \to \infty} \frac{1}{T} \sum_{t=0}^{T} u_i(s), \qquad U_i(s, \delta) = (1 - \delta) \sum_{t=0}^{\infty} \delta^t u_i(s). \tag{19.2}
\]
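Returning to the one-shot game for a moment, the claim that a* = (0, 0) is its only NE is easy to verify numerically. The sketch below uses illustrative gains, costs, and request sizes of my own choosing (not values from the text): forwarding is strictly costly for the forwarder, so 0 is each player's unique best response to every opponent action.

```python
# Verify that a* = (0, 0) is the unique one-shot NE for sample parameters.
g1, c1, g2, c2 = 2.0, 1.0, 3.0, 1.5   # assumed gains and costs
B1, B2 = 3, 4                          # assumed per-stage requests

def u1(a1, a2):
    """Player 1's stage payoff from eq. (19.1)."""
    return a2 * g1 - a1 * c1

def u2(a1, a2):
    """Player 2's stage payoff from eq. (19.1)."""
    return a1 * g2 - a2 * c2

# player 1 chooses a1 in {0, ..., B2}; player 2 chooses a2 in {0, ..., B1}
assert all(max(range(B2 + 1), key=lambda a1: u1(a1, a2)) == 0
           for a2 in range(B1 + 1))
assert all(max(range(B1 + 1), key=lambda a2: u2(a1, a2)) == 0
           for a1 in range(B2 + 1))
print("unique one-shot NE:", (0, 0))
```

Because ci > 0, each player's payoff is strictly decreasing in its own action, so mutual defection is a dominant-strategy equilibrium of the stage game.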
The utility function Ui(s) can be used when the game will be played an infinite number of times, and the discounted version Ui(s, δ) can be used when the game will be played a finite number of times but no one knows the exact termination time. Here the discount factor δ (with 0 < δ < 1) characterizes each player's expected playing time. Since in general the results obtained on the basis of Ui(s) can also be applied to scenarios in which Ui(s, δ) is used as long as δ approaches 1, in this section we will mainly focus on Ui(s).

Now we analyze the possible NE for the game G with utility function Ui(s). According to the folk theorem [322], for every feasible and enforceable payoff profile there exists at least one NE that achieves it, where the set of feasible payoff profiles for the above game is

\[
V_0 = \text{convex hull}\left\{ v \mid \exists a \in A \text{ with } u(a) = v \right\} \tag{19.3}
\]

and the set of enforceable payoff profiles, denoted by V1, is

\[
V_1 = \left\{ v \mid v \in V_0 \text{ and } \forall i: v_i \ge \underline{v}_i \right\}, \quad \text{where } \underline{v}_i = \min_{a_{-i} \in A_{-i}} \max_{a_i \in A_i} u_i(a_{-i}, a_i). \tag{19.4}
\]
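The minimax (punishment) values in (19.4) can be computed directly. The short sketch below, again with assumed illustrative parameters, confirms that each player's punishment level is zero: the opponent's harshest punishment is to forward nothing, against which the best response is also to forward nothing. Hence the enforceable profiles are exactly the feasible profiles with nonnegative payoffs.

```python
# Compute the minimax values of eq. (19.4) for sample parameters.
g1, c1, g2, c2 = 2.0, 1.0, 3.0, 1.5   # assumed gains and costs
B1, B2 = 3, 4                          # assumed per-stage requests
A1, A2 = range(B2 + 1), range(B1 + 1)  # action sets of players 1 and 2

# v_i = min over opponent action of (max over own action of u_i)
v1 = min(max(a2 * g1 - a1 * c1 for a1 in A1) for a2 in A2)
v2 = min(max(a1 * g2 - a2 * c2 for a2 in A2) for a1 in A1)
print(v1, v2)   # 0.0 0.0
```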
Figure 19.1 depicts these sets for the game with B1 = 1 and B2 = 2, where the vertical axis denotes player 1's payoff and the horizontal axis denotes player 2's payoff. The payoff profiles inside the convex hull of {(0, 0), (g1, −c2), (g1 − 2c1, 2g2 − c2), (−2c1, 2g2)} (including the boundaries) form the set of feasible payoff profiles V0; the set of payoff profiles inside the shaded area (including the boundaries) is the set of feasible and enforceable payoff profiles V1.

Figure 19.1 Feasible and enforceable payoff profiles.

We can easily check that, as long as g1 g2 > c1 c2, there will exist an infinite number of NE. To simplify our illustration, in this chapter we will use x = (x1, x2) to denote the set of NE strategies corresponding to the enforceable payoff profile (x2 g1 − x1 c1, x1 g2 − x2 c2).
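The convex-hull corners quoted above can be checked by enumerating all outcomes of the stage game. In the sketch below the symbolic gains and costs are replaced by assumed numeric values (not from the text); the tuple order is (player 1's payoff, player 2's payoff), matching u(a).

```python
# Enumerate all payoff profiles for B1 = 1, B2 = 2 and check that the
# four hull corners named in the text are attainable outcomes.
from itertools import product

g1, c1, g2, c2 = 2.0, 1.0, 3.0, 1.5   # assumed gains and costs
B1, B2 = 1, 2

profiles = {(a2 * g1 - a1 * c1, a1 * g2 - a2 * c2)
            for a1, a2 in product(range(B2 + 1), range(B1 + 1))}

corners = {(0.0, 0.0),
           (g1, -c2),                   # outcome a = (0, 1)
           (g1 - 2 * c1, 2 * g2 - c2),  # outcome a = (2, 1)
           (-2 * c1, 2 * g2)}           # outcome a = (2, 0)
print(corners <= profiles)   # True: all four corners are attainable
```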
19.2.2 Nash-equilibrium refinements

On the basis of the above analysis we can see that the infinitely repeated game G may have an infinite number of NE. It is also easy to check that not all the obtained NE payoff profiles are simultaneously acceptable to both players. For example, the payoff profile (0, 0) will not be acceptable from both players' points of view if they are rational. Next we show how to perform equilibrium refinement, that is, how to introduce new optimality criteria to eliminate those NE solutions which are less rational, less robust, or less likely. Specifically, the following optimality criteria will be considered: Pareto optimality, subgame perfection, proportional fairness, and absolute fairness.
Subgame perfection

Our first step toward refining the NE solutions is to rule out those empty threats on the basis of more credible punishments, known as subgame perfect equilibrium. According to the perfect folk theorem [322], every strictly enforceable payoff profile v ∈ V2 is a subgame-perfect-equilibrium payoff profile of the game G, where

\[
V_2 = \left\{ v \mid v \in V_0 \text{ and } \forall i: v_i > \underline{v}_i \right\}, \quad \text{where } \underline{v}_i = \min_{a_{-i} \in A_{-i}} \max_{a_i \in A_i} u_i(a_{-i}, a_i). \tag{19.5}
\]
That is, after applying the criterion of subgame perfection, only a small set of NE will be removed.
Pareto optimality

Our second step toward refining the set of NE solutions is to apply the criterion of Pareto optimality. Given a payoff profile v ∈ V0, v is said to be Pareto optimal if there is no v′ ∈ V0 for which v′i > vi for all i ∈ N; v is said to be strongly Pareto optimal if there is no v′ ∈ V0 for which v′i ≥ vi for all i ∈ N and v′i > vi for some i ∈ N [322]. It is easy to check that only those payoff profiles lying on the boundary of the set V0 can be Pareto optimal. Let V3 denote the subset of feasible payoff profiles that are also Pareto optimal. For the case depicted in Figure 19.1, V3 is the set of payoff profiles that lie on the segment between (g1, −c2) and (g1 − 2c1, 2g2 − c2) and on the segment between (g1 − 2c1, 2g2 − c2) and (−2c1, 2g2). After applying the criterion of Pareto optimality, although a large portion of the NE has been removed from the feasible set, there still exists an infinite number of NE. Let V4 = V3 ∩ V2.
Proportional fairness

Next we try to further refine the solution set on the basis of the criterion of proportional fairness. Here a payoff profile is proportionally fair if U1(s)U2(s) is maximized, which can be achieved by maximizing u1(s)u2(s) in each stage. We can then reduce the solution set V4 to a unique point as follows:

\[
x^* =
\begin{cases}
\left( \left( \dfrac{c_2}{g_2} + \dfrac{g_1}{c_1} \right) \dfrac{B_1}{2},\; B_1 \right) & \text{if } \dfrac{B_1}{B_2} < 2 \Big/ \left( \dfrac{c_2}{g_2} + \dfrac{g_1}{c_1} \right), \\[2ex]
(B_2, B_1) & \text{if } 2 \Big/ \left( \dfrac{c_2}{g_2} + \dfrac{g_1}{c_1} \right) \le \dfrac{B_1}{B_2} \le \left( \dfrac{c_1}{g_1} + \dfrac{g_2}{c_2} \right) \Big/ 2, \\[2ex]
\left( B_2,\; \left( \dfrac{c_1}{g_1} + \dfrac{g_2}{c_2} \right) \dfrac{B_2}{2} \right) & \text{if } \dfrac{B_1}{B_2} > \left( \dfrac{c_1}{g_1} + \dfrac{g_2}{c_2} \right) \Big/ 2.
\end{cases}
\tag{19.6}
\]
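The case analysis in (19.6) can be cross-checked against a direct numerical maximization of the stage-payoff product u1 u2. The sketch below (all parameter values are illustrative assumptions, not from the text) grid-searches the feasible region and compares the argmax with the closed form:

```python
# Cross-check the closed-form proportional-fairness solution (19.6)
# against a brute-force grid search maximizing u1 * u2.

def payoffs(x1, x2, g1, c1, g2, c2):
    """Stage payoffs of eq. (19.1); x1, x2 are average forwarded amounts."""
    return x2 * g1 - x1 * c1, x1 * g2 - x2 * c2

def proportional_fair(B1, B2, g1, c1, g2, c2):
    """Closed-form solution (19.6)."""
    tau1 = 2.0 / (c2 / g2 + g1 / c1)
    tau2 = (c1 / g1 + g2 / c2) / 2.0
    r = B1 / B2
    if r < tau1:
        return (c2 / g2 + g1 / c1) * B1 / 2.0, B1
    if r > tau2:
        return B2, (c1 / g1 + g2 / c2) * B2 / 2.0
    return B2, B1

g1, c1, g2, c2 = 2.0, 1.0, 3.0, 1.0   # assumed gains and costs
B1, B2 = 4.0, 10.0                     # assumed per-stage requests
step = 0.01
best, best_x = float("-inf"), None
for i in range(int(round(B2 / step)) + 1):        # x1 in [0, B2]
    x1 = i * step
    for j in range(int(round(B1 / step)) + 1):    # x2 in [0, B1]
        x2 = j * step
        u1, u2 = payoffs(x1, x2, g1, c1, g2, c2)
        if u1 * u2 > best:
            best, best_x = u1 * u2, (x1, x2)

x_star = proportional_fair(B1, B2, g1, c1, g2, c2)
print(best_x, x_star)   # grid argmax close to the closed form (14/3, 4)
```

For these numbers B1/B2 = 0.4 falls in the first case, so the closed form gives x* = (14/3, 4), and the grid argmax agrees to within the grid step.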
Absolute fairness

In many situations, absolute fairness is also an important criterion. We first consider absolute fairness in payoff, which requires the payoffs of the two players to be equal. We can then reduce the solution set V4 to a unique point as follows:

\[
x^* =
\begin{cases}
\left( \dfrac{g_1 + c_2}{g_2 + c_1} B_1,\; B_1 \right) & \text{if } \dfrac{B_1}{B_2} \le \dfrac{g_2 + c_1}{g_1 + c_2}, \\[2ex]
\left( B_2,\; \dfrac{g_2 + c_1}{g_1 + c_2} B_2 \right) & \text{if } \dfrac{B_1}{B_2} \ge \dfrac{g_2 + c_1}{g_1 + c_2}.
\end{cases}
\tag{19.7}
\]

A similar criterion is absolute fairness in cost, which requires the costs incurred by the two players to be the same. We can then also reduce the solution set V4 to a unique point as follows:

\[
x^* =
\begin{cases}
\left( \dfrac{c_2}{c_1} B_1,\; B_1 \right) & \text{if } \dfrac{B_1}{B_2} \le \dfrac{c_1}{c_2}, \\[2ex]
\left( B_2,\; \dfrac{c_1}{c_2} B_2 \right) & \text{if } \dfrac{B_1}{B_2} \ge \dfrac{c_1}{c_2}.
\end{cases}
\tag{19.8}
\]
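Both fairness notions are easy to verify numerically. The sketch below (parameter values are illustrative assumptions of my own choosing) checks that (19.7) equalizes the two players' payoffs and that (19.8) equalizes their forwarding costs:

```python
# Verify the two absolute-fairness solutions for sample parameters.

def abs_fair_payoff(B1, B2, g1, c1, g2, c2):
    """Solution (19.7): absolute fairness in payoff."""
    if B1 / B2 <= (g2 + c1) / (g1 + c2):
        return (g1 + c2) / (g2 + c1) * B1, B1
    return B2, (g2 + c1) / (g1 + c2) * B2

def abs_fair_cost(B1, B2, c1, c2):
    """Solution (19.8): absolute fairness in cost."""
    if B1 / B2 <= c1 / c2:
        return (c2 / c1) * B1, B1
    return B2, (c1 / c2) * B2

g1, c1, g2, c2, B1, B2 = 2.0, 1.0, 3.0, 1.5, 4.0, 10.0  # assumed values

x1, x2 = abs_fair_payoff(B1, B2, g1, c1, g2, c2)
print(x2 * g1 - x1 * c1, x1 * g2 - x2 * c2)   # equal payoffs: 4.5 4.5

y1, y2 = abs_fair_cost(B1, B2, c1, c2)
print(y1 * c1, y2 * c2)                       # equal costs: 6.0 6.0
```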
Analysis of cheat-proofness for the solutions (19.6), (19.7), and (19.8)

We first study the solution (19.6). Let πi = ci/gi denote player i's cost–gain ratio (CGR). We first analyze whether player 1 can increase its payoff by reporting a false CGR, given that player 2 honestly reports its CGR. That is, we fix the value of π2, let π1 be player 1's true value, and let π′1 be the value that player 1 falsely reports, with π′1 > π1. Let τ1 = 2/(π2 + 1/π1), τ2 = (π1 + 1/π2)/2, τ′1 = 2/(π2 + 1/π′1), and τ′2 = (π′1 + 1/π2)/2. It is easy to check that τ1 < τ′1 and τ2 < τ′2.

Recall that Bi is the maximum number of packets that player i will request an opponent to forward in each stage. Let (x1, x2) denote the numbers of packets they will on average forward for each other in each stage according to the solution (19.6), given that the true values of B1 and B2 are known by both players. The relationship between x2/x1 and B1/B2 under different situations is illustrated in Figures 19.2(a) and (b). In these two figures, the dashed curve corresponds to the relationship between x2/x1 and B1/B2 given that player 1 honestly reports its CGR π1, while the solid curve corresponds to the relationship given that player 1 falsely reports its CGR as π′1.

Figure 19.2 Player 1 falsely reports the value of π1: (a) τ′1 < τ2; (b) τ′1 > τ2.

From the results illustrated in Figure 19.2 we can see that, by falsely reporting a higher CGR, in most situations player 1 can increase the ratio x2/x1. Next we study the effect of falsely reporting a higher CGR on player 1's payoff. We first consider the situation in which τ′1 ≤ τ2, which is illustrated in Figure 19.2(a). In this case the whole feasible space can be partitioned into five subareas in terms of the feasible range of B1/B2.

• For any value of B1/B2 inside range I, the solution corresponding to π1 is ((π2 + 1/π1)B1/2, B1), and the solution corresponding to π′1 is ((π2 + 1/π′1)B1/2, B1). Since π′1 > π1, falsely reporting a higher CGR means that player 1 can forward fewer packets for player 2 than it should, consequently increasing player 1's payoff.
• For any value of B1/B2 inside range II, the solution corresponding to π1 is (B2, B1), and the solution corresponding to π′1 is ((π2 + 1/π′1)B1/2, B1). Since B2 > (π2 + 1/π′1)B1/2, falsely reporting a higher CGR means that player 1 can forward fewer packets for player 2 than it should, consequently increasing player 1's payoff.
• For any value of B1/B2 inside range III, the solution corresponding to π1 is (B2, B1), and the solution corresponding to π′1 is also (B2, B1). That is, in this situation, changing the value of π1 to π′1 will not change player 1's payoff.
• For any value of B1/B2 inside range IV, the solution corresponding to π1 is (B2, (π1 + 1/π2)B2/2), and the solution corresponding to π′1 is (B2, B1). Since B1 > (π1 + 1/π2)B2/2, falsely reporting a higher CGR means that player 1 can request player 2 to forward more packets than player 2 should, consequently increasing player 1's payoff.
• For any value of B1/B2 inside range V, the solution corresponding to π1 is (B2, (π1 + 1/π2)B2/2), and the solution corresponding to π′1 is (B2, (π′1 + 1/π2)B2/2). Since (π′1 + 1/π2)B2/2 > (π1 + 1/π2)B2/2, falsely reporting a higher CGR means that player 1 can request player 2 to forward more packets than player 2 should, consequently increasing its own payoff.

Similar results can also be obtained for the case in which τ′1 > τ2, where now player 1 can increase its own payoff over all possible values of B1/B2 by falsely reporting a higher π1 value, given that π2 is fixed. In summary, by falsely reporting a higher π1 value, player 1 can in most situations increase its payoff, and in no situation will its payoff be decreased. Further, the higher the CGR that player 1 reports, the more benefit player 1 can get. Similarly, player 2 can also increase its benefit by falsely reporting a higher CGR.

Next we consider the solution (19.7). Let τ = (g2 + c1)/(g1 + c2) and τ′ = (g2 + c′1)/(g1 + c2), where c′1 > c1. Figure 19.3(a) illustrates the relationship between x2/x1 and B1/B2 for the two different reported cost values c1 and c′1, where g1, g2, and c2 are fixed. As in Figure 19.2, the dashed curve corresponds to the case in which player 1 reports its true cost value, whereas the solid curve corresponds to the case in which player 1 reports a false cost value.

Figure 19.3 Player 1 falsely reports the cost incurred.

From Figure 19.3(a) we can see that falsely reporting a higher cost value means that in all situations player 1 can increase the ratio x2/x1. Next we study the effect of falsely reporting a higher cost on player 1's payoff. As shown in Figure 19.3(a), the whole space can be partitioned into three subareas in terms of the feasible range of B1/B2.

• For any value of B1/B2 inside range I, the solution corresponding to c1 is (B1/τ, B1), and the solution corresponding to c′1 is (B1/τ′, B1). Since τ′ > τ, falsely reporting a higher cost means that player 1 can forward fewer packets for player 2 than it should, thereby increasing its own payoff.
• For any value of B1/B2 inside range II, the solution corresponding to c1 is (B2, τB2), and the solution corresponding to c′1 is (B1/τ′, B1). Since B1/τ′ < B2 and B1 > τB2, falsely reporting a higher cost means that player 1 can forward fewer packets for player 2 than it should and can request player 2 to forward more packets for it than player 2 should, thereby increasing its own payoff.
• For any value of B1/B2 inside range III, the solution corresponding to c1 is (B2, τB2), and the solution corresponding to c′1 is (B2, τ′B2). Since τ′ > τ, falsely reporting a higher cost means that player 1 can always request player 2 to forward more packets for it than player 2 should, and consequently increase its own payoff.

In summary, by falsely reporting a higher c1 value, player 1 can increase its payoff in all situations, given that c2 and g2 are fixed. Further, the higher the value of c1 that player 1 reports, the more benefit player 1 can get. By a similar analysis, it is also easy to show that, by falsely reporting a lower g1 value, player 1 can increase its payoff in all situations, given that c2 and g2 are fixed. Similarly, player 2 can also increase its benefit by falsely reporting a higher c2 or a lower g2, given that g1 and c1 are fixed.

Now we consider the solution (19.8). Figure 19.3(b) illustrates the relationship between x2/x1 and B1/B2 for the two different reported cost values c1 and c′1, with c′1 > c1 and c2 fixed.
From Figure 19.3(b) we can see that, by falsely reporting a higher c1 value, player 1 can in all situations increase the ratio of x2 /x1 . Applying similar analysis to that presented before, we can conclude that, given a fixed c2 , player 1 can always increase its payoff by falsely reporting a higher c1 value. Further, the higher the value of c1 player 1 reports, the more benefit player 1 can get. Similarly, player 2 can also increase its payoff by falsely reporting a higher c2 value, given a fixed c1 .
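As a concrete illustration of this non-cheat-proofness, the sketch below (with assumed parameter values, not from the text) evaluates player 1's true payoff under solution (19.8) when it reports its cost honestly versus inflated; the inflated report strictly raises its payoff:

```python
# Demonstrate that solution (19.8) is not cheat-proof: inflating the
# reported cost c1 shifts the fairness-in-cost split in player 1's favor.

def solution_19_8(B1, B2, c1, c2):
    if B1 / B2 <= c1 / c2:
        return (c2 / c1) * B1, B1
    return B2, (c1 / c2) * B2

g1, c1_true, c2, B1, B2 = 2.0, 1.0, 1.5, 4.0, 10.0  # assumed values

results = {}
for c1_reported in (1.0, 2.0):          # honest report, then inflated
    x1, x2 = solution_19_8(B1, B2, c1_reported, c2)
    results[c1_reported] = x2 * g1 - x1 * c1_true   # payoff uses TRUE cost
print(results)   # {1.0: 2.0, 2.0: 5.0}
```

With the honest report player 1 forwards 6 packets per stage; with the inflated report it forwards only 3 for the same 4 packets in return, so its payoff rises from 2 to 5.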
19.2.3 Optimal and cheat-proof packet-forwarding strategies

In Section 19.2.2 we obtained several unique solutions after applying various optimality criteria to refine the original solutions. However, all these solutions involve some private information reported by each selfish player. Owing to players' selfishness, honest reporting of private information cannot be taken for granted, and players may tend to cheat whenever they believe cheating can increase their payoffs. In this chapter, we refer to an NE as cheat-proof if no player can further increase its payoff by revealing false private information to its opponents. In the above, we have examined the three solutions (19.6), (19.7), and (19.8). The analysis shows that none of them is cheat-proof with respect to the private information ci and gi. Since all these unique solutions are strongly Pareto optimal, an increase of one's opponent's payoff will lead to a decrease of
one’s own payoff. Therefore players have no incentive to report their private information honestly. On the contrary, they will cheat whenever cheating can increase their payoff. What is the consequence if both players cheat with respect to ci and gi ? Let’s first examine the solution (19.6). In this case, according to the previous analysis, both players will report a ci /gi value that is as high as possible. Since we have assumed that gi ≥ ci and ci ≤ cmax , both players will set gi = ci = cmax , and the solution (19.6) will take the following form: x ∗ = (min(B1 , B2 ), min(B1 , B2 )).
(19.9)
After applying a similar analysis to the solutions (19.7) and (19.8), we see that they too converge to the form (19.9). Accordingly, the corresponding payoff profile is

\[
v^* = \big((g_1 - c_1)\min\{B_1, B_2\},\; (g_2 - c_2)\min\{B_1, B_2\}\big). \tag{19.10}
\]
It is easy to check that the solution (19.9) forms an NE, is Pareto optimal, and is cheat-proof with respect to the private information ci and gi.

From the above analysis we have learned that nodes can report false ci and gi to increase their payoff. Recall that Bi is also regarded as private information reported by player i. A natural question is then the following: in the obtained solutions (19.6), (19.7), (19.8), and (19.9), can player i further increase its payoff by reporting a false Bi value? After examining all four unique solutions, the answer is no. In other words, all these unique solutions are cheat-proof with respect to the private information Bi.

Next we analyze why these solutions are cheat-proof with respect to Bi. Let's first consider whether player 1 can further increase its payoff by reporting a false high B1 value. Here we use B1 to denote the actual number of packets that player 1 needs its opponent to forward in each stage; that is, player 1 cannot get any extra gain if player 2 forwards more than B1 packets. Let B′1 be a value that player 1 may report to its opponent. Now, what is the consequence if player 1 reports a false B′1 > B1? By checking the four solutions, we can see that such a B′1 can never decrease x1 (i.e., the number of packets that player 1 needs to forward for player 2). Further, we can see that, when x1 is increased upon reporting a false B′1 > B1, the optimal x2 is always B1. Therefore, reporting a B′1 > B1 can never increase player 1's payoff. How about reporting a false B′1 < B1? After checking all these solutions, we can see that such an action will either have no effect on the solutions or will decrease x2. Although such an action may also decrease x1, the penalty due to the decrease of x2 always outweighs the gain due to the decrease of x1. Therefore, reporting a B′1 < B1 can never increase player 1's payoff.
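The argument that misreporting B1 cannot help can also be checked numerically under the cheat-proof solution (19.9). In this sketch (gains, costs, and request sizes are illustrative assumptions), player 1's gain is capped at its true need B1, while its forwarding duty follows the reported value:

```python
# Check that no reported B1' beats honest reporting under solution (19.9).
g1, c1 = 2.0, 1.0        # assumed gain and cost for player 1
B1_true, B2 = 3, 10      # player 1's true need and player 2's request

def u1_when_reporting(B1_reported):
    x = min(B1_reported, B2)        # both sides forward min(B1', B2)
    gain = min(x, B1_true) * g1     # packets beyond the true B1 bring no gain
    return gain - x * c1

honest = u1_when_reporting(B1_true)
assert all(u1_when_reporting(b) <= honest for b in range(2 * B2))
print(honest)   # 3.0
```

Over-reporting forces player 1 to forward extra packets that bring no extra gain, and under-reporting sacrifices g1 per packet to save only c1 ≤ g1, so honest reporting is optimal.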
Owing to the symmetry between the two players, it is easy to check that reporting a false B′2 can never increase player 2's payoff either. From the above analysis we can conclude that, in the two-player packet-forwarding game, in order to maximize its own payoff and be resistant to possible cheating behavior, a player should not forward more packets for its opponent than its opponent does for it. Specifically, each player i ∈ N should, in each stage, forward min(B1, B2) packets for its opponent unless there was a previous stage in which its opponent forwarded
fewer than min(B1 , B2 ) packets for it, in which case it will stop forwarding packets for its opponent forever. We refer to the above strategy as the two-player cheat-proof packet-forwarding strategy.
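The two-player cheat-proof packet-forwarding strategy is a grim-trigger rule and can be sketched as follows. The request sizes and the deviation pattern below are illustrative, and in this simple model each player observes its opponent's previous actions perfectly:

```python
# Simulate the two-player cheat-proof packet-forwarding strategy:
# forward min(B1, B2) packets per stage until the opponent ever
# forwards fewer, then stop forwarding forever.

def play(stages, deviate_at=None):
    B1, B2 = 5, 8
    quota = min(B1, B2)
    punished = [False, False]       # has player i triggered punishment?
    history = []
    for t in range(stages):
        a1 = 0 if punished[0] else quota
        a2 = 0 if punished[1] else quota
        if deviate_at is not None and t == deviate_at:
            a2 = quota - 1          # player 2 under-forwards once
        history.append((a1, a2))
        # trigger: stop cooperating forever once the opponent under-forwards
        if a2 < quota:
            punished[0] = True
        if a1 < quota:
            punished[1] = True
    return history

print(play(4))                 # [(5, 5), (5, 5), (5, 5), (5, 5)]
print(play(4, deviate_at=1))   # [(5, 5), (5, 4), (0, 5), (0, 0)]
```

Note that a single deviation collapses cooperation for good (player 2's own forwarding at stage 2 is wasted once player 1 starts punishing), which is exactly the threat that sustains the cooperative equilibrium.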
19.2.4
Discussion The strategies studied in [396] [427] may look similar to the one described above. In [396] the cooperation in ad hoc networks was considered by focusing on the energyefficient aspects of cooperation, whereby the nodes are classified into different energy classes and the behavior of each node depends on the energy classes of the participants of each connection. The authors demonstrated that, if two nodes belong to the same class, they should apply the same packet-forwarding ratio. However, they require nodes to report their classes honestly, and a node can easily cheat to increase its own performance. Meanwhile, using normalized throughput alone as the performance metric might not be a good choice in general, as will be explained in Section 19.7. In [427], it was shown that it is not possible to force a node to forward more packets than it sends on average (Lemma 6.2), and then concluded that cooperation can be enforced in a mobile ad hoc network, provided that enough members of the network agree on it, as long as no node has to forward more traffic that it generates (Theorem 6.3). However, the above analysis has shown that a strategy profile can still be enforceable even though it may require a node to forward more packets than it sends on average, as illustrated in solutions (19.6), (19.7), and (19.8). Second, in a mobile ad hoc network, due to its multi-hop nature, in general the number of packets a node forwards should be much greater than the number of packets it generates. Accordingly, their strategy cannot enforce cooperation at all in most scenarios. The works presented in [3] [504] are also related to this chapter in the sense that cheating behavior has been considered. The authors developed auction-based schemes to stimulate participation in packet forwarding, where, by using VCG-based second-price auctioning, these schemes force selfish nodes to report their true private information (such as cost) honestly in order to maximize their profit. 
However, in such schemes, a trusted third-party auctioneer is required for each route selection and central banking services are needed to handle billing information, conditions that usually cannot be satisfied in a mobile ad hoc network. In this chapter, we focus on the scenario in which neither a trusted third-party auctioneer nor a central banking service is available.
19.3 System description and the game model

Next we will investigate how to stimulate cooperation in autonomous mobile ad hoc networks in noisy and hostile environments. We focus on the scenario in which the network remains alive for a relatively long time and there exists a finite number of users, such as students on a campus. Each user stays in the network for a reasonably long time, but is allowed to leave and reconnect to the network when necessary. We
assume that each legitimate user has a unique registered and verifiable identity (e.g., a public/private-key pair), which is issued by some central authority and will be used to perform the necessary access control and authentication.

We focus on an information-push model in which it is the source's duty to guarantee the successful delivery of packets to their destinations. For each user, forwarding a packet incurs some cost, and a successful delivery of a packet originating from it to its destination brings some gain. Here the cost can be the energy consumed, and the gain is usually user- and/or application-specific.

Since ad hoc networks are usually deployed in adversarial environments, there may be some malicious nodes whose goal is to cause damage to other nodes. In this chapter we focus on insider attackers, that is, attackers who also have legitimate identities. Preventing outside attackers from entering the network can easily be achieved by employing the necessary access control and communicating through shared secret channels. Instead of forcing all users to act fully cooperatively (which has been shown not to be achievable in some situations [121] [504]), our goal is to stimulate cooperation among selfish nodes as much as possible.

In general, not all packet-forwarding decisions can be perfectly executed. For example, when a node has decided to help another node to forward a packet, the packet may still be dropped due to link breakage, or the transmission may fail due to channel errors. We refer to all factors that can cause imperfect decision execution as noise, including uncertainty factors such as channel errors, link breakages, and congestion. In this chapter we use p_e to denote the packet-dropping probability due to noise.
We also assume that some underlying monitoring schemes, such as those presented in [490] [483], are in place, whereby the source can learn whether its packets have been successfully delivered through end-to-end acknowledgment and can detect who has dropped packets. In order to formally analyze cooperation and security in such networks, we model the interactions among nodes as a secure-routing and packet-forwarding game.
• Players. The finite set of users in the network, denoted by N.
• Types. Each player i ∈ N has a type θ_i ∈ Θ, where Θ = {selfish, malicious}, and players have no prior knowledge of the others' types. Let N_s denote the set of selfish players and N_m = N − N_s denote the set of (insider) attackers.
• Strategy space. Routing and packet forwarding are jointly considered, and each packet-forwarding transaction is divided into three stages.
  1. Route-participation stage. Each player, after receiving a request asking it to be on a certain route, can either accept or refuse this request.
  2. Route-selection stage. Each player who has a packet to send can, after discovering a valid route, either use or not use this route to send the packet.
  3. Packet-forwarding stage. Each relay, once it has received a packet with a request to forward it, can either forward or drop this packet.
• Cost. For any player i ∈ N, transmitting a packet, either for itself or for the others, incurs cost c_i.
• Gain. Each selfish player i ∈ N_s, if a packet originating from it can be successfully delivered to its destination, achieves a gain g_i, where g_i ≥ c_i.
• Utility. For each player i ∈ N, T_i(t) denotes the number of packets that player i needs to send by time t, S_i(t) denotes the number of packets that have been successfully delivered to their destinations by time t with player i being the source, F_i(j,t) denotes the number of packets that i has forwarded for player j by time t, and F_i(t) = \sum_{j \in N} F_i(j,t). Let W_i(j,t) denote the total number of useless packet transmissions that player i has caused to player j by time t owing to i dropping the packets forwarded by j. Let t_f be the lifetime of the network. Then we model the players' utilities as follows.
  1. For any selfish player i ∈ N_s, its objective is to maximize U_i^s(t_f) with

     U_i^s(t_f) = \frac{S_i(t_f) g_i - F_i(t_f) c_i}{T_i(t_f)}.   (19.11)

  2. For any malicious player j ∈ N_m, its objective is to maximize U_j^m(t_f) with

     U_j^m(t_f) = \frac{1}{t_f} \left( \sum_{i \in N_s} [W_j(i,t_f) + F_i(j,t_f)] c_i - F_j(t_f) c_j \right).   (19.12)
If the game is played for an infinite duration, then their utilities become \lim_{t_f \to \infty} U_i^s(t_f) and \lim_{t_f \to \infty} U_j^m(t_f), respectively.

On the right-hand side of (19.11), the numerator denotes the net profit (i.e., total gain minus total cost) that the selfish node i has obtained, and the denominator denotes the total number of packets that i needs to send. This utility function therefore represents the average net profit that i can obtain per packet. We can see that maximizing (19.11) is equivalent to maximizing the total number of successful deliveries subject to a constraint on the total available cost. If c_i = 0, this is equivalent to maximizing the throughput.

The numerator on the right-hand side of (19.12) represents the net damage caused to the other nodes by j. Since in general this value may increase monotonically, we normalize it by the network lifetime t_f. This utility function thus represents the average net damage that j causes to the other nodes per unit time. From (19.12) we can see that in this game the attackers' goal is to waste the selfish nodes' cost (or energy) as much as possible. The attackers are allowed to collude to maximize their performance. Other possible alternatives, such as minimizing the others' payoff, will be discussed in Section 19.6.
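The two utility definitions can be made concrete with a short sketch. The function and variable names below simply mirror the notation in (19.11) and (19.12) and are otherwise illustrative.

```python
# Sketch of the utility functions (19.11) and (19.12); counter names
# mirror the notation in the text and are otherwise hypothetical.

def selfish_utility(S_i, F_i, T_i, g_i, c_i):
    """Average net profit per packet for a selfish node i, eq. (19.11):
    (deliveries * gain - packets forwarded * cost) / packets to send."""
    return (S_i * g_i - F_i * c_i) / T_i

def malicious_utility(W_plus_F, F_j, c, c_j, t_f):
    """Average net damage per unit time for an attacker j, eq. (19.12).
    W_plus_F[i] = W_j(i, t_f) + F_i(j, t_f) for each selfish node i;
    c[i] is node i's per-packet cost."""
    damage = sum(W_plus_F[i] * c[i] for i in W_plus_F)
    return (damage - F_j * c_j) / t_f

# Example: node i sent 100 packets, 80 were delivered, and it forwarded
# 120 packets for others, with gain 1.0 and cost 0.1 per packet.
print(selfish_utility(S_i=80, F_i=120, T_i=100, g_i=1.0, c_i=0.1))
# (80*1.0 - 120*0.1)/100 = 0.68
```

The example confirms the intuition in the text: a selfish node's utility grows with successful deliveries and shrinks with the forwarding burden it carries for others.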
19.4 Attack-resistant and cheat-proof cooperation-stimulation strategies

From the system description in Section 19.3 we can see that the multi-node scenario is much more complicated than the two-node scenario, and that directly applying the two-player cooperation strategies to multi-node scenarios might not work. In this section we first explore how to stimulate cooperation in autonomous mobile ad hoc networks in noisy and hostile environments, and then devise attack-resistant and cheat-proof cooperation-stimulation strategies.
First, in autonomous mobile ad hoc networks, the repeated-game model is no longer applicable. For example, a source may request different nodes to forward packets at different times and may act as a relay for different sources. Meanwhile, the rates at which each node sends requests to other nodes are usually variable, owing either to its inherently variable traffic-generation rate or to mobility. A direct consequence of such a non-repeated model is that favors cannot be granted simultaneously. In [98], Dawkins demonstrated that reciprocal altruism is beneficial for every ecological system when favors are granted simultaneously. However, when favors cannot be granted simultaneously, altruism might not guarantee satisfactory future payback, especially when the future is unpredictable. This makes the stimulation of cooperation in autonomous mobile ad hoc networks an extremely challenging task.

Second, in wireless networks, noise is inevitable and can cause severe trouble. Under a two-player cheat-proof packet-forwarding strategy, if some packets are dropped due to noise, the game will be terminated immediately and the performance will be degraded drastically. The same happens in most existing cooperation-enforcement schemes, such as those in [396] [121]; in these schemes, noise can easily result in the collapse of the whole network, with all nodes eventually acting non-cooperatively. Distinguishing the misbehavior caused by noise from that caused by malicious intention is a challenging task.

Third, since autonomous mobile ad hoc networks are usually deployed in adversarial environments, some nodes may even be malicious.
If there existed only selfish nodes, stimulating cooperation would be much easier, according to the following logic: misbehavior results in a decrease of the service quality experienced by some other nodes, which may consequently decrease the quality of service provided by them; this quality degradation is then propagated back to the misbehaving nodes. Therefore selfish nodes that wish to enjoy a high quality of service have no incentive to intentionally behave maliciously. However, this is not the case when some nodes are malicious: since the attackers' goal is to decrease the network performance, such quality degradation is exactly what they want to see. This makes the stimulation of cooperation in hostile environments extremely challenging. Unfortunately, malicious behavior has been heavily overlooked in the design of cooperation-stimulation strategies.

In the rest of this section we study how to handle these challenges. We partition the secure-routing and packet-forwarding game into a set of subgames, such that each subgame is played by a pair of nodes. To effectively stimulate cooperation among selfish nodes in noisy and hostile environments, a credit mechanism is introduced to alleviate the effect of the non-simultaneous return of favors, and a statistical attacker-detection mechanism is introduced to distinguish malicious behavior from misbehavior caused by noise.

We first introduce the credit mechanism. For any two nodes i, j ∈ N, we define D_i(j,t) as follows:

D_i(j,t) = F_i(j,t) - F_j(i,t).   (19.13)
Now consider the following strategy: each node i limits the number of packets that it forwards for any other node j in such a way that the total number of packets that i has forwarded for j by any time t is no more than D_i^max(j,t), that is,

D_i(j,t) ≤ D_i^max(j,t).   (19.14)

Here D_i^max(j,t) is a threshold set by i for two purposes: (1) to stimulate cooperation between i and j, and (2) to limit the possible damage that j can cause to i. By letting D_i^max(j,t) be positive, i agrees to forward some extra packets for j without getting instant payback. Meanwhile, unlike acting fully cooperatively, the number of extra packets that i will forward for j is bounded, to limit the possible damage when j plays non-cooperatively. This is analogous to a credit-card system, in which D_i^max(j,t) can be regarded as the credit line that i sets for j at time t. Just as a credit-card company adjusts credit lines, D_i^max(j,t) can also be adjusted by i over time. We refer to D_i^max(j,t) as the credit line set by i for j.

It is easy to see that an optimal setting of credit lines is crucial for effective stimulation of cooperation in noisy and hostile environments. Next we illustrate how to set good credit lines. If the request rates between i and j are constant, then setting the credit line to 1 is optimal in the sense that no request will be refused when the two nodes have the same request rate. However, owing to mobility and the nodes' inherently variable traffic-generation rates, the request rates between i and j are usually variable. In this case, if the credit lines are set too small, some requests will be refused even when the average request rates between them are equal. Let f_i(j,t) denote the number of times that i needs j to help forward packets by time t. If we set the credit lines as

D_i^max(j) = \max_t \{1, f_i(j,t) - f_j(i,t)\},
D_j^max(i) = \max_t \{1, f_j(i,t) - f_i(j,t)\},   (19.15)

then, except for the first several requests, no requests will be refused when the average request rates between them are equal. If the average request rates between them are not equal then, assuming that \lim_{t \to \infty} f_i(j,t)/f_j(i,t) > 1, no matter how large D_i^max(j) is, a certain portion of i's requests will have to be refused. This makes sense, since j has no incentive to forward more packets for i. It also suggests that arbitrarily increasing credit lines cannot always increase the number of requests accepted.

It is worth pointing out that (19.15) requires f_i(j,t) and f_j(i,t) to be known by i and j. However, such prior knowledge is usually not available, since each node will not know the others' request rates a priori. A simple solution is to set the credit lines to reasonably large positive constants, as in our simulations.

It has been shown in [121] that topology also plays a critical role in enforcing cooperation in fixed ad hoc networks, and that in most situations cooperation cannot be enforced. For example, a node in a bad location may never be able to get help from other nodes, because no one needs it to forward packets. In this chapter we focus on mobile ad hoc networks. In such networks, a node in a bad location at a certain time may move to a better location later, or vice versa. This suggests that, when
a node receives a packet-forwarding request from another node, it should not refuse the request simply because the requester cannot help it currently, since the requester may be able to help later.

Next we introduce a statistical mechanism for the detection of packet-dropping attacks, in order to distinguish packet dropping caused by malicious behavior from that caused by noise. To simplify our analysis, we first model packet dropping due to noise as follows: for any player i, when it has decided to forward a packet for any other player j, with probability p_e this packet may still be dropped due to noise. That is, packet dropping caused by noise is modeled as a Bernoulli random process; p_e can be either estimated by nodes online or trained offline. Let R_i(j,t) denote the number of packets that i has requested j to forward and that j has agreed to forward by time t. According to the central limit theorem, for any x ∈ R+ we have

\lim_{R_i(j,t) \to \infty} \text{Prob}\left( \frac{F_j(i,t) - R_i(j,t)(1-p_e)}{\sqrt{R_i(j,t)\, p_e (1-p_e)}} \ge -x \right) = \Phi(x),   (19.16)

where

\Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-t^2/2}\, dt.   (19.17)

That is, when R_i(j,t) is large, the sufficient statistic F_j(i,t) - R_i(j,t)(1-p_e) can be approximately modeled as a Gaussian random variable with mean 0 and variance R_i(j,t) p_e (1-p_e). Let isBad_i(j) denote i's belief about j's type, where isBad_i(j) = 1 indicates that i believes j to be malicious, whereas isBad_i(j) = 0 indicates that i believes j to be good. Then the following hypothesis-testing rule can be used by i to judge whether j has maliciously dropped its packets, with 1 - \Phi(x) being the maximum allowable false-positive probability:

isBad_i(j) = 1 if F_j(i,t) - R_i(j,t)(1-p_e) < -x \sqrt{R_i(j,t) p_e (1-p_e)},
isBad_i(j) = 0 if F_j(i,t) - R_i(j,t)(1-p_e) ≥ -x \sqrt{R_i(j,t) p_e (1-p_e)}.   (19.18)

By combining the above results, we arrive at the following cooperation-stimulation strategy.
Multi-node attack-resistant and cheat-proof cooperation strategy

In the secure-routing and packet-forwarding game, any selfish node i ∈ N_s initially sets isBad_i(j) = 0 for all j ∈ N. Then, in the different stages, the following strategy is used by i.
(i) In the route-participation stage, if i has been requested by j to be on a certain route, i accepts the request if j has not been marked as malicious by i and (19.14) holds; otherwise, i refuses.
(ii) In the route-selection stage, a source i uses a valid route with n hops to send packets only if (a) no intermediate node on this route has been marked as malicious, (b) the expected gain is larger than the expected cost, that is, (1 - p_e)^n g_i > n c_i, and (c) the route has the minimum number of hops among all valid routes with no nodes marked as malicious by i.
(iii) In the packet-forwarding stage, i forwards a packet for j if i has agreed to do so and j has not been detected as malicious by i; otherwise, i drops the packet.
(iv) Let 1 - \Phi(x) be the maximum allowable false-positive probability from i's point of view. Then, as long as R_i(j,t) is large for a node j ∈ N (e.g., larger than 200), the detection rule (19.18) is applied by i after each packet-forwarding transaction initiated by it.
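The detection step (iv) can be sketched as follows, under the Bernoulli noise model of this section. The threshold x = 3 and the 200-interaction minimum are illustrative choices, not prescribed values.

```python
# A possible implementation of the hypothesis test (19.18): node i marks
# node j as malicious if j has forwarded significantly fewer packets than
# the noise model alone would explain. The threshold x controls the
# false-positive probability 1 - Phi(x).
import math

def is_bad(F_ji, R_ij, p_e, x=3.0, min_samples=200):
    """F_ji: packets j actually forwarded for i; R_ij: packets j agreed
    to forward for i; p_e: packet-dropping probability due to noise."""
    if R_ij < min_samples:          # too few interactions for the CLT to apply
        return False
    expected = R_ij * (1.0 - p_e)   # mean number of forwards under noise only
    sigma = math.sqrt(R_ij * p_e * (1.0 - p_e))
    return F_ji - expected < -x * sigma

# With p_e = 0.1 and 1000 agreed packets, noise alone leaves about 900
# forwards (sigma ~ 9.5); forwarding only 850 falls below the x = 3 bound.
print(is_bad(850, 1000, 0.1))  # True
print(is_bad(895, 1000, 0.1))  # False
```

Note that a node with too few recorded interactions is never flagged, which matches the text's requirement that the test be applied only once R_i(j,t) is large.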
19.5 Strategy analysis under no attacks

This section analyzes the optimality of the strategies when no attackers exist. We first consider an infinite-lifetime situation with T_i(t) → ∞ as t → ∞; the finite-lifetime situation will be discussed later. We assume that the credit lines are set in such a way that, for any node i,

\lim_{t \to \infty} \frac{D_i^max(j,t)}{T_i(t)} = 0,   (19.19)
and that, for any pair of nodes i and j with \lim_{t \to \infty} f_i(j,t)/f_j(i,t) ≤ 1, at most a finite number of i's requests will be refused by j because (19.14) does not hold. We also assume that, owing to mobility, each pair of nodes in the network can meet an infinite number of times as t_f → ∞.

Lemma 19.5.1. For any selfish node i ∈ N in the secure-routing and packet-forwarding game with no attackers, once i has received a route-participation request from any other node j ∈ N, if (19.14) holds and the multi-node attack-resistant and cheat-proof cooperation strategy is used by player j, then accepting the request is always an optimal decision from player i's point of view.

Proof. From player i's point of view, refusing the request may leave it without a large enough balance to request player j to forward packets for it in the future (i.e., D_j(i,t) > D_j^max(i,t)), while agreeing to forward the packet introduces no performance loss, by assumption (19.19). Therefore, accepting the request is the optimal decision.

Lemma 19.5.2. In the secure-routing and packet-forwarding game in which some packet-forwarding decisions might not be perfectly executed, from the point of view of any player j ∈ N, if the multi-node attack-resistant and cheat-proof cooperation strategy is followed by all the other nodes, then, during the packet-forwarding stage, intentionally dropping a packet that it has agreed to forward cannot bring it any gain.

Proof. When a player j ∈ N intentionally drops a packet that it has agreed to forward for any other player i ∈ N, it achieves no gain except for saving the cost of transmitting this packet. However, since player i follows the multi-node
attack-resistant and cheat-proof cooperation strategy, that is, it always tries to maintain \lim_{t \to \infty} F_i(j,t)/F_j(i,t) ≥ 1, as a consequence of dropping this packet player j also loses a chance to request player i to forward a packet for it. To regain that possibility, player j has to forward another packet for player i. Therefore, intentionally dropping a packet cannot bring any gain to player j.

Theorem 19.5.1. In the secure-routing and packet-forwarding game with no attackers, the strategy profile in which all players follow the multi-node attack-resistant and cheat-proof cooperation strategy forms a subgame-perfect equilibrium, is cheat-proof, and achieves absolute fairness in cost if c_i = c for all i ∈ N. If 0 < \lim_{t \to \infty} T_i(t)/T_j(t) < ∞ for any i, j ∈ N, this strategy profile is also strongly Pareto optimal.

Proof. We first prove that this strategy profile forms a subgame-perfect equilibrium. Since this multi-player game can be decomposed into many two-player subgames, we need only consider the two-player subgame played by players i and j. Suppose that player j does not follow the above strategy, that is, it refuses to forward packets for player i when it should, or intentionally drops packets that it has agreed to forward for player i, or forwards more packets than it should for player i, or uses non-minimum-cost routes to send packets. First, from Lemmas 19.5.1 and 19.5.2 we know that refusing to forward packets for other players when it should, or intentionally dropping packets that it has agreed to forward, introduces no performance gain. Second, forwarding many more packets (i.e., more than D_i^max(j,t)) than player j has forwarded for it will not increase its own payoff either, according to the assumption on credit-line selection. Third, using a non-minimum-cost route to send a packet will decrease its expected gain.
From the above analysis we can conclude that the above strategy profile (the multi-node attack-resistant and cheat-proof cooperation strategy) forms a Nash equilibrium. To check that the profile is subgame perfect, note that in every subgame off the equilibrium path the strategies are either to play non-cooperatively forever, if player j has dropped a certain number of packets that it agreed to forward for player i, which is a Nash equilibrium, or to continue to play the multi-node cheat-proof packet-forwarding strategy, which is also a Nash equilibrium.

Since no private information about g_i and c_i has been involved, we can conclude from the analysis presented in Section 19.2 that the cooperation-stimulation strategy is cheat-proof.

Since F_i(j,t) - F_j(i,t) < D_i^max(j,t) for any players i, j ∈ N and \lim_{t \to \infty} D_i^max(j,t)/T_i(t) = 0, and since we have assumed that c_i = c for all i ∈ N, it always holds that

\lim_{t \to \infty} \frac{\sum_{j \in N, j \ne i} F_i(j,t)}{\sum_{j \in N, j \ne i} F_j(i,t)} = 1.   (19.20)

That is, this strategy achieves absolute fairness in cost.

Now we show that the strategy profile is strongly Pareto optimal. From the payoff function (19.11) we can see that, to increase its own payoff, a player i can try either to increase S_i(t) or to decrease F_i(t). However, according to the above strategy,
minimum-cost routes have been used, and therefore F_i(t) cannot be further decreased without affecting the others' payoff. The only way that player i can increase its payoff is to increase \lim_{t \to \infty} S_i(t)/T_i(t), which means that some other players will have to forward more packets for player i. Since all of the T_i(t) are of the same order, increasing player i's payoff will definitely decrease other players' payoff. Therefore the above strategy profile is strongly Pareto optimal.

In the proof of Theorem 19.5.1 we have assumed that (1) D_i^max(j,t) is large enough that forwarding D_i^max(j,t) more packets than player j has forwarded for it will not increase its own payoff, and (2) D_i^max(j,t) is also small enough that \lim_{t \to \infty} D_i^max(j,t)/T_i(t) = 0. If D_i^max(j,t) cannot satisfy these two requirements, the strategy profile is not necessarily a Nash equilibrium. Finding a D_i^max(j,t) value to satisfy the first requirement is easy, but satisfying both requirements may be difficult or even impossible when the nodes' request rates and mobility patterns are not known a priori. However, our simulation results show that in many situations even a non-optimal D_i^max(j,t), such as a reasonably large constant, can still effectively stimulate cooperation.

From the above analysis we can see that, as long as g_i is larger than a certain value, such as (1 - p_e)^{L_max} g_i > L_max c_i, where L_max is a system parameter indicating the maximum number of hops that a route is allowed to have, varying g_i will not change the strategy design.

Until now we have mainly focused on the situation in which the game is played for an infinite duration. In most situations, a node will remain in the network only for a finite duration. Then, for each player i, if D_i^max(j) is too large, it may end up having helped its opponents much more than they have helped it, whereas if D_i^max(j) is too small, it may lack enough nodes willing to forward packets for it.
How to select a good D_i^max(j) remains a challenge. In Section 19.8 we study the tradeoff between the value of D_i^max(j) and the performance through simulations, which show that under the given simulation scenarios a relatively small D_i^max(j) value is good enough to achieve near-optimal performance (compared with setting D_i^max(j) to ∞) and good fairness (compared with absolute fairness in cost). It is also worth pointing out that the optimality of the strategies cannot be guaranteed in finite-duration scenarios.
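The credit-line bookkeeping of (19.13) and (19.14), with a constant credit line as suggested for the case where request rates are not known a priori, can be sketched as follows; all class and method names are illustrative.

```python
# Minimal sketch of the credit-line mechanism: node i tracks, per peer j,
# how many packets it has forwarded for j and how many j has forwarded
# for it, and helps j only while the balance stays within the credit line.

class CreditLedger:
    def __init__(self, credit_line=10):
        self.credit_line = credit_line   # D_i^max(j), a constant here
        self.forwarded_for = {}          # F_i(j,t): what i did for j
        self.forwarded_by = {}           # F_j(i,t): what j did for i

    def balance(self, j):
        """D_i(j,t) = F_i(j,t) - F_j(i,t), eq. (19.13)."""
        return self.forwarded_for.get(j, 0) - self.forwarded_by.get(j, 0)

    def can_help(self, j):
        """Accept j's request only while the post-forwarding balance
        would still satisfy D_i(j,t) <= D_i^max(j,t), eq. (19.14)."""
        return self.balance(j) < self.credit_line

    def record_help_given(self, j):
        self.forwarded_for[j] = self.forwarded_for.get(j, 0) + 1

    def record_help_received(self, j):
        self.forwarded_by[j] = self.forwarded_by.get(j, 0) + 1

ledger = CreditLedger(credit_line=3)
for _ in range(3):
    if ledger.can_help('j'):
        ledger.record_help_given('j')
print(ledger.can_help('j'))  # False: j has exhausted its credit line
ledger.record_help_received('j')
print(ledger.can_help('j'))  # True again: the balance dropped to 2
```

This mirrors the credit-card analogy in the text: a positive balance is extra help extended without instant payback, and the credit line caps the damage a non-cooperative peer can cause.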
19.6 Strategy analysis under attacks

In this chapter the following two widely used attack models are considered: packet-dropping and traffic-injection attacks. To simplify the illustration, we assume that c_i = c and g_i = g for all i ∈ N.

We first study packet-dropping attacks. By dropping other nodes' packets, attackers can decrease the network throughput and waste other nodes' limited resources. Under the attacker-detection strategy, from an attacker's point of view, dropping all packets is not a good strategy, since this can easily be detected. Intuitively, in order to maximize the damage, attackers should selectively drop only some portion of packets so as to avoid being detected. According to the multi-node attack-resistant and cheat-proof cooperation strategy, the
maximum number of packets that an attacker can drop without being detected is upper-bounded by n p_e + x \sqrt{n p_e (1-p_e)}, where n is the number of packets that it has agreed to forward. That is, it has to forward at least n(1-p_e) - x \sqrt{n p_e (1-p_e)} packets. However, among the dropped packets, n p_e packets would have been dropped due to noise even with no attackers present. Thus, the extra damage is upper-bounded by x \sqrt{n p_e (1-p_e)} c, while the cost incurred is n(1-p_e) c - x \sqrt{n p_e (1-p_e)} c. Since for any constant value x ∈ R+ we have

\lim_{n \to \infty} \frac{x \sqrt{n p_e (1-p_e)}}{n (1-p_e)} = 0,   (19.21)

selectively dropping packets can bring no gain to the attackers. In other words, if the game is played for an infinite duration, packet-dropping attacks cannot cause damage to the selfish nodes.

Now we study traffic-injection attacks. By injecting an overwhelming number of packets into the network, attackers can consume other nodes' resources once those nodes help the attackers to forward packets. Since, for each selfish node i ∈ N_s, we have D_i(j,t) ≤ D_i^max(j,t), the maximum number of packets that an attacker j can request i to forward without paying it back is upper-bounded by D_i^max(j,t). Therefore, the damage that can be caused by a traffic-injection attack is bounded and limited, and becomes negligible when \lim_{t \to \infty} D_i^max(j,t)/T_i(t) = 0.

In summary, when the multi-node attack-resistant and cheat-proof strategy is used by all selfish nodes, attackers can cause only limited damage to the network. Further, the relative damage goes to 0 when the game is played for an infinite duration. Since R_i(j,t) and F_j(i,t) are of the same order, for any constant value x ∈ R+ we always have

\lim_{R_i(j,t) \to \infty} \frac{x \sqrt{R_i(j,t) p_e (1-p_e)}}{F_j(i,t)} = 0.   (19.22)

Therefore, except for some false positives, selfish players' overall payoff will not be affected under attacks.
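A quick numerical illustration of (19.21), with an illustrative detection threshold x = 3: the undetectable extra drops grow only as sqrt(n), so their share of the traffic an attacker must forward vanishes as n grows.

```python
# Why selective dropping gains nothing asymptotically: the extra
# (undetected) drops grow like sqrt(n) while the forwarding cost grows
# like n, so the ratio in (19.21) vanishes.
import math

def extra_drop_budget(n, p_e, x=3.0):
    """Maximum drops beyond the natural n*p_e noise drops that stay
    below the detection threshold of (19.18)."""
    return x * math.sqrt(n * p_e * (1.0 - p_e))

p_e = 0.1
for n in (100, 10_000, 1_000_000):
    ratio = extra_drop_budget(n, p_e) / (n * (1.0 - p_e))
    print(n, ratio)
# ratios: 0.1, 0.01, 0.001 -- a factor-of-10 drop per 100x more traffic
```

The shrinking ratio is exactly the limit in (19.21): the damage an attacker can sneak past the detector becomes negligible relative to the forwarding cost it must pay to stay undetected.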
Although false positives may cause a node to be unable to get help from some other nodes, this is not a big issue, since the false-positive probability can be made to approach 0 by using a large constant x without decreasing the overall payoff. From the above analysis we can also see that, no matter what objectives the attackers have and what attacking strategy they use, as long as the selfish nodes employ the multi-node attack-resistant and cheat-proof cooperation strategy, the selfish nodes' performance can be guaranteed.

From the above analysis we can conclude that, in the infinite-duration case, an attacker j's overall payoff is upper-bounded by

U_j^m ≤ \lim_{t \to \infty} \frac{\sum_{i \in N_s} D_i^max(j,t)}{t}\, c,   (19.23)

provided that all selfish nodes follow the multi-node attack-resistant and cheat-proof cooperation strategy. This upper bound can be achieved by the following attacking strategy.
Optimal attacking strategy

In the secure-routing and packet-forwarding game, any attacker j ∈ N_m should always refuse route-participation requests in the route-participation stage, should always pick a route including no attackers in the route-selection stage, and should not forward packets in the packet-forwarding stage.

Following arguments similar to the proof of Theorem 19.5.1, we can also show that, in the infinite-duration secure-routing and packet-forwarding game, the strategy profile in which all selfish players follow the multi-node attack-resistant and cheat-proof cooperation strategy and all attackers follow the above optimal attacking strategy forms a subgame-perfect equilibrium, is cheat-proof and strongly Pareto optimal, and achieves absolute fairness in cost under some mild conditions.

When the game is played only for a finite duration, the above attacking strategy is no longer optimal. The attackers can then try to drop some nodes' packets without being detected, since the statistical detection of packet-dropping attackers is not initiated until enough interactions have occurred (i.e., until R_i(j,t) in (19.18) is large enough), in order to avoid a high false-positive probability. In this case, the selfish nodes' performance will be slightly degraded. However, as long as the game is played for a reasonably long time, which is the focus of this chapter, the relative damage is still insignificant.
19.7 Discussion

Compared with the existing literature, such as [396] [427] [57] [291] [5] [121], we have addressed the stimulation of cooperation under more realistic and challenging scenarios: a noisy environment, the existence of (insider) attackers, variable traffic rates, etc. In such scenarios, instead of forcing all nodes to act fully cooperatively, our goal is to stimulate cooperation among nodes as much as possible.

One major difference between this chapter and most existing reputation-based schemes is that here pair-wise relationships are maintained by the nodes. That is, each selfish node keeps track of its interactions with all the other nodes it encounters. The drawback is that this requires per-node monitoring and results in extra storage complexity. The advantage, however, lies in the fact that it can effectively stimulate cooperation in noisy and hostile environments. In fact, each node i needs to maintain at any time only the records R_i(j), isBad_i(j), F_i(j), and F_j(i) for each node j with which i has interacted, so the maximum storage complexity is upper-bounded by 4|N|. As long as |N| is not too large, for most mobile devices, such as notebooks and PDAs, the storage requirement is insignificant.

In most of the existing literature, by contrast, each node makes its decision solely on the basis of the quality of service it has experienced, such as throughput. Although the overhead is much lower, since only end-to-end acknowledgment is required and each node need only keep its own past state, these approaches cannot effectively stimulate cooperation at all in noisy and hostile environments. As explained in Section 19.4, their logic is that no node will behave maliciously, since misbehavior will be propagated
back later and the quality of service experienced by the misbehaving nodes will also be decreased. However, such logic does not hold in noisy and hostile environments. First, attackers are happy to see such performance degradation, so they will behave maliciously whenever possible. Second, even noise alone can cause such misbehavior propagation and performance degradation, since noise can cause packet dropping. Meanwhile, without per-node monitoring, attackers can always behave maliciously and cause damage to the others without being detected.

In [396] [121], each node's cooperation decision at each step is based solely on the normalized throughput that it has experienced. If just the normalized throughput is used, a greedy user can set a low forwarding ratio but try to send a lot of packets. Then, unless the others also try to send a lot of packets, from the greedy node's point of view, even after a large portion of its packets have been dropped by other nodes, it can still enjoy a high throughput, even though the normalized throughput may be low. Meanwhile, as mentioned before, applying the same forwarding ratio to all nodes is not fair to those that have acted cooperatively. To resolve this problem, in this chapter, each node applies different packet-forwarding decisions to different nodes on the basis of its past interactions with them.

Besides reputation-based cooperation-stimulation schemes, pricing-based schemes have also been developed in the literature, for example in [26] [27] [493] [3] [504]. Compared with pricing-based schemes, the major drawback of reputation-based schemes is that some nodes might not get enough help to send out all their packets. The underlying reasons are that favors cannot be returned immediately and the future is not predictable.
In other words, when a node is requested by another node to forward packets, since it cannot get compensation immediately and cannot be sure whether the requester will return the favor later, it usually has no strong incentive to accept the request. Pricing-based schemes do not suffer from such problems, since a node gets an immediate monetary payback after providing services. However, pricing-based schemes require tamper-proof hardware or a central banking service to handle billing information, which is their major drawback. If this requirement can be satisfied efficiently and with low overhead, pricing-based schemes could be a better choice than reputation-based schemes. Meanwhile, it is worth pointing out that pricing-based schemes also suffer from noise and possible malicious behavior, and the statistical attacker-detection mechanism discussed here is applicable to pricing-based scenarios as well. In general, monitoring is necessary when stimulating cooperation among nodes. For example, in [283] a watchdog is used to detect whether nodes have dropped packets. In this chapter we assume that the underlying monitoring mechanism can provide accurate per-node monitoring. Although this can be a strong assumption in some scenarios, it greatly simplifies our analysis and at the same time provides thought-provoking insights. It is worth mentioning that in some situations perfect monitoring is either not available or too expensive. Meanwhile, in our current analysis the cost incurred by the underlying monitoring mechanism has not been included. The study of imperfect monitoring is beyond the scope of this chapter, but will be investigated in future work.
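To make the pair-wise bookkeeping discussed above concrete, the sketch below shows one possible way a node could hold the per-peer records Ri(j), isBadi(j), Fi(j), and Fj(i). The class layout, field names, and default values are our assumptions for illustration, not the chapter's implementation; only the 4|N| storage bound comes from the text.

```python
class PeerRecords:
    """Per-node bookkeeping sketch for the pair-wise reputation scheme.

    For every peer j that node i has interacted with, node i keeps four
    quantities (names follow the chapter's notation; the exact semantics
    are assumptions made here for illustration):
      R[j]      -- e.g. a balance/credit record with j        (R_i(j))
      is_bad[j] -- whether j has been flagged as an attacker  (isBad_i(j))
      F_to[j]   -- packets i has forwarded for j              (F_i(j))
      F_from[j] -- packets j has forwarded for i              (F_j(i))
    Four scalars per encountered peer, i.e. at most 4|N| records.
    """
    def __init__(self):
        self.R = {}
        self.is_bad = {}
        self.F_to = {}
        self.F_from = {}

    def touch(self, j):
        # Lazily create the four records on first interaction with peer j.
        for table, default in ((self.R, 0), (self.is_bad, False),
                               (self.F_to, 0), (self.F_from, 0)):
            table.setdefault(j, default)

    def storage_units(self):
        # Upper bound stated in the text: 4 records per known peer.
        return 4 * len(self.R)
```

Because records are created lazily, a node that has met only a few peers stores far fewer than 4|N| entries in practice.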
In our analysis we have assumed that the packet-dropping ratio pe is the same for all nodes at all times, which need not hold in general. If different nodes experience different values of pe, the nodes experiencing a lower pe may suffer high false-positive probabilities when implementing the attacker-detection mechanism. In this case, to decrease the false-positive probability, nodes must set the threshold to be large enough, which means using a larger x and pe in (19.18). Although the attackers may take advantage of this to cause more damage, as long as the gap between the packet-dropping ratios experienced by different nodes is not large, which is usually the case, the extra damage is still limited. It is also worth mentioning that the security of the strategy discussed here relies on existing secure protocols such as those in [496] [161] [181] [335] [379] [491] [182] [183] [172] [490] [483] [484]. In general, besides dropping packets and injecting traffic, attackers have a variety of other ways to attack the network, such as jamming, slander, etc. Rather than trying to address all of these attacks, our goal in this chapter is to provide insight into stimulating cooperation in noisy and hostile environments.
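As a hedged illustration of the threshold trade-off described above (the chapter's actual rule is Eq. (19.18), which is not reproduced here), one can compute, for a noise-only drop probability pe, the smallest drop count whose binomial tail probability stays below a target false-positive bound. The function name and interface are ours:

```python
from math import comb

def drop_threshold(n, p_e, alpha):
    """Illustrative threshold for packet-drop-based attacker detection.

    With n observed forwarding attempts and a per-packet drop probability
    p_e caused by noise alone, a cooperative node drops Binomial(n, p_e)
    packets.  We pick the smallest count k such that P[drops >= k] <= alpha,
    so a well-behaved node is falsely flagged with probability at most
    alpha.  A larger p_e forces a larger threshold, which an attacker can
    exploit by dropping just under it -- the trade-off noted in the text.
    """
    # Binomial pmf values P[X = x] for x = 0..n.
    pmf = [comb(n, x) * p_e**x * (1 - p_e)**(n - x) for x in range(n + 1)]
    tail = 0.0
    # Accumulate the upper tail from x = n downward; tail == P[X >= k].
    for k in range(n, -1, -1):
        tail += pmf[k]
        if tail > alpha:
            return k + 1  # smallest k with P[X >= k] <= alpha
    return 0
```

For example, with n = 100 attempts, the 2% link-breakage ratio seen later in the simulations, and a 0.1% false-positive bound, the threshold lands well above the mean drop count of 2.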
19.8 Simulation studies

We conducted a set of simulations to evaluate the performance of the strategies under various scenarios. In these simulations, 100 selfish nodes and various numbers of attackers are randomly deployed inside a rectangular area of 1000 m × 1000 m. Each node may either be static or move according to the random waypoint model [489]: a node starts at a random position, waits for a duration called the pause time, and then randomly chooses a new location and moves toward it with a velocity uniformly chosen between vmin and vmax. The physical layer assumes that two nodes can communicate directly with each other successfully only if they are within each other's transmission range, which is set to be 250 m. The MAC-layer protocol simulates the IEEE 802.11 DCF with a four-way handshaking mechanism [197]. The link bandwidth is 4 Mbps, and the data-packet size is 1024 bytes. DSR [209] is used as the underlying routing protocol. In each simulation, each node randomly picks another node as the destination to which to send packets according to a Poisson random process. The average packet inter-arrival time is 1 s for each selfish node, and 0.2 s for each attacker. When a packet is dropped, no retransmission is applied. We set gi = 1, ci = 0.1, and tf = 15 000 s.
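The random waypoint model used in these simulations can be sketched as follows. This is a simplified illustration with a fixed time step; the parameter names and discretization are our own choices, not the chapter's simulator:

```python
import random

def random_waypoint(area=(1000.0, 1000.0), v_min=10.0, v_max=30.0,
                    pause=100.0, t_end=1000.0, dt=1.0, seed=0):
    """Sketch of the random waypoint mobility model [489]: a node starts
    at a random position, pauses for `pause` seconds, then moves toward a
    randomly chosen waypoint at a speed drawn uniformly from
    [v_min, v_max], and repeats.  Yields one (t, x, y) sample every dt
    seconds until t_end."""
    rng = random.Random(seed)
    x, y = rng.uniform(0, area[0]), rng.uniform(0, area[1])
    t = 0.0
    while t < t_end:
        # Pause at the current location.
        for _ in range(int(pause / dt)):
            if t >= t_end:
                return
            yield t, x, y
            t += dt
        # Choose the next waypoint and speed, then move in a straight line.
        tx, ty = rng.uniform(0, area[0]), rng.uniform(0, area[1])
        v = rng.uniform(v_min, v_max)
        dist = ((tx - x) ** 2 + (ty - y) ** 2) ** 0.5
        steps = max(1, int(dist / (v * dt)))
        dx, dy = (tx - x) / steps, (ty - y) / steps
        for _ in range(steps):
            if t >= t_end:
                return
            x, y = x + dx, y + dy
            yield t, x, y
            t += dt
```

Sampling a trajectory and checking that it stays inside the deployment area gives a quick sanity check on the model.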
19.8.1 Simulation studies with different credit lines

We first study how different credit lines can affect stimulation of cooperation. In this set of simulations, there are only mobile selfish nodes, all of which follow the multi-node attack-resistant and cheat-proof cooperation strategy. Each mobile node follows the random-waypoint mobility pattern with vmin = 10 m/s, vmax = 30 m/s, and pause time 100 s. In each simulation, the credit line (CL) is fixed to be the same for all selfish
nodes.

[Figure 19.4 Selfish nodes' performance under the strategies discussed: (a) average payoff and normalized throughput of selfish nodes versus the credit line; (b) individual nodes' payoffs under CL = 10, CL = 60, and CL = 320.]

The simulation results are depicted in Figure 19.4. Figure 19.4(a) demonstrates the relationship between the CL and the average payoff and normalized throughput of selfish nodes. From these results we can see that, when the CL exceeds 80, the selfish nodes' average payoff no longer increases, though the throughput may still increase a little. This suggests that setting CL = 80 is almost optimal. Now we examine each individual node's payoff. Figure 19.4(b) shows the individual nodes' payoffs under three CL settings. First, we can see that for each node setting CL = 60 results in a much higher payoff than setting CL = 10. Second, although on average setting CL = 320 can result in a slightly higher payoff than setting CL = 60, it is also evident that a large portion of nodes (more than 20%) will suffer a lower payoff than when setting CL = 60. The reason is that these nodes have acted too generously in the sense that they have forwarded many more packets for the others than the others
Table 19.1. Types of network

Mobile network 1: All nodes follow the random-waypoint pattern with vmin = 10 m/s, vmax = 30 m/s, and pause time 100 s.
Mobile network 2: All nodes follow the random-waypoint pattern with vmin = 10α m/s, vmax = 30α m/s, and pause time 100 s, where α is a value randomly drawn in the range [0, 1].
Partially mobile network: The first 50 nodes follow the random-waypoint pattern with vmin = 10 m/s, vmax = 30 m/s, and pause time 100 s, while the second 50 nodes are static.
Static network: All nodes are static.
[Figure 19.5 Performance under different networks: individual nodes' payoffs in mobile network 1, mobile network 2, the partially mobile network, and the static network.]
have for them, which consequently decreases their payoff. This suggests that, when a node will stay in the network only for a finite duration, it might not be a good choice to set a very high CL. In the following simulations each selfish node will set CL = 60.
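The credit-line mechanism studied in these simulations can be sketched as a simple balance test. This is our own distillation of the rule implied by the chapter (the help a node extends may exceed the help it has received by at most CL, and flagged attackers get no help), not the full multi-node strategy:

```python
def should_forward(f_to_j, f_from_j, credit_line, j_is_bad=False):
    """Hedged sketch of the credit-line forwarding decision.

    Node i keeps forwarding for peer j as long as the number of packets
    it has forwarded for j (f_to_j) does not exceed the number j has
    forwarded for it (f_from_j) plus the credit line CL, and j has not
    been flagged as an attacker.  The chapter's complete strategy
    includes further conditions; this shows only the balance test.
    """
    if j_is_bad:
        return False
    return f_to_j - f_from_j < credit_line
```

With CL = 60, a node that has received no help at all stops being helped after 60 forwarded packets, which matches the intuition that a finite credit line bounds how much a free-rider or attacker can extract.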
19.8.2 Simulation studies under different networks

In this subsection we investigate the performance of the cooperation strategies in different ad hoc networks. We still consider only selfish nodes, with all nodes following the multi-node attack-resistant and cheat-proof cooperation strategy. However, instead of considering only mobile ad hoc networks, we also conducted simulations in static and partially mobile ad hoc networks. Specifically, the four types of ad hoc networks in Table 19.1 are considered. Figure 19.5 depicts each individual node's payoff under the above four network settings. First, we can see that most nodes have a very low payoff in the static ad hoc network. This is easy to understand: after these nodes have used up all of the credit line assigned to them by their neighbors, and since their neighbors might not need their help due
to topology dependence, these nodes can no longer request their neighbors to forward packets for them. Second, we can also see that some nodes in the static network still have very high payoff (almost 0.9). This is because their destinations are within their one-hop transmission range, so they do not need to rely on the others to forward packets. The effect of topology dependence has also been studied in [121]. How about when some nodes are mobile in the network? Now let us examine selfish nodes’ performance under partially mobile ad hoc networks. First, the 50 mobile nodes (the nodes with index between 1 and 50) have almost comparable payoff to that of nodes in mobile ad hoc networks, though several of them still have a slightly lower payoff. Second, most of the 50 static nodes (with index between 51 and 100) suffer from a fairly low payoff, though some of them have a high payoff because their destinations are within their transmission range. Third, in comparison with the static ad hoc network, even those 50 static nodes have a much higher payoff. These results suggest that mobility can play a very positive role in alleviating the effect of topology dependence. Now let us come back to mobile ad hoc networks. First, from Figure 19.5 we can see that in both of the mobile ad hoc networks (mobile networks 1 and 2) all nodes achieve a fairly high payoff, and all payoffs lie within a narrow range. These results suggest that the strategies can effectively stimulate cooperation among selfish nodes in mobile ad hoc networks. Second, nodes in mobile network 2 have a slightly higher payoff than that of nodes in mobile network 1. This is because mobile network 2 has a lower mobility rate than mobile network 1, which leads to a lower link-breakage ratio. It is worth pointing out that, according to the random waypoint model, each mobile node can move globally inside the specified area. Sometimes nodes’ movement may be restricted to a certain local area. 
Actually, the above partially mobile network can be regarded as a special case of such a network with restricted movement, since half of the nodes of this network are restricted to remaining at a certain point.
19.8.3 Simulation studies under attacks

Now we study how the cooperation-stimulation strategies can effectively handle attacks, specifically packet-dropping and traffic-injection attacks. In this set of simulations, selfish nodes follow the multi-node attack-resistant and cheat-proof cooperation strategy and attackers follow the optimal attacking strategy. All nodes are mobile, and each follows the random-waypoint mobility pattern with vmin = 10 m/s, vmax = 30 m/s, and pause time 100 s. For each selfish node, the maximum allowable false-positive probability is set to 0.1%, and the link-breakage ratio pe is estimated on the basis of the node's own experience, as the ratio between the total number of link breakages it has experienced as transmitter and the total number of transmissions it has attempted. Figure 19.6 presents the estimated values of pe under the current network configuration, which shows that all nodes have almost the same link-breakage ratio (here 2%).
[Figure 19.6 The estimated link-breakage ratio for each node.]
We first study how well the strategies scale with increasing numbers of attackers. Figure 19.7(a) shows the selfish nodes' average payoff for various numbers of attackers, and Figure 19.7(b) shows the total damage that the attackers have caused to selfish nodes, where the total damage is as defined in (19.12) without being divided by tf. First, from Figure 19.7(b) we can see that the total damage that the attackers can cause increases almost linearly with increasing number of attackers. This makes sense, since the greater the number of attackers, the more traffic they can inject and the more packets they can drop. Second, from Figure 19.7(a) we can see that the selfish nodes' average payoff decreases only very slightly with increasing number of attackers. Further, in the above simulation results there also exist some points where the selfish nodes' payoff may even increase slightly with increasing number of attackers. Such behavior can be better illustrated by Figure 19.7(c), where we have drawn the selfish nodes' average payoff over time both with no attackers and with 20 attackers. From Figure 19.7(c) we can see that, for the first 4000 s, the selfish nodes gain a higher payoff when there are no attackers than they do with 20 attackers, whereas later the selfish nodes' payoff under attack may even outperform the payoff under no attacks. This phenomenon can be explained as follows: initially the damage is caused mainly by traffic-injection attacks; then, after the attackers have used up all the credit lines assigned by the selfish nodes, the damage is contributed mostly by packet-dropping attacks. However, since the link-breakage ratio is low, the attackers can drop only very few packets without being detected.
The reasons why the selfish nodes’ payoff under attacks may even outperform the payoff under no attacks arise from (1) the randomness of each simulation; and (2) the fact that, by participating in packet forwarding, the attackers can also decrease the average number of hops per selected route, which consequently increases the selfish nodes’ payoff. In the final set of simulations we demonstrate why setting a very high credit line might not be a good choice. In this set of simulations, we fix the number of attackers
[Figure 19.7 Performance under attacks: (a) selfish nodes' payoff versus the number of attackers; (b) total damage to selfish nodes versus the number of attackers; (c) selfish nodes' payoff over time under no attacks and under 20 attacks.]

[Figure 19.8 The effect of credit lines under attacks: (a) selfish nodes' payoff versus the credit line; (b) total damage to selfish nodes versus the credit line.]
to be 20, but vary the credit line in each simulation, ranging from 20 to 100. The simulation results are illustrated in Figure 19.8. From Figure 19.8(b) we can see that, with increasing credit line, the damage that can be caused by attackers will also increase linearly because attackers can inject more packets. From Figure 19.8(a) we can see that
increasing the credit line may even decrease the selfish nodes' performance. For example, selfish nodes with CL = 100 have an even lower payoff than do selfish nodes with CL = 80. This can easily be understood by examining the results presented in Figure 19.8(b): setting the credit line to 100 lets the selfish nodes suffer more damage than does setting it to 80.
19.9 Summary

In this chapter we have formally investigated secure cooperation stimulation in autonomous mobile ad hoc networks under a game-theoretic framework. Besides selfish behavior, possible attacks have also been studied, and a strategy for attack-resistant stimulation of cooperation that can work well in noisy and hostile environments has been devised. First, a simple yet illuminating two-player packet-forwarding game is studied. To find good cooperation strategies, equilibrium refinements have been performed on Nash-equilibrium solutions under various optimality criteria, including subgame perfection, Pareto optimality, fairness, and cheat-proofing, and a unique Nash-equilibrium solution is finally derived. This states that in the two-node packet-forwarding game a node should not help its opponent more than its opponent has helped it. The results are then extended to handle multi-node scenarios in noisy and hostile environments, where the dynamic interactions between nodes are modeled as secure-routing and packet-forwarding games. By taking into consideration the difference between the two-node case and the multi-node case, an attack-resistant and cheat-proof cooperation-stimulation strategy has been devised for autonomous mobile ad hoc networks. The analysis has demonstrated the effectiveness of the strategy and shown that it is optimal under certain conditions. The analysis has also shown that the damage that can be caused by attackers is bounded and limited when the strategies presented here are used by selfish nodes. Simulation results have also illustrated that the strategies can effectively stimulate cooperation among selfish nodes in noisy and hostile environments. Interested readers can refer to [487].
20 Belief evaluation for cooperation enforcement
In autonomous mobile ad hoc networks (MANETs), where each user is its own authority, the issue of cooperation enforcement must be solved first in order to enable networking functionalities such as packet forwarding, which becomes very difficult under noise and imperfect monitoring. In this chapter, we consider cooperation enforcement in autonomous MANETs under noise and imperfect observation and study basic packet forwarding among users using repeated-game models with imperfect information. A belief-evaluation framework is presented to obtain cooperation-enforcement packet-forwarding strategies that are based solely on each node's private information, including its own past actions and imperfect observation of other nodes' information. More importantly, we not only show that the strategy with a belief system can maintain the cooperation paradigm but also establish its performance bounds. The simulation results illustrate that the belief-evaluation framework can enforce cooperation with only a small performance degradation compared with the unconditionally cooperative outcomes when noise and imperfect observation exist.
20.1 Introduction

One major drawback of the existing game-theoretic analyses of cooperation in autonomous ad hoc networks is that all of them have assumed perfect observation, and most of them have not considered the effect of noise on the strategy design. However, in autonomous ad hoc networks, even when a node has decided to forward a packet for another node, this packet may still be dropped due to link breakage or transmission errors. Further, since central monitoring is in general not available in autonomous ad hoc networks, perfect public observation is either impossible or too expensive. Therefore, the question of how to stimulate cooperation and analyze the efficiency of possible strategies in scenarios with noise and imperfect observation is still unanswered for autonomous ad hoc networks. In this chapter we study cooperation enforcement for autonomous MANETs under noise and imperfect observation and focus on the most basic networking functionality, namely packet forwarding. Considering that the nodes need to infer the future actions of other nodes on the basis of their own imperfect observations, in order to optimally quantify the inference process with noise and imperfect observation, a belief-evaluation framework to stimulate packet forwarding between nodes and maximize
the expected payoff of each selfish node is assessed using a repeated-game theoretical analysis. Specifically, a formal belief system using Bayes' rule is developed to assign and update beliefs concerning other nodes' continuation strategies for each node on the basis of its private imperfect information. Further, we not only show that the packet-forwarding strategy obtained from the belief-evaluation framework achieves a sequential equilibrium [322], which guarantees that the strategy is cheat-proof, but also derive its performance bounds. The simulation results illustrate that the packet-forwarding approach can enforce cooperation in autonomous ad hoc networks under noise and imperfect observation with only a small performance degradation compared with the unconditionally cooperative outcomes. The rest of this chapter is organized as follows. In Section 20.2, we illustrate the system model of autonomous ad hoc networks under noise and imperfect observation and derive a corresponding game-theoretic formulation. A vulnerability analysis for autonomous MANETs under noise and imperfect observation is carried out in Section 20.3. In Section 20.4, we present the belief-evaluation framework and carry out the equilibrium and efficiency analysis for one-hop and multi-node multi-hop packet forwarding. The simulation studies are provided in Section 20.5.
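A minimal sketch of a Bayes'-rule belief update of the kind such a framework uses is given below. It updates a scalar belief that the opponent chose to forward, conditioned on the private signal; the chapter's actual belief system operates on continuation strategies, so this is only indicative. Here pf and pe denote the observation-error probabilities defined in Section 20.2, and the function name is ours:

```python
def update_belief(mu, signal, p_f, p_e):
    """Bayes'-rule update of the belief mu that the opponent forwarded.

    The observation-error model (Section 20.2) gives the likelihoods
        P(f | F) = 1 - p_f,   P(d | F) = p_f
        P(f | D) = p_e,       P(d | D) = 1 - p_e
    where F/D are the true actions and f/d the privately observed
    signals.  The posterior is the standard Bayes ratio.
    """
    if signal == 'f':
        num = mu * (1 - p_f)
        den = num + (1 - mu) * p_e
    else:  # signal == 'd'
        num = mu * p_f
        den = num + (1 - mu) * (1 - p_e)
    return num / den if den > 0 else mu
```

As expected, a forwarding signal pushes the belief up and a drop signal pushes it down, with the speed of the update governed by how noisy the observation channel is.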
20.2 The system model and game-theoretic formulation
20.2.1 System model

We consider autonomous ad hoc networks in which nodes belong to different authorities and have different goals. Assume that all nodes are selfish and rational, that is, their objective is to maximize their own payoff, not to cause damage to other nodes. Each node may act as a service provider, with packets scheduled to be generated and delivered to certain destinations, or as a relay, forwarding packets for other nodes. The sender gets some payoff if the packets are successfully delivered to the destination, and the forwarding effort of relay nodes incurs certain costs. In this chapter we assume that some necessary traffic-monitoring mechanisms, such as those described in [283] [493] [483], will be launched by each node to keep track of its neighbors' actions. However, it is worth mentioning that we do not assume any public or perfect observation, where public observation means that, when an action happens, a group of nodes in the network will have the same observation, and perfect observation means that all actions can be observed without any mistake. In ad hoc networks, due to their multi-hop nature and the lack of a central monitoring mechanism, public observation is usually not possible. Meanwhile, to the best of our knowledge, no existing monitoring mechanism in ad hoc networks can achieve perfect observation. Instead, in this chapter, we study cooperation-enforcement strategies that are based on imperfect private observation. Here, private means that the observation of each node is known only to itself and will not or cannot be revealed to others. We focus on two scenarios causing imperfect observation in ad hoc networks. One scenario is that the outcome of a forwarding action may be the dropping of a packet due
[Figure 20.1 Packet forwarding in autonomous ad hoc networks under noise and imperfect observation: two source–destination pairs (S1, D1) and (S2, D2); S1 drops a packet but is observed as forwarding, while S2 forwards a packet but is observed as dropping.]
to link breakage or transmission errors. The other scenario is that a node has dropped a packet but is observed as forwarding it, which may happen when the watchdog mechanism [283] is used and the node wants to cheat its previous node on the route. Figure 20.1 illustrates our system model by showing a network snapshot of one-hop packet forwarding between two users at a certain time stage under noise and imperfect observation. In Figure 20.1, there are two source–destination pairs, (S1, D1) and (S2, D2). S1 and S2 need to help each other to forward packets to the destination nodes. At this stage, node S1 drops the packet and observes the packet-drop signal of node S2's action, while node S2 forwards the packet and observes the forwarding signal of node S1's action. The action and observation of each node are known only to itself and cannot or will not be revealed to other nodes. Owing to transmission errors or link breakage between S2 and D1, S2's forwarding action is observed as a packet-drop signal; owing to possible cheating behavior between S1 and D2, a false forwarding signal may be observed by S2. Therefore, it is important to design strategies allowing each node to make the optimal decision solely on the basis of these items of imperfect private information.
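The imperfect observation channel just described can be simulated directly. In the notation of the next subsection, a forwarding action F is misread as the drop signal d with probability pf, and a drop action D is misread as the forwarding signal f with probability pe (a sketch with our own function name):

```python
import random

def observe(action, p_f, p_e, rng=random):
    """Noisy observation of an opponent's action.

    'F' (forward) is seen as the drop signal 'd' with probability p_f
    (link breakage or transmission error); 'D' (drop) is seen as the
    forwarding signal 'f' with probability p_e (cheating that fools the
    watchdog).  Otherwise the correct signal is observed.
    """
    if action == 'F':
        return 'd' if rng.random() < p_f else 'f'
    return 'f' if rng.random() < p_e else 'd'
```

Running this channel many times reproduces the observation-error rates pf and pe empirically, which is useful when testing inference strategies built on top of it.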
20.2.2 The static and repeated packet-forwarding game model

We model the process of routing and packet forwarding between two nodes forwarding packets for each other as a game. The players of the game are two network nodes, denoted by i ∈ I = {1, 2}. Each player is able to serve as the relay for the other player and needs the other player to forward packets for him on the basis of the current routing selection and topology. Each player chooses his action, i.e., strategy, ai from the action set A = {F, D}, where F and D are the packet-forwarding and packet-dropping actions, respectively. Also, each player observes a private signal ω of the opponent's action from the set Ω = {f, d}, where f and d are the observations of packet-forwarding and packet-dropping signals, respectively. Since a player's observation cannot be perfect, the forwarding action F of one player may be observed as d by the other player due to link breakage or transmission errors. We let the probability of this eventuality be pf. Also, the non-cooperative action D may be observed as the cooperation signal f under certain circumstances. Without loss of generality, let the probability of this observation error in our system be pe. This eventuality is usually caused by malicious cheating behaviors and
indicates that the group of packets has actually been dropped even though the forwarding signal f is observed. For each node, the cost of forwarding a group of packets for the other node during one stage of play is ℓ, and the gain it can get for the packets that the other node has forwarded for it is g̃. Usually, the gain of a successful transmission accrues to both the source node and the destination node. Noting that the source–destination pair in ad hoc networks usually serves a common communication goal, we assign the gain to the source in the game modeling, without loss of generality. We first consider packet forwarding as a static game that is played just once. Given any action profile a = (a1, a2), we refer to u(a) = (u1(a), u2(a)) as the expected payoff profile. Let a−i and Prob(ωi | a−i) be the action of the ith player's opponent and the probability of observation ωi given a−i, respectively. Then, ui(a) can be obtained as follows:

ui(a) = Σ_{ωi ∈ Ω} ũi(ai, ωi, a−i) · Prob(ωi | a−i),    (20.1)

where ũi is the ith player's payoff determined by the action profile and his own observation. On calculating u(a) for the different strategy pairs, we obtain the strategic form of the static packet-forwarding game shown as a matrix in Figure 20.2. Note that g = (1 − pf) · g̃, which can be obtained from (20.1) by accounting for the possibility of packet dropping. To analyze the outcome of a static game, we use the Nash equilibrium [123] [322], which is a set of strategies, one for each player, such that no selfish player has an incentive to unilaterally change his/her action. Since our two-player packet-forwarding game is similar in structure to the prisoner's dilemma, the only Nash equilibrium is the action profile a* = (D, D), and the better cooperative payoff outcome (g − ℓ, g − ℓ) of the action profile (F, F) cannot be realized in practice in the static packet-forwarding game, owing to the greediness of the players. However, the packet-forwarding game will generally be played many times in a real ad hoc network, so it is natural to extend the static game model to a multi-stage game model. Given that past packet-forwarding behaviors do not influence the feasible actions or the payoff function of the current stage, the multi-stage packet-forwarding game can be analyzed using the repeated-game model. Basically,
where u˜ i is the ith player’s payoff determined by the action profile and his own observation. Then, on calculating u(a) for different strategy pairs, we have the strategic form of the static packet-forwarding game shown as a matrix in Figure 20.2. Note ˜ which can be obtained from (20.1) considering the possibility that g = (1 − pf ) · g, of packet dropping. To analyze the outcome of a static game, we use the Nash equilibrium [123] [322], which is a set of strategies, one for each player, such that no selfish player has an incentive to unilaterally change his/her action. Noting that our two-player packet-forwarding game is similar to the setting of the prisoner’s dilemma game, the only Nash equilibrium is the action profile a ∗ = (D, D), and the better cooperation payoff outcome (g − ", g − ") of the cooperation action profile {F, F} cannot be practically realized in the static packet-forwarding game due to the greediness of the players. However, generally speaking, the above packet-forwarding game will be played many times in real ad hoc networks. It is natural to extend the above static game model to a multistage game model. Given that the past packet-forwarding behaviors do not influence the feasible actions or payoff function of the current stage, the multi-stage packetforwarding game can be further analyzed using the repeated-game model. Basically, Player 2 Player 1
F
D Figure 20.2
F
D
g–l, g–l
–l, g
g,–l
0,0
The two-player packet-forwarding game in strategic form.
in the repeated games, the players face the same static game in every period, and a player's overall payoff is a weighted average of the payoffs at each stage over time. Let ωi^t be the privately observed signal of the ith player in period t. Suppose that the game begins in period 0 with the null history h^0. In this game, a private history for player i at period t, denoted by hi^t, is a sequence of player i's past actions and signals, i.e., hi^t = {ai^τ, ωi^τ}_{τ=1}^{t−1}. Let Hi^t = (A × Ω)^t be the set of all possible period-t histories for the ith player. Denote the infinite packet-forwarding repeated game with imperfect private histories by G(p, δ), where δ ∈ (0, 1) is the discount factor and p = (pf, pe). Assume that pf < 1/2 and pe < 1/2. Then, the overall discounted payoff for player i ∈ I is defined as follows:

Ui(δ) = (1 − δ) Σ_{t=0}^{∞} δ^t · ui^t(a1^t(h1^t), a2^t(h2^t)).    (20.2)
Folk theorems for infinite repeated games assert that there exists δ̂ < 1 such that any feasible and individually rational payoff can be enforced for all δ ∈ (δ̂, 1) by an equilibrium that is based on the public information shared by players. However, one crucial assumption for the folk theorems is that players share common information about each other's actions. In contrast, the nature of our repeated packet-forwarding game for autonomous ad hoc networks determines that the nodes' behavioral strategies can rely solely on private information histories, including their own past actions and imperfectly observed signals.
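The expected stage payoffs of Figure 20.2 and the normalized discounted payoff of Eq. (20.2) can be sketched as follows. A finite-horizon truncation replaces the infinite sum, and the function names are ours:

```python
def stage_payoffs(g_tilde, cost, p_f):
    """Expected stage-game payoff matrix of Figure 20.2 (a sketch).

    g = (1 - p_f) * g_tilde accounts for packets lost to noise even
    when the relay chooses to forward.  Entries are (u1, u2) indexed
    by the action pair (a1, a2), with F = forward and D = drop."""
    g = (1 - p_f) * g_tilde
    return {
        ('F', 'F'): (g - cost, g - cost),
        ('F', 'D'): (-cost, g),
        ('D', 'F'): (g, -cost),
        ('D', 'D'): (0.0, 0.0),
    }

def discounted_payoff(stage_utils, delta):
    """Normalized discounted payoff of Eq. (20.2), truncated to a
    finite sequence of realized stage utilities:
        U = (1 - delta) * sum_t delta**t * u_t."""
    return (1 - delta) * sum(delta ** t * u for t, u in enumerate(stage_utils))
```

For a constant stream of the mutual-cooperation payoff g − ℓ, the normalized discounted payoff converges to g − ℓ as the horizon grows, which is the benchmark the chapter's cooperation strategies are measured against.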
20.3 Vulnerability analysis

In this section, we analyze the vulnerability caused by noise and imperfect observation in autonomous MANETs. First, we study the system vulnerability in the scenario of one-hop packet forwarding. Then, we further exploit the effect of noise and imperfect observation in the scenario of multi-hop packet forwarding. In the scenario of one-hop packet forwarding, the interactions of a pair of nodes forwarding packets for each other can be modeled as the two-player game in the previous section. Although it might seem a minor game-setting change from public observation to private observation due to noise and imperfect observation, such a change in game-setting introduces substantial challenges regarding the interactions, outcomes, and efficiency of our packet-forwarding game, which can be illustrated as follows. First, the noise and observation errors indicate that simple tit-for-tat [12] [5] strategies are not able to enforce an efficient cooperation paradigm among users since such an equivalent retaliation strategy leads to inefficient non-cooperative outcomes. Second, considering the selfishness of the users together with the effects of noise and imperfect observations, the users will not share their action information or observations of others' actions, which indicates that no public information is available for the users. Therefore, the users are not able to coordinate their strategies for efficient outcomes relying solely on private histories, i.e., no recursive structure exists for the
forwarding strategies since the players decide their actions according to various private histories. Third, although dynamic game theory has allowed us to study and define the equilibrium concepts for the outcomes of a game with imperfect information, such as sequential equilibrium (SE) and perfect Bayesian equilibrium (PBE) [322] [123], it does not provide generalized efficient mechanisms by which to achieve SE or PBE in the scenarios of private information. Note that generous tit-for-tat (GTFT) [12] is able to partly alleviate the impact of noise and imperfect observation on the efficiency of the packet-forwarding game outcomes by assuming that the nodes may be generous in the sense that they contribute more to the network than they benefit from it. However, if the constraint of private information is taken into consideration, GTFT cannot work properly. This is because, due to the game-setting of private observation, one user does not know the other user's observation of her/his actions and has only imperfect observation of the other user's actions, which leads to the result that efficient TFT cannot be carried out.

It is clear from the above discussion that noise and imperfect observation cause several vulnerability issues even for simple one-hop packet forwarding in autonomous MANETs, which can be illustrated as follows.

• Since the nodes make decisions on the basis of private information, each node must conduct statistical inference to detect potential deviations and estimate what others are going to do next. The existence of noise and the constraint of imperfect observation will result in false alarms or detection errors. Selfish nodes may be able to utilize such facts to contribute less effort while getting more benefits from others.
• Considering that the nodes are not willing to or not able to share their information, the nodes cannot rely on others’ past experiences or recommendations on the nodes’ behaviors, which gives the selfish nodes more flexibility regarding their cheating behaviors. • In the presence of noise or observation errors, the cooperative nodes may falsely accuse other cooperative nodes of apparently non-cooperative behavior, which is actually caused by link breakage or transmission errors. How to maintain the cooperation paradigm in such scenarios remains a challenging problem. In the scenario of multi-node and multi-hop packet forwarding, more sophisticated vulnerability issues concerning the challenges of the self-organizing routing and the correlation of the nodes’ actions will arise. In general, due to the multi-hop nature, when a node wants to send a packet to a certain destination, a sequence of nodes must be requested to help forward this packet. We refer to the sequence of (ordered) nodes as a route, the intermediate nodes on a route as relay nodes, and the procedure involved in discovering a route as route discovery. In general, the routing process includes route discovery and packet forwarding. Route discovery is carried out in three consecutive steps. First, the requester notifies the other nodes in the network that it wants to find a route to a certain destination. Second, other nodes in the network will make their decisions on whether to agree to be on the discovered route or not. Third, the requester will determine which route should be used. From the discussion of the routing process, we can see that the action and observation of one node on a route will greatly affect the behaviors
of other nodes on this route or on alternative routes between the source and destination nodes, which in turn affects the behavior of the original node. The above properties of multi-node and multi-hop packet forwarding may lead to more vulnerability issues than the case of one-hop packet forwarding, as follows.
• In scenarios with multiple nodes on one route, coordination needs to be built up among multiple nodes in order to detect or punish cheating behaviors effectively, which becomes very complicated and requires sophisticated strategy designs given that only private information is available to each node.
• Since the routing process involves different steps, seemingly cooperative behaviors at each stage may jointly have cheating effects across multiple steps. From the game-theoretic point of view, each stage game in our dynamic packet-forwarding game consists of several subgames, such as a route-participation subgame and a route-selection subgame. The vulnerability issues need to be considered not only for each subgame but also for the overall game.
• Multi-hop routing makes the observation of nodes more difficult, since a packet-dropping action at one node affects the outcome of the whole multi-hop route. Such propagation effects can be exploited by selfish nodes, which may cheat in order to achieve more payoff.
In order to combat the above vulnerability issues in autonomous MANETs under noise and imperfect observation, it is important to study a novel strategy framework that comprehensively addresses these issues.
20.4 A belief-evaluation framework

In this section, we first develop a belief-evaluation framework for a two-player packet-forwarding game, in an attempt to shed light on the solutions to the more complicated multi-player case. An efficiency study is then carried out to analyze the equilibrium properties and performance bounds. Further, a belief-evaluation framework for general networking scenarios with multiple nodes and multi-hop routing is presented.
20.4.1 Two-player belief-based packet forwarding

In order to obtain an efficient and robust forwarding strategy based on each node's own imperfect observations and actions, we consider a belief-evaluation framework to enforce cooperation. First, we define two strategies, σF and σD. Let σF be the trigger cooperation strategy: the player forwards packets in the current stage and, during the next stage, continues to forward packets only if it observes the other player's forwarding signal f. Let σD be the defection strategy: the player always drops packets regardless of its observation history. Such strategies are also called continuation strategies [33]. Since both of these strategies also
determine the player's following actions for every private history, the strategy path and expected future payoffs induced by any pair of the two strategies are fully specified. Let V_{αβ}(p, δ), α, β ∈ {F, D}, denote the repeated-game payoff of σ_α against σ_β, which satisfies the following Bellman equations for the different pairs of continuation strategies (g denotes the forwarding gain and c the forwarding cost):

\[ V_{FF} = (1-\delta)(g-c) + \delta\big[(1-p_f)^2 V_{FF} + p_f(1-p_f)V_{FD} + p_f(1-p_f)V_{DF} + p_f^2\, V_{DD}\big], \quad (20.3) \]
\[ V_{FD} = -(1-\delta)c + \delta\big[(1-p_f)(1-p_e)V_{DD} + p_f(1-p_e)V_{DD} + p_e(1-p_f)V_{FD} + p_f p_e V_{FD}\big], \quad (20.4) \]
\[ V_{DF} = (1-\delta)g + \delta\big[(1-p_f)(1-p_e)V_{DD} + p_e(1-p_f)V_{DF} + p_f(1-p_e)V_{DD} + p_e p_f V_{DF}\big], \quad (20.5) \]
\[ V_{DD} = (1-\delta)\cdot 0 + \delta\big[(1-p_e)^2 V_{DD} + p_e(1-p_e)V_{DD} + p_e(1-p_e)V_{DD} + p_e^2\, V_{DD}\big]. \quad (20.6) \]

Note that the first term on the right-hand side (RHS) of each of the above equations represents the normalized payoff of the current period, while the second term gives the expected continuation payoff over the four possible outcomes due to noise and imperfect observation. By solving the above equations, V_{αβ}(p, δ) can easily be obtained. Then we have V_{DD} > V_{FD} for any δ, p. Furthermore, if δ > δ_0, then V_{FF} > V_{DF}, where δ_0 can be obtained as

\[ \delta_0 = \frac{c}{(1-p_f-p_e)\,g - [\,p_f(1-p_f) - p_e\,]\,c}. \quad (20.7) \]
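To make the payoff comparison concrete, the four Bellman equations can be solved numerically by fixed-point iteration, which converges because δ < 1. The sketch below is ours, not code from the text; `forwarding_values` and `delta0` are hypothetical helper names, and c denotes the forwarding cost:

```python
def forwarding_values(g, c, delta, pe, pf, iters=5000):
    """Solve the Bellman equations (20.3)-(20.6) for the repeated-game
    payoffs of the continuation strategies sigma_F (trigger cooperation)
    and sigma_D (always drop) by fixed-point iteration."""
    VFF = VFD = VDF = VDD = 0.0
    for _ in range(iters):
        VFF = (1 - delta)*(g - c) + delta*((1 - pf)**2*VFF
              + pf*(1 - pf)*VFD + pf*(1 - pf)*VDF + pf**2*VDD)      # (20.3)
        VFD = -(1 - delta)*c + delta*((1 - pf)*(1 - pe)*VDD
              + pf*(1 - pe)*VDD + pe*(1 - pf)*VFD + pf*pe*VFD)      # (20.4)
        VDF = (1 - delta)*g + delta*((1 - pf)*(1 - pe)*VDD
              + pe*(1 - pf)*VDF + pf*(1 - pe)*VDD + pe*pf*VDF)      # (20.5)
        VDD = delta*VDD  # the four branches of (20.6) sum to delta*V_DD
    return VFF, VFD, VDF, VDD

def delta0(g, c, pe, pf):
    """Cooperation threshold (20.7): V_FF > V_DF whenever delta > delta0."""
    return c / ((1 - pf - pe)*g - (pf*(1 - pf) - pe)*c)
```

For instance, with g = 1, c = 0.2, and pe = pf = 0.05, one finds V_DD = 0 > V_FD, and V_FF > V_DF once δ exceeds δ_0 ≈ 0.22, matching the ordering stated above.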
Suppose that player i believes that his opponent is playing either σF or σD, and is playing σF with probability μ. Then the difference between his payoff from playing σF and that from playing σD is given by

\[ V(\mu;\delta,p) = \mu\,(V_{FF} - V_{DF}) - (1-\mu)\,(V_{DD} - V_{FD}). \quad (20.8) \]

Hence V(μ) is linear and increasing in μ, and there is a unique value π(δ, p) that makes it zero, which can be obtained as follows:

\[ \pi(\delta,p) = \frac{-V_{FD}(\delta,p)}{V_{FF}(\delta,p) - V_{DF}(\delta,p) - V_{FD}(\delta,p)}, \quad (20.9) \]
where π(δ, p) is defined so that player i is indifferent between playing σF and σD when player j plays σF with probability π(δ, p) and σD with probability 1 − π(δ, p). For simplicity, π(δ, p) is written as π whenever no confusion can arise. In general, if node i believes that the other node will forward packets with a probability smaller than 1/2, then node i is inclined not to forward packets for the other node. Considering such a situation, we let δ be such that π(δ, p) > 1/2.
Table 20.1. The two-player packet-forwarding algorithm

1. Initialization: given the system parameter configuration (δ, pe, pf), node i initializes its belief μ_i^1 of the other node as π(δ, p) and chooses its forwarding action in period 1.
2. Belief update on the basis of the private history: update each node's belief μ_i^{t−1} to μ_i^t using (20.10)–(20.13), according to the particular realization of the private history.
3. Optimal decision on the player's next move:
   if the continuation belief μ_i^t > π, node i plays σF;
   if the continuation belief μ_i^t < π, node i plays σD;
   if the continuation belief μ_i^t = π, node i plays either σF or σD.
4. Iteration: let t = t + 1, then go back to Step 2.
It is worth mentioning that (20.8) is applicable to any period. Thus, if a node's belief that its opponent's continuation strategy is σF equals μ, then, in order to maximize its expected continuation payoff, the node prefers σF to σD if μ > π and prefers σD to σF if μ < π. Starting from any initial belief μ, the ith player's new belief when he takes action a_i and receives signal ω_i can be defined using Bayes' rule as follows:

\[ \mu(h_i^{t-1},(F,f)) = \frac{\mu(h_i^{t-1})\,(1-p_f)^2}{\mu(h_i^{t-1})(1-p_f) + p_e\,(1-\mu(h_i^{t-1}))}, \quad (20.10) \]
\[ \mu(h_i^{t-1},(F,d)) = \frac{\mu(h_i^{t-1})\,(1-p_f)\,p_f}{\mu(h_i^{t-1})\,p_f + (1-p_e)(1-\mu(h_i^{t-1}))}, \quad (20.11) \]
\[ \mu(h_i^{t-1},(D,f)) = \frac{\mu(h_i^{t-1})\,(1-p_f)\,p_e}{\mu(h_i^{t-1})(1-p_f) + p_e\,(1-\mu(h_i^{t-1}))}, \quad (20.12) \]
\[ \mu(h_i^{t-1},(D,d)) = \frac{\mu(h_i^{t-1})\,p_f\,p_e}{\mu(h_i^{t-1})\,p_f + (1-p_e)(1-\mu(h_i^{t-1}))}. \quad (20.13) \]

On the basis of the above discussion, we develop the two-player packet-forwarding algorithm summarized in Table 20.1. Note that, with the belief system, each node need only maintain its belief value, its most recent observation, and its most recent action, instead of the long-run history of its interactions with other users.
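The belief machinery of Table 20.1 can be sketched in a few lines. The illustration below is ours (the function names are hypothetical), and it uses the closed form of π given later in (20.20):

```python
def pi_threshold(g, c, delta, pe, pf):
    """Indifference belief pi(delta, p), closed form (20.20)."""
    return (c / (g - c)) * (1 - delta*(1 - pf)**2) / (delta*(1 - pf - pe))

def update_belief(mu, action, signal, pe, pf):
    """Bayesian updates (20.10)-(20.13). mu: node i's belief that the
    opponent follows sigma_F; action: i's own move ('F' or 'D');
    signal: i's imperfect observation of the opponent ('f' or 'd')."""
    if action == 'F':
        if signal == 'f':
            return mu*(1 - pf)**2 / (mu*(1 - pf) + pe*(1 - mu))
        return mu*(1 - pf)*pf / (mu*pf + (1 - pe)*(1 - mu))
    if signal == 'f':
        return mu*(1 - pf)*pe / (mu*(1 - pf) + pe*(1 - mu))
    return mu*pf*pe / (mu*pf + (1 - pe)*(1 - mu))

def next_action(mu, pi):
    """Step 3 of Table 20.1: forward iff the continuation belief exceeds pi
    (the knife-edge case mu == pi is resolved toward sigma_F here)."""
    return 'F' if mu >= pi else 'D'
```

Observing the forwarding signal after cooperating raises the belief toward the fixed point φ, while observing a drop lowers it sharply.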
20.4.2 Efficiency analysis

In this part, we show that the behavioral strategy obtained from the algorithm with a well-defined belief system is a sequential equilibrium [322], and we further analyze its performance bounds.
First, we briefly introduce the equilibrium concepts of repeated games with imperfect information. For the infinitely repeated game with perfect information, the Nash equilibrium is a useful concept for analyzing the game outcomes. Further, in the same scenario with perfect information, the subgame-perfect equilibrium (SPE) can be used to study the game outcomes: it is an equilibrium such that the users' strategies constitute a Nash equilibrium in every subgame of the original game, which eliminates those Nash equilibria in which the players' threats are not credible. However, the above equilibrium criteria for the infinitely repeated game require that perfect information be available to each player. In our packet-forwarding game, each node has only its own strategy history and must form beliefs regarding other nodes' future actions through imperfect observation. Sequential equilibrium is a well-defined counterpart of subgame-perfect equilibrium for multi-stage games with imperfect information: it has not only sequential rationality, which guarantees that any deviation is unprofitable, but also consistency concerning zero-probability histories. In our packet-forwarding game with private history and observation, the strategy with belief system can be denoted by (σ*, μ), where μ = {μ_i}_{i∈I} and σ* = {σ_i*}_{i∈I}. By studying (20.10), we find that there exists a point φ such that μ(h_i^{t−1}, (F,f)) < μ(h_i^{t−1}) when μ(h_i^{t−1}) > φ, while μ(h_i^{t−1}, (F,f)) > μ(h_i^{t−1}) when μ(h_i^{t−1}) < φ. Here, φ can be calculated as φ = [(1 − pf)² − pe]/(1 − pf − pe). It is easy to show that μ(h_i^{t−1}, (a_i, ω_i)) < μ(h_i^{t−1}) when (F,d), (D,f), or (D,d) is reached. Since we initialize the belief with π, we have μ_i^t ≤ φ after any belief-updating operation if π < φ. Since belief updating in the scenario π ≥ φ becomes trivial, we assume that π < φ, and thus μ_i^t ∈ [0, φ] and φ ≥ 1/2.
Then let the packet-forwarding strategy profile σ* be defined as σ_i*(μ_i) = σF if μ_i > π and σ_i*(μ_i) = σD if μ_i < π; if μ_i = π, the node forwards packets with probability π and drops them with probability 1 − π. Noting that π(δ, p) ≤ φ, we obtain another constraint on δ:

\[ \delta \ge \underline{\delta} = \frac{c}{[(1-p_f)^2 - p_e]\,g + c\,p_e}. \quad (20.14) \]
Using the above equilibrium criteria for repeated games with imperfect information, we now analyze the properties of the strategy in Table 20.1 through the following theorems.

Theorem 20.4.1 The strategy profile σ* with belief system μ from Table 20.1 is a sequential equilibrium for π ∈ (1/2, φ).

Proof. First, we prove the sequential rationality of the solution obtained by our algorithm. It has been shown in [322] that (σ, μ) is sequentially rational if and only if no player i has a history at which a change in σ_i(h_i) increases his expected payoff. This is also called the one-step-deviation property for sequential equilibrium, which we use in our proof to show the sequential rationality of the solution.
There are three possible outcomes, according to the relation between μ and π.

(1) If μ_i(h_i^{t−1}) > π, a one-step deviation from σ* is to drop packets during the current period and continue with σ* during the next period. Since the action player i chooses is D, the operators (20.12) and (20.13) must be considered for updating beliefs. Noting that μ_i(h_i^{t−1}, (D,f)) is increasing in μ(h_i^{t−1}) and that μ(h_i^{t−1}) ≤ 1, we obtain μ_i(h_i^{t−1}, (D,f)) < pe. Since π > 1/2 and pe < 1/2, the continuation belief satisfies μ_i(h_i^{t−1}, (D,f)) < π. Only the following two sub-cases then need to be considered.

(i) Suppose that μ_i(h_i^{t−1}, (D,d)) ≤ π. In this case, since both continuation beliefs are at most π, the one-step deviation results in the continuation strategy σD. Considering the node's current action D, the deviating node plays σD in this sub-case. However, (20.8) shows that a rational node prefers σF to σD when μ_i(h_i^{t−1}) > π. Hence a one-step deviation here cannot increase the node's payoff.

(ii) Suppose that μ_i(h_i^{t−1}, (D,d)) > π. The one-step deviation is to drop packets during the current period and continue with σD if the history information set (D,f) is reached, or continue with σF if (D,d) is reached. Compared with the first sub-case, the one-step deviation differs from σD only when the information set (D,d) is reached. Let V̂(μ) be the difference in payoff between the presented solution and the one-step deviation, which can be written as

\[ \hat V_i(\mu_i^{t-1}) = V_i(\mu_i^{t-1}) - \delta\big[\mu_i^{t-1}\,p_f + (1-p_e)(1-\mu_i^{t-1})\big]\, V_i\big(\mu(h_i^{t-1},(D,d))\big), \quad (20.15) \]

where the first term on the RHS is the difference in payoff between σF and σD, and the second term is the conditional payoff difference when (D,d) is reached. Noting that (20.13) implies μ_i(h_i^{t−1}, (D,d)) < μ_i(h_i^{t−1}) and that V(μ) is increasing in μ, we have V_i(μ_i(h_i^{t−1})) > V_i(μ_i(h_i^{t−1}, (D,d))).
Moreover, because the coefficient of the second term in (20.15) is less than one, V̂_i(μ_i(h_i^{t−1})) is strictly greater than zero. Thus the one-step deviation is not profitable in this sub-case. Since there are no sub-cases other than the above ones, we have shown that, if μ_i(h_i^{t−1}) > π, a one-step deviation cannot increase the node's payoff.

(2) If μ_i(h_i^{t−1}) < π, a one-step deviation from σ* is to forward packets during the current period and continue with σ* during the next period. Considering that π < φ and that μ_i(h_i^{t−1}, (F,d)) is increasing in μ_i(h_i^{t−1}), we can show
that μ_i(h_i^{t−1}, (F,d)) < 1/2 if μ_i(h_i^{t−1}) < π, and thus μ_i(h_i^{t−1}, (F,d)) < π. There are then two sub-cases.

(i) If μ_i(h_i^{t−1}, (F,f)) ≥ π, the one-step deviation from σ* becomes playing the cooperation strategy σF. As shown by (20.8), σD is preferable to σF if μ_i(h_i^{t−1}) < π.

(ii) If μ_i(h_i^{t−1}, (F,f)) < π, the deviating strategy differs from σF only when the private history (F,f) is reached. Let Ṽ(μ_i(h_i^{t−1})) be the difference in payoff between the equilibrium strategy σD and the one-step deviation, which can be obtained as

\[ \tilde V(\mu_i(h_i^{t-1})) = V(\mu_i(h_i^{t-1})) - \delta\big[\mu_i(h_i^{t-1})(1-p_f) + p_e\,(1-\mu_i(h_i^{t-1}))\big]\, V\big(\mu_i(h_i^{t-1},(F,f))\big). \quad (20.16) \]

Note that V(μ_i(h_i^{t−1})) < V(μ_i(h_i^{t−1}, (F,f))), since μ_i(h_i^{t−1}, (F,f)) > μ_i(h_i^{t−1}). Since the coefficient of the second term on the RHS of (20.16) is less than one, Ṽ(μ_i(h_i^{t−1})) is positive, which shows that the one-step deviation in this sub-case cannot increase the payoff.

(3) If μ_i(h_i^{t−1}) = π, the node is indifferent between forwarding and dropping packets, by (20.8). Obviously, a one-step deviation will not change the expected payoff.

By studying the above three cases, we prove that the strategy (σ*, π) of the packet-forwarding game is sequentially rational when π ∈ (1/2, φ). Next, we prove the consistency of the strategy. Since the strategy is a pure strategy except when μ_i = π, we construct a completely mixed strategy (σ_i^ε, μ_i^ε) by allowing a tremble with a small probability ε from the pure forwarding or dropping strategy. By applying (20.10)–(20.13) to the belief-update system with trembling, it is easy to show that μ_i^ε converges to μ_i as ε approaches zero. Therefore, given a sequence ε̄ = (ε_n)_{n=1}^∞ satisfying lim_{n→∞} ε_n = 0, the sequence (σ_i^{ε_n}, μ_i^{ε_n})_{n=1}^∞ of completely mixed strategies converges to the strategy (σ*, μ) when the belief system is updated by Bayes' rule.
Therefore, since the strategy satisfies the sequential-rationality and consistency properties when π ∈ (1/2, φ), it is a sequential equilibrium for the packet-forwarding game with imperfect private observation. Theorem 20.4.1 shows that the strategy profile σ ∗ and the belief system μ obtained from the algorithm constitute a sequential equilibrium, which not only responds optimally at every history but also has consistency on zero-probability histories. Thus, cooperation can be enforced using the algorithm since the deviation will not increase the players’ payoffs. Then, similarly to [33], it is straightforward to prove the following
theorem, which addresses the efficiency of the equilibrium and shows that, when pe and pf are small enough, the strategy approaches the cooperative payoff g − c.

Theorem 20.4.2 Given g and c, for any small positive τ there exist δ̃ ∈ (0, 1) and p̃ such that the average payoff of the strategy σ* in the packet-forwarding repeated game G(p, δ) is greater than g − c − τ if δ > δ̃ and pe, pf < p̃.

However, in real ad hoc networks, considering node mobility, channel fading, and the cheating behaviors of the nodes, it might not be practical to assume very small pe and pf values. A more useful and important measurement is the performance bound of the strategy for some fixed pe and pf. We develop the following theorem, which studies the lower and upper bounds of our strategy to provide a performance guideline. In order to model the prevalent data applications in current ad hoc networks, we assume that the game discount factor is very close to 1.

Theorem 20.4.3 Given fixed (pe, pf) and a discount factor δG of the repeated game close to 1, the payoff of the algorithm in Table 20.1 is upper-bounded by

\[ \bar U = (1-\kappa)\,(g-c), \quad (20.17) \]

where

\[ \kappa = \frac{p_f\,[\,g(1-p_f) + c\,]}{(1-p_f-p_e)(g-c)}. \quad (20.18) \]
The lower bound of the performance approaches the upper bound when the discount factor δG of the repeated game approaches 1 and the packet-forwarding game is divided into N subgames as follows: the first subgame is played in periods 1, N + 1, 2N + 1, ..., the second subgame in periods 2, N + 2, 2N + 2, ..., and so on. The optimal N is

\[ N = \lfloor \log \underline{\delta} / \log \delta_G \rfloor. \quad (20.19) \]
The strategy is played in each subgame with the equivalent discount factor δ_G^N.

Proof. On substituting V_{αβ} obtained from (20.3)–(20.6) into (20.9), we have

\[ \pi(\delta,p) = \frac{c}{g-c}\cdot\frac{1-\delta(1-p_f)^2}{\delta(1-p_f-p_e)}. \quad (20.20) \]
Then, since node i is indifferent between forwarding and dropping packets when its belief about the other node equals π, the expected payoff of node i at the sequential equilibrium (σ*, μ) can be written as

\[ V(\pi,\delta,p) = \pi(\delta,p)\,V_{DF}(\delta,p) + (1-\pi(\delta,p))\,V_{DD}(\delta,p). \quad (20.21) \]
It is easy to show that V(π(δ, p), δ, p) is a decreasing function of δ for δ ∈ (0, 1). The upper bound of the expected payoff can therefore be obtained by letting δ take the smallest feasible value. From (20.7) and (20.14), we require δ ≥ δ̲ and δ > δ_0. Since δ̲ > δ_0, we
can derive the upper bound of the payoff of the algorithm as (20.17) by substituting δ̲ into (20.21). However, the discount factor of our game is usually close to 1, whereas δ̲ is a relatively small value in (0, 1). In order to emulate the optimal discount factor δ̲, we introduce the following game-partition method. We partition the original repeated game G(p, δG) into N distinct subgames as the theorem illustrates; each subgame can be regarded as a repeated game with discount factor δ_G^N. The optimal subgame number N, which minimizes the gap between δ_G^N and δ̲, can be calculated as N = ⌊log δ̲/log δG⌋. Since there is always a difference between δ_G^N and δ̲, it is important to study the maximal gap, which yields the lower bound of the payoff under our game-partition method. Similarly to [108], we can show that, with the optimal N, δ_G^N ∈ [δ̲, δ̄], where δ̄ = δ̲/δG. On substituting δ̄ into (20.21), we obtain the lower bound of the payoff of the algorithm with the game-partition method. When δG approaches 1, δ̄ approaches δ̲, and the payoff of our algorithm achieves the payoff upper bound.
In the above theorem, the idea of dividing the original game into some subgames is useful as a way to maintain the efficiency when δ approaches one for our game setting. A larger δ indicates that future payoffs are more important for the total payoff, which results in a greater number of subgames. Since there are multiple subgames using the belief-based forwarding strategy, even if the outcomes of some subgames become the non-cooperation case due to the observation errors and noise, cooperation can still continue in other subgames, increasing the total payoff.
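The partition step and the bound of Theorem 20.4.3 are straightforward to compute. The following sketch uses our own helper names together with the expressions (20.14) and (20.17)–(20.19):

```python
import math

def lower_delta(g, c, pe, pf):
    """Smallest feasible discount factor, from (20.14)."""
    return c / (((1 - pf)**2 - pe)*g + c*pe)

def optimal_partition(g, c, pe, pf, delta_G):
    """N = floor(log delta_low / log delta_G), from (20.19); each of the
    N interleaved subgames then runs with equivalent discount delta_G**N."""
    d = lower_delta(g, c, pe, pf)
    return math.floor(math.log(d) / math.log(delta_G))

def payoff_upper_bound(g, c, pe, pf):
    """Upper bound (20.17) with kappa from (20.18)."""
    kappa = pf*(g*(1 - pf) + c) / ((1 - pf - pe)*(g - c))
    return (1 - kappa)*(g - c)
```

With g = 1, c = 0.2, pe = 0.01, pf = 0.05, and δG = 0.99, the equivalent per-subgame discount δ_G^N lands in the interval [δ̲, δ̲/δG], as the proof requires.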
20.4.3 Multi-node multi-hop packet forwarding

In the previous parts, we mainly focused on the two-player case, although in an ad hoc network there usually exist many nodes and multi-hop routing is generally enabled. In this section, we model the interactions among selfish nodes in an autonomous ad hoc network as a multi-player packet-forwarding game, and develop the optimal belief-evaluation framework on the basis of the two-player belief system.
20.4.3.1 The multi-node multi-hop game model

In this section, we consider autonomous ad hoc networks in which nodes can move freely inside a certain area. For each node, packets are scheduled to be generated and sent to certain destinations. In contrast to the two-player packet-forwarding game, the multi-player packet-forwarding game concerns multi-hop packet forwarding, which involves the interactions and beliefs of all the nodes on the route. Before studying belief-based packet forwarding in this scenario, we first model the multi-player packet-forwarding game as follows.
• There are M players in the game, representing the M nodes in the network. Denote the player set by I_M = {1, 2, ..., M}.
• Each player i ∈ I_M has groups of packets to be delivered to certain destinations. The payoff from successfully having a group of packets delivered during one stage is denoted by g̃.
• For each player i ∈ I_M, forwarding a group of packets for another player incurs the cost c.
• Owing to the multi-hop nature of ad hoc networks, the destination player might not be within sender i's direct transmission range. Player i needs not only to find the possible routes leading to the destination (route discovery), but also to choose an optimal route from among multiple candidates to help forward the packets (route selection).
• Each player knows only his own past actions and an imperfect observation of the other players' actions. Note that the information history consisting of these two parts is private to each player.

Similarly to [396], we assume that the network operates in discrete time. In each time slot, one of the M nodes is randomly selected as the sender. The probability that the sender finds r possible routes is given by q_r(r), and the probability that each route needs h̄ hops is given by q_h̄(h̄) (we assume that at least one hop is required in each time slot). Note that the h̄ relays on each route are selected from among the remaining nodes with equal probability, and h̄ ≤ ⌊g̃/c⌋. Assume that each routing session lasts for one slot and that the routes remain unchanged within each time slot. In our study, we consider that delicate traffic-monitoring mechanisms, such as receipt-submission approaches [493], are in place; hence the sender can observe each node on the forwarding route.
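One stage of this network model can be sampled as follows. This is a sketch of ours, with the pmfs q_r and q_h̄ passed as plain dictionaries (an assumption on representation, since the text leaves it open):

```python
import random

def sample_stage(M, q_r, q_h, rng=random):
    """Draw one time slot: a uniformly random sender, r candidate routes
    with r ~ q_r, and for each route h relay nodes with h ~ q_h, chosen
    uniformly without replacement from the remaining M - 1 nodes."""
    sender = rng.randrange(M)
    others = [n for n in range(M) if n != sender]
    r = rng.choices(list(q_r), weights=list(q_r.values()))[0]
    routes = []
    for _ in range(r):
        h = rng.choices(list(q_h), weights=list(q_h.values()))[0]
        routes.append(rng.sample(others, h))
    return sender, routes
```

The sender then runs the two-player belief machinery against each relay on the discovered routes, as described next.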
20.4.3.2 The design of the belief-evaluation system

In this part, we develop an efficient belief-evaluation framework for multi-hop packet-forwarding games on the basis of the two-player approach. Since successful packet transmission through a multi-hop route depends on the actions of all the nodes on the route, the belief-evaluation system needs to consider the observation error introduced by each node, which makes a direct design of the belief system for the multi-player case very difficult. However, the two-player algorithm can be applied to solve the multi-player packet-forwarding problem by treating the multi-node multi-hop game as many two-player games between the source and each relay node. Let R_i^t denote the set of players on the forwarding route of player i in the tth period, and let μ_{i,j} denote sender i's belief value of node j on the route. The forwarding strategy for the multi-player case is as follows.
The belief-based multi-hop packet-forwarding (BMPF) strategy
In the multi-node multi-hop packet-forwarding game, given the discount factor δG and p = (pe, pf), the sender and relay nodes act as follows during the different phases of the routing process.
• Game partition and belief initialization. Partition the original game into N subgames according to (20.19). Each node then initializes its belief of the other nodes as π(δ_G^N, p) and forwards packets with probability π(δ_G^N, p).
• Route participation. The selected relay node on each route participates in the routing if and only if its beliefs regarding the sender and the other forwarding nodes are greater than π.
• Route selection. The sender selects, from among the route candidates, the route with the largest aggregate belief μ̄_i = ∏_{j∈R_i} μ_{ij} such that μ_{ij} > π for every relay j.
• Packet forwarding. The sender updates its belief of each relay node's continuation strategy using (20.10)–(20.13) and decides its following actions on the basis of its beliefs.

In the above strategy, the belief value of each node plays an important role. Nodes that intentionally drop packets will gradually be isolated, since nodes holding low belief values of the misbehaving nodes will neither cooperate with them nor participate in routes involving them. With the help of the route-participation and route-selection stages, our strategy simplifies the complicated multi-node multi-hop packet-forwarding game into multiple two-player games between the sender and the relay nodes. However, the equivalent two-player gain g here differs from that in Table 20.1: it must also account for error propagation and routing diversity, through the routing statistics q_r(r) and q_h̄(h̄). Note that the roles of sender and relay nodes may change over time, depending on which source–destination pair has packets to transmit. Since each node is selfish and tries to maximize its own payoff, all nodes are inclined to follow the above strategy in order to achieve the optimal payoff. In order to formally show that cooperation is enforced, we present the following theorem.

Theorem 20.4.4 The packet-forwarding strategy and belief-evaluation system specified by the BMPF strategy lead to a sequential equilibrium for the multi-player packet-forwarding game.

Proof. A sequential equilibrium for a game with imperfect information is not only sequentially rational but also consistent [322].
First, we prove the sequential rationality of the strategy using the one-step-deviation property [322], which indicates that (σ, μ) is sequentially rational if and only if no player i has a history h_i at which a change in σ_i(h_i) increases his expected payoff. During the route-participation stage, we assume that each forwarding node j ∈ R_i has built up a belief value μ_{ji} of the sender i and belief values μ_{jk} of every other relay node k ∈ R_i. The one-step-deviation property is examined in the following three sub-cases for any forwarding node j. First, if μ_{ji} > π and μ_{jk} > π for all k ≠ j, a one-step deviation is not to participate in the routing. In this case, the forwarding node misses the opportunity of cooperating with the sender, which (20.8) shows to be profitable for the forwarding node. Second, if μ_{ji} < π and μ_{jk} > π for all k ≠ j, a one-step deviation is to participate in the routing. Since relay node j will drop the packets from sender i, the equivalent cooperation gain g in Table 20.1 decreases owing to packet dropping by the participating nodes, which also decreases the future gain of node j. Although node j does not incur the cost of forwarding packets for node i, its future gain is damaged by the smaller g. Thus a one-step deviation is not profitable in this sub-case. Third, if μ_{ji} < π and there exists a node k such that μ_{jk} < π, non-cooperative forwarding behavior may occur, since node j's belief of node k is lower than the threshold π. Such a possible non-cooperative outcome may decrease the expected
equivalent gain g, which results in a future payoff loss, as (20.17) shows. Therefore, in all three sub-cases of the route-participation stage, a one-step deviation from the BMPF strategy cannot increase the payoffs of the nodes. During the route-selection stage, two sub-cases need to be considered for the one-step-deviation test. First, if a route with the largest aggregate belief but with μ_{ij} < π for some j is selected as the forwarding route, there are non-cooperative interactions between the sender i and relay j, which decrease the expected equivalent gain g and thereby lower the future payoffs. Second, if the route with the largest aggregate belief is not selected, the expected gain g could still be increased by using the route with the larger probability of successful forwarding. Thus a one-step deviation during the route-selection stage is not profitable. Further, Theorem 20.4.1 applies directly to prove sequential rationality for every packet-forwarding stage. To sum up, the BMPF strategy is sequentially rational for the multi-node multi-hop packet-forwarding game. Besides, following the definition of consistency for sequential equilibria, it is straightforward to prove consistency for the BMPF strategy. Therefore, the multi-player packet-forwarding strategy is a sequential equilibrium.

Since the above theorem proves that the BMPF strategy is a sequential equilibrium, cooperation among the nodes can be enforced and no selfish node will deviate from the equilibrium. Since all nodes follow the strategy in order to obtain optimal payoffs, the expected gain g in Table 20.1 can be written as

\[ g = \tilde g \cdot E_{r,\bar h}\Big[1 - \big[1 - (\pi(1-p_f))^{\bar h}\big]^r\Big] - E(\bar h)\,\pi\, c, \quad (20.22) \]
where E(h̄) is the expected number of hops and E_{r,h̄} denotes expectation with respect to the random variables r and h̄. The first term on the RHS of (20.22) is the expected gain of the sender considering the multiple hops and possible routes; the second term on the RHS is the expected forwarding cost incurred by sender i in returning the forwarding favor of the other relay nodes on its route. Note that π in (20.22) is itself affected by g, as (20.20) shows, which complicates the computation of g. However, as we show in Theorem 20.4.3, the optimal π approaches φ when δ approaches δ̲. Considering situations in which δG approaches 1, π can be made very close to φ. Then we can approximate g by substituting φ for π in (20.22), where φ is determined solely by pf and pe.
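Under the approximation π ≈ φ discussed above, (20.22) reduces to a short computation. The sketch below is ours and additionally assumes that r and h̄ are independent, with their pmfs given as dictionaries:

```python
def equivalent_gain(g_tilde, c, pe, pf, q_r, q_h):
    """Approximate the equivalent two-player gain g of (20.22), replacing
    pi by its limit phi = ((1-pf)^2 - pe)/(1 - pf - pe)."""
    phi = ((1 - pf)**2 - pe) / (1 - pf - pe)
    # E_{r,h}[ 1 - (1 - (phi*(1-pf))**h)**r ]: probability that at least
    # one of the r candidate routes delivers the packet over all h hops
    succ = sum(pr * ph * (1 - (1 - (phi*(1 - pf))**h)**r)
               for r, pr in q_r.items() for h, ph in q_h.items())
    E_h = sum(h*ph for h, ph in q_h.items())
    return g_tilde*succ - E_h*phi*c
```

As a sanity check, with no noise (pe = pf = 0), a single one-hop route, and g̃ = 1, c = 0.2, the expression collapses to g̃ − c = 0.8, the two-player cooperative gain.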
20.5 Simulation studies

In this section, we investigate the cooperation-enforcement results of the belief-evaluation framework by simulation. We first focus our simulation studies on one-hop packet-forwarding scenarios in ad hoc networks, to which the two-player belief-based packet-forwarding approach can be directly applied. Let M = 100, g = 1, and c = 0.2 in our simulation. In each time slot, any one of the nodes is chosen with equal probability as the relay node for the sender. For comparison, we define the cooperative strategy, in which we assume that every node
will unconditionally forward packets with no regard to other nodes' past behaviors. Such a cooperative strategy is not implementable in autonomous ad hoc networks, but it can serve as a loose performance upper bound of the strategy to measure the performance loss due to noise and imperfect observation. Figure 20.3 shows the average payoff and performance bounds of the strategy based on our belief-evaluation framework for various pf by comparing them with the cooperative payoff. Note that pe = 0.01 and δG = 0.99. It can be seen from Figure 20.3 that the approach can enforce cooperation with only a small performance loss compared with the unconditionally cooperative payoff. Further, Figure 20.3 shows that the average payoff of the strategy satisfies the theoretical payoff bounds developed in Theorem 20.4.3. The fluctuation of the payoff curve of our strategy arises because the original game can be partitioned only into an integer number of subgames. Figure 20.4 shows the ratio of the payoffs of our strategy to those of the cooperative strategy for various pe and pf. Here we let δG = 0.999 to approach the payoff upper bound. It can be seen from Figure 20.4 that, even if pf is as large as 0.1 due to link breakage or transmission errors, our cooperation-enforcement strategy can still achieve as much as 80% of the cooperative payoff. In order to show that the strategy is cheat-proof among selfish users, we define the deviation strategies for comparison. The deviation strategies differ from the presented strategy only when the continuation strategy σF and observation F are reached. The deviation strategies will play σD with some probability pd of deviating instead of playing σF as in the belief-evaluation framework. Figure 20.5 compares the nodes'
Figure 20.3 The average payoffs of the cooperative strategy and our strategy (average node payoff versus pf, showing the fully cooperative strategy and the payoff lower bound, average payoff, and payoff upper bound of our strategy).
Belief evaluation for cooperation enforcement
Figure 20.4 Payoff ratios of our strategy to the cooperative strategy (payoff ratio versus pf, for pe = 0.01, 0.1, and 0.2).
Figure 20.5 Payoff comparison of our strategy and deviating strategies (average payoff versus pf, for our strategy without deviation, with deviation pd = 0.1, with deviation pd = 0.2, and the cooperative payoff).
average payoffs of our strategy, the cooperative strategy, and deviation strategies with probabilities of deviation pd = 0.1 and 0.2. Note that δG = 0.999 and pe = 0.1. Figure 20.5 shows that the strategy has much better payoffs than those of the deviating strategies.
Next, we study the performance of the multi-hop multi-node packet-forwarding approach. Before evaluating the performance of the strategy, we first need to obtain the routing statistics such as qr(r) and qh̄(h̄). An autonomous ad hoc network is simulated with M nodes randomly deployed inside a rectangular region of 10γ × 10γ according to the two-dimensionally uniform distribution. The maximal transmission range is γ = 100 m for each node, and each node moves according to the random waypoint model [209]. Let the "thinking time" of the model be the duration of each routing stage. Dynamic source routing (DSR) [209] is used as the underlying method to discover possible routes. Let λ = Mπ/100 denote the normalized node density, i.e., the average number of neighbors for each node in the network. Note that each source–destination pair is formed by randomly picking two nodes in the network. Moreover, multiple routes with different numbers of hops may exist for each source–destination pair. Since the routes with the minimum number of hops achieve the lowest costs, without loss of generality, we consider only the minimum-hop-number routes as the routing candidates. In order to study the routing statistics, we first conduct simulations to study the number of hops on the minimum-hop-number route for source–destination pairs. Let h_min(n_i, n_j) = ⌈dist(n_i, n_j)/γ⌉ denote the ideal minimum number of hops needed to traverse from node i to node j, where dist(n_i, n_j) denotes the physical distance between node i and node j, and let h̄̃(n_i, n_j) denote the number of hops on the actual minimum-hop-number route between the two nodes. Note that we simulate 10^6 samples of topologies to study the dynamics of the routing in ad hoc networks. Figure 20.6 shows the approximated cumulative probability mass function (CMF) of the difference between h̄̃(n_i, n_j) and h_min(n_i, n_j) for various node densities.
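The routing-statistics experiment above can be sketched as a small Monte Carlo: random topologies on a 10γ × 10γ region, breadth-first search for the actual minimum hop count, and ⌈dist/γ⌉ as the ideal lower bound. The sample sizes below are illustrative, far smaller than the 10^6 topologies used in the chapter.

```python
import math, random
from collections import deque

def hop_difference_samples(M, gamma, n_topologies, pairs_per_topo, rng):
    """Return samples of h~ - h_min over random unit-disk topologies,
    where h~ is the BFS hop count and h_min = ceil(dist/gamma)."""
    diffs = []
    for _ in range(n_topologies):
        nodes = [(rng.uniform(0, 10 * gamma), rng.uniform(0, 10 * gamma))
                 for _ in range(M)]
        # build the adjacency lists of the unit-disk graph (range gamma)
        adj = [[] for _ in range(M)]
        for i in range(M):
            for j in range(i + 1, M):
                if math.dist(nodes[i], nodes[j]) <= gamma:
                    adj[i].append(j)
                    adj[j].append(i)
        for _ in range(pairs_per_topo):
            s, d = rng.sample(range(M), 2)
            # BFS for the minimum hop count from s to d
            hops = {s: 0}
            q = deque([s])
            while q and d not in hops:
                u = q.popleft()
                for v in adj[u]:
                    if v not in hops:
                        hops[v] = hops[u] + 1
                        q.append(v)
            if d in hops:  # skip disconnected pairs
                h_min = math.ceil(math.dist(nodes[s], nodes[d]) / gamma)
                diffs.append(hops[d] - h_min)
    return diffs

rng = random.Random(7)
# M = 318 gives a normalized density lambda = M*pi/100 of about 10
diffs = hop_difference_samples(M=318, gamma=100.0, n_topologies=5,
                               pairs_per_topo=10, rng=rng)
```

Since each hop covers at most γ, the sampled differences are always non-negative; their empirical CMF approximates the curves of Figure 20.6.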
Using these results, the average number of hops associated with the minimum-hop-number route from node i to node j can be approximated using dist(n_i, n_j), γ, and the corresponding CMF of the hop-number difference, which also gives the statistics of qh̄(h̄). Moreover, it can be seen from Figure 20.6 that a lower node density results in a larger number of hops on the minimum-hop-number routes, since the number of neighbor nodes available for packet forwarding is limited in such scenarios. Next, we study the path diversity of the ad hoc networks by finding the maximum number of minimum-hop-number routes for each source–destination pair. Note that there may exist scenarios in which a node lies on multiple minimum-hop-number forwarding routes for the same source–destination pair. For simplicity, we assume that, during the route-discovery phase, the destination randomly picks one such route as the routing candidate and feeds the routing information of all node-disjoint minimum-hop-number routes back to the source. Figure 20.7 shows the CMF of the number of minimum-hop-number routes for various numbers of hops when the node density is 30. Figure 20.7 actually shows the qr(r) statistics when the ideal minimum number of hops is given. From the routing statistics given in Figures 20.6 and 20.7, we are able to obtain the expected equivalent two-player payoff table for multi-node and multi-hop packet-forwarding scenarios using (20.22). We compare the payoff of our approach with that of the cooperative one in Figure 20.8. Note that multi-hop forwarding will incur greater costs for the nodes since one successful packet delivery involves the packet-forwarding efforts of many relay
Figure 20.6 The cumulative probability mass function of the hop-number difference between h̄̃(n_i, n_j) and h_min(n_i, n_j), for node densities λ = 10, 20, and 30.
Figure 20.7 The cumulative probability mass function of the number of minimum-hop-number routes (for two-hop through six-hop routes) when the node density is 30.
Figure 20.8 Average payoffs of the proposed strategy in multi-node multi-hop scenarios (average payoff versus pf, for the proposed strategy with pe = 0.01, 0.05, 0.1, and 0.2, and the fully cooperative strategy).
nodes. Also, noise and imperfect observation will have more impact on the performance since each node’s incorrect observation will affect the payoffs of all other nodes on the selected route. We can see from Figure 20.8 that the strategy maintains high payoffs even when the environment is noisy and the observation error is large. For instance, when pe = 0.2 and pf = 0.1, the strategy still achieves over 70% of the payoff in the unconditionally cooperative case.
20.6 Summary and bibliographical notes

In this chapter, we study the enforcement of cooperation in autonomous ad hoc networks under noise and imperfect observation. By modeling packet forwarding as a repeated game with imperfect information, we develop a belief-evaluation framework for packet forwarding to enforce cooperation in scenarios with noise and imperfect observation. We show that the behavioral strategy with a well-defined belief system not only achieves sequential equilibrium, but also maintains high payoffs in both two-player and multi-player cases. Notice that only each node's action history and imperfect private observation are required for the strategy. The simulation results illustrate that the belief-evaluation framework achieves stable and near-optimal equilibria in ad hoc networks under noise and imperfect observation. Interested readers can refer to [215]. Recently, some efforts toward mathematical analysis of cooperation in autonomous ad hoc networks using game theory have been made, such as in [396] [291] [5].
In [396], Srinivasan et al. provided a mathematical framework for cooperation in ad hoc networks that focuses on the energy-efficient aspects of cooperation. In [291], Michiardi and Molva studied cooperation among selfish nodes in a cooperative game-theoretic framework. In [5], Altman et al. studied the packet-forwarding problem using a non-cooperative game-theoretic framework. Further, trust modeling and evaluation frameworks [408] [386] have been extensively studied to enhance cooperation in autonomous distributed networks. These works utilize trust (or belief) metrics to assist decision-making in autonomous networks through trust recommendation and propagation.
21
Defense against insider attacks
In this chapter we present a game-theoretic analysis of securing cooperative ad hoc networks against insider attacks in the presence of noise and imperfect monitoring. By focusing on the most basic networking functions, namely routing and packet forwarding, we model the interactions between good nodes and insider attackers as secure-routing and packet-forwarding games. We study the worst-case scenario, in which initially good nodes do not know which nodes are attackers while the insider attackers know which nodes are good. The defense strategies devised are optimal in the sense that no other strategies can further increase the good nodes' payoff under attack. Meanwhile, the optimal attacking strategies and the maximum possible damage that attackers can cause are discussed. Extensive simulation studies have also been conducted to evaluate the effectiveness of the strategies.
21.1 Introduction

Many important issues about security in ad hoc networks have not yet been fully addressed. One is the optimality measure of defense mechanisms. For example, what metrics should be used to measure the optimality of a defense mechanism? Under certain optimality metrics, what are the optimal defending strategies, especially when the environment is noisy and the monitoring is not perfect? What strategies should the attackers use to maximize the damage to the network, and consequently what is the maximum possible damage that the attackers can cause? In this chapter we jointly consider routing and packet forwarding in cooperative ad hoc networks, and model the interactions between good nodes and attackers as games, referred to as secure-routing and packet-forwarding games. We adopt the Nash equilibrium as a basic optimality metric. In order to fully address the above issues, we focus on the following scenario: initially good nodes do not know which nodes are attackers while attackers can know which nodes are good. This scenario can be regarded as the worst-case scenario from the defenders' point of view. That is, if a strategy can work well under this scenario, it can work well under any scenario. In general the environment is noisy and full of uncertainty, which may mean that some decisions cannot be perfectly executed. For example, even if a node wants to forward a packet for another node, this packet may still be dropped due to link breakage. Further, perfect monitoring, that is, ensuring that any action outcome can be correctly observed, is
either impossible or too expensive to afford in ad hoc networks due to the distributed nature and the limited resources. In this chapter the effects of noise and imperfect monitoring on the strategy design will be investigated, and the optimal defending strategies under both noise and imperfect monitoring will be devised by incorporating statistical attacker-detection mechanisms. The analysis shows that the strategies devised are optimal in the sense that no other strategies can further increase the good nodes’ payoff under attacks. Meanwhile, the optimal attacking strategies have also been devised. The rest of this chapter is organized as follows. Section 21.2 describes the secure-routing and packet-forwarding game model with incomplete type information. Section 21.3 presents the defending strategies devised by incorporating statistical attacker-detection mechanisms. The possible attacking strategies are also studied in this section. The optimality analysis of the devised strategies is presented in Section 21.4. The justification of the underlying assumptions and a performance evaluation of the devised strategies are given in Section 21.5.
21.2 System description and the game model

21.2.1 System description

In this chapter, we consider cooperative ad hoc networks deployed in adversarial environments. According to their objectives, nodes in such networks can be classified into two types: good and malicious. The objective of good nodes is to optimize the overall system performance, while the objective of malicious nodes is to maximize the damage that they can cause to the system. In such networks, each node may have some data scheduled to be delivered to certain destinations, and the data rate from each node is determined by the common system goal, which is usually application-specific. In general, due to the multi-hop nature, when a node wants to send a packet to a certain destination, a sequence of nodes needs to be requested to help forward this packet. We refer to the sequence of (ordered) nodes as a route, the intermediate nodes on a route as relay nodes, and the procedure used to discover a route as route discovery. In general, route discovery has three stages. In the first stage, the requester notifies the other nodes in the network that it wants to find a route to a certain destination. In the second stage, other nodes in the network will make their decisions on whether to agree to be on the discovered route or not. In the third stage, the requester will determine which route should be used. Depending on whether they have gained access to the network, attackers can be classified into two types: insider attackers and outside attackers, where the former are legitimate users who have been granted the privilege of utilizing the network resources, while the latter are not legitimate and have no right to utilize the network resources. Existing schemes that are based on access control and secret communication channels can work well to defend against outside attackers [496] [161] [181] [335] [379] [491] [182] [183] [172]. Defending against insider attackers, however, is much more
challenging due to the fully cooperative nature of such networks. Therefore our focus is on insider attackers. In general, a variety of attacks can be launched, ranging from passive eavesdropping to active interfering. Since we focus on packet forwarding, we will mainly consider the following two general attack models: dropping packets and injecting traffic. Dropping other nodes’ packets means that all the resources spent to transmit these packets are wasted, and the network’s performance is degraded. Attackers can also inject an overwhelming amount of packets into the network: once the others have forwarded these packets but cannot get payback, those resources spent to forward these packets are wasted. Meanwhile, the attackers are allowed to collude to increase their attacking capability. In cooperative ad hoc networks, without knowing any information about a node’s legitimate data-generation rate, the detection of traffic-injection attacks will become extremely hard (or impossible). Fortunately, since cooperative ad hoc networks are designed to fulfill certain common goals, it holds in general that a node’s legitimate data-generation rate can be known or estimated by some other nodes in the network. For example, in ad hoc sensor networks designed for environment surveillance, each node needs to send collected information to the centralized data collector, and the amount of data that each node can send is usually predetermined by the system goal, and can be known or estimated by some other legitimate nodes. In this chapter, we assume that the number of packets each node s in the network will generate by time t is Ts (t), which is usually different for different nodes. The number of packets that each node s will generate by time t can be modeled as a random variable, and Ts (t) can be regarded as a specific realization. In general, the exact value of Ts (t) might not be known by other nodes in the network. 
We assume that the upper bound of Ts (t), denoted by f s (t), can be known or estimated by some nodes in the network. In wireless ad hoc networks, some decisions might not be perfectly executed. For example, when a node has decided to help another node to forward a packet, the packet may still be dropped due to link breakage or transmission errors. We refer to those factors causing imperfect decision execution as noise, which may include environmental unpredictability and system uncertainty, channel errors, mobility, congestion, etc. We consider the following decision-execution error: the decision is to forward a packet but the outcome is the packet being dropped. Meanwhile, in such wireless networks, each node has only local, private, and imperfect observation of the other nodes’ behavior. Specifically, even if a packet has successfully been forwarded, this can still be observed as packet dropping by some nodes (e.g., this may happen frequently when watchdog [283] is used). Similarly, a packet-dropping event can also be observed as packet forwarding.
21.2.2 The game model

To formally analyze the security issues in cooperative ad hoc networks, we model the dynamic interactions between good nodes and attackers as a secure-routing and packet-forwarding game with incomplete type information and imperfect observation:
• Players. The set of players is denoted by N, which is the set of legitimate nodes in the network.
• Types. Each player i ∈ N has a type θi ∈ Θ, where Θ = {good, malicious}. Initially, each attacker knows every other player's type, while each good player assumes that all nodes are good. That is, good nodes have incomplete information about the others' types. Let Ng denote the set of good players and Nm = N − Ng the set of attackers.
• Strategy space. Each node can act both as a source and as a relay, and has different strategy spaces when playing different roles.
1. Route-participation stage. After a relay node has received a message requesting it to be on a certain route, it can either accept this request, denoted by A, or not accept this request, denoted by NA.
2. Route-selection stage. For each source node that has a packet to send, after discovering a valid route (i.e., all nodes on this route have agreed to be on this route and each node on this route lies within the transmission range of the previous node on this route), its decision can be either to request/use this route to send the packet, denoted by R, or not to request/use this route to send the packet, denoted by NR.
3. Packet-forwarding stage. For each relay node, once it has received a request to forward a packet, its decision can be either to forward this packet, denoted by F, or to drop this packet, denoted by D.
• Cost. For any player i ∈ N, transmitting a packet, either for itself or for the others, will incur cost ci.
• Gain. For each good player i ∈ Ng, if a packet originating from it can be successfully delivered to its destination, it can get gain gi.
• Imperfect execution. Owing to noise, with probability pe each decision F can be mistakenly executed as D.
• Imperfect observation.
With probability pm each forwarding outcome can be observed as dropping by the source (i.e., pm is the miss probability), and with probability pf each dropping outcome can be observed as forwarding by the source (i.e., pf is the false-positive probability). Meanwhile, when a node has injected a packet, with probability ps it can avoid being detected by those who know its legitimate traffic-injection rate.
• Utility. For each player i ∈ N, we can model the players' payoff functions as follows (cf. Table 21.1 for the notation):
1. Good players. Since all good players belong to the same authority and pursue the common goals, they will share the same utility function, as follows:

$$U_g(t_f) = \frac{\sum_{i \in N_g} \big(S_i(t_f)\, g_i - F_i(t_f)\, c_i\big)}{\sum_{i \in N_g} T_i(t_f)}, \tag{21.1}$$

where

$$F_i(t) = \sum_{j \in N} F_i(j, t). \tag{21.2}$$
Table 21.1. A summary of notation

p_m: the probability that a forwarding outcome is observed as dropping (i.e., the miss-detection probability)
p_f: the probability that a dropping outcome is observed as forwarding (i.e., the false-positive probability)
p_s: the probability that a node can avoid being detected when it has injected a packet
p_e: the probability that a forwarding decision can be mistakenly executed as dropping
g_i: the gain to node i if a packet originating from it can be successfully delivered to its destination
c_i: the cost incurred by node i to transmit a packet
t_f: the lifetime of the network
L_max: a predetermined system parameter specifying the maximum number of hops per route
L̄: the average number of hops per selected route
T_s(t): the number of packets that will be generated by s by time t
f_s(t): the upper bound of T_s(t), which can be known or estimated by some nodes in the network
S_i(t): the number of i's packets that have been scheduled for sending and have successfully arrived at their destinations by time t
F_i(j, t): the number of packets that i has forwarded for player j ∈ N by time t
W_i(j, t): the total number of wasted packet transmissions that i has caused to j by time t due to i dropping packets that have been transmitted by j
R_i(j, t): the number of times that node j has agreed to forward packets for node i by time t
F̃_j(i, t): the number of times that i has observed that j has forwarded a packet for it by time t
T̃_j(t): the number of packets that have been injected by j and have been observed by those nodes which know j's legitimate traffic-injection rate
2. Malicious players. Since malicious players are allowed to collude, we assume that they will also share the same utility function, defined as follows:

$$U_m(t_f) = \frac{\sum_{i \in N_m} \Big( \sum_{j \in N_g} \big(W_i(j, t_f) + F_j(i, t_f)\big)\, c_j - \alpha\, F_i(t_f)\, c_i \Big)}{t_f}. \tag{21.3}$$

Here the parameter α is introduced to determine the relative importance of the attackers' cost compared with the good players' cost. That is, from the attackers' point of view, it is worth spending cost c′ to cause damage worth c to the good players as long as α < c/c′. The objective of the good players is to maximize U_g, while the objective of the attackers is to maximize U_m. If the game is played for an infinite duration, then their utility functions become U_g = lim_{t→∞} U_g(t) and U_m = lim_{t→∞} U_m(t), respectively.
Remark 1 On the right-hand side of (21.1), the numerator denotes the net profit (i.e., total gain minus total cost) that the good nodes obtain, and the denominator denotes the total number of packets that good nodes need to send. The utility function (21.1) represents
the average net profit that good nodes can obtain per packet that needs to be delivered. Since good nodes do not have any prior knowledge of the other nodes’ types, each good node might not know its exact payoff by time t, which introduces extra difficulty into optimal strategy design.
Remark 2 In (21.3), W_i(j, t)c_j represents the total damage (or wasted energy) that i has caused to j by time t due to i launching packet-dropping attacks, F_j(i, t)c_j represents the total damage that i has caused to j by time t due to i launching traffic-injection attacks, and F_i(t)c_i represents the total cost i has incurred by launching both traffic-injection and packet-dropping attacks by time t. In summary, the numerator of the right-hand side of (21.3) represents the net damage that the attackers have caused to the good nodes. Since this value may increase monotonically, it is normalized by dividing it by the network lifetime t_f. This utility function then represents the average net damage that the attackers cause to the good nodes per time unit. From (21.3) we can see that in this game setting the attackers' goal is to waste the good nodes' energy as much as possible. Alternatively, attackers can also have other types of goals, such as minimizing the good nodes' payoff. In Section 21.4 we will show that the performance of the defending strategy is not sensitive to the attackers' specific goal, and in most situations maximizing (21.3) has the same effect as minimizing the good nodes' payoff under the defending strategies.
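A minimal sketch of how the two utility functions could be evaluated from per-node counters, assuming homogeneous gain g and cost c; the dict-based bookkeeping layout here is ours, not the book's.

```python
def good_utility(S, F, T, g, c):
    """U_g from (21.1): average net profit per packet that the good nodes
    need to deliver.  S[i], F[i], T[i] hold the per-good-node counters
    S_i(t_f), F_i(t_f), T_i(t_f)."""
    profit = sum(S[i] * g - F[i] * c for i in S)
    return profit / sum(T.values())

def malicious_utility(W, F_inj, F_total, alpha, c, t_f):
    """U_m from (21.3): average net damage per time unit.

    W[i][j]:     wasted transmissions attacker i caused good node j (dropping)
    F_inj[i][j]: packets good node j forwarded for attacker i (traffic injection)
    F_total[i]:  packets attacker i itself transmitted (its own cost)
    """
    net = sum(sum((W[i][j] + F_inj[i][j]) * c for j in W[i])
              - alpha * F_total[i] * c
              for i in W)
    return net / t_f

# toy bookkeeping: two good nodes (1, 2) and one attacker (9)
ug = good_utility(S={1: 8, 2: 6}, F={1: 20, 2: 15}, T={1: 10, 2: 10},
                  g=1.0, c=0.1)
um = malicious_utility(W={9: {1: 5, 2: 3}}, F_inj={9: {1: 4, 2: 0}},
                       F_total={9: 10}, alpha=0.5, c=0.1, t_f=100.0)
```

In this toy run the good nodes earn a net profit of 10.5 over 20 scheduled packets, and the attacker's net damage is 0.7 over a lifetime of 100 time units.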
Remark 3 The above game can be divided into many subgames, as explained below. Once a player wants to send a packet to a certain destination, a subgame consisting of three stages is initiated. In the first stage, the source requests some players to be on a certain route to the destination; in the second stage, the source decides whether it should use this route to send the packet; and in the third stage, each relay decides whether it should help the source to forward this packet once it has received it. We refer to each subgame as a single routing and packet-forwarding subgame. To simplify our illustration, we assume that g_i = g for all i ∈ N_g and c_i = c for all i ∈ N. As in many other routing protocols for ad hoc networks, in the above game the maximum number of hops per route is upper-bounded by L_max, which is a predetermined system parameter. Without loss of generality, we assume that (1 − p_e)^{L_max} g > L_max c; otherwise the expected gain may be less than the expected cost. Since in ad hoc networks energy is usually the most precious resource, we can directly relate the cost to energy. The physical meaning of the gain g may vary according to the specific application. However, as will be shown in Section 21.4, as long as g is reasonably large, it will not affect the strategy design. According to the above game model, in each single routing and packet-forwarding subgame, the strategy space for the initiator of this subgame is {R, NR}, while the strategy space for each relay node is {(A, F), (A, D), (NA, F), (NA, D)}. Here (A, F) means that a relay node agrees to be on a certain route in the route-participation stage and will forward the packet from the source in the packet-forwarding stage, (A, D) means that a relay node agrees to be on a certain route in the route-participation stage
but will drop the packet from the source in the packet-forwarding stage, (NA, F) means that a relay node does not agree to be on a certain route but will forward the packet from the source in the packet-forwarding stage, and (NA, D) means that a relay node does not agree to be on a certain route and will drop the packet from the source in the packet-forwarding stage. In the above game we have assumed that some necessary monitoring mechanisms will be launched to detect possible packet dropping, such as those described in [283] [493] [490] [483]. We have also assumed that, when a node transmits a packet, its neighbors can know which node is the source of this packet and which node is currently transmitting this packet. This can be achieved by the monitoring mechanism described in [484]. However, we do not assume perfect monitoring, and each node makes its decision solely on the basis of local private and imperfect observation. In general, pf , pm , and ps are determined by the underlying monitoring mechanism. In this chapter we studied a set of strategies to secure a cooperative ad hoc network against insider attackers under noise and imperfect observation. However, it is also worth pointing out that, in order for the considered strategies to work well, the existing security schemes, such as those described in [496] [161] [181] [335] [379] [491] [182] [183] [172], should be incorporated in order to achieve necessary access control, authentication, data integrity, and so on. In general, besides packet dropping, traffic injection, and collusion, there also exist other types of attack, such as jamming and slander. A main point of this chapter is to provide some insight on securing ad hoc networks under noise and imperfect observation.
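The viability condition (1 − p_e)^{L_max} g > L_max c assumed for the game model can be checked numerically; the helper and parameter values below are only an illustration.

```python
def route_profitable(g, c, p_e, L):
    """Viability condition assumed in the game model: on an L-hop route the
    expected delivery gain (1 - p_e)**L * g must exceed the total
    transmission cost L * c."""
    return (1 - p_e) ** L * g > L * c

# the condition bounds how long routes can be for a given gain/cost ratio
viable = [L for L in range(1, 11)
          if route_profitable(g=10.0, c=0.5, p_e=0.05, L=L)]
```

With g/c = 20 and p_e = 0.05, every route length up to ten hops satisfies the condition, while a small gain (e.g., g = c) quickly makes long routes unprofitable.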
21.3 Defense strategies with statistical attacker detection

We first briefly study a simple subgame with complete type information and perfect observation: P1 requests P2 to forward a packet to P3 through the route P1 → P2 → P3, and P2 has agreed to be on this route, as illustrated in Figure 21.1. Since the type information is complete, all players know each other's type. This is a two-stage extensive game with P1 moving first. If P1's action is NR, then the game is terminated immediately; otherwise P2 implements its action accordingly. The payoff profiles for this game under various scenarios are shown in Figure 21.2, where the first value in each payoff profile corresponds to P1's payoff and the second corresponds to P2's payoff. Here the payoff is defined as the gain minus the cost in this subgame. Depending on the types of P1 and P2, there are four different scenarios.
• Scenario 1. P1 is good and P2 is bad. Then the only Nash equilibrium is (NR, D) with payoff profile (0, 0), since no one can further increase their payoff by deviating.
Figure 21.1 A single routing and packet-forwarding subgame: on the route P1 → P2 → P3, the source P1 chooses R or NR, and the relay P2 chooses F or D.
Figure 21.2 The payoff profiles under various scenarios: (a) P1 is good, P2 is bad; (b) P1 is bad, P2 is good; (c) both players are good; (d) both players are bad. In each tree P1 first chooses R or NR, and P2 then chooses F or D; for example, in (c) the outcome (R, F) yields (g − 2c, g − 2c) and (R, D) yields (−c, −c).
• Scenario 2. P1 is bad and P2 is good. Then the only Nash equilibrium is (NR, D) with payoff profile (0, 0).
• Scenario 3. Both players are good. In this scenario, if g > 2c, the only Nash equilibrium is (R, F) with payoff profile (g − 2c, g − 2c); if g < 2c, the only Nash equilibrium is (NR, F) with payoff profile (0, 0); whereas if g = 2c, there are two Nash equilibria, (NR, F) and (R, F), which have the same payoff profile (0, 0).
• Scenario 4. Both players are bad. Then the only Nash equilibrium is (NR, D) with payoff profile (0, 0).
From the above analysis we can draw the following conclusions for a two-hop subgame with complete type information. (i) A good node should neither forward any packet for attackers nor request any attackers to forward packets. Meanwhile, good nodes should always forward packets for other good nodes, provided that g > 2c. (ii) A malicious node should not forward any packet and should not request other nodes to forward packets. This can easily be generalized to the multi-hop scenario; that is, no good node should work with malicious nodes. However, defending against insider attacks in realistic scenarios is much more challenging, for the following reasons. First, good nodes cannot know which nodes are attackers a priori. Second, owing to noise, decision execution might not be perfect. Third, monitoring errors will be very common because of the fully distributed nature of the system and the limited available resources. Consequently, the attackers can easily take advantage of such information asymmetry and imperfection to cause more damage and to avoid being detected.
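The scenario analysis above amounts to backward induction on the two-stage subgame. A sketch for a good source, using the payoff model of Figure 21.2; the function and its interface are ours, and the relay action returned is the plan the relay would follow if the route were requested.

```python
def backward_induction(g, c, relay_good):
    """Backward-induction sketch of the single routing and packet-forwarding
    subgame of Figure 21.1, assuming the source P1 is good.  Good players
    share the common payoff; a malicious relay always prefers dropping."""
    if not relay_good:
        # requesting a malicious relay only wastes cost c, so the source abstains
        return ("NR", "D")
    # good relay: forwarding yields g - 2c versus -c for dropping (shared payoff)
    relay = "F" if g - 2 * c >= -c else "D"
    # the source requests only if the anticipated outcome beats the payoff 0 of NR
    source = "R" if relay == "F" and g - 2 * c > 0 else "NR"
    return (source, relay)
```

This reproduces the conclusions of the text: with a good relay and g > 2c the outcome is (R, F); with g < 2c it is (NR, F); and with a malicious relay it is (NR, D).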
21.3 Defense strategies with statistical attacker detection
To handle incomplete type information, certain attacker-detection mechanisms should be applied. In general, such mechanisms detect malicious behavior on the basis of what can be observed. For example, if a node has agreed to forward a packet but later drops it, other nodes (either its neighbors or the source of the packet) that have observed this inconsistency (i.e., agreeing to forward but dropping) can mark this node as malicious. If there were no decision-execution errors and observation were perfect, such a method could detect all intentional packet dropping. However, noise always exists and monitoring cannot be perfect. Under such realistic circumstances, detecting malicious behavior becomes extremely hard, because an observed instance of misbehavior may be caused by intention, by an unintentional execution error, or simply by observation error. A node should therefore not be marked as malicious merely because it has been observed to drop some packets. Accordingly, attackers can take advantage of noise and observation errors to cause more damage without being detected.
21.3.1 Statistical detection of packet-dropping attacks

To combat insider attacks under noise and imperfect observation, we first study what the normal observation should be when no attackers are present. In this case, when a node has made a decision to forward a packet (i.e., decision F), the probability p_F that the outcome observed by the source is also forwarding can be calculated as follows:

p_F = (1 − p_e)(1 − p_f) + p_e·p_m.   (21.4)
Let R_i(j, t) denote the number of times that node j has agreed to forward for node i by time t, and let F̃_j(i, t) denote the number of times that i has observed that j has forwarded a packet for it by time t. According to the central limit theorem [223], for any x ∈ R+ we have

lim_{R_i(j,t)→∞} Prob( (F̃_j(i, t) − R_i(j, t)·p_F) / √(R_i(j, t)·p_F·(1 − p_F)) ≥ −x ) = Φ(x),   (21.5)

where

Φ(x) = (1/√(2π)) ∫_{−∞}^{x} e^{−t²/2} dt.   (21.6)
In other words, when R_i(j, t) is reasonably large, F̃_j(i, t) − R_i(j, t)·p_F can be approximately modeled as a Gaussian random variable with mean 0 and variance R_i(j, t)·p_F·(1 − p_F). Let isPDA_i(j) denote i's belief about whether j has launched a packet-dropping attack, where isPDA_i(j) = 1 indicates that i believes that j has launched a packet-dropping attack, while isPDA_i(j) = 0 indicates that i believes that j has not. Let B_th be a reasonably large constant (e.g., 200). Then the following hypothesis-testing rule can be used by i to judge whether j has maliciously dropped its packets:
isPDA_i(j) = 1 if F̃_j(i, t) < R_i(j, t)·p_F − x·√(R_i(j, t)·p_F·(1 − p_F)) and R_i(j, t) > B_th; otherwise isPDA_i(j) = 0.   (21.7)
If (21.7) is used to detect packet-dropping attacks, the false-alarm ratio will be no more than 1 − Φ(x). It is worth mentioning that, even for a small positive x, the value of Φ(x) can still approach 1 (e.g., Φ(5) > 0.999).
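As an illustration, the test (21.7) can be sketched in a few lines of Python; the function names and the sample numbers below are hypothetical, not from the text.

```python
import math

def phi(x):
    """Standard normal CDF Phi(x)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def is_pda(f_obs, r, p_e, p_f, p_m, x=5.0, b_th=200):
    """Rule (21.7): flag node j as a packet-dropping attacker when the
    observed forward count f_obs falls more than x standard deviations
    below its honest-behavior mean r * p_F, with r = R_i(j, t) requests."""
    p_F = (1.0 - p_e) * (1.0 - p_f) + p_e * p_m          # (21.4)
    threshold = r * p_F - x * math.sqrt(r * p_F * (1.0 - p_F))
    return f_obs < threshold and r > b_th

# p_e = 0.02, p_f = p_m = 0.01 gives p_F ~ 0.9704; over r = 1000 requests
# the honest mean is ~970 observed forwards.
print(is_pda(900, 1000, 0.02, 0.01, 0.01))   # True: far below the mean
print(is_pda(965, 1000, 0.02, 0.01, 0.01))   # False: within tolerance
print(1.0 - phi(5.0))                         # false-alarm bound 1 - Phi(x)
```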
21.3.2 Statistical detection of traffic-injection attacks

Section 21.3.1 focused on packet-dropping attacks. Attackers can also try to inject an overwhelming amount of traffic to waste the good nodes' resources. Let isTIA_i(j) denote i's belief about whether j has launched a traffic-injection attack, where isTIA_i(j) = 1 indicates that i believes that j has launched a traffic-injection attack, while isTIA_i(j) = 0 indicates that i believes that j has not. Let T̃_j(t) denote the number of packets that have been injected by j and observed by those nodes which know j's legitimate traffic-injection rate. Then a simple detection rule is as follows:

isTIA_i(j) = 1 if T̃_j(t) > f_j(t); otherwise isTIA_i(j) = 0.   (21.8)

Under this detection rule, the maximum number of packets that attacker j can inject without being detected will be no more than f_j(t)/(1 − p_s). This detection rule is very conservative, since only observed packet-injection events are used. If p_s were also known by the good nodes, (21.8) could be modified to further limit the number of packets that j can inject without being marked as malicious, such as by changing the threshold from f_j(t) to (1 − p_s)·f_j(t). Since p_s is usually not known and may vary among nodes, in this chapter we do not incorporate p_s into the detection rule when performing traffic-injection-attack detection.

The detection rule (21.8) works well only when no retransmission is allowed. Next we show how to detect a traffic-injection attack when retransmission upon unsuccessful delivery is allowed. We first make a simple assumption that all selected routes have the same number of hops, denoted by L, and let q = (1 − p_e)^L. Then, for each packet, the total number of tries needed to successfully deliver it to its destination can be modeled as a geometric random variable with mean 1/q and variance (1 − q)/q².
For any node j ∈ N, if p_s = 0 and j has never intentionally retransmitted a packet that has successfully been delivered to its destination, then according to the central limit theorem, for any x ∈ R+ we should have

lim_{T_j(t)→∞} Prob( (T̃_j(t) − T_j(t)/q) / √(T_j(t)(1 − q)/q²) ≤ x ) = Φ(x).   (21.9)

In other words, when T̃_j(t) is reasonably large, T̃_j(t) − T_j(t)/q can also be approximately modeled as a Gaussian random variable with mean 0 and variance T_j(t)(1 − q)/q². Then a modified detection rule is as follows:
isTIA_i(j) = 1 if T̃_j(t) > f_j(t)/q + x·√(f_j(t)(1 − q))/q and T̃_j(t) > B_th; otherwise isTIA_i(j) = 0.   (21.10)
Similarly, when the above detection rule is used, the false-alarm ratio will be no more than 1 − Φ(x). In this case, the number of packets that attacker j can inject without being marked as malicious is upper-bounded by [f_j(t) + x·√(f_j(t)(1 − q))]/[q(1 − p_s)]. Compared with the case in which no retransmission is allowed, when retransmission is allowed attackers can inject more packets without being detected, though good nodes can also enjoy higher throughput.

In general the number of hops per selected route varies according to the locations of the source and destination and the network topology. Let L̄_min denote the average number of hops per selected route. For calculating the q used in (21.10), an alternative is to let q = (1 − p_e)^{L̄_min}. However, this may lead to a higher false-positive probability, since some nodes may experience longer routes owing to their locations. In this chapter we adopt a more conservative way by letting q = (1 − p_e)^{L_max}. As a consequence, even when p_s = 0, the resulting false-positive probability will be far less than 1 − Φ(x), with the penalty that the attackers can also inject more packets without being detected. For example, for L_max = 10, L̄_min = 4, and p_e = 0.02, the extra increase would be about 12.9% (i.e., (1 − p_e)^{L̄_min − L_max} − 1). Accordingly, the good nodes' payoff will also be decreased.
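The two traffic-injection tests can be sketched similarly; again the helper names and numbers are hypothetical, not from the text.

```python
import math

def is_tia_no_retx(t_obs, f_quota):
    """Rule (21.8): without retransmission, flag j once the observed
    injection count exceeds its legitimate quota f_j(t)."""
    return t_obs > f_quota

def is_tia_retx(t_obs, f_quota, p_e, l_max, x=5.0, b_th=200):
    """Rule (21.10): with retransmission, inflate the quota by 1/q plus an
    x-sigma slack, using the conservative q = (1 - p_e)**L_max."""
    q = (1.0 - p_e) ** l_max
    threshold = f_quota / q + x * math.sqrt(f_quota * (1.0 - q)) / q
    return t_obs > threshold and t_obs > b_th

print(is_tia_no_retx(105, 100))          # True: over quota
print(is_tia_retx(130, 100, 0.02, 10))   # False: within the inflated bound
```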
21.3.3 A secure-routing and packet-forwarding strategy

On the basis of the above analysis, we can arrive at the following strategy for secure routing and packet forwarding in cooperative ad hoc networks under noise and imperfect monitoring.

In the secure-routing and packet-forwarding game under noise and imperfect monitoring, initially each good node assumes that all other nodes are good. For each single routing and packet-forwarding subgame, assume that P0 is good and is the source that wants to send a packet to Pn at time t, and that a route P0 → P1 → · · · → Pn has been discovered by P0. After P0 has sent requests to all the relays on this route asking them to participate, each good node on this route should implement the following strategies in the various stages.

(i) In the route-participation stage, a good relay Pi takes action A if and only if no nodes on this route have been marked as bad and n ≤ L_max; otherwise, it takes action NA.
(ii) In the route-selection stage, P0 will take action R if and only if all the following conditions are satisfied: (a) the packet is valid (i.e., it is scheduled to be sent by P0), (b) n ≤ L_max, (c) no nodes on this route have been marked as malicious by P0, (d) all relays have agreed to forward packets in the route-participation stage, and (e) this route has the minimum number of hops among all good routes to Pn known by P0; otherwise, P0 should take action NR.
(iii) In the packet-forwarding stage, each relay Pi will take action F if and only if it has agreed to be on this route in the route-participation stage; otherwise, it should take action D.

Let x be a positive constant. Any node j will be marked as malicious by node i if it has been detected by any of the following rules: (21.7); (21.8) if retransmission is not allowed; and (21.10) if retransmission is allowed, where in (21.10) q = (1 − p_e)^{L_max}. Meanwhile, node j will also be marked as malicious if it has requested to send a packet through a route with a number of hops greater than L_max.

In the above defense strategy, each good node needs to know or estimate the following parameters: p_e, p_f, p_m, and L_max. Meanwhile, it also needs to set the two constants that are used in (21.7) and (21.10), namely B_th and x. L_max is a system-level parameter and is known by all nodes in the network. The packet-dropping probability p_e can be either trained offline or estimated online by each node through evaluating its own packet-dropping ratio. In general, different nodes may experience different p_e at different times or locations. Under such circumstances, to reduce the incidence of false positives when performing attacker detection using (21.7) and (21.10), a node may set p_e to be a little larger than the value experienced by itself. The two observation-error-related parameters p_f and p_m can be provided by the underlying monitoring mechanism. Similarly, different nodes may also experience different p_f and p_m in different situations. Therefore, when a node uses (21.7) to perform attacker detection, to limit the incidence of false positives, it may use the upper bounds of p_f and p_m provided by the underlying monitoring mechanisms. This will be studied further later.
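The source-side decision in the route-selection stage is a pure conjunction of conditions (a)–(e); a minimal sketch follows, with the monitoring state abstracted into hypothetical boolean flags.

```python
def source_action(packet_valid, n_hops, l_max, any_relay_marked_bad,
                  all_relays_agreed, is_min_hop_good_route):
    """Return 'R' only when all conditions (a)-(e) of the route-selection
    stage hold; otherwise return 'NR'."""
    ok = (packet_valid and n_hops <= l_max and not any_relay_marked_bad
          and all_relays_agreed and is_min_hop_good_route)
    return "R" if ok else "NR"

print(source_action(True, 4, 10, False, True, True))    # 'R'
print(source_action(True, 12, 10, False, True, True))   # 'NR': too many hops
```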
21.3.4 Attacking strategy

Since this chapter focuses on insider attackers, it is reasonable to believe that attackers know the defense strategies employed by the system. This can be regarded as the worst-case scenario from the defenders' point of view. This subsection studies what strategies the attackers should use to maximize their payoff when the secure-routing and packet-forwarding strategy is used by the good nodes.

We first study packet-dropping attacks. According to the secure-routing and packet-forwarding strategy, once a node i has been marked as malicious by another node j, i will not be able to cause damage to j again. Therefore, an attacker should avoid being detected in order to continue causing damage to the good nodes. A simple strategy is always to apply (A, D). However, when applying this strategy, the maximum number of good nodes' packets that an attacker can drop without being detected will be no more than |N_g| · B_th, while the penalty is that it will be detected as malicious and can no longer cause damage to the good nodes.

Intuitively, attackers can selectively drop packets to avoid being detected and still cause continuous damage to good nodes. According to the secure-routing and packet-forwarding strategy, the number of a good node i's packets that an attacker j can drop without being detected is upper-bounded by n·p_e + x·√n/(1 − p_f − p_m),
where n is the number of packets that j has agreed to forward for i. It is easy to check that

[n·p_e + x·√n/(1 − p_f − p_m)]·p_m + [n(1 − p_e) − x·√n/(1 − p_f − p_m)]·(1 − p_f) = n·p_F − x·√n < n·p_F − x·√(n·p_F(1 − p_F)).   (21.11)

In other words, j has to forward at least n(1 − p_e) − x·√n/(1 − p_f − p_m) packets for i in order to avoid being marked as malicious. However, recall that, even when there are no attackers, on average n·p_e packets are dropped due to noise. That is, the extra number of i's packets that j can selectively drop without being detected is upper-bounded by x·√n/(1 − p_f − p_m), while the cost incurred to forward packets for i is at least [n(1 − p_e) − x·√n/(1 − p_f − p_m)]·c. Since

lim_{n→∞} [x·√n/(1 − p_f − p_m)] / [n(1 − p_e)] = 0,   (21.12)

selectively dropping i's packets can bring hardly any gain to j if the game is played for long enough.

According to the secure-routing and packet-forwarding strategy, a good node will not start performing packet-dropping-attack detection before having undergone enough interactions with another node (e.g., B_th). Therefore, the following packet-dropping-attack strategy can be used by an attacker j when acting as a relay node: for each good node i, it can drop the first B_th − 1 of i's packets by playing (A, D), and then start playing (NA, D) forever. With this strategy, the damage that j can cause to i is upper-bounded by B_th·c without introducing any cost to j. It is easy to see that the relative damage B_th·c/t_f decreases monotonically with increasing network lifetime t_f.

Until now we have assumed that all nodes experience the same p_e, p_f, and p_m. However, such an assumption need not hold in general. For example, attackers may be able to decrease the p_f and/or increase the p_m experienced by a node. Let p_f′ and p_m′ denote the actual false-positive probability and miss probability experienced by an attacker j. When j tries to drop i's packets, in order to avoid being detected, the actual packet-dropping ratio p_e′ that j applies to decide whether to drop i's packets should satisfy

(1 − p_e′)(1 − p_f′) + p_e′·p_m′ > p_F − x·√(n·p_F(1 − p_F))/n,   (21.13)

where n is the number of packets that j has agreed to forward for i. It is easy to check that, to satisfy (21.13) for all possible n, the maximum packet-dropping ratio p_e′ that j can apply is upper-bounded by

p_e′ ≤ [p_e(1 − p_f − p_m) + p_f − p_f′] / (1 − p_f′ − p_m′).   (21.14)

From (21.14) we can see that increasing the miss probability p_m′ and/or decreasing the false-positive probability p_f′ experienced by j can also increase p_e′, and consequently
increase the damage to i. Letting L_avg denote the average number of wasted packet transmissions caused by j when it drops i's packets, according to the payoff definition (21.3), as long as (p_e′ − p_e)/(1 − p_e) ≤ α·L_avg, launching a packet-dropping attack with p_e′ cannot bring any gain to j. However, if (p_e′ − p_e)/(1 − p_e) > α·L_avg, j should launch packet-dropping attacks by selectively dropping the good nodes' packets with the dropping probability calculated using (21.14).

Now we study traffic-injection attacks. According to the secure-routing and packet-forwarding strategy, to avoid being marked as launching a traffic-injection attack, an attacker j should ensure that T̃_j(t) ≤ f_j(t). However, j might not know the exact value of T̃_j(t), and needs to estimate it by itself. Recall that, for each packet injected by j, there is probability p_s that it can avoid being detected. It is easy to show that T̃_j(t) − F_j(j, t)(1 − p_s) can be approximately modeled as a Gaussian random variable with mean 0 and variance F_j(j, t)·p_s(1 − p_s), where F_j(j, t) is the total number of packets injected by j up to time t.

On the basis of the above analysis, when no retransmission is allowed, a good traffic-injection strategy is as follows: j should try to limit the number of injected packets F_j(j, t) to satisfy the condition

F_j(j, t)(1 − p_s) + y·√(F_j(j, t)·p_s(1 − p_s)) < f_j(t),   (21.15)

where y is a large positive constant. Using this strategy, the probability that j will be detected is upper-bounded by 1 − Φ(y). When retransmission is allowed, according to the secure-routing and packet-forwarding strategy, the condition should be changed to

F_j(j, t)(1 − p_s) + y·√(F_j(j, t)·p_s(1 − p_s)) < [f_j(t) + x·√(f_j(t)(1 − q))]/q,   (21.16)

where y is a large positive constant and x and q are defined in the secure-routing and packet-forwarding strategy.

In summary, we can arrive at the following attacking strategy, referred to as the optimal attacking strategy.
(i) Packet-dropping attacks. Any attacker j, if the maximum possible p_e′ calculated using (21.14) is larger than p_e and (p_e′ − p_e)/(1 − p_e) > α·L_avg, should try to selectively drop the good nodes' packets with probability p_e′. Otherwise, it should apply the following strategy: for any good node i, j should try to drop the first B_th − 1 of i's packets by playing (A, D), and then start playing (NA, D) forever when acting as a relay node for i.
(ii) Traffic-injection attacks. Any attacker j, if no retransmission is allowed, should try to inject traffic by following (21.15); otherwise, it should try to inject traffic by following (21.16). Meanwhile, when j has decided to inject a packet, it should pick a route with the following properties: (a) the number of hops is no more than L_max, (b) all relays are good nodes, and (c) among all the routes known by j which satisfy (a) and (b), this route has the maximum number of hops.
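Under the optimal attacking strategy, the attacker's first decision reduces to evaluating (21.14) and the gain condition; a sketch with hypothetical monitoring parameters:

```python
def max_stealthy_drop_ratio(p_e, p_f, p_m, p_f_j, p_m_j):
    """Bound (21.14): the largest dropping ratio p_e' that attacker j can
    apply without being flagged, when j actually experiences monitoring
    errors (p_f_j, p_m_j) while detectors assume (p_f, p_m)."""
    return (p_e * (1.0 - p_f - p_m) + p_f - p_f_j) / (1.0 - p_f_j - p_m_j)

def selective_dropping_pays(p_e, p_e_prime, alpha, l_avg):
    """Dropping with p_e' is profitable only if (p_e' - p_e)/(1 - p_e)
    exceeds alpha * L_avg (otherwise play the B_th-then-quit strategy)."""
    return (p_e_prime - p_e) / (1.0 - p_e) > alpha * l_avg

# Illustrative values: slightly skewed monitoring seen by the attacker.
p_e_prime = max_stealthy_drop_ratio(0.02, 0.01, 0.01, p_f_j=0.005, p_m_j=0.03)
print(round(p_e_prime, 4))
print(selective_dropping_pays(0.02, p_e_prime, 0.1, 4))
```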
21.4 Optimality analysis

In this section we analyze the optimality of the strategy profile in which all good nodes follow the strategy described in Section 21.3.3 and all attackers follow the strategy described in Section 21.3.4. We focus on the worst-case scenario from the good nodes' point of view: when a malicious node wants to send a packet to another node, it can always find a route with L_max hops and with all relay nodes being good. This is also the best-case scenario from the attackers' point of view. We focus on the scenario in which all nodes experience the same p_e, p_f, and p_m. The scenario in which different nodes experience different p_e, p_f, and p_m is discussed at the end of this section.

Theorem 21.4.1 In the secure-routing and packet-forwarding game in a noiseless environment with perfect observation (i.e., p_e = p_f = p_m = p_s = 0), the presented strategy profile with B_th = 1 forms a Nash equilibrium.

Proof. To show that the strategy profile forms a Nash equilibrium, we need only show that no player can increase its payoff by unilaterally changing its own strategy.

• P0's actions when it is good. According to the secure-routing and packet-forwarding strategy, P0 will take action R if and only if (1) the packet to be sent is valid, (2) n ≤ L_max, (3) no nodes on this route have been marked as malicious by P0, (4) all relay nodes have agreed to be on this route, and (5) this route has the minimum cost among all good routes to Pn known by P0. First, if P0 takes action R when the packet to be sent is not valid, the good nodes' payoff cannot be increased, and may even be decreased. Second, if P0 takes action R when n > L_max, P0 will be marked as malicious by other good nodes and cannot send any packets again, which will decrease the good nodes' payoff.
Third, if P0 takes action R when some nodes have been marked as malicious by P0 or some nodes do not agree to be on the route, then the packet will be dropped by a certain relay node; consequently all the cost spent on transmitting this packet will be wasted, and the good nodes' payoff will be decreased. Fourth, if P0 takes action R when the selected route does not have the minimum cost among all good routes to Pn known by P0, then, compared with the situation in which the minimum-cost good route is used, some extra cost will be incurred, which will decrease the good nodes' payoff. Finally, if all the above conditions are satisfied but P0 takes action NR, the good nodes' payoff will not increase in this case either, since not sending the packet, or sending it over a non-minimum-cost route, brings no gain or only less gain.

• P0's decision when it is malicious. According to the optimal attacking strategy, P0 will take action R if and only if (1) F_{P0}(P0, t) < f_{P0}(t), (2) n = L_max, (3) all relay nodes are good, and (4) all relay nodes have agreed to be on this route. First, if P0 takes action R when F_{P0}(P0, t) ≥ f_{P0}(t) or n > L_max, P0 will be marked as malicious by good nodes and cannot inject any packets again, which will surely decrease the attackers' payoff. Second, if P0 takes action R when
n < L_max, or some relay nodes are malicious, or some relay nodes do not agree to be on this route, then, since P0 can always find a route with L_max hops and with all relay nodes being good, using a sub-optimal route surely cannot increase P0's attack efficiency. Third, if all those conditions are satisfied but P0 takes action NR, then, since the maximum possible damage that can be caused by each packet injection is (L_max − 1)c, the attackers' payoff cannot be further increased either.

• Pi's decision (0 < i < n) when it is good. According to the secure-routing and packet-forwarding strategy, Pi will take action (A, F) if all the other nodes on this route have not been marked as malicious by it and n ≤ L_max; otherwise, it will take action (NA, D). When no nodes on this route have been marked as malicious by it and n ≤ L_max, then, since refusing to be on this route may cause the source to select a route with higher cost, and dropping the packet will waste other good nodes' cost, both will cause Pi's payoff to be decreased. When some nodes on this route have been marked as malicious by Pi or n > L_max, if Pi agrees to be on this route or does not drop the packet, then, since the packet will finally be dropped by a malicious node, all of the effort spent by good nodes in this subgame will be wasted, which surely cannot increase Pi's payoff either.

• Pi's decision (0 < i < n) when it is malicious. According to the optimal attacking strategy, Pi will always take action (NA, D). We first consider the situation in which P0 is good. If Pi takes action (A, D), it will be detected as malicious immediately and cannot cause any further damage to P0, which surely cannot increase the attackers' payoff. If Pi takes action (A, F), this can only contribute to the good nodes by helping them forward packets, and cannot increase the attackers' payoff.
Meanwhile, taking action (NA, F) surely cannot cause damage to the good nodes, since good nodes will not use Pi to forward packets. Now let us consider the situation in which the initiator P0 is malicious. It is also easy to check that taking action (NA, D) is always the best strategy from the malicious nodes' point of view, since P0 can always find a better route, that is, a route with L_max hops and with all relay nodes being good.

From the above analysis we can see that no player can increase its payoff by unilaterally changing its own strategy.

Now we analyze the nodes' possible payoff under the strategy profile presented here. Let f_i^avg = f_i(t_f)/t_f when t_f is finite, and f_i^avg = lim_{t→∞} f_i(t)/t when t_f is infinite. According to the secure-routing and packet-forwarding strategy, a good node will not work with any node that has been marked as malicious. First, as we have shown in Section 21.3.4, playing (A, D) cannot increase the attackers' payoff, provided that t_f is infinite. Second, it is easy to see that playing (NA, F) and (A, F) cannot increase the attackers' payoff either, since, when an attacker plays (NA, F), no good nodes will request it to forward packets, while when an attacker plays (A, F), it can only make contributions to the good nodes. Third, when an attacker tries to inject packets, similarly to the analysis in the proof of Theorem 21.4.1, it should always use the route with all
relay nodes being good and having agreed to be on the route. Meanwhile, from an attacker's point of view, injecting more packets than specified will cause it to be marked as malicious, after which it can no longer damage the good nodes, and consequently will decrease its payoff. Therefore, when no retransmission is allowed, according to (21.3) the attackers' payoff will be upper-bounded by

U_m ≤ lim_{t_f→∞} (1/t_f) Σ_{i∈N_m} [f_i(t_f)/(1 − p_s)](L_max − 1 − α)c + lim_{t_f→∞} (1/t_f) |N_g|·B_th·L_avg·c
    = Σ_{i∈N_m} [f_i^avg/(1 − p_s)](L_max − 1 − α)c.   (21.17)
Here f_i(t_f)/(1 − p_s) is the number of packets that attacker i can inject into the network by time t_f without being marked as malicious, (L_max − 1)c is the maximum possible damage that an injected packet can cause to good nodes, αc is the cost incurred by attackers in forwarding a packet, and |N_g|·B_th·L_avg·c is the damage that an attacker can cause by launching a packet-dropping attack. When retransmission upon unsuccessful delivery is allowed, from the attackers' point of view the only difference is that they can inject more packets without being detected. Now the attackers' payoff will be upper-bounded by

U_m ≤ lim_{t_f→∞} (1/t_f) { Σ_{i∈N_m} [f_i(t_f)/((1 − p_s)q) + x·√(f_i(t_f)(1 − q))/q](L_max − 1 − α)c + |N_g|·B_th·L_avg·c }
    = Σ_{i∈N_m} [f_i^avg/((1 − p_s)q)](L_max − 1 − α)c.   (21.18)
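Since the |N_g|·B_th·L_avg·c term is finite, it vanishes after division by t_f, and both bounds reduce to an injection rate times (L_max − 1 − α)c. A numeric sketch, with all parameter values assumed purely for illustration:

```python
def attacker_payoff_bound(f_avg_total, l_max, alpha, c, p_s, q=1.0):
    """Long-run attacker-payoff bound: (21.17) with q = 1 (no retransmission)
    or (21.18) with q = (1 - p_e)**L_max (retransmission allowed); the finite
    |N_g| * B_th * L_avg * c term has already vanished in the t_f limit."""
    return f_avg_total / ((1.0 - p_s) * q) * (l_max - 1 - alpha) * c

# Assumed numbers: total attacker injection rate 10 pkt/s, L_max = 10,
# alpha = 0.1, c = 1, p_s = 0.1, p_e = 0.02.
print(attacker_payoff_bound(10.0, 10, 0.1, 1.0, 0.1))                 # (21.17)
print(attacker_payoff_bound(10.0, 10, 0.1, 1.0, 0.1, q=0.98 ** 10))   # (21.18)
```

As expected, allowing retransmission (q < 1) raises the bound on the damage the attackers can cause.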
Now we analyze the good nodes' payoff. Recall that L̄_min denotes the average number of hops among the routes selected by good nodes. We first consider the situation in which the environment is noisy and no retransmission is allowed. In this case, some good nodes' packets will be dropped due to noise, and lim_{t→∞} S_i(t)/T_i(t) = (1 − p_e)^{L̄_min}. According to (21.1), for each i ∈ N_g, F_i(t) comes in two parts: forwarding packets for the good nodes and forwarding packets for the attackers. The total number of packets that the good nodes have forwarded for themselves by time t is Σ_{i∈N_g} T_i(t)·L̄_min, and the total number of packets that the good nodes have forwarded for the attackers is no more than Σ_{i∈N_m} [f_i(t)/(1 − p_s)](L_max − 1). Meanwhile, for any given positive value x adopted in the secure-routing and packet-forwarding strategy, the overall false-positive probability will be upper-bounded by 1 − Φ(x); that is, at most a fraction 1 − Φ(x) of good nodes will be mistakenly marked as malicious. Let T_i^avg = T_i(t_f)/t_f when t_f is finite and T_i^avg = lim_{t→∞} T_i(t)/t when t_f is infinite. Then the good nodes' payoff will be lower-bounded by
U_g ≥ lim_{t→∞} [Σ_{i∈N_g} (S_i(t)·g − T_i(t)·L̄_min·c)] / [Σ_{i∈N_g} T_i(t)] − lim_{t→∞} [Σ_{j∈N_m} [f_j(t)/(1 − p_s)](L_max − 1)c] / [Σ_{i∈N_g} T_i(t)]
    = Φ(x)·g·(1 − p_e)^{L̄_min} − [L̄_min + Σ_{j∈N_m} f_j^avg·(L_max − 1) / ((1 − p_s)·Σ_{j∈N_g} T_j^avg)]·c.   (21.19)
When the environment is noiseless, or when retransmission is allowed, all good nodes' packets can be successfully delivered to their destinations, with lim_{t→∞} S_i(t)/T_i(t) = 1 for i ∈ N_g. Meanwhile, the total number of packets that the good nodes have forwarded for themselves by time t is no more than Σ_{i∈N_g} [T_i(t)/(1 − p_e)^{L̄_min}]·L̄_min, and the total number of packets that the good nodes have forwarded for the attackers is no more than Σ_{i∈N_m} [f_i(t)/(q(1 − p_s))](L_max − 1). Thus in this case the good nodes' payoff can be lower-bounded by

U_g ≥ Φ(x)·g − [L̄_min/(1 − p_e)^{L̄_min} + Σ_{j∈N_m} f_j^avg·(L_max − 1) / (q(1 − p_s)·Σ_{j∈N_g} T_j^avg)]·c.   (21.20)

On the other hand, when the optimal attacking strategy is used by the attackers, then, from the good nodes' point of view, when no retransmission is allowed, the maximum possible payoff can also be upper-bounded by

U_g ≤ g·(1 − p_e)^{L̄_min} − [L̄_min + Σ_{j∈N_m} f_j^avg·(L_max − 1) / ((1 − p_s)·Σ_{j∈N_g} T_j^avg)]·c.   (21.21)

When retransmission is allowed, the maximum possible payoff can be upper-bounded by

U_g ≤ g − [L̄_min/(1 − p_e)^{L̄_min} + Σ_{j∈N_m} f_j^avg·(L_max − 1) / (q(1 − p_s)·Σ_{j∈N_g} T_j^avg)]·c.   (21.22)

From the above payoff analysis we can see that the good nodes' payoff can be lower-bounded by a certain value, no matter what strategies the attackers use and what goals they have. In other words, the attackers' goal has little effect on the good nodes' payoff when the secure-routing and packet-forwarding strategy is used by the good nodes. We can also see that, as long as the gain g is reasonably large, it does not play an important role in the strategy design.

Theorem 21.4.2 In the infinite-duration secure-routing and packet-forwarding game under noise and imperfect observation, the secure-routing and packet-forwarding strategy presented here is asymptotically optimal from the good nodes' point of view in the sense that, for any ε > 0, we can always find an x* > 0 such that no other equilibrium
strategies can further increase the good nodes' payoff by more than ε, as long as the attackers also play optimally.

Proof. We first consider the situation in which no retransmission is allowed. From the above analysis we can see that, from the attackers' point of view, to maximize their payoff the optimal attacking strategy is to inject no more packets into the network than they are allowed to inject and not to forward any packets for the good nodes. In this case the good nodes' maximum possible payoff is given by (21.21). According to (21.19), the difference between the actual payoff and the maximum possible payoff is (1 − Φ(x))·(1 − p_e)^{L̄_min}·g. Since Φ(x) → 1 as x → ∞, for any ε > 0 we can always find a constant x* such that the actual payoff is within ε of the maximum possible payoff. Similarly, we can also prove this for the situation in which retransmission is allowed.

A strategy profile is said to be Pareto optimal if there is no other strategy profile that can simultaneously increase all players' payoffs; a strategy profile is said to be strongly Pareto optimal if there is no other strategy profile that can increase at least one player's payoff without decreasing any other player's payoff [322].

Theorem 21.4.3 In the infinite-duration secure-routing and packet-forwarding game, the strategy profile presented here is strongly Pareto optimal.

Proof. To show that the strategy profile is strongly Pareto optimal, we need only show that no other strategy profile can further increase any player's payoff without decreasing some other player's payoff.

We first show that the good nodes' payoff cannot be further increased without decreasing the attackers' payoff. According to (21.19), to further increase the good nodes' payoff, one can either decrease L̄_min or decrease f_j^avg. First, since the minimum-hop-number routes have been used, L̄_min cannot be further decreased. Second, according to (21.3) and (21.17), decreasing f_j^avg always decreases the attackers' payoff.
Next we show that the attackers' payoff cannot be further increased without decreasing the good nodes' payoff. According to (21.3), to increase the attackers' payoff, one can either try to increase Σ_{i∈N_m, j∈N_g} W_i(j, t) and Σ_{i∈N_m, j∈N_g} F_j(i, t), or try to decrease Σ_{i∈N_m} F_i(t). First, Σ_{i∈N_m, j∈N_g} F_j(i, t) comes entirely from traffic-injection attacks; it has been maximized and cannot be further increased. Since Σ_{i∈N_m, j∈N_g} W_i(j, t) comes from launching packet-dropping attacks, increasing Σ_{i∈N_m, j∈N_g} W_i(j, t) will also decrease the good players' payoff. Now we consider Σ_{i∈N_m} F_i(t). According to the above packet-forwarding strategy, attacker i will not forward packets for others, so F_i(t) comes entirely from transmitting packets for itself. Therefore, F_i(t) cannot be further decreased without decreasing the attackers' payoff.

Until now we have focused on the scenario in which p_e, p_f, and p_m remain the same for all nodes at all times. However, as we have mentioned, this need not hold in general. Next we study the consequences when different nodes experience different p_e, p_f,
and p_m. First, from the good nodes' point of view, such variation may increase the false-positive probability when performing attacker detection. For example, for a node experiencing a lower packet-dropping ratio, when it uses this ratio to perform packet-dropping-attacker detection there will be a much higher probability (i.e., higher than 1 − Φ(x)) that nodes experiencing a higher packet-dropping ratio will mistakenly be marked as malicious. As mentioned in Section 21.3.3, to avoid a high false-positive probability, a good node may need to set a higher p_e than the one experienced by itself when performing attacker detection. Meanwhile, a good node may also need to increase B_th and x to handle possible bursty packet dropping, which is normal in wireless networks owing to fading. Similarly, when nodes experience different p_f and p_m, a good node may need to use the upper bounds of p_f and p_m to avoid a high false-positive probability when performing attacker detection. As a penalty, these variations can be exploited by attackers to inject more packets and drop more packets without being marked as malicious, which consequently decreases the good nodes' performance. However, our simulation studies indicate that, even in such realistic scenarios, the secure-routing and packet-forwarding strategy presented here can still work very well.
21.5 Performance evaluation

We conducted a series of simulations to evaluate the performance of the strategies in both static and mobile ad hoc networks. In each ad hoc network, nodes are randomly deployed inside a rectangular area of 1000 m × 1000 m. For mobile ad hoc networks, nodes move randomly according to the random waypoint model [209], which can be characterized by the following three parameters: the average pause time, the maximum velocity vmax, and the minimum velocity vmin. The following physical-layer model is used: two nodes can directly communicate with each other only if they are within each other's transmission range; this can easily be extended to a more realistic model in which the error probability is a function of distance. The MAC-layer protocol simulates the IEEE 802.11 distributed coordination function with a four-way handshaking mechanism [197]. On the basis of the above models, the static ad hoc networks can be regarded as the noiseless case, while the mobile ad hoc networks can be regarded as the noisy case in which decision-execution errors (i.e., the decision is F but the outcome is D) are caused solely by link breakage. For each node, the transmission power is fixed, and the maximum transmission range is 200 m.
In the simulations, each good node will randomly pick another good node as the destination. Similarly, each attacker will also randomly pick another attacker as the destination. In both cases, packets are scheduled to be sent to this destination at a constant rate. The total number of good nodes is set to be 100 and the total number of attackers varies from 0 to 40. For each good or malicious node, the average packet inter-arrival time is 1 s, that is, Ti(t) = ⌊t⌋ for any time t and any node i ∈ N. Further, each good node i ∈ Ng will set fi(t) = ⌊t⌋ + 2 for any other node in N. All data packets are of the same size.
[Figure 21.3 The average link-breakage ratio in mobile ad hoc networks: average link-breakage ratio versus node index in the network, for mobility models 1–4.]
Since the link-breakage ratio pe plays an important role in the strategy design, we first study the characteristics of link breakages in mobile ad hoc networks under different mobility patterns. In this set of simulations only good nodes will be considered. The average link-breakage ratio experienced by each node is calculated as the ratio between the total number of link breakages it experienced as the transmitter and the total number of packet transmissions it has tried as the transmitter. The total simulation time is 30 000 s. Figure 21.3 shows the link-breakage ratios experienced by different nodes for the four different mobility patterns listed in Table 21.2. First, from these results we can see that the average link-breakage ratio will vary under different mobility patterns. Second, under the same mobility pattern, the average link-breakage ratio is almost the same for each node. Figures 21.4(a)–(c) show the evolution of the average link-breakage ratios over time when mobility pattern 4 is used. In this set of simulations, 2 nodes are randomly selected from among the 100 nodes in the network. Figure 21.4(a) shows the link-breakage ratio averaged over every 100 s, Figure 21.4(b) shows the link-breakage ratio averaged over every 1000 s, and Figure 21.4(c) shows the accumulated average link-breakage ratio. From these results we can see that the link-breakage ratio experienced by each node may vary dramatically in a short period, but will become stable over a long period. These results suggest that, when performing attacker detection, if tf is not large enough, pe should be set higher than the long-term average in order to avoid a high false-positive probability, whereas if tf is large or goes to infinity, the average link-breakage ratio can be used when performing attacker detection, with a reasonably large Bth . Now we study the performance of the strategies in different scenarios. 
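The two averaging procedures plotted in Figure 21.4 (per-window averages and the accumulated average) can be sketched as follows. The event log here is a hypothetical i.i.d. stand-in with a 3% breakage probability, not the output of the mobility simulator:

```python
import random

def breakage_ratios(events, window):
    """events: list of (time_s, broke) pairs, one per transmission attempt;
    broke is True if the link broke during that attempt. Returns the
    per-window averages and the accumulated average after each window,
    mirroring panels (a) and (c) of Figure 21.4."""
    horizon = max(t for t, _ in events)
    n_windows = int(horizon // window) + 1
    attempts = [0] * n_windows
    breaks = [0] * n_windows
    for t, broke in events:
        w = int(t // window)
        attempts[w] += 1
        breaks[w] += broke
    per_window, cumulative = [], []
    tot_a = tot_b = 0
    for a, b in zip(attempts, breaks):
        per_window.append(b / a if a else 0.0)
        tot_a += a
        tot_b += b
        cumulative.append(tot_b / tot_a if tot_a else 0.0)
    return per_window, cumulative

# One transmission per second for 30 000 s with a 3% breakage probability.
rng = random.Random(1)
events = [(t, rng.random() < 0.03) for t in range(30000)]
per_100, cum = breakage_ratios(events, window=100.0)
# The 100-s averages fluctuate widely from window to window, while the
# accumulated average settles near the underlying 3% ratio.
```

This reproduces the qualitative behavior in Figure 21.4: short-window estimates are too noisy for detection unless pe is inflated, whereas the long-run average is a stable estimate.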
We use the “noiseless scenario” to denote static ad hoc networks, and the “noisy scenario” to denote mobile ad hoc networks. In both cases, all good nodes follow the secure-routing and
Table 21.2. Mobility patterns

Pattern 1: vmax = 10 m/s, vmin = 1 m/s, pause time = 500 s
Pattern 2: vmax = 15 m/s, vmin = 5 m/s, pause time = 300 s
Pattern 3: vmax = 15 m/s, vmin = 5 m/s, pause time = 100 s
Pattern 4: vmax = 30 m/s, vmin = 10 m/s, pause time = 100 s
[Figure 21.4 The evolution of pe in mobile ad hoc networks, for two randomly selected nodes: (a) average link-breakage ratio per 100 s, (b) average link-breakage ratio per 1000 s, and (c) the accumulated link-breakage ratio, each versus time.]
packet-forwarding strategy described in Section 21.3.3, and all (insider) attackers follow the optimal attacking strategy described in Section 21.3.4, with the only modification being that no attacker will intentionally drop packets. The total simulation time tf is set to be 10 000 s, and all results are averaged over 20 independent rounds. The following parameters are used: g = 20, c = 1, α = 1, L max = 10, pf = 0.05, pm = 0.05, and ps = 0.05. The acceptable false-alarm ratio is set to be 0.1%. For mobile ad hoc networks, mobility pattern 4 listed in Table 21.2 is used. Since tf is not very large, pe is set to be 3%, which is obtained through offline training. For static ad hoc networks, we focus on the case in which the attackers can always find routes with L max hops to inject packets. For mobile ad hoc networks, four scenarios are considered, as listed in Table 21.3, and DSR [209] is used as the underlying routing protocol to perform route discovery. The simulation results are illustrated in Figure 21.5.
Table 21.3. Noisy scenarios

Scenario 1: Retransmission is allowed, and attackers can always find an Lmax-hop route with all relays good
Scenario 2: No retransmission is allowed, and attackers can always find an Lmax-hop route with all relays good
Scenario 3: Retransmission is allowed, and attackers might not be able to find an Lmax-hop route with all relays good
Scenario 4: No retransmission is allowed, and attackers might not be able to find an Lmax-hop route with all relays good
[Figure 21.5 Payoff comparison when no attackers will drop packets: (a) good nodes' payoff, (b) attackers' payoff, and (c) good nodes' packet-delivery ratio, under the noiseless scenario and noisy scenarios 1–4; (d) good nodes' payoff for g = 10, 15, 20 with and without retransmission. All plotted versus the number of attackers.]
Figure 21.5(a) compares the good nodes' payoff under different scenarios. First, we can see that, when no attackers are present, the noiseless scenario has the highest payoff, and noisy scenarios 2 and 4 (no retransmission allowed upon unsuccessful packet delivery) have the lowest payoff. The reason is that the good nodes' payoff is determined not only by their transmission cost, but also by the packet-delivery ratio. In noisy environments, when no retransmission is allowed upon unsuccessful packet delivery, the packet-delivery ratio is also decreased; in this case the packet-delivery ratio is only about 89%, as illustrated in Figure 21.5(c). Second, we can see that allowing retransmission upon unsuccessful packet delivery can increase the good nodes' payoff (noisy scenario 1 vs. noisy scenario 2, and noisy scenario 3 vs. noisy scenario 4). However, with an increasing number of attackers, the performance gap between the two cases (with or without retransmission) will also decrease. Third, in general, noise decreases the good nodes' payoff; however, in noisy scenario 3 one can achieve a higher payoff than in the noiseless scenario when there are no fewer than 30 attackers. The reason is that, in the noiseless scenario, attackers can always find Lmax-hop routes, whereas in noisy scenario 3 the average number of hops per route selected by the attackers is much smaller than Lmax, so the damage caused is less than in the noiseless scenario.
Figure 21.5(b) shows the attackers' payoff under different scenarios. First, as shown for noisy scenarios 3 and 4, when the attackers cannot always use Lmax-hop routes to inject packets, their payoff decreases considerably compared with the cases in which they can (noisy scenarios 1 and 2). Second, allowing retransmission upon unsuccessful packet delivery can also increase the attackers' payoff, since more packets can then be injected by the attackers. Third, since the attackers' packets may also be dropped in the noisy scenarios in which retransmission is not allowed, the attackers' payoff is also decreased compared with that in the noiseless scenario, as shown by noisy scenario 2. However, when retransmission is allowed, the attackers' payoff can still be increased even in the noisy scenarios, as illustrated for noisy scenario 1.
Finally, Figure 21.5(d) illustrates the good nodes’ payoff under different g values, where now only the noisy scenarios 3 and 4 are considered. First, from these results we can see that, with increasing number of attackers, the performance gap between these two scenarios will also decrease. The reason is that the attackers can take advantage of retransmission to cause more damage to the good nodes. Second, with decreasing g, the performance gap between these two scenarios will also decrease. For example, when g = 10 and the number of attackers is 40, there is hardly any difference. In summary, the gain introduced by allowing retransmission becomes less and less with increasing number of attackers or decreasing g. However, it is worth mentioning that g does not change the underlying strategy design as long as it is reasonably large. Thus far we have considered only situations in which no attackers will intentionally drop packets. Next we study the situation in which the attackers will also try to drop the good nodes’ packets. In this set of simulations, three attacking strategies will be studied. In attacking strategy 1, no attackers will intentionally drop the good nodes’
packets. In attacking strategy 2, each attacker will drop only the first Bth packets for any good node that has requested it to forward them, and then will stop participating in route discoveries initiated by that good node; dropping Bth packets will not cause the attacker to be detected as malicious. In these simulations, we set Bth = 20. In attacking strategy 3, each attacker will always keep participating in the route discoveries initiated by the good nodes and will drop the good nodes' packets in such a way that it will not be detected as malicious, which can be regarded as selective dropping.
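Attacking strategy 2 can be sketched as the following stateful rule (a hypothetical interface for illustration; the class and method names are not from the book's simulator):

```python
class Strategy2Attacker:
    """Sketch of attacking strategy 2: drop the first B_th packets of each
    good node, then refuse to join that node's routes, so the bounded
    dropping is never classified as malicious."""
    def __init__(self, B_th=20):
        self.B_th = B_th
        self.dropped = {}          # good-node id -> packets dropped so far

    def join_route(self, requester):
        # Stop participating once the undetectable drops are used up.
        return self.dropped.get(requester, 0) < self.B_th

    def forward(self, requester):
        # Returns True if the packet is forwarded, False if dropped.
        d = self.dropped.get(requester, 0)
        if d < self.B_th:
            self.dropped[requester] = d + 1
            return False
        return True

a = Strategy2Attacker(B_th=20)
drops = sum(0 if a.forward("node-7") else 1 for _ in range(50))
# Exactly B_th packets are dropped per victim; afterwards the attacker
# declines to be on that victim's routes.
```

The bounded counter per victim is what makes the extra damage of this strategy limited, as the results below confirm.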
[Figure 21.6 Payoff comparison when some attackers will drop packets: (a) good nodes' payoff and (b) attackers' payoff, each versus the number of attackers, for attacking strategies 1–3.]
Figure 21.6(a) illustrates the good nodes' payoff under different attacks. First, compared with attacking strategy 1, attacking strategy 3 even increases the good nodes' payoff, though the attackers can drop some good nodes' packets. The reason is that, when attacking strategy 3 is used, the attackers also need to keep forwarding packets for the good nodes, which will increase the number of nodes that the good nodes can use and reduce the value of L̄min. Since the number of packets that the attackers can drop without being detected as malicious is very limited, the extra damage that they can cause is also very limited, and the good nodes' payoff will consequently be increased. Second, compared with attacking strategy 1, attacking strategy 2 can decrease the good
nodes' payoff slightly, owing to the extra packets that they have dropped. However, since the number of packets that the attackers can drop is always bounded, the effect of such packet dropping becomes less and less noticeable as time increases.
Figure 21.6(b) illustrates the attackers' payoff. First, attacking strategy 2 can increase the attackers' payoff compared with attacking strategy 1, because the attackers can drop some extra packets without being detected. However, attacking strategy 3 dramatically decreases the attackers' payoff compared with attacking strategy 1, the reason being that forwarding packets for the good nodes also incurs a lot of cost, while the number of packets that they can drop without being detected as malicious is very limited. In summary, from the attackers' point of view, when the network lifetime is finite, attacking strategy 2 should be used, though its advantage over attacking strategy 1 is very limited, and will decrease with increasing network lifetime.
21.6 Summary

In this chapter we have investigated how to secure cooperative ad hoc networks against insider attacks under realistic scenarios, where the environment is noisy and the underlying monitoring is imperfect. We model the dynamic interactions between good nodes and attackers in such networks as a secure-routing and packet-forwarding game. The optimal defense strategies have been devised; they are optimal in the sense that no other strategies can further increase the good nodes' payoff under attacks. The maximum possible damage that can be caused by the attackers has also been analyzed. By focusing on the worst-case scenario from the good nodes' point of view, that is, the good nodes have no prior knowledge of the other nodes' types while the insider attackers can know which nodes are good, the devised strategies can work well in any scenario. Extensive simulations have also been conducted to justify the underlying assumptions and to evaluate the strategies. The simulation results demonstrate that the defending strategies can effectively secure cooperative ad hoc networks under noise and imperfect monitoring. Interested readers can refer to [482].
22 Secure cooperation stimulation under noise and imperfect monitoring
In this chapter we address cooperation stimulation in realistic yet challenging contexts in which the environment is noisy and the underlying monitoring is imperfect. We first explore the underlying reasons why stimulating cooperation in such scenarios is difficult. Instead of trying to force all nodes to act fully cooperatively, our goal is to stimulate cooperation in a hostile environment as much as possible through playing on conditional altruism. To formally address the problem, we model the interactions among nodes as secure-routing and packet-forwarding games under noise and imperfect observation, and devise a set of reputation-based attack-resistant cooperation strategies without requiring any tamper-proof hardware or a central banking service. The performance of the devised strategies is evaluated analytically. The limitations of the game-theoretic approaches and the practicability of the devised strategies are also investigated through both theoretical analysis and extensive simulation studies. The results demonstrate that, although sometimes there may exist a gap between the ideal game model and the reality, game-theoretic analysis can still provide thought-provoking insights and useful guidelines when designing cooperation strategies.
22.1 Introduction

In this chapter, instead of trying to force all nodes to act fully cooperatively, our goal is to stimulate cooperation among selfish nodes as much as possible without relying on any tamper-proof hardware or a central banking service. Further, instead of addressing this issue in ideal scenarios, we focus on realistic scenarios in which communication channels are error-prone, the underlying monitoring is imperfect, and there may be some malicious nodes whose goal is to cause damage to the network, all of which makes achieving the above goal extremely challenging. As in previous chapters, we also focus on the most basic networking mechanism in ad hoc networks, namely packet forwarding, and we will jointly consider routing and packet forwarding by modeling the interactions among nodes as a multi-stage secure-routing and packet-forwarding game under noise and imperfect observation. We will explore the challenges involved in stimulating cooperation under such realistic settings, and identify the underlying reasons why in many situations cooperation cannot be enforced. Then we devise a set of reputation-based attack-resistant cooperation strategies without requiring any tamper-proof hardware or a central banking service,
and evaluate the performance of the devised strategies. When devising cooperation strategies, besides Nash equilibrium, the issues of fairness, cheat-proofness, and robustness against attacks have also been considered. Furthermore, the limitations of the game-theoretic approaches and the practicability of the devised strategies in reality have also been investigated through both theoretical analysis and extensive simulation studies. Meanwhile, although our focus is on mobile ad hoc networks, networks with fixed topology have also been investigated when necessary.
This chapter falls within the category of reputation-based cooperation-stimulation analysis for autonomous ad hoc networks under a game-theoretic framework. However, there are several major differences. First, we study this problem under more realistic and more challenging scenarios, where the communication medium is error-prone, the underlying monitoring mechanism is not perfect, and some nodes may be malicious. Second, instead of enforcing cooperation among nodes, which has been shown not to be achievable in most situations, our goal is to stimulate cooperation among selfish nodes as much as possible. Third, we have identified the reasons why in many situations cooperation cannot be enforced. Furthermore, we have also studied the limitations of game-theoretic approaches in reality.
In the two previous chapters, we have proved that, in order to maximize its own payoff and be robust against possible cheating behavior, a player should not forward more packets than its opponent does for it. We have also shown that this strategy can achieve Pareto optimality, cheat-proofness, and absolute fairness. However, in Chapter 19, we have assumed perfect monitoring. In this chapter, we further focus on the scenario in which the underlying monitoring is not perfect, which makes the task much more challenging.
Meanwhile, instead of trying to identify the conditions under which the strategy being considered is optimal, as was done in Chapter 21, in this chapter we investigate when and why the strategies cannot work well, through both theoretical analysis and extensive simulations. Furthermore, we also study the possible limitations of game-theoretic approaches to solving cooperation issues.
The rest of the chapter is organized as follows. In Section 22.2 we describe the system model, pose the challenges for cooperation stimulation in realistic contexts, and model the interactions among nodes as a multi-stage secure-routing and packet-forwarding game under noise and imperfect observation. The attack-resistant cooperation-stimulation strategies are described in Section 22.3, and the theoretical analysis of the devised strategy is presented in Section 22.4. Extensive simulations have also been conducted to evaluate the effectiveness of the strategies under various scenarios, with the results being summarized in Section 22.5. Section 22.6 compares our approaches with the existing approaches.
22.2 Design challenges and game description

22.2.1 System description and design challenges

In this chapter we investigate how to stimulate cooperation among selfish nodes under realistic scenarios. We consider an autonomous mobile ad hoc network with a finite
population of users, denoted by N. We do not assume the availability of any tamper-proof hardware or a central banking service; therefore the scheme should be completely reputation-based. We focus on the situation in which each user will stay in the network for a relatively long time, such as students on a campus. However, we do not require them to remain connected all the time, and we allow users to leave and join the network when necessary. It is worth pointing out specifically that our goal is not to force all the users to act in a fully cooperative fashion, which has been shown not to be achievable in most situations, as discussed in previous chapters. Instead, our goal is to stimulate cooperation among nodes as much as possible through playing on conditional reciprocal altruism, and at the same time take into consideration the possible cheating and malicious behavior as well as fairness concerns.
We assume that each user has a unique registered and verifiable identity, and may send information to the others or request information from the others. We focus on the information-push model, in which it is the source's duty to guarantee the successful delivery of packets to their destinations, but the results obtained can easily be extended to the information-pull model. We assume that, for each user i ∈ N, forwarding a packet will incur cost ci, and letting a packet be successfully delivered to its destination can bring it gain gi. Here the cost corresponds to the effort expended by i, such as energy, and the gain is usually user-specific and/or application-specific.
In general, due to the multi-hop nature, when a node wants to send a packet to a certain destination, a sequence of nodes will usually be requested to help forward this packet. We refer to the sequence of ordered nodes as a route, the set of intermediate nodes on a route as relays, and the procedure involved in discovering a route as route discovery.
In general, route discovery can be partitioned into three stages. In the first stage, the requester notifies other nodes in the network that it wants to find a route to a certain destination. In the second stage, other nodes in the network will make their decisions on whether they will agree to be on the discovered route. In the third stage, the requester will determine which route should be used.
In general, not all packet-forwarding decisions can be perfectly executed. For example, when a node has decided to help another node to forward a packet, the packet may still be dropped due to link breakage, or the transmission may fail due to channel errors. We refer to those factors that may cause decision-execution errors as noise; they include environmental unpredictability and system uncertainty, channel noise, mobility, etc. We use pe to denote the average packet-dropping probability due to noise. It is worth mentioning that the packet-dropping probability may vary over time owing to varying channel conditions, mobility, etc. For packet dropping due to noise, both the i.i.d. case and the non-i.i.d. case will be studied.
We also assume that some underlying monitoring schemes have been employed (such as those studied in [283] and Chapter 16) and that they can let the source know whether its packets have been successfully delivered to their destinations. Meanwhile, if a packet has been dropped by some relay, the underlying monitoring mechanism can let the source know who has dropped this packet. However, we do not assume perfect monitoring; instead, we assume that, even if a node has successfully forwarded a packet, with probability no more than pf it can be observed as having dropped the packet
(i.e., a false alarm). On the other hand, when a packet has been dropped by a certain relay, with probability no more than pm this can be observed as a forwarding event (i.e., missed detection). Here pf and pm characterize the capability of the underlying monitoring mechanism. It is easy to understand that pf and pm may vary according to the underlying monitoring mechanism and the monitoring environment. Before devising cooperation-stimulation strategies for autonomous mobile ad hoc networks, we first summarize some challenges that we may meet. • Existence of noise. In many cooperation-enforcement schemes, such as in [396] [121], each node decides its next-step action solely on the basis of the quality of service it has received in the current and/or previous stages, such as the normalized throughput. However, if there exists noise, some packets may be dropped unintentionally during the delivery. This can reduce the quality of service experienced by some nodes. As a consequence, these nodes will also lower the service quality provided by them. Such an avalanche effect may quickly propagate throughout the network and after some time no nodes will forward packets for the others. When designing cooperation-stimulation strategies in realistic scenarios, the effect of noise has to be thoroughly considered. • Imperfect monitoring. Since nodes usually base their decisions solely on what they have observed, imperfect monitoring can always be taken advantage of by greedy or malicious nodes to increase their performance. For example, when the misseddetection ratio is high, a node can always drop other nodes’ packets but still claim that it has forwarded them. None of the existing approaches have been designed with consideration of noise and imperfect monitoring, which greatly limits their potential applications in realistic scenarios. • Presence of malicious users. 
If no malicious nodes exist and all nodes want to enjoy a high-quality network service, such as high throughput, stimulating cooperation may be less challenging according to the following logic: misbehavior by some nodes can lead to a decrease of service quality experienced by some other nodes, which may consequently reduce the service quality they provide. After some while, such quality degradation will propagate back to those nodes which initially misbehaved. Therefore, nodes have no incentive to intentionally behave maliciously. However, since the attackers’ goal is usually to decrease the network service quality, they would like to cause propagation of such misbehavior. This makes cooperation stimulation extremely challenging. Further, it has been recognized that malicious behavior in autonomous ad hoc networks will not be uncommon due to the loose access control, and security issues have been overlooked in the past when designing cooperation-stimulation strategies. • Topology dependence. It has been pointed out in [121] that network topology plays an important role when designing cooperation-enforcement strategies, and usually it is impossible to find a strategy to force all nodes to play fully cooperatively in static ad hoc networks. For example, if a user is in a bad location such that no user relies on him to forward packets, it is usually impossible for him to find other users to help him.
• Changing topology and opponent. In ad hoc networks, at each time instant each node may request different nodes to forward packets for it due to topology change or other reasons, and/or be requested by different nodes. This also poses a big challenge to cooperation stimulation: since nodes are selfish, unless a relay node is sure with high confidence that requesters will return the favor later, it has no incentive to forward packets for them. • Varying service-request rate. Similarly to changing opponents, we have identified that a variable request rate also plays an important role. For example, if a node has too many packets to send, it is usually impossible to let the other nodes forward all the packets for it, unless it can return enough favors to the others. Further, due to the topology change, a node that is being requested might not need the requester’s help immediately, though it may later. • Non-repeated model. Most of the existing literature addresses cooperation enforcement under a repeated-game model, such as in [396] [427] [5] [121], which assume either random connection or a fixed setup. However, the repeated-game model rarely holds in reality. This leads to a new challenge, namely that favors cannot be returned immediately, which is a major hurdle for effective cooperation stimulation. In [98], Dawkins demonstrated that reciprocal altruism is beneficial for every ecological system when favors are granted simultaneously. However, when favors cannot be granted simultaneously, altruism might not guarantee satisfactory future payback, especially when the future is unpredictable. The situation will deteriorate further when the observation is imperfect with high false-alarm and missed-detection ratios. In this chapter, one critical goal is to design attack-resistant cooperation-stimulation strategies for autonomous mobile ad hoc networks that can work well even in a noisy and hostile environment with imperfect monitoring.
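As a small illustration of why imperfect monitoring complicates the design, the following sketch simulates the noise-and-monitoring model defined above (the parameter values are hypothetical, in the range used later in the book). It shows that even a node that always decides to forward is observed dropping roughly pe(1 − pm) + (1 − pe)pf of its packets, not pe:

```python
import random

def observe(decision_forward, pe, pf, pm, rng):
    """One packet under the model of this section: a forwarding decision
    fails with probability pe (noise); the monitor reports a forwarded
    packet as dropped with probability pf (false alarm) and a dropped
    packet as forwarded with probability pm (missed detection).
    Returns (actually_forwarded, observed_as_forwarded)."""
    forwarded = decision_forward and rng.random() >= pe
    if forwarded:
        observed = rng.random() >= pf
    else:
        observed = rng.random() < pm
    return forwarded, observed

pe, pf, pm = 0.03, 0.05, 0.05
rng = random.Random(7)
n = 200_000
obs_drops = sum(1 for _ in range(n) if not observe(True, pe, pf, pm, rng)[1])
rate = obs_drops / n
# A detector keyed to pe alone would raise false alarms: the observed
# drop ratio of a fully cooperative node is about pe*(1-pm) + (1-pe)*pf.
expected = pe * (1 - pm) + (1 - pe) * pf
```

This is exactly the avalanche risk described above: observations overstate misbehavior, so strategies that react naively to the observed drop ratio punish cooperative neighbors.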
22.2.2 The multi-stage secure-routing and packet-forwarding game

As in Chapter 21, in this chapter we model the dynamic interactions among nodes in autonomous mobile ad hoc networks as a multi-stage secure-routing and packet-forwarding game, as follows.
• Players. A finite set of network users, denoted by N.
• Types. Each player i ∈ N has a type θi ∈ Θ, where Θ = {selfish, malicious}. Let Ns denote the set of selfish players and Nm = N − Ns the set of attackers. Meanwhile, no player knows the others' types a priori.
• Strategy space.
  1. Route-participation stage. Each player, on receiving a request asking it to be on a certain route, can either accept or refuse this request.
  2. Route-selection stage. Each player who has a packet to send can, after discovering a valid route, either use or not use this route to send the packet.
  3. Packet-forwarding stage. For each relay, once it has received a packet that it is requested to forward, its decision can be either to forward or to drop this packet.
• Cost. For any player i, transmitting a packet, either for itself or for the others, will incur cost ci.
• Gain. Any player i ∈ Ns gets gain gi for any successfully delivered packet originating from it.
• Utility. For each player i, let Ti(t) denote the number of packets that i needs to send by time t, let Si(t) denote the number of packets that have successfully reached their destinations by time t with i being the source, let Fi(j, t) denote the number of packets that i has forwarded for j by time t, and let Fi(t) = Σ_{j∈N} Fi(j, t). Let Wi(j, t) denote the total number of useless packet transmissions that i has caused to j by time t due to i dropping packets transmitted by j. Let tf be the lifetime of this network. Then we model the players' utilities as follows.
  1. Any selfish player i aims to maximize

     U_i^s(tf) = (Si(tf) gi − Fi(tf) ci) / Ti(tf).   (22.1)

  2. Any attacker j aims to maximize

     U_j^m(tf) = (1/tf) [ Σ_{i∈N} (Wj(i, tf) + Fi(j, tf)) ci − η Fj(tf) cj ].   (22.2)
Here η is introduced to determine the relative importance of the attackers’ cost compared with other nodes’ cost. That is, it is worth incurring cost c to cause damage c to other nodes only if η < c /c. If the game is played for an infinite duration, their utilities will become limtf →∞ Uis (tf ) and limtf →∞ U m j (tf ), respectively. On the right-hand side of (22.1), the numerator denotes the net profit (i.e., total gain minus total cost) that the selfish node i obtained, and the denominator denotes the total number of packets that i needs to send. This utility represents the average net profit that i can obtain per packet. We can see that maximizing (22.1) is equivalent to maximizing the total number of successfully delivered packets subject to the total cost constraint. If ci = 0, this is equivalent to maximizing the throughput. The summation on the right-hand side of (22.2) represents the net damage caused to the other nodes by j. Since in general this value may increase monotonically, we normalize it using the network lifetime tf . Now this utility represents the average net damage that j caused to the other nodes per time unit. From (22.2) we can see that in this game setting the attackers’ goal is to waste the other nodes’ cost (or energy) as much as possible. Other possible alternatives, such as minimizing the others’ payoff, will be discussed later. The above game can be divided into many subgames as explained below. Once a player wants to send a packet to a certain destination, a subgame consisting of at most three stages will be initiated: in the first stage, the source will request some players to be on a certain route to the destination; in the second stage, the source will decide whether it should use this route to send the packet; and in the third stage, each relay player will decide whether it should help the source to forward this packet once it has received the packet. We refer to each subgame as a single routing and packet-forwarding subgame.
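To make (22.1) and (22.2) concrete, both utilities can be computed directly from the per-node counters. The following Python sketch is illustrative only: the function and argument names are ours, and it assumes that the whole of (22.2), including the attacker's own cost term, is normalized by the lifetime tf.

```python
def selfish_utility(S, F, T, g=1.0, c=0.1):
    """Average net profit per packet for a selfish node, as in (22.1):
    S delivered packets (gain g each), F forwarded packets (cost c each),
    T packets scheduled to be sent."""
    return (S * g - F * c) / T

def attacker_utility(damage_counts, costs, F_j, c_j, t_f, eta=0.1):
    """Average net damage per unit time for an attacker j, as in (22.2).
    damage_counts[i] = W_j(i, t_f) + F_i(j, t_f); costs[i] = c_i."""
    damage = sum(d * c for d, c in zip(damage_counts, costs))
    return (damage - eta * F_j * c_j) / t_f
```

For example, a node that delivered 80 of its 100 scheduled packets while forwarding 300 for others (g = 1, c = 0.1) earns an average net profit of 0.5 per packet.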
22.3 Attack-resistant cooperation stimulation
22.3.1 Statistical detection of packet-dropping attacks

Before devising attack-resistant cooperation-stimulation strategies, we first study how to handle possible malicious behavior. We focus on two classes of attacks: packet-dropping and traffic-injection attacks. Next we show how to detect a packet-dropping attack under noise with imperfect monitoring.
Let Ri(j, t) denote the number of packets that node j has agreed to forward for node i by time t, and let Hi(j, t) denote the number of times that i has observed j forwarding a packet for it. If j has never intentionally dropped i's packets, given pe, pf, and pm, on average we should have

po Ri(j, t) ≤ Hi(j, t) ≤ (1 − pe + pe pm) Ri(j, t),   (22.3)

with po = (1 − pe)(1 − pf). Then a simple detection rule can be as follows: node i will mark node j as intentionally dropping packets if the following holds:

Hi(j, t) < Ri(j, t) po − Δ(Ri(j, t), pe, pf, pm),   (22.4)

where Δ(n, pe, pf, pm) is a function of pe, pf, pm, and n. In general, there is a tradeoff when selecting Δ(n, pe, pf, pm): a large Δ(n, pe, pf, pm) may incur a high missed-detection ratio, whereas a small Δ(n, pe, pf, pm) may result in a high false-alarm ratio. One way to find a good Δ(n, pe, pf, pm) is to apply the Neyman–Pearson hypothesis-testing theory [344]. Let PF(Δ) denote the false-alarm probability resulting from using a certain Δ in (22.4), and let PM(Δ) denote the miss probability resulting from using a certain Δ in (22.4). Given a certain acceptable false-alarm probability α, we say that Δ* is optimal if

Δ* ∈ arg min_Δ PM(Δ)  s.t. PF(Δ) < α.   (22.5)
If packet dropping due to noise can be modeled as an independent identically distributed (i.i.d.) random process with dropping probability pe, and the observation errors are also independent identically distributed random processes, and they are independent of each other, then, according to the central limit theorem [223], for any x ∈ R, we have

lim_{Ri(j,t)→∞} Prob( [Hi(j, t) − Ri(j, t) po] / √(Ri(j, t) po(1 − po)) ≥ x ) ≥ 1 − Φ(x),   (22.6)

where

Φ(x) = (1/√(2π)) ∫_{−∞}^{x} e^{−t²/2} dt.   (22.7)

Then we can let

Δ(n, pe, pf, pm) = x √(n po(1 − po)).   (22.8)
In this case, the false-alarm ratio will be no more than 1 − Φ(x) when Ri(j, t) is large, and the obtained detector (22.4), with Δ defined in (22.8), is an optimal Neyman–Pearson detector subject to the false-alarm probability α = 1 − Φ(x). Since in general Φ(x) can approach 1 even for a small positive x, Δ(n, pe, pf, pm) will be a very small value compared with npo for large n.
However, in general neither packet dropping nor observation error is i.i.d. Under such circumstances, if the above detection rule is used, the false-alarm ratio will usually be larger than 1 − Φ(x). In order to maintain the same false-alarm probability as in i.i.d. cases, in non-i.i.d. cases the threshold Δ(n, pe, pf, pm) should be increased. Let β = 1 − α, which can be interpreted as i's confidence in its detection decision. The value of β lies in the range [0, 1], with 0 indicating that i has not marked j as malicious and 1 indicating that i is sure that j is malicious. Then we have β = Φ(x) for the i.i.d. scenarios and β < Φ(x) for the non-i.i.d. scenarios. In the rest of this chapter we will use Δ(n, pe, pf, pm, β) to denote the detection threshold with detection confidence β.
Once node i has marked node j as intentionally dropping packets, one possible rule is that it should not work with j again. However, such a rule has the drawback that, if j has mistakenly been marked as malicious, it can never recover, since i will not give it a chance. To overcome this drawback, we modify this decision rule such that j will be given a chance to recover, as described in the following subsection.
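The detector (22.4) with threshold (22.8) can be sketched as follows for the i.i.d. case. This is our own illustrative formulation: the function name is hypothetical, and Python's statistics.NormalDist is used to obtain x from a target false-alarm probability α (so that 1 − Φ(x) = α).

```python
from statistics import NormalDist

def detect_dropping(H, R, p_e, p_f, p_m, alpha=0.001):
    """Flag node j as intentionally dropping packets, rule (22.4).

    H : Hi(j, t), observed forwardings; R : Ri(j, t), packets j agreed to forward.
    The threshold Delta follows (22.8) under the i.i.d. Gaussian approximation.
    p_m enters the general threshold Delta(n, pe, pf, pm) but drops out of (22.8).
    """
    p_o = (1 - p_e) * (1 - p_f)          # expected observed-forwarding ratio
    x = NormalDist().inv_cdf(1 - alpha)  # x such that 1 - Phi(x) = alpha
    delta = x * (R * p_o * (1 - p_o)) ** 0.5
    return H < R * p_o - delta
```

With pe = 3%, pf = pm = 5%, and R = 1000 agreed packets, the rule flags a node only when noticeably fewer than R·po ≈ 921 forwardings are observed.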
22.3.2 Cooperation strategy with attacker detection

The strategies for non-malicious players involve decision-making in the following three stages: route participation, route selection, and packet forwarding.
22.3.2.1 The route-participation stage

We first study what decision a selfish node i should make when it receives a route-participation request from node j. First, if i has detected j as malicious with confidence β, it should immediately refuse this request with probability β. Second, even if j has not been marked as malicious by i, i should accept this request only if it believes that it can get help from j later. However, whether i can get help from j depends on many uncertain factors, such as i's and j's future requests, the changing network topology, and j's strategy. Owing to the unpredictability of future events and the fact that favors cannot be granted simultaneously, stimulating i to act cooperatively is difficult.
Here we focus on the scenario in which nodes stay in the network for a relatively long time. We consider the following strategy: a node may first forward some packets for other nodes without getting instantaneous payback. However, in order to be robust against possible malicious behavior (e.g., a traffic-injection attack) or greedy behavior (e.g., requesting more but doing less in return), a node should not be too generous.
Before formalizing the above strategy, we first introduce a simple procedure. Let β be i's confidence regarding whether j is malicious. i then randomly picks a value r between 0 and 1, and will give j another chance if r < 1 − β. We refer to this procedure as a recovery-check procedure. Let F̃j(i, t) be i's estimate of Fj(i, t). Then the above
strategy can be translated as follows: i will accept j's route-participation request only if j has passed the recovery check and the following holds:

Fi(j, t) − F̃j(i, t) < Di^max(j, t).   (22.9)

As in [483], we refer to Fi(j, t) − F̃j(i, t) as i's estimated balance with j, and refer to Di^max(j, t) as the cooperation level. Setting Di^max(j, t) to ∞ means that i will always help j, setting Di^max(j, t) to −∞ means that i will never help j, and setting Di^max(j, t) to a finite value means that i will conditionally help j. Meanwhile, Di^max(j, t) can be either constant or variable depending on i's past interactions with j. It is easy to see that a good choice of Di^max(j, t) is crucial for optimizing i's performance.
In order for the above strategy to work well, node i needs a good estimate of Fj(i, t) for any other node j and needs to select a good cooperation level. We first study how to get a good estimate of Fj(i, t). If i can have accurate knowledge of the monitoring errors experienced by j, denoted by p̃f and p̃m, then we should have

Fj(i, t)((1 − pe)(1 − p̃f) + pe p̃m) ≈ Hi(j, t).   (22.10)

Then a good estimate of Fj(i, t) can be

F̃j(i, t) = Hi(j, t) / ((1 − pe)(1 − p̃f) + pe p̃m).   (22.11)

However, in general i cannot accurately estimate p̃f and p̃m. In such scenarios, a more conservative estimate can be

F̃j(i, t) = Hi(j, t) / ((1 − pe)(1 − pf)).   (22.12)

Consequently, j can take advantage of such inaccuracy to forward fewer packets for i, or to ask i to forward more packets for it. This will be further investigated in Section 22.4.
Now we study how to select a good cooperation level. First, finding an optimal cooperation level is usually impossible unless nodes can accurately predict the future. In general, the cooperation level Di^max(j, t) is related to both i's and j's request rates. For example, if i has a relatively low request rate compared with those of the others, a relatively small Di^max(j, t) should work well. However, if i's request rate is much higher than those of the other nodes in the network or exhibits too bursty a pattern, a larger Di^max(j, t) may be needed. Meanwhile, Di^max(j, t) may also change according to i's interactions with j. For example, if i and j have helped each other many times, slightly increasing their cooperation levels may be a good choice from both nodes' point of view. Extensive simulations have been conducted to study the effect of the cooperation level, and the results suggest that, when all nodes have almost equal request rates, a relatively small cooperation level can work well.
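Putting the recovery check together with the balance test (22.9) and the conservative estimate (22.12), a relay's route-participation decision can be sketched as below. The names are ours and pe is assumed known to the node; this is an illustration, not the chapter's exact implementation.

```python
import random

def accept_route_request(F_i_j, H_i_j, p_e, p_f, beta, D_max):
    """Node i's decision on a route-participation request from node j.

    F_i_j : Fi(j, t), packets i has forwarded for j.
    H_i_j : Hi(j, t), times i observed j forwarding for it.
    beta  : i's confidence that j is malicious (0 = not marked).
    """
    # Recovery check: a marked node gets another chance with probability 1 - beta.
    if random.random() >= 1 - beta:
        return False
    # Conservative estimate (22.12) of the favors j has returned to i.
    F_j_i_est = H_i_j / ((1 - p_e) * (1 - p_f))
    # Accept only while the estimated balance (22.9) stays below the cooperation level.
    return F_i_j - F_j_i_est < D_max
```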
22.3.2.2 The route-selection stage

Next we study the strategy in the route-selection stage. Once a set of routes has been discovered by node i, with all relays on these routes having agreed to forward packets
for it, the following strategy will be implemented by i. First, i will not further consider this route if any relay cannot pass recovery check; second, among all those routes with all nodes having passed recovery check, i will pick the one with the minimum number of hops.
22.3.2.3 The packet-forwarding stage

Now we consider the strategy in the packet-forwarding stage. Once any selfish node has agreed to forward a packet for a certain node, it should not intentionally drop this packet unless the following holds:

(1 − pe)(1 − p̃f) + pe p̃m ≤ p̃m,   (22.13)

that is, p̃f + p̃m ≥ 1, where p̃f and p̃m are the actual false-alarm ratio and missed-detection ratio experienced by the node. If (22.13) holds, the monitoring is so inaccurate that the chance of being marked as malicious after dropping all packets is no greater than that after forwarding all packets. However, if (22.13) does not hold, intentionally dropping packets is not a good strategy for a node that still needs others' help, since such dropping may cause it to be detected as malicious, and consequently it will not get help from other nodes in the future.
Let β(i, j) denote i's confidence regarding whether j is malicious. By combining the attacker-detection strategy and the routing and packet-forwarding strategies described above, we devise the following attack-resistant cooperation-stimulation strategy. For each single routing and packet-forwarding subgame, assume that P1 is the initiator, who wants to send a packet to Pn at time t, and that a route "P1 → P2 → · · · → Pn" has been discovered by P1. After P1 has sent requests to all the relays on this route asking them to participate, each non-malicious player on this route should implement the following strategies.
(i) In the route-participation stage, any relay Pi will accept this request if and only if P1 can pass the recovery check and FPi(P1, t) − F̃P1(Pi, t) < DPi^max(P1, t); otherwise, it should refuse.
(ii) In the route-selection stage, P1 will use this route if and only if all relays on this route have passed the recovery check and this route has the minimum number of hops among all those routes with all relays having passed the recovery check; otherwise, P1 should not use this route.
(iii) In the packet-forwarding stage, any relay Pi will forward this packet if and only if it has agreed to be on this route and (22.13) does not hold; otherwise, it should drop the packet.
(iv) Attacker detection. Let α be an acceptable false-alarm ratio from P1's point of view. P1 will mark a relay Pj as malicious if (22.4) holds with i = P1 and j = Pj. Consequently, P1 updates β(P1, Pj) using β = 1 − α.
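Condition (22.13) reduces the relay's packet-forwarding decision to a one-line test. In the sketch below (our own naming), pt_f and pt_m stand for the node's experienced ratios p̃f and p̃m:

```python
def should_forward(p_e, pt_f, pt_m):
    """A selfish relay forwards unless (22.13) holds, i.e., unless monitoring is
    so inaccurate that forwarding looks no better to the observer than dropping."""
    # (1 - p_e)(1 - pt_f) + p_e * pt_m > pt_m  is equivalent to  pt_f + pt_m < 1.
    return (1 - p_e) * (1 - pt_f) + p_e * pt_m > pt_m
```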
22.4 Game-theoretic analysis and limitations

22.4.1 Strategy analysis under no attacks
We first consider the decisions made by the relays in the packet-forwarding stage. As long as (22.13) does not hold and the source i can get an accurate estimate of Fj(i, t), from any selfish node j's point of view, the only gain from intentionally dropping a packet is saving the cost cj, while the penalty includes an increase in the probability of being marked as malicious by i and a decrease in the number of packets that i will forward for j in the future. Therefore j has no incentive to intentionally drop packets in such scenarios.
What is the consequence of inaccurate estimation of Fj(i, t)? Let's assume that p̃f and p̃m are the actual false-alarm and missed-detection ratios experienced by j, and that i does not know their values. In this case, i may use (22.12) to estimate Fj(i, t), and we have

Fj(i, t)/F̃j(i, t) ≈ (1 − pe)(1 − pf) / ((1 − pe)(1 − p̃f) + pe p̃m).   (22.14)

If p̃f < pf, then we have F̃j(i, t) > Fj(i, t), and consequently

lim_{Fi(j,t)→∞} Fj(i, t)/Fi(j, t) = (1 − pe)(1 − pf) / ((1 − pe)(1 − p̃f) + pe p̃m).   (22.15)

In other words, node j can take advantage of imperfect monitoring to increase its performance by forwarding fewer packets for node i. However, if the underlying monitoring mechanism can guarantee that pf and pm are small enough, the damage caused to node i will be very limited. Further, if node i also experiences a lower false-alarm ratio, the damage will be further reduced, since the above analysis is also applicable to i. We can also check that, if the false-alarm ratios and missed-detection ratios experienced by node i and node j are the same, then we still have lim_{Fi(j,t)→∞} Fj(i, t)/Fi(j, t) = 1.
Next we consider the source's decision in the route-selection stage. If no relays on the selected route have been marked as malicious by the source, it is easy to see that this is an optimal selection. What is the consequence if some relays have been marked as malicious? First, there is only a very small probability that those nodes can pass the recovery check, so, even if they are malicious, the long-term average damage is still negligible. Second, since these nodes may have been mistakenly marked as malicious, such a chance can allow them to recover their reputation, and may consequently increase the source's future payoff, since it may have more resources to select and use.
Finally we analyze the relay's decision in the route-participation stage. The optimality of the strategy in this stage depends on many uncertain factors, such as the nodes' future request pattern, the changing topology, the nodes' future staying time, and the selection of a good cooperation level. Since most of these factors cannot be known a priori, the optimality of the strategy cannot be guaranteed; it is usually impossible to find an optimal strategy without being able to accurately predict the future. However, our simulation results show that, when nodes' request rates do not vary a lot, a relatively low cooperation level can work well.
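The limit (22.15) bounds how far a node can under-deliver favors when its opponent uses the conservative estimate (22.12). A small helper (our own naming; pt_f and pt_m stand for p̃f and p̃m) makes the bound easy to evaluate:

```python
def long_run_favor_ratio(p_e, p_f, pt_f, pt_m):
    """Limit of F_j(i, t)/F_i(j, t) in (22.15): the fraction of favors node j
    ends up returning when node i estimates via (22.12) while j experiences
    monitoring errors pt_f, pt_m."""
    return ((1 - p_e) * (1 - p_f)) / ((1 - p_e) * (1 - pt_f) + p_e * pt_m)
```

With pe = 3% and all monitoring errors at 5%, the ratio stays above 0.99, i.e., the exploitable slack is well under 1%.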
If the future is predictable, or at least partially predictable (for instance, the network will remain alive for a long time, all nodes staying in the network will keep generating and sending packets, and any pair of nodes will meet and request each other's help again and again), then each node can set its cooperation level to be a very large positive constant without affecting its overall performance (any extra constant cost will not affect the overall payoff as long as lim_{t→∞} Ti(t) = ∞). In that case the strategies form a Nash equilibrium that is Pareto optimal, subgame perfect, and achieves absolute fairness (in cost), provided that each node i can accurately estimate Fj(i, t) for any other node j and that Di^max(j, t) is large enough to accommodate possibly variable and bursty requests between them. This can easily be proved by following the above analysis, but is not done here owing to limits on space.
Unfortunately, such ideal scenarios usually do not exist in reality; there is a gap between the ideal game model and reality. Accordingly, the strategy devised here cannot maintain its optimality in reality. However, our simulation results demonstrate that this strategy can still work well in most scenarios, which suggests that game-theoretic approaches can provide thought-provoking insights and useful guidelines when devising cooperation strategies even when there is some gap between the ideal model and reality.
22.4.2 Attacking strategy and damage analysis

Thus far we have focused mainly on scenarios in which no nodes are malicious. Next we analyze the possible damage that can be caused by attackers. Specifically, we focus on the following two important attacks: dropping packets and injecting traffic. That is, to damage the network, the attackers can either drop other nodes' packets or inject a lot of traffic to consume other nodes' resources.
We first consider a packet-dropping attack. According to the devised strategy, for an attacker j to avoid being marked as malicious by node i, the highest packet-dropping ratio p′e that it can employ should satisfy the following inequality:

(1 − pe)(1 − pf) ≤ (1 − p′e)(1 − p̃f) + p′e p̃m,   (22.16)

where p̃f and p̃m are the actual false-alarm ratio and missed-detection ratio experienced by j. That is, the observed number of packet forwardings must be no less than the value corresponding to normal behavior. Since in general we have

pe(1 − pf) + (pf − p̃f) ≥ 0,   (22.17)

the maximum possible p′e that the attacker can use without being detected is

p′e = min{1, [pe(1 − pf) + pf − p̃f]/(1 − p̃f − p̃m)}   if 1 − p̃f − p̃m > 0,
p′e = 1                                                if 1 − p̃f − p̃m ≤ 0.   (22.18)

These results tell us that, if the attackers can make the missed-detection ratio large enough (i.e., p̃m ≥ 1 − p̃f), they can arbitrarily drop packets without being detected.
Now we study the case 1 − p̃f − p̃m > 0. In this case, the attacker can set the packet-dropping ratio to

p′e = min{[pe(1 − pf) + pf − p̃f]/(1 − p̃f − p̃m), 1}.   (22.19)

Then we have

min{[p̃m/(1 − pf − p̃m)] pe, 1 − pe} < p′e − pe < min{[pe p̃m + (1 − pe) pf]/(1 − p̃m), 1 − pe},   (22.20)
where p′e − pe can be regarded as the extra damage caused by the attackers without their being detected. If an attacker can successfully exploit the underlying monitoring to avoid being detected, for example by virtue of a high p̃m, then the extra number of packets it can drop without being detected can increase dramatically. According to (22.20), the extra damage may increase nonlinearly with increasing p̃m. This suggests that it is critical to have a robust monitoring scheme to ensure that the monitoring error will not be too large. Actually, from (22.20) we can also see that, even for p̃m = 0.5, p′e − pe is still upper-bounded by pe + 2pf, which is small as long as pe and pf are small.
For a traffic-injection attack, since each selfish node i will try to maintain lim_{Fi(j,t)→∞} Fj(i, t)/Fi(j, t) = 1 for any node j, the extra number of packets that node j can request node i to forward is always bounded. According to (22.15), the maximum possible ratio between Fi(j, t) and Fj(i, t) is upper-bounded by [(1 − pe)(1 − p̃f) + pe p̃m]/[(1 − pe)(1 − pf)], provided that p̃f + p̃m < 1. Meanwhile, if the underlying monitoring mechanism can ensure that pm and pf are small, this ratio will be small. However, if j can successfully manage to make p̃f + p̃m ≥ 1, for example by making the missed-detection ratio approach 1, it can always request i to forward packets without returning the favor.
It is worth noting that, under the strategies considered here, no matter what goal the attackers may have, the selfish nodes' payoff can always be guaranteed as long as pe, pm, and pf are small. Meanwhile, if η (defined in (22.2)) is small enough, then, from an attacker's point of view, maximizing (22.2) is almost equivalent to minimizing the selfish nodes' payoff. Otherwise, maximizing (22.2) might not cause as much damage as minimizing the selfish nodes' payoff, since in this case the attackers might not be willing to drop packets continuously without being detected, because doing so also requires the attackers to forward many packets for other nodes, which might not be in their best interest.
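The maximum sustainable drop ratio in (22.18) is easy to evaluate numerically. The sketch below uses our own naming, with pt_f and pt_m standing for the attacker's experienced p̃f and p̃m:

```python
def max_undetectable_drop(p_e, p_f, pt_f, pt_m):
    """Highest packet-dropping ratio an attacker can employ without triggering
    the detector (22.4), following (22.18)."""
    if 1 - pt_f - pt_m <= 0:
        return 1.0  # monitoring defeated: arbitrary dropping goes undetected
    return min(1.0, (p_e * (1 - p_f) + p_f - pt_f) / (1 - pt_f - pt_m))
```

With pe = 3% and all error ratios at 5%, the attacker can drop at most about 3.2% of packets, barely more than the channel itself loses.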
22.5 Simulation studies

In this section we conduct extensive simulations to evaluate the effectiveness of the strategy devised here and to identify the situations in which it cannot work well, and why.
In our simulations, both static and mobile ad hoc networks have been studied, with mobile ad hoc networks being our focus. In these simulations, nodes are randomly deployed inside a rectangular area of 1000 m × 1000 m, and each mobile node moves according to the random-waypoint model [489], which can be characterized by the following three parameters: the pause time, the minimum velocity vmin, and the maximum velocity vmax. We set vmin = 10 m/s, vmax = 30 m/s, and the average pause time to 100 s. The MAC-layer protocol implements the IEEE 802.11 DCF with a four-way handshaking mechanism [197]. The link bandwidth is 2 Mbps, and the data-packet size is 512 bytes. DSR [209] is used as the underlying route-discovery protocol. The maximum transmission range is 250 m. Within the transmission range, the channel errors are characterized in terms of the outage probability, where outage is defined as the event that the received signal-to-noise ratio (SNR) falls below a certain threshold δ. For transmission distance d, the probability of outage is

PO(d) = P(SNR(d) ≤ δ) = 1 − exp(−δ/SNR(d)),   (22.21)

where SNR(d) here denotes the average received SNR at distance d. The transmission power has been adjusted in such a way that (1 − PO(d = 250))^512 = 3%.
In these simulations, each node randomly picks another node as the destination to which to send packets. The total number of selfish nodes is 100. Both pm and pf are set to 5%, and β is set to 0.1%. Each packet has a delay constraint, which is set to 10 s. If a packet is dropped by some relay, no retransmission will be applied. For each node i, we set gi = 1 and ci = 0.1. The nodes are indexed from 1 to N, where N is the total number of nodes. To conduct performance evaluation and comparison, the following are measured for each selfish node in the simulations.
• Normalized throughput: this is defined as the ratio between the total number of successfully delivered packets and the total number of packets scheduled to be sent.
• Probability of no route available: this is defined as the percentage of packets dropped due to there being no valid route available. • Cost per successful packet delivery: this is the ratio between the total number of forwarded packets and the total number of successfully delivered packets originating from it. • Balance: this is the difference between the total number of packets that this node forwarded for the others and the total number of packets that the others forwarded for it. From (22.1) it is easy to see that a selfish node’s payoff can be calculated from its normalized throughput and the cost per successful packet delivery.
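The last remark follows by rewriting (22.1): with throughput S/T and cost per successful delivery F/S, the payoff is (S g − F c)/T = throughput · (g − cost_per_delivery · c). A one-line sketch (our own naming):

```python
def selfish_payoff(throughput, cost_per_delivery, g=1.0, c=0.1):
    """Per-packet payoff (22.1) recovered from the two measured quantities:
    (S*g - F*c)/T = (S/T) * (g - (F/S) * c)."""
    return throughput * (g - cost_per_delivery * c)
```

For instance, 80% normalized throughput at a cost of 3 forwarded packets per successful delivery yields a payoff of 0.8 · (1 − 0.3) = 0.56.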
22.5.1 Mobile ad hoc networks vs. static ad hoc networks

We first study the effect of mobility on cooperation stimulation. In this set of simulations, three types of networks are generated: mobile, partially mobile, and static. In the partially mobile ad hoc network, the nodes with indices ranging from 1 to 50 are mobile, and the others are static. All nodes employ the same traffic pattern: the packet inter-arrival time follows an exponential distribution with mean 2 s. All nodes set their cooperation level to 60. The simulation results are illustrated in Figure 22.1.

Figure 22.1 Effects of mobility on cooperation stimulation. (Upper panel: normalized throughput vs. node index; lower panel: percentage of cases of there being no route available; curves for static, partially mobile, and mobile networks.)

First, from the throughput comparison we can see that, for the static case, the majority of nodes (85%) experience extremely bad throughput. This is because at most times they cannot find a route along which all relays are willing to help with forwarding (shown in the lower figure). The reason why several nodes have high normalized throughput is that the destinations are within the transmission range of the sources. These results suggest that the strategies devised here cannot be used in static ad hoc networks. Actually, in [121] [504] the authors have demonstrated that, in networks with fixed topology, cooperation enforcement cannot be achieved solely by relying on reputation. The most basic reason is that the service that a node can provide is usually not needed by its neighbors, and therefore its neighbors have no incentive to help it.
From these results we can also see that, when all nodes are mobile, the normalized throughput can be fairly high. For example, all except four nodes have normalized throughput greater than 80%. Even for those four nodes, the normalized throughput
is still more than 70%. We can also see that, for the majority of the nodes (96%), hardly any of their packets are dropped due to there being no available routes; that is, cooperation among nodes has been effectively stimulated. Now we study the partially mobile case. From the throughput comparison we can see that none of the mobile nodes has normalized throughput less than 40%, and most (33 out of 50) have normalized throughput higher than 80%. However, for the static nodes, the situation is totally reversed: half of them have normalized throughput less than 40%. This suggests that mobility can help stimulate cooperation. The underlying reason is that mobility can make the exchange of services more effective. An analogy to this is the effect of businessmen: without them, we can merely exchange services locally, so the service we can get will be very limited; while with the help of businessmen, services can be exchanged globally. From now on, we will mainly focus on mobile ad hoc networks with all nodes being mobile.
22.5.2 Bursty traffic pattern vs. non-bursty traffic pattern

Next we investigate the effect of the traffic pattern on cooperation stimulation. In these simulations two traffic patterns are considered: bursty and non-bursty. In the bursty case, packets are generated in a bursty pattern with average burst length 10, whereas in the non-bursty pattern the packet arrival follows a Poisson process. In both cases the average packet-arrival rate is 0.5 packet/s. The simulation results are illustrated in Figure 22.2.
It is surprising to see that the bursty case has a slightly better normalized throughput than the non-bursty case. This can be explained using the unsuccessful-forwarding ratio experienced by each node (shown in the lower figure): in the bursty case, the unsuccessful-forwarding ratio experienced by each node is 1% lower than that in the non-bursty case. This is because, in the non-bursty case, when a packet needs to be sent, there is a high probability that the existing route has broken, since this route may have been discovered a long time ago, whereas in the bursty case, though link breakages still happen frequently, as long as the current route is good, almost all the packets can be delivered successfully. However, if nodes with a bursty pattern have much higher rates, or the burst length is much longer, the performance in the bursty case may be decreased, as will be shown later.
22.5.3 The effect of a negative cooperation level

In this set of simulations some nodes set their cooperation level to be negative. Specifically, the first ten nodes set Dmax to −30, and all the others set Dmax to 60. The results are illustrated in Figure 22.3. From these results we can see that the majority of the nodes (6 out of 10) that set Dmax to be negative have normalized throughput less than 65%. Meanwhile, they also cause some other nodes to experience lower normalized throughput (6 out of 90 have normalized throughput no more than 70%). These results suggest that, as long as a node wants to stay in the network for a long time and needs to send packets continuously, it should not set its cooperation level to be negative.
Figure 22.2 Effects of traffic pattern on cooperation stimulation. (Upper panel: normalized throughput vs. node index; lower panel: unsuccessful-forwarding ratio vs. node index; curves for Poisson and bursty traffic.)
Figure 22.3 The effect of a negative cooperation level on cooperation stimulation. (Normalized throughput vs. node index; curves for Dmax = 60 for all nodes and Dmax = −30 for the first ten nodes.)
22.5.4 The effect of cooperation level on cooperation stimulation
In this set of simulations, each node sets its traffic rate to 0.5 packet/s following Poisson arrival. In each simulation a different Dmax value, ranging from 10 to 240, is used. The results are illustrated in Figure 22.4.

Figure 22.4 The effect of the cooperation level on cooperation stimulation. (Upper panel: average normalized throughput and average payoff vs. Dmax; lower panel: balance vs. node index for Dmax = 10, 40, and 120.)

From the upper figure we can see that,
once Dmax ≥ 80, neither the average normalized throughput nor the average payoff experienced by selfish nodes increases further, which suggests that in this case setting Dmax = 80 almost achieves the optimal solution in terms of normalized throughput. However, from the lower figure we can see that, with Dmax ≥ 80, the balance variation experienced by nodes also increases, which leads to high unfairness. That explains why we have set Dmax = 60 in our simulations: this provides a good tradeoff between payoff and fairness.
22.5.5 The effect of inhomogeneous request rates

In this set of simulations, each node's traffic rate is determined as follows: let i be a node's index, ranging from 1 to 100; then its traffic rate is set to ((i mod 20) + 1)/2 packet/s. For particular configurations of Dmax and traffic pattern, three cases are studied: in cases 1 and 3, each node's traffic follows Poisson arrival, whereas in case 2 each node's traffic follows bursty arrival. Meanwhile, in cases 1 and 2, all nodes set Dmax to 60, whereas in case 3, each node with index i sets Dmax to 60 + (i mod 2). The results are shown in Figure 22.5.
We first study the throughput comparison. From these results we can see that case 3 has the highest normalized throughput and case 2 has the lowest normalized throughput.
22.5 Simulation studies
Figure 22.5 The effect of inhomogeneous request rates on cooperation stimulation. (From top to bottom: normalized throughput, probability of there being no route available, cost per successful packet delivery, and balance, each vs. node index, for cases 1–3.)
This suggests that bursty traffic may decrease the performance, while, if a node has too much traffic to send, increasing its cooperation level can increase its performance. From these results we can also see that, with increasing traffic rate, the throughput decreases too. Although increasing Dmax can slightly increase the performance, it cannot completely solve the problem. The reason is that the service provided by those nodes with high traffic rates is not needed by those nodes with lower rates. This can be shown more clearly in the following simulations. By checking the second figure (probability of there being no route available) in Figure 22.5, we can see that in case 2 (the bursty case) a lot of packets will be dropped due to there being no available routes, especially when the node’s traffic rate is high, which explains why this case has the lowest throughput. From the third figure (cost per successful delivery) in Figure 22.5 we can see that, with increasing traffic rate, the number of hops per route may decrease slightly, which is a little bit surprising, but makes sense: when a node with a high traffic rate has used up the quota assigned by those nodes with lower rates, it is forced to use short routes such as one-hop routes. This is also confirmed by the results in the fourth figure, which indicates that the overall balance for the first 20 nodes almost reaches the maximum. Next we study an extremely asymmetric case. In this set of simulations, except for the first 10 nodes, which have packet-arrival rate 5 packet/s, all the other nodes have packet-arrival rate 0.5 packet/s. For the first 10 nodes’ Dmax values, three cases are studied: in case 1 they set Dmax = 60, in case 2 they set Dmax = 120, and in case 3 they set Dmax = 180. For the other nodes, in all three cases, Dmax = 60. The results are illustrated in Figure 22.6. 
From these results we can see that, by increasing Dmax from 60 to 120, a lot of gain can be obtained (the normalized throughput increases from 8% to 22%), whereas increasing Dmax from 120 to 180 introduces hardly any gain, and the normalized throughput is still only about 22%. This suggests that although increasing Dmax can provide some gain, this cannot change the inherent problem.
22.5.6 Effects of different types of packet-dropping attack

In this set of simulations, we study the effect of different types of packet-dropping attack. Four attack strategies are studied: do not participate in any route discovery; drop all packets passing through; drop half of the packets passing through; and selectively drop packets passing through while avoiding being detected. Figure 22.7 illustrates the evolution over time of the normalized throughput and payoff averaged over all selfish nodes. From these results, first of all we can see that dropping all packets causes the maximum damage. The reason is that we have set Bth to a large value (200), so each attacker can drop up to 199 of any other node's packets without being marked as malicious. However, we can also see that, as time goes on, the selfish nodes' performance increases. From these results we can also see that adaptive dropping can even increase the selfish nodes' performance. This is because the damage the attackers can cause while avoiding detection is very limited, whereas continuing to forward packets for selfish nodes can reduce the selfish nodes' average
number of hops per selected route. Although intuitively adaptive dropping may cause a lot of damage, in reality this need not be the case.

Figure 22.6 The effect of inhomogeneous request rates, an extreme case. (From top to bottom: normalized throughput, probability of there being no route available, and cost per successful packet delivery, each vs. node index, for Dmax = 60, 120, and 180.)
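The dropping strategies compared in this set of simulations can be summarized with a small decision function. The bookkeeping below, together with the use of the threshold Bth = 200 from the text, is our own sketch of one possible implementation, not the book's code; the no-participation strategy never joins a route, so it never reaches this decision and is omitted.

```python
import random

B_TH = 200  # detection threshold from the text: packets an attacker may drop
            # per victim before being marked as malicious

def drops_packet(strategy, dropped_so_far, rng=random):
    """Return True if an attacker using `strategy` drops the next packet."""
    if strategy == "drop_all":
        return True
    if strategy == "drop_half":
        return rng.random() < 0.5
    if strategy == "adaptive":
        # Drop only while staying strictly below the detection threshold,
        # then resume forwarding to avoid being marked as malicious.
        return dropped_so_far < B_TH - 1
    raise ValueError(f"unknown strategy: {strategy}")

# An adaptive attacker stops dropping after B_TH - 1 = 199 packets per victim,
# matching the observation above that its long-run damage is limited.
dropped = 0
for _ in range(1000):
    if drops_packet("adaptive", dropped):
        dropped += 1
assert dropped == B_TH - 1
```

Under this model, an adaptive attacker trades damage for stealth, which is consistent with the simulation result that adaptive dropping hurts selfish nodes less than dropping everything.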
22.5.7 The effect of the number of attackers

In this set of simulations we study the selfish nodes' average performance in the presence of various numbers of attackers, ranging from 5 to 30. All attackers launch traffic-injection attacks, and will not forward any packets for
selfish nodes. The results are illustrated in Figure 22.8. From these results we can see that, with an increasing number of attackers, the normalized throughput averaged over all selfish nodes remains almost unchanged, and the average payoff decreases only very slightly. This can be explained using the figure on the right, in which the total damage is defined as the total number of packets that selfish nodes have forwarded for each attacker. From this figure we can see that, after some time has elapsed, no more damage can be caused to the selfish nodes, because the attackers have used up all of the quota assigned to them. This suggests that the strategy is robust against traffic-injection attacks.

Figure 22.7 Comparison of different types of packet-dropping attack. (Normalized throughput and average payoff vs. time for the four strategies: no participation, drop all packets, drop half of the packets, and adaptive dropping.)

Figure 22.8 Performance comparison for various numbers of attackers. (Left: payoff and throughput vs. number of attackers; right: total damage caused by the attackers vs. time, for 5 to 30 attackers.)
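Why the total damage saturates, as discussed for Figure 22.8, can be illustrated with a minimal balance-based quota model. The bookkeeping below is our own sketch, assuming a per-pair cooperation level Dmax = 60 as in the simulations.

```python
D_MAX = 60  # per-pair cooperation level used in the simulations

def serve_request(balance):
    """balance = packets we have forwarded for a peer minus packets it forwarded for us.

    Grant the favor only while the deficit stays below D_MAX; a peer that never
    reciprocates (e.g., a traffic-injection attacker) soon exhausts its quota.
    """
    if balance >= D_MAX:
        return balance, False  # quota exhausted: refuse further help
    return balance + 1, True   # grant the favor and record it

balance, total_damage = 0, 0
for _ in range(1000):  # the attacker keeps injecting traffic, never reciprocating
    balance, served = serve_request(balance)
    total_damage += served
# Damage stops growing once the quota is used up, as in the right-hand plot.
assert total_damage == D_MAX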
22.5.8 Cooperation level vs. damage

In this final set of simulations, the effect of Dmax on the selfish nodes' performance under traffic-injection attack is studied, with the selfish nodes' Dmax varying from 20 to 100. The results are illustrated in Figure 22.9. From these results we can see that, after Dmax
has passed 60, the selfish nodes' average performance (normalized throughput and payoff) remains almost unchanged. Similarly to the results illustrated in Figure 22.8, for each given Dmax the damage caused by the attackers stops growing after some time, due to their having used up all of the assigned quota. Meanwhile, the damage increases linearly with increasing Dmax. Taking fairness into consideration as well, these results also suggest that Dmax = 60 is a good choice. However, we need to keep in mind that the selection of Dmax also depends on the underlying traffic rate. It is easy to understand that, with increasing traffic rate, we should also increase Dmax, especially when the mobility is low and traffic may exhibit a strongly bursty pattern and/or variable rates.

Figure 22.9 The effect of cooperation level on damage. (Left: average payoff and throughput vs. Dmax; right: total damage caused by the attackers vs. time, for Dmax = 20 to 100.)
22.6 Discussion

Compared with pricing-based schemes, such as those in [26] [27] [493] [3] [504], the major drawback of reputation-based schemes is that some nodes might not get enough help to send out all their packets. As we demonstrated in Section 22.2, the reason lies in the combined effect of two facts: (1) favors cannot be granted simultaneously, and (2) the future is unpredictable. Pricing-based schemes do not suffer from such problems, because a node gets an immediate monetary payback after providing a service. The drawback of pricing-based schemes lies in the requirement for tamper-proof hardware or a central banking service. If such a requirement can be satisfied effectively and with low overhead, pricing-based schemes would be a better choice than reputation-based schemes. However, it is worth pointing out that pricing-based schemes also suffer from noise, imperfect monitoring, and possible malicious behavior.
The differences between this chapter and the existing reputation-based work (e.g., [396] [427] [57] [291] [5] [121]) are as follows. First, we address very realistic scenarios: a noisy environment, imperfect monitoring, the existence of attackers, mobile nodes, inhomogeneous traffic rates, future unpredictability, and so on. This makes our task extremely challenging, and optimal solutions might not always be available. Second, our goal is not to force all nodes to act fully cooperatively, but to stimulate cooperation among nodes as much as possible. The simulation results have demonstrated that our solution can work well in various scenarios, and that the damage that can be caused by attackers is limited as long as the underlying monitoring mechanism does not introduce too much uncertainty.

In most of the literature, such as [396] [427] [5] [121], each node makes its decision solely on the basis of its own experienced quality of service, such as throughput. One advantage of such a scheme is that only end-to-end acknowledgment is required, which introduces very little monitoring overhead. Another advantage is that each node needs to keep only its own past state, which introduces very little storage overhead. In this chapter, we require the underlying monitoring mechanism to provide per-node monitoring, and each node needs to keep track of its balance with every other node. Although this introduces a higher overhead, such extra overhead is necessary to stimulate cooperation under noise and imperfect monitoring and in the presence of malicious behavior, as we have demonstrated in Sections 22.2 and 22.4. Otherwise, attackers can easily break down the network, and greedy users can easily increase their payoff by taking advantage of noise and monitoring inaccuracy.

From the analysis in Section 22.4 we can see that the underlying monitoring plays an extremely critical role in successfully stimulating cooperation among nodes.
If the monitoring error is too high (i.e., high pe and pf ), then this can easily be taken advantage of by both malicious and selfish nodes. A robust and effective monitoring system will be a key to the successful deployment of autonomous mobile ad hoc networks in hostile environments, which also poses new research challenges. Further, the overhead associated with the underlying monitoring has not been included in our analysis, which may be crucial in practical implementation. In general, the higher the accuracy of the monitoring scheme, the larger the overhead it may incur. It is also worth mentioning that the security of the strategy presented here also relies on the existing secure protocols to achieve secure access control and secure authentication, and to defend against attacks launched during the route-discovery procedure, such as those in [496] [161] [181] [335] [379] [491] [182] [183] [172] [490] [483]. In general, besides dropping packets and injecting traffic, there are other types of attack, such as jamming and slander. In this chapter our intention is not to address all these attacks, but to provide insight into stimulating cooperation in hostile environments under noise and imperfect monitoring. Since the security of a system is determined by its weakest link, exploiting the possible system vulnerability is also a very important issue.
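To see how monitoring error leaves room for exploitation, consider a toy model of the observation process. The interpretation below of pe (a forwarded packet reported as dropped) and pf (a dropped packet reported as forwarded) is our assumption, used only for illustration.

```python
import random

def observed_forwards(n_forwarded, n_dropped, pe, pf, rng):
    """Packets credited as forwarded under imperfect monitoring.

    A genuinely forwarded packet is missed with probability pe, and a dropped
    packet is wrongly credited with probability pf (assumed semantics).
    """
    credited = sum(rng.random() >= pe for _ in range(n_forwarded))
    credited += sum(rng.random() < pf for _ in range(n_dropped))
    return credited

rng = random.Random(1)
# A fully cooperative node forwards 10,000 packets; with pe = 0.2 roughly
# 2,000 of them are never credited, so its recorded balance drifts downward
# even though it cheated no one; this is exactly the slack that greedy or
# malicious nodes can exploit when pe and pf are high.
credited = observed_forwards(10_000, 0, pe=0.2, pf=0.05, rng=rng)
assert 7_500 < credited < 8_500
```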
22.7 Summary and bibliographical notes

In this chapter we have investigated the issue of cooperation stimulation for autonomous mobile ad hoc networks in a realistic context, where the communication channels are error-prone, the underlying monitoring is imperfect, and the environment is hostile, with possible malicious behavior. We have identified the underlying reasons why stimulating cooperation among nodes in such scenarios is extremely challenging. Unlike most existing work, whose goal is to force all nodes to act fully cooperatively, our goal is to stimulate cooperation among selfish nodes as much as possible through reciprocal altruism. We have devised a set of reputation-based, attack-resistant cooperation-stimulation strategies, which are completely self-organizing and fully distributed, and do not require any tamper-proof hardware or a central banking or billing service. Both theoretical analysis and extensive simulation studies have demonstrated that, although there may exist a gap between the game model and reality, the game-theoretic approach can still provide thought-provoking insights and useful guidelines when devising cooperation strategies, and that the devised strategies can effectively stimulate cooperation among selfish nodes in various scenarios while remaining robust against attacks. Interested readers can refer to [488].

Game theory has been widely used to mathematically analyze cooperation enforcement in autonomous ad hoc networks, e.g., in [396] [427] [57] [291] [5] [121]. In [396], Srinivasan et al. provided a mathematical framework for cooperation in ad hoc networks by focusing on the energy-efficient aspects of cooperation. In [121], Felegyhazi et al. defined a game model and identified the conditions under which cooperation strategies can form an equilibrium. In [5], Altman et al.
studied the packet-forwarding problem in a non-cooperative game-theoretic framework and provided a simple punishing mechanism considering the end-to-end performance objective of the nodes. The study of selfish behavior in ad hoc networks has also been addressed in [427] [57]. All these schemes focus on selfish behavior, and most of them study cooperation enforcement under a repeated-game framework.

The schemes presented in [396] [427] [121] also relate directly to this chapter. In [396], Srinivasan et al. studied cooperation in ad hoc networks by focusing on the energy-efficient aspects of cooperation: in their TFT-based solution, the nodes are classified into different energy classes, and the behavior of each node depends on the energy classes of the participants of each connection. They demonstrated that, if two nodes belong to the same class, they should apply the same packet-forwarding ratio. Similar TFT-based approaches were also considered by Felegyhazi et al. in [121]. In [427], Urpi et al. claimed that it is not possible to force a node to forward more packets than it sends on average, and then concluded that cooperation can be enforced in a mobile ad hoc network provided that enough members of the network agree on it, as long as no node has to forward more traffic than it generates.
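The TFT-style forwarding rule attributed to [396] and [121] can be caricatured in one function. The generosity term below is our own addition (a common noise-robust variant of tit-for-tat), not part of those schemes.

```python
def tft_forwarding_ratio(observed_peer_ratio, generosity=0.05):
    """Forwarding ratio to apply toward a peer in the next period.

    Mirror the ratio observed from the peer, plus a small generosity term so
    that noise and imperfect monitoring do not collapse cooperation.
    """
    return min(1.0, observed_peer_ratio + generosity)

# A peer observed forwarding 60% of packets is answered with 65% rather than
# outright punishment, so cooperation can recover once the peer improves.
assert abs(tft_forwarding_ratio(0.6) - 0.65) < 1e-9
assert tft_forwarding_ratio(1.0) == 1.0
```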
References
[1] B. Atakan and O. B. Akan. Biologically-inspired spectrum sharing in cognitive radio networks, in IEEE Wireless Communications and Networking Conference (WCNC), pages 43–48, 2007.
[2] A. Al Daoud, T. Alpcan, S. Agarwal, and M. Alanyali. A Stackelberg game for pricing uplink power in wide-band cognitive radio networks, in Proceedings of the 47th IEEE Conference on Decision and Control, pages 1422–1427, 2008.
[3] L. Anderegg and S. Eidenbenz. Ad hoc-VCG: a truthful and cost-efficient routing protocol for mobile ad hoc networks with selfish agents, in Proceedings of the 9th Annual International Conference on Mobile Computing and Networking, pages 245–259, 2003.
[4] N. Ahmed, D. Hadaller, and S. Keshav. GUESS: gossiping updates for efficient spectrum sensing, in Proceedings of the 1st International Workshop on Decentralized Resource Sharing in Mobile Computing and Networking, pages 12–17. New York: ACM Press, 2006.
[5] E. Altman, A. A. Kherani, P. Michiardi, and R. Molva. Non-cooperative Forwarding in Ad-Hoc Networks. Technical report, INRIA, Sophia Antipolis, France, 2004.
[6] E. Altman, A. A. Kherani, P. Michiardi, and R. Molva. Non-cooperative forwarding in ad-hoc networks, in Networking 2005, pages 486–498. Berlin: Springer, 2005.
[7] I. F. Akyildiz, W.-Y. Lee, M. C. Vuran, and S. Mohanty. NeXt generation/dynamic spectrum access/cognitive radio wireless networks: a survey. Computer Networks, 50:2127–2159, 2006.
[8] L. M. Ausubel and P. Milgrom. Ascending auctions with package bidding. Frontiers of Theoretical Economics, 1(1):1–42, 2002.
[9] A. Attar, M. R. Nakhai, and A. H. Aghvami. Cognitive radio game for secondary spectrum access problem. IEEE Transactions on Wireless Communications, 8(4):2121–2131, 2009.
[10] R. Castaneda, A. Nasipuri, and S. R. Das. Performance of multipath routing for on-demand protocols in ad hoc networks. ACM/Kluwer Mobile Networks and Applications (MONET) Journal, 6(4):339–349, 2001.
[11] A. Abdul-Rahman and S. Hailes. A distributed trust model, in Proceedings of the 1997 New Security Paradigms Workshop, pages 48–60. New York: ACM Press, 1998.
[12] R. Axelrod. The Evolution of Cooperation. New York: Basic Books, 1984.
[13] R. Axelrod. The Complexity of Cooperation: Agent-Based Models of Competition and Collaboration. Princeton, NJ: Princeton University Press, 1997.
[14] M. Bloem, T. Alpcan, and T. Başar. A Stackelberg game for power control and channel allocation in cognitive radio networks, in Proceedings of the 2nd International Conference on Performance Evaluation Methodologies and Tools, 2007.
[15] S. Buchegger and J.-Y. Le Boudec. Performance analysis of the CONFIDANT protocol, in MobiHoc, pages 226–236, 2002.
[16] S. Buchegger and J.-Y. Le Boudec. The effect of rumor spreading in reputation systems in mobile ad-hoc networks, in Proceedings of WiOpt ’03, 2003.
[17] L. Badia, A. Erta, L. Lenzini, and M. Zorzi. A general interference-aware framework for joint routing and link scheduling in wireless mesh networks. IEEE Network, 22(1):32–38, 2008.
[18] D. P. Bertsekas. Network Optimization: Continuous and Discrete Models. Belmont, MA: Athena Scientific, 1999.
[19] D. P. Bertsekas. Dynamic Programming and Optimal Control, volumes 1 and 2. Belmont, MA: Athena Scientific, second edition, 2000.
[20] D. P. Bertsekas. Nonlinear Programming. Belmont, MA: Athena Scientific, 2003.
[21] S. Ball and A. Ferguson. Consumer applications of cognitive radio defined networks, in First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’05), pages 518–525, 2005.
[22] R. Bacchus, A. Fertner, C. Hood, and D. Roberson. Long-term, wide-band spectral monitoring in support of dynamic spectrum access networks at the IIT Spectrum Observatory, in 3rd IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’08), 2008.
[23] M. Blaze, J. Feigenbaum, and J. Lacy. Decentralized trust management, in Proceedings of the 1996 IEEE Symposium on Security and Privacy, pages 164–173, 1996.
[24] U. Berthold, F. Fu, M. van der Schaar, and F. K. Jondral. Detection of spectral resources in cognitive radios using reinforcement learning, in Third IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’08), pages 1–5, 2008.
[25] D. Bertsekas and R. Gallager. Data Networks. Englewood Cliffs, NJ: Prentice-Hall, second edition, 1992.
[26] L. Buttyan and J. P. Hubaux. Enforcing service availability in mobile ad-hoc network, in The First Annual Workshop on Mobile Ad Hoc Networking & Computing (MobiHoc 2000), 2000.
[27] L. Buttyan and J.-P. Hubaux. Stimulating cooperation in self-organizing mobile ad hoc networks. Mobile Networks and Applications, 8(5):579–592, 2003.
[28] N. Bambos and S. Kandukuri. Power controlled multiple access (PCMA) in wireless communication networks, in IEEE INFOCOM, pages 386–395, 2000.
[29] N. Bambos and S. Kandukuri. Multimodal dynamic multiple access in wireless packet networks, in IEEE INFOCOM, 2001.
[30] S. Buchegger and J.-Y. Le Boudec. Performance analysis of the CONFIDANT protocol, in Proceedings of the 3rd ACM International Symposium on Mobile Ad Hoc Networking & Computing, pages 226–236, 2002.
[31] E. Blossom. GNU radio: tools for exploring the radio frequency spectrum. Linux Journal, page 122, 2004.
[32] J. Broch, D. A. Maltz, D. B. Johnson, Y. Hu, and J. G. Jetcheva. A performance comparison of multi-hop wireless ad hoc network routing protocols, in ACM MobiCom ’98, 1998.
[33] V. Bhaskar and I. Obara. Belief-based equilibria in the repeated prisoners’ dilemma with private monitoring. Journal of Economic Theory, 102:40–69, 2002.
[34] S. Boyd. Convex optimization of graph Laplacian eigenvalues, in Proceedings of the International Congress of Mathematicians, volume 3, pages 1311–1319, 2006.
[35] K. Bian and J. M. Park. Segment-based channel assignment in cognitive radio ad hoc networks, in 2nd International Conference on Cognitive Radio Oriented Wireless Networks and Communications (CrownCom ’07), pages 327–335, 2007.
[36] J. W. Brewer. Kronecker products and matrix calculus in system theory. IEEE Transactions on Circuits and Systems, 25(9):772–781, 1978.
[37] A. G. Barto and R. S. Sutton. Reinforcement Learning: An Introduction. Cambridge, MA: MIT Press, 1998.
[38] T. X. Brown and A. Sethi. Potential cognitive radio denial-of-service vulnerabilities and protection countermeasures: a multi-dimensional analysis and assessment. Mobile Networks and Applications, 13(5):516–532, 2008.
[39] J. Bater, H. Tan, K. Brown, and L. Doyle. Modelling interference temperature constraints for spectrum access in cognitive radio networks, in IEEE International Conference on Communications, pages 6493–6498, 2007.
[40] J. L. Burbank. Security in cognitive radio networks: the required evolution in approaches to wireless network security, in 3rd International Conference on Cognitive Radio Oriented Wireless Networks and Communications (CrownCom), pages 1–7, 2008.
[41] S. P. Boyd and L. Vandenberghe. Convex Optimization. Cambridge: Cambridge University Press, 2004.
[42] V. Bhandari and N. H. Vaidya. Connectivity and capacity of multi-channel wireless networks with channel switching constraints, in 26th IEEE International Conference on Computer Communications (INFOCOM), pages 785–793, 2007.
[43] P. Bajari and J. Yeo. Auction design and tacit collusion in FCC spectrum auctions. Information Economics and Policy, 21(2):90–100, 2009.
[44] H. Celebi and H. Arslan. Utilization of location information in cognitive wireless networks. IEEE Wireless Communications, 14(4):6–13, 2007.
[45] K. R. Chowdhury and I. F. Akyildiz. Cognitive wireless mesh networks with dynamic spectrum access. IEEE Journal on Selected Areas in Communications, 26(1):168–181, 2008.
[46] J. Chapin and V. Bose. The Vanu software radio system, in Software Defined Radio Technical Conference, 2002.
[47] D. Cabric and R. W. Brodersen. Physical layer design issues unique to cognitive radio systems, in IEEE 16th International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), volume 2, 2005.
[48] C. Cordeiro and K. Challapali. C-MAC: a cognitive MAC protocol for multi-channel wireless networks, in 2nd IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’07), pages 147–157, 2007.
[49] C. Cordeiro, K. Challapali, D. Birru et al. IEEE 802.22: the first worldwide wireless standard based on cognitive radios, in First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’05), pages 328–337, 2005.
[50] C. Cordeiro, K. Challapali, and M. Ghosh. Cognitive PHY and MAC layers for dynamic spectrum access and sharing of TV bands, in Proceedings of the First International Workshop on Technology and Policy for Accessing Spectrum, 2006.
[51] L. S. Cardoso, M. Debbah, P. Bianchi, and J. Najim. Cooperative spectrum sensing using random matrix theory, in 3rd International Symposium on Wireless Pervasive Computing, pages 334–338, 2008.
[52] D. Chen, D. Z. Du, X. D. Hu et al. Approximations for Steiner trees with minimum number of Steiner points. Journal of Global Optimization, 18(1):17–33, 2000.
[53] X. Cheng, D. Z. Du, L. Wang, and B. Xu. Relay sensor placement in wireless sensor networks. Wireless Networks, 14(3):347–355, 2008.
[54] D. Clarke, J.-E. Elien, C. Ellison et al. Certificate chain discovery in SPKI/SDSI. Journal of Computer Security, 9(4):285–322, 2001.
[55] T. C. Clancy and N. Goergen. Security in cognitive radio networks: threats and mitigation, in 3rd International Conference on Cognitive Radio Oriented Wireless Networks and Communications (CrownCom), pages 1–8, 2008.
[56] H. S. Chen, W. Gao, and D. G. Daut. Spectrum sensing using cyclostationary properties and application to IEEE 802.22 WRAN, in IEEE GLOBECOM, pages 3133–3138, 2007.
[57] J. Crowcroft, R. Gibbens, F. Kelly, and S. Ostring. Modelling incentives for collaboration in mobile ad hoc networks, in WiOpt ’03, 2003.
[58] J. Crowcroft, R. Gibbens, F. Kelly, and S. Östring. Modelling incentives for collaboration in mobile ad hoc networks. Performance Evaluation, 57(4):427–439, 2004.
[59] S. Capkun and J.-P. Hubaux. BISS: building secure routing out of an incomplete set of security associations, in WiSe, 2003.
[60] F. R. K. Chung. Spectral Graph Theory. New York: American Mathematical Society, 1997.
[61] T. Clausen, P. Jacquet, A. Laouiti et al. Optimized Link State Routing Protocol. Internet-Draft, draft-ietf-manet-olsr-06.txt, September 2001.
[62] T. C. Clancy, Z. Ji, B. Wang, and K. J. R. Liu. Planning approach to dynamic spectrum access in cognitive radio networks, in IEEE Global Communications Conference, 2007.
[63] S. T. Chung, S. J. Kim, J. Lee, and J. M. Cioffi. A game-theoretic approach to power allocation in frequency-selective Gaussian interference channels, in IEEE International Symposium on Information Theory, page 316, 2003.
[64] C. G. Cassandras and S. Lafortune. Introduction to Discrete Event Systems. Norwell, MA: Kluwer Academic Publishers, 1999.
[65] T. Clancy. Achievable capacity under the interference temperature model, in 26th IEEE International Conference on Computer Communications (INFOCOM), pages 794–802, 2007.
[66] T. Clancy. Formalizing the interference temperature model. Wiley Journal on Wireless Communications and Mobile Computing, 7(9):1077–1086, 2007.
[67] T. C. Clancy. On the use of interference temperature for dynamic spectrum access. Annals of Telecommunications, 64(7):573–585, 2009.
[68] W. Chen, K. B. Letaief, and Z. Cao. A joint coding and scheduling method for delay optimal cognitive multiple access, in IEEE International Conference on Communications (ICC ’08), pages 3558–3562, 2008.
[69] G. Cheng, W. Liu, Y. Li, and W. Cheng. Joint on-demand routing and spectrum assignment in cognitive radio networks, in IEEE International Conference on Communications (ICC ’07), pages 6499–6503, 2007.
[70] G. Cheng, W. Liu, Y. Li, and W. Cheng. Spectrum aware on-demand routing in cognitive radio networks, in 2nd IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’07), pages 571–574, 2007.
[71] D. Cabric, S. M. Mishra, and R. W. Brodersen. Implementation issues in spectrum sensing for cognitive radios, in Thirty-Eighth Asilomar Conference on Signals, Systems and Computers, volume 1, 2004.
[72] K. Challapali, S. Mangold, and Z. Zhong. Spectrum agile radio: detecting spectrum opportunities, in International Symposium on Advanced Radio Technologies, 2004.
[73] K. Chen and K. Nahrstedt. iPass: an incentive compatible auction scheme to enable packet forwarding service in MANET, in ICDCS ’04, 2004.
[74] R. Chen and J. M. Park. Ensuring trustworthy spectrum sensing in cognitive radio networks, in 1st IEEE Workshop on Networking Technologies for Software Defined Radio Networks, pages 110–119, 2006.
[75] R. Chen, J. M. Park, and K. Bian. Robust distributed spectrum sensing in cognitive radio networks, in 27th Conference on Computer Communications (IEEE INFOCOM), pages 1876–1884, 2008.
[76] R. Chen, J. M. Park, and J. H. Reed. Defense against primary user emulation attacks in cognitive radio networks. IEEE Journal on Selected Areas in Communications, 26(1):25–37, 2008.
[77] R. Cressman. Evolutionary Dynamics and Extensive Form Games. Cambridge, MA: MIT Press, 2003.
[78] P. Cramton, Y. Shoham, and R. Steinberg. Combinatorial Auctions. Cambridge, MA: MIT Press, 2006.
[79] T. M. Cover and J. A. Thomas. Elements of Information Theory. New York: Wiley-Interscience, 1990.
[80] J.-H. Chang and L. Tassiulas. Maximum lifetime routing in wireless sensor networks. IEEE/ACM Transactions on Networking, 12(4):609–619, 2004.
[81] D. Cabric, A. Tkachenko, and R. W. Brodersen. Experimental study of spectrum sensing based on energy detection and network cooperation, in Proceedings of the First International Workshop on Technology and Policy for Accessing Spectrum, 2006.
[82] D. Cabric, A. Tkachenko, and R. W. Brodersen. Spectrum sensing measurements of pilot, energy, and collaborative detection, in Military Communications Conference (MILCOM), pages 1–7, 2006.
[83] Z. Chair and P. K. Varshney. Optimal data fusion in multiple sensor detection systems. IEEE Transactions on Aerospace and Electronic Systems, 22:98–101, 1986.
[84] T. C. Clancy and D. Walker. Spectrum shaping for interference management in cognitive radio networks, in SDR Forum Technical Conference, 2006.
[85] Y. Chen, G. Yu, Z. Zhang, H. H. Chen, and P. Qiu. On cognitive radio networks with opportunistic power control strategies in fading channels. IEEE Transactions on Wireless Communications, 7(7):2752–2761, 2008.
[86] L. Cao and H. Zheng. Distributed spectrum allocation via local bargaining, in IEEE Communications Society Conference on Sensor, Mesh and Ad Hoc Communications and Networks (SECON), pages 475–486, 2005.
[87] L. Cao and H. Zheng. Understanding the power of distributed coordination for dynamic spectrum management. Mobile Networks and Applications, 13(5):477–497, 2008.
[88] L. Cao and H. Zheng. Distributed rule-regulated spectrum sharing. IEEE Journal on Selected Areas in Communications, 26(1):130–143, 2008.
[89] L. Cao and H. Zheng. SPARTA: stable and efficient spectrum access in next generation dynamic spectrum networks, in IEEE INFOCOM, 2008.
[90] D. Chen, Q. Zhang, and W. Jia. Aggregation aware spectrum assignment in cognitive ad-hoc networks, in 3rd International Conference on Cognitive Radio Oriented Wireless Networks and Communications (CrownCom 2008), pages 1–6, 2008.
[91] T. Chen, H. Zhang, M. D. Katz, and Z. Zhou. Swarm intelligence based dynamic control channel assignment in CogMesh, in Proceedings of the IEEE International Conference on Communications (ICC ’08), 2008.
[92] T. Chen, H. Zhang, G. M. Maggio, and I. Chlamtac. Topology management in CogMesh: a cluster-based cognitive radio mesh network, in IEEE International Conference on Communications (ICC ’07), pages 6516–6521, 2007.
[93] Y. Chen, Q. Zhao, and A. Swami. Bursty traffic in energy-constrained opportunistic spectrum access, in IEEE Global Communications Conference (Globecom), 2007.
[94] Y. Chen, Q. Zhao, and A. Swami. Joint design and separation principle for opportunistic spectrum access in the presence of sensing errors. IEEE Transactions on Information Theory, 54(5):2053–2071, 2008.
[95] Y. Chen, Q. Zhao, and A. Swami. Distributed spectrum sensing and access in cognitive radio networks with energy constraint. IEEE Transactions on Signal Processing, 57(2):783–797, 2009.
[96] D. Abreu, P. Milgrom, and D. Pearce. Toward a theory of discounted repeated games with imperfect monitoring. Econometrica, 58(5):1041–1063, 1990.
[97] F. Digham, M. Alouini, and M. Simon. On the energy detection of unknown signals over fading channels, in Proceedings of the IEEE International Conference on Communications, volume 5, pages 3575–3579, 2003.
[98] R. Dawkins. The Selfish Gene. Oxford: Oxford University Press, second edition, 1990.
[99] R. J. DeGroot, D. P. Gurney, K. Hutchinson et al. A cognitive-enabled experimental system, in First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’05), pages 556–561, 2005.
[100] A. L. Drozd, I. P. Kasperovich, C. E. Carroll, and A. C. Blackburn. Computational electromagnetics applied to analyzing the efficient utilization of the RF transmission hyperspace, in IEEE/ACES International Conference on Wireless Communications and Applied Computational Electromagnetics, pages 1077–1085, 2005.
[101] R. Day and P. Milgrom. Core-selecting package auctions. International Journal of Game Theory, 36(3):393–407, 2008.
[102] T. Do and B. L. Mark. Joint spatial–temporal spectrum sensing for cognitive radio networks, in CISS, 2009.
[103] N. Devroye, P. Mitran, and V. Tarokh. Achievable rates in cognitive radio. IEEE Transactions on Information Theory, 52(5):1813–1827, 2006.
[104] H. A. David and H. N. Nagaraja. Order Statistics. New York: Wiley-Interscience, 2003.
[105] C. Doerr, M. Neufeld, J. Fifield et al. MultiMAC: an adaptive MAC framework for dynamic radio networking, in First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’05), pages 548–555, 2005.
[106] M. Di Renzo, L. Imbriglio, F. Graziosi, and F. Santucci. Cooperative spectrum sensing for cognitive radio networks with amplify and forward relaying over correlated log-normal shadowing, in Proceedings of the Tenth ACM International Symposium on Mobile Ad Hoc Networking and Computing, pages 341–342, 2009.
[107] D. V. Djonin, Q. Zhao, and V. Krishnamurthy. Optimality and complexity of opportunistic spectrum access: a truncated Markov decision process formulation, in IEEE International Conference on Communications (ICC ’07), pages 5787–5792, 2007.
[108] G. Ellison. Cooperation in the prisoner’s dilemma with anonymous random matching. The Review of Economic Studies, 61:567–588, 1994.
[109] A. O. Ercan, J. Lee, S. Pollin, and J. M. Rabaey. A revenue enhancing Stackelberg game for owners in opportunistic spectrum access, in 3rd IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’08), 2008.
[110] R. Etkin, A. Parekh, and D. Tse. Spectrum sharing for unlicensed bands. IEEE Journal on Selected Areas in Communications, 25(3):517–528, 2007.
[111] S. Eidenbenz, G. Resta, and P. Santi. COMMIT: a sender-centric truthful and energy-efficient routing protocol for ad hoc networks with selfish nodes, in IEEE Parallel and Distributed Processing Symposium, 2005.
[112] A. El-Sherif, A. K. Sadek, and K. J. R. Liu. On spectrum sharing in cooperative multiple access networks, in IEEE Global Communications Conference, 2008.
[113] B. Farhang-Boroujeny. Filter bank spectrum sensing for cognitive radios. IEEE Transactions on Signal Processing, 56(5):1801–1811, 2008.
[114] FCC. Spectrum policy task force report. FCC Document ET Docket No. 02-135, 2002.
[115] FCC. Establishment of interference temperature metric to quantify and manage interference and to expand available unlicensed operation in certain fixed mobile and satellite frequency bands. FCC Document ET Docket No. 03-289, 2003.
[116] FCC. Facilitating opportunities for flexible, efficient and reliable spectrum use employing cognitive radio technologies: notice of proposed rule making and order. FCC Document ET Docket No. 03-108, 2003.
[117] W. Feng, J. Cao, C. Zhang, and C. Liu. Joint optimization of spectrum handoff scheduling and routing in multi-hop multi-radio cognitive networks, in 29th IEEE International Conference on Distributed Computing Systems, pages 85–92, 2009.
[118] K. Fujisawa, Y. Futakata, M. Kojima et al. SDPA-M (SemiDefinite Programming Algorithm in MATLAB) Users Manual Version 6.2.0. Tokyo: Department of Mathematics and Computer Science, Tokyo Institute of Technology, 2005.
[119] A. R. Fattahi, F. Fu, M. van der Schaar, and F. Paganini. Mechanism-based resource allocation for multimedia transmission over spectrum agile wireless networks. IEEE Journal on Selected Areas in Communications, 25(3):601–612, 2007.
[120] A. Fehske, J. D. Gaeddert, and J. H. Reed.
Secure cooperation stimulation under noise and imperfect monitoring
[110] R. Etkin, A. Parekh, and D. Tse. Spectrum sharing for unlicensed bands. IEEE Journal on Selected Areas in Communications, 25(3):517–528, 2007.
[111] S. Eidenbenz, G. Resta, and P. Santi. COMMIT: a sender-centric truthful and energy-efficient routing protocol for ad hoc networks with selfish nodes, in IEEE Parallel and Distributed Processing Symposium, 2005.
[112] A. El-Sherif, A. K. Sadek, and K. J. R. Liu. On spectrum sharing in cooperative multiple access networks, in IEEE Global Communications Conference, 2008.
[113] B. Farhang-Boroujeny. Filter bank spectrum sensing for cognitive radios. IEEE Transactions on Signal Processing, 56(5):1801–1811, 2008.
[114] FCC. Spectrum policy task force report. FCC Document ET Docket No. 02-135, 2002.
[115] FCC. Establishment of interference temperature metric to quantify and manage interference and to expand available unlicensed operation in certain fixed mobile and satellite frequency bands. FCC Document ET Docket No. 03-289, 2003.
[116] FCC. Facilitating opportunities for flexible, efficient and reliable spectrum use employing cognitive radio technologies: notice of proposed rule making and order. FCC Document ET Docket No. 03-108, 2003.
[117] W. Feng, J. Cao, C. Zhang, and C. Liu. Joint optimization of spectrum handoff scheduling and routing in multi-hop multi-radio cognitive networks, in 29th IEEE International Conference on Distributed Computing Systems, pages 85–92, 2009.
[118] K. Fujisawa, Y. Futakata, M. Kojima et al. SDPA-M (SemiDefinite Programming Algorithm in MATLAB) Users Manual Version 6.2.0. Tokyo: Department of Mathematics and Computer Science, Tokyo Institute of Technology, 2005.
[119] A. R. Fattahi, F. Fu, M. van der Schaar, and F. Paganini. Mechanism-based resource allocation for multimedia transmission over spectrum agile wireless networks. IEEE Journal on Selected Areas in Communications, 25(3):601–612, 2007.
[120] A. Fehske, J. D. Gaeddert, and J. H. Reed. A new approach to signal classification using spectral correlation and neural networks, in First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’05), pages 144–150, 2005.
[121] M. Felegyhazi, J.-P. Hubaux, and L. Buttyan. Nash equilibria of packet forwarding strategies in wireless ad hoc networks. IEEE Transactions on Mobile Computing, 5(5):463–476, 2006.
[122] M. Fiedler. Algebraic connectivity of graphs. Czechoslovak Mathematical Journal, 23:298–305, 1973.
[123] D. Fudenberg and J. Tirole. Game Theory. Cambridge, MA: MIT Press, 1991.
[124] D. Fudenberg and D. K. Levine. The Theory of Learning in Games. Cambridge, MA: MIT Press, 1998.
[125] D. Fudenberg and E. Maskin. The folk theorem in repeated games with discounting or with incomplete information. Econometrica, 54(3):533–554, 1986.
[126] J. Feigenbaum, C. Papadimitriou, R. Sami, and S. Shenker. A BGP-based mechanism for lowest-cost routing, in The 21st Symposium on Principles of Distributed Computing, pages 173–182, 2002.
[127] T. Fujii and Y. Suzuki. Ad-hoc cognitive radio – development to frequency sharing system by using multi-hop network, in First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’05), pages 589–592, 2005.
[128] F. Fu and M. van der Schaar. Learning to compete for resources in wireless stochastic games. IEEE Transactions on Vehicular Technology, 58(4):1904–1919, 2009.
22.7 Summary and bibliographical notes
[129] J. A. Filar and K. Vrieze. Competitive Markov Decision Processes. Berlin: Springer, 1997.
[130] C. Ghosh and D. P. Agrawal. Channel assignment with route discovery (CARD) using cognitive radio in multi-channel multi-radio wireless mesh networks, in 1st IEEE Workshop on Networking Technologies for Software Defined Radio Networks, pages 36–41, 2006.
[131] D. Gambetta. Can we trust trust?, in Gambetta, D. (ed.) Trust: Making and Breaking Cooperative Relations, electronic edition. Oxford: Department of Sociology, University of Oxford, pages 213–237, 2000.
[132] W. A. Gardner. Signal interception: a unifying theoretical framework for feature detection. IEEE Transactions on Communications, 36(8):897–906, 1988.
[133] A. Ghosh and S. Boyd. Growing well-connected graphs, in Proceedings of the 45th IEEE Conference on Decision and Control, pages 6605–6611, 2006.
[134] S. Gandhi, C. Buragohain, L. Cao, H. Zheng, and S. Suri. A general framework for wireless spectrum auctions, in 2nd IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’07), pages 22–33, 2007.
[135] M. Grant, S. Boyd, and Y. Ye. CVX: Matlab software for disciplined convex programming. Available at http://www.stanford.edu/boyd/cvx.
[136] S. Gjerstad and J. Dickhaut. Price formation in double auctions. Games and Economic Behavior, 22:1–29, 1998.
[137] L. Giupponi and C. Ibars. Distributed cooperation in cognitive radio networks: overlay versus underlay paradigm, in IEEE 69th Vehicular Technology Conference, 2009.
[138] P. Gupta and P. R. Kumar. The capacity of wireless networks. IEEE Transactions on Information Theory, 46(2):388–404, 2000.
[139] G. Ganesan and Y. Li. Cooperative spectrum sensing in cognitive radio. IEEE Transactions on Wireless Communications, 6(6):2204–2222, 2007.
[140] G. Ganesan, Y. Li, B. Bing, and S. Li. Spatiotemporal sensing in cognitive radio networks. IEEE Journal on Selected Areas in Communications, 26(1):5–12, 2008.
[141] L. P. Goh, Z. Lei, and F. Chin. Feature detector for DVB-T signal in multipath fading channel, in 2nd International Conference on Cognitive Radio Oriented Wireless Networks and Communications (CrownCom ’07), 2007.
[142] M. Grötschel, L. Lovász, and A. Schrijver. Geometric Algorithms and Combinatorial Optimization. Berlin: Springer-Verlag, 1993.
[143] D. J. Goodman and N. B. Mandayam. Power control for wireless data. IEEE Personal Communications Magazine, 7:48–54, April 2000.
[144] M. Ghozzi, F. Marx, M. Dohler, and J. Palicot. Cyclostationarity-based test for detection of vacant frequency bands, in First International Conference on Cognitive Radio Oriented Wireless Networks and Communications, pages 1–5, 2006.
[145] C. D. Godsil and G. Royle. Algebraic Graph Theory. New York: Springer, 2001.
[146] G. Grimmett and D. Stirzaker. Probability and Random Processes. New York: Oxford University Press, 2001.
[147] S. Ganeriwal and M. B. Srivastava. Reputation-based framework for high integrity sensor networks, in Proceedings of ACM Security for Ad-hoc and Sensor Networks (SASN), 2004.
[148] A. Ghasemi and E. S. Sousa. Collaborative spectrum sensing for opportunistic access in fading environments, in First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’05), pages 131–136, 2005.
[149] S. Geirhofer, J. Z. Sun, L. Tong, and B. M. Sadler. CMAP: a real-time prototype for cognitive medium access, in IEEE Military Communications Conference (MILCOM), pages 1–7, 2007.
[150] S. Geirhofer, L. Tong, and B. Sadler. A measurement-based model for dynamic spectrum access in WLAN channels, in Proceedings of the IEEE Military Communications Conference, 2006.
[151] S. Geirhofer, L. Tong, and B. M. Sadler. Dynamic spectrum access in the time domain: modeling and exploiting white space. IEEE Communications Magazine, 45(5):66–72, 2007.
[152] S. Geirhofer, L. Tong, and B. M. Sadler. Cognitive medium access: constraining interference based on experimental models. IEEE Journal on Selected Areas in Communications, 26(1):95–105, 2008.
[153] L. Gao, P. Wu, and S. Cui. Power and rate control with dynamic programming for cognitive radios, in IEEE Global Telecommunications Conference (GLOBECOM ’07), pages 1699–1703, 2007.
[154] R. Gummadi, D. Wetherall, B. Greenstein, and S. Seshan. Understanding and mitigating the impact of RF interference on 802.11 networks. ACM SIGCOMM Computer Communication Review, 37(4):385–396, 2007.
[155] G. Ganesan and L. Ye. Cooperative spectrum sensing in cognitive radio, part I: two user networks. IEEE Transactions on Wireless Communications, 6(6):2204–2213, 2007.
[156] G. Ganesan and L. Ye. Cooperative spectrum sensing in cognitive radio, part II: multiuser networks. IEEE Transactions on Wireless Communications, 6(6):2214–2222, 2007.
[157] F. Gao, W. Yuan, W. Liu, W. Cheng, and S. Wang. Pipelined cooperative spectrum sensing in cognitive radio networks, in IEEE Wireless Communications and Networking Conference, pages 1–5, 2009.
[158] H. Frey, J. K. Lehnert, and P. Sturm. UbiBay: an auction system for mobile multihop ad-hoc networks, in ICEIS ’04, 2004.
[159] H. Harada. A software defined cognitive radio prototype, in IEEE 18th International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC ’07), pages 1–5, 2007.
[160] S. Haykin. Cognitive radio: brain-empowered wireless communications. IEEE Journal on Selected Areas in Communications, 23(2):201–220, 2005.
[161] J. P. Hubaux, L. Buttyan, and S. Capkun. The quest for security in mobile ad hoc networks, in MobiHOC, 2001.
[162] J. Huang, R. A. Berry, and M. L. Honig. Spectrum sharing with distributed interference compensation, in First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’05), pages 88–93, 2005.
[163] J. Huang, R. A. Berry, and M. L. Honig. Auction-based spectrum sharing. ACM/Springer Mobile Networks and Applications Journal (MONET), 11(3):405–418, 2006.
[164] J. Huang, R. A. Berry, and M. L. Honig. Distributed interference compensation for wireless networks. IEEE Journal on Selected Areas in Communications, 24(5):1074–1084, 2006.
[165] T. Holliday, A. Goldsmith, and P. Glynn. Wireless link adaptation policies: QoS for deadline constrained traffic with imperfect channel estimates, in IEEE ICC, 2001.
[166] M. M. Halldórsson, J. Y. Halpern, L. E. Li, and V. S. Mirrokni. On spectrum sharing games, in The 23rd Annual ACM Symposium on Principles of Distributed Computing, pages 107–114, 2004.
[167] R. A. Horn and C. R. Johnson. Matrix Analysis. Cambridge: Cambridge University Press, first edition, 1985.
[168] Z. Han, Z. Ji, and K. J. R. Liu. Dynamic distributed rate control for wireless networks by optimal cartel maintenance strategy, in IEEE Globecom, 2004.
[169] Z. Han, Z. Ji, and K. J. R. Liu. Fair multiuser channel allocation for OFDMA networks using Nash bargaining solutions and coalitions. IEEE Transactions on Communications, 53(8):1366–1376, 2005.
[170] Z. Han, Z. Ji, and K. J. R. Liu. Non-cooperative resource competition game by virtual referee in multi-cell OFDMA networks. IEEE Journal on Selected Areas in Communications, 25(6):1079, 2007.
[171] Z. Han, Z. Ji, and K. J. R. Liu. A cartel maintenance framework to enforce cooperation in wireless networks with selfish users. IEEE Transactions on Wireless Communications, 7(5):1889–1899, 2008.
[172] Y.-C. Hu, D. B. Johnson, and A. Perrig. SEAD: secure efficient distance vector routing for mobile wireless ad hoc networks. Ad Hoc Networks Journal, 1:175–192, 2003.
[173] J. W. Huang and V. Krishnamurthy. Transmission control in cognitive radio systems with latency constraints as a switching control dynamic game, in 47th IEEE Conference on Decision and Control, pages 3823–3828, 2008.
[174] Z. Han and K. J. R. Liu. Noncooperative power-control game and throughput game over wireless networks. IEEE Transactions on Communications, 53(10):1625–1629, 2005.
[175] Z. Han and K. J. R. Liu. Resource Allocation for Wireless Networks: Basics, Techniques, and Applications. Cambridge: Cambridge University Press, 2008.
[176] S. Huang, X. Liu, and Z. Ding. Opportunistic spectrum access in cognitive radio networks, in The 27th Conference on Computer Communications (IEEE INFOCOM 2008), pages 1427–1435, 2008.
[177] K. Han, J. Li, P. Zhu, and X. Wang. The frequency–time pre-allocation in unlicensed spectrum based on the games learning, in 2nd International Conference on Cognitive Radio Oriented Wireless Networks and Communications (CrownCom), pages 79–84, 2007.
[178] S. Hart and A. Mas-Colell. A simple adaptive procedure leading to correlated equilibrium. Econometrica, 68(5):1127–1150, 2000.
[179] A. Herzberg, Y. Mass, J. Michaeli, D. Naor, and Y. Ravid. Access control meets public key infrastructure or: assigning roles to strangers, in Proceedings of the 2000 IEEE Symposium on Security and Privacy, pages 2–14, May 2000.
[180] W. D. Horne. Adaptive spectrum access: using the full spectrum space, in Telecommunications Policy Research Conference, 2003.
[181] Y.-C. Hu, A. Perrig, and D. B. Johnson. Ariadne: a secure on-demand routing protocol for ad hoc networks, in MobiCom, 2002.
[182] Y.-C. Hu, A. Perrig, and D. B. Johnson. Packet leashes: a defense against wormhole attacks in wireless networks, in IEEE INFOCOM, 2003.
[183] Y.-C. Hu, A. Perrig, and D. B. Johnson. Rushing attacks and defense in wireless ad hoc network routing protocols, in WiSe, 2003.
[184] Y. Hur, J. Park, K. Kim et al. A cognitive radio (CR) testbed system employing a wideband multi-resolution spectrum sensing (MRSS) technique, in IEEE 64th Vehicular Technology Conference, 2006.
[185] Z. Han, C. Pandana, and K. J. R. Liu. A self-learning repeated game framework for optimizing packet forwarding networks, in IEEE Wireless Communications and Networking Conference, 2005.
[186] Z. Han, C. Pandana, and K. J. R. Liu. Distributive opportunistic spectrum access for cognitive radio using correlated equilibrium and no-regret learning, in IEEE Wireless Communications and Networking Conference (WCNC), pages 11–15, 2007.
[187] M. Hirsch and S. Smale. Differential Equations, Dynamical Systems, and Linear Algebra. New York: Academic Press, 1974.
[188] N. Han, S. H. Shon, J. H. Chung, and J. M. Kim. Spectral correlation based signal detection method for spectrum sensing in IEEE 802.22 WRAN systems, in The 8th International Conference on Advanced Communication Technology (ICACT ’06), volume 3, 2006.
[189] T. Himsoon, W. P. Siriwongpairat, Z. Han, and K. J. R. Liu. Lifetime maximization via cooperative nodes and relay deployment in wireless networks. IEEE Journal on Selected Areas in Communications, 25(2):306–317, 2007.
[190] Z. Han, A. L. Swindlehurst, and K. J. R. Liu. Smart deployment/movement of unmanned air vehicle to improve connectivity in MANET, in Proceedings of the IEEE Wireless Communications and Networking Conference (WCNC ’06), volume 1, pages 252–257, 2006.
[191] Y. T. Hou, Y. Shi, and H. D. Sherali. Spectrum sharing for multi-hop networking with cognitive radios. IEEE Journal on Selected Areas in Communications, 26(1):146–155, 2008.
[192] Y. T. Hou, Y. Shi, H. D. Sherali, and S. F. Midkiff. Prolonging sensor network lifetime with energy provisioning and relay node placement, in Proceedings of IEEE SECON, pages 295–304, 2005.
[193] R. Hincapie, J. Tang, G. Xue, and R. Bustamante. QoS routing in wireless mesh networks with cognitive radios, in IEEE Global Telecommunications Conference, pages 1–5, 2008.
[194] J. Hu and M. P. Wellman. Multiagent reinforcement learning: theoretical framework and an algorithm, in Proceedings of the Fifteenth International Conference on Machine Learning, pages 242–250, 1998.
[195] W. Hu, D. Willkomm, M. Abusubaih et al. Dynamic frequency hopping communities for efficient IEEE 802.22 operation. IEEE Communications Magazine, 45(5):80–87, 2007.
[196] IEEE 802.22 Working Group on Wireless Regional Area Networks. http://www.ieee802.org/22/.
[197] IEEE Computer Society LAN MAN Standards Committee. Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications, IEEE Std 802.11-2007.
[198] IEEE Standard. IEEE Std 802.11a-1999, Part 11: wireless LAN medium access control (MAC) and physical layer (PHY) specifications, 1999.
[199] A. S. Ibrahim, K. G. Seddik, and K. J. R. Liu. Connectivity-aware network maintenance via relays deployment, in Proceedings of the IEEE Wireless Communication and Networking Conference 2008 (WCNC ’08), pages 2573–2578, 2008.
[200] A. S. Ibrahim, K. G. Seddik, and K. J. R. Liu. Connectivity-aware network maintenance and repair via relays deployment. IEEE Transactions on Wireless Communications, 8(1):356–366, 2009.
[201] K. Ishizu, Y. Saito, Z. Lan, and M. Kuroda. Adaptive wireless-network testbed for cognitive radio technology, in Proceedings of the 1st International Workshop on Wireless Network Testbeds, Experimental Evaluation & Characterization, pages 18–25, 2006.
[202] O. Ileri, D. Samardzija, and N. B. Mandayam. Demand responsive pricing and competitive spectrum allocation via a spectrum server, in First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’05), pages 194–202, 2005.
[203] A. Jøsang and R. Ismail. The beta reputation system, in Proceedings of the 15th Bled Electronic Commerce Conference, June 2002.
[204] Z. Ji and K. J. R. Liu. Belief-assisted pricing for dynamic spectrum allocation in wireless networks with selfish users, in IEEE Communications Society Conference on Sensor, Mesh and Ad Hoc Communications and Networks (SECON), pages 119–127, 2006.
[205] Z. Ji and K. J. R. Liu. Collusion-resistant dynamic spectrum allocation for wireless networks via pricing, in 2nd IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’07), 2007.
[206] Z. Ji and K. J. R. Liu. Dynamic spectrum sharing: a game theoretical overview. IEEE Communications Magazine, 45(5):88–94, 2007.
[207] Z. Ji and K. J. R. Liu. Multi-stage pricing game for collusion-resistant dynamic spectrum allocation. IEEE Journal on Selected Areas in Communications, 26(1):182–191, 2008.
[208] P. Johansson, T. Larsson, N. Hedman, B. Mielczarek, and M. Degermark. Scenario-based performance analysis of routing protocols for mobile ad-hoc networks, in ACM MobiCom ’99, 1999.
[209] D. B. Johnson and D. A. Maltz. Dynamic Source Routing in Ad Hoc Wireless Networks, pages 153–179. Dordrecht: Kluwer, 1996.
[210] A. Jøsang. An algebra for assessing trust in certification chains, in Proceedings of the Network and Distributed Systems Security (NDSS ’99) Symposium, 1999.
[211] T. Jiang and D. Qu. On minimum sensing error with spectrum sensing using counting rule in cognitive radio networks, in 4th Annual International Conference on Wireless Internet (WICON ’08), pages 1–9, 2008.
[212] J. Jubin and J. D. Tornow. The DARPA packet radio network protocols. Proceedings of the IEEE, 75(1):21–32, 1987.
[213] A. Jovicic and P. Viswanath. Cognitive radio: an information-theoretic perspective, in Proceedings of the IEEE International Symposium on Information Theory, pages 2413–2417, 2006.
[214] Z. Ji, W. Yu, and K. J. R. Liu. An optimal dynamic pricing framework for autonomous mobile ad hoc networks, in IEEE INFOCOM, 2006.
[215] Z. Ji, W. Yu, and K. J. R. Liu. Cooperation enforcement in autonomous MANETs under noise and imperfect observation, in IEEE SECON ’06, 2006.
[216] Z. Ji, W. Yu, and K. J. R. Liu. A game theoretical framework for dynamic pricing-based routing in self-organized MANETs. IEEE Journal on Selected Areas in Communications, 26(7):1204–1217, 2008.
[217] J. Jia and Q. Zhang. Competitions and dynamics of duopoly wireless service providers in dynamic spectrum market, in Proceedings of the 9th ACM International Symposium on Mobile Ad Hoc Networking and Computing, pages 313–322, 2008.
[218] J. Jia, Q. Zhang, and X. Shen. HC-MAC: a hardware-constrained cognitive MAC for efficient spectrum management. IEEE Journal on Selected Areas in Communications, 26(1):106–117, 2008.
[219] J. Jia, J. Zhang, and Q. Zhang. Cooperative relay for cognitive radio networks, in IEEE INFOCOM, 2009.
[220] J. Jia, Q. Zhang, Q. Zhang, and M. Liu. Revenue generation for truthful spectrum auction in dynamic spectrum access, in Proceedings of the Tenth ACM International Symposium on Mobile Ad Hoc Networking and Computing, pages 3–12, 2009.
[221] Y. R. Kondareddy and P. Agrawal. Synchronized MAC protocol for multi-hop cognitive radio networks, in IEEE International Conference on Communications (ICC ’08), pages 3198–3202, 2008.
[222] K. Kim, I. A. Akbar, K. K. Bae et al. Cyclostationary approaches to signal detection and classification in cognitive radio, in 2nd IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’07), pages 212–215, 2007.
[223] O. Kallenberg. Foundations of Modern Probability. New York: Springer-Verlag, 1997.
[224] H. Khalife, S. Ahuja, N. Malouch, and M. Krunz. Probabilistic path selection in opportunistic cognitive radio networks, in Proceedings of IEEE Globecom, 2008.
[225] M. Kandori. Social norms and community enforcement. The Review of Economic Studies, 59:63–80, 1992.
[226] R. M. Karp. Reducibility among combinatorial problems. Complexity of Computer Computations, 43:85–103, 1972.
[227] S. Keshavamurthy and K. Chandra. Multiplexing analysis for spectrum sharing, in IEEE MILCOM, pages 1–7, 2006.
[228] H. Kushwaha and R. Chandramouli. Secondary spectrum access with LT codes for delay-constrained applications, in 4th IEEE Consumer Communications and Networking Conference (CCNC), pages 1017–1021, 2007.
[229] M. Kobayashi, G. Caire, and D. Gesbert. Impact of multiple transmit antennas in a queued SDMA/TDMA downlink, in Proceedings of the 6th IEEE Workshop on Signal Processing Advances in Wireless Communications (SPAWC), 2005.
[230] N. Khambekar, L. Dong, and V. Chaudhary. Utilizing OFDM guard interval for spectrum sensing, in IEEE Wireless Communications and Networking Conference (WCNC 2007), pages 38–42, 2007.
[231] F. Kelly. Charging and rate control for elastic traffic. European Transactions on Telecommunications, 8(1):33–37, 1997.
[232] C. Kloeck, H. Jaekel, and F. K. Jondral. Dynamic and local combined pricing, allocation and billing system with cognitive radios, in First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’05), pages 73–81, 2005.
[233] A. Kashyap, S. Khuller, and M. Shayman. Relay placement for higher order connectivity in wireless sensor networks, in IEEE INFOCOM, 2006.
[234] S. Khattab, D. Mosse, and R. Melhem. Modeling of the channel-hopping anti-jamming defense in multi-radio wireless networks, in Proceedings of the 5th Annual International Conference on Mobile and Ubiquitous Systems: Computing, Networking, and Services, pages 1–10, 2008.
[235] F. P. Kelly, A. K. Maulloo, and D. K. H. Tan. Rate control for communication networks: shadow prices, proportional fairness and stability. The Journal of the Operational Research Society, 49(3):237–252, 1998.
[236] P. J. Kolodzy. Interference temperature: a metric for dynamic spectrum utilization. International Journal of Network Management, 16(2):103–113, 2006.
[237] V. Krishna. Auction Theory. New York: Academic Press, 2009.
[238] H. Kim and K. G. Shin. In-band spectrum sensing in cognitive radio networks: energy detection or feature detection?, in ACM Mobicom, 2008.
[239] H. Kim and K. G. Shin. Fast discovery of spectrum opportunities in cognitive radio networks, in 3rd IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’08), 2008.
[240] L. Kleinrock and F. Tobagi. Packet switching in radio channels: part I – carrier sense multiple-access modes and their throughput-delay characteristics. IEEE Transactions on Communications, 23(12):1400–1416, 1975.
[241] S. Krishnamurthy, M. Thoppian, S. Venkatesan, and R. Prakash. Control channel based MAC-layer configuration, routing and situation awareness for cognitive radio networks, in IEEE Military Communications Conference (MILCOM), pages 455–460, 2005.
[242] V. G. Kulkarni. Modeling and Analysis of Stochastic Systems. Boca Raton, FL: CRC Press, 1995.
[243] P. Kyasanur and N. H. Vaidya. Protocol design challenges for multi-hop dynamic spectrum access networks, in First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’05), pages 645–648, 2005.
[244] H. Kushwaha, Y. Xing, R. Chandramouli, and H. Heffes. Reliable multimedia transmission over cognitive radio networks using fountain codes. Proceedings of the IEEE, 96(1):155–165, 2008.
[245] R. J. La and V. Anantharam. Optimal routing control: repeated game approach. IEEE Transactions on Automatic Control, 47(3):437–450, 2002.
[246] Q. Li, J. Aslam, and D. Rus. Online power-aware routing in wireless ad hoc networks, in Proceedings of the 4th Annual IEEE/ACM Mobicom, pages 97–107, 2001.
[247] L. Lai, H. El Gamal, H. Jiang, and H. V. Poor. Cognitive medium access: exploration, exploitation and competition. IEEE/ACM Transactions on Networking, 2010, to be published.
[248] N. Li and J. C. Hou. Improving connectivity of wireless ad-hoc networks, in Proceedings of the Second Annual International Conference on Mobile and Ubiquitous Systems: Networking and Services (MobiQuitous ’05), pages 314–324, 2005.
[249] M. L. Littman. Markov games as a framework for multi-agent reinforcement learning, in Proceedings of the Eleventh International Conference on Machine Learning, pages 163–169, 1994.
[250] J. Lunden, V. Koivunen, A. Huttunen, and H. V. Poor. Spectrum sensing in cognitive radios based on multiple cyclic frequencies, in 2nd International Conference on Cognitive Radio Oriented Wireless Networks and Communications (CrownCom ’07), pages 37–43, 2007.
[251] H. Li, C. Li, and H. Dai. Quickest spectrum sensing in cognitive radio, in 42nd Annual Conference on Information Sciences and Systems (CISS ’08), pages 203–208, 2008.
[252] L. Lovász. On the Shannon capacity of a graph. IEEE Transactions on Information Theory, 25:1–7, 1979.
[253] R. M. Loynes. The stability of a queue with non-independent interarrival and service times. Proceedings of the Cambridge Philosophical Society, 58(3):497–520, 1962.
[254] M. Laurent and F. Rendl. Semidefinite programming and integer programming. Handbook on Discrete Optimization, pages 393–514, 2005.
[255] M. L. Littman and C. Szepesvari. A generalized reinforcement-learning model: convergence and applications, in Proceedings of the 13th International Conference on Machine Learning, pages 310–318, 1996.
[256] A. Leu, K. Steadman, M. McHenry, and J. Bates. Ultra sensitive TV detector measurements, in First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’05), pages 30–36, 2005.
[257] K. J. R. Liu, A. K. Sadek, W. Su, and A. Kwasinski. Cooperative Communications and Networking. Cambridge: Cambridge University Press, 2008.
[258] J. N. Laneman, D. N. C. Tse, and G. W. Wornell. Cooperative diversity in wireless networks: efficient protocols and outage behavior. IEEE Transactions on Information Theory, 50(12):3062–3080, 2004.
[259] J. Lehtomaki, J. Vartiainen, M. Juntti, and H. Saarnisaari. Spectrum sensing with forward methods, in Proceedings of the IEEE Military Communications Conference, pages 1–7, 2006.
[260] G. Lei, W. Wang, T. Peng, and W. Wang. Routing metrics in cognitive radio networks, in 4th IEEE International Conference on Circuits and Systems for Communications (ICCSC), pages 265–269, 2008.
[261] G. H. Lin and G. Xue. Steiner tree problem with minimum number of Steiner points and bounded edge-length. Information Processing Letters, 69(2):53–57, 1999.
[262] E. L. Lloyd and G. Xue. Relay node placement in wireless sensor networks. IEEE Transactions on Computers, 56(1):134, 2007.
[263] K. Lee and A. Yener. Outage performance of cognitive wireless relay networks, in Proceedings of IEEE GLOBECOM 2006, pages 1–5, 2006.
[264] K. Lee and A. Yener. Throughput enhancing cooperative spectrum sensing strategies for cognitive radios, in Proceedings of IEEE ACSSC, pages 2045–2049, 2007.
[265] Y.-C. Liang, E. Peh, Y. Zeng, and A. T. Hoang. Sensing-throughput tradeoff for cognitive radio networks, in IEEE International Conference on Communications (ICC), pages 5330–5335, 2007.
[266] M. A. McHenry. NSF spectrum occupancy measurements project summary, 2005. http://www.sharedspectrum.com/inc/content/measurements/nsf/NSF_Project_Summary.pdf.
[267] K. Muraoka, M. Ariyoshi, and T. Fujii. A novel spectrum-sensing method based on maximum cyclic autocorrelation selection for cognitive radio system, in 3rd IEEE Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’08), pages 1–7, 2008.
[268] D. W. Manchala. Trust metrics, models and protocols for electronic commerce transactions, in Proceedings of the 18th IEEE International Conference on Distributed Computing Systems, pages 312–321, 1998.
[269] P. F. Marshall. Closed-form analysis of spectrum characteristics for cognitive radio performance analysis, in Third IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN), 2008.
[270] R. Matheson. The electrospace model as a frequency management tool, in International Symposium on Advanced Radio Technologies, pages 126–132, 2003.
[271] U. Maurer. Modelling a public-key infrastructure, in Proceedings of the 1996 European Symposium on Research in Computer Security (ESORICS ’96), pages 325–350, 1996.
[272] K. Maeda, A. Benjebbour, T. Asai, T. Furuno, and T. Ohya. Recognition among OFDM-based systems utilizing cyclostationarity-inducing transmission, in 2nd IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’07), pages 516–523, 2007.
[273] M. Muck, S. Buljore, P. Martigne et al. IEEE P1900.B: coexistence support for reconfigurable, heterogeneous air interfaces, in 2nd IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’07), pages 381–389, 2007.
[274] S. M. Mishra, S. Brink, R. Mahadevappa, and R. W. Brodersen. Cognitive technology for ultra-wideband/WiMax coexistence, in 2nd IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’07), pages 179–186, 2007.
[275] D. H. McKnight and N. L. Chervany. The meanings of trust. MISRC Working Paper Series, Technical Report 94-04, Carlson School of Management, University of Minnesota, 1996.
[276] G. Montenegro and C. Castelluccia. Statistically unique and cryptographically verifiable (SUCV) identifiers and addresses, in Proceedings of NDSS, 2002.
22.7 Summary and bibliographical notes
[277] S. M. Mishra, D. Cabric, C. Chang et al. A real time cognitive radio testbed for physical and link layer experiments, in First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’05), pages 562–567, 2005. [278] M. A. McHenry. NSF spectrum occupancy measurements project summary. Shared Spectrum Company, 2005. [279] A. B. MacKenzie and L. A. DaSilva. Game theory for wireless engineers. Synthesis Lectures on Communications, 1(1):1–86, 2006. [280] M. Mehta, N. Drew, G. Vardoulias, N. Greco, and C. Niedermeier. Reconfigurable terminals: an overview of architectural solutions. IEEE Communications Magazine, 39(8):82–89, 2001. [281] G. J. Minden, J. B. Evans, L. S. Searl et al. Cognitive radios for dynamic spectrum access – an agile radio for wireless innovation. IEEE Communications Magazine, 45(5):113–121, 2007. [282] G. J. Minden, J. B. Evans, L. Searl et al. KUAR: a flexible software-defined radio development platform, in 2nd IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’07), pages 428–439, 2007. [283] S. Marti, T. J. Giuli, K. Lai, and M. Baker. Mitigating routing misbehavior in mobile ad hoc networks, in Proceedings of the 6th Annual International Conference on Mobile Computing and Networking, pages 255–265, 2000. [284] J. Mitola. Cognitive Radio: An Integrated Agent Architecture for Software Defined Radio. PhD thesis, KTH Royal Institute of Technology, Stockholm, 2000. [285] N. A. Moseley, E. A. M. Klumperink, and B. Nauta. A spectrum sensing technique for cognitive radios in the presence of harmonic images, in 3rd IEEE Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’08), 2008. [286] M. Maskery, V. Krishnamurthy, and Q. Zhao. Decentralized dynamic spectrum access for cognitive radios: cooperative design of a non-cooperative game. IEEE Transactions on Communications, 57(2):459–469, 2009. [287] J. Ma and Y. Li. 
Soft combination and detection for cooperative spectrum sensing in cognitive radio networks, in IEEE Global Telecommunications Conference, pages 3139–3143, 2007. [288] J. W. Mwangoka, K. B. Letaief, and Z. Cao. Joint power control and spectrum allocation for cognitive radio networks via pricing. Physical Communication, 2(1–2):103–115, 2009. [289] D. Maldonado, B. Le, A. Hugine, T. W. Rondeau, and C. W. Bostian. Cognitive radio applications to dynamic spectrum allocation: a discussion and an illustrative example, in First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’05), pages 597–600, 2005. [290] P. Michiardi and R. Molva. Core: a COllaborative REputation mechanism to enforce node cooperation in mobile ad hoc networks, in IFIP – Communications and Multimedia Security Conference, 2002. [291] P. Michiardi and R. Molva. A game theoretical approach to evaluate cooperation enforcement mechanisms in mobile ad hoc networks, in WiOPT ’03, 2003. [292] R. Menon, A. B. MacKenzie, R. M. Buehrer, and J. H. Reed. A game-theoretic framework for interference avoidance in ad hoc networks. Proceedings of IEEE GLOBECOM 2006, 2006. [293] S. Maruyama, K. Nakano, K. Meguro, M. Sengoku, and S. Shinoda. On location of relay facilities to improve connectivity of multi-hop wireless networks, in Proceedings of the
Secure cooperation stimulation under noise and imperfect monitoring
10th Asia–Pacific Conference on Communications and 5th International Symposium on Multi-Dimensional Mobile Communications, pages 749–753, 2004. [294] B. Mohar. Some applications of Laplace eigenvalues of graphs. Graph Symmetry: Algebraic Methods and Applications, 497:227–275, 1997. [295] B. S. Manoj, R. R. Rao, and M. Zorzi. On the use of higher layer information for cognitive networking, in IEEE Global Telecommunications Conference (GLOBECOM ’07), pages 3568–3573, 2007. [296] D. Monderer and L. S. Shapley. Potential games. Games and Economic Behavior, 14(1):124–143, 1996. [297] S. M. Mishra, A. Sahai, and R. W. Brodersen. Cooperative sensing among cognitive radios, in IEEE International Conference on Communications (ICC), pages 1658–1663, 2006. [298] M. A. McHenry, K. Steadman, and M. Lofquist. Determination of detection thresholds to allow safe operation of television band “white space” devices, in 3rd IEEE Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’08), pages 1–12, 2008. [299] S. Mathur, L. Sankaranarayanan, and N. B. Mandayam. Coalitional games in Gaussian interference channels, in Proceedings of IEEE ISIT, pages 2210–2214, 2006. [300] R. B. Myerson. Optimal auction design. Mathematics of Operations Research, 6(1):58–73, 1981. [301] R. B. Myerson. Mechanism design, in S. N. Durlauf and L. E. Blume, editors, The New Palgrave Dictionary of Economics. Basingstoke: Palgrave Macmillan, 2008. [302] V. Navda, A. Bohra, S. Ganguly, and D. Rubenstein. Using channel hopping to increase 802.11 resilience to jamming attacks, in IEEE Infocom Minisymposium, pages 2526–2530, 2007. [303] J. Neel, R. M. Buehrer, J. H. Reed, and R. Gilles. Game theoretic analysis of a network of cognitive radios, in IEEE Midwest Symposium on Circuits and Systems, 2002. [304] N. Nie and C. Comaniciu. Adaptive channel allocation spectrum etiquette for cognitive radio networks. Mobile Networks and Applications, 11(6):779–797, 2006. [305] D. Niyato and E. Hossain. A game-theoretic approach to competitive spectrum sharing in cognitive radio networks, in IEEE Wireless Communications and Networking Conference, pages 16–20, 2007. [306] D. Niyato and E. Hossain. Competitive pricing for spectrum sharing in cognitive radio networks: dynamic game, inefficiency of Nash equilibrium, and collusion. IEEE Journal on Selected Areas in Communications, 26(1):192–202, 2008. [307] D. Niyato and E. Hossain. Competitive spectrum sharing in cognitive radio networks: a dynamic game approach. IEEE Transactions on Wireless Communications, 7(7):2651–2660, 2008. [308] J. F. Nash Jr. The bargaining problem. Econometrica, 18(2):155–162, 1950. [309] A. O. Nasif and B. L. Mark. Collaborative opportunistic spectrum access in the presence of multiple transmitters, in IEEE Global Telecommunications Conference (GLOBECOM), pages 1–5, 2008. [310] J. O. Neel, R. Menon, A. B. MacKenzie, J. H. Reed, and R. P. Gilles. Interference reducing networks, in 2nd International Conference on Cognitive Radio Oriented Wireless Networks and Communications (CrownCom), pages 96–104, 2007. [311] V. Naware, G. Mergen, and L. Tong. Stability and delay of finite-user slotted ALOHA with multipacket reception. IEEE Transactions on Information Theory, 51(7):2636–2656, 2005. [312] G. Noubir. On connectivity in ad hoc networks under jamming using directional antennas and mobility. Lecture Notes in Computer Science, pages 186–200, 2004.
[313] N. Nisan and A. Ronen. Computationally feasible VCG mechanisms, in ACM EC ’00, pages 242–252, 2000. [314] J. Neel, J. Reed, and R. Gilles. The role of game theory in the analysis of software radio networks, in SDR Forum Technical Conference, 2002. [315] K. E. Nolan, P. D. Sutton, L. E. Doyle et al. Dynamic spectrum access and coexistence experiences involving two independently developed cognitive radio testbeds, in 2nd International IEEE Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’07), 2007. [316] M. P. Olivieri, G. Barnett, A. Lackpour, and A. Davis. A scalable dynamic spectrum allocation system with interference mitigation for teams of spectrally agile software defined radios, in First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’05), pages 170–179, 2005. [317] Ofcom. Improving the sharing of the radio spectrum: final report. http://www.ofcom.org.uk/research/technology/overview/ese/share/. [318] M. Öner and F. Jondral. Air interface recognition for a software radio system exploiting cyclostationarity, in 15th IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC ’04), volume 3, 2004. [319] M. Öner and F. Jondral. Cyclostationarity based air interface recognition for software radio systems, in IEEE Radio and Wireless Conference, pages 263–266, 2004. [320] D.-C. Oh and Y.-H. Lee. Energy detection based spectrum sensing for sensing error minimization in cognitive radio networks. International Journal of Communication Networks and Information Security (IJCNIS), 1(1):1–5, 2009. [321] M. J. Osborne and A. Rubinstein. Bargaining and Markets. New York: Academic Press, 1990. [322] M. J. Osborne and A. Rubinstein. A Course in Game Theory. Cambridge, MA: MIT Press, 1999. [323] G. O’Shea and M. Roe. Child-proof authentication for MIPv6 (CAM). ACM Computer Communications Review, 29(2):313–318, 2001. [324] M. J. Osborne. An Introduction to Game Theory. 
New York: Oxford University Press, 2004. [325] R. G. Ogier, F. L. Templin, B. Bellur, and M. G. Lewis. Topology Broadcast Based on Reverse-Path Forwarding (TBRPF). Internet-Draft, draft-ietf-manet-tbrpf-05.txt, 2002. [326] G. Owen. Game Theory. New York: Academic Press, third edition, 1995. [327] R. Pal. Efficient routing algorithms for multi-channel dynamic spectrum access networks, in 2nd IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’07), pages 288–291, 2007. [328] A. Papoulis. Probability, Random Variables, and Stochastic Processes. New York: McGraw–Hill, third edition, 1995. [329] C. E. Perkins, E. M. Belding-Royer, and S. R. Das. Ad Hoc On Demand Distance Vector (AODV) Routing. Internet-Draft, draft-ietf-manet-aodv-10.txt, 2002. [330] C. Perkins. Ad Hoc Networking. New York: Addison–Wesley, 2000. [331] F. Perich. Policy-based network management for next generation spectrum access control, in 2nd IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’07), pages 496–506, 2007. [332] F. Perich, R. Foster, P. Tenhula, and M. McHenry. Experimental field test results on feasibility of declarative spectrum management, in 3rd IEEE Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’08), 2008.
[333] M. D. Perez-Guirao, R. Luebben, T. Kaiser, and K. Jobmann. Evolutionary game theoretical approach for IR-UWB sensor networks, in IEEE International Conference on Communications Workshops, pages 107–111, 2008. [334] A. Parsa, A. A. Gohari, and A. Sahai. Exploiting interference diversity for event-based spectrum sensing, in 3rd IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’08), 2008. [335] P. Papadimitratos and Z. Haas. Secure routing for mobile ad hoc networks, in SCS Communication Networks and Distributed Systems Modeling and Simulation Conference (CNDS 2002), 2002. [336] J. D. Poston and W. D. Horne. Discontiguous OFDM considerations for dynamic spectrum access in idle TV channels, in First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’05), pages 607–610, 2005. [337] A. Pandharipande and C. K. Ho. Spectrum pool reassignment for a cognitive OFDM-based relay system, in 2nd International Conference on Cognitive Radio Oriented Wireless Networks and Communications (CrownCom ’07), pages 90–94, 2007. [338] C. Pandana, Z. Han, and K. J. R. Liu. Cooperation enforcement and learning for optimizing packet forwarding in autonomous wireless networks. IEEE Transactions on Wireless Communications, 7(8):3150–3163, 2008. [339] P. Papadimitratos, Z. J. Haas, and E. G. Sirer. Path set selection in mobile ad hoc networks, in MobiHOC, 2002. [340] C. Pandana and K. J. R. Liu. Near-optimal reinforcement learning framework for energy-aware sensor communications. IEEE Journal on Selected Areas in Communications, 23(4):788–797, 2005. [341] E. Peh and Y.-C. Liang. Optimization for cooperative sensing in cognitive radio networks, in IEEE Wireless Communications and Networking Conference (WCNC), pages 27–32, 2007. [342] C. Pandana and K. J. R. Liu. Robust connectivity-aware energy-efficient routing for wireless sensor networks. IEEE Transactions on Wireless Communications, 7:3904–3916, 2008. [343] A. 
J. Petrin, P. M. Markus, J. R. Pfeiffenberger et al. Cognitive radio testbed and LPI, LPD waveforms, in IEEE Military Communications Conference (MILCOM 2006), pages 1–2, 2006. [344] H. V. Poor. An Introduction to Signal Detection and Estimation. New York: Springer-Verlag, second edition, 1994. [345] R. H. Porter. Optimal cartel trigger price strategies. Journal of Economic Theory, April 1983. [346] A. Papoulis and S. U. Pillai. Probability, Random Variables and Stochastic Processes. New York: McGraw–Hill, fourth edition, 2002. [347] C. E. Perkins and E. M. Royer. Ad-hoc on-demand distance vector routing, in Second IEEE Workshop on Mobile Computing Systems and Applications (WMCSA ’99), 1999. [348] J. Palicot and C. Roland. A new concept for wireless reconfigurable receivers. IEEE Communications Magazine, 41(7):124–132, 2003. [349] J. G. Proakis. Digital Communications. New York: McGraw–Hill, 2001. [350] R. L. Pickholtz, D. L. Schilling, and L. B. Milstein. Theory of spread-spectrum communications – a tutorial. IEEE Transactions on Communications, 30(5):855–884, 1982. [351] M. L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. New York: Wiley, 1994.
[352] H. N. Pham, J. Xiang, Y. Zhang, and T. Skeie. QoS-aware channel selection in cognitive radio networks: a game-theoretic approach, in IEEE Global Telecommunications Conference, pages 1–7, 2008. [353] Q. Peng, K. Zeng, J. Wang, and S. Li. A distributed spectrum sensing scheme based on credibility and evidence theory in cognitive radio context, in IEEE 17th International Symposium on Personal, Indoor and Mobile Radio Communications, pages 1–5, 2006. [354] Z. Quan, S. Cui, and A. H. Sayed. Optimal linear cooperation for spectrum sensing in cognitive radio networks. IEEE Journal of Selected Topics in Signal Processing, 2(1):28–40, 2008. [355] Z. Quan, S. Cui, A. H. Sayed, and H. V. Poor. Wideband spectrum sensing in cognitive radio networks, in IEEE International Conference on Communications, pages 901–906, 2008. [356] Z. Quan, S. J. Shellhammer, W. Zhang, and A. H. Sayed. Spectrum sensing by cognitive radios at very low SNR, in IEEE Global Communications Conference, 2009. [357] T. S. Rappaport. Wireless Communications: Principles and Practice. Upper Saddle River, NJ: Prentice–Hall, second edition, 1999. [358] J. Ratliff. A folk theorem sampler. Lecture notes, online, http://www.virtualperfection.com/gametheory/5.3.FolkTheoremSampler.1.0.pdf, 1996. [359] R. Rao and A. Ephremides. On the stability of interacting queues in a multi-access system. IEEE Transactions on Information Theory, 34:918–930, 1988. [360] F. Rashid-Farrokhi, K. J. R. Liu, and L. Tassiulas. Transmit beamforming and power control for cellular wireless systems. IEEE Journal on Selected Areas in Communications, 16(8):1437–1450, 1998. [361] F. Rashid-Farrokhi, L. Tassiulas, and K. J. R. Liu. Joint optimal power control and beamforming in wireless networks using antenna arrays. IEEE Transactions on Communications, 46(10):1313–1324, 1998. [362] M. M. Rashid, J. Hossain, E. Hossain, and V. K. Bhargava. 
Opportunistic spectrum access in cognitive radio networks: a queueing analytic model and admission controller design, in IEEE Global Telecommunications Conference (GLOBECOM ’07), pages 4647–4652, 2007. [363] D. Raychaudhuri, X. Jing, I. Seskar, K. Le, and J. B. Evans. Cognitive radio technology: from distributed spectrum coordination to adaptive network collaboration. Pervasive and Mobile Computing, 4(3):278–302, 2008. [364] J. Razavilar, K. J. R. Liu, and S. I. Marcus. Jointly optimized bit-rate/delay control policy for wireless packet networks with fading channels. IEEE Transactions on Communications, 50(3):484–494, 2002. [365] C. Raman, J. Kalyanam, I. Seskar, and N. Mandayam. Distributed spatio-temporal spectrum sensing: an experimental study, in Proceedings of the Asilomar Conference on Signals, Systems and Computers, 2007. [366] D. Raychaudhuri, N. B. Mandayam, J. B. Evans et al. CogNet: an architectural foundation for experimental cognitive radio networks within the future internet, in Proceedings of the First ACM/IEEE International Workshop on Mobility in the Evolving Internet Architecture, pages 11–16, 2006. [367] C. J. Rieser, T. W. Rondeau, C. W. Bostian, and T. M. Gallagher. Cognitive radio testbed: further details and testing of a distributed genetic algorithm based cognitive engine for programmable radios, in IEEE Military Communications Conference (MILCOM), volume 3.
[368] T. W. Rondeau, C. J. Rieser, B. Le, and C. W. Bostian. Cognitive radios with genetic algorithms: intelligent control of software defined radios, in Software Defined Radio Forum Technical Conference, 2004. [369] M. K. Reiter and S. G. Stubblebine. Resilient authentication using path independence. IEEE Transactions on Computers, 47(12):1351–1362, 1998. [370] D. Raychaudhuri, I. Seskar, M. Ott et al. Overview of the ORBIT radio grid testbed for evaluation of next-generation wireless network protocols, in IEEE Wireless Communications and Networking Conference, volume 3, 2005. [371] A. Rubinstein. Perfect equilibrium in a bargaining model. Econometrica, 50(1):97–109, 1982. [372] C. Raman, R. D. Yates, and N. B. Mandayam. Scheduling variable rate links via a spectrum server, in First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’05), pages 110–118, 2005. [373] W. Ren, Q. Zhao, and A. Swami. Power control in cognitive radio networks: how to cross a multi-lane highway, in IEEE ICASSP, 2008. [374] L. Samuelson. Evolutionary Games and Equilibrium Selection. Cambridge, MA: MIT Press, 1998. [375] A. Sahai and D. Cabric. A tutorial on spectrum sensing: fundamental limits and practical challenges, in First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’05), 2005. [376] N. S. Shankar, C. Cordeiro, and K. Challapali. Spectrum agile radios: utilization and sensing architectures, in First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’05), pages 160–169, 2005. [377] G. Sun, J. Chen, W. Guo, and K. J. R. Liu. Signal processing techniques in network-aided positioning: a survey. IEEE Signal Processing Magazine, 22(4):12–23, Jul. 2005. [378] J. E. Suris, L. A. DaSilva, Z. Han, and A. B. MacKenzie. Cooperative game theory for distributed spectrum sharing, in IEEE International Conference on Communications, pages 5282–5287, 2007. [379] K. Sanzgiri, B. 
Dahill, B. N. Levine, C. Shields, and E. M. Belding-Royer. A secure routing protocol for ad hoc networks, in Proceedings of ICNP, 2002. [380] O. Sahin and E. Erkip. Cognitive relaying with one-sided interference, in Proceedings of the Asilomar Conference on Signals, Systems and Computers, 2008. [381] Y. Song, Y. Fang, and Y. Zhang. Stochastic channel selection in cognitive radio networks, in IEEE Global Telecommunications Conference (GLOBECOM ’07), pages 4878–4882, 2007. [382] Secure Hash Standard. Federal Information Processing Standards Publication 180-1, 1995. [383] L. S. Shapley. Stochastic games. Proceedings of the National Academy of Science of the USA, 39(10):1095–1100, 1953. [384] W. Saad, Z. Han, M. Debbah, A. Hjørungnes, and T. Basar. Coalitional games for distributed collaborative spectrum sensing in cognitive radio networks, in Proceedings of IEEE INFOCOM, 2009. [385] Y. Sun, Z. Han, and K. J. R. Liu. Defense of trust management vulnerabilities in distributed networks. IEEE Communications Magazine, 46(2):112, 2008. [386] Y. Sun, Z. Han, W. Yu, and K. J. R. Liu. A trust evaluation framework in distributed networks: vulnerability analysis and defense against attacks, in Proceedings of IEEE INFOCOM, pages 230–236, 2006. [387] S. Srinivasa and S. A. Jafar. Soft sensing and optimal power control for cognitive radio, in IEEE Global Communications Conference, 2007.
[388] K. G. Shin, H. Kim, C. Cordeiro, and K. Challapali. An experimental approach to spectrum sensing in cognitive radio networks with off-the-shelf IEEE 802.11 devices, in 4th IEEE Consumer Communications and Networking Conference, pages 1154–1158, 2007. [389] H. B. Salameh, M. Krunz, and O. Younis. Distance- and traffic-aware channel assignment in cognitive radio networks, in 5th Annual IEEE Communications Society Conference on Sensor, Mesh and Ad Hoc Communications and Networks, pages 10–18, 2008. [390] A. Sankar and Z. Liu. Maximum lifetime routing in wireless ad-hoc networks, in Proceedings of IEEE INFOCOM, pages 1089–1097, 2004. [391] A. K. Sadek, K. J. R. Liu, and A. Ephremides. Cognitive multiple access via cooperation: protocol design and performance analysis. IEEE Transactions on Information Theory, 53(10):3677–3696, 2007. [392] P. D. Sutton, J. Lotze, K. E. Nolan, and L. E. Doyle. Cyclostationary signature detection in multipath Rayleigh fading environments, in 2nd International Conference on Cognitive Radio Oriented Wireless Networks and Communications (CrownCom ’07), pages 408– 413, 2007. [393] K. W. Shum, K. K. Leung, and C. W. Sung. Convergence of iterative waterfilling algorithm for Gaussian interference channels. IEEE Journal on Selected Areas in Communications, 25(6):1091–1100, 2007. [394] C. U. Saraydar, N. B. Mandayam, and D. J. Goodman. Efficient power control via pricing in wireless data networks. IEEE Transactions on Communications, 50(2):291–303, 2002. [395] J. Maynard Smith. Evolution and the Theory of Games. Cambridge: Cambridge University Press, 1982. [396] V. Srinivasan, P. Nuggehalli, C. F. Chiasserini, and R. R. Rao. Cooperation in wireless ad hoc networks, in Proceedings of IEEE INFOCOM, 2003. [397] P. D. Sutton, K. E. Nolan, and L. E. Doyle. Cyclostationary signatures in practical cognitive radio applications. IEEE Journal on Selected Areas in Communications, 26(1):13–24, 2008. [398] K. N. Steadman, A. D. Rose, and T. Nguyen. 
Dynamic spectrum sharing detectors, in 2nd IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’07), pages 276–282, 2007. [399] M. Sharma, A. Sahoo, and K. D. Nayak. Channel selection under interference temperature model in multi-hop cognitive mesh networks, in 2nd IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’07), pages 133–136, 2007. [400] M. Sharma, A. Sahoo, and K. D. Nayak. Channel modeling based on interference temperature in underlay cognitive wireless networks, in IEEE International Symposium on Wireless Communication Systems (ISWCS), pages 224–228, 2008. [401] O. Simeone, I. Stanojev, S. Savazzi et al. Spectrum leasing to cooperating secondary ad hoc networks. IEEE Journal on Selected Areas in Communications, 26(1):203, 2008. [402] A. H. Sayed, A. Tarighat, and N. Khajehnouri. Network-based wireless location. IEEE Signal Processing Magazine, 22(4):24–40, 2005. [403] Y. Selen, H. Tullberg, and J. Kronander. Sensor selection for cooperative spectrum sensing, in 3rd IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’08), 2008. [404] A. Sahai, R. Tandra, S. M. Mishra, and N. Hoven. Fundamental design tradeoffs in cognitive radio systems, in The First International Workshop on Technology and Policy for Accessing Spectrum, 2006. [405] N. Singh and X. Vives. Price and quantity competition in a differentiated duopoly. The Rand Journal of Economics, 15(4):546–554, 1984.
[406] H. Shiang and M. van der Schaar. Distributed resource management in multi-hop cognitive radio networks for delay sensitive transmission. IEEE Transactions on Vehicular Technology, 58(2):941–953, 2009. [407] S. Sridharan, S. Vishwanath, S. A. Jafar, and S. Shamai. On the capacity of cognitive relay assisted Gaussian interference channel, in IEEE International Symposium on Information Theory (ISIT), pages 549–553, 2008. [408] Y. L. Sun, W. Yu, Z. Han, and K. J. R. Liu. Information theoretic framework of trust modeling and evaluation for ad hoc networks. IEEE Journal on Selected Areas in Communications, 24(2):305–317, 2006. [409] C. Song and Q. Zhang. Achieving cooperative spectrum sensing in wireless cognitive radio networks. ACM SIGMOBILE Mobile Computing and Communications Review, 13(2):14– 25, 2009. [410] Y. Song, C. Zhang, and Y. Fang. Stochastic traffic engineering in multi-hop cognitive wireless mesh networks. IEEE Transactions on Mobile Computing, 9(3):305–316, 2009. [411] C. Sun, W. Zhang, and K. B. Letaief. Cooperative spectrum sensing for cognitive radios under bandwidth constraints, in Proceedings of IEEE WCNC, pages 1–5, 2007. [412] W. Szpankowski. Stability conditions for some multiqueue distributed system: buffered random access systems. Advances in Applied Probability, 26:498–515, 1994. [413] H. Tang. Some physical layer issues of wide-band cognitive radio systems, in First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’05), pages 151–159, 2005. [414] G. Theodorakopoulos and J. S. Baras. Trust evaluation in ad-hoc networks, in Proceedings of the ACM Workshop on Wireless Security (WiSE ’04), 2004. [415] A. Tkachenko, D. Cabric, and R. W. Brodersen. Cognitive radio experiments using reconfigurable BEE2, in Fortieth Asilomar Conference on Signals, Systems and Computers (ACSSC), pages 2041–2045, 2006. [416] A. Tkachenko, A. D. Cabric, and R. W. Brodersen. 
Cyclostationary feature detector experiments using reconfigurable BEE2, in 2nd IEEE Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’07), pages 216–219, 2007. [417] R. W. Thomas, R. S. Komali, A. B. MacKenzie, and L. A. DaSilva. Joint power and channel minimization in topology control: a cognitive network approach, in IEEE International Conference on Communications, pages 6538–6543, 2007. [418] R. M. Thrall and W. F. Lucas. N-person games in partition function form. Naval Research Logistics Quarterly, 10(1):281–298, 1963. [419] J. Tang, S. Misra, and G. Xue. Joint spectrum allocation and scheduling for fair spectrum sharing in cognitive radio wireless networks. Computer Networks, 52(11):2148–2158, 2008. [420] C. K. Toh. Ad Hoc Mobile Wireless Networks: Protocols and Systems. Upper Saddle River, NJ: Prentice–Hall, 2001. [421] C.-K. Toh. Maximum battery life routing to support ubiquitous mobile computing in wireless ad hoc networks. IEEE Communications Magazine, 39(6):138–147, 2001. [422] R. Tandra and A. Sahai. Fundamental limits on detection in low SNR under noise uncertainty, in Proceedings of WirelessCom, pages 464–469, 2005. [423] R. Tandra and A. Sahai. Noise calibration, delay coherence and SNR walls for signal detection, in 3rd IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’08), 2008.
[424] R. Tandra and A. Sahai. SNR walls for signal detection. IEEE Journal of Selected Topics in Signal Processing, 2(1):4–17, 2008. [425] A. M. Tulino and S. Verdú. Random Matrix Theory and Wireless Communications. Now Publishers Inc, 2004. [426] A. Tonmukayakul and M. B. H. Weiss. Secondary use of radio spectrum: a feasibility analysis, in Telecommunications Policy Research Conference, 2005. [427] A. Urpi, M. Bonuccelli, and S. Giordano. Modeling cooperation in mobile ad hoc networks: a formal description of selfishness, in WiOPT ’03, 2003. [428] T. Ui. A Shapley value representation of potential games. Games and Economic Behavior, 31(1):121–135, 2000. [429] R. Urgaonkar and M. J. Neely. Opportunistic scheduling with reliability guarantees in cognitive radio networks. IEEE Transactions on Mobile Computing, 8(6):766–777, 2009. [430] J. Unnikrishnan and V. V. Veeravalli. Cooperative sensing for primary detection in cognitive radio. IEEE Journal of Selected Topics in Signal Processing, 2(1):18–27, 2008. [431] M. van der Schaar and F. Fu. Spectrum access games and strategic learning in cognitive radio networks for delay-critical applications. Proceedings of the IEEE, 97(4):720–740, 2009. [432] G. Vardoulias, J. Faroughi-Esfahani, G. Clemo, and R. Haines. Blind radio access technology discovery and monitoring for software-defined radio communication systems: problems and techniques, in Second International Conference on 3G Mobile Communication Technologies, pages 306–310, 2001. [433] W. Vickrey. Counterspeculation, auctions, and competitive sealed tenders. The Journal of Finance, 16(1):8–37, 1961. [434] F. E. Visser, G. J. Janssen, and P. Pawelczak. Multinode spectrum sensing based on energy detection for dynamic spectrum access, in IEEE Vehicular Technology Conference (VTC), 2008. [435] E. Visotsky, S. Kuffner, and R. Peterson. 
On collaborative detection of TV transmissions in support of dynamic spectrum sharing, in First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’05), pages 338–345, 2005. [436] G. Vulcano, G. V. Ryzin, and C. Maglaras. Optimal dynamic auctions for revenue management. Management Science, 48(11):1388–1407, 2002. [437] J. Vartiainen, H. Sarvanko, J. Lehtomäki, M. Juntti, and M. Latva-aho. Spectrum sensing with LAD-based methods, in The 18th Annual IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), 2007. [438] L. C. Wang and C. Anderson. On the performance of spectrum handoff for link maintenance in cognitive radio, in 3rd International Symposium on Wireless Pervasive Computing (ISWPC 2008), pages 670–674, 2008. [439] L. C. Wang and A. Chen. Effects of location awareness on concurrent transmissions for cognitive ad hoc networks overlaying infrastructure-based systems. IEEE Transactions on Mobile Computing, 8(5):577–589, 2009. [440] W. Wang, Y. Cui, T. Peng, and W. Wang. Noncooperative power control game with exponential pricing for cognitive radio network, in Proceedings of IEEE VTC, pages 3125–3129, 2007. [441] F. Weidling, D. Datla, V. Petty, P. Krishnan, and G. Minden. A framework for RF spectrum measurements and analysis, in First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’05), pages 573–576, 2005. [442] J. W. Weibull. Evolutionary Game Theory. Cambridge, MA: MIT Press, 1995.
[443] M. P. Wylie-Green. Dynamic spectrum sensing by multiband OFDM radio for interference mitigation, in First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’05), pages 619–625, 2005. [444] D. Willkomm, J. Gross, and A. Wolisz. Reliable link maintenance in cognitive radio systems, in First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’05), pages 371–378, 2005. [445] D. T. C. Wong, A. T. Hoang, Y. C. Liang, and F. P. S. Chin. Dynamic spectrum access with imperfect sensing in open spectrum wireless networks, in IEEE WCNC, 2008. [446] X. Wang, P. H. Ho, and A. Wong. Towards efficient spectrum sensing for cognitive radio through knowledge-based reasoning, in 3rd IEEE Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’08), pages 1–8, 2008. [447] B. Wang, Z. Ji, and K. J. R. Liu. Primary-prioritized Markov approach for dynamic spectrum access, in 2nd IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’07), pages 507–515, 2007. [448] B. Wang, Z. Ji, and K. J. R. Liu. Self-learning repeated game framework for distributed primary-prioritized dynamic spectrum access, in Proceedings of IEEE SECON 2007, pages 631–638, 2007. [449] B. Wang, Z. Ji, K. J. R. Liu, and T. C. Clancy. Primary-prioritized Markov approach for dynamic spectrum allocation. IEEE Transactions on Wireless Communications, 8(4):1854–1865, 2009. [450] F. Wang, M. Krunz, and S. Cui. Price-based spectrum management in cognitive radio networks. IEEE Journal of Selected Topics in Signal Processing, 2(1):74–87, 2008. [451] B. Wang, K. J. R. Liu, and T. C. Clancy. Evolutionary game framework for behavior dynamics in cooperative spectrum sensing, in IEEE GLOBECOM, pages 1–5, 2008. [452] B. Wang, K. J. R. Liu, and T. C. Clancy. Evolutionary cooperative spectrum sensing game: how to collaborate? IEEE Transactions on Communications, 58(3), 2010. [453] W. Wang, X. Li, and Y. 
Wang. Truthful multicast routing in selfish wireless networks, in ACM MobiCom ’04, 2004.
[454] A. Wagstaff and N. Merricks. A subspace-based method for spectrum sensing, in SDR Forum Technical Conference, 2007.
[455] D. Willkomm, S. Machiraju, J. Bolot, and A. Wolisz. Primary users in cellular networks: a large-scale measurement study, in 3rd IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’08), 2008.
[456] E. H. Watanabe, D. S. Menasché, E. de Souza e Silva, and R. M. Leão. Modelling resource sharing dynamics of VoIP users over a WLAN using a game-theoretic approach, in IEEE INFOCOM, pages 915–923, 2008.
[457] R. W. Wolff. Stochastic Modeling and the Theory of Queues. Upper Saddle River, NJ: Prentice–Hall, 1989.
[458] B. Wild and K. Ramchandran. Detecting primary receivers for cognitive radio applications, in First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’05), pages 124–130, 2005.
[459] M. Wellens, J. Riihijarvi, M. Gordziel, and P. Mahonen. Evaluation of cooperative spectrum sensing based on large scale measurements, in 3rd IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’08), 2008.
[460] A. D. Wood and J. A. Stankovic. Denial of service in sensor networks. Computer, 35(10):54–62, 2002.
22.7 Summary and bibliographical notes
[461] A. D. Wood, J. A. Stankovic, and G. Zhou. DEEJAM: defeating energy-efficient jamming in IEEE 802.15.4-based wireless networks, in Proceedings of the 4th IEEE Conference on Sensor, Mesh and Ad Hoc Communications and Networks (SECON), 2007.
[462] B. Wang, Y. Wu, Z. Ji, K. J. R. Liu, and T. C. Clancy. Game theoretical mechanism design methods. IEEE Signal Processing Magazine, 25(6):74–84, 2008.
[463] B. Wang, Y. Wu, and K. J. R. Liu. An anti-jamming stochastic game for cognitive radio networks. IEEE Journal on Selected Areas in Communications, submitted. http://www.ece.umd.edu/∼bebewang/publication.htm.
[464] Y. Wu, B. Wang, K. J. R. Liu, and T. C. Clancy. A multi-winner cognitive spectrum auction framework with collusion-resistant mechanisms, in 3rd IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’08), 2008.
[465] Y. Wu, B. Wang, K. J. R. Liu, and T. C. Clancy. A scalable collusion-resistant multi-winner cognitive spectrum auction game. IEEE Transactions on Communications, 57(12):3805–3816, 2009.
[466] Y. Wu, B. Wang, K. J. R. Liu, and T. C. Clancy. Repeated open spectrum sharing game with cheat-proof strategies. IEEE Transactions on Wireless Communications, 8(4):1922–1933, 2009.
[467] Q. Wang and H. Zheng. Route and spectrum selection in dynamic spectrum networks, in 3rd IEEE Consumer Communications and Networking Conference (CCNC), volume 1, 2006.
[468] Y. Xing, R. Chandramouli, S. Mangold, and S. N. Shankar. Dynamic spectrum access in open spectrum wireless networks. IEEE Journal on Selected Areas in Communications, 24(3):626–637, 2006.
[469] Y. Xin, T. Guven, and M. Shayman. Relay deployment and power control for lifetime elongation in sensor networks, in Proceedings of IEEE ICC, pages 3461–3466, 2006.
[470] Y. Xu, J. Heidemann, and D. Estrin. Geography-informed energy conservation for ad-hoc routing, in Proceedings of MOBICOM, pages 70–84, 2001.
[471] K. Xu, H. Hassanein, G. Takahara, and Q. Wang.
Relay node deployment strategies in heterogeneous wireless sensor networks: multiple-hop communication case, in Proceedings of IEEE SECON, pages 575–585, 2005.
[472] Y. Xing, C. N. Mathur, M. A. Haleem, R. Chandramouli, and K. P. Subbalakshmi. Dynamic spectrum access with QoS and interference temperature constraints. IEEE Transactions on Mobile Computing, 6(4):423–433, 2007.
[473] W. Xu, T. Wood, W. Trappe, and Y. Zhang. Channel surfing and spatial retreats: defenses against wireless denial of service, in Proceedings of the 3rd ACM Workshop on Wireless Security, pages 80–89, 2004.
[474] C. Xin, B. Xie, and C. C. Shen. A novel layered graph model for topology formation and routing in dynamic spectrum access networks, in First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’05), pages 308–317, 2005.
[475] Q. Xin, Y. Zhang, and J. Xiang. Optimal spectrum scheduling in cognitive wireless mesh networks, in International Wireless Communications and Mobile Computing Conference (IWCMC), pages 724–728, 2008.
[476] T. Yücek and H. Arslan. Spectrum characterization for opportunistic cognitive radio systems, in IEEE Military Communication Conference, MILCOM, 2006.
[477] T. Yücek and H. Arslan. A survey of spectrum sensing algorithms for cognitive radio applications. IEEE Communications Surveys & Tutorials, 11(1):116–130, 2009.
Secure cooperation stimulation under noise and imperfect monitoring
[478] R. Yates. A framework for uplink power control in cellular radio systems. IEEE Journal on Selected Areas in Communications, 13(7):1341–1348, 1995.
[479] Y. Yuan, P. Bahl, R. Chandra, T. Moscibroda, and Y. Wu. Allocating dynamic time-spectrum blocks in cognitive radio networks, in Proceedings of the 8th ACM International Symposium on Mobile Ad Hoc Networking and Computing, pages 130–139, 2007.
[480] Z. Yang, G. Cheng, W. Liu, W. Yuan, and W. Cheng. Local coordination based routing and spectrum assignment in multi-hop cognitive radio networks. Mobile Networks and Applications, 13(1):67–81, 2008.
[481] L. Yang, L. Cao, and H. Zheng. Physical interference driven dynamic spectrum management, in 3rd IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’08), 2008.
[482] W. Yu, Z. Ji, and K. J. R. Liu. Securing cooperative ad-hoc networks under noise and imperfect monitoring: strategies and game theoretic analysis. IEEE Transactions on Information Forensics and Security, 2(2):240–253, 2007.
[483] W. Yu and K. J. R. Liu. Attack-resistant cooperation stimulation in autonomous ad hoc networks. IEEE Journal on Selected Areas in Communications (Special Issue on Autonomic Communication Systems), 23(12):2260–2271, 2005.
[484] W. Yu and K. J. R. Liu. Secure cooperative mobile ad hoc networks against injecting traffic attacks, in Proceedings of IEEE SECON, pages 55–64, 2005.
[485] W. Yu and K. J. R. Liu. Stimulating cooperation and defending against attacks in self-organized mobile ad hoc networks, in Proceedings of IEEE SECON, pages 65–75, 2005.
[486] W. Yu and K. J. R. Liu. Defense against injecting traffic attacks in wireless mobile ad-hoc networks. IEEE Transactions on Information Forensics and Security, 2(2):227–239, 2007.
[487] W. Yu and K. J. R. Liu. Game theoretic analysis of cooperation stimulation and security in autonomous mobile ad hoc networks. IEEE Transactions on Mobile Computing, 6(5):507–521, 2007.
[488] W. Yu and K. J. R.
Liu. Secure cooperation in autonomous mobile ad-hoc networks under noise and imperfect monitoring: a game-theoretic approach. IEEE Transactions on Information Forensics and Security, 3(2):317–330, 2008.
[489] J. Yoon, M. Liu, and B. Noble. Sound mobility models, in MobiCom, 2003.
[490] W. Yu, Y. Sun, and K. J. R. Liu. HADOF: defense against routing disruptions in mobile ad hoc networks, in IEEE INFOCOM 2005, 2005.
[491] M. G. Zapata and N. Asokan. Securing ad hoc routing protocols, in WiSe, 2002.
[492] H. Zheng and L. Cao. Device-centric spectrum management, in First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’05), pages 56–65, 2005.
[493] S. Zhong, J. Chen, and Y. R. Yang. Sprite: a simple, cheat-proof, credit-based system for mobile ad-hoc networks, in IEEE INFOCOM, volume 3, pages 1987–1997, 2003.
[494] W. Zhu, B. Daneshrad, J. Bhatia et al. A real time MIMO OFDM testbed for cognitive radio & networking research, in Proceedings of the 1st International Workshop on Wireless Network Testbeds, Experimental Evaluation & Characterization, pages 115–116, 2006.
[495] X. Zhou, S. Gandhi, S. Suri, and H. Zheng. eBay in the sky: strategy-proof wireless spectrum auctions, in Proceedings of the 14th ACM International Conference on Mobile Computing and Networking, pages 2–13, 2008.
[496] L. Zhou and Z. Haas. Securing ad hoc networks. IEEE Network Magazine, 13(6):24–30, 1999.
[497] P. R. Zimmermann. The Official PGP User’s Guide. Cambridge, MA: MIT Press, 1995.
[498] Q. Zhang and S. A. Kassam. Finite-state Markov model for Rayleigh fading channels. IEEE Transactions on Communications, 47(11):1688–1692, 1999.
[499] Q. Zhao, B. Krishnamachari, K. Liu et al. On myopic sensing for multi-channel opportunistic access: structure, optimality, and performance. IEEE Transactions on Wireless Communications, 7:5431–5440, 2008.
[500] Y. Zhang and W. Lee. Intrusion detection in wireless ad-hoc networks, in MobiCom, 2000.
[501] Y. Zeng and Y. C. Liang. Covariance based signal detections for cognitive radio, in 2nd IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’07), pages 202–207, 2007.
[502] Y. Zeng and Y. C. Liang. Maximum–minimum eigenvalue detection for cognitive radio, in IEEE 18th International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC ’07), pages 1–5, 2007.
[503] Y. Zeng and Y. C. Liang. Spectrum-sensing algorithms for cognitive radio based on statistical covariances. IEEE Transactions on Vehicular Technology, 58(4):1804–1815, 2009.
[504] S. Zhong, L. Li, Y. G. Liu, and Y. R. Yang. On designing incentive-compatible routing and forwarding protocols in wireless ad-hoc networks, in ACM MobiCom, pages 117–131, 2005.
[505] Y. Zeng, Y. C. Liang, and R. Zhang. Blindly combined energy detection for spectrum sensing in cognitive radio. IEEE Signal Processing Letters, 15:649–652, 2008.
[506] W. Zhang, R. K. Mallik, and K. B. Letaief. Cooperative spectrum sensing optimization in cognitive radio networks, in IEEE International Conference on Communications, pages 3411–3415, 2008.
[507] Q. Zhao and B. M. Sadler. A survey of dynamic spectrum access. IEEE Signal Processing Magazine, 24(3):79–89, 2007.
[508] X. Zhu, L. Shen, and T. S. P. Yum. Analysis of cognitive radio spectrum access with optimal channel reservation. IEEE Communications Letters, 11(4):304–306, 2007.
[509] Q. Zhao, L.
Tong, A. Swami, and Y. Chen. Decentralized cognitive MAC for opportunistic spectrum access in ad hoc networks: a POMDP framework. IEEE Journal on Selected Areas in Communications, 25(3):589–600, 2007.
[510] Q. Zhang, H. Xue, and X. Kou. An evolutionary game model of resources-sharing mechanism in P2P networks, in Workshop on Intelligent Information Technology Application (IITA) 2007, pages 282–285, 2007.
[511] J. Zhang and Q. Zhang. Stackelberg game for utility-based cooperative cognitive radio networks, in Proceedings of the Tenth ACM International Symposium on Mobile Ad Hoc Networking and Computing, pages 23–32, 2009.
[512] X. Zhou and H. Zheng. TRUST: a general framework for truthful double spectrum auctions, in IEEE INFOCOM, 2009.
[513] J. Zhao, H. Zheng, and G. H. Yang. Distributed coordination in dynamic spectrum allocation networks, in First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN ’05), 2005.
Index
access point, 38
actor–critic (AC) algorithm, 253, 256
additive white Gaussian noise (AWGN), 13
ALOHA, 229
attack
  denial of service, 225
  dropping packets, 521
  in network layer, 402
  in an ad hoc network, 446
  injecting traffic, 420, 446, 521
  primary-user emulation, 17, 225
  sensing data falsification, 22, 225
auction, 73, 136, 302
  Dutch auction, 73
  English auction, 73
  first-price auction, 73
  revelation principle, 303
  second-price auction, 73, 141
backward induction, 58, 70
bad-mouthing attack, 383
bandwidth
  constraint, 304, 307
  efficiency, 226, 229, 246
bargaining game, 78
base station, 101
Bayesian equilibrium, 77
belief, 143, 144, 146, 376, 502
Bellman equation, 149, 503
BER, 353
Bertrand game, 69, 71
best response, 51
bidding ring collusion, 75, 140
black hole, 402
BPSK, 353
capacity, 90
carrier sense multiple access (CSMA), 101
cartel maintenance, 70, 317
Cauchy–Schwartz inequality, 166
cellular networks, 20, 55
central limit theorem, 126
channel hopping, 203, 225
channel model, 229
cheat-proof, 75, 123, 473
coalitional game, 80
  characteristic-function form, 81
  grand coalition, 81
  partition-function form, 81
  Shapley value, 82
  the core, 81
code-division multiple access (CDMA), 113
cognitive radio, 3, 5
  applications, 7
  cognitive capability, 5
  cognitive cycle, 6
  network architecture, 7, 26
  reconfigurability, 6
  spectrum management, 7
  spectrum sensing, 6
  spectrum sharing, 7
  spectrum white space, 6
cognitive radio platforms, 39
cognitive relaying, 30
coherent detection, 18
collusion, 446, 459
  bidding, 159
  routing, 409
combinatorial auction, 75
competitive equilibrium, 137
complete information, 286
complexity, 165, 357
concave, 62, 100, 356
  quasi-concave, 79
cone, 166
conflicting-behavior attack, 386
contagious equilibrium, 281
control channels, 202
control-channel management, 33
convergence, 35, 49, 53, 56, 60, 72, 150, 178, 190, 201, 312
  speed, 267, 292
convex hull, 64, 470
convex optimization, 162, 166, 356
cooperative communication, 226
cooperative spectrum sensing, 20
  decision fusion, 22
  distributed sensing, 24
  experiment, 24
  information sharing, 23
  interference diversity, 23
  malicious user, 22
  user selection, 21
correlated equilibrium, 65
cost function, 61, 68, 69, 71, 85
Cournot game, 69, 70
cyclostationary feature detection, 15
  cyclic autocorrelation function (CAF), 16
  cyclic spectrum density, 16
data fusion, 23
delay, 20, 28, 37, 101, 269
  constraint, 299, 304, 307, 445, 446, 558
  sensitive, 84
detection statistics
  energy detection, 13
differential equation, 185
Dijkstra’s algorithm, 358
directional antenna, 225
diversity, 369
  multiuser diversity, 5, 20, 30, 83
  spatial diversity, 5, 20, 23, 30, 83, 199, 226
  spectrum band diversity, 29
dominant strategy, 74, 161, 314
Doppler
  frequency, 257
  shift, 343
double auction, 136
  outstanding ask, 146
  spread, 146
  spread-reduction rule, 146
dynamic game, 138, 304
dynamic programming, 149, 307, 308, 311
dynamic source routing (DSR), 287, 401, 424
eigenvalue, 329, 354
eigenvector, 330, 354
energy detection, 13
  hypothesis, 13
ergodic, 252
error-correcting code, 201
evolutionarily stable strategy (ESS), 59, 184
evolutionary game, 24, 59
fading
  flat fading, 179
  frequency-selective fading, 17
  Rayleigh fading, 229
  shadowing, 21, 199
fairness
  absolute, 472
  max–min, 99, 130
  NBS, 79
  proportional, 99, 119, 120, 472
Federal Communications Commission (FCC), 3, 73
feedback channel, 231
FFT, 17
fictitious play, 65
finite-state Markov channel (FSMC), 206, 257
first-order condition, 68, 122
Folk theorem, 62–64, 276, 316, 500
frame-up attack, 402
frequency-division multiplexing, 28
game theory, 50
  bargaining game, 78
  Bertrand game, 69
  coalitional game, 80
  cooperative game, 77
  Cournot game, 69
  evolutionary game, 59
  extensive-form, 57
  in spectrum sharing, 34
  non-cooperative game, 50
  potential game, 53
  repeated game, 62, 115, 275
  Stackelberg game, 69, 72
  stochastic game, 83
  zero-sum, 211
game tree, 57, 58
graph, 327, 352
  Fiedler value, 330, 354
  incidence matrix, 354
  Laplacian matrix, 329, 354
gray hole, 402
guard
  guard band, 41
  guard time, 17
hard decision, 22, 40, 199
Hessian, 100
heterogeneous, 21, 120, 121, 129, 185, 190, 191, 196, 275, 562
hierarchical spectrum market, 71
homogeneous, 59, 127, 138, 143, 185, 186, 194
hyperplane, 235
hypothesis, 13, 18, 179
IEEE 802.11, 101, 439, 558
IEEE 802.22, 45
IEEE P1900, 45
imperfect information, 57, 58, 141, 501, 505, 511
incentive compatible, 137
indicator function, 55, 193, 233
information theory, 377, 380
injecting data packets attack (IDPA), 420
interference model
  physical, 158, 167
  protocol, 158
interference temperature, 9, 90
interference temperature limit, 10, 11
interior-point method, 165, 357
jamming, 202
Jensen’s inequality, 188
KKT, 61, 162
Lagrange multiplier, 162
learning
  based on replicator dynamics, 193
  best response, 72
  gradient-based, 72
  Minimax-Q, 84, 212, 213
  myopic, 222
  regret, 66
  reinforcement, 252, 340
  self-, 285
likelihood ratio test, 16, 19, 22
linear program, 27, 38, 82, 85, 215, 328
localization, 14, 33, 225
majority rule, 183
malicious user, 400
  spectrum sensing, 22
market equilibrium, 68
Markov chain, 27
  continuous-time, 91
  flow-balance, 92
  generator matrix, 94
  stationary probability, 92
Markov decision process, 27, 251
matched filtering, 17, 22
mean value theorem, 315
mechanism design, 76, 123
  transfer, 76, 124
medium access control, 26, 101
mobile ad hoc networks (MANET), 297, 399
mobility, 151, 289, 480, 489, 492, 539, 540, 558, 560
monopoly, 68
multi-stage game, 138, 500
multi-winner auction, 76, 156
multiple access, 12, 30, 226, 237, 246
myopic adjustment dynamic, 188
Nash bargaining solution (NBS), 78, 148
Nash equilibrium, 50, 114, 499
  mixed strategy, 51
  pure strategy, 51
  refinement, 57
  selection, 56
  uniqueness, 52
nearest neighbor, 227, 235
newcomer attack, 387
noise and imperfect monitoring, 521, 522, 547
nonlinear programming, 369
norm, 157
oligopoly, 68
on–off attack, 384
optimal auction, 74
order statistics, 120, 145
orthogonal frequency-division multiplexing (OFDM), 15–17, 30, 41, 43, 45
outage probability, 31, 32, 229, 230, 558
overhead, 23, 34, 102, 134, 147, 151, 153, 342, 390, 412, 416, 427, 438, 439
Pareto optimality, 56, 472
partially observable Markov decision process (POMDP), 31
payoff, 50
  egalitarian, 60, 99
  minmax payoff, 63
  transferrable, 81
  utilitarian, 60, 99, 119
penalty, 450, 530, 555
perfect information, 57
pilot, 18
Poisson process, 29, 35, 89
  decomposition property, 98
polynomial time, 352, 361
potential function, 53
power control, 32, 54, 55, 61, 64, 87, 254, 269, 365, 369
  graph, 33
  interference constraint, 33
price of anarchy, 71
pricing, 35, 36, 41, 60, 110, 133, 135, 136, 138, 161–163, 297, 300, 302, 466, 488, 567
primary user, 3, 4, 133
private key, 402
probability distribution
  Bernoulli, 126
  beta, 383
  negative-exponential, 89
public key, 402
punishment, 115, 275
Q-learning, 84, 212
quantization, 23
query-flooding attack, 420
queuing, 95, 230
  Markov chain, 95
  stability, 232
random waypoint model, 151, 317, 391, 411, 439, 460, 489, 491, 492, 515, 538, 558
Rayleigh fading, 112
reciprocal altruism, 448
regret learning, 66
relay, 30
replicator dynamics, 59, 185
reputation, 22, 296, 324, 374, 398, 444, 466, 468, 487
reserve price, 74, 142
revenue-equivalence theorem, 74
routing, 36, 298
rushing attack, 402
scalability, 44, 342
scheduling, 27, 29, 30, 33, 38
secondary user, 3, 4, 133
semi-definite programming (SDP), 166, 169, 356
sequential equilibrium, 58, 505
SHA-1, 425
Shapley value, 82
signal-to-interference-and-noise ratio (SINR), 55
slander attack, 446
social welfare, 158
soft decision, 22
spectrum allocation and sharing, 24
  control-channel management, 33
  cooperative, 26
  distributed, 34
  game, 34
  hierarchical access, 25
  medium access control, 26
  non-cooperative, 26
  open spectrum sharing, 25
  spectrum handoff, 28
  spectrum overlay, 26
  spectrum underlay, 25
spectrum broker, 7
spectrum handoff, 28
spectrum sensing
  coherent detection, 19
  constrained, 31
  cyclostationary feature detection, 15, 16
  domain, 12
  energy detection, 14
  feature detection, 15–17
  filter bank, 19
  interference temperature, 9
  interference-temperature limit, 10
  multitaper method, 10
  quickest detection, 19
  statistical covariance, 19
  subspace-based method, 10
spread spectrum, 225
Stackelberg game, 69, 72
standard function, 55
stochastic game, 83, 205
  policy, 84, 211
subgame perfect equilibrium (SPE), 58, 316, 471, 505
sybil attack, 387
the core, 81
theta number, 166
time-division multiple access (TDMA), 119, 353
time-spectrum block, 34
tit-for-tat, 65
trust, 374, 376, 388
unmanned air vehicle (UAV), 351, 359, 369
value iteration, 84
Vickrey–Clarke–Groves (VCG) mechanism, 75, 125
watchdog, 412, 426
WiFi, 61, 154
wireless mesh network (WMN), 38
wireless personal area network (WPAN), 156
wireless sensor networks (WSN), 249, 350
WLAN, 36
wormhole, 402