IEEE
March 2011, Vol. 49, No. 3
www.comsoc.org
COMMUNICATIONS MAGAZINE
See Free LTE-Advanced ComSoc Tutorial, Page 9
• Dynamic Spectrum Access • Cognitive Radio Networks • Future Media Internet • Network Testing
A Publication of the IEEE Communications Society
IEEE Communications Magazine
March 2011, Vol. 49, No. 3
www.comsoc.org/~ci

Director of Magazines
Andrzej Jajszczyk, AGH U. of Sci. & Tech. (Poland)

Editor-in-Chief
Steve Gorshe, PMC-Sierra, Inc. (USA)

Associate Editor-in-Chief
Sean Moore, Centripetal Networks (USA)

Senior Technical Editors
Tom Chen, Swansea University (UK)
Nim Cheung, ASTRI (China)
Nelson Fonseca, State Univ. of Campinas (Brazil)
Torleiv Maseng, Norwegian Def. Res. Est. (Norway)
Peter T. S. Yum, The Chinese U. Hong Kong (China)

Technical Editors
Sonia Aissa, Univ. of Quebec (Canada)
Mohammed Atiquzzaman, U. of Oklahoma (USA)
Paolo Bellavista, DEIS (Italy)
Tee-Hiang Cheng, Nanyang Tech. U. (Rep. Singapore)
Jacek Chrostowski, Scheelite Techn. LLC (USA)
Sudhir S. Dixit, Nokia Siemens Networks (USA)
Stefano Galli, Panasonic R&D Co. of America (USA)
Joan Garcia-Haro, Poly. U. of Cartagena (Spain)
Vimal K. Khanna, mCalibre Technologies (India)
Janusz Konrad, Boston University (USA)
Abbas Jamalipour, U. of Sydney (Australia)
Deep Medhi, Univ. of Missouri-Kansas City (USA)
Nader F. Mir, San Jose State Univ. (USA)
Amitabh Mishra, Johns Hopkins University (USA)
Sedat Ölçer, IBM (Switzerland)
Glenn Parsons, Ericsson Canada (Canada)
Harry Rudin, IBM Zurich Res. Lab. (Switzerland)
Hady Salloum, Stevens Institute of Tech. (USA)
Antonio Sánchez Esguevillas, Telefonica (Spain)
Heinrich J. Stüttgen, NEC Europe Ltd. (Germany)
Dan Keun Sung, Korea Adv. Inst. Sci. & Tech. (Korea)
Danny Tsang, Hong Kong U. of Sci. & Tech. (Hong Kong)

Series Editors
Ad Hoc and Sensor Networks
Edoardo Biagioni, U. of Hawaii, Manoa (USA)
Silvia Giordano, Univ. of App. Sci. (Switzerland)
Automotive Networking and Applications
Wai Chen, Telcordia Technologies, Inc. (USA)
Luca Delgrossi, Mercedes-Benz R&D N.A. (USA)
Timo Kosch, BMW Group (Germany)
Tadao Saito, University of Tokyo (Japan)
Consumer Communications and Networking
Madjid Merabti, Liverpool John Moores U. (UK)
Mario Kolberg, University of Stirling (UK)
Stan Moyer, Telcordia (USA)
Design & Implementation
Sean Moore, Avaya (USA)
Salvatore Loreto, Ericsson Research (Finland)
Integrated Circuits for Communications
Charles Chien (USA)
Zhiwei Xu, SST Communication Inc. (USA)
Stephen Molloy, Qualcomm (USA)
Network and Service Management Series
George Pavlou, U. of Surrey (UK)
Aiko Pras, U. of Twente (The Netherlands)
Network Testing Series
Ying-Dar Lin, National Chiao Tung University (Taiwan)
Erica Johnson, University of New Hampshire (USA)
Tom McBeath, Spirent Communications Inc. (USA)
Eduardo Joo, Empirix Inc. (USA)
Topics in Optical Communications
Hideo Kuwahara, Fujitsu Laboratories, Ltd. (Japan)
Osman Gebizlioglu, Telcordia Technologies (USA)
John Spencer, Optelian (USA)
Vijay Jain, Verizon (USA)
Topics in Radio Communications
Joseph B. Evans, U. of Kansas (USA)
Zoran Zvonar, MediaTek (USA)
Standards
Yoichi Maeda, NTT Adv. Tech. Corp. (Japan)
Mostafa Hashem Sherif, AT&T (USA)

Columns
Book Reviews
Andrzej Jajszczyk, AGH U. of Sci. & Tech. (Poland)
History of Communications
Mischa Schwartz, Columbia U. (USA)
Regulatory and Policy Issues
J. Scott Marcus, WIK (Germany)
Jon M. Peha, Carnegie Mellon U. (USA)
Technology Leaders' Forum
Steve Weinstein (USA)
Very Large Projects
Ken Young, Telcordia Technologies (USA)

Publications Staff
Joseph Milizzo, Assistant Publisher
Eric Levine, Associate Publisher
Susan Lange, Online Production Manager
Jennifer Porcello, Publications Specialist
Catherine Kemelmacher, Associate Editor

TOPICS IN RADIO COMMUNICATIONS
SERIES EDITORS: JOSEPH EVANS AND ZORAN ZVONAR
30 GUEST EDITORIAL: THE MATURATION OF DYNAMIC SPECTRUM ACCESS FROM A FUTURE TECHNOLOGY, TO AN ESSENTIAL SOLUTION TO IMMEDIATE CHALLENGES GUEST EDITOR: PRESTON MARSHALL
32 UNDERSTANDING CONDITIONS THAT LEAD TO EMULATION ATTACKS IN DYNAMIC SPECTRUM ACCESS The threat of emulation attacks, in which users pretend to be of a type they are not in order to gain unauthorized access to spectrum, has the potential to severely degrade the expected performance of the system. The authors analyze this problem within a Bayesian game framework, in which users are unsure of the legitimacy of the claimed type of other users. RYAN W. THOMAS, BRETT J. BORGHETTI, RAMAKANT S. KOMALI, AND PETRI MÄHÖNEN
38 DYNAMIC SPECTRUM ACCESS OPERATIONAL PARAMETERS WITH WIRELESS MICROPHONES The authors provide a comprehensive analysis of dynamic spectrum access operational parameters in a typical hidden node scenario with protected wireless microphones in the TV white space. TUGBA ERPEK, MARK A. MCHENRY, AND ANDREW STIRLING
46 THE VIABILITY OF SPECTRUM TRADING MARKETS The authors focus on determining the conditions for viability of spectrum trading markets. They make use of agent-based computational economics to analyze different market scenarios and the behaviors of their participants. CARLOS E. CAICEDO AND MARTIN B. H. WEISS
54 UNIFIED SPACE-TIME METRICS TO EVALUATE SPECTRUM SENSING The authors present a unified framework in which the natural ROC curve correctly captures the two features desired from a spectrum sensing system: safety to primary users and performance for the secondary users. RAHUL TANDRA, ANANT SAHAI, AND VENUGOPAL VEERAVALLI
ADVANCES IN STANDARDS AND TESTBEDS FOR COGNITIVE RADIO NETWORKS: PART II
GUEST EDITORS: EDWARD K. AU, DAVE CAVALCANTI, GEOFFREY YE LI, WINSTON CALDWELL, AND KHALED BEN LETAIEF
62 GUEST EDITORIAL

64 WIRELESS SERVICE PROVISION IN TV WHITE SPACE WITH COGNITIVE RADIO TECHNOLOGY: A TELECOM OPERATOR'S PERSPECTIVE AND EXPERIENCE There is a fundamental change happening in spectrum regulation: the enabling of spectrum sharing, where primary (licensed) users of the spectrum are forced to allow sharing with secondary users, who use license-exempt equipment. MICHAEL FITCH, MAZIAR NEKOVEE, SANTOSH KAWADE, KEITH BRIGGS, AND RICHARD MACKENZIE

74 EMERGING COGNITIVE RADIO APPLICATIONS: A SURVEY There are new opportunities for cognitive radio to enable a variety of emerging applications. The authors present a high-level view of how cognitive radio would support such applications. JIANFENG WANG, MONISHA GHOSH, AND KIRAN CHALLAPALI

82 INTERNATIONAL STANDARDIZATION OF COGNITIVE RADIO SYSTEMS The authors describe the current concept of the CRS and show the big picture of international standardization of the CRS. Understanding these standardization activities is important for both academia and industry. STANISLAV FILIN, HIROSHI HARADA, HOMARE MURAKAMI, AND KENTARO ISHIZU

90 COGNITIVE RADIO: TEN YEARS OF EXPERIMENTATION AND DEVELOPMENT Although theoretical research is blooming, hardware and system development for CR is progressing at a slower pace. The authors provide synopses of the commonly used platforms and testbeds, examine what has been achieved in the last decade of experimentation and trials, and draw several perhaps surprising conclusions. PRZEMYSLAW PAWELCZAK, KEITH NOLAN, LINDA DOYLE, SER WAH OH, AND DANIJELA CABRIC
ADVANCES IN STANDARDS AND TESTBEDS FOR COGNITIVE RADIO NETWORKS: PART II (continued)

101 SPIDERRADIO: A COGNITIVE RADIO NETWORK WITH COMMODITY HARDWARE AND OPEN SOURCE SOFTWARE The authors present SpiderRadio, a cognitive radio prototype for dynamic spectrum access networking. SpiderRadio is built using commodity IEEE 802.11a/b/g hardware and the open source MadWiFi driver. S. SENGUPTA, K. HONG, R. CHANDRAMOULI, AND K. P. SUBBALAKSHMI

FUTURE MEDIA INTERNET
GUEST EDITORS: THEODORE ZAHARIADIS, GIOVANNI PAU, AND GONZALO CAMARILLO

110 GUEST EDITORIAL

112 A SURVEY ON CONTENT-ORIENTED NETWORKING FOR EFFICIENT CONTENT DELIVERY The authors present a comprehensive survey of content naming and name-based routing, and discuss further research issues in CON. JAEYOUNG CHOI, JINYOUNG HAN, EUNSANG CHO, TED "TAEKYOUNG" KWON, AND YANGHEE CHOI

121 CURLING: CONTENT-UBIQUITOUS RESOLUTION AND DELIVERY INFRASTRUCTURE FOR NEXT-GENERATION SERVICES CURLING aims to enable a future content-centric Internet that will overcome the current intrinsic constraints by efficiently diffusing media content of massive scale. WEI KOONG CHAI, NING WANG, IOANNIS PSARAS, GEORGE PAVLOU, CHAOJIONG WANG, GERARDO GARCIA DE BLAS, FRANCISCO JAVIER RAMON SALGUERO, LEI LIANG, SPIROS SPIROU, ANDRZEJ BEBEN, AND ELEFTHERIA HADJIOANNOU

128 PEER-TO-PEER STREAMING OF SCALABLE VIDEO IN FUTURE INTERNET APPLICATIONS If video is encoded in a scalable way, it can be adapted to any required spatiotemporal resolution and quality in the compressed domain, according to a peer bandwidth and other peers' context requirements. NAEEM RAMZAN, EMANUELE QUACCHIO, TONI ZGALJIC, STEFANO ASIOLI, LUCA CELETTO, EBROUL IZQUIERDO, AND FABRIZIO ROVATI

136 IMPROVING END-TO-END QOE VIA CLOSE COOPERATION BETWEEN APPLICATIONS AND ISPS The authors present an architecture to enable cooperation between the application providers, the users, and the communications networks so that the quality of experience of the users of the application is improved and network traffic optimized. BERTRAND MATHIEU, SELIM ELLOUZE, NICO SCHWAN, DAVID GRIFFIN, ELENI MYKONIATI, TOUFIK AHMED, AND ORIOL RIBERA PRATS

144 SYSTEM ARCHITECTURE FOR ENRICHED SEMANTIC PERSONALIZED MEDIA SEARCH AND RETRIEVAL IN THE FUTURE MEDIA INTERNET The authors describe a novel system and its architecture to handle, process, deliver, personalize, and find digital media, based on continuous enrichment of the media objects through the intrinsic operation within a content oriented architecture. MARIA ALDUAN, FAUSTINO SANCHEZ, FEDERICO ÁLVAREZ, DAVID JIMÉNEZ, JOSÉ MANUEL MENÉNDEZ, AND CAROLINA CEBRECOS

152 AUTOMATIC CREATION OF 3D ENVIRONMENTS FROM A SINGLE SKETCH USING CONTENT-CENTRIC NETWORKS The authors present a complete and innovative system for automatic creation of 3D environments from multimedia content available in the network. THEODOROS SEMERTZIDIS, PETROS DARAS, PAUL MOORE, LAMBROS MAKRIS, AND MICHAEL G. STRINTZIS

TOPICS IN NETWORK TESTING
SERIES EDITORS: YING-DAR LIN, ERICA JOHNSON, AND EDUARDO JOO

158 SERIES EDITORIAL

160 ADJACENT CHANNEL INTERFERENCE IN 802.11A IS HARMFUL: TESTBED VALIDATION OF A SIMPLE QUANTIFICATION MODEL The authors report results that show clear throughput degradation because of ACI in 802.11a, the magnitude of which depends on the interfering data rates, packet sizes, and utilization of the medium. VANGELIS ANGELAKIS, STEFANOS PAPADAKIS, VASILIOS A. SIRIS, AND APOSTOLOS TRAGANITIS

167 EMERGING TESTING TRENDS AND THE PANLAB ENABLING INFRASTRUCTURE The authors address a number of fundamental principles and their corresponding technology implementations that enable the provisioning of large-scale testbeds for testing and experimentation as well as deploying future Internet platforms for piloting novel applications. SEBASTIAN WAHLE, CHRISTOS TRANORIS, SPYROS DENAZIS, ANASTASIUS GAVRAS, KONSTANTINOS KOUTSOPOULOS, THOMAS MAGEDANZ, AND SPYROS TOMPROS

President's Page 6
Certification Corner 12
Conference Report/CCNC 14
Conference Report/GLOBECOM 16
Conference Preview/GreenCom 20
Product Spotlights 23
New Products 24
Global Communications Newsletter 25
Advertisers' Index 176

2011 Communications Society Elected Officers
Byeong Gi Lee, President
Vijay Bhargava, President-Elect
Mark Karol, VP–Technical Activities
Khaled B. Letaief, VP–Conferences
Sergio Benedetto, VP–Member Relations
Leonard Cimini, VP–Publications

Members-at-Large
Class of 2011: Robert Fish, Joseph Evans, Nelson Fonseca, Michele Zorzi
Class of 2012: Stefano Bregni, V. Chan, Iwao Sasase, Sarah K. Wilson
Class of 2013: Gerhard Fettweis, Stefano Galli, Robert Shapiro, Moe Win

2011 IEEE Officers
Moshe Kam, President
Gordon W. Day, President-Elect
Roger D. Pollard, Secretary
Harold L. Flescher, Treasurer
Pedro A. Ray, Past-President
E. James Prendergast, Executive Director
Nim Cheung, Director, Division III

IEEE COMMUNICATIONS MAGAZINE (ISSN 0163-6804) is published monthly by The Institute of Electrical and Electronics Engineers, Inc. Headquarters address: IEEE, 3 Park Avenue, 17th Floor, New York, NY 10016-5997, USA; tel: +1-212-705-8900; http://www.comsoc.org/ci. Responsibility for the contents rests upon authors of signed articles and not the IEEE or its members. Unless otherwise specified, the IEEE neither endorses nor sanctions any positions or actions espoused in IEEE Communications Magazine.

ANNUAL SUBSCRIPTION: $27 per year print subscription. $16 per year digital subscription. Non-member print subscription: $400. Single copy price is $25.

EDITORIAL CORRESPONDENCE: Address to: Editor-in-Chief, Steve Gorshe, PMC-Sierra, Inc., 10565 S.W. Nimbus Avenue, Portland, OR 97223; tel: +1-503-431-7440, e-mail: [email protected].

COPYRIGHT AND REPRINT PERMISSIONS: Abstracting is permitted with credit to the source. Libraries are permitted to photocopy beyond the limits of U.S. Copyright law for private use of patrons: those post-1977 articles that carry a code on the bottom of the first page provided the per copy fee indicated in the code is paid through the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923. For other copying, reprint, or republication permission, write to Director, Publishing Services, at IEEE Headquarters. All rights reserved. Copyright © 2011 by The Institute of Electrical and Electronics Engineers, Inc.

POSTMASTER: Send address changes to IEEE Communications Magazine, IEEE, 445 Hoes Lane, Piscataway, NJ 08855-1331. GST Registration No. 125634188. Printed in USA. Periodicals postage paid at New York, NY and at additional mailing offices. Canadian Post International Publications Mail (Canadian Distribution) Sales Agreement No. 40030962. Return undeliverable Canadian addresses to: Frontier, PO Box 1051, 1031 Helena Street, Fort Erie, ON L2A 6C7.

SUBSCRIPTIONS, orders, address changes — IEEE Service Center, 445 Hoes Lane, Piscataway, NJ 08855-1331, USA; tel: +1-732-981-0060; e-mail: [email protected].

ADVERTISING: Advertising is accepted at the discretion of the publisher. Address correspondence to: Advertising Manager, IEEE Communications Magazine, 3 Park Avenue, 17th Floor, New York, NY 10016.

SUBMISSIONS: The magazine welcomes tutorial or survey articles that span the breadth of communications. Submissions will normally be approximately 4500 words, with few mathematical formulas, accompanied by up to six figures and/or tables, with up to 10 carefully selected references. Electronic submissions are preferred, and should be submitted through Manuscript Central: http://mc.manuscriptcentral.com/commag-ieee. Instructions can be found at: http://dl.comsoc.org/livepubs/ci1/info/sub_guidelines.html. For further information contact Sean Moore, Associate Editor-in-Chief ([email protected]). All submissions will be peer reviewed.
THE PRESIDENT’S PAGE
ICT SERVICES: "COMMUNICATIONS SOCIETY ON-LINE"
The IEEE Communications Society (ComSoc), being geographically diverse, relies heavily on state-of-the-art Information and Communication Technology (ICT) infrastructure and services for its governance, operations, and to host its community and group activities. All ComSoc activities involve participation from volunteers who live on various continents and in different time zones. So ICT infrastructure is essential in keeping such a global Society operational. The Communications Society's ICT infrastructure and services are the technological foundation for our "Communications Society On-Line."

The success of ComSoc in serving its membership, our profession, industry, and humanity depends on ingenuity, professional interests, and the enthusiastic volunteerism of its members, who apply those virtues toward the creation and maintenance of ComSoc products and services. The on-line mechanisms provided by the ICT infrastructure aid in organizing conferences, running magazines and journals, conducting meetings, and socializing ideas. These mechanisms are critical to volunteer productivity and the effective use of each volunteer's precious time.

Serving the ICT needs of the society, including providing on-line content, is the responsibility of the Chief Information Officer (CIO), Alexander Gelman, who reports to the ComSoc President, and the Director of On-Line Content, James Hong, who reports to the VP–Publications. Staff support is provided by the ICT Department, headed by David Alvarez, based in ComSoc's New York office. I share this month's President's Page with Alex Gelman, James Hong, David Alvarez, and Fred Bauer, who chairs ComSoc's ad hoc "Smartphonomics" Committee.

Alexander D. Gelman received his M.E. and Ph.D. in electrical engineering from CUNY. Presently he is CTO of the NETovations consulting group. During 1998-2007 Alex was the Chief Scientist of the Panasonic Princeton Laboratory. During 1984-1998 he worked at Bellcore as Director of the Internet Access Architectures Research group. Alex has served on several IEEE, ComSoc, and IEEE-SA (IEEE Standards Association) committees; he has worked on publications and conferences; served three terms as a ComSoc Vice President; and served on the ComSoc Board of Governors (BoG) and the IEEE-SA BoG. Currently, he is ComSoc's Chief Information Officer, Vice Chair of the ComSoc Standards Board, and a member of the IEEE-SA Standards Board. In recognition of his volunteer contributions, Alex received ComSoc's 2006 Donald W. McLellan Meritorious Service Award.

James Won-Ki Hong is Professor and Head of the Division of IT Convergence Engineering and Dean of the Graduate School for Information Technology, POSTECH, Pohang, Korea. He received his Ph.D. from the University of Waterloo, Canada, in 1991. His research interests include network management, network monitoring and analysis, and convergence engineering. During 2005-2009, James served as chair of the IEEE ComSoc Committee on Network Operations and Management (CNOM). Currently he serves on ComSoc's BoG as Director of On-Line Content, and is an active member of the NOMS/IM and APNOMS steering committees. He was also General Co-Chair of the 2010 IEEE/IFIP Network Operations and Management Symposium (NOMS 2010). In addition, he is an editorial advisory board member for JNSM, IJNM, JTM, and TNSM.

Fred Bauer works at Cisco Systems as a Technical Leader in the Smart Grids Group. Previously he was at Nokia, PacketHop, SRI, and Intel. He received his Ph.D. in computer engineering from the University of California at Santa Cruz in 1996. His research interests include multicast routing, wireless networks, and mesh routing. He just completed a term as a Member-at-Large of ComSoc's BoG (2008-2010). Currently he chairs the ComSoc Governance Committee as well as the ad hoc Smartphonomics Committee. He serves as a member of the IEEE INFOCOM and SECON steering committees and as a member of the IEEE Conferences Board. He is the chair of the Society's Technical Program Integrity Initiative Committee (TPII).

David Alvarez is a graduate of the University of Florida and has been in the ICT field for 24 years. Prior to his employment with IEEE ComSoc, he worked in development and ICT management for T2 Medical and Coram Healthcare in Atlanta, Georgia. He has been serving as ComSoc's Director of Information Technology since 1997.

COMSOC'S ICT INFRASTRUCTURE

ComSoc's ICT Department employs the following staff support team:
•Director – David Alvarez. David works with the ComSoc CIO to determine ICT strategy and manage information systems and employees.
•Manager of On-Line Content and ComSoc ICT Products and Services Design – Natasha Simonovski. Natasha manages all ComSoc web sites and on-line content. She is responsible for design and layout of ComSoc web page templates.
•Program Manager – Joel Basco. Joel is responsible for ongoing management of ComSoc products and services. Currently his responsibilities include support of the ComSoc Digital Library and Webcasting programs.
•Programmer/Developer – Matt Sielski. Matt is responsible for development of applications on the ComSoc digital platform (Drupal CMS) and related tasks.
•Systems Administrator – Tony Ruiz. Tony is responsible for supporting all of ComSoc's systems and networks. He works closely with the IEEE IT group on networking, email, and security services.
•PC Technician – John Porter. John supports staff equipment, including desktops, printers, mobile devices, and software. He also assists the Systems Administrator and works with the Program Manager on video projects.
•Web Content Administrator – Tammy Hung. Tammy works on keeping on-line content current. She also manages our Social Networking sites. She works with the Program Manager to produce and maintain the Society's Tutorials and Webinars.

These are the products supported by the ICT team:
•ComSoc publications
•ComSoc Digital Library
•ComSoc webcasts
•ComSoc training
•Advertising
•ComSoc conferences

The following services are currently offered:
•ComSoc website
•Conference websites
•Community website
•Chapter and Technical Committee websites
•ComSoc member aliases
•Email lists
•ComSoc E-News
•ComSoc Community Booklet

ComSoc on-line services at a glance:
ComSoc Website: www.comsoc.org
ComSoc Digital Library: dl.comsoc.org/comsocdl
Community: community.comsoc.org
ComSoc Webcasts: www.comsoc.org/webcasts
ComSoc Online Training: www.comsoc.org/training/online-training
ComSoc Member Aliases: cmsc-ems.ieee.org/
ComSoc Email Lists: comsoc-listserv.ieee.org/cgi-bin/wa?HOME
Community Booklet: www.comsoc.org/about/ComSocCommunityBooklet
Facebook: www.facebook.com/IEEEComSoc
LinkedIn: www.linkedin.com/groups?mostPopular=&gid=81475
Twitter: twitter.com/#!/comsoc

In 2010, ComSoc completed the "virtualization" of its ICT infrastructure. Now ComSoc ICT relies on production servers in the IEEE Datacenter for support while still having the ability to create and deploy its own applications. Some applications are also a virtual overlay on IEEE's applications services.

COMSOC ICT SERVICES

ComSoc Web Presence
Some years ago, the Meetings and Conferences (M&C), Marketing, and ICT staff members selected a commercial content management system, Eprise, to improve workflow and allow assistance with website updates from M&C staff and volunteers. Eprise has since served as the major platform for conference, chapter, and technical committee web pages. It is relatively easy to use, and many volunteer groups have already taken advantage of the tool.

Meetings and Conferences Services
Among the major activities in ComSoc, M&C requires very intensive and responsive ICT support. Our workshops, symposia, and conferences need pages for ComSoc events to be created and maintained. The websites of those events may be maintained by our ICT team, in which case quick responses from the ICT team are essential, or the websites may be created and maintained by volunteers, either on their own or on the ComSoc ICT infrastructure. Both approaches require support by the ComSoc ICT team.

In 2010, we began webcasting specific program elements from several conferences. Recording takes place during the conference and is then made available on-line. The scenario for this service includes the following steps:
•Identification of appropriate sessions for recording
•Obtaining presenters' consent
•Real-time webcast as well as recording utilizing voice over PowerPoint (VoPP)
•Questions and answers
•Conversion of recordings to Flash format
•Employing a content delivery network (CDN) for distribution
Details of this program can be found at http://www.comsoc.org/webcasts.

We are now pursuing an experiment that may lead to offering virtual conferences. ComSoc is in the process of organizing a completely on-line conference using the Cisco WebEx platform. The experimental conference, itself "green," is on green communications: IEEE GreenCom 2011. The conference will feature parallel on-line sessions. For further information, please visit http://www.ieee-greencom.org/.

ComSoc Email Alias: [email protected]
It has been a long-term goal of our volunteer leaders to make ComSoc email aliases available to all ComSoc members. Toward the end of last year, we finally implemented the mechanism that allows members to proudly possess an email address that carries the name of our beloved society by displaying "@comsoc.org." You may obtain a ComSoc email alias at: http://cmsc-ems.ieee.org. We encourage all ComSoc members to take advantage of this opportunity.

ComSoc Mailing Lists
To support group activities, the ComSoc ICT team, led by Tony Ruiz, Systems Administrator, and John Porter, Tech Support Specialist, in partnership with their IEEE colleagues, has implemented LISTSERV support for email applications hosted on the IEEE Data Center servers. Group leaders may now create and manage mailing lists using the LISTSERV website: http://listserv.ieee.org/request/add-listserv.html.
LEVERAGING OPEN SOURCE SOFTWARE

During 2008-2009 the ComSoc ICT team made the decision to deploy open source software in support of ComSoc web applications. This initiative was started by the previous CIO, Harvey Freeman, supported by Fred Bauer. The major objective of this initiative was to deploy a wide range of applications and to improve the ComSoc ICT Department's ability to respond to the ICT needs of our constituency in a timely fashion. The team chose Drupal as the platform that would best meet these needs. Drupal is an open source content management platform that is widely accepted around the world. Its applications range from personal blogs to enterprise solutions. Drupal is free, flexible, robust, and constantly being improved by hundreds of thousands of enthusiastic professionals from all over the world. ComSoc's ICT team has joined this community. We now create our own modules and contribute them to the Drupal community, as well as benefit from the contributions of other community members.

The Drupal platform was, and is, ideal for website development, thus leading naturally to ICT replicating comsoc.org on the Drupal platform. The new site, available "24/7," went live in 2009. New features developed on the Drupal platform include:
•A mobile version of the comsoc.org website (http://m.comsoc.org), the first in IEEE.
•RSS feeds for ComSoc news, conference events, and publications into social media pages set up on Facebook, LinkedIn, and Twitter (a minimal feed example appears at the end of this section).
•A robust Webcast Storefront for the new Conferences Webcast program.
•A marketing blog for promotion of the ComSoc website, with RSS feeds into social media pages.

ComSoc's CIO, Alex Gelman, and the ICT team call on you, the ComSoc volunteers, to join the ComSoc Drupal development team and contribute your work to ComSoc and to the entire Drupal community. If you are interested in the development of this feature-rich content management platform, please e-mail Alex at
[email protected].
SERVING MOBILE USERS: SMARTPHONOMICS

Under the leadership of ComSoc's Vice President of Publications, Len Cimini, our Director of On-Line Content, James Hong, proposed a new initiative aimed at providing ComSoc services to members carrying smart phones, or any form of smart devices. James already had his students doing research on the new field of "smartphonomics" (in fact, this may have been the first use of that term). To facilitate the initiative, we created an ad hoc committee on "Smartphonomics," with Fred Bauer as its Chair. This committee, involving both the ComSoc ICT team and a range of volunteers, has already achieved some goals, including mobile web access to comsoc.org and a ComSoc "app" for iPhones.

There is much to be done in this smartphonomics area, as an increasing number of members will be using mobile devices such as smart phones and tablet computers to access a growing number of resources, including those of ComSoc. The idea of the smartphonomics initiative is to give our members access to ComSoc material and resources directly from their mobile devices wherever they might be. There are two parts to this problem: 1) serving ComSoc webpages to mobile devices, and 2) providing new and interesting content regularly to those mobile devices.

The smartphonomics team started with the problem of webpage delivery. Last year, as described above, the ComSoc
ICT team members Natasha Simonovski and Matt Sielski enabled the www.comsoc.org site to recognize smartphone browsers and serve mobile content from the website m.comsoc.org. Currently, ComSoc Drupal-based content is automatically processed for mobile use. This new mechanism allows ComSoc to quickly and conveniently serve many types of mobile devices with content stored on m.comsoc.org (a minimal sketch of this kind of browser detection appears at the end of this section).

The next logical step is to support applications native to the most popular mobile platforms. James Hong and his students at POSTECH graciously built and provided an iPhone application for ComSoc members with an Apple iOS device such as the iPhone, iPad, or iPod Touch. This application was demonstrated during December's ComSoc committee meetings, showing how ComSoc members might soon be able to access all of the Communications Society via their smart devices. The Smartphonomics service is now available to our members from the Apple App Store for free (http://itunes.apple.com/us/app/comsoc/id413046307). We plan to provide similar native smart phone applications for Android-based mobile devices soon, and possibly others later on.

The second problem to be addressed by the ad hoc Smartphonomics Committee is what content to provide on a regular basis that would be of interest to our members. For this, we draw inspiration from IEEE's newly created IEEE Technology News website: http://www.ieeetechnews.org. This IEEE website aggregates and summarizes a subset of articles from IEEE periodicals, making the summaries accessible to the general public. The ad hoc Smartphonomics Committee is working with a number of similar groups within ComSoc to identify which regularly updated content we can provide that would interest our members with smart devices. As always, we welcome input from you, the reader, on what you would like to see on ComSoc's mobile website. Please direct your comments to Fred at
[email protected].
COMMUNITY AND SOCIAL NETWORKING

ComSoc Community Sites
The new community site (community.comsoc.org), featuring groups, blogs, forums, and news feeds, went on-line in early 2009. The community site is open to all communications professionals, not just IEEE members. Groups were the most popular feature, and many ComSoc committees began using them to communicate, share information and files, list events, create discussions, and conduct the day-to-day business of the groups. Presently, ComSoc hosts several community groups. Each group has its own policies for joining and operation. We encourage all ComSoc communities and groups, including all technical committees, chapters, organizing committees, and other groups, to create community sites and use them for their ComSoc activities.

ComSoc Blog Sites
The ComSoc blog was set up by ComSoc's ICT Department for the Marketing Department to blog updates on promotions, conferences, etc. All entries are automatically fed into social media sites. This site has attracted quite a few followers on our social media pages and has created significant interest in ComSoc promotions and services. The ComSoc blog is updated by the IEEE Marketing staff, Ting Qian and Max Loskutnikov. Also, a blog page has been created on the ComSoc Community site: http://community.comsoc.org/blogs. It is moderated by a prominent volunteer, Alan Weissberger, who is also
the Chair of the ComSoc Santa Clara Valley Chapter. Alan's tireless efforts helped to establish, debug, and maintain this site.

ComSoc in Social Media
ComSoc now has a presence on Facebook (http://www.facebook.com/IEEEComSoc) and LinkedIn (http://www.linkedin.com/groups?mostPopular=&gid=81475). ComSoc e-News is also available on Facebook. To receive timely messages from ComSoc, please follow us on Twitter: http://twitter.com/comsoc. In addition, there is a ComSoc channel on YouTube: http://www.youtube.com/user/ieeecomsoc. The very talented channel creator and artistic director is Max Loskutnikov.

On-Line Content
The cornerstone of the Society's on-line content is the ComSoc Digital Library, http://dl.comsoc.org, which contains journals, magazines, conference proceedings, newsletters, on-line tutorials, and other resources such as distinguished lectures, webinars, and videos. It also contains material related to the IEEE Wireless Communication Engineering Technologies (WCET) certification program. All ComSoc-sponsored journals, magazines, and conference publications can be found in the ComSoc Digital Library. ComSoc newsletters include e-News (http://e-news.comsoc.org), the Global Communications Newsletter (http://dl.comsoc.org/gcn), and the IEEE Wireless Communications Professional Newsletter (http://www.ieee-wcet.org/).

ComSoc's on-line tutorial program, called Tutorials Now, provides a collection of recent tutorials given at ComSoc-sponsored conferences – GLOBECOM/ICC, INFOCOM, NOMS/IM, WCNC, CCNC, ENTNET, SECON, PIMRC, and MILCOM. Each tutorial features expert presenters who review current topics in communications, with the aim of bringing newcomers up to speed and keeping others current. Available tutorials, which are 2.5 to 5 hours in length, contain the original visuals and voice-over by the presenter.

Distinguished Lectures On-Line provides recorded lectures of distinguished lecturers (DLs) selected by the ComSoc Distinguished Lecturer Selection Committee. DLs are typically invited by ComSoc chapters and sister societies around the world. Having these available on-line greatly increases their reach.
Live ComSoc Webinars and on-line panel sessions are open to all. Viewers and panelists participate from the convenience of their desks. Webinars focus on technologies, systems and services of current interest to communications engineers and scientists (http://www.comsoc.org/webinars). ComSoc Videos provides video recordings of notable keynote speeches at our major conferences such as the keynote speech on the history of the Internet given by Leonard Kleinrock at GLOBECOM 2007.
MORE SERVICES TO COME

The ComSoc ICT team has aggressive plans for the future. Among them is to enable all ComSoc groups, including chapters and our sister societies, to deploy their own community web sites, conduct electronic meetings, and create and maintain their own mailing lists using LISTSERV. Also in the plan is to enable all conferences to offer webcasts and recorded sessions, as well as to make totally on-line conferences a reality. The latter is important for reaching out to those who may not be able to attend conferences in person. We also recognize that many colleagues will still seek the rich experience of attending a conference in person, which is something that current technology is not yet able to replicate.

An important task in the works is to change the presentation format for the electronic version of IEEE Communications Magazine, which is included with Society membership. The technology under consideration is FlipBook. The new approach will allow all the features now available with the PDF format and more. Tammy Hung is working closely with Jennifer Porcello of ComSoc's Publications Department to deliver this new format. More features are in the planning stages to leverage the Drupal Content Management System, such as automatic generation of mobile web pages and RSS feeds. In view of Drupal's power and flexibility, the ComSoc ICT team plans to create a robust digital platform and leverage it to deliver valuable current and future products and services.

As presented so far, Communications Society On-Line, a collection of on-line content, products, and services being offered through our ICT platform, has grown rapidly to keep up with the information-sharing needs of our members and the broader community. All those involved with creating our "anytime, anywhere" cyberspace environment, including our ICT team, will continue to develop these capabilities even further. We welcome your input.
CERTIFICATION CORNER

THE NEED FOR RECERTIFICATION
BY ROLF FRANTZ

In a fast-changing field such as wireless communications, technology and the knowledge and skills that go with it can quickly become outdated. WCET certification represents, in a sense, a snapshot in time. The WCP credential indicates that the holder has demonstrated mastery of the field "today." For those who earned the credential two or three years ago, that "today" is becoming "yesterday." To maintain the credential in good standing, they need to recertify their knowledge and skills. As stated in the 2011 Candidate's Handbook ("Recertification," page 35): "… passing the examination is only one portion of certification. The wireless communication field is constantly changing and requires that wireless communication professionals keep current with changes in the profession. Maintaining an active certification status through recertification is the way in which certified professionals demonstrate their currency and preserve their professional edge. Recertification is required every five years, determined by the expiration date of your current certification …"

The first professionals to earn the WCP credential passed the exam in the Fall of 2008; they will need to recertify in 2013. That seems a long way off, but in reality, preparation for recertification needs to start well in advance. The most obvious way to recertify, of course, is to retake the exam. Passing the 2013 exam, which will have been significantly changed and updated in the intervening five years, will clearly demonstrate that a credential holder has kept up with advances in wireless communications technology. However, as with many other certification programs that require recertification, ComSoc is working to provide alternative means for people to demonstrate that they have kept their skills and knowledge current with changes in technology.

The formal recertification program is under development, and it is ComSoc's intent to roll it out later this year. It will include options for recertifying by earning Personal Development Points (PDPs). Among the options under discussion for acquiring PDPs are:
•Working in one or more of the seven technical areas covered by the WCET exam, performing tasks and holding responsibilities at the professional level.
•Taking training courses from ComSoc and other providers that are specific to technology developments in one or more of the seven technical areas.
•Attending professional conferences and workshops that address technical advances in one or more of the seven technical areas.
•Participating in relevant sessions of local ComSoc chapters or communities, ranging from attending an educational session to being a featured speaker at such a session.
•Authoring papers or articles in recognized industry publications on a topic or topics relevant to the technical areas covered by the WCET exam.
•Conducting self-directed study via web-based programs, coaching or mentoring with peers, or individual study leading to a demonstrated increase in job skills and responsibilities.

For any of these activities, of course, evidence of completion and success must be compiled and presented for evaluation and validation by a committee of wireless professionals. Accumulation of sufficient PDPs through a combination of varied activities – not just a single activity – would be the basis for authorizing recertification of the WCP credential for another five-year period; the hypothetical sketch below illustrates this combination rule. As noted, the recertification program is currently under development. Comments or suggestions regarding the program (e.g., additional activities that might qualify for PDPs, or relative importance of activities) are welcomed via email to
[email protected].
CONFERENCE REPORT

IEEE CCNC 2011 HIGHLIGHTS LATEST ADVANCES IN ANYTIME, ANYWHERE CONSUMER COMMUNICATIONS

The IEEE Consumer Communications and Networking Conference, CCNC 2011, recently completed its 8th annual event in Las Vegas, with hundreds of international consumer communications and networking experts exploring the next generation of technologies designed to provide on-demand, anytime access to entertainment and information anywhere in the world. Held in conjunction with the 2011 International Consumer Electronics Show (CES), IEEE CCNC 2011 hosted several keynotes, panels, workshops, tutorials, and research prototype demonstrations as well as nearly 350 technical papers highlighting the latest advances in industrial and academic research in a wide range of technologies related to consumer communications and networking: home and wide area networks, wireless and wireline communications, cognitive networks, peer-to-peer networking, middleware and other application-enabling technologies, information and applications security, etc.

Prior to IEEE CCNC 2011, the conference began showcasing research prototypes at the IEEE Communications Society (ComSoc) booth at CES, located in the Las Vegas Convention Center. Throughout CES and then again at IEEE CCNC 2011, Telcordia demonstrated a research prototype of a real-time search tool that can enable users to save time, money, and bandwidth by previewing videos from different perspectives and rapidly determining their appropriateness prior to launching streaming applications. In addition, Drakontas, a New Jersey-based provider of geospatial tools, offered live demonstrations of its new SMYLE research prototype, which allows user groups to share locations, text messages, photos, and graphical annotations in real-time environments. Additional advantages are the ability to collaborate on meeting times and experiences almost anywhere within a building complex, such as a theme park, through the use of the SMYLE mobile phone application or a standard PC web browser.

IEEE CCNC 2011 then officially commenced on Sunday, January 9th with a full day of workshops dedicated to topics such as "Vehicular Communications Systems," "Personalized Networks," "Digital Rights Management Impact on Consumer Communications," "Social TV: the Next Wave," "Social Networking (SocNets)," and "Consumer eHealth Platforms, Services and Applications." On the following morning, Dr. Simon Gibbs, IEEE CCNC 2011 General Co-Chair and Principal Engineer at Samsung, graciously welcomed all attendees. He thanked the conference's organizing committee members as well as conference patrons – Samsung, Nokia, HD-PLC, Telcordia, and Drakontas – for
helping to make IEEE CCNC the flagship consumer communications and networking conference. Afterwards, Dr. Kari Pulli of the Nokia Research Center in Palo Alto, California, addressed the forum about the "demise of film cameras and ascent of digital cameras" during his keynote on "Mobile Computational Photography." As highlighted by Dr. Pulli, this is a marketplace that will continue to grow over the next five years, with more than 150 million digital cameras shipped and nearly 1.1 billion camera phones sold internationally by 2014. Spurred by the introduction of real-time computational technology that allows "instant image processing," future digital cameras will introduce numerous features that provide "full, low-level control of all camera parameters" as well as the ability to "rewrite the autofocus" function so as to combine images to create cleaner, clearer, and brighter photos in seconds.

Following Dr. Pulli's speech, IEEE CCNC 2011 then launched the first of two days of technical sessions, technology and applications panels, and research prototype demonstrations in various consumer communications and networking areas, including security and content protection, entertainment networking, automotive multimedia, multiplayer networked gaming, next generation IPTV, social media, and personal broadcasting. Specific topics addressed the "Dissemination of Information in Vehicular Networks," "Smart Grid Emerging Services," "Ecological Home Networks," and "Smartphone Location-Aware Technologies."

On Wednesday, the conference program had another full day of keynotes, technical sessions, and research prototype demonstrations. In the morning, Dr. Kiho Kim, Executive Vice President and Head of Digital Media & Communication R&D Center, Samsung Electronics, began the proceedings with his address titled "A Future Mobile Outlook: Infrastructure, Device and Service." In his speech, Dr. Kim mentioned that "In the past, the ICT (Information and Communications Technology) industry's megatrends - "being digital," "being networked" and "being mobile" - have led us to paradigm shifts such as the "Internet revolution" of today." He also said that "In the near future, no later than 2020, new technology enablers in mobile devices and wireless access infrastructures will initiate another paradigm shift. New life care services, in addition to the legacy infotainment services, will be delivered via "intelligent, not just smart," mobile devices through an enhanced network that seamlessly utilizes local and personal networks."
[Photo: Dr. Ben Falchuk demonstrated Telcordia's latest multimedia visual search technology.]

[Photo: Dr. Kiho Kim, Executive Vice President and Head of Digital Media & Communication R&D Center of Samsung Electronics, addressed attendees at the Wednesday Keynote Session.]
Later that evening, at the conference's Awards Banquet, Monica Lam, a Professor of Computer Science at Stanford University and co-author of the "Dragon" book, formally known as "Compilers: Principles, Techniques, and Tools," continued the discussion on future trends while addressing "In Situ Online Social Networking." In her presentation, Dr. Lam urged attendees to "ride the mobile computing wave" as she described mobile phones as "the perfect devices for changing the way we network" and "creating more peer-to-peer personal communications." Dr. Lam also explored the development of new instantaneous group communication systems based on existing email technologies, which will soon create "open federated social networks" and facilitate the sharing of information "among pockets of friends" without the need to utilize third-party proprietary infrastructures.

The second banquet keynote presentation was by Jean-Philippe Faure, who is the CEO of Progilon and affiliated with the HD-PLC consortium. Jean-Philippe is the Chairman of the Broadband over Power Line (BPL) IEEE P1901 Working Group; the IEEE 1901 standard was recently and successfully completed. The presentation focused on the merits of BPL technology, which enables Internet access services as well as in-home, in-vehicle, and in-airplane networking over power lines in support of a broad range of applications that include Smart Grid communications, Web browsing, and entertainment. After the keynote presentation, Jean-Philippe Faure was presented a plaque for his achievements as the founding chair of the IEEE P1901 Working Group and for the successful completion of the Broadband over Power Line Standard IEEE 1901. The plaque was presented by Alex Gelman, who initiated this standardization project, on behalf of IEEE Communications Society President Byeong Gi Lee, VP–Technical Activities Mark Karol, and ComSoc Director of Standards Curtis Siller.
The banquet also featured numerous honors. These included the presentation of the Best Paper Award to Hosub Lee of Samsung Electronics for his paper "An Adaptive User Interface Based on Spatiotemporal Structure Learning" and a Special Mention Honor to Greg Elliott of the MIT Media Lab for his paper "TakeoverTV: Facilitating the Social Negotiation of Television Content in Public Spaces." The overall Best Student Paper Award was presented to Peter Vingelmann and Hassan Charaf of the Budapest University of Technology and Economics as well as Frank H.P. Fitzek, Morten Videbæk Pedersen, and Janus Heide of Aalborg University in Denmark for their paper "Synchronized Multimedia Streaming on the iPhone Platform with Network Coding."

IEEE CCNC 2011 concluded on Wednesday, January 12 with a complete schedule of tutorials offering insights into subjects such as "State of the Art Research Challenges and P2P Networking," "Cognitive Radio Networks," "4G - Next Generation Mobile Applications," "Wireless Mesh Networking Advances and Architectures," "Consumer Network Standardization," and "Technologies and Applications for Connecting All Your Electronic Devices with Personal Networks."

As Dr. Robert Fish, CCNC Steering Committee Chairman, mentioned at the banquet, next year the 9th Annual IEEE Consumer Communications and Networking Conference will begin once again with a preview of its comprehensive research demonstrations at the IEEE booth located within CES 2012. In addition, the IEEE CCNC 2012 Call for Papers has already been announced, with all interested parties urged to visit http://www.ieee-ccnc.org/2012 for submission details. Ongoing conference updates can also be obtained via Twitter @IEEECCNC or by contacting Heather Ann Sweeney of IEEE ComSoc at 212-705-8938 or
[email protected].
CONFERENCE REPORT

IEEE GLOBECOM 2010 COMPLETES MOST SUCCESSFUL MEETING IN 53-YEAR HISTORY

IEEE GLOBECOM 2010 recently held its most successful meeting in the conference's 53-year history. Held from 6–10 December 2010 in Miami, Florida, this annual flagship event of the IEEE Communications Society (ComSoc) set numerous milestones, including number of attendees (2,500+); symposia and workshop paper submissions (4,614); lowest tutorial acceptance ratio (8%); highest number of tutorial registrations, with over 300 attendees per session; largest team of volunteers (8,500+) contributing to the conference's success; and the largest number of high-profile invited speakers (100+) delivering keynote and plenary sessions, technical symposia, workshops, panels, and tutorials (469).

Now considered one of the world's premier communications conferences, IEEE GLOBECOM 2010 initiated its five-day program on Monday, December 6 with the first of two full days of tutorials and a record-setting 21 workshops. For the first time in conference history, each tutorial session was free of charge to all conference attendees, who were extremely pleased with the high degree of diversity in the agenda and the high quality of the presentations. Dedicated to the most recent industry and academic advancements, session topics ranged from adaptive wireless communications, wireless MediaNets, and biologically-inspired communications to femtocell networking, broadband wireless access, pervasive group communications, and smart communications.

The conference then officially commenced on Tuesday, when Executive General Chair Dr. Kia Makki was presented an American flag previously flown at the Florida state capitol building in Tallahassee during the morning introductions.
[Photo: Poster sessions provided networking and educational opportunities.]

[Photo: Business & Tech Forums were well received by attendees.]
Immediately afterwards, IEEE ComSoc President Dr. Byeong Gi Lee highlighted the conference's general theme of "Mobile Interactivity," while thanking the ongoing efforts of the society's 48,000 global members, who are "constantly researching, advancing and working to implement global communications technologies that enhance humanity and everyday life." Dr. Haohong Wang, the conference's technical program chair, followed these comments by reviewing the agenda of the largest technical program ever presented at IEEE GLOBECOM since its founding in 1957.

Following these introductory remarks, Dr. Regina E. Dugan, Director of the Defense Advanced Research Projects Agency (DARPA), challenged all IEEE GLOBECOM 2010 participants to overcome the fear of failure, while reminding everyone that one person can forever change the quality of life for 6.9 billion people worldwide. She also continually reinforced this mission by stating that life is a miracle and the magic that can make all our lives better is needed more now than ever before. During her keynote, Dr. Dugan also noted how "diverse perspectives can lead to innovations that make difficult solutions seem easy." This includes the use of "cognitive clouds" and social networks that can help surmount dilemmas, for in the end diversity of thought will always surpass ability.

After this speech, IEEE GLOBECOM 2010 proceeded with a three-day program filled with plenary addresses, technical symposia, business & technology forums, awards, and exhibits designed to explore the entire range of communications technologies. This included a new plenary format that offered the visionary thoughts of 34 noted scientists and industry executives; a new demo program featuring 20 demonstrations showcasing the latest research achievements of international experts working within a wide range of communications fields; a new funding forum with invited speakers representing several important government funding agencies such as NSF, DARPA, ARO, ONR, AFOSR, and DHS; and a new early-bird Student Award encouraging productive student authors to submit high-quality technical papers.

Among the highlights of Tuesday's agenda were several high-level forums and presentations detailing the latest research and application advancements in areas such as "Faster, Greener and More Frugal Network Transport," "The Channel Access Conundrum," "Dynamic Spectrum Access," "An Executive's Perspective of Tomorrow's Technology," and "Next Generation Internet." For instance, the event's Cloud Computing Forum reviewed the latest services and processes for not only enhancing the consistency of healthcare delivery systems throughout the world, but also the implementation of technologies that turn capital expenditures into operational expenses, greatly increase competencies, and simplify the manageability and monitorability of internal infrastructures. In addition, the day's Wireless Network Forum explored "New Frontiers for Wireless," including the explosive growth of mobile video traffic, which is expected to expand to 500,000 worldwide subscribers by 2015.

Additionally, the first of three days of technical symposia began on Tuesday with a complete array of sessions dedicated to energy savings and power control protection protocols, privacy protection, Internet security, network coding and attacks, cognitive radio, sensing, estimation, and communication feedback.
In all, this three-day technical symposia program stretching from Tuesday through Thursday would include approximately 300 sessions composed of nearly 1,300 paper presentations ranging in
subject from the latest routing protocols to resource management and peer-to-peer technologies for communication services.

[Photo: Participants experienced live demos for the first time in conference history.]

Other Tuesday highlights were delivered by the Annual Awards Luncheon, which honored the career and service contributions of IEEE ComSoc members and volunteers, and the Welcome Reception & Expo Opening held in the Hyatt Regency's Riverfront Hall later that night. During the luncheon's ceremonies, IEEE ComSoc Awards Chair H. Vincent Poor and Byeong Gi Lee presented to deserving honorees such as Leonard J. Cimini, Jr. of the University of Delaware, who received the IEEE Communications Society Donald W. McLellan Meritorious Service Award; Abbas Jamalipour of the University of Sydney, who was presented the IEEE Communications Society Harold Sobol Award for Exemplary Service to Meetings & Conferences; Larry Greenstein of WinLab, Rutgers, who received the IEEE Communications Society Joseph LoCicero Award for Exemplary Service to Publications; Roberto Saracco of Telecom Italia, who earned the IEEE ComSoc/KICS Exemplary Global Service Award; and Tadashi Matsumoto, Milica Stojanovic, Zoran Zvonar, and Wei Su, who all were presented 2010 IEEE Fellow Awards.

Tuesday's program then concluded with the Welcome Reception, which began when Executive General Chair Kia Makki and IEEE ComSoc President Byeong Gi Lee cut the traditional gateway ribbon. Once inside the hall, hundreds of conference attendees feasted on a broad selection of local delicacies, desserts, and beverages, while connecting with old friends and making new ones. Other activities included browsing the numerous booths of exhibitors such as Telcordia, Cambridge, Elsevier, Wiley, and Springer as live reggae and calypso music swirled through the hall. The IEEE ComSoc Pavilion also offered a host of society information, including membership and networking benefits as well as details on upcoming IEEE ICC and IEEE GLOBECOM events.

On the following morning, the IEEE GLOBECOM 2010 agenda resumed with the keynote presentation of Yoshihiro Obata, Executive Vice President and Chief Technology Officer of eAccess Ltd. and Executive Vice President of eMOBILE Ltd. Introduced by Dr. John Thompson, the conference's technical program co-chair, Yoshihiro proceeded to highlight the history of Japan's broadband industry and his own company's growth, which rose from a two-person business in 1999 to a national enterprise that currently has nearly 3,000 individuals and $2.5 billion in sales. Operating within a deeply competitive environment, Yoshihiro detailed the market's challenges and his company's successful growth strategies in the world's second largest telecom marketplace. Realizing Japan's decreased dependence on fixed telecom services, eAccess and eMOBILE have invested heavily in mobile
IEEE Communications Magazine • March 2011
Executive General Chair Kia Makki was presented an American flag previously flown at the Florida state capital building.
Executive General Chair Kia Makki and ComSoc President Byeong Gi Lee opened the Welcome Reception. broadband delivery with the goal of offering 24-hour-a-day access to 80 to 90 percent of the population in the near future. While there were only 300,000 corporate users in the entire nation 10 years ago, today Japan’s mobile broadband industry generates $17 billion in revenue and represents 3.4 percent of the country’s GDP. By 2016, estimates also predict that Japan’s mobile broadband market will amass approximately 16 million subscribers. In addition, Wednesday’s well-attended IPv6 Forum began with the very real fact that the entire spate of global IPv4 Internet addresses will be exhausted within the next two years and the transition to IPv6 has “already taken five years too long.” As a result, many of the session’s speakers concurred that “if you do not transition to IPv6, you will not be a player in the Internet.” Subsequently, the effort to integrate IPv6 into existing and new infrastructures has seen “more activity in the past year than the previous 10 years combined.” Although in the early stages, this includes providing consumers with IPv6 connectivity options in the very near future. Furthermore, the forum’s distinguished panel of experts offered advice to any and all service and content providers dealing with IPv4 issues. First, “Don’t wait for customers to ask for IPv6. You won’t have the time to react.” Next, “Never pay for IPv6 beyond the service for IPv4. Run from vendors that want to charge extra.” And, education is key. Dispel IPv6 myths and fears by investing in training and classes that inform all interested parties on the differences between IPv4 and IPv6, while advising on their similarities. Other sessions such as the “Wireless Communications Forum” dealt with the challenges of servicing the world’s mobile device users, which already includes 4.1 billion subscriptions and
the "Smart Grid and Green Technology Forum" offered a host of options for enabling cost savings and lowering energy use through the implementation of smart appliances and meters on a global scale. Additional Wednesday highlights included plenary addresses on highly topical areas like "Energy-Efficient Wireless Applications," "Multidimensional Convergence of Broadband Access Technologies," "Physical Layer Security in Wireless Networks" and "Smart Phones: A Revolution in Mobile Computing."

On Thursday, Dr. Niki Pissinou, the conference's operations chair, introduced Dr. Frederica Darema, director of the U.S. Air Force Office of Scientific Research, to open the day's proceedings. Dr. Darema's keynote detailed the U.S. Air Force's concept of network system science and the fundamental aspects of its practical impact. Lauded for her comprehensive mathematical approach to fostering synergism across widespread scientific disciplines, Dr. Darema has, over the past five years, helped create wide-ranging advances in tornado monitoring, aircraft design, oil exploration, semiconductor manufacturing and electrical power grid enhancement through complex networks of sensing and information gathering.

Throughout the day, IEEE GLOBECOM 2010 participants were treated to another full schedule of technical symposia, plenary speeches and informational forums highlighting the areas of design and development, multimedia communications, wireless networking, intelligent transportation and 4G architectural advancements. For example, presenters within the QoS Multimedia Forum offered insights and research findings related to the development of processing techniques that will help reduce traffic congestion as well as enhance the transmission and storage of real-time multimedia experiences. The Multimedia Communications Plenary then offered attendees a glimpse of next steps, already in progress, toward digitally teleporting individuals and groups of people to virtual meeting arenas, reconnecting families, delivering healthcare to remote regions, and saving energy and commuting costs.

Several other popular forums featured the latest techniques for improving the latency, spectral efficiency, antenna support and, ultimately, the performance of next-generation mobile broadband wireless networks, as well as the initial thrusts to develop Intelligent Transportation Systems (ITS) that shorten driving times, deliver quicker medical aid and reduce road-related injuries. According to session speakers, the implementation of vehicular communications should be a governmental and corporate mandate worldwide due to its ability to save hundreds of thousands of lives annually, in addition to reducing drive times and costs and delivering a full array of fee-based services to commuters.

IEEE GLOBECOM 2010 then completed the most successful conference in its history with another full agenda of workshops on Friday, December 10. Notable technology experts representing nearly every phase of voice, data, image and multimedia communications supervised learning sessions in numerous areas, including separation and overlay networks, P2P live streaming systems, self-managed future Internets, virtual machine migration, multimedia computing and real-time Internet traffic.

Buoyed by the tremendous response to last year's meeting, IEEE GLOBECOM 2011 planning is already underway. For information on this global event, which will be held 5–9 December 2011 in Houston, Texas, please visit the conference web site at www.ieee-globecom.org/2011.
CONFERENCE PREVIEW

2011 IEEE ONLINE CONFERENCE ON GREEN COMMUNICATIONS, SEPTEMBER 26 – 29, 2011

IEEE COMSOC EMPHASIZES NEED TO REDUCE GLOBAL GREENHOUSE GAS EMISSIONS BY HOSTING THREE-DAY EVENT ON ENERGY-EFFICIENT COMMUNICATIONS & GREEN TECHNOLOGIES TOTALLY ONLINE

The IEEE Communications Society (ComSoc), the leading worldwide professional organization dedicated to the advancement of communications technologies, will emphasize the ongoing need to reduce global greenhouse gas emissions by hosting the first annual IEEE Online Conference on Green Communications (GreenCom) totally online from September 26 – 29, 2011. Dedicated to the latest advances in energy-efficient communications and green technologies, IEEE GreenCom'11 will enable attendees from around the world to engage in discussions on the newest networking, energy management and smart grid communications solutions without travel and from the comfort of their own home and/or work environments.

Webcast to international attendees by IEEE ComSoc and then published in IEEE Xplore, IEEE GreenCom'11 was specifically designed to address global warming developments and their societal impact in an alternative, ecological conferencing model that reaches broader audiences, offers time-flexible participation and provides near-physical experiences in a powerful, virtual forum where "energy efficiency is discussed energy-efficiently." Other distinct features include the ability of speakers to present research live and then answer audience questions with the aid of moderators. For more information on IEEE GreenCom'11, including "Call for Papers" details, interested researchers, academics and industry experts are urged to visit http://www.ieee-greencom.org.

With a deadline of March 20, 2011, the conference's peer-review board is currently accepting paper submissions on a wide range of topics covering energy-efficient fixed and wireless communications and networking, communications technologies for green solutions, and smart grid communications. Specific topics of interest include, but are not limited to:
•Energy-efficient protocols, extensions, transmission and networking technologies.
•Energy-efficient communications management.
•Energy efficiency in mobile, home, sensor, and vehicular networks, and in data centers.
•Green communication architectures and frameworks.
•Solutions for energy-efficient transport, logistics, industries, and buildings.
•Communication networks for Smart Grids and smart metering.

In addition, anyone interested in networking with colleagues or other attendees via Twitter, Facebook, or LinkedIn, as well as receiving conference updates, should visit http://www.ieee-greencom.org on a regular basis or contact Heather Ann Sweeney of the IEEE Communications Society at 212-705-8938 or h.sweeney@comsoc.org.

Efficient and Flexible: Live and recorded sessions to fit your schedule. Ecological: Reduce global greenhouse gas emissions. Cost-Efficient: Low registration fees, no travel costs. High-Visibility and High-Quality: Peer-reviewed papers published in IEEE Xplore®.
PRODUCT SPOTLIGHTS

Silicon Labs
Learn how to simplify your timing design using glitch-free frequency shifting to address low-power design challenges and the complexity of generating a wide range of frequencies in consumer electronics applications such as audio, video, computing or any application that requires multiple frequencies. Download this in-depth white paper from Silicon Labs. http://www.silabs.com/frequency-shifting

Synopsys
Designers today need high-level synthesis optimization technologies that deliver high quality of results for FPGA and ASIC while enabling rapid exploration of performance, power, and area. Synphony High-Level Synthesis (HLS) tools provide an efficient path from algorithm concept to silicon and enable greater design and verification productivity. http://www.synopsys.com
Remcom
Remcom's Wireless InSite® is site-specific radio propagation software for the analysis and design of wireless communication systems. It provides accurate predictions of propagation and communication channel characteristics in complex urban, indoor, rural and mixed path environments. Applications include wireless links, antenna coverage optimization, and jammer effectiveness. http://www.remcom.com/wireless-insite
Anritsu Company
Anritsu Company introduces the MS272xC Spectrum Master series, which provides the broadest frequency range ever available in a handheld spectrum analyzer. Providing frequency coverage up to 43 GHz in an instrument that weighs less than 8 lbs., the MS272xC series is also designed with an assortment of applications to test the RF physical layer, making it easier than ever for field technicians, monitoring agencies and engineers to monitor over-the-air signals, locate interferers, and detect hidden transmitters. http://www.us.anritsu.com

JFW Industries
Since 1979 JFW Industries has been a leader in the engineering and manufacturing of attenuators, RF switches, power dividers, and RF test systems. We deliver custom solutions at catalog prices. With more than 15,000 existing designs, the company can provide application-specific devices to solve almost any RF attenuation and switching problem. http://www.jfwindustries.com
GL Communications
GL's PacketExpert™ is a portable quad-port (electrical and optical) tester that can perform independent Ethernet/IP testing at wirespeed. It includes bi-directional RFC 2544 testing, wirespeed BERT, frame capture, and loopback. It takes the confusion out of Ethernet testing at all protocol layers, from raw Ethernet frames to IP/UDP packets. It can be used as a general-purpose Ethernet performance analysis tool for 10 Mbps, 100 Mbps, and 1 Gbps Ethernet local area networks. Key performance indicators include bit error count and rate, frame loss, sync loss and error-free count/seconds, throughput, latency, frame count/rate and more. PacketExpert™ further supports IPv4, IPv6, smart loopback, user-defined loopback, configurable headers for MAC/IP/UDP, error insertion, sequence number generation, and detailed reports in PDF format. http://www.gl.com/packetexpert
NEW PRODUCTS

NEXT-GENERATION BASE STATION ANALYZER
Anritsu Company
The MT8222B BTS Master is Anritsu's next-generation handheld base station analyzer that supports 4G standards now being deployed, as well as installed 2G/3G networks. Combining the inherent advantages of the BTS Master platform with new measurement capability, the MT8222B provides field engineers and technicians with a lightweight, handheld analyzer that can accurately and quickly measure all the key wireless standards, including LTE, WiMAX, WCDMA/HSDPA, CDMA/EV-DO and GSM/EDGE. A 20 MHz demodulation capability has been designed into the platform of the MT8222B BTS Master, allowing the instrument to measure LTE and WiMAX signals. Additionally, the MT8222B features a 30 MHz zero-span IF output for external demodulation of virtually any other wideband signal. For comprehensive receiver testing, a Vector Signal Generator option is available that covers 400 MHz to 6 GHz and can generate two simultaneously modulated signals, plus noise. Accurate two-port cable and antenna analysis from 400 MHz to 6 GHz can be conducted with the MT8222B BTS Master. All key measurements – including return loss, cable loss, VSWR and distance-to-fault (DTF) – can be made with the compact analyzer. It can also measure gain, isolation, and insertion loss to verify sector-to-antenna isolation. The MT8222B BTS Master also has spectrum analysis capability typically found in a bench-top instrument. In spectrum analyzer mode, the instrument has a wide frequency range of 150 kHz to 7.1 GHz, low phase noise of typically -100 dBc/Hz at 10 kHz offset, low displayed average noise level (DANL) of typically -163 dBm in 1 Hz RBW, and wide dynamic range of >95 dB in 1 Hz RBW. A number of options are available to configure the MT8222B BTS Master to suit the specific measurement requirements of any field test application. Among the options are a 140 MHz zero-span IF output with 30 MHz IF bandwidth, a GPS receiver that works with the analyzer's standard 3.3/5 V Bias Tee to enable connection to BTS site GPS antennas, and frequency output and input with 25 ppb accuracy. The MT8222B BTS Master features Anritsu's Master Software Tools (MST) and Line Sweep Tools (LST), comprehensive data management and analysis software that saves significant time when generating reports. Designed specifically for the field, the MT8222B BTS Master measures only 315 x 211 x 94 mm, weighs 4.9 kg, and is battery operated. It incorporates Anritsu's field-proven design, so the MT8222B BTS Master can withstand the harsh environments in which it is used. http://www.anritsu.com
SKYWORKS DEBUTS 6- AND 7-BIT DIGITAL ATTENUATORS
Skyworks
Skyworks has introduced two digital attenuators with superior attenuation accuracy for base station, cellular head-end, repeater, test equipment, and femtocell manufacturers. The 6-bit device has a 0.5 dB least significant bit (LSB), and the 7-bit device has a 0.25 dB LSB. These attenuators provide precise control over multi-standard radio transmitters and receivers, allow the user to configure the device for serial or parallel control, and do not utilize blocking capacitors, so that low-frequency operation is possible. http://www.skyworksinc.com

WIRESPEED ETHERNET/PACKET TESTER
GL Communications Inc.
GL Communications has released its enhanced PacketExpert™, a quad-port wirespeed Ethernet/packet tester. PacketExpert™ supports four electrical Ethernet ports (10/100/1000 Mbps) and two optical ports (1000 Mbps). It connects to the PC through a USB 2.0 interface. Each GigE port provides independent Ethernet/IP testing at wirespeed for applications such as BERT, RFC 2544, and many more. The application has been enhanced with the following features:
•User-defined or auto-negotiated electrical ports that can operate at 10/100/1000 Mbps line rates in full duplex mode; optical ports can operate at the 1000 Mbps line rate in full duplex mode only.
•Resolve utility to easily configure MAC addresses (through ARP).
•PING utility to check remote IP address availability.
The application takes the confusion out of Ethernet testing at all protocol layers, from raw Ethernet to IP/UDP packets. It can be used for general-purpose Ethernet performance analysis on 10/100 Mbps or 1 Gbps Ethernet local area networks. Two of the four ports have both electrical and optical interfaces, enabling testing on optical fiber links as well. http://www.gl.com

INSTRUMENT-GRADE BROADBAND MICROWAVE POWER AMPLIFIERS
Giga-tronics Incorporated
The GT-1020A and GT-1040A instrument-grade broadband microwave power amplifiers from Giga-tronics cover 100 MHz to 20 GHz and 10 MHz to 40 GHz, respectively, with flat frequency response, low noise figure, and low harmonics. Designed using broadband MMIC technology, these amplifiers typically provide 1/2 Watt (+27 dBm) at 20 GHz and 1/4 Watt (+24 dBm) at 40 GHz with > 25 dB gain and < 6 dB noise figure. Gain flatness is typically ±2.5 dB over the full frequency range. The Giga-tronics GT-1020A (20 GHz) and GT-1040A (40 GHz) amplifiers were designed in response to customer requests for higher power with flat frequency response and low noise. The amplifiers are easily used in R&D lab applications and manufacturing automated test systems to overcome power losses from a signal generator or whenever higher power is required. Their small size and light weight make them ideal for lab bench applications, while the ability to place the amplifier close to the device under test (DUT) minimizes cable loss for more optimal testing. The Giga-tronics GT-1020A and GT-1040A microwave power amplifiers are ideal companions to the Giga-tronics 20 GHz and 40 GHz Microwave Signal Generators. The amplifiers also feature high reverse isolation, excellent input and output match, and the long life and reliability of solid-state technology. http://www.gigatronics.com
Global Communications Newsletter • March 2011
ICIN 2010 — Weaving Applications into the Network Fabric: The Transformational Challenge
By Warren Montgomery, Insight Research, and Chet McQuaide, StraDis Consulting, USA

Over 150 delegates from 29 countries representing about 80 different organizations met for the 14th ICIN conference to discuss how new Internet and telecommunications technologies blend to deliver rich new services worldwide. ICIN 2010 took place in Berlin, Germany, on 11–14 October, with the technical co-sponsorship of the IEEE and IEEE Communications Society, and the support of many patrons including Deutsche Telekom Laboratories, Ericsson, Nokia Siemens Networks, Huawei, Alcatel-Lucent, EICT, Orange, and Berlin Partner. ICIN has a 21-year history of anticipating the key trends in the telecommunications services industry, and showcasing technologies and architectures that have become vital to the delivery of new services.

The conference began with two tutorials, one focusing on identity management in Web 2.0 and telecommunications, and the other covering service enabling technologies in fixed and mobile networks. The tutorials laid the foundation for key aspects of next-generation networks discussed throughout the week. The conference was formally opened by Max Michel (France Telecom), Chair of the ICIN 2010 Technical Programme Committee, and Heinrich Arnold representing Deutsche Telekom Laboratories, the local host.

A series of keynotes accented the theme for the conference. Malcolm Johnson of the ITU delivered a video presentation concerning standards needed to achieve a consistent user experience. Philip Kelley of Alcatel-Lucent presented a keynote on the market impact of the iPhone and similar devices. Bengt Nordström of Northstream presented a keynote on the critical issue of increasing operator revenue to support the cost of handling the rapid growth of mobile data traffic, given the impact of Internet players on communications. Thomas Michael Bohnert of SAP presented a keynote highlighting recent research on a next-generation Internet and the need for industry cooperation.

The second keynote session focused on host country perspectives. Thomas Aidan Curran of Deutsche Telekom suggested that carriers should become more like software companies. Sigurd Schuster from Nokia Siemens Networks presented the challenges of managing user identities, highlighting the role carriers can play. In a final keynote, Felix Zhang of Huawei discussed the migration of applications to cloud computing and the role that the network can play in enhancing user experience. The keynote sessions were capped by a reception (sponsored by Deutsche Telekom Laboratories and Berlin Partner) attended by both ICIN attendees and leading Internet and communication entrepreneurs from the Berlin area.

The main body of the conference took place on Tuesday and Wednesday at the Park Inn Berlin-Alexanderplatz, and
included 55 presentations in 14 sessions in two parallel tracks, plus eight demonstrations and poster presentations available for viewing both days and during an evening reception on Tuesday.

The conference programme included a session on the industry impact of marketplaces and app stores, proposing that applications are much more important in motivating device purchases and service subscriptions than in generating new revenue, and that they have the potential to profoundly change the user experience with networks. Another session presented three practical applications of the IP Multimedia Subsystem (IMS) and discussed the application of IMS principles to all-IP networking going forward. A session on content distribution addressed the business and technical challenges of content delivery, including both new solutions to the problem of efficient content delivery to many types of devices and new challenges resulting from content sharing in social networking. A session on privacy highlighted the need for greater user control over private information, and for great care by network operators and application builders to avoid inadvertent release of private user information. A session on home networking discussed home network security challenges, secure sharing across home networks, and new types of home networking services. A session on context management, a key service enabler, highlighted the need to provide common access to the many sources of context information to enhance user experience. A session on service composition, together with several of the demonstrations, illustrated how service composition has progressed from theory to practice to become important in generating new services revenue. A session on social networking described the combination of social networking and multimedia, and the use of social networking as a viral marketing tool.

On Wednesday the conference opened with sessions on "X as a service" (XaaS) and intersystem and interdevice mobility. The XaaS session covered the migration of many applications to "cloud"-based implementations, and the natural role of the network in enhancing service implementation and delivery, while the mobility session focused on the Third Generation Partnership Project (3GPP) Evolved Packet Core (EPC) architecture, identifying both strengths and gaps in supporting service and device migration. Sessions on content delivery and sensor networks filled out the morning. The content delivery session addressed some novel aspects of content delivery, including delivery of content-associated advertising and presenting user content recommendations. The sensor network (Continued on Newsletter page 4)
IEEE RNDM 2010 Workshop, Moscow, Russia
By Jacek Rak, Gdansk University of Technology, Poland, David Tipper, University of Pittsburgh, USA, and Krzysztof Walkowiak, Wroclaw University of Technology, Poland

RNDM 2010, the Second International Workshop on Reliable Networks Design and Modeling, held in Moscow, Russia, 19–20 October 2010, was a two-day event organized by Gdansk University of Technology, Poland, in cooperation with the University of Pittsburgh, United States, and Wroclaw University of Technology, Poland. The workshop was technically co-sponsored by the IEEE Communications Society and IFIP TC6 WG 6.10 (Photonic Networking Group). It was collocated with the 2nd International Congress on Ultra Modern Telecommunications and Control Systems (ICUMT 2010).

RNDM '10 followed up on a very successful first edition that took place the previous year in St. Petersburg, Russia. The objective was to provide a forum for researchers from both academia and industry to present high-quality results in the area of reliable networks design and modeling. Special attention was paid to network survivability.

Submitted papers were extensively reviewed by 45 members of the TPC and 18 external reviewers. The 23 accepted papers were organized into six technical sessions: Fault Management and Control in Survivable Networks; Survivability of Anycast, Multicast, and Overlay Networks; Fast Service Recovery; Methods for Measurement, Evaluation, or Validation of Survivability; Design of Dedicated/Shared Backup Paths; and Models and Algorithms of Survivable Networks Design and Modeling.

The workshop program was enriched by the keynote talks of two speakers: Professor Tibor Cinkler (Budapest University of Technology and Economics, Hungary) and Professor James P. G. Sterbenz (University of Kansas, United States, and Lancaster
University, United Kingdom). A panel discussion entitled "Future Research in Reliable Networks," chaired by Professor Maurice Gagnaire (Telecom ParisTech, France), concluded the event.

Conference papers were published in printed proceedings and are also available at IEEE Xplore. Authors of the top papers have been invited to publish extended versions of their contributions in a special issue of Telecommunication Systems Journal (Springer). The next edition will be held in Budapest, Hungary, on 5–7 October 2011. More information on the workshop may be found at http://www.rndm.pl

Presentation of the best paper award (left to right): Jacek Rak, Gdansk University of Technology, Poland, and Wouter Tavernier, IBBT, Ghent University, Belgium.
Addressing Europe's Digital Divide: Toward Sustainable Public Service Media in South East Europe
Roundtable International Conference, Sarajevo, 14–15 October 2010
By Dinka Zivalj, Regional Cooperation Council, and Kerim Kalamujic, University of Sarajevo, Bosnia and Herzegovina

Broadcasters and government officials pledged to ensure a smooth digital transition in southeast Europe, and called on European institutions for financial support in the interests of European cohesion. This was a key message of a two-day roundtable conference entitled "Addressing Europe's Digital Divide: Towards Sustainable Public Service Media in South East Europe." Organized by the Regional Cooperation Council (RCC) Secretariat and the Geneva-based European Broadcasting Union (EBU), the meeting was held in Sarajevo, Bosnia and Herzegovina, on 14–15 October 2010. It was attended by more than 50 general directors, media experts, and government, broadcast and regulatory officials.

In a "way forward to 2020" signed by EBU Vice-President Claudio Cappon and RCC Secretary General Hido Biscevic, the conference participants agreed:
•To establish an enduring cooperation to ensure the sustainability of all public service broadcasters in southeast Europe by 2020
•To promote the values and principles of public service media, as recognized by the Council of Europe and the European Union
•To call on the European Union to support these goals politically and financially under its aim to guarantee European cohesion

The conference highlighted the need for the public broadcasters, relevant ministries, and regulators in southeast Europe to join forces to ensure that the region does not lag behind the rest of Europe in meeting deadlines for digitalization and analogue switch-off. The participants agreed that the region's public broadcasters in the digital era need to remain key actors in the evolving knowledge society, provide reliable information and quality educational, cultural and entertainment programmes, and be motors for regional development and investment in the creative industries.

Speaking at the conference, the Minister of Communications and Transport of Bosnia and Herzegovina, Rudo Vidovic, said that digitalization would create a "free and open media market. This momentum needs to be seized." Mr. Biscevic said sustainable public service media were vital for the countries of southeast Europe as a whole, and for the European Union they hope to join. But he said their role was threatened by a lack of investment in infrastructures "and also in human capital." In an opening keynote speech, EBU Director General Ingrid Deltenre said public service broadcasters needed to play a leading role in the digitalization process in southeast Europe, as they have elsewhere on the continent.

Conference speakers included Peter Karanakov, Executive Director, MKRT; Natasa Vuckovic Lesendric, Assistant Minister for the Media, Ministry of Culture of Serbia; Marija Nemcic, Deputy HRT General Manager for International Relations; Bledar Meniku, Ministry for Innovation, Information Technology and Communication of Albania; Maria Luisa Fernandez Esteban, Directorate General for the Information Society and the Media, European Commission; and Oliver Vujovic, Secretary General of the South East Europe Media Organisation. The participants comprised senior representatives of governments, broadcasters, and regulators from Albania, Bosnia and Herzegovina, Croatia, Greece, Moldova, Montenegro, Serbia, Slovenia, the Former Yugoslav Republic of Macedonia, and Turkey, as well as the EU and other relevant institutions.
Report on the 3rd International Conference on Advanced Infocomm Technology (ICAIT 2010), 20–23 July 2010, Hainan, China
By Xinwan Li and Yikai Su, Shanghai Jiaotong University, China

The 3rd International Conference on Advanced Infocomm Technology (ICAIT 2010) was held on 20–23 July 2010 in Hainan, China. ICAIT is a three-day event comprising keynote speeches by both leading academic researchers and industrial experts, focused symposia, technical oral presentations, and updates by industrial presenters. The event provides a platform to introduce advanced infocomm technologies that will shape the next generation of information and communication systems and technology platforms. The conference was hosted by Hainan University in collaboration with Guangxi University, HoChiMinh City University of Technology, Huazhong University of Science and Technology, and Ningbo University. The IEEE Communications Society Shanghai Chapter was the technical co-sponsor.

ICAIT has been a yearly event since 2008, bringing together researchers, scientists, engineers, academicians, and students from all around the world to share the latest updates on new technologies that will enhance and facilitate the next generation of information and communications systems and technologies. ICAIT forges links between nations, and builds bridges between the academic and non-academic communities, and between government and non-government organizations.

Submissions from about 20 countries were received for ICAIT 2010. The papers went through peer review, nominally two or more reviews per paper, to ensure quality. We were highly honored to have two renowned keynote speakers, Prof. Muriel Medard from MIT, United States, and Prof. Anthony TS Ho from Surrey University, United Kingdom, share with us their insight and perspectives on current trends, convergence technologies, and strategic updates.

ICAIT 2010's venue, Hainan, the second largest ocean island and smallest land province in China, is located at the (Continued on Newsletter page 4)
ICAIT 2010 Conference Opening: (from left to right) Prof. Du Wencai from Hainan University; Prof. Muriel Medard from MIT; Prof. Fu Guohua, Vice president from Hainan University; and Prof. Anthony TS Ho from Surrey University.
Student volunteers from Hainan University.
Jitel and Telecom I+D: Spanish Rendezvous Points for TLC Professionals
By Pilar Manzanares-Lopez and Josemaría Malgosa-Sanahuja, Spain

The 9th Conference on Telematic Engineering (IX Jornadas de Ingeniería Telemática, JITEL 2010) took place from 29 September to 1 October at the University of Valladolid, Spain. This conference aims to provide a propitious forum for the Spanish research groups working on networking and telematic services, and to encourage both the interchange of experiences and results and cooperation among the research groups working in this field of knowledge.

The origin of this conference dates back to 1997 when, coinciding with the centenary of the Telecommunication Faculty of Bilbao, it took place at the University of the Basque Country. JITEL was organized biennially until 2005 and, owing to its great success, has been held annually since 2006. Each edition takes place in a different city, at Spanish universities all around the country. JITEL is co-sponsored by the IEEE Spanish Section. Traditionally, the best papers are published in IEEE Latin America Transactions. In addition, the best two papers related to telematic engineering education will be published in a book called TIC Aplicadas a la Ingeniería (TICAI), an initiative of the Spanish IEEE Education Society.

In this last edition of JITEL, with the challenge of adapting the current studies to the new European Higher Education Area, the 1st Symposium on Educational Innovation on Telematic Engineering (I Jornadas de Innovación Educativa en Ingeniería Telemática, JIE 2010) was created. This is a new forum where university staff can meet and exchange experiences on innovations in networking and telematic teaching, and also on the use of information and communication technologies in higher education in general and in this technical area in particular.

JITEL 2010's schedule was coordinated with the XX Telecom I+D conference, with part of the session program coinciding with it. Telecom I+D is a meeting point for members of the academic world (universities and the main research centers) and members of the business world (national and international leading companies in this sector) to advance the interchange of experience and dissemination of knowledge, focused on the promotion of technological innovation. Under the motto "20 Years Leading the Innovation to Change the Future," Telecom I+D 2010 focused on the commitment of ICT to society. Social networks, the television of the future, vehicular network (Continued on Newsletter page 4)
ICUMT 2010 Congress in Moscow, Russia
By Vladimir Vishnevsky, Konstantin Samouylov, Yevgeny Koucheryavy, Alexey Vinel and Dmitry Tkachenko, Russia

The 2nd International Congress on Ultra Modern Communications and Control Systems was held in Moscow, Russia, on 18–20 October 2010. The congress was organized by SPIIRAS (an institute of the Russian Academy of Sciences), Peoples' Friendship University of Russia (Moscow), and Tampere University of Technology (Finland). The IEEE Russia Northwest BT/CE/COM Chapter was a technical co-sponsor of the conference, along with the Popov Society (a professional society of Russian radio and communication engineers). Patrons and supporters of the congress included Nokia, Nokia Siemens Networks, and other organizations.

The technical program of ICUMT 2010 included five keynote talks by internationally recognized industrial and academic speakers, seven specialized workshops, and two main congress tracks: telecommunications and control/robotics. The congress proceedings include about 200 papers selected from more than 320 submissions with the help of more than 250 Technical Program Committee members, reviewers, and workshop chairs. The congress was attended by over 200 participants.

New insights in radio technologies, as well as the coexistence of research in control systems, robotics, and telecommunications, were key topics of the congress. Keynote speakers presented insights and trends in vehicular ad hoc networks, networking security, multicarrier solutions, the society of robots, and human-robot interaction. Significant scientific contributions were presented in the areas of applied problems in the theory of probability and mathematical statistics, advanced sensing and control, systems of systems, mobile computing, reliable network design, and fiber optic systems. An industrial panel session, "The ICT Future by the Year 2015," was organized by representatives of top management from Hewlett-Packard, Alcatel-Lucent Bell Labs, Oracle Communications, Nokia, Nokia Siemens Networks, and Telecom Italia.
Global Newsletter www.comsoc.org/pubs/gcn
STEFANO BREGNI, Editor
Politecnico di Milano - Dept. of Electronics and Information
Piazza Leonardo da Vinci 32, 20133 MILANO MI, Italy
Ph.: +39-02-2399.3503 - Fax: +39-02-2399.3413
Email: [email protected], [email protected]

IEEE COMMUNICATIONS SOCIETY
KHALED B. LETAIEF, VICE-PRESIDENT CONFERENCES
SERGIO BENEDETTO, VICE-PRESIDENT MEMBER RELATIONS
JOSÉ-DAVID CELY, DIRECTOR OF LA REGION
GABE JAKOBSON, DIRECTOR OF NA REGION
TARIQ DURRANI, DIRECTOR OF EAME REGION
NAOAKI YAMANAKA, DIRECTOR OF AP REGION
ROBERTO SARACCO, DIRECTOR OF SISTER AND RELATED SOCIETIES

REGIONAL CORRESPONDENTS WHO CONTRIBUTED TO THIS ISSUE
THOMAS M. BOHNERT, SWITZERLAND ([email protected])
KERIM KALAMUJIC, BOSNIA ([email protected])
JOSEMARIA MALGOSA SANAHUJA, SPAIN ([email protected])
EWELL TAN, SINGAPORE ([email protected])

A publication of the IEEE Communications Society
The conference was truly international, with speakers and attendees arriving in Moscow from 49 countries. The conference proceedings are available at IEEE Xplore. Some results of the conference are also available at the conference web site, http://www.icumt.org.
ICIN 2010/continued from page 1

session described several applications and illustrated the importance of interfacing sensors to communications networks, especially in machine-to-machine communication.

The final two sessions of ICIN covered business opportunities in networks and future trends. The business opportunities session explored the impact on the communication business of convergence with the Internet, necessitating operator strategy adjustments. The future opportunities session presented a series of new concepts, including the application of peer-to-peer networking concepts to increase network flexibility, techniques for using context, and strategies for managing the data volume increases required to deliver multimedia services with high quality of service (QoS). At the closing session, TPC Chair Max Michel reviewed highlights of the conference, and Roberto Minerva (Telecom Italia Laboratories) discussed a recent ConnectWorld article on "deperimeterization." Delegate feedback on ICIN was extremely positive.

On the final day, two workshops were provided in conjunction with ICIN: the Second International Workshop on Business Models for Mobile Platforms, chaired by Pieter Ballon of Vrije Universiteit Brussel, Belgium; and Telecom Transformation: Are You Ready?, delivered by Eileen Healey of Healey & Co. Both were well attended and thought provoking.

Planning for ICIN 2011 is already underway under the leadership of Roberto Minerva of Telecom Italia Laboratories, TPC Chair for ICIN 2011. The conference will take place 4–7 October 2011 in Berlin. For more information, visit http://www.icin.biz.
JITEL AND TELECOM I+D/continued from page 3

technology, the digital home, and energy efficiency are fields of interest that were analyzed in different workshops and round tables. The strategic contribution of ICT to public education, public health, and e-administration was covered by discussions about social communications, smart health, smart government, and emergency and disaster support.

The JITEL and Telecom I+D coordination aims to strengthen the links among the different agents of R&D&I in the scope of networking and telematic engineering. The success of participation and the positive assessments indicate that participants and organizing committee members consider it desirable for this coordinated organization and collaboration to be maintained in the future, for the sake of progress in periods of both crisis and economic and social growth.
ICAIT 2010/continued from page 3

south end of the country. Boasting a pleasing climate, golden sunshine, white beaches, and lush forests, it is dubbed "the oriental Hawaii." Hainan is blessed with a charming tropical landscape, contributing to its unique folklore and culture. It is known as a Chinese all-season garden. For more information, please see the web site www.icait.org or contact the authors at [email protected] and [email protected].
SERIES EDITORIAL
RADIO COMMUNICATIONS: COMPONENTS, SYSTEMS, AND NETWORKS

THE MATURATION OF DYNAMIC SPECTRUM ACCESS FROM A FUTURE TECHNOLOGY TO AN ESSENTIAL SOLUTION TO IMMEDIATE CHALLENGES

Joseph Evans and Zoran Zvonar

In this issue, we follow up on our efforts in 2009 and 2007 to highlight the rapidly evolving technologies of cognitive radio and dynamic spectrum access, beginning with a guest editorial on this topic from Preston Marshall. We would like to continue to encourage our readers to send us suggestions on the topics and trends in radio communications they feel should be addressed in the series. We look forward to your feedback!
Joseph Evans and Zoran Zvonar

The rapid evolution of dynamic spectrum access (DSA) is evident from the progression of the papers, and related events, at the IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN). At the first DySPAN in Baltimore in 2005, there was only one reported experiment (the Defense Advanced Research Projects Agency [DARPA] XG program); the bulk of the papers focused on what could or might happen in the DSA field. By the second DySPAN in Dublin, the most popular area of the conference was the demonstration room, which crowded several dozen operating DSA systems, or elements thereof, into a veritable smorgasbord of sensing and adaptation technology. The third DySPAN in Chicago coincided with the U.S. FCC announcement of the first official adoption of DSA principles to provide dynamic access to unused television channels. The most recent DySPAN followed by one week the release of the U.S. Broadband Task Force report, which officially recognized DSA as a core mechanism for achieving the necessary increase in spectrum access. Following on the heels of the task force, the U.S. FCC formally opened an inquiry into the regulatory mechanisms for adopting the principles of DSA within the broader context of regulatory regimes (ET Docket No. 10-237, Promoting More Efficient Use of Spectrum Through Dynamic Spectrum Use Technologies). In half a decade, DSA has moved from an idealistic and poorly understood technology to a potential centerpiece in a national strategy to maximize the utility of the finite spectrum resource.

This progression is reflected in the articles in this month's issue. They are focused not on far-off technological visions, but on several of the practical engineering impediments to the deployment of this technology. While there is still a need for future vision, the focus here is on demonstrating this technology in the context of today's challenges.

Perhaps one measure of the successful emergence of a technology is the necessity to plan for, and design in mitigation of, the inevitable malicious behavior. Certainly the experience with the Internet demonstrates that both malicious and greedy behaviors will attract opportunities for inappropriate (often illegal) benefit and associated exploits if these considerations are not integrated into the technology from the outset. In "A Bayesian Game Analysis of Emulation Attacks in DSA Networks," Thomas, Komali, Borghetti, and Mähönen demonstrate that it is possible to develop methods to detect attempts by spectrum-sharing nodes to emulate the characteristics of "protected" spectrum users, and thus obtain advantages over other spectrum sharers. The rampant distribution of malware over the Internet is proof that malicious, or even vandalizing, behavior must be a concern early in every technology, because individuals will find a mechanism to benefit from exploiting these weaknesses.
An early use of spectrum sharing is the operation of wireless microphones in unused television spectrum. As more technologically rigorous mechanisms for spectrum sharing in these same bands are introduced in the context of TV white space, it is essential that the new technology be capable of operating in the same spectrum ecosystem as these early secondary users. "DSA Operational Parameters with Wireless Microphones" by Erpek, McHenry, and Stirling discusses this important topic, and demonstrates the viability of sensing regimes for this highly heterogeneous emission environment (milliwatts from the microphone, as compared to megawatts from a television transmitter).

For DSA to be successful, it must also provide a sustainable and effective market mechanism to ensure that spectrum uses are arbitrated to achieve their maximal utility. There is a general international consensus that market-based mechanisms are instrumental in achieving this objective. In "The Viability of Spectrum Trading Markets," Caicedo and Weiss discuss the viability of markets as applied to secondary usage, and provide concepts for how such markets could achieve success through sufficient spectrum liquidity, market participants, and extent of the spectrum resources that would be accessible through such markets.

Most discussions of DSA eventually focus on the ability of spectrum sensing regimes to detect and protect other users of the spectrum. However, to be effective, DSA must not only provide protection to "privileged" users, but also provide effective performance for the secondary user. In "Unified Space-Time Metrics to Evaluate Spectrum Sensing," Tandra, Sahai, and Veeravalli present a unified framework that goes beyond protection of the primary user and also considers the performance of the secondary spectrum-sharing user.

Lastly, it is evident that the discussion of DSA is starting to lead to a much broader set of technologies: those that open the more general question of how devices can coexist with each other in dense, heterogeneous, and overlapping networks. Early experience with smartphones demonstrates that wireless architectures will not satisfy demand with just linear growth in wireless capacity; it will be necessary to achieve exponential growth in capacity as these devices become the prevalent solution in the marketplace.
BIOGRAPHY

PRESTON MARSHALL ([email protected]) is a director at the Information Sciences Institute at the University of Southern California's Viterbi School of Engineering, where he leads research programs in wireless, networking, cognitive radio, alternative computing, and related technology research. He has 30 years of experience in networking, communications, and related hardware and software research and development. For most of the last decade, he has been at the center of cognitive radio research, including seven years as a program manager for DARPA, where he led key cognitive radio and networking programs, including the neXt Generation Communications (XG) program, Disruption and Delay Tolerant Networking (DTN), Sensor Networking, Analog Logic, and the Wireless Network After Next (WNaN) program. He has numerous published works, and has made many appearances as an invited or keynote speaker at major technical conferences related to wireless communications. He holds a Ph.D. in electrical engineering from Trinity College, Dublin, Ireland, and a B.S. in electrical engineering and an M.S. in information sciences from Lehigh University.
TOPICS IN RADIO COMMUNICATIONS
Understanding Conditions that Lead to Emulation Attacks in Dynamic Spectrum Access

Ryan W. Thomas and Brett J. Borghetti, The Air Force Institute of Technology
Ramakant S. Komali and Petri Mähönen, RWTH Aachen University
ABSTRACT

Dynamic spectrum access proposes tiering radios into two groups: primary users (PUs) and secondary users (SUs). PUs are assumed to have reserved spectrum available to them, while SUs (operating in overlay mode) must share whatever spectrum is available. The threat of emulation attacks, in which users pretend to be of a type they are not (either PU or SU) in order to gain unauthorized access to spectrum, has the potential to severely degrade the expected performance of the system. We analyze this problem within a Bayesian game framework, in which users are unsure of the legitimacy of the claimed type of other users. We show that, depending on radios' beliefs about the fraction of PUs in the system, a policy maker can control the occurrence of emulation attacks by adjusting the gains and costs associated with performing or checking for emulation attacks.
This research is funded in part by the Air Force Research Lab and by the Air Force Office of Scientific Research. This work is also supported in part by the European Union (ARAGORN project) and DFG through the UMIC research center facility. The views expressed in this article are those of the authors, and do not reflect the official policy or position of the United States Air Force, Department of Defense, or the U.S. Government. This document has been approved for public release; distribution unlimited.

INTRODUCTION
Dynamic spectrum access (DSA) is an exciting new concept that promises to bring flexibility to spectrum management. Instead of relying on traditional spectrum licenses, in which licensees have exclusive rights to a fixed, static amount of spectrum, DSA schemes allow unused spectrum to be used opportunistically by other users. These opportunistic users are called secondary users (SUs), and must only use spectrum not in use by primary users (PUs), radios that have priority licenses for the spectrum. To operate in either a PU or SU role will require either a license or operating approval from a regulator such as the Federal Communications Commission (FCC). Radios holding a PU license have access to a reserved frequency band. Radios holding an overlay SU license have the ability to scavenge spectrum from whatever spectrum is sensed to be unused.

There may be times when radios do not want to use spectrum according to their license. Instead, the radio may pretend to hold a different license. The idea of such an emulation attack (EA) was first articulated by Chen in [1]. That
work identified a specific kind of EA called the PU emulation attack (PUEA), in which a radio emulates a PU for either selfish or malicious reasons. When driven by selfishness, radios emulate to maximize their spectrum usage; when driven by maliciousness, radios emulate to degrade the DSA opportunities of other spectrum users.

Like any kind of illegal activity, deterring either kind of EA requires a combination of detection and punishment. In this vein, the core contribution of this work is to investigate system-wide behaviors when the detection of selfish EAs leads to punishment. By proposing a Bayesian game framework in which radios are unsure of the legitimacy of the claimed license of other radios, we identify conditions under which a Nash equilibrium (NE), a stable operating point, can exist. These conditions give insight into questions such as whether policy-abiding radios can coexist with selfish radios, with what probability selfish radios choose to launch EAs, and under what conditions selfish EAs are discouraged. The NE also reveals the probability of radios challenging the license of other radios to determine their legitimacy. These results can be used to determine appropriate policies that keep the rate of EAs arbitrarily low, something of significant interest to regulatory agencies. Conversely, this information can be used by PU and SU licensees to determine how rampant EAs will be for their license type.

Most work on emulation attacks has focused on the detection aspect of the PUEA. Chen [1] suggests a location-based authentication scheme called LocDef, in which the location (calculated from the received signal strength [RSS] by a network of sensors) and waveform characteristics of a declared PU are compared against the known location and waveform characteristics for that transmitter. Particularly when the PUs consist of static, well-characterized users such as television broadcasters, this can be an effective scheme. A clustering approach is described in [2], in which waveform features are used to categorize a signal as being from a PU. More generally, in [3, 4], various statistical tests are developed to determine whether it is likely that the measured
RSS of the declared PU could have come from an actual PU. Unlike these works, which are primarily about detecting EAs, we focus on how to cope with these attacks once they are detected, and we provide conditions for formulating an appropriate response mechanism.
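As a toy illustration of the detection side, the following sketch is written in the spirit of the RSS-based statistical tests of [3, 4] rather than as their actual procedures; the path-loss model, its parameters, and the decision threshold are all assumptions made for this example. A verifier that knows a declared PU's transmit power and location can flag a likely emulation attack when the measured RSS strays too far from the model's prediction:

```python
import math

# Hypothetical log-distance path-loss model: PL(d) = L0 + 10*n*log10(d),
# with reference loss L0 and path-loss exponent n chosen for illustration.
def expected_rss_dbm(tx_dbm, dist_m, pl_exp=3.0, ref_loss_db=40.0):
    return tx_dbm - (ref_loss_db + 10.0 * pl_exp * math.log10(dist_m))

def looks_like_emulation(measured_dbm, tx_dbm, dist_m, shadow_sigma_db=8.0):
    """Crude z-test: reject the 'genuine PU' hypothesis when the measured
    RSS lies beyond ~2 sigma of log-normal shadowing (the threshold is an
    assumption for this sketch, not a value from the cited papers)."""
    z = abs(measured_dbm - expected_rss_dbm(tx_dbm, dist_m)) / shadow_sigma_db
    return z > 2.0

# A high-power TV transmitter (90 dBm EIRP) 20 km away should be received
# near -79 dBm under this model; -20 dBm suggests a nearby emulator.
print(looks_like_emulation(-20.0, 90.0, 20_000.0))  # -> True
```

Real tests must also account for fading statistics and sensor placement, which is exactly why [3, 4] develop more careful statistical machinery.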
BAYESIAN GAMES WITH INCOMPLETE INFORMATION

Bayesian games, or games with incomplete information, are modeled as having players of different types. Knowledge of these types may or may not be extended to all players, meaning that some players may not know other players' types. In these games, random chance (called nature) selects types before the game is played according to some probability distribution (this uncertainty is known as incomplete knowledge). Strategy decisions in a Bayesian game are made based on a combination of players' types, their belief in the distribution of types from nature, the actions available to them, and the payoffs under these combinations.

A Bayesian game contains several components; we present a simplified view of them here. First is the player set, consisting of all players in the game. There is a set of types, where a type represents a particular player's characteristics. For instance, in a fighting game, players might be of two types, strong or weak. Each player and type has a set of strategies that represents the possible choices in the game. In the fighting game the strong player may have the strategies of punch or run, and the weak player may have the strategies of kick or cower. When a player and type are only allowed to select one strategy at a time (i.e., just punch or just run for the strong player), we say they are playing a pure strategy. Mixed strategies are a more general case: a mixed strategy is represented by some probability distribution over all possible pure strategies. Each player has a utility function ui that maps all players' chosen strategies and types to a real value, representing the preference of that particular combination (higher values are considered more desirable). Finally, each player has a distribution function that represents their beliefs about nature's distribution of types. In a Bayesian game, it is assumed that everything except for the player's type (including its utility, pure strategy choices, the number of players, etc.) is "common knowledge," meaning that all players know them, know that all players know them, and so on.

One of the more useful game-theoretic concepts is the NE, which describes a state from which no rational player has any motivation to unilaterally change their chosen mixed or pure strategies (doing so would result in an equal or lower utility, thus giving the player no incentive to change). The Bayesian Nash equilibrium (BNE) is an extension of the NE definition that incorporates the players' types and beliefs.

The pure-strategy NE is fairly easy to conceptualize in a game in which both players have two pure strategies. Under a pure-strategy NE, neither player will switch to their other pure strategy, because doing so alone would not increase their utility. The same is true of the mixed-strategy NE: neither player will switch to any other pure or
mixed strategy, since doing so would not increase their utility. To understand under what kinds of mixed strategies NEs arise, it helps to use a slightly different perspective. If a mixed strategy is played such that the other player's expected utilities from either pure strategy are equal, then that other player will not prefer any particular pure or mixed strategy over any other (all will have the same expected utility). If both players play such a mixed strategy, making the other player "indifferent" to all mixed and pure strategies, they will be playing the mixed strategy NE. Neither player has any incentive to change their mixed strategy.
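To make the indifference condition concrete, the following minimal sketch solves a generic two-player, two-strategy game for its mixed strategy NE. The payoff numbers are hypothetical, chosen only so that no pure strategy NE exists.

```python
# Hypothetical 2x2 game: A[i][j] = (row payoff, column payoff) when the
# row player picks strategy i and the column player picks strategy j.
A = [[(3, 1), (0, 2)],
     [(1, 4), (2, 0)]]

def indifference_mix(u_a, u_b):
    """Weight w on option "a" that makes the opponent indifferent, i.e.
    solves w*u_a[0] + (1-w)*u_b[0] == w*u_a[1] + (1-w)*u_b[1],
    where u_x[k] is the opponent's payoff from her pure strategy k
    when this player picks option x."""
    return (u_b[1] - u_b[0]) / ((u_a[0] - u_b[0]) - (u_a[1] - u_b[1]))

# Row mixes so the column player's two pure strategies pay the same:
p = indifference_mix((A[0][0][1], A[0][1][1]), (A[1][0][1], A[1][1][1]))
# Column mixes so the row player's two pure strategies pay the same:
q = indifference_mix((A[0][0][0], A[1][0][0]), (A[0][1][0], A[1][1][0]))
print(f"row plays its first strategy with p = {p:.2f}")     # 0.80
print(f"column plays its first strategy with q = {q:.2f}")  # 0.50
```

At (p, q) = (0.8, 0.5) each player is indifferent between all strategies, so neither can gain by deviating: the mixed strategy NE.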
SYSTEM MODEL
We assume that a network consists of multiple radios attempting to access (via PU and SU licenses) a shared block of spectrum. We assume that SU licenses are exclusive of PU licenses, meaning that radios can have one license or the other, but not both. Furthermore, we assume that interlopers (radios without a license) that use the spectrum frequently will be shut down or forced to get approval, making their presence uncommon. Therefore, this analysis ignores unlicensed radios and dual-licensed radios. We anticipate future work investigating the effect these users have on the system. Without loss of generality, we suppose that all radios are within transmission range of one another and suffer interference on the sub-bands that are in use by other radios. (Non-interfering radios can be disregarded in our spectrum sharing model because they do not influence each other's performance and therefore can coexist.) In this manner, the spectrum can be considered to be a common resource of which all active radios are attempting to utilize some portion. The bandwidth that is available for a legitimate PU is considered to be a fixed and reserved quantity (of course, this assumption only holds when the bandwidth is legitimately used; when a radio performs an EA, more than one PU may be in a PU frequency band). In contrast, the bandwidth for SUs is dependent on the available spectrum (we use this term to describe all spectrum in a block not in use by a PU; the literature also refers to it as white space or spectrum holes) and the number of SUs sharing it. This forms a DSA system in which PUs have negotiated with the regulatory body a license that allows priority interference-free access to a fixed subset of the spectrum block. Conversely, an SU license allows a radio to scavenge some amount of bandwidth from the spectrum that is unused by the PUs. Bandwidth for SUs is distributed among all SU radios as a function of the available free bandwidth and the number of SUs. Although the process of negotiating spectrum allocation among SUs is beyond the scope of this work, we assume that all SUs receive an equal benefit from the process. Although DSA proposes to relax the rules for how spectrum is accessed, it does not propose to eliminate regulation altogether. It is expected that radios will utilize the spectrum block in accordance with the license they hold. In support of this, we assume that there is some mechanism in place that allows the verification of a radio's PU or SU license. As discussed earlier, several mechanisms have been suggested in the literature for this, including location services, waveform identification (using either passive or active techniques), and certificates.
Figure 1. Illustrations of the four revenue variables: a) the revenue rp of the PU; b) the revenue rs of the SU; c) the revenue r*s of an SU with an additional SU in the available spectrum; d) the revenue r*p of a PU when it shares spectrum with an additional PU.
In this work we do not concern ourselves with the actual approach used, only assuming that there is one and that it has some non-zero cost to perform (in terms of such resources as processing time, power consumption, and communication overhead). Selfish radios (radios with the potential to perform a selfish EA) can either emulate, utilizing the spectrum in the manner of a license they do not hold, or use the spectrum legitimately and not perform an EA. Both types of licensees, PU and SU, if they are not policy-abiding, may choose to perform an EA. The idea of an SUEA is less discussed than the PUEA, but may occur when the available spectrum for SU use is more desirable than the spectrum reserved for PU use (and SU and PU licenses are exclusive of one another). Other radios in the system can choose to either check the credentials of this radio (which we term challenge, since the selfish radio may or may not actually be emulating) and suffer the verification cost discussed above, or accept the stated role of the radio and allow the selfish radio to continue operating as it pleases. Our model assumes the existence of a regulatory body (e.g., the FCC) with the authority to punish violators of policy. When a violation is detected (if a selfish radio's EA is challenged by another radio), the regulatory body has the capability to impose a punishment on the violator. The punishment cost can come in several possible forms, including financial penalties, bandwidth restrictions, timeouts, or the forfeiture of other radio resources by the violator. For simplicity, we assume that punishments are fixed, meaning penalties do not change under repeated good or bad behavior. Furthermore, once punishments have been paid, violators are allowed to resume operations in the spectrum (under the terms of their correct license). Finally, we assume that there are enough radios in the system that a radio knowing its own license type (SU or PU) will not have an effect on its belief of the distribution of the license types in the network. Furthermore, decisions to emulate and challenge are made simultaneously,
preventing the decision of one radio from sequentially affecting the decision of the other. The utility function for the radios is of the form ui(s, θ) = ri(s, θ) – ci(s, θ), where ri(s, θ) is the revenue a radio of type θi playing against other types gets under a particular strategy profile s (that radio's strategy and all other radios' strategies), and ci(s, θ) is the cost assessed under that action (leaving the total utility as the profit). While the specific revenue values vary depending on the particular spectral scenario under which the radios are operating, there are four types of revenue that are of interest. The first, rp, is the revenue a PU receives when it uses its reserved spectrum. Similarly, rs is the benefit an SU receives when it shares available spectrum (spectrum that is not occupied by a PU) with some fixed number of other SUs. These two benefits are illustrated in Fig. 1a and 1b. When a radio has to share the available spectrum with one more radio than in the rs case, it gets benefit r*s. This is illustrated in Fig. 1c. Finally, when two (emulating) PUs attempt to access reserved spectrum, they receive benefit r*p. Figure 1d illustrates the case where the two PUs attempt to access two non-overlapping spectrum bands. Similar to the revenues, there are two types of costs that are of interest. The first is the regulatory body's penalty for performing an EA, ce. The other is the cost of challenging the other radio's license, cc. As it is not clear that every time a violation is detected the regulatory body will be able to enact a punishment, without any loss of generality ce can be considered an expected cost. Similarly, cc may represent the expected cost of challenging a license if the scheme employed requires a dynamic amount of resources. From this system model, we define two games that drive our analysis in the rest of the article, under the general assumption that the costs of challenging and emulating are greater than 0. The first game (the one-way game) is between a policy-abiding SU and a selfish radio holding either an SU or a PU license, and the second
(the two-way game) is between two selfish SUs that may perform an EA but are unsure if the other will challenge their license. We use two-player one-shot Bayesian games to represent the interactions between radios. Modeling these as two-player games is similar to the attacker/defender model used in [5, 6]. We assume that radios determine a strategy for dealing with other spectrum users and/or using the spectrum upon the decision to use the spectrum block, based only on their knowledge of radio utilities and their beliefs of type distributions. These strategies are applied immediately as other radios attempt to utilize the spectrum. These interactions occur quickly compared to spectrum users arriving and leaving, so we approximate them as one-on-one one-shot events. Future work can relax these assumptions, providing insight into multistage multiradio interactions. We determine the pure and mixed strategy BNE of these two games to gain insight into the steady-state rate of EAs in the system. We pursue an analytical approach here, where the values of the revenues and costs are not defined, allowing these results to be used by regulatory agencies for engineering DSA ecosystems.

Radio 1 type = PU
                      Radio 2: Challenge      Radio 2: Accept
Radio 1: Emulate      (rp – ce, rs – cc)      (r*s, r*s)
Radio 1: Legitimate   (rp, rs – cc)           (rp, rs)

Radio 1 type = SU
                      Radio 2: Challenge      Radio 2: Accept
Radio 1: Emulate      (r*s – ce, r*s – cc)    (rp, rs)
Radio 1: Legitimate   (r*s, r*s – cc)         (r*s, r*s)

Figure 2. The normal form one-way EA game, in which the selfish radio's (radio 1's) type is unknown to radio 2, and radio 2's type is known. In each cell, the first payoff is radio 1's, the second radio 2's.
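Before analyzing the games, it may help to make the four revenue variables concrete. The toy model below assumes revenue proportional to obtained bandwidth; the block size, PU reservation, and proportionality constant are invented for illustration, since the article deliberately keeps the revenues symbolic.

```python
# Toy spectrum block (all numbers hypothetical).
BLOCK_MHZ = 10.0          # total shared block
PU_RESERVED_MHZ = 4.0     # fixed reservation for a legitimate PU
REVENUE_PER_MHZ = 1.0     # revenue per MHz of usable bandwidth

def su_share(n_sus, n_pus=1):
    """Each SU's equal share of the spectrum left unused by the PUs."""
    available = max(BLOCK_MHZ - n_pus * PU_RESERVED_MHZ, 0.0)
    return REVENUE_PER_MHZ * available / n_sus

r_p = REVENUE_PER_MHZ * PU_RESERVED_MHZ  # PU alone in its reserved band (Fig. 1a)
r_s = su_share(n_sus=1)                  # SU with the baseline set of SUs (Fig. 1b)
r_s_star = su_share(n_sus=2)             # one additional SU appears (Fig. 1c)
r_p_star = r_p / 2                       # two emulating PUs collide in one band (Fig. 1d);
                                         # r*p may equal r_p if their bands do not overlap
print(r_p, r_s, r_s_star, r_p_star)      # 4.0 6.0 3.0 2.0
```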
THE ONE-WAY EMULATION ATTACK GAME
We now formally analyze the interaction between a policy-abiding SU and a radio of unpredictable type (either SU or PU) that may choose to either launch a selfish EA or act true to its type. In this game a radio willing to perform an EA interacts with an SU that is not going to perform an EA, but may or may not choose to challenge the license of the other radio. This game represents a scenario where policy dictates that radios can only be challenged at particular asymmetric times (e.g., when a radio enters the spectrum for the first time); once radios have begun to utilize the spectrum, they are entrenched regardless of their legitimacy. This kind of policy might emerge if a certificate-based scheme were used for authentication and certificates were only shared upon entry to a spectral band. The player set therefore consists of two radios. Radio 1 can be one of two types, PU or SU, based on the license granted to the radio. Radio 2 can only be one type, an SU. Both radios know their own type; radio 2's type is known to radio 1, while radio 1's type is unknown to radio 2. Due to radio 2's partial information about radio 1, there are two actions radio 1 can take: either emulate and pretend it is the opposite of its true type, or
act legitimately and present itself without deception. Radio 2 has two actions available: either challenge radio 1 and check its stated license, or accept its claim. The game is considered to be played simultaneously; the normal form matrix of utilities for the two types of radio 1 is shown in Fig. 2. For the interaction in Fig. 2, when radio 1 is of type PU, the utility outcomes are as follows. If radio 1 decides to emulate and radio 2 challenges it, the regulatory body ensures that radio 1 only uses the PU spectrum (rp) and pays the penalty (ce). Under this scenario, radio 2 gets the revenue of not sharing the available spectrum with an additional radio, but also must pay the cost (cc) of challenging radio 1's license. If, instead of challenging, radio 2 had accepted radio 1's emulation, both radios would share the available spectrum with the other radio (r*s). If radio 1 does not emulate, it receives the benefit of the PU spectrum. If radio 2 decides to challenge when radio 1 has acted legitimately, radio 1 has committed no wrongdoing and will still get the revenue from the PU spectrum, while radio 2 will still have to pay for challenging (cc) and get the same revenue (rs). Similarly, if radio 1 acts legitimately and radio 2 accepts, both radios receive their appropriate spectrum revenues without any costs. Similar logic is used to determine the payoff matrix when radio 1 is of type SU. We begin by investigating under what circumstances it only makes rational sense for radio 2 to play either challenge or accept, regardless of the strategy of radio 1 — when this occurs, the strategy is called dominant. We do this by investigating the expected utility for radio 2 when facing all possible strategy choices by its uncertain opponent. We define radio 2's belief that radio 1 is of type PU to be φ and its belief that it is of type SU to be 1 – φ (radio 1's beliefs are not interesting, since radio 2's type is known to all). The expected utility is calculated for every strategy-type combination; for instance, the expected utility of playing challenge against a PU playing emulate or an SU playing legitimate is calculated by summing radio 2's utility of playing challenge against PU emulate (top left cell in Fig. 2) weighted by φ and radio 2's utility of playing challenge against SU legitimate (bottom left cell in Fig. 2) weighted by 1 – φ. This is continued for all four combinations of strategies the two types can play against challenge, and all four strategies the two types can play against accept. We find that under some circumstances there is no dominant strategy, and under others accept is a dominant strategy, but challenge is never a dominant strategy. There are two general conditions where accept is a dominant strategy for radio 2. The first
occurs when the revenue gained from challenging is less than the cost of challenging, and the second occurs when radio 2 believes that emulating radios are sufficiently rare. In other words, accept is a dominant strategy when the expected gain from challenging is less than the cost of challenging. When one of these conditions does hold (meaning accept is the only rational strategy for radio 2 to play), either radio 1's strategy of emulating or of acting legitimately will be irrational and can be eliminated. Which one gets eliminated depends on whether radio 1 gets more revenue from acting as a PU vs. sharing the available spectrum as an SU with radio 2. If it gets more revenue from sharing, radio 1 will play emulate; otherwise, it will play legitimate. Thus, there are two pure strategy NE: one in which radio 2 plays accept and radio 1 plays emulate, and one in which radio 2 plays accept and radio 1 plays legitimate. For the mixed strategy equilibria, we find that the cost of emulating affects the probability of radio 2 challenging. For a fixed amount of revenue gained from emulating, higher costs of emulating lead to lower probabilities of challenging. In other words, as the penalties on emulating increase, radio 2 has to challenge less frequently to maintain radio 1's indifference to either emulating or acting legitimately. This makes sense, as the cost of emulating acts to counter the incentive to emulate, and the greater the cost, the less often challenge has to be played to make the expected utilities the same for radio 1 choosing between legitimate operation and emulation. Radio 1 types that emulate do so with a probability that makes the expected gain for radio 2 from challenging equal to the cost of challenging. A regulatory agency may be concerned with how likely it is that radios will perform EAs under various revenue and cost scenarios. We term this probability p[EA], which is the probability that a randomly encountered radio will choose to emulate. We specifically investigate the worst case p[EA], pmax[EA], which is defined as the maximum p[EA] across all beliefs that radios have about the fractions of types in the system. To determine this, we assume that the beliefs radios have about the distribution of types are correct, meaning that φ actually represents the true fraction of devices that are PUs. For some ranges of parameter values, any belief admits a pure strategy equilibrium that has radio 1 emulating, giving a pmax[EA] of 1. For other ranges of parameters, not just any belief admits a pure strategy equilibrium. In these cases, pmax[EA] is the largest fraction (φ) of emulating types that supports a pure strategy equilibrium. It is worth noting that pmax[EA] is not correlated with the cost of emulating. In other words, no matter what the magnitude of the penalty that the regulatory agency places on emulating, there is no pure strategy profile that would change the worst-case probability of emulation.
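The dominance scan just described is easy to mechanize. The sketch below encodes radio 2's payoffs from Fig. 2 and checks, for a few beliefs φ, whether accept dominates challenge; the revenue and cost numbers are hypothetical stand-ins, since the article keeps them symbolic.

```python
# Radio 2's payoffs from the Fig. 2 matrices (numeric values hypothetical).
r_s, r_s_star = 6.0, 3.0   # lone-SU and shared-SU revenues
c_c = 0.5                  # cost of challenging a license

# u2[radio 1 type][radio 1 action][radio 2 action]
u2 = {
    "PU": {"emulate":    {"challenge": r_s - c_c,      "accept": r_s_star},
           "legitimate": {"challenge": r_s - c_c,      "accept": r_s}},
    "SU": {"emulate":    {"challenge": r_s_star - c_c, "accept": r_s},
           "legitimate": {"challenge": r_s_star - c_c, "accept": r_s_star}},
}

def expected_u2(phi, a_pu, a_su, a2):
    """Radio 2's expected utility under belief phi that radio 1 is a PU,
    when a PU-type radio 1 would play a_pu and an SU-type would play a_su."""
    return phi * u2["PU"][a_pu][a2] + (1 - phi) * u2["SU"][a_su][a2]

actions = ("emulate", "legitimate")
for phi in (0.1, 0.5, 0.9):
    dominant = all(expected_u2(phi, a, b, "accept") >= expected_u2(phi, a, b, "challenge")
                   for a in actions for b in actions)
    print(f"phi = {phi}: accept dominates challenge? {dominant}")
```

With these particular numbers, accept is dominant only for small φ (here φ ≤ 1/6), illustrating how the belief about the type distribution drives the dominance condition.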
THE TWO-WAY EMULATION ATTACK GAME
We now explore a different game in which each radio has partial knowledge about the type of the other. In this game both radios are SUs that will
potentially commit an EA but do not know if the other radio will challenge them. In this game every claimed PU license represents an EA; however, not all radios will take the effort to verify the attack and report it to the regulatory agency. Note that the types in the two-way game are different from those in the one-way EA game: one type always challenges the claimed license type of other radios, and the other type always accepts the claimed license. Both radios know their own type but are unaware of the other radio's type. Each type has the same actions, emulate or act legitimately. The game is played simultaneously; the normal form of the game for the four possible combinations of types is shown in Fig. 3. The two-way game may manifest in a scenario where a subset of the SUs have the capability to report EAs and will therefore challenge all licenses, while others do not have this capability (perhaps they are not able to communicate with the regulatory body) and never challenge. Alternatively, it represents a scenario in which the challenge/accept mixed strategy is dictated by policy rather than the radio itself. In this game, some of the more interesting payoff pairs include: when both radios emulate and are both of type challenge, the radios share the available spectrum (r*s) and pay both the punishment cost and the cost of checking the license credentials; when both radios emulate and both radios are of type accept, the radios share the same PU spectrum (r*p) (which, as discussed earlier, may be equal to or less than rp, depending on the scenario); and when one radio performs an EA and is of type accept while the other radio does not and is of type challenge, both radios end up sharing the available spectrum (r*s) and paying either the punishment cost (ce) or the cost to challenge (cc). The game in Fig. 3 admits at least one pure strategy equilibrium. In the case where the revenue from being the only SU is greater than the shared revenue as a PU, and the shared revenue as an SU is greater than the revenue from being the only PU, the strategy of acting legitimately is dominant for both types of radios — challenge and accept — regardless of their beliefs about the distribution of types. In this configuration both radios of any type will share the spectrum as SUs, because there is no spectral incentive to perform an EA. However, if these conditions do not hold, there are no easily identifiable pure strategy BNE, and we must look for a mixed strategy equilibrium. To determine the mixed-strategy BNE, we note that there is no need for radios of type challenge to mix strategies; for these radios φ alone determines which strategies dominate. When a radio of type challenge faces a radio also of type challenge, no matter what mixed strategy either radio selects, the utility is either r*s – ce – cc (if it emulates) or r*s – cc (if it acts legitimately). In the same manner, if a radio of type challenge faces a radio of type accept, no matter what mixed strategy the accept radio selects, the utility of the challenge radio is either rp – cc if it emulates or r*s – cc if it acts legitimately. In this manner, no mixed strategy from either type can make radios of type challenge indifferent. When playing a mixed strategy, the probability of emulating for radios of type accept increases proportionally to the expected revenue from getting PU spectrum without being challenged and decreases proportionally to the expected cost of getting caught.
In other words, the radios of type accept do not like emulating when the cost of emulating goes up or the benefit of emulating goes down. Furthermore, because of the game symmetry, all mixed and pure BNE strategies are the same for radios 1 and 2. In general, regulatory agencies will have a harder time creating good rules of thumb for dealing with mixed strategy BNE in this game, as the BNE behavior of the system under mixed strategies is strongly dictated by the relative magnitudes of the revenue gained and lost under different spectral use cases. However, under all mixed strategies, increasing the cost of emulating will decrease the propensity for radios to emulate.

Radio 1 = challenge, Radio 2 = challenge
                      Radio 2: Emulate                Radio 2: Legitimate
Radio 1: Emulate      (r*s – ce – cc, r*s – ce – cc)  (r*s – ce – cc, r*s – cc)
Radio 1: Legitimate   (r*s – cc, r*s – ce – cc)       (r*s – cc, r*s – cc)

Radio 1 = challenge, Radio 2 = accept
                      Radio 2: Emulate                Radio 2: Legitimate
Radio 1: Emulate      (rp – cc, rs – ce)              (rp – cc, rs)
Radio 1: Legitimate   (r*s – cc, r*s – ce)            (r*s – cc, r*s)

Radio 1 = accept, Radio 2 = challenge
                      Radio 2: Emulate                Radio 2: Legitimate
Radio 1: Emulate      (rs – ce, rp – cc)              (r*s – ce, r*s – cc)
Radio 1: Legitimate   (rs, rp – cc)                   (r*s, r*s – cc)

Radio 1 = accept, Radio 2 = accept
                      Radio 2: Emulate                Radio 2: Legitimate
Radio 1: Emulate      (r*p, r*p)                      (rp, rs)
Radio 1: Legitimate   (rs, rp)                        (r*s, r*s)

Figure 3. The normal form two-way EA game, in which both radios' types are unknown to the other. In each cell, the first payoff is radio 1's, the second radio 2's.
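Because Fig. 3 fully specifies the payoffs, the dominance condition quoted above can be verified by brute force. The sketch below encodes radio 1's payoff in the two-way game (radio 2's follows by symmetry) and tests the condition under hypothetical revenue and cost values.

```python
def radio1_payoff(t1, t2, a1, a2, r_p, r_s, r_p_star, r_s_star, c_e, c_c):
    """Radio 1's payoff in the two-way game of Fig. 3. Types: 'challenge'
    radios always verify the other radio's claimed license; 'accept'
    radios never do. Actions: 'E' (emulate) or 'L' (legitimate)."""
    caught = a1 == "E" and t2 == "challenge"        # radio 1's EA is detected
    other_caught = a2 == "E" and t1 == "challenge"  # radio 2's EA is detected
    cost = (c_e if caught else 0.0) + (c_c if t1 == "challenge" else 0.0)
    role1 = "PU" if (a1 == "E" and not caught) else "SU"  # effective roles after
    role2 = "PU" if (a2 == "E" and not other_caught) else "SU"  # any detection
    if role1 == "PU":
        revenue = r_p_star if role2 == "PU" else r_p
    else:
        revenue = r_s_star if role2 == "SU" else r_s
    return revenue - cost

def legitimate_dominates(**p):
    """True if acting legitimately is (weakly) dominant for radio 1 against
    every opponent type and action; radio 2 follows by symmetry."""
    types, acts = ("challenge", "accept"), ("E", "L")
    return all(radio1_payoff(t1, t2, "L", a2, **p) >= radio1_payoff(t1, t2, "E", a2, **p)
               for t1 in types for t2 in types for a2 in acts)

# Hypothetical values. The condition from the text: r_s > r*_p and r*_s > r_p.
print(legitimate_dominates(r_p=4, r_s=6, r_p_star=2, r_s_star=5, c_e=1, c_c=0.5))  # True
print(legitimate_dominates(r_p=8, r_s=6, r_p_star=2, r_s_star=5, c_e=1, c_c=0.5))  # False
```

In the second call the lone-PU revenue exceeds the shared-SU revenue, so emulation pays when unchallenged and the legitimate strategy is no longer dominant.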
CONCLUSIONS
Opening up the spectrum to allow different licensees to coexist in a DSA network may encourage selfish activity. In this work we examine one such manifestation of selfish behavior, emulation attacks. Selfish radios may choose to emulate (and conceal their true type) or act legitimately from time to time, so as to increase the expected benefit they obtain from the spectrum, thereby giving rise to uncertainty in the system. We modeled the interactions between a selfish radio and a policy-abiding radio (the one-way EA game) and between two selfish radios (the two-way EA game) as Bayesian games. Both one-way and two-way EA games are games of imperfect information that admit BNE in pure and mixed strategies. The conditions that lead to these BNE can serve as guidelines for regulatory agencies and policymakers to ensure that the rate of EA in the system is kept arbitrarily small. Of particular interest to regulators, we find that the penalties levied against violators do not always have an effect on the BNE, and the belief that radios have regarding the type distribution will often dictate the BNE strategy. This implies that the perceived and actual distribution of types may be as important a parameter for regulatory agencies to control as the penalties.
REFERENCES
[1] R. Chen, J.-M. Park, and J. H. Reed, "Defense Against Primary User Emulation Attacks in Cognitive Radio Networks," IEEE JSAC, vol. 26, Jan. 2008, pp. 25–37.
[2] T. R. Newman and T. C. Clancy, "Security Threats to Cognitive Radio Signal Classifiers," Proc. Wireless@VT 2009, 2009.
[3] S. Anand, Z. Jin, and K. Subbalakshmi, "An Analytical Model for Primary User Emulation Attacks in Cognitive Radio Networks," Proc. IEEE DySPAN 2008, 2008.
[4] Z. Jin, S. Anand, and K. Subbalakshmi, "Detecting Primary User Emulation Attacks in Dynamic Spectrum Access Networks," Proc. IEEE ICC '09, June 2009, pp. 1–5.
[5] Y. Liu, C. Comaniciu, and H. Man, "A Bayesian Game Approach for Intrusion Detection in Wireless Ad Hoc Networks," Proc. GameNets '06, 2006.
[6] F. Li and J. Wu, "Hit and Run: A Bayesian Game Between Malicious and Regular Nodes in MANETs," Proc. IEEE SECON, 2008, pp. 432–40.
BIOGRAPHIES
RYAN W. THOMAS [M] is an assistant professor of computer engineering in the Department of Electrical and Computer Engineering at the Air Force Institute of Technology. He received his Ph.D. in computer engineering from Virginia Tech in 2007, his M.S. in computer engineering from the Air Force Institute of Technology in 2001, and his B.S. from Harvey Mudd College in 1999. He previously worked at the Air Force Research Laboratory, Sensors Directorate, as a digital antenna array engineer. His research focuses on the design, architecture, and evaluation of cognitive networks, cognitive radios, and software-defined radios.
RAMAKANT S. KOMALI [M] is currently with the Wireless Network Business Unit of Cisco Systems Inc. Prior to this, he was a postdoctoral researcher in the Department of Wireless Networks at RWTH Aachen University. He received his Ph.D. degree in electrical engineering from Virginia Polytechnic Institute and State University in August 2008. His research interests are in distributed resource allocation in wireless networks, topology control of cognitive radio networks, analysis, design, and optimization of wireless network protocols, and game theory.
BRETT J. BORGHETTI is an assistant professor of computer science at the Air Force Institute of Technology. He earned a Ph.D. in computer science in 2008 from the University of Minnesota, Twin Cities; an M.S. degree in computer systems in 1996 from AFIT in Dayton, Ohio; and a B.S. in electrical engineering in 1992 from Worcester Polytechnic Institute, Massachusetts. His research interests focus on artificial intelligence, multi-agent systems, game theory, and machine learning.
PETRI MÄHÖNEN [SM] (
[email protected]) is a full professor and head of the Institute for Networked Systems at RWTH Aachen University. He joined the faculty of RWTH in 2002 as Ericsson Chair of Wireless Networks. He has worked and studied in the United Kingdom, the United States, and Finland. His scientific interests include cognitive radio systems, networking and wireless communications, spatial statistics, and the analysis of complex networks. He and his group are active in both theoretical and experimental research topics. In 2006 he received the Telenor Research Prize. He is serving as an Associate Editor for IEEE Transactions on Mobile Computing and an Area Editor for the Journal of Computer Communications. He works as a scientific advisor or consultant for various international companies and research centers. He has been a chair or program committee member for numerous conferences and workshops. He was TPC Chair for IEEE DySPAN 2010, and serves as Co-General Chair for IEEE DySPAN 2011. He is a Senior Member of ACM and a Fellow of RAS.
TOPICS IN RADIO COMMUNICATIONS
Dynamic Spectrum Access Operational Parameters with Wireless Microphones
Tugba Erpek and Mark A. McHenry, Shared Spectrum Company
Andrew Stirling, Larkhill Consultancy Limited
ABSTRACT
This article provides a comprehensive analysis of dynamic spectrum access operational parameters in a typical hidden node scenario with protected wireless microphones in the TV white space. We consider all relevant effects and use an analysis framework that properly combines probabilistic technical factors to provide specific policy recommendations, including the exclusion zone distances and the sensing-based DSA threshold detection levels. First, man-made noise measurements were taken in different locations, and the amount of interference from man-made noise in potential wireless microphone channels was analyzed. Data collection results show that man-made noise levels can be up to 30 dB above the thermal noise floor. Furthermore, indoor-to-outdoor path loss measurements were conducted to determine the required exclusion distance for DSA-enabled TV band devices to ensure reliable wireless microphone operation in a typical application. The results show that the required exclusion zone can be safely and conservatively set at around 130 m when the results from man-made noise measurements and wireless microphone propagation measurements are used. Additionally, we developed a simulation to determine the required DSA sensing threshold levels for impairment-free wireless microphone operation. An indoor-to-outdoor path loss model was created based on the above path loss measurement results. This statistical path loss model was used to determine the received signal level at TV band devices and the interference level at the wireless microphone receiver. Our results show that the sensing threshold can be set at around –110 dBm (in a 110 kHz channel) for impairment-free wireless microphone operation when man-made noise and representative propagation models are used.
INTRODUCTION
The opening of white spaces in the broadcast television bands to new devices enabled by dynamic spectrum access (DSA) technology promises a range of economic and social benefits by enabling the use of spectrum that has lain either unused or underused.
REGULATORY PROGRESS
Across the world, regulators are becoming aware of the importance of opening up TV white spaces for license-exempt use. Regulators in the United States and United Kingdom have sought to enable these gains without impacting the operations of existing users: mainly TV broadcasters and protected wireless microphone users. In the United States, proceedings on TV white spaces are now concluded following the FCC's decisions [1] in November 2008 and September 2010. In the United Kingdom, Ofcom published proposals [2] in June 2009 for opening up TV white spaces to new applications. One of the areas regulators find particularly challenging is the determination of how to protect wireless microphones, which are well established in the TV bands and integral to the broadcast and entertainment industries. Data concerning real-world wireless microphone system performance is sparse, and operating practice is not well documented. Difficulty in obtaining data on wireless microphone use and inaccurate methods of combining statistical factors have led regulators toward an unnecessarily conservative approach.
THE MOST IMPORTANT DSA ISSUE
The most important current DSA issue is to help regulators develop spectrum access policies that provide a fair balance between interference to protected legacy spectrum users and practical implementation. The difficulties encountered in the FCC's testing of prototype TV white space devices in the rule-making process did not initially lead to practical DSA rules. This is especially true for sense-based rules for TV band devices, which are currently unnecessarily conservative in the United Kingdom (–126 dBm sensing threshold); these rules were recently modified in the United States (–107 dBm). (In the September 2010 decision, the FCC removed the sensing requirement for TV band devices that use the geolocation/database method of interference avoidance. Nevertheless, it left the door open for sensing-only devices and increased the minimum required detection threshold for wireless microphones from –114 dBm to –107 dBm.) The lack of reasonable spectrum access policies is likely to impede the application of sensing-based DSA unnecessarily, while regulatory trends are favoring DSA generally, especially geolocation-based DSA. There are DSA interference analyses in the literature. In [3] Dhillon et al. performed an interference analysis at a wireless microphone receiver with single and multiple interferers. They concluded that DSA devices have the
potential to cause some level of interference to wireless microphones, and that collaborative sensing will reduce the risk significantly. However, that paper does not consider many of the technical issues that regulators are concerned with, such as statistical multipath propagation and probabilistic antenna front-to-back ratios. Additionally, it does not provide specific DSA sense-based rule parameters such as required sensing threshold values. In [4] Gurney et al. argue that the geolocation method (dynamically updated databases) is better than spectrum sensing. Motorola also supports the use of geolocation databases by TV band devices [5]. Nevertheless, geolocation methods have multiple drawbacks, such as:
• The worst case propagation and wireless microphone temporal use assumptions that lead to low spectrum use
• The cost and limitations of maintaining and being connected to TV station location databases
In [6] Buchwald et al. and in [7] Yu-chun et al. propose a disabling beacon system design, which would protect wireless microphones from DSA operation. The beacon approach provides assured protection from TV band devices, but implementation is expensive since the system operator needs to purchase and deploy a beacon [8]. As a result, compared to the geolocation and beacon signal methods, the sensing-based method is the most suitable method to protect wireless microphones.
CONTRIBUTION OF THIS ARTICLE
In order to help regulators make more informed decisions in protecting wireless microphones from harmful interference, the results of our analysis provide specific technical performance parameter recommendations.
Geographic-based DSA rule: The minimum separation needed to avoid harmful interference between a wireless microphone system and a DSA-enabled TV band device operating in the same UHF channel. This separation defines the exclusion zone.
Sense-based DSA rule: The minimum level to which DSA-enabled TV band devices would have to sense to ensure that they avoid using an occupied channel. This is related to the exclusion zone since, by definition, there is no interference risk from white space devices that are outside the zone.
The rest of this article is structured as follows. The next section discusses the impact of man-made noise on wireless microphone operation. We then explain the geographic exclusion DSA method and the determination of reasonable exclusion distances. The sense-based DSA method and the determination of reasonable detection threshold values are then described. Finally, conclusions are given in the final section.
IMPACT OF MAN-MADE NOISE ON WIRELESS MICROPHONE OPERATION
DSA operation should impact the operation of wireless microphones by an amount that is less than but comparable to the performance limitations due to noise. Man-made noise is often
the dominant noise source, but it is rarely considered in DSA analyses. This section develops wireless microphone performance estimates in the presence of man-made noise. As part of this study, noise and interference measurements were conducted in a range of locations, including private dwellings and public venues, in the Tysons Corner area of northern Virginia in April 2009.
A BRIEF LEXICON OF NOISE
Noise forms a backdrop to wireless communications, determining the lowest signal level that can be received (i.e., the receiver sensitivity). There are two key sources of noise: thermal noise and receiver noise. Thermal noise (also known as Johnson-Nyquist noise) is generated in electrical conductors at the radio frequency input of the receiver. These conductors include the antenna and any lead connecting it to the receiver. The receiver also generates noise, further limiting its sensitivity. This latter component of noise, quantified in the receiver's noise figure, is a function of the nature and configuration of the components in its RF input stage. This article brackets thermal noise and receiver noise together, and refers to the combination as the reception noise floor. In addition to noise arising in the receiver, there may be signals arising from external sources, which the receiver can detect. These may be either wanted signals, from which the receiver can extract useful information, or unwanted signals, which impair the receiver's ability to recover the wanted signal. The unwanted signals are often referred to collectively as interference or man-made noise. In the case of wireless microphone operation in the TV bands, common examples of unwanted signals include signals from other wireless microphones operating in the vicinity and television transmissions. Harmful interference is interference that seriously degrades, obstructs, or repeatedly interrupts a protected service. TV white space devices are also a potential source of interference if operating in the same channel and sufficiently close to the microphone receiver. The enabling regulatory framework for TV white space devices includes measures to protect wireless microphone operations from harmful interference, which need to be based on a solid understanding of the interference risk new DSA-enabled devices pose. In this article the risk of serious degradation to wireless microphone operation is gauged by determining the carrier-to-interference-and-noise ratio (CINR). CINR is the ratio between the wanted signal (referred to as the carrier) and an aggregated unwanted signal, in which man-made noise, receiver noise, and thermal noise are all included. The resulting value can be compared directly with the minimum value of CINR needed by a wireless microphone to ensure reliable operation. The minimum value of CINR is not published, but manufacturers indicate that 25 dB is a representative figure, appearing in the ERA report for Ofcom on cognitive access [9].
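As a small illustration of this bookkeeping, the sketch below aggregates the impairments in the linear (milliwatt) domain before forming the CINR; all signal levels are hypothetical.

```python
import math

def dbm_to_mw(dbm):
    return 10 ** (dbm / 10.0)

def mw_to_dbm(mw):
    return 10.0 * math.log10(mw)

def cinr_db(carrier_dbm, impairment_dbm_list):
    """CINR: carrier over the power sum of man-made noise, receiver noise,
    and thermal noise, aggregated in the linear domain."""
    total_mw = sum(dbm_to_mw(p) for p in impairment_dbm_list)
    return carrier_dbm - mw_to_dbm(total_mw)

# Hypothetical example: a -60 dBm carrier against a -115 dBm reception
# noise floor plus a -95 dBm man-made noise sample.
cinr = cinr_db(-60.0, [-115.0, -95.0])
print(f"CINR = {cinr:.1f} dB -> {'reliable' if cinr >= 25.0 else 'impaired'}")
```

Here the man-made noise sample dominates the floor, yet the 25 dB minimum is still met; a weaker carrier or a stronger noise spike would tip the result to impaired.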
Figure 1. Typical man-made noise contains discrete spectral and temporal features that are significantly different from Gaussian noise. (Measurement at Church Loc3, antenna height 61 inches, 16-Apr-2009; power in dBm vs. frequency, 502–505 MHz, and time, 0–150 ms.)
ESTABLISHING THE LEVEL OF MAN-MADE NOISE
Regulatory analyses of wireless microphone protection requirements thus far have largely assumed that the noise floor at the wireless microphone receiver is equal to the reception noise floor. Little account has been taken of potential sources of interference other than white space devices. Sources of man-made noise include television stations, electrical equipment in homes, offices, and factories, and so on. Studio and stage environments have their own sources of noise, particularly other wireless microphones, as well as lighting systems, lifting machinery, and so forth. The characteristics of man-made noise are well understood. Indeed, the ITU has long-established guidelines on the typical levels of man-made noise that are to be expected in a range of different locations. However, the ITU Recommendation provides only mean levels, which are insufficient to determine the risk of interference to wireless microphones. It is the peak levels of such noise that cause problems, because the human ear is sensitive to even brief interruptions or artifacts in an audio signal. Therefore, we made noise measurements that took the noise's temporal variation into account.
MAN-MADE NOISE MEASUREMENTS
All of the man-made noise measurements were made at ground level. It is possible that wireless microphones were sometimes used in our experimental area. We made measurements of wireless microphone signals using the same equipment, and then we visually compared our noise measurements to ensure that no wireless microphone signals were present. It is critical to select unoccupied test frequen-
cies to make the man-made noise measurements. We used TV channels 16, 19, 21, 28, 37, 53, 56, 64, 65, and 69, which are unoccupied in the Tysons Corner, Virginia area. This was verified using rooftop antenna measurements on top of a 10-story office building. Measurements were made in 150 potential wireless microphone channels (200 kHz bandwidth), distributed over the 10 vacant UHF TV channels, at a number of measurement points in each location. The first step in the measurement process was to determine the reception noise floor of the measurement system, considering conducted and non-conducted emissions. The reception noise floor of the measurement equipment (using a 200 kHz bandwidth) was confirmed as –110 dBm, given a theoretical thermal noise floor value of –121 dBm and an equipment noise figure of 11 dB. Any signal value above this level was interpreted as man-made noise. An example man-made noise measurement is shown in Fig. 1. Man-made signals contain large temporal and spectral features and are not Gaussian noise in character. The carrier-to-reception-noise-ratio (CRNR) values given next in this article refer to a reception noise floor of:
• –115 dBm, in the man-made noise impact sections, corresponding to a noise figure of 6 dB and a channel bandwidth of 200 kHz
• –117.5 dBm, in the exclusion zone and sensing threshold sections, corresponding to a noise figure of 6 dB and a channel bandwidth of 110 kHz
These bandwidth and noise figures were chosen to allow comparison with the results of ERA's analysis for Ofcom [9]. A sample noise power distribution plot from the single-family house measurements is shown in Fig. 2. It can be seen from the figure that man-made noise can range up to 30 dB above the thermal noise floor. The thermal noise floor here is –115 dBm, using a bandwidth of 200 kHz and a measurement system noise figure of 6 dB.
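These noise floor figures follow from the standard thermal noise formula, as the short check below shows; it reproduces the values quoted in the text.

```python
import math

def reception_noise_floor_dbm(bandwidth_hz, noise_figure_db):
    """Thermal noise density (-174 dBm/Hz) integrated over the bandwidth,
    plus the equipment noise figure."""
    return -174.0 + 10.0 * math.log10(bandwidth_hz) + noise_figure_db

print(round(reception_noise_floor_dbm(200e3, 0.0)))     # -121 dBm: thermal floor, 200 kHz
print(round(reception_noise_floor_dbm(200e3, 11.0)))    # -110 dBm: measurement equipment
print(round(reception_noise_floor_dbm(200e3, 6.0)))     # -115 dBm: microphone receiver
print(round(reception_noise_floor_dbm(110e3, 6.0), 1))  # -117.6 dBm (quoted as -117.5)
```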
ASSESSING THE IMPACT OF MAN-MADE NOISE
In order to assess the potential impact of man-made noise, we considered each of the noise level samples taken, per location. For each of a range of wanted signal (carrier) levels at the wireless microphone receiver and each measurement sample, we calculated the CINR. If the result was greater than 25 dB, it was deemed that the noise level was subcritical; thus, microphone operation would not have been impaired in that particular channel at that time and place. The results of the calculation across the measurement sample base for each location are summarized in Table 1:
• The left column of the table indicates the signal level at the receiver in terms of its ratio to the thermal noise floor (i.e., CRNR). The adjacent column, on the right, shows the absolute signal power level received by the wireless microphone receiver, obtained by adding the thermal noise floor level (–115 dBm, i.e., the receiver noise figure of 6 dB added to –174 dBm/Hz integrated over 200 kHz) to the CRNR value in the first column.
• Each of the remaining cells in each row gives a score for each location, at the given CRNR value, corresponding to the ratio of the number of samples in which the noise level was found to be subcritical to the total number of noise level samples taken in that location. For example, if, in 99 of 100 samples, the noise level was found to be subcritical, the impairment-free score would have been 99 percent.
• The right column of the table shows an impairment-free score calculated from samples aggregated across all the locations used.
It may be observed in Table 1 that a CRNR of around 60 dB is needed to ensure impairment-free scores of 100 percent in all locations. Given that wireless microphones require a minimum CINR of 25 dB, this implies a man-made noise increment of around 35 dB (60 dB – 25 dB). It is worth remembering that the noise measurements described above were made in suburban areas. Undoubtedly the man-made noise levels in urban and metropolitan venues are even greater.
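The scoring procedure behind Table 1 can be sketched as follows; the noise samples here are hypothetical placeholders for the measured data.

```python
import math

def dbm_to_mw(p):
    return 10 ** (p / 10.0)

def impairment_free_score(carrier_dbm, noise_samples_dbm,
                          floor_dbm=-115.0, min_cinr_db=25.0):
    """Percentage of measured noise samples for which the microphone's CINR
    (carrier over man-made noise plus reception noise floor) stays at or
    above the 25 dB minimum."""
    ok = 0
    for n in noise_samples_dbm:
        total_dbm = 10 * math.log10(dbm_to_mw(n) + dbm_to_mw(floor_dbm))
        if carrier_dbm - total_dbm >= min_cinr_db:
            ok += 1
    return 100.0 * ok / len(noise_samples_dbm)

# Hypothetical noise samples (dBm); CRNR = 40 dB puts the carrier at -74.9 dBm.
samples = [-112, -105, -98, -92, -86]
print(impairment_free_score(-74.9, samples))  # 40.0 (two of five subcritical)
```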
Figure 2. Noise level distribution from measurements taken inside a single-family house. (Number of cases vs. noise power, –115 to –80 dBm.)
USERS COMPENSATE FOR MAN-MADE NOISE BY ENSURING HIGHER RECEIVED SIGNAL LEVELS
To compensate for the relatively high level of man-made noise experienced at most major venues, wireless microphone users need to ensure that received signal levels are much greater than would be needed if thermal noise were the only consideration. These augmented signal levels, achieved by minimizing the distance between microphone and receiver, allow wireless microphone systems to tolerate much higher TV band device signal levels than regulators have so far assumed.
GEOGRAPHIC EXCLUSION ZONE DSA METHOD
In order to estimate the required separation of a DSA-enabled TV band device from a wireless microphone receiver using the same channel, it is necessary to be able to predict the propagation loss on a path between the two devices. The measurement process, described below, enabled us to compile a database of propagation values over a range of distances when the transmitter is inside a building and the receiver is outdoors.
Table 1. The potential impact of man-made noise on wireless microphone operation. Each location column gives the proportion of samples for which microphone operation would not have been impaired.

Wireless microphone CRNR (dB) | Received power level (dBm) | Indoor venue parking lots | Inside single-family house | Inside condo | Wolf Trap parking lot* | All locations
10 | –104.9 | 0% | 0% | 0% | 0% | 0%
20 | –94.9 | 0% | 0% | 0% | 0% | 0%
30 | –84.9 | 0% | 0% | 0% | 0% | 0%
40 | –74.9 | 98.3% | 91.8% | 84.2% | 99.1% | 91.5%
50 | –64.9 | 100% | 99.3% | 99.5% | 100% | 99.6%
60 | –54.9 | 100% | 100% | 100% | 100% | 100%
70 | –44.9 | 100% | 100% | 100% | 100% | 100%

* Wolf Trap is a public venue, which is known as a normally quiet location.
Figure 3. Required TV band device exclusion zone size for a given wireless microphone signal level. (Measured propagation loss in dB vs. distance in m, up to 2500 m, for CINRmin = 25 dB and fc = 556.36 MHz; at CRNR = 60 dB the required-loss line yields a 131 m exclusion distance.)
PROPAGATION LOSS MEASUREMENTS
We carried out measurements of propagation loss for 4094 possible TV band device to wireless microphone receiver separation distances, ranging up to 2.7 km. These were made at three indoor public venues, which were not in use at the time and whose wireless microphone systems had been switched off. An indoor test transmitter with an emission power of 20 dBm at 556.36 MHz was collocated with each venue's wireless microphone receiver, and coupled to an omnidirectional antenna. A test receiver, mounted in a van, measured the signal strength at a large number of locations around the outside of the venue. The outdoor receiver was linked to an omnidirectional antenna, mounted on the roof of the van, with its height matched to that of the test transmitter (2 m above the ground). This elevated location for the receiver antenna means that the measurement results understate the likely propagation loss suffered by a signal from a real TV band device, leading to a conservative exclusion zone estimate.
ESTIMATING THE REQUIRED EXCLUSION ZONE SIZE
Using the propagation loss measurements acquired through the process described above, we were able to estimate the required separation of a DSA-enabled TV band device and a wireless microphone receiver. For each of a range of signal (carrier) levels at the wireless microphone receiver, it was possible to calculate the propagation loss required to prevent interference from a TV band device. The calculation assumed a TV band device transmission power density of 4.4 dBm in a 110 kHz channel (equivalent to 20 dBm in a 4 MHz channel). The TV band device was deemed not to
cause interference in a particular position when the CINR remained above 25 dB at the wireless microphone receiver. Since the measurements from each of the indoor venues were similar, it was reasonable to combine them into a single data set consisting of a grid of 1 dB by 1 m buckets. Figure 3 shows the scatter plot for the measured path loss data, overlaid with a red horizontal line showing the propagation loss required between the TV band device and the wireless microphone receiver to ensure a CINR (for the wanted signal) greater than 25 dB when the wireless microphone signal level (CRNR) at the receiver is 60 dB (i.e., a wanted signal carrier level of greater than –54.9 dBm; see Table 1). The matching vertical red line indicates the distance beyond which all data points had a propagation loss equal to or greater than the minimum needed (i.e., all data points fell below the horizontal line). This provides the most conservative (largest) estimate of the size required for the exclusion zone. The reception noise floor used in this analysis is –117.5 dBm, calculated using a bandwidth of 110 kHz and assuming a receiver noise figure of 6 dB. At the microphone signal level illustrated by the horizontal red line (–57.5 dBm, corresponding to CRNR = 60 dB), the required propagation loss to avoid impairing microphone operation can be seen from Fig. 3 to be around –87 dB. This minimum value of propagation loss can be seen from the figure to have been achieved at all possible values of distance greater than that marked by the vertical red line, which can therefore safely be chosen as the boundary of the exclusion zone. The results of estimating exclusion zone size for a range of CRNR values are summarized in Table 2. The proportion of measurements made at distances greater than or equal to the chosen exclusion zone size that meet the minimum propagation loss requirement is referred to here as the impairment-free score. It corresponds to the percentage of positions outside the chosen exclusion zone at which a DSA-enabled TV band device would not have impaired microphone operation when operating on the same channel. A score of 100 percent means that a TV band device operating on the same channel as the wireless microphone would not cause interference when located anywhere outside the exclusion zone. The estimated exclusion zone sizes (radii) corresponding to each of a range of received microphone signal levels are presented in Table 2. The rightmost two columns of the table show how the exclusion zone could be contracted if lower levels of impairment risk were tolerable. Since man-made noise is significantly higher than thermal noise in areas where wireless microphones are used, such systems are evidently deployed with a much higher received signal than would be justified from assuming only that thermal noise applied. It is estimated that the received signal level used is typically in excess of 60 dB above the reception noise floor (i.e., CRNR = 60 dB). Consulting Table 2, at this received signal level, a requirement for 100 percent impairment-free operation given these mea-
surements leads to an exclusion zone for DSAenabled TV band devices of radius 131 m. The reception noise floor used here is –117.5 dBm, calculated assuming a microphone receiver bandwidth of 110 kHz and a receiver noise figure of 6 dB, to be comparable with the values used in ERA’s report on cognitive access for Ofcom [9].
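The exclusion zone estimate amounts to a simple scan over the measured (distance, loss) scatter: find the largest distance at which the required loss is still violated. A minimal sketch, with a hypothetical data set standing in for the measurement database:

```python
def exclusion_radius(measurements, required_loss_db):
    """Smallest distance beyond which every measured point shows at least
    the required propagation loss. 'measurements' is a list of
    (distance_m, loss_db) pairs with loss as a positive dB value."""
    violating = [d for d, loss in measurements if loss < required_loss_db]
    return max(violating) if violating else 0.0

# Hypothetical scatter: loss generally grows with distance, with spread.
data = [(50, 70), (80, 75), (120, 84), (130, 86), (200, 95), (400, 105)]
print(exclusion_radius(data, required_loss_db=87.0))  # 130: zone boundary just past 130 m
```

Applied to the real data set of Fig. 3 with the roughly 87 dB loss required at CRNR = 60 dB, this scan is what produces the article's 131 m figure.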
Table 2. Estimates of required exclusion zone size for the data set in Fig. 3 (TV band device transmission power of 20 dBm into 4 MHz). The three rightmost columns give the exclusion zone radius in meters at each impairment-free microphone operation score.

Wireless microphone CRNR (dB) | Received power level (dBm) | 100% | 99.9% | 99%
30 | –87.5 | 732 | 513 | 280
40 | –77.5 | 304 | 246 | 132
50 | –67.5 | 187 | 131 | 64
60 | –57.5 | 131 | 82 | <50
70 | –47.5 | 81 | 52 | 51

SENSE-BASED DSA METHOD
The previous section established that taking man-made noise and realistic propagation losses into account provides significant scope for limiting the exclusion zone. Regulators in the United States and United Kingdom seem to prefer that DSA-enabled TV band devices find vacant UHF channels through geolocation, whereby the devices look up which channels are vacant at their position in a database. However, DSA-enabled devices may also rely on spectrum sensing in order to check that a channel is free. For these devices, it is important that sensing thresholds are sufficiently low to protect microphones (and TV reception), but not so low as to make TV band devices unnecessarily costly, difficult to produce, and liable to detect unoccupied channels as occupied. In this section we consider how these factors impact the sensing threshold requirement. In general, the sense-based DSA method is able to estimate the link loss between the DSA-enabled transmitter and the protected transceiver by measuring the received power level from the protected transceiver and knowing its transmit power level. Estimating this link loss enables the DSA-enabled radio to adjust its transmit power level (or to decide whether or not to transmit) to avoid causing unwanted interference to the protected transceiver. In the wireless microphone situation, a protected wireless microphone receiver does not transmit a signal; hence, the receiver is a hidden node. The DSA-enabled TV band device estimates the minimum likely link loss between it and the wireless microphone receiver (L3) by measuring the wireless microphone to TV band device link loss (L2). The DSA-enabled TV band device continually measures the received signal level from the wireless microphone transmitter. If the received signal level is above the sensing threshold, the TV band device does not transmit on the same channel. Because of the hidden node problem, the risk of interference to a protected wireless microphone is a complex statistical function of the sensing threshold value.

SIMULATION DESCRIPTION
To establish a relationship between impairment-free wireless microphone operation and the value chosen for the sensing threshold, around one million randomly chosen possible combinations of wireless microphone, wireless microphone receiver, and DSA-enabled TV band device positions were considered, using the propagation loss data gathered as described above. For each position combination, a calculation was made of whether or not microphone operation might have been impaired. In order to generate the large number of possible position combinations required, we used a Monte Carlo simulation. Fixing the wireless microphone receiver at the center, the simulation generated one million different combinations of wireless microphone (transmitter) and TV band device positions over an area of 1 km2. The basis for the simulation was as follows:
• The wireless microphone receiver was positioned at the center of the grid.
• The wireless microphone was limited to positions within a 100 m square subset of the 1 km2 grid.
• The TV band device was allowed to range anywhere within the 1 km × 1 km grid.
• For each point in the simulation, a propagation loss value was chosen at random from the values measured earlier for the given distance between wireless microphone and TV band device.
• In 5 percent of the points, the propagation loss was increased by 20 dB to account for body loss (amounting to 50,000 of the 1 million simulated cases).
• The wireless microphone transmission power was taken as 14.8 dBm, with a system bandwidth of 110 kHz and a noise figure of 6 dB used for the wireless microphone receiver, yielding a reception noise floor of –117 dBm.
• The TV band device's transmission power was taken as 20 dBm (100 mW EIRP) within a transmission bandwidth of 4 MHz, amounting to 4.4 dBm in a 110 kHz channel.
The propagation loss model used in the simulation drew directly on the measurements described in the previous section. It was applied to transmissions between wireless microphone and TV band device as well as between TV band device and wireless microphone receiver, using the assumption that the model was applicable to all paths ending within the central 100 m square zone allowed for microphone roaming in the simulation. No path loss assumptions were made to calculate the received signal level at the wireless microphone receiver from the wireless microphone transmitter.
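A compressed version of this Monte Carlo procedure might look like the following sketch. The numeric settings follow the bullets above, except the propagation loss sampler, which is an assumed log-distance model standing in for the measured database, and the sensing threshold, which is taken from Table 3 for CRNR = 60 dB.

```python
import math
import random

random.seed(1)
MIC_TX_DBM = 14.8             # wireless microphone transmit power
TVBD_TX_DBM = 4.4             # TV band device power in a 110 kHz channel
NOISE_FLOOR_DBM = -117.0      # reception noise floor from the bullets
MIN_CINR_DB = 25.0
SENSING_THRESHOLD_DBM = -111.0
CRNR_DB = 60.0                # wanted-signal level at the mic receiver

def sampled_loss_db(distance_m):
    # Stand-in for the measured propagation loss database: log-distance
    # mean with random spread (assumed model, not the article's data).
    return 40.0 + 25.0 * math.log10(max(distance_m, 1.0)) + random.gauss(0, 8)

impaired, TRIALS = 0, 100_000
for _ in range(TRIALS):
    mic = (random.uniform(-50, 50), random.uniform(-50, 50))        # 100 m square
    tvbd = (random.uniform(-500, 500), random.uniform(-500, 500))   # 1 km grid
    body_loss = 20.0 if random.random() < 0.05 else 0.0             # 5% of cases
    sensed_dbm = MIC_TX_DBM - sampled_loss_db(math.dist(mic, tvbd)) - body_loss
    if sensed_dbm >= SENSING_THRESHOLD_DBM:
        continue                     # TV band device detects the mic and defers
    interference_dbm = TVBD_TX_DBM - sampled_loss_db(math.dist(tvbd, (0.0, 0.0)))
    carrier_dbm = NOISE_FLOOR_DBM + CRNR_DB
    i_plus_n = 10 * math.log10(10 ** (interference_dbm / 10)
                               + 10 ** (NOISE_FLOOR_DBM / 10))
    if carrier_dbm - i_plus_n < MIN_CINR_DB:
        impaired += 1
print(f"impairment rate: {100.0 * impaired / TRIALS:.3f}%")
```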
Table 3. Estimated sensing threshold values for DSA-enabled TV band devices (with TV band device transmission power of 20 dBm into a 4 MHz channel). The three rightmost columns give the DSA device detection threshold (dBm) required for impairment-free microphone operation scores of 100%, 99.9%, and 99%.

Wireless microphone CRNR (dB) | Received power level (dBm) | 100% | 99.9% | 99%
30 | –87.5 | –144 | –144 | –141
40 | –77.5 | –133 | –122 | –98
50 | –67.5 | –119 | –101 | –84
60 | –57.5 | –111 | –85 | –69
70 | –47.5 | –104 | –71 | >–60
No path loss assumptions were made to calculate the received signal level at the wireless microphone receiver from the wireless microphone transmitter; instead, the CRNR values used in Tables 1 and 2 (30 dB, 40 dB, etc.) were used to set the received signal level at the wireless microphone receiver in the absence of TV band devices. For each synthesized position combination generated by the simulation, the distances between the DSA-enabled TV band device and the wireless microphone transmitter, and between the TV band device and the wireless microphone receiver, were calculated, and the corresponding propagation loss values were retrieved from the propagation loss measurement base. Since the measurement base included a number of possible propagation loss values for each value of distance, the particular value retrieved by the simulation was chosen at random from the set of applicable values for the distance in question. In 5 percent of cases, 20 dB was added to the propagation loss to simulate the effect of body absorption [10]. At the end of the simulation, sensing threshold vs. failure rate plots were created for each CRNR level.
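For readers who want the flavor of this procedure in code, the sketch below follows the steps just described. It is our reconstruction, not the authors' simulation: the propagation-loss lookup (loss_samples_db) is a hypothetical stand-in for the measurement base, and the impairment criterion (required_cinr_db) is an assumed parameter rather than the paper's exact rule.

```python
import math
import random

MIC_TX_DBM = 14.8         # wireless microphone transmit power
TVBD_TX_DBM = 4.4         # TV band device power falling in a 110 kHz channel
NOISE_FLOOR_DBM = -117.0  # 110 kHz bandwidth, 6 dB noise figure
BODY_LOSS_DB = 20.0       # extra loss applied in 5 percent of points

def loss_samples_db(distance_m):
    """Hypothetical stand-in: measured loss values (dB) at this distance."""
    raise NotImplementedError("populate from the propagation measurement base")

def draw_loss_db(distance_m):
    loss = random.choice(loss_samples_db(distance_m))
    if random.random() < 0.05:       # body absorption in 5% of cases
        loss += BODY_LOSS_DB
    return loss

def trial_impaired(crnr_db, threshold_dbm, required_cinr_db):
    """One Monte Carlo point: True if microphone operation is impaired."""
    rx = (500.0, 500.0)                                          # receiver at center
    mic = (random.uniform(450, 550), random.uniform(450, 550))   # 100 m square
    tvbd = (random.uniform(0, 1000), random.uniform(0, 1000))    # 1 km square

    # Sensing step: if the TVBD hears the microphone, it vacates the channel.
    sensed_dbm = MIC_TX_DBM - draw_loss_db(math.dist(mic, tvbd))
    if sensed_dbm >= threshold_dbm:
        return False

    # Otherwise the TVBD transmits; add its interference to the noise floor.
    interf_dbm = TVBD_TX_DBM - draw_loss_db(math.dist(rx, tvbd))
    n_plus_i_dbm = 10 * math.log10(10 ** (NOISE_FLOOR_DBM / 10)
                                   + 10 ** (interf_dbm / 10))
    wanted_dbm = NOISE_FLOOR_DBM + crnr_db   # wanted level implied by the CRNR
    return wanted_dbm - n_plus_i_dbm < required_cinr_db

def impairment_rate(crnr_db, threshold_dbm, required_cinr_db, n=1_000_000):
    return sum(trial_impaired(crnr_db, threshold_dbm, required_cinr_db)
               for _ in range(n)) / n
```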
ESTIMATING THE DSA SENSING THRESHOLD
The results of the simulation are presented in Table 3, with estimated sensing thresholds corresponding to a range of possible wanted signal levels at the microphone receiver. In the third column, the sensing threshold value given ensures 100 percent impairment-free operation — meaning that in all the cases in the simulation, either the wireless microphone signal was detected by the DSA-enabled TV band device, or the TV band device was sufficiently separated from the wireless microphone receiver for its transmissions not to impair microphone operation. For example, if the CRNR equaled 60 dB, a sensing threshold of –111 dBm for the DSA-enabled TV band device would have been sufficient to protect wireless microphone operation from impairment when both were using the same microphone channel: either the TV band device would have detected the wireless microphone at the specified threshold and moved to an alternative channel, or its signal would have been sufficiently attenuated at the wireless microphone receiver. The two rightmost columns of Table 3 show how the required sensing threshold could be relaxed if a small impairment risk to microphone operation were tolerable. The reception noise floor used as the reference for CRNR here is –117.5 dBm, calculated assuming a bandwidth of 110 kHz and an equipment noise figure of 6 dB, comparable with ERA's analysis for Ofcom [9].
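As a quick sanity check (our arithmetic, not the paper's), that noise floor follows from the standard thermal-noise formula kT + 10 log10(B) + NF:

```python
import math

kT_DBM_PER_HZ = -174.0   # thermal noise density at room temperature
BANDWIDTH_HZ = 110e3     # wireless microphone system bandwidth
NOISE_FIGURE_DB = 6.0    # assumed receiver noise figure

noise_floor_dbm = kT_DBM_PER_HZ + 10 * math.log10(BANDWIDTH_HZ) + NOISE_FIGURE_DB
print(round(noise_floor_dbm, 1))   # -117.6, matching the ~-117.5 dBm cited above
```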
CONCLUSIONS
The conclusions of this study are as follows.
MAN-MADE NOISE
The effects of man-made noise have not been properly taken into account in protection analyses to date. The reception noise floor has been considered while determining the maximum allowable interference-to-noise ratio for DSA-enabled TV band devices. However, many other devices impact wireless microphone operation more than TV band devices do, and this study shows that man-made noise is one of the dominant factors interfering with wireless microphone operation. Our measurements show that the peak man-made noise level can reach 30 dB above the reception noise floor. As a result, wireless microphones have to have high CRNR values (>60 dB) in order to operate reliably. Since wireless microphones therefore operate with high signal margins, interference from DSA-enabled TV band devices will be negligible. Man-made noise levels should be taken into consideration while determining the requirements for DSA operation. A reasonable requirement would be for DSA-enabled devices to raise the noise level by more than 10 dB no more than 3 percent of the time.
DSA EXCLUSION DISTANCE METHOD
We conducted propagation loss measurements in suburban areas to determine the required exclusion distance for DSA-enabled TV band devices. When man-made noise and representative propagation models are used, the required exclusion zone can be safely and conservatively set at around 130 m. (For comparison, the FCC recently modified its rules for TV band devices to reduce the exclusion distance from 1 km to 400 m for devices operating at 100 mW EIRP or less transmit power [1b, paragraph 61].)
DSA SENSING METHOD
The sensing threshold depends on statistical parameters. DSA-enabled TV band devices can measure the path loss between themselves and a protected wireless microphone transmitter, but they cannot measure the path loss between themselves and a wireless microphone receiver. As a result, there is a hidden node factor in the wireless microphone sensing threshold calculations. Furthermore, multipath, blockage, and body loss factors make the detection of wireless microphone signals more difficult. All probabilistic parameters should be considered together while calculating the required sensing threshold level, instead of combining the worst case of each individual factor. On the other hand, wireless microphone receivers always experience high interference levels because of man-made
noise, co-channel wireless microphone signals, and broadcast TV signals. Considering that wireless microphones have to have high signal margins in order to operate properly, when man-made noise and representative propagation models are used the required sensing threshold can be set at around –110 dBm (in a 110 kHz channel). (For comparison, the FCC recently modified its rules for sensing-only TV band devices to increase the minimum required detection threshold for wireless microphones from –114 dBm to –107 dBm, averaged over a 200 kHz bandwidth [1b, paragraph 35 and Appendix B at 15.717(c)].)
REFERENCES
[1] FCC, "Unlicensed Operation in the TV Broadcast Bands": a) Second Report and Order and Memorandum Opinion and Order, 23 FCC Rcd 16807, Nov. 2008; b) Second Memorandum Opinion and Order, FCC 10-174, Sept. 23, 2010.
[2] Ofcom, "Digital Dividend: Cognitive Access, Statement on Licence-Exempting Cognitive Devices Using Interleaved Spectrum," July 2009.
[3] R. S. Dhillon and T. X. Brown, "Models for Analyzing Cognitive Radio Interference to Wireless Microphones in TV Bands," IEEE DySPAN, 2008.
[4] D. Gurney et al., "Geo-Location Database Techniques for Incumbent Protection in the TV White Space," IEEE DySPAN, 2008.
[5] "Motorola FCC Ex-Parte Filing," Oct. 2007, pp. 2–10.
[6] G. J. Buchwald et al., "The Design and Operation of the IEEE 802.22.1 Disabling Beacon for the Protection of TV Whitespace Incumbents," IEEE DySPAN, 2008.
[7] W. Yu-chun, W. Haiguang, and P. Zhang, "Protection of Wireless Microphones in IEEE 802.22 Cognitive Radio Networks," IEEE ICC, 2009.
[8] A. Hartman and E. Reihl, "Mitigating the Effects of Unlicensed Devices on Wireless Microphones," Shure Inc., presentation, Nov. 2005.
[9] ERA, "Report for Ofcom on Cognitive Access," Jan. 2009.
[10] COBHAM Technical Services, "Analysis of PMSE Wireless Microphone Body Loss Effects," ERA Technology Report, p. 18, 2009.
ADDITIONAL READING
[1] Mass Consultants Limited, "Man-Made Noise Measurement Programme Final Report," no. 2, Sept. 2003, p. 58.
[2] Red-M, "Predicting Path Loss between Terminals of Low Height," Final Report, Feb. 26, 2007, Fig. 23(a), p. 40.
BIOGRAPHIES
TUGBA ERPEK ([email protected]) has been employed by Shared Spectrum Company as a systems engineer since June 2007. Her areas of interest include DSA system analysis, predicting interference statistics, and performance prediction; development and testing of DSA detectors for the TV spectrum band; man-made noise spectrum measurements; propagation loss model creation and validation measurements; and spectrum occupancy data collection and analysis. She earned an M.S. degree in electrical engineering from George Mason University (GMU) with a specialization in digital signal processing and wireless communications. She worked as a research assistant in the Network Architecture and Performance Laboratory at
GMU. Her research topics included dynamic spectrum sharing in wireless networks. She graduated with a B.S. degree in electrical engineering from Osmangazi University, Eskisehir, Turkey.

MARK A. MCHENRY ([email protected]) has experience in military and commercial communication systems design, including research on the next generation of advanced wireless networks, the development of high dynamic range multi-band transceivers, and the founding of two high-tech companies concentrating on automated spectrum management technology. In 2000 he founded Shared Spectrum Company (SSC), which is developing automated spectrum sharing technology. Shared Spectrum Company develops advanced technologies for government and industry customers with challenging radio frequency and networking needs. He was also a co-founder of San Diego Research Center (acquired by Argon ST), a wireless research and development company supporting the test and training community. Previously he was a program manager at the Defense Advanced Research Projects Agency (DARPA), where he managed multiple programs. He has worked as an engineer at SRI International, Northrop Advanced Systems, McDonnell Douglas Astronautics, Hughes Aircraft, and Ford Aerospace. He received the Office of the Secretary of Defense Award for Outstanding Achievement in 1997 and the Office of the Secretary of Defense Exceptional Public Service Award in 2000. He has multiple RF-technology-related patents. He graduated with a B.S. in engineering and applied science from the California Institute of Technology in 1980. He received an M.S. in electrical engineering from the University of Colorado and a Ph.D. in electrical engineering from Stanford University. He was named Engineer of the Year by the DC Council of Engineering and Architectural Societies in February 2006. He was appointed by Secretary of Commerce Carlos Gutierrez to serve as a member of the Commerce Spectrum Advisory Committee in December 2006 and 2008.

ANDREW STIRLING ([email protected]) founded Larkhill Consultancy Limited in 2005 to assist companies developing business around new digital distribution technologies, providing regulatory and strategy input. After studying physics at Imperial College, he started his career at BBC R&D developing digital content production and distribution technology. As systems group manager at a Philips/Panasonic JV he developed AV networking technology, which he helped Mercedes-Benz and its key suppliers engineer into mass-production cars. He is named as inventor on a number of patents related to local area communications. He joined consultants Cambridge Consultants/Arthur D. Little as a manager, where he assisted investors in evaluating new businesses and helped multinational players develop digital strategies. As strategy manager at ITC/Ofcom he developed digital TV switchover and related spectrum policy, authoring a landmark report with the BBC for the U.K. government. Larkhill's clients have included BT, the BBC, Dell, Microsoft, and the U.K. government. On behalf of Microsoft, Larkhill facilitates an informal coalition of major companies interested in exploiting the TV white spaces, focusing on the United Kingdom and Europe. He sits on the External Advisory Board for the EU QoSMoS project, which is looking at how Europe can best exploit the opportunities from secondary spectrum access.
TOPICS IN RADIO COMMUNICATIONS
The Viability of Spectrum Trading Markets
Carlos E. Caicedo, Syracuse University
Martin B. H. Weiss, University of Pittsburgh
ABSTRACT
Spectrum trading markets are of growing interest to many spectrum management agencies. They are motivated by their desire to increase the use of market-based mechanisms for spectrum management and reduce their emphasis on command and control methods. Despite the liberalization of regulations on spectrum trading in some countries, spectrum markets have not yet emerged as a key spectrum assignment component. The lack of liquidity in these markets is sometimes cited as a primary factor in this outcome. This work focuses on determining the conditions for viability of spectrum trading markets. We make use of agent-based computational economics to analyze different market scenarios and the behaviors of their participants. Our results provide guidelines regarding the number of market participants and the amount of tradable spectrum that should be present in a spectrum trading market for it to be viable. We use the results of this analysis to make recommendations for the design of these markets.
INTRODUCTION
The radio frequency spectrum is a highly regulated resource whose management is deferred to a government agency in most countries. Spectrum management encompasses all the activities related to regulating this resource, including spectrum allocation and assignment as well as regulation enforcement. For our purposes, spectrum allocation refers to defining acceptable uses of certain bands (e.g., FM radio), whereas spectrum assignment is the process of granting rights to particular users in a band that has been allocated (e.g., a radio station). Traditional spectrum allocation and assignment mechanisms have focused on avoiding interference between users and on the type of use given to spectrum, rather than on the efficient use of spectrum and the maximization of economic benefits. As a result, most of the spectrum is used suboptimally most of the time, with low average occupancy values (less than 10 percent, as reported in [1]). Additionally, managing spectrum has become
increasingly difficult for regulatory agencies due to the new technologies and uses for spectrum that are continuously emerging and placing increasing demands on this resource. Thus, more flexible assignment mechanisms have to be put in place to adjust to this new reality while still achieving the best usage of spectrum possible under economic or social welfare considerations [2]. Spectrum auctions have become a common technique for regulatory agencies to assign spectrum to new users. Once the auction is over, however, the license holders do not get feedback about the current valuation of spectrum. For economically driven spectrum assignment to be optimally effective, a secondary market must exist that allows spectrum users to optimally choose between capital investment and spectrum use on a continuous basis, not just at the time of initial assignment [2]. The interactions in the market should take into account the geographic reusability and non-perishable characteristics of spectrum, which make its trading different than trading traditional market commodities. Unlike much of the dynamic spectrum assignment (DSA) literature, which focuses on noncooperative sharing, spectrum trading is a form of primary cooperative spectrum sharing that involves permanent license transfers for economic consideration [3]. Thus, it assigns spectrum to those who value it most, allowing for the establishment of dynamic market-driven and competitive wireless communication markets. Spectrum trading markets are of growing interest to many spectrum management agencies. They are motivated by their desire to increase the use of market-based mechanisms for spectrum management to increase spectrum efficiency. The research reported here is, in many ways, a best case analysis to determine the viability of those markets.
PARTICIPANTS IN A SPECTRUM TRADING MARKET
To understand the organization of and interactions in a spectrum trading (ST) market, we need to know what entities participate in such a market. In [4] we elaborated a classification for market structures that support ST. This classification considered two main types of market
structures for spectrum trading: over-the-counter (OTC) markets and exchange-based market operation. We focus on the exchange-based case in this article. Figure 1 illustrates an exchange-based trading scenario. In this scenario the exchange collects the offers to sell and offers to buy (bids) spectrum, determines the winning bid, and transfers the spectrum usage right from the selling spectrum user to the new owner of that right. The entities that participate in exchange-based ST markets are the following.
SPECTRUM EXCHANGE
An entity that provides and maintains a marketplace or facilities for bringing together buyers and sellers of spectrum, in which spectrum trading transactions can take place. It also publicizes prices and anonymizes trading entities.
SPECTRUM USER
An entity that uses spectrum for a particular purpose. A spectrum user (SU) might be acting in one of the following roles at a given moment in time:
• Spectrum license holder (SLH): An entity that owns a spectrum license and offers it for trading in exchange for financial compensation. This entity can be a wireless service provider, market maker, or spectrum exchange that has been assigned a spectrum trading band by a regulatory agency. In general, SLHs hold spectrum for speculation or for their own use.
• Spectrum license requestor (SLR): An entity that submits bids for spectrum licenses to the ST market with the intent to acquire the license. SLRs obtain spectrum for speculation or their own use. An entity that acts as an SLR can be a wireless service provider, market maker, or company/enterprise that acquires spectrum on behalf of another.
SPECTRUM REGULATOR
A spectrum regulator is a government entity that oversees the ST market and defines the regulations for its operation. It is also responsible for maintaining a spectrum availability and assignment database, which is updated every time a spectrum trade is completed to register the identity of the new holder of spectrum.
MARKET MAKERS
A market maker facilitates trading; it does not provide services with its inventory. It acts as a dealer that holds an inventory of spectrum and stands ready to trade when an SLR (buyer) or SLH (seller) desires. It gets revenue through the spread between the sell and buy prices for spectrum, and holds a spectrum inventory for negotiating and speculating.
EXCHANGE-BASED ST MARKETS
In exchange-based ST markets, the spectrum exchange is the central entity for market operation. In general, an exchange denotes the idea of a central facility where buyers and sellers can transact and which may charge fees for its services.
Figure 1. Spectrum trading scenario: overseen by the regulator, the spectrum exchange receives postings of tradable spectrum and submissions of bids from spectrum users (one owning spectrum and wanting to sell it, another needing spectrum and wanting to buy it), gathers this information, and finalizes the trade.
In the traditional sense, an exchange is usually involved in the delivery of the product. For a spectrum exchange to allow use of traded spectrum, however, the required devices do not need to be collocated in the exchange, so the exchange might not be involved in the delivery of service. We assume that spectrum exchanges make use of continuous double auctions as a mechanism to match buyers and sellers. We consider that the spectrum exchange acts as a pooling point (POOL) if its facilities house the communication equipment that enables the delivery of wireless services through spectrum acquired by a buyer in the exchange. This kind of exchange also takes care of the configuration of equipment required to make the spectrum usable to the new license holder. A non-pooling point exchange (NOPOOL) only delivers the authorization for use of spectrum to the buying party in a spectrum trade. The new SLH must then use this authorization to configure its devices to make use of the spectrum it has just acquired. From a functional perspective, a spectrum exchange can be a band manager (BM) for a given segment of spectrum over a region or have no band manager functionality (NOBM). BM exchanges support leasing arrangements in addition to permanent license transfers. In contrast to BM exchanges, a NOBM exchange will only facilitate the trading of spectrum among entities in the market without holding any spectrum inventory itself. For scenarios where the exchange has BM functionality, SLRs send a request for spectrum to the exchange, which, if possible, will assign spectrum to the requesting entity in the form of a timed lease within the band managed by the exchange. For a NOBM exchange, the spectrum it handles for trading comes from market participants that use the exchange and make bids and offers of spectrum.
Table 1. Types of exchanges.

POOL_BM — pooling point + band manager functionality:
• Use of traded spectrum is enabled and configured through equipment/infrastructure owned by the exchange.
• All tradable spectrum is held by the exchange.
• All tradable spectrum returns to or is given by the exchange.

POOL_NOBM — pooling point only, no band manager functionality:
• Use of traded spectrum is enabled and configured through equipment/infrastructure owned by the exchange.
• Different segments of spectrum can be activated and configured through the equipment/infrastructure of the exchange.
• No spectrum inventory is held by the exchange.

NOPOOL_BM — non-pooling point + band manager functionality:
• All tradable spectrum is held by the exchange.
• All tradable spectrum returns to or is given by the exchange.
• Exchange grants authorizations for use of spectrum (no equipment configuration is done by the exchange).

NOPOOL_NOBM — non-pooling point, no band manager functionality:
• No spectrum inventory is held by the exchange.
• Exchange grants authorizations for use of spectrum (no equipment configuration is done by the exchange).
It is worth mentioning that unless the market has defined a basic amount of bandwidth as a spectrum-trading unit, it will be very complicated to match bids and offers of spectrum without incurring wasteful assignment of this resource. Although giving a particular structure to the way a spectrum-trading band should be segmented will limit its operational flexibility, it also provides benefits in terms of simplifying the specifications to characterize a particular spectrum trade and managing interference between ST users. From the previous discussion, the proposed classification generates four types of spectrum exchanges that can be used to implement an ST market. These are listed in Table 1.
MODELING OF ST MARKETS
We use agent-based computational economics (ACE) to analyze spectrum trading markets and the behaviors of their participants, which, given their variety, would be difficult to study with conventional statistical and analytical tools. ACE is "the computational study of economic processes modeled as dynamic systems of interacting agents" [5]. An agent in an ACE model is a software entity with defined data and behavior. Agents can represent individuals, institutions, firms, and physical entities. ACE methods have been used to study cooperative secondary spectrum sharing (in which temporary usage rights are negotiated) [6]. A spectrum trading market modeling tool (SPECTRAD) has been developed as part of our research work, and makes use of ACE methods and concepts. With this tool, we model the participants in an ST market over a set of different scenarios. Our focus is to determine the conditions for viability of these markets. We define a viable spectrum trading market as one that possesses good liquidity and sustainability characteristics. As a first step in our analysis, we have chosen to examine spectrum markets where only one wireless standard is used and the unit segments of tradable frequency have equal operational conditions (fungibility). Future work will
examine more complicated and realistic scenarios. The scenarios studied here represent the best case for liquidity. When modeling markets, the agents representing market participants have limited (if any) knowledge of the decisions and state of other market participants (bounded rationality). Agents adapt their behavior based on their goals and their interaction with the market and/or other agents. In ACE modeling, once initial conditions have been specified, the evolution of the model depends only on the interactions among agents, and given the diversity of interactions that can arise, it is difficult to perform straightforward causal analysis by tracking one market participant. Thus, ACE models provide a tool to observe the aggregate behaviors that emerge in a system from the individual behaviors of its components (agents). Analysis of these behaviors over several scenarios can provide insights into characteristics of new markets, the effect of economic policies, and the roles of institutions. Having characterized the trading, information overhead, and infrastructure costs of different ST market implementation architectures, and since we are interested in the running behavior (sustainability) of the market once its operating infrastructure has been put in place, we find that the only differentiating factor between them is whether the exchange is organized to work as a band manager (BM) or not (NOBM) [2]. For brevity, we present our model results for NOBM scenarios in this article. The following subsections describe the assumptions and behaviors of the market entities used in our models. Further details of the implementation of our models can be found in [2].
GENERAL MARKET SETUP AND MODEL ASSUMPTIONS
The following are the assumptions used in our models:
• Interference conditions do not impact the services provided over a unit of traded spectrum.
• Trading takes place over an exchange entity and over a single geographic service area in which the wireless service providers (modeled by SU agents) have enough radio base stations to provide service coverage.
A market scenario is defined by the following set of parameters:
• Number of market participants (numSU)
• Distribution of spectrum users' valuation level (L)
• Amount of available spectrum for trading (S)
In the market scenarios considered, wireless service requests manifest to each SU as requests of traffic to be served, for which the SU has to determine whether it has sufficient resources. The SUs can obtain resources to serve traffic by either acquiring spectrum in the form of basic bandwidth units (BBUs) or using a unit of transmission of an alternate technology (AT). Investment in AT transmission units can resemble investing in equipment to make better use of spectrum already owned by the SU or in wireline technology, thus avoiding the purchase of additional BBUs. The choice between BBUs and ATs will be based on the economic benefit a given SU might receive from making a selection as it tries to minimize its costs for providing wireless service. Each SU has a fixed price for its choice of AT unit, which does not change during the life of the market. Thus, if an SU is acting as a spectrum license requestor (i.e., buyer) when the market price for a BBU is higher than its AT price, the SU will buy ATs; when BBU prices are lower than or equal to its AT price, the SU will buy BBUs.
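The resource choice just described reduces to a price comparison; the following one-function sketch (our restatement, not SPECTRAD's actual code) captures it:

```python
def acquire_resource(bbu_market_price: float, at_unit_price: float) -> str:
    """Decide how a spectrum user serves one unit of new traffic.

    Each SU has a fixed AT price for the life of the market; it buys
    spectrum (BBUs) only when the market makes that the cheaper option.
    """
    if bbu_market_price <= at_unit_price:
        return "buy_BBU"   # spectrum is the cheaper way to add capacity
    return "buy_AT"        # alternate technology is cheaper

print(acquire_resource(bbu_market_price=120.0, at_unit_price=100.0))  # buy_AT
print(acquire_resource(bbu_market_price=80.0, at_unit_price=100.0))   # buy_BBU
```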
BEHAVIORS FOR ST MARKET ENTITIES
Spectrum User Behavior — Spectrum users are the agents that model wireless service providers (WSPs), and buy and sell spectrum in order to meet traffic requests (buy) or obtain economic gain (sell). For our analysis we model the aggregate traffic demand for each SU within the ST service area with an exponential distribution with a mean of μtraffic. The interval between changes of traffic demand is modeled as an exponential distribution with a mean of μtchange. In this model we assume that the traffic demands faced by SUs are not correlated. The SUs submit requests to buy (bids) or sell (asks) to the exchange. The exchange collects these requests and tries to find the best match between requests to establish a trade. An SU can query the exchange for its current market quote, which contains the minimum ask and maximum bid price posted in the market. SUs use this information in their pricing decisions. Additionally, an SU can post buy/sell orders that should execute immediately at the current best price of the market (market orders) or specify to the exchange its desire to buy/sell BBUs at the best price possible but in no event pay/sell for more/less than a specified limit price (limit orders). If an SU buys AT units, it is aware that they have a finite lifetime and should be decommissioned in the future based on their mean lifetime.

NOBM Spectrum Exchange Behavior — We assume a continuous order-driven market in which SUs may trade at any time they choose.
After each order is posted, the exchange updates its order book, and if a trade can take place, it transfers the spectrum license from the seller to the buyer and records the details of the trading transaction. It also informs the regulator agent about the trade so that it can keep track of the owner of each BBU. After each trade, or if there was no trade, the exchange announces the market quote, informing market participants of the current market ask price (best price at which spectrum is being sold in the market) and the current market bid price (price of the best offer to buy spectrum in the market). This allows market participants to adapt their price behavior to make competitive bids or asks in the future.

Market Maker — The market maker (MM) provides liquidity to the market and corrects market imbalances. In our model the MM agent stands ready to make bids for spectrum if no SU is posting a bid, and it posts an offer to sell if no SU is on the selling side of the market. This makes the MM a very reactive entity that only intervenes in the market when there is a severe imbalance (i.e., no buyers or no sellers) in order to keep the market alive. Using a simplified MM allows us to determine which market scenarios are viable without much economic intervention from entities that do not provide wireless services. The MM has an initial inventory of BBUs assigned to it, which it uses to keep a bid-ask (buy-sell) spread present at all times in the market. When market intervention by the MM is not required, the MM will issue a bid or ask with the objective of getting its spectrum inventory back to its reference level, which is the same as its initial spectrum inventory amount.
Regulator Agent — A regulator agent models a regulator entity, oversees the trades being conducted in the market, and updates a spectrum assignment database so that ownership of a given BBU can be verified if needed.
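A compressed sketch of the exchange and market maker behaviors just described (a continuous double auction with a purely reactive MM); the class and method names are ours, not SPECTRAD's:

```python
class Exchange:
    """Continuous order-driven BBU market (NOBM: exchange holds no inventory)."""

    def __init__(self):
        self.bids = []   # (price, trader): offers to buy
        self.asks = []   # (price, trader): offers to sell

    def quote(self):
        """Market quote: highest posted bid and lowest posted ask."""
        bid = max(self.bids, key=lambda o: o[0], default=None)
        ask = min(self.asks, key=lambda o: o[0], default=None)
        return bid, ask

    def post(self, side, price, trader):
        (self.bids if side == "bid" else self.asks).append((price, trader))
        return self.match()

    def match(self):
        """Execute a trade whenever the best bid meets the best ask."""
        bid, ask = self.quote()
        if bid and ask and bid[0] >= ask[0]:
            self.bids.remove(bid)
            self.asks.remove(ask)
            # here: transfer the license, record the trade, notify the regulator
            return {"seller": ask[1], "buyer": bid[1], "price": ask[0]}
        return None

def market_maker_order(exchange, inventory, reference_level):
    """Reactive MM: intervene only when one side of the market is empty;
    otherwise nudge its inventory back toward the reference level."""
    bid, ask = exchange.quote()
    if bid is None:
        return "post_bid"    # no buyers: bid to keep the market alive
    if ask is None:
        return "post_ask"    # no sellers: offer BBUs from inventory
    return "post_ask" if inventory > reference_level else "post_bid"
```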
FACTORS FOR ST MARKET VIABILITY
Our focus is on determining the conditions for viable ST markets with respect to their liquidity and sustainability characteristics, so we selected a set of parameters/measures that capture the main characteristics of these markets.
• Midpoint price for spectrum BBU: This price gives an indication of the average price at which a BBU is being valued in the market. Low values of this measure indicate an excess in supply or low spectrum demand in the market, while high values indicate low supply or high demand for spectrum.
• Relative bid/ask spread: The bid/ask spread is the difference between the best sell price and the best buy price in the market. The relative bid/ask spread is the size of the bid/ask spread relative to the midpoint price of the quoted asset. This factor can be used as an indicator of the liquidity of a market [7, 8]. In other words, high values of this parameter indicate low liquidity in the market, while low values would indicate high liquidity.
Table 2. Parameters for modeled markets.

Parameter: Number of spectrum users (numSU)
Values: 4, 5, 6, 10, 20, 50 (this number includes one market maker)

Parameter: Distribution of spectrum users' valuation level (L)
Values (proportion of spectrum users at a given valuation level, i.e., willingness to pay):
Dist # | Low | Medium | High
1 | 1/3 | 1/3 | 1/3
2 | 1/2 | 1/4 | 1/4
3 | 1/4 | 1/4 | 1/2

Parameter: Available spectrum (S), given as the number of BBUs available for trading
Values: 5·numSU, 10·numSU, 15·numSU, 20·numSU, 25·numSU. The amounts of spectrum were chosen for each value of numSU so that R = S/numSU lies in the set {5, 10, 15, 20, 25}.
Table 3. Criteria for NOBM market scenario evaluation.

ID | Factor | Pass (P) | Fail (F) | Score P/F
C1 | Percentage of completed market runs | ≥70% | ≤50% | 2/–2
C2 | Relative bid-ask spread | ≤20% | ≥50% | 1/–1
C3 | Mid-point BBU price | ≥100 | ≤25 | 1/–1
C4 | Relative difference of the MM's inventory to its reference level | ≤25% | ≥100% | 1/–1
• MM's BBU inventory difference: This parameter tracks the difference between the MM's current spectrum holdings (BBU inventory) and its reference level of inventory units. Substantial deviations from the reference level would indicate problems in the buying or selling side of the market.
• Percentage of offered spectrum: Expressing the amount of spectrum being offered for sale as a percentage of the total spectrum available in the market, we find that if this percentage is high, the majority of the tradable spectrum is not in use by the SUs and thus is being offered for sale. This would indicate low spectrum efficiency. In general, the lower the value of this percentage, the more efficiency there is in terms of spectrum use.
• Number of complete market runs: For the collection of statistics to analyze each market scenario, 100 runs of each scenario are performed in SPECTRAD. Activity in a market starts with a series of mock auctions so that the SUs can find an initial starting price for trading. If this initial phase is not successful in finding a starting price, the market does not proceed to actual spectrum trading. This factor counts how many of the attempted market runs were successful in finding a stable starting price and thus able to initiate spectrum trading. It is useful as an indicator of market viability, since a high percentage of complete market runs indicates that initiating trading is feasible and sustainable
without difficulty. In contrast, having a low percentage of complete market runs would indicate that the market structure is not well suited to support sustainable trading.
All of these parameters were found to be applicable to NOBM markets. A similar analysis and determination of viability factors was conducted for BM markets, but it is not discussed here. The reader can find more details on the BM cases in [9].
VIABILITY CRITERIA AND RESULTS
A summary of the values used for the parameters of the market scenarios modeled is shown in Table 2. Each run of the model was executed for 5000 time ticks, of which 3000 were used for the warm-up period. Data was collected on the last 2000 time ticks. In the scenarios modeled, the variation of the tradable spectrum amount and the number of SUs are related in such a way that the value of the average number of BBUs per spectrum user (R) is in the set {5, 10, 15, 20, 25}. For all scenarios, when R is equal to 10, on average every spectrum user has enough spectrum to serve its average traffic requirement. Thus, R values lower than 10 indicate an undersupply of spectrum, while values greater than 10 indicate an oversupply of this resource relative to the average traffic needs of an SU. In order to determine the viable NOBM markets based on the factors mentioned in the previous section, we developed decision criteria to determine whether the behavior of a particular factor in a market is to be considered desirable/acceptable or undesirable/unacceptable. Additionally, in order to keep track of the aggregate behavior characteristics of a market, we gave a score to each factor: a positive value when the market complies with the desired behavior characteristic, or a negative one when it complies with the undesirable behavior criteria. Based on the total scores for a market's behavior, a final list of viable markets was obtained.
Most of the threshold values for each criterion were derived from the simulation data by taking into account the values that mark breakpoints between different behaviors for a scenario. Table 3 summarizes the criteria used to evaluate and score the different scenarios studied in this work. Factors such as the percentage of completed market runs were given more weight than other factors, given their relative importance in the determination of viability characteristics (sustainability in this case).
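Stated as code, the scoring rule of Table 3 is a few lines; this sketch is our restatement (function names are ours), returning the total score whose sign decides viability:

```python
def criterion(value, passes, fails, weight=1):
    """+weight if the pass rule holds, -weight if the fail rule holds,
    0 in the indeterminate band between the two thresholds."""
    return weight if passes(value) else (-weight if fails(value) else 0)

def nobm_score(pct_complete_runs, rel_spread_pct, mid_price, mm_inv_diff_pct):
    return (criterion(pct_complete_runs, lambda v: v >= 70, lambda v: v <= 50, 2)  # C1
            + criterion(rel_spread_pct, lambda v: v <= 20, lambda v: v >= 50)      # C2
            + criterion(mid_price, lambda v: v >= 100, lambda v: v <= 25)          # C3
            + criterion(mm_inv_diff_pct, lambda v: v <= 25, lambda v: v >= 100))   # C4

# A scenario counts as viable when its total score is greater than 0.
print(nobm_score(85, 15, 140, 10))    # 5: all criteria pass
print(nobm_score(40, 60, 20, 150))    # -5: all criteria fail
```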
VIABLE NOBM MARKETS
Using the list of criteria for scenario evaluation mentioned in Table 3, the scores for the simulated scenarios are summarized in Fig. 2. We consider markets with scores greater than 0 as viable. Scenarios meeting this condition satisfied several of the desirable conditions for a viable market; in particular, they all have a percentage of running markets greater than 50 percent. We only show the scores for scenarios with user valuation distribution #1 (as defined in Table 2), since there was no difference in terms of viable markets among scenarios with different valuation level distributions. Based on the scores, we can say that most of the viable market scenarios are those that have R values that meet the condition 5 ≤ R ≤ 10 and a number of spectrum users (numSU) such that 6 ≤ numSU ≤ 50. When R = 15, the viable scenarios are those with 10 ≤ numSU ≤ 20. Figure 3 shows the spectrum efficiency results from our modeled scenarios.

Figure 2. Scores for NOBM scenarios (total score vs. R for markets with 4, 5, 6, 10, 20, and 50 SUs).

Figure 3. Percentage of assigned spectrum for NOBM scenarios (assigned spectrum, as a percentage, vs. R for markets with 4, 5, 6, 10, 20, and 50 SUs).

VIABILITY IMPLICATIONS
NOBM spectrum trading markets are viable, under the criteria used in this work and the ideal conditions mentioned earlier, for markets with more than 6 spectrum users as long as there is no oversupply of spectrum, that is, when R = 5 and R = 10 (although cases with 5 SUs were viable when R = 10). A value of R = 5 indicates scenarios where on average there is 50 percent less spectrum per SU than needed to serve the SU's average traffic requirement. A value of R = 10 is the reference value where the amount of spectrum per user is very close to being enough to serve an SU's average traffic requirement, and is where most of the viable scenarios are found. When R = 15 there is a 50 percent oversupply of spectrum, and in this case the viable markets are those with 10 to 20 spectrum users. Thus, if there is little or no oversupply of spectrum and the number of spectrum users is greater than or equal to 6, most NOBM spectrum trading markets will be viable. Some of the implications of the results from the models used in this work are as follows.

Number of Market Participants — Spectrum trading is viable in markets with no excessive oversupply or undersupply of spectrum for a wide range of spectrum user counts. However, when the number of SUs is less than 6, NOBM markets are unviable. Regulators and entities interested in these markets must make sure that enough trading participants will be in the market. The results presented here can serve as a guideline, but should be complemented by further study of market environments under different traffic (demand) patterns.
Market Makers in a ST Market — Simple MMs as providers of liquidity, like the ones used in the models of this work, help in the establishment of viable markets by holding a spectrum inventory with which they can transact. Thus, regulators will not have to specify complex MM behavior requirements or rules. Since an MM does not make use of its spectrum, assigning too much inventory to an MM would decrease spectrum efficiency. However, the greater the inventory level of the MM, the better prepared it would be to intervene in the market if there is a lack of spectrum offerings. Thus, regulators will need to define the level of spectrum holdings of an MM to reach a desired balance of market viability vs. spectrum efficiency.

Amount of Available Spectrum for Trading — Oversupply of spectrum negatively affected all market scenarios considered. In particular, an oversupply of 100 percent above the level of spectrum the SUs need to serve their average
traffic needs leads to unviable markets. An adequate amount of spectrum for trading should be made available in the market in order to maintain trading activity. Our scenarios showed viability with spectrum amounts 50 percent above or below the level of spectrum needed to serve average traffic in the market.

Spectrum Efficiency — In the viable ST market scenarios, NOBM markets provided spectrum efficiencies between 51 and 77 percent under the ideal conditions of this work. These results show a positive characteristic of ST markets that is of great interest to regulators and SUs. Analysis of spectrum efficiency in other market scenarios and conditions is left for future work.
CONCLUSIONS
The outcomes of our study can help policy makers and wireless service providers (future and current) understand the required conditions for implementing spectrum trading markets. The market and operational scenarios we used assumed ideal conditions, among which were the use of a single wireless technology and the fungibility of spectrum in the service area where ST was operating. These restrictions were used to define initial, almost ideal scenarios over which to study the dynamics of spectrum trading. Future work will make use of our modeling tool (SPECTRAD) to analyze more complicated scenarios. Our models indicate that spectrum markets can be viable if sufficient numbers of market participants exist and the amount of tradable spectrum is balanced to the demand. Given that a minimum of five to six active spectrum users (wireless service providers) is necessary in a particular service area, it seems unlikely that spectrum markets will be viable in mobile markets unless the barriers to market entry for new service providers are lowered. ST markets may well create the incentive for the appearance of new types of wireless service providers, but without a truly liberalized ST market in place it is unlikely they will do so. Thus, a challenge for regulators and researchers alike will be identifying an appropriate band to promote spectrum trading or to facilitate the entry of new market participants, and perhaps even end users. The matter of balancing tradable spectrum to demand will prove more challenging for regulators because this requires insight into service demand, even though this need not be too precise. Thus, it will be important to develop useful (and observable) proxies that enable regulators to estimate how well markets are balanced.
The viability of spectrum trading in more complicated trading scenarios (i.e., more than one wireless standard and/or non-fungible spectrum) is left for future work. An important byproduct of our research is demonstrating how ACE can be applied to the study of telecommunication markets. Our methods and tools can be extended to the study of other telecommunications markets or scenarios where adaptive behaviors of the system participants are allowed and where studying the emergent behavior of the market is of interest.
REFERENCES [1] M. A. McHenry, “NSF Spectrum Occupancy Measurements Project Summary,” 2005. [2] C. E. Caicedo, Technical Architectures and Economic Conditions for the Viability of Spectrum Trading Markets, Ph.D. dissertation, Univ. of Pittsburgh School of Information Sciences, 2009. [3] M. Weiss and W. Lehr, “Market-Based Approaches for Dynamic Spectrum Assignment,” working paper, Nov. 2009; http://d-scholarship.pitt.edu/2824/. [4] C. E. Caicedo and M. Weiss, “An Analysis of Market Structures and Implementation Architectures for Spectrum Trading Markets,” Telecommun. Policy Research Conf., Fairfax, VA, 2008. [5] L. Tesfatsion, “Agent-Based Computational Economics: A Constructive Approach to Economic Theory,” Ch. 16, Handbook of Computational Economics, vol. 2, L. Tesfatsion and K. L. Judd, Eds., North-Holland, 2006, pp. 831–80. [6] A. Tonmukayakul and M. Weiss, “A Study of Secondary Spectrum Use Using Agent-Based Computational Economics,” Netnomics, vol. 9, no. 2, 2008, pp. 125–51. [7] L. Harris, Trading and Exchanges: Market Microstructure for Practitioners, Oxford Univ. Press, 2003. [8] T. Chordia, R. Roll, and A. Subrahmanyam, “Liquidity and Market Efficiency,” J. Financial Economics, vol. 87, 2008, pp. 249–68. [9] C. Caicedo and M. Weiss, “The Viability of Spectrum Trading Markets,” IEEE DYSPAN ‘10, Apr. 2010, Singapore.
BIOGRAPHIES
CARLOS E. CAICEDO BASTIDAS ([email protected]) is an assistant professor and director of the Center for Convergence and Emerging Network Technologies (CCENT) of the School of Information Studies at Syracuse University. He holds a Ph.D. in information science from the University of Pittsburgh, and M.Sc. degrees in electrical engineering from the University of Texas at Austin and Universidad de los Andes, Colombia. His current research interests are spectrum trading markets, secondary use of spectrum, security, and management of data networks.

MARTIN B. H. WEISS [M'76] holds a Ph.D. in engineering and public policy from Carnegie Mellon University and an M.S.E. in computer, information, and control engineering from the University of Michigan. He is currently a faculty member and associate dean for academic affairs and research at the School of Information Sciences at the University of Pittsburgh. He has performed techno-economic research in telecommunications and telecommunications policy over the past 20 years, and currently works on topics related to cooperative secondary use of electromagnetic spectrum.
TOPICS IN RADIO COMMUNICATIONS
Unified Space-Time Metrics to Evaluate Spectrum Sensing
Rahul Tandra, Qualcomm
Anant Sahai, University of California, Berkeley
Venugopal Veeravalli, University of Illinois at Urbana-Champaign
ABSTRACT Frequency-agile radio systems need to decide which frequencies are safe to use. In the context of recycling spectrum that may already be in use by primary users, both the spatial dimension to the spectrum sensing problem and the role of wireless fading are critical. It turns out that the traditional hypothesis testing framework for evaluating sensing misses both of these and thereby gives misleading intuitions. A unified framework is presented here in which the natural ROC curve correctly captures the two features desired from a spectrum sensing system: safety to primary users and performance for the secondary users. It is the trade-off between these two that is fundamental. The spectrum holes being sensed also span both time and space. The single-radio energy detector is used to illustrate the tension between the performance in time and the performance in space for a fixed value of protection to the primary user.
INTRODUCTION
Philosophically, frequency-agile radios' spectrum sensing is a binary decision problem: is it safe to use a particular frequency where we are, or is it unsafe? So it seems natural to mathematically cast the problem as a binary hypothesis test. Most researchers model the two hypotheses as primary user present and primary user absent. This suggests that the key metrics should be the probability of missed detection PMD and the probability of false alarm PFA. But is this truly the right model? Does it illuminate the important underlying trade-offs?
To understand how metrics can matter, it is useful to step back and consider familiar capacity metrics. Traditionally, the community studied point-to-point links. There, Shannon capacity (measured in bits per second per Hertz) is clearly the important metric. However, this is not enough when we consider a wireless communication network — the spatially distributed aspect is critical, and this shows up in the right metrics. For instance, Alouini and Goldsmith in [1] propose the area spectral efficiency (measured in bits per second per square kilometer per Hertz)
when links are closely packed together, and Gupta and Kumar in [2] further propose the transport capacity (measured in bit-meters per second per Hertz) when cooperation (e.g., multihop) is possible. It is these metrics that give much deeper insights into how wireless communication systems should be designed. Spectrum sensing is about recycling bandwidth that has been allocated to primary users and thereby increasing the capacity pre-multiplier for the secondary system. There turns out to be a significant spatial dimension to spectrum recycling for a simple reason — the same frequency will be reused by another primary transmitter once we get far enough away. Thus, the potential spectrum holes span both time and space. To see why ignoring this spatial dimension is misleading, we must first review the traditional binary hypothesis testing story, where the central concept is sensitivity: the lowest received signal power for which target probabilities of false alarm and missed detection can be met. More sensitive detectors are considered better, and it is well known that sensitivity can be improved by increasing the sensing duration. However, why does one demand very sensitive detectors? The strength of the primary's signal is just a proxy to ensure that we are far enough away. If wireless propagation were perfectly predictable, then there would be a single right level of sensitivity. It is the reality of fading that makes us demand additional sensitivity. But because fading can affect different detectors differently, a head-to-head comparison of the sensitivity of two detectors can be misleading. Instead, the possibility of fading should be incorporated into the signal present hypothesis itself. The bigger conceptual challenge comes in trying to understand false alarms. The traditional hypothesis test implicitly assumes that a false alarm can only occur when the primary users are entirely absent. But in the real world, the spectrum sensor must also guard against saying that it is close to the primary when it is far enough away. The signal absent hypothesis needs to be modified in some reasonable way that reflects both these kinds of false alarms. We must take into account that the users doing the sensing have some spatial distribution.
Figure 1. This figure illustrates the scenario of cognitive radios acting as sensing-based secondary users recycling TV whitespaces. The secondary user is allowed to use the channel if it is outside both the protected region and the no-talk region (the tan-colored annulus shown in the figure) of each primary transmitter that is currently ON. The spectrum-sensing problem boils down to identifying whether the secondary user is within a space-time spectrum hole or not.

Once both hypotheses have been appropriately modified, the receiver operating characteristic (ROC) curve appropriately reflects the fundamental trade-off in spectrum sensing between the safety guarantee for the primary users (captured by a metric we call the Fear of Harmful Interference, FHI) and the secondary users' ability to recycle the leftover spectrum for themselves (captured by the Weighted Probability of Space-Time Recovered metric, WPSTR). However, there are two subtle but important issues that must be addressed along the way lest we end up with trivial trade-offs. Both of these have to do with understanding the nature of the safety guarantee to the primary users. First, the underlying probabilistic model regarding the spatial distribution of the secondary users should not be consistent across the two hypotheses. In fact, it is better to use a worst-case spatial distribution under the frequency band occupied hypothesis so that the primary users' safety guarantee is strong. Second, the safety guarantee to primary users needs to be weakened at the start of each primary ON period. A time-domain sacrificial buffer zone needs to be introduced within which interference from secondary users is permissible; this gives the secondary user some time to evacuate the band and thus allows for some sensing. Without such a sacrificial buffer, the trade-offs almost invariably become trivial [3]. Unlike the traditional sensitivity-oriented metrics, these new metrics give a unified framework to compare different spectrum-sensing algorithms and yield several new insights into the space-time sensing problem. First, they clearly show that fading uncertainty forces the WPSTR performance of single-radio sensing algorithms to be very low for desirably small values of FHI. This captures the fact that a single radio examining a single frequency cannot distinguish whether it is close to the primary user and severely shadowed, or far away and not shadowed. Second, the metrics reveal the importance of diversity and how simple non-coherent detection can outperform matched filters in
practice. Third, an example is used to show that there exists a non-trivial trade-off between the spatial and temporal performance for a spectrum sensor. In general, there exists an optimal choice of the sensing time for which the WPSTR metric is maximized.
SPECTRUM SENSING BY SECONDARY USERS
Spectrum holes [4] are regions in space, time, and frequency within which it is safe for a secondary radio system to recycle the spectrum. The picture on the left in Fig. 1 shows that there is a spatial region around every primary transmitter, called the no-talk region, within which secondary users are not allowed to transmit. The spectrum hole is everywhere else — shown here in green. Intuitively, the two important dimensions along which a sensing algorithm should be evaluated are the degree to which it is successful in identifying spectrum holes that are actually there, and the amount of harmful interference caused to the primary system by falsely identifying spectrum holes. An ideal approach — for example, involving a centralized database with primary user participation and geolocation functionality for secondary users [5] — would, by definition, create zero unauthorized harmful interference and yet recover all the spectrum holes. To make the problem concrete, we now focus on a single primary user transmitting on a given frequency band. The picture on the right in Fig. 1 illustrates that the primary transmitter (a TV tower in this example) has a protection region (gray region in the figure), and any potential primary receivers within this area must be protected from harmful interference. The resulting no-talk radius rn can be computed from the protection radius, the transmit power of the secondary radio, and the basic wireless propagation model [6]. The sensing problem thus boils down to deciding whether the distance from the TV tower is less than or greater than rn.
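For intuition only, the sketch below backs out a no-talk radius from the protection radius, the secondary transmit power, and an interference limit under a simple log-distance path-loss model; the model and all parameter values are our illustrative assumptions, while [6] develops the actual computation:

```python
import math

def path_loss_db(d_m, pl0_db=30.0, d0_m=1.0, alpha=3.5):
    """Log-distance path loss with illustrative, assumed parameters."""
    return pl0_db + 10 * alpha * math.log10(d_m / d0_m)

def separation_for_loss_m(loss_db, pl0_db=30.0, d0_m=1.0, alpha=3.5):
    """Invert the path-loss model: distance giving at least loss_db of loss."""
    return d0_m * 10 ** ((loss_db - pl0_db) / (10 * alpha))

def no_talk_radius_m(protection_radius_m, secondary_tx_dbm, max_interf_dbm):
    """Worst case: a protected receiver sits on the protection boundary
    nearest the secondary, so the needed separation adds to the radius."""
    needed_loss_db = secondary_tx_dbm - max_interf_dbm
    return protection_radius_m + separation_for_loss_m(needed_loss_db)

# 10 km protection radius, 20 dBm secondary, -120 dBm interference limit:
print(round(no_talk_radius_m(10_000, 20.0, -120.0)))  # ~11.4 km with these assumptions
```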
REVIEW OF THE TRADITIONAL TIME-DOMAIN FORMULATION FOR SENSING
Currently, the most popular formulation of the spectrum sensing problem casts it as a binary hypothesis test between the following two hypotheses: primary ON and primary OFF. The two traditional hypotheses are:
$$H_0 \text{ (signal absent)}: \quad Y[n] = W[n]$$
$$H_1 \text{ (signal present)}: \quad Y[n] = \sqrt{P}\, X[n] + W[n], \qquad (1)$$

for n = 1, 2, …, N. Here P is the received signal power, X[n] are the unattenuated samples (normalized to have unit power) of the primary signal, W[n] are noise samples, and Y[n] are the received signal samples. The two key metrics in this formulation are the probability of false alarm, PFA, the chance that the detector falsely declares the signal present when it is actually absent; and the probability of missed detection, PMD, the chance that the detector incorrectly declares the signal absent when it is actually present. The lowest signal power P at which the detector can reliably meet (PFA, PMD) targets is called the detector's sensitivity, and the minimum number of samples required to achieve a target sensitivity is called the detector's sample complexity. The traditional triad of sensitivity, PFA, and PMD is used along with the sample complexity to evaluate the performance of detection algorithms.
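As an illustration of how these traditional metrics are evaluated, the sketch below estimates (PFA, PMD) for a simple radiometer (energy detector) under the model of Eq. 1 via Monte Carlo. The Gaussian stand-in for X[n] and the specific threshold are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def radiometer_errors(snr_db, n, threshold, trials=20000):
    """Monte Carlo (P_FA, P_MD) for an energy detector under Eq. 1.
    The test statistic is the average energy over n samples; noise has
    unit power, and the unit-power primary signal X[n] is modeled as
    Gaussian for simplicity."""
    p = 10 ** (snr_db / 10)                        # received power P
    w0 = rng.normal(size=(trials, n))              # H0: noise only
    t0 = np.mean(w0 ** 2, axis=1)
    x = rng.normal(size=(trials, n))               # unit-power signal
    w1 = rng.normal(size=(trials, n))
    t1 = np.mean((np.sqrt(p) * x + w1) ** 2, axis=1)
    p_fa = np.mean(t0 > threshold)    # declared present, actually absent
    p_md = np.mean(t1 <= threshold)   # declared absent, actually present
    return p_fa, p_md

print(radiometer_errors(snr_db=-5.0, n=200, threshold=1.15))
```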
DRAWBACKS WITH THE TRADITIONAL FORMULATION
The key idea behind the formulation in the previous section is that a detector that can sense weak signals will ensure an appropriately low probability of mis-declaring that we are outside the no-talk radius whenever we are actually inside it. However, this formulation has some fundamental flaws. How Much Sensitivity Do We Really Need? — The right level of sensitivity should correspond to the signal power at the no-talk radius. If there were no fading, the required sensitivity would follow immediately from the path-loss model. The traditional approach to dealing with fading is to incorporate a fading margin into the target sensitivity (e.g., set the sensitivity low enough to account for all but the 1 percent worst-case fades). However, different detectors may be affected differently by the details of the fading process. For example, a coherent detector looking for a single pilot tone would require a larger fading margin than a non-coherent detector averaging the signal power over a much wider band. Hence, thinking in terms of a single level of sensitivity for all detectors is flawed. How to Measure the Performance of Spectrum Sensors — Traditionally, the frequency-unused hypothesis (H0) has been modeled as receiving noise alone. However, it is perfectly fine for the primary transmitter to be ON, as long as the spectrum sensor can verify that it is outside the primary's no-talk radius. The real-world hypothesis H0 is actually different at different potential locations of the secondary radio. Building in a fading margin to the sensitivity has the unfortunate consequence of causing the false-alarm probabilities to shoot up when the spectrum sensor is close, but not too close, to the primary transmitter. This makes parts of the spectrum hole effectively unrecoverable [7]. Figure 2 shows this effect in the real world.
SPECTRUM SENSING: A SPACE-TIME PERSPECTIVE

Figure 2. The map on the top shows the location of TV towers (red triangles and circles) transmitting on channel 39 in the continental United States. The larger disks around the transmitters show the no-talk region around the TV transmitters within which a secondary user cannot recycle the channel. This shows that the true spectrum hole covers about 47 percent of the total area. The effective no-talk region for a radio using the –114 dBm rule (from [5]) is shown in the bottom figure — only 10 percent of the total area remains. This figure is taken from [7], where more details can be found on the available whitespace spectrum in TV bands.
The discussion in the previous section forces us to rethink the traditional hypothesis-testing formulation. Fading must be explicitly included, and the reality of different potential locations must also be explicitly accounted for. The received signal can be modeled as Y[n] = √P(R) X[n] + W[n] whenever the primary transmitter is ON, where R is the distance of the secondary radio from the primary transmitter. The received signal power P(R) is actually a random variable (modeled as independent of both
the normalized transmitted signal and the noise) that depends on both the path loss and fading distributions. This gives the following composite hypothesis testing model:
$$H_0: \; Y[n] = \begin{cases} \sqrt{P(R)}\, X[n] + W[n], & R > r_n \text{ and primary ON} \\ W[n], & \text{primary OFF} \end{cases}$$
$$H_1: \; Y[n] = \sqrt{P(R)}\, X[n] + W[n], \quad R \in [0, r_n], \qquad (2)$$
where we still need to decide on the primary user’s ON/OFF behavior and the distributions for R in the two hypotheses to permit evaluation of the two kinds of error probabilities for any spectrum sensor.
MODELING SPACE

The true position of the secondary user relative to the primary transmitter is unknown; this is why we are sensing. For H1, it is natural to assume a worst-case position, generally just within rn, where the primary signal is presumably weakest. A worst-case assumption makes the quality guarantee apply uniformly to all the protected primary receivers. Suppose we took the same approach to H0. Typically, the worst-case location under H0 would be just outside rn with the primary user ON. After all, if we can recover this location, we can presumably recover all the locations even further away, or when the primary user is OFF. Alas, this approach is fatally flawed since the signal strength distributions just within rn and just outside rn are essentially the same. No interesting trade-off is possible because we are missing the fundamental fact that a sensing-based secondary user must usually give up some area immediately outside rn to be sure of avoiding the use of areas within rn. Simply averaging over the distance R also poses a challenge. The interval (rn, ∞) is infinite, and hence there is no uniform distribution over it. This mathematical challenge corresponds to the physical fact that if we take a single primary-user model set in the Euclidean plane, the area outside rn that can potentially be recovered is infinite. With an infinite opportunity, it does not matter how much of it we give up! In reality, there are multiple primary transmitters using the same frequency. As a radio moves away from a given primary transmitter (R increases), its chance of falling within the no-talk radius of an adjacent primary transmitter increases. The picture on the bottom in Fig. 3 illustrates the Voronoi partitioning of a spatially distributed network of primary transmitters, and the picture on the top shows the effective single-primary-transmitter problem with a finite area. The key is to choose a probability measure w(r) r dr that weights/discounts the area outside rn appropriately. The rigorous way to do this is to use results from stochastic geometry and point-process theory [8]. However, the key insights can be obtained by choosing any reasonable probability measure. The numerical results here have been computed assuming w(r) is constant (= c) for 0 < r ≤ rn, and an exponential weighting function, w(r) = A exp(–κ(r – rn)), for r > rn. The constant part essentially tells us the probability of the primary being OFF. An exponential distribution is chosen for the rest because it has the maximum entropy among all distributions on [rn, ∞) with a given mean; in our case, this mean is related to the average minimum distance between two primary towers transmitting on the same channel.
Figure 3. The picture on the bottom shows the Voronoi partitioning of the space between primary transmitters. The multiple primary transmitter problem is approximated as an ideal single primary problem by including a spatial weighting function w(r) that discounts the value of areas far away from the primary transmitter.
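A minimal sketch of this weighting, assuming a particular probability mass inside rn (the 0.25 split standing in for the primary-OFF share is an assumption): choose c and A so that the measure integrates to one.

```python
import numpy as np
from scipy.integrate import quad

r_n, kappa = 150.3, 0.02   # km and 1/km, values used in the numerical section
mass_inside = 0.25         # assumed weight placed inside r_n

# int_0^{r_n} c r dr = c r_n^2 / 2, so:
c = mass_inside / (r_n ** 2 / 2)
# Closed form: int_{r_n}^inf exp(-kappa (r - r_n)) r dr = r_n/kappa + 1/kappa^2
A = (1 - mass_inside) / (r_n / kappa + 1 / kappa ** 2)

def w(r):
    return c if r <= r_n else A * np.exp(-kappa * (r - r_n))

# Verify the normalization  int_0^inf w(r) r dr = 1.
inside, _ = quad(lambda r: w(r) * r, 0, r_n)
outside, _ = quad(lambda r: w(r) * r, r_n, np.inf)
print(inside + outside)   # ~1.0
```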
MODELING TIME

The probability of being within the no-talk radius in H0 seems to capture the ON/OFF behavior in a long-term average sense. But long-term averages are not enough to allow us to evaluate sensing. Intuitively, if the primary user is coming and going very often, the issue of timeliness in sensing is more important than when the primary user is like a real television station and switches state very rarely, if at all. Consider a secondary user located inside the no-talk radius. Let U[n] = 1 only if the primary transmitter is ON at time instant n. Assume that we start sensing at time instants ni, and at the end of each sensing interval the secondary user decides whether the frequency is safe to use (Di = 0) or not safe to use (Di = 1). The secondary user transmits only if the frequency is deemed to be safe, and then senses again.
This induces a random process Zr[n] ∈ {0, 1} denoting the state of a secondary user located at a distance r from the primary transmitter, with 1 representing an actively transmitting secondary user. An example scenario is shown in Fig. 4. Intuitively, harmful interference could be quantified by measuring the fraction of the primary ON time during which a secondary user located inside the no-talk radius is transmitting. Suppose the primary transmitter is OFF, and a secondary user senses for N samples, correctly declares that the primary is OFF, and hence starts transmitting. There is still a finite non-zero probability that the primary comes back ON while the secondary is transmitting. This probability depends on the duration of the secondary user's transmission, but might have no connection to how long N is! For example, there would be no connection in a Poisson model (the maximum-entropy modeling choice) for primary transmissions [3]. The secondary could thus cause interference even when its spectrum sensor is as correct as it could possibly be. If this definition of interference were adopted, the only way to drive the probability of interference to zero would be to scale the secondary transmission time to zero. This would give a relatively uninteresting trade-off between the protection of the primary system and the performance of the secondary user. The naive definition does not recognize that causality implies that the initial segments of a primary transmission are intrinsically more exposed to interference. This is the time-domain counterpart to the spatial status of primary receivers located at the edge of decodability. Just as these marginal receivers must be sacrificed for there to be meaningful spectrum holes, it makes sense to assume that there is a temporal sacrificial buffer zone (Δ samples long) at the beginning of every OFF-to-ON transition of the primary user (illustrated as purple regions in Fig. 4). Secondary transmissions during this time should not be considered harmful interference.

Figure 4. The state of the primary user U[n], the sensing epochs, and the secondary ON/OFF process Zr[n] (dashed red line). The red sensing windows indicate events when the detector declares the frequency to be used, and the blue sensing windows indicate when the detector declares the frequency to be unused. The primary sacrificial buffer zones are shown by the purple shaded regions on the function U[n], and the actual harmful interference events are shown by shaded tan regions on U[n].
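To make the timing model concrete, here is a small simulation sketch of U[n], the sense/transmit cycle, and the buffer-zone accounting. All parameters (the Markov switching rates, N, and the transmit duration) are assumptions, and the detector is an idealized stand-in; the point is to show that when N plus the transmit burst is kept under Δ, a correct sensing decision cannot produce counted harmful interference.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (assumed, not from the article):
M = 200_000                      # time horizon in samples
p_on, p_off = 3e-4, 1e-4         # OFF->ON and ON->OFF switch probabilities
N, T_tx, delta = 50, 1500, 2000  # sensing window, tx burst, buffer (N + T_tx < delta)

# Primary state U[n] as a two-state Markov chain.
u = np.empty(M, dtype=bool)
u[0] = False
r = rng.random(M)
for n in range(1, M):
    u[n] = (not u[n - 1] and r[n] < p_on) or (u[n - 1] and r[n] >= p_off)

# Sense-then-transmit loop for a secondary inside the no-talk radius.
# For illustration the detector is a "genie": it declares the band busy
# iff the primary was ON anywhere in the sensing window.
z = np.zeros(M, dtype=bool)
n = 0
while n + N + T_tx < M:
    busy = u[n:n + N].any()
    if not busy:
        z[n + N:n + N + T_tx] = True    # D = 0: transmit for T_tx samples
    n += (N + T_tx) if not busy else N

# The first delta samples after every OFF->ON transition are sacrificial,
# so collisions there are not counted as harmful interference.
on_start = np.flatnonzero(u & ~np.roll(u, 1))
protected = u.copy()
for s in on_start:
    protected[s:s + delta] = False

harmful = np.mean(z & protected) / max(np.mean(protected), 1e-12)
print(f"fraction of protected primary ON time hit: {harmful:.2e}")  # 0 here
```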
SPACE-TIME METRICS

We now define two key metrics that are similar to the traditional metrics PFA and PMD, but are computed on the composite hypotheses in Eq. 2. The trade-off between them is the fundamental ROC curve for the problem of spectrum sensing. Safety: Controlling the Fear of Harmful Interference — This metric measures the worst-case safety that the sensing-based secondary user can guarantee to the primary user under uncertainty. We call it the Fear of Harmful Interference:

$$F_{HI} = \sup_{0 \le r \le r_n} \; \sup_{F_r \in \mathcal{F}_r} P_{F_r}(D = 0 \mid R = r),$$

where D = 0 is the detector's decision declaring that the frequency is safe to use, and $\mathcal{F}_r$ is the set of possible distributions for P(r) and W[n] at a distance r from the primary transmitter. The outer supremum is needed to issue a uniform guarantee to all protected primary receivers. The inner supremum reflects any non-probabilistic uncertainty in the distributions of the path loss, fading, noise, or anything else. Performance: Success in Recovering Spectrum Holes — By weighting the probability of finding a hole, PFH(r), with the spatial density function w(r)r, we compute the weighted probability of space-time recovered (WPSTR) metric:

$$\mathrm{WPSTR} = \int_0^\infty P_{FH}(r)\, w(r)\, r\, dr, \quad \text{where}$$

$$P_{FH}(r) = \begin{cases} \displaystyle \lim_{M \to \infty} \frac{1}{M} \sum_{n=1}^{M} I(Z_r[n] = 1), & \text{if } r > r_n \\[3mm] \displaystyle \lim_{M \to \infty} \frac{\sum_{n=1}^{M} I(Z_r[n] = 1,\, U[n] = 0)}{\sum_{n=1}^{M} I(U[n] = 0)}, & \text{if } r \le r_n \end{cases} \qquad (3)$$

and $\int_0^\infty w(r)\, r\, dr = 1$. The I above is shorthand for indicator functions that take the value 1 whenever their argument is true and 0 otherwise. Notice that the integral spans locations inside and outside the no-talk radius (0 to ∞). The name WPSTR reminds us of the weighting of performance over space and time. 1 – WPSTR is the appropriate analog of the traditional PFA, except that it also implicitly includes the overhead due to the sample complexity.
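Numerically, the WPSTR integral can be approximated once PFH(r) and w(r) are in hand. The sketch below reuses the constant-plus-exponential weighting described earlier together with an assumed recovery profile PFH(r); both the profile and the mass split are placeholders, not results from [10].

```python
import numpy as np
from scipy.integrate import quad

r_n, kappa = 150.3, 0.02
mass_inside = 0.25
c = mass_inside / (r_n ** 2 / 2)
A = (1 - mass_inside) / (r_n / kappa + 1 / kappa ** 2)

def w(r):
    return c if r <= r_n else A * np.exp(-kappa * (r - r_n))

def p_fh(r):
    """Assumed stand-in for Eq. 3: no recovery inside r_n, recovery
    improving smoothly with distance outside it."""
    return 0.0 if r <= r_n else 1.0 - np.exp(-(r - r_n) / 30.0)

inside, _ = quad(lambda r: p_fh(r) * w(r) * r, 0, r_n)
outside, _ = quad(lambda r: p_fh(r) * w(r) * r, r_n, np.inf)
print(f"WPSTR = {inside + outside:.3f}")
```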
INSIGHTS FROM THE NEW SPACE-TIME FRAMEWORK
ALWAYS ON PRIMARIES: PURELY SPATIAL SPECTRUM HOLES

Assume that the primary transmitter is always ON. This corresponds to sensing a spectrum hole whose temporal extent is infinite, so it does not matter how long we spend sensing. To remind us of this spatial focus (and to maintain consistency with [4]), we call the secondary performance metric the Weighted Probability of Area Recovered (WPAR) instead of WPSTR. Consider a single secondary user running a perfect radiometer (i.e., one with an infinite number of samples). If the noise variance is perfectly known, it is straightforward to derive expressions for FHI and WPAR [4]. The black curve in Fig. 5 shows the FHI vs. WPAR trade-off. Notice that the WPAR performance at low FHI is bad even for the perfect radiometer. This captures the physical intuition that guaranteeing strong protection to the primary user forces the detector to budget for deep non-ergodic fading events. Unlike in traditional communication problems, where there is no harm if the fading turns out not to be bad, here there is substantial harm since a valuable spectrum opportunity is left unexploited. Impact of Noise Uncertainty: SNR Walls in Space — The real-world noise power is not perfectly known. In the traditional formulation, this uncertainty causes the radiometer to hit an SNR wall that limits its sensitivity [9]. What happens under these new metrics? See [10] for the details, but the result is illustrated by the red curve in Fig. 5. The noise uncertainty induces a critical FHI threshold below which none of the spectrum hole can be recovered (WPAR = 0). In traditional terms, the sensitivity required to budget for such rare fading events is beyond the SNR wall. Just as the sample complexity explodes to infinity as the SNR wall is approached in terms of traditional sensitivity, the area recovered crashes to zero as FHI approaches this critical value. Dual Detection: How to Exploit Time-Diversity — The true power of these new metrics is that they allow us to see the importance of diversity. This can be cooperative diversity as discussed in [4], but the effect can be seen even with a single user. For example, one could presumably exploit time diversity for multipath if we believed that the actual coherence time is finite, Nc < ∞. However, for the radiometer, all the thresholds must be set based on the primary user's fear of an infinite coherence time — the spectrum sensor might be stationary. The radiometer cannot do anything to exploit the likelihood of finite coherence times even if the sensor is likely to be moving. The situation is different for a sinusoidal pilot tone, as illustrated by the blue curve in Fig. 5.
Figure 5. Under noise uncertainty (1 dB here), there is a finite FHI threshold below which the area recovered by a radiometer is zero (WPSTR = 0). The coherent detector (modified matched filter) has a more interesting set of plots, discussed in this article.

The best-case scenario for coherent detection from a traditional sensitivity perspective — infinite coherence time with no noise uncertainty — can be worse in practice than a simple radiometer with noise uncertainty. As the sinusoidal pilot is narrowband, the matched filter suffers from a lack of frequency diversity compared to the radiometer: fading is more variable, and the resulting conservatism costs us area. So does the matched filter have any use in wideband settings? Yes. It gives us an opportunity to deal with uncertain coherence times. We can run two parallel matched filters — one assuming an infinite coherence time and the other doing non-coherent averaging to combine matched-filtered segments of length Nc — with their thresholds set according to their respective assumptions on the coherence time. If either of them declares that the frequency is used, then the secondary user will not use this frequency. This ensures that the FHI constraint is met irrespective of the actual coherence time. The dual-detector approach thus leads to different FHI vs. WPAR curves depending on the mix of underlying coherence times (stationary devices or moving devices). The dashed curve in Fig. 5 shows the performance of the matched filter running with a known coherence time of Nc. Because it enjoys time diversity that wipes out multipath fading, it is limited only by the same non-ergodic wideband shadowing that limits even the wideband non-coherent detector without noise uncertainty. In principle, however, this dual detector still has an SNR wall due to noise uncertainty. To be able to illustrate this effect, Fig. 5 was plotted using a very short coherence time Nc = 100. For any realistic coherence time, the SNR wall effect would become negligible at all but extremely paranoid values of FHI.
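A sketch of the dual-detector decision rule just described: two matched-filter statistics, one coherent over the whole record and one non-coherently combining segments of length Nc, each compared against its own threshold and OR-ed together. The thresholds here are placeholders; in practice each must be set from its own FHI budget.

```python
import numpy as np

def dual_detector(y, pilot, n_c, thr_inf, thr_nc):
    """Declare the band used (True) if EITHER matched filter triggers.
    thr_inf / thr_nc are placeholder thresholds, one per coherence-time
    assumption (infinite vs. n_c samples)."""
    y = np.asarray(y, dtype=float)
    p = np.asarray(pilot, dtype=float)[: len(y)]
    # Coherent matched filter assuming infinite coherence time.
    coherent = abs(np.dot(y, p)) / len(y)
    # Non-coherent combining of per-segment matched-filter outputs.
    n_seg = len(y) // n_c
    segs = (y[: n_seg * n_c].reshape(n_seg, n_c) *
            p[: n_seg * n_c].reshape(n_seg, n_c)).sum(axis=1)
    noncoherent = np.mean(segs ** 2)
    return (coherent > thr_inf) or (noncoherent > thr_nc)

# Toy usage: a weak sinusoidal pilot buried in noise.
t = np.arange(4000)
pilot = np.cos(0.2 * np.pi * t)
y = 0.05 * pilot + np.random.default_rng(3).normal(size=t.size)
print(dual_detector(y, pilot, n_c=100, thr_inf=0.02, thr_nc=60.0))
```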
ON/OFF PRIMARIES: SPACE-TIME TRADE-OFF
When the primary signal has both ON and OFF periods, both space and time must be considered together.
Figure 6. The plot on top shows the FHI vs. WPSTR trade-off for a radiometer (Pon = 0.75, Δ = 2000) as the sensing time N varies. Note that the optimal value of the sensing time is a function of the target FHI. The plots on the bottom drill down for a particular FHI = 0.001 and show the trade-off between the traditional time metric (1 – N/Δ)(1 – PFA) and space recovery for a radiometer. The traditional metric underestimates the optimal sensing duration N whenever there is a spatial component to the spectrum holes.
Memoryless Sensing Algorithms — Even this restrictive class of algorithms brings out some interesting trade-offs that are absent in the space-only scenario discussed earlier. The assumptions we make are:
• The detector senses for N contiguous samples and makes a decision based on these samples alone. This is clearly suboptimal because longer-term memory could help significantly [11].
• The secondary user's sensing and transmission times are non-adaptive and fixed in advance.
• The primary state does not change within a sensing window. This approximation can underestimate the rate of missed detections because if the primary were to turn ON near the end of the sensing window, there would be a good chance that the detector would not trigger. To counter this, we enforce that the sum of the sensing duration and the secondary user's transmission duration is less than the buffer Δ. This ensures that the only way to cause unauthorized harmful interference is by having a sensing error during a window in which the primary is ON.
Numerical Simulations — For simplicity, consider a radiometer facing no fading at all. See [10] for the derivation of expressions for FHI and WPSTR; the results can also be extended to other single-user or even cooperative sensing algorithms. The parameters used to obtain our numerical results are chosen to match those in [12]: the TV tower's transmit power is assumed to be Pt = 10^6 W, its protection radius rp = 134.2 km, and the no-talk radius rn = 150.3 km. The received power at a distance r from the TV tower is modeled as P(r) = Pt · l(r), where log l(r) is a piecewise-linear continuous function of log r chosen to approximate the International Telecommunication Union (ITU) propagation curve given in [12, Fig. 1]. Finally, the exponent in the spatial weighting function w(r) := A exp(–κ(r – rn)) is chosen to be κ = 0.02 km^–1. The top plot in Fig. 6 shows curves depicting the FHI vs. WPSTR performance of a radiometer for different sensing times N. It is clear that the optimal N is a function of the desired safety FHI. The bottom right plot takes a slice at a fixed FHI = 0.001 and considers the radiometer's WPSTR performance as a function of the sensing time. It compares this with the traditional perspective's overall performance metric (1 – N/Δ)(1 – PFA). Notice that at very low N, essentially nothing is recoverable since the FHI constraint forces the detection threshold to be so low that noise alone usually triggers it. There is an optimal value of N that balances the time lost to sensing with the opportunities lost from false alarms, but the traditional perspective is far more aggressive about setting the sensing duration N.
This is because the two traditional hypotheses are well separated, but for potential locations close to rn, the relevant hypotheses are much closer together. As illustrated in the bottom left plot of Fig. 6, there is a tension that must be balanced between performance in space (which demands high fidelity from the radiometer and hence more sample complexity) and the solely time-oriented traditional performance metric.
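The behavior of the traditional metric can be reproduced in a few lines: using standard Gaussian approximations for the radiometer statistic, set the threshold from an assumed missed-detection budget and maximize (1 – N/Δ)(1 – PFA) over N. The SNR and PMD target are assumptions, and these are textbook approximations, not the expressions derived in [10].

```python
import numpy as np
from scipy.stats import norm

delta = 2000          # buffer length, matching the Fig. 6 setup
snr_db = -6.0         # assumed received SNR at the location of interest
p_md_target = 1e-3    # assumed missed-detection budget

gamma = 10 ** (snr_db / 10)

def traditional_metric(n):
    # Gaussian approximations for the average-energy statistic
    # (unit noise power): H0 ~ (1, sqrt(2/n)); H1 ~ (1+g, (1+g)sqrt(2/n)).
    lam = (1 + gamma) * (1 + norm.ppf(p_md_target) * np.sqrt(2 / n))
    p_fa = norm.sf((lam - 1) / np.sqrt(2 / n))
    return (1 - n / delta) * (1 - p_fa)

ns = np.arange(10, delta)
best = ns[np.argmax([traditional_metric(n) for n in ns])]
print("optimal N under the traditional metric:", best)
```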
CONCLUDING REMARKS

It is tempting to force spectrum sensors to be very sensitive so as to guarantee protection to the primary user (e.g., the –114 dBm rule in [5]). But the traditional metrics completely miss that this forces the loss of a significant portion of the spatial spectrum holes because of a presumed lack of diversity. To see the underlying trade-offs, a new joint space-time formulation is needed that casts the spectrum-sensing problem as a composite hypothesis test. Unfortunately, simple single-user strategies cannot obtain enough diversity to get a good trade-off. One needs to look at other sensing strategies, such as dual detection, collaborative sensing, and multiband sensing, to improve performance. The key is to have a robust way for the secondary user to conclude that it is indeed not deeply shadowed (not being shadowed is, after all, the typical case) and thereby avoid being more sensitive than is warranted. One possibility that deserves further investigation is to exploit sensor memory. If a secondary user has seen a strong primary signal in the recent past, it knows that it is probably not deeply shadowed. This suggests that cooperative change-detection-based algorithms can improve sensing performance in both space and time.
REFERENCES
[1] M. S. Alouini and A. Goldsmith, “Area Spectral Efficiency of Cellular Mobile Radio Systems,” IEEE Trans. Vehic. Tech., vol. 48, no. 4, July 1999, pp. 1047–66.
[2] P. Gupta and P. R. Kumar, “The Capacity of Wireless Networks,” IEEE Trans. Info. Theory, vol. 46, no. 2, Mar. 2000, pp. 388–404.
[3] S. Huang, X. Liu, and Z. Ding, “On Optimal Sensing and Transmission Strategies for Dynamic Spectrum Access,” IEEE DySPAN, Oct. 2008, pp. 1–5.
[4] R. Tandra, S. M. Mishra, and A. Sahai, “What is a Spectrum Hole and What Does it Take to Recognize One?” Proc. IEEE, May 2009, pp. 824–48.
[5] FCC, “In the Matter of Unlicensed Operation in the TV Broadcast Bands: Second Report and Order and Memorandum Opinion and Order,” tech. rep. 08-260, Nov. 2008.
[6] A. Sahai, N. Hoven, and R. Tandra, “Some Fundamental Results on Cognitive Radios,” Allerton Conf. Commun., Control, Comp., Oct. 2004.
[7] S. M. Mishra, Maximizing Available Spectrum for Cognitive Radios, Ph.D. dissertation, UC Berkeley, 2010.
[8] F. Baccelli and B. Blaszczyszyn, “Stochastic Geometry and Wireless Networks,” Foundations and Trends in Net., vol. 3, no. 3–4, 2009, pp. 249–449.
[9] R. Tandra and A. Sahai, “SNR Walls for Signal Detection,” IEEE J. Sel. Topics Signal Process., Feb. 2008, pp. 4–17.
[10] R. Tandra, A. Sahai, and V. Veeravalli, “Space-Time Metrics for Spectrum Sensing,” IEEE DySPAN, Apr. 2010, pp. 1–12.
[11] A. Parsa, A. A. Gohari, and A. Sahai, “Exploiting Interference Diversity for Event-Based Spectrum Sensing,” IEEE DySPAN, Oct. 2008.
[12] S. J. Shellhammer et al., “Performance of Power Detector Sensors of DTV Signals in IEEE 802.22 WRANs,” 1st Int’l. Wksp. Tech. Policy Accessing Spectrum, June 2006.
BIOGRAPHIES
RAHUL TANDRA ([email protected]) is a senior systems engineer at Qualcomm Research Center, Qualcomm Inc., San Diego, California. He currently works on the design and standardization of next-generation WLAN systems. He received his Ph.D. degree from the Department of Electrical Engineering and Computer Sciences at the University of California at Berkeley in 2009, where he was a member of the Wireless Foundations research center. In 2006 he worked as a summer intern with the Corporate R&D division of Qualcomm Inc., developing spectrum sensing algorithms for the IEEE 802.22 standard. Prior to that, he received an M.S. degree from Berkeley in 2005 and a B.Tech. degree in electrical engineering from the Indian Institute of Technology Bombay. His research interests are in wireless communication and signal processing. He is particularly interested in fundamental research questions in dynamic spectrum sharing.
ANANT SAHAI [BS’94, SM’96] ([email protected]) is an associate professor in the Department of Electrical Engineering and Computer Sciences at the University of California at Berkeley, where he joined the faculty in 2002. He is a member of the Wireless Foundations center. In 2001 he spent a year at the wireless startup Enuvis developing adaptive software radio algorithms for extremely sensitive GPS receivers. Prior to that, he was a graduate student at the Laboratory for Information and Decision Systems at the Massachusetts Institute of Technology. His research interests span wireless communication, decentralized control, and information theory. He is particularly interested in spectrum sharing, the nature of information in control systems, and power consumption.
VENUGOPAL VEERAVALLI [S’86, M’92, SM’98, F’06] ([email protected]) received his B.Tech. degree in 1985 from the Indian Institute of Technology Bombay (Silver Medal Honors), his M.S. degree in 1987 from Carnegie Mellon University, and his Ph.D. degree in 1992 from the University of Illinois at Urbana-Champaign, all in electrical engineering. He joined Illinois in 2000, where he is currently a professor in the Department of Electrical and Computer Engineering, a research professor in the Coordinated Science Laboratory, and the director of the Illinois Center for Wireless Systems (ICWS). He served as a program director for communications research at the U.S. National Science Foundation in Arlington, Virginia, from 2003 to 2005. He was with Cornell University before he joined Illinois, and has been on sabbatical at MIT, IISc Bangalore, and Qualcomm, Inc. His research interests include distributed sensor systems and networks, wireless communications, detection and estimation theory, and information theory. He is a Distinguished Lecturer for the IEEE Signal Processing Society for 2010–2011. He has been on the Board of Governors of the IEEE Information Theory Society. He has also served as an Associate Editor for IEEE Transactions on Information Theory and IEEE Transactions on Wireless Communications. Among the awards he has received for research and teaching are the IEEE Browder J. Thompson Best Paper Award, the National Science Foundation CAREER Award, and the Presidential Early Career Award for Scientists and Engineers (PECASE).
GUEST EDITORIAL
ADVANCES IN STANDARDS AND TESTBEDS FOR COGNITIVE RADIO NETWORKS: PART II
Edward K. Au, Dave Cavalcanti, Geoffrey Ye Li, Winston Caldwell, and Khaled Ben Letaief

This feature topic is a continuation of our September 2010 feature topic on standards and testbeds for cognitive radio networks. In this second part of the feature topic, two invited articles and three articles selected from a pool of high-quality submissions are introduced. We hope our readers will find these articles useful not only for understanding recent developments, but also for inspiring their own work. Our feature topic begins with an invited article, “Wireless Service Provision in TV White Space with Cognitive Radio Technology: A Telecom Operator’s Perspective and Experience,” contributed by M. Fitch and his colleagues at British Telecom. This article describes three use cases that are of interest to telecommunications operators: future home networks, street coverage, and broadband access to rural and underserved premises. The authors also present results of modeling and experimental trials, and identify a number of technical and business challenges as well. Wang and his colleagues at Philips Research North America next present the article “Emerging Cognitive Radio Applications: A Survey,” which presents, primarily from a dynamic spectrum access perspective, some emerging applications, desirable benefits, and unsolved challenges. The authors also illustrate related standardization that uses cognitive radio technologies to support such emerging applications. Following the two invited articles, the next article overviews the cognitive radio system (CRS) and its standardization progress. Generally speaking, the CRS can be characterized as a radio system having capabilities to obtain knowledge, adjust its operational parameters and protocols, and learn. Many CRS usage scenarios and business cases are possible. In the article “International Standardization of Cognitive Radio Systems,” Filin et al. describe the current CRS concept and the ongoing standardization progress in different international standardization bodies. The final two articles are related to testbeds, architectures, and prototypes of cognitive radio networks. For the
first article, “Cognitive Radio: Ten Years of Experimentation and Development,” Pawelczak and his coauthors provide synopses of the state-of-the-art hardware platforms and testbeds, examine what has been achieved in the last decade of experimentation and trials relating to cognitive radio and dynamic spectrum access technologies, and present insights gained from these experiences in an attempt to help the community grow further and faster in the coming years. The last article that appears in this issue is titled “SpiderRadio: A Cognitive Radio Network with Commodity Hardware and Open Source Software” and is contributed by Sengupta et al. In this contribution, the authors begin with a discussion of the key research issues and challenges in the practical implementation of a dynamic spectrum access network. The discussion is followed by a presentation of the lessons learned from the development of dynamic spectrum access protocols, the design of management frame structures, the implementation of the dynamic spectrum access network protocol stack using software, and the results of testbed experimental measurements.
ACKNOWLEDGMENT

We would like to thank all individuals who have contributed toward this special issue. In particular, we thank all the reviewers for their high-quality professional reviews within the tight deadlines given to them. Special thanks are due to the Editor-in-Chief, Dr. Steve Gorshe, for his guidance. Last but not least, we would also like to thank the publications staff of the IEEE Communications Society, Devika Mittra, Jennifer Porcello, Cathy Kemelmacher, and Joe Milizzo, for their professional advice and support throughout the development of this feature topic.
BIOGRAPHIES
EDWARD AU [M] ([email protected]) holds a Ph.D. degree in electronic and computer engineering from Hong Kong University of Science and Technology (HKUST). As a principal engineer at Huawei Technologies, he has worked on research and product development for 100 Gb/s and beyond optical long-haul communications and is now leading a project on fixed
wireless transmission systems. He has actively participated in standardization organizations and industry forums. He is the primary technical representative of Huawei in the Wi-Fi Alliance and an active contributor to the Optical Internetworking Forum (OIF), where he is a co-editor of the channel coding project for 100 Gb/s DWDM optical transmission systems and a member of the Speakers Bureau representing OIF at industry and academic events. He was also a working group secretary of IEEE 802.22, the first international standard on cognitive radio networks. He also remains active in the research community: he is currently an Associate Editor of IEEE Transactions on Vehicular Technology and lead guest editor for the IEEE Communications Magazine Feature Topic on Advances in IEEE Standards and Testbeds for Cognitive Radio Networks. He is a founding member of the Shenzhen Chapter, IEEE Communications Society.
DAVE CAVALCANTI [M] ([email protected]) has been a senior member of research staff at Philips Research North America, Briarcliff Manor, New York, since 2005. He received his Ph.D. in computer science and engineering in 2006 from the University of Cincinnati, Ohio. He received his M.S. in computer science in 2001 and B.Sc. in electrical engineering in 1999 (with Honors) from the Federal University of Pernambuco, Brazil. His research interests include communication protocols for wireless networks, cognitive radio networks, wireless sensor networks, heterogeneous networks, and control and automation applications of wireless networks. He has contributed to standardization of several connectivity technologies in the IEEE 802.15, 802.11, and 802.22 working groups. He has delivered tutorials and served as a technical program committee member for several conferences in the area of wireless communications and networking. He has served as a guest editor of IEEE Wireless Communications, and as Chair of the IEEE Computer Society Technical Committee on Simulation (TCSIM) since 2008.
GEOFFREY YE LI [F] ([email protected]) received his B.S.E. and M.S.E. degrees in 1983 and 1986, respectively, from the Department of Wireless Engineering, Nanjing Institute of Technology, China, and his Ph.D. degree in 1994 from the Department of Electrical Engineering, Auburn University, Alabama. He was a teaching assistant and then lecturer with Southeast University, Nanjing, China, from 1986 to 1991, a research and teaching assistant with Auburn University, Alabama, from 1991 to 1994, and a post-doctoral research associate with the University of Maryland at College Park from 1994 to 1996. He was with AT&T Labs — Research, Red Bank, New Jersey, as a senior and then principal technical staff member from 1996 to 2000. Since 2000 he has been with the School of Electrical and Computer Engineering at Georgia Institute of Technology as an associate and then a full professor. He has held the Cheung Kong Scholar title at the University of Electronic Science and Technology of China since March 2006. His general research interests include statistical signal processing and telecommunications, with emphasis on OFDM and MIMO techniques, cross-layer optimization, and signal processing issues in cognitive radios. In these areas he has published about 200 papers in refereed journals or conferences and filed about 20 patents. He has also written two books. He has served or is currently serving as an editor, a member of the editorial board, and guest editor for about 10 technical journals. He organized and chaired many international conferences, including serving as technical program vice-chair of IEEE ICC ’03. He was selected as a Distinguished Lecturer for 2009–2010 by the IEEE Communications Society, and won the 2010 IEEE Communications Society Stephen O. Rice Prize Paper Award in the field of communications theory.
WINSTON CALDWELL ([email protected]) received his B.Eng. degree in electrical engineering from Vanderbilt University and his M.S. degree in electrical engineering from the University of Southern California. He is a licensed Professional Engineer in the state of California with over 16 years of electrical engineering experience, specializing in RF propagation and antenna design. He is vice president of spectrum engineering for News Corporation’s Fox Technology Group. In the past he served as a systems engineer in the computer industry with EMC Corporation and as a senior data systems and telemetry engineer in the aerospace industry with the Boeing Company. To keep his knowledge and understanding up to date in the changing world of frequency spectrum technologies and policies, he is an active member of, and presenter of engineering analyses at, the ITU, IEEE, MSTV, NABA, NAB, and SMPTE. He is a founding member of IEEE P802.22, Chairman of the IEEE 802.22.2 Recommended Practice Task Group, and Liaison to the IEEE 802.18 Radio Regulatory Technical Advisory Group.
KHALED BEN LETAIEF [F] ([email protected]) received a B.S. degree with distinction in electrical engineering from Purdue University, West Lafayette, Indiana, in December 1984. He received M.S. and Ph.D. degrees in electrical engineering from Purdue University in August 1986 and May 1990, respectively. From January 1985, as a graduate instructor in the School of Electrical Engineering at Purdue University, he taught courses in communications and electronics. From 1990 to 1993 he was a faculty member at the University of Melbourne, Australia. Since 1993 he has been at Hong Kong University of Science & Technology, where he is currently dean of engineering. He is also Chair Professor of Electronic and Computer Engineering as well as director of the Hong Kong Telecom Institute of Information Technology and the Wireless IC System Design Center. His current research interests include wireless and mobile networks, broadband wireless access, OFDM, cooperative networks, cognitive radio, MIMO, and beyond 3G systems. In these areas he has over 400 journal and conference papers, and has given invited keynote talks as well as courses all over the world. He has three granted and 10 pending U.S. patents. He has served as a consultant for different organizations and is the founding Editor-in-Chief of IEEE Transactions on Wireless Communications. He served on the editorial boards of other prestigious journals, including IEEE Journal on Selected Areas in Communications — Wireless Series (as Editor-in-Chief). He has been involved in organizing a number of major international conferences and events. These include serving as Co-Technical Program Chair of IEEE ICC — Circuits and Systems, ICCCS ’04; General Co-Chair of the 2007 IEEE Wireless Communications and Networking Conference; Technical Program Co-Chair of IEEE ICC ’08; and Vice General Chair of IEEE ICC ’10. He has served as an elected member of the IEEE Communications Society Board of Governors and as an IEEE Distinguished Lecturer. He also served as Chair of the IEEE Communications Society Technical Committee on Wireless Communications, Chair of the Steering Committee of IEEE Transactions on Wireless Communications, and Chair of the 2008 IEEE Technical Activities/Member and Geographic Activities Visits Program. He is a member of the IEEE Communications and Vehicular Technology Societies’ Fellow Evaluation Committees as well as of the IEEE Technical Activities Board/PSPB Products & Services Committee. He is the recipient of many distinguished awards, including the Michael G. Gale Medal for Distinguished Teaching (highest university-wide teaching award at HKUST); the 2007 IEEE Communications Society Publications Exemplary Award; eight Best Paper Awards; and the prestigious 2009 IEEE Marconi Prize Paper Award in Wireless Communications. He is Vice-President for Conferences of IEEE ComSoc.
COGNITIVE RADIO NETWORKS (INVITED ARTICLE)
Wireless Service Provision in TV White Space with Cognitive Radio Technology: A Telecom Operator’s Perspective and Experience Michael Fitch, Maziar Nekovee, Santosh Kawade, Keith Briggs, and Richard MacKenzie, BT
ABSTRACT

Currently there is a very fundamental change happening in spectrum regulation, possibly the most fundamental in its history: the enabling of spectrum sharing, where primary (licensed) users of the spectrum are forced to allow sharing with secondary users, who use license-exempt equipment. Such sharing is free for the secondary users, subject to the condition that they do not cause harmful interference to the primary users. The first instance of such sharing is occurring in the UHF digital TV spectrum, in what is commonly called TV white space. Regulators such as the FCC in the United States and Ofcom in the United Kingdom have indicated that other spectrum will follow suit. Cognitive radio is an enabling technology for such sharing. Following recent rulings by the FCC and Ofcom and the emergence of a series of related industry standards, CR operation in TVWS is moving from the research domain toward implementation and commercialization, with use cases that are of interest to telecom operators. In this article we describe three such use cases: future home networks, coverage of the street from inside buildings, and broadband access to rural and underserved premises. We present results of modeling and trials of technical feasibility undertaken by the Innovate and Design team at BT. Based on our experience, we draw conclusions regarding the feasibility and commercial importance of these use cases, and identify some of the remaining technical and commercial challenges.
INTRODUCTION

Cognitive radio (CR) [1, 2] is being intensively researched as the enabling technology for secondary access to TV white space (TVWS). The TVWS spectrum comprises large portions of the UHF spectrum (and VHF in the United States) that are becoming available on a geographical basis for sharing as a result of the switchover
from analog to digital TV. This is where secondary users can, using unlicensed equipment, share the spectrum with the digital TV transmitters and other primary (licensed) users such as wireless microphones. Such secondary access is free, but is conditional upon not causing harmful interference to the primary users. The avoidance of interference through the use of cognitive techniques, databases, and sensing is described later in this article. The total capacity associated with TVWS turns out to be quite significant. For example, modeling commissioned by Ofcom revealed that over 50 percent of locations in the United Kingdom have more than 150 MHz of TVWS available for cognitive access. This offers scope for a considerable amount of capacity; when the wide-area coverage ability of the UHF frequency range and the fact that its use is free are also taken into account, TVWS becomes an attractive proposition for the use cases described in this article. Signals in the TV bands travel much further in cluttered environments than WiFi or 3G signals, and they penetrate into buildings with much less loss. Both in the United States [3, 4] and more recently in the United Kingdom [5, 6], the regulators have given conditional endorsement to this new sharing mode of access, and there is also significant industry effort underway toward standardization, trials, and testbeds. These include geolocation databases and sensing for primary user protection, agile transmission techniques, and so-called etiquette protocols for intra- and inter-system coexistence in TVWS. So far the majority of research on cognitive access to TVWS has focused on a single cognitive device. However, the provision of commercial services based on the technology (e.g., unlicensed mobile broadband or wireless home networks) will involve systems of multiple cognitive equipment types that may belong to either the same or different service providers. Feasibility studies of CR for such commercial applications therefore require
system-level studies performed in the context of real-life service scenarios. Implementations of TVWS services are likely to start with point-to-multipoint deployments (i.e., with zero mobility), such as rural broadband access and backhaul for small 3G/4G cells, and later progress to more mobile and quality of service (QoS)-aware systems. Access to TVWS will enable more powerful public Internet connections with extended coverage and improved download speeds. Time-division duplex (TDD) systems are preferable to frequency-division duplex (FDD) when using TVWS, since FDD requires a fixed separation between base station transmit and terminal transmit frequencies, and this condition restricts the number of TVWS channels available. TDD is free from this restriction and is also better suited to asymmetrical links; this points toward 802.11x (WiFi), 802.16 (WiMAX), and Third Generation Partnership Project (3GPP) Time Division Long Term Evolution (TD-LTE) as suitable candidates with mature standards. A requirement to avoid adjacent channels may be imposed if the TVWS transmitters have out-of-band emissions that are too high. The combination of FDD and an adjacent-channel restriction would severely reduce the amount of spectrum available and is best avoided. There is a growing demand for high-data-rate wireless services such as data, video, and multimedia for users in homes, in offices, and on the move. At the same time, governments have pledged to close the digital divide between urban and rural communities by providing universal broadband (2 Mb/s in the United Kingdom) to citizens regardless of their geographical location, and there is sometimes funding available to assist the process. Consequently, telecom operators are under pressure to cost-effectively provide such universal broadband services to rural communities, and are investigating the wireless option as an alternative to DSL or fiber. Traditional macro network designs will be uneconomical for providing equivalent coverage and capacity in licensed 3G and (in the near future) LTE bands, because network operating and site acquisition costs grow with the need for smaller cells, a need arising both from the demand for higher bit rates and from the need to support more users in the system. Therefore, alternative solutions based on CR technology operating on a license-exempt basis in TVWS spectrum are becoming commercially interesting, in particular for fixed-line operators that have a significant fiber and copper infrastructure, as well as for potential new entrants such as Google and Microsoft. TVWS may provide a viable and highly scalable alternative to conventional solutions based on cellular and/or WiFi technologies.
TVWS SPECTRUM AVAILABILITY AND DATABASE STRUCTURE

The commercial case for using TVWS will depend upon the amount of spectrum that becomes available for sharing, upon how the availability of this spectrum varies with location, and upon the transmit power allowed for cognitive devices [7].
Figure 1 shows the allocation of the UHF spectrum in the United Kingdom after the completion of the digital switchover (DSO) from analog to digital TV [5]. The 128 MHz of spectrum marked in green (16 channels) is the cleared spectrum, which Ofcom plans to license through auctions. The 256 MHz of spectrum marked in purple (32 channels) is the so-called interleaved spectrum, which can be used on a geographical basis for license-exempt use by CRs. Finally, the channel marked in pink (channel 38) is licensed by Ofcom for exclusive access by wireless microphones and other program making and special events (PMSE) equipment. This is a safe haven for such devices, but it is not sufficient for large events that commonly use over 100 wireless microphones. Hence, these primary users will use other channels as well, resulting in uncertainty about where in the spectrum these devices will be. There are two ways of dealing with this uncertainty. First, the TV transmitters and radio microphones can be registered in a database, and the CR device can interrogate the database periodically to find out which channels are free; such periods will typically be 2 h. Second, the CR device can sense the spectrum to detect when channels are free. In the future we believe that both methods will be used together in order to have flexibility and achieve maximum efficiency in secondary use of TVWS. In the short term, however, geolocation databases seem to provide a technically feasible and commercially viable solution. These interrogation and sensing processes must be followed by TVWS devices, and they distinguish such devices from both licensed devices and the license-exempt devices in use today (e.g., in the industrial, scientific, and medical [ISM] bands). Figure 2 shows an idea for a TVWS database structure, where the national regulator either owns or contracts out the supply of a central database, and there are several secondary databases that are typically owned by network operators. It is likely that the regulator will want to certify the algorithms used in all the databases. Such algorithms will be used to determine which channels can be used by the secondary devices and the transmit power they can employ without causing interference. The regulator is concerned only with protection of primary services and does not recommend or mandate any method of negotiation between secondary users. Obviously such negotiation is required for reasons of fairness, and this will be built into the etiquette between the secondary databases shown in Fig. 2. Ofcom in the United Kingdom is currently clarifying whether it has the necessary legal powers to regulate databases, and it will gain such powers from the government if needed. The central database contains the boundaries of the primary users, and algorithms to calculate the TVWS channels and the powers that can safely be used without causing interference. The location certainty input to this database is used to calculate the required protection margin: the greater the uncertainty in the location of the cognitive device, the higher the margin and the lower the allowed transmit power. Therefore, if the topology of a cognitive network is master-slave,
Figure 1. The U.K. UHF TV band (channels 21–69, 470–862 MHz) after the completion of digital switchover, showing retained/interleaved spectrum, cleared spectrum, and the PMSE channel [5].
Figure 2. A possible database structure. The national regulator exercises regulatory control over a central database (inputs: location and location certainty; outputs: channels, powers, and time to live), which serves operator-owned secondary databases; each secondary database interfaces to the MAC (inputs: link quality and mobility requirements, topology, antenna directivity, etc.; outputs: channels, powers, time to live, etc.).
the locations of the slaves will be different from that of the master; one way of handling this situation is to enter a low value for the location certainty. In this way, the location of the group is catered for by the master unit declaring a large uncertainty about its own position. This method will result in an overly conservative allocation of channels and transmit powers. A better idea might be to have a structure as shown in Fig. 2, where the secondary database contains more detailed information about the system topology, such as antenna directivities and the QoS and mobility needs of the individual radio links, so that the spectrum portfolio from the central database can be more efficiently brokered. The secondary database would optimize the location information that is given to the central database, to obtain a potentially wider choice of channels and higher transmit powers. The secondary databases can broker fairness and trading between secondary users, and would modify content and algorithm parameters using input from users and sensing. Methods for preventing spectrum hogging are under study, as are secure methods for interrogating and validating the databases. This two-step database approach is being developed in the European collaborative Framework 7 project QoSMOS [8], of which BT is a co-coordinating partner.
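As a sketch of the interfaces in Fig. 2, the following hypothetical query function returns channels, a power limit, and a time to live from a location and its uncertainty. The availability lookup is faked and the 3 dB-per-km margin rule is invented purely to illustrate the stated principle that greater location uncertainty means lower permitted power.

```python
from dataclasses import dataclass

@dataclass
class Grant:
    channels: list[int]    # permitted TVWS channels
    max_eirp_dbm: float    # permitted transmit power
    time_to_live_s: int    # validity before the device must re-query

PMSE_CHANNELS = {38}       # channel 38 is reserved for PMSE in the UK

def central_db_query(lat: float, lon: float, uncertainty_km: float) -> Grant:
    """Hypothetical central-database interface. A real implementation
    would consult the registered primary-user boundaries for (lat, lon);
    here every non-PMSE channel is treated as free for illustration."""
    free = [ch for ch in range(21, 70) if ch not in PMSE_CHANNELS]
    base_eirp = 20.0                       # dBm, assumed baseline
    margin = 3.0 * uncertainty_km          # assumed 3 dB of margin per km
    return Grant(free, base_eirp - margin, time_to_live_s=2 * 3600)

grant = central_db_query(51.5, -0.12, uncertainty_km=0.5)
print(len(grant.channels), grant.max_eirp_dbm, grant.time_to_live_s)
```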
Another European project in which BT plays a leading role is QUASAR [9], which is developing techno-economic decision-support methodologies and tools to help operators quantify the value of TVWS to their business, based on specific service provision requirements and future customer demand. Internally, BT has developed a set of modeling tools to quantify the availability and variability of TVWS spectrum across the United Kingdom. These make use of publicly available coverage maps of digital terrestrial TV (DTT), which were generated via computer simulations from Ofcom's database of the location, transmit power, antenna height, and transmit frequencies of the United Kingdom's DTT transmitters and repeaters. The tool combines these coverage maps with terrain data and simplified propagation modeling calculations to obtain estimates of the vacant TVWS frequencies at any given location in the United Kingdom. The amount of spectrum available is adversely affected if we impose the condition that the adjacent channel be kept free. Combining this field strength data with propagation modeling of cognitive devices, the available TVWS spectrum for cognitive access at any given location has been computed with a spatial resolution of 100 m. Herein also lies a problem: the amount of available TVWS appears to increase as the spatial resolution becomes finer, because smaller shadowed regions begin to be counted, a kind of fractal effect. What is missing is a standard method of computing the amount of TVWS spectrum available. Identification of suitable metrics and time/spatial resolutions is a future research topic, and the different national regulators should agree on and specify these parameters. The terrain data used in our modeling tool is based on the SRTM (version 2) digital elevation data set, which at present is the most complete high-resolution digital topographic database of Earth. Finally, using statistics of housing distribution in the United Kingdom, we have also extracted the population-weighted TVWS availability, which is important in investigating the feasibility of broadband provision to BT's customers in both urban and rural areas.
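A population-weighted availability curve of the kind described here can be computed directly from per-pixel outputs. The sketch below uses synthetic per-pixel bandwidth and population arrays purely as stand-ins for the real coverage-map data.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic stand-ins for the real per-pixel modeling outputs:
bw_mhz = rng.integers(0, 33, size=100_000) * 8   # available TVWS per pixel
pop = rng.pareto(2.0, size=100_000)              # relative population weight

def weighted_availability(bw, weights, threshold_mhz):
    """Fraction of the population with at least threshold_mhz of TVWS."""
    w = weights / weights.sum()
    return float(w[bw >= threshold_mhz].sum())

for t in (50, 100, 150):
    print(t, "MHz:", round(weighted_availability(bw_mhz, pop, t), 3))
```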
broadband provision to BT's customers in both urban and rural areas. Figure 3 shows, as an example, the availability and frequency decomposition of TVWS spectrum for low-power CRs, such as those used for home networks, in Central London after DSO. In this figure, vacant channels are shown as white bars, while occupied channels are left black. As can be seen, the available TVWS channels are highly non-contiguous. This feature can greatly restrict the use of TVWS for high-throughput applications, such as HDTV, by existing WiFi and 4G access technologies, as the modulation techniques implemented in these technologies can only operate using contiguous portions of spectrum. QoSMOS is developing flexible radio technology to bond channels even if they are not contiguous. In the case of London, if we are constrained to contiguous spectrum for pooling purposes, there is only 16 MHz available out of a total of 96 MHz. We shall discuss some of the consequences of this limitation in the next section. Figure 4 is a map of the United Kingdom, with 1 km square pixels that are colored according to the amount of TVWS that will be available after DSO. It can be seen that large amounts of TVWS are available in remote areas like Wales and Scotland. Figure 5 shows our computed population-weighted complementary cumulative distribution function of TVWS availability for the entire United Kingdom, once again for low-power devices (about +10 dBm EIRP). In some future high-power use cases, such as rural broadband (up to about +36 dBm EIRP), CRs may be constrained not to use vacant TV channels adjacent to those used for TV broadcasting. The blue curve in Fig. 5 shows the CDF of TVWS availability without this constraint, while the red curve is the CDF with this constraint. As can be seen from this figure, just over 70 percent of the U.K. population has potential access to at least 100 MHz of TVWS when no adjacent channel constraint is imposed. Although the spectrum availability is somewhat reduced when this constraint is imposed, there is still considerable bandwidth available (at least 50 MHz for just over 50 percent of the population).
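To make the availability computation just described concrete, the sketch below shows, in Python, one way such a population-weighted availability statistic could be assembled from per-pixel data. It is an illustration only, not BT's internal tool: the protection threshold, the adjacent-channel handling, and all variable names are assumptions.

import numpy as np

CHANNEL_MHZ = 8  # UK DTT channels are 8 MHz wide

def free_channels(dtt_field, protection_threshold, adjacent_free=False):
    # A channel counts as free if the modeled DTT field strength is below
    # the protection threshold; optionally both neighbors must be free too
    # (edge channels are then conservatively marked occupied).
    free = dtt_field < protection_threshold
    if adjacent_free:
        inner = free[1:-1] & free[:-2] & free[2:]
        free = np.concatenate(([False], inner, [False]))
    return free

def population_weighted_ccdf(mhz_per_pixel, households_per_pixel, grid_mhz):
    # P(available TVWS >= x), weighting each pixel by its household count.
    weights = households_per_pixel / households_per_pixel.sum()
    return np.array([(weights * (mhz_per_pixel >= x)).sum() for x in grid_mhz])

# Toy usage: three pixels with 96, 48, and 16 MHz free and different populations.
avail = np.array([96.0, 48.0, 16.0])
homes = np.array([100.0, 500.0, 400.0])
print(population_weighted_ccdf(avail, homes, np.array([16, 50, 100])))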
USE CASES

We classify the use cases into the following three scenarios:
• Indoor services, which generally require small coverage, and hence power levels that are either significantly lower (due to better propagation characteristics in the VHF/UHF bands) than or comparable to those used in the current ISM bands
• Outdoor coverage from indoor equipment, which requires penetration through walls with medium-range coverage (a few hundred meters), and hence power levels that are generally higher than or comparable to those in the ISM bands
• Outdoor services, which may require significantly higher transmit power levels than are currently permitted in the ISM bands (comparable to those used by cellular systems)
Figure 3. Free channels (8 MHz) in the Central London area are fragmented over the band (x-axis: channel number, 20–65).

Most indoor applications of TVWS can already be realized using WiFi and Zigbee technology operating in the 2.4 GHz and 5 GHz ISM bands. The main advantage of using TVWS is that the additional capacity offered will help to relieve congestion, particularly in the 2.4 GHz band. This use can also result in better indoor propagation of signals through the home. Furthermore, the lower frequencies of the TVWS bands can result in lower energy consumption (roughly an order of magnitude) compared to WiFi/Zigbee. This is a particularly interesting advantage for use case scenarios that involve battery-powered devices (laptops, smartphones, sensors, etc.). In the following sections we describe three use cases in more detail, with a discussion of the modeling and trial studies performed at BT in order to examine their technical feasibility and potential business benefits.
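The contiguity constraint mentioned above is easy to state programmatically. The following sketch (ours, with an invented channel pattern) contrasts the total free spectrum with the largest contiguous block, the quantity that matters for technologies that cannot bond non-adjacent channels:

def longest_contiguous_mhz(free_flags, channel_mhz=8):
    # Returns (largest contiguous block in MHz, total free spectrum in MHz).
    best = run = 0
    for is_free in free_flags:
        run = run + 1 if is_free else 0
        best = max(best, run)
    return best * channel_mhz, sum(free_flags) * channel_mhz

# Hypothetical pattern: 12 free channels (96 MHz total) but at most two adjacent,
# reproducing the 16 MHz contiguous vs. 96 MHz total situation described for London.
flags = [True, True, False, True, False] * 4
print(longest_contiguous_mhz(flags))  # -> (16, 96)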
WIRELESS MULTIMEDIA STREAMING FOR CONNECTED HOMES

Currently, operators are rolling out next-generation access networks. This means that optical fibers are being installed either all the way to homes or to street cabinets, with very-high-rate digital subscriber line (VDSL) providing the last links from the cabinets to homes. This will enable fixed broadband speeds of 40–100 Mb/s on the uplink and downlink to homes. Such connection speeds are required in order to support streaming of high-definition multimedia content to homes via the Internet, including high-definition TV (HDTV) and on-demand AV content such as the BBC's iPlayer and BT YouView. With this kind of high-data-rate content brought to homes, a new challenge for wireless is to provide distribution within the home environment. Furthermore, some home users may wish to view content on mobile and portable terminals, which typically have low-resolution screens, while others may be using HDTV terminals and set-top boxes, which in many cases need to be connected wirelessly to the Internet.
Currently, wireless multimedia streaming inside homes is mainly supported using WiFi technology operating in the 2.4 GHz band. In urban areas this band is already suffering from capacity limitations due to a combination of interference caused by the high density of devices that use the band and the well-known inefficiencies of WiFi's distributed coordination mechanisms. One potential solution is to migrate to the currently non-congested 5 GHz license-exempt band, which has 160 MHz of spectrum in band A and 220 MHz in band B. One problem with using the 5 GHz band for whole-home content distribution is that it does not deliver high bandwidths into all rooms of even a medium-sized house, in particular into rooms that are on a different floor from the access point. Simulations and tests performed by BT indicate that WiFi at 5 GHz using 802.11n can deliver less than 10 Mb/s into rooms directly above or below the access point, and less than 1 Mb/s if there is a combination of floors and walls to penetrate, where the access point is near the floor in a corner of a ground floor room, which is where most people locate it.
Figure 4. TVWS distribution across the United Kingdom (red: little; white: much).
The opening up of TVWS spectrum for license-exempt use could provide an alternative delivery mechanism that resolves both the interference and range issues of current WiFi solutions, and this alternative is being intensely considered by some of the industry players in the CogNea alliance [10], which has already developed a TVWS standard primarily focused on whole-home multimedia distribution [11]. The IEEE also currently has a working group developing the 802.11af amendment, which will allow for 802.11 deployment in TVWS. We have performed system simulation studies to assess the feasibility of using TVWS spectrum for multimedia distribution. We consider deployment scenarios in urban environments that involve thousands of home access points densely packed in a small area of a city like London. In order to realistically model service provision in TVWS spectrum in such scenarios, the impact of interference from neighboring homes, the spatial distribution of houses, and the presence of walls and other obstacles within each house need to be considered. Finally, the availability and frequency decomposition of TVWS can vary strongly with location, and this factor also needs to be incorporated into any meaningful feasibility study. The deployment scenario used in the study reported here is 1 km2 of a typical urban area in London (Bayswater) that contains about 5000 houses and office buildings. The house and street layout data for the study was highly detailed, and was imported from a geographic information system (GIS) database. The maximum access point density was derived from available surveys and was fixed at 20 percent; that is, 1000 randomly selected houses were assumed to have a home access point. Each access point is then associated with a client within a maximum distance of 12 m, which is representative of U.K. homes. Propagation of WiFi and TVWS signals within each house and between houses is derived as a function of distance, geometry of houses, and number of walls and floors, using appropriate propagation models.1 The availability of TVWS channels for the scenario studied is derived from the model described earlier. It is then assumed that a home access point queries a centrally managed database for TVWS channel availability via its fixed line connection. The home base station and the associated clients are configured in a master-slave architecture, with the home base station advertising its presence and channel availability via regular beacons, enabling slaves to establish a communication link with the master. The effect of interference from access points and clients in neighboring homes is modeled in terms of the achievable signal-to-interference-and-noise ratio (SINR) at the client, and is derived from the aggregate traffic load of all surrounding client-access point pairs that operate co-channel with the pair under consideration. We considered three different traffic loadings in the study: an interference traffic profile of a persistent 2 Mb/s video stream, an interference traffic profile of a mix of voice, video, and data at 2
Table 1. Performance comparison for 2 Mb/s service requirement (TVWS availability estimated from the database).

Interference traffic profile | 5 GHz | 2.4 GHz | TVWS
2 Mb/s                       | 97%   | 97%     | 99%
6 Mb/s                       | 97%   | 90%     | 97%
Video                        | 97%   | 75%     | 97%
Table 2. Performance comparison for 6 Mb/s service requirement.

Interference traffic profile | 5 GHz | 2.4 GHz | TVWS
2 Mb/s                       | 98%   | 83%     | 98%
6 Mb/s                       | 95%   | 70%     | 70%
Video                        | 93%   | 55%     | 50%
Mb/s, and an interference traffic profile of a mix of voice, video, and data at 6 Mb/s. These traffic profiles are later referred to as Video, 2 Mb/s, and 6 Mb/s, respectively. Simulations were performed of the achievable data rates at the client under the above deployment and interference scenarios, assuming operation in the 2.4 GHz band, the 5 GHz band, and the available TVWS frequencies, using the same random algorithm to assign the available channels to customers' home access points. In the case of the 2.4 and 5 GHz bands, the use of a 2 × 1 antenna diversity technique (IEEE 802.11n) was assumed, while for the TVWS spectral band single antennas were assumed at both access point and client, due to the relatively large antenna size required at UHF. Finally, in order to match the peak data rate of the 2.4 and 5 GHz bands, which have 20 MHz wide channels, three 8 MHz TVWS channels were aggregated. The stochastic nature of the offered traffic, the client distance from the home access point, and the distribution of home networks mean that the results of our study are best represented in terms of probability distributions. Tables 1 and 2 show the percentage of clients that can be served from the access point with a data rate of 2 and 6 Mb/s, respectively, for different interference traffic profiles. It can be seen from these tables that for low traffic loading, the TVWS spectral band outperforms the 2.4 GHz and 5 GHz spectral bands. However, for high traffic loadings, which correspond to applications like wireless HDTV streaming, better performance is achieved using the 5 GHz band. The main reason underlying these results is the subtle interplay between coverage and interference effects. By operating in TVWS spectrum, a larger coverage within each home can be achieved than is possible when using the 2.4 GHz and 5 GHz bands. However, at the same time the interference to neighboring homes is more severe due to better
propagation and penetration characteristics. As a consequence there is an optimal transmit power of around +3 dBm, in contrast to the significantly higher operating points of +10 to +20 dBm for the 2.4 and 5 GHz WiFi bands. Although transmit power is only one of the factors that determine the energy consumption of devices, our finding does point out an important potential advantage of operation in TVWS in terms of energy/battery power saving which, to our knowledge, had previously been overlooked. Overall, our results show that TVWS spectrum on its own is perhaps unable to support future high-data-rate multimedia streaming applications, but is best used in conjunction with either the 2.4 or 5 GHz band as a means of congestion relief or coverage extension. Furthermore, triplet channel bonding in TVWS is required in order to achieve the same base performance rate as in the 2.4 and 5 GHz bands, which may not always be feasible with current access technologies due to the noncontiguous nature of TVWS spectrum (Fig. 1).

Figure 5. Population-weighted complementary cumulative distribution function of TVWS availability (y-axis: fraction of population; x-axis: MHz free, 0–250). Red is when adjacent channels cannot be used.
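As an illustration of the interference model underlying these results, the sketch below computes a client SINR from a wanted signal plus activity-weighted co-channel interferers, in the spirit of the study described above. The path loss exponent, wall penalty, and noise floor are illustrative assumptions, not the values used in BT's simulator.

import math

def rx_dbm(tx_dbm, dist_m, n_walls, pl_exp=3.5, wall_db=5.0, pl_1m=40.0):
    # Simple log-distance path loss with a per-wall penetration penalty.
    return tx_dbm - (pl_1m + 10 * pl_exp * math.log10(max(dist_m, 1.0))
                     + wall_db * n_walls)

def sinr_db(wanted_dbm, interferers, noise_dbm=-95.0):
    # interferers: list of (received power in dBm, traffic activity in [0, 1]).
    mw = lambda dbm: 10 ** (dbm / 10)
    interference_mw = sum(act * mw(p) for p, act in interferers)
    return 10 * math.log10(mw(wanted_dbm) / (mw(noise_dbm) + interference_mw))

# A +3 dBm TVWS access point through one wall, with two active neighbors.
wanted = rx_dbm(3, 8, 1)
neighbors = [(rx_dbm(3, 40, 4), 0.6), (rx_dbm(3, 60, 5), 0.3)]
print(round(sinr_db(wanted, neighbors), 1), "dB")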
HIGH-SPEED BROADBAND WIRELESS ACCESS FROM THE INSIDE-OUT

Wireless LAN (WLAN) operating in the ISM bands and using IEEE 802.11x standards (i.e., standard WiFi technology) is one of the fastest and cheapest broadband wireless access (BWA) systems, and has seen a high growth rate. Examples of deployment include WiFi hotspots such as BT Openzone, municipal WiFi networks, and public WiFi networks such as BT-FON. With FON, residential broadband customers share a portion of their home WiFi broadband bandwidth for outdoor public use. The two traffic streams are
1 At the fine-grained simulation level we used an approach based on dividing the square kilometer area into 1 m2 pixels and propagating signals from one pixel to another according to the underlying topology of houses and the location of walls and floors.
Figure 6. Indoor-to-outdoor broadband wireless access network using home access points operating in TVWS.
kept separate for security reasons, and the residential users' traffic receives higher priority on the uplink and downlink. The outdoor use of such WiFi home networks is currently receiving a high level of interest from telecom operators and customers alike. Smartphones equipped with WiFi can roam onto the WiFi networks from cellular networks, obtaining a higher data rate in most cases. In the United Kingdom, for example, there are currently over 1 million BT-FON subscribers/providers, amounting to 6.3 percent of residential broadband customers, so there is already significant coverage and there is potential for further growth. Furthermore, mobile operators like T-Mobile have begun to look seriously at the use of WiFi indoor and outdoor networks as a means of offloading the exponentially growing mobile data traffic from their highly overcommitted cellular networks. Unfortunately, despite the high density of home access points available for community sharing, the coverage provided by such community WiFi networks is currently rather patchy, limiting the usefulness and commercial success of such networks. This is due to a combination of the relatively stringent regulatory caps on WiFi transmit power levels in the United Kingdom and the rest of Europe, e.g., 20 dBm (100 mW) in the 2.4 GHz band, and the high wall and floor penetration loss suffered by signals in the ISM bands. We performed system studies for several urban and suburban service areas in London and elsewhere in order to investigate whether, by switching operation of the network from the ISM bands to TVWS spectrum, the above coverage limitations could be overcome. The architecture of the resulting network is shown in Fig. 6. Once again our proposed architecture is based on the use of geolocation technology for location determination combined with a database lookup. Each access point is assumed to be connected to one or multiple databases, which provide information on the unused TVWS channels that are available at the location of the access point and on the maximum transmit power levels that could be used in each channel. Furthermore, the use of master-slave technology means that the necessary functionalities for
database lookup and channel selection need to be implemented only in the access points, keeping the complexity and cost of end-user devices to a minimum. Users with a TVWS modem, or dongle, can connect from outside to home access points via a TVWS channel that is periodically advertised via beacons by each access point. For the chosen deployment environments (dense urban, urban, and suburban), a range of deployment densities of BT-FON access points was investigated in the study. Interference effects from neighboring home hubs were modeled for a worst-case traffic profile comprising 2 Mb/s persistent video streaming traffic. Statistics on achievable data rates as a function of the location of the outdoor client were then computed at a 1 m2 grid resolution. Figure 7 summarizes the results of the system studies performed for the dense urban deployment scenario. It shows the effect of switching the operating frequency of access points from 5 GHz to 2.4 GHz and then to the TVWS UHF bands on the achievable broadband (2 Mb/s) coverage of the BT-FON network. We found that, due to interference effects, for each band considered there is an optimal deployment density of access points beyond which the coverage does not further improve, and can even start degrading. From Fig. 7 it can be seen that coverage is very patchy when the system operates at 5 GHz, and some improvement is gained by switching to 2.4 GHz. The most striking result is achieved when home access points switch operation to TVWS where, with only a 20 percent deployment density, blanket 2 Mb/s indoor-outdoor coverage is achieved. Note that this broadband coverage level at a 2 Mb/s data rate is 25 times higher than that achievable with 3G technologies such as HSPA. It is economically much more viable than the broadband coverage that could be offered by 4G cellular technologies due to the relatively low infrastructure and site acquisition costs.
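The master-slave operation described above can be sketched as follows. Everything here is a hypothetical illustration: the Grant fields and the database.lookup() interface are invented for the sketch, since the real database protocols were still being standardized at the time of writing.

from dataclasses import dataclass

@dataclass
class Grant:
    channel: int          # permitted TVWS channel number
    max_eirp_dbm: float   # power cap the database attaches to that channel

class HomeAccessPoint:
    def __init__(self, lat, lon, database):
        self.lat, self.lon = lat, lon
        self.database = database
        self.grant = None

    def refresh_channels(self):
        # Query the geolocation database over the fixed line connection.
        grants = self.database.lookup(self.lat, self.lon)
        # Prefer the channel on which the highest transmit power is allowed.
        self.grant = max(grants, key=lambda g: g.max_eirp_dbm, default=None)

    def beacon(self):
        # Payload advertised periodically so outdoor clients (slaves) can join.
        if self.grant is None:
            return None
        return {"channel": self.grant.channel,
                "max_eirp_dbm": self.grant.max_eirp_dbm}

class StubDatabase:  # stand-in for the centrally managed database
    def lookup(self, lat, lon):
        return [Grant(30, 3.0), Grant(44, 10.0)]

ap = HomeAccessPoint(51.51, -0.19, StubDatabase())
ap.refresh_channels()
print(ap.beacon())  # -> {'channel': 44, 'max_eirp_dbm': 10.0}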
RURAL BROADBAND

In the United Kingdom right now, about 15 percent of households (2.75 million) cannot access broadband (2 Mb/s). These are located mainly, but not exclusively, in rural areas. The problem is apparently not limited to the United Kingdom, as the latest EU statistics show that 30 percent of the EU's rural population has no access to high-speed Internet. The problem is caused mainly by long metallic lines, either between the exchange and the house or in the backhaul to the exchange itself. In some situations, the cost of upgrading the lines or backhaul is very high when considering the relatively few users who would benefit, or the terrain through which the lines pass. The threshold capital cost, beyond which the commercial case disappears for a large operator, is around $2000 per house. Consequently, operators in the United Kingdom and elsewhere are looking seriously into wireless alternatives, and TVWS is a tempting proposition. The topology here is essentially point-to-point or point-to-multipoint, which indicates that WiFi, fixed WiMAX, or TD-LTE are potential options for the air interface,
since CR technology is agnostic to the actual air interface used. Rural broadband using LTE is being pursued by Verizon in the United States and T-Mobile in Germany, and both have secured cleared DTT spectrum in the 700 MHz band, which they plan to use for its delivery. Another option would be unlicensed wireless broadband provision in the ISM bands using a combination of WiFi in the 5 GHz band (for point-to-point wireless backhaul) and WiFi in the 2.4 GHz band (for point-to-multipoint distribution). This option is currently being exploited in the United States, mainly on small scales for local communities, where there are currently around 3000 Wireless Service Providers (WISPs) that use this model. The WISP community model is incompatible with large operators such as BT for two reasons. First, large operators are required by regulation to unbundle, or offer network transport to multiple communications providers (CPs). Therefore, a large operator will build out infrastructure that has the necessary connection points and network management processes for this to be possible, rather than build a community network, which is essentially just one CP. Second, it is not economically viable for a community network to use a large operator for backhaul to the Internet. This is because the CP has to pay for both backhaul (a wholesale function) and Internet provision (a retail function), and a large operator is forbidden by regulation to subsidize one with the other. It may cost a total of £20k per year to connect a community network to the Internet at 8 Mb/s, which can connect perhaps 50 households, and this is expensive compared with, say, a satellite network operator that is not bound by such regulation. TVWS is attractive for use with rural broadband, first because it is free and second because it is fairly stable; the primary users will only rarely change their use of the band in such areas. Since the use is fixed point-to-multipoint, there is also a low probability that other secondary users will cause interference, although of course mechanisms must be in place to make this probability approach zero. The large operator must, though, make provision for when too little TVWS is available. The worst scenario is where an operator begins to provide broadband services using TVWS, and then changes occur with primary users, or other secondary users appear, which means that such services must be reduced or suspended. Such provision could be the reservation of a few channels in each region, which can be used as a fallback in this situation, and the operator would need to pay the regulator for such reservation. This topic is only just starting to be discussed between operators and regulators, and little progress has so far been made. The optimum amount of spectrum to reserve in each region is identified as a further research challenge. In the United States the provision of cost-effective rural broadband in TVWS has been under evaluation for a number of years, and this has been one of the initial motivations for the FCC to open these bands for cognitive access. Furthermore, the IEEE 802.22 family of standards has been developed specifically with rural broadband in mind [12].
Figure 7. Achievable indoor-outdoor broadband coverage using BT-FON access points operating at 5 GHz (top), 2.4 GHz (center), and TVWS (bottom).

Figure 8. Block diagram of the testbed (Internet, router/gateway, TVWS link, WiFi).
The situation in the United Kingdom is different because the TVWS is exclusively in the UHF band, the channels are wider at 8 MHz, and the population density distribution is different. We have investigated the feasibility of rural broadband provision in TVWS in the United Kingdom, and first indications are that it may be technically and commercially feasible. Consequently, BT is setting up TVWS testbeds in Scotland and England to make measurements of coverage, capacity, and efficiency in practical situations. The aim of one BT study was to investigate how many houses could and should be connected using TVWS. Houses that should not be connected using TVWS are those that already have adequate DSL provision; these are generally where the total line length is less than about 3 km. There are very few houses in the United Kingdom that are further than 8 km from an exchange, so the window of opportunity for TVWS in rural broadband is houses that are between 3 and 8 km from an exchange. We call such houses not-spots. Each isolated community is therefore characterized by two parameters: the number of not-spots in the cluster within this distance window, and the distance between the furthermost not-spot and the nearest exchange. The first parameter determines the wireless bandwidth that needs to be available in order to provide 2 Mb/s broadband to customers, while the maximum distance determines the transmit power level required to establish the link. We have used a database of U.K. housing density, together with an internal database of the 2.75 million U.K. not-spots and 5500 BT exchanges, to count the number that can be covered with TVWS. We cannot divulge the actual results, but the number is high enough to be potentially commercially feasible, provided that a satisfactory resolution is obtained to the fallback problem discussed above. Not-spots in rural areas tend to be clustered into groups, where the largest group is around 60 but more typically they are 10 to 30. Assuming that customers' traffic is bursty, and using an overbooking factor of 10, it follows that the required data rate on the downlink for the
largest groups should be around 2 × 60/10 = 12 Mb/s. In the worst case, this would need to be delivered over an 8 km distance via the TVWS link. Our initial estimate indicates that it should be possible to provide such a data rate via a TVWS link in the UHF band, provided the transmit power allowed by the regulator is not less than 4 W. This aligns, coincidentally perhaps, with the 4 W transmit power limit imposed by the FCC on fixed TVWS devices in the United States. However, unlike the FCC, Ofcom has not imposed any maximum transmit power limits on devices that rely on a geolocation database for incumbent detection, but has left the choice of this parameter to be determined by the database. We have constructed a laboratory testbed to assess the air interface performance in respect of transmit powers and bandwidth, and also the MAC layer functions; a basic diagram of the setup is shown in Fig. 8. In the testbed, the TVWS link is formed between two Ubiquiti software-defined radio modules and Yagi antennas. The TVWS frequency used is 762 MHz, transmitted under the authority of a test and development license. We have achieved 6 Mb/s throughput in the 8 MHz channel, but are currently working to improve this through the use of higher-level modulation schemes, multiple-input multiple-output (MIMO), and improved MAC functions, toward the 12 Mb/s we believe is necessary.
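The dimensioning rule used above reduces to a one-line calculation; a sketch, with the overbooking factor of 10 taken from the text:

def required_downlink_mbps(not_spots, service_mbps=2.0, overbooking=10.0):
    # Bursty traffic: cluster size times per-home rate, divided by overbooking.
    return not_spots * service_mbps / overbooking

print(required_downlink_mbps(60))  # 12.0 Mb/s for the largest clusters
print(required_downlink_mbps(30))  # 6.0 Mb/s for a more typical cluster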
CONCLUSION

A regulatory framework for secondary utilization of TV white spaces is well underway in both the United States and the United Kingdom, while important steps in this direction are being taken within the European Union and elsewhere. Using results from our recent research at BT, we have shown in this article that cognitive access to TVWS is a significant new opportunity for operators to provide a range of improved and new wireless services. In addition to the three use cases described here, another important application we are investigating is high-capacity wireless connectivity for future data-intensive smart utility grids and vehicular communication networks. The first use cases that employ TVWS will be fixed point-to-point links such as rural broadband access. This is forecast to occur in the next one to two years. Looking ahead, the use cases will become more mobile and with variable and managed QoS, such as indoor-to-outdoor coverage via super WiFi community networks and possibly femtocells. Mobile network operators are interested in the use of TVWS for cellular extension and rural access. The challenges to be overcome include:
• Quantifying the amount of TVWS spectrum that is available, which is being addressed in some EC part-funded projects like CogEU and QUASAR [9].
• Provision of reliable service and managed mobility and QoS, which is also being addressed in some EC projects such as QoSMOS.
• Agreement across Europe and the United States on regulatory aspects.
• The many overlapping standards currently emerging from the European Telecommunications Standards Institute (ETSI), IEEE, and International Telecommunication Union (ITU), which may lead to fragmentation of the market.
• The growth of the necessary ecosystem so that terminals and equipment are available at reasonable cost.
• CR equipment certification procedures.
• Development of a new value chain including databases and over-the-top services based on location. The FCC is setting up a new advisory group to look into this.
• Optimization of fallback spectrum methods.
• Database structure and etiquette.
• Flexibility of radio devices.
• MAC layer development and its interaction with the databases.
ACKNOWLEDGMENT

The authors thank their colleagues at BT and their partners in the EU FP7 projects QoSMOS and QUASAR for useful discussions.
REFERENCES
[1] J. Mitola III and G. Q. Maguire, "Cognitive Radio: Making Software Defined Radio More Personal," IEEE Pers. Commun., vol. 6, no. 4, Aug. 1999.
[2] A. Wyglynski, M. Nekovee, and T. Hou, Eds., Cognitive Radio Communications and Networks: Principles and Practice, Academic Press, 2009.
[3] FCC, "Second Report and Order in the Matter of Unlicensed Operation in the TV Broadcast Bands (ET Docket No. 04-186), Additional Spectrum for Unlicensed Devices Below 900 MHz and in 3 GHz Band (EC Docket No. 02-380)," Nov. 14, 2008.
[4] FCC, "Second Memorandum Opinion and Order in the Matter of Unlicensed Operation in the TV Broadcast Bands (ET Docket No. 04-186), Additional Spectrum for Unlicensed Devices Below 900 MHz and in 3 GHz Band (EC Docket No. 02-380)," Sept. 23, 2010.
[5] Ofcom, "Statement on Cognitive Access to Interleaved Spectrum," July 1, 2009; http://www.ofcom.org.uk/consult/condocs/cognitive/statement.
[6] Ofcom, "Digital Dividend: Geolocation for Cognitive Access," http://www.ofcom.org.uk/consult/condocs/cogaccess/cogaccess.pdf.
[7] M. Nekovee, "Cognitive Radio Access to TV White Spaces: Spectrum Opportunities, Commercial Applications and Remaining Technology Challenges," Proc. IEEE DySPAN, Singapore, Apr. 2010.
[8] QoSMOS; http://www.ict-qosmos.eu.
[9] QUASAR; http://www.quasarspectrum.eu.
[10] Cognitive Networking Alliance; http://www.cognea.org/.
[11] ECMA Std. 392, "MAC and PHY for Operation in TV White Space," Dec. 2009.
[12] IEEE 802.22, "WG on WRANs (Wireless Regional Area Networks)"; http://www.ieee802.org/22/.
BIOGRAPHIES

MICHAEL FITCH ([email protected]) works in the Research and Technology part of BT Innovation and Design, leading a small research team specializing in physical and systems aspects of wireless communications. He has been with BT since 1989, working in various research and development roles, and currently works on a number of collaborative projects on emerging wireless technologies such as LTE, small cells, and cognitive radio. In addition, he provides consultancy to other parts of BT on wireless matters. His previous experience is with satellite systems and mobile radio systems. He holds a first degree in math and physics and a Ph.D. in satellite communications, and is a member of the IET.

MAZIAR NEKOVEE leads research on cognitive radio and new paradigms for spectrum access at BT, and provides advice to BT's Spectrum Strategy Group. His research focuses on analysis, performance modeling, and algorithm development for complex networked systems. He obtained his M.Sc. in electrical engineering (cum laude) from Delft University of Technology in the Netherlands and his Ph.D. in theoretical physics from the University of Nijmegen, also in the Netherlands. He is a recipient of the prestigious Industry Fellowship from the Royal Society. He is the author of over 60 papers in peer-reviewed journals and conferences, and holds several patents. He has been involved in the management of a number of national, European, and international collaborative projects.
KEITH BRIGGS has a Ph.D. in mathematics from Melbourne University, and has published in the areas of dynamical systems, computational number theory, biomathematics, and statistical mechanics. He has worked for the last 10 years on mathematical and statistical modeling of communications systems. He also publishes in the field of historical linguistics, especially as applied to toponyms.

SANTOSH KAWADE received his M.Sc. in telecommunications engineering from University College London and is currently studying part-time toward a doctorate at University College London. Since joining BT, he has contributed to research toward understanding the fundamental performance limits of wireless networks, radio wave propagation, and interference modeling. He has published a number of academic papers in this area.

RICHARD MACKENZIE received his M.Eng. in electronic engineering from the University of York, United Kingdom, in 2005 and his Ph.D. in electronic and electrical engineering from the University of Leeds, United Kingdom, in 2010. His Ph.D. focused on improving QoS over wireless home networks, with a focus on real-time video. He joined BT Innovate & Design in 2009, where he works as a wireless researcher. His work mainly involves MAC layer analysis and wireless testbed implementations for cognitive radio.
COGNITIVE RADIO NETWORKS (INVITED ARTICLE)
Emerging Cognitive Radio Applications: A Survey Jianfeng Wang, Monisha Ghosh, and Kiran Challapali Philips Research North America
ABSTRACT

Recent developments in spectrum policy and regulatory domains, notably the release of the National Broadband Plan, the publication of final rules for TV white spaces, and the ongoing proceeding for secondary use of the 2360–2400 MHz band for medical body area networks, will allow more flexible and efficient use of spectrum in the future. These important changes open up exciting opportunities for cognitive radio to enable and support a variety of emerging applications, ranging from smart grid, public safety, and broadband cellular to medical applications. This article presents a high-level view on how cognitive radio (primarily from a dynamic spectrum access perspective) would support such applications, the benefits that cognitive radio would bring, and also some challenges that are yet to be resolved. We also illustrate related standardization that uses cognitive radio technologies to support such emerging applications.

INTRODUCTION

Current spectrum allocations are based on a command-and-control philosophy; that is, spectrum is allocated for a particular application (e.g., TV broadcasting), and such allocations do not change over space and time. There have been several important developments in the past few years in the spectrum policy and regulatory domains to accelerate opportunistic uses of spectrum. The most recent of these are the publication of the National Broadband Plan in March 2010 [1], the publication of the final rules for unlicensed devices in the TV bands in September 2010 [2], and the ongoing proceeding for secondary use of the 2360–2400 MHz band for medical body area networks (MBANS) [3]. Cognitive radio (CR) technology plays a significant role in making the best use of scarce spectrum to support the increasing demand for emerging wireless applications, such as TV bands for smart grid, public safety, broadband cellular, and the MBAN band for medical applications. In order to take advantage of these new opportunities, a number of standards (e.g., IEEE 802.22 [4], IEEE 802.11af, ECMA 392 [5], IEEE SCC41, and ETSI RRS [6]) are either in development or have already been completed.

REGULATION

NATIONAL BROADBAND PLAN

The National Broadband Plan (NBP) is a policy document that was the culmination of almost a year's worth of work by the Federal Communications Commission (FCC), with input from industry and government agencies, on how to formulate spectrum policy in order to facilitate broadband usage for the coming years. One of the main recommendations of the NBP is to free up 500 MHz of spectrum for broadband use in the next 10 years, with 300 MHz being made available for mobile use in the next five years. The Plan proposes to achieve this goal in a number of ways: incentive auctions, repacking spectrum, and enabling innovative spectrum access models that take advantage of opportunistic spectrum access and cognitive techniques to better utilize spectrum. The Plan urges the FCC to initiate further proceedings on opportunistic spectrum access beyond the already completed TV white spaces (TVWS) proceedings.
TV WHITE SPACES REGULATION

The major worldwide regulatory agencies involved in developing rules for the unlicensed use of TV white spaces are the FCC in the United States, the Office of Communications (Ofcom) in the United Kingdom, and the Electronic Communications Committee (ECC) of the European Conference of Postal and Telecommunications Administrations (CEPT) in Europe. The FCC released the final rules for "Unlicensed Operation in the TV Broadcast Bands" in September 2010 [2]. This was the culmination of many years of deliberations on the subject, starting with the first NPRM in May 2004, followed by laboratory and field testing of sensing devices through 2007 and 2008, and the second report and order in 2008 [7]. A recent study shows that the opportunity provided by TV white spaces is potentially of the same order (~62 MHz) as the recent release of "beachfront" 700 MHz spectrum for wireless data services [8], while the New America Foundation estimates that 15–40 channels are available in major cities [9]. The main features of the rules as set forth in this order are as follows:
Table 1. TVBD parameters and applications.

                                | Fixed device                                      | Mode II, mode I                          | Sensing only
Channels                        | 2–51 except 3, 4, 37 (non-adjacent channels only) | 21–51 except 37                          | 21–51 except 37
Power limit                     | 4 W                                               | 100 mW                                   | 50 mW
Incumbent protection mechanisms | Database                                          | Database, contact verification (mode I)  | Sensing
Potential applications          | Smart grid (network gateway, smart meters), cellular backhaul (BS, relay station), MBMS (BS) | Public safety, femtocell, MBMS (CE) | Public safety, femtocell
• TV band devices (TVBDs) are divided into two categories: fixed and personal/portable. Fixed TVBDs operate from a known, fixed location and can use a total transmit power of up to 4 W effective isotropic radiated power (EIRP), with a power spectral density (PSD) of 16.7 mW/100 kHz. They are required either to have a geolocation capability or to be professionally installed in a specified fixed location, and to have the capability to retrieve a list of available channels from an authorized database. Fixed TVBDs can only operate on channels that are not adjacent to an incumbent TV signal, in any channel between 2 and 51 except channels 3, 4, and 37. Personal/portable devices are restricted to channels 21–51 (except channel 37) and are allowed a maximum EIRP of 100 mW with a PSD of 1.67 mW/100 kHz on non-adjacent channels and 40 mW with a PSD of 0.7 mW/100 kHz on adjacent channels, and are further divided into two types: mode I and mode II. Mode I devices do not need geolocation capability or access to a database. Mode II devices must have geolocation capability and the means to access a database for a list of available channels.
• Sensing was a mandatory feature to protect incumbents in the previous ruling [7] but is now an optional feature in fixed, mode I, and mode II devices. Incumbent protection will be through the use of authorized databases, which have to guarantee the security and accuracy of all communications between them and fixed or mode II devices. Geolocation in mode II devices has to be accurate to within ±50 m. Since sensing is optional, in order to maintain up-to-date channel availability information, mode II devices need to check their location every 60 s and, if the location changes by more than 100 m, have to access the database for an updated channel list. In order to facilitate mobility, mode II devices are allowed to download channels for a number of locations within an area and use a channel that is available within that area without the need to access the database, as long as they do not move outside the area. In addition, a new mechanism is defined in the rules to ensure that mode I devices that do not have geolocation are within the receiving range of the fixed or mode II device from which they obtained the list of channels on which they can operate. This is the "contact verification" signal, which needs to be received by the mode I device every 60 s, or else
it will have to cease operation and reinitiate contact with a fixed or mode II device.
• A sensing-only device is a personal/portable TVBD that uses spectrum sensing only to determine a list of available channels. Sensing-only devices may transmit on any available channels in the frequency bands 512–608 MHz (TV channels 21–36) and 614–698 MHz (TV channels 38–51), and are allowed a maximum transmit power of 50 mW with a PSD of 0.83 mW/100 kHz on non-adjacent channels and 40 mW with a PSD of 0.7 mW/100 kHz on adjacent channels. In addition, sensing-only devices must demonstrate with an extremely high degree of confidence that they will not cause harmful interference to incumbent radio services. The required detection thresholds are: ATSC digital TV signals, –114 dBm averaged over a 6 MHz bandwidth; NTSC analog TV signals, –114 dBm averaged over a 100 kHz bandwidth; and low-power auxiliary signals, including wireless microphones, –107 dBm averaged over a 200 kHz bandwidth. A TVBD may start operating on a TV channel if no TV, wireless microphone, or other low-power auxiliary device signals above the detection threshold are detected within a minimum time interval of 30 s. A TVBD must perform in-service monitoring of an operating channel at least once every 60 s. After a TV, wireless microphone, or other low-power auxiliary device signal is detected on a TVBD operating channel, all transmissions by the TVBD must cease within two seconds.
• Safe harbor channels for wireless microphone usage are defined in all markets to be the first available channel on either side of channel 37. TVBDs cannot operate on these channels. In addition, licensed and unlicensed wireless microphone users can register in the database if they can demonstrate that they require adequate protection from interference.
Table 1 summarizes the various parameters and potential applications of TVBDs enabled by the U.S. TVWS rules. Meanwhile, Ofcom, the regulatory body in the United Kingdom, has also made significant progress in developing regulations for the TV white spaces, with a first consultation released on February 16, 2009, and a further statement in July 2009 [10]. The detailed rules have yet to be released, but first indications are that TVBDs will require either sensing or geolocation/database access.
The sensing levels being proposed for sensing-only devices are –120 dBm for digital TV and –126 dBm for wireless microphones. The ECC has just begun working on cognitive radio in the TV bands within its newly created group SE 43 [11], which is tasked with defining the technical and operational requirements of operating in the TV white spaces. Draft ECC Report 159 [12] was released on September 30, 2010 for public consultation. This report will be used as the starting point for regulatory activities within the ECC.
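The FCC mode II timing and movement rules described above amount to a small amount of control logic; the sketch below captures them. The geolocate and query_database callables are placeholders, not a real FCC database API.

import math

class ModeIIDevice:
    CHECK_PERIOD_S = 60   # location must be re-checked every 60 s
    MOVE_LIMIT_M = 100    # moving further than this forces a database re-query

    def __init__(self, geolocate, query_database):
        self.geolocate = geolocate            # () -> (x_m, y_m), +/-50 m accuracy
        self.query_database = query_database  # (x, y) -> list of channel numbers
        self.anchor = geolocate()
        self.channels = query_database(*self.anchor)

    def periodic_check(self):
        # Called once per CHECK_PERIOD_S by the device's scheduler.
        x, y = self.geolocate()
        if math.hypot(x - self.anchor[0], y - self.anchor[1]) > self.MOVE_LIMIT_M:
            self.anchor = (x, y)
            self.channels = self.query_database(x, y)

dev = ModeIIDevice(lambda: (0.0, 0.0), lambda x, y: [21, 27, 39])
dev.periodic_check()
print(dev.channels)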
MBANS REGULATORY ACTIVITIES IN THE US
The proposal to allocate the frequency band 2360–2400 MHz for MBANS on a secondary basis was initially made in the United States by GE in 2007 [13], followed by an NPRM issued by the FCC in 2009 [3]. The principal incumbent in this band in the United States is Aeronautical Mobile Telemetry (AMT), which uses 2360–2390 MHz, and there are a number of proposals under consideration by the FCC that would allow MBAN devices as secondary users to coexist with the primary AMT user, without either creating interference to or being subject to interference from AMT services. These include exclusion and coordination zones, as well as additional interference mitigation mechanisms such as Listen-Before-Transmit and Adaptive Frequency Selection (LBT/AFS). Since the proposed transmit power for MBANS is quite low (1 mW in 2360–2390 MHz and 20 mW in 2390–2400 MHz), simulations have shown that these techniques would work well to protect AMT from interference while also maintaining the quality of service required for the MBANS application [14, 15]. Thus, spectrum utilization is maximized by allowing opportunistic use of the underused frequency band 2360–2400 MHz for MBANS applications, instead of allocating new spectrum exclusively for this purpose. This proceeding is still under consideration by the FCC, and final rules are yet to be published. In Europe, activities have been restricted to Low Power Active Medical Implants (LP-AMI). Draft ECC Report 149 [16] considers the feasibility of the frequency bands 2360–2400 MHz, 2400–2483.5 MHz, 2483.5–2500 MHz, and 2700–3400 MHz for LP-AMI, and concludes with the recommendation that the frequency band 2483.5–2500 MHz is the most promising candidate band for this purpose, based on analysis of interference between LP-AMI and existing and proposed future incumbents in these bands. External medical telemetry, or MBANs, is not considered in this report; however, proposals have been made to initiate an activity to explore the use of the 2360–2400 MHz frequency band for MBANs in Europe in order to harmonize with the anticipated regulation in the United States.
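As a rough illustration of the LBT/AFS mitigation mentioned above, the sketch below listens before transmitting and adaptively falls back to the quietest channel. The busy threshold and the measure_dbm callable are assumptions for the sketch, not values from the FCC proceeding.

def pick_channel(channels, measure_dbm, busy_threshold_dbm=-85.0):
    # Listen on each candidate; take the first idle channel (LBT),
    # otherwise adaptively select the least-interfered one (AFS).
    levels = {ch: measure_dbm(ch) for ch in channels}
    idle = [ch for ch, lvl in levels.items() if lvl < busy_threshold_dbm]
    return idle[0] if idle else min(levels, key=levels.get)

# Toy usage with canned measurements: channel 2 is the only idle one.
readings = {1: -70.0, 2: -95.0, 3: -80.0}
print(pick_channel([1, 2, 3], readings.get))  # -> 2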
SMART GRID NETWORKS

Transformation of the 20th-century power grid into a smart grid is being promoted by many governments as a way of addressing energy independence and sustainability, global warming, and emergency resilience issues [17, 18]. The smart grid comprises three high-level layers, from an architectural perspective: the physical power
layer (generation and distribution), the communication networking layer, and the applications layer (applications and services, e.g., advanced metering, demand response, and grid management). A smart grid transforms the way power is generated, delivered, consumed, and billed. Adding intelligence throughout the newly networked grid increases grid reliability, improves demand handling and responsiveness, increases efficiency, better harnesses and integrates renewable/distributed energy sources, and potentially reduces costs for the provider and consumers. Sufficient access to communication facilities is critically important to the success of smart grids. A smart grid network would typically consist of three segments [17]:
• The home/building area networks (HANs) that connect smart meters with on-premise appliances, plug-in electric vehicles, and distributed renewable sources (e.g., solar panels)
• The advanced metering infrastructure (AMI) or field area networks (FANs) that carry information between premises (via smart meters) and a network gateway (or aggregation point), which will often be a power substation, a utility pole-mounted device, or a communications tower
• The wide area networks (WANs) that serve as the backbone for communication between network gateways (or aggregation points) and the utility data center
While HANs can use WiFi, Zigbee, and HomePlug, and WANs can leverage the fiber-based IP backbone or even the broadband cellular network infrastructure, appropriate technologies for AMI/FANs are still under consideration. The dimension of an AMI/FAN could range from a few hundred meters to a few kilometers or more (e.g., in rural areas). Bandwidth requirements are estimated in the 10–100 kb/s range per device in the home or office building [17]. This may scale up quickly with the number of devices on a premise if appliance-level data points, as opposed to whole-home/building data, are transmitted to the network gateway (a rough sizing example follows this paragraph). Power line communication (PLC) is used in some AMI but has bandwidth and scalability problems. Moreover, the safety issues associated with ground fault currents are of concern as well. Some wireless meter readers currently use the 900 MHz unlicensed band. This is not without complications, however, since this band will soon become crowded due to the growth of unlicensed devices, including smart meters. IEEE 802.15.4g, the Smart Utility Networks (SUN) Task Group, is currently working to create a physical layer (PHY) amendment for AMI/FAN using license-exempt frequency bands such as 700 MHz–1 GHz and the 2.4 GHz band. It remains to be seen how 802.15.4g handles interference, which is common to unlicensed devices operating in these bands. The cellular network is an alternative for AMI/FAN as well. However, the investment and operation costs could be high. Moreover, cellular networks themselves face bandwidth challenges as cellular data traffic grows dramatically year by year. Cellular networks also have coverage issues in certain places (e.g., rural areas).
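A back-of-envelope sketch for the AMI/FAN sizing figures quoted above: the per-premise rate and the per-appliance multiplier are the only inputs, and the example values below are assumptions chosen within the 10–100 kb/s range given in the text.

def fan_load_mbps(premises, kbps_per_premise=10.0, points_per_premise=1):
    # Aggregate offered load at the network gateway, in Mb/s.
    return premises * kbps_per_premise * points_per_premise / 1000.0

print(fan_load_mbps(1000, 10))      # 10 Mb/s with whole-home reporting
print(fan_load_mbps(1000, 10, 20))  # 200 Mb/s if 20 appliance-level points report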
Figure 1. Smart grid networks: a CR-based wide area AMI/FAN connecting smart meters (e.g., fixed TVBDs) and HANs in homes to a network gateway (e.g., fixed TVBD), which reaches a spectrum database over the WAN.
Cognitive-radio-based AMI/FANs may offer many advantages, such as bandwidth, distance, and cost, compared with other wireline/wireless technologies in certain markets. Figure 1 illustrates a CR-based wide area AMI/FAN. In this case, the network gateway and smart meters are equipped with CR and dynamically utilize unused/underutilized spectrum to communicate with each other directly, or via mesh networking, over a wide area with minimal or no infrastructure. The network gateway connects with a spectrum database over a WAN and serves as the controller to determine which channel(s) to use for the AMI/FAN, based on the location and transmission power needed for smart meters. Taking TVWS as an example, since network gateways and smart meters are both fixed, they can operate in the fixed mode and use transmission power up to 4 W EIRP. With the high transmission power and superior TV band propagation characteristics, the network gateway may reach all the smart meters with one or two hops (e.g., covering an entire town). In rural areas, available TVWS channels could be abundant, so channel availability would not be an issue. There are several other standardization groups currently working on the incorporation of cognitive radio technologies to utilize TVWS to support applications such as smart grid networks, particularly AMI/FANs. Within the IEEE, the following groups are developing standards for TVWS: the IEEE 802.22 Working Group is nearing completion of the standard for TVWS-based wireless regional area networks for ranges up to 10–100 km, which could be used for large-scale smart grid networks; an IEEE 802.15 study group (SG) has recently been created to investigate the use of TVWS; and IEEE 802.11af is spearheading the development of an IEEE 802.11 amendment for TVWS operation for WLANs. Like other unlicensed devices, CR-enabled AMI/FAN devices are not immune from interference or congestion, especially if they are heterogeneous and not coordinated with each other. This may introduce issues such as reliability and delay, and limit the applicability of unlicensed devices for more critical grid control or real-time
smart grid applications. CR-enabled AMI/FANs should go beyond just dynamic spectrum access and develop self-coexistence mechanisms to coordinate spectrum usage, and may even prioritize spectrum use according to the class of smart grid traffic (e.g., real-time vs. non-real-time, emergency report vs. demand response), as sketched below. The IEEE 802.19.1 Working Group is currently working on developing a standard for wireless coexistence in the TVWS and may help mitigate interference issues among CR-based AMI/FANs. Furthermore, CR-enabled AMI/FANs should also consider how to interoperate with other wireless technologies, such as wireless cellular networks, in order to make the smart grid more resilient, scalable, accessible, and of better quality.
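A minimal sketch of the traffic-class prioritization suggested above, using a priority queue in which emergency reports pre-empt everything else; the class ordering is our assumption for illustration.

import heapq

PRIORITY = {"emergency": 0, "real_time": 1, "demand_response": 2, "bulk": 3}

class GridScheduler:
    def __init__(self):
        self._queue, self._seq = [], 0

    def submit(self, traffic_class, payload):
        heapq.heappush(self._queue, (PRIORITY[traffic_class], self._seq, payload))
        self._seq += 1  # tiebreaker keeps FIFO order within a class

    def next_transmission(self):
        return heapq.heappop(self._queue)[2] if self._queue else None

sched = GridScheduler()
sched.submit("bulk", "meter history upload")
sched.submit("emergency", "outage report")
print(sched.next_transmission())  # -> "outage report"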
PUBLIC SAFETY NETWORKS Wireless communications are extensively used by emergency responders (e.g., police, fire, and emergency medical services) to prevent or respond to incidents, and by citizens to quickly access emergency services. Public safety workers are increasingly being equipped with wireless laptops, handheld computers, and mobile video cameras to improve their efficiency, visibility, and ability to instantly collaborate with central command, coworkers, and other agencies. The desired wireless services for public safety extend from voice to messaging, email, web browsing, database access, picture transfer, video streaming, and other wideband services. Video surveillance cameras and sensors are becoming important tools to extend the eyes and ears of public safety agencies. Correspondingly, data rates, reliability, and delay requirements vary from service to service. On the other hand, the radio frequencies allocated for public safety use [19] have become highly congested in many, especially urban, areas [20]. Moreover, first responders from different jurisdictions and agencies often cannot communicate during emergencies. Interoperability is hampered by the use of multiple frequency bands, incompatible radio equipment, and a lack of standardization.
Figure 2. Public safety networks: a CR-based MANET of communication vehicles and devices (e.g., mode I/II TVBDs) supporting emergency 911, EMS, and helicopter operations, linked via an access point (e.g., fixed TVBD) and a WAN to a spectrum coordinator.

In coping with the above challenges, the U.S. Department of Homeland Security (DHS) released its first National Emergency Communications Plan (NECP) in July 2008. The more recently released National Broadband Plan [1] clearly reflects the effort to promote public safety wireless broadband communications. The recommendations include creating a public safety broadband network, creating an administrative system that ensures access to sufficient capacity on a day-to-day and emergency basis, and ensuring there is a mechanism in place to promote interoperability. Cognitive radio was identified as an emerging technology to increase the efficiency and effectiveness of spectrum usage in both the NECP report and the National Broadband Plan. With CR, public safety users can use additional spectrum, such as license-exempt TVWS, for daily operation from location to location and time to time. With appropriate spectrum sharing partnerships with commercial operators, public safety workers can also access licensed spectrum and/or commercial networks. For example, the public safety community could roam on commercial networks in 700 MHz and potentially other bands, both in areas where public safety broadband wireless networks are unavailable and where there is currently an operating public safety network but more capacity is required to respond effectively to an emergency. Figure 2 illustrates public safety communications with the incorporation of CR networking technologies. In this case, location-aware and/or sensing-capable CR devices, together with the spectrum coordinator in the back office, respond to the emergency and coordinate with users (including primary and secondary users)
in/around the incident area to ensure the emergency responders have sufficient capacity and means for communications in the field and to/from the infrastructure. In addition, CR can improve device interoperability through spectrum agility and interface adaptability, or a network of multiple networks. CR devices can communicate directly with each other by switching to a common interface and frequency. Furthermore, with the help of multi-interface or software-defined radio (SDR), CR can serve as the facilitator of communications for other devices that may operate in different bands and/or have incompatible wireless interfaces. As illustrated in Fig. 2, such CR devices (communication facilitators) can be located in a few powerful emergency responders' vehicles and wireless access points. This lifts the burden off the handheld devices of each having CR capability, mitigating the issue that different emergency responders may use different radios today, and very likely in the future as well. It remains to be seen how CR technologies will support priority delivery and routing of content through their own networks as well as public networks, thus protecting time-sensitive life-saving information from loss or delay due to network congestion. This goes beyond spectrum awareness to content awareness, from the PHY to the application layer. Standardization remains key to the success of CR. The ECMA 392 standard is the first international standard that specifies PHY and medium access control (MAC) layers to enable personal/portable devices to operate in TVWS. While ECMA 392 is not designed specifically for public safety, it may be suitable for the following reasons: ECMA 392 supports dynamic channel use by using both geolocation-based databases and sensing, and can be adapted to comply with local spectrum regulations. Compared to other existing standards, ECMA 392 supports not only flexible ad hoc networking but also quality of service (QoS), which is required for on-field emergency communications.
CELLULAR NETWORKS

The use of cellular networks has undergone dramatic changes in recent years, with consumers expecting to be always connected, anywhere and anytime. The introduction of smartphones, the popularity of social networks, growing media sites such as YouTube, Hulu, and Flickr, and the introduction of new devices such as e-readers have all added to the already high and growing use of cellular networks for conventional data services such as email and web browsing. This trend is also identified in the FCC's visionary National Broadband Plan [1]. This presents both an opportunity and a challenge for cellular operators. The opportunity lies in the increased average revenue per user from added data services. The challenge is that in certain geographical areas cellular networks are overloaded, due partly to the limited spectrum resources owned by the cellular operator. Recent analysis [21] suggests that the broadband spectrum deficit is likely to approach 300 MHz by 2014, and that
making available additional spectrum for mobile broadband would create value in excess of $100 billion over the next five years through avoidance of unnecessary costs. With the FCC's TVWS ruling, new spectrum becomes available to cellular operators. In the long term, television band spectrum that is currently not described as white space may also become available to cellular operators, as discussed in the National Broadband Plan. Specifically, the plan discusses the possibility for current license holders of television spectrum to voluntarily auction their licenses in return for part of the proceeds from the auction. The plan envisions that this newly freed spectrum could be used for cellular broadband applications (hence the name of the plan). Many papers have investigated the application of spectrum sensing or spectrum sharing in cellular networks [6, 22, 23]. Figure 3 illustrates how cognitive radio technologies can augment next-generation cellular networks such as LTE and WiMAX to dynamically use this newly available spectrum in either the access or backhaul parts of their networks. A spectrum coordinator can be added in the non-access stratum (NAS) to allow cellular networks to dynamically lease spectrum from spectrum markets and/or identify secondary license-exempt spectrum opportunities to meet the cellular traffic demand for a given location and time period. The base stations (including relay stations) configure channels to operate according to the instructions of the spectrum coordinator and aggregate the spectrum for use.

Figure 3. Cellular networks.

For access network applications, two use cases can be envisioned. The first is hotspots, such as stadiums and airports, where a large number of users congregate at the same time. Take the example of a stadium: users increasingly have phones equipped with cameras that can capture pictures or videos of events at the game and upload them to media sites or send them to their friends. Such picture and video data puts enormous strain on the cellular network; according to a Cisco study, 60 percent of growth is expected to come from such picture and video data. Today some of this data can be offloaded to ISM band WiFi networks. However, due to the large amount of data generated in a small area (a "hotspot"), both cellular networks and ISM band WiFi networks are likely to be overloaded. If this data can be offloaded to additional spectrum, such as TVWS, the cellular network can then be used for voice applications in a more reliable fashion, benefiting both the user and the cellular operator. The second access network application is similar to a femtocell. Today several cellular operators sell a mini cell tower (which looks like a WiFi access point) that consumers may buy and install in their homes. Typical femtocell users are those who have bad coverage in certain parts of their homes, such as basements. These femtocell devices operate on the same frequencies as those of cellular operators. However, these femtocell devices have several issues. First, because femtocell devices and cellular networks operate on the same frequency, the quality of the network suffers when the two networks interfere with each other. Second, the
coverage of these devices is limited. TV white space radio coverage is significantly better due to more favorable propagation characteristics; in addition, there is no interference between the femtocell and the main cell. A somewhat different issue from the data overload or spotty coverage discussed above can also be noted with cellular networks. Rural areas (more precisely, areas with low population density) are known to have poor coverage. Cellular operators have the rights to use their spectrum nationwide; however, they choose not to deploy their networks in rural areas. The reason is that a significant part of a cellular operator's costs is infrastructure cost, which cannot be recovered in rural areas due to the lack of a sufficient number of subscribers in a given area. With white space spectrum, for example, being made available for unlicensed use, cellular operators can use it for backhaul, connecting their cell towers to their backbone networks, thus reducing labor-intensive backhaul cable installation and providing coverage to more customers in unserved and underserved areas. Some design considerations need to be kept in mind when using additional spectrum, given that the transmission requirements associated with the additional spectrum could differ significantly from those of the primary cellular spectrum. Take TVWS as an example. The FCC rules discussed above put certain restrictions on different
device types. For data offloading between base stations and CPE, base stations would operate in fixed mode, while CPE can only operate in mode I. The PSD and strict emission mask requirements may restrict mode I personal/portable devices for uplink transmission. Therefore, for mode I devices, a class of receiver-only white space devices might easily be possible in the near term, enabling broadcast-type or mainly downlink applications with minimal return-channel interactivity over cellular or another return channel. However, the economic viability of such an application remains to be seen. On the other hand, the backhaul scenario discussed above will have fewer issues.
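As a rough illustration of how these per-device-type restrictions might be encoded, the sketch below maps TVBD device modes to the operations permitted in the scenarios above. The mode names follow the FCC terminology quoted in this article, but the permission table itself is a simplified assumption for illustration, not a restatement of the rules.

```python
# Simplified, assumed permission table for TV band devices (TVBDs),
# loosely following the fixed / mode II / mode I terminology used above.
PERMISSIONS = {
    "fixed":   {"downlink", "uplink", "backhaul"},
    "mode_ii": {"downlink", "uplink"},   # portable, geolocation-capable
    "mode_i":  {"downlink"},             # portable; uplink assumed restricted
                                         # by PSD/emission mask limits
}

def may_transmit(device_mode: str, operation: str) -> bool:
    """Check whether a TVBD of the given mode may perform an operation."""
    return operation in PERMISSIONS.get(device_mode, set())

# Example: a mode I handset receiving an offloaded downlink stream,
# with the return channel carried over cellular instead of TVWS.
assert may_transmit("mode_i", "downlink")
assert not may_transmit("mode_i", "uplink")
```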
WIRELESS MEDICAL NETWORKS

In recent years there has been increasing interest in implementing ubiquitous monitoring of patients in hospitals for vital signs such as temperature, blood pressure, blood oxygen, and electrocardiogram (ECG). Normally these vitals are monitored by on-body sensors connected by wires to a bedside monitor. The MBAN is a promising solution for eliminating these wires, allowing sensors to reliably and inexpensively collect multiple parameters simultaneously and relay the monitoring information wirelessly so that clinicians can respond rapidly [24]. The introduction of MBANs for wireless patient monitoring is an essential component of improving patient outcomes and lowering healthcare costs. Through low-cost wireless devices, universal patient monitoring can be extended to most if not all patients in many hospitals. With such ubiquitous monitoring, changes in a patient's condition can be recognized at an early stage and appropriate action taken. By eliminating wires and their management, MBANs reduce the associated risks of infection.
Additionally, MBANs would increase patient comfort and mobility, improve the effectiveness of caregivers, and improve the quality of medical decision making. Patient mobility is an important factor in speeding up patient recovery. Quality of service is a key requirement for MBANs, hence the importance of having a relatively clean and less crowded spectrum band. Today the MedRadio and WMTS bands are used in many medical applications, but their bandwidth is limited and cannot meet the growing need [24, 25]. The 2.4 GHz industrial, scientific, and medical (ISM) band is not suitable for life-critical medical applications due to interference and congestion from IT wireless networks in hospitals. By allocating the 2360–2400 MHz band to MBANs on a secondary basis, QoS for these life-critical monitoring applications can be better ensured. Moreover, the 2360–2400 MHz band is immediately adjacent to the 2400 MHz band, for which many devices exist today that could easily be reused for MBANs, such as IEEE 802.15.4 radios. This would lead to low-cost implementations due to economies of scale, and ultimately to wider deployment of MBANs and hence improvement in patient care. MBAN communication will be limited to the transmission of data (voice is excluded) used for monitoring, diagnosing, or treating patients. MBAN operation is permitted by either healthcare professionals or authorized personnel under license by rule. It is proposed that the 2360–2400 MHz frequency band be divided into two bands: 2360–2390 MHz (band I) and 2390–2400 MHz (band II). In the 2360–2390 MHz band, MBAN operation is limited to indoor use only at healthcare facilities that are outside the exclusion zones of AMT services. In the 2390–2400 MHz band, MBAN operation is permitted everywhere: in all hospitals, in homes, and in mobile ambulances. There are a number of mechanisms for MBAN devices to access spectrum on a secondary basis while protecting incumbents and providing a safe medical implementation. An unrestricted contention-based protocol such as LBT is proposed for channel access. The maximum emission bandwidth of MBAN devices is proposed to be 5 MHz. The maximum transmit power is not to exceed the lower of 1 mW and 10 log B dBm (where B is the 20 dB bandwidth in megahertz) in the 2360–2390 MHz band, and 20 mW in the 2390–2400 MHz band. The maximum aggregate duty cycle of an MBAN is not to exceed 25 percent. A geographical protection zone, along with an electronic key (e-key) MBAN device control mechanism, is further used to limit MBAN transmissions. E-key device control ensures that MBAN devices can access the 2360–2390 MHz frequency band only when they are within the confines of a hospital facility that is outside the protection zone of AMT sites.

Figure 4. Medical body area networks.

Figure 4 illustrates both in-hospital and out-of-hospital solutions for using 2360–2390 MHz. Any hospital that plans to use the AMT spectrum for an MBAN has to register with an MBAN coordinator. The MBAN coordinator determines whether a registered hospital is within the protection zones of AMT sites (with possible coordination with primary users). If a hospital is outside the protection zones, the MBAN coordinator
will issue an e-key specifically for that hospital to enable MBAN devices within that hospital to access the AMT spectrum. Without a valid e-key, MBAN devices can by default only use the 2390–2400 MHz band. The distribution of e-keys to MBAN devices connected to the hospital IT network can be done automatically through either wired or wireless links. MBAN devices must have a means to automatically prevent transmissions in the 2360–2390 MHz AMT band when they go outdoors. Once a sensor in an MBAN loses its connection to its hub device, it stops transmitting within the 2360–2390 MHz AMT spectrum or transitions to the 2390–2400 MHz band. The 2390–2400 MHz band can be used anywhere without restriction, and hence without an e-key. Simulations have shown that these technologies would work well to protect AMT from interference while also maintaining the QoS required for MBAN applications [13, 14]. The IEEE has been working on MBAN standardization. In addition to ongoing activities in IEEE 802.15.6 on BANs, 802.15 Task Group 4j was started in December 2010 to specifically develop standards for MBANs in the 2360–2400 MHz band by leveraging the existing IEEE 802.15.4 standard.
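The band I power limit above is straightforward to compute for a given emission bandwidth. The following sketch is a worked example of the proposed rule as stated in this article, not an implementation of any published standard; the function name is ours. It evaluates the "lower of 1 mW and 10 log B dBm" constraint:

```python
import math

def max_power_dbm(bandwidth_mhz: float, band: str) -> float:
    """Proposed MBAN transmit power limit.

    Band I (2360-2390 MHz): lower of 1 mW (0 dBm) and 10*log10(B) dBm,
    where B is the 20 dB emission bandwidth in MHz.
    Band II (2390-2400 MHz): 20 mW (~13 dBm).
    """
    if band == "I":
        return min(0.0, 10 * math.log10(bandwidth_mhz))
    elif band == "II":
        return 10 * math.log10(20.0)  # 20 mW expressed in dBm
    raise ValueError("band must be 'I' or 'II'")

# A 5 MHz emission: 10*log10(5) ~= 7 dBm, so the 1 mW (0 dBm) cap binds.
print(max_power_dbm(5.0, "I"))    # -> 0.0 dBm (1 mW)
# A 0.3 MHz emission: 10*log10(0.3) ~= -5.2 dBm, below the 1 mW cap.
print(max_power_dbm(0.3, "I"))    # -> about -5.2 dBm
print(max_power_dbm(5.0, "II"))   # -> about 13 dBm (20 mW)
```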
CONCLUSION

Many milestones, both regulatory and technical, have been reached in opening spectrum for more flexible and efficient use, and this trend will continue. Cognitive radio technology plays a significant role in making the best use of scarce spectrum to support fast growing demand for wireless applications, ranging from smart grid and public safety to broadband cellular and medical applications. Standards development organizations (SDOs) have begun to develop standards to take advantage of the opportunities. However, challenges remain, since CR-enabled networks have to coexist with primary as well as secondary users and need to mitigate interference in such a way that they can better support such applications from end to end.
REFERENCES
[1] Connecting America: The National Broadband Plan, http://download.broadband.gov/plan/national-broadband-plan.pdf.
[2] Unlicensed Operations in the TV Broadcast Bands, Second Memorandum Opinion and Order, FCC 10-174, Sept. 23, 2010.
[3] Amendment of the Commission's Rules to Provide Spectrum for the Operation of Medical Body Area Networks, Notice of Proposed Rulemaking, FCC, ET Docket no. 08-59.
[4] IEEE 802.22 WG on WRANs (Wireless Regional Area Networks), http://www.ieee802.org/22/.
[5] ECMA 392: MAC and PHY for Operation in TV White Space, 1st ed., Dec. 2009.
[6] M. Mueck et al., "ETSI Reconfigurable Radio Systems: Status and Future Directions on Software Defined Radio and Cognitive Radio Standards," IEEE Commun. Mag., vol. 48, Sept. 2010, pp. 78–86.
[7] Unlicensed Operation in the TV Broadcast Bands, Second Report and Order, FCC 08-260, Nov. 14, 2008.
[8] K. Harrison, S. M. Mishra, and A. Sahai, "How Much White-Space Capacity is There?" 2010 IEEE Symp. New Frontiers in Dynamic Spectrum, Singapore, 6–9 Apr. 2010.
[9] B. Scott and M. Calabrese, "Measuring the TV 'White Space' Available for Unlicensed Wireless Broadband," New America Foundation, Tech. Rep., Jan. 2006.
[10] Digital Dividend: Cognitive Access, http://www.ofcom.org.uk/consult/condocs/cognitive.
[11] SE 43: Cognitive Radio Systems in White Spaces, http://www.cept.org/0B322E6B-375D-4B8F-868B3F9E5153CF72.W5Doc?frames=no&.
[12] Draft ECC Report 159, "Technical and Operational Requirements for the Possible Operation of Cognitive Radio Systems in the 'White Spaces' of the Frequency Band 470–790 MHz," http://www.ero.dk/D9634A59-1F13-40D1-91E9-DAE6468ED66C?frames=no&.
[13] Ex-parte Comments of GE Healthcare in Docket 06-135, http://fjallfoss.fcc.gov/ecfs/document/view?id=6519820996.
[14] Reply Comments of Philips Healthcare Systems in Docket 08-59, http://fjallfoss.fcc.gov/ecfs/document/view?id=7020244837.
[15] Reply Comments of GE Healthcare in Docket 08-59, http://fjallfoss.fcc.gov/ecfs/document/view?id=7020244842.
[16] Analysis on Compatibility of Low-Power Active Medical Implant (LP-AMI) Applications Within the Frequency Range 2360–3400 MHz, in Particular for the Band 2483.5–2500 MHz, with Incumbent Services, http://www.ero.dk/0FFA3C12-E787-4868-9D02-CC9CA6D5F335?frames=no&.
[17] DOE, Communications Requirements of Smart Grid Technologies, report, Oct. 5, 2010.
[18] D. J. Leeds, The Smart Grid in 2010: Market Segments, Applications and Industry Players, GTM Research, July 2009.
[19] T. L. Doumi, "Spectrum Considerations for Public Safety in the United States," IEEE Commun. Mag., vol. 44, no. 1, Jan. 2006, pp. 30–37.
[20] L. E. Miller, "Wireless Technologies and the SAFECOM SoR for Public Safety Communications," NIST report, 2005.
[21] Mobile Broadband: The Benefits of Additional Spectrum, FCC, OBI tech. paper no. 6, Oct. 2010.
[22] T. Kamakaris, M. M. Buddhikot, and R. Iyer, "A Case for Coordinated Dynamic Spectrum Access in Cellular Networks," 1st IEEE Int'l. Symp. New Frontiers in Dynamic Spectrum Access Networks, Baltimore, MD, pp. 289–98.
[23] I. F. Akyildiz et al., "A Survey on Spectrum Management in Cognitive Radio Networks," IEEE Commun. Mag., vol. 46, no. 4, Apr. 2008, pp. 40–48.
[24] M. Patel and J. Wang, "Applications, Challenges, and Prospective in Emerging Body Area Networking Technologies," IEEE Wireless Commun., vol. 17, no. 1, Feb. 2010, pp. 80–88.
[25] B. Zhen et al., "Frequency Band Consideration of SG-MBAN," IEEE 802.15-MBAN-07-0640-00, Mar. 2007.
BIOGRAPHIES

JIANFENG WANG ([email protected]) is a senior member of research staff at Philips Research North America. He received his Ph.D. in electrical and computer engineering from the University of Florida in 2006. His research interests include cognitive radio, medical body area networks, wireless tele-health, M2M, and smart grid communications. His work has led to over 30 publications in journals and conferences. He has been serving as technical editor of ECMA 392 and is coexistence group leader of IEEE 802.22.

MONISHA GHOSH [SM] ([email protected]) is a principal member of research staff at Philips Research, currently working on cognitive and cooperative radio networks. She received her Ph.D. in electrical engineering in 1991 from the University of Southern California. From 1991 to 1998 she was a senior member of research staff in the Video Communications Department at Philips Research. From 1998 to 1999 she was at Lucent Technologies, Bell Laboratories, working on wireless cellular systems. Her research interests include estimation and information theory, error correction, and digital signal processing for communication systems.

KIRAN CHALLAPALI [M] ([email protected]) is a principal member of research staff at Philips Research North America. He has been with Philips since 1990. He graduated from Rutgers University with an M.S. degree in electrical engineering in 1992. He currently leads CogNeA, an industry alliance to bring cognitive radio solutions to market. He has published over 25 technical papers in IEEE journals and conferences, and has about 25 patents, issued or pending.
COGNITIVE RADIO NETWORKS
International Standardization of Cognitive Radio Systems

Stanislav Filin, Hiroshi Harada, Homare Murakami, and Kentaro Ishizu, National Institute of Information and Communications Technology
ABSTRACT

The current radio environment is characterized by its heterogeneity. Different aspects of this heterogeneity include multiple operators and services, various radio access technologies, different network topologies, a broad range of radio equipment, and multiple frequency bands. Such an environment has a lot of technical and business opportunities. Examples are joint management of several radio access networks within one operator to balance load of these networks, detecting and using unused spectrum in the allocated frequency bands without interrupting the operation of the primary users of such frequency bands, and spectrum trading between several operators. To exploit such opportunities, the concept of cognitive radio system has been developed. Many CRS usage scenarios and business cases are possible. This has triggered a lot of standardization activity at all levels, including in the International Telecommunication Union, IEEE, European Telecommunications Standards Institute, and European Association for Standardizing Information and Communication Systems; each of these organizations is considering multiple CRS deployment scenarios and business directions. This article describes the current concept of the CRS and shows the big picture of international standardization of the CRS. Understanding of these standardization activities is very important for both academia and industry in order to select important research topics and promising business directions.
INTRODUCTION

The views presented in this article are those of the authors and do not necessarily reflect the views of ITU, IEEE, ETSI, or ECMA. This research was conducted under a contract of R&D for radio resource enhancement, organized by the Ministry of Internal Affairs and Communications, Japan.
The current radio environment is characterized by its heterogeneity. Different aspects of this heterogeneity include multiple operators and services, various radio access technologies, different network topologies, a broad range of radio equipment, and multiple frequency bands. Such an environment has a lot of technical and business opportunities. Examples are joint management of several radio access networks (RANs) within one operator to balance the load of these networks, detecting and using unused spectrum in the allocated frequency bands without interrupting the operation of the primary
users of such frequency bands, and spectrum trading between several operators. To exploit such opportunities, the concept of the cognitive radio system (CRS) has been developed. In general, the CRS can be characterized as a radio system having capabilities to obtain knowledge, adjust its operational parameters and protocols, and learn. Given such a broad understanding of the CRS, many usage scenarios and business cases are possible. Various CRS deployment scenarios and use cases can be classified using two types of CRS: heterogeneous and spectrum sharing. Examples of heterogeneous CRSs are management of several RANs within one operator, and reconfiguration of base stations and terminals. An example of a spectrum sharing CRS is detecting and using unused spectrum. Currently, international standardization of the CRS is performed at all levels, including the International Telecommunication Union (ITU), IEEE, European Telecommunications Standards Institute (ETSI), and European Association for Standardizing Information and Communication Systems (ECMA), where each of these organizations is considering multiple CRS deployment scenarios and business directions. This article describes the current concept of the CRS and shows a big picture of CRS standardization to assist both academia and industry in selecting important research topics and promising business directions.
CRS CONCEPT

DEFINITION OF CRS

The first definition of cognitive radio was given by J. Mitola [1]. He defined cognitive radio as follows: "The term cognitive radio identifies the point at which wireless personal digital assistants and the related networks are sufficiently computationally intelligent about radio resources and related computer-to-computer communications to: (a) detect user communications needs as a function of use context, and (b) to provide radio resources and wireless services most appropriate to those needs." Recently the understanding of the technology has evolved into the CRS concept. Academia and industry from different countries and projects have come up with mostly the same
understanding of the CRS concept. The definition of CRS developed by Working Party (WP) 1B of the ITU Radiocommunication Sector (ITU-R) represents this common understanding. ITU-R WP 1B defines the CRS as "a radio system employing technology that allows the system: to obtain knowledge of its operational and geographical environment, established policies and its internal state; to dynamically and autonomously adjust its operational parameters and protocols according to its obtained knowledge in order to achieve predefined objectives; and to learn from the results obtained" [2]. The following key capabilities of the CRS are underlined in the ITU-R WP 1B definition: the capability to obtain knowledge, the capability to adjust operational parameters and protocols, and the capability to learn.
CRS CAPABILITIES

The capability to obtain knowledge is one of the three key characteristics of the CRS. Such knowledge includes the following key components:
• CRS operational radio environment
• CRS geographical environment
• Internal state of the CRS
• Established policies
• Usage patterns
• Users' needs
The CRS operational radio environment is characterized by, for example, the current status of spectrum usage, an indication of the available radio systems and their assigned frequency bands, the coverage areas of these radio systems, and interference levels. The CRS geographical environment is characterized by, for example, the positions of radios that are components of the CRS and other radio systems, the orientation of the antennas of these radios, and the distribution of users in the geographic area of the CRS. The internal state of the CRS can be characterized by the configuration of the CRS (e.g., the frequency bands and protocols used by its radios), traffic load distribution, and transmission power values. The established policies may describe frequency bands allowed to be used by the CRS under certain conditions, where such conditions may include the maximum level of transmission power in the operating and adjacent frequency bands, and rules that the CRS shall follow to avoid causing harmful interference to other radio systems. The usage patterns may capture the behavior of the CRS, other radio systems, and users. The users' needs may be described by user preferences or policies. Examples of such user preferences are requests for high bandwidth, low delay, fast download time, and low cost. In order to obtain knowledge, the CRS can use various approaches, including:
• Collecting information from component radio systems
• Geolocation
• Spectrum sensing
• White space database access
• Access to a cognitive pilot channel (CPC)
Component radio systems of the CRS typically perform a lot of measurements, such as
received signal power, signal-to-interference-plus-noise ratio, and load. Also, they are aware of their current state, for example, the frequency bands and radio access technologies (RATs) used by base stations and terminals, and transmission power values. All this information contributes a lot to the knowledge of the CRS. Positions of radios (e.g., base stations and terminals) that are components of the CRS and other radio systems can be obtained using geolocation. Geolocation can be performed during professional installation or using a localization system (e.g., the Global Positioning System or a wireless positioning system). White space database access and spectrum sensing are very important in some deployment scenarios of the CRS. These two approaches are used to identify white spaces and detect primary users, while they may also be used to detect secondary users. Access to a CPC is also very important in some CRS deployment scenarios. The CPC serves as a means to exchange information between components of the CRS, and in such cases the CPC is typically considered part of the CRS. The second key characteristic of the CRS is its capability to dynamically and autonomously adjust its operational parameters and protocols according to the obtained knowledge in order to achieve some predefined objectives. Such adjustment consists of two stages: decision making and reconfiguration. Typically, the CRS includes an intelligent management system responsible for making decisions regarding the parameters and protocols that need to be adjusted. Following the decisions made, the CRS performs reconfiguration of its radios. Such reconfiguration may include changes to the following parameters: output power, frequency band, and RAT. These three examples are regulated parameters; that is, they are typically specified by radio regulations. This is a new capability of the CRS compared to commonly used adaptation methods like power control or adaptive modulation and coding. The third key characteristic of the CRS is its capability to learn from the results of its actions in order to further improve its performance. Figure 1 summarizes the CRS concept. The main components of the CRS are the intelligent management system and reconfigurable radios. The four main actions of the CRS are obtaining knowledge, making decisions, reconfiguration, and learning. The knowledge used by the CRS includes knowledge about the operational radio and geographical environment, internal state, established policies, usage patterns, and users' needs. The methods to obtain this knowledge include getting information from component radio systems of the CRS, geolocation, spectrum sensing, access to the CPC, and database access. Using the obtained knowledge, the CRS dynamically and autonomously makes reconfiguration decisions according to some predefined objectives (e.g., in order to improve the efficiency of spectrum usage). Based on the decisions made, the CRS adjusts the operational parameters and
protocols of its reconfigurable radios. Such parameters include output power, frequency range, modulation type, and RAT. The CRS can also learn from its decisions in order to improve future ones. The results of learning contribute to both obtaining knowledge and decision making.
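The four actions summarized in Fig. 1 can be read as a simple control loop. The sketch below is a minimal, assumption-laden rendering of that loop in Python; the function names and the knowledge structure are illustrative only and do not come from any of the standards discussed in this article.

```python
from dataclasses import dataclass, field

@dataclass
class Knowledge:
    """Knowledge base of the CRS (environment, internal state, policies)."""
    radio_env: dict = field(default_factory=dict)
    internal_state: dict = field(default_factory=dict)
    policies: dict = field(default_factory=dict)

def obtain_knowledge(kb: Knowledge) -> None:
    # Placeholder for sensing, geolocation, database/CPC access, and
    # measurement reports from component radio systems.
    kb.radio_env["vacant_channels"] = [21, 27, 39]

def decide(kb: Knowledge) -> dict:
    # Choose new operational parameters against predefined objectives,
    # e.g., improving the efficiency of spectrum usage.
    return {"frequency_channel": kb.radio_env["vacant_channels"][0],
            "output_power_dbm": 10}

def reconfigure(kb: Knowledge, decision: dict) -> None:
    kb.internal_state.update(decision)  # apply to reconfigurable radios

def learn(kb: Knowledge, decision: dict) -> None:
    # Record decision outcomes to bias future decisions (placeholder).
    kb.policies.setdefault("history", []).append(decision)

kb = Knowledge()
for _ in range(3):  # the cognition cycle repeats continuously
    d = None
    obtain_knowledge(kb)
    d = decide(kb)
    reconfigure(kb, d)
    learn(kb, d)
```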
Figure 1. CRS concept summary.
TYPES OF CRS

Many CRS deployment scenarios, use cases, and business cases are possible. We identify two key types of CRS: the heterogeneous type and the spectrum sharing type. In the heterogeneous type CRS, one or several operators operate several RANs using the same or different RATs. The frequency bands allocated to these RANs are fixed. These RANs may have different types of base stations. One type of base station is legacy, designed to use a particular RAT to provide wireless connections to terminals. Another type of base station is reconfigurable, having the capability to reconfigure itself to use different frequency bands allocated to the operator and different RATs, as specified by the radio regulations for these frequency allocations. Operators provide services to users having different terminals. One type of terminal is legacy, designed to use a particular RAT. Such a terminal can connect to one particular operator or to other operators having roaming agreements with the home operator. Another type of terminal is reconfigurable. Such a terminal has the capability to reconfigure itself to use different RATs. Correspondingly, such a terminal can hand over between different RANs using different RATs operated by different operators. Optionally, a reconfigurable terminal can support multiple simultaneous links with RANs. Within the heterogeneous CRS, several deployment scenarios are possible. In one scenario the CRS has only legacy base stations, while some of the terminals are reconfigurable. Such terminals can make decisions to reconfigure themselves to connect to different component RANs inside the CRS. Also, terminal reconfiguration can be managed or supported from the network side. For this purpose, some management entities are deployed on the network side.
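To make the reconfigurable-terminal scenario concrete, the following sketch scores candidate RANs and picks one. The weighting is illustrative only; the attributes, weights, and RAN names are our assumptions and are not taken from IEEE 1900.4 or any other standard discussed below.

```python
# Candidate RANs visible to a reconfigurable terminal (hypothetical data).
rans = [
    {"name": "Operator A / cellular", "rat": "cellular", "load": 0.9,
     "signal_dbm": -85, "cost": 1.0},
    {"name": "Operator A / WLAN", "rat": "wlan", "load": 0.3,
     "signal_dbm": -60, "cost": 0.2},
    {"name": "Operator B / cellular", "rat": "cellular", "load": 0.5,
     "signal_dbm": -75, "cost": 1.5},  # roaming partner
]

def score(ran, supported_rats):
    """Higher is better; RATs the terminal cannot use are excluded."""
    if ran["rat"] not in supported_rats:
        return float("-inf")
    return (ran["signal_dbm"] / 10.0     # favor stronger signal
            - 5.0 * ran["load"]          # avoid loaded networks
            - 2.0 * ran["cost"])         # user's cost preference

terminal_rats = {"cellular", "wlan"}     # a reconfigurable terminal
best = max(rans, key=lambda r: score(r, terminal_rats))
print("Hand over to:", best["name"])     # -> Operator A / WLAN
```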
Figure 2. Heterogeneous type CRS: cross-device and cross-network handover.
Figure 3. Heterogeneous type CRS: cross-operator multi-link handover.

The heterogeneous CRS is considered, for example, in the following standards: IEEE 1900.4, IEEE draft standard P1900.4.1, and IEEE 802.21. One example of this deployment scenario is shown in Fig. 2 [3]. In this example a reconfigurable terminal performs cross-network handover and cross-device handover. Another example is shown in Fig. 3 [4], where a reconfigurable terminal performs cross-operator handover. It can also support multiple simultaneous links with different RANs. One more deployment scenario of the heterogeneous CRS is when a mobile wireless router serves as a bridge between multiple radio systems and terminals, as shown in Fig. 4 [5]. Such a mobile wireless router has the capability to communicate with different radio systems using different RATs while providing connections to terminals using one RAT. In this case terminals do not need reconfiguration capability to communicate with different radio systems. All these deployment scenarios of the heterogeneous CRS are possible within the current radio regulations. In the spectrum sharing CRS, several RANs using the same or different RATs can share the same frequency band. One deployment scenario of this type of CRS is when several RANs operate in unlicensed or lightly licensed spectrum, where the CRS capabilities can enable coexistence of such systems. Another deployment scenario is when a secondary system operates in the white space of a television broadcast operator frequency band. In such a scenario, the CRS capabilities should provide protection of the primary service (television broadcast) and coexistence between secondary systems.
Figure 4. Heterogeneous type CRS: mobile wireless router.

The spectrum sharing CRS is considered, for example, in the following standards: IEEE 1900.4, IEEE draft standard P1900.4a, IEEE draft standard P1900.6, IEEE 802.11y, IEEE draft standard P802.11af, IEEE draft standard P802.19.1, IEEE draft standard P802.22, IEEE draft standard P802.22.1, and standard ECMA-392.
Figure 5. Spectrum sharing type CRS.

One example of a spectrum sharing CRS is shown in Fig. 5 [5]. In this example a cognitive radio base station (CRBS) and cognitive radio terminals (CRTs) sense the spectrum to detect temporarily vacant frequency bands. Based on the sensing results, the CRBS and CRTs reconfigure themselves to use these vacant frequency bands. Such reconfiguration can be supported by a management entity on the network side called the network reconfiguration manager. These deployment scenarios of the spectrum sharing CRS are possible within the national radio regulations of some countries.
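A toy version of the CRBS behavior in Fig. 5 is sketched below. The fusion rule (a channel is vacant only if no sensor reports energy above a threshold) is a common textbook choice and an assumption here, not something specified by the standards listed above; the report values and threshold are hypothetical.

```python
# Per-channel energy reports (dBm) from the CRBS and two CRTs
# (hypothetical sensing results).
reports = {
    "crbs":  {21: -115, 24: -78, 30: -112},
    "crt_1": {21: -110, 24: -80, 30: -95},
    "crt_2": {21: -113, 24: -75, 30: -111},
}
THRESHOLD_DBM = -100  # energy above this suggests an incumbent signal

def vacant_channels(reports, threshold):
    """A channel is usable only if every sensor saw it below threshold."""
    channels = set().union(*(r.keys() for r in reports.values()))
    return sorted(ch for ch in channels
                  if all(r[ch] < threshold for r in reports.values()))

usable = vacant_channels(reports, THRESHOLD_DBM)
print("Vacant channels:", usable)   # -> [21]
# The network reconfiguration manager could then instruct the CRBS
# and CRTs to retune to one of these channels.
```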
INTERNATIONAL STANDARDIZATION OF CRS

Due to the very large interest in the CRS, its standardization is currently being performed at all levels, including the ITU, IEEE, ETSI, and ECMA. In the ITU, ITU-R WPs 1B and 5A are currently preparing reports describing the CRS concept and the regulatory measures required to introduce the CRS. In the IEEE, several Working Groups (WGs) in Standards Coordination Committee (SCC) 41 on Dynamic Spectrum Access Networks and in the 802 LAN/MAN Standards Committee are standardizing CRSs and their components. In ETSI, the Technical Committee (TC) on Reconfigurable Radio Systems (RRS) has been developing reports describing different components of the CRS, as well as reports on the CRS concept and the regulatory aspects of the CRS. In ECMA, Task Group 1 of Technical Committee 48 has standardized a CRS for TV white space.
CRS STANDARDIZATION IN THE ITU

ITU-R WP 1B is currently developing a working document toward draft text on World Radiocommunication Conference 2012 (WRC-12) agenda item 1.19. Agenda item 1.19 is "to consider regulatory measures and their relevance, in order to enable the introduction of software-defined radio and cognitive radio systems, based on the results of ITU-R studies, in accordance with Resolution 956 (WRC-07)" [6]. To prepare the working document, WP 1B has developed definitions of the software defined radio (SDR) and CRS [2]. Also, WP 1B has summarized the technical and operational studies and relevant ITU-R Recommendations related to the SDR and CRS. WP 1B has considered SDR and CRS usage scenarios in different radio services, as well as the relationship between the SDR and CRS. Currently, WP 1B is considering the international radio regulation implications of the SDR and CRS, as well as methods to satisfy WRC-12 agenda item 1.19. ITU-R WP 5A is currently developing a working document toward a preliminary new draft report, "Cognitive Radio Systems in the Land Mobile Service" [7]. This report will address the definition, description, and application of cognitive radio systems in the land mobile service. The following topics are currently considered in the working document:
• Technical characteristics and capabilities
• Potential benefits
• Deployment scenarios
• Potential applications
• Operational techniques
• Coexistence
• Operational and technical implications
CRS STANDARDIZATION IN IEEE

IEEE SCC 41 — IEEE SCC 41 is developing standards related to dynamic spectrum access networks. The focus is on improved use of spectrum, including new techniques and methods of dynamic spectrum access, which requires managing interference and coordination of wireless technologies, and includes network management and information sharing [8]. The 1900.1 WG developed IEEE 1900.1, "Standard Definitions and Concepts for Dynamic Spectrum Access: Terminology Relating to Emerging Wireless Networks, System Functionality, and Spectrum Management." This standard creates a framework for developing other standards within IEEE SCC 41. The 1900.4 WG developed IEEE 1900.4, "Architectural Building Blocks Enabling
Network-Device Distributed Decision Making for Optimized Radio Resource Usage in Heterogeneous Wireless Access Networks." IEEE 1900.4 defines the architecture of the intelligent management system of a CRS. Both the heterogeneous and spectrum sharing CRS are supported by IEEE 1900.4. Currently, the 1900.4 WG is developing two new draft standards: P1900.4.1 and P1900.4a. Development of draft standard P1900.4.1, "Interfaces and Protocols Enabling Distributed Decision Making for Optimized Radio Resource Usage in Heterogeneous Wireless Networks," started in March 2009. P1900.4.1 uses IEEE 1900.4 as a baseline standard. It provides a detailed description of the interfaces and service access points defined in IEEE 1900.4. Development of draft standard P1900.4a, "Architecture and Interfaces for Dynamic Spectrum Access Networks in White Space Frequency Bands," started in March 2009 together with P1900.4.1. P1900.4a amends IEEE 1900.4 to enable mobile wireless access service in white space frequency bands without any limitation on the radio interface to be used. The 1900.5 WG is developing draft standard P1900.5, "Policy Language Requirements and System Architectures for Dynamic Spectrum Access Systems." P1900.5 defines a vendor-independent set of policy-based control architectures and corresponding policy language requirements for managing the functionality and behavior of dynamic spectrum access networks. The 1900.6 WG is developing draft standard P1900.6, "Spectrum Sensing Interfaces and Data Structures for Dynamic Spectrum Access and Other Advanced Radio Communication Systems." P1900.6 defines the logical interface and data structures used for information exchange between spectrum sensors and their clients in radio communication systems. On March 8, 2010 the ad hoc group on white space radio was created within IEEE SCC 41. Its purpose is to consider the interest in, feasibility of, and necessity of developing a standard defining a radio interface (media access control and physical layers) for a white space communication system. IEEE 802 — IEEE 802 WGs are defining CRSs and components of the CRS [9]. The activity to define CRSs is currently performed in the 802.22 and 802.11 WGs, while the activity to specify components of a CRS is currently performed in the 802.21, 802.22, and 802.19 WGs. Draft standard P802.22 is entitled "Draft Standard for Wireless Regional Area Networks Part 22: Cognitive Wireless RAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications: Policies and Procedures for Operation in the TV Bands." It specifies the air interface, including the cognitive MAC and PHY, of point-to-multipoint wireless regional area networks comprising a professionally installed fixed base station with fixed and portable user terminals operating in the unlicensed VHF/UHF TV broadcast bands between 54 MHz and 862 MHz (TV white space). The IEEE standard 802.11y is entitled "IEEE
Standard for Information Technology — Telecommunications and Information Exchange between Systems — Local and Metropolitan Area Networks — Specific Requirements — Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications — Amendment 3: 3650–3700 MHz Operation in USA.” This standard defines the mechanisms (e.g., new regulatory classes, transmit power control, and dynamic frequency selection) for 802.11 to share frequency bands with other users. Draft standard P802.11af is entitled “IEEE Standard for Information Technology — Telecommunications and Information Exchange Between Systems — Local and Metropolitan Area Networks — Specific Requirements — Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications — Amendment: TV White Spaces Operation.” It is an amendment that defines standardized modifications to both the 802.11 physical layers and MAC layer to meet the legal requirements for channel access and coexistence in the TV White Space. IEEE 802.21 is entitled “IEEE Standard for Local and Metropolitan Area Networks — Part 21: Media Independent Handover Services.” It defines extensible media-access-independent mechanisms that enable the optimization of handover between heterogeneous IEEE 802 networks, and facilitate handover between IEEE 802 networks and cellular networks. Draft standard P802.22.1 is entitled “Standard to Enhance Harmful Interference Protection for Low Power Licensed Devices Operating in TV Broadcast Bands.” It specifies methods for license-exempt devices to provide enhanced protection to low-powered licensed devices from harmful interference when they share the same spectrum. Draft standard P802.19.1 is entitled “IEEE Standard for Information Technology — Telecommunications and Information Exchange Between Systems — Local and Metropolitan Area Networks — Specific Requirements — Part 19: TV White Space Coexistence Methods.” It specifies radio-technology-independent methods for coexistence among dissimilar or independently operated TV band device networks and dissimilar TV band devices.
CRS STANDARDIZATION IN ETSI TC RRS

In ETSI, standardization of the CRS is performed in TC RRS [10]. ETSI Technical Report (TR) 102 682, "Functional Architecture for the Management and Control of Reconfigurable Radio Systems," was published in July 2009. It provides a feasibility study on defining a functional architecture for reconfigurable radio systems, in terms of collecting and putting together all management and control mechanisms targeted at improving the utilization of spectrum and the available radio resources. This entails the specification of the major functional entities that manage and direct the operation of a reconfigurable radio system, as well as their operation and interactions.
ETSI TR 102 683, "Cognitive Pilot Channel," was published in September 2009. It provides a feasibility study on defining and developing the concept of the CPC for reconfigurable radio systems to support and facilitate end-to-end connectivity in a heterogeneous radio access environment where the available technologies are used in a flexible and dynamic manner in their spectrum allocation context. ETSI TR 102 802, "Cognitive Radio System Concept," was published in February 2010. It formulates the harmonized technical concept for CRSs. Both infrastructure-based and infrastructureless radio networks are covered. Based on the system concept, the identification of candidate topics for standardization is the key target of this study, which also includes a survey of related activities in other standards development organizations. ETSI TR 102 803, "Potential Regulatory Aspects of Cognitive Radio and Software Defined Radio Systems," was published in March 2010. This report summarizes the studies carried out by ETSI TC RRS related to the CRS and SDR. In particular, the study results have been considered for items of potential relevance to regulatory authorities. ETSI TC RRS is currently developing a draft TR, "Operation in White Space Frequency Bands." This draft report will describe how radio networks can operate on a secondary basis in frequency bands assigned to primary users. The following topics are currently considered: operation of the CRS in UHF white space frequency bands, methods for protecting primary users, system requirements, and use cases. Also, ETSI TC RRS is currently developing a draft technical specification, "Coexistence Architecture for Cognitive Radio Networks on UHF White Space Frequency Bands." This draft specification will define a system architecture for spectrum sharing and coexistence between multiple cognitive radio networks. The coexistence architecture is targeted to support secondary users in UHF white space frequency bands.
CRS STANDARDIZATION IN ECMA

In ECMA, standardization of the CRS is performed in Task Group 1 of Technical Committee 48. Standard ECMA-392, "MAC and PHY for Operation in TV White Space," was published in December 2009 [11]. It specifies MAC and physical layers for personal/portable cognitive wireless networks operating in TV bands. Also, ECMA-392 specifies a number of incumbent protection mechanisms that may be used to meet regulatory requirements.
CONCLUSIONS

In general, the CRS can be characterized as a radio system having capabilities to obtain knowledge, adjust its operational parameters and protocols, and learn. Many CRS usage scenarios and business cases are possible. Currently, international standardization of the CRS is being performed at all levels, including the ITU, IEEE, ETSI, and ECMA, where each
of these organizations is considering multiple CRS deployment scenarios and business directions. This article has described the current concept of the CRS and has shown the big picture of international standardization of the CRS. Figure 6 summarizes the international standardization of the CRS.

Figure 6. Summary of international standardization on CRS.
REFERENCES
[1] J. Mitola and G. Q. Maguire, "Cognitive Radio: Making Software Radios More Personal," IEEE Personal Commun., vol. 6, no. 4, Aug. 1999, pp. 13–18.
[2] ITU-R SM.2152, "Definitions of Software Defined Radio (SDR) and Cognitive Radio System (CRS)," Sept. 2009.
[3] M. Inoue et al., "Context-Based Network and Application Management on Seamless Networking Platform," Wireless Personal Commun., vol. 35, no. 1–2, Oct. 2005, pp. 53–70.
[4] H. Harada et al., "A Software Defined Cognitive Radio System: Cognitive Wireless Cloud," IEEE GLOBECOM, Nov. 2007, pp. 249–99.
[5] H. Harada et al., "Research and Development on Heterogeneous Type and Spectrum Sharing Type Cognitive Radio Systems," 4th CrownCom, June 2009.
[6] "Working Document towards Draft CPM Text on WRC-12 Agenda Item 1.19, Annex 5 to Document 1B/158," Feb. 2010.
[7] "Cognitive Radio Systems in the Land Mobile Service, Working Document towards a Preliminary Draft New Report ITU-R [LMS.CRS], Annex 12 to Document 5A/601-E," Nov. 2010.
[8] IEEE DYSPAN Standards Committee, http://grouper.ieee.org/groups/scc41/.
[9] IEEE 802 LAN/MAN Standards Committee, http://www.ieee802.org/.
[10] M. Mueck et al., "ETSI Reconfigurable Radio Systems: Status and Future Directions on Software Defined Radio and Cognitive Radio Standards," IEEE Commun. Mag., vol. 48, no. 9, Sept. 2010, pp. 78–86.
[11] ECMA-392 Std., "MAC and PHY for Operation in TV White Space," Dec. 2009.
BIOGRAPHIES

STANISLAV FILIN [SM] ([email protected]) is an expert researcher with the National Institute of Information and Communications Technology (NICT), Japan. He is currently serving as NICT representative to ITU-R WP 5A and WP 1B, ETSI TC RRS, and the IEEE 1900.4 WG. He has been a voting member of IEEE SCC 41 on dynamic spectrum access networks. In IEEE 1900.4 he has been serving as technical editor and chair of several subgroups. He was a voting member of the IEEE 1900.6 WG. He participated in the IEEE 802 EC SG on TV white space. He is a voting member of IEEE 802.11 and 802.19. He was chair of the IEEE SCC41 group on WS radio. In 2009 he received the IEEE SA SB award for contribution to the development of IEEE 1900.4-2009.

HIROSHI HARADA ([email protected]) is director of the Ubiquitous Mobile Communication Group at NICT and is also director of NICT's Singapore Wireless Communication Laboratory. He joined the Communications Research Laboratory, Ministry of Posts and Telecommunications (currently NICT), in 1995. Since 1995 he has researched SDR, cognitive radio, dynamic spectrum access networks, and broadband wireless access systems on the microwave and millimeter-wave bands. He has also joined many standardization committees and forums in the United States as well as Japan, and has fulfilled important roles for them, especially IEEE 802.15.3c, IEEE 1900.4, and IEEE 1900.6. He serves on the Board of Directors of the SDR Forum, has been Chair of IEEE SCC41 (IEEE P1900) since 2009, and has been Vice Chair of IEEE P1900.4 since 2008. He was also Chair of the IEICE Technical Committee on Software Radio (TCSR), 2005–2007, and Vice Chair of IEEE SCC41 in 2008. He is involved in many other activities related to telecommunications. He is a visiting professor of the University of Electro-Communications, Tokyo, Japan, and is the author of Simulation and Software Radio for Mobile Communications (Artech House, 2002).

HOMARE MURAKAMI ([email protected]) received his B.E. and M.E. in electronic engineering from Hokkaido University in
1997 and 1999. He has worked for the Communications Research Laboratory, Ministry of Posts and Telecommunications, now reorganized as NICT, since 1999. He is currently a senior researcher in the Ubiquitous Mobile Communications Group of NICT. He worked at Aalborg University from 2003 to 2005 as a visiting researcher. His interest areas are cognitive radio networking, IP mobility, new transport protocols supporting wireless communications, and naming schemes.
KENTARO ISHIZU ([email protected]) received M.E. and Ph.D. degrees from Kyushu University, Japan, in 2003 and 2005, respectively, with a major in computer science. He has been working for NICT since 2002. He has been engaged in R&D projects on heterogeneous wireless networks, distributed content delivery networks, and cognitive wireless networks.
COGNITIVE RADIO NETWORKS
Cognitive Radio: Ten Years of Experimentation and Development

Przemyslaw Pawelczak, University of California, Los Angeles
Keith Nolan and Linda Doyle, University of Dublin, Trinity College
Ser Wah Oh, Institute for Infocomm Research
Danijela Cabric, University of California, Los Angeles
ABSTRACT

The year 2009 marked the 10th anniversary of Mitola and Maguire Jr. introducing the concept of cognitive radio. This prompted an outpouring of research work related to CR, including the publication of more than 30 journal special issues and more than 60 dedicated conferences and workshops. Although the theoretical research is flourishing, with many interesting results presented, hardware and system development for CR is progressing at a slower pace. We provide synopses of the commonly used platforms and testbeds, examine what has been achieved in the last decade of experimentation and trials relating to CR, and draw several perhaps surprising conclusions. This analysis will enable the research community to focus on the key technologies to enable CR in the future.
INTRODUCTION
This material is based on work supported by Science Foundation Ireland under Grant no. 03/CE3/I405 as part of CTVR at the University of Dublin, Trinity College, Ireland.
Cognitive radio (CR), in its original meaning, is a wireless communication paradigm that utilizes all available resources more efficiently, with the ability to self-organize, self-plan, and self-regulate [1]. In its narrower but far more popularized definition, CR-based technology aims to combat scarcity in radio spectrum using dynamic spectrum access (DSA) [2]. DSA technologies are based on the principle of opportunistically using available spectrum segments in a somewhat intelligent manner. Implementation and experimentation work ramped up in the latter half of the decade. Because of the complexities involved in designing and developing CR systems [3, 4], more emphasis has been placed on the development of hardware platforms for full experimentation and testing of CR features. Since 1999, when the term cognitive radio was first used in a scientific article [1], numerous different platforms and experimental deployments have been presented. These CR testbeds differ significantly in their design and scope. It is now appropriate to ask how mature these platforms are, what has been
learned from them, and whether any trends can be identified from an analysis of the functionalities these platforms provide. This article answers these questions. The article has three main sections and contributions. First, we present a primer on the common systems being used for CR research and development. The following section overviews the key events in recent years that have helped progress the field of CR and DSA technologies. We then present insights gained from these experiences and look ahead at how the community can grow in the coming years. We conclude in the final section.
CR IMPLEMENTATION: PLATFORMS AND SYSTEMS

We briefly review the most popular existing hardware and software radio systems, dividing these platforms into two categories. First, we deal with reconfigurable software/hardware systems, where the majority of the radio functionality, such as modulation/coding/medium access control (MAC) and other layer processing, is performed in software. The burden in terms of processing and functionality on the radio frequency (RF) front-end is intended to be minimal in these cases. Second, we take a look at composite systems comprising a combination of purely software and hardware-based signal processing elements (e.g., field-programmable gate arrays [FPGAs]).
RECONFIGURABLE SOFTWARE/HARDWARE PLATFORMS

We begin by focusing on three research-oriented systems: OSSIE, GNU Radio, and Iris.

OSSIE — The Open Source SCA Implementation::Embedded (OSSIE) project is an open source software package for SDR development [5]. OSSIE was developed at Virginia Tech, and has become a major Linux-based open source SDR software kit, sponsored by the U.S. National
Science Foundation (NSF) and the Joint Tactical Radio System (JTRS) program, among others. OSSIE implements an open source version of the Software Communication Architecture (SCA) development framework supporting SDR development initiated by the U.S. Department of Defense, and it supports multiple hardware platforms. Further information is available at http://ossie.wireless.vt.edu. OSSIE is mostly used at Virginia Tech.

GNU Radio — Arguably, the software defined radio (SDR) system with the most widespread usage is the open source GNU Radio project (http://www.gnuradio.org). It supports hardware-independent signal processing functionalities. Beginning in 2001 as a spin-off of the Massachusetts Institute of Technology's (MIT's) PSpectra code originating from the SpectrumWare project, the GNU Radio software was completely rewritten in 2004. Signal processing blocks are written in C and C++, while signal flow graphs and visualization tools are mainly constructed using Python. GNU Radio is currently one of the official GNU projects, with strong support from the international development community. A wide range of SDR building blocks is available, including ones commonly used to build simple CR-like applications (e.g., energy detection). The GNU Radio project prompted the development of the Universal Software Radio Peripheral (USRP) hardware by Ettus Research LLC, described later.

Iris — Iris is a dynamically reconfigurable software radio framework developed by the University of Dublin, Trinity College. It is a general-purpose processor-based rapid prototyping and deployment system. The basic building block of Iris is a radio component written in C++, which implements one or more stages of a transceiver chain. Extensible Markup Language (XML) is used to specify the signal chain construction and characteristics. These characteristics can be dynamically reconfigured to meet communications criteria. Iris works in conjunction with virtually any RF hardware front-end and on a wide variety of operating systems. A wide range of components has been designed for Iris, focused on CR-like systems. Multiple sensing components, ranging from simple energy detection to more sophisticated filter bank and feature-based detection components, are available. A suite of components for dynamically shaping and sculpting waveforms to make best use of available white space, and components that enable frequency rendezvous between two systems on frequencies that are not known a priori, have also been developed. For development purposes Iris can also interface with Matlab. Iris is predominantly used by the development group at the University of Dublin, Trinity College.

RF Front-Ends — GNU Radio and Iris are designed to carry out the majority of signal processing in software. However, each system requires a minimal hardware RF front-end.
USRP — The most commonly used RF front-end, especially in the research world, is the Universal Software Radio Peripheral (USRP). The USRP is an inexpensive RF front-end and acquisition board with an open design and freely available documentation and schematics. The USRP is highly modular; a range of different RF daughterboards for selected frequency ranges may be connected. Two types of USRP are available. USRP 1.0 contains four high-speed analog-digital converters (ADCs) supporting a maximum of 128 Msamples/s at a resolution of 14 bits with 83 dB spurious-free dynamic range, an Altera Cyclone FPGA for interpolation, decimation, and signal path routing, and USB 2.0 for the connection interface. USRP 2.0 replaces the Altera FPGA with a Xilinx Spartan 3-2000 FPGA, gigabit Ethernet, and an ADC capable of 400 Msamples/s with 16-bit resolution. The reader is directed to http://www.ettus.com for further information.
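To make this workflow concrete, the following is a minimal sketch of how an energy detector of the kind mentioned above might be assembled against a USRP using GNU Radio's Python API. It is illustrative only: block and parameter names follow GNU Radio 3.x conventions of the period, and the center frequency, sample rate, and averaging constant are arbitrary choices rather than values from any testbed described here.

from gnuradio import gr, uhd

class EnergyDetector(gr.top_block):
    # Flowgraph: USRP -> |x|^2 -> single-pole IIR averager -> probe
    def __init__(self, freq=500e6, samp_rate=1e6, avg_alpha=1e-3):
        gr.top_block.__init__(self)
        src = uhd.usrp_source("", uhd.stream_args(cpu_format="fc32"))
        src.set_samp_rate(samp_rate)
        src.set_center_freq(freq)
        mag2 = gr.complex_to_mag_squared()             # instantaneous power
        avg = gr.single_pole_iir_filter_ff(avg_alpha)  # smoothed power estimate
        self.probe = gr.probe_signal_f()               # exposes the estimate to Python
        self.connect(src, mag2, avg, self.probe)

tb = EnergyDetector()
tb.start()
# Poll tb.probe.level() and compare it against a calibrated threshold
# to decide whether the band of interest is occupied.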
Other RF Front-Ends — A limited number of other RF front-ends are also available for use with these systems. These include the Scaldio flexible transceiver from IMEC, Belgium (http://www2.imec.be/be_en/research/greenradios/cognitive-radio.html), and the Maynooth Adaptable Radio System from the National University of Ireland, Maynooth [6].
COMPOSITE SYSTEMS

The boundary between hardware and software frameworks (or platforms) is not as straightforward as might be assumed. The emphasis in reality is on reconfigurability. A number of composite platforms exist that have both software and hardware components that can be used to facilitate CR systems. Composite systems differ from reconfigurable software/hardware platforms in that composite systems contain all the required components (dedicated hardware and software, documentation, ready-made software packages and modules, etc.) that allow for immediate CR development. Iris began life on a general-purpose processor but has also migrated to an FPGA platform. On the FPGA platform, components can run in software on the PowerPC and/or in hardware in the FPGA logic. The main Iris framework runs on the PowerPC, with many of the components mentioned above in the FPGA logic.

BEE — The Berkeley Emulation Engine (BEE) and its successor BEE2 are two hardware platforms developed by the University of California at Berkeley Wireless Research Center. BEE2 consists of five Xilinx Virtex-II Pro VP70 FPGAs in a single compute module with 500 giga-operations/s. These FPGAs can parallelize computationally intensive signal processing algorithms, even for multiple radios. In addition to dedicated logic resources, each FPGA embeds a PowerPC 405 core for minimized latency and maximized data throughput between the microprocessor and reconfigurable logic. To support protocol development and interfaces between other networked devices, the PowerPC on one of the FPGAs runs a modified version of Linux and a full IP protocol stack.
                         USRP2                  KUAR              WARP                BEE2
RF bandwidth (MHz)       100                    30                40                  64
Frequency range (GHz)    DC–5 (non-contiguous)  5.25–5.85         2.4–2.5 (4.9–5.87)  2.39–2.49
Processing architecture  FPGA                   FPGA              FPGA                FPGA
Connectivity             gigabit Ethernet       USB/Ethernet      gigabit Ethernet    Ethernet
No. of antennas          2                      2                 4                   18
ADC performance          400 MS/s, 16 bit       105 MS/s, 14 bit  125 MS/s, 16 bit    64 MS/s, 12 bit
Community support        yes                    no (defunct)      yes                 no

Table 1. Summary of popular development solutions for CR; see also [6, Table 2].
Since FPGAs run at clock rates similar to those of the processor cores, system memory, and communication subsystems, all data transfers within the system have tightly bounded latency and are well suited for real-time applications. In order to interface this real-time processing engine with radios and other high-throughput devices, multigigabit transceivers (MGTs) on each FPGA are used to form 10 Gb/s full-duplex links. Eighteen such interfaces per BEE2 board are available, allowing 18 independent radio connections in an arbitrary network configuration. The BEE2, with network and Simulink capabilities, can be used for experimenting with CRs implemented on reconfigurable radio modems and in the presence of legacy users or emulated primary users. Further information is available at http://bee2.eecs.berkeley.edu.

WARP — The Wireless Open-Access Research Platform (WARP) (http://warp.rice.edu) from Rice University, Houston, Texas, is a complete hardware and software SDR design. WARP hardware is very similar in approach to the USRP. A motherboard serves as an acquisition board, while daughterboards serve as data collection boards. As of December 2009, two versions of motherboards were available. The version 2.2 motherboard is connected to a PC via a gigabit Ethernet interface. Motherboard processing is performed by a Xilinx Virtex-II Pro FPGA. Four independent motherboards can be connected at the same time. ADCs operating at 65 Msamples/s with 14-bit resolution are available. Software development for WARP is multilayered, ranging from low-level very-high-speed integrated circuit hardware description language (VHDL) coding to Matlab modeling. Xilinx Matlab extensions for VHDL are available, and the WARP code is openly available. As of December 2009, 21 demo implementations of different wireless functionalities using WARP originated from Rice University itself, while 17 are from other institutions around the world.

KUAR — The Kansas University Agile Radio (KUAR) was an experimental hardware platform intended for the 5.25–5.85 GHz unlicensed
national information infrastructure (UNII) frequency band with a tunable range of 30 MHz [7]. It featured a Xilinx Virtex-II Pro P30 FPGA with an embedded PC for signal processing, four independent interfaces between the FPGA and embedded PC, and an ADC with 105 Msamples/s and 14-bit resolution. The KUAR approach allows split processing between the embedded PC platform and the FPGA. KUAR uses modified GNU Radio software to implement its signal processing features.

Other Platforms — Many other custom SDR platforms are available that are unique in both hardware and software design. However, we need to emphasize that these platforms simply provide appropriate hardware and software for the digital processing required, integrated with an RF front-end. Hence, the user of these products does not need to look for a standalone RF front-end. Some commercial platforms, such as the Lyrtech solutions (http://www.lyrtech.com) among others, also exist but are not considered in this article. A summary of the described components, along with additional parameters, is presented in Table 1.
OTHER SYSTEMS

In addition to the software-centric and composite systems described in this article, it is important to note that several standalone components have also been developed. The need for spectrum sensing, an important aspect of CR functionality, has been a driver for this development work. Examples include sensing devices from Rockwell Collins, IMEC, and the Institute for Infocomm Research (I2R), Singapore, which are addressed later in this article. Finally, there are some well-known DSA-focused SDR platforms that are not used directly in CR experimentation at the moment. The most prominent ones include the Japanese National Institute of Information and Communications Technology SDR platform [6, Sec. 3.3], FlexRadio and PowerSDR used mainly for amateur radio work (http://www.flexradio.com), and SoftRock kits (http://www.dspradio.org).
BUILDING CR AND DSA SYSTEMS: EXPERIMENTATION AND TRIALS

Following the brief synopses of the key systems enabling SDR and CR development, we proceed to the second main part of this article. We start by describing the experimental results of multiple platform interactions during recent SDR, CR, and DSA-focused conferences.
OVERVIEW OF IMPORTANT CR EXPERIMENTS

Conference Demonstrations — In the latter part of the last decade, some independent conference venues featured demonstration sessions. The information relating to these events forms our starting point. We focus mostly on the demonstrations presented at the IEEE DySPAN and SDR Forum (now Wireless Innovation Forum) conferences, which are the most recognized and largest directly related events in the community. A demonstration track was first established in the IEEE DySPAN conference series in 2007. Since that year there have been a total of 22 demonstrations. The SDR Forum annual technical symposium, run by the SDR Forum since 1996, organized its first demonstration track in 2007. The demonstrations presented that year comprised only SDR platforms and development kits for engineers. In 2008 real demonstrations were presented. In total, 12 demo platforms were shown, among them three related to DSA. During the 2009 SDR Forum conference, 10 demonstrations were presented, among them three related to DSA systems. Important demos presented outside these two venues are also included in this survey. The Association for Computing Machinery (ACM) MobiCom '09 included only one CR-like demo, from RWTH Aachen University, Germany. In 2008 ACM MobiCom featured one CR demo from Microsoft Research, China. ACM SIGCOMM '09 included one demonstration from the University of Dublin, Trinity College.

The survey data for this article were collected as follows. From the publicly available data on each demonstration, we extracted information related to the waveforms used, frequency ranges, form of spectrum sensing, transmit or receive capabilities, control channel usage, type of application used, sponsoring body, and number of developers. We focused only on actual demonstrations, ignoring demos that either presented development frameworks only, or were based on SDR and reconfigurable platforms not related to CR or DSA systems. In total, we identified 41 relevant demonstrations. For detailed information on each demonstration platform the reader is referred to the respective conference proceedings. The data are as follows:

• IEEE DySPAN '10:
–Wright State University, Army Research, United States: "Spectrally Modulated Spectrally Encoded Platform"; sponsored by internal funds
–University of Dublin, Trinity College, Ireland, European Union (EU): "OFDM Pulse-Shaping for DSA; Multi-Carrier CDMA for DSA"; both sponsored by Science Foundation Ireland
–Institute for Infocomm Research, Singapore: "Communication in TV White Spaces"; sponsored by the Singapore Agency for Science, Technology and Research
–IMEC, Belgium, EU: "Wideband Spectrum Sensor"; sponsored by internal funds
–RWTH, Germany, EU: "Policy Engine for Home Networks"; sponsored by the German Research Foundation and EU ARAGORN Project; "OFDM Adaptation Based on Spectrum Sensing"; sponsored by the German Research Foundation; "Decomposable MAC Framework"; sponsored by the German Research Foundation and EU 2PARMA project
–Communications Research Center, Canada: "WiFi Network with Spectrum Sensing"; sponsored by internal funds
–University of Notre Dame, United States: "Primary User Traffic Pattern Detection"; sponsored by the U.S. National Science Foundation and National Institute of Justice
• SDR Forum '09:
–University of Oulu, Finland, EU: "Mobile Ad Hoc Network with Opportunistic CR MAC"; sponsored by internal funds
–IMEC, Belgium, EU: "Wideband Spectrum Sensor" (also IEEE DySPAN '10); sponsored by internal funds
–University of Piraeus, Greece, Alcatel-Lucent, Germany, EU: "Dynamic Radio Access Technique Re-Configuration"; sponsored by the EU E2R Project
• ACM MobiCom '09:
–RWTH, Germany, EU: "CR Capacity Estimation"; sponsored by the German Research Foundation and EU ARAGORN project
• ACM SIGCOMM '09:
–University of Dublin, Trinity College, Ireland, EU: "An FPGA-Based Autonomous Adaptive Radio"; sponsored by Science Foundation Ireland
• SDR Forum '08:
–University of Dublin, Trinity College, Ireland, EU: "Cyclostationary Signature Embedding and Detection" (see IEEE DySPAN '07); sponsored by Science Foundation Ireland
–Shared Spectrum Company, United States: "XG Radio"; sponsored by the DARPA XG Program
–Virginia Tech, United States: "Multinode CR Testbed"; sponsorship information unknown
• ACM MobiCom '08:
–Microsoft Research, China: "WiFi Network on TV Bands"; sponsored by internal funds
• IEEE DySPAN '08:
–TU Delft, University of Twente, Netherlands: "Non-Continuous OFDM with Spectrum Sensing"; sponsored by the Dutch AAF Freeband Program
–Philips Research, United States: "IEEE 802.11a with Frequency Adaptation"; sponsored by internal funds
–Adaptrum, United States: "Wireless Microphone Detection"; sponsored by internal funds
–University of Dublin, Trinity College, Ireland, EU: "Cyclostationary Signature Embedding and Detection" (also SDR Forum '08); "Point to Point DSA Link with Spectrum Sensing"; both sponsored by Science Foundation Ireland
–Virginia Tech, United States: "Heterogeneous Cooperative Multinode DSA Network"; sponsored by the U.S. National Institute of Justice, National Science Foundation, and DARPA
–Institute for Infocomm Research, Singapore: "Transmission over TV White Spaces"; sponsored by the Singapore Agency for Science, Technology and Research
–Motorola, United States: "WiFi-Like Operation in TV Bands"; sponsored by internal funds
–Omesh Networks, United States: "ZigBee-Based Self-Configured Network"; sponsored by internal funds
–Rockwell Collins, United States: "Spectrum Sensor and Signal Classifier"; sponsored by the DARPA XG Program
–Shared Spectrum Company, United States: "XG Radio"; sponsored by the DARPA XG Program
–University of South Florida, United States: "Spectrum Sensing with Feature Detection"; sponsored by internal funds
–University of Utah, United States: "High Resolution Spectrum Sensing"; sponsored by internal funds
• IEEE DySPAN '07:
–Shared Spectrum Company, United States: "XG Radio"; sponsored by the DARPA XG Program
–Motorola, United States: "WiFi-Like Network in Licensed Bands"; sponsored by internal funds
–Virginia Tech, United States, University of Dublin, Trinity College, Ireland, EU: "Cognitive Engine-Based Radio Reconfiguration"; sponsored by Science Foundation Ireland
–University of Dublin, Trinity College, Ireland, EU: "Cyclostationary Signature Embedding and Detection" (also SDR Forum '08); sponsored by Science Foundation Ireland
–University of Kansas, United States: "KUAR Presentation"; sponsored by the U.S. National Science Foundation, DARPA, and the Department of the Interior National Business Center
–QinetiQ, United Kingdom: "Spectrum Monitoring Framework"; sponsored by internal funds
–SRI International, United States: "Policy Reasoner Combined with SSC XG Radios"; sponsored by the DARPA XG Program
–University of Dublin, Trinity College, Ireland, EU: "Extensions to XG Policy Language"; sponsored by Science Foundation Ireland
–University of Twente, Netherlands, EU: "Spectrum Monitoring Device"; sponsored by the Dutch Adaptive Ad Hoc Free Band Wireless Communications (AAF) Program

IEEE DySPAN '07 — In the first ever trial of its kind, during IEEE DySPAN '07, QinetiQ (a U.K. Ministry of Defense contractor) and Shared Spectrum Company carried out a simultaneous transceiver operation test in the UHF band. Data from the evaluation are not publicly available, as they were considered proprietary information. However, it was found that Shared Spectrum Company's detect-and-avoid system could coexist with a very fast hopping single-carrier system in the same frequency band. Further information regarding the demonstrations is available at http://www.ieee-dyspan.org/2007. A wireless trial license was issued by the Commission for Communications Regulation (Comreg) in Ireland for multiparty trials in this case. Further information is available at http://www.testandtrial.ie.

IEEE DySPAN '08 — IEEE DySPAN '08 featured 13 live demonstrations comprising Tx/Rx and Rx-only systems. A special temporary authority license was issued by the FCC for the 482–500 MHz frequency range, allowing multiple companies and academic institutions to occupy the band and interfere with each other for the duration of the event. The University of Dublin, Trinity College, Shared Spectrum Company (using XG nodes), I2R, University of Utah, Stevens Institute of Technology, OMESH Networks, Virginia Tech, and Motorola demonstrated DSA transceiver systems. Adaptrum, Philips, the University of South Florida, Anritsu, Rockwell Collins, and TU Delft carried out signal detection and analysis work using these transmission sources. This location features several high-power
analog TV transmitters in the immediate vicinity. The trials demonstrated that DSA systems and networks could be established and maintained even in close proximity to these high-power TV services, and even in Chicago's extremely crowded RF environment. Further information is available at http://www.ieee-dyspan.org/2008.

[Figure 1. Measurement results example from IEEE DySPAN '08. In the waterfall plot the narrowband signal is an FM transmission and the broadband signal is an XG radio. The waterfall spans approximately 60 seconds of measurement.]

Figure 1 is an example waterfall plot obtained using an Anritsu MS2721B handheld analyzer inside the conference demo room, spanning approximately 1 min. The wideband signal is Shared Spectrum Company's orthogonal frequency-division multiplexing (OFDM) signal from the XG nodes. This was operating on a do-no-harm basis and simply vacated any channel where the received signal level from a non-XG signal exceeded –90 dBm. In one scenario, a narrowband FM signal modulated with a 1 kHz sine wave was swept up and down in the frequency band to serve as a potential interferer to XG. It is clearly seen that the XG signal did move to a vacant channel. This proved that DSA is possible even in the shadow of extremely powerful adjacent channel TV transmissions. However, this also demonstrated the weakness of an energy-detection-based do-no-harm approach. As an example of a simple denial-of-service attack, it was possible to trigger the XG signal to change channels because the detection system was energy-threshold-based. In some cases the XG and narrowband source appear on the same frequency. This is because the transmitted power of the narrowband interferer was reduced, and did not exceed the XG system detection threshold.

IEEE DySPAN '10 — IEEE DySPAN '10 featured 10 demonstrations of DSA systems. While some of the demonstrations possessed the capability to transmit, as was the case with the University of Notre Dame and Communications Research Centre Canada devices, all of them used license-exempt bands only. Two demos, one from RWTH Aachen University and one from the University of Dublin, Trinity College, demonstrated non-contiguous OFDM transmission and effective subcarrier suppression techniques, again using license-exempt channels only.

Key Commercial Experimentation and Trials — This section presents brief overviews of key commercial trials and experimentation work carried out in recent years that have broken new ground and helped influence the direction of CR and DSA research.

DARPA XG Experimentation — The DARPA XG radio was manufactured by Shared Spectrum Company in the early 2000s [8]. It is an implementation of a DSA system using interference detection and avoidance techniques. A policy engine is used for frequency selection and access. The XG radio uses the IEEE 802.16 physical layer, with a 1.75 MHz bandwidth OFDM signal and 20 dBm transmit power. All nodes in the network use a common frequency, despite the availability of more channels at a given point in time.
Some of the most interesting field trial results were presented in [8] by the Defense Advanced Research Projects Agency (DARPA) XG program. The DARPA XG trial was presumably the first private CR system trial ever. On August 15–17, 2006 the U.S. Department of Defense's DARPA demonstrated the capabilities of XG radios to work on a CR-like basis. Tests were performed at different locations in Virginia. Six mobile nodes were involved in the demonstrations, and, as the authors claim, the demonstration was successful, proving that the idea of listen-before-talk communication equipped with policy-based reasoning in radio access is fully realizable. The system demonstrated very short channel abandon times of less than 500 ms (i.e., the time during which the device ceased communication on a certain channel and vacated it) and short reestablishment times (i.e., less than 200 ms) given the lack of pre-assigned frequencies. The reestablishment time is the time taken for the device to select a new channel and resume communications. The channel abandon goal of 500 ms was mostly met, and problems were mostly due to software and IEEE 802.16 modem glitches. During the experiment U.S. Department of Defense radios were operating in the 225–600 MHz range, and XG radios were selecting unused frequency channels in this range (i.e., one out of six possible), where the number of all possible channels to select from was an implementation choice.
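The reported behavior can be summarized in a few lines of logic. The sketch below is an illustrative Python model of the detect-and-avoid cycle, not the XG implementation: the radio object and its methods are hypothetical stand-ins for platform-specific driver calls, while the threshold and timing targets are the figures reported above and in [8].

import time

DETECTION_THRESHOLD_DBM = -90.0  # vacate when a non-XG signal exceeds this level
ABANDON_TARGET_S = 0.5           # target time to cease transmission and vacate
REESTABLISH_TARGET_S = 0.2       # target time to select a new channel and resume

def detect_and_avoid(radio, channels, current):
    """One iteration of a do-no-harm loop over a pre-agreed channel list."""
    if radio.sense_dbm(current) <= DETECTION_THRESHOLD_DBM:
        return current                   # channel still clear; keep using it
    t0 = time.time()
    radio.stop_tx()                      # abandon: should complete within ~500 ms
    for ch in channels:
        if ch != current and radio.sense_dbm(ch) <= DETECTION_THRESHOLD_DBM:
            radio.tune(ch)               # re-establish: target of ~200 ms
            radio.resume_tx()
            print("switched channel in %.0f ms" % (1000 * (time.time() - t0)))
            return ch
    return None                          # no vacant channel currently available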
Experiences from Spectrum Sensing in the TV Bands — The most prominent hardware trial for spectrum sensing thus far has been the FCC field trial conducted in 2008 by the Office of Engineering and Technology (OET). Five hardware prototypes, from Adaptrum, I2R Singapore, Microsoft Corporation, Motorola Inc., and Philips Electronics North America, were submitted for examination. The tests covered TV signals and Part 74 wireless microphone signals, in a controlled laboratory environment as well as in the field. All devices supported sensing of TV signals, while the I2R, Microsoft, and Philips devices also supported wireless microphone sensing.

TV Sensing Laboratory Test: In general, all devices exhibited good sensitivities (better than the –114 dBm threshold established by the FCC [9]) in the laboratory single-channel test. The Philips device in particular achieved the best sensitivity in a clean signal environment, while the Microsoft device had the best performance in captured signal tests. Most devices were able to maintain good sensitivities when the adjacent channel power was within manageable levels for the devices (see [10, Table 3-1] for adjacent channel test results). However, the sensitivities were not determined in some cases due to insufficient selectivity, receiver desensitization, or device malfunction. From the measurable detection thresholds, the I2R device threshold was better than –114 dBm in all cases except one, when the N + 1 adjacent signal level was at –28 dBm. The Philips device exhibited the best performance at a low adjacent signal level of –68 dBm.
Prototype                ATSC Cond. I   ATSC Cond. II   NTSC Cond. I   NTSC Cond. II   Unoccupied
Adaptrum                 91%            51%             89%            30%             75%
I2R                      94%            30%             25%¹           10%¹            81%
Motorola (geolocation)   100%           100%            100%           100%            71%
Motorola (sensing)       90%            48%             —              —               64%
Philips                  100%           92%             100%           100%            15%

Note: ¹ I2R's white space device did not support NTSC but was tested by the FCC for NTSC anyway.

Table 2. Probabilities of proper channel classification.
Nevertheless, future spectrum sensing hardware development should tackle the issues of insufficient receiver selectivity and receiver desensitization, especially when the adjacent channels carry high power.

TV Sensing Field Test: Four test conditions (Table 2) were considered by the FCC [10]. Two of these test conditions involved the white space device (WSD) operating within the service contour of a station assigned to the channel. For condition I, the broadcast signal was viewable on a representative consumer TV; for condition II, the broadcast signal was not viewable on a representative consumer TV. For condition II, we note that there is no mechanism to determine whether a TV signal actually exists at the measurement locations. All devices, under condition I tests, met the intended probability of detection of over 90 percent for ATSC channels. The geolocation database approach from Motorola was able to identify occupied channels with 100 percent accuracy. For identification of unoccupied channels, the I2R device exhibited the best performance, but not with complete reliability. Ironically, the geolocation-database-based approach did not exhibit the best performance in this respect, presumably due to incomplete information in the database. This shows that spectrum sensing alone works to some degree, but the performance could be further enhanced, especially in the identification of unoccupied channels. Combining a geolocation database with spectrum sensing may be a better option depending on the specific deployment scenario in mind.

Wireless Microphone Test: The field tests for wireless microphone sensing were performed with the I2R and Philips devices at two locations. The Philips device reported all of the channels on which the microphones were designated to transmit as occupied, whether the microphone was transmitting or not. The I2R device indicated several channels as available even when the microphones were on. At first glance, the wireless microphone field tests did not seem to give convincing results on the capability of the submitted WSDs to detect wireless microphone signals reliably. Nevertheless, the
White Space Coalition (WSC) later found that the wireless microphone operators were improperly transmitting signals on many channels occupied by TV broadcast signals within the protected TV service contours during the field trials [11]. Even so, there has so far been no comprehensive trial proving acceptable performance of wireless microphone signal detection. As an alternative, the WSC proposed using beacons to protect wireless microphone signals.
OBSERVATIONS FROM CR PLATFORMS' INTERACTIONS

We now proceed to the third and final part of this article. We focus on the many interesting conclusions that may be drawn from observing the development progress of the demonstration platforms for CR-like systems and networks presented earlier. Some of these may seem surprising and contradict common perceptions of the way these networks are evolving. We also offer recommendations to help the community evolve faster and advance the field of research. These are summarized below.
THERE ARE PRACTICALLY NO COMPREHENSIVE CR DEMONSTRATION PLATFORMS

Almost all testbeds presented publicly are more or less focused on DSA functionality. Among the surveyed demos, not a single one presents even one of the CR features proposed in [1], such as the use of artificial intelligence (AI) in spectrum selection. We presume that the field is not mature enough to provide meaningful demonstrations with AI features. The more exciting AI functionality tends to lend itself better to scenarios involving networks, distributed resources, and higher-plane functionality featuring teamwork and collaboration [12]. We encourage open collaboration between research groups to help progress toward comprehensive demonstrations better linked to real-world scenarios. The IEEE DySPAN
demonstration series provided a glimpse of what value could be generated from these collaborative activities. Further public dissemination of outcomes from these activities, in the form of website content and publicly available videos, would significantly increase the visibility and impact of this work. This in turn would increase the prospects of collaboration and joint project opportunities with external groups around the world.
OPEN SDR PLATFORMS DOMINATE THE RESEARCH MARKET

As seen in Fig. 2a, the majority of demonstrations use GNU Radio and either the USRP or dedicated RF front-ends. This demonstrates that open source SDR development kits and open hardware platforms are proving to be the most accessible university research platform for DSA-related research. On the other hand, other open source software components supporting development of CR-like systems, such as WARP, Iris, and OSSIE, described earlier, are mostly used by the universities that developed them. Open sourcing is a valuable means of enticing new users, supporting a wide range of development ecosystems, and increasing the impact of a research platform. Research institutions are encouraged to explore this option. Additional opportunities in the form of bespoke development work, greater employment opportunities for the researchers involved, and the prospect of a development lifetime not restricted by the duration of the project are potential indirect outcomes of this approach.

MANY TESTBEDS ARE NOT DSA IN THE STRICT MEANING OF THE TERM

Surprisingly, the majority of platforms enabling real-world communication and presented in the past couple of years are designed to work in license-exempt bands, where no requirements on primary user protection are present. However, certain issues (e.g., the interference impact of secondary opportunistic usage on primary users, and adjacent channel and dynamic range issues) simply cannot be analyzed properly unless deployed in a frequency band with active real-world incumbents. In addition to these technical constraints, market mechanisms and economic drivers, including light licensing and incentive auction schemes, cannot be properly trialed in license-exempt bands. Spectrum regulators can provide wireless test and trial licensing options to help facilitate experiments in non-license-exempt spectrum that more closely meet real-world incumbent scenarios. The Commission for Communications Regulation (Comreg) in Ireland, the Office of Communications (Ofcom) in the United Kingdom, and the FCC (through its special temporary authority license mechanism) are examples of regulators that offer these options. We encourage research groups to avail of these opportunities where possible.

[Figure 2. Current status of CR demonstration platforms presented in this article: a) hardware platforms used; b) waveforms used; c) types of signal detection used (OTS: off the shelf; SC: single carrier; SMSE: spectrally modulated spectrally encoded).]
OFDM IS TYPICALLY THE DESIGN CHOICE FOR WAVEFORMS

Referring to Fig. 2b, the majority of waveforms used have been OFDM-based (including DARPA XG). In addition, some prototypes are based on IEEE 802.11 standards, where OFDM is a standard spectrum access scheme. USRP-based testbeds use OFDM to implement non-contiguous forms of this spectrum access scheme, which allows for the dynamic notching and shaping of subcarriers to accommodate detected incumbent frequency user activity; a sketch of this technique follows below. Some other demonstrations not using OFDM are available, like recent University of Dublin, Trinity College
demonstrations with multicarrier code-division multiple access (MC-CDMA). Single-carrier (SC) waveform-based research should continue. SC schemes can alleviate the need for the highly linear power amplifiers and backoff required for OFDM, thus helping reduce the cost of user terminals. Single-carrier frequency-division multiple access (SC-FDMA) is a variant of OFDM being used for Long Term Evolution (LTE) and LTE-Advanced terminals, the successors to High-Speed Downlink Packet Access (HSDPA). The research community therefore stands to benefit from extending its existing OFDM-based work to target SC-FDMA, carrier aggregation, and other related LTE-based technologies.
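As an illustration of the subcarrier notching discussed above, the following NumPy sketch builds one non-contiguous OFDM symbol in which subcarriers overlapping a detected incumbent are zeroed. The FFT size, cyclic prefix length, QPSK mapping, and notch position are arbitrary illustrative choices rather than parameters of any particular testbed.

import numpy as np

def nc_ofdm_symbol(bits, active, nfft=64, cp_len=16):
    """One NC-OFDM symbol: QPSK on active subcarriers; notched carriers stay zero."""
    pairs = bits.reshape(-1, 2)
    qpsk = ((1 - 2 * pairs[:, 0]) + 1j * (1 - 2 * pairs[:, 1])) / np.sqrt(2)
    freq = np.zeros(nfft, dtype=complex)
    freq[active] = qpsk                          # map data onto active carriers only
    sym = np.fft.ifft(freq)
    return np.concatenate([sym[-cp_len:], sym])  # prepend the cyclic prefix

# Notch subcarriers 20-27, where incumbent activity was (hypothetically) sensed:
active = np.ones(64, dtype=bool)
active[20:28] = False
tx = nc_ofdm_symbol(np.random.randint(0, 2, 2 * active.sum()), active)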
ENERGY DETECTION IS THE MOST POPULAR SIGNAL DETECTION METHOD

Energy detection is used by the majority of the systems addressed in this article to detect the presence of other users in a band of interest. Energy detection offers greater detection speed and lower computational complexity than, for example, cyclostationary feature analysis. However, this comes at a cost. Energy detection is not highly regarded for accuracy in low signal-to-noise ratio cases, as noted earlier. Among those demos enabling energy detection only, a few enable cooperation in spectrum sensing. However, it was found during the DySPAN demonstrations that this method is suboptimal and easy to abuse. Many other interesting and more reliable sensing approaches exist in the literature, including cyclostationary feature analysis [13, 14] and filter bank techniques [15], which lend themselves to implementation on a variety of the platforms mentioned in this article (Fig. 2c).
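For contrast with plain energy detection, the sketch below estimates one point of the cyclic autocorrelation function, the basic statistic behind cyclostationary feature detectors such as those in [13, 14]. It is a deliberately simplified illustration under assumed parameters: a practical detector would scan a set of cyclic frequencies and lags, and apply a properly calibrated threshold.

import numpy as np

def cyclic_autocorr(x, alpha, tau, fs):
    """Estimate R_x(alpha, tau), the cyclic autocorrelation of x sampled at fs."""
    n = np.arange(len(x) - tau)
    prod = x[:len(x) - tau] * np.conj(x[tau:])
    return np.mean(prod * np.exp(-2j * np.pi * alpha * n / fs))

def feature_detect(x, alpha, tau, fs, threshold):
    # Normalize by average power so the statistic is scale-invariant; for
    # stationary noise the statistic is near zero at any nonzero alpha,
    # whereas a signal cyclostationary at alpha yields a distinct peak.
    stat = np.abs(cyclic_autocorr(x, alpha, tau, fs)) / np.mean(np.abs(x) ** 2)
    return stat > threshold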
GEOLOCATION AND SENSING ARE NEEDED FOR MAXIMUM RELIABILITY, BUT AT A COST

The FCC WSD tests demonstrated that a combination of geolocation and sensing yielded the best results in condition I and II tests. However, the ability to sense signals down to the established thresholds may imply significantly higher terminal costs than if a geolocation database approach were used on its own. Cost is a major factor influencing the market adoption of WSD-based technologies. Further real-world trials are required to determine whether sensing and the associated costs can be significantly reduced if geolocation-based approaches can be employed to meet the regulatory guidelines. The outcomes of this work would also help shape regulatory policy toward a stance that balances the need for primary user protection with helping new markets emerge and evolve. These factors would in turn help increase the market adoption prospects of new white-space-based technologies.
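The combined decision logic itself is simple, as the following sketch shows; the cost lies in the sensing front-end that must reach the threshold. Here db_is_vacant and sense_dbm are hypothetical callables standing in for a WSD's geolocation database client and sensing hardware, and the –114 dBm figure is the FCC threshold cited earlier [9].

SENSING_THRESHOLD_DBM = -114.0  # detection threshold established by the FCC [9]

def usable_channels(channels, db_is_vacant, sense_dbm):
    """Use a channel only if the geolocation database lists it as vacant at this
    location AND local sensing sees no incumbent energy above the threshold."""
    return [ch for ch in channels
            if db_is_vacant(ch) and sense_dbm(ch) < SENSING_THRESHOLD_DBM]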
LACK OF APPROPRIATE RF FRONT-ENDS

A key bottleneck in CR experimentation has always been (and, we believe, continues to be) the availability of appropriate frequency-agile RF front-ends that can easily be coupled with the parts of the CR that carry out the digital processing — be they pure software systems like GNU Radio or a mix of hardware and software like the BEE. The USRP has been the most successful product to do just this, especially in terms of accessibility for researchers (Fig. 2a). We have reached the stage where out-of-the-laboratory tests are required to significantly progress the field of research. The RF front-end requirements must therefore evolve to support this work. Increased transmit power, frequency range coverage, smaller form factors, increased support for add-on modules, an increased range of interfaces, weatherproof housings, and more adaptable power source facilities are key to facilitating this shift in focus. The research community needs to engage with large equipment vendors to demonstrate ideas and prototype solutions so as to promote development of new RF front-ends in sufficient quantities for larger-scale research and commercial activities.
SMALL AND CENTRALIZED SYSTEMS ARE THE DESIGN CHOICE FOR MOST OF THE PLATFORMS

Designers have full control over their platforms with a centralized approach. This avoids the need for a control channel (19 out of 28 surveyed platforms focusing on networking had no control channel enabled); however, it means sacrificing the flexibility of the design. Most demos have two nodes, some have three, and a few might have a few more. Thus, testbeds are small and not of a substantial enough size to really explore or uncover networking issues. There is much less focus on cognitive networks, and when a network focus is present the scenarios typically target single-digit numbers of nodes and centralized scenarios. The time to increase the scope of the research vision has now arrived. The research community is urged to expand its testbed plans to examine larger-scale and distributed multinode scenarios over wider geographical areas. Collaborative efforts are now beginning to focus on this more, however. Key activities in Europe, for example, include the European Science Foundation's European Cooperation in Science and Technology (COST) IC0902 and COST IC0905 (COST-TERRA) projects, which focus on applying CR across layers, devices, and networks, and on developing a harmonized techno-economic framework for CR and DSA across Europe. Further information on these is available at http://cost-terra.org and http://newyork.ing.uniroma1.it/IC0902.
NO DRAMATIC INCREASE IN THE NUMBER OF AVAILABLE CR AND NETWORK PROTOTYPES

The number of papers including cognitive radio as a keyword increases exponentially every year. However, every year IEEE DySPAN
has received a similar number of demonstration submissions: IEEE DySPAN '07, '08, and '10 received 13, 15, and 12 submissions, respectively. More industry-led research is now required to increase the number of prototype systems beyond the current small set focused on long-term, research-only concepts.
ONLY ONE THIRD OF THE PRESENTED DEMOS ARE FROM THE UNITED STATES

Although the United States still dominates in research and development of CR-like systems, due to worldwide interest almost 60 percent of the demos are from Canada, the EU, and Asia.

UNIVERSITIES DOMINATE THE DEMONSTRATION MARKET

As an emerging technology, DSA-based systems are the basis for patent generation and other intellectual property protection endeavors. This is one of the reasons why publicly viewable commercial offerings appear to be slow to emerge. On the other hand, university-created prototypes, and research publications concerning them, tend to emerge more quickly and involve public dissemination of the work through academic publications, helping to build the research profile and status of the research group and academic institution.

MORE EMPHASIS IS NEEDED ON REPORTING FAILURES

The development path of an emerging technology includes failures as well as successes. In many cases, the reasons a particular DSA or CR approach was not successful can be even more important than the small number of scenarios where the system does live up to its claims. While some technical reports focus on problems associated with DSA-related systems, such as [10], research publications tend not to report this valuable information. By reporting the reasons approaches may not work, the research community can avoid repeating the same mistakes and evolve faster.

EACH DEMONSTRATION WAS DEVELOPED BY A SMALL NUMBER OF PEOPLE

Thanks in part to ready-made SDR systems, available documentation, and, in the case of the USRP, an active community of developers, the number of people involved in demonstrations can be kept small. For the demos surveyed in the previous section, the average number of developers is approximately three.

ABSENCE OF IEEE 802.22 DEMONSTRATIONS

Interestingly, among all presented demonstrations, not a single one implemented the IEEE 802.22 protocol stack. Although some components for IEEE 802.22 have already been developed (e.g., the spectrum sensing module of [16]), none of the universities and companies have focused on these networks. Not only are demos and testbeds for IEEE 802.22 missing; there is also a lack of literature on WRAN networks that directly takes into account the specifications of the standard to evaluate its performance [3, 4].

CONCLUSIONS

In this article we have presented a survey of state-of-the-art hardware platforms and testbeds related to CR concepts. We broke this work down into three sections. First, we presented a primer on the common systems being used for CR research and development. Synopses of the key events in recent years that have helped progress the field of CR and DSA technologies followed. Finally, we presented insights gained from these experiences in an attempt to help the community grow further and faster in the coming years.

ACKNOWLEDGMENTS

The authors would like to thank Rahman Doost Mohammady and Jörg Lotze for providing initial data for the demonstration survey.
REFERENCES
[1] J. Mitola III and G. Q. Maguire, Jr., "Cognitive Radio: Making Software Radios More Personal," IEEE Personal Commun., vol. 6, no. 4, Aug. 1999, pp. 13–18.
[2] J. Hoffmeyer et al., "Definitions and Concepts for Dynamic Spectrum Access: Terminology Relating to Emerging Wireless Networks, System Functionality, and Spectrum Management," IEEE 1900.1-2008, Oct. 2, 2008.
[3] F. Granelli et al., "Standardization and Research in Cognitive and Dynamic Spectrum Access Networks: IEEE SCC41 Efforts and Other Activities," IEEE Commun. Mag., vol. 48, no. 1, Jan. 2010, pp. 71–79.
[4] Q. Zhao and B. M. Sadler, "A Survey of Dynamic Spectrum Access: Signal Processing, Networking, and Regulatory Policy," IEEE Signal Process. Mag., vol. 24, no. 3, May 2007, pp. 79–89.
[5] C. R. Aguayo González et al., "Open-Source SCA-Based Core Framework and Rapid Development Tools Enable Software-Defined Radio Education and Research," IEEE Commun. Mag., vol. 47, no. 10, Oct. 2009, pp. 48–55.
[6] R. Farrell, M. Sanchez, and G. Corley, "Software-Defined Radio Demonstrators: An Example and Future Trends," Hindawi Int'l. J. Digital Multimedia Broadcasting, 2009.
[7] G. J. Minden et al., "KUAR: A Flexible Software-Defined Radio Development Platform," Proc. IEEE DySPAN, Dublin, Ireland, Apr. 17–20, 2007.
[8] M. McHenry et al., "XG Dynamic Spectrum Access Field Test Results," IEEE Commun. Mag., vol. 45, no. 6, June 2007, pp. 51–57.
[9] FCC, "Second Report and Order and Memorandum Opinion and Order, in the Matter of ET Docket no. 08-260 and ET Docket no. 02-380," tech. rep., Nov. 14, 2008; http://hraunfoss.fcc.gov/edocs_public/attachmatch/FCC-08-260A1.pdf.
[10] S. K. Jones et al., "Evaluation of the Performance of Prototype TV-Band White Space Devices Phase II," FCC Tech. Rep., Oct. 15, 2008; http://hraunfoss.fcc.gov/edocs_public/attachmatch/DA-08-2243A3.pdf.
[11] Harris Wiltshire & Grannis LLP, "Ex Parte Filing in Response to FCC ET Docket Nos. 04-186, 02-380," Aug. 19, 2008, Philips, in the name of the White Space Coalition.
[12] K. E. Nolan and L. E. Doyle, "Teamwork and Collaboration in Cognitive Wireless Networks," IEEE Wireless Commun., vol. 14, no. 4, Aug. 2007, pp. 22–27.
[13] A. Tkachenko, D. Cabric, and R. W. Brodersen, "Cyclostationary Feature Detector Experiments Using Reconfigurable BEE2," Proc. IEEE DySPAN '07, Dublin, Ireland, Apr. 17–20, 2007.
[14] P. D. Sutton, K. E. Nolan, and L. E. Doyle, "Cyclostationary Signatures in Practical Cognitive Radio Applications," IEEE JSAC, vol. 26, no. 1, Jan. 2008, pp. 13–24.
[15] B. Farhang-Boroujeny and R. Kempter, "Multicarrier Communication Techniques for Spectrum Sensing and Communication in Cognitive Radios," IEEE Commun. Mag., vol. 46, no. 4, Apr. 2008, pp. 80–85.
[16] J. Park et al., "A Fully Integrated UHF-Band CMOS Receiver with Multi-Resolution Spectrum Sensing (MRSS) Functionality for IEEE 802.22 Cognitive Radio Applications," IEEE J. Solid-State Circuits, vol. 44, no. 1, Jan. 2009, pp. 258–68.
BIOGRAPHIES
PRZEMYSLAW PAWELCZAK [S'03, M'10] ([email protected]) received his M.Sc. degree from Wroclaw University of Technology, Poland, in 2004 and his Ph.D. degree from Delft University of Technology, The Netherlands. From 2004 to 2005 he was a staff member of the Siemens COM Software Development Center, Wroclaw, Poland. During fall 2007 he was a visiting scholar at the Connectivity Laboratory, University of California, Berkeley. Since 2009 he has been a postdoctoral researcher at the Cognitive Reconfigurable Embedded Systems Laboratory, University of California, Los Angeles. His research interests include cross-layer analysis of opportunistic spectrum access networks. He is a Vice-Chair of the IEEE SCC41 Standardization Committee. He was a coordinator and an organizing committee member of cognitive radio workshops collocated with IEEE ICC in 2007, 2008, and 2009. Since 2010 he has been a co-chair of the demonstration track of IEEE DySPAN. He was the recipient of the annual Telecom Prize for Best Ph.D. Student in Telecommunications in The Netherlands in 2008, awarded by the Dutch Royal Institute of Engineers.

KEITH NOLAN ([email protected]) received his Ph.D. degree in electronic engineering from the University of Dublin, Trinity College, Ireland, in 2005. He is a research fellow with the Telecommunications Research Centre (CTVR) at the University of Dublin, Trinity College. He has served as organizer, chair, and co-chair of demonstrations for IEEE DySPAN symposia, and on numerous TPCs for conferences concerning cognitive radio and dynamic spectrum access technologies. He currently serves on the management committee for COST Actions IC0902 and IC0905 (COST-TERRA), and is also a technical co-author of the IEEE P1900.1 standard.

LINDA DOYLE ([email protected]) is a member of faculty in the School of Engineering, University of Dublin, Trinity College. She is currently director of CTVR, a national research center that is headquartered in Trinity College and based in five other universities in Ireland. CTVR carries out industry-informed research in the area of telecommunications, and focuses on both wireless and optical communication systems. She is responsible for the direction of CTVR as well as running a large research group that is part of the center. Her research group focuses on cognitive radio, reconfigurable networks, spectrum management and telecommunications, and digital art.

SER WAH OH [SM] ([email protected]) obtained his B.Eng. from the University of Malaya, Malaysia, in 1996, and Ph.D. and M.B.A. degrees from Nanyang Technological University (NTU), Singapore, in 1999 and 2010, respectively. He is currently a research scientist and project manager at the Institute for Infocomm Research (I2R), Singapore. He oversees TV white space activities in I2R, and is currently looking into the application of TV white space to the smart grid. In 2008 he successfully led a team of researchers to contribute TV white space technologies to the field trial conducted by the U.S. FCC, which resulted in subsequent approval of TV white space in the United States. He was previously in charge of algorithm development for 3G WCDMA over a software-defined radio platform. At the same time, he also serves as technical adviser for Rohde & Schwarz and ComSOC Technologies. From 2005 to 2008 he concurrently held the position of adjunct assistant professor at NTU. Prior to I2R, he was a technical manager at STMicroelectronics in charge of teams in the Singapore and Beijing R&D Centers. He was responsible for 3G WCDMA and TD-SCDMA physical layer development. He is also a recipient of the 2010 Ernst & Young Cash Prize Award as the Top MBA Graduate, the 2009 Institution of Engineers Singapore Prestigious Engineering Achievement Award, and the IEEE ICT 2001 Paper Award. He has served as Demo Chair, Publicity Chair, and Track Chair, and on the TPCs for various conferences and seminars. He has published over 30 papers and several invited papers, and holds four U.S. patents with several pending.

DANIJELA CABRIC ([email protected]) received a Dipl. Ing. degree from the University of Belgrade, Serbia, in 1998 and an M.Sc. degree in electrical engineering from the University of California, Los Angeles, in 2001. She received her Ph.D. degree in electrical engineering from the University of California, Berkeley, in 2007, where she was a member of the Berkeley Wireless Research Center. In 2008 she joined the Faculty of Electrical Engineering at the University of California, Los Angeles as an assistant professor. Her key contributions involve novel radio architecture, signal processing, and networking techniques to implement spectrum-sensing functionality in cognitive radios. She has written three book chapters and over 25 major journal and conference papers in the fields of wireless communications, circuits, and embedded systems. She was awarded a Samueli Fellowship in 2008 and an Okawa Foundation research grant in 2009.
COGNITIVE RADIO NETWORKS
SpiderRadio: A Cognitive Radio Network with Commodity Hardware and Open Source Software
S. Sengupta, John Jay College of Criminal Justice
K. Hong, R. Chandramouli, and K. P. Subbalakshmi, Stevens Institute of Technology
ABSTRACT
In this article we present SpiderRadio, a cognitive radio prototype for dynamic spectrum access networking. SpiderRadio is built using commodity IEEE 802.11a/b/g hardware and the open source MadWiFi driver. This helps us develop and test our prototype without having to buy and manage several licensed spectrum bands. We begin with a discussion of the key research issues and challenges in the practical implementation of a dynamic spectrum access network. Then the lessons learned from the development of dynamic spectrum access protocols, the design of management frame structures, the software implementation of the dynamic spectrum access network protocol stack, and testbed experimental measurement results are presented. Several trade-offs between prototype implementation complexity and network performance are also discussed. We also identify potential security vulnerabilities in cognitive radio networks, specifically as applied to SpiderRadio, and point out some defense mechanisms against these vulnerabilities.
INTRODUCTION
Dynamic spectrum access (DSA) networking allows unlicensed users/devices ("secondary users") to opportunistically access licensed spectrum bands owned by "primary users," subject to certain spectrum etiquettes and regulations. It is expected that DSA will alleviate some of the radio spectrum scarcity problem. Cognitive radios (CRs) enable spectrum sensing, DSA, and dynamic spectrum management. A CR senses primary licensed bands and detects the presence or absence of primary users in these bands. Secondary users then release or occupy these bands depending on whether primary users are present in them. Details on definitions and regulatory aspects of CRs can be found in [1]. Practical implementation of a CR faces several challenges in terms of hardware design, software stack implementation, interfacing the CR
device with policy servers, and so on. Examples of some of these issues are the following:
• Synchronization: When two communicating CRs decide to move to a new band or channel, they must successfully synchronize with each other to resume communication. Therefore, protocols for accurate synchronization message (e.g., available channel list) exchange, resynchronizing in the new band as quickly as possible to prevent loss of data from upper layers, data buffering strategies during the synchronization process, planning for the situation when a new band is not available immediately, and so forth must all be considered.
• Hardware delays: Using open source software (as explained in later sections) may result in the radio hardware being reset and restarted during channel switching. This hardware reset process configures the medium access control (MAC) layer and adapts the radio to the potentially modified transmission and reception parameters in the new band. The reset/restart process causes significant delays during channel switching.
In this article we describe SpiderRadio: the setup and implementation of a software-driven CR using off-the-shelf IEEE 802.11a/b/g hardware supported by the Atheros chipset. The software abstraction hides the physical (PHY) layer details from the upper layers in the modified network protocol stack, as discussed later. The software abstraction layer is programmable and allows SpiderRadio to configure its transmission/reception parameters automatically to operate in any unused frequency band within the allowable spectrum bands. The implication of this feature is that SpiderRadio can be on several wireless networks at the same time, operating on different frequency bands. It can also be connected to an infrastructure-based wireless network and an ad hoc network simultaneously. This is in contrast to current radios, which can only be configured to operate statically in one frequency band, connecting to one network. Some general guiding principles are derived from our experience in SpiderRadio-based DSA testbed experiments.
RELATED WORK
The majority of current research in CR-enabled DSA focuses on theoretical aspects [2, 3, references therein], with relatively few attempts to build working prototypes. In [4] a software-defined CR prototype is developed that is able to sense spectrum in the UHF band based on waveform analysis, but no dynamic channel switching upon detection of primary devices is explored. A feature detector design for TV bands with emphasis on the PHY layer is presented in [5]. In [6] a CR network prototype is built based on a field-programmable gate array (FPGA), and a virtual sensing mechanism is developed. In [7] a CR prototype is built with off-the-shelf IEEE 802.11 devices for spectrum sensing. Primary incumbent detection based on counting PHY/cyclic redundancy check (CRC) errors is proposed. A primary device is emulated by a Rohde & Schwarz sine-wave signal generator, with IEEE 802.11 access cards operating as secondary devices. However, dynamic frequency switching upon detection of primary devices is not considered. The research in [8, 9] investigates adaptive channel width in wireless networks, focusing on spectrum assignment algorithms to handle spectrum variation and fragmentation. A limitation of most of the above-mentioned works is that they do not comprehensively address the major DSA requirements: fast physical switching, data loss at the time of physical switching, synchronization failure and overhead, and the hidden incumbent challenge. Synchronization failure between two secondary devices upon switching is fatal for secondary network data communication, as effective throughput drops drastically or, even worse, communication may be lost entirely. Thus, in this work we discuss the SpiderRadio prototype, which addresses algorithmic and implementation issues for sensing-based dynamic frequency switching and communication.
CR PROTOTYPE IMPLEMENTATION CHALLENGES
A CR prototype MAC will have many features similar to any existing standard MAC (e.g., IEEE 802.11 or IEEE 802.16). However, some distinguishing requirements of DSA make the implementation highly challenging. In DSA, when a CR node is switched on, it may follow an etiquette such as listen-before-talk, scanning all the channels to find out whether any incumbent in the interfering zone is using any particular channel, and building a spectrum usage report of vacant and used channels. Unlike existing single-frequency radio devices (which operate using only one static frequency), CR nodes need to discover their communication peers through extensive channel scanning and beacon broadcasting [10]. Once a CR node locates broadcasts from communicating peers, it tunes to that frequency and transmits back in the uplink direction with the radio node identifier. Authentication and connection registration are then done gradually. Due to such an
extensive connection establishment procedure at the beginning, when the number of candidate frequency channels is large, the initial neighbor discovery process is likely to become highly time-consuming. The available channel list may also change randomly due to the random arrival/departure of primary users in these bands. Unless the MAC layers of the communicating CR nodes proactively synchronize in a different band within a certain delay threshold, network connectivity may be lost. If network connectivity is lost, the nodes must go through the highly time-consuming neighbor discovery process repeatedly. An efficient and robust synchronization mechanism is thus crucial. Another challenge for the nodes is how to exchange the list of currently available channels and the channel on which they will resume communication upon detection of a primary in the current operating channel. Upon a successful channel switch and synchronization, the wireless card must reconfigure itself to the new frequency channel; thus, it needs to stop the data flow from the upper layers. This operation adversely affects performance at the higher layers, degrading data throughput unless some remedial actions are taken to enhance the DSA MAC. Note that, despite these challenges, dynamic channel switching must still be simple, with the goals of fast switching, reduced synchronization failure, reduced synchronization overhead, and increased effective throughput.
SPIDERRADIO SYSTEM DESIGN
As discussed before, the SpiderRadio prototype is based on IEEE 802.11a/b/g wireless access cards built with Atheros chipsets. The building block for the software stack (Fig. 1) is the MadWiFi driver (http://madwifi-project.org/). MadWiFi contains three sublayers: the IEEE 802.11 MAC layer, the wrapper (an interface to the lower layer) of the Atheros Hardware Abstraction Layer (HAL), and the Atheros HAL itself (the only closed-source component). For SpiderRadio, the IEEE 802.11 MAC layer is modified to speed up and increase the reliability of channel switching, while the wrapper of the Atheros HAL is modified to build special hardware queues for the prototype.
MODIFICATIONS TO ATHEROS HAL WRAPPER
We propose and implement two special hardware queues that become active whenever a dynamic channel switching action needs to be triggered. The first hardware queue is the synchronization queue (sync queue), which is used for transmitting synchronization management frames only. The synchronization management frames are special-purpose frames used for synchronization between the communicating nodes at the time of switching. Whenever either of the two communicating CR nodes senses the need for channel switching (the initiator node), it enables the sync queue and transmits the channel switching request management frame from the sync queue alongside the ongoing data communication. Inside the synchronization management
frame, we pack the destination channel information (the candidate frequency channel(s), i.e., the channel(s) to which the CR nodes intend to switch upon vacating the current channel). The second hardware queue, the data buffer queue, is enabled when the communicating CR nodes are physically switching channels and the MAC on both nodes is being configured with the transmission and reception parameters of the new frequency band. With the data buffer queue enabled, we allocate local memory for temporarily buffering data from the upper layer, so that no upper layer data is lost and the switching scheme has no adverse effect. These modifications are implemented within the MadWiFi driver in such a way that dynamic channel switching in the PHY/MAC layer is hidden from the upper layers, not affecting the upper layer functionalities at all, thus creating smooth, seamless switching.
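To make the buffering behavior concrete, the following minimal C sketch shows how a driver might divert upper-layer frames into the data buffer queue while the hardware is retuning, then drain the queue once the switch completes. This is our illustration under stated assumptions, not the actual MadWiFi code: all identifiers (spider_tx, hw_transmit, and so on) are hypothetical.

#include <stdbool.h>
#include <stddef.h>

#define BUF_Q_DEPTH 256

struct frame { void *data; size_t len; };

extern int hw_transmit(struct frame *f);    /* provided by the HAL (assumed) */

/* Hypothetical data buffer queue: a ring buffer that absorbs upper-layer
 * frames while the radio is being reconfigured on the new channel. */
static struct {
    struct frame slots[BUF_Q_DEPTH];
    int head, tail, count;
} buf_q;

static bool switching_in_progress;          /* set when a switch is triggered */

/* Transmit path: hold frames instead of dropping them during a switch. */
int spider_tx(struct frame *f)
{
    if (switching_in_progress) {
        if (buf_q.count == BUF_Q_DEPTH)
            return -1;                      /* queue full: caller must retry */
        buf_q.slots[buf_q.tail] = *f;
        buf_q.tail = (buf_q.tail + 1) % BUF_Q_DEPTH;
        buf_q.count++;
        return 0;                           /* frame buffered, not lost */
    }
    return hw_transmit(f);                  /* normal path */
}

/* Called once the MAC is reconfigured on the new channel. */
void spider_switch_done(void)
{
    switching_in_progress = false;
    while (buf_q.count > 0) {               /* drain in FIFO order */
        hw_transmit(&buf_q.slots[buf_q.head]);
        buf_q.head = (buf_q.head + 1) % BUF_Q_DEPTH;
        buf_q.count--;
    }
}

Because the queue is drained only after the MAC is reconfigured, the upper layers never observe the retuning interval as packet loss, which is the "seamless switching" property described above.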
[Figure 1. Proposed protocol stack for SpiderRadio: the upper layer (TCP/IP); the 802.11 MAC layer (modified for processing sync frames); the Atheros HAL wrapper (modified for the sync frame queue and data buffer queue); and the Atheros hardware abstraction layer (together forming the modified MadWiFi driver), all sitting above the Atheros IEEE 802.11a/b/g wireless interface card.]
EXTENDED MANAGEMENT FRAME STRUCTURE
We use an extended management frame for dynamic channel switching and synchronization between two SpiderRadio nodes. To explain the new management frame, we begin with a discussion of the standard IEEE 802.11 MAC frame structure and MAC header [11]. In the IEEE 802.11 MAC header, a 2-bit type field indicates whether the frame is a control, management, or data frame, while a 4-bit subtype field indicates different subtypes of frames under one particular frame type. For example, with the type field set for a management frame, there can be 16 different subtypes of management frames. Ten subtypes of management frames are already defined in IEEE 802.11: beacon, probe request, probe response, association request, association response, re-association request, re-association response, disassociation, authentication, and de-authentication. Six more subtypes of 802.11 MAC management frames could be defined, of which we use one for channel switching and synchronization. Under this subtype, four extended subtypes (signifying switching request, switching response, confirmation request, and confirmation response frames) are defined. Identification for these extended subtypes is carried in the first two bytes of the frame body. The necessity and detailed usage of these four extended management frames are explained later. In Fig. 2, the structure of the switching request/response frame is shown. A 2-byte subtype identification field indicates this as a channel switch request/response frame. In the United States, there are three non-overlapping channels in IEEE 802.11g and 13 non-overlapping channels in IEEE 802.11a; a 2-byte destination channel bitmap is enough to cover all these 16 channels. For bitmapping more channels, the 2-byte destination channel bitmap can be extended. The 8-byte timestamp indicates the time when the request frame is prepared for transmission. Similar to the switching request and response frames, the confirm management frames are auxiliary synchronization management frames. A node receiving a confirmation request packet will
compare the current channel information from the confirmation request packet with the channel it is currently operating on. If they are the same, this node will copy the current channel and confirmation count fields to the confirmation response frame bit by bit and send it back to the transmitter.
BITMAP CHANNEL VECTOR
In order to address the hidden incumbent problem [12], we embed the candidate channels information inside the channel switching request management frame, instead of having the initiator CR node attempt to convey a channel switching request using only one frequency channel's information. The number of candidate channels is updated dynamically by the initiator node depending on the feedback received from the receiving CR node. The reason for transmitting a synchronization message with multiple candidate frequencies is that even if the receiving CR node encounters a licensed incumbent transmission (hidden from the initiator CR node), it still has ways to choose other candidate channel(s) and report this incumbent transmission to the initiator using a similar management frame called the channel switch response management frame. With this mechanism, even in the presence of a hidden incumbent, the risk of synchronization failure is reduced significantly. Figure 2 shows the proposed channel switching request management frame structure in detail. Recall that there are three non-overlapping channels in 802.11g and 13 non-overlapping channels in 802.11a, which are emulated as primary bands in our testbed experiments. Clearly, with primary devices dynamically accessing the bands, the availability of the spectrum bands for SpiderRadio nodes changes dynamically. Since we use multiple candidate frequency channels sent by
[Figure 2. Detailed structure of the switching request/response frame. Frame control field (bits): protocol version (2), type (2), subtype (4), to DS (1), from DS (1), more frag (1), retry (1), pwr mgt (1), more data (1), WEP (1), rsvd (1). 802.11 MAC header (bytes): frame control (2), duration ID (2), address 1 (6), address 2 (6), address 3 (6), sequence control (2), address 4 (6), frame body (0-2312), CRC (4). Frame body of the switching request/response frame (bytes): subtype identification (2), destination channel bitmap for a request or destination channel for a response (2), final timeout switching time (8), timestamp (8). Destination channel bitmap vector (request frame): bit positions 15 down to 0 correspond to channels 165, 161, 157, 153, 149, 64, 60, 56, 52, 48, 44, 40, 36, 11, 6, and 1 (5825, 5805, 5785, 5765, 5745, 5320, 5300, 5280, 5260, 5240, 5220, 5200, 5180, 2457, 2437, and 2412 MHz, respectively).]
initiator nodes, embedding the absolute information (spectrum band frequency) of candidate frequency channels would again invoke the challenge of a variable-length management frame, which may take more time to decode. To solve this issue, we use a bitmap channel vector for sending the candidate channel(s) information. Since we have 16 non-overlapping channels, we implement this bitmap channel vector as 2 bytes in the MAC payload, as shown in Fig. 2, thus mapping the availability of each channel to a single bit. When a channel is available (a candidate), the corresponding bit is set to 1; otherwise, it is set to 0. Note that the advantage of using a bitmap channel vector for transmitting candidate channel information is that a fixed-length management frame can be used even though the channel availability information is of variable length. The fixed-length bitmap channel vector is easy and quick to decode. Moreover, with the use of a bitmap channel vector, the management frame becomes easily scalable: if there are more than 16 non-overlapping channels in a system, we only need to expand the programmable bitmap vector field for that system. Following the destination channel bitmap vector field, the next field signifies the final timeout switching time from the initiator's perspective. This field indicates when the initiator node will time out of the current synchronization mechanism (if no synchronization can be established, i.e., even after multiple switching requests no response frame is received from the other communicating CR node), vacate the current channel, and start the resynchronization attempt through quick probing following the destination (candidate) channel bitmap vector.
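To make the frame layout and bitmap handling concrete, here is a minimal C sketch assuming the field sizes of Fig. 2; the structure name and helper functions are ours, and byte packing/endianness details are deliberately omitted.

#include <stdint.h>

/* Frame body of the switching request/response frame (2 + 2 + 8 + 8 bytes,
 * per Fig. 2). A real driver would add explicit packing directives. */
struct switch_frame_body {
    uint16_t subtype_id;        /* extended subtype: request/response/confirm */
    uint16_t channel_bitmap;    /* request: candidate bitmap; response: channel */
    uint64_t final_timeout;     /* final timeout switching time */
    uint64_t timestamp;         /* when the frame was prepared for transmission */
};

/* Bit 0 corresponds to channel 1 (2412 MHz), up through bit 15 for
 * channel 165 (5825 MHz), following the mapping in Fig. 2. */
static void mark_candidate(uint16_t *bitmap, int bit_pos)
{
    *bitmap |= (uint16_t)(1u << bit_pos);   /* channel becomes a candidate */
}

static int is_candidate(uint16_t bitmap, int bit_pos)
{
    return (bitmap >> bit_pos) & 1u;        /* 1 if the channel is available */
}

Because the vector always occupies exactly 2 bytes, the receiver can decode it with one shift-and-mask per channel, which is what keeps the frame fixed-length and fast to parse.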
DYNAMIC FREQUENCY SWITCHING IMPLEMENTATION
Two SpiderRadio secondary devices communicating with each other on a frequency channel must vacate the channel upon detecting the arrival of a primary device (or for coexistence) on that particular channel, and must switch to a new channel to resume communication. To enable efficient spectrum switching, each node maintains a spectrum usage report in a local spectrum usage report database (SURD), which keeps track of the bands occupied by primary users and the available open spectrum bands. When a channel switching event is triggered, the secondary devices have three requirements, listed below.
[Figure 3. a) Dynamic channel switching with the switching request management frame: while data communication is ongoing on the old channel, the initiator sends the channel switch request frame, both initiator and receiver switch, and data communication resumes on the new channel; b) synchronization failure probability (percent) for version 1 against network traffic congestion (3000, 2000, 1000, 500, and 10 kbytes/s).]
• Switch as fast as possible to minimize wasted time and resume data communication quickly.
• Switch successfully to reduce synchronization failures, so that the nodes do not end up on different channels and lose communication.
• Keep the synchronization overhead as small as possible to maximize effective data throughput.
With the above goals in mind, we next discuss the implementation of three gradually improving versions of channel switching protocols for SpiderRadio, in increasing order of complexity. Note that each version is more robust than its predecessor, at the cost of additional complexity and overhead.
DYNAMIC FREQUENCY SWITCHING: VERSION 1
In version 1, two SpiderRadio nodes communicating on a frequency channel, upon detecting primary user activity on this particular channel, trigger a frequency switching procedure and move to a new channel. The dynamic frequency switching procedure is initiated by one of the SpiderRadio nodes, which transmits a channel switching request management frame to the other node for synchronization and then moves to the new channel. (The channel switching request management frame is explained in detail above.) The node initiating the channel switching request management frame is called the Initiator SpiderRadio, while the other node is called the Receiver SpiderRadio. The Receiver SpiderRadio, upon receiving the channel switching request management frame, switches to the new channel indicated in the payload of the management frame and resynchronizes with the Initiator. In Fig. 3a we present the synchronization procedure based on the channel switching request management frame. The advantage of this method is its simplicity and reduced overhead. Moreover, dynamic synchronization is possible with the initiation of a channel switch request management frame carrying
the new channel information, thereby making channel switching quite fast. However, this method also has drawbacks in terms of robustness, as follows:
• As synchronization depends heavily on the channel switch request management frame, if this frame is lost there is a high probability of synchronization failure: the initiator would end up on the new channel while the receiver remains on the old channel, resulting in the loss of communication.
• As the initiator initiates the channel switch request management frame, the new channel information (to which the nodes would move) is inserted in this frame by the initiator from its local SURD. The problem with such a protocol is that the receiver may have a primary device operating on the new frequency channel in its vicinity, of which the initiator has no information in its local SURD. As a result, the initiator would again end up on the new channel while the receiver remains on the old channel, again resulting in the loss of communication.
• Another key issue of this method is the determination of the initiator node. If both communicating SpiderRadio nodes detect the arrival of a primary device and simultaneously initiate channel switch request management frames with new channel information from their local SURDs, there is a probability of synchronization failure. As both SpiderRadio nodes now act as initiators without knowing the status of the other node, both will move to their specified new channels. Unless the new channels selected by both initiators are the same, synchronization failure is bound to happen.
In Fig. 3b we present the experimental synchronization failure probability of version 1. We find that as the network traffic decreases, the synchronization failure probability decreases as well. However, as observed from Fig. 3b, the synchronization failure probability in version 1 is very high, making this version a poor choice for SpiderRadio.
[Figure 4. a) Synchronization failure probability (percent) with version 2; b) channel switching request frame loss probability (percent) with version 2, both against network traffic congestion (3000, 2000, 1000, 500, and 10 kbytes/s).]
DYNAMIC FREQUENCY SWITCHING: VERSION 2
In this method, synchronization no longer depends on the channel switching request management frame alone. We introduce another management frame, the channel switch response management frame (discussed in detail previously). The synchronization protocol in version 2 is summarized as follows.
Step 1: The initiator sends a channel switching request management frame carrying information on the new channel(s).
Step 2: Upon successful reception of the request frame, the responder transmits a channel switching response management frame carrying information about the agreed new channel back to the initiator as an acknowledgment; the responder then switches to the new channel.
Step 3: The initiator, after receiving the response frame, switches to the new channel.
As the initiator switches only after receiving the response management frame, the chance of synchronization failure is reduced significantly (Fig. 4a). Using version 2, we can also solve the problem of both nodes being initiators: if both SpiderRadio nodes initiate channel switch request management frames, the one with the earlier timestamp wins. The 8-byte timestamp field in the switching request management frame lets both nodes decide on the winner, and the other node automatically takes the role of responder. Even with this enhancement, version 2 has a drawback in terms of a higher channel switching request frame loss probability, as shown in Fig. 4b. This is because only one channel switching request frame is transmitted to initiate channel switching, making the switching request frame highly loss-prone.
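The timestamp tie-break described above can be expressed compactly; the sketch below is our interpretation, and the MAC-address comparison used to break an (unlikely) exact timestamp tie is an added assumption not described in the text.

#include <stdint.h>
#include <string.h>

enum role { ROLE_INITIATOR, ROLE_RESPONDER };

/* Both nodes sent switching requests: the earlier 8-byte timestamp wins
 * the initiator role; the other node falls back to responder. */
enum role resolve_dual_initiators(uint64_t my_ts, uint64_t peer_ts,
                                  const uint8_t my_mac[6],
                                  const uint8_t peer_mac[6])
{
    if (my_ts < peer_ts)
        return ROLE_INITIATOR;
    if (my_ts > peer_ts)
        return ROLE_RESPONDER;
    /* Exact tie: fall back to a deterministic MAC comparison (assumption). */
    return memcmp(my_mac, peer_mac, 6) < 0 ? ROLE_INITIATOR : ROLE_RESPONDER;
}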
DYNAMIC FREQUENCY SWITCHING: VERSION 3
To avoid synchronization failure due to the loss of the channel switching response management frame, we introduce two more types of management frames: the confirm request management frame and the confirm response management frame. In Fig. 5a we present the version 3 implementation of the dynamic channel switching protocol. The synchronization protocol in version 3 is summarized as follows.
Step 1: The initiator sends channel switching request management frame(s) carrying information on the new channel(s).
Step 2: Upon successful reception of the request frame, the responder sends a channel switching response management frame carrying information about the new channel back to the initiator as an acknowledgment; the responder then switches to the new channel.
Step 3: The initiator, after receiving the response frame, switches to the new channel.
Step 4: The responder monitors the data communication on the new channel.
Step 5: If no data communication is received from the initiator, the responder sends a confirm request management frame to the initiator.
Step 6: If the initiator is on the new channel, it sends a confirm response frame, and communication on the new channel resumes.
Step 7: If the responder does not receive a confirm response, it assumes that the initiator is still on the old channel, goes back to the old channel, and repeats the protocol if it is within the time threshold permitted by the primary device standard.
To make version 3 more robust, we also configure SpiderRadio so that the initiator sends multiple channel switching request management frames to reduce the loss probability of switching request frames.
[Figure 5. a) Dynamic frequency switching, version 3: (1) the initiator sends the channel switch request frame on the old channel; (2) the responder returns the channel switch response frame; (3) both nodes switch; (4) normal data communication directly confirms the switch; (5) if no normal data is received, the responder sends a confirm request frame; (6) the initiator replies with a confirm response frame; (7) if confirmation fails, the responder goes back to the old channel and repeats the protocol; b) channel switching request frame loss probability (percent) with version 3 against network traffic congestion (3000, 2000, 1000, 500, and 10 kbytes/s).]
In our experiment we program the initiator to transmit three switching request management frames. The result is presented in Fig. 5b, which shows a very low loss probability for switching request frames, making version 3 highly robust. From our experiments, we also compare the synchronization failure probabilities of all three versions. The synchronization failure probability for version 3, even under very high network traffic congestion (approximately 3000 kbytes/s), turns out to be almost negligible (0.0050 percent) compared to versions 1 and 2.
TESTBED SETUP AND EXPERIMENTAL RESULTS
For conducting extensive experiments with SpiderRadio-enabled nodes, we built two groups of SpiderRadio prototypes, one for indoor testing and the other for outdoor testing. Each node of the indoor group is a standard desktop PC running the Linux 2.6 operating system. They were all equipped with Orinoco 802.11a/b/g PCMCIA wireless cards. Since there is no PCMCIA slot on a desktop PC, we use an ENE CB1410 PCMCIA-to-PCI adapter card to allow the PCMCIA devices to operate in the desktop PC. In the outdoor group, SpiderRadio is deployed on two laptops running the Linux 2.6 operating system: a Compaq NC4010 and a Dell Inspiron 700m. Both were equipped with Orinoco 802.11a/b/g PCMCIA wireless cards. The TX powers of these wireless devices were set to 100 mW. Another laptop running Windows Vista and equipped with Wi-Spy 2.4x acted as a monitor in the testbed. These Orinoco devices are equipped with Atheros 5212 (802.11a/b/g) chipsets. For our testbed setup, the primary user bands were emulated using the 900 MHz, 2.4 GHz, and 5.1 GHz Wi-Fi spectrum bands. The primary user communication was emulated
in two ways: two cordless phones communicating with each other using the intercom feature, and an Agilent E4437B signal generator operating in the Wi-Fi bands. The SpiderRadio node was configured as the secondary user device for the experiments. For the purpose of sensing and detecting the arrival of the primary user, we implemented the spectrum sensing methodology based on observed PHY errors, received signal strengths, and an n-moving window strategy, as proposed in our earlier work [13]. We placed SpiderRadio nodes at a distance of 5-20 m from each other, communicating with TCP data streams. We carried out experiments under five different network traffic congestion scenarios (3 Mbytes/s, 2 Mbytes/s, 1 Mbyte/s, 0.5 Mbytes/s, and 10 kbytes/s) during day and night. Note that since the testbed is located in Hoboken, New Jersey (in close proximity to Manhattan, New York City), the radio interference differs significantly between day and night. Interference due to students using the Stevens campus wireless network also varies significantly between night and day. In Fig. 6a we present the average time to synchronize under all five network traffic congestion scenarios. To show the effectiveness of version 3 with three switch request management frames, we compare it with a simpler version 3 in which only one switch request management frame is transmitted. The comparison result is shown in Fig. 6a. The first observation from this plot is that when the network traffic congestion decreases, the average time to synchronize also decreases for both mechanisms. The more interesting observation is that with higher network traffic, version 3 with three management frames performs much better than version 3 with one management frame. The difference in performance decreases gradually with decreasing network traffic congestion. At the lowest network traffic congestion (i.e., 10 kbytes/s), version 3 with one management frame performs better than version 3 with three management frames. This is because, with very low network traffic, the loss probability of a management frame is very low, so redundant management frames are no longer needed. Thus, it can be concluded that at night (or whenever network traffic congestion is very low or almost zero), version 3 with one management frame might be a better choice than three management frames, reducing overhead for the same performance.
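As a rough sketch of the n-moving-window idea (in the spirit of [13], but with an illustrative window size and threshold rather than the values used there), the detector below averages the PHY error count over the last N observation windows and triggers a switch when the average crosses a threshold.

#define N_WINDOWS 8             /* illustrative window count */

static unsigned err_hist[N_WINDOWS];
static int hist_idx;

/* Feed one observation window's PHY error count; returns 1 when the
 * moving average suggests a primary user has appeared. */
int primary_detected(unsigned phy_errors, unsigned threshold)
{
    unsigned sum = 0;
    int i;

    err_hist[hist_idx] = phy_errors;
    hist_idx = (hist_idx + 1) % N_WINDOWS;  /* slide the window */

    for (i = 0; i < N_WINDOWS; i++)
        sum += err_hist[i];

    return (sum / N_WINDOWS) > threshold;   /* 1 = trigger channel switch */
}

Averaging over several windows smooths out bursty interference, which matters in an environment like ours where background traffic varies strongly between day and night.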
[Figure 6. a) Average time to synchronize (ms) for version 3 with three management frames vs. version 3 with one management frame, against network traffic congestion (3000, 2000, 1000, 500, and 10 kbytes/s); b) average effective throughput (Mbytes/s) with and without switching, for switching intervals of 1, 2, 3, 5, and 10 s.]
The effective throughput is shown in Fig. 6b. The results are shown for different switching intervals (1, 2, 3, 5, and 10 s). For benchmarking purposes, we calculate the ideal maximum throughput achievable under the same operating environment and conditions without any frequency switching. The dotted line in the figure depicts this maximum possible throughput (3.353 Mbytes/s, the benchmark). As evident from the figure, the proposed CR system demonstrates high throughput even with very frequent switching; and, as expected, with less frequent switching (every 5 or 10 s) the achieved throughput is almost the same as the benchmark throughput, demonstrating the effectiveness of the proposed CR prototype.
FUTURE DIRECTIONS: SECURING THE SPIDERRADIO
Recent research in the area of CR security [14] has underlined the need to consider security in the design stages of the CR network (CRN). Several flavors of denial-of-service (DoS) attacks can be launched on the CRN if the architecture and protocols are not designed specifically to avoid these problems. In keeping with that spirit, we discuss some potential security issues SpiderRadio will need to address and some potential solutions to these problems. Since the primary differentiating factor between wireless networks and CRNs is the need to sense and switch between spectrum
bands, most of the unique security threats to CRNs come from these two functionalities. We focus on the vulnerabilities existing in the switching/synchronization functionality of CRNs, rather than those that occur in sensing [15]. Earlier, several protocols for resynchronization were proposed for SpiderRadio. All these protocols assume that the channel to which the Initiator and Receiver SpiderRadios must switch (the rendezvous channel) has already been determined. A malicious node intent on jamming or desynchronizing communication between these nodes can do so with minimal resource expenditure by tracking the two communicating nodes and successively jamming these channels. To prevent this type of DoS, the security of the rendezvous sequence must be guaranteed at least to some extent. Several solutions will have to be combined to achieve this goal. These include a secure pseudo-random rendezvous sequence with meaningful convergence guarantees (to ensure fast channel synchronization), and efficient cryptographic authentication of the switching request frame, the response management frame (in version 2 of the protocol), and the confirmation frames (version 3). All solutions will have to be optimized for time to convergence.
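One way to realize the secure pseudo-random rendezvous sequence suggested above is sketched below: both nodes derive the next candidate channel from a shared secret and a slot counter, so an attacker who does not hold the key cannot predict and pre-jam the sequence. The toy mixing function stands in for a proper keyed PRF (e.g., an HMAC), which a real design would use; all names here are ours.

#include <stdint.h>

#define NUM_CHANNELS 16

/* Toy key/slot mixer; a deployment would substitute a keyed PRF. */
static uint32_t mix(uint64_t key, uint64_t slot)
{
    uint64_t x = key ^ (slot * 0x9E3779B97F4A7C15ULL);
    x ^= x >> 33;
    x *= 0xFF51AFD7ED558CCDULL;
    x ^= x >> 33;
    return (uint32_t)x;
}

/* Both nodes call this with the same key and slot counter, so they agree
 * on the rendezvous channel without announcing it over the air. */
int next_rendezvous_channel(uint64_t shared_key, uint64_t slot,
                            uint16_t available_bitmap)
{
    int tries;

    for (tries = 0; tries < NUM_CHANNELS; tries++) {
        int c = (int)(mix(shared_key, slot + tries) % NUM_CHANNELS);
        if ((available_bitmap >> c) & 1u)
            return c;           /* first available channel in the sequence */
    }
    return -1;                  /* no channel currently available */
}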
CONCLUSIONS
SpiderRadio's software abstraction-based implementation platform at the MAC layer efficiently hides the PHY layer details from the higher layers in the network protocol stack. The special-purpose queues built into the stack help alleviate higher-layer packet losses during dynamic channel switching. The three versions of the proposed dynamic channel switching protocols gracefully trade off complexity for achievable throughput. These protocols also achieve fast channel switching with a negligible synchronization failure rate between the transmitter and the receiver. The empirical throughput observed in
testbed experiments for different channel switching intervals is close to the ideal throughput achievable without any channel switching. This implies that the channel switching protocol and its implementation in SpiderRadio are fast enough for practical dynamic spectrum access networking applications.
ACKNOWLEDGMENT
This work is supported by grants from the National Institute of Justice #2009-92667-NJ-IJ, the National Science Foundation #0916180, and PSC-CUNY Award #60079-40 41.
REFERENCES
[1] I. F. Akyildiz et al., "Next Generation/Dynamic Spectrum Access/Cognitive Radio Wireless Networks: A Survey," Comp. Net., vol. 50, no. 13, 2006, pp. 2127–59.
[2] H. Salameh, M. Krunz, and O. Younis, "MAC Protocol for Opportunistic Cognitive Radio Networks with Soft Guarantees," IEEE Trans. Mobile Computing, vol. 8, no. 10, Oct. 2009, pp. 1339–52.
[3] S.-Y. Tu, K.-C. Chen, and R. Prasad, "Spectrum Sensing of OFDMA Systems for Cognitive Radio Networks," IEEE Trans. Vehic. Tech., vol. 58, no. 7, Sept. 2009, pp. 3410–25.
[4] H. Harada, "A Software Defined Cognitive Radio Prototype," IEEE 18th Int'l. Symp. Pers., Indoor and Mobile Radio Commun. (PIMRC), 2007, pp. 1–5.
[5] R. DeGroot et al., "A Cognitive-Enabled Experimental System," IEEE DySPAN, 2005, pp. 556–61.
[6] Y. Yuan et al., "KNOWS: Cognitive Radio Networks over White Spaces," IEEE DySPAN, Apr. 2007, pp. 416–27.
[7] K. Shin et al., "An Experimental Approach to Spectrum Sensing in Cognitive Radio Networks with Off-the-Shelf IEEE 802.11 Devices," 4th IEEE CCNC, Jan. 2007, pp. 1154–58.
[8] P. Bahl et al., "White Space Networking with Wi-Fi Like Connectivity," Proc. ACM SIGCOMM 2009 Conf. Data Commun., 2009, pp. 27–38.
[9] R. Chandra et al., "A Case for Adapting Channel Width in Wireless Networks," Proc. ACM SIGCOMM 2008 Conf. Data Commun., 2008, pp. 135–46.
[10] C. Cordeiro et al., "IEEE 802.22: The First Worldwide Wireless Standard Based on Cognitive Radios," IEEE DySPAN, 2005, pp. 328–37.
[11] "IEEE Standard for Information Technology — Telecommunications and Information Exchange between Systems — Local and Metropolitan Area Networks — Specific Requirements — Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications," Mar. 2007.
[12] S. Sengupta et al., "Enhancements to Cognitive Radio Based IEEE 802.22 Air-Interface," IEEE ICC, 2007, pp. 5155–60.
[13] K. Hong, S. Sengupta, and R. Chandramouli, "SpiderRadio: An Incumbent Sensing Implementation for Cognitive Radio Networking Using IEEE 802.11 Devices," IEEE ICC, 2010.
[14] F. Granelli et al., "Standardization and Research in Cognitive and Dynamic Spectrum Access Networks: IEEE SCC41 Efforts and Open Issues," IEEE Commun. Mag., Jan. 2010.
[15] G. Jakimoski and K. Subbalakshmi, "Towards Secure Spectrum Decision," IEEE Int'l. Conf. Commun., Symp. Sel. Areas Commun., June 2009.
BIOGRAPHIES
SHAMIK SENGUPTA [M] (
[email protected]) is an assistant professor in the Department of Mathematics and Computer Science, John Jay College of Criminal Justice of the City University of New York. He received his B.E. degree (first class honors) in computer science from Jadavpur University, India, in 2002 and his Ph.D. degree from the School of Electrical Engineering and Computer Science, University of Central Florida, Orlando, in 2007. His research interests include cognitive radio, dynamic spectrum access, game theory, security in wireless networking, and wireless sensor networking. He serves on the organizing and technical program committees of several IEEE conferences. He is the recipient of an IEEE GLOBECOM 2008 Best Paper Award.
KAI HONG (
[email protected]) received his B.S. degree in automatic control from Beijing Institute of Technology in 2004 and his M.S. degree in automatic control from the same institution in 2007. He is currently a Ph.D. candidate in computer engineering at Stevens Institute of Technology, where he is a member of the Media Security, Networking, and Communications (MSyNC) Laboratory. His research focus is in the areas of cognitive radio, dynamic spectrum access networks, and wireless security.
R. CHANDRAMOULI [M] (
[email protected]) is a professor in the Electrical and Computer Engineering (ECE) Department at Stevens Institute of Technology. His research in wireless networking, cognitive radio networks, wireless security, steganography/steganalysis, and applied probability is funded by the NSF, U.S. AFRL, U.S. Army, ONR, and industry. He served as an Associate Editor for IEEE Transactions on Circuits and Systems for Video Technology (2000–2005). Currently, he is the Founding Chair of the IEEE ComSoc Technical Sub-Committee on Cognitive Networks, Technical Program Vice Chair of the IEEE Consumer Communications and Networking Conference (2007), and Chair of the Mobile Multimedia Networking Special Interest Group of the IEEE Multimedia Communications Technical Committee. K. P. (SUBA) SUBBALAKSHMI [M] (
[email protected]) is an associate professor in the ECE Department at Stevens Institute of Technology. Her research interests lie in cognitive radio networks, wireless security, and information forensics and security. Her research is supported by grants from the U.S. National Science Foundation, the National Institute of Justice, and other Department of Defense agencies. She serves as Chair of the Security Special Interest Group of IEEE ComSoc's Multimedia Technical Committee. She has given tutorials on cognitive radio security at several IEEE conferences and has served as Guest Editor for several IEEE special issues in her areas of interest.
SERIES EDITORIAL
FUTURE MEDIA INTERNET
Theodore Zahariadis
Giovanni Pau
Gonzalo Camarillo

The Internet has become the most important medium for information exchange and the core communication environment for business relations as well as for social interactions. Every day millions of people all over the world use the Internet for a plethora of daily activities including searching, information access and exchange, multimedia communications enjoyment, buying and selling goods, and keeping in touch with family and friends, just to name a few. Statistics show (Fig. 1) that Internet usage has achieved a penetration of 77.4 percent in North America, 61.3 percent in Australia, 58.4 percent in Europe, and an average of 28.7 percent of the total worldwide population as of 2010 [1]. This corresponds to more than a fourfold increase (444 percent, to be exact) over a period of 10 years. If we consider that Asia and Africa together account for more than 70 percent of the world's population and currently have the lowest penetration rates, with much room to grow as their economies develop, there is no doubt that many more people will acquire Internet access over the next 10 years. Moreover, it is a common belief that besides growing, the Internet is evolving toward even richer and more immersive experiences. Advances in video capturing and creation will lead to massive creation of new (user-generated) multimedia content and Internet applications, including 3D videos, immersive environments, network gaming, and virtual worlds. Overall Internet traffic is expected to reach an average of 767 exabytes (1 exabyte = 10^18 bytes = 1000 petabytes = 1 million terabytes) per year in the period 2010–2014, four times the amount of data currently circulating in the Internet [2]. To get an idea of the volume this represents, it is the equivalent of 12 billion DVDs transferred over the Internet every month at the end of the forecast period. In this respect, the future media Internet will not simply be a faster way to go online. The increasing flood of traffic and the new communication needs will pose many challenges to the network infrastructure. Overdimensioning (adding more powerful routers, more fiber, etc.) is only a temporary solution. At some point, structural changes will become necessary. While the extent of the architectural changes can be debated [3], it is not contested that essential elements of the current Internet infrastructure will need to be, to an extent, redesigned. Among the anticipated changes are new methods of content finding and streaming, the diffusion of heterogeneous nodes and devices, new forms of (3D) user-centric/user-generated content provisioning, the emergence of software as a service, and interaction with improved security, trustworthiness, and privacy. In this evolving environment, rich 3D content as well as community networks (peer-to-peer, overlays, and clouds) are expected to generate new models of interaction and cooperation, and to support new innovative applications, like virtual collaboration
environments, personalized services/media, virtual sport groups, online gaming, and edutainment. Scientists and engineers worldwide from industry, research centers, and universities are working toward the future media Internet architecture. In this special issue, we have striven to give an overview of the industrial and research community viewpoints by balancing the article selection to include the highest-quality papers from both arenas. We start the special issue with the article entitled "CURLING: Content-Ubiquitous Resolution and Delivery Infrastructure for Next-Generation Services" by Ning Wang et al. Apart from an overview of the most prominent content-centric network approaches, the article highlights some of the most important challenges in the future media Internet, such as security, quality of service, scalability, reliability, and network management, considered from a telecom operator perspective. Solutions based on content-oriented networks (CONs) are among the most prominent approaches toward the future media Internet. In comparison to IP networking, within a CON host identification is replaced by content identification, and the content file name is independent of the content file/segment location. In IP networking, a user should know which source server holds the content file of interest (spatial coupling) and communicate with that server throughout the content delivery (temporal coupling). To support this delivery method, search engines return as results to queries pointers to locations (URLs) rather than pointers to the content itself (one approach to decoupling search engine results from the content location is provided by the EC FP7 project COAST, "Content Aware Searching Retrieval and Streaming," http://www.coast-fp7.eu). In CONs, content generation and content consumption are decoupled in time and space, so that content is delivered based purely on its name (routing by name). Moreover, in IP networking a host address is irrelevant to its content name, which enables phishing and pharming attacks, while in CONs the authenticity of the contents can be easily verified. The next article, entitled "A Survey on Content-Oriented Networking for Efficient Content Delivery" by Jaeyoung Choi et al., presents a comprehensive survey on content naming and name-based routing, quantitatively compares CON routing proposals, and evaluates the impact of the publish/subscribe paradigm and in-network caching. Most Internet traffic is due to video content; thus, efficient video coding and streaming is significant. In contrast to the conventional client-server model, in peer-to-peer (P2P) distribution models video is delivered to the end users not directly from the server but in a fully distributed fashion, by converting users into content redistributors. This may result in major economies of scale, especially in cases of highly popular videos and proper selection of peer nodes.
[Figure 1. World Internet penetration rate, 2010 (source: Internet World Stats [1]): North America 77.4 percent; Oceania/Australia 61.3 percent; Europe 58.4 percent; Latin America/Caribbean 34.5 percent; Middle East 29.8 percent; Asia 21.5 percent; Africa 10.9 percent; world average 28.7 percent.]
Furthermore, content needs to be displayed on a variety of devices featuring different sizes, resolutions, computational capabilities, and Internet access. If video is encoded in a scalable way, it can be adapted to any required spatio-temporal resolution and quality in the compressed domain, according to peers' bandwidth and end-user context requirements. The next article, "Peer-to-Peer Streaming of Scalable Video in Future Internet Applications" by Toni Zgaljic et al., presents a fully scalable extension (scalable video coding, SVC) of the latest H.264/MPEG-4 AVC video coding standard, and describes successful experiments of streaming SVC-encoded videos over P2P networks. Many researchers envision that future networked media applications will be multisensory, multi-viewpoint, and multistreamed, relying on (ultra) high definition and 3D video. These applications will place unprecedented demands on networks for high-capacity, low-latency, and low-loss communication paths. The next article, "Improving End-to-End QoE via Close Cooperation between Applications and ISPs" by Bertrand Mathieu et al., advocates the development of intelligent cross-layer techniques that, on one hand, will mobilize network and user resources to provide network capacity where it is needed and, on the other hand, will ensure that applications adapt themselves and the content they are conveying to the available network resources. Aiming to improve the quality of experience (QoE) and optimize network traffic, the article presents an architecture based on cooperation among application providers, users, and communication networks. The high volumes of content create specific needs for efficient information mining and content retrieval. In the near future, search engines should respond to a user query not by just finding the most popular content, but by finding what the user is actually seeking. As such, personalization and contextual issues should be taken into account. The next article, "System Architecture for Enriched Semantic Personalized Media Search and Retrieval in the Future Media Internet" by María Alduán et al., describes a system architecture that handles, processes, delivers, and finds digital media by providing the methods to semantically describe contents with a multilingual-multimedia-multidomain ontology, annotate content against this ontology, process the content, and adapt it to the network and network status. The article presents the architecture and the modules' functionalities and procedures, including the system application model, relating them to the future media Internet concepts. Finally, in the future media Internet users will request new methods of communication and interaction, with much better QoE, well beyond today's communication forms. It is expected that voice over IP, videoconferencing, IPTV, email, instant messaging, and so
on will be complemented by virtual environments and 3D virtual worlds, where friends and colleagues will meet, chat, and interact in more natural ways. The last (but not least) article of this special issue, "Automatic Creation of 3D Environments from a Single Sketch Using Content-Centric Networks" by Theodoros Semertzidis et al., describes an innovative core application that provides an interface where the user sketches in 2D the scene of a virtual networked world, and the system dynamically exploits the similarity search and retrieval capabilities of the search-enabled content-centric network to fetch 3D models that are similar to the drawn 2D objects. The retrieved 3D models act as the building components for an automatically constructed 3D scene. Before we leave you to enjoy this special issue, as guest editors we would like to thank all the authors, who invested a lot of work in their valuable contributions, and all the reviewers, who dedicated their precious time to providing numerous comments and suggestions. Last but not least, we would also like to acknowledge the enlightening support of the Editor-in-Chief, Dr. Steve Gorshe, and the publications staff.
REFERENCES
[1] Internet World Stats, Internet usage statistics, www.internetworldstats.com/stats.html
[2] Cisco Visual Networking Index Forecast, 2010.
[3] T. Zahariadis, Ed., "Fundamental Limitations of Current Internet and the Path to Future Internet," European Commission Future Internet Architecture (FIArch) Experts Group, 2nd draft, Dec. 2010.
BIOGRAPHIES
THEODORE ZAHARIADIS [M] (
[email protected]) received his Ph.D. degree in electrical and computer engineering from the National Technical University of Athens, Greece, and his Dipl.-Ing. degree in computer engineering from the University of Patras, Greece. Currently he is an associate professor at the Technological Education Institution of Chalkida, Greece, and chief technical officer at Synelixis Solutions Ltd. From 1997 to 2003 he was with Lucent Technologies, first as technical consultant to ACT, Bell Labs, New Jersey, and subsequently as technical manager of Ellemedia Technologies, Athens, Greece, while from 2001 to 2006 he was also chief engineer at Hellenic Aerospace Industry. Since 1996 he has been involved in various EC-funded projects, and he currently chairs the EC Future Media Internet Architecture Think Tank (FMIA-TT) and the EC Future Internet Architecture (FIArch) Group. He is a member of the Technical Chamber of Greece and ACM. His current research interests are in the fields of broadband wireline/wireless/mobile communications, content-aware networks, and sensor networking. Since 2001 he has been a Technical Editor of IEEE Wireless Communications and has served as principal guest editor of many special issues of magazines and journals. GIOVANNI PAU (
[email protected]) is a research scientist at the Computer Science Department of the University of California, Los Angeles. He obtained a Laurea degree in computer science and a Ph.D. in computer engineering from the University of Bologna. He served as Vice Chair and Secretary of the ComSoc Multimedia Technical Committee and as its Vice Chair for North America. He served as Technical Program Committee Vice-Chair for the IEEE ICC '06 General Symposium, and as Technical Program Committee Co-Chair for the IEEE International Workshop on Networking Issues in Multimedia Entertainment (NIME '06) and NIME '04, both in conjunction with IEEE GLOBECOM. He is Associate Editor-in-Chief and co-founder of IEEE Communications Society Multimedia, and has been a Steering Committee member of the IFIP MedHocNet Mediterranean Ad Hoc Networking Workshop since 2002. He serves as Associate Editor of the Elsevier International Journal of Ad Hoc Networks and the Springer International Journal on Peer-to-Peer Systems. His current research interests include mobility, wireless multimedia, peer-to-peer and multimedia entertainment, vehicular networks, and future Internet architectures. GONZALO CAMARILLO (
[email protected]) works for Ericsson Research in Finland. He received M.Sc. degrees in electrical engineering from the Stockholm Royal Institute of Technology, Sweden, and Universidad Politecnica de Madrid, Spain. His research interests include signaling, multimedia applications, transport protocols, and networking architectures. He has authored a number of RFCs, books, patents, and scientific papers in these areas. He has co-authored, among other standards, the Session Initiation Protocol specification (RFC 3261). He has served on the Internet Architecture Board and has chaired a number of Internet Engineering Task Force (IETF) working groups. Currently, he is director of the Real-Time Applications and Infrastructures area at the IETF. He is also IETF liaison manager to the Third Generation Partnership Project.
FUTURE MEDIA INTERNET
CURLING: Content-Ubiquitous Resolution and Delivery Infrastructure for Next-Generation Services
Wei Koong Chai, University College London
Ning Wang, University of Surrey
Ioannis Psaras and George Pavlou, University College London
Chaojiong Wang, University of Surrey
Gerardo García de Blas and Francisco Javier Ramon Salguero, Telefónica Investigación y Desarrollo S.A.U.
Lei Liang, University of Surrey
Spiros Spirou, Intracom SA Telecom Solutions
Andrzej Beben, Warsaw University of Technology
Eleftheria Hadjioannou, PrimeTel PLC
ABSTRACT
CURLING, a Content-Ubiquitous Resolution and Delivery Infrastructure for Next Generation Services, aims to enable a future content-centric Internet that will overcome the current intrinsic constraints by efficiently diffusing media content of massive scale. It entails a holistic approach, supporting content manipulation capabilities that encompass the entire content life cycle, from content publication to content resolution and, finally, to content delivery. CURLING provides both content providers and customers with high flexibility in expressing their location preferences when publishing and requesting content, respectively, thanks to the proposed scoping and filtering functions. Content manipulation operations can be driven by a variety of factors, including business relationships between ISPs, local ISP policies, and specific content provider and customer preferences. Content resolution is also natively coupled with optimized content routing techniques that enable efficient unicast and multicast-based content delivery across the global Internet.
INTRODUCTION
The original Internet model focused mainly on connecting machines, whereby addresses point to physical end hosts, and routing protocols compute routes to specific destination endpoints. Nowadays the Internet is primarily used for transporting content/media, where a high volume of both user-generated and professional digital
content (webpages, movies/songs, live video streams, etc.) is delivered to users who are usually only interested in the content itself rather than the location of the content sources. Human needs, along with the nature of communication technologies, have transformed the Internet into a new content marketplace, generating revenue for various stakeholders. In fact, the Internet is rapidly becoming a superhighway for massive digital content dissemination. In this context, many researchers have advocated a transition of the Internet model from host-centric to content-centric, with various architectural approaches proposed [1–7]. Many of these proposals support the key feature of location independence, where content consumers do not obtain explicit location information (e.g., the IP address) of the targeted content source a priori, before issuing the consumption request [1–3, 5, 7]. Nevertheless, location requirements are sometimes still demanded by both content consumers and providers. On one hand, content providers may want their content accessed only by content consumers from a specific region (known as geo-blocking); examples include BBC iPlayer, Amazon Video-on-Demand, Apple iTunes Store, and Sina video services. On the other hand, content consumers may prefer content originating from specific regions of the Internet; for instance, a U.S.-based shopper may want to check the price of an item sold in Amazon online stores only in North America rather than anywhere else in the world. Today, this is typically achieved through the user's explicit input in the URL (e.g., Amazon.com
and Amazon.ca) and supported by name resolution through the standard Domain Name System (DNS) [8], with the relevant requests directed toward the specific regional web server. Similar practice can be observed in multimedia-based content access (e.g., video-on-demand services), where consumers have specific requirements regarding the location/area of content sources. In this article, we introduce a new Internet-based content manipulation infrastructure — CURLING: Content-Ubiquitous Resolution and Delivery Infrastructure for Next Generation Services. The objective is to both accurately and efficiently hit (or not hit) content objects in specific regions/areas of the Internet, based on user requirements and preferences. Such an approach, deployed by Internet service providers (ISPs), allows both content providers and consumers to express their location requirements when publishing/requesting content, thanks to the supported content scoping/filtering functions. In particular, instead of following the conventional DNS-like approach, where a content URL is translated into an explicit IP address pointing to the targeted content server, the proposed content resolution scheme is based on hop-by-hop gossip-like communication between dedicated content resolution server (CRS) entities residing in individual ISP networks. Content resolution operations can be driven by a variety of factors, including the business relationships among ISPs (provider/customer/peer), content consumer preferences, and local ISP policies. This resolution approach is natively coupled with content delivery processes (e.g., path setup), supporting both unicast and multicast functions. Specifically, a content consumer simply issues a single content consumption request message (capable of carrying his/her location preferences on the content source candidate(s)), and then individual CRS entities collaboratively resolve the content identifier in the request, in a hop-by-hop manner, toward the desired source. Upon receiving the request, the selected content source starts transmitting the requested content to the consumer. During this content resolution operation, multicast-like content states are installed along the resolution path so that the content flows back immediately upon completion of the resolution process. By exploiting multicast delivery techniques, we increase the sustainability of the system in view of the expected explosion of content in the Internet.
BUSINESS MODEL
We first present a basic business model that involves relevant stakeholders and their business interactions. The following top-level roles can be envisaged:
• Content providers (CPs): the entities that offer content to be accessed and consumed across the Internet. These include both commercial CPs and end users who publish their content in the Internet.
• Content consumers (CCs): the entities that consume content as receivers.
• ISPs: Equipped with the CURLING content-aware infrastructure, ISPs are responsible for dealing with content publication requests from CPs and content consumption requests from consumers, and for the actual delivery of the content, possibly with quality of service (QoS) awareness.
Figure 1. Business model.
Figure 1 shows the business interactions between the individual roles. Since CPs rely on the underlying content-aware infrastructure owned by ISPs, they are expected to establish a service level agreement (SLA), involving relevant payment to the ISP, for content publication services (CP-ISP SLA). In addition, since ISPs offer content searching/location and delivery services to CCs, a CC-ISP SLA can be established. Sometimes, CCs may need to pay CPs for consuming charged content (e.g., pay-per-view); this can be covered by the CC-CP SLA between the two. Finally, business contracts are also established between ISPs (ISP-ISP SLA), given a provider-customer or peering relationship between them. A low-tier ISP needs to pay its provider ISP not only for content traffic delivery, but also for delegated content publication/resolution services on behalf of its own customers, including directly attached CPs and consumers.
THE CURLING ARCHITECTURE
Our solution requires a form of aggregatable, sequentially orderable labels, which we refer to as content identifiers (IDs). Each content item to be published and accessed is allocated a globally unique content ID. Multiple copies of the same content that are physically stored at different sites in the Internet share one exclusive ID. Content manipulation operations rely on two distinct entities in the CURLING architecture:
• The content resolution server (CRS), which handles content publication requests, discovers the requested content, and supports content delivery
• The content-aware router (CaR), which collaborates with its local CRS(s) to enforce receiver-driven content delivery paths
At least one CRS entity is present in every domain for handling local publication requests and content consumption requests, and for interacting with other neighboring CRS entities for content publication/resolution across domains.
Figure 2. High-level architecture of the hop-by-hop hierarchical content resolution approach.
Both CPs and consumers are configured to know their local CRS. The number of CRSs in each domain depends on performance and resilience considerations. Figure 2 depicts the functional view of the CURLING architecture. The internal structure of the CRS entity consists of three logical components. The content management block is responsible for dealing with requests from both CPs and CCs (via the CRS-CP and CRS-CC interfaces, respectively), including content ID allocation and entry creation upon new content registrations, as well as content ID lookup upon each content consumption request from a CC. A dedicated content record repository is also maintained, including not only content ID lookup information, but also the ingress and egress(es) CaR(s) within the local domain for each active content session being delivered in the network. The inter-CRS protocol component enables the communication between neighboring CRSs for handling interdomain content publication/consumption requests. Finally, the monitoring module gathers necessary near-real-time information on content server and underlying network conditions for supporting optimized content resolution and delivery configuration operations.
CRSs communicate with other entities via specialized interfaces:
• The inter-CRS interface enables interaction among CRSs in neighboring domains, especially when they cooperate in content publication and in searching for requested content across domains.
• The CRS-CP interface connects content servers owned by CPs with CRSs, and allows CPs to publish content, optionally with scoping requirements on potential CCs. This interface is also responsible for passing information on server load conditions to a CRS for enabling optimized content resolution operations.
• The CRS-CC interface connects CC devices with the CRSs and allows consumers to request and receive content with scoping/filtering preferences on candidate content sources.
• The CRS-CaR interface allows a CRS to actively configure the relevant CaRs for each content session (e.g., content state maintenance). It also gathers necessary information from the underlying network to be used for optimized content resolution processes.
A CaR is the network element that natively processes content packets according to their IDs. Generally, it is not necessary for every router in the network to be a CaR; typically, CaRs are placed at the network boundary as ingress and egress points for content delivery across ISP networks. The function of CaRs is specified later, with the description of the content delivery process.
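To make the repository structure concrete, here is a minimal Python sketch of how a CRS might organize its content record repository; the class and field names (ContentRecord, CRSRepository) are our own illustration and not part of the CURLING specification:

    # Hypothetical sketch of a CRS content record repository.
    from dataclasses import dataclass, field

    @dataclass
    class ContentRecord:
        content_id: str                                       # globally unique content ID
        servers: set = field(default_factory=set)             # explicit locations (local content servers)
        neighbor_prefixes: set = field(default_factory=set)   # implicit locations (publishing neighbor domains)

    class CRSRepository:
        def __init__(self):
            self.records = {}    # content ID -> ContentRecord
            self.sessions = {}   # content ID -> {"ingress": CaR, "egresses": set of CaRs}

        def register_local(self, content_id, server_ip):
            # Explicit location of a locally attached content server.
            rec = self.records.setdefault(content_id, ContentRecord(content_id))
            rec.servers.add(server_ip)

        def learn_from_publish(self, content_id, neighbor_prefix):
            # Only the neighbor prefix the Publish came from is stored,
            # keeping the location implicit.
            rec = self.records.setdefault(content_id, ContentRecord(content_id))
            rec.neighbor_prefixes.add(neighbor_prefix)

        def lookup(self, content_id):
            return self.records.get(content_id)  # None -> forward Consume to provider(s)

A lookup miss on a Consume request would then trigger forwarding towards the provider domain(s), as described in the following sections.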
HOP-BY-HOP HIERARCHICAL CONTENT-BASED OPERATIONS
We envisage the following three-stage content operation life cycle: publication, resolution, and delivery. The task of content resolution is to:
• Identify the desired content source in the Internet according to the requested content ID and, optionally, CC preferences
• Trigger the content transmission by the selected content server
Once the content server starts the transmission of the content upon receiving the content consumption request, the content delivery function is responsible for enforcing the actual delivery path back to the consumer.
CONTENT PUBLICATION
Content publication is the process of making content available across the Internet. It consists of two stages.
Stage 1: Content Registration — It begins with the CP notifying the local CRS, via a Register message, that a new content item is available. In the case where multiple copies of the same content are available at different locations, the CP is responsible for informing the local CRS(s) of each content server hosting that specific content copy. Upon reception of the Register message, the CRS registers this content by creating a new record entry in its local content management repository containing a globally unique content ID assigned to that content, and the explicit location of the content (i.e., the IP address of the content server).
Stage 2: Content Publication Dissemination — Once the content is registered at a CRS, this CRS is responsible for publishing it globally to ensure successful discovery by potential consumers. This is achieved through the dissemination of the Publish message across CRSs in individual domains according to their business relationships. A Publish message is created by the CRS where the content is actually registered by the CP. By default, each CRS disseminates a new Publish message towards its counterpart in the provider domain(s) until it reaches a tier-1 ISP network. Each CRS receiving a new Publish message updates its content management repository with a new record entry containing the content ID and the implicit location of the content (i.e., the IP prefix associated with the neighboring domain from which the Publish message has been forwarded). Following this rule, each CRS effectively knows the locations of all the content within its own domain (explicitly) and in the domains under it (its customer domains, implicitly). Peer domains, however, will not know each other's content records.
We introduce the concept of scoped publication to allow publication of content only to specific areas of the Internet as designated by the CP. This feature natively supports regionally restricted content broadcasting services such as BBC iPlayer and Amazon VoD, which are only available within the United Kingdom and United States, respectively. We achieve this through the INCLUDE option embedded in the Register/Publish messages, where the CP specifies a scoped area of the Internet (e.g., only the IP prefix associated with the local ISP network where the content is registered). A special case of scoped publication is the wildcard mode (denoted by an asterisk symbolizing all domains), for which the CP places no restrictions on the geographical location of potential consumers in the Internet.
Figure 3 illustrates different scenarios in the publication process. It depicts the domain-level network topology, with each circle logically representing a domain containing a CRS entity. We first assume that CP S1 registers a content item (assigned ID X1 by the local CRS in the stub domain A.A.A) to the entire Internet by
issuing a Register message with a wildcard. Each intermediate CRS along the publication path creates a content entry for X1 associated with the IP prefix of the customer domain from which the Publish message has been forwarded. For clarity, the Publish messages for the other scenarios are omitted in the figure. Our approach also allows local domain policies to influence the publication process (e.g., domain B.A has a policy of NOT propagating content X2, originating from the multihomed domain A.B.B, to its own provider). S3 illustrates scoped registration: this CP registers content X3 only up to tier-2 domain A.A, which effectively limits access to content X3 to domain A.A and its customer domain A.A.A. Finally, records for different copies of the same content can also be aggregated. For instance, S4 and S5 each host one copy of content X4, but the two Publish messages from B.B.A and B.B.B are merged at B.B, in which case domain B only records the aggregated location information (X4 → B.B). A content consumption request for X4 received at B.B can be forwarded to either B.B.A or B.B.B based on performance conditions such as content delivery path quality or server load.
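The default dissemination rule and INCLUDE-based scoping described above can be summarized in a short sketch. This is our own illustration under simplifying assumptions (a single label per domain, scoping to a single domain), not code from the CURLING project:

    class CRSNode:
        def __init__(self, prefix, providers=()):
            self.prefix = prefix                  # domain label, e.g., "A.A"
            self.providers = list(providers)      # CRSs in provider domains
            self.records = {}                     # content ID -> set of neighbor prefixes

        def handle_publish(self, content_id, from_prefix, scope):
            # Record the implicit location: the neighbor the Publish came from.
            self.records.setdefault(content_id, set()).add(from_prefix)
            # Scoped publication: stop once the designated scope domain is reached.
            if scope != "*" and self.prefix == scope:
                return
            # Default rule: keep disseminating towards the tier-1 level.
            for provider in self.providers:
                provider.handle_publish(content_id, self.prefix, scope)

    # Mirroring Fig. 3: S3's content X3 is registered with INCLUDE(A.A),
    # so the record stops at A.A and never reaches the tier-1 domain A.
    tier1_a = CRSNode("A")
    a_a = CRSNode("A.A", providers=[tier1_a])
    a_a_a = CRSNode("A.A.A", providers=[a_a])
    a_a_a.handle_publish("X3", "S3", scope="A.A")
    assert "X3" in a_a.records and "X3" not in tier1_a.records

Local policies (such as B.A refusing to propagate X2 upwards) would slot in as an extra check before the provider loop.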
CONTENT RESOLUTION
In the content resolution process, a content consumption request issued by a CC is resolved by discovering the location of the requested content, and is finally delivered to the actual content source to trigger the content transmission. A CC initiates the resolution process via a Consume message containing the ID of the desired content. The primary resolution procedure follows the same provider-route forwarding rule as the publication process (i.e., the Consume message is forwarded further to the provider(s) if the CRS cannot find the content entry in its local repository). If a tier-1 domain is not aware of the content location, the request is forwarded to all its neighboring tier-1 domains, until the content consumption request is delivered to the identified content source. If the content is not found after the entire resolution process, an Error message indicating a resolution failure is returned to the requesting CC.
The scoping functions can also be applied in the resolution process, either embedded in the request from a CC or actively issued by a CRS for route optimization purposes during the content delivery phase. The function allows a CC to indicate preferred ISP network(s) as the source domain of the requested content. Specifically, a CC may use the INCLUDE option in Consume messages, which carries one or multiple IP prefixes to indicate from where he/she would like to receive the content. Since a set of explicit IP prefixes for candidate content sources is carried in the Consume message, the corresponding resolution process becomes straightforward: each intermediate CRS only needs to forward the request (splitting it if multiple non-adjacent IP prefixes are present) directly towards the targeted IP prefix(es) according to the underlying BGP routes. In case multiple interdomain routes are available towards a specific prefix, the most explicit one will be followed, as is consistent with today's interdomain routing policy.
1. It is not always required that CCs know the actual IP prefixes of the domains they prefer; their local CRSs may be responsible for translating the region information (e.g., domain names) into IP prefixes through standard DNS services.
Figure 3. Content publication process.
is consistent with today’s interdomain routing policy. In Fig. 4, CC C1 issued a Consume message for content X1 indicating its preference for content source in domain A or its customer domains. This Consume message is then explicitly forwarded towards A from B following the underlying BGP routing, but without splitting it to C despite that a copy of X1 is also accessible from C’s customer domain C.A. This scopingbased content resolution path is illustrated with the solid line in the figure. The filtering function in content resolution operations has complementary effect to scoping. Instead of specifying the preferred networks, the CC has the opportunity to indicate unwanted domains as possible sources of the desired content. The filtering function is enabled via the EXCLUDE option in Consume messages. It is important to note the fundamental difference in resolving content consumption requests with scoping and filtering functions. In contrast to the scoping scenario in which a content consumption request is explicitly routed towards the desired IP prefix(es) according to the BGP route, in the filtering case, each request is routed based on the business relationship between domains (similar to content publication operations). Consider again Fig. 4 with CC C2 requesting X1 with the exclusion of domain C and its customer domains. Since it is multihomed, the request is sent to both its providers A.B and B.A (see the dashed line in the figure). However, at the tier 1 level, domain C is excluded when resolving this request
A wildcard in a content consumption request can be regarded as a special case whereby the CC has no preference on the geographical location of the content source. The wildcard-based resolution is illustrated in Fig. 4 via the request from consumer C3 for content X2 (dotted lines). We see that B splits the request to both A and C at the tier-1 level. Since only A has the record entry for X2, the request is resolved to S2.
Figure 4. Content resolution in scoping, filtering, and wildcard modes.
Through these illustrations, we show that bidirectional location independence can be achieved, in the sense that neither CCs nor CPs need to know a priori the explicit location of each other for content consumption. In particular, CCs may include implicit content scoping/filtering information when requesting content. The content resolution system then automatically identifies the server in the desired area that hosts the content. On the CP side, when content is published, scoping can be applied such that the content can only be accessed by consumers in the designated area of the Internet. We show in the following section that, thanks to the multicast-oriented content delivery mechanism, the content server is not aware of the explicit locations of the active consumers of that content.
We conducted simulation experiments based on a real domain-level Internet sub-topology rooted at a tier-1 ISP network (AS7018). This four-tier network topology is extracted (with aggregation) from the CAIDA dataset [9], with explicit business relationships between neighboring nodes. Content sources and consumers were randomly distributed in the domains of this topology. According to our results, the average length of the content resolution paths between individual CCs and resolved content sources is 4.4 domain-level hops (i.e., the content is on average 4.4 autonomous systems [ASs] away along the resolution paths). This is an encouraging result, and it is consistent with the general observation that Internet interdomain sessions are of similar length based on Border Gateway Protocol (BGP) routing and the power-law interdomain Internet topology.
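The forwarding decision taken by each CRS in the three modes can be sketched as follows; the function signature and data layout are hypothetical simplifications of the behavior described above, not an interface defined by CURLING:

    def forward_consume(content_id, records, providers, tier1_peers,
                        include=None, exclude=None, bgp_next_hop=lambda p: p):
        # Scoping: with INCLUDE, forward straight towards each preferred
        # prefix along the underlying BGP route (splitting if needed).
        if include:
            return {bgp_next_hop(p) for p in include}
        # Otherwise follow business relationships: local record, else
        # provider(s), else (at tier 1) the neighboring tier-1 domains.
        if content_id in records:
            candidates = set(records[content_id])
        elif providers:
            candidates = set(providers)
        else:
            candidates = set(tier1_peers)
        # Filtering: with EXCLUDE, prune unwanted domains.
        if exclude:
            candidates -= set(exclude)
        return candidates  # empty set -> return an Error message to the CC

    # Mirroring Fig. 4: C2's request for X1 reaches tier-1 B with EXCLUDE(C).
    next_hops = forward_consume("X1", records={}, providers=[],
                                tier1_peers=["A", "C"], exclude=["C"])
    assert next_hops == {"A"}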
CONTENT DELIVERY
In CURLING, content delivery paths are enforced in a receiver-driven multicast manner that requires state maintenance based on content IDs. As described, content consumption requests are resolved through a sequence of CRSs according to either the business relationships between ISPs (in wildcard and filtering modes) or the BGP reachability information on the scoped source prefix (in scoping mode). In both cases, once a CRS has forwarded the content consumption request to its next-hop counterpart, it needs to configure the local CaRs that will be involved in the delivery of content back from the potential server.
Specifically, once a CRS receives a content consumption request from its counterpart in the previous-hop domain and forwards it towards the next-hop CRS, it needs to correspondingly install the content ID state at the local egress and ingress border CaRs connecting to the two neighboring domains. The determination of ingress/egress CaRs for each content consumption request is purely based on the BGP reachability information across networks. Within each domain, the communication between non-physically-connected ingress and egress CaRs can be achieved either by establishing intradomain tunnels that traverse non-content-aware core IP routers, or natively through content-centric network routing protocols [1]. Therefore, the actual domain-level content delivery path is effectively the reverse of the path followed by the delivery of the original content consumption request. It is worth mentioning that CRSs do not directly constitute the content delivery paths, which is why the configuration interaction between the CRS and the local ingress/egress CaRs is necessary.
Let us take Fig. 5 for illustration. We assume that CC C1 (attached to domain 2.1/16) is currently consuming live streaming content X from server S (attached to domain 1.2.1/24).
2. In the case of a failed content resolution, content states temporarily maintained at CaRs can be either timed out or explicitly torn down by the local CRS.
Figure 5. Multicast-based content delivery process.
The content delivery path traverses a sequence of intermediate domains, and each of the corresponding ingress/egress CaRs is associated with a star in the figure indicating the content state maintained for content delivery. As mentioned, these states are configured by the local CRSs during the content resolution phase. Now CC C2 (attached to domain 1.1/16) issues a consumption request for the same content. Upon receiving the request, the local CRS forwards it to its provider counterpart in domain 1/8, as it is unaware of the content source location. Since the CRS in 1/8 knows that the content flow for X is being injected into the local network via the originally configured ingress CaR 1.0.0.2, it updates its outgoing next-hop CaR list by adding a new egress 1.0.0.3 leading towards CC C2. Thus, a new branch is established from CaR 1.0.0.2, which is responsible for delivering the content back to the new consumer C2 (the dashed line), but without any further content resolution process.
The proposed content delivery operation is also supported by a routing optimization technique for path switching from provider routes to peering routes. In the figure, once the CRS in domain 1.1/16 notices that the content flow with a source address belonging to prefix 1.2.1/24 has been injected into the local domain via ingress CaR 1.1.0.1 along the provider route, and it also knows from the local BGP routing information that there exists a peering route towards the content source, it issues a new scoping-based content consumption request, Consume(INCLUDE{1.2.1/24}, X), to the CRS in domain 1.2/16 on the peering route towards the source. Upon receiving the request, the CRS in 1.2/16 updates the local CaR 1.2.0.1 by adding a new outgoing next-hop CaR 1.1.0.1. As a result, a new branch via the peering route is established towards C2.
Once the ingress CaR 1.1.0.1 has received the content via the interface connecting to 1.2.0.1, it prunes the old branch via the provider route (the dashed line). This content delivery path optimization effectively reduces content traffic within top-tier ISP networks and can also reduce the content delivery cost for customer domains. Of course, this operation is not necessary if a CRS is allowed to send content consumption requests to its peering counterparts (in addition to the provider direction) during the resolution phase. However, such an option would incur unnecessarily high communication overhead in disseminating content consumption requests, especially when the peering route does not lead to any source that holds the requested content.
We are also interested in the actual benefit of such interdomain routing optimization techniques for cost-efficient content delivery across the Internet, especially from the viewpoint of the tier-1 ISPs that constitute the Internet core. We used the same domain-level topology as previously described to evaluate the corresponding performance. According to our results, the content traffic (in terms of the number of media sessions) traversing higher-tier (tier-1 and tier-2) ISPs can be reduced by 8.7 percent through peering route switching, and in particular by a substantial 28.1 percent in tier-1 ISPs. This is beneficial given that less traffic traverses tier-1 domains through relatively long paths.
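The per-content state kept at CaRs, and the way a CRS grafts and prunes branches, can be illustrated with a small sketch (class and method names are our own, and the downstream CaR label for C1 is purely illustrative):

    class CaRState:
        def __init__(self):
            # content ID -> {"ingress": upstream CaR, "egresses": downstream CaRs}
            self.table = {}

        def install(self, content_id, ingress, egress):
            entry = self.table.setdefault(
                content_id, {"ingress": ingress, "egresses": set()})
            entry["egresses"].add(egress)

        def graft(self, content_id, new_egress):
            # Content already flowing: add a branch, no further resolution.
            self.table[content_id]["egresses"].add(new_egress)

        def prune(self, content_id, old_egress):
            # E.g., drop the provider-route branch after switching to peering.
            self.table[content_id]["egresses"].discard(old_egress)

    # Fig. 5 scenario in domain 1/8: X already flows in via ingress 1.0.0.2;
    # when C2's request arrives, the CRS simply grafts egress 1.0.0.3.
    car = CaRState()
    car.install("X", ingress="1.0.0.2", egress="egress-toward-C1")  # illustrative label
    car.graft("X", "1.0.0.3")
    assert car.table["X"]["egresses"] == {"egress-toward-C1", "1.0.0.3"}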
DISCUSSION ON SCALABILITY
The domain-level hop-by-hop content resolution strategy presented follows a similar style to that proposed in [3]. However, through the new scoping and filtering functions, our architecture provides the necessary flexibility for both CPs and CCs to publish/request content at/from their desired area(s). The scalability of the system,
thus, is dependent on the amount and popularity of content in each CRS, with the most vulnerable CRSs being those that maintain the highest number of popular content entries. This contrasts with the intuition that the most strained CRSs will be the tier-1 ones, since, with our approach, content publications and requests may often not reach the tier-1 level. Again, we take BBC iPlayer as an example, where both the content publication and consumption requests are restricted to IP prefixes from the United Kingdom only. In addition, local domain policies may override the default publication route (see S2 in Fig. 3).
Business incentives also provide a natural load distribution mechanism for our system. We foresee ISPs charging higher publication tariffs for popular content published at higher-tier domains (with tier-1 domains being the most expensive), since such content can potentially be accessed by a higher number of consumers. This mechanism creates a business tussle from the CPs' point of view, since the provision of wider access is coupled with higher monetary cost. Instead, a CP may strategically replicate content to multiple lower-tier regional ISPs (by applying scoping functions there) where they believe their content will be locally popular.
Finally, our system allows aggregation in two ways. First, as illustrated in Fig. 3 for S4 and S5, records for the same content can be merged during the publication process among CRSs. Second, a block of sequential content IDs should be allocated to interrelated content so that it can be published in one single process. This rule exploits the fact that a specific CP usually offers related content items (e.g., all episodes of a television series). It allows for coarser granularity in the publication process, whereby the CP can send only one Publish message to publish all the related content. The local CRS still assigns a unique content ID to each content item, but the IDs are sequentially connected. The onward publication process then only involves the entire block of IDs rather than the individual content records, especially towards high-tier ISPs.
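The block-based aggregation can be made concrete with a few lines; we assume numeric IDs here purely for illustration, since the article only requires that IDs within a block be sequential:

    def allocate_block(next_free_id, n_items):
        # Assign n_items sequential IDs starting at next_free_id;
        # the block is advertised as a single (base, count) record.
        return (next_free_id, n_items), next_free_id + n_items

    def block_covers(block, content_id):
        base, count = block
        return base <= content_id < base + count

    block, next_free = allocate_block(1000, 24)  # e.g., 24 episodes of a series
    assert block_covers(block, 1010) and not block_covers(block, 1024)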
CONCLUSIONS
In this article, we present CURLING, a new content-based Internet architecture that supports content publication, resolution, and delivery. Content providers can cost-efficiently publish content based on its expected popularity in different regions by scoping its publication, while content consumers can express their location preferences by scoping/filtering their content consumption requests. The processes are devised so that both sides remain oblivious of their counterpart's location, resulting in a bidirectional location independence paradigm, but without sacrificing content providers' and consumers' location preferences. The proposed route optimization mechanism enhances the efficiency of content delivery by using content states established during the resolution process and initiating content delivery path switching; as such, it mimics interdomain multicast delivery, which has seen very slow deployment until now.
ACKNOWLEDGMENTS
This work was undertaken under the Information Society Technologies (IST) COMET project, which is partially funded by the Commission of the European Union. We would also like to thank our project partners, who have implicitly contributed to the ideas presented here.
REFERENCES
[1] V. Jacobson et al., "Networking Named Content," Proc. ACM CoNEXT '09, 2009, pp. 1–12.
[2] P. Jokela et al., "LIPSIN: Line Speed Publish/Subscribe Inter-networking," Proc. ACM SIGCOMM '09, Barcelona, Spain, Aug. 2009.
[3] T. Koponen et al., "A Data-Oriented (and Beyond) Network Architecture," Proc. ACM SIGCOMM '07, Kyoto, Japan, Aug. 2007.
[4] P. Francis and R. Gummadi, "IPNL: A NAT-Extended Internet Architecture," Proc. ACM SIGCOMM '01, San Diego, CA, Aug. 2001, pp. 69–80.
[5] I. Stoica et al., "Internet Indirection Infrastructure," Proc. ACM SIGCOMM '02, Pittsburgh, PA, Aug. 2002, pp. 73–86.
[6] D. Clark et al., "FARA: Reorganizing the Addressing Architecture," Proc. ACM SIGCOMM FDNA Wksp., Aug. 2003.
[7] M. Caesar et al., "Routing on Flat Labels," Proc. ACM SIGCOMM '06, Pisa, Italy, Sept. 2006, pp. 363–74.
[8] P. Mockapetris, "Domain Names — Concepts and Facilities," IETF RFC 1034, Nov. 1987.
[9] CAIDA dataset; http://www.caida.org/research/topology/#Datasets.
BIOGRAPHIES
WEI KOONG CHAI ([email protected]) is a research fellow in the Department of Electronic and Electrical Engineering, University College London (UCL), United Kingdom. He received a B.Eng. degree in electrical engineering from Universiti Teknologi Malaysia in 2000, and an M.Sc. (Distinction) in communications, networks, and software and a Ph.D. in electronic engineering from the University of Surrey, United Kingdom, in 2002 and 2008, respectively. His research interests include content-centric networks, QoS and service differentiation, resource optimization, and traffic engineering.
NING WANG ([email protected]) is a lecturer at the University of Surrey. He received his B.Eng. (Honors) degree from Changchun University of Science and Technology, P.R. China, in 1996, his M.Eng. degree from Nanyang University, Singapore, in 2000, and his Ph.D. degree from the University of Surrey in 2004. His research interests include Internet content delivery techniques, end-to-end QoS provisioning, and traffic engineering.
IOANNIS PSARAS ([email protected]) is a research fellow in the Department of Electronic and Electrical Engineering, UCL. He received a diploma in electrical and computer engineering and a Ph.D. degree from Democritus University of Thrace, Greece, in 2004 and 2008, respectively. He won the Ericsson Award of Excellence in Telecommunications in 2004. He has worked at DoCoMo Eurolabs and Ericsson Eurolab. His research interests include congestion control, delay-tolerant networks, and user- and content-centric networks.
GEORGE PAVLOU ([email protected]) holds the Chair of Communication Networks in the Department of Electronic and Electrical Engineering, UCL. Over the last 20 years he has undertaken and directed research in networking, network management, and service engineering, and has published extensively in these areas. He has contributed to ISO, ITU-T, IETF, and TMF standardization activities, and has been instrumental in a number of key European and U.K. projects that produced significant results. He is currently the technical manager of COMET.
CHAOJIONG WANG ([email protected]) holds a B.Sc. in computing and information technology from the University of Surrey, and an M.Sc. in computer science from Oxford University. He is a Ph.D. student in electrical engineering at the Centre for Communication Systems Research (CCSR), University of Surrey.
GERARDO GARCÍA DE BLAS ([email protected]) is a research engineer in the IP Network Technologies group at Telefónica I+D. He holds a Master's in telecommunications engineering from the Technical University of Madrid, Spain. Since 2002 he has worked at Telefónica I+D, with his main focus on network evolution, network planning, and traffic analysis. He is currently coordinating the architecture definition in the EU FP7 project COMET.
[email protected]) received his Master’s in telecommunications engineering from Málaga University, Spain, in 2000, and his Master’s in economics at UNED, Spain, in 2006. With Telefónica I+D since 2000, currently he heads the IP Network Technologies group, coordinating research on long-term evolution of Internet technologies. Since 2010 he has led the FP7-COMET consortium, focused on providing to the future Internet a unified approach to content location, access, and delivery with the appropriate network resources. LEI LIANG (
LEI LIANG ([email protected]) is a research fellow at CCSR, University of Surrey, from which he received his B.E. degree in 1998, M.S. degree in 2001, and Ph.D. degree in 2005. With strong knowledge of multiparty communications, network performance measurement and analysis, IP networking QoS, IP over satellite networks, IP multicast over satellite networks, and IP network security, he has been heavily involved in 14 EU and U.K. projects since 2001.
SPIROS SPIROU ([email protected]) is a senior engineer at Intracom Telecom, Greece, working on IPTV, content-aware networking, and network management. Previously, he was a research associate on data acquisition and grid computing at NCSR Demokritos, Greece, and CERN, Switzerland. He holds a B.Sc. (1997) in computer engineering, a postgraduate diploma (1998) in mathematics, and an M.Sc. (2000) in neural networks. He chaired the European Future Internet Socio-Economics Task Force.
ANDRZEJ BEBEN ([email protected]) received M.Sc. and Ph.D. degrees in telecommunications from Warsaw University of Technology (WUT), Poland, in 1998 and 2001, respectively. Since 2001 he has been an assistant professor at the Institute of Telecommunications at WUT, where he is a member of the Telecommunication Network Technologies research group. His research areas include IP networks (fixed and wireless), content-aware networks, traffic engineering, simulation techniques, measurement methods, and testbeds.
ELEFTHERIA HADJIOANNOU ([email protected]) received a Dipl.-Ing. degree from the Electrical Engineering and Computer Science Department, Aristotle University of Thessaloniki, and an M.B.A. from the University of Macedonia, Greece. Previously, she was a research associate at the Information System Laboratory (ISlab), University of Macedonia, where she was involved in a number of projects relevant to eGovernment and eParticipation. Currently, she is a member of the R&D Department at PrimeTel PLC, an alternative telecommunications provider in Cyprus.
FUTURE MEDIA INTERNET
A Survey on Content-Oriented Networking for Efficient Content Delivery
Jaeyoung Choi, Jinyoung Han, Eunsang Cho, Ted “Taekyoung” Kwon, and Yanghee Choi, Seoul National University
ABSTRACT
As multimedia contents become increasingly dominant and voluminous, the current Internet architecture will reveal its inefficiency in delivering time-sensitive multimedia traffic. To address this issue, there have been studies on content-oriented networking (CON), which decouples contents from hosts at the networking level. In this article, we present a comprehensive survey on content naming and name-based routing, and discuss further research issues in CON. We also quantitatively compare CON routing proposals, and evaluate the impact of the publish/subscribe paradigm and in-network caching.
INTRODUCTION
1. Many studies use their own terminology, such as data-oriented, content-centric, and content-based. In this article, we use content-oriented as a generic term.
Video traffic has been, and will increasingly be, prevalent in the Internet. Some video content providers (CPs, e.g., YouTube and Hulu) have even begun to provide high-definition video streaming services. As the bit rate of multimedia traffic increases, the TCP/IP architecture may reveal its inefficiency in delivering time-sensitive multimedia traffic. Another important multimedia application is multicasting/broadcasting over IP networks (e.g., IPTV). However, the endpoint-based Internet is not well suited to multicast/broadcast due to issues including multicast address assignment and complex group management. This unsuitability leads to limited deployment and complicated multicast frameworks. At present, many voluminous contents (most of which are multimedia [1]) are delivered to numerous users by peer-to-peer (P2P) systems such as BitTorrent. In BitTorrent, for each content file there is a tracker, which informs a new peer of other peers. A peer exchanges missing parts (called chunks) of the content file with other peers. However, from a networking perspective, the delivery performance of BitTorrent is inefficient, since a peer can download chunks only from a small subset of peers, who may be distantly located. In general, P2P systems have limited information on peers downloading the
same content and on the network topology among them (e.g., proximity).
In these “content-oriented” applications/services, an end user cares not about hosts, but about contents. However, the current Internet relies on the host-to-host communication model. This mismatch leads to application/service-specific solutions, which may be costly and/or inefficient. Two representative examples are:
• Web caches and content delivery networks (CDNs), which transparently redirect web clients to a nearby copy of the content file
• P2P systems, which enable users to search for and retrieve the content file
To address the above mismatch, there have been studies on content-oriented networking (CON) (e.g., [2–4]). They strive to redesign the current Internet architecture to accommodate content-oriented applications and services efficiently and scalably. The essence of CON lies in decoupling contents from hosts (or their locations) not at the application level, but at the network level. Note that these proposals also solve or mitigate other Internet problems such as mobility and security. We argue that the new CON paradigm will:
• Free application/service developers from reinventing application-specific delivery mechanisms
• Provide scalable and efficient delivery of requested contents (e.g., by supporting multicast/broadcast/anycast naturally)
In this article, we classify the prior studies on CON, discuss their technical issues, and identify further research topics. After demonstrating the performance of CON proposals, we conclude this article.
CONTENT-ORIENTED NETWORKING
A CON architecture can be characterized by four main building blocks:
• How to name the contents
• How to locate the contents (routing)
• How to deliver/disseminate the contents
• How to cache the contents “in” the network
There are relatively many studies on the first two components, which we classify in this section. The last two topics need more investigation in CON environments and will be discussed later. Before presenting the taxonomy, let us discuss common characteristics of CON proposals [2–4]. A CON has three characteristics distinct from IP networking.
First, a CON node performs routing by content names, not by (host) locators. This means two radical changes:
• Identifying hosts is replaced by identifying contents.
• The location of a content file is independent of its name.
An IP address has both identifier and locator roles; hence, IP networking has problems like mobility. By splitting these roles, CON achieves location independence in content naming and routing, and is free from mobility and multihoming problems.
Second, the publish/subscribe paradigm is the main communication model in CON: a content source announces (or publishes) a content file, while a user requests (or subscribes to) the content file. In IP networking, a user should know which source holds the content file of interest (spatial coupling), and the two hosts should be associated throughout the delivery (temporal coupling) [5]. However, with the publish/subscribe paradigm, we can decouple content generation and consumption in time and space, so contents are delivered efficiently and scalably (e.g., multicast/anycast).
Third, the authenticity of contents can easily be verified by leveraging public key cryptography. In IP networking, the host address seen by a user is irrelevant to the content name, which results in phishing and pharming attacks. For content authentication in CON, either a self-certifying content name [2, 4] or a signature in a packet [3] is used. We skip the security-related explanations here; see [2, 3] for details.
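As a small illustration of the self-certifying flavor of naming (our own sketch; real systems such as DONA bind names to the hash of the publisher's public key and additionally verify per-packet signatures):

    import hashlib

    def content_name(public_key_bytes):
        # Flat, semantics-free identifier derived from the publisher's key.
        return hashlib.sha256(public_key_bytes).hexdigest()

    def plausibly_authentic(name, public_key_bytes):
        # First verification step: the presented key must hash to the name;
        # a full check would also verify the content signature with this key.
        return name == content_name(public_key_bytes)

    key = b"illustrative public key bytes"
    name = content_name(key)
    assert plausibly_authentic(name, key)
    assert not plausibly_authentic(name, b"forged key")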
CONTENT NAMING
We classify naming schemes in CON into three categories: hierarchical, flat, and attribute-based.
2. A CON node refers to a node that performs CON functionalities like content routing and caching, while a node may indicate an IP router as well as a CON node.
3. In this article, a name and an identifier are used interchangeably.
4. They also add a label (which has semantics) into a content identifier; however, the label can be interpreted only by endpoints (i.e., publishers and subscribers), not by in-network nodes.
Hierarchical Naming — CCN [3] and TRIAD [6] introduce a hierarchical structure to name a content file. Even though it is not mandatory, a content file is often named by an identifier like a web URL (e.g., /www.acme.com/main/logo.jpg), where / is the delimiter between the components of a name. Thus, the naming mechanism in these proposals can be compatible with current URL-based applications/services, which may imply a lower deployment hurdle. The hierarchical nature can help mitigate the routing scalability issue, since routing entries for contents might be aggregated. For instance, if all the contents whose names start with www.acme.com are stored in a single host, we need a single routing entry (to the host) for these contents. However, as content files are replicated at multiple places, the degree of aggregation decreases. For instance, if popular contents are increasingly cached by in-network caching, the corresponding routing entries that have been aggregated should be split accordingly.
Note that the components of a hierarchical name (e.g., www.acme.com and logo.jpg) have semantics, which prohibits persistent naming. Persistence refers to the property that, once a content name is given, people would like to access the content file under that name for as long as possible. For example, if the ownership of a content file changes, its name becomes misleading with the above naming.
Flat Naming — To avoid the above shortcomings, DONA [2] and PSIRP [4] employ flat and self-certifying names by defining a content identifier as a cryptographic hash of a public key. Due to its flatness (i.e., a name is a random-looking series of bits with no semantics), persistence and uniqueness are achieved. However, flat naming aggravates the routing scalability problem, since no aggregation is possible. As flat names are not human-readable, an additional “resolution” between (application-level) human-readable names and content names may be needed.
Attribute-Based Naming — CBCB [7] identifies contents with a set of attribute-value pairs (AVPs). Since a user specifies her interests with conjunctions and disjunctions of AVPs, a CON node can locate eligible contents by comparing the interest with the AVPs advertised by content sources, as the sketch below illustrates. This can facilitate in-network searching (and routing), which is performed by external search engines in the current Internet. However, the approach has drawbacks:
• An AVP may not be unique or well defined.
• The semantics of AVPs may be ambiguous.
• The number of possible AVPs can be huge.
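A minimal sketch of such matching, with an interest encoded (by our own convention, not CBCB's wire format) as a disjunction of conjunctions over AVPs:

    def matches(interest, advertised):
        # interest: list of conjunctions, each a dict of required AVPs;
        # advertised: dict of AVPs announced by a content source.
        return any(all(advertised.get(attr) == val for attr, val in conj.items())
                   for conj in interest)

    advertised = {"genre": "news", "lang": "en", "region": "EU"}
    interest = [{"genre": "news", "lang": "en"},   # conjunction 1, OR ...
                {"genre": "sports"}]               # ... conjunction 2
    assert matches(interest, advertised)
    assert not matches([{"genre": "sports"}], advertised)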
NAME-BASED ROUTING
CON should be able to locate a content file based on its name, which is called name-based routing. The prior studies can be classified according to whether a systematic structure is used to maintain the routing tables of CON nodes.
Unstructured Routing — Like IP routing, this approach assumes no structure for maintaining routing tables; hence, routing advertisement (for contents) is mainly performed by flooding. CCN suggests inheriting IP routing, and thus has IP compatibility to a certain degree. Therefore, CCN might be deployed incrementally alongside current IP networking. CCN just replaces network prefixes (in IP routing) with content identifiers, so the modification of IP routing protocols and systems may not be significant. Just as network prefixes are aggregatable in IP routing, so are hierarchical content identifiers in CCN routing. However, as a content file is increasingly replicated or moved, the level of aggregation diminishes. Moreover, the control traffic overhead (i.e., the volume of announcement messages whenever a content file is created, replicated, or deleted) would be huge.
Structured Routing — Two structures have been proposed: a tree and a distributed hash table (DHT). DONA is the most representative tree-based routing scheme. Routers in DONA form a hierarchical tree, and each router maintains the routing information of all the contents published in its descendant routers.
Proposal | Naming | Naming advantages | Routing structure | Routing scalability | Control overheads
CCN | Hierarchical | Aggregatable, IP compatible | Unstructured | N (best) to C (worst) | High
TRIAD | Hierarchical | Aggregatable, IP compatible | Unstructured | N | High
DONA | Flat | Persistent | Structured (tree) | C | Low
PSIRP | Flat | Persistent | Structured (hierarchical DHT) | log C | Low
CBCB | (attribute, value) pairs | In-network searching | Source-based multicast tree | 2^A | High
Table 1. Taxonomy of CON proposals in terms of naming and routing criteria. The routing scalability of each proposal is proportional to N, C, A, or a logarithm/exponential thereof, where N, C, and A are the numbers of publisher nodes, contents, and attributes in the entire network, respectively.
Thus, whenever a content file is newly published, replicated, or removed, the announcement is propagated up the tree until it encounters a router with the corresponding routing entry. This approach imposes an increasing routing burden as the level of a router becomes higher: the root router should have the routing information of all the contents in the network. Since DONA employs non-aggregatable content names, this scalability problem is severe. On the other hand, PSIRP [4] adopts hierarchical DHTs [8]. The flatness of a DHT imposes an equal and scalable routing burden among routers: if the number of contents is C, each router should have log(C) routing entries. However, the DHT is constructed by random and uniform placement of routers, and thus typically exhibits a few times longer paths than a tree that can exploit information about the network topology. Also, the flatness of a DHT often requires forwarding traffic in a direction that violates the provider-customer relations among ISPs; for instance, a customer ISP does not want to receive a packet from its provider ISP if the destination is not located inside its own network.
Table 1 compares the CON studies with a focus on naming and routing characteristics. As CCN and TRIAD adopt hierarchical naming, content names are aggregatable and IP compatible. However, their flooding-based (i.e., unstructured) routing incurs significant control traffic. If all the contents of the same publisher are stored in a single host, their routing entries are aggregated into a single one; thus, the routing scalability is proportional to the number of publisher nodes. TRIAD considers only this case. In CCN, however, contents can be replicated, which may split the aggregated routing entries; in the worst case, the routing burden becomes on the order of the number of contents. DONA and PSIRP employ flat names for persistence. As they have systematic routing structures, the control traffic will not be substantial. In DONA, the root node of the tree should have the routing information of all the contents.
Meanwhile, the DHT structure of PSIRP levies a routing burden logarithmic in the number of contents on every node. With AVP-based content names, CBCB enables in-network searching and establishes source-based multicast trees to deliver contents between publishers and subscribers. Since each attribute can either be selected or not in a search query, the number of routing entries of a router may be proportional to 2^A, where A is the total number of attributes. Furthermore, its control overhead will be high, since each new query may have to be flooded across the network.
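To see how flat names spread state in a DHT, consider this sketch of consistent hashing onto a ring (our own simplification; PSIRP's hierarchical DHTs are considerably more elaborate):

    import hashlib
    from bisect import bisect_left

    def ring_position(key, bits=32):
        digest = hashlib.sha256(key.encode()).hexdigest()
        return int(digest, 16) % (2 ** bits)

    def owner(name, node_positions):
        # node_positions: sorted ring positions of participating CON nodes;
        # the closest node clockwise stores the routing entry for the name.
        i = bisect_left(node_positions, ring_position(name))
        return node_positions[i % len(node_positions)]

    nodes = sorted(ring_position(f"node-{i}") for i in range(8))
    print(owner("a1b2c3d4e5f6", nodes))  # node responsible for this flat name

Because the hash spreads names uniformly, no single node accumulates the whole routing table, at the cost of the topology-oblivious paths noted above.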
FURTHER ISSUES IN CON

MULTISOURCE DISSEMINATION
The current Internet architecture is designed with point-to-point connectivity, since early-stage applications rely on packet exchanges between two hosts. As the Internet becomes increasingly popular, however, new applications requiring different connectivities have emerged: one-to-many (1:N) and many-to-many (M:N). 1:N connectivity represents content dissemination from a single source to multiple recipients; representative applications are online streaming and IPTV services. To support such applications with the point-to-point TCP/IP architecture, the Internet Engineering Task Force (IETF) standardized the IP multicasting framework, which is deployed in limited situations like a separate network for intradomain IPTV services. Compared to IP multicasting, CON accommodates 1:N connectivity naturally through the publish/subscribe paradigm, in terms of content naming and group management [2, 4]. However, its link efficiency is not different from IP multicasting. Thus, let us focus on M:N connectivity. M:N connectivity takes place among multiple sources and multiple recipients. There are two kinds of M:N connectivity applications:
• M instances of 1:N connectivity (e.g., videoconferencing)
• M sources disseminating different parts of a content file to N recipients
Figure 1. Dissemination of the same file in the two separate overlays will be inefficient since peers are distantly located.
5. Even though CON does not care whether a single or multiple recipients subscribe to a particular content file, we implicitly use “deliver” for 1:1 connectivity and “disseminate” for 1:N and M:N connectivities.
6. By reducing incoming traffic from its provider ISP, its connection fee may also be lowered.
We focus on the latter case, whose representative applications are P2P systems like BitTorrent. Another application is multi-user online gaming in which different but partially overlapping game data are transmitted to players. Substantiating M:N connectivity requires application/service-specific overlays or relay mechanisms in the current Internet. However, CON can disseminate5 contents more efficiently at the network level by spatial decoupling of the publish/subscribe paradigm and content awareness at network nodes. Figure 1 illustrates how inefficiently a content file is distributed by current P2P operations. Suppose two P2P overlays are formed, and the peers in both overlays wish to download the same file. Unfortunately, peers in each overlay are distant; hence, the throughput is poor, which happens frequently in reality. This is because P2P systems are application-level solutions that cannot exploit network topology information. In contrast, CON can efficiently disseminate a content file among subscribers since CON nodes (R2 and R4) will help them download the content file also from the other overlay. Disseminating a content file from multiple sources is tightly coupled with name-based routing. That is, in order to exploit multiple sources in disseminating the same content in CON, each CON node may have to keep track of individual sources of the same content (e.g., CCN and DONA). In this case, a CON node can seek to retrieve different parts of the requested content in parallel from multiple sources to expedite dissemination. To the best of our knowledge, there is no prior study on this multisource dissemination at a networking level. Depending on roundtrip times and traffic dynamics of the path to each source, the CON node should dynamically decide/adjust which part of the content file is to be received from each source. Another relevant issue is what routing information should be stored and advertised by each CON node for multiple sources of the same content. For instance, suppose a CON node receives routing advertisements from two sources of the same content, and learns that one source is close and the other source is very far. It may not be useful to announce both of the sources since retrieving data from the far source would be
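To make the two decisions above concrete, splitting a file across sources and suppressing announcements of distant ones, the following minimal Python sketch shows one plausible policy. It is our illustration, not a mechanism from any CON proposal; the throughput/RTT figures and the 3x RTT pruning factor are assumptions.

```python
# Hedged sketch: a CON node splitting one content file across multiple
# sources. Chunks are assigned in proportion to each source's estimated
# throughput, and sources whose RTT is far worse than the best one are
# not advertised further.
def assign_chunks(num_chunks, sources):
    """sources: name -> (throughput_mbps, rtt_ms); returns name -> [chunk ids]."""
    total = sum(tp for tp, _ in sources.values())
    plan, next_chunk = {}, 0
    for name, (tp, _) in sorted(sources.items()):
        share = round(num_chunks * tp / total)
        plan[name] = list(range(next_chunk, min(next_chunk + share, num_chunks)))
        next_chunk += len(plan[name])
    fastest = max(sources, key=lambda s: sources[s][0])
    plan[fastest] += list(range(next_chunk, num_chunks))  # rounding leftovers
    return plan

def sources_to_advertise(sources, rtt_factor=3.0):
    """Suppress routing announcements for sources much farther than the best."""
    best_rtt = min(rtt for _, rtt in sources.values())
    return [s for s, (_, rtt) in sources.items() if rtt <= rtt_factor * best_rtt]

srcs = {"src_a": (8.0, 20), "src_b": (2.0, 250)}   # hypothetical measurements
print(assign_chunks(10, srcs))                     # 8 chunks from src_a, 2 from src_b
print(sources_to_advertise(srcs))                  # ['src_a'] only
```

A real protocol would additionally re-run the assignment as per-path round-trip times and traffic dynamics change, as discussed above.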
IN-NETWORK CACHING

The advantages of in-network caching for an ISP may be twofold:
• To reduce the incoming traffic from neighbor ISPs, lowering the traffic load on its cross-ISP links6 (and hence its expense for transport link capacity)
• To improve delay/throughput performance by placing contents closer to their users
The latter has the same rationale as CDNs, and the usefulness of caching is already proven by the commercial success of CDNs. In-network caching is also attractive to content providers (CPs), since it can mitigate the capital expense on their content servers. We believe an ISP with in-network caching capability can also offer CDN-like businesses to CPs if the majority of potential subscribers to the CPs are connected to that ISP (e.g., [9]). Considering the above incentives, it is viable to cache popular content files in CON nodes (or their corresponding storage servers); other studies (e.g., [2–4]) also suggest introducing in-network caching. While there are well-studied caching policies at individual nodes, such as least recently used (LRU) and least frequently used (LFU) replacement (a minimal LRU sketch follows below), the performance of in-network caching can be further improved by coordinating multiple CON nodes in a distributed fashion. There have been recent studies on distributed caching (e.g., how to locate caching points [9] and how to cache contents [10]). However, as they assume IP networking, their work is limited in that:
• Only a single source (or cache) delivers the content file to a subscriber.
• Limited topologies (e.g., trees) or places (e.g., points of presence) are taken into account.
Thus, we need to reformulate the distributed caching problem in CON environments; for instance, multisource dissemination and general network settings may have to be considered. Another (perhaps smaller) topic is how to design a signaling protocol among CON nodes to support distributed caching. For instance, a routing protocol may have to be extended to facilitate coordinated caching among CON nodes without significant signaling traffic overhead. The major design issue would be that the more frequently the content files are replaced in a cache, the more routing information may have to be advertised to enhance the network-wide caching performance.
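As a reference point for the per-node policies mentioned above, here is a minimal LRU cache sketch. This is our illustration only; the 5-Gbyte capacity matches the evaluation settings later in this article, and the content names and sizes are hypothetical.

```python
# A minimal per-node LRU cache of the kind mentioned above. Keys are
# content names, values are content sizes in bytes; eviction is by
# least-recent access.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.items = OrderedDict()    # content name -> size

    def get(self, name):
        if name not in self.items:
            return False              # cache miss
        self.items.move_to_end(name)  # mark as most recently used
        return True

    def put(self, name, size):
        if name in self.items:
            self.items.move_to_end(name)
            return
        while self.items and self.used + size > self.capacity:
            _, evicted = self.items.popitem(last=False)  # evict LRU entry
            self.used -= evicted
        if size <= self.capacity:
            self.items[name] = size
            self.used += size

cache = LRUCache(5 * 10**9)           # 5-Gbyte cache, as in the evaluation below
cache.put("movie/part1", 700 * 10**6)
print(cache.get("movie/part1"))       # True: subsequent requests hit the cache
```

The distributed-caching question raised above is precisely what such a purely local policy cannot answer: it never considers what neighboring CON nodes already hold.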
PERFORMANCE EVALUATION

Using ns-2, we evaluate:
• The effect of routing structures on the resolution time (or delay) to locate a content file
• How much network-wide traffic load can be reduced by in-network caching
In addition to the tree and DHT structures, we introduce a new routing structure: two-tier. We first explain how the network topology is constructed and how content requests are generated for the experiments. Then we describe the two-tier routing structure, followed by the simulation results. End hosts, which publish or subscribe to content files, are collocated with CON nodes for simplicity. Using GT-ITM models, we generate a physical transit-stub topology in which a single transit domain connects 10 stub domains. There are 310 nodes in total, whose links have 10 Gb/s bandwidth capacity; among them, 100 nodes in the stub domains are selected as CON nodes. The 10 nodes in the transit domain do not serve as CON nodes, since they normally carry higher traffic loads (typically with higher link capacities) and hence may not be suitable for CON operations (e.g., in-network caching). Reflecting Internet traffic statistics [1], four types of content files are published at the end hosts: video, audio, software, and web contents, accounting for 68, 9, 9, and 14 percent of the content volume, respectively. For each content type, 1000 content files are published and evenly distributed among the end hosts. The popularity (or request probability) of a particular content file is determined by a Zipf distribution whose parameter is set to 1.0. The arrival rate of subscriptions (or request rate) is set to 0.5 s^-1. For details such as the subtypes of each content type and the file size of each content subtype, see [1].
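The request workload just described can be reproduced in a few lines; the sketch below is our reconstruction, and the Poisson arrival assumption is ours (the text only fixes the mean request rate).

```python
# Sketch of the request workload described above: 1000 files per content
# type, popularity drawn from a Zipf distribution with parameter 1.0, and
# (our assumption) Poisson request arrivals at rate 0.5 per second.
import random

def zipf_probabilities(n_files, s=1.0):
    weights = [1.0 / (rank ** s) for rank in range(1, n_files + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def generate_requests(n_files, duration_s, rate=0.5, seed=1):
    rng = random.Random(seed)
    probs = zipf_probabilities(n_files)
    t, requests = 0.0, []
    while True:
        t += rng.expovariate(rate)          # Poisson inter-arrival times
        if t > duration_s:
            return requests
        file_id = rng.choices(range(n_files), weights=probs)[0]
        requests.append((t, file_id))

reqs = generate_requests(n_files=1000, duration_s=600)
print(len(reqs), reqs[:3])   # roughly 300 requests; most hit top-ranked files
```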
TWO-TIER

Through our earlier experiments, we made the following observations:
• As a tree can be formulated with network topology information (e.g., hop counts between nodes), tree routing achieves higher throughput than DHT routing.7
• DHT routing is more scalable in terms of routing burden, and more resilient to node/link failures thanks to multiple paths, than tree routing.
Thus, we introduce a hybrid approach whose routing structure consists of two tiers: a DHT as the high tier and trees as the low tier. At the low tier, CON nodes form a tree structure. As each tree covers only a part of the whole network, the routing scalability issue is not significant. At the high tier, only the root node of each tree participates in a DHT. Therefore, a query for a content file published in the same tree will be serviced within the tree. If a query is for a content file outside the tree structure, the DHT is exploited to forward the query to the tree where the requested content is published. Figure 2 illustrates how a query is forwarded across two trees via the DHT in two-tier.
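A minimal sketch of the two-tier lookup logic follows, under our own simplifying assumptions: the DHT is stood in for by consistent hashing over the ten tree roots, and the node and content names are hypothetical.

```python
# Hedged sketch of the two-tier lookup described above: resolve within the
# local tree first; on a miss, hash the content name onto the DHT of tree
# roots, which forwards the query to the owning tree.
import hashlib

class TwoTierRouter:
    def __init__(self, dht_roots):
        self.dht_roots = sorted(dht_roots)  # root node of each stub-domain tree
        self.local_index = {}               # content name -> publisher in this tree

    def dht_home(self, content_name):
        digest = int(hashlib.sha1(content_name.encode()).hexdigest(), 16)
        return self.dht_roots[digest % len(self.dht_roots)]  # stand-in for a real DHT

    def resolve(self, content_name):
        if content_name in self.local_index:          # served inside the tree
            return ("tree", self.local_index[content_name])
        return ("dht", self.dht_home(content_name))   # forward via the high tier

router = TwoTierRouter(dht_roots=[f"root{i}" for i in range(10)])  # 10 trees, as simulated
router.local_index["video/clip42"] = "host17"
print(router.resolve("video/clip42"))   # ('tree', 'host17')
print(router.resolve("audio/song7"))    # ('dht', ...): the query leaves the tree
```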
COMPARISON OF ROUTING STRUCTURES

We compare tree, two-tier, and DHT in terms of resolution delay, which refers to how long it takes for a content request to arrive at the content publisher. We also measure how resiliently each routing structure routes content requests in the presence of node failures.
Figure 2. Two-tier name-based routing architecture. X is a subscriber and Y a publisher of content A; X's query is hashed onto the DHT of tree roots and forwarded to the root of the tree that contains content A.
Figure 3. Resolution delay comparison between three structured approaches (resolution time in seconds versus simulation time in seconds, for tree, DHT, and 2-tier).
Figure 3 shows the resolution delays of the three routing structures. The tree structure outperforms the DHT, since the DHT topology is constructed without any information about the physical topology; a content request thus goes back and forth among CON nodes of the DHT to reach the corresponding publisher node. The performance of two-tier falls in between, as it is a hybrid approach. Figure 4 shows the successful resolution ratio as the node failure rate increases. Here, R means that, on average, each of the 100 CON nodes fails once for one hour during a simulation run. The performance gain of the DHT over the tree is noticeable, since there are multiple paths
7 Usually, a node's position in a DHT is determined randomly, e.g., by a hash function.
among nodes in the DHT, whereas the failure of a higher-level node in the tree results in more routing failures. In the case of two-tier, there are 10 trees; each tree is formed by the CON nodes in the same stub domain, and the root nodes of the 10 trees also participate in the DHT. From Figs. 3 and 4, the two-tier structure strikes a compromise in the trade-off between the tree (performance) and the DHT (resilience). Recall also that the nodes in two-tier bear a lower routing burden than those in the tree.
Figure 4. Robustness comparison between three structured approaches (success ratio versus node failure rate, from 1R to 3R, for tree, DHT, and 2-tier).

NETWORK TRAFFIC LOAD
In the second experiment, we demonstrate how much traffic load is reduced by CON. We compare DONA (tree structure), two-tier, and the current Internet. The cache replacement policy for the CON nodes is least recently used (LRU), and the cache size of each CON node is 5 Gbytes. Note that there is no publish/subscribe paradigm or in-network caching in the current Internet. Figure 5 shows how much the CON proposals (DONA and two-tier) can reduce the network-wide traffic load through the publish/subscribe paradigm and in-network caching. The performance metric is the product of hop count and link bandwidth (consumed to deliver contents). As the simulation progresses, the product of hop count and bandwidth diminishes in the CON proposals due to the cache effect. Each plot is the average over 1000 s; thus, the cache effect appears almost from the beginning. Occasionally, two-tier exhibits slightly poorer performance than DONA due to DHT overlay inefficiency.

Figure 5. The impact of the publish/subscribe paradigm and in-network caching on content delivery (bandwidth in MB/s × hop count versus simulation time, for the Internet, DONA, and 2-tier).
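For clarity, the metric plotted in Fig. 5 can be computed as in the following sketch; this is our formulation of "bandwidth × hop count" averaged over 1000-s bins, and the sample numbers are invented.

```python
# Rough illustration of the metric plotted in Fig. 5: bandwidth consumed
# times hop count, averaged over 1000-second bins.
def traffic_load(deliveries, bin_s=1000):
    """deliveries: list of (time_s, bytes_sent, hop_count); returns bin -> MB/s * hops."""
    bins = {}
    for t, nbytes, hops in deliveries:
        b = int(t // bin_s)
        bins[b] = bins.get(b, 0.0) + nbytes * hops
    return {b: total / bin_s / 1e6 for b, total in bins.items()}

sample = [(100, 5e8, 4), (700, 5e8, 2), (1200, 5e8, 2)]  # caching shortens paths
print(traffic_load(sample))  # {0: 3.0, 1: 1.0}; load drops as hop counts fall
```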
CONCLUSIONS
To fundamentally resolve the mismatch between content-oriented Internet usage and the host-based Internet architecture, content-oriented networking studies have proliferated with a focus on naming and routing. In this article, we classify and compare the prior proposals in terms of naming and routing criteria. We also identify two important research topics in CON environments:
• How to disseminate contents from multiple sources
• How to decide which contents to cache in distributed environments
We then compare CON routing structures and demonstrate the performance gain of CON over IP networking. For future work, an interesting topic would be to compare the in-network caching part of CON proposals with CDN solutions in terms of network traffic load mitigation.
ACKNOWLEDGMENT
This publication is partially based on work performed in the framework of the Project COAST (ICT-248036), which is supported by the European Community. This work is also supported by the NAP of the Korea Research Council of Fundamental Science and Technology, and by the IT R&D program of MKE/KEIT (10035245: Study on Architecture of Future Internet to Support Mobile Environments and Network Diversity). The ICT at Seoul National University provided research facilities. Professor Ted "Taekyoung" Kwon was on sabbatical leave at Rutgers University, where he had in-depth discussions with Professor Dipankar Raychaudhuri.
REFERENCES
[1] H. Schulze and K. Mochalski, "Ipoque Internet Study 2008/2009," http://portal.ipoque.com/downloads/index/study.
[2] T. Koponen et al., "A Data-Oriented (and Beyond) Network Architecture," Proc. ACM SIGCOMM '07, 2007, pp. 181–92.
[3] V. Jacobson et al., "Networking Named Content," Proc. ACM CoNEXT '09, New York, NY, 2009, pp. 1–12.
[4] K. Visala et al., "An Inter-Domain Data-Oriented Routing Architecture," Proc. ReArch '09: Wksp. Re-Architecting the Internet, New York, NY, 2009, pp. 55–60.
[5] P. Eugster et al., "The Many Faces of Publish/Subscribe," ACM Computing Surveys, vol. 35, 2003, pp. 114–31.
[6] M. Gritter and D. R. Cheriton, "An Architecture for Content Routing Support in the Internet," Proc. 3rd USENIX Symp. Internet Technologies and Sys., 2001, pp. 37–48.
[7] A. Carzaniga et al., "A Routing Scheme for Content-Based Networking," Proc. IEEE INFOCOM '04, Hong Kong, China, Mar. 2004.
[8] P. Ganesan et al., "Canon in G Major: Designing DHTs with Hierarchical Structure," Proc. ICDCS '04, 2004, pp. 263–72.
[9] J. Erman et al., "Network-Aware Forward Caching," Proc. WWW '09, 2009.
[10] S. Borst et al., "Distributed Caching Algorithms for Content Distribution Networks," Proc. IEEE INFOCOM '10, 2010.
BIOGRAPHIES

JAEYOUNG CHOI ([email protected]) received his B.S. degree in computer science and engineering from Seoul National University, Korea, in 2004. He is currently working toward his Ph.D. degree at the School of Computer Science and Engineering, Seoul National University. His research interests include content-oriented networking, peer-to-peer networking/applications, and global Internet infrastructure including inter/intradomain routing protocols.

JINYOUNG HAN ([email protected]) received his B.S. degree in computer science from Korea Advanced Institute of Science and Technology (KAIST), Daejeon, in 2007. He is currently working toward his Ph.D. degree at the School of Computer Science and Engineering, Seoul National University. His research interests include peer-to-peer networks, content-centric networks, and ubiquitous computing.

EUNSANG CHO ([email protected]) received his B.S. degree in computer science and engineering from Seoul National University in 2008. He is currently working toward his Ph.D. degree at the School of Computer Science and Engineering, Seoul National University. His research interests include peer-to-peer, content-centric, and delay-tolerant networking.

TED "TAEKYOUNG" KWON ([email protected]) is with the School of Computer Science and Engineering, Seoul National University. He was a visiting professor at Rutgers University in 2010. Before joining Seoul National University, he was a postdoctoral research associate at the University of California Los Angeles and the City University of New York. He obtained his B.S., M.S., and Ph.D. degrees at Seoul National University. He was a visiting student at IBM T. J. Watson Research Center and the University of North Texas. His research interests lie in the future Internet, content-oriented networking, and wireless networks.

YANGHEE CHOI ([email protected]) received his B.S. in electronics engineering from Seoul National University, his M.S. in electrical engineering from KAIST, and his D.Eng. in computer science from Ecole Nationale Superieure des Telecommunications, Paris, France, in 1975, 1977, and 1984, respectively. He worked at the Electronics and Telecommunications Research Institute (Korea), Centre National d'Etudes des Telecommunications (France), and IBM Thomas J. Watson Research Center (United States) before joining Seoul National University in 1991. He was president of the Korea Institute of Information Scientists and Engineers. He is dean of the Graduate School of Convergence Science and Technology, president of the Advanced Institutes of Convergence Technologies, and chair of the Future Internet Forum of Korea. He has published over 600 papers on network protocols and architectures.
FUTURE MEDIA INTERNET
Peer-to-Peer Streaming of Scalable Video in Future Internet Applications Naeem Ramzan, Queen Mary University of London Emanuele Quacchio, STMicroelectronics Toni Zgaljic and Stefano Asioli, Queen Mary University of London Luca Celetto, STMicroelectronics Ebroul Izquierdo, Queen Mary University of London Fabrizio Rovati, STMicroelectronics
ABSTRACT

Scalable video delivery over peer-to-peer networks appears to be key for efficient streaming in emerging and future Internet applications. In contrast to the conventional client-server approach, video is here delivered to a user in a fully distributed fashion. This is, for instance, beneficial in cases of high demand for a particular video content, as different users can receive the same data from different peers. Furthermore, due to the heterogeneous nature of Internet connectivity, content needs to be delivered to users through networks with highly varying bandwidths. Moreover, content needs to be displayed on a variety of devices featuring different sizes, resolutions, and computational capabilities. If video is encoded in a scalable way, it can be adapted to any required spatio-temporal resolution and quality in the compressed domain, according to a peer's bandwidth and other peers' context requirements. This enables efficient low-complexity content adaptation and interoperability for improved peer-to-peer streaming in future Internet applications. An efficient piece-picking and peer selection policy enables high quality of service in such a streaming system.
INTRODUCTION

Multimedia applications over the Internet are becoming popular due to the widespread deployment of broadband access. In conventional streaming architectures, the client-server model and the use of content distribution networks (CDNs), along with IP multicast, were the most desirable approaches for many years. The client/server architecture, however, severely limits the number of simultaneous users in video streaming. The reason is the bandwidth bottleneck at the server side, since many clients usually request content from the same server. A CDN overcomes this bottleneck problem by introducing dedicated servers at geographically
different locations, resulting in expensive deployment and maintenance. Compared to conventional approaches, a major advantage of peer-to-peer (P2P) streaming protocols is that each peer involved in content delivery contributes its own resources to the streaming session. Administration, maintenance, and responsibility for operations are therefore distributed among the users instead of being handled by a single entity. As a consequence, the amount of overall resources in the network increases, and the usual bottleneck problem of the client-server model can be overcome. A P2P architecture therefore scales exceptionally well with large user bases, and provides a scalable and cost-effective alternative to conventional media delivery services. The main advantages of P2P systems are bandwidth scalability, network path redundancy, and the ability to self-organize. These are indeed attractive features for effective delivery of media streams. Nevertheless, several problems are still open and need to be addressed in order to achieve high quality of service and user experience. In particular, the bandwidth capacity of a P2P system varies considerably, as it relies on heterogeneous peer connection speeds and directly depends on the number of connected peers. To cope with the varying bandwidth capacities inherent to P2P systems, the underlying video coding/transmission technology needs to support bit-rate adaptation according to the available bandwidth. Moreover, displaying devices at the user side may range from small handsets (e.g., mobile phones) to large HD displays (e.g., LCD televisions). Therefore, video streams need to be transmitted at a suitable spatio-temporal (ST) resolution supported by the user's display device. If conventional video coding technologies are used, the above-mentioned issues cannot be solved efficiently. Scalable video coding (SVC) techniques [1, 2] address these problems, as they allow "encoding a sequence once and decoding it in many different versions." Thus, scalable coded bitstreams can efficiently adapt to the
application requirements. The adaptation is performed fully in the compressed domain by directly removing parts of the bitstream. An SVC-encoded bitstream can be truncated to a lower resolution, frame rate, or quality. In P2P environments such real-time low-complexity adaptation results in a graceful degradation of the received video quality, avoiding interruption of the streaming service in case of congestion or bandwidth narrowing. Recently, P2P scalable video streaming has attracted significant attention from researchers. Liu et al. [3] employ layered video to accommodate asynchronous requests from users with heterogeneous bandwidths. Baccichet et al. [4] develop a mathematical framework to quantify the advantage of using a scalable codec for tree-based overlays, particularly during network congestion. Ding et al. [5] and the MMV platform [6] present P2P video on demand (VoD) systems that utilize SVC for delay minimization and to deal with heterogeneous user capabilities as well as dynamic end-to-end resource availability. The possibility of exploiting the flexibility given by scalable bitstreams within P2P overlays has also been an important topic in large cooperative projects. Among others, P2P-Next [7] and SEA [8] aim to build the software infrastructure enabling high-quality and reliable P2P-based TV services over the Internet. In the following sections we explain the fundamentals of the video streaming techniques used in some of these projects.
SCALABLE VIDEO CODING

In general, a scalable video sequence can be adapted in three dimensions: temporal (frame rate reduction), spatial (resolution reduction), and quality (quality reduction), by simply parsing and dropping specific parts of the encoded representation. Thus, the complexity of adaptation is very low, in contrast to the adaptation complexity of non-scalable bitstreams. The SVC scheme gives flexibility and adaptability to video transmission over resource-constrained networks in such a way that, by adjusting one or more of the scalability parameters, it selects a layer containing an appropriate ST resolution and quality according to current network conditions. Figure 1 shows an example of video distribution through links supporting different transmission speeds and display devices. At each point where video quality/resolution needs to be adjusted, an adaptation is performed. Since the adaptation complexity is very low, video can be efficiently streamed in such an environment. The latest video coding standard, H.264/MPEG-4 AVC, provides a fully scalable extension, SVC1 [1]. It reuses key features of H.264/MPEG-4 AVC, and also uses other techniques to provide scalability and improve coding efficiency. The scalable bitstream is organized into a base layer and one or several enhancement layers. SVC provides temporal, spatial, and quality scalability with a low increase in bit rate relative to single-layer H.264/MPEG-4 AVC. The SVC standard is based on a hybrid technology. In principle, it uses a combination of
Figure 1. Streaming of scalable video.
spatial transform based on the discrete cosine transform and temporal differential pulse code modulation. An alternative approach is to use wavelets for both temporal and spatial decorrelation; this approach is commonly referred to as wavelet-based SVC (W-SVC). Several recent W-SVC systems [2] have shown exemplary performance in different types of application scenarios, especially when fine-grained scalability is required. Observe that fine-grained scalability is not supported by the current SVC standard.
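The compressed-domain adaptation described above amounts to filtering bitstream units by their layer indices. The sketch below is a deliberately simplified illustration; the unit structure is hypothetical and does not follow the actual SVC NAL unit syntax.

```python
# Minimal sketch of compressed-domain SVC adaptation: a scalable bitstream
# is truncated by keeping only the units whose (temporal, spatial, quality)
# layer indices fit the target operating point.
def adapt(units, max_temporal, max_spatial, max_quality):
    """units: list of dicts with 't', 's', 'q' layer ids and a 'payload'."""
    return [u for u in units
            if u["t"] <= max_temporal and u["s"] <= max_spatial and u["q"] <= max_quality]

bitstream = [
    {"t": 0, "s": 0, "q": 0, "payload": b"base"},        # base layer
    {"t": 1, "s": 0, "q": 0, "payload": b"30fps"},       # temporal enhancement
    {"t": 0, "s": 1, "q": 0, "payload": b"4CIF"},        # spatial enhancement
    {"t": 0, "s": 0, "q": 1, "payload": b"hi-quality"},  # quality enhancement
]
# A mobile phone on a slow link keeps only the base layer:
print(len(adapt(bitstream, 0, 0, 0)))   # 1
# A device with a larger display additionally keeps the spatial enhancement:
print(len(adapt(bitstream, 0, 1, 0)))   # 2
```

No re-encoding is involved: the operation is a pure filter over already compressed data, which is why its complexity is so low.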
STREAMING OF SCALABLE VIDEO OVER P2P NETWORKS

P2P protocols have been widely deployed for sharing files over the Internet. One of the most commonly used P2P protocols is BitTorrent [9]. However, BitTorrent is not suitable for streaming applications, since segments of the video file are generally not received in sequential order. Thus, substantial research has been conducted recently to extend the BitTorrent protocol and make it suitable for streaming. An example of such an extended protocol is Tribler [10]. A generic P2P streaming architecture using SVC is depicted in Fig. 2. At the sender side, the video is compressed by a scalable encoder. The compressed bitstream may optionally be further processed to make it more suitable for transmission or to add additional data into the stream. The processing stage may consist of separating the scalable bitstream into files, each carrying an individual scalable layer, or multiplexing the audio and video streams. The corresponding bit-
1 In the remainder of this article, the acronym SVC is used interchangeably to denote both the standard and the concept of scalable video coding.
Figure 2. P2P streaming of scalable video.
stream description is created during either the encoding or the processing phase. The description may contain information about the organization of the video into scalable layers, the resolution and quality of each layer, and so on. Finally, the torrent file is produced, which, among other information, describes the mapping of the stored video into chunks. A chunk represents the smallest unit of data that will be transmitted over the P2P network. Sometimes the term piece is used to denote a chunk; in this article both terms (chunk and piece) are used interchangeably. At the consumer side, the peer downloads the torrent file and requests the video. Here, conventional P2P protocols used for file sharing require modifications. In BitTorrent, file chunks are downloaded in rarest-first fashion. This is an efficient strategy in file sharing applications, since the availability of rare chunks is eventually increased and higher download rates can be achieved for these chunks. However, in video streaming this can result in an interruption of the video playback, since chunks are not received sequentially. Therefore, special care needs to be given to those chunks that are close to the playback position. An example of an algorithm that takes these considerations into account is Give-to-Get (G2G) [11], implemented in Tribler [10]. In this algorithm, chunks of compressed video are classified into three priority categories: high, medium, and low. This classification depends on the current playback position. Chunks close to the playback position are marked as high-priority chunks; they are downloaded first and in sequential order. Medium- and low-priority chunks are downloaded according to the standard BitTorrent strategy: rarest-first. Given a scalable encoded video unit, a peer can initially decide to recover all or just a subset
of the available layers, depending on its capabilities. For instance, a mobile device can decide to retrieve the base layer providing CIF (352 × 288) resolution, while a set-top box may additionally retrieve the 4CIF (704 × 576) enhancement layer. Besides such a static selection, a peer can dynamically retrieve just a subset of layers in order to react to a temporary narrowing of the bandwidth. Such dynamic adaptation can be achieved through a carefully designed piece-picking policy. For this purpose the G2G algorithm can be modified to take into account not only the playback position, but also the different layers in the scalable bitstream. To ensure continuous video playback, chunks close to the playback position need to be received on time, at least for the base layer of scalable video. Therefore, neighboring peers need to be carefully selected. Ideally, these peers should be able to deliver video pieces before they are requested by the video player. In the following subsections we explain three recent state-of-the-art P2P scalable video streaming systems based on the principles described above.
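A compact way to express the G2G-style policy described above is sketched below. This is our simplification: the window sizes are invented, and a real implementation would also handle layer dependencies and per-peer request pipelining.

```python
# Sketch of the G2G-style priority classes described above: chunks near the
# playback position are high priority and fetched in order; the rest follow
# BitTorrent's rarest-first rule.
def pick_next(missing, playback_pos, availability=None, high=4, medium=16):
    """missing: chunk ids not yet downloaded; availability: id -> #peers having it."""
    availability = availability or {}
    high_set = sorted(c for c in missing if playback_pos <= c < playback_pos + high)
    if high_set:
        return high_set[0]                    # sequential, closest deadline first
    mid_set = [c for c in missing if c < playback_pos + medium]
    pool = mid_set if mid_set else list(missing)
    return min(pool, key=lambda c: availability.get(c, 0))  # rarest-first

missing = {3, 5, 9, 40}
avail = {3: 9, 5: 2, 9: 7, 40: 1}
print(pick_next(missing, playback_pos=2, availability=avail))   # 3: high-priority window
print(pick_next({9, 40}, playback_pos=2, availability=avail))   # 9: medium window, rarest there
```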
THE MMV PLATFORM

The system proposed in [12] is based on Tribler. The main modifications are in the G2G algorithm and the peer selection policy. The video sequence is compressed by the W-SVC encoder [2]. The compressed sequence consists of groups of pictures (GOPs) and scalable layers. The number of frames within a GOP and the number of layers are set during encoding and are constant throughout the sequence. Along with the bitstream, a description file is generated, which contains information on the mapping of GOPs and layers into chunks and vice versa. The
Figure 3. Sliding window operations. First row: prebuffering phase starts; second row: prebuffering phase ends; third row: the window shifts the first time; fourth row: the window shifts the second time.
description file is transmitted with the video sequence. It has the highest priority and therefore should be downloaded before the bitstream. The size of a chunk in the MMV platform has been set to 32 kbytes; however, other sizes are supported as well. If the size of a GOP in bytes is not an integer multiple of the chunk size, padding bytes are added at the end of the GOP. In this way each GOP consists of chunks that are independent from other GOPs. Piece Picking Strategy — At the beginning of the streaming session, information about GOPs and layers is extracted from the bitstream description file. At this point, a sliding window is defined, made of several GOPs (typically three to four), and the prebuffering phase starts. Chunks are picked only from those inside the window unless all of them have already been downloaded. In the latter case, the piece picking policy will be rarest-first. Inside the window, chunks have different priorities, following the idea from the original G2G algorithm. First, a peer will try to download the base layer (BL),
then the first enhancement layer (EL1), and so on. Pieces from the BL are downloaded in sequential order, while all other pieces are downloaded rarest-first (within the same layer). The window shifts every t_GOP seconds, where t_GOP represents the GOP duration in seconds. An exception is the first shift, which is performed after prebuffering; the duration of the prebuffering stage corresponds to the length of the sliding window in seconds. Every time the window shifts, two operations are performed. First, downloaded pieces are checked to evaluate which layers have been completely downloaded. Second, pending requests concerning pieces of the GOP located just before the window are dropped. Fully downloaded layers from that GOP are sent to the video player for playback. Note that the window shifts only if at least the BL has been received; otherwise, the system auto-pauses. Figure 3 shows the behavior of the system with a window three GOPs wide. An early stage of the prebuffering phase is shown in Fig. 3, first row: the peer is downloading pieces from the BL in a sequential way. In Fig. 3,
second row, the first two layers have been downloaded, and chunks are being picked from EL2 according to the rarest-first policy; these pieces can belong to any GOP within the window. In Fig. 3, third row, the window has shifted, although not all pieces from EL2 in GOP 0 have been received; this layer is discarded from GOP 0. Inside the window, before downloading any other pieces from GOP 1 or 2, the system will pick chunks from GOP 3 until the quality of the received layers is the same. In other words, before picking any chunks belonging to EL2, chunks belonging to the BL of GOP 3 and EL1 of GOP 3 will be picked. In Fig. 3, fourth row, all GOPs within the window have the same number of completed layers, and pieces are picked from EL3.

Peer Selection Strategy — Occasionally, slow peers in the swarm may delay the reception of a BitTorrent piece even if the download bandwidth is high. This problem is critical if the requested piece belongs to the BL, as it might force the playback to pause. Therefore, these pieces should be requested from good neighbors. Good neighbors are those peers owning the piece that have the highest download rates and that alone could provide the current peer with a transfer rate above a certain threshold. Each time the window shifts, the download rates of all the neighbors are evaluated, and the peers are sorted in descending order. Pieces are then requested from peers providing download rates above the threshold. The performance of this framework is shown in Fig. 4. Here, the download rate is not high enough to allow the transfer of all layers. Moreover, the download rate is not constant; therefore, the received video bit rate needs to match the behavior of the download speed. It can be seen that the video is received at a higher bit rate immediately after the prebuffering phase and at the end of the sequence. The higher quality after
the prebuffering phase results from the fact that the first couple of GOPs stay in the window for a slightly longer period than other GOPs. At the end of the sequence, when the window cannot be shifted anymore, the window shrinks; therefore, more enhancement layers can be downloaded for the GOPs at the end of the sequence.

Figure 4. Received download rate and received video bit rate for the Crew CIF sequence.

THE NEXTSHARE PLATFORM

Based on the Tribler P2P protocol, the NextShare platform has been designed and is being implemented in the framework of the P2P-Next project [7]. The NextShare platform supports delivery over the P2P overlay of compressed video contents conforming to the specifications of the SVC standard [1]. At the sender side, the SVC bitstream is first divided into several files, each carrying a single scalable layer. Only the BL is encapsulated with audio in the MPEG-TS transport format. Within the torrent file, layers are indexed as independent files, leaving the user free to select the ones to be downloaded. Additional metadata is included in the torrent to provide a description of the scalable bitstream in terms of the resolutions and bit rates supported. SVC layers are downloaded in the form of data segments from different files and composed by the NextShare P2P engine at the consumer side. Once a target resolution is composed, the corresponding data is forwarded to a media player.

Adaptation Strategy — In NextShare the maximum target layer is selected and provided to the piece-picking algorithm by matching context information with the available scalable resolutions extracted from the torrent. To react to a temporary narrowing of the bandwidth, the adaptation decision is made periodically by checking the status of the input buffer, the playback position, and piece availability. Adaptation actions occur at synchronization points; for a compressed SVC bitstream these correspond to instantaneous decoding refresh (IDR) frames. An IDR is a special intra picture that cuts off all inter-picture dependencies on previously decoded frames. The stream is therefore divided into units of constant temporal duration called time slots; each time slot corresponds to the period of frames between two consecutive IDRs. The size of a NextShare chunk has been set to 56,400 bytes in order to fit multiple MPEG-TS packets in each chunk, while preserving a reasonable chunk size for efficient transmission. Here, the size of each MPEG-TS packet is 188 bytes. Each scalable layer is encoded at a constant bit rate (CBR), and hence is represented by a constant amount of bits in each time slot. Since CBR is used, it is possible to predict the number of chunks in a time slot of each layer. For example, for an SVC layer encoded at 512 kb/s and 25 frames/s, a time slot of 64 frames (2.56 s) yields a block of frames of approximately 164 kbytes, which corresponds to approximately three NextShare chunks. A bit overhead is considered in the piece mapping for each scalable layer, in order to compensate for the drift from the target bit rate always present in CBR algorithms.
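The chunk arithmetic in the previous paragraph can be checked with a few lines. In this sketch the chunk and packet sizes come from the text, while treating the unquantified "bit overhead" as an optional margin is our own choice.

```python
# Worked version of the NextShare piece-mapping example above.
TS_PACKET = 188
CHUNK = 56400                       # bytes; exactly 300 MPEG-TS packets

def chunks_per_time_slot(layer_kbps, fps=25, frames_per_slot=64, margin=0.0):
    slot_s = frames_per_slot / fps                   # 2.56 s between IDRs
    slot_bytes = layer_kbps * 1000 / 8 * slot_s      # CBR: constant bytes per slot
    slot_bytes *= 1 + margin                         # headroom for CBR rate drift
    return int(-(-slot_bytes // CHUNK))              # ceiling division

assert CHUNK % TS_PACKET == 0
print(chunks_per_time_slot(512))    # 3: the ~164-kbyte example from the text
```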
Figure 5. Layered priorities in NextShare.
Piece Picking Strategy — The procedure implemented in NextShare to download scalable data chunks is an extension of the G2G algorithm. Priorities are defined as in G2G and extended to the multiple files, as depicted in Fig. 5. In the high-priority set, pieces are downloaded sequentially, while in the low-priority set pieces are downloaded in a rarest-first fashion. Each block in the figure represents a time slot. In Fig. 5, at time instance t (playback position), the algorithm has to decide which block to download for time point (t + x). Here, the algorithm might decide to start downloading pieces inside EL1 at time t + 1 in order to improve the quality for the near future, or may decide to continue downloading blocks for the BL to ensure that the playback will not stop even if network conditions become worse. Periodically the algorithm makes such a decision depending on the current status of the download, the pieces lost, and the playback position. The controller implemented in NextShare tries to switch to a higher quality as soon as there is enough saved buffer for the current quality. Therefore, a safe buffer of chunks downloaded and not yet delivered to the player is defined; the size of this buffer is a function of the parameter x depicted in Fig. 5. The minimum value for x corresponds to five time slots, and can vary depending on network performance. When enough segments for the BL have been downloaded, the quality is increased, and the download of the next layer is selected. In this case the high-priority set is redefined, and high priorities are assigned to blocks belonging to the upper layers (the sequence of high priorities follows the layer dependencies H0 → H1 → H2). In order to guarantee a safe time alignment, for each increase in quality the initial time slots to download are ahead in time with respect to previous layers; an offset of one time slot is added at the beginning of each enhancement layer, as depicted in Fig. 5. When not enough pieces are available for a certain resolution, the controller switches back to a safer position by interrupting the download of the corresponding file and reassigning priorities to lower layers.
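The following is a hedged sketch of the buffer-driven quality switching just described. The thresholds and names are ours; the real NextShare controller also weighs piece availability and losses.

```python
# Hedged sketch of buffer-driven layer switching: step up when enough
# safety buffer has accumulated, step down toward the base layer otherwise.
MIN_SAFE_SLOTS = 5          # minimum value of x from the text: five time slots

def adapt_target_layer(current_layer, buffered_slots, max_layer):
    if buffered_slots >= MIN_SAFE_SLOTS and current_layer < max_layer:
        return current_layer + 1      # enough safety buffer: try a higher quality
    if buffered_slots < MIN_SAFE_SLOTS and current_layer > 0:
        return current_layer - 1      # fall back toward the base layer
    return current_layer

layer = 0
for buffered in [2, 5, 7, 6, 3, 1]:   # hypothetical buffer occupancy per period
    layer = adapt_target_layer(layer, buffered, max_layer=2)
    print(buffered, "->", layer)
# 2->0, 5->1, 7->2, 6->2, 3->1, 1->0: quality rises and degrades gracefully
```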
Peer Selection Strategy — The peer selection strategy is inherited from the approach implemented in Tribler. For each layer, pieces with an early deadline are downloaded from good (i.e., fast) peers. Bad peers (i.e., peers that missed a given deadline for a particular piece) are moved from the high-priority set to a low-priority one starting from the BL. A bad performance counter is incremented each time a piece download fails. As a consequence, peers with a positive bad performance counter are only available for picking low-priority pieces.
SEACAST PLATFORM

In the framework of the SEA project [8], a different P2P architecture to support scalable video streaming has been implemented. While the NextShare and MMV platforms are both based on a full-mesh P2P topology, in SEA the scalable content is delivered over a multi-tree overlay. The P2P platforms developed in SEA are based on the VidTorrent protocol [13], which creates an overlay forest of independent trees, each carrying a different part of the stream. A central entity (tracker/broker) is used for creating and managing the overlay. Nodes of a tree structure have well-defined parent-child relationships. Such an approach is typically push-based; that is, when a node receives a data packet, it also forwards copies of the packet to each of its children. If peers do not change too often, tree-based systems require little overhead, since packets are forwarded from node to node without the need for extra messages. However, in contrast to mesh topologies, in high-churn environments the tree must be continuously destroyed and rebuilt, which requires considerable control message overhead. Furthermore, nodes must buffer data for at least the time required to repair the tree in order to avoid packet loss. In the SEA project two different platforms have been developed. The first platform (SEABone) targets code optimization, focusing on cross-platform development for set-top boxes. The second platform (SEACast) aims at adding support for layered and multiple description coding schemes [14].
Figure 6. SEACast architecture.

SVC Support in SEACast — The structure of the P2P tree generated with the SEACast application is depicted in Fig. 6. The content is injected into the tree overlay by a root node that informs the broker about the published content. In the case of layered video, the root node also generates a file containing information about the content structure, in terms of supported resolutions and dependencies among the layers. This information is formatted in a so-called Session Description Protocol (SDP) message [15]. The original VidTorrent protocol was modified to deliver layered contents over multi-tree overlays. That is, for an SVC video, each layer is delivered over a different tree. The granularity introduced by splitting the content into multiple layers allows peers with limited upload bandwidth to contribute to the swarm by uploading data relative to a reduced version of the scalable bitstream. Intrinsic robustness to node failures is added, as the loss of connections carrying an enhancement layer will cause only temporary degradation of the overall quality. Clearly, in the case of failure or congestion of a connection carrying the BL, there is no possibility of easily recovering the quality even if enhancement layers are received. In such a situation, the performance may be improved by assigning priorities to each tree according to the importance of the SVC layer carried, and injecting and forwarding the BL over the most reliable path.

Data Transport in SEACast — In SEACast, data packets are simply forwarded from parent to children nodes. As shown in Fig. 6, the publisher is connected to the SEACast root node by means
of a different Real-Time Transport Protocol (RTP) connection for each scalable layer. RTP connections also act as interfaces between client nodes and the media player. SVC data are encapsulated by the publisher in RTP payloads. In the SEACast root node, RTP packets are directly pushed over a different tree according to the corresponding layer. Each SEACast client keeps a buffer of a few seconds for each tree in which it participates; RTP packets are extracted from the client buffer and again delivered to the media player over different sessions. The SVC stream is finally reconstructed inside the media player by aggregating the sub-bitstreams, and synchronization among sessions is maintained using the timestamp information contained in packet headers.

Peer Selection and Adaptation Strategy in SEACast — When a peer wants to join the swarm, it first contacts the broker to receive the list of neighboring nodes that already have the video. The SDP message is also transmitted by the broker. By parsing the SDP, the peer knows the available resolutions and the dependencies among the layers for the selected video. By matching this information with local capabilities and local bandwidth, it selects a target resolution. According to the layer hierarchy, the peer pings its potential parent nodes, starting from the BL and repeating the operation for the ELs. Each SVC layer is therefore retrieved from a different node. A local probe technique is used to select the best parent; in particular, a combination of round-trip time, available bandwidth, and the number of already active connections is used. Once the parent node is selected for each layer, connections are established. Periodically, available
bandwidth is checked, and in case of congestion or connection failure, the peer contacts the broker again to rebuild the tree.
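Below is an illustration of selecting a parent per SVC layer from probe results. The scoring formula is ours, not from the SEA project; a real implementation would tune the weighting of RTT, bandwidth, and load.

```python
# Illustration: pick a parent per SVC layer from local probe results
# combining RTT, available bandwidth, and how loaded the candidate is.
def parent_score(rtt_ms, avail_mbps, active_children, max_children=8):
    if active_children >= max_children:
        return float("-inf")                 # no free upload slots
    return avail_mbps / (1 + rtt_ms / 100) - active_children

def pick_parent(candidates):
    """candidates: name -> (rtt_ms, avail_mbps, active_children)."""
    return max(candidates, key=lambda n: parent_score(*candidates[n]))

probes = {                                   # hypothetical probe results
    "peerA": (30, 10.0, 2),
    "peerB": (120, 20.0, 7),
    "peerC": (60, 12.0, 8),                  # full: never selected
}
print(pick_parent(probes))                   # peerA: low RTT and lightly loaded
```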
CONCLUSIONS

In P2P networks video is streamed to the user in a fully distributed fashion: network resources are distributed among users instead of being handled by a single entity. However, due to the diversity of users' display devices and of the available bandwidth levels in the Internet, the underlying coding and transmission technology needs to be highly flexible. Such flexibility can easily be achieved by SVC, where bitstreams can be adapted in the compressed domain according to available bandwidth or user preferences. When using SVC in P2P streaming, special care needs to be given to handling the different bitstream layers according to the current playback position. Since it is highly important that pieces close to the playback position arrive on time, these need to be downloaded from good peers. In this article we have presented several advanced P2P systems supporting streaming of scalable video and designed to support future Internet applications. Considering the flexibility given by scalable bitstreams within P2P overlays, it is clear that P2P streaming systems supporting SVC technology will play an important role in the Internet of the future.
ACKNOWLEDGMENT

The authors wish to thank Simone Zezza from the Department of Electronics, Politecnico di Torino, for his help with revising this manuscript. This research has been partially funded by the European Commission under contracts FP7-247688 3DLife, FP7-248474 SARACEN, and FP7-248036 COAST.
REFERENCES

[1] H. Schwarz, D. Marpe, and T. Wiegand, "Overview of the Scalable Video Coding Extension of the H.264/AVC Standard," IEEE Trans. Circuits Sys. Video Tech., vol. 17, no. 9, Sept. 2007, pp. 1103–20.
[2] M. Mrak et al., "Performance Evidence of Software Proposal for Wavelet Video Coding Exploration Group," Tech. Rep. ISO/IEC JTC1/SC29/WG11/MPEG2006/M13146, 2006.
[3] Z. Liu, Y. Shen, and K. Ross, "LayerP2P: Using Layered Video Chunks in P2P Live Streaming," IEEE Trans. Multimedia, vol. 11, no. 7, 2009.
[4] P. Baccichet et al., "Low-Delay Peer-to-Peer Streaming Using Scalable Video Coding," Proc. Packet Video '07, Lausanne, Switzerland, 2007, pp. 173–81.
[5] Y. Ding et al., "Peer-to-Peer Video-on-Demand with Scalable Video Coding," Comp. Commun., 2010.
[6] PetaMedia Project; http://www.petamedia.eu.
[7] P2P-Next Project; http://www.p2pnext.org.
[8] SEA Project; http://www.ist-sea.eu.
[9] B. Cohen, "Incentives Build Robustness in BitTorrent," Proc. 1st Wksp. Economics of Peer-to-Peer Sys., 2003.
[10] J. A. Pouwelse et al., "Tribler: A Social-Based Peer-to-Peer System," Proc. 5th Int'l. Wksp. Peer-to-Peer Sys., Feb. 2006; http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.60.8696.
[11] J. J. D. Mol et al., "Give-to-Get: Free-Riding-Resilient Video-on-Demand in P2P Systems," Proc. SPIE Multimedia Comp. Net., vol. 6818, San Jose, CA.
[12] S. Asioli, N. Ramzan, and E. Izquierdo, "A Novel Technique for Efficient Peer-to-Peer Scalable Video Transmission," Proc. Euro. Sig. Process. Conf., Aalborg, Denmark, Aug. 23–27, 2010.
[13] VidTorrent Protocol; http://web.media.mit.edu/~vyzo/ vidtorrent/index.html. [14] S. Zezza et al., “SEACAST: A Protocol for P2P Video Streaming Supporting Multiple Description Coding,” Proc. IEEE Int’l. Conf. Multimedia Expo, New York, NY, 2009. [15] T. Schierl and S. Wenger, “Signaling Media Decoding Dependency in the Session Description Protocol (SDP),” IETF RFC 5583, July 2009; http://tools.ietf.org/html/rfc5583.
BIOGRAPHIES

NAEEM RAMZAN ([email protected]) is a postdoctoral researcher in the Multimedia and Vision Group, Queen Mary University of London. His research activities focus on multimedia search and retrieval, image and video coding, scalable video coding, surveillance-centric coding, multimedia transmission over wireless, and P2P networks. Currently, he is a senior researcher and a core member of the technical coordination team in the EU-funded projects PetaMedia and SARACEN. He is the author or co-author of more than 40 research publications.
EMANUELE QUACCHIO ([email protected]) received an M.S. degree in electronic engineering from the Polytechnic University of Turin, Italy, in 2003. He worked for two years as a researcher in the Department of Electronics of the same university and joined STMicroelectronics in 2006. His activities are focused on embedded software development for STB/mobile platforms and multimedia communication. He has published several papers in the principal journals and conferences of engineering. Since 2006 he has participated in a number of EU-funded projects.

TONI ZGALJIC ([email protected]) received a Ph.D. degree from Queen Mary University of London in 2008. He is currently a research assistant in the Multimedia and Vision Group in the School of Electronic Engineering and Computer Science at the same university. His research interests include scalable video coding and transmission, universal multimedia access, surveillance-centric coding, and video transcoding. He has published more than 20 technical papers in these areas, including chapters in books.

STEFANO ASIOLI ([email protected]) received an M.Sc. degree in telecommunications engineering from the Department of Information Engineering and Computer Science (DISI), University of Trento, Italy, in 2009. He is currently pursuing a Ph.D. degree in electronic engineering in the Multimedia and Vision Group, Queen Mary University of London. His research interests include peer-to-peer networks and scalable video coding.

LUCA CELETTO ([email protected]) received a Master's degree in electronic engineering from the University of Padova, Italy, in 1998. He joined STMicroelectronics in 1999, where he has contributed to research projects in video compression and streaming. He has published and co-authored several papers in the principal journals and conferences of engineering, and has been granted several patents on video technologies. As an MPEG committee member, he participated in the standardization of the H.264 specifications. He has collaborated on EU-funded projects.

EBROUL IZQUIERDO ([email protected]) is Chair of Multimedia and Computer Vision and head of the Multimedia and Vision Group in the School of Electronic Engineering and Computer Science at Queen Mary University of London. He received a Ph.D. from Humboldt University, Berlin, Germany. He has been a senior researcher at the Heinrich Hertz Institute for Communication Technology, Berlin, and at the Department of Electronic Systems Engineering of the University of Essex. He holds several patents in multimedia signal processing and has published over 400 technical papers, including chapters in books.

FABRIZIO SIMONE ROVATI ([email protected]) received his electronic engineering degree from Politecnico di Milano in 1996. In 1995 he joined STMicroelectronics in the AST System R&D group, working on digital video processing algorithms and architectures. He currently leads the corporate R&D group in the field of networked multimedia. During his career he has authored or co-authored 15 British, European, and U.S. granted patents, and 10 publications in conferences and technical journals.
FUTURE MEDIA INTERNET
Improving End-to-End QoE via Close Cooperation between Applications and ISPs Bertrand Mathieu and Selim Ellouze, Orange Labs Nico Schwan, Bell Labs David Griffin and Eleni Mykoniati, University College London Toufik Ahmed, University Bordeaux-1 Oriol Ribera Prats, Telefonica I&D
ABSTRACT

In recent years there has been a trend toward more user participation in Internet-based services, leading to an explosion of user-generated, tailored, and reviewed content and of social-networking-based applications. The next generation of applications will continue this trend and be more interactive and distributed, putting prosumers at the center of a massively multiparticipant communications environment. Furthermore, future networked media environments will be high-quality, multisensory, multi-viewpoint, and multistreamed, relying on HD and 3D video. These applications will place unprecedented demands on networks between unpredictable and arbitrarily large meshes of network endpoints. We advocate the development of intelligent cross-layer techniques that, on one hand, will mobilize network and user resources to provide network capacity where it is needed, and, on the other hand, will ensure that applications adapt themselves and the content they convey to the available network resources. This article presents an architecture to enable this level of cooperation between application providers, users, and the communications networks, so that the quality of experience of the application's users is improved and network traffic is optimized.
INTRODUCTION

In recent years there has been a trend toward more user participation in Internet-based applications. There has been an explosion of user-generated, tailored, and reviewed content, while social networking is beginning to replace traditional communications technologies such as email and websites. Typical examples of popular applications that exist only for and because of significant user participation are Facebook, YouTube, Flickr, Digg, eBay, Second Life, and Wikipedia. However, even though content is being created,
modified, and consumed by a large number of participants, almost all of these applications still rely on servers adequately dimensioned and carefully positioned by service providers in large data centers at strategic locations across the Internet to ensure an adequate quality of experience (QoE) for their users. These deployments require significant investment to maintain; they cannot expand beyond the selected locations and have limited flexibility to adapt to demand variations over time. The next generation of applications will continue the trend of user-centricity where users are not just seen as consumers of a product or service, but are active participants in providing it. They will be more interactive and distributed, putting prosumers at the center of a massively multiparticipant communications environment where they can interact in real time with other users and provider resources, to provide and access a seamless mixture of live, archived, and background material. Furthermore, future networked media environments will be high-quality, multisensory, multi-viewpoint, and multistreamed, relying on HD and 3D video. These applications will place unprecedented demands on networks for high-capacity, low-latency, and low-loss communication paths between unpredictable and arbitrarily large meshes of network endpoints, distributed around the entire globe, putting additional pressure for upload capacity in access networks. If the entire burden of supporting high volumes of HD/3D multimedia streams is pushed to the Internet service providers (ISPs) with highly concurrent unicast flows, this would require operators to upgrade the capacity of their infrastructure by several orders of magnitude to ensure end-to-end quality of service between arbitrary endpoints. Rather than simply throwing bandwidth at the problem, we advocate the development of intelligent cross-layer techniques that, on one hand, will mobilize network and user resources
to provide network capacity where it is needed, and, on the other hand, will ensure that the applications adapt themselves and the content they are conveying to the available network resources, considering core network capacity as well as the heterogeneity of access networks and end device capabilities. Meeting these challenges requires a previously unseen degree of cooperation between application providers, users, and the communications networks that will transport the application data. While previous work, mainly in the telco domain, such as the IP Multimedia Subsystem (IMS) [1], Parlay/X [2], and more recently ALTO, has made progress in this direction, these solutions are confined to walled-garden environments or limited in terms of the services applications can request. Furthermore, they do not address the exchange of information in both directions that would enable overlay applications and networks to be jointly optimized. This article presents an architecture that enables this level of cooperation between the actors and elaborates on the related interactions. The next section presents some example applications to illustrate the benefits of collaboration between the applications and the network. We then give an overall picture of our approach and elaborate on the cross-layer interactions. The functional architecture, which captures the high-level building blocks of our system, is detailed in the following section. We then describe research and standardization initiatives related to our approach. Finally, a summary of the article and a discussion of future work are presented.
EXAMPLE APPLICATIONS The environment assumed in this article is one where applications are decoupled from the underlying communications networks and organized as overlay networks. The application logic will be distributed over numerous overlay nodes provided by end users, application providers, and even the ISPs themselves. The complexity of the nodes and the application logic being executed at each one depends on the sophistication of the application. The architecture and cooperative approach presented in this article have been designed to be of benefit to a wide spectrum of applications. In the simplest cases the overlay algorithms may be streaming live video through swarms, while more complex applications may require on-the-fly multistream 3D video processing or real-time discovery and interactive tracking of user-generated content. Novel applications delivering rich user experiences will present new business opportunities for the consumer electronics and user applications industries beyond incremental changes to today's server-centric and web-based content retrieval services. The interaction with underlying networks will also present new opportunities for ISPs to cooperate in the delivery of media applications. We present the following examples to illustrate the type of advanced services that can benefit from increased cooperation with underlying ISPs. Their features include low-latency and high-capacity content dissemination; manipulation of distributed and streaming content, such as
the interpolation of multiple audio-visual streams from different viewpoints; exchange of live, multisensory, and contextual information between participants; and the discovery and navigation of distributed content, information, and users. The common denominator is the collaborative production, processing, and consumption of a mixture of live, archived, and background high-quality media from multiple sources, with demands that can outstrip the capabilities of the underlying networks unless the applications adapt themselves and the content is tailored to meet network capacity and performance constraints, and/or specific network services such as multicast or in-network caching facilities are provisioned to support their efficient distribution.
One example of such an application is the multi-viewpoint coverage of sporting events, such as a bicycle race like the Tour de France (Fig. 1). In this scenario numerous fixed and mobile sources, such as professional media organizations, trackside spectators, and the cyclists themselves, can generate live audio-visual streams. As each stream shows a specific view of a potentially different subject, a potentially large number of streams with overlapping content are generated. Consumers of this content, who may be distributed around the globe using various fixed and mobile end devices, can tailor their viewing experience by selecting from many streams according to their preference, or navigate between streams in real time to zoom or pan around, or follow particular cyclists. The popularity of individual streams is difficult to predict and may change rapidly; therefore, the creation and adaptation of efficient distribution trees or meshes to transmit the content to interested sets of users at the required quality levels present problems that cannot easily be solved by individual ISPs or the application overlay in isolation. ISPs are unaware of the popularity of dynamically changing content sources, the locations of the consumers of that content, or the heterogeneous end terminal capabilities in remote networks. From the ISPs' perspective the applications are simply generating large quantities of traffic between unpredictable locations. On the other hand, application logic can track and match content sources and consumers, but efficient distribution overlays can only be built with knowledge of underlying network capabilities so that caching and adaptation functions, for example, can be placed where they are required and are most effective, or advanced network services such as regional multicast distribution can be invoked where most needed to relieve congestion and improve the QoE for users.
Figure 1. Bicycle use case.
A second example is a virtual meeting such as a 3D virtual conference, where a large number of participants, represented by virtual avatars, can meet and communicate via voice and avatar gestures, as well as share additional multimedia data such as live video, 3D models, text, and presentation slides (Fig. 2). Users will be both consumers and generators of content: custom avatars, user-generated environments, and sources of interactive video streams. Participants can move around the virtual meeting space, attend presentations, establish special interest discussion groups, socialize in coffee breaks, and so on. Tracking the participants and managing their interests and participation in various activities is the responsibility of the application overlay only; however, the distribution of content to various groups of users is something that benefits from cooperation with the underlying networks. Users in the same virtual meeting room with a similar point of view need access to similar data, such as static background material as well as dynamically changing objects that need to be synchronized between many consumers. The efficiency of the system can therefore be greatly improved by organizing the overlay with regard to the position of content within the virtual space and making use of ISP-provided network services such as localized in-network content caching or multicast for distributing state changes of common objects, to reduce latency in live updates, network load, and therefore costs.
Figure 2. 3D virtual conference.
It can be seen that both of the above examples could generate huge amounts of data to be transmitted to sets of receivers that range from small groups to many hundreds or even millions of consumers, with specific quality-level constraints such as maximum latency. Our proposed solution of increased cooperation between ISPs and the overlays will assist in exchanging rich information between the application and the network so that the overlays can be organized efficiently and can adapt to network constraints by avoiding high-cost or highly congested areas, adapting the quality of the streams to match the available network capacity, or enabling specific network services such as multicast distribution or caching to be invoked in areas of densely populated receivers.
OVERALL APPROACH
Because applications will be more participatory and interactive, today's model of centralized or replicated servers in large data centers is likely to be replaced by a highly distributed model where processes run in user equipment, interwork with one another in an overlay layer, and can be enhanced via the invocation of network services. In our approach, we advocate close and strong cooperation between ISPs and overlay applications for optimized delivery of content to end users. This cooperation is achieved via the comprehensive, media-aware, and open Collaboration Interface between Network and Applications (CINA), which bridges the gap between ISPs and application overlays and aims at:
• Increasing the degree of cooperation between the network layer and the applications through mutual exchange of information
• Optimizing application overlay networks with respect to the capabilities of the underlying networks and the participant end users
• Providing the means by which service providers can request the activation of specialized network services or resources to achieve efficient distribution of highly demanding content streams
• Enabling dynamic adaptation of the content to meet the abilities of the underlying networks and user requirements
The overall picture of the system is illustrated in Fig. 3, which highlights the CINA interface. The overlay application network consists of nodes provided by one or more service providers (SPs), the users themselves, and, optionally, ISP nodes. There will be separate overlay networks for each application. The different applications may be more or less dependent on SP nodes, with peer-to-peer (P2P) applications running entirely on user nodes being the extreme case. Given that the applications are global in coverage and require end-to-end traffic optimization involving multiple hops in different networks, it is necessary to collect information from many underlying networks via CINA. Since data from one network may conflict with that provided by another, or the quantity and quality of the information may differ from ISP to ISP, the harmonization of the information gleaned from the ISPs is required in the overlay. The overlay could also aggregate the information collected from different ISPs, with additional data collected by
measurements of the overlay itself for the global optimization of the application.
Figure 3. Overview of the system and its relationship with users, ISPs, and overlay applications.
The use of this information will benefit the algorithms needed for optimizing the distribution of the content. These algorithms determine which application resources need to be involved, and how best to interconnect the participants and distribute the load and the content to achieve the best QoE given the available resources. Content adaptation services will also profit from the cooperation. Until now, digital coding and decoding systems have been designed following the client/server paradigm, but applications will now have to deal with the fact that content may come from several sources and terminal devices with different capabilities, residing in networks that offer different service levels. Applications need to adapt and select quality layers under a brand new set of constraints and circumstances. Pre-adaptation of content using offline pregeneration strategies on the subset of the most popular content is inefficient and cannot keep pace with the continuously growing number of users, their changing interests, and heterogeneous terminal capabilities. Content adaptation has two dimensions: personalizing and tailoring the content for the subjective viewpoint of the user(s), and encoding content in a flexible way to match the capabilities of the network. The latter could also allow the upload capacity of participants, especially those that act as content sources, to be boosted by making use of parallel connections across several available access networks. Adaptation is achieved through mechanisms that dynamically adjust the content via scalable video coding, video layer coding, or adaptive streaming, performed at the source, at the end-user side, or by intermediate nodes as appropriate, to match the requirements of the set of users receiving the content while adhering to network capabilities and restrictions such as congested access links.
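To make the layered-adaptation idea concrete, the following minimal sketch (in Python) selects the subset of scalable-coding layers whose cumulative rate fits a reported bottleneck capacity. The layer names and bit rates are invented for illustration and are not part of the proposed architecture.

    # Hypothetical cumulative rates (Mb/s) for a scalable stream; base layer first.
    LAYERS = [("base", 0.8), ("enh1", 2.0), ("enh2", 4.5), ("enh3", 9.0)]

    def select_layers(bottleneck_mbps, layers=LAYERS):
        """Keep every layer whose cumulative rate fits the bottleneck capacity."""
        chosen = [name for name, rate in layers if rate <= bottleneck_mbps]
        return chosen or [layers[0][0]]  # never drop below the base layer

    # A receiver group behind a congested 3 Mb/s access link gets base + enh1:
    print(select_layers(3.0))  # ['base', 'enh1']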
CROSS-LAYER INTERACTIONS The Internet Engineering Task Force (IETF) ALTO working group defined an architecture and a protocol for communication between ALTO clients and ALTO servers. The ALTO information to be exchanged includes an abstracted network map and an associated cost map, or a list of peers ranked according to ISP preferences. In our study we aim to go further, allowing a network operator to provide overlay applications with any kind of network information it wishes, or accepts, to provide, after agreement between the two actors. For instance, the ISP could inform the application about the capabilities of its network: access networks (type, link capabilities, coverage, etc.), current network service status (availability of multicast groups, caches, etc.), as well as other possible metrics (load of routers, bandwidth, delay, etc.). Since some information is critical for network operators and they do not want to reveal it (e.g., detailed internal topology or BGP policies), the CINA interface is designed with the agreements between the applications and the ISP in mind, and is adaptable to allow any kind of agreed information to be exchanged. Furthermore, via the CINA interface, the network operators can also get information from the overlay so that they can optimize the traffic in their networks, mobilize resources, and adapt to the overlay applications, possibly even transparently; this is not covered by ALTO. Typically, the application could inform the ISP about its traffic demand: information related to users (e.g., user location and estimated traffic matrix) or content (quantity of sources, their bit rates, adaptive coding, etc.). The information exchanged between the application and the ISP thus goes further than the information reflecting the preferences and policies of the involved business entities as currently defined in the ALTO working group.
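As an illustration of this bidirectional exchange, the information flowing in each direction might take the following shape, modeled here as Python dictionaries. This is a sketch only; the field names are our own assumptions and are not part of ALTO or of any CINA specification.

    # Downstream: information an ISP might agree to expose to an overlay.
    isp_to_overlay = {
        "network-map": {"pid-1": ["192.0.2.0/24"], "pid-2": ["198.51.100.0/24"]},
        "cost-map": {("pid-1", "pid-2"): 5, ("pid-2", "pid-1"): 5},
        "services": {"multicast": True, "caching": True, "bandwidth-on-demand": False},
        "access": {"pid-1": {"type": "FTTH", "uplink_mbps": 100}},
    }

    # Upstream: demand information the overlay might report back to the ISP.
    overlay_to_isp = {
        "traffic-matrix": {("pid-1", "pid-2"): 350},  # estimated demand, Mb/s
        "sources": [{"pid": "pid-1", "streams": 12, "rate_mbps": 4, "scalable": True}],
        "consumers": {"pid-2": 25000},                # current audience per region
    }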
CINA also goes further than ALTO in that it enables future networked media applications to make use of advanced network services in a dynamic and flexible way to achieve cost-efficient delivery of high QoE for their users. An ISP could offer applications information such as the location of users or some user profile information, but in our approach we go further by offering advanced network services. For example, such network services can be:
• Multicasting: Possibly with hybrid application layer and native IP multicast, since the applications will usually be spread over several ISPs, or the use of high fan-out nodes located in the network.
• Caching: Via the use of specialized nodes, provided either by ISPs or third-party entities, to optimize delivery and save bandwidth in the network.
• Bandwidth on demand: To enable delivery toward end users over multiple access networks simultaneously, and provide bandwidth on demand over aggregated access networks.
• Dynamic quality of service (QoS) mapping: Invocation and mapping of application QoS requirements, network capabilities, end-user devices, and access networks.
• Ad/text insertion: In order to offer added-value services that might be monetized by network operators.
• Content adaptation: The presence of heterogeneous end-user devices and network infrastructures will require multiple versions of the same resource, which can be efficiently generated using content adaptation.
Our work mainly relates to the open interface that lets applications request the dynamic activation of such services and does not focus on redefining them. It is the ISP's responsibility to list the network services it can offer and to reach an agreement with an application on whether a service is provided for free or with appropriate billing. The final decision to activate a service remains under the control and management of the network operator. No one can force overlay applications and network operators to collaborate, but given the potential win-win outcome (optimized QoE for the application; reduced traffic load as well as monetization of network services for the operator), both actors will benefit from cooperation. To illustrate the potential benefits of cooperation between the application overlay and the underlying ISPs, consider the following example. The bicycle race use case introduced earlier depends on the streaming of live audio-visual streams from multiple sources to many consumers. The efficiency of the delivery scheme depends on the popularity of the streams and the distribution of the consumers across the Internet. While low-popularity feeds can be delivered efficiently by unicast streaming techniques, the source node needs significant upload network capacity for this to scale to a large number of consumers. Overlay P2P swarming techniques reduce the need for high-bandwidth links at the source node, but they do not reduce the load on the underlying network, and can even increase traffic.
However, if the application overlay is able to invoke network services such as multicast or caching services through CINA, the load on the underlying network can be reduced significantly. In addition, less overhead is incurred for swarming control operations at the overlay. As an example, the Abilene core network topology (http://abilene.internet2.edu) of 11 nodes and 14 links was assumed for the ISP. Assuming equal link metrics, a stream bandwidth of S, a single source located at one of the Abilene nodes, and stream consumers at each of the other nodes, for unicast streaming the maximum link load ranges from 4S to 8S and the total traffic (sum of link loads) ranges from 21S to 30S, depending on the node to which the source is attached. In all cases the source requires an upload capacity of 10S. If the overlay uses a P2P swarming distribution scheme such as BitTorrent, the source node requires an upload capacity of no more than S, resulting in a total load on the network (sum of link loads) of approximately 26.8S when each peer retrieves an equal fraction of the stream from each of the others. If, however, the overlay is able to cooperate with the ISP through CINA and invoke network-layer multicast to distribute the stream to all consumers, the total load on the network is reduced to 11S, a reduction of 58.9 percent of the total traffic. If we assume there are even more consumers at each of Abilene's core routers, the total traffic on the ISP's network increases linearly with the average number of peers attached to each core router, while the load in the multicast case does not increase.
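The spirit of this comparison can be reproduced in a few lines of code. The sketch below (Python) assumes a plausible 11-node, 14-link Abilene-like link list; the exact figures quoted above depend on the real topology, the link metrics, and how the distribution tree is built, so the output should be read as indicative only.

    from collections import deque

    # Assumed Abilene-like core: 11 nodes, 14 links, equal metrics (illustrative).
    LINKS = [("SEA", "SNV"), ("SEA", "DEN"), ("SNV", "LAX"), ("SNV", "DEN"),
             ("LAX", "HOU"), ("DEN", "KCY"), ("KCY", "HOU"), ("KCY", "IPL"),
             ("HOU", "ATL"), ("IPL", "ATL"), ("IPL", "CHI"), ("CHI", "NYC"),
             ("ATL", "WAS"), ("NYC", "WAS")]
    NODES = sorted({n for link in LINKS for n in link})
    ADJ = {}
    for a, b in LINKS:
        ADJ.setdefault(a, []).append(b)
        ADJ.setdefault(b, []).append(a)

    def bfs_tree(src):
        """Shortest-path (BFS) predecessor tree rooted at src."""
        prev, queue = {src: None}, deque([src])
        while queue:
            u = queue.popleft()
            for v in ADJ[u]:
                if v not in prev:
                    prev[v] = u
                    queue.append(v)
        return prev

    def loads(src):
        """Unicast link loads and multicast tree size, in units of S."""
        prev, link_load = bfs_tree(src), {}
        for dst in NODES:
            u = dst
            while prev[u] is not None:            # walk dst -> src
                edge = frozenset((u, prev[u]))
                link_load[edge] = link_load.get(edge, 0) + 1
                u = prev[u]
        return sum(link_load.values()), max(link_load.values()), len(link_load)

    for src in NODES:
        total, peak, tree = loads(src)
        print(f"source {src}: unicast total {total}S, "
              f"peak link {peak}S, multicast total {tree}S")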
OVERALL FUNCTIONAL ARCHITECTURE In this section we describe the functional architecture defined to model the new interfaces and interactions required to enable the users, third parties, application providers, and ISPs to contribute and allocate resources in a dynamic and coordinated way, allowing cross-layer optimization between the independent application and network processes. The proposed architecture is built to define the boundaries of responsibility between the involved actors and the interactions required across these boundaries. The ISPs (lower layer blocks in Fig. 4) operate the network and provide Internet connectivity services at particular locations defining their network domains. Within their domains, the ISPs can provide enhanced network services, including multicast and prioritized traffic treatment, and they can operate application layer services such as content caching on behalf of the application, or for improving their particular traffic optimization objectives. On top of the network infrastructure, application providers (middle layer blocks in Fig. 4) operate their service infrastructure, which may span several locations and the network domains of many ISPs. Application providers may be assisted by third-party providers who operate additional infrastructure at additional locations or provide specialized added-value services, offered to
application providers under flexible agreements. Finally, the end users (upper layer block in Fig. 4) access the applications and offer their resources to enhance the application infrastructure in a dynamic way, similar to third-party providers but on a smaller scale and possibly under the full control of the application providers. The rest of this section describes the blocks and interfaces that model the functionality and the interactions between these actors. The interactions between the network and the application providers in particular (interfaces between the middle and bottom layer blocks) define CINA.
Figure 4. Functional architecture.
The first functional block, end-user application management, models the functionality at the end user. It includes functions for:
• Content generation, consumption, search, and so on
• Data flow handling (e.g., transmission, reception, synchronization)
• Interest and profile management
• Providing QoE feedback to the application
At the application level, the overlay management block includes the application optimization logic, and communicates with services control to dynamically invoke services and resources where they are required, and with data management, which is responsible for maintaining up-to-date information about the network and the application. The data management block collects, consolidates, maintains, and provides network and application level information. It allows the application to access the information provided by ISPs regarding network performance and capabilities and the network and application services they may provide, and it allows the ISPs to access application information regarding the traffic demand and quality requirements at particular locations.
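As a toy illustration of the consolidation task, cost maps collected from several ISPs on different scales could be harmonized as below. The data shapes and the normalization rule are our own assumptions; the architecture does not prescribe either.

    def harmonize(cost_maps):
        """cost_maps: {isp: {(src_pid, dst_pid): cost}} -> unified map in [0, 1]."""
        unified = {}
        for isp, cmap in cost_maps.items():
            top = max(cmap.values()) or 1          # guard against an all-zero map
            for pids, cost in cmap.items():
                unified.setdefault(pids, []).append(cost / top)  # per-ISP scaling
        return {pids: sum(v) / len(v) for pids, v in unified.items()}

    maps = {"isp-a": {("p1", "p2"): 10, ("p2", "p1"): 30},
            "isp-b": {("p1", "p2"): 0.2}}
    print(harmonize(maps))  # ('p1', 'p2') -> ~0.67, ('p2', 'p1') -> 1.0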
The services control block manages the basic functions and value-added services that build the application overlay following the instructions from the overlay management block. Such basic services include data uploading to multiple receivers, content adaptation, caching, content personalization like picture-in-picture, ad placement, and so on. These services could be provided using dedicated application provider equipment, by third-party infrastructure or service providers, or by the end users themselves, contributing their own resources to the application. The overlay management block implements the overlay optimization algorithms, taking into account information regarding the particular content characteristics and adaptation options, the end-user access means and QoE requirements, the available network resources and offered network services (e.g., multicast) and the available application services (caching, content adaptation, etc.). This block decides which QoE is sustainable for the users and how to allocate resources to their service requests. It dynamically invokes resources and services, requesting multicast transmission within a particular ISP, performing content adaptation for a set of end users, activating a set of cache servers, and so on. Finally, it interacts with the underlying ISPs to coordinate the use of the network resources in a mutually beneficial way (e.g., by reducing the data rate for a specified region under congestion conditions) or to indicate where caching resources could be allocated to reduce the load on the network. At the network level, similar blocks model the network optimization, service control, and data management functionality. In particular, the network data management block maintains information about the network topology, offered
network services, ISP policies and preferences to be communicated to the application, the status of the network resources, the current network performance, and so on. This block interacts with the application to provide information regarding the particular network domain to the data management block. The network services control block operates the available network services, including multicast, caches, ad insertion, and so on. These services are invoked by the network management block to optimize the data distribution, either in response to explicit requests received from overlay management, or independently to meet the ISP's own optimization objectives. The network management block is responsible for the management of the network resources within the ISP network domain. It receives information about the network from network data management, information about the traffic demand and quality requirements of different applications from data management, explicit requests for the allocation of resources or invocation of services from overlay management, and finally, feedback regarding the performance of network services from network services control. Based on this information, the network management block determines the appropriate allocation of resources and invocation of network services, and specifies the preferences that are communicated to the overlay application. Finally, the interactions between different actors create security considerations that necessitate the introduction of authentication, authorization, and accounting (AAA) functionality. The overlay AAA block handles the authentication of users joining the overlay, ISPs cooperating with the overlay, and potentially third-party service providers. The network AAA block enables the authentication of overlays cooperating with the ISP. Both blocks offer standard AAA functions, such as accounting facilities, access authorization, and profile management with security mechanisms, and can be controlled by the overlay management and network management blocks, respectively.
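For illustration, a service request from an overlay might pass through the network AAA and policy checks before reaching network services control, along the lines of the sketch below. All names, the token store, and the agreement store are invented placeholders; CINA does not yet define these operations.

    AGREEMENTS = {"overlay-tv": {"multicast", "caching"}}  # services per contract
    TOKENS = {"overlay-tv": "s3cr3t"}                      # hypothetical credentials

    def request_service(overlay_id, token, service, params):
        if TOKENS.get(overlay_id) != token:
            return "rejected: authentication failed"       # network AAA block
        if service not in AGREEMENTS.get(overlay_id, set()):
            return "rejected: not covered by agreement"    # policy check
        # The final decision stays with the operator (resource availability, etc.).
        return f"accepted: activating {service} with {params}"

    print(request_service("overlay-tv", "s3cr3t", "multicast", {"region": "south"}))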
RELATED WORK Overlay applications are currently agnostic of the underlying network infrastructure and thus perform end-to-end measurements to gain some knowledge of it [3]; but as this is not done in cooperation with the ISP, it can undermine the routing policies of ISPs [4, 5]. To avoid this, several initiatives have promoted cooperation between overlay applications and underlying networks. The P4P initiative [6] and later the IETF ALTO working group have investigated how overlay networks and ISPs can cooperate to optimize traffic generated by P2P applications and transported over the ISP's infrastructure. In their approach the ISP is able to indicate preferences on which peers should exchange data. Our solution is related to ALTO; however, it proposes a much richer interface that will allow true cross-layer cooperation, in terms of both information exchange and the possibility for overlays to dynamically request ISPs' network services. Since our approach can be
seen as an extension or evolution of ALTO, we investigate whether solutions defined in this group can be applied to ours. Typically, the use of a RESTful HTTP-based interface that uses JSON encoding is under consideration. Some research work, such as that in [7, 8], aims at building a P2P framework or a delivery platform for live TV. In contrast to our approach, these works do not consider multi-viewpoint applications or highly interactive applications between participants in the network; also, [7] does not deal with content adaptation. Other research has studied P2P support in the context of massively multiparticipant environments, such as [9] for virtual environments and [10] for 3D streaming. Those applications are close to the ones we focus on, but in these solutions the volume of the content is rather small compared to the HD video from a potentially large number of sources that we address in our approach. Nor is the dynamic activation of network services via cooperation between the overlay and ISPs for improving QoE taken into consideration. The definition of interfaces between the underlying network and the application or control level has been investigated for some years now, and some solutions are deployed, such as the well-known IMS [1]. IMS defines an interface between the control entity and the network entities. However, this interface is still closed and only usable by the ISPs in a walled-garden fashion. Also, the possible network services that might be dynamically activated are limited. In another initiative, the Parlay/OSA framework was created some years ago for telecommunications networks and telecom services (call control, call redirection, etc.), and has more recently been adapted for the Internet (web services) with the Parlay/X specification [2]. The objectives of this group were similar to ours; however, the defined interfaces depend on the network service to be activated and do not permit the exchange of dynamic information between the network and the application (in both directions), and the supported services are limited to more traditional telecom applications rather than the media applications we aim to support. Furthermore, those two initiatives do not really address the issues of the massively multiparticipant distributed applications we envision.
CONCLUSIONS AND FUTURE WORK In this article we have presented a new architecture, fostering cooperation between overlay applications and ISPs for optimized delivery of services to end users. Overlay algorithms are optimized thanks to information provided by ISPs, and the delivery QoE is further improved via the activation of network services provided by ISPs. While the approach presented in this article has been developed to support interactive, multiparty, high-capacity media applications such as those presented earlier, the architecture, the interface, and the principles of cross-layer cooperation can also benefit existing applications. Content distribution networks (CDNs) providing video on demand, for example, could have
greater awareness of network capabilities and also make use of other network services provided by CINA, such as capacity reservation for background distribution of content to CDN nodes. Ongoing work is focused on the internal functions of each block, including the development of overlay optimization algorithms, source selection algorithms according to context information, dynamic activation logic for network services such as multicast (and multi-ISP multicast), and caching/adaptation functions. Finally, evaluation through both simulation and testbed deployments is underway.
ACKNOWLEDGMENTS This work was supported by the ENVISION project (http://www.envision-project.org), a research project partially funded by the European Union’s 7th Framework Program (contract no. 248565). The authors wish to thank all project participants for their valuable comments and contributions to the work described in this article.
REFERENCES
[1] 3GPP TS 23.228, “IP Multimedia Subsystem (IMS); Stage 2 (Release 7)”; http://www.3gpp.org.
[2] OSA/Parlay X, “3GPP TS 29.199 Release 7 Specifications,” Sept. 2007.
[3] V. Gurbani et al., “A Survey of Research on the Application-Layer Traffic Optimization Problem and the Need for Layer Cooperation,” IEEE Commun. Mag., vol. 47, no. 8, 2009, pp. 107–12.
[4] R. Keralapura et al., “Can ISPs Take the Heat from Overlay Networks?,” ACM HotNets ’04, Nov. 15–16, 2004, San Diego, CA.
[5] T. Karagiannis, P. Rodriguez, and K. Papagiannaki, “Should Internet Service Providers Fear Peer-Assisted Content Distribution?,” Internet Measurement Conf., Oct. 19–21, 2005, Berkeley, CA.
[6] H. Xie et al., “P4P: Explicit Communications for Cooperative Control Between P2P and Network Providers,” http://www.dcia.info/documents/P4P_Overview.pdf.
[7] R. Fortuna et al., “QoE in Pull Based P2P-TV Systems: Overlay Topology Design Tradeoffs,” Proc. 10th Int’l. Conf. Peer-to-Peer Comp., Delft, The Netherlands, Aug. 2010.
[8] R. Jimenez, L. E. Eriksson, and B. Knutsson, “P2P-Next: Technical and Legal Challenges”; tslab.ssvl.kth.se.
[9] D. Frey et al., “Solipsis: A Decentralized Architecture for Virtual Environments,” 1st Int’l. Wksp. Massively Multiuser Virtual Environments, 2008.
[10] S.-Y. Hu et al., “FloD: A Framework for Peer-to-Peer 3D Streaming,” Proc. IEEE INFOCOM 2008.
BIOGRAPHIES BERTRAND MATHIEU [SM] (
[email protected]) has been a senior researcher at France Telecom, Orange Labs, since 1994. He received a Diploma of Engineering in Toulon, an M.Sc. degree from the University of
Marseille, and a Ph.D. degree from the University Pierre et Marie Curie, Paris. His research activities are related to dynamic overlay networks, P2P networks, and information-centric networking. He has contributed to several national and European projects, and published more than 30 papers in international conferences, journals, and books. He is a member of several conferences' Technical Program Committees and an SEE Senior Member. SELIM ELLOUZE received his M.Eng. degree in telecommunications from Ecole Nationale Supérieure d'Electronique Informatique et Radiocommunications de Bordeaux, Université Bordeaux 1. He is currently working toward a Ph.D. degree as a research engineer at Orange Labs, Lannion, France. His research interests include the evolution of control protocols for IP networks. NICO SCHWAN is a research engineer with the Bell Labs Service Infrastructure Research Team in Stuttgart, Germany. He is also an academic at the University of Cooperative Education Stuttgart. His current research interests are in the areas of content delivery networks, multimedia Internet overlay applications, and content-centric networking. He has a B.Sc. degree in applied computer science from the University of Cooperative Education Stuttgart. DAVID GRIFFIN is a principal research associate in the Department of Electronic and Electrical Engineering, University College London (UCL). He has a B.Sc. from Loughborough University and a Ph.D. from UCL, both in electrical engineering. His research interests are in planning, management, and dynamic control for providing QoS in multiservice networks, P2P networking, and novel routing paradigms for the future Internet.
ELENI MYKONIATI (
[email protected]) received a B.Sc. in computer science from Piraeus University, Greece, in 1996 and a Ph.D. degree from the National Technical University of Athens (NTUA), Greece, in 2003. She has worked as a research associate for Telscom S.A., Switzerland, the NTUA DB Lab and Telecom Lab, and Algonet S.A., Greece. Since 2007 she has been a senior research associate in the Department of Electronic and Electrical Engineering at University College London. Her research interests include business-driven traffic engineering in IP networks, QoS, and peer-to-peer networking. TOUFIK AHMED is a professor at the ENSEIRB-MATMECA School of Engineers at the Institut Polytechnique de Bordeaux (IPB), performing research activities in the CNRS LaBRI Lab (UMR 5800) at University Bordeaux 1. His main research activities concern QoS management and provisioning for multimedia wired and wireless networks, media streaming over P2P networks, cross-layer optimization, and end-to-end QoS signaling protocols. He has also worked on a number of national and international projects. He serves as a TPC member for international conferences including IEEE ICC, IEEE GLOBECOM, and IEEE WCNC. He is currently team leader of the COMET group at LaBRI Lab. ORIOL RIBERA PRATS is a business consultant specializing in quantitative market analysis at the Telefonica R&D Center in Barcelona, Spain. He received his Telecommunications Engineer degree from the Universitat Politecnica of Catalonia and has had a consolidated career of ten years as a product manager in the mobile Internet industry, beginning at Nokia, continuing with co-founding Genaker (a technology start-up), and now at Telefonica.
FUTURE MEDIA INTERNET
System Architecture for Enriched Semantic Personalized Media Search and Retrieval in the Future Media Internet María Alduán, Faustino Sánchez, Federico Álvarez, David Jiménez, and José Manuel Menéndez, Universidad Politécnica de Madrid Carolina Cebrecos, Indra Sistemas S.A.
ABSTRACT This article describes a novel system and its architecture to handle, process, deliver, personalize, and find digital media, based on continuous enrichment of the media objects through the intrinsic operation of a content-oriented architecture. Our system and its architecture provide a solution that enhances delivery, sharing, experience, and exchange by providing the methods to semantically describe contents with a multilingual-multimedia-multidomain ontology; annotate the content against this ontology; process the content and adapt it to the network and the network status, taking into account the user behavior and the user terminal device that is to consume the content; and enrich the content at each additional iteration or process, over a content-oriented architecture based on standardized interfaces. The article presents the architecture, modules, functionalities, and procedures, including the application of the system model to future media Internet concepts for content-oriented networks.
INTRODUCTION The future media Internet is a research area that aims to improve the mechanisms by which users communicate over the Internet, and their experience of doing so. The media Internet supports professional and novice content producers, and is at the crossroads of digital multimedia content and Internet technologies. It encompasses two main aspects: media being delivered through Internet networking technologies (including hybrid technologies), and media being generated, consumed, shared, and experienced on the web. Our system and its architecture provide a solution that enhances delivery, sharing, experience, and exchange by providing methods to:
• Semantically describe contents with a multilingual-multimedia-multidomain ontology
• Annotate the content against this ontology
• Process the content and adapt it to the network and the network status, user behavior, and the terminal that is to consume the content
• Enrich the content in every process and iteration
• Operate over a content-oriented network architecture based on standardized interfaces
The system offers several advantages compared to others, as it is able to enrich the content and metadata in every process, improving the search, retrieval, and content distribution operations simultaneously; improve the personalization of the results to the users' profiles; and take advantage of content-oriented network architectures to work with content as a native information piece. This system architecture aims to improve the logical process of semantic audiovisual content search, the development of new technologies for the future media Internet, and the integration of a broad spectrum of high-quality services for innovative search and retrieval applications. The major novelty of the system lies in the creation of an architecture consisting of different modules around a media component (the multimedia component) that joins together the information needed to perform the proposed operations (semantic search, automatic selection, composition of media, etc.), dealing with both the user context and preferences and smart content adaptation for existing network architectures. Within the scope of this media component are the data stored by all the processes involved in the semantic search (such as low- and high-level descriptors from media inputs), a vector repository of user queries, and a variety of user-related data as well. The outcomes may have an extremely high impact on users, producers, and content creators, who will be able to exploit the given capabilities for new business models in the future media Internet.
Figure 1. Global system architecture.
The remainder of the article is organized as follows. An overview of the related work is provided in the next section. We then describe the system architecture in detail. We explain the application of the architecture to the future media Internet, and finally, some conclusions are given.
RELATED WORK Several systems have been proposed in the past that can partially solve the issues of media processing, handling, distribution, and so on, such as media asset management systems or media delivery platforms. Among the new architecture proposals that go beyond the classical concepts of networked media is [1], which proposes a new architecture for content-centric networks based on named content/content chunks instead of named hosts. This new approach decouples content routing/forwarding, location, security, and content access, departing from IP in a critical way. The new trend focuses on the content instead of the hosts where it is stored; in this way, routing of content is achieved by name, not by host address [2–4]. Regarding media content production, the PISA project [5] uses metadata standards and data models to construct a coherently integrated production platform. The different types of metadata are exchanged between the parts of the system, which makes the implementation of an entire production workflow feasible, providing seamless integration between different components. Finally, the Content-Aware Searching,
Retrieval and Streaming (COAST) project [6] proposes a future content-centric network (FCN) overlay architecture able to intelligently and efficiently link billions of content sources to billions of content consumers, and to offer fast content-aware retrieval, delivery, and streaming. This related work offers some solutions for the management, storage, transmission, or routing of media, but it does not offer a framework that can be adapted to different content-oriented networks, is able to enrich the media and metadata, improves the result of the processes at every step, and takes a user-centric approach to the processes (where personalization is applied to the recommendation of media objects, and adaptation to the user context and behavior).
SYSTEM ARCHITECTURE The main target of the system design is to handle, process, deliver, personalize, and find digital media, based on continuous enrichment in a content-centric architecture, by means of a threedimensional ontology, and to adapt multimedia content to any network technology, device, language, context, or user (professional or not). For this reason, the system consists of the following modules, as depicted in Fig. 1: annotation, ontology, search, personalization, automatic content generation, and context adaptation.
SYSTEM MODULES AND FUNCTIONALITIES The system and its architecture are composed of different modules, which perform the aforementioned operations cooperating with each other, and with external systems and user inputs.
Figure 2. Multidimensional ontology.
The annotation module's main functionality is to allow the classification of multimedia objects (images, video, audio, text, or a combination of these), which belong to a so-called Universe of Contents, by automatically or semi-automatically providing descriptive metadata to improve the knowledge about the object. The resulting metadata can be divided into two levels: low and high. Low-level metadata describe the nature of the content itself (e.g., in an audio track, the loudness), whereas high-level metadata define the properties of the content extracted after specific processing (e.g., the music genre played in the audio track). High-level metadata are enriched by the ontology to obtain a semantic description (e.g., the audio track is a rock song). If the annotation is made over each media type (text, audio, images, or video content) separately, it can be ambiguous, and the metadata precision is reduced. This disadvantage can be minimized by using the existing synergies between the different types of information in media content. To avoid these problems, the annotation module implements multimodal fusion. Thus, the results of the annotation process turn out to be enriched, more robust, and more accurate. After metadata extraction, the annotation module performs an indexation of the semantically enriched content so it can be accessed by other modules. The purpose of the ontology is to create formal models to represent the multimedia resources, taking into account the multidomain, multimedia, and multilanguage dimensions (Fig. 2). The ontology provides the system with a semantic nature, allowing it to store higher-level features that help to bridge the semantic gap. The search module includes the search and retrieval tasks, using indexed multimedia objects. This module responds to search queries of any media nature: text, audio, video, images, or a combination of them. These queries undergo different processing, using the 3D ontology and different semantic processes, depending on the individual search technique to be applied: over non-structured text or the ontology, based on specific questions, or using query-by-example techniques (video and audio). Query-by-example techniques allow image-to-image or audio-to-audio search rather than classic text (metadata) searches. Other functionalities of the search module are the interpretation of multilingual queries by voice,
and interactive and hybrid (combination of text and a textual representation of any multimedia element) search. As the objective is to refine the search results to get the most suitable content according to user expectations, the main target of the personalization module is to adapt the multimedia content to the specific needs and contexts of users. To achieve this, the module generates user profiles that are enriched both with information explicitly supplied by the users and by modeling their behavior in the system. Another functionality of this module is to measure the quality of the user experience, which is also used to enrich the generated user profiles. The personalization module has hybrid recommendation engines that combine content-based filtering techniques and collaborative filtering algorithms to increase efficiency and solve typical problems derived from the usage of pure filtering techniques. Therefore, the recommendations performed by this module present a great advantage: they consider not only the user profiles and their context, but also the semantic information extracted from the multimedia content. With the aim of offering an audiovisual summary of a group of multimedia objects that matches the users' interests, the content generation module automatically generates an audiovisual narrative. For this purpose, the characteristics of narrative discourses in the knowledge domain are analyzed, and adequate syntactic patterns are established. The efficiency of this audiovisual compendium resides in good content selection and correct interpretation of the content; it also depends on syntactic coherence according to the synopsis's purpose. The adaptation module's target is to provide multimedia objects with a mechanism to adapt the content format. This module adapts the formats of the original objects to create audiovisual summaries with a common format. Besides, the module plays an important role within the whole system due to the fact that content coding is normally network- and/or device-dependent. In order to get higher-quality final results, the previous modules have to work in a cooperative manner. The element in charge of linking them is the multimedia component, which makes possible the interoperability between the modules to enrich the multimedia objects.
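A minimal sketch of the hybrid ranking idea follows (Python). The blending weight, scoring functions, and data shapes are invented for illustration and are not taken from the system design.

    def jaccard(a, b):
        """Overlap between two tag sets, in [0, 1]."""
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 0.0

    def hybrid_score(obj, user, alpha=0.6):
        """Blend content-based and collaborative evidence; alpha is a design knob."""
        content = jaccard(obj["tags"], user["interests"])           # content-based
        collab = user["neighbour_ratings"].get(obj["id"], 0) / 5.0  # collaborative
        return alpha * content + (1 - alpha) * collab

    user = {"interests": {"cycling", "live", "hd"},
            "neighbour_ratings": {"clip-42": 4}}                    # ratings out of 5
    objects = [{"id": "clip-42", "tags": {"cycling", "alps"}},
               {"id": "clip-07", "tags": {"cooking"}}]
    ranked = sorted(objects, key=lambda o: hybrid_score(o, user), reverse=True)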
THE MULTIMEDIA COMPONENT The multimedia component, as can be seen in Fig. 3, is composed of four different metadata abstraction layers (identification layer, technical metadata layer, descriptive metadata layer, and functional metadata layer) to provide flexibility in the communications between the different modules of the system. Each of these layers is related to one or several modules. The objective of this layered design in the multimedia component is to provide the system with a multidimensional index, which allows complex queries with the multimedia component. The identification layer’s main target is to give a single identifier to each multimedia object. This identifier must be persistent since the object information, including location, may change over time. The structure design of the multimedia
component identifiers is based on the Digital Object Identifier (DOI) System [7], which is persistent, unique, resolvable, interoperable, and very useful for the management of content over digital networks in an automated and controlled way. The main advantages of DOI names for the system are, on one hand, the difference from commonly used Internet pointers to material, such as the URL (because they identify an object as a first-class entity, not only the place where the object is located), and, on the other hand, that a DOI is not intended as a replacement for other identifier schemes, although other recognized identifiers may be integrated into the DOI name syntax. The technical metadata layer contains metadata that can be directly extracted from the multimedia objects since they are related to production features (format, bit rate, etc.). These metadata become especially useful when transmitting the content through the network and adapting it to the user terminal or preferences. The descriptive metadata layer stores two different types of metadata: structural and semantic. The structural metadata describe the spatial and temporal elements of the multimedia objects, and the semantic metadata give a high-level description of multimedia elements. The definition of both metadata types is based on the MPEG-7 standard [8]. The annotation module fills in the descriptive metadata layer and uses the ontology to extract the semantic metadata of the multimedia objects. The collection of these descriptors is used by the functional blocks (searching, personalization, generation) of the system to complete their individual applications. The fourth and highest layer of the multimedia component is the functional metadata layer. It is composed of high-level metadata generated in the functional modules, which are classified in three different categories: narrative metadata, syntax metadata, and affinity metadata. The narrative metadata express narrative characteristics of certain multimedia objects, which are deduced from the descriptive metadata layer. Narrative metadata include accurate descriptions of the way to articulate a discourse or storytelling through text or a group of pictures. These metadata are composed of items such as the narrative facts, characters, narrative space, narrative time, communicative purpose, or edition (in an audiovisual object). On the other hand, syntax metadata define how to show the narrative information; for example, in a video object, the number of shots inside a scene, the shot type (close-up, medium shot, long shot), or the style of the scene transitions (cut to, fade in, fade out, cross fade, or other effects). Finally, affinity metadata are generated to model the subjective perception of a user. These types of metadata come from the manipulation of the descriptive metadata, and they have the objective of establishing affinity relations between users and multimedia objects. For instance, they include the measurement of the inherent geometry of a picture or some composition rules that can be influenced by the perception of a user.
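The layered structure can be pictured as a simple data record. The rendering below (Python) is only one possible sketch; the field names are our assumptions, while the layers themselves follow the description above.

    from dataclasses import dataclass, field

    @dataclass
    class MultimediaComponent:
        doi: str                                        # identification layer
        technical: dict = field(default_factory=dict)   # format, bit rate, ...
        structural: dict = field(default_factory=dict)  # spatial/temporal (MPEG-7)
        semantic: dict = field(default_factory=dict)    # ontology-enriched descriptions
        narrative: dict = field(default_factory=dict)   # functional: storytelling items
        syntax: dict = field(default_factory=dict)      # functional: shots, transitions
        affinity: dict = field(default_factory=dict)    # functional: user affinity

    clip = MultimediaComponent(
        doi="10.9999/example-clip",                     # hypothetical DOI name
        technical={"format": "H.264/SVC", "bitrate_mbps": 4.5},
        semantic={"genre": "sports", "subject": "bicycle race"})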
Figure 3. Multimedia component layer-structure.
In the present system design, the multimedia component is the interoperable main core of the architecture that allows the multimedia objects to be continuously enriched. For this reason, every module within the system works not only for its specific application, but also for the global objective of the system, which becomes a wide-ranging tool. In this way, the content generation module uses the second-layer descriptive metadata for its own tasks of generation and production of new audiovisual objects. Throughout these processes, which are necessary to make possible the creation of original compositions, new metadata are generated so that the objects' narrative and syntax are learned. After this step, the new metadata, obtained in the modeling procedure, are stored in the fourth multimedia component abstraction layer in order to be used by other system tasks. With these new available data, for example, the search module will be able to locate resources based on narrative or syntactic information, and users will have the possibility to use this information to refine the filtering of the provided results. On the other hand, the personalization module recommendation engines can take into account the particular affinity of users with particular ways of telling stories or ideas (narrative and syntax). The affinity elements are filled in by the personalization module, and they can be used, for example, by the content generation module to create personalized new audiovisual objects according to the style/taste of the user.
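Continuing the illustration with a standalone fragment (all field names and values invented), a narrative-aware lookup over the fourth layer could be as simple as:

    # Functional-layer records for two clips; the fields and values are invented.
    narrative_index = {
        "clip-42": {"characters": {"leader", "peloton"}, "space": "mountain stage"},
        "clip-07": {"characters": {"chef"}, "space": "kitchen"},
    }

    def find_by_narrative(character):
        """Locate resources from narrative information rather than plain text."""
        return [obj for obj, meta in narrative_index.items()
                if character in meta["characters"]]

    print(find_by_narrative("leader"))  # ['clip-42']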
USE OF THE ARCHITECTURE FOR PERSONALIZED SEARCH AND RETRIEVAL OF MULTIMEDIA OBJECTS

Due to the flexibility of the proposed architecture, there is a large number of use cases in which different system modules can be combined to achieve a common goal; besides, these modules offer additional independent applications. Figure 4 presents the search and retrieval of multimedia objects as an example of
system functionality. As depicted in the figure, the search module is not the only one taking part in this process, and the search is enriched along the way. As can be observed in Fig. 4, first the user sends a query to the system by inputting text, image, video, audio, or a combination of these. These input multimedia objects are semantically enriched by the ontology, which takes into account its three dimensions (multilanguage, multidomain, and multimedia). Depending on the nature of the user query, the annotation module applies the most suitable metadata extraction procedure; then the search module uses the generated metadata and, according to their nature, starts a specific search process over the multidimensional index stored in the multimedia component.
Figure 4. Use case example: personalized search and retrieval (query sending; annotation and metadata extraction; optional query-by-example handling and ontology enrichment; search over the multimedia component; personalization; content generation; adaptation; optional refinement; distribution).
Depending on the user search target (the nature of the obtained metadata and the user preferences), the process looks in the suitable multimedia component layer in order to retrieve the multimedia elements that match the user search. Thus, the results of the process are the identifiers of the objects found by the search methods (metadata and query-by-example techniques) and a weight factor that indicates their potential importance.

Nevertheless, the process is not finished at this stage. The rest of the modules within the system allow the enrichment of the search, as well as a significant increase in its accuracy. The results of the search module are sent to the personalization module, which combines the information of a multimedia object (from the multimedia component) with the user information. In this way, the recommendation engines modify and adapt the weight accuracy index provided by the search module to the user preferences. These engines sort the results depending on the user preferences and the search context (query location, time, etc.). As the recommendation engines have been designed to exploit all available information, the input data do not always need to be the same. Besides, the personalization module enriches and stores the metadata in the multimedia component in a structured way. The end-to-end personalization process is shown in Fig. 5.

Figure 5. Use case example: detailed personalization process in search and retrieval (context analysis and enrichment via the context ontology, user preference and search result analysis against the user profile and the multimedia component, recommendation, QoE measurement, and user profile enrichment).

At this stage, the system has a list of the multimedia objects related to the user search query and his/her preferences. This list may be adapted to user permissions depending on the scenario. In the next step, the content generation module enriches both the results and the multimedia objects. This module adds a new piece of content to the previous results, automatically generated using the user preferences and considering the desired narrative and syntactic characteristics. This process is performed so as to generate coherent pieces of audiovisual content. The newly generated resource can be seen as a multimedia summary of the obtained results that emphasizes the user's interests. Finally, the adaptation module distributes the results to the user, allowing him/her to consume the audiovisual content regardless of the current location, network, or device.

The first part of the overall process ends with the distribution stage. However, it is possible to perform iterative queries to refine the results or help the user extend the search. In this iterative procedure, the same modules participate again. A point of major importance is the synchronization of the different processes. For this reason, a handler process has been included in order to lower the latencies and manage the synchronization. As some modules can have higher latencies, such as the content generation module, which manages a large number of audiovisual sources, the handler process may offer the user a version of the results without the automatic summarization. In this case, the user is informed about the possibility of receiving a synopsis when available.
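As a rough, hypothetical illustration of this re-ranking step, the sketch below scales each result's weight factor by a user/context affinity score and re-sorts the list. The affinity function is a stub, and none of the names below come from the actual system.

```java
import java.util.Comparator;
import java.util.List;

// Hypothetical personalization sketch: adapt the search module's weight
// factor to the user profile and search context, then sort.
class RankedObject {
    String id;           // multimedia object identifier
    double searchWeight; // weight factor supplied by the search module

    RankedObject(String id, double weight) {
        this.id = id;
        this.searchWeight = weight;
    }
}

class Personalizer {
    // Stub for the recommendation engines: returns a score in [0, 1]
    // reflecting how well the object matches the user profile and the
    // search context (query location, time, etc.).
    double affinity(String objectId, String userId, String context) {
        return 0.5; // a real engine would consult the stored metadata
    }

    void personalize(List<RankedObject> results, String userId, String context) {
        for (RankedObject r : results) {
            r.searchWeight *= affinity(r.id, userId, context);
        }
        results.sort(Comparator
                .comparingDouble((RankedObject r) -> r.searchWeight)
                .reversed());
    }
}
```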
APPLICATION OF THE ARCHITECTURE TO THE FUTURE MEDIA INTERNET

Internet evolution includes new paradigms to offer better content delivery services to users. Users give more value to the content, while the Internet, due to its original conception, gives more value to the location of that content. For this reason, a major change is needed in the orientation of the Internet of Contents: from where to what. This new paradigm is called the content-oriented network (CON), which addresses the basic need of Internet design to cope with content as a native element. The system proposed in this article is intended to work over a CON in a possible design of the future media Internet. A CON implementation will allow the proposed system to provide all the functionalities associated with the previously described modules at the network level, avoiding the development of ad hoc solutions to obtain the same results.

Providing media content with searchable and accessible capabilities (metadata generation and structuring) is one of the major challenges of the future Internet. The proposed system becomes a powerful solution due to the architecture and functionalities of the described modules, which support not only the content-centric approach but also the user-centric approach followed in the personalization and content generation modules. In addition, the proposed multimedia component provides an enriched description of objects thanks to the layered metadata structure, which is continuously enriched in every process. This layered metadata structure adds the necessary interaction and is perfectly adaptable to the concept of a CON.

In order to implement our system over a CON, modules can be allocated in the network cloud. The multimedia component is linked to specific nodes called content nodes, which allow the network to access and route both media essence and metadata information. Figure 6 depicts a high-level approximation of the system configuration over a CON. The approximation we followed is based on a CON search, with the advantage of being protocol agnostic (messages, naming, etc.) and adaptable to some of the content-centric networking (CCN) protocols already proposed [1].

Figure 6. The system over a content-oriented network: annotation, generation, and personalization services attached to CON routers, and content nodes hosting the multimedia component (MC); the numbers mark the steps of the search process.

The multimedia search process in Fig. 6 consists of six steps:

1) A user starts a search query composed of one or more multimedia objects (text, image, video, or audio).

2) These multimedia objects are annotated by the annotation module service. The result of the annotation service is a metadata vector with the features of each multimedia object. This vector forms a new search query.

3) This metadata query is flooded to the CON and reaches every content node. When a content node receives a metadata search query, it performs search algorithms over the multimedia component to find the objects that satisfy the user query. The content node returns, as a reply, the metadata of the list of matching multimedia objects.
4) The CON router sends this reply to the content generation module service. First, the content generation module creates a discursive or narrative structure (depending on the communicative purpose) using only the descriptive and functional metadata of the objects, received as input. Second, once the narrative structure has been created and it has been decided which multimedia objects are necessary to realize it, the generation module issues a query to fetch only the objects needed for the final edition.

5) In parallel to step 4, the CON router generates a request to the personalization module to obtain a list of recommendations over the search results. This module does not need to work with the original multimedia objects; it is enough to exploit the descriptive and functional metadata, stored in the multimedia
component, and the user information, stored in the user profiles. The personalization module implements a hybrid recommender (content-based and social), which is possible because of the presence of the multimedia component and the CON architecture. The multimedia component allows the existence of a sophisticated content-based recommender module, because a large amount of information is available for every multimedia element. The CON contributes knowledge of users' consumption of multimedia objects, since these objects can be unambiguously identified. This knowledge allows the development of automatic and transparent recommendation algorithms based on social techniques such as collaborative filtering. Content-based and social-techniques-based algorithms together make up the hybrid recommender system.

6) The final recommended results and the automatic summary are sent back to the user.

The search process described above is supported by prior classification procedures, performed at the content nodes, which populate the features collected in the multimedia component. Metadata are enriched dynamically by the different modules, as described throughout this article.
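Step 5's hybrid recommendation can be illustrated with a minimal sketch. We assume a simple linear blend of the two scores; the blend, its weight, and all names below are our own illustration, since the article does not specify how the content-based and social scores are combined.

```java
// Hypothetical hybrid recommender sketch: blend a content-based score
// (derived from descriptor-vector distance) with a collaborative score
// derived from CCN-observed consumption of unambiguously named objects.
class HybridRecommender {
    double alpha = 0.6; // illustrative blend weight, not from the article

    // Content-based score: closer descriptor vectors yield higher scores.
    double contentScore(double[] objectDesc, double[] profileDesc) {
        double d2 = 0.0;
        for (int i = 0; i < objectDesc.length; i++) {
            double diff = objectDesc[i] - profileDesc[i];
            d2 += diff * diff;
        }
        return 1.0 / (1.0 + Math.sqrt(d2)); // map a distance into (0, 1]
    }

    // collaborativeScore would come from social techniques such as
    // collaborative filtering over per-object consumption observations.
    double hybridScore(double[] objectDesc, double[] profileDesc,
                       double collaborativeScore) {
        return alpha * contentScore(objectDesc, profileDesc)
                + (1.0 - alpha) * collaborativeScore;
    }
}
```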
CONCLUSION

A system architecture for enriched semantic personalized media search and retrieval, adapted to future media Internet concepts, has been described. The system provides a useful solution for the future media Internet, applicable to both evolutionary and purely content-oriented networks, and able to handle, process, deliver, personalize, find, and retrieve digital media. The advantages of our system go further with the provision of a complete framework to deal with media objects in a smarter way, by proposing a modular architecture centered on the multimedia component that is able to enrich the content dynamically and improve the performance of the media processes described. Another meaningful feature relevant to the future media Internet is the user-centric approach. Our system offers an integral solution that involves users from the beginning and is able to personalize media assets according to their needs, tastes, and preferences.

ACKNOWLEDGMENTS

This publication is based on work performed in the framework of the Spanish national project BUSCAMEDIA (CEN-20091026), which is partially funded by the CDTI (Ministry for Science and Innovation), and the project AMURA (TEC2009-14219-C03-01). The authors would like to acknowledge the contributions of colleagues from the partners of the BUSCAMEDIA project.

REFERENCES

[1] V. Jacobson et al., "Networking Named Content," Proc. 5th ACM CoNEXT '09, Rome, Italy, Dec. 1–4, 2009, pp. 1–12.
[2] T. Koponen et al., "A Data-Oriented (and Beyond) Network Architecture," ACM SIGCOMM, 2007.
[3] D. Lagutin, K. Visala, and S. Tarkoma, "Publish/Subscribe for Internet: PSIRP Perspective," Towards the Future Internet — A European Research Perspective, G. Tselentis et al., Eds., IOS Press, 2010, pp. 75–85.
[4] M. Caesar et al., "ROFL: Routing on Flat Labels," ACM SIGCOMM, 2006.
[5] D. V. Rijsselbergen et al., "How Metadata Enables Enriched File-Based Production Workflows," SMPTE Motion Imaging J., May/June 2010.
[6] COAST Consortium, "End-to-End Future Content Network Specification"; http://www.coast-fp7.eu/public/COAST_D2.2_BM_FF_20100825.pdf
[7] N. Paskin, "Digital Object Identifier (DOI) System," Encyclopedia of Library and Information Sciences, 3rd ed.
[8] MPEG-7 Standard, "ISO/IEC 15938-3:2002, Information Technology — Multimedia Content Description Interface — Part 3: Visual."
BIOGRAPHIES

MARIA ALDUAN MELLADO ([email protected]) received her computer engineering degree in 2009 from Centro Politécnico Superior, Universidad de Zaragoza. Since 2009 she has been a Ph.D. candidate and research assistant with the Visual Telecommunications Application Group in the Signals, Systems and Radio Communications Department of E.T.S. Ingenieros de Telecomunicación, Universidad Politécnica de Madrid, where she has been collaborating since 2008. She is currently leading the architecture research work in the national project BUSCAMEDIA. She has several publications in national and international conferences.

FAUSTINO SANCHEZ ([email protected]) received his telecom engineer degree (Hons.) in 2008 and telecom systems Master's degree (Hons.) in 2010, both from the Universidad Politécnica de Madrid. Since 2007 he has worked for the Visual Telecommunications Applications Group (G@TV) of Universidad Politécnica de Madrid, where he is currently a Ph.D. candidate. His professional interests include interactivity technologies, audience measurement techniques, user behavior modeling, and recommendation systems. In these areas he has participated with technical responsibilities in several national projects, and he is author and coauthor of several papers and scientific contributions in international conferences and journals.

FEDERICO ALVAREZ [M'07] ([email protected]) received his telecom engineer degree (Hons.) in 2003 and Ph.D. degree (cum laude) in 2009, both from Universidad Politécnica de Madrid, where he is currently an assistant professor. Since 2003 he has worked for the G@TV research group of Universidad Politécnica de Madrid. He has participated with different managerial and technical responsibilities in several national and EU projects, being the coordinator of ARENA (IST-024124) and currently the coordinator of nextMEDIA (ICT-249065), and with a relevant role in projects such as SEA, AWISSENET, and SIMPLE. He has participated in national and international standardization fora (DVB, CENELEC TC206, etc.), is a member of the program committees of several scientific conferences, and is author and co-author of 30+ papers and several books, book chapters, and patents in the field of ICT networks and audiovisual technologies.

DAVID JIMÉNEZ ([email protected]) received the telecom engineer degree (Hons.) in 2004 from Universidad Politécnica de Madrid. Since 2002 he has been a member of the Signals, Systems and Radio Communications Department of E.T.S. Ingenieros de Telecomunicación, where he is currently a Ph.D. candidate. His professional interests include image processing, digital video broadcasting, coding, compression formats, and very high resolution and immersive TV. His Master's thesis was on the software emulation of DV format codecs, SMPTE 306M (D7), SMPTE 314M (DV), and SMPTE 316M (D9). He joined the European University Elite Program of Texas Instruments in order to develop a real-time DSP-based multiformat codec. He has chaired the Standardization Group within the Foro Español de Alta Definición promoted by the Ministerio de Industria, Turismo y Comercio. Currently he is working on visual quality assessment and quality of experience analysis.

JOSÉ MANUEL MENÉNDEZ ([email protected]) has been an associate professor (tenured) of signal theory and communications since 1996 at the Signals, Systems and Communications Department of E.T.S. Ingenieros de Telecomunicación, Universidad Politécnica de Madrid, and director of the Visual Telecommunication Application Research Group (G@TV) of the same university since 2004. He has more than 60 international publications on signal processing and communications, in both international journals and conferences, and many national publications, including a book (in Spanish) for the undergraduate engineering level. He has participated in or led more than 80 R+D projects, with both public (Spanish or European) and private funding. He has been a regular reviewer for the IEEE Signal Processing Society since 2000 for several journals and international conferences, and for IET Image Processing since 2009; he also collaborates with Spanish national and regional entities, as well as the European Commission, in the evaluation and review of R+D projects, and with several national telecommunication and broadcasting companies as a consultant.

CAROLINA CEBRECOS DEL CASTILLO ([email protected]) obtained her telecom engineer degree at Carlos III University, Madrid. Since 2008 she has worked for Indra, one of the largest Spanish companies focused on the research and development of new technologies. She works as a consultant and has participated in different research projects in the information technology field, mainly in the audiovisual sector, in both national and international environments. She has participated with different technical responsibilities in the CBDP European project and the InmersiveTV national project, among others. She is currently leading Indra's responsibilities in the BUSCAMEDIA project.
FUTURE MEDIA INTERNET
Automatic Creation of 3D Environments from a Single Sketch Using Content-Centric Networks Theodoros Semertzidis, Informatics and Telematics Institute and Aristotle University of Thessaloniki Petros Daras, Informatics and Telematics Institute Paul Moore, Atos Research & Innovation Lambros Makris, Informatics and Telematics Institute Michael G. Strintzis, Informatics and Telematics Institute and Aristotle University of Thessaloniki
ABSTRACT

In this article a complete and innovative system for the automatic creation of 3D environments from multimedia content available in the network is presented. The core application provides an interface where the user sketches in 2D the scene that s/he aims to build. The GUI application exploits the similarity search and retrieval capabilities of search-enabled content-centric networks (CCNs) to fetch 3D models that are similar to the drawn 2D objects. The retrieved 3D models act as the building components for the automatically constructed 3D scene. Two CCN-based applications are also described, which perform the query routing and similarity search on each node of the CCN network.
INTRODUCTION

In the new era of the media Internet, where multimedia content is overflowing the networks and users are usually near a networked device, the available forms of communication seem inadequate. IP has changed the way we communicate and entertain ourselves, using voice over IP (VoIP), videoconferencing, IPTV, email, instant messaging, social networks, and so on. However, now is the time users really need a completely new user experience that transcends these communication forms. In this new form of communication, 3D virtual environments will be the places for meeting, conversation, and entertainment among friends and colleagues. Moreover, these places will be constructed dynamically by each user based on the exact needs of the communication session. In order to do so in an easy and intuitive way, users should have the ability to search, retrieve, and use the deluge of multimedia content available in the networks as building components of a new 3D environment. However, in the current Internet architecture search and retrieval of multimedia content is not addressed
sufficiently, since the need for such services appeared rather late in the design process. On the other hand, the new trend of content-aware networks aims to solve exactly these issues by focusing on the content, not on the host computers and their network addresses. This approach allows the user to retrieve content by its name without knowing where this content resides.

The Internet has evolved from a network of computers to a limitless resource of multimedia content with critical societal and commercial impact. This fact has led the academic community to study different new architectures that put the content in focus. In the 4WARD project a system of flexible, modular network components is examined [1], and an information-centric paradigm focuses on information objects rather than end-to-end connections. In the ANA framework, generic abstractions of networking protocols are presented in order to support network heterogeneity and the coexistence of clean-slate and legacy Internet technology and protocols [2]. Koponen et al. propose a replacement of DNS with flat, self-certifying names in the DONA architecture [3]. Van Jacobson et al. [4] proposed the content-centric networking (CCN) approach, where content is routed based on hierarchically named components. The CCN protocol is based on two packet types, Interest and Data. The consumer transmits an Interest packet, which is propagated in the network only toward the nodes with available content. As a result, Data packets return to the consumer along the path the Interest packet traversed. Based on the CCN architecture, Daras et al. [5] proposed an extension that introduces similarity search for multimedia content in such networks. In a search-enabled CCN, the user is able not only to retrieve content objects by name but also to query the network for similar multimedia content.

At the same time, there is a plethora of academic as well as commercial attempts to build workflows targeting virtual 3D environments. Many of them aim at corporate communications,
Figure 1. The system's architecture: a search-enabled CCN of routers and search-enabled parties, a CCN search-to-IP gateway, and the PC hosting the sketchTo3D application over TCP/IP.
events, and meetings, while others aim at e-learning applications or entertainment. Products such as Assemb'Live, 3Dxplorer, web.alive, and Second Life are some of the commercial software available as services or appliances. All of them have their advantages and disadvantages, given the targeted audience and the different background of each firm. Unfortunately, none of them provides a fast and easy-to-use interface for constructing on the fly a 3D environment from the media content that exists on the network nearby.

Sketch as a user interface has been studied in various works in recent years, since drawing is one of the primary forms of communicating ideas. Humans tend to draw, with simple feature lines, the objects or places they want to describe. Chen et al. use sketch input along with some word tags to retrieve relevant images from a database and montage a novel image [6]. However, the search queries are in fact the word tags, while the sketch helps only with the positioning of the objects and refinement of the search results. The PhotoSketch system presents another approach to sketch-based image retrieval, based on the direction of the gradients of the sketch images [7]. Pu et al. [8] present a sketch-based user interface for retrieving 3D computer-aided design (CAD) models. Various papers propose different descriptors and methods for extracting features from sketch images for 2D image or 3D model retrieval. However, to the best of our knowledge, none of them targets the creation of 3D worlds using such asymmetric retrieval of 3D models from 2D sketches.

In this work we present a complete working environment for the automatic construction of such 3D virtual places by exploiting the search-enabled CCN architecture and applying multimedia retrieval principles for the retrieval of 3D models from 2D sketches. The user sketches a 2D drawing of the scene s/he wants to construct. The objects drawn in the 2D sketch form the queries dispatched in a CCN to find similar 3D models. The similar media content is retrieved from the CCN through a gateway, and, finally, a 3D environment is automatically constructed.
The rest of the article is organized as follows. The next section presents the overall system architecture and the subsystems of the proposed framework. We then give a detailed description of the proposed sketchTo3D application. Next, we describe the search components of the CCN network and present the experimental setup. The final section draws conclusions on the current work and gives insights for the future.
SYSTEM ARCHITECTURE

The main components of the proposed system are an application named sketchTo3D, which provides the sketch user interface and displays the 3D virtual environment; the searchGateway application, which acts as a gateway between the TCP/IP and CCN networks; and, finally, the searchProxy application, which runs on every search-enabled CCN party in order to perform a similarity search in its local repository. Figure 1 presents the overall architecture with the search-enabled CCN, the CCN search gateway, and the end user's PC that hosts the sketchTo3D application. All these components are explained in detail in the following sections.
THE SKETCHTO3D APPLICATION

Querying by example (QBE) is a very common technique for querying multimedia databases. However, a prerequisite for the user is to have an object that is similar to what s/he is looking for. From this point of view, the QBE approach may not be that practical for a single-modality search. On the other hand, in a multimodal setup where objects from one modality are always available to the user (e.g., sketch) for the formation of queries, searching in such a system grants a greatly enhanced user experience. The sketchTo3D application aims to provide the user with an easy-to-use interface for searching 3D models and building a 3D virtual world. The application is coded in C++ using the Qt 4.6 library for the graphical user interface (GUI), multithreading, and networking components. For the rendering of the 3D models, the OpenGL library is used.
Figure 2. The GUI of the sketchTo3D application: the sketch area (locked at the center), a toolbar with basic functions, a logs pane, the 3D virtual world widget, and the results pane, in which each search has a separate tab.
The sketch area is locked at the center of the GUI, and all other widgets can be docked at the left, right, top, or bottom of the application, allowing the user to build a working environment that fulfills his/her needs (Fig. 2). In the left pane of the GUI (as depicted in Fig. 2) there is the basic toolbar, with actions such as setting the brush size or clearing the sketch. The submit button is pressed every time the user finishes drawing an object, in order to start the search for similar 3D objects. The search process runs on a separate thread; thus, the user can continue drawing the next object in his/her scene. On the right side of the GUI the 3D virtual world is depicted as seen from a single camera pointing at (0,0,0) in Cartesian coordinates. The user may translate, rotate, and zoom the virtual camera by applying left, middle, or right click-and-drag mouse gestures, respectively, on the widget. Finally, at the bottom of the GUI the tabbed widget for the presentation of the search results is locked. Each search session is separated into a different tab, in which the results are presented in ranked lists from the most to the least similar objects. By double-clicking on an object in the results pane, the 3D model is placed in the scene at the correct position. When all the results have been retrieved and the user has selected the appropriate 3D models from the ranked lists, the 3D environment is completed. By applying the same mouse actions described above, the user may navigate inside the 3D world and explore the object set.
SUBMITTING A SEARCH QUERY

After the user clicks on the search button of the toolbar, a new search session is initiated. Since the search has to be conducted on each object of the scene alone, a segmentation step has to take
place first. Our segmentation technique makes the assumption that the user does not draw overlapping objects. Taking this into account, the application keeps a history of the sketched images between searches. To isolate the new object from the scene, a simple subtraction of the previous sketch image from the current one is performed automatically. Then the bounding square of the object is found. If the bounding square is larger than 100 × 100 pixels, a scaling step takes place in order to obtain a 100 × 100 pixel image of the object. The procedure is presented in Fig. 3. The scaling of the query image helps the descriptor extraction algorithm run faster, and thus be efficient for real-time usage, without crucially affecting the retrieved results. After the isolation of the sketched object from the scene, a new thread is initiated that handles all the procedures concerning this search without blocking the whole application from working properly. The query image extracted from the segmentation process is fed into the CMVD descriptor extractor [9], and a descriptor vector of low-level features of size 212 is computed. This descriptor vector is the actual query for each search. The next step is to make a connection request to the searchGateway using a TCP socket. When the connection is established, and after a handshake process, the descriptor vector is sent to the searchGateway for searching the CCN network. Finally, the thread enters an idle state, waiting for the gateway to reply with results.
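A simplified reconstruction of this segmentation procedure is sketched below (in Java, whereas the application itself is written in C++/Qt). It assumes binary sketch images of equal size and a non-empty newly drawn object; all names are ours.

```java
// Simplified sketch of the query-image extraction: subtract the previous
// sketch from the current one, find the bounding square of the new
// object, and downscale it to at most 100x100 pixels.
class QueryExtractor {
    // true = inked pixel; assumes objects do not overlap.
    boolean[][] isolate(boolean[][] current, boolean[][] previous) {
        int h = current.length, w = current[0].length;
        boolean[][] diff = new boolean[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                diff[y][x] = current[y][x] && !previous[y][x];
        return diff;
    }

    // Returns {minX, minY, maxX, maxY}; assumes at least one inked pixel.
    int[] boundingBox(boolean[][] img) {
        int minX = Integer.MAX_VALUE, minY = Integer.MAX_VALUE;
        int maxX = -1, maxY = -1;
        for (int y = 0; y < img.length; y++)
            for (int x = 0; x < img[0].length; x++)
                if (img[y][x]) {
                    minX = Math.min(minX, x); maxX = Math.max(maxX, x);
                    minY = Math.min(minY, y); maxY = Math.max(maxY, y);
                }
        return new int[] { minX, minY, maxX, maxY };
    }

    // Crop to the bounding square and downscale (nearest neighbor) so the
    // query image is no larger than 100x100 pixels.
    boolean[][] toQueryImage(boolean[][] diff) {
        int[] bb = boundingBox(diff);
        int side = Math.max(bb[2] - bb[0] + 1, bb[3] - bb[1] + 1);
        int out = Math.min(side, 100);
        boolean[][] q = new boolean[out][out];
        for (int y = 0; y < out; y++)
            for (int x = 0; x < out; x++) {
                int sy = Math.min(bb[1] + y * side / out, diff.length - 1);
                int sx = Math.min(bb[0] + x * side / out, diff[0].length - 1);
                q[y][x] = diff[sy][sx];
            }
        return q;
    }
}
```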
Figure 3. The steps for the extraction of the query image from the sketch: a) 800 × 745 pixels, the current sketch and the image to subtract for the next sketch; b) 800 × 745 pixels, the isolated object; c) 100 × 100 pixels, the final query image that is fed into the CMVD algorithm; d) 800 × 745 pixels, the current sketch and the image to subtract for the next sketch; e) 800 × 745 pixels, the isolated object resulting from the subtraction of image 3a from image 3d; f) 100 × 100 pixels, the final query image that is fed into the CMVD algorithm.

FROM SKETCH TO 3D

For the insertion of a 3D model into the 3D virtual world, three basic parameters should be known: the positioning of the object in the 3D space, the scale of the object with respect to the other objects of the scene, and the orientation of the 3D model in order to view it from the right angle. The following subsections describe our approach to these issues.

3D Model Positioning — The position of the 3D object in the 3D environment has to be inferred from the position of the object in the 2D sketch image as well as with respect to the other objects in the sketch. In our approach we assume that all the objects of the scene are attached to the ground, so the Y coordinate of the 3D world is always Y = 0 (we have no elevation). Thus, using the lowermost point of the sketched object as a reference point, we map the X-Y coordinates of the 2D sketch to the X-Z coordinates of the 3D world, as seen in Fig. 4. A heuristic stretch factor needs to be taken into account in order to map the height of the sketch image to the required depth in the 3D world. The user interface gives the user the ability to change the stretch factor and experience a deeper or narrower 3D scene after the insertion of the objects (Fig. 2).

Figure 4. Map X-Y coordinates from the sketch image to X-Z coordinates of the 3D world.

Scale — Taking into account the fact that all the 3D models are normalized to the unit sphere as in [9], the ratio of the 2D sketches may be used to apply the scale in the 3D environment. Indeed, the sketchTo3D application keeps an array of all objects that are drawn in the 2D sketch and their relative sizes based on the size of the first object drawn. Moreover, in order to keep the ratio intact, we calculate the relative sizes using a bounding square of every object around its centroid. Finally, based on this information a scaling factor is applied to every 3D model that enters the 3D virtual environment.

Orientation — The orientation of 3D models is still an open issue, and much research is being conducted toward an efficient solution. For this application, however, we use a simple solution that works efficiently. We make the assumption that if we have a large unbiased database of 3D objects and their 2D view descriptors, and query this database with a sketch of a 2D view, the most similar results will be not only the most similar objects, but also the exact views of those similar objects. Moreover, by using the rotation matrix we used for the extraction of the 2D views of each model, we may orient the 3D models in the 3D world with respect to the virtual camera coordinates. Since the descriptor extraction technique we use extracts descriptors for 18 views of each 3D model, the application is able to orient the 3D model in question at 18 different angles in the scene [9]. For finer tuning of the orientation of the 3D model, the user interface of the sketchTo3D application allows for manual in-place rotation of the model.
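The positioning and scaling logic above can be condensed into a small sketch, under our simplifying assumptions (ground plane at Y = 0, the first drawn object defining the unit size, and a user-tunable stretch factor); the class and method names are ours.

```java
// Hypothetical sketch of placing a retrieved 3D model in the scene.
class ScenePlacer {
    double stretchFactor = 1.0;  // user-tunable depth mapping (Fig. 2)
    double firstObjectSide = -1; // bounding-square side of the first object

    // Map the sketch coordinates (px, py) of the object's lowermost point
    // to world coordinates: X stays X, image height becomes depth Z, and
    // every object sits on the ground plane (Y = 0, no elevation).
    double[] position(double px, double py, double imageHeight) {
        double z = (imageHeight - py) * stretchFactor; // higher up = deeper
        return new double[] { px, 0.0, z };
    }

    // Models are normalized to the unit sphere [9], so the sides of the 2D
    // bounding squares give relative scales w.r.t. the first object drawn.
    double scale(double boundingSquareSide) {
        if (firstObjectSide < 0) firstObjectSide = boundingSquareSide;
        return boundingSquareSide / firstObjectSide;
    }
}
```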
SIMILARITY SEARCH IN CONTENT-CENTRIC NETWORKING

The CCN architecture, as described in [4] and implemented in the CCNx project [10], permits retrieval of content provided that the consumer knows all, or at least a prefix, of the name of the
desired content object. Although this design has some serious advantages, it does not solve the problem of searching for similar content, which is one of the most critical issues for future media Internet architectures. Our previous work in [5] is a first attempt to face this issue by proposing a search protocol as an extension to the CCN architecture. In the current work we use the aforementioned protocol to build a searchProxy daemon that runs in user space on a Linux system, as well as a gateway that works at the edge of the CCN network in order to interface it with other applications over TCP/IP networks. The search-enabled CCN network as depicted in Fig. 1 has two basic components: the CCN gateway, which is responsible for the interconnection of the CCN with the TCP/IP network, and the CCN party, which is a node of the network that acts as a consumer and producer of data. The CCN search gateway computer should run the searchGateway application and a CCND [10] daemon, which is the actual CCN router. For the CCN party the applications needed are the CCND router, the searchProxy application, which implements the search protocol and conducts the searches for similar objects, and a file proxy application. The file proxy application is an implementation of a CCN repository available as a demo application with the CCNx project distribution [10]. The searchGateway and searchProxy subsystems are presented in the following two subsections.
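As a toy illustration only: the search Interest of [5] carries the query descriptors, together with the gateway's local name prefix, inside the Interest name, so that parties can address their replies back to the gateway. One conceivable flat encoding is sketched below; the component layout is ours and not the actual protocol syntax of [5].

```java
import java.util.Locale;
import java.util.StringJoiner;

// Toy encoding of a search Interest name carrying the query descriptor
// vector and the reply-to prefix of the gateway; illustrative only.
class SearchInterestName {
    static String build(String gatewayPrefix, double[] descriptor) {
        StringJoiner values = new StringJoiner(",");
        for (double v : descriptor) {
            values.add(String.format(Locale.US, "%.4f", v));
        }
        // e.g., "/search/<reply-to prefix>/<comma-separated descriptors>"
        return "/search/" + gatewayPrefix + "/" + values;
    }
}
```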
THE SEARCHGATEWAY

The CCN searchGateway is a Java application that runs on a Linux computer and interfaces the CCN with the TCP/IP network. The basic operation of the application is to receive search
requests from outside the CCN network (via TCP connections), form the Interest queries, and submit the search Interests to the search-enabled CCN network. If there are similar objects in the CCN network, the CCN searchGateway first receives and caches the similar content from the CCN parties, and then communicates with the client that started the search to send the requested content using FTP.

In idle time the gateway listens on a TCP socket for new clients from the IP side. When a new client requests a connection, a new thread is initiated to handle the connection. After the connection is established, a handshake procedure is followed to confirm that this is a valid client; finally, the descriptor vector of the query object is sent to the CCN gateway to form the search Interest. Figure 5 presents the messages exchanged by the CCN gateway and the sketchTo3D application for the initiation of a search session.

Upon reception of the descriptor vector from the sketchTo3D client, the searchGateway forms a search Interest, described in detail in [5]. In short, the search Interest name contains the descriptor vector as well as the local name prefix, so that the CCN parties can refer back to the CCN gateway that expressed the Interest. While the main loop of the gateway waits for new search requests, the thread that expressed the Interest enters a wait state, waiting for the search-enabled CCN parties to answer with similar multimedia objects. As the result of a successful search in a CCN party, a list of content names is sent to the gateway. The first record of the list refers to a file containing the ranking of the successfully retrieved content and the distance of each one from the query. The CCN searchGateway waits for a predefined time window for responses and finally uses the ranked lists from the CCN parties to re-rank the available content. The re-ranking is based on the Euclidean distance (L2) of each content descriptor vector to the descriptor vector of the query object. The result of this procedure is a new ranked list from which the top K most similar 3D models are retrieved and cached in the CCN searchGateway. Moreover, for every file that is cached in the temporary FTP directory of the gateway, a message is sent to the sketchTo3D application informing it that a result is available, as well as the ranking of this result. This message is transmitted through the TCP socket that was initially established for the transmission of the query descriptors. As described above, the sketchTo3D application gets the resulting 3D model files using the FTP client.
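The re-ranking step can be sketched as follows, assuming the parties' replies have already been fetched and parsed into name/descriptor pairs; the helper types and names are ours.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch of the gateway-side re-ranking: pool the ranked lists returned
// by all CCN parties within the time window, recompute the L2 distance
// of each candidate to the query descriptors, and keep the top K.
class GatewayReRanker {
    static class Candidate {
        String contentName;
        double[] descriptor;
        double distance;
    }

    static List<Candidate> rerank(List<Candidate> pooled, double[] query, int k) {
        for (Candidate c : pooled) {
            double d2 = 0.0;
            for (int i = 0; i < query.length; i++) {
                double diff = c.descriptor[i] - query[i];
                d2 += diff * diff;
            }
            c.distance = Math.sqrt(d2); // Euclidean (L2) distance
        }
        // Duplicates reported by several parties could also be collapsed
        // here, e.g., by content name, before truncating to the top K.
        pooled.sort(Comparator.comparingDouble(c -> c.distance));
        return new ArrayList<>(pooled.subList(0, Math.min(k, pooled.size())));
    }
}
```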
THE SEARCHPROXY APPLICATION

The searchProxy is also a Java application. Each search-enabled CCN party must have a searchProxy running in the background in order to support the CCN search protocol. A searchProxy instance is responsible for indexing the content that is available in the party's local repository and for replying to search queries if similar content exists in its index. For each 3D object in the local repository (file proxy) that has to be indexed, CMVD [9] descriptors (212 values for each view, 18 views in total) are extracted and saved in a local database. The searchProxy application uses a kd-tree indexing structure in order to organize the records and perform fast exact searches or nearest neighbor searches.

The search process is as follows. First, a nearest neighbor search is performed to find the 10 most similar records in the database. Since the database consists of the descriptor vectors of the views of the 3D models, sometimes more than one view of the same 3D model matches the query and appears in the returned list. As a result, in a second step possible duplicate records of 3D models are removed from the returned list. Third, the Euclidean distance is calculated between the query descriptors and each nearest neighbor in order to have the exact distances. Next, the 3D objects with distances greater than a threshold are also discarded from the results list. Based on the remaining objects and their distances, a ranked list file is created in the local repository containing information such as each object's name, its distance from the query, and the rotation matrix of the winning view of the 3D object. Then a collection of objects is compiled, with the first one being the name of the ranked list file. After the compilation of the collection of 3D models that successfully passed the similarity matching process, a reply Interest is expressed to advertise the available results. When this Interest reaches the searchGateway, it is then up to the searchGateway to collect the desired content from the file proxy that serves these content objects, as described in the previous subsection.
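A rough sketch of the proxy's matching steps follows, with a linear scan standing in for the kd-tree index of the real application; all names are ours.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch of the searchProxy matching: nearest-neighbor lookup over the
// indexed view descriptors, removal of duplicate views of the same model,
// and a distance threshold on the survivors.
class ProxySearch {
    static class ViewRecord {
        String modelName;    // 3D model this view belongs to
        int viewIndex;       // one of the 18 CMVD views [9]
        double[] descriptor; // 212 low-level features per view
    }

    static List<ViewRecord> search(List<ViewRecord> index, double[] query,
                                   int nn, double threshold) {
        // Step 1: take the nn closest view records (10 in the article).
        index.sort(Comparator.comparingDouble(r -> l2(r.descriptor, query)));
        List<ViewRecord> nearest = index.subList(0, Math.min(nn, index.size()));

        // Steps 2-4: keep only the best (closest) view of each model and
        // discard anything beyond the distance threshold. The winning
        // view's rotation matrix later orients the model in the scene.
        Set<String> seen = new HashSet<>();
        List<ViewRecord> results = new ArrayList<>();
        for (ViewRecord r : nearest) {
            if (l2(r.descriptor, query) <= threshold && seen.add(r.modelName)) {
                results.add(r);
            }
        }
        return results;
    }

    static double l2(double[] a, double[] b) {
        double d2 = 0.0;
        for (int i = 0; i < a.length; i++) {
            double diff = a[i] - b[i];
            d2 += diff * diff;
        }
        return Math.sqrt(d2);
    }
}
```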
EXPERIMENTAL SETUP

The experimental setup consisted of one Windows PC that hosted the sketchTo3D application and two Windows PCs that hosted in total four virtual machines running the Ubuntu Linux operating system in order to form the CCN network. The first virtual box (VB1) worked as both the CCN search gateway and a CCN party; in other words, both the searchGateway and searchProxy applications were running on the VB1 virtual machine. All the other virtual boxes (VB2, VB3, and VB4) played only the CCN party role in the network.
For the 3D model database we used the SHREC 2008 generic models track, which contains 1814 3D models of various objects (humans, vehicles, plants and flowers, etc.). The database was manually split into four overlapping parts, and each part was stored on a different virtual box of the CCN network in order to have different records on the different nodes of the network. However, we introduced a small overlap in order to test how duplicates would be handled by the re-ranking process of the gateway application.
Figure 5. Handshake messages between the CCN gateway and a sketchTo3D client submitting query descriptors.
CONCLUSIONS AND FUTURE WORK

In this article we have presented a search and retrieval scheme that uses a single sketch to retrieve 3D models and compile a 3D environment using the multimedia content traveling in a content-centric network. By expanding the content-centric network to support multimedia similarity content search, we give users the ability to retrieve multimedia content already available in the network without knowing where this content is stored or the name of each content object in question. On the other hand, the sketch-based user interface is an intuitive UI that provides a greatly enhanced user experience for multimedia content search while helping users express their thoughts and ideas in detail.

In the future, as far as the user interface is concerned, we plan to rebuild it as a web application in order to reach a wider group of testers and extend the user actions available in the current version. We are also considering inserting real 3D video streams so as to create real-time, on-the-fly 3D immersive environments from simple sketches.
ACKNOWLEDGMENTS

This work was supported by the EU FP7 project 3DLife, ICT-247688.
REFERENCES

[1] N. Niebert et al., "The Way 4WARD to the Creation of a Future Internet," IEEE 19th PIMRC '08, Sept. 15–18, 2008, pp. 1–5.
[2] G. Bouabene et al., "The Autonomic Network Architecture (ANA)," IEEE JSAC, vol. 28, no. 1, Jan. 2010, pp. 4–14.
[3] T. Koponen et al., "A Data-Oriented (and Beyond) Network Architecture," Proc. ACM SIGCOMM '07, Aug. 2007.
[4] V. Jacobson et al., "Networking Named Content," Proc. ACM CoNEXT '09, 2009.
[5] P. Daras et al., "Similarity Content Search in Content Centric Networks," ACM Multimedia, 2010, Firenze, Italy.
[6] T. Chen et al., "Sketch2Photo: Internet Image Montage," ACM Trans. Graphics, vol. 28, no. 5, Dec. 2009, pp. 1–10.
[7] M. Eitz et al., "PhotoSketch: A Sketch Based Image Query and Compositing System," ACM SIGGRAPH '09, Aug. 3–7, 2009, New Orleans, LA.
[8] J. Pu, K. Lou, and K. Ramani, "A 2D Sketch-Based User Interface for 3D CAD Model Retrieval," Comp. Aided Design App., vol. 2, no. 6, 2005, pp. 717–27.
[9] P. Daras and A. Axenopoulos, "A Compact Multi-View Descriptor for 3D Object Retrieval," 7th Int'l. Wksp. Content-Based Multimedia Indexing, 2009, pp. 115–19.
[10] Project CCNx, accessed July 2010; http://www.ccnx.org/
BIOGRAPHIES

THEODOROS SEMERTZIDIS received a Diploma degree in electrical and computer engineering from Democritus University of Thrace (2004) and an M.Sc. degree in advanced computer and communication systems from Aristotle University
of Thessaloniki, Greece (2009), where he is now a Ph.D. candidate. He has worked for the Informatics and Telematics Institute as a research associate since 2006. His research interests include distributed systems and multimedia search and retrieval.

PETROS DARAS [M'07] ([email protected]) is a senior researcher at the Informatics and Telematics Institute. He received a Diploma degree in electrical and computer engineering, an M.Sc. degree in medical informatics, and a Ph.D. degree in electrical and computer engineering from Aristotle University of Thessaloniki in 1999, 2002, and 2005, respectively. His main research interests include computer vision, search and retrieval of 3D objects, and medical informatics.

PAUL MOORE is a graduate in computer business systems of Ryerson University, Toronto, Canada, and also holds a degree in economics from the University of Toronto. He has more than 20 years of experience in IT systems, including six years as technical director or coordinator of different European projects. He is head of the Media unit in Atos Research & Innovation, and is the representative for Atos Origin on the Steering Committee of NEM.

LAMBROS MAKRIS is a research associate at the Informatics and Telematics Institute, Greece. He received his Diploma and Ph.D. in electrical engineering from Aristotle University of Thessaloniki in 1994 and 2007, respectively. His research interests include applications of local and wide area networks, distributed information systems, databases, electronic commerce, data security, and encryption.

MICHAEL GERASSIMOS STRINTZIS [M'70, SM'80, F'04] received a Diploma degree in electrical engineering from the National Technical University of Athens, Greece, in 1967, and M.A. and Ph.D. degrees in electrical engineering from Princeton University, New Jersey, in 1969 and 1970, respectively. He is a professor of electrical and computer engineering at the University of Thessaloniki. He has served as an Associate Editor of IEEE Transactions on Circuits and Systems for Video Technology since 1999. In 1984 he was awarded one of the Centennial Medals of the IEEE.
SERIES EDITORIAL
TOPICS IN NETWORK TESTING
Ying-Dar Lin
Erica Johnson

Eduardo Joo

The objective of the Network Testing Series of IEEE Communications Magazine is to provide a forum across academia and industry to address the design and implementation defects unveiled by network testing. In industry, testing has been a means to evaluate the design and implementation of a system. In academia, however, a more common practice is to evaluate a design by mathematical analysis or simulation, without actual implementations; a less common practice is to evaluate a design by testing a partial implementation. That is, academia focuses more deeply on algorithmic design evaluation, while industry has broader concerns covering both algorithmic design and system implementation issues. Often an optimized algorithmic component cannot guarantee optimal operation of the whole system when other components throttle the overall performance. This series thus serves as a forum to bridge the gap, where the design or implementation defects found by either community can be referred to by the other.

The defects could be found in various dimensions of testing. The type of testing could be functionality, performance, conformance, interoperability, or stability of the systems under test (SUT), in the laboratory or in the field. The SUT could be black-box, without source code or binary code; grey-box, with binary code or an interface; or white-box, with source code. For grey-box or white-box testing, profiling helps identify and diagnose system bottlenecks. For black-box testing, benchmarking devices of the same class can reflect the state of the art. The SUT can range from link-layer systems such as Ethernet, WLAN, WiMAX, third-/fourth-generation (3G/4G) cellular, and digital subscriber line (xDSL), to mid-layer switches and routers, to upper-layer systems such as voice over IP, Session Initiation Protocol (SIP) signaling, multimedia, network security, and consumer devices such as handhelds, and even to large-scale network systems.

Our first call received nine submissions, and we selected two of them. The selection is based on several factors: how relevant to network testing the work is, how new the test methodology or result is, how informative the article is, and so on. The two selected happen to represent two extremes in terms of scale, the first addressing a specific issue of a network component and the second a large-scale distributed testbed. Both articles have authors at universities or research organizations. We shall continue to solicit contributions from industry, although industry people usually do not have publication as a priority.

The first article answers the question of how adjacent channel interference (ACI) affects the performance of IEEE 802.11a, which was believed to be ACI-free due to better channelization combined with orthogonal frequency-division multiplexed (OFDM) transmissions. The authors present a modified model for the signal-to-interference-plus-noise ratio (SINR) and quantify the throughput degradation due to ACI. The model is verified on a wireless medium emulated with cables and attenuators so as to isolate the affected 802.11 mechanisms. The result implies that even with the large number of channels in 802.11a, careful channel selection is still needed in order to achieve higher throughput.
The second article presents a large-scale testbed, Panlab, for future Internet applications. Panlab is a testbed to pilot future Internet applications over the existing Internet. It is a distributed testbed with platform resources contributed by the Panlab partners, coordinated and configured by the Panlab office, and offering testing services to Panlab customers to test deployed applications or their control modules. With the shared Panlab testbed, piloting a new application becomes easier than constructing a global-scale testbed of one’s own to experiment with an application. The article illustrates how to use Panlab through a case study on testing adaptive admission control and resource allocation algorithms for web and database applications, where traffic generators are configured at two testbeds, and the web and database applications, along with the algorithms under test, reside at a third testbed.
BIOGRAPHIES

YING-DAR LIN () is a professor of computer science at National Chiao Tung University, Taiwan. He received his Ph.D. in computer science from the University of California at Los Angeles in 1993. He spent his sabbatical year, 2007–2008, as a visiting scholar at Cisco Systems, San Jose, California. Since 2002 he has been the founder and director of the Network Benchmarking Laboratory (NBL, www.nbl.org.tw), which reviews network products with real traffic. He also cofounded L7 Networks Inc. in 2002, which was later acquired by D-Link Corp. His research interests include the design, analysis, implementation, and benchmarking of network protocols and algorithms, quality of service, network security, deep packet inspection, P2P networking, and embedded hardware/software co-design. His work on multihop cellular has been cited over 500 times. He is currently on the editorial boards of IEEE Communications Magazine, IEEE Communications Surveys and Tutorials, IEEE Communications Letters, Computer Communications, and Computer Networks.

ERICA JOHNSON is the director of the University of New Hampshire InterOperability Laboratory. In this role, she manages and oversees over 20 different data networking and storage technologies, providing all aspects of administration, including coordination of high-profile testing events, coordination with different consortiums, and work with various industry forums. She is also a prominent member of organizations both internally and externally, and she enjoys a powerful mix of technology and business related activities. At the University of New Hampshire she participates in the UNH Steering Committee for Information Technology, the Senior Vice Provost for Research Working Group, and the Computer Science Advisory Board. In the industry she was appointed technical representative of North America for the IPv6 Ready Logo Committee and was also chosen to be an IPv6 Forum Fellow. Passionate about the laboratory and its possibilities, she continues to work with many industry forums, commercial service providers, network equipment vendors, and other universities in order to further the InterOperability Laboratory's mission.

EDUARDO JOO is a software project leader at Empirix, Inc., Bedford, Massachusetts. He received his M.S. in computer system engineering, computer communications and networks, from Boston University in 2006. He joined Empirix, Inc. in 2001 and has led the successful development of network testing and emulation systems, including PacketSphere Network Emulator, PacketSphere RealStreamer, Hammer NxT, and Hammer G5. He is currently leading the development of next-generation mobile broadband data network monitoring and testing tools. His areas of interest include voice and data protocols, and wired, wireless, and mobile network communications.
TOPICS IN NETWORK TESTING
Adjacent Channel Interference in 802.11a Is Harmful: Testbed Validation of a Simple Quantification Model Vangelis Angelakis, Linköping University Stefanos Papadakis, Foundation for Research and Technology — Hellas (FORTH) Vasilios A. Siris, FORTH and Athens University of Economics and Business Apostolos Traganitis, FORTH and University of Crete
ABSTRACT

Wireless LAN radio interfaces based on the IEEE 802.11a standard have lately found widespread use in many wireless applications. A key reason for this is that although its predecessor, IEEE 802.11b/g, had a poor channelization scheme, which resulted in strangling adjacent channel interference (ACI), 802.11a was widely believed to be ACI-free due to better channelization combined with OFDM transmission. We show that this is not the case: ACI does exist in 802.11a, and we can quantify its magnitude and predict its results. For this, we present minor modifications of a simple model originally introduced in [1] that allow us to calculate bounding values of the 802.11a ACI, which can be used in link budget calculations. Using a laboratory testbed, we verify the estimations of the model, performing experiments designed to isolate the affected 802.11 mechanisms. This isolation was enabled by not using the wireless medium, but emulating it over cables and attenuators. Our results show clear throughput degradation because of ACI in 802.11a, the magnitude of which depends on the interfering data rates, packet sizes, and utilization of the medium.
INTRODUCTION
The IEEE 802.11a standard amendment describes an OFDM-based physical layer for 802.11 wireless stations operating in the 5 GHz band. Due to the poor channelization of 802.11b/g, which left only three of the available channels non-overlapping, the channelization scheme of 802.11a was over-advertised as offering 19 non-overlapping channels in the European Telecommunications Standards Institute (ETSI) regulatory domain and 20 in the Federal Communications Commission (FCC) domain. This implied that no adjacent channel interference (ACI) was to be expected in 802.11a, and therefore no performance degradation would be
observed for neighboring links operating in neighboring channels. Indeed, the 52 orthogonal frequency-division multiplexing (OFDM) subcarriers defined in the 802.11a amendment appear to lie well within the channel bandwidth of 20 MHz, and each channel's central frequency has a spacing of 20 MHz from the next/previous adjacent channel. Still, examining the transmit spectral mask (the power contained in a specified frequency bandwidth at certain offsets relative to the total carrier power) required for compliance in the specification [2], we see that some transmitted power is allowed to leak not only into the immediately adjacent channels, but also as far as two channels away from the communication channel. Meanwhile, the wireless network research community endorsed 802.11a as the standard of choice for multiradio nodes and dense wireless LAN (WLAN) deployments. Two main reasons were behind this: first, the 2.4 GHz band was already overcrowded, since 802.11b/g-compliant devices had been on the market long before the 802.11a ones; and second, it was widely believed that 5 GHz capacity problems due to interference would be mitigated by the non-overlapping channels promised by the standard and the vendors. Unfortunately, although the power allowed to leak into the neighboring channels is indeed quite low compared to the transmitted signal power, it is sufficient to cause ACI effects, especially when neighboring radio interfaces use nearby channels, or when the signal-to-interference-plus-noise ratio (SINR) observed at the receiver of a node is marginally larger than the threshold required to support a required rate.
RELATED WORK

The authors of [3] performed experiments on a testbed with Atheros-based 802.11a interfaces to examine the effect of potential ACI on a dual-radio multihop network. Their work includes both laboratory and outdoor experiments using omnidirectional antennas. The former indicated that the Atheros AR5213A-chipset interfaces they employed were indeed compliant with the
spectral requirements of the 802.11a specification. Their testbed was based on a single-board Linux-based PC that hosted two interfaces and used the open source MadWifi driver. The outdoor experiments in that work were the first to provide evidence of ACI. They report observing no board crosstalk or interference other than that caused by operating neighboring links on adjacent channels. They were the first to suggest increasing channel separation and antenna distance, as well as using directional antennas, in order to mitigate the effects of 802.11a ACI on throughput. These were the first reports of 802.11a ACI, which, however, did not include any insight or solid hypothesis as to why this ACI exists. Nevertheless, ACI effects were clearly demonstrated, leaving no doubt about the existence of ACI in 802.11a.

In [1] the authors introduced a simple model to theoretically quantify ACI caused by overlaps in neighboring channels. Their key idea was to take an integral over the whole overlapping region of the interfering channels' spectral masks. They applied it to the spectral masks of 802.11b/g, which have had known overlap issues due to poor channelization design, and also to that of 802.16. They claim that the use of partially overlapped channels is not harmful, provided that higher layers take it into consideration and adapt accordingly. Furthermore, they show that careful use of some partially overlapped channels can often lead to significant improvements in spectrum utilization and application performance, with respect to the interfering nodes' distances.

In [4, 5] we introduced minor modifications to the limits of the integral used in the ACI quantification model of [1] and were the first to apply it to the 802.11a spectral mask; we produced results on a testbed where the wireless channel was emulated using attenuators, and on a testbed with real outdoor mid-range wireless links using directional antennas. Those two works verified:
• Our hypothesis that the ACI observed in 802.11a is caused by the overlap of the channel sidelobes allowed by the IEEE specifications
• That ACI can be caused by channels that are not only directly adjacent
• That ACI can be harmful if not taken into account during system and resource planning

The testbeds in all the above papers used Atheros-based wireless interfaces and the open source MadWifi driver. This choice was made primarily because MadWifi was at the time the de facto reference driver for the vast majority of testbeds in the literature. Since then the MadWifi driver has been rendered obsolete, declared legacy, and is no longer supported by the Linux kernel.

In this work we provide new evidence that 802.11a ACI can be quantified and its effects predicted. In particular, we first demonstrate the existence of 802.11a ACI on a testbed with Atheros-based interfaces driven by the newly developed ath5k open source driver. Second, we quantify the ACI effect in terms of goodput, completely isolating the medium access control
(MAC) and physical (PHY) layer mechanisms that are susceptible to its effect, using a wireless link emulation testbed.
SYSTEM MODEL/ACI QUANTIFICATION

The SINR criterion for data reception (Eq. 1) requires that the signal-of-interest power arriving at a receiver, over the sum of the interference and thermal noise powers, must be above a threshold which is defined with respect to the transmission parameters (modulation scheme) and the quality of service requirements (data rate of transmission and reception bit error rate [BER]). In Eq. 1 we assume k interfering transmitters operating on the same channel as the signal-of-interest transmitter, with powers P_i and P_tx, respectively.

\[
\frac{P_{tx}\cdot \mathrm{PathLoss}(tx,rx)}{N_{rx}+\sum_{i=1}^{k}P_{i}\cdot \mathrm{PathLoss}(i,rx)} \geq \theta_{SINR} \qquad (1)
\]
Typically, the SINR criterion is applied, as in Eq. 1, in single-channel systems, where the interfering transmissions are assumed to occupy the entire bandwidth of the used channel and are considered noise. In a channelization scheme where more than one channel is used, with some partial overlap of their bandwidths [1], an ACI factor X_{i,rx} is introduced for each of the interferers, which can be used in the SINR calculations. This factor depends on the spectral properties of the channels and the transmitted signals, and the separation between the channels of an interferer i and the receiver rx. Specifically, the affecting properties are the interchannel spectral distance, channel bandwidth, spectral mask, and receiver filter. This factor takes values in [0, 1], with 0 indicating no overlap (i.e., complete orthogonality) and 1 indicating that the interferer is using the same channel as the receiver. For our work we calculate this interference factor by normalizing the spectral mask S(f) within a frequency width w that should be at least equal to the nominal channel width, and then filtering this normalized S′(f) over the frequencies that fall within the bandpass filter of the receiver. Ideally, for the case of 802.11a, the spectral mask should be a flat bandpass 20 MHz filter, but for the sake of being more realistic we assume that the interfaces employed use a single imperfect, wider-than-nominal bandpass filter both for transmission and reception. In the general case we can use Eq. 2 to obtain the factor X_{i,rx} for an interferer i and a receiver rx, as a function of R′(f), the normalized receiver filter transfer function in [–w/2, w/2]:

\[
x_{i,rx}=\int_{-w/2}^{w/2} R'(f)\,S_i'(f-f_{int})\,df, \qquad (2)
\]

where we have denoted by f_{int} the frequency offset at which the interfering channel is centered (Fig. 1).
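To make the bookkeeping concrete, the following sketch (ours, not from the paper) applies the SINR criterion of Eq. 1 with interferer powers scaled by ACI factors of the kind Eq. 2 produces. All powers and the threshold are illustrative; the X values anticipate Table 1 below (20 MHz receiver bandwidth).

```python
import math

# Illustrative evaluation of the SINR criterion (Eq. 1) with ACI-attenuated
# interferers. All powers and the threshold are made-up example values.

def dbm_to_mw(dbm):
    return 10.0 ** (dbm / 10.0)

def mw_to_dbm(mw):
    return 10.0 * math.log10(mw)

signal_dbm = -60.0   # P_tx * PathLoss(tx, rx), already referred to the receiver
noise_dbm = -101.0   # typical thermal noise floor for a 20 MHz channel

# (received co-channel power in dBm, ACI factor X_i,rx in dB):
# about -22 dB one channel away, about -40 dB two channels away (Table 1).
interferers = [(-55.0, -22.04), (-50.0, -39.67)]

interference_mw = sum(dbm_to_mw(p + x) for p, x in interferers)
sinr_db = signal_dbm - mw_to_dbm(dbm_to_mw(noise_dbm) + interference_mw)

theta_sinr_db = 20.0  # illustrative threshold for the desired rate and BER
print(f"SINR = {sinr_db:.1f} dB; criterion met: {sinr_db >= theta_sinr_db}")
```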
Figure 1. Graphical representation of the calculation of Eq. 2.

Finally, in a system where all radio interfaces adhere to the same protocol, it is reasonable to assume that all nodes have the same S(f), and furthermore that this output filter matches the receiver filter, so that S(⋅) = R(⋅). Under these two assumptions, Eq. 2 becomes

\[
x_{i,rx}=\int_{-w/2}^{w/2} S'(f)\,S'(f-f_{int})\,df, \qquad (3)
\]

where, as before, f_{int} denotes the frequency offset at which the interfering channel is centered (Fig. 1).

Using this model and the spectral mask for 802.11a mandated by the standard in [2], we calculated the maximum compliant power leakage between two neighboring 802.11a channels. Table 1 shows the results of our calculations of the ACI factor X_{i,rx} expressed in dB (essentially the attenuation of the transmitted power due to the frequency offset), which may be used directly in any link budget calculation.

Receiver bandwidth | Immediately adjacent channel power leakage X_{i,(i±1)} | Next adjacent channel power leakage X_{i,(i±2)}
20 MHz | –22.04 dB | –39.67 dB
∞ | –19.05 dB | –36.67 dB

Table 1. Theoretically calculated X_{i,rx} in dB.
Table 1 indicates that the interference factor X is sufficient for a single transmitter to inject ACI power at a receiver well above the thermal noise of an 802.11a system, under conditions enabled by proximity or power allocation, even if the interfering transmitter were using the next adjacent channel to that of the receiver. For example, assuming the typical thermal noise of –101 dBm, a 20 MHz 802.11a channel centered at 5600 MHz (channel 120), and zero antenna gains, an interferer on an adjacent channel (say channel 124 at 5620 MHz) transmitting at only 1 mW (0 dBm) within approximately 40 m of the receiver would be received above the noise, reducing the perceived SINR by at least 3 dB within that range. Therefore, because of the channel design in IEEE 802.11a, ACI will be observed and, if not properly considered, will cause degradation of a system's performance. We must note, though, that our calculations in Table 1 use the spectral mask mandated by the standard, which is an envelope for actual implementations, as vendors compete to achieve better specifications for their cards.
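As a rough cross-check of Table 1, Eq. 2 can be evaluated numerically from the 802.11a mask breakpoints in [2] (0 dBr out to ±9 MHz, –20 dBr at ±11 MHz, –28 dBr at ±20 MHz, –40 dBr at ±30 MHz and beyond, linear in dB in between). The sketch below is ours and simplifies the receiver to an ideal flat 20 MHz filter, so it lands within roughly 1–2 dB of the 20 MHz row of Table 1 rather than reproducing it exactly; the residual gap comes from the paper's wider-than-nominal receiver-filter assumption.

```python
import numpy as np

# IEEE 802.11a transmit spectral mask breakpoints (offset MHz, level dBr) [2].
BREAK_MHZ = [0.0, 9.0, 11.0, 20.0, 30.0, 60.0]
LEVEL_DBR = [0.0, 0.0, -20.0, -28.0, -40.0, -40.0]

def mask_lin(f_mhz):
    """Mask as linear relative power, interpolated in dB over |f|."""
    return 10.0 ** (np.interp(np.abs(f_mhz), BREAK_MHZ, LEVEL_DBR) / 10.0)

def aci_factor_db(offset_mhz, rx_bw_mhz=20.0, w_mhz=120.0, n=240001):
    """Approximate Eq. 2: fraction of a mask-compliant transmitter's power
    collected by an ideal flat receiver filter at a given channel offset."""
    f = np.linspace(-w_mhz / 2.0, w_mhz / 2.0, n)
    total = np.trapz(mask_lin(f), f)                    # normalizes S'(f)
    rx = (np.abs(f) <= rx_bw_mhz / 2.0).astype(float)   # ideal receiver filter
    leak = np.trapz(rx * mask_lin(f - offset_mhz), f)
    return 10.0 * np.log10(leak / total)

print(aci_factor_db(20.0))  # adjacent channel: about -23 dB (Table 1: -22.04)
print(aci_factor_db(40.0))  # next adjacent:    about -40 dB (Table 1: -39.67)
```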
In order to experimentally verify our calculations we developed a testbed with off-the-shelf equipment. As in our previous work, we chose to emulate the wireless medium rather than using the air, in order to remove its non-deterministic characteristics, avoid unknown interference, and eliminate the inherent wireless medium uncertainty from our investigation. This led us to a laboratory testbed where the nodes' antenna connectors were interconnected using coaxial cables, attenuators, signal splitters, and combiners. We separated the MAC and PHY mechanisms that affect the efficiency of the protocol in the presence of ACI in order to obtain bounds for the worst cases.

THE COMMUNICATION MECHANISMS AFFECTED BY ACI IN 802.11

DATA RECEPTION MECHANISM ERRORS
Assuming that the interference caused by 802.11a stations can be modeled as white Gaussian noise, we can determine whether the SINR requirements for the 802.11a transmission rates, given in the specifications for 10 percent packet error rate, can be met in the presence of ACI. Since each interfering node produces some ACI at the receiver, interesting interface topologies, arising from poor system design, can be observed where the total ACI will bring the SINR at the receiver below the threshold.

Multiradio Nodes — In a multiradio node, assignment of neighboring channels to interfaces that have their antennas close together has been shown to cause reduced performance [3]. In such a scenario the interference arriving at a receiver can be sufficiently high to be harmful due to proximity, which causes low interference path losses.

Single-Radio Nodes — In dense topologies where channel allocation may inevitably place nearby links on adjacent channels, if the path losses to some receivers are high, or the number of concurrent interfering transmissions is large, their aggregate power may be high enough to bring the SINR below threshold.
CLEAR CHANNEL ASSESSMENT FALSE POSITIVES

IEEE 802.11 employs a distributed coordination function (DCF), which is essentially a carrier sense multiple access with collision avoidance (CSMA/CA) MAC protocol with binary exponential backoff. The DCF defines a basic access mechanism and an optional request-to-send/clear-to-send (RTS/CTS) mechanism. Let us consider just the basic access mechanism. In the DCF a station has to sense the channel as clear (i.e., idle) for at least a duration of DIFS + CWmin (both defined in [6]) in order to gain access to it. The 802.11a standard requires that a
clear channel assessment (CCA) mechanism be provided by the PHY layer. The CCA mechanism is to declare the channel busy when it decodes a PHY layer preamble received at a power at least equal to the sensitivity of the basic 6 Mb/s rate (sensitivity being the minimum input power level at which decoding can be achieved at a desired BER for a given rate), or when it detects any signal with power 20 dB above the 6 Mb/s sensitivity. Interference can cause the CCA to misreport in the case of nearby interfaces: a channel may be sensed as busy due to high received power from an interfering neighboring channel. This can occur when two nearby 802.11a transmitters contend over different channels, such as in a poorly designed multiradio node; for example, in a multiradio mesh node that has two or more interfaces using nearby channels, with omnidirectional or directional antennas, and with insufficient spatial separation or EM shielding between them.
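The two busy conditions can be captured in a few lines. The sketch below is ours; the threshold figures (–82 dBm sensitivity at 6 Mb/s, energy detection 20 dB above it) are the values commonly cited for 802.11a, and the example numbers are purely illustrative.

```python
# Sketch of the two 802.11a CCA busy conditions applied to adjacent-channel
# leakage. Commonly cited thresholds: 6 Mb/s sensitivity -82 dBm; energy
# detection 20 dB above that.

PREAMBLE_SENSITIVITY_DBM = -82.0
ENERGY_DETECT_DBM = PREAMBLE_SENSITIVITY_DBM + 20.0   # -62 dBm

def cca_busy(rx_power_dbm, preamble_decodable):
    """Busy if a decodable preamble arrives above the basic-rate sensitivity,
    or if any signal exceeds the energy-detection threshold."""
    if preamble_decodable and rx_power_dbm >= PREAMBLE_SENSITIVITY_DBM:
        return True
    return rx_power_dbm >= ENERGY_DETECT_DBM

# Example: a 15 dBm transmitter one channel away (X = -22.04 dB, Table 1)
# with only 20 dB of path loss between two co-located interfaces. The leaked
# power is not a decodable preamble, but it trips energy detection.
leak_dbm = 15.0 - 22.04 - 20.0                        # about -27 dBm
print(cca_busy(leak_dbm, preamble_decodable=False))   # True: medium seen busy
```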
EXPERIMENTAL VERIFICATION

TESTBED DESCRIPTION

The testbed of our experiments consisted of four nodes interconnected using cables and attenuators, eliminating the unpredictable wireless medium and thus fully controlling the transmission and reception paths and losses. Each node is an EPIA SP13000 mini-ITX motherboard with a 1.3 GHz C3 CPU and 512 Mbytes of RAM, running Gentoo Linux with the wireless-testing tree kernel v2.6.31-rc8-wl. With a RouterBoard miniPCI-to-PCI adapter, an Atheros AR5213A-chipset miniPCI wireless interface card (CM9-GP) was used on each node, running the ath5k 802.11a/b/g driver and hostapd v0.6.8. The ath5k driver was modified to use the two antenna connectors of the wireless card independently, one only for transmission and the other only for reception. This was the primary enabler for the design of the experiments conducted. We also used the AirMagnet Laptop Analyzer v6.1 software to monitor the wireless traffic and a Rohde & Schwarz FSH6 spectrum analyzer for channel power and bandwidth verification. The interconnectivity of the nodes was routed through coaxial cables, four-way HyperLink Tech splitters/combiners, Agilent fixed attenuators (of 3, 6, 10, 20, and 50 dB), and programmable attenuators by Aeroflex/Weinschel with a 0 to 55.75 dB attenuation range in steps of 0.25 dB. For traffic generation and throughput measurements, we used iperf v2.0.4 with pthreads enabled. Before any measurement, a bootstrap procedure was followed in which the wireless interface was completely reset before applying the new settings, as a precaution against the unstable nature of the ath5k driver, as our experience has shown. We generated UDP traffic both on the interfering link and the link under test, to avoid the flow control mechanism of TCP and thus obtain results for the maximum goodput at the receiver.
EXPERIMENTS' SETUP

We set up just two links to realize the scenarios of Fig. 2. One is the test link (link, Fig. 3a), and the second is the interference link (interferer, Fig. 3a), to be tuned to a channel neighboring
the one used first. With these two links we were able to generate the topologies of Fig. 2 and conduct two experiments. To avoid confusion, for both the link and the interferer we use the terms source and destination for the nodes that produce and consume the iperf traffic, respectively. Note that the interferer was made completely unaware of the link by proper power assignment at the link's transmitter, and by the losses and isolation along the paths leading to both receivers of the interferer.

First we tested the ACI effect on the data reception mechanism. To do this, we injected the traffic from the interferer's sender into the receiving connector of the link destination. Using the values of Table 1, we calculated the transmission power required for the interference and the attenuator values needed to bring the SINR at the receiver below threshold. In the second experiment set, the effect of ACI on the CCA mechanism was examined. For this, the testbed interconnection was slightly altered so that the interferer's sender was coupled with the receiver of the link's sender. The values for the attenuators were again calculated using Table 1, taking into account the CCA requirements.

Figure 2. The SINR effect on packet reception: a) Rx2 will not be able to correctly decode the data transmitted by Tx2 due to high interference from the nearby channel transmission of Tx1; b) Tx2 may falsely report the channel as busy if channel Tx2 → Rx2 is adjacent to channel Tx1 → Rx1.
HARDWARE, SOFTWARE, AND 802.11 ISSUES

Transmission Power Instability — A major aspect of the interference mechanism is the initial transmission power, which together with the path losses determines the received interference power. Unfortunately, early experiments showed that power control in the ath5k driver is not yet stable enough; for given power settings the actual output power depended on the data rate of the transmitted data, in a way that appeared to be arbitrary and not due to an expected power cutoff at higher rates. To deal with this instability,
at each data rate setting we measured the received power at a fixed point of our testbed during a calibration run, and based on the measured value we compensated accordingly by adjusting the attenuators.

Figure 3. a) The reception mechanism testbed layout, schematic representation; b) the actual testbed of the experiments.

In order to verify the theoretical assumptions presented earlier, we coupled the measured SINR with the achievable throughput. At each data rate the expected throughput is related to the SINR, since for each constellation and coding rate the BER is directly linked to the signal-to-noise ratio (SNR) [7]. With the use of the programmable attenuator we were able to control the signal attenuation per dB; therefore, we obtained measurements that cover in detail a wide SNR space for each data rate.

Antenna Isolation Instability — Another major problem in designing and conducting the experiments was the separation of transmission and reception between the two antenna connectors on the wireless card. Unfortunately, disabling the antenna diversity option in the ath5k driver was not enough. The result was some sparse transmissions from the antenna connector designated for reception, reducing the accuracy of the measurements. The problem was solved by making the appropriate corrections to the antenna connector handling in the ath5k driver source code.

Another interesting observation was that the ath5k/Atheros chipset combination exhibited a periodic 3 s timeout during transmission. It was easily observed as a gap in the channel utilization graph during heavy traffic generation. This behavior can be attributed to the periodic recalibration of the radio frequency (RF) front-end that the wireless cards perform. The result was a lack of interference during that period, giving a chance for unhindered communication on the link.

The Role of Channel Utilization — One key aspect of the interference generated by 802.11 devices is that it is not constant in time. It follows the timings of the 802.11 DCF, and can be con-
sidered noise that is present only while the interfering transmitter is active. Therefore, its effect will depend on the utilization of the interfering link. As seen in [8], the utilization of the wireless medium is inversely proportional to the data rate being used for a given achievable throughput. Quite an interesting observation is that even at the maximum achievable throughput, the utilization of the 54 Mb/s data rate is far lower than that of the 6 Mb/s rate. Therefore, one can expect that a 6 Mb/s interfering sender running at full throughput will be more harmful than a full-throughput 54 Mb/s interferer. With the interface connectivity explained earlier, we managed to produce an 802.11a jammer that does not sense the channel prior to transmitting, and therefore can transmit as frequently as a single-user DCF allows, thus maximizing the utilization of the medium for its respective data rate.
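A simplified airtime calculation in the spirit of [8] illustrates this inverse relation. The sketch below is ours: it uses the standard 802.11a OFDM timing constants but deliberately ignores backoff, MAC headers, and ACKs, so the percentages are indicative only.

```python
import math

# Standard 802.11a OFDM timing constants.
SLOT_US, SIFS_US = 9.0, 16.0
DIFS_US = SIFS_US + 2 * SLOT_US   # 34 us
PLCP_US = 20.0                    # preamble (16 us) + SIGNAL field (4 us)
SYMBOL_US = 4.0

def frame_airtime_us(payload_bytes, rate_mbps):
    """Simplified frame airtime: DIFS + PLCP + OFDM data symbols, counting
    the SERVICE (16 bits) and tail (6 bits); MAC/ACK overhead and random
    backoff are deliberately omitted for clarity."""
    bits = 16 + 8 * payload_bytes + 6
    n_symbols = math.ceil(bits / (rate_mbps * SYMBOL_US))  # bits per symbol
    return DIFS_US + PLCP_US + n_symbols * SYMBOL_US

# Medium utilization needed to carry 5 Mb/s of 1470-byte packets:
for rate in (6, 54):
    per_pkt_us = frame_airtime_us(1470, rate)
    pkts_per_s = 5e6 / (1470 * 8)
    busy = 100 * pkts_per_s * per_pkt_us / 1e6
    print(f"{rate} Mb/s: medium busy about {busy:.0f}% of the time")
# Roughly 86% busy at 6 Mb/s versus roughly 12% at 54 Mb/s for the same load.
```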
EXPERIMENTAL RESULTS

Packet Capture — The rate in the link under investigation was always fixed, having disabled the automatic rate selection mechanism. For each rate we increase the attenuation level at the programmable attenuator X (Fig. 3a) by one dB per measurement run. Essentially, we decrease the SINR by one dB in each step and record the average throughput over a 3 min measurement run. For each data rate we also take a signal strength measurement at a reference attenuation value, in order to obtain the differences in transmission power between the rates. These differences are used to compensate the attenuation levels so that the throughput results are directly comparable. The results are presented in Fig. 4. It is obvious that the throughput curves closely follow the expected SINR-to-throughput degradation (e.g., as computed in [9]).

Figure 4. Results of the packet capture experiment: throughput (Mb/s) versus attenuation (dB) for data rates of 6 to 54 Mb/s.

Interferer's Rate Effect — As already stated in the previous section, all the experiments were conducted with fixed rate and given utilization at
the interferer. In this experiment we investigated whether the data rate of the interferer has any impact on the interference experienced by the receiver. In order to have comparable results for each data rate, we adjusted the packet size to keep the utilization fixed. The results revealed that the data rate of the interferer has a strong impact, with increasing intensity at higher data rates. The reasons may be the constellation, which becomes denser at higher rates, and the transmission-idle time distribution. Since the ath5k driver always keeps the basic rate at 6 Mb/s, to verify the above assumption we also used a Cisco 1240 AP on which different basic rates (6, 12, and 24 Mb/s) can be defined. With the Cisco AP we saw similar behavior, but with an even greater degradation in throughput. Although all the parameters were the same and the devices were expected to behave the same, we noticed that different devices do have some minor differences that result in different ACI. The differences can be attributed to protocol timing parameters, as the Cisco AP consistently achieves higher throughput than the Atheros/ath5k in an interference-free environment.

In order to further investigate the transmitting-idling time distribution theory, we used a series of packet sizes (1–1472 bytes) for the throughput measurement. We took a baseline measurement in the absence of ACI and then produced ACI, keeping the same utilization with three different data rates (6, 24, 54 Mb/s). At 6 Mb/s we had a throughput degradation of 55–58 percent of the baseline, at 24 Mb/s 63–67 percent, and at 54 Mb/s about 80–91 percent, with the greater degradation at larger packet sizes. This result verifies that the mechanism behind the rate effect of the ACI is the distribution of transmission and idle times: the long-term medium utilization may be constant, but the interleaved transmitting-idle periods are denser at higher data rates.

CCA — In this experiment we set channel 60 for the test link and performed a baseline measurement of the achieved throughput using a 1000-byte UDP payload, without any interference and for all possible data rates (Table 2, column 1). The values of the attenuators were properly calculated so that an interferer with output power 0 dBm tuned to the adjacent channel would trigger false positives in the CCA mechanism of the test link sender. A second series (Table 2, column 2) was recorded with the interferer on the same channel as the test link, where the CCA mechanism is expected to be triggered; this marks the results for a collocated and non-contending transmitter on the same channel as the link. Tuning the interferer to channel 56 resulted in a throughput loss ranging from 55 percent at 6 Mb/s to 85 percent at 54 Mb/s, which of course is due to the busy medium state that the transmitter frequently senses. As is obvious from Table 2, for the same transmission power level two channels away (channel 52), the CCA mechanism is not affected. In Table 1 we observe that the difference in power leakage between adjacent and next adjacent channels is 18 dB. With that in mind, we raised
the transmission power by 18 dB and observed that the CCA mechanism was again triggered, verifying once more our model of Fig. 1. The similar results in all the columns where the CCA was triggered indicate the binary nature of the mechanism: if the received power exceeds the threshold, regardless of the channel distance, the medium is sensed as busy.

Tx rate (Mb/s) | Baseline | Interferer at ch. 60 | ch. 56 | ch. 52 | ch. 52 (+18 dB)
6 | 4.86 | 2.21 | 2.54 | 4.94 | 2.23
9 | 6.92 | 3.03 | 3.10 | 7.91 | 2.32
12 | 8.92 | 3.12 | 2.67 | 9.01 | 2.12
18 | 11.81 | 3.15 | 2.71 | 11.72 | 2.29
24 | 14.30 | 3.20 | 2.78 | 14.10 | 2.46
36 | 18.11 | 3.40 | 2.81 | 18.21 | 2.65
48 | 21.13 | 3.00 | 2.82 | 21.19 | 2.33
54 | 23.11 | 3.26 | 2.95 | 22.10 | 2.71

Table 2. ACI effect on throughput (link at channel 60), in Mb/s, due to CCA false positives.
CONCLUSIONS

Despite the general belief that 802.11a is free of ACI due to its use of non-overlapping channels, we have shown that the need for careful channel selection is also present in the 5 GHz band. Through the use of the emulated wireless medium, we have isolated the affected 802.11 mechanisms and quantified the throughput degradation due to ACI. The two main mechanisms affected are data reception and clear channel assessment. In the first case the SINR is degraded, making reception impossible,
and in the second case the transmitter stalls its data as it incorrectly senses the medium as busy. Nevertheless, given the facts identified and justified in this article, the large number of channels available in 802.11a provides both the opportunity and the motivation for meticulous channel selection in order to achieve high throughput.
ACKNOWLEDGMENTS

This work was supported by the General Secretariat for Research and Technology, Greece, through project 05-AKMON-80, and by the European Commission in the 7th Framework Programme through project EU-MESH (Enhanced, Ubiquitous, and Dependable Broadband Access using MESH Networks), ICT-215320, http://www.eu-mesh.eu. The authors acknowledge the help of Mr. Nick Kossifidis on the testbed setup and the modifications of the ath5k code.
REFERENCES

[1] A. Mishra et al., "Partially Overlapped Channels Not Considered Harmful," SIGMetrics/Performance '06, Saint Malo, France, June 2006.
[2] IEEE 802.11a, "Supplement to IEEE 802.11 Standard — Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications: High-Speed Physical Layer in the 5 GHz Band," Sept. 1999.
[3] C. M. Cheng et al., "Adjacent Channel Interference in Dual-Radio 802.11 Nodes and Its Impact on Multihop Networking," IEEE GLOBECOM '06, San Francisco, CA, Nov. 2006.
[4] V. Angelakis et al., "Adjacent Channel Interference in 802.11a: Modeling and Testbed Validation," 2008 IEEE Radio Wireless Symp., Orlando, FL, Jan. 2008.
[5] V. Angelakis et al., "The Effect of Using Directional Antennas on Adjacent Channel Interference in 802.11a: Modeling and Experience with an Outdoors Testbed," WiNMee 2008, Berlin, Germany, Mar. 2008.
[6] IEEE 802.11, "Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications," Aug. 1999.
[7] J. Proakis, Digital Communications, 4th ed., McGraw-Hill, 2001.
[8] J. Jun, P. Peddabachagari, and M. Sichitiu, "Theoretical Maximum Throughput of IEEE 802.11 and Its Applications," IEEE NCA '03, Cambridge, MA, Apr. 2003.
[9] D. Qiao, S. Choi, and K. G. Shin, "Goodput Analysis and Link Adaptation for IEEE 802.11a Wireless LANs," IEEE Trans. Mobile Comp., vol. 1, no. 4, Oct.–Dec. 2002, pp. 278–92.
ADDITIONAL READING

[1] J. Robinson et al., "Experimenting with a Multi-Radio Mesh Networking Testbed," 1st WiNMee '05, Italy, Apr. 2005.
BIOGRAPHIES

VANGELIS ANGELAKIS [S‘05, M‘09] (
[email protected]) is a postdoctoral research fellow at the Mobile Telecommunications Group of the Department of Science and Technology, University of Linköping, Sweden. He received his B.Sc., M.Sc., and Ph.D. degrees from the Department of Computer Science at the University of Crete, Greece, in 2001, 2004, and 2008, respectively. In the summer of 2005 he was a visiting researcher at the Institute of Systems Research of the University of Maryland at College Park. In 2004 he began working as a research assistant and postdoctoral research fellow with the Telecommunications and Networks Laboratory at FORTH-ICS. His research interests include wireless network planning and resource allocation optimization and management. STEFANOS PAPADAKIS [S‘07, M‘10] (
[email protected]) is a research and development engineer at the Institute of Computer Science of FORTH. He received his degree in physics (2001), and his M.Sc. (2004) and Ph.D. (2009) degrees in computer science from the University of Crete. Since 2001 he has been working as a research assistant in the Telecommunications and Networks Laboratory at FORTH-ICS. His research interests include position location techniques in wireless networks, radio propagation modeling, and wireless network planning. VASILIOS A. SIRIS [M‘98] (
[email protected]) is an assistant professor in the Department of Informatics, Athens University of Economics and Business, and a research associate at the Institute of Computer Science of FORTH. He received his degree in physics (1990) from the National and Kapodistrian University of Athens, his M.S. (1992) in computer science from Northeastern University, Boston, Massachusetts, and his Ph.D. (1998) in computer science from the University of Crete. In spring 2001 he was a visiting researcher at the Statistical Laboratory of the University of Cambridge, and in summer 2001 and 2006 he was a research fellow at the research laboratories of British Telecommunications (BT), United Kingdom. His research interests include resource management in wired and wireless networks, traffic measurement, and analysis for QoS monitoring and anomaly detection.

APOSTOLOS TRAGANITIS joined FORTH-ICS in 1988 and has since coordinated and participated in a number of EU-funded projects in the communications and health care sectors. He is head of the Telecommunications and Networks Laboratory and a professor in the Department of Computer Science, University of Crete, where he also teaches and does research in the areas of digital communications and wireless networks. He holds M.Sc. and Ph.D. degrees from Princeton University, New Jersey.
NETWORK TESTING SERIES
Emerging Testing Trends and the Panlab Enabling Infrastructure

Sebastian Wahle, Fraunhofer FOKUS
Christos Tranoris and Spyros Denazis, University of Patras
Anastasius Gavras, Eurescom GmbH
Konstantinos Koutsopoulos, Bluechip Technologies SA
Thomas Magedanz, Technische Universität Berlin
Spyros Tompros, University of Patras
ABSTRACT

Networks, services, and business models need to be piloted in a large-scale environment that mimics as far as possible the Internet and its future version, although we certainly do not know what the latter will look like. In this highly demanding and dynamic context, new frameworks and (meta-)architectures for supporting experimentation are being proposed, aiming at discovering how existing and emerging testbeds and experimental resources can be put together in such a way that testing and experimentation may be carried out according to specific requirements (industry, academia, etc.). The work presented in this article addresses a number of the fundamental principles, and their corresponding technology implementations, that enable the provisioning of large-scale testbeds for testing and experimentation, as well as the deployment of future Internet platforms for piloting novel applications. The proposed concepts and architecture of Panlab, a pan-European testbed federation, aspire to facilitate and support user needs in these new areas.
INTRODUCTION

The Internet has grown dramatically in only a few decades, and the growth in many areas continues to be exponential. Such areas include Internet connectivity, web page impressions, social network users, video content bandwidth, as well as the value of transactions conducted by enterprises that use the Internet inherently for their business. This pace requires a number of evolutionary or even revolutionary steps concerning the supporting infrastructures, regardless of whether this affects the transport network architecture, the supporting platforms in the wider sense, or the processing of information. In most cases, emerging networking concepts, services, and business models need to be
piloted in a large-scale environment that mimics as far as possible the current Internet and its future version. We can expect that the future Internet, a very large-scale constellation of systems and processes, might not be too different from an aggregation of the best ideas currently in research on how to construct and operate networks, platforms, and systems. For this reason, large-scale testbeds are emerging that aim not only at providing an experimental environment in which such new ideas can be investigated but, most importantly, at providing a high degree of realism in supporting experiments executed under real-life conditions. To this end, testing, as a discipline that evaluates the design and implementation of components and systems in order to reveal their flaws, is undergoing profound transformations. Figure 1 shows the impact of increasing experiment realism on the associated cost [1]. In order to enable experimentation on a very large scale, federated systems are required. In addition, many experiments target the interaction of the intended end user with a service or device (a product), and the conditions under which users engage in a business relationship using the product. Therefore, if a business dimension is added to experimentation activities, we speak of a pilot. We define a pilot as the "execution of an experiment or test including business relationship assumptions, exemplifying a contemplated added value for the end user of a product." The Pan-European Laboratory (Panlab) concepts and architecture outlined in this article aspire to facilitate and support the needs for large-scale testing, experimentation, and piloting. In the following we provide a description of the most relevant initiatives in future Internet experimentation, followed by the key concepts and objectives of Panlab. We then present the Panlab architecture and its main components, which allow Panlab users to create a federated testing environment comprising widely dispersed testbeds.
Figure 1. New areas for testing and experimentation and their impact on the cost: cost grows with realism, from formal models through simulation and emulation ("in-lab" platforms, synthetic conditions) to real systems and, ultimately, homogeneous and heterogeneous federations of real systems, applications, and conditions over distributed resources.
EXPERIMENTAL FACILITIES AND RESOURCE FEDERATION

Currently, a number of future Internet initiatives such as FIRE (Europe), FIND/GENI (United States), AKARI (Japan), and their related research projects rely on federated experimental facilities/testbeds to test and validate their solutions. The U.S. GENI projects are organized in so-called spirals, where the findings of each spiral are assessed toward its end and define the requirements for the next spiral phase. The general high-level GENI architecture defines several entities and functions aligned with the Slice-Based Facility Architecture (SFA). In its second version, the SFA draft specification defines a control framework and federation architecture as a lowest common denominator for some of the GENI clusters (ProtoGENI, PlanetLab, etc.). In Europe, several projects (e.g., OneLab2, Panlab/PII, and others) are contributing to the Future Internet Research & Experimentation (FIRE) experimental facility. The number of FIRE facility projects was recently increased with new projects (BonFIRE, TEFIS, etc.) joining in. The FIRESTATION support action coordinates a FIRE Office as well as a FIRE Architecture Board that is foreseen to drive the specification of mechanisms for federating the diverse facilities. This is currently ongoing work. The main countries active in Asia are Japan, Korea, and China. Joint Asian activities are carried out under the Asia-Pacific Advanced Network (APAN) initiative as well as PlanetLab CJK (China, Japan, Korea), a joint PlanetLab cooperation by those three countries. A full overview of future Internet testbeds and their control frameworks was published earlier in [2].
THE PANLAB FRAMEWORK: ROLES, CONCEPTS, AND ARCHITECTURE

The Panlab framework and its corresponding architecture aim at provisioning and managing distributed testbeds, enabling experimenters to carry out various testing and experimentation activities. Such activities are supported by a Panlab office, a coordination center that facilitates interactions between Panlab's customers/experimenters and the different test sites. This gives rise to the following main roles in Panlab:
• A Panlab partner is a provider of infrastructure resources/testbed components (e.g., hardware, software, virtualized resources) necessary to support the testing services requested by the customer. Partners interact with the Panlab office to offer requested testbed resources to customers. Collectively, the Panlab partners represent the Panlab federation.
• A Panlab customer has access to specific infrastructure and functionality necessary to perform testing and experimentation according to its needs. Customers are typically interested in carrying out R&D activities using resources provided by Panlab partners. They are supported by the Panlab office in order to implement and evaluate new technologies, products, or services, drawing upon the large resource pool available through the entire Panlab federation.
• A Panlab office realizes a brokering service, serving Panlab partners and customers by coordinating legal and operational processes, the provisioning of the infrastructures and services to be used for testing and experimentation, and the interconnectivity of the various partner test sites and customers.

Entities taking on those roles interact with each other, forming various consecutive views (Fig. 2) that correspond to a series of mappings ranging from generic customer testing requirements to detailed provisioning and management operations. In Panlab, we have identified three distinct views that correspond to different levels of abstraction with respect to testing features. The different Panlab roles are shown in Fig. 3.

The customer view captures the testbed requirements of the user, namely the abstract design of the desired testbed: the virtual customer testbed (VCT). The design can be either specific or agnostic with regard to the physical location of the resources and how the underlying federation of testbeds will allocate and provision them. In this view, the customer will search and request available abstract resources across the Panlab federation. The customer view can be compiled in many ways and using various tools; for example, textually (by filling out forms), graphically (by using a graph tool), using a domain-specific language (DSL), or a combination of those. The main goal is to empower the customer to define the requirements for his VCT in a consistent and unambiguous way. For instance, the customer may explicitly select computing resources, the type of connectivity and networking, and specific services that must be installed and configured on each resource.
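Purely as an illustration of the information such a customer-view definition carries (this is not Teagle's actual syntax; all names below are invented), a textual VCT request might capture resources, configuration, and connectivity along these lines:

```python
# Hypothetical sketch of a textual customer-view VCT request. Field names
# and values are illustrative only, not Teagle's real format.
vct_request = {
    "name": "my-vct",
    "resources": [
        {"id": "node1", "type": "computing",
         "config": {"os": "linux", "ip": "10.0.0.1", "apps": ["web_server"]}},
        {"id": "node2", "type": "computing",
         "config": {"os": "linux", "ip": "10.0.0.2",
                    "apps": ["traffic_generator"]}},
    ],
    # Location-agnostic: the federation decides which testbeds provide them.
    "connectivity": [{"between": ["node1", "node2"], "qos": "best effort"}],
}
```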
Figure 2. The three views of a testbed.

In the federation view, a federated infrastructure is expressed that involves the Panlab partner testbeds that will participate in the creation of a VCT, along with the testbed resources to be provisioned. The federation framework maps customer requests onto this view and takes actions for the relevant orchestration of services. This view can be either specific or agnostic with regard to the internal infrastructure and topologies of the individual testbeds. The scope of the federation view covers the connectivity requirements across the various testbeds meeting the customer's requirements. Accordingly, provisioning requests are submitted to each participating testbed, for example, to create a virtual private network (VPN) between all involved test sites, to deploy virtualized resources, to install operating systems and applications, and to configure the entire environment. This view is more detailed than the customer view but less detailed than the testbed view. It is required on the federation platform level for brokering between customers and the testbeds.

Finally, the testbed view contains the actual realization and deployment of a VCT and its associated experimental resources across the selected testbeds. It meets the requirements of the federation view and exhibits all internal infrastructure elements (e.g., internal gateways, switches, routers, and computers) that will participate in the testing/experiment execution. It is more detailed than the federation view for each individual domain. However, on a domain/testbed level, information concerning the other domains is usually meaningless. In this sense, global information is lost (or at least meaningless) in the testbed view.

The aforementioned views are necessary to create a taxonomy of Panlab functionality and to orchestrate the selection and provisioning of Panlab resources across various testbeds. This orchestration is carried out through the Panlab office and its corresponding architectural elements, aiming at automating various Panlab-related operational processes [3]. Using the Teagle framework, Panlab customers may assemble a desired VCT and request its deployment. Teagle takes the customer request as well as testbed and resource availability as input to coordinate the provisioning of the desired environment. As Panlab-controlled resources reside in distributed testbeds, configuration operations are carried out by an architectural component (local to each testbed) called the Panlab testbed man-
ager (PTM), which interacts with Teagle to support management operations. Also, as the resources controlled by Teagle are highly heterogeneous components (e.g., hardware, software, abstract services), the Panlab framework introduces the concept of resource adaptors (RAs) that abstract management capabilities (reference point T2, Fig. 3), which are then offered through a common application programming interface (API) (reference point T1, Fig. 3). Finally, interconnection gateways (IGWs) interconnect the federated testbeds, and components inside the testbeds with remote peers that are part of the configuration, through automatic VPN (cross-site, reference point I1, Fig. 3) and VLAN (intra-site, reference point I2, Fig. 3) deployment.
TEAGLE

The Panlab architectural element Teagle allows browsing through the Panlab federation offerings, enables the definition of VCTs, and executes their provisioning. Teagle relies on the Panlab federation model and framework that handles all the technical, operational, and legal aspects of generic resource federation [4]. Currently, Teagle implements the following functions (Fig. 4):
• A model-based repository: collectively consists of several registries for users, resources, configurations, and so on.
• A creation environment (VCT tool): allows the setup and configuration of VCTs. The tool can make use of all available resources across the Panlab federation, but is restricted by policies that can be set on a global, per-domain, or per-resource level. The associated request processor exposes an API that is called by the VCT tool or other tools.
• An orchestration engine: generates an executable workflow for resource provisioning. The engine receives a VCT definition from the request processor.
• A Web portal: exposes search and configuration interfaces, as well as general information.
• A policy engine: allows the evaluation of policies that resource providers can define. The engine also allows for global federation policies.
• A gateway: handles bidirectional communication between domain managers (PTMs) and Teagle internal entities.
Figure 3. Panlab roles and architectural components.
Figure 4. Teagle architecture.

The repository holds data about available resource types and instances, and can be queried by other Teagle components such as the VCT tool. The Panlab customer can launch the VCT tool (a Java Web Start application; see Fig. 5) from the Teagle portal and use it to define a VCT. From a list of available resources, selected elements can be dropped, connected, and configured on the tool workbench. During the VCT design, the tool interacts with the policy engine to indicate impossible or forbidden testbed layouts or configurations. When the VCT design
has been finished, the tool stores the VCT definition in the repository, and the booking/scheduling procedure can be initiated via the VCT tool or the Teagle portal. Upon booking, the request processor retrieves the VCT definition from the repository, triggers policy evaluation, and sends it to the orchestration engine. The orchestration engine assembles a workflow, executes it, and sends individual provisioning (CRUD: create, read, update, delete) requests to the involved PTMs via the Teagle gateway on interface T1. The PTMs are
responsible for conducting the provisioning of resources according to the incoming requests. From the perspective of the Teagle platform, this operation is opaque and could be performed in a number of ways. However, the currently available implementation of a PTM uses resource adaptors as an abstraction layer for heterogeneous resources. The PTMs report on the status of resource operations (success/failure/configuration), and the responses are aggregated at the orchestration engine, stored in the repository, and finally presented to the user.

Figure 5. The Teagle VCT tool.

More details on specific components provided by the Teagle framework can be found in a dedicated Panlab Wiki [5] and the Teagle portal [6]. The portal also provides tutorial videos that demonstrate the usage of Teagle components such as the VCT tool. In addition, Panlab regularly offers training events to give further insights and allow for hands-on experience. The next section focuses on the resource description mechanism used by Teagle.
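First, though, the provisioning fan-out just described can be pictured with a toy sketch of our own. The class, method, and endpoint names below are invented, and the real T1 interface is a SOAP web service rather than the plain call shown here.

```python
# Toy sketch of the provisioning fan-out: the orchestration engine walks a
# VCT definition, issues a CRUD request to the PTM of each owning domain,
# and aggregates the responses. All names here are hypothetical.

class PTMClient:
    def __init__(self, endpoint):
        self.endpoint = endpoint

    def create(self, resource_type, config):
        # In the real system this is a SOAP call over reference point T1.
        print(f"T1 -> {self.endpoint}: CREATE {resource_type} {config}")
        return {"status": "success", "instance": f"{resource_type}-1"}

def provision_vct(vct, ptm_registry):
    results = []
    for res in vct["resources"]:
        ptm = ptm_registry[res["testbed"]]     # PTM of the owning domain
        results.append(ptm.create(res["type"], res.get("config", {})))
    return results                             # aggregated, then stored

ptms = {"testbedA": PTMClient("https://ptm.a.example/t1")}
vct = {"resources": [{"testbed": "testbedA", "type": "vm",
                      "config": {"os": "linux"}}]}
print(provision_vct(vct, ptms))
```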
THE TEAGLE RESOURCE REGISTRY AND RESOURCE DESCRIPTION MODEL

Given that the federation system needs to deal with a great number of highly heterogeneous and a priori unknown resources, the model used to structure and describe the resources needed to be extensible. An existing information model was extended for our purposes to represent characteristics of the resources and their relationships in a common form, independent of a specific repository implementation. Resources can be modeled as concrete physical entities
such as a physical machine, or as abstract logical resources such as an administrative domain. The DEN-ng information model [7], which is rooted in the area of network management and autonomic networking, was used and extended to allow the description of resources as managed entities, their life cycle, as well as associated policies. In terms of DEN-ng, resource entities provide a service and have a certain configuration attached that can be defined and altered using the federation tools exposed to the experimenter via Teagle. Resources can exist as physical or logical resources, and resource providers can define a list of resource instances as specific subtypes based on the model to represent their federation offerings. The repository implementation has been realized as a number of applications running as contexts on an application server. Each application has its own data storage facility and exposes an HTTP-based RESTful interface with a number of Representational State Transfer (REST) resources. The repository only deals with storage and retrieval of data on behalf of client applications. This allows the architectural entities that collectively represent the Teagle framework to evolve independently of the repository while relying on a common data model. Figure 6 shows a snapshot of the resource management part of the entire model on which the Teagle repository is based. We differentiate between resource types (ResourceSpec class) and resource instances (ResourceInstance class), where every instance is of a certain type. This allows the instantiation of multiple resource instances of the same type at a given Panlab partner testbed (e.g., virtual machines), where the different instances can have different configurations. Resources can also be offered in a predefined configuration (pre-existing instances).
Figure 6. Resource configuration model.
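The type/instance split of Fig. 6 can be mirrored roughly as follows. This is a sketch of the modeling idea only, not the repository's actual classes; the field names are illustrative.

```python
from dataclasses import dataclass, field

# Sketch of the Fig. 6 split: a ResourceSpec describes a resource *type*
# once; many ResourceInstances of that type can exist, each with its own
# configuration parameters. Names are illustrative, not the real model.

@dataclass
class ConfigParam:
    name: str
    param_type: str
    default_value: str = ""

@dataclass
class ResourceSpec:                  # the resource type
    name: str
    config_params: list = field(default_factory=list)

@dataclass
class ResourceInstance:              # a concrete, configured resource
    spec: ResourceSpec
    shared: bool = False
    configuration: dict = field(default_factory=dict)

vm_spec = ResourceSpec("vm", [ConfigParam("memory_mb", "Integer", "512")])
vm1 = ResourceInstance(vm_spec, configuration={"memory_mb": 1024})
vm2 = ResourceInstance(vm_spec, configuration={"memory_mb": 256})
# Two instances of the same type, at the same testbed, differently configured.
```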
THE PANLAB TESTBED MANAGER

Each Panlab partner test site exposes, via the Panlab testbed manager (PTM), a domain manager interface T1 (Fig. 3) that is currently implemented as a SOAP web service interface. The PTM exposes resources as services to the federation and provides the mapping from federation-level commands to resource-specific communication using resource adaptors (RAs). The PTM resource adaptation layer (RAL) implements mechanisms that aid resource discovery and monitoring by generating specific change events to communicate the resource status as well as control information to the PTM core. RAs are registered with the runtime framework of the PTM and make their presence known to the rest of the platform, so that the responsible PTM modules can keep track of the corresponding events regarding each specific resource representation and, consequently, the actual resource. RAs plug into a PTM the same way device drivers plug into an operating system to control specific devices. Since the RAs are the only reference toward an actual resource, several events regarding the status of both adaptor and resource can be reported:
• Pending acknowledgments with respect to requested configuration actions
• Failures in the operation of a resource adaptor
• Communication loss or failure with resources
• Resource failures
• Resource reset
• Upgrades (resource or adaptor)
• Busy states

The operation of RAs, and of the RAL in general, is characterized by the fact that the PTM has a common view of all the resources for which it is responsible. No resource-specific communication protocols are exposed to or need to be dealt with by the PTM modules. Every resource is
integrated in the PTM runtime environment in an agnostic manner with respect to what the resource may be from the point of view of networking procedures and topology. The PTM is deployed as installable components in a GlassFish ESB v2.1 application server. It consists of two JBI service assemblies that implement the core logic of the PTM, and a Web application that is the administration interface of the PTM. RAs are implemented in Java and are installed in the PTM as Java/OSGi bundles. To ease the development of RAs, we defined the Resource Adapter Description Language (RADL). RADL is a concrete textual syntax for describing an RA, based on an abstract syntax defined in a meta-model. RADL's textual syntax allows an easy description of the resource configuration parameters and of how the RA should react upon receiving CRUD requests. The RADL support tools generate Java skeleton code ready to be plugged into the PTM.
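The device-driver analogy amounts to a uniform CRUD contract behind which each adaptor hides its resource-specific protocol. The sketch below is ours and uses invented names; real RAs are Java/OSGi bundles generated from RADL skeletons, not Python classes.

```python
from abc import ABC, abstractmethod

# Illustrative sketch of the RA contract: the PTM core sees only uniform
# CRUD operations plus status events; each adaptor hides the protocol of
# its specific resource. All names here are hypothetical.

class ResourceAdaptor(ABC):
    @abstractmethod
    def create(self, config): ...
    @abstractmethod
    def read(self, instance_id): ...
    @abstractmethod
    def update(self, instance_id, config): ...
    @abstractmethod
    def delete(self, instance_id): ...

    def emit_event(self, event):
        # e.g. "pending_ack", "resource_failure", "busy", "upgrade";
        # consumed by the PTM modules tracking this adaptor.
        print("event:", event)

class XenVmAdaptor(ResourceAdaptor):
    def create(self, config):
        # Would talk to the hypervisor here; the PTM never sees that protocol.
        return "vm-42"
    def read(self, instance_id):
        return {"id": instance_id, "state": "running"}
    def update(self, instance_id, config):
        self.emit_event("pending_ack")
    def delete(self, instance_id):
        pass

ra = XenVmAdaptor()
print(ra.create({"memory_mb": 512}))   # -> "vm-42"
```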
THE INTERCONNECTION GATEWAY

The IGW, as shown in Fig. 3, is responsible for providing and controlling connectivity between Panlab partner test sites. The federated distributed resources are interconnected by means of meshed IGWs that provide a separate layer 2 tunnel per VCT. This allows building large-scale virtual overlay networks dynamically over the public Internet. IGWs are ingress-egress points to each site for intra-VCT communication, via one automatically configured multi-endpoint tunnel per virtual testbed. An IGW is able to act as a dynamically configurable hub and allows isolation of local testbed devices. One VPN per VCT instance is configured between all neighboring IGWs and enforces isolation of local resources through dynamically configured collision domain isolation. A collision domain is an isolated network segment on a partner's physical test site where data packets are sent on a shared channel.
IGWs automatically establish connections to other peer IGWs. An important design criterion was to make them as self-configuring as possible. For the meshing of all IGWs that are part of a specific VCT, a stateless, low-overhead tunneling was chosen. The IGW can be exposed as any other resource in Teagle tools like the VCT tool, or not, depending on the level of configuration granularity requested by the experimenter. In the IGW's internal connection state machine, active VCTs are represented as lists of tuples consisting of the other IGW's external address and the collision domain(s) associated with the specific VCT behind it. Each interconnection state can be expanded by adding more interconnections. New interconnection states do not interfere with existing states: they use the same VPN tunnel but are separated during the routing and filtering process. This guarantees on-demand automatic resource interconnection across Panlab sites without using proprietary inter-IGW protocols.

An experimenter is able to connect single devices (e.g., test clients) to his/her VCT using a customer dial-in feature (reference point U2, Fig. 3). This Layer 2 Tunneling Protocol (L2TP)-based on-demand tunnel delivers direct access to a specific VCT, as if the experimenter were working within a partner domain that had a local IGW.

The main functionality provided by an IGW is to interconnect, keep, and protect the mapping of local collision domain communication to external VPN interconnection. Therefore, it functions like an IP-based trunking device for testbed components, communicating on data planes separated by collision domains on the internal side and by VPN-based access on the external side. Routing of data plane packets between these secure channels is done by an IGW internal module, the interconnection engine. If requested via the domain manager, quality of service (QoS) rules may be enforced on routing decisions, for instance limiting connections between test sites to a certain maximum throughput rate. In front of and behind the interconnection engine, the secure channels are de-encapsulated/decrypted and filtered by a stateful IP-based firewall. This makes sure that access to specific resources can be restricted as defined by the Panlab partner as a resource provider. On the external side of the IGW, there may also be generic collision domains bridged to testbeds that are not publicly accessible. In this way it is possible to perform real QoS reservations over, for example, asynchronous transfer mode (ATM) or fiber optic links. As collision domain channel isolation is required for connecting the federated resources, IEEE 802.1Q VLAN-based systems have been added as a mandatory requirement and prerequisite for running separate experiments in parallel. Since several VLANs may be used as a shared medium to connect multiple resources in a single test site, the experimenter has full control over the network topology to be deployed. A virtualized host resource may act as a software router within a VCT. However, this flexibility comes with significant complexity in configuring
IEEE Communications Magazine • March 2011
the network layer. On one hand, the VCT tool provides the means to abstract from such complexity; on the other hand, some Panlab customers require this level of configuration granularity. The Panlab mechanisms allow satisfying such diverse user requirements.
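To illustrate the interconnection engine's routing and filtering decision described above, the following minimal sketch (hypothetical names, not Panlab code) combines the stateful firewall verdict with an optional per-VCT throughput cap of the kind installed via the domain manager:

import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the interconnection engine's per-packet decision:
// the stateful firewall filters first, then an optional per-VCT QoS rule
// caps inter-site throughput before the packet is forwarded between the
// internal collision domain and the external VPN channel.
public class InterconnectionEngine {

    // QoS rules installed via the domain manager: VCT id -> max throughput
    private final Map<String, Long> maxThroughputBps = new HashMap<>();

    /** Install a QoS rule, e.g., cap a VCT's traffic between test sites. */
    public void setThroughputCap(String vctId, long bitsPerSecond) {
        maxThroughputBps.put(vctId, bitsPerSecond);
    }

    /** Decide whether a packet of the given VCT may be forwarded. */
    public boolean forward(String vctId, boolean firewallAllows, long currentRateBps) {
        if (!firewallAllows) return false;           // provider-defined restriction
        Long cap = maxThroughputBps.get(vctId);
        return cap == null || currentRateBps <= cap; // optional QoS enforcement
    }
}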
CASE STUDY: A SETUP FOR TESTING ADAPTIVE ADMISSION CONTROL AND RESOURCE ALLOCATION ALGORITHMS

In this section we demonstrate how the Panlab mechanisms and prototypes can be used, taking as an example an experiment setup targeting adaptive admission control and resource allocation algorithms. The experimenter sets up the desired VCT, shown in Fig. 7, using resources from Panlab-controlled test sites.

The setup contains RUBiS, an auction site prototype modeled after eBay.com that is used to evaluate application design patterns and application server performance scalability. It provides a virtualized distributed application consisting of three components: an application server, a database, and a workload generator that produces the appropriate requests. It is deployed in a virtualized environment using XEN server technology, which allows system resources such as CPU usage and memory to be regulated.

The adaptive admission control and resource allocation algorithm, the system under test (SUT) in this setup, is a proxy-like component for admission control that uses XEN server technology to regulate CPU usage. RUBiS clients produce requests that push the RUBiS server-side components to their limits, and the algorithm is evaluated against network metrics such as round-trip time and throughput (a rough illustrative sketch of such a controller follows the equipment list below).

The experimenter uses the VCT tool to configure all resources (e.g., IP addresses, bindings, maximum client requests, VPN access), including the RUBiS HTTP traffic generators, web servers, and XEN machines. The screenshot in Fig. 5 shows a simplified VCT setup for this experiment at the highest configuration granularity. The experimenter also needs to provide the algorithm under test by logging into the proxy unit and installing it, using an SSH account and a private key to access the VCT through a VPN.

The equipment/resources used in this setup are:
• XEN servers that host virtual machines with the RUBiS-based workload generator resources
• A virtual machine for hosting the algorithm unit, based on a Linux image capable of compiling C and Java software
• XEN servers that host virtual machines with the RUBiS web application and database installed
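The sketch below gives a rough impression of what such a proxy-style admission controller could look like. It is our own illustration under simplifying assumptions, not the algorithm actually evaluated in the case study:

// Illustrative sketch only; the actual SUT algorithm is supplied by the
// experimenter. Requests are admitted while the measured CPU usage of the
// virtualized RUBiS server stays below a threshold that adapts to the
// observed round-trip time.
public class AdmissionController {

    private double cpuThreshold = 0.8; // fraction of the XEN CPU share

    /** Decide whether to forward a client request to the RUBiS server. */
    public boolean admit(double measuredCpuUsage) {
        return measuredCpuUsage < cpuThreshold;
    }

    /** Adapt the threshold from a measured network metric. */
    public void adapt(double avgRttMs, double targetRttMs) {
        if (avgRttMs > targetRttMs) {
            cpuThreshold = Math.max(0.1, cpuThreshold - 0.05);  // shed load
        } else {
            cpuThreshold = Math.min(0.95, cpuThreshold + 0.05); // admit more
        }
    }
}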
[Figure 7 depicts the experimenter (a Panlab customer) connected via the Internet to three test sites providing Panlab-controlled resources (Testbeds A, B, and C), which together host the RUBiS traffic generators (rubis_cl), web applications (rubis_app), databases (rubis_db), and the algorithm under test (rubis_proxy).]
Figure 7. The setup of the case study and participating testbeds.
While running the experiment, the experimenter needs to reconfigure resources and obtain monitoring data from them. To achieve this, our case study scenario uses the Federation Computing Interface (FCI) API [8] provided by Panlab. The FCI can be used for accessing federated resources and can be embedded into an application/SUT in order to gain control of the requested resources during the experiment. Via a model-to-code transformation using the VCT definition stored in the Panlab repository, VCT resources can be exported as Java classes that allow the experimenter to work with the resources programmatically as objects. This lets the user application/SUT access the testbed resources during execution of the experiment in order to manage and configure various environment parameters or obtain resource status information.
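The snippet below sketches how an experimenter might work with such generated classes. All type and method names are invented for illustration (the actual API is defined by the FCI [8]), and minimal stubs stand in for the generated code so the sketch is self-contained:

// Hypothetical stubs standing in for classes generated from the VCT
// definition stored in the Panlab repository (names invented here).
interface TrafficGenerator { void setMaxClientRequests(int n); }
interface WebApplication   { double getCpuUsage(); }

interface MyVct {
    TrafficGenerator rubisCl();  // maps to resource "rubis_cl" in the VCT
    WebApplication rubisApp();   // maps to resource "rubis_app" in the VCT
}

public class ExperimentRunner {
    /** Reconfigure a resource and read monitoring data during the run. */
    static void run(MyVct vct) {
        vct.rubisCl().setMaxClientRequests(500);   // reconfigure on the fly
        double cpu = vct.rubisApp().getCpuUsage(); // resource status info
        System.out.println("rubis_app CPU usage: " + cpu);
    }
}

In this style the SUT treats testbed resources as ordinary Java objects, so reconfiguration and monitoring calls can be interleaved with the experiment logic itself.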
CONCLUSION

One possible evolution path of the Internet is that it becomes based on a core platform concept that prescribes a number of fundamental principles for integration. This core platform might enumerate generic functions that support a diverse set of application areas and allow the integration of optional application-specific functions. From the set of generic and optional functions, and following the fundamental principles, it will be possible to deploy platform instances that best serve target application areas. In this scenario, it will be crucial to be able to flexibly compose and evolve platform instances. Flexible composition, instantiation, and federation require the functions to be described in terms of their functional and non-functional aspects, facilitating dynamic discovery and composition into higher-layer applications. The testbeds currently used to validate emerging concepts and technologies might eventually evolve into the future Internet themselves. As Panlab provides an extensive platform for flexibly composing abstracted resources to be consumed as services, it positions itself on the
Internet evolution path outlined above. The core part of the Panlab federation platform is the Teagle framework. We will continue to extend Panlab in a user-demand-driven way, targeting the provisioning of large-scale testbeds for testing and experimentation as well as the deployment of future Internet platforms for piloting novel applications. Interoperability with related initiatives and platforms has already been addressed and will be extended in the near future.
ACKNOWLEDGMENT

Parts of this work received funding from the European Commission's Sixth Framework Programme under grant agreement no. 224119. We would like to thank the Panlab and PII consortia for their good cooperation; Dimitri Papadimitriou, Alcatel-Lucent Bell Labs, for his contributions to the FIRE white paper that served as input for our Fig. 1; and Prof. Dr. Paul Müller, TU Kaiserslautern/G-Lab, for the discussions around federation.
REFERENCES
[1] A. Gavras, Ed., "Experimentally Driven Research White Paper," v. 1, Apr. 2010; http://www.ict-fireworks.eu/fileadmin/documents/Experimentally_driven_research_V1.pdf
[2] T. Magedanz and S. Wahle, "Control Framework Design for Future Internet Testbeds," Elektrotechnik und Informationstechnik, vol. 126, no. 7, July 2009, pp. 274–79.
[3] S. Wahle et al., "Technical Infrastructure for a Pan-European Federation of Testbeds," Proc. TridentCom '09, Apr. 2009, pp. 1–8.
[4] S. Wahle, T. Magedanz, and A. Gavras, "Conceptual Design and Use Cases for a FIRE Resource Federation Framework," Towards the Future Internet — Emerging Trends from European Research, IOS Press, Apr. 2010, pp. 51–62.
[5] Panlab; http://www.panlab.net
[6] Teagle Portal; http://www.fire-teagle.org
[7] J. Strassner, Policy-Based Network Management, Morgan Kaufmann, 2003.
[8] FCI; http://trac.panlab.net/trac/wiki/FCI
BIOGRAPHIES

SEBASTIAN WAHLE ([email protected]) leads the Evolving Infrastructure and Services group at NGNI within the Fraunhofer FOKUS institute in Berlin. The group is active in a number of national and international R&D projects in the future Internet field and supports the commercial Fraunhofer NGN testbed deployments at customer premises worldwide. He received a Diploma-Engineer degree in industrial engineering and management from Technische Universität Berlin. His personal research interests include resource federation frameworks and service-oriented architectures.

CHRISTOS TRANORIS has held a Ph.D. since 2006 from the Electrical and Computer Engineering Department of the University of Patras in the area of software processes for the modeling and design of industrial applications. He is currently a member of the Network Architectures and Management Group of the same department, which carries out research in the areas of the future Internet, peer-to-peer networking, and network management, and currently participates in related EU projects.

SPYROS DENAZIS received his B.Sc. in mathematics from the University of Ioannina, Greece, in 1987, and his Ph.D. in computer science from the University of Bradford, United Kingdom, in 1993. He worked in European industry for eight years and is now an assistant professor in the Department of Electrical and Computer Engineering, University of Patras, Greece. His current research includes P2P and the future Internet. He works in the PII, VITAL++, and AutoI EU projects, and has co-authored more than 40 papers.

ANASTASIUS GAVRAS has more than 20 years of professional experience in academic and industry research. He joined
Eurescom, the leading organization for managing collaborative R&D in telecommunications, more than 10 years ago as a program manager. His current interests are large-scale testbed federations for future Internet research and experimentation. He is actively involved in several future Internet initiatives and projects in Europe (FIA, FIRE, PII) and has co-authored several papers and articles in the area.

KONSTANTINOS KOUTSOPOULOS received his degree in electrical engineering and his Ph.D. in the field of personal and mobile telecommunications from the National Technical University of Athens. He has participated in various IST projects since 1998. He has experience in mobile communications, security, networking, and software development. His research interests include networking, embedded systems, security, and software techniques. He has been working for BCT since March 2006.

THOMAS MAGEDANZ is a full professor in the electrical engineering and computer sciences faculty at Technische Universität Berlin, Germany, where he has held the chair for next-generation networks (http://www.av.tu-berlin.de) since 2003. In addition, he is director of the Next Generation Network Infrastructure (NGNI) competence center at the Fraunhofer Institute FOKUS (http://www.fokus.fraunhofer.de/go/ngni).

SPYRIDON TOMBROS received his Ph.D. in broadband communications from the National Technical University of Athens and his Master's degree from the same faculty of the University of Patras. His research interests are in the field of protocols and physical communication system design for mobile, wireless, and home networks. He has many years of working experience with network test floors and test tool manufacturing. He has over 30 scientific publications and books.