IEEE Communications Magazine
A Publication of the IEEE Communications Society
www.comsoc.org
February 2011, Vol. 49, No. 2
Special Supplement: Passive Optical Networks
• Next-Generation Mobile Networks
• Synchronization over Next Generation Packet Networks
Free ComSoc Broadband Video Tutorial: See Page 9
Director of Magazines Andrzej Jajszczyk, AGH U. of Sci. & Tech. (Poland)
Editor-in-Chief: Steve Gorshe, PMC-Sierra, Inc. (USA)
Associate Editor-in-Chief: Sean Moore, Centripetal Networks (USA)
Senior Technical Editors: Tom Chen, Swansea University (UK); Nim Cheung, ASTRI (China); Nelson Fonseca, State Univ. of Campinas (Brazil); Torleiv Maseng, Norwegian Def. Res. Est. (Norway); Peter T. S. Yum, The Chinese U. Hong Kong (China)
Technical Editors: Sonia Aissa, Univ. of Quebec (Canada); Mohammed Atiquzzaman, U. of Oklahoma (USA); Paolo Bellavista, DEIS (Italy); Tee-Hiang Cheng, Nanyang Tech. U. (Rep. Singapore); Jacek Chrostowski, Scheelite Techn. LLC (USA); Sudhir S. Dixit, Nokia Siemens Networks (USA); Stefano Galli, Panasonic R&D Co. of America (USA); Joan Garcia-Haro, Poly. U. of Cartagena (Spain); Vimal K. Khanna, mCalibre Technologies (India); Janusz Konrad, Boston University (USA); Abbas Jamalipour, U. of Sydney (Australia); Deep Medhi, Univ. of Missouri-Kansas City (USA); Nader F. Mir, San Jose State Univ. (USA); Amitabh Mishra, Johns Hopkins University (USA); Sedat Ölçer, IBM (Switzerland); Glenn Parsons, Ericsson Canada (Canada); Harry Rudin, IBM Zurich Res. Lab. (Switzerland); Hady Salloum, Stevens Institute of Tech. (USA); Antonio Sánchez Esguevillas, Telefonica (Spain); Heinrich J. Stüttgen, NEC Europe Ltd. (Germany); Dan Keun Sung, Korea Adv. Inst. Sci. & Tech. (Korea); Danny Tsang, Hong Kong U. of Sci. & Tech. (Hong Kong)
Series Editors:
Ad Hoc and Sensor Networks: Edoardo Biagioni, U. of Hawaii, Manoa (USA); Silvia Giordano, Univ. of App. Sci. (Switzerland)
Automotive Networking and Applications: Wai Chen, Telcordia Technologies, Inc. (USA); Luca Delgrossi, Mercedes-Benz R&D N.A. (USA); Timo Kosch, BMW Group (Germany); Tadao Saito, University of Tokyo (Japan)
Consumer Communications and Networking: Madjid Merabti, Liverpool John Moores U. (UK); Mario Kolberg, University of Stirling (UK); Stan Moyer, Telcordia (USA)
Design & Implementation: Sean Moore, Avaya (USA); Salvatore Loreto, Ericsson Research (Finland)
Integrated Circuits for Communications: Charles Chien (USA); Zhiwei Xu, SST Communication Inc. (USA); Stephen Molloy, Qualcomm (USA)
Network and Service Management Series: George Pavlou, U. of Surrey (UK); Aiko Pras, U. of Twente (The Netherlands)
Network Testing Series: Yingdar Lin, National Chiao Tung University (Taiwan); Erica Johnson, University of New Hampshire (USA); Tom McBeath, Spirent Communications Inc. (USA); Eduardo Joo, Empirix Inc. (USA)
Topics in Optical Communications: Hideo Kuwahara, Fujitsu Laboratories, Ltd. (Japan); Osman Gebizlioglu, Telcordia Technologies (USA); John Spencer, Optelian (USA); Vijay Jain, Verizon (USA)
Topics in Radio Communications: Joseph B. Evans, U. of Kansas (USA); Zoran Zvonar, MediaTek (USA)
Standards: Yoichi Maeda, NTT Adv. Tech. Corp. (Japan); Mostafa Hashem Sherif, AT&T (USA)
Columns:
Book Reviews: Andrzej Jajszczyk, AGH U. of Sci. & Tech. (Poland)
History of Communications: Mischa Schwartz, Columbia U. (USA)
Regulatory and Policy Issues: J. Scott Marcus, WIK (Germany); Jon M. Peha, Carnegie Mellon U. (USA)
Technology Leaders' Forum: Steve Weinstein (USA)
Very Large Projects: Ken Young, Telcordia Technologies (USA)
Publications Staff: Joseph Milizzo, Assistant Publisher; Eric Levine, Associate Publisher; Susan Lange, Online Production Manager; Jennifer Porcello, Publications Specialist; Catherine Kemelmacher, Associate Editor
IEEE Communications Magazine, February 2011, Vol. 49, No. 2
www.comsoc.org/~ci

SPECIAL SUPPLEMENT: ADVANCES IN PASSIVE OPTICAL NETWORKS
GUEST EDITORS: MAHMOUD DANESHMAND, CHONGGANG WANG, AND WEI WEI

S12 GUEST EDITORIAL

S16 OPPORTUNITIES FOR NEXT-GENERATION OPTICAL ACCESS
Next-generation optical access technologies and architectures are evaluated based on operators' requirements. The study presented in this article compares different FTTH access network architectures.
DIRK BREUER, FRANK GEILHARDT, RALF HÜLSERMANN, MARIO KIND, CHRISTOPH LANGE, THOMAS MONATH, AND ERIK WEIS

S25 COST AND ENERGY CONSUMPTION ANALYSIS OF ADVANCED WDM-PONS
The authors compare several WDM-PON concepts, including hybrid WDM-PON with integrated per-wavelength multiple access, with regard to cost and energy consumption. They also show the impact and importance of generic next-generation bandwidth and reach requirements.
KLAUS GROBE, MARKUS ROPPELT, ACHIM AUTENRIETH, JÖRG-PETER ELBERS, AND MICHAEL EISELT

S33 TOWARD ENERGY-EFFICIENT 1G-EPON AND 10G-EPON WITH SLEEP-AWARE MAC CONTROL AND SCHEDULING
The authors briefly discuss the key features of 10G-EPON. Then, from the perspective of MAC-layer control and scheduling, they discuss challenges and possible solutions for putting optical network units into low-power mode for energy saving.
JINGJING ZHANG AND NIRWAN ANSARI

S39 MULTIRATE AND MULTI-QUALITY-OF-SERVICE PASSIVE OPTICAL NETWORK BASED ON HYBRID WDM/OCDM SYSTEM
The authors present a new scheme to support multirate and multi-quality-of-service transmission in passive optical networks based on a hybrid wavelength-division multiplexing/optical code-division multiplexing scheme. The idea is to use multilength variable-weight optical orthogonal codes as signature sequences of a hybrid WDM/OCDM system.
HAMZEH BEYRANVAND AND JAWAD A. SALEHI

S45 PASSIVE OPTICAL NETWORK MONITORING: CHALLENGES AND REQUIREMENTS
The authors address the required features of PON monitoring techniques and review the major candidate technologies. They highlight some of the limitations of standard and adapted OTDR techniques as well as non-OTDR schemes.
MOHAMMAD M. RAD, KERIM FOULI, HABIB A. FATHALLAH, LESLIE A. RUSCH, AND MARTIN MAIER
IMT-ADVANCED AND NEXT-GENERATION MOBILE NETWORKS
GUEST EDITORS: WERNER MOHR, JOSE F. MONSERRAT, AFIF OSSEIRAN, AND MARC WERNER

82 GUEST EDITORIAL

84 EVOLUTION OF LTE TOWARD IMT-ADVANCED
The authors provide a high-level overview of LTE Release 10, sometimes referred to as LTE-Advanced. First, a brief overview of the first release of LTE and some of its technology components is given, followed by a discussion of the IMT-Advanced requirements. The technology enhancements introduced to LTE in Release 10 (carrier aggregation, improved multi-antenna support, relaying, and improved support for heterogeneous deployments) are described.
STEFAN PARKVALL, ANDERS FURUSKÄR, AND ERIK DAHLMAN

92 ASSESSING 3GPP LTE-ADVANCED AS IMT-ADVANCED TECHNOLOGY: THE WINNER+ EVALUATION GROUP APPROACH
The authors describe the WINNER+ approach to performance evaluation of the 3GPP LTE-Advanced proposal as an IMT-Advanced technology candidate. The officially registered WINNER+ Independent Evaluation Group evaluated this proposal against ITU-R requirements. The authors provide an overview of the ITU-R evaluation process, criteria, and scenarios, and focus on the working method of the evaluation group.
KRYSTIAN SAFJAN, VALERIA D'AMICO, DANIEL BÜLTMANN, DAVID MARTIN-SACRISTAN, AHMED SAADANI, AND HENDRIK SCHÖNEICH
102 COORDINATED MULTIPOINT: CONCEPTS, PERFORMANCE, AND FIELD TRIAL RESULTS
Coordinated multipoint or cooperative MIMO is one of the promising concepts to improve cell edge user data rate and spectral efficiency beyond what is possible with MIMO-OFDM in the first versions of LTE or WiMAX. Interference can be exploited or mitigated by cooperation between sectors or different sites. Significant gains can be shown for both the uplink and downlink.
RALF IRMER, HEINZ DROSTE, PATRICK MARSCH, MICHAEL GRIEGER, GERHARD FETTWEIS, STEFAN BRUECK, HANS-PETER MAYER, LARS THIELE, AND VOLKER JUNGNICKEL

112 EVOLUTION OF UPLINK MIMO FOR LTE-ADVANCED
The evolution of LTE uplink transmission toward MIMO has recently been agreed in 3GPP, including the support of up to four-layer transmission using precoded spatial multiplexing as well as transmit diversity techniques. The authors provide an overview of these uplink MIMO schemes, along with their impact on reference signals and DL control signaling.
CHESTER SUNGCHUNG PARK, Y.-P. ERIC WANG, GEORGE JÖNGREN, AND DAVID HAMMARWALL

122 A 25 GB/S(/KM2) URBAN WIRELESS NETWORK BEYOND IMT-ADVANCED
The authors present a survey on the technical challenges of future radio access networks beyond LTE-Advanced, which could offer very high average area throughput to support a huge demand for data traffic and high user density with energy-efficient operation. They highlight various potential enabling technologies and architectures to support the aggressive goal of an average area throughput of 25 Gb/s/km2 in beyond-IMT-Advanced systems.
SHENG LIU, JIANJUN WU, CHUNG HA KOH, AND VINCENT K. N. LAU

SYNCHRONIZATION OVER ETHERNET AND IP IN NEXT-GENERATION NETWORKS
GUEST EDITORS: STEFANO BREGNI AND RAVI SUBRAHMANYAN

130 GUEST EDITORIAL

132 EVOLUTION OF THE STANDARDS FOR PACKET NETWORK SYNCHRONIZATION
The authors summarize the work done by ITU-T Q13/15 over the last six years to standardize the transport of timing over packet networks. They provide a summary of the published documents in this area from ITU-T while providing some of the background that went into each document, including the specification of synchronous Ethernet and IEEE 1588 telecom profiles.
JEAN-LOUP FERRANT AND STEFANO RUFFINI

140 SYNCHRONIZATION OF AUDIO/VIDEO BRIDGING NETWORKS USING IEEE 802.1AS
The Audio/Video Bridging project in the IEEE 802.1 working group is focused on the transport of time-sensitive traffic over IEEE 802 bridged networks. Current bridged networks do not have mechanisms that enable meeting these requirements under general traffic conditions. IEEE 802.1AS is the AVB standard that will specify requirements to allow for transport of precise timing and synchronization in AVB networks.
GEOFFREY M. GARNER AND HYUNSURK (ERIC) RYU

148 NGN PACKET NETWORK SYNCHRONIZATION MEASUREMENT AND ANALYSIS
As the transport of data across the network relies increasingly on Ethernet/IP methods and less on the TDM infrastructure, the need for packet methods of synchronization transport arises. Evaluation of these new packet methods of frequency and time transport requires new approaches to timing measurement and analysis.
LEE COSART

156 PERFORMANCE ASPECTS OF TIMING IN NEXT-GENERATION NETWORKS
Circuit-switched networks based on time-division multiplexing require synchronization to deliver information, whereas packet-switched networks can deliver information in an asynchronous environment. However, all real-time services require that synchronization and timing information be delivered over the network. Performance of timing distribution is quantified using particular metrics, and adherence to requirements is determined by using masks.
KISHAN SHENOI

164 USING IEEE 1588 AND BOUNDARY CLOCKS FOR CLOCK SYNCHRONIZATION IN TELECOM NETWORKS
The authors describe the use of IEEE 1588 and boundary clocks for clock distribution in telecom networks. The technology is primarily used to serve the radio interface synchronization requirements of mobile systems such as WiMAX and LTE, and to reduce the deployment of and dependence on GPS systems in base stations.
MICHEL OUELLETTE, KUIWEN JI, SONG LIU, AND HAN LI

President's Page 6
Letters to the Editor 12
Certification Corner 14
New Products 16
Conference Calendar 17
Product Spotlights 20
Global Communications Newsletter 21
Advertisers' Index 176

2011 Communications Society Elected Officers
Byeong Gi Lee, President; Vijay Bhargava, President-Elect; Mark Karol, VP–Technical Activities; Khaled B. Letaief, VP–Conferences; Sergio Benedetto, VP–Member Relations; Leonard Cimini, VP–Publications
Members-at-Large
Class of 2011: Robert Fish, Joseph Evans, Nelson Fonseca, Michele Zorzi
Class of 2012: Stefano Bregni, V. Chan, Iwao Sasase, Sarah K. Wilson
Class of 2013: Gerhard Fettweis, Stefano Galli, Robert Shapiro, Moe Win

2011 IEEE Officers
Moshe Kam, President; Gordon W. Day, President-Elect; Roger D. Pollard, Secretary; Harold L. Flescher, Treasurer; Pedro A. Ray, Past-President; E. James Prendergast, Executive Director; Nim Cheung, Director, Division III

IEEE COMMUNICATIONS MAGAZINE (ISSN 0163-6804) is published monthly by The Institute of Electrical and Electronics Engineers, Inc. Headquarters address: IEEE, 3 Park Avenue, 17th Floor, New York, NY 10016-5997, USA; tel: +1-212-705-8900; http://www.comsoc.org/ci. Responsibility for the contents rests upon authors of signed articles and not the IEEE or its members. Unless otherwise specified, the IEEE neither endorses nor sanctions any positions or actions espoused in IEEE Communications Magazine.

ANNUAL SUBSCRIPTION: $27 per year print subscription. $16 per year digital subscription. Non-member print subscription: $400. Single copy price is $25.

EDITORIAL CORRESPONDENCE: Address to: Editor-in-Chief, Steve Gorshe, PMC-Sierra, Inc., 10565 S.W. Nimbus Avenue, Portland, OR 97223; tel: +1-503-431-7440; e-mail: [email protected].

COPYRIGHT AND REPRINT PERMISSIONS: Abstracting is permitted with credit to the source. Libraries are permitted to photocopy beyond the limits of U.S. Copyright law for private use of patrons: those post-1977 articles that carry a code on the bottom of the first page provided the per copy fee indicated in the code is paid through the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923. For other copying, reprint, or republication permission, write to Director, Publishing Services, at IEEE Headquarters. All rights reserved. Copyright © 2011 by The Institute of Electrical and Electronics Engineers, Inc.

POSTMASTER: Send address changes to IEEE Communications Magazine, IEEE, 445 Hoes Lane, Piscataway, NJ 08855-1331. GST Registration No. 125634188. Printed in USA. Periodicals postage paid at New York, NY and at additional mailing offices. Canadian Post International Publications Mail (Canadian Distribution) Sales Agreement No. 40030962. Return undeliverable Canadian addresses to: Frontier, PO Box 1051, 1031 Helena Street, Fort Erie, ON L2A 6C7.

SUBSCRIPTIONS, orders, address changes: IEEE Service Center, 445 Hoes Lane, Piscataway, NJ 08855-1331, USA; tel: +1-732-981-0060; e-mail: [email protected].

ADVERTISING: Advertising is accepted at the discretion of the publisher. Address correspondence to: Advertising Manager, IEEE Communications Magazine, 3 Park Avenue, 17th Floor, New York, NY 10016.

SUBMISSIONS: The magazine welcomes tutorial or survey articles that span the breadth of communications. Submissions will normally be approximately 4500 words, with few mathematical formulas, accompanied by up to six figures and/or tables, with up to 10 carefully selected references. Electronic submissions are preferred, and should be submitted through Manuscript Central: http://mc.manuscriptcentral.com/commag-ieee. Instructions can be found at the following: http://dl.comsoc.org/livepubs/ci1/info/sub_guidelines.html. For further information contact Sean Moore, Associate Editor-in-Chief ([email protected]). All submissions will be peer reviewed.
THE PRESIDENT’S PAGE
COMSOC MARKETING: VALUED OFFERINGS FOR VALUED CUSTOMERS
Marketing, in general, is a term that we often use, but are not sure of the exact definition. For the purposes of this article, we cite the definition provided by the American Marketing Association:1

Marketing is the activity, set of institutions, and processes for creating, communicating, delivering, and exchanging offerings that have value for customers, clients, partners, and society at large.

Key to this definition is the concept that "offerings" must have value for the customer. Therefore, determining what products and services customers find of value is a crucial aspect of marketing. Our members are "customers," with the Society's marketing efforts having primary focus on satisfying their needs while meeting the broader goals of their Society. Since any organization must operate within the boundaries of its mission and/or goals, we revisit the goals of the IEEE Communications Society (ComSoc) first, which are two-fold:
•Scholarly — scientific and directed toward the advancement of the theory, practice, and application of communications engineering and related arts and sciences.
•Professional — promoting high professional standards, development of competency, and the advancement of the standings of members of the professions.

These goals are implemented through a diverse set of activities by members and the business actions managed by volunteer leaders and paid staff. Our main business includes publishing journals and magazines, holding meetings and conferences, offering education and training, and selling advertisements. Our business is to serve ComSoc's customers, including members, and to achieve the goals of the Society.

Next, we need to understand the background of ComSoc's customers, which include members, publication subscribers, conference attendees, and recipients of other ComSoc services. Most current members have advanced educations: postgraduate degrees in EE, physics, mathematics, computer sciences, business, or related fields.

Serving ComSoc's "customers" is the mission of ComSoc's Marketing and Creative Services Department, which promotes all ComSoc products and provides creative services to ComSoc officers, volunteers, and all other departments of ComSoc. The department completes over 300 marketing projects per year in order to refine and renew ComSoc's offerings. ComSoc offerings can be grouped into four major areas: membership, publications, conferences, and education & training.

In this article we will describe the marketing of ComSoc in terms of ComSoc offerings, marketing process, and the issues and challenges that must be addressed in order to best serve ComSoc customers. I share this issue with Stan Moyer, ComSoc Director of Marketing and Industry Relations, and John Pape, (staff) Director of Marketing and Creative Services.

Stan Moyer is an executive director and strategic research program manager in the Applied Research area of Telcordia Technologies, where he has worked since 1990. Currently, he is leading a business development effort for end-user information privacy protection for mobile services. In the past he led research and business development activities related to digital content services and home networking. He has also worked on ATM switch hardware, broadband network architectures and protocols, middleware, Internet network and application security, Internet QoS, and voice over IP. He is currently President of the OSGi Alliance. He served as a member of the IEEE Technical Activities Board Finance Committee. Within ComSoc, he is currently serving as Director-Marketing and Industry Relations, a member of the ComSoc Standards Board, Vice-Chair of the IEEE CCNC steering committee, and co-Chair of the Ad hoc Industry Promotion Committee. Stan has an ME degree in Electrical Engineering from Stevens Institute of Technology and an MBA degree in Technology Management from the University of Phoenix.

John Pape has served as the (staff) Director of Marketing and Creative Services since 1997. His responsibilities include planning and implementing the society's marketing activities for membership, publications, continuing education, and conferences. During his tenure, products have migrated from print to electronic media, and marketing tactics have evolved from direct mail and manual processing to complicated e-mail campaigns and social media outreach. Recently, he has led ComSoc's efforts to provide members with a digital option for IEEE Communications Magazine and to create and execute the plan to offer a virtual course in wireless communications engineering. From 1989 to 1997, he managed the Publications Marketing Department of the American Society of Civil Engineers. He has managed marketing activities for more than 30 years with international publishers including S. Karger Publishers, Methuen, and Springer-Verlag.

1 American Marketing Association website, http://www.marketingpower.com/aboutama/pages/definitionofmarketing.aspx

COMPETITORS AND ADDRESSABLE MARKET

ComSoc, just like most businesses, has competitors for its products and services. ComSoc competes for time, prestige, money, authors, attendees, readers, volunteers, subscribers, and resources with other organizations, events, publishers, and information sources in the communications field. Some might consider communications websites or self-organized social media as directly competitive as they can provide alternative resources for members and potential members. Some known competitors, in that context, include trade or corporate-centric organizations such as the ITU, GSMA, IET, ACM, TIA, ATIS, CTIA, AFCEA, OSA, ISOC, and other national organizations. Publishers such as John Wiley and Sons, Springer-Verlag, CMP, Cambridge University Press, and Elsevier provide alternative global publications (books, journals, magazines) as information sources and legitimate venues for scholarly authors and practical technical publishing. Certain forums and/or special interest groups (SIGs) deal with rapid technological developments, such as the WiMax Forums, Telecommunications Management Forum, NGN, IMS, and the Femto Forum. Trade shows and technical conferences (sponsored by corporate entities such as the Yankee Group or non-profits such as PCIA) on communications topics can be found throughout the year at locations around the globe. Within the IEEE, other societies such as Antennas and Propagation, Signal Processing, Vehicular Technology, Information Theory, Computer, Photonics, Consumer Electronics, and Microwave Theory and Techniques all have some technical overlap with ComSoc. However, ComSoc does not have a broad-based direct global competitor serving individuals within the global community in its technical scope.

When marketing products and services, it helps to understand the addressable market – that is, the entire space for which our products and services would be of interest. Defining the estimated universe of potential members or communications related subject matter experts can be a challenge. Based on published data, there have been about 800,000 EE BS degrees granted in the US in the past 40 years. Less than half of them attained a Master's degree or Ph.D. level degree. Historically about 14% enter communications-centric employment, resulting in about 110,000 individuals with an undergraduate degree and about 50,000 individuals holding an EE Master's degree or higher in the US. With 20,000 members in the US, ComSoc member demographics imply that ComSoc has captured about 25% of the potential US market holding Master's degrees or higher. Of those earning a Bachelor's Degree, data would suggest that ComSoc has captured about 10% of the potential US universe. The potential member universe could include other disciplines such as physics, mathematics, computer science, and business management, but not all EE graduates pursue careers in EE fields. Data sources for the international higher education area and employment markets are unreliable and inconsistent; it is not possible to estimate realistically the global member universe, although someone may estimate the size to be double the US.

FIGURE 1. U.S. employment trends (in thousands), January 1999 to January 2009: telecommunications; management and technical consulting; scientific R&D services.
The US Department of Labor maintains employment data for the telecom industry for all employees (including nondegreed employees). Employment in the traditional telecommunications industry has declined by 35% since reaching highs of more than 1.4 million in 2000; there have been other areas of employment growth. While telecom employers have been shedding traditional full-time employees, technical consulting and scientific research employment have increased by 50% in the last decade. As a result, the US membership market is not as apparent or easily accessible as it once was. Communications specialists can be found in a much broader array of companies and working scenarios. Figure 1, drawn based on the US Bureau of Labor statistics charts, represents this shift graphically.
MEMBERSHIP One of the primary “products” that ComSoc provides to “customers” is membership in the society. As a society of the IEEE, ComSoc membership is offered for a fee in addition to IEEE membership dues. ComSoc sets the price and defines the benefits of society membership. The prime justifications for membership are to maintain technical competence, receive desired services at discounted rates, and network with colleagues. Each member receives monthly issues of IEEE Communications Magazine, the leading technical periodical devoted to communications technologies on an advanced level. ComSoc membership dues are $25 in 2011, with digital delivery of IEEE Communications Magazine; in 1999 ComSoc membership dues were $23 with print delivery of IEEE Communications Magazine. From the end of February to the end of September each year, dues for new members are half the regular price. ComSoc membership was about 8,800 when the Society was founded in 1963 with the establishment of IEEE. Membership has increased rapidly over the past 15 years, which was influenced by the half-year free membership campaign started in 1998 and the technology bubble of the late 1990’s. However, it began to decline from the early 2000’s due to the rapid decline of traditional telecommunications employment and full implementation of IEEE Xplore, which resulted in online IEEE (and ComSoc) content availability. The membership decline stopped at the 40,000+ level in the late 2000’s and maintained that level until it began to increase in 2010 to about 48,000. The majority of ComSoc members reside outside the US, whereas the majority of IEEE members reside in the US. This is the result of several phenomena. The communications industry diffused globally and became successful in many Asian and European countries. IEEE Xplore sales penetration in the US market has resulted in many potential US members satisfying their need for technical publications without joining ComSoc. In the late 1990’s, our surveys indicated that more than 62% of members were industry-employed; in 2008 that number decreased to 45%. This indicates that any membership growth in the future is most likely to come with the support of industry, consulting, or government areas. Each year ComSoc recruits about 11,000 new members. The majority of these new members result from offering to new IEEE members a free ComSoc membership for the first year. Some new members join during the annual IEEE renewal process. To recruit new members, marketing executes various print, online, and e-mail direct response campaigns; trade show exhibits; free book premiums; conference registration offers; and monthly new IEEE member e-campaigns. In addition, membership recruiting is supported by extensive web page updating; distributing promotion material to Sister Societies;
local Chapter support with promotion material, sample copies, posters and special offers; cover wraps; free offers of a brief communications history book highlighting Society notables; an up-to-date Society PowerPoint presentation for group presentations; back-office coordination/support for data and other membership development activities such as Chapter Chair Congresses, Distinguished Lecturer Tours, and volunteer visits; Best of the Best and other book/DVD/product special offers; and special opportunistic efforts. In recent years, the Industry Now program has been established to offer bulk or multiple memberships to companies in emerging economies that are not acquainted with the advantages of ComSoc association.

Retaining members is a constant process of providing reminders and opportunities to members that illustrate the value of membership. The annual ComSoc Community Directory and letter from the President are sent to each new ComSoc member on a biweekly basis. Members are surveyed for satisfaction and needs. Articles and columns dealing with member issues appear every month in IEEE Communications Magazine, as do advertisements specifically aimed at members. Monthly issues of e-News spotlight the President's monthly message and present member-only special offers, including conference registrations, the Book of the Month, free tutorials, technically sponsored Webinars, new product offers, and other useful information such as the Top Ten list of ComSoc papers appearing in IEEE Xplore, and content announcements for optional publications. Support for volunteer committees, Chapters, Distinguished Lecturer Tours, premiums, and other programs also contributes to retention efforts.

Publication / % of CommMag Readers that Read
EE Times: 23.50%
Telecommunications: 20.60%
EDN: 15.90%
Electronic Design: 15.10%
Wireless Design & Development: 13.60%
Microwave & RF: 13.20%
Network World: 13.10%
Microwave Journal: 12.60%
Lightwave: 8.90%
Business Communications Review: 7.00%
Test & Measurement World: 6.50%
Internet Telephony: 5.60%
Photonics Spectra: 4.90%
Telephony/Connected Planet: 4.10%
Urgent Communications: 2.70%
TABLE 1. Percentage of CommMag readers that read other magazines.
PUBLICATIONS
ComSoc publishes magazines, journals/transactions, proceedings, DVDs, books (with IEEE's publisher John Wiley and independently), and newsletters. There are several methods to measure the popularity and effectiveness of periodical publications, e.g., subscription data, electronic PDF downloads, submissions, the ISI Journal Citation report, and reader/member surveys. Most periodicals are available in print or electronic format.
There are three general categories for the subscription market: subscription agents; libraries; and individuals. ComSoc relies heavily on IEEE Sales and Marketing for sales to libraries and to subscription agents; all electronic package sales, including consortia licensing sales, are handled by the IEEE. The IEEE offers several packages of periodicals. The All Societies Periodical Package (ASPP), Enterprise, the IEEE Electronic Library (IEL), and the new IEEE Communications Library are among the offerings that include ComSoc periodicals and conference proceedings. IEEE participates in library conferences such as the ALA and SLA annual trade shows. All ComSoc periodicals are included in the annual Society brochure, on Society Membership applications, online web and PDF formats, and in the ComSoc Community directory. IEEE Communications Magazine (CommMag) is the most important Society publication, which all members receive monthly. The editorial data reflects hot topics of interest to members and is written in a style to be accessible to all members, with academic and corporate interests alike. CommMag is a hybrid publication, containing editorial material that can be described as scholarly with sufficient industry attraction to generate $1 million plus in advertising sales each year. No other IEEE society magazine can claim the distinction of most non-member subscriptions, most 2009 magazine Xplore views, high ISI Journal Citation Report impact factor (rated #5 in telecommunications in 2009), and $1+ million generated in advertising revenue. CommMag is the most significant scholarly/industry publication in the specialty of communications. To emphasize the uniqueness of ComSoc members, a survey of readers found that no competitive trade publication was read by more than 25% of CommMag readers (see Table 1). IEEE Wireless Communications Magazine and IEEE Network Magazine complement IEEE Communications Magazine and focus on two strong areas of communications technology. These are optional bi-monthly publications with technical co-sponsorship with the Computer Society. Both are similar to CommMag in layout and offer a broader readership concept. Core ComSoc archival journals/transactions including IEEE Transactions on Communications, IEEE Journal on Selected Areas in Communications, IEEE Communications Letters, IEEE Communications Surveys and Tutorials (e-only), and IEEE Transactions on Network and Service Management (e-only) reflect the scope of the scholarly activity. And financially co-sponsored journals such as IEEE Transactions on Wireless Communications, IEEE/ACM Transactions on Networking, IEEE/OSA Journal of Lightwave Technology, IEEE/OSA Journal of Optical Communications and Networking, and IEEE Transactions on Mobile Computing demonstrate relationships with other related specialties.
CONFERENCES
All conferences sponsored and co-sponsored by ComSoc are marketed and promoted by mixed media through different channels. The degree of marketing effort increases as the level of financial ownership and budget increases. For a specific event, a separate marketing plan or strategy is created. There are some common areas. Most of these events include conference proceedings with papers also appearing online in IEEE Xplore. This results in authors/presenters wanting to submit papers and contribute to the conferences and ComSoc revenue. Conferences generate more gross revenue than any other product in the portfolio, and conferences command more marketing resources than any other products in the ComSoc portfolio. Conferences have different levels of financial co-sponsorship
that range from conferences fully financially sponsored by ComSoc (e.g., ICC, GLOBECOM, CCNC) and financially co-sponsored by ComSoc (e.g., OFC/NFOEC, MILCOM) to conferences with no financial sponsorship (IEEE Sarnoff 2009, WTS 2009, IEEE Policy 2009). Those conferences are typically technically co-sponsored by ComSoc. Depending on budgets, most fully-owned ComSoc conference marketing includes web site development, Call-for-Papers (CFP) assistance, advance and final program design and production, online and media advertising, flyers, and a series of e-mail efforts. Recently, there have been additional activities such as recording live sessions and other program events. Under the ComSoc Webcasts brand, access to these events can be purchased for live participation or recorded listening. Often, a keynote speech can be enjoyed free of charge. At some recent flagship events, there have been increased visibility activities with press releases and local, national, and global media coverage. Some ComSoc volunteer leaders have made trips to local high schools and colleges to explain the benefits of careers in the communications sciences. At a recent MILCOM, high school science students visited the exhibit floor to sample the experience and view the scope of the industry. Social media – e.g., blogs, Facebook, LinkedIn, Twitter – are now playing a larger role in conference marketing. ComSoc has more than 35 social media sites dedicated to conferences! Social media also provides new ways for registrants to participate. Marketing efforts related to conferences not only help to promote the conference, but also utilize the conference to market other ComSoc products and services. Conferences serve as a great forum for the ComSoc marketing staff and volunteers to meet ComSoc members and conference attendees to get feedback and input on what they do and do not like about ComSoc, the conference, and other products and services.
EDUCATION AND TRAINING

Continuing education and/or educational products/product development are important but least developed areas within ComSoc. In reality, everything ComSoc produces falls under the subject of education, and most member surveys indicate support for additional educational opportunities. In response, we plan to invest intensive efforts to fully develop the education and training areas in accordance with the progress of the mobile converged communications era. Conference Tutorials are developed with ComSoc events, but they are developed under the banner of the event itself, so marketing for these tutorials falls under the domain of the individual event. Tutorials Now represents an online portfolio of individual half-day or full-day tutorials that had been given at ComSoc events. The total number of titles accumulated so far is 84, but it grows every year. Selected presenters record voice over slide presentations after the event and forward the completed files to ComSoc for quality control testing and uploading. Presenters receive an honorarium and/or royalties for accepted tutorials. Recently these tutorials have been indexed for access through the ComSoc Digital Library. Some Tutorials Now modules are offered as potential sponsorship to companies. These can be offered to ComSoc members free of charge for a limited time when a sponsor has been secured. Courses/sessions can be developed for non-ComSoc conferences and trade shows. In 2009 and 2010, a course on Wireless Communications (specifically the areas covered by the WCET certification program) was held at 4G World in Chicago. A five-day virtual intensive course on Wireless Communications was held in September 2010. This first offering was very successful, with 75 registrants from 15 countries. Each day's sessions were held over the Internet; participants never had to leave their computers. For 2011, the five-day virtual intensive course on wireless communications is scheduled for multiple offering times. Eventually this event could become a quarterly or bimonthly course, demand permitting.

ComSoc's WCET (Wireless Communication Engineering Technologies) Certification Program was officially launched in early 2008. It was developed by ComSoc and an international collection of industry experts to address the worldwide wireless industry's growing need for professionals with real-world problem-solving skills. Industry consensus, obtained through an industry survey with more than 1,300 individual responses representing more than 65 countries from around the world, was that a wireless certification program is necessary. The purpose of the WCET is to certify individuals in wireless communications. Two testing windows are offered each year, during which individuals can sit for the exam. There are more than 500 testing sites located in 75 countries around the world. The exam is administered at a Computer Based Testing facility and consists of 150 multiple-choice questions encompassing seven major wireless areas: RF engineering, propagation, and antennas; access technologies; network and service architecture; network management and security; facilities infrastructure; agreements, standards, policies, and regulations; and fundamental knowledge. The WCET program has evolved for the past two years while passing through a learning curve. Marketing strategy for the WCET exam originally started with the objective to generate direct individual applications, but the focus was changed from the individual to the company and from the exam to training. The five-day virtual intensive course thus developed has proved popular with industry and provides a natural sequence for those wishing to successfully navigate the WCET exam.

IN CLOSING

With a membership of more than 48,000 global individuals, ComSoc is the second largest IEEE society and has strong volunteer commitments, a dedicated staff, expert operational support, and a global reputation of excellence. ComSoc excels at producing technical publications, organizing technical conferences, and fostering educational programs. ComSoc has the potential to grow much more in the future while undergoing transformation toward the converged communications era. ComSoc's marketing has adapted to many changes in the past decade, thus enabling ComSoc to reach its current status. Anticipating future opportunities, the ComSoc volunteer/staff partnership for marketing will support increased industry patronage and new partnerships with organizations and companies that can help enhance ComSoc's position as the "go-to" resource for the communications industry. Further, it will help promote new publications and conferences in emerging communications areas, develop new services geared to industry, attract non-US members, expand digital delivery of information and virtual meetings, explore social media opportunities, and prepare for the unexpected. ComSoc's marketing will keep playing a pivotal role while ComSoc navigates through the newly emerging converged communications era, creating new offerings that have value for ComSoc's customers and thereby making ComSoc a valuable home for all the communications communities and professionals of the world.
LETTERS TO THE EDITOR
EDITED BY MISCHA SCHWARTZ

Comments on "An Early History of the Internet" by Leonard Kleinrock
Greg Adamson, Melbourne
To the Editor, I read your “History of Communications” pages with interest, and particularly the August 2010 article by Leonard Kleinrock. This article creates a challenge for the reader: how to weigh an account of historical events by a major participant in those events. I have separately seen the problem described as “military history written by generals.” You would be aware that Donald Davies in part covered the same ground in an article published in 2001 (The Computer Journal, vol. 44, no. 3, pp. 152–62). I found Davies’ account very moving: a renowned researcher in his dying months establishing his view of a contested period of discovery (and not to assert his own claim). After reviewing the work of Kleinrock and Paul Baran, he summed up his finding in the following way: “My contention is that the work of Kleinrock before and up to 1964 gives him no claim to have originated packet switching, the honour for which must go to Paul Baran. The passage in his book on time-sharing queue discipline, if pursued to a conclusion, might have led him to packet switching, but it did not.” I appreciate that Leonard Kleinrock would not agree with this perspective, yet I feel his article is too oblique. I would be very interested in seeing a
more specific response to the points that Davies made. Perhaps you should have an occasional column titled "Debates in the History of Communications."

Response to Greg Adamson
by Leonard Kleinrock
In my August 2010 article, I state that the detail I afford to my perspective is based on personal experience and is not a claim to importance. I also call attention to many more histories of the Internet in need of study; Adamson calls this whole enterprise “military history written by generals” and has asked me to respond to claims he cites from the late Donald Davies in which my work on time-sharing is addressed. A major goal of packetization is to prevent long messages from hogging the channel and thereby causing shorter messages to wait inordinately. This was raised by Davies as a major concern since it provided the network operator the ability to control network delay rather than being at the mercy of the end user. In my 1962 dissertation I clearly considered the role of message priority classes and priority queueing disciplines in accomplishing this. Relating this to network delay, I devoted Chapter 5 to studying the “...manner in which message delay is affected when one introduces a priority structure (or queue discipline) into the set of messages....” I isolated the effect of queue discipline by looking at a single node, as “An understanding of the effects of a
priority discipline at the single-node level is necessary before one can make any intelligent statements about the multinode case.” Among the classes of discipline I studied were the preemptive disciplines where the transmission of a message can be interrupted and then continued later. I devoted an entire section to time-shared servicing of data traffic in which I broke messages into smaller, fixed size pieces. I also provided a mathematical analysis and showed that the deleterious effects of channel hogging were indeed avoided. It would take far more space to delineate the properties of packet switching, since it involves much more than just chopping messages into smaller fixed length segments (the issue addressed by Adamson to which I have herein responded). Briefly, it also involves network efficiency which I addressed with the broader issue of demand access; it involves robustness and reliability which comes about from distributed adaptive routing in a mesh network, which I also presented in my early work. In the August 2010 article, I explain how my work informed the technology of the ARPANET as well as its timing relative to that of Baran’s. Prior to the writing of Davies’ posthumous article, he contacted me about these topics, and I did respond accordingly. Dr. Adamson has raised these same issues, which I have already explained in my article.
CERTIFICATION CORNER
THE VALUE OF VOLUNTEERS
BY ROLF FRANTZ

In many of the months this column has appeared, it has often included some words about the volunteers who have made WCET certification possible. It's worth taking more time to describe the many roles that volunteers have played – and continue to play – because of the value they bring to the program. It started when the original Practice Analysis Task Force (PATF) convened in December 2006 to develop the Delineation (the description of wireless practice and underlying knowledge). The PATF included more than 15 industry experts from around the world who gave their time to get the program off to a strong start. Without their commitment, there would not be a WCET certification program. Several of these volunteers remain actively involved today, especially in championing the value of certification within the industry. The next volunteers were the participants in Focus Groups and the Independent Reviewers. Dozens of industry experts studied the Delineation and offered constructive feedback that helped make it
even more representative of wireless communications practice in a broad range of companies and countries. Again, some of those volunteers remained active in the WCET program for years, taking on specific roles and responsibilities, and their continued involvement has been valuable in guiding and growing the program. Special thanks go to the volunteers who have served on the Industry Advisory Board. They have given focused feedback on all aspects of the WCET program, ranging from promoting the program to industry to critiquing the WCET website. The fact that Board members represent companies in all segments of the wireless communications industry, based in countries around the world, has helped maintain the vendor neutrality and trans-national nature of WCET certification. Other volunteers have served as question writers, question reviewers, and on the exam committee that has used the best questions to create exams covering the breadth of wireless communications. They have maintained the balance among the seven technical areas that was developed by the PATF and reinforced by industry feedback as to the relative importance of various tasks and knowledge in the workplace. Some of these volunteers recently joined with others to form a "mini-PATF" to review all the detailed feedback on the Delineation that we have received. They invested their time and effort to refresh and update the Delineation to reflect changes in the industry over the past few years. A Core Team of volunteers has provided steady leadership to the program throughout its history. They have led committees that looked at issues of policy, marketing, the Handbook, strategy, training, the WEBOK, recertification, and question writing and exam creation. A couple dozen volunteers have made a significant commitment to these leadership positions over the years. The Steering Committee is responsible for the long-term direction of the WCET program. More than a dozen volunteers have served on this committee, helping to identify strengths and weaknesses and also areas where ComSoc can build on the certification program. An example of the latter is the development of ComSoc training offerings, an area where there was a clear demand in the industry and a path within ComSoc to address the need. The title of this month's column sums it up: the hundreds of volunteers who have played many different roles in the development, growth, guidance, and success of WCET certification have been our most valuable asset. We have expressed our thanks to each as they have relinquished a role or responsibility, but we owe one large THANK YOU to all of them. Attention WCP certificate holders in particular: volunteer opportunities abound! Please let us know how you would like to contribute to continuing to grow the WCET program to its full potential.
NEW PRODUCTS

CHIP FAMILY SUPPORTS THE ITU-T G.HN GLOBAL STANDARD
Lantiq
Lantiq has introduced a chip family supporting the ITU-T G.hn global standard for next generation wired home networks. Lantiq XWAY HNX devices provide manufacturers of consumer, computing and smart home electronics with the foundation for in-home networks that can be connected using any combination of phone, power and cable wiring. Endorsed by the 191 member countries of the ITU in June 2010, the G.hn standard defines technology to provide network connectivity across all common in-home wiring with data rates as high as 1 Gigabit per second (Gbps). As G.hn becomes an integral feature in residential gateways, consumer electronics devices, personal computers and Internet-connected smart home devices, service providers will be able to realize significantly reduced installation and operations costs as a result of plug-and-play network operation and greater device connectivity. Lantiq XWAY HNX chips can be used in standalone G.hn node applications or as part of multi-service platforms. The device is provided to customers with a software package that includes pre-integrated drivers for the broad range of Lantiq system-level silicon devices, including Gigabit speed gateway processors, 802.11n WLAN supporting carrier-grade video, DECT/CAT-iq, VoIP and analog voice.
http://www.lantiq.com/hnx
LOW NOISE, HIGH-OUTPUT XPON VIDEO RECEIVER
RF Micro Devices
RFMD's new RFRX8888 video receiver performs transimpedance amplification of the differential input from a high performance 1550 nm optical wavelength photo detector (PD), all with best-in-class noise performance. This IC's output is linear low distortion RF from 48 MHz to 1002 MHz. RFRX8888 is ideal for 1550 nm optical wavelength RF analog or digital overlay video receive circuitry employed in xPON FTTP ONT triplexer and quadplexer modules. Its first stage features integrated bias circuitry, simplifying external-to-IC end product design and lowering overall end product assembly cost. Optimized for operation from a +12 VDC power supply with a highly efficient power consumption of just 1.4 W, it eliminates the need for a supplemental ONT power supply in most xPON applications. RFRX8888's ultra-low noise performance, combined with high output power, extends the performance and lifetime of wired networks by improving the link margin and/or allowing more passive optical splits. For FTTP applications requiring +5 VDC power supply operation, the RFRX8890 video receiver is also available. Features include:
•+12V Single Supply Operation
•On-Die Bias Circuitry Reduces Cost and Board Area
•Best-in-Class Low Noise (<3.0 pA/rtHz Equivalent Input Noise Current)
•Low Power: 1.4 W at +12V
•Best-in-Class +23 dBmV per Channel RF Output Capability
•Linearity Better Than -63 dBc CSO and -66 dBc CTB at +23 dBmV RF Out Per Channel (79-NTSC Equivalent Channels)
•48 MHz to 1002 MHz Operational Bandwidth
•30 dB AGC Range
http://www.rfmd.com

SG384 RF SIGNAL GENERATOR
Stanford Research Systems, Inc.
Introducing the SG384, a 4 GHz RF Signal Generator from SRS. It offers a DC to 4 GHz frequency range with 1 μHz resolution, AM, FM, and PM, with -116 dBc/Hz phase noise at 20 kHz offset from 1 GHz, full octave frequency sweeps, an OCXO timebase and standard RS-232, GPIB and Ethernet interfaces. Options include clock outputs, analog I/Q inputs and a rubidium timebase.
http://www.thinkSRS.com
DIGITAL I/Q DATA RECORDER
Rohde & Schwarz
Rohde & Schwarz introduced the R&S IQR digital I/Q data recorder at electronica in Munich. The recorder can record, store and replay digital RF signals loss-free and in realtime over the I/Q interface developed by Rohde & Schwarz. When used in combination with RF scanners, generators and network analyzers from Rohde & Schwarz, the recorder completes a high-performance, continuous analysis system for digital RF signals. This system will prove to be of particular benefit for users in broadcasting, mobile radio, aerospace & defense, and the automobile industry. The compact recorder – in half 19-inch format – currently offers transmission rates of up to 66 Msample per second. It comes with a robust, interchangeable solid-state drive with one Tbyte of storage capacity and a recording rate of 270 Mbyte per second. The R&S IQR I/Q data recorder records digital RF signals in realtime. Thanks to its unique combination of speed, compactness and robustness, it is ideal for use in drive tests in broadcasting and mobile radio networks. For instrument tests or electronic component testing, the recorder can be used to supply previously generated test signals. In addition, broadband spectra or sporadic signals can be recorded in realtime for later offline analysis. To obtain a continuous analysis system for digital RF signals, the user can connect the data recorder with either a spectrum or radio network analyzer and with a signal generator from Rohde & Schwarz via the digital I/Q interface. This I/Q interface simplifies both parameter exchange between the instruments and the setup of the data recorder. During configuration, the user can access several trigger modes for the start/stop function that range from manual quick start to the triggering of recording via a previously entered I/Q signal level. The integrated Ethernet interface permits remote control of the instrument as well as the transfer of the measurement data via LAN. Two extra USB interfaces on the front panel and a touch screen round out the user-friendly concept of the data recorder.
http://www.rohde-schwarz.com

MODEL-DRIVEN CONFIGURATION MANAGEMENT
Tail-f Systems
Tail-f Systems has announced the first model-driven configuration management application for provisioning Carrier Ethernet services. NCS for Carrier Ethernet will benefit both service providers and networking equipment providers by enabling the activation of complex services in less time and with fewer resources. NCS also radically simplifies the development of new and enhanced management systems, allowing developers to bring new products to market much faster. NCS is a general applications framework for building configuration management systems. NCS for Carrier Ethernet extends the value of NCS by incorporating service and device models plus a Web UI optimized for implementing Carrier Ethernet management systems.
http://www.tail-f.com
CONFERENCE CALENDAR

♦ Communications Society portfolio events are indicated with a diamond before the listing; • Communications Society technically co-sponsored conferences are indicated with a bullet before the listing. Individuals with information about upcoming conferences, calls for papers, meeting announcements, and meeting reports should send this information to: IEEE Communications Society, 3 Park Avenue, 17th Floor, New York, NY 10016; e-mail: [email protected]; fax: +1-212-705-8996. Items submitted for publication will be included on a space-available basis.

2011

FEBRUARY
• NTMS 2011 - 4th Int'l. Conference on New Technologies, Mobility and Security, 7-10 Feb., Paris, France. http://www.ntms-conf.org/innovative-projects.htm
• ONDM 2011 - 15th Int'l. Conference on Optical Networking Design and Modeling, 8-10 Feb., Bologna, Italy. http://www.ondm2011.unibo.it/
• ICACT 2011 - 13th Int'l. Conference on Advanced Communication Technology, 13-16 Feb., Phoenix Park, Korea. http://www.icact.org/
♦ IEEE CogSIMA 2011 - IEEE Conference on Cognitive Methods in Situation Awareness and Decision Support, 22-24 Feb., Miami, FL. http://www.ieee-cogsima.org
• ISWPC 2011 - Int'l. Symposium on Wireless Pervasive Computing, 23-25 Feb., Hong Kong, China. http://www.iswpc.org/2011/
• WSA 2011 - Int'l. ITG Workshop on Smart Antennas, 24-25 Feb., Aachen, Germany. http://www.wsa2011.rwth-aachen.de/

MARCH
♦ OFC/NFOEC 2011 - Optical Fiber Communication Conference, 6-10 March, Los Angeles, CA. http://www.ofcnfoec.org/
♦ IEEE WCNC 2011 - IEEE Wireless Communications and Networking Conference, 28-31 March, Cancun, Mexico. http://www.ieee-wcnc.org/
• ICCIT 2011 - Int'l. Conference on Communications and Information Technology, 28-31 March, Aqaba, Jordan. http://iccit-conf.org/

APRIL
♦ IEEE ISPLC 2011 - 15th IEEE Int'l. Symposium on Power Line Communications and Its Applications, 3-6 April, Udine, Italy. http://www.ieee-isplc.org/
17
1/17/11 5:48:38 PM
CONFERENCE CALENDAR
♦ IEEE INFOCOM 2011 - IEEE Conference on Computer Communications, 10-15 April Shanghai, China. http://www.ieee-infocom.org
• IEEE RFID 2011 - IEEE Int’l. Conference on RFID 2011, 12-14 April Orlando, FL. http://ewh.ieee.org/mu/rfid2011/
• WTS 2011 - Wireless Telecommunications Symposium 2011, 13-15 April New York, NY. http://www.csupomona.edu/~wtsi/wts/index.htm
• IIT 2011 - Int’l. Conference on Innovations in Information Technology, 25-27 April Dubai, United Arab Emirates. http://www.it-innovations.ae/iit10/index.html
• ISPS 2011 - 10th Int’l. Symposium on Programming and Systems, 25-27 April Algiers, Algeria. http://www.isps2011.dz/

MAY
• IEEE SARNOFF - 34th Sarnoff Symposium 2011, 2-4 May Princeton, NJ. http://sarnoff-symposium.ning.com/
• MC-SS 2011 - 8th Int’l. Workshop on Multi-Carrier Systems and Solutions, 3-4 May Herrsching, Germany. http://www.mcss.dlr.de/
♦ IEEE DySPAN 2011 - IEEE Int’l. Symposium on Dynamic Spectrum Access Networks, 3-6 May Aachen, Germany. http://www.ieee-dyspan.org/
• ICT 2011 - 18th Int’l. Conference on Telecommunications, 8-11 May Ayia Napa, Cyprus. http://www.ict2011.org/
♦ IEEE CQR 2011 - 2011 Annual IEEE CQR Int’l. Workshop, 10-12 May Naples, FL. http://committees.comsoc.org/cqr/
♦ IEEE ICSOS 2011 - IEEE Int’l. Conference on Space Optical Systems and Applications, 11-13 May Santa Monica, CA. http://icsos2011.nict.go.jp/
• CONATEL 2011 - 2nd Nat’l. Conference on Telecommunications (Peru), 17-20 May Arequipa, Peru. http://conatel.ucsp.edu.pe/
♦ IEEE/IFIP IM 2011 - 12th IFIP/IEEE Int’l. Symposium on Integrated Network Management, 23-27 May Dublin, Ireland. http://www.ieee-im.org/

JUNE
♦ IEEE IWQoS 2011 - 18th IEEE Int’l. Workshop on Quality of Service, 5-7 June San Jose, California. http://www.ieee-iwqos.org/
♦ IEEE ICC 2011 - IEEE Int’l. Conference on Communications, 5-9 June Kyoto, Japan. http://www.ieee-icc.org/2011/
• IEEE POLICY 2011 - IEEE Int’l. Symposium on Policies for Distributed Systems and Networks, 6-8 June Pisa, Italy. http://www.ieee-policy.org/
♦ IEEE CAMAD 2011 - IEEE Int’l. Workshop on Computer-Aided Modeling Analysis and Design of Communication Links and Networks 2011, 10-11 June Kyoto, Japan. http://www.nprg.ncsu.edu/camad/
♦ IEEE HEALTHCOM 2011 - 13th IEEE Int’l. Conference on e-Health Networking, Application & Services, 13-15 June Columbia, MO. http://www.ieee-healthcom.org/
• ConTEL 2011 - 11th Int’l. Conference on Telecommunications, 15-17 June Graz, Austria. http://www.contel.hr/
• ICUFN 2011 - 3rd Int’l. Conference on Ubiquitous and Future Networks, Dalian, China. http://www.icufn.org/main/
♦ IEEE CTW 2011 - IEEE Communication Theory Workshop, 20-22 June Sitges, Spain. http://www.ieee-ctw.org
♦ IEEE SECON 2011 - 8th Annual IEEE Communications Society Conference on Sensor, Mesh and Ad Hoc Communications and Networks, 27-30 June Salt Lake City, Utah. http://www.ieee-secon.org/2011/
• IEEE ITMC 2011 - IEEE Int’l. Technology Management Conference, 27-30 June San Jose, CA. http://www.ieee-itmc.org/
♦ IEEE ISCC 2011 - 16th IEEE Symposium on Computers and Communications, 28 June-1 July Kerkyra, Greece. http://www.ieee-iscc.org/2011/
• ICL-GNSS 2011 - Int’l. Conference on Localization and GNSS 2011, 29-30 June Tampere, Finland. http://www.icl-gnss.org/2011/index.php

JULY
♦ IEEE HPSR 2011 - 12th IEEE Int’l. Conference on High Performance Switching and Routing, 4-6 July Cartagena, Spain. http://www.ieee-hpsr.org/
• OECC 2011 - 16th Opto-Electronics and Communications Conference, 4-8 July Kaohsiung, Taiwan. http://www.oecc2011.org/
♦ IEEE ICME 2011 - 2011 IEEE Int’l. Conference on Multimedia and Expo, 11-15 July Barcelona, Spain. http://www.icme2011.org/

AUGUST
• ICCCN 2011 - Int’l. Conference on Computer Communications and Networks 2011, 1-4 Aug. Maui, Hawaii. http://www.icccn.org/ICCCN11/
♦ ATC 2011 - 2011 Int’l. Conference on Advanced Technologies for Communications, 3-5 Aug. Da Nang City, Vietnam. http://rev-conf.org/
• ICADIWT 2011 - 4th Int’l. Conference on the Applications of Digital Information and Web Technologies, 4-6 Aug. Stevens Point, WI. http://www.dirf.org/DIWT/
♦ IEEE P2P 2011 - IEEE Int’l. Conference on Peer-to-Peer Computing, 31 Aug.-2 Sept. Tokyo, Japan. http://p2p11.org/
CONFERENCE CALENDAR (Continued from page 17)
♦ IEEE EDOC 2011 - 15th IEEE Int’l. Enterprise Distributed Object Computing Conference, 31 Aug.-2 Sept. Helsinki, Finland. http://edoc2011.cs.helsinki.fi/edoc2011/

SEPTEMBER
• ITC 23 2011 - 2011 Int’l. Teletraffic Congress, 6-8 Sept. San Francisco, CA. http://www.itc-conference.org/2011
♦ IEEE PIMRC 2011 - 22nd IEEE Int’l. Symposium on Personal, Indoor and Mobile Radio Communications, 11-14 Sept. Toronto, Canada. http://www.ieee-pimrc.org/2011/
• ICUWB 2011 - 2011 IEEE Int’l. Conference on Ultra-Wideband, 14-16 Sept. Bologna, Italy. http://www.icuwb2011.org/
♦ IEEE GreenCom 2011 - Online Conference, 26-29 Sept. Virtual. http://www.ieee-greencom.org/

OCTOBER
• DRCN 2011 - 8th Int’l. Workshop on Design of Reliable Communication Networks, 10-12 Oct. Krakow, Poland. http://www.drcn2011.net/index.html

NOVEMBER
• ISWCS 2011 - 8th Int’l. Symposium on Wireless Communication Systems, Aachen, Germany. http://www.ti.rwth-aachen.de/iswcs2011/
• COMCAS 2011 - 2011 IEEE Int’l. Conference on Microwaves, Communications, Antennas and Electronic Systems, 7-9 Nov. Tel Aviv, Israel. http://www.comcas.org/
♦ MILCOM 2011 - Military Communications Conference, 7-10 Nov. Baltimore, MD. http://www.milcom.org/index.asp

DECEMBER
♦ IEEE GLOBECOM 2011 - 2011 IEEE Global Communications Conference, 5-9 Dec. Houston, TX. http://www.ieee-globecom.org/2011/

2012

JANUARY
♦ IEEE CCNC 2012 - IEEE Consumer Communications and Networking Conference, 8-11 Jan. Las Vegas, NV. http://www.ieee-ccnc.org/

MARCH
♦ IEEE INFOCOM 2012 - IEEE Int’l. Conference on Computer Communications, 25-30 March Orlando, FL. http://www.ieee-infocom.org/2012/
PRODUCT SPOTLIGHTS

Vectron
The solution for applications where there is no room for compromise on g-sensitivity, noise, or stability performance, but small size is required. Ideal for equipment used in harsh environments such as tactical weapons, avionics, and portable equipment. The 508 product family of oscillators offers the industry's leading aging, stability, phase noise, and g-sensitivity in a miniature 9 x 14 mm package. http://www.vectron.com

GL Communications
GL’s PacketProbe™ is an advanced CPE-based VoIP monitoring, reporting, and diagnostic appliance. PacketProbe™ passively monitors VoIP traffic carried over the WAN/LAN, producing per-call and per-stream voice quality metrics. Call Detail Records (CDRs), along with voice quality statistics and other vital diagnostic information, give network managers immediate visibility into service quality, call volumes, and call details. Service providers are able to rapidly drill down and diagnose voice-related issues. Standards-based real-time monitoring, reporting, and diagnostic tools fit seamlessly into any existing standards-based management or reporting environment, such as SNMP and RADIUS. Vital voice call quality statistics, Call Detail Records, and Quality of Service metrics are available at the end of each call. Optionally, GL offers its own monitoring and reporting system, PacketScanWEB™. http://www.gl.com/packetprobe.html
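As an illustration of the kind of per-call voice quality metric such probes report, the sketch below converts an ITU-T G.107 E-model rating factor R into an estimated MOS using the standard conversion formula; this is a generic example, not a description of PacketProbe's internal algorithm.

# Standard E-model (ITU-T G.107) R-factor to MOS conversion, illustrative only.
def r_to_mos(r):
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6

for r in (93, 80, 60, 40):   # from toll quality down to poor
    print(f"R = {r:3d}  ->  estimated MOS = {r_to_mos(r):.2f}")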
Silicon Labs
Learn how to simplify your timing design using glitch-free frequency shifting to address low-power design challenges and the complexity of generating a wide range of frequencies in consumer electronics applications such as audio, video, computing, or any application that requires multiple frequencies. Download this in-depth white paper from Silicon Labs. http://www.silabs.com/frequency-shifting
Omicron Labs
Omicron Labs’ Vector Network Analyzer Bode 100 now provides a 360° view of switched-mode power supplies and voltage regulators. In combination with the Picotest Signal Injector series, all key parameters such as stability, PSRR, cross-talk, reverse transfer, and impedance can be measured easily and accurately. http://www.omicron-lab.com http://www.picotest.com

RSoft Design
RSoft releases version 5.2 of its award-winning Optical Communication Design Suite OptSim and its multimode companion ModeSYS. The latest release features enhanced modeling capabilities for 100G coherent PM-QPSK, polarization-induced all-optical switching, bidirectional Ethernet in the First Mile, interferometric systems, mode-coupling-based multimode systems, and plastic optical fiber systems. http://www.rsoftdesign.com
GL Communications
GL’s MAPS™ (Message Automation & Protocol Simulation) LTE supports scripted LTE simulation with the ability to simulate entities such as the eNodeB (Evolved NodeB), MME (Mobility Management Entity), SGW (Serving Gateway), and PGW (Packet Data Network Gateway). Supported interfaces include S1, S11, and S5/S8 (LTE-S1 and LTE-eGTP). Support for other interfaces such as S4, S11, and S12 is coming soon. The application gives users the ability to edit S1-AP/NAS and eGTP-C (Evolved GPRS Tunneling Protocol for Control Plane) messages and to control scenarios (message sequences). http://www.gl.com/maps-lte.html
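To give a feel for what a scripted scenario (message sequence) of this kind involves, the sketch below walks through a generic LTE attach flow across the simulated entities; it is illustrative Python, not GL's actual scripting syntax or message encodings.

# Conceptual scripted scenario: generic LTE attach message sequence (illustrative only).
ATTACH_SCENARIO = [
    ("eNodeB -> MME", "S1-AP Initial UE Message (NAS: Attach Request)"),
    ("MME -> SGW",    "eGTP-C Create Session Request"),
    ("SGW -> PGW",    "eGTP-C Create Session Request"),
    ("PGW -> SGW",    "eGTP-C Create Session Response"),
    ("SGW -> MME",    "eGTP-C Create Session Response"),
    ("MME -> eNodeB", "S1-AP Initial Context Setup Request (NAS: Attach Accept)"),
]

def run(scenario):
    for step, (path, message) in enumerate(scenario, 1):
        print(f"{step}. {path}: {message}")

run(ATTACH_SCENARIO)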
Global Newsletter
February 2011
IEEE Healthcom 2010: “Ambient Assisted Living” for Better Health
By Norbert Noury, Healthcom 2010 General Chair, University of Lyon, France

Telecommunications and networks are well-recognized enabling technologies for telemedicine applications in remote and rural locations, but they are also increasingly becoming facilitating technologies for continuous health monitoring outside the hospital, for p-health at home, and during various human activities, whether professional, leisure, or sports. A new field of investigation is now possible: continuously collecting health information with context awareness. The field of e-health is also a remarkable melting pot, which gives the opportunity to bring together interested parties from around the world working in the various fields of healthcare and engineering to exchange ideas, discuss innovative and emerging solutions, and develop collaborations around operational projects. The 12th International Conference on e-Health Networking, Application & Services, Healthcom 2010, was held in Lyon, France, on 1–3 July 2010. It was an important forum for discussions on e-health projects sponsored by world bodies such as the European Community (FP6 and FP7 European projects on AAL and e-inclusion, etc.). Each year, a broad variety of topics is presented at Healthcom, addressing the different levels of e-health, from technologies to applications:
•Network and Communications Infrastructures and Architectures for Healthcare Delivery
•New Models for Healthcare Delivery
•e-Health for Public Health
•e-Health for Aging
•m-Health
•Field Applications
•Education
•Ethics
The conference gathered more than 130 attendees from 35 countries, including clinicians, IT professionals, researchers, healthcare solutions vendors, and consultants. About 90 papers were presented, addressing a broad variety of topics within e-health. The submission rate was high: of 122 submitted papers, 62 were selected for presentation and included in the Proceedings, accessible through IEEE Xplore, and six selected papers are to appear in a special issue of the International Journal of e-Health and Medical Communications (Professor Joel Rodrigues, Editor-in-Chief). The first keynote speaker, Professor Louis Lareng, Director of the European Telemedicine Society, reviewed the history of the development of telemedicine in Europe since the 1950s. Dr. Loukianos Gatzoulis, from the European Commission in Brussels, presented the societal and economic goals of the EU in supporting e-health. Jean Schwoerer, from Orange Labs, described the current standardization efforts in the field of
body sensor networks. Professor André Dittmar from CNRS in Lyon demonstrated the need for new ubiquitous body sensors to support e-health efforts and deployments. Healthcom 2010 was organized around three main sessions, “Enabling Technologies,” “Enabling Information Systems,” and “Enabling Applications.” The variety and density of information were very high. Some of the main points follow. Body sensor networks allow data from several sensors to be collected and aggregated in a mobile context. This mobility offers continuous monitoring of patient status, improving patients’ quality of life. Existing femtocellular network resources, already available on site, may be used for rapid provisioning of mobile broadband data connectivity indoors for emergency telemedicine applications. This approach results in a reduction in service outage rates. Remote telemonitoring of elderly people in their own homes is a major challenge in the face of the fast-growing elderly population. Health Smart Homes allow the behavior of a person to be monitored with non-intrusive sensors. The major trends in the activity reflect the global homeostasis of the subject. High-level decision tools are used to classify scenarios of daily living and eventually to build an index of activities of daily living. As these smart homes will benefit people at risk of losing autonomy, disabled people, and elderly people with cognitive deficiencies, it is essential to facilitate their interactions with Smart Homes through dedicated interfaces, such as systems that respond to voice commands. Audio recognition is also a promising way to ensure detection of distress situations. E-healthcare and telemedicine applications, when deployed to provide healthcare to remote locations in developing countries, must carefully take into account not only the existing healthcare and communications facilities, but also the socio-economic conditions of the populations. Telemedicine can also allow deployment of real-time disease surveillance and notification systems in developing countries. Communication technologies, adopting global standards for structured messages (SMS, email, web), will reduce the delays in communicating field data to central epidemiology units, which can therefore detect disease outbreaks in a timely manner and allow health systems to respond effectively and mitigate the consequences for populations. The domain of e-health is currently demonstrating high vitality. It is a living laboratory for cooperation between the fields of health and engineering. It is also a chance to better understand the health of humans in their living contexts. I want to thank Assistant Professor Pradeep Ray, Director of the Asia-Pacific Ubiquitous Healthcare Research Centre (APuHC) at the University of New South Wales, who kindly invited me to organize IEEE Healthcom 2010.
1st FOKUS FUSECO Forum 2010, Berlin, Germany
By Prof. Dr. Thomas Magedanz, General Chair, TU Berlin/Fraunhofer FOKUS, Germany

The First International FOKUS FUSECO Forum (FFF) on “Business and Technical Challenges of Seamless Service Provision in Converging Next Generation Fixed and Mobile Networks” was held in Berlin, Germany, on 14-15 October 2010 and was attended by around 150 experts from industry and academia. The FFF represents a technologically focused follow-up to the famous FOKUS IMS Workshop series, as the future role of the IP Multimedia Subsystem (IMS) in mobile and fixed next-generation networks (NGNs) deserves some critical consideration in view of slow industry adoption and rapidly emerging new control protocols and platforms from the Internet domain. There is no doubt that IMS has consolidated the various views on Session Initiation Protocol (SIP) and Diameter-based NGNs, and many standards have been established for voice over IP (VoIP), rich communications services (RCS), and IPTV. But there is a lot of pressure from emerging over-the-top (OTT) applications originating from the Internet, challenging IMS business case calculations and extensive deployments beyond NGN VoIP. In addition, IMS is limited to SIP-based control; thus, HTTP (Hypertext Transfer Protocol) and other protocol-based applications, which form the majority of current fixed and mobile broadband traffic, are outside its control. Here the 3GPP Evolved Packet Core (EPC) has the potential to emerge as a common control platform for any IP application. From the technical maturity and global recognition points of view, EPC stands today where IMS stood five years ago; thus, it is time to launch a new workshop series to create global awareness in academia and industry about this promising technology. Over its two days, the FFF therefore discussed the relationship between the EPC, IMS, and OTT approaches. The first day started with a technical tutorial about Long Term Evolution (LTE) and EPC standards, and introduced the Fraunhofer HHI LTE-Advanced Testbed and Fraunhofer FOKUS OpenEPC toolkit, which have been integrated into the Future Seamless Communication (FUSECO) Playground, the first open testbed worldwide to unite wireless LANs, 3G networks, LTE, and EPC technologies for early prototyping of new seamless applications. Practical demonstrations, including seamless handovers of Skype and video services with quality of service assurance from the FUSECO Playground and an OpenEPC Release 2 preview, were also provided to the audience. The second day featured a conference with presentations from various international network operators and service providers. A vendor panel session and an associated exhibition presented the state of the art and upcoming products in this field. In the following, more details are provided on both days. The tutorial “Understanding the Next Generation of Mobile Broadband Communications: LTE and EPC Concepts, Architectures, Protocols and Applications” on the first day was presented by Dr. Thomas Haustein, Fraunhofer HHI, and Prof. Dr. Thomas Magedanz, Fraunhofer FOKUS and TU Berlin. It started by pointing out the continuously increasing mobile data traffic demand and motivated the need for LTE, EPC, and IMS technologies in order to allow smooth evolution from existing circuit- and packet-switched mobile networks to a next-generation mobile network. A session about LTE presented details about the radio part, covering standards and architectures, and gave an outlook on LTE-Advanced.
The corresponding network part was covered in the following session by introducing EPC terminology, key concepts, and architecture, as well as the related Third Generation Partnership Project (3GPP) standards. Subsequently, potential applications and related platforms were discussed, including operator IMS platforms for voice over LTE (VoLTE) as well as over-the-top Internet service platforms. An outlook on global Future Internet research and related application areas concluded this tutorial section. The tutorial ended with a presentation of experiences from the Berlin LTE-Advanced testbed, the OpenEPC testbed toolkit, and the FUSECO Playground.
The 1st International FOKUS FUSECO Forum (FFF) was attended by approximately 150 experts from industry and academia.

Demonstrations during the breaks showed current proof-of-concept realizations, including seamless handovers between LTE-A and WLAN, service composition, and transparent mobility. The demonstrations covered essential challenges in the scope of LTE and EPC, and illustrated practical solutions based on these emerging technologies. On day 2 the conference started with a session on “Competing Mobile Broadband Access Network Technologies,” chaired by Thomas Haustein, Fraunhofer HHI, which addressed challenges emerging with the introduction of LTE and EPC. The second session, “Access Network Integration and Service Enabling,” chaired by Hans Schotten, University of Kaiserslautern, addressed technical problems in deploying EPC into existing network infrastructures, in particular IMS roaming and different QoS signaling, as well as the complex IMS and EPC interoperability, which might take until 2014/2015. A vendor panel, “Standards, Products, and Business Cases for Future Seamless Communication,” chaired by Thomas Magedanz, TU Berlin, discussed the LTE business case, voice over LTE, the importance of open application programming interfaces (APIs) for VoLTE and RCS, and so on. The fourth session, “FUSECO Telco Applications: Voice, RCS and More,” chaired by Hans Joachim Einsiedler, Deutsche Telekom Laboratories, addressed mobile broadband services and M2M opportunities, and application challenges regarding fast deployment of web services and slow agreement on interoperability. The overall tenor was that no LTE/mobile broadband killer application is forecast. The last session, “FUSECO OTT Applications: Beyond Smart Bit Pipes,” chaired by Thomas Michael Bohnert, SAP Research, presented opportunities for wholesale and enterprise operators, OTT services, and the usability of LTE for vehicles. VoIP in mobile networks is growing in acceptance, increasingly challenging operators around the globe. As Facebook has now announced interworking with Skype, the question has been raised of using Facebook as the main interface for launching new applications in the future. Alongside the workshops and conference, vendor exhibitions showed 4G subscriber data management/communications as a service; enhancements of mobility management for the 3GPP EPS (smart mobile devices in a dense wireless network environment); IBM software strategy for CSPs (start planning and implementing smarter communications systems); and smart networks for user-centric broadband. In addition, the newest (Continued on Newsletter page 4)
Internet for Everybody in Spain: The 1 Mb/s Universal Service
By Ana Vázquez Alejos, Rafel Asorey Cacheda and Felipe José Gil Castiñeira, University of Vigo, Spain

Universal telecom service is a concept defined at the European level with the objective of guaranteeing all citizens the right of access to a defined set of basic electronic communications services, independent of their geographical location, with a minimum quality and at a reasonable price. Until now, in Spain, only functional Internet access offered over the public wired telephone network, with download speeds of 256 kb/s, was considered part of this service. In October 2009, the Spanish Ministry of Industry conducted a public survey [1] to determine the minimum features required to update the universal Internet service to the growing needs of the Information Society. The survey concluded that broadband access with a bit rate of 1 Mb/s would reconcile the current demands of the Information Society with the requirements to drive the modernization of digital infrastructures. As a consequence, the Minister of Industry announced that, from 2011, the universal service would include a 1 Mb/s downlink broadband connection as a minimum requirement to increase competitiveness in the broadband business. Recently, during CeBIT 2010 in Hannover, Spanish Government President Mr. Rodríguez Zapatero confirmed this aim. The operator in charge of providing the universal service will be selected during 2010. The inclusion of broadband access with a minimum downlink speed of 1 Mb/s puts Spain at the forefront of European digital policies. This action can be considered an epilogue to the AVANZAPEBA plan [2], designed to extend broadband penetration in Spain and deployed by the Government from 2005 to 2008. Now that the plan is over, it is time to measure its achievements before designing the new telecom policy. We can compare the Spanish case to other European countries such as Finland. Traditionally, Finland has been in first place in the race toward telecommunications rights, to the point of including in its constitution the right to a nationwide 1 Mb/s broadband connection provided by any kind of technology, with the hope of reaching 100 Mb/s by 2015. However, it was not Finland but Switzerland that, in 2008, defined broadband access as a connection with uplink/downlink speeds of 100 kb/s/600 kb/s. Thus, Spain will become the third European country to adopt a quantitative definition of broadband access. Despite this, reality and political intentions follow dangerously separate paths. The European Competitive Telecommunications Association (ECTA) Broadband Scorecard is a recognized benchmark used regularly by industry, the European Commission, national regulators, and institutions. Biennially, ECTA collates and publishes data tracking progress on broadband penetration and local loop unbundling in the 25 European Member States [3]. Some of the latest published statistics are plotted in Figure 1. According to this source, the goals do not seem to have been reached: Spain is below the average of European countries in broadband penetration, with a penetration rate of 21 percent, two points under the EU average of 23.5 percent. Along the same lines, a recent report published by the Regional Government of Galicia (northwestern Spain) showed that the situation there is even more critical [4].
The determining factor of this circumstance can be found in the situation experienced by rural areas, where 70 percent of network access is nonexistent or of low quality (under 512 kb/s), even after an investment of €225,000,000 in public funds. The same happens in other Spanish regions, revealing a deep imbalance between urban and rural areas.
Figure 1. Total connections per technology for Spain in April 2009, including radio [2].

Closer insight into the data provided by ECTA reveals that Sweden remains Europe’s fiber leader, with 7.5 percent of the population benefiting from high-speed modern access lines, compared to an average of just 0.4 percent across the EU. Despite “regulatory holidays” for incumbents in countries such as Germany and Spain, the survey showed little evidence of increased fiber deployment in those countries. It also highlights a strong link between effective economic regulation and investment levels in the telecom sector: a regulatory framework helps alternative operators compete against the national incumbent, but it does not in itself encourage investment in new infrastructure, since new deployments require many resources and are only profitable in the long term. Spain’s EU presidency in 2010 would seem to be a suitable moment to support the consideration of broadband Internet as an indispensable service in the Information Society. However, 1 Mb/s may not be enough and can hardly be considered broadband Internet in 2010. Moreover, no one has explained how this service will be provided, since there are large areas in Spain without any kind of Internet access. Most important of all is the need to define how the provision of this service will be financed. Without public funds it will probably not be possible to accomplish these goals, although achieving them would doubtlessly place Spain among the most developed information societies.
References
[1] http://www.mityc.es
[2] http://www.planavanza.es
[3] http://www.ectaportal.com
[4] http://imit.xunta.es/
Distinguished Lecturer Tour of Bhumip Khasnabish in India
By Deergha Rao Korrai, Chair, Communications and Signal Processing Societies Joint Chapter, Hyderabad, India

The Distinguished Lecturer Program is one of the best initiatives of the IEEE Communications Society. It brings distinguished experts to give lectures at Chapters on all continents. A DL tour of Dr. Bhumip Khasnabish, ZTE, United States, was held in India in July 2010. Lectures entitled “Services over IP: Implementation Options and Challenges” and “Converged Services and a New Generation of Networking” were given in India from 9 July to 17 July 2010 with the following schedule: 1) Mumbai, 9 July 2010 (two lectures); 2) Pune, 10 July and 12 July 2010 (two lectures); 3) Hyderabad, 13 July 2010 (two lectures); 4) Kharagpur, 15 July 2010 (one lecture); 5) Kolkata, 17 July 2010 (one lecture). Dr. Khasnabish’s lectures in Hyderabad were organized by the Communications and Signal Processing Societies Joint Chapter of the IEEE Hyderabad Section, and his accommodation and travel within Hyderabad were arranged by TCS Hyderabad. “Services over IP: Implementation Options and Challenges” is a tutorial and was held on 13 July 2010 at the Research and Training Unit for Navigational Electronics (NERTU) auditorium, University College of Engineering, Osmania University, Hyderabad, from 9 a.m. to 1 p.m. There were 55 audience members for the tutorial, including students, research scholars, participants from industry, and faculty of colleges. During this lecture, participants raised issues related to mouth-to-ear delay calculation or estimation, GPS integration, location services, current status and future infrastructure deployment, the speed of IPTV and the status of current compression technology, cooperating multimode devices, net-enabled health services, security issues such as DRM, Skype, and emerging trends. These were well answered by the speaker; furthermore, he emphasized the need for interoperability, standardization, and protocol integration.
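As a brief illustration of the mouth-to-ear delay estimation raised during the tutorial, the sketch below sums a one-way delay budget and compares it with the 150 ms one-way target of ITU-T G.114; the individual component values are illustrative assumptions only.

# Illustrative one-way ("mouth-to-ear") VoIP delay budget (assumed component values).
budget_ms = {
    "encoding + packetization (20 ms frames)": 25,
    "access and core network transit": 60,
    "de-jitter buffer": 40,
    "decoding + playout": 10,
}
total = sum(budget_ms.values())
verdict = "within" if total <= 150 else "exceeds"
print(f"total one-way delay ~ {total} ms ({verdict} the 150 ms ITU-T G.114 guideline)")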
Mr. MGPL Narayana (Hyderabad Section Chair, third from left), Dr. Bhumip Khasnabish (fourth from left), Dr. Deergha Rao Korrai (Chapter Chair, fifth from left), and other IEEE volunteers of the Hyderabad Section after the lecture at the Godavari auditorium of TCS.

“Converged Services and a New Generation of Networking” was held on 13 July 2010 at the Godavari auditorium, TCS, Deccan Park, Hi-Tech City, Hyderabad, from 5 p.m. to 7 p.m. There were 154 audience members for this lecture, including students, research scholars, participants from industry, and faculty of universities and colleges. In this lecture the speaker answered issues raised by participants related to protocols and infrastructure, IETF.org, streaming media, VoIP, IPTV, packet delay, packet losses, acceptable delays (< 150 ms), the difference between Internet phone and IP phone, telemedicine prospects in India, IPv6 adoption, QoS, QoE, WiMAX, LTE, virtual device (embedded) services, and more. The single biggest phenomenon transforming the global telecom industry is convergence. Internet users are now exposed to modes of communication beyond basic voice telephony. Communication now includes pictures and videos, and is not limited to person-to-person communication; communities and user groups are being created, and information exchange is not limited to and from people known to each other. Content is driving service subscription, and the main issues for the communication industry are identifying the right content in the right format at the right cost, and delivering it securely under any kind of business model. Convergence has been visible on the horizon for the last couple of years, but has yet to arrive in developing countries in a major way. The lecture focused on the convergence of communication services such as data, voice, and video in IP-based networks. The speaker stressed the importance of quality of experience (QoE), apart from quality of service (QoS), in IPTV user judgment and acceptance. Experience needs to be preserved as video traffic is transported across IP infrastructure. Service providers need IP-based next-generation network (IP-NGN) infrastructure solutions that are intelligent and video-aware. An outstanding video experience requires excellent solutions in the customer home to decode, decrypt, share, and display the content the way it was intended. In their feedback, the participants expressed satisfaction with the organization of the event, suggesting more time for these kinds of lectures, more coverage of security, and some demonstrations using MATLAB interfacing and LabVIEW.
FOKUS FUSECO FORUM/continued from page 2
toolkits from Fraunhofer FOKUS, the universal client framework myMONSTER TCS (www.opensoaplayground.org/tcs) and the newest release of the OpenEPC toolkit (www.openepc.net), were presented. This new forum will be continued in November 2011 to establish a regular meeting point for international researchers from academia and industry (www.fusecoforum.org/2011).
IEEE Communications Magazine
Special Supplement: Passive Optical Networks
February 2011, Vol. 49, No. 2
www.comsoc.org
A Publication of the IEEE Communications Society
www.comsoc.org/~ci
SPECIAL SUPPLEMENT: ADVANCES IN PASSIVE OPTICAL NETWORKS
GUEST EDITORS: MAHMOUD DANESHMAND, CHONGGANG WANG, AND WEI WEI
S4 CONFERENCE PREVIEW OFC/NFOEC 2011: LEADING THE WAY IN OPTICAL COMMUNICATIONS LYNDSAY BASISTA
S8 CONFERENCE REPORT 36TH EUROPEAN CONFERENCE ON OPTICAL COMMUNICATION FABIO NERI
S12 GUEST EDITORIAL S16 OPPORTUNITIES FOR NEXT-GENERATION OPTICAL ACCESS Next-generation optical access technologies and architectures are evaluated based on operators’ requirements. The study presented in this article compares different FTTH access network architectures. DIRK BREUER, FRANK GEILHARDT, RALF HÜLSERMANN, MARIO KIND, CHRISTOPH LANGE, THOMAS MONATH, AND ERIK WEIS
S25 COST AND ENERGY CONSUMPTION ANALYSIS OF ADVANCED WDM-PONS The authors compare several WDM-PON concepts, including hybrid WDM-PON with integrated per-wavelength multiple access, with regard to these parameters. They also show the impact and importance of generic next-generation bandwidth and reach requirements. KLAUS GROBE, MARKUS ROPPELT, ACHIM AUTENRIETH, JÖRG-PETER ELBERS, AND MICHAEL EISELT
S33 TOWARD ENERGY-EFFICIENT 1G-EPON AND 10G-EPON WITH SLEEP-AWARE MAC CONTROL AND SCHEDULING The authors briefly discuss the key features of 10G-EPON. Then, from the perspective of MAC-layer control and scheduling, they discuss challenges and possible solutions to put optical network units into low-power mode for energy saving. JINGJING ZHANG AND NIRWAN ANSARI
S39 MULTIRATE AND MULTI-QUALITY-OF-SERVICE PASSIVE OPTICAL NETWORK BASED ON HYBRID WDM/OCDM SYSTEM
The authors present a new scheme to support multirate and multi-quality-of-service transmission in passive optical networks based on a hybrid wavelength-division multiplexing/optical code-division multiplexing scheme. The idea is to use multilength variable-weight optical orthogonal codes as signature sequences of a hybrid WDM/OCDM system. HAMZEH BEYRANVAND AND JAWAD A. SALEHI
S45 PASSIVE OPTICAL NETWORK MONITORING: CHALLENGES AND REQUIREMENTS The authors address the required features of PON monitoring techniques and review the major candidate technologies. They highlight some of the limitations of standard and adapted OTDR techniques as well as non-OTDR schemes. MOHAMMAD M. RAD, KERIM FOULI, HABIB A. FATHALLAH, LESLIE A. RUSCH, AND MARTIN MAIER
FOR INFORMATION ABOUT ADVERTISING CONTACT ERIC LEVINE ADVERTISING MANAGER 212-705-8920
[email protected]
CONFERENCE PREVIEW OFC/NFOEC 2011: LEADING THE WAY IN OPTICAL COMMUNICATIONS BY LYNDSAY BASISTA Once again, optical communications and networking professionals from around the world will come together for an intense week of networking, high-quality science, and the most comprehensive exhibit in the industry at the Optical Fiber Communication Conference and Exposition (OFC) and the National Fiber Optic Engineers Conference (NFOEC). This year’s conference will feature hundreds of peer-reviewed technical and invited presentations, a rump session focusing on “green” networking, a special symposium on computer components and architectures, an exhibit hall with more than 500 leading optical communications companies and so much more. Areas that will be highlighted include: Datacom/Data Centers, FTTx/In-home, Next Generation Data Transfer Technology, Photonic Integration and Wireless Backhaul. OFC/NFOEC will be held at the Los Angeles Convention Center in Los Angeles, Calif. from March 6 – 10 with the exposition taking place March 8 – 10.
PLENARY SESSION
This year’s plenary speakers are leaders at the forefront of the optical communications industry. As always, OFC/NFOEC draws speakers who are industry veterans, and their wide range of expertise is certain to be a main attraction at this year’s OFC/NFOEC. The event will take place on Tuesday, March 8 from 8 – 11 a.m. Olivier Baujard, Chief Technology Officer at Deutsche Telekom, has had a career spanning all facets of the telecommunications industry. From his time at France Telecom, where he held engineering and managerial roles, to his 20-year tenure at Alcatel-Lucent, France, where he rose to the level of chief executive officer and chairman, Baujard has been a leader in the ever-changing telecommunications industry. In his current position at Deutsche Telekom, Baujard is responsible for the group-wide engineering, deployment, and operation of fixed, mobile, and carrier infrastructures. During his plenary talk, Baujard will discuss the future data- and media-centric world, with a particular emphasis on the challenges it presents for network operators. Additionally, he will cover network transformation and strategy and practical examples of how to enable efficient broadband. Baujard will also provide insights about Deutsche Telekom and its vision for the future of wireless services. Providing an insider’s perspective on the impact of optical technologies on supercomputing is Alan Gara, IBM Fellow and Blue Gene Chief Architect. Gara’s talk will discuss the challenges of reaching computing at the exaflop level, with a special emphasis on areas where traditional communication solutions will not suffice. Gara has been leading high-performance computing architecture and design efforts at IBM Research since 1999. Before then, Gara worked on the Superconducting Super Collider (SSC) in Texas and at the Large Hadron Collider (LHC) at CERN. He has been the recipient of many prestigious awards for his supercomputing efforts, including two Gordon Bell awards. With a career spanning more than 30 years in the telecommunications industry, Kristin Rinne, Senior Vice President – Architecture & Planning at AT&T, will share with attendees her outlook on the explosive growth of mobile data and how that growth impacts optical networks, including cell site backhaul.
Rinne is responsible for the IT and network architecture and planning for AT&T, including setting the direction for the network, BSS/OSS systems, services, etc. She is also responsible for IPTV and wireless network infrastructure and device technology vendor selection and first office implementation. Prior to joining AT&T, Rinne served as Cingular’s chief technology officer, and before that she was vice president–Technology Strategy at SBC Wireless and managing director–Operations at Southwestern Bell Mobile Systems.
RUMP SESSION
Back for its third year is the always popular Rump Session, an audience-driven discussion designed to be as interactive and dynamic as possible. This year’s Rump Session will be devoted to the question: “Is green networking revolutionary?” Organizers are looking for attendees to discuss whether a more energy-efficient network is crucial to the continued growth of the Internet or whether “green” is merely today’s buzzword for something quite familiar from the past; whether energy bottlenecks will constrain network growth if more attention is not paid to energy efficiency; whether consumers and carriers will pay more to go “green”; and whether “green” networking will reduce costs in the long run. Any audience member is permitted to participate in the discussion and can use slides to further illustrate points (no more than two slides will be accepted). The Rump Session will take place Tuesday, March 8 from 7:30 – 9:30 p.m.
EXHIBIT HALL
The OFC/NFOEC exhibit hall attracts some of the biggest players in the optical communications industry, from fiber equipment and components manufacturers to systems and cable vendors. Participating companies include industry heavyweights like Juniper Networks, JDSU, Huawei, Finisar, Ciena, and Nokia Siemens. The exhibit hall will be the place to see the latest in product demonstrations and innovation while also providing an opportunity for networking with industry leaders. The exhibit hall is also home to a number of programming events and activities. New this year, EXPO Theater II will have an optical networking focus and will feature events such as the Ethernet Alliance Program (panel discussion on the future of high-speed Ethernet and new storage facilities for efficiency and flexibility), the Green Touch Panel Presentation (panel discussion on when the energy crunch will affect optical networks), and the Optical Internetworking Forum (panel discussions on market requirements for the next-generation optical network, considering 400G vs. 1 Terabit speeds, and on building blocks for high-speed, on-demand services).
THE OPTICAL BUSINESS FORUM – TUESDAY, MARCH 8 Optical technologies have become integral to the solutions needed to ensure the speed – and capacity – of transactions can keep pace with the needs of customers in the business world. The Optical Business Forum will discuss the latest advances in optical business services with sessions including: (Continued on page S6)
CONFERENCE PREVIEW (Continued from page S4)
•Who’s Buying Optical Bandwidth Services?
•The Economics & Business Case for Connecting Data Centers
•Carrier Ethernet Exchanges
Additional information can be found online: http://www.ofcnfoec.org/OBF
SHORT COURSES
Short Courses cover a broad range of topic areas at a variety of educational levels. The courses are taught by highly regarded industry experts on subjects such as 40 Gb/s transmission systems, optical transmission systems, photonic integrated circuits, and ROADM technologies. New topics for 2011 include computercom interconnects and data center networking. More information on the Web: http://www.ofcnfoec.org/Short_Courses/
SERVICE PROVIDER SUMMIT – WEDNESDAY, MARCH 9
Also in the exhibit hall is the perennial favorite Service Provider Summit. The Service Provider Summit is geared toward CTOs, network architects, network designers, and technologists within the service provider and carrier sector. The program will include a keynote presentation, exhibit time, and networking time.
Keynote Presentation: The Financial Industry’s Race to Zero Latency and Terabit Networking – Andrew Bach, Senior Vice President and Global Head of Network Services, NYSE Euronext
Panels:
•Evolution to Higher Speed
•What’s Going on in Wireless?
Additional information can be found online: http://www.ofcnfoec.org/Service_Provider_Summit

INVITED SPEAKERS
OFC/NFOEC invited speakers are chosen through a highly selective nominations process to keep attendees at the forefront of optical communications. This year’s exciting lineup of speakers will cover various topics in 14 categories, from Optical Network Applications and Services to Transmission Subsystems and Network Elements.
•Regulation Environments around the World: Impacts on Deployments, Fabrice Bourgart, France Telecom, France.
•Google Optical Network Deployment, Vijay Vusirikala, Google, USA.
•Energy-Efficient Optical Access Network Technologies, Junichi Kani, NTT Access Service Systems Labs, Japan.
•Cloud Computing over Telecom Network, Dominique Verchere, Alcatel-Lucent Bell Labs France, France.
•Scaling Networks in Large Data Centers, Donn Lee, Facebook, USA.
•Optical Networking Trends and Evolution, Christoph Glingener, ADVA Optical Networking, Germany.
A full list of invited speakers is available online: http://www.ofcnfoec.org/Invited_Speakers
MARKET WATCH – TUESDAY, MARCH 8 – THURSDAY, MARCH 10
This three-day series of panel sessions engages the applications and business communities in the field of optical communications. Presentations and panel discussions feature esteemed guest speakers from industry, research, and the investment communities.
Tuesday Panels:
•State of the Optical Industry
•Implications of Converged Wireline Wireless for Network Evolution
Wednesday Panel:
•100G Ecosystem: Enabling Technology and Economics
Thursday Panels:
•Data Center: Traffic and Technology Drivers
•What’s Next for Optical Networking
More information can be found online: http://www.ofcnfoec.org/Market_Watch
WORKSHOPS
Workshops and panel discussions will take place throughout the conference and will cover all topical areas of OFC/NFOEC. Some topics include:
•Beyond 100G, Options and Implications for Today’s Networks – TJ Xia, Verizon, USA; Milorad Cvijetic, NEC; and Frank Chang, Vitesse.
•FTTH Around the World: Today and Tomorrow – Shoichi Hanatani, Hitachi.
•FTTX and Technical Challenges of Emerging Applications: How Do We Keep Up with the Pace of Digital Evolution – David Li, Ligents Photonics.
A full list of workshops and panel discussions is available online at: http://www.ofcnfoec.org/Workshops_and_Panels
ECOC 2010 CONFERENCE REPORT
36TH EUROPEAN CONFERENCE ON OPTICAL COMMUNICATION
BY FABIO NERI, POLITECNICO DI TORINO
The annual European Conference on Optical Communication (ECOC) is the largest conference on optical communication in Europe, and one of the largest and most prestigious events in this field worldwide. ECOC travels from one European country to another each year, and this year it visited Turin, Italy, for the first time, after the two previous successful ECOCs, 2008 in Brussels and 2009 in Vienna. As a not-for-profit conference, ECOC focuses on the dissemination of new research results, the education of engineering and business leaders, and the exposition of cutting-edge optical fiber communication and networking products in the associated exhibition. ECOC brings together professionals in several fields related to optical communications, covering the areas of fiber design, opto-electronic devices, optical transmission systems, optical transport networks, and access technologies, all the way to future optical routing architectures and quantum information applications. Most of the major international telecommunication service providers and optical network system vendors participate and present their most recent developments at ECOC each year. ECOC 2010 was held from 19 to 23 September in Turin, Italy, at the Lingotto Conference and Exhibition Center, former location of Fiat’s first major car factory, built between 1917 and 1920. Lingotto today houses a shopping center, a hotel, an art gallery, a concert hall, and a conference center: a large modern structure designed especially for conventions. The city of Turin is a major business and cultural center in northern Italy. Founded by the Romans over 2000 years ago, it became the first capital of Italy in 1861, playing a leading role in the nation’s history as a city of many vocations: political, social, and cultural. Although it owes its recent development mainly to industry, today it offers a variety of attractions: Roman ruins, Baroque monuments, urban landscapes, world-class museums — like the Egyptian Museum, featuring the second most important collection after that of Cairo — a vibrant cultural life, and several opportunities for sport and recreation. The region is also renowned for its varied and refined cuisine, and the famous wines from the surrounding area. As for the scientific history of the town, important contributions to the early development of fiber optic communications were made by the CSELT laboratories of Telecom Italia, located in Turin.
Delegates at ECOC 2010.
The ECOC 2010 Technical Program co-Chairmen: P. Poggiolini, M. Schiano, and A. Galtarossa (left to right).
Plenary session at the Lingotto Auditorium, designed by Renzo Piano.
September 2010 marked the 33rd anniversary of an important event in the history of optical communications: in September 1977 the first optical communication cable, called COS 2, was laid between two urban exchanges of SIP (now Telecom Italia). ECOC 2010 was the 36th edition of the conference, and confirmed ECOC’s reputation and attractiveness as one of the world’s major forums for discussion of the most recent advances in research, development, and industrial applications of optical communication technologies and networks. The general co-Chairmen of ECOC 2010 were Pierluigi Franco (PGT Photonics), Fabio Neri (Politecnico di Torino), and Giancarlo Prati (Scuola Superiore Sant’Anna and Centro Nazionale Interuniversitario per le Telecomunicazioni — CNIT). The conference was organized by the Stilema company, based in Turin. ECOC 2010 was attended by 1111 delegates from all over the world; the countries with the largest representation were Japan, the United States, Germany, and Italy. As is the ECOC tradition, a major exhibition was co-located with the conference, offering a showcase for the most recent advances in optical communications products. Three hundred companies worldwide participated as exhibitors, and the countries with the largest representation among exhibitors were China and the United States; 3775 exhibition visitors were registered in addition to conference delegates. The exhibition was organized by Nexus Business Media, United Kingdom. (Continued on page S10)
ECOC 2010 was sponsored by Telecom Italia, Cisco, Ericsson, Istituto Superiore Mario Boella, and ZTE, and by the local authorities and agencies Regione Piemonte, Camera di Commercio Artigianato e Agricoltura di Torino, Unione Industriale Torino, and Fondazione CRT. A rich technical program was organized by the technical program co-chairmen Andrea Galtarossa (Università di Padova), Pierluigi Poggiolini (Politecnico di Torino), and Marco Schiano (Telecom Italia). The conference program started on Sunday with 11 well-attended half-day workshops (over 650 delegate badges were collected on Sunday). The opening session was held in the Lingotto Auditorium and featured three world-renowned plenary speakers from industry and academia: Menahem Kaplan (former CTO of Alcatel-Lucent Optics), with a talk on "Technology Opportunities Beyond and Besides 100G"; Masataka Nakazawa (Tohoku University, Japan), with a talk on "Giant Leaps in Optical Communication Technologies Towards 2030 and Beyond"; and John Bowers (University of California at Santa Barbara), with a talk on "Challenges in Silicon as a Photonic Platform." The bulk of the conference program was based on contributed technical papers, carefully selected for oral or poster presentation by an outstanding technical program committee comprising around 100 well-known experts in the field, organized in six subcommittees: "Fibers, Fiber Devices and Amplifiers," "Waveguides and Optoelectronic Devices," "Subsystems and Network Elements for Optical Networks," "Transmission Systems," "Backbone and Core Networks," and "Access Networks and LANs."
Gala dinner at La Venaria Reale royal castle.
Welcome reception, "Taste of Piemonte."

The 245 oral and 131 poster paper presentations were complemented by 38 invited talks from renowned experts, six tutorial presentations, and seven symposia focused on hot topics, organized in 71 sessions running in seven parallel tracks over the four conference days. The poster session on Wednesday afternoon was very popular, with intense informal discussions among authors and other conference delegates. The most recent research achievements and breakthroughs in the field of optical communications were presented in the post-deadline paper sessions at the end of the conference. Within ECOC 2010, the European Physical Society and CLEO Europe-EQEC organized a special CLEO Focus Meeting on New Frontiers in Photonics, aimed at bridging the gap between basic science and optical telecommunications applications. The CLEO Focus Meeting covered five sessions in the ECOC program. Seven hundred ninety-three regular papers and 66 post-deadline papers were submitted to ECOC 2010. The acceptance ratio was 51 percent for regular papers (33 percent for oral presentations and 18 percent for posters) and 27 percent for post-deadline papers. Accepted ECOC 2010 papers are now available on IEEE Xplore, thanks to the technical sponsorship of the IEEE Photonics Society. The ECOC 2010 organization enriched the technical program with a wide range of social events and additional services. A get-together cocktail was offered on Sunday evening, and a welcome reception was held at the Lingotto Conference Center on Monday evening, with around 1000 participants. The gala dinner was organized at the La Venaria Reale royal castle, part of the UNESCO World Heritage, and was a memorable event for over 500 participants. In the closing session the token was passed to the organizers of ECOC 2011 in Geneva, who offered chocolate to participants as a nod to Switzerland's worldwide reputation for chocolate making. ECOC 2010 confirmed the ECOC tradition of high-quality technical contributions and the very active, interactive participation of the lively research community working on optical communications and technologies. The CLEO Focus Meeting, the ECOC conference, and the ECOC exhibition together provided an exciting inter- and multidisciplinary forum for people from basic research, R&D, industry, and telecom operators interested in optical communications. The rich social program and conference services contributed to making ECOC 2010 a memorable edition. More details on ECOC 2010 are available at http://www.ecoc2010.org.
GUEST EDITORIAL
ADVANCES IN PASSIVE OPTICAL NETWORKS
Mahmoud Daneshmand, Chonggang Wang, and Wei Wei
As an ultimate broadband access solution for the future Internet, the passive optical network (PON) brings many advantages, such as cost effectiveness, energy savings, service transparency, and signal security, over other last-/first-mile technologies. Over the past several years we have witnessed significant development and deployment of time-division multiple access (TDMA) PONs, such as IEEE 802.3ah Ethernet PONs (EPONs) and ITU-T G.984 Gigabit PONs (GPONs), to provide high-quality triple-play services for residential users. However, future Internet applications beyond triple-play services (e.g., peer-to-peer [P2P] social networking, online video sharing, grid computing, and mobile Internet), along with their unique traffic characteristics and huge bandwidth requirements, pose big challenges for current PON design and migration. These challenges are in turn driving legacy TDMA PONs toward ultra-high-speed, flexible next-generation PONs such as wavelength-division multiplexed (WDM) PONs, optical orthogonal frequency-division multiplexed (OFDM) PONs, and hybrid WDM/OFDM/TDM PONs. This special issue features recent and emerging advances in PONs. Of the large number of submitted papers, five were selected for this issue; several more accepted papers will be published as the second part of the special issue in the September 2011 issue of IEEE Communications Magazine. The selected articles cover topics including next-generation PON architecture, energy-efficient PONs, layer 2 medium access control (L2 MAC), quality of service (QoS) provisioning in future PONs, and PON monitoring techniques. The first article, "Opportunities for Next-Generation Optical Access," coauthored by Dirk Breuer, Frank Geilhardt, Ralf Hülsermann, Mario Kind, Christoph Lange, Thomas Monath, and Erik Weis, discusses the impact of new business models on network architecture based on a comparison of different optical access network variants. It also provides perspective on access node consolidation for network operators.
One of the PON's advantages is its potential to provide high energy efficiency toward future green communications. The second article, "Cost and Energy Consumption Analysis of Advanced WDM-PONs," contributed by Klaus Grobe, Markus Roppelt, Achim Autenrieth, Jörg-Peter Elbers, and Michael Eiselt, focuses on the analysis of cost and energy consumption of future advanced WDM-PON options. The authors conclude that it is essential to carefully clarify the requirements for next-generation access with regard to per-PON client count and maximum reach. In particular, if the client count does not exceed ~320, and a passive filter-based optical distribution network (ODN) is accepted, the most efficient solution, with regard to both cost and power consumption, is a simple WDM-PON. The third article, "Toward Energy-Efficient 1G-EPON and 10G-EPON with Sleep-Aware MAC Control and Scheduling," co-authored by Jingjing Zhang and Nirwan Ansari, presents L2 techniques, proposing sleep-aware MAC control and scheduling approaches for EPONs. Two sleep-mode control and sleep-aware scheduling schemes are analyzed: sleeping for more than one DBA cycle and sleeping within one DBA cycle. In addition to reducing energy consumption, multirate and multi-QoS provisioning is a critical feature that next-generation PONs must support in order to cater to existing and emerging Internet applications. The fourth article, "Multirate and Multi-Quality-of-Service Passive Optical Network Based on Hybrid WDM/OCDM System," by Hamzeh Beyranvand and Jawad A. Salehi, proposes a new scheme to guarantee multi-QoS in a hybrid WDM/OCDM system. The basic idea is to use multilength variable-weight optical orthogonal codes (MLVWOOC) as the signature sequences of an OCDM system. The code weight and code length of the MLVWOOC are designed based on the characteristics of the requested classes of service. The last article, "Passive Optical Network Monitoring:
Challenges and Requirements," co-authored by Mohammad M. Rad, Kerim Fouli, Habib A. Fathallah, Leslie A. Rusch, and Martin Maier, touches on a different problem. In addition to discussing the challenges and requirements for PON monitoring, it presents a comprehensive review of techniques for in-service monitoring of PONs to detect and localize faults. The authors recommend hybrid techniques as promising solutions for delivering the maintenance and protection functionalities required by current and next-generation PONs. We would like to take this opportunity to thank our reviewers for their effort in reviewing the manuscripts. We also thank the Editor-in-Chief, Dr. Steve Gorshe, for his supportive guidance during the entire process.
BIOGRAPHIES
MAHMOUD DANESHMAND (
[email protected]) is a Distinguished Member of Technical Staff at AT&T Labs Research; executive director of the University Collaborations Program and assistant chief scientist of AT&T Labs; adjunct professor of computer science at the Stevens Institute of Technology; and adjunct professor of electrical engineering at Sharif University of Technology. He has more than 35 years of teaching, research, publication, and management experience in academia and industry, including Bell Laboratories, AT&T Labs, the University of California at Berkeley, the University of Texas at Austin, Tehran University, Sharif University of Technology, the National University of Iran, New York University, and Stevens Institute of Technology.
He has published more than 70 journal/conference papers and book chapters, co-authored two books, given several keynote talks, and served as general chair and TPC chair of many IEEE conferences. His current areas of teaching and research include artificial intelligence, knowledge discovery and data mining, complex network analysis, sensor network and RFID system reliability and performance, and mining of sensor and RFID data. He has Ph.D. and M.A. degrees in statistics from the University of California, Berkeley, and M.S. and B.S. degrees in mathematics from the University of Tehran.

CHONGGANG WANG (
[email protected]) is a senior staff engineer at InterDigital Communications. Before joining InterDigital Communications, he conducted research with NEC Laboratories America, AT&T Labs Research, the University of Arkansas, and Hong Kong University of Science and Technology. His research interests include the future Internet, machine-to-machine (M2M) communications, and wireless networks. He has published more than 80 journal/conference articles and book chapters. He is on the editorial boards of IEEE Communications Magazine, IEEE Network, ACM/Springer Wireless Networks, and Wiley Wireless Communications and Mobile Computing. He has served as a TPC member for numerous IEEE conferences, including ICNP, INFOCOM, GLOBECOM, ICC, and WCNC. He received his Ph.D. in computer science from Beijing University of Posts and Telecommunications in 2002.

WEI WEI [SM] (
[email protected]) is a senior engineer at Ciena Corporation. Before joining Ciena, he conducted research with NEC Laboratories America and the State University of New York at Buffalo. His research interests include cognitive optical networks, network virtualization, and the future Internet. He has published more than 60 journal/conference articles and book chapters. He also has extensive engineering experience in developing and designing broadband optical networks and IP networks. He holds three patents, with five others pending. He has served as a TPC member for several IEEE conferences, including GLOBECOM and ICC. He received his Ph.D. degree in electrical engineering from Shanghai Jiao Tong University.
ADVANCES IN PASSIVE OPTICAL NETWORKS
Opportunities for Next-Generation Optical Access
Dirk Breuer, Frank Geilhardt, Ralf Hülsermann, Mario Kind, Christoph Lange, Thomas Monath, and Erik Weis, Deutsche Telekom Laboratories
ABSTRACT
Next-generation optical access technologies and architectures are evaluated based on operators' requirements. The study presented in this article compares different FTTH access network architectures. Additionally, the impact of new business models on network architectures is discussed.
MOTIVATION
It is expected that in the near future an end user will require much more guaranteed bandwidth than is available today [1]. There is a common understanding that fiber to the home (FTTH) will overcome the bandwidth limitations of today's copper-based and hybrid fiber access solutions (e.g., fiber to the cabinet [FTTCab]). FTTH is seen as the ultimate and most future-proof access solution. In consequence, this means building a completely new access network, thus requiring enormous investment and potentially allowing new business models. In the long run this will enable next-generation optical access (NGOA) networks, where the access network is not limited to the first mile but will potentially extend beyond today's central offices (COs). The target function when optimizing the structure of such a new network is rather simple: satisfy all the needs of the customers while minimizing the total cost of ownership (TCO) for building and operating the whole network. When looking for a cost-optimal structure of the access/aggregation network, the network can be subdivided into different building blocks, and the cost of the majority of these building blocks depends significantly on the number of access sites. This is shown in Fig. 1. In this model the access site is the demarcation point between the access network and the aggregation network, providing active network elements that terminate the access lines, aggregate the traffic, and forward the aggregated traffic via the aggregation network toward the core network. Nevertheless, access and aggregation networks can merge seamlessly when appropriate optical-fiber-based network architectures and related systems are used. Considering the different building blocks, the following interdependencies can be seen.
The costs of the first mile and in-house cabling, used for connecting the customer premises, are mostly driven by the number of customer connections and the customer density in a given area, and are independent of the number of access sites. The costs of the feeder links, which connect the first mile with the access sites, are mostly driven by the length of the feeder links: first due to the building and material costs of the cables, and second due to the rising costs of the access network systems when the reach requirements increase. Since the length of the feeder links will decrease with an increasing number of access sites, the costs of this segment are inversely proportional to the number of access sites. The cost of building and operating the access sites scales with the number of sites. When decreasing the number of access sites, the mean number of customers per site will increase, and in consequence the size of the traffic switches inside the access sites can be increased. The cost of maintaining the network equipment installed in the access sites is also related to the number of access sites: maintaining centralized equipment installed in a small number of access sites will cause shorter traveling times and require less maintenance personnel than distributed equipment installed in a larger number of access sites. The costs of the aggregation links will increase with an increasing number of access nodes. In our model the aggregation link includes the egress interfaces of the switches inside the access nodes, the transmission systems (typically wavelength-division multiplexing [WDM] systems), and the ingress interfaces of the core network equipment. The mean number of customers connected to an access site increases with a decreasing number of access sites, and the number of customers per switch will also increase. A higher number of customers per switch will result in a higher amount of aggregated traffic at the egress of the switch; therefore, the required capacity of the egress interfaces also increases. The cost of these four building blocks contributes significantly to the total cost of the access/aggregation network. When analyzing the sum of the three cost types that depend on the number of access sites, the cost function has a minimum defining the cost-optimal number of access nodes.
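To make this trade-off concrete, the following Python sketch models the per-line cost contributions as functions of the number of access sites and locates the minimum of their sum. All coefficients are invented for illustration only; they are not values from the study.

def total_cost_per_line(n_sites,
                        first_mile=100.0,      # independent of the site count
                        feeder_coeff=2.0e5,    # feeder cost ~ 1 / number of sites
                        site_coeff=0.05,       # site build/operation ~ number of sites
                        aggregation_coeff=0.02):  # aggregation links ~ number of sites
    return (first_mile
            + feeder_coeff / n_sites
            + site_coeff * n_sites
            + aggregation_coeff * n_sites)

candidates = range(100, 10001, 100)
optimum = min(candidates, key=total_cost_per_line)
print("cost-optimal number of access sites (toy model):", optimum)
print("total cost per line at the optimum:", round(total_cost_per_line(optimum), 2))

In this toy model the minimum appears where the falling feeder term balances the rising site and aggregation terms, which is the qualitative behavior illustrated in Fig. 1.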
Figure 1. Principal building blocks of access/aggregation networks and corresponding cost structure as a function of the number of access sites.

The characteristics of the cost function of each network segment are strongly related to the technology deployed in the respective network segment (e.g., copper lines vs. FTTx in the access network). Therefore, it is expected that the cost-optimal number of access nodes for NGOA will differ from the cost optimum in copper-based access networks. Particular attention should be paid to the costs of the feeder link section, since only the feeder link cost decreases with an increasing number of access sites. In copper-based access networks the cost of the feeder link section is strongly related to the pure material cost of the cables, which depends on the length and the cross-sectional area of the copper wires. The maximum transmission distance is limited by the copper line loop resistance and can only be increased by increasing the cross-sectional area of the copper wires or by introducing intermediate repeaters, both of which would drive feeder link costs. In optical access networks the situation is different: thanks to the superior transmission characteristics of optical fibers (high transmission bandwidth and low loss), the interdependency between feeder cable length and feeder section costs is much more relaxed. Furthermore, point-to-multipoint optical access network architectures provide multiplexing of several subscriber lines, which decreases the number of required feeder fibers significantly compared to a copper-like point-to-point architecture. Therefore, we expect that the cost-optimal number of access sites in optical access networks will be below the number of access sites in today's copper-based access networks. These cost considerations affect a number of players who maintain an active interest in future access networks. Even under competition, the deployment and operation of broadband networks is an attractive business area for a still increasing number of players. This includes mobile as well as fixed operators, but also utility companies like energy providers; even construction companies are entering this business environment. This article gives an overview of today's FTTH approaches and the related economics, and an outlook on the evolution
toward network consolidation and the enabling optical access network technologies. In addition, the impact of new business models on network architecture and the requirements resulting from node consolidation concepts are qualitatively discussed.
TODAY'S FTTH APPROACH
FIBER-RICH VS. SHARED APPROACH
FTTH networks can be deployed using different architectures, as shown in Fig. 2. In a point-to-point (PtP) architecture all subscribers are connected to an access node (e.g., a CO) via dedicated fibers. Today's PtP deployments are mainly based on Ethernet technology, using Ethernet switches with a high port density in the access node. The network termination at the subscriber site is realized with media converters (e.g., 100Base-TX ⇔ 100Base-BX) or mini-switches. The PtP architecture with its dedicated fiber connections requires a very high number of fibers in the whole access network, which causes high costs for fiber rollout and handling. In addition, each connection requires two interfaces, which results in a large footprint and high power consumption. In order to reduce the high number of fibers in the access network, point-to-multipoint (PtMP) architectures can be used. A PtMP architecture introduces one or more additional aggregation layers between the subscriber location and the CO. In general, two PtMP architectures can be distinguished: the active optical network (AON) and the passive optical network (PON). The AON is characterized by an active aggregation element (e.g., an Ethernet switch) in the first mile. Figure 2 shows two AON variants. In one AON concept an Ethernet switch is located at the street cabinet (Fig. 2a), whereas in the second variant an Ethernet switch is used at the building location (Fig. 2b). On one hand the AON allows a reduction of the fiber count in the access network compared to a PtP solution; on the other hand it is not able to decrease the number of required interfaces, so it is virtually impossible to reduce the footprint and power consumption.
Figure 2. FTTH architectures and reference points.

In contrast to an AON, PON aggregation is based on passive components such as optical power splitters or WDM (de)multiplexers. Today, typically 32–64 subscribers are connected to one PON port at the optical line termination (OLT). This means that a PON architecture enables fiber reduction as well as optimization of the footprint and power consumption compared to a PtP architecture.
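As a rough back-of-the-envelope illustration (nominal line rates only; framing and protocol overhead are ignored, so this is not the article's dimensioning model), the following snippet shows how a guaranteed per-subscriber rate limits the number of subscribers that can share one PON port:

def max_subscribers(downstream_gbps, guaranteed_mbps):
    # Nominal downstream capacity divided by the guaranteed rate per subscriber.
    return int(downstream_gbps * 1000 // guaranteed_mbps)

# G-PON downstream (nominally 2.5 Gb/s) with a 100 Mb/s guarantee per subscriber
print("G-PON, 100 Mb/s guaranteed:", max_subscribers(2.5, 100), "subscribers")
# For comparison, a hypothetical 10 Gb/s TDM-PON with the same guarantee
print("10G TDM-PON, 100 Mb/s guaranteed:", max_subscribers(10.0, 100), "subscribers")

With a nominal 2.5 Gb/s G-PON downstream and a 100 Mb/s guarantee, this yields the limit of 25 subscribers per PON used in the analysis below.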
AN OPERATOR'S VIEW ON ECONOMICS
This analysis is based on the FTTH architectures depicted in Fig. 2. The scenarios have been modeled based on commercially available technology from leading system vendors. The PtP and AON scenarios are predicated on switched Ethernet technology. The AON switches in the first mile are connected to an Ethernet switch at the CO via optical Gigabit Ethernet links (1GbE or 10GbE). The Ethernet optical network termination (E-ONT) at the subscriber site is linked to an AON switch via single-fiber Ethernet lines (100Base-BX, 1000Base-BX). Two transmission data rates have been considered for the PtP and AON scenarios: 100 Mb/s and 1 Gb/s. The PON scenario has been modeled with a G-PON system and a splitting ratio of 1:32, enabling data rates of 2.5 Gb/s (downstream) and 1.25 Gb/s (upstream). The results refer to a dense urban service area with one CO, 100 street cabinets, 2000 buildings, and 16,000 subscribers. This service area is a brownfield area with a high number of empty ducts that can be used for fiber rollout. The analysis assumes a zero-touch deployment. This means that the fiber infrastructure and the system equipment of the first mile are deployed at the beginning for final demand, whereas the ONTs and the system equipment at the CO location are deployed demand-oriented.
All scenarios allow a guaranteed downstream data rate of 100 Mb/s. This means that a maximum of 25 subscribers can be connected to one G-PON. A peak data rate of 1000 Mb/s at the User Network Interface (UNI) of the ONT is supported by the G-PON and by the PtP/AON solutions with a transmission data rate of 1 Gb/s. The PtP/AON scenarios with a transmission data rate of 100 Mb/s allow a peak data rate of 100 Mb/s. The techno-economic analysis considers the cost of the active system technology and of the fiber infrastructure including installation. The fiber infrastructure takes into account the civil works (digging), fiber cables, optical passive splitters, the optical distribution frame (ODF) at the CO, outdoor cabinets, and power for the active technology in the field. The system technology has been modeled on the basis of the price information provided by the vendors. Table 1 shows, as an example, the relative price information for the G-PON and the PtP/AON solutions with a transmission data rate of 1 Gb/s. The differences between prices for the same network element from different vendors (e.g., a pluggable 10GbE transceiver) can be explained by the vendors' different business strategies. Figure 3a shows the total cost per line over the number of subscribers for the FTTH architectures described in Fig. 2. The results are normalized to the total cost per line of the G-PON solution at the end of the rollout (16,000 subscribers). The chart shows that G-PON results in the lowest total cost per line, independent of the number of connected subscribers.
Table 1. Relative price information.

Network element | Description | Relative price
GPON
  ONT | Data only | 1.00
  CO OLT | GPON OLT with 16 PON card slots | —
    Basic costs | incl. chassis, fan, power supply, switch fabric | 78.67
    Optics uplink | 10GBASE-LR X2 module | 11.00
    Line card uplink | 2 × 10GbE | 6.91
    PON card | 4 × G-PON ports incl. class B optics | 80.00
PtP with GbE interface
  ONT | Data only | 0.87
  CO switch | Ethernet switch with 8 line card slots | —
    Basic costs | incl. chassis, fan, power supply, switch fabric | 179.66
    Optics uplink | 10GBASE-LR X2 module | 20.26
    Line card uplink | 6 × 10GbE | 126.57
    Optics downlink | 1000BASE-BX | 6.58
    Line card downlink | 48 × 1000BASE-X | 83.53
AON with cabinet switch and GbE interface
  PtP ONT | Data only | 0.87
  CO switch | Ethernet switch with 8 line card slots | —
    Basic costs | incl. chassis, fan, power supply, switch fabric | 151.89
    Optics (up and down) | 10GBASE-LR X2 module | 20.26
    Line card | 4 × 10GbE | 101.28
  Cabinet switch | Ethernet switch with 5 line card slots | —
    Basic costs | incl. chassis, fan, power supply, switch fabric | 130.29
    Line card downlink | 48 × 1000BASE-X | 83.53
    Optics downlink | 1000BASE-BX | 6.58
    Optics uplink | 10GBASE-LR X2 module | 20.26
AON with basement switch and GbE interface
  PtP ONT | Data only | 0.87
  CO switch | Ethernet switch with 11 line card slots | —
    Basic costs | incl. chassis, fan, power supply, switch fabric | 313.93
    Optics uplink | 10GBASE-LR X2 module | 20.26
    Line card uplink | 4 × 10GbE | 101.28
    Optics downlink | 1000BASE-BX | 6.58
    Line card downlink | 48 × 1000BASE-X | 126.59
  Basement switch | Ethernet switch | —
    Basic costs | 12-port 1000BASE-X Ethernet switch | 40.48
    Optics (up and down) | 1000BASE-BX | 6.58
Figure 3. a) Total costs per line (equipment plus infrastructure) for different FTTH architectures; b) energy consumption of different FTTH variants in a dense urban service area (without ONT). Variants compared: 1) PtP with GbE interface (UNI); 2) PtP with 100BT interface; 3) AON with cabinet switch and GbE interface; 4) AON with cabinet switch and 100BT interface; 5) AON with basement switch and GbE interface; 6) G-PON with GbE interface.
Especially in the economically sensitive initial deployment period with a low number of subscribers, the total cost per line decreases very fast due to the high level of sharing in the PON architecture. Although the system costs of the PtP solutions are lower for small subscriber numbers (less than 250–750 users per service area), the total costs are higher due to the infrastructure effort (fibers). For 100 percent coverage, the total cost per line of the AON scenario with an Ethernet switch at the street cabinet (100 Mb/s transmission rate) is about 1.5 times as expensive as the G-PON solution. The total cost per line of the AON scenario with an Ethernet switch at the street cabinet and a transmission rate of 1 Gb/s is almost 2.3 times as expensive as the G-PON variant. The worst result was calculated for the AON scenario with an Ethernet switch at the building location (Fig. 2b): for 100 percent coverage the total costs per line are about 2.7 times those of the G-PON solution. This is mainly caused by the high cost of the building switch, which has to fulfill network operator requirements; its cost cannot be compared with that of a simple LAN switch. At the beginning of an FTTH deployment with a low number of subscribers, the PtP solution has a lower total cost per line than the AON variant because the AON equipment in the field causes a very high initial investment due to the zero-touch rollout: the optical interfaces of the PtP solution are deployed demand-oriented, whereas the AON equipment is deployed at the beginning for 100 percent coverage. Another important aspect in the comparison of different FTTH architectures is energy consumption, since over the lifetime of a deployed technology it is a major contributor to the operational expenditures and also has a direct environmental impact.
The energy consumption has been analyzed for the considered FTTH architectures (Fig. 2). It has been modeled on the basis of the power consumption per port for different interface types and network elements, as shown in Table 2. These values have been extracted from vendor data sheets. They include the energy consumption of the transceiver and the appropriate share of the line card and basic components (chassis, switch fabric, uplink, etc.). The difference between the power consumption of interfaces of the same type can be explained by the fact that a node with a high port density is more energy efficient than a node with a low port density. Figure 3b shows the maximum energy consumption of the different FTTH architectures for one dense urban service area with 100 percent coverage, neglecting the ONT at the customer site. It is assumed that all subscribers are connected. The PtP deployment with a Gigabit Ethernet customer interface was used as the reference case. The G-PON solution needs about 84 percent less energy than the reference. It is also evident from Fig. 3b that a PtP deployment with a reduced bandwidth of 100 Mb/s would not lead to significant power savings. In the AON scenarios the power consumption would even increase due to the field-deployed active switch technology; moreover, in the AON case a speed reduction has almost no influence on power saving.
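A hedged reconstruction of the kind of per-subscriber comparison behind Fig. 3b can be made from the per-port values in Table 2. The equal sharing of one OLT port among a 1:32 splitter group is a simplifying assumption introduced here, and the AON entry deliberately counts only the cabinet port (CO side and uplinks are ignored):

# Per-subscriber power estimate using the per-port values from Table 2.
PORT_POWER_W = {
    "PtP GbE (CO switch)": 4.4,        # one 1000BASE-BX CO port per subscriber
    "AON GbE (cabinet switch)": 4.8,   # cabinet port only; uplinks ignored
    "G-PON OLT port": 22.3,            # shared by one splitter group
}
GPON_SPLIT = 32                        # assumed subscribers per G-PON port

per_subscriber = {
    "PtP GbE": PORT_POWER_W["PtP GbE (CO switch)"],
    "AON GbE (cabinet only)": PORT_POWER_W["AON GbE (cabinet switch)"],
    "G-PON (1:32)": PORT_POWER_W["G-PON OLT port"] / GPON_SPLIT,
}

reference = per_subscriber["PtP GbE"]
for name, watts in per_subscriber.items():
    change = 100 * (watts / reference - 1)
    print(f"{name:24s} {watts:5.2f} W per subscriber ({change:+5.1f}% vs. PtP GbE)")

The simplified G-PON estimate (about 0.7 W per subscriber versus 4.4 W for PtP GbE) lands close to the roughly 84 percent saving quoted above, while the AON entry understates its real consumption because the field deployment and uplink shares are not counted.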
EVOLUTION TOWARD NETWORK CONSOLIDATION
TOPOLOGY SCENARIOS
When the total number of access sites is reduced in the course of deploying optical access network technologies, new (NGOA) service areas
have to be identified according to a number of boundary conditions:
• Economic considerations: a cost-optimal cut of service areas has to be identified.
• The maximum reach of the access network technologies.
• The limited number of fibers that can be terminated in the remaining CO buildings. This refers mainly to the available capacity of the house lead-ins and feeding trunks, as well as the available floor space for the optical distribution frame and the active equipment, mainly the OLT devices.
• Resilience requirements: the maximum number of subscribers connected to a certain CO may be limited.
To identify potential new service areas formed by consolidating existing traditional service areas, we used a clustering algorithm to develop four exemplary network scenarios differing in the number of NGOA service areas. We used a reference model for Germany, and in all scenarios we started from 8000 COs and reduced the remaining number to 500, 1000, 2000, and 4000 COnew, respectively. Obviously, other related parameters, like the number of households connected to a certain access node or the covered area per NGOA service area, vary between the scenarios as well. The whole NGOA service area is served by the COnew, which now comprises several traditional service areas: one directly linked to the COnew and the others remotely linked to the COnew by elongated feeder links. The feeder links interconnect the former COs in the remote service areas to the COnew. The length of these feeder fibers depends on the number of COnew, and thus on the degree of CO consolidation, and is up to about 40 km in most cases for working and backup paths (Fig. 4a). Figure 4a shows the feeder length and demand as a function of the degree of node consolidation, where solid lines represent the shortest working path and dashed lines the shortest protection path. Distances above 40 km are observed only in the case of a very high degree of node consolidation (fewer than 500 COnew, i.e., more than 95 percent node consolidation) and when considering the most distant customer premises (95 percent quantile). Since Fig. 4a shows only the additional feeder link length caused by the consolidation of the CO sites, the length of the subscriber lines in the first mile (inside the traditional service area) must be added when estimating the total required reach budget of the access line system. In Germany the subscriber line length is typically below 5 km. Besides the length of the feeder links, the fiber demand in the feeder link section is also an important evaluation criterion, since this parameter impacts the cost of the cable infrastructure. In Fig. 4b the amount of feeder fiber needed for connecting 100 percent of all households is shown for all scenarios, assuming bidirectional transmission on a single fiber and different splitting ratios. We use the splitting ratios 1:1 (point-to-point systems), 1:32 (the widely used GPON splitting ratio), and 1:512 (a feasible splitting ratio for NGOA systems).
Table 2. Assumed energy consumption per interface.

Interface type | Network element | Port density | Energy consumption per port (W)
1000BASE-BX | CO switch | High | 4.4
1000BASE-BX | Cabinet switch | Medium | 4.8
1000BASE-BX | Building switch | Low | 6.7
100BASE-BX | CO switch | High | 4.3
100BASE-BX | Cabinet switch | Medium | 4.8
G-PON OLT | CO switch | Low | 22.3
The ordinate is on a logarithmic scale, and it is obvious that the fiber demand for point-to-point systems is more than two orders of magnitude higher than for systems providing a 1:512 splitting ratio.
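The scaling behind Fig. 4b can be approximated with a short calculation; the nationwide household count used here is an assumed round figure for illustration (not study data), and fibers per node are counted rather than fiber-kilometers:

import math

HOUSEHOLDS = 40_000_000               # assumed nationwide household count
ACCESS_NODES = [500, 1000, 2000, 4000]
SPLITS = [1, 32, 512]                 # PtP, GPON-like and NGOA-like splitting ratios

for nodes in ACCESS_NODES:
    per_node = HOUSEHOLDS / nodes
    fibers = {s: math.ceil(per_node / s) for s in SPLITS}
    print(f"{nodes:4d} nodes: " +
          ", ".join(f"1:{s} -> {fibers[s]:,} fibers/node" for s in SPLITS) +
          f"  (PtP/512-split ratio ~{fibers[1] / fibers[512]:.0f}x)")

Independent of the consolidation degree, the point-to-point case needs roughly 500 times more feeder fibers per node than the 1:512 case, consistent with the more-than-two-orders-of-magnitude gap visible in Fig. 4b.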
POTENTIAL TECHNOLOGIES
Technologies for NGOA networks must be able to connect subscribers by optical fibers over typical distances ranging from several hundred meters up to about 60 km or even more (taking protection scenarios into account), at high bit rates and with high splitting ratios of up to 1:1024. There are different options to meet these requirements: dedicated point-to-point fiber links from the CO to every subscriber, AONs with intermediate active equipment in the field, and PONs relying on a fully passive optical outside plant with power splitters. Additionally, these options can be realized in different ways regarding topology, architecture, and technology [2]. Incumbent network operators are very much in favor of PON-based FTTH access networks due to their cost advantages in large-scale network deployments with inherited passive infrastructure (as also shown by the detailed cost considerations for current PON architectures in the previous section). In the case of PONs, the optical fiber as the transmission medium is shared between multiple users. There are several principal options for ensuring the necessary multi-user access: time-division multiple access (TDMA), frequency-division multiple access (FDMA), and code-division multiple access (CDMA), whereby, in principle, FDMA encompasses both wavelength-division multiple access (WDMA) and orthogonal FDMA (OFDMA) as the optical technologies discussed in the PON environment. In addition, besides their multi-user access property, some of these options can be seen as efficient modulation formats, as currently discussed in the case of OFDM(A). Today's gigabit-class PON technology is based on time-division multiplexing (TDM) and a passive power splitter in the field in order to collect a number of subscribers into a single OLT port. Higher-rate TDM-PON systems have been considered for the next generation of access networks; they rely on the same optical distribution network as the gigabit-class PON systems.
Figure 4. a) Feeder link length (average values, 80 percent and 95 percent quantiles, for the shortest working path and the shortest disjoint backup path); b) feeder fiber demand (for 1:1, 1:32, and 1:512 splitting ratios), both as functions of the access network node consolidation degree.
PON systems with a downstream data rate of 10 Gb/s have been standardized by the IEEE and by the Full Service Access Network (FSAN) group and the ITU Telecommunication Standardization Sector (ITU-T) [3, 4]. The 10 Gigabit Ethernet PON (10G-EPON) specification has been finalized by the IEEE 802.3av task force, and the 10-Gigabit-capable PON (XG-PON) [5] has been specified by the ITU-T within the G.987 Recommendation series. Long-reach extensions of TDM-type PONs utilize intermediate reach extenders that amplify or regenerate the signal; for example, amplified long-reach GPON and 10G-PON systems have been demonstrated. Another option for obtaining longer reach is the WDM-type PON, which partly relies on arrayed waveguide gratings (AWGs) as distribution elements with less attenuation than power splitters in the field. However, whether WDM-PONs can reach the resource allocation flexibility and the cost targets necessary for mass-market deployments, compared to TDM-PONs, is questionable from today's point of view. Hybrid PONs combining WDM and TDM approaches seem to be promising solutions for obtaining long-reach, high-rate, and high-split access systems. Further interesting and promising PON approaches currently under investigation are OFDM PONs and OCDM PONs (optical CDMA). These are intensely discussed in the research community; they are at an early stage of development, and laboratory tests using demonstrator setups have been reported. In conclusion, there are different and very promising technology solutions for optical access systems; the main question is how these different access technology options suit the operators' need to provide cost-efficient and reliable broadband access over long periods of time.
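To see why reach extenders, AWG-based WDM-PONs, or hybrid schemes become necessary at the reach and split figures discussed above, a rough power-budget estimate helps; the loss values and the 35 dB budget below are generic textbook assumptions, not numbers from this article:

import math

def splitter_loss_db(split_ratio, excess_db=0.5):
    # Roughly 3.5 dB per 1:2 splitting stage plus a small excess loss.
    return 3.5 * math.log2(split_ratio) + excess_db

def link_loss_db(length_km, split_ratio, fiber_db_per_km=0.35, connectors_db=1.5):
    return length_km * fiber_db_per_km + splitter_loss_db(split_ratio) + connectors_db

BUDGET_DB = 35.0                       # assumed available optical budget
for split in (32, 512, 1024):
    for reach_km in (20, 60, 100):
        loss = link_loss_db(reach_km, split)
        verdict = "fits" if loss <= BUDGET_DB else "needs reach extender / WDM ODN"
        print(f"1:{split:<4d} over {reach_km:3d} km: {loss:5.1f} dB -> {verdict}")

With a pure power-split ODN, a 1:1024 split alone consumes roughly the whole assumed budget, so long-reach, high-split systems need either optical amplification in the field or lower-loss wavelength-selective distribution elements.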
IMPACT OF NEW BUSINESS MODELS
GENERAL BUSINESS ENVIRONMENT
NGOA, as defined in this article, assumes an FTTH network. In practice, this might not be the network available in the coming years. According to market data from the Organization for Economic Cooperation and Development (OECD), current digital subscriber line (DSL) coverage is about 88 percent of the population [6]. The same report outlines that the average coverage of broadband cable is about 60 percent, but this differs across countries in Europe, and not all cable networks have been upgraded yet. In addition, mobile, wireless, and satellite networks are present today with coverage of 80 percent. Fiber networks are growing but, apart from a few countries, are not yet available to a large part of the population. So, starting from today, a number of networks are available, and fiber will have to increase its market share in order to become the relevant base infrastructure for NGOA. Comparing today's situation with operators' announcements in detail again yields a mixed picture. Most incumbents in large European countries have connected only a few households with fiber. BT wants to connect 2.5 million homes by 2012, Deutsche Telekom has announced 4 million homes for the same time frame, and Telecom Italia targets FTTx for 13 percent of households in 2013 and FTTH for 15 percent in the long term, to name just some of the plans. In addition, a number of smaller players will start to deploy their own networks, but those numbers are so far relatively small compared with the total population. Extrapolating these announcements to the 2015 or 2020 time frame (under the assumption that the targets will be accomplished), the authors assume that 20–30 percent of all households will be covered by fiber in 2015 and 30–50 percent in 2020.
Political interest and regulation are demanding certain coverage levels, and this might increase deployment speed [7]. Other important aspects are changes in business relationships. In copper networks the incumbent built the network and was forced either to provide appropriate wholesale products and/or to open the infrastructure. Nonetheless, the only available systems and infrastructure were provided by one player. According to the announcements, the infrastructure business will be more divided in the future, not only in terms of technology but also in terms of ownership. So the business environment might need new rules and mechanisms for working together and coexisting. This will include the organization of everyday processes like provisioning, and forums with the aim of developing standards for the technology to be deployed. For example, in Sweden alternative network operators have already established such an organization, the Swedish Urban Network Association; in the United Kingdom BT Openreach is operating a platform called the Equivalence Management Platform (EMP: Openreach's trading platform for Local Loop Unbundling). In addition, the trust relationship between the new alternative operators and the incumbents will change, as the incumbent will no longer be the primary infrastructure provider. Overall, it can be seen that the situation is split along a number of aspects and directions, and a business framework for cooperation in NGOA networks is needed.
BUSINESS CASE ASPECTS FOR AN NGOA PROVIDER
The previous section outlined that there will be competition from different infrastructures and technologies. On the other hand, the active infrastructure of xDSL is no longer provided by a single player on a per-country basis, and it can be assumed that this competition will still be present in the NGOA case, at least once a player reaches significant market power and regulation comes into play. An important aspect here is cooperation, whether forced (in the case of regulation) or voluntary (e.g., sharing active infrastructure with other providers). In any case, this demands a business relationship that provides trust between the different parties so that they can work together on a well-defined basis. In addition, competition might be indirectly increased by infrastructure providers, who face the highest capital expenditures when deploying optical networks and are therefore looking to share these costs and generate revenues as fast as possible. Extending the viewpoint from an NGOA provider to a combined service and/or infrastructure provider/operator (as incumbents typically are), the analysis becomes even more complicated and will depend more on country specifics.
DISCUSSION OF NGOA REQUIREMENTS AND ECONOMIC OUTLOOK
TECHNICAL REQUIREMENTS
NGOA networks are intended to allow operators to bring down network and production costs while at the same time ensuring high
network quality, availability, security, and significantly increased bandwidth per user. A number of requirements are aligned with these aspects. It is expected that the data rate demand will continue to grow over the next decades [8]. This will drive peak rates per user to at least 1 Gb/s or more and committed data rates to above 0.3 Gb/s, and it will force much more symmetry between downstream and upstream traffic [1]. As shown, structural network changes will be the key to optimizing the network and bringing down costs. Merging access and aggregation networks into a simplified NGOA network will lead to cost savings resulting from better utilization of network resources (e.g., interfaces, fibers, aggregation nodes) and from significantly reducing the number of aggregation network elements, thus avoiding expensive signal adaptations. This restructuring of the network, however, will require much higher access reach, up to 100 km, and a high customer concentration per fiber (e.g., 1024). Consequently, effective per-cable-route redundancy concepts and protection mechanisms will be required for service and network availability. Over the next years significant FTTH deployments are expected. NGOA systems have to work on existing first-mile fiber infrastructures without requiring changes to the deployed infrastructure and components. Migration to the NGOA network should not affect running services, already deployed systems, or spectrum in use. Another key to further cost savings is common access for residential customers, business customers (small and medium enterprises), and mobile radio backhaul on a single NGOA network, avoiding duplicated network operations and network resources. Efficient use of network resources furthermore requires suitable quality of service (QoS) mechanisms, resource control, and management functions to address the requirements of different user types and of mobile backhaul, as well as functions enabling efficient content distribution (e.g., multicast). A major effort is required to bring down network operation costs. On the systems and architecture side, much higher energy efficiency is expected in NGOA networks. For network operation itself, a zero-touch network is expected: highly automated support functions for basic operational processes such as provisioning, maintenance, and fault management need to be developed that minimize manual intervention and increase process efficiency. A customer network termination (NT), for example, should allow do-it-yourself (DIY) installation (plug and play) and be customer-unspecific (colorless) to simplify logistics. Changes in the service portfolio or the access product should be made through a self-service approach enabled by auto-configuration functions in the network. Functions for easy, fast, and efficient maintainability and restorability are needed (e.g., seamless software upgrades without service interruption, and end-to-end service/traffic performance monitoring per customer and service).
Resiliency, including automatic reconnection through redundant network concepts, should minimize the effect of failures; but to prevent and clear failures, suitable fault management, supported by optical diagnosis and measurement solutions for fault detection and localization up to the home, is necessary. There is a variety of requirements for NGOA networks. The challenge is therefore to define and select the requirements in a way that finds the right balance, enabling cost optimization and establishing a network with minimum TCO.
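A simple dimensioning check, using the per-user figures quoted in this section and deliberately assuming no statistical multiplexing (so it is a pessimistic upper bound), illustrates why these requirements push feeder capacities far beyond single-wavelength TDM systems:

import math

USERS_PER_FIBER = 1024        # high customer concentration per fiber
COMMITTED_GBPS = 0.3          # committed data rate per user
PEAK_GBPS = 1.0               # peak data rate per user

committed_aggregate = USERS_PER_FIBER * COMMITTED_GBPS
peak_aggregate = USERS_PER_FIBER * PEAK_GBPS

print(f"Committed aggregate per feeder fiber: {committed_aggregate:.0f} Gb/s")
print(f"Worst-case peak aggregate: {peak_aggregate:.0f} Gb/s")
print("10 Gb/s wavelengths for the committed load alone:",
      math.ceil(committed_aggregate / 10))

Even the committed load alone exceeds 300 Gb/s per feeder fiber in this rough estimate, which is one way of seeing why WDM-based NGOA options receive so much attention.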
BUSINESS CONSEQUENCES FOR TECHNICAL ASPECTS
With respect to technology, there are many options for establishing next-generation access networks. In this section some points are highlighted that relate to the interplay between business and technology aspects. Provided that there is competition and open access in next-generation access networks, it will be important to consider the market share and the related number of customers in order to fill possible long-reach PONs with high splitting ratios. Technically, it would be possible to extend the access reach up to 100 km, but for business needs and the open access model, new interfaces (possibly active ones) are necessary and may run contrary to the node consolidation approach. These aspects are important for cost-efficiently establishing and operating access networks in changing business environments. From the operators' point of view, one main requirement is being able to upgrade either bandwidth or systems on the existing fiber infrastructure for as long as possible, since the infrastructure represents the bulk of the investment. Another point is replacing systems while the network is essentially in operation; this needs to be considered from the beginning. In consequence, cost efficiency in network installation and operation is key for network operators, and it requires finding a balance between purely technical and business problems. Bringing all these problems together is still an open issue from today's point of view and requires further research and in-depth investigation, which will be conducted to a certain extent in current FP7 projects like OASE.
ACKNOWLEDGMENTS
The work leading to these results has received funding from the European Union's Seventh Framework Program (FP7 2007/2013) under grant agreement no. 249025 (project Optical Access Seamless Evolution, OASE).
REFERENCES
[1] Analysis Mason, "Fibre Capacity Limitations in Access Networks," report for OFCOM, Jan. 2010.
[2] Optical Access Seamless Evolution (OASE), "Survey of NGOA System Concepts," FP7/2007–2013, deliverable D4.1; http://www.ict-oase.eu/.
[3] J. Kani et al., "Next-Generation PON Part I: Technology Roadmap and General Requirements," IEEE Commun. Mag., vol. 47, no. 11, 2009, pp. 43–49.
[4] F. Effenberger et al., "Next-Generation PON Part II: Candidate Systems for Next-Generation PON," IEEE Commun. Mag., vol. 47, no. 11, 2009, pp. 50–57.
[5] F. Effenberger et al., "Next-Generation PON Part III: System Specifications for XG-PON," IEEE Commun. Mag., vol. 47, no. 11, 2009, pp. 58–64.
[6] A. Díaz-Pinés, "Indicators of Broadband Coverage," OECD, DSTI/ICCP/CISP(2009)3/FINAL.
[7] Booz & Company, "NGNBN Case Studies: Next Generation National Broadband Network Country Profiles," July 2009; http://www.booz.com/media/file/NGNBNCountry-Profiles.pdf
[8] Cisco Systems, "Cisco Visual Networking Index: Forecast and Methodology, 2009–2014," San Jose, CA, 2009; http://www.cisco.com/en/US/solutions/collateral/ns341/ns525/ns537/ns705/ns827/white_paper_c11-481360.pdf.
BIOGRAPHIES
DIRK BREUER (
[email protected]) received Dipl.-Ing. and Dr.-Ing. degrees in electrical engineering from the Technical University of Berlin in 1993 and 1999, respectively. Since joining Deutsche Telekom Laboratories he has mainly been concerned with developing optimization strategies for the optical transport network of Deutsche Telekom. In recent years he has mainly been involved in developing upgrade strategies toward next-generation broadband access networks.

FRANK GEILHARDT (
[email protected]) received a Dipl.-Ing. (FH) degree in telecommunication engineering from the University of Applied Sciences of Berlin in 2001. Afterward he started to work in the Fixed Access Network Group of T-Systems Nova GmbH, which was merged into T-Systems Enterprise Services GmbH. In 2009 he joined Deutsche Telekom Laboratories in Berlin. Since joining T-Systems he has mainly been concerned with access network evolution, including techno-economic assessments as well as the operation of access technology demonstrators.

RALF HÜLSERMANN (
[email protected]) received a Dipl.-Ing. (FH) degree in telecommunication engineering from the University of Applied Sciences of Leipzig in 2001. Since joining Deutsche Telekom AG in 2001 he has been concerned with architectures for optical transport networks. Currently he is with Deutsche Telekom Laboratories, where he is mainly involved in planning and optimizing optical access networks with a focus on techno-economic assessment.

MARIO KIND (
[email protected]) received a Dipl.-Inf. (FH) degree in communications and information technology from Deutsche Telekom Hochschule für Telekommunikation Leipzig (FH), University of Applied Sciences (HfTL). He is employed as an expert in the area of broadband access networks. His main working area is the economic evaluation of business, technology, and service trends in the Internet, telecommunication, and broadcast industries. He is author or co-author of several papers published in international telecommunications conferences and journals.

CHRISTOPH LANGE (
[email protected]) is with Deutsche Telekom Laboratories, the research department of Deutsche Telekom AG, Berlin, Germany. He received a Dipl.-Ing. degree (diploma) in electrical engineering and a Dr.-Ing. degree (Ph.D.) in communications engineering from the University of Rostock, Germany, in 1998 and 2003, respectively. His current research interests include broadband networks, emphasizing access network topics as well as the energy consumption and sustainability of telecommunication networks.

THOMAS MONATH (
[email protected]) received his Diplom-Ingenieur (M.S.) degree in communication engineering from the University of Rostock in 1997. After finishing his studies he joined Deutsche Telekom. He works as a senior expert in techno-economics of telecommunication networks and as a project leader (PMP) of strategic access network evolution projects. He has been involved in several European projects of EURESCOM, ACTS, and IST focused on broadband access network evolution. He is author or co-author of several papers published in international telecommunications conferences and journals.

ERIK WEIS (
[email protected]) received a Dipl.-Ing. degree in electrical engineering from the Technical University of Dresden. He joined the Research Institute of Deutsche Telekom in 1997, where he was involved in national and international R&D projects on optical and hybrid optical broadband access networks. His current research interests include developing upgrade strategies and concepts toward next-generation broadband optical access networks.
ADVANCES IN PASSIVE OPTICAL NETWORKS
Cost and Energy Consumption Analysis of Advanced WDM-PONs
Klaus Grobe, Markus Roppelt, Achim Autenrieth, Jörg-Peter Elbers, and Michael Eiselt, ADVA AG Optical Networking
ABSTRACT
Next-generation access systems will have to provide bandwidths in excess of 100 Mb/s per residential customer, in conjunction with high customer count and high maximum reach. Potential system solutions include several variants of WDM-PONs. These systems, however, differ significantly in their cost (capital expenditure) and energy consumption potential. We compare several WDM-PON concepts, including hybrid WDM-PONs with integrated per-wavelength multiple access, with regard to these parameters. We also show the impact and importance of generic next-generation bandwidth and reach requirements.
INTRODUCTION Residential access bandwidth is ever increasing, and today, no change in this trend can be seen. Future applications that can easily fill bandwidths in excess of (several) 100 Mb/s consist of a combination of ultra-high-definition three-dimensional TV with time-shifted or peer-to-peer unicast. It can already be predicted that even fiber access based on Ethernet/gigabit-capable passive optical network (EPON/GPON) and their successors, 10G-EPON and XG-PON1, will reach its limit and is not suitable for that kind of application. So a new generation of broadband access systems will be required. These access systems are also referred to as next-generation PON, or NG-PON2 in full-service access network (FSAN) terminology [1] and are likely to be more than five years away from now. During this time, active site reduction and network consolidation will lead to larger reach requirements for the access and backhaul systems, with remaining sites having to serve a significantly larger number of customers [2]. The new technology will likely be based on wavelength-division multiplexing (WDM), possibly in conjunction with suitable wavelength sharing schemes. Following cost expectations and energy-consumption predictions [3], much focus with regard to next-generation access systems is put on advanced PON concepts. There is, however, much uncertainty about the respective ranges of these requirements. The
different requirements regarding reach and client count as well as the requirements with respect to the optical distribution network (ODN) have severe impact on the potential capital expenditures (CapEx) and also on the operational expenditures (OpEx) of the resulting systems solutions. This is discussed in detail in the next sections. Regarding OpEx, we concentrate on energy consumption, as this constitutes a large part of these costs. Other differences in operation expenses resulting from operations and maintenance are hard to quantify, and might not even be fully appreciated by the network operators.
BROADBAND APPLICATIONS Today, there is much debate about a useful broadband definition for residential access. Sometimes, bandwidths starting at 1 Mb/s are referred to as broadband, sometimes this is increased to the range of 10 Mb/s, but sometimes, a definition of 256 kb/s has been used [4]. It is obvious that the broadband definition is frequently adapted toward increased bandwidth values. This discussion often leaves the downstream (central office or point of presence to customer) vs. upstream (the counter-direction) asymmetry unconsidered. Here, we refer to broadband access as an infrastructure which is able to potentially scale to beyond 1 Gb/s of sustained bandwidth per residential customer. Oversubscription, or statistical multiplex bandwidth gain, can be applied in a dedicated aggregation layer. This aggregation may be implemented at the edge to the backhaul and core parts of the network and can lead to significantly decreased sustained bandwidths. However, the transport and multiple-access parts of the access infrastructure should still enable sustained bandwidths in the range of 1 Gb/s. On a broad scale, bandwidths beyond, say, sustained 100 Mb/s should be provided via a fiber-based infrastructure (fiber to the home [FTTH] or fiber to the building [FTTB]), mostly due to cost and energy consumption reasons [3]. FTTH and FTTB require a massive, very costly infrastructure to be overbuilt. Hence, the next-generation infrastructure in turn must not be limited to anything below 1 Gb/s for reasons of investment security.
Figure 1. Downstream and upstream bandwidth requirements and capabilities of different (near-future) services and solutions. [Plot of upstream vs. downstream rate in Mb/s for services such as video phones, VoIP, SDTV/HDTV/UHDTV VoD, multi-gaming, e-commerce/e-learning, BluRay/large-file peer-to-peer, and UHDTV P2P, against the capability regions of ADSL2+, VDSL2 (including its symmetry option), DOCSIS2.0, GPON 1:32, and WDM-PON.]
It is easy to envisage future services that can fill these bandwidths. This even holds for symmetric downstream-upstream scenarios. From today's viewpoint, one application in particular can be identified which could make use of bandwidths approaching, and possibly exceeding, 1 Gb/s — unicast highest-resolution video, in conjunction with symmetric-bandwidth peer-to-peer applications. Today's high-definition television (HDTV) requires bit rates between 8 and 15 Mb/s, depending on the compression efficiency (of the Moving Picture Experts Group's MPEG-4 or International Organization for Standardization/International Electrotechnical Commission [ISO/IEC] 14496 standard). The next step beyond HDTV is already clear — ultra-HDTV (UHDTV), with a resolution of 3840 × 2160 pixels today and 7680 × 4320 pixels in the future, with 16:9 format, 60 Hz frame rate, and 22.2 audio (24-channel audio, including two subwoofer channels). These formats are also referred to as 4k and 8k UHDTV, respectively. They have been developed by the Japanese TV broadcast company NHK, together with the consumer systems industry. According to [5], it is likely that the 8k format will require up to 200 Mb/s per channel (72 Gb/s uncompressed). Furthermore, 3D versions of UHDTV may require up to 180 percent increased bit rate compared to 2D. Combined with time-shifted unicast or even peer-to-peer applications, these formats will then fill bandwidths up to the 400 Mb/s range. They will have to be provided — in the busy hour — via sustained bandwidth, without further oversubscription in the access network. The respective (downstream) UHDTV bandwidth will need to be at least twice that per typical household, assuming two parallel independent downstream or download sessions. Concerning the upstream direction, a similar bandwidth may be required in peer-to-peer applications. Here, a chicken-egg problem comes into play: we do not see broadband peer-to-peer applications today, which is likely due to the fact that the required upstream bandwidth is not available in the
majority of today's access installations. Once the bandwidth is provided, it may thus fuel the application. The downstream and upstream bandwidth requirements of corresponding applications are compared in Fig. 1, together with the bandwidth capabilities of some of today's access solutions with the highest bandwidth. Here, we assume a UHDTV bandwidth of only 75 Mb/s, which caters to the uncertainty about future codec efficiency and 3D overhead. It is also not clear what bit rate requirements may come after 8k 3D UHDTV. The authors hence believe that any new access infrastructure must be able to potentially deliver bit rates that are considerably higher than the requirements shown in Fig. 1.
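As a rough cross-check of these per-household numbers, the sketch below simply multiplies an assumed per-stream rate by the number of parallel sessions. The 75 Mb/s value is the conservative assumption used for Fig. 1, while the higher values reflect the 8k and 3D/P2P estimates quoted above; the function and its name are illustrative only, not the authors' model.

```python
# Per-household sustained downstream demand under simple assumptions:
# an assumed per-stream UHDTV rate and two parallel independent unicast sessions.
def household_demand_mbps(per_stream_mbps: float, sessions: int = 2) -> float:
    return per_stream_mbps * sessions

# Conservative Fig. 1 value, the 8k estimate, and the 400 Mb/s range quoted for 3D/P2P.
for per_stream in (75, 200, 400):
    print(f"{per_stream:3d} Mb/s per stream x 2 sessions -> "
          f"{household_demand_mbps(per_stream):4.0f} Mb/s sustained downstream")
```

Even the conservative assumption already yields 150 Mb/s of sustained downstream demand per household, which is why the article argues for an infrastructure that can scale toward 1 Gb/s.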
BROADBAND ACCESS SOLUTIONS As discussed earlier, we focus on fiber-based next-generation broadband access. This refers to both FTTH and FTTB scenarios, where FTTH is more likely to be the broadband access endgame, and FTTB may be seen as one of the migration scenarios and also as an alternative in cases where optical in-house cabling is difficult or impossible. The main reason for this fiber access focus can also be derived from Fig. 1: for access bandwidth requirements in the 100 Mb/s range and beyond, there are no efficient alternatives. Clearly, DOCSIS3.0 and VDSL2 can exceed 100 Mb/s, but the former is a shared medium incapable of providing high dedicated bandwidth to all customers in a cluster at the same time, and the latter is severely limited in reach for bandwidths exceeding 50 Mb/s. In addition, both will lead to higher energy consumption than the PON candidates due to the fact that they are based on copper cabling. Figure 1 also gives the comparison of GPON and WDM-PON downstream and upstream capabilities. From this comparison it can be derived that GPON, in particular when it comes to unicast and symmetric-bandwidth applications, cannot meet the requirements by 2020. The same is true for EPON and the successors, XG-PON1 and
10G-EPON. The proposed solution to the scalability requirement is an access technology that scales through the use of WDM. If this is combined with a passive ODN for cost and energy consumption effectiveness, we refer to it as a WDM-PON. WDM-PONs have been considered for quite a while as promising contenders for next-generation access [6, 7]. To the knowledge of the authors, only Korea Telecom has deployed an early implementation of a WDM-PON so far. The lack of further installations is mostly attributed to missing international standardization and cost efficiency. This is expected to change once the International Telecommunication Union (ITU) or IEEE provides relevant standards. Increased deployment numbers enabled by standardization will in turn lead to decreased WDM-PON cost.
REQUIREMENTS As mentioned in the introduction, the requirements for next-generation PONs vary a lot. The key performance indices are dedicated data rate per customer, total customer count, and maximum reach. With regard to customer count, anything between 64 and several thousand is stated [8]. Concerning maximum reach, the range from 50 to >100 km is covered. Only the bandwidth requirements seem to be clear: whatever the next generation of access is, it will have to support per-client bandwidth, which may go beyond 1 Gb/s in the long term. In order to limit the candidates, the next-generation access configurations considered here were required to support >100 customers with a sustained bandwidth of >500 Mb/s each. Minimum reach requirement for the ODN without additional reach extension was set to at least 50 km. To the best of our knowledge, these numbers represent what can be considered sufficient for 2020. It also has to be noted that, in particular, the sustained per-client bandwidth can be increased to 1 Gb/s (and potentially more) through reconfigurations of the respective solutions.
OPTICAL DISTRIBUTION NETWORK Taking reach extension into account, the key requirements will also support reduction of active sites by an order of magnitude, and hence longer distances in the ODN. It is not clear yet if a fully passive ODN is required, or if the ODN may accommodate a certain amount of active equipment. This is considered very important since it has an obvious impact on the accumulated insertion or path loss of the end-to-end ODN. Combining an optical power splitter/combiner-only ODN with the requirement to support very high customer count, say in clear excess of 100, leads to high insertion/path loss. On the other hand, in order to keep any next-generation access technology competitive, it must be restricted in its available power budget. This is demonstrated in Fig. 2. Figure 2 shows the accumulated insertion loss for PONs with power splitters/combiners only, filters (arrayed waveguide gratings [AWGs]), band-splitters and interleavers only (i.e., a pure WDM-PON), and hybrid PON with both power
splitters/combiners and filters.

Figure 2. Accumulated insertion loss of power-splitter PON, filter-based (WDM-) PON, and hybrid filter-plus-splitter PON. TFF is a thin-film filter, S/C is an additional S/C-band splitter, IL is an interleaver, and AWG is an arrayed waveguide grating. [Curves: insertion loss (dB) versus number of clients (1 to 10,000) for splitter only, 1:8 TFF + splitter, 1:80 AWG + S/C + IL, and 1:80 AWG + splitter.]

The light-grey area indicates the region for a pure WDM-PON that can be covered with cheap (positive-intrinsic-negative [PIN] photo-diode-based) transceivers and leaves enough power budget for the ODN fiber to cover distances of 40–60 km. For the hybrid infrastructure, it is assumed that the interfaces have a total bandwidth of 10 Gb/s in order to provide sufficient per-client bandwidth after the power splitter. It becomes clear that this infrastructure does not allow use of the cheapest transceivers since these have to support 10 Gb/s accumulated bandwidth, a multiple access mechanism, and also higher total power budgets for per-wavelength split ratios exceeding 1:4. It is also obvious that higher numbers of wavelengths decrease the insertion loss for hybrid infrastructure. The black squares and diamonds in Fig. 2 indicate the range for the split ratio where up to 1 Gb/s per client can be guaranteed when using 10 Gb/s total per-wavelength capacity. For split ratios >1:8 (white squares and diamonds), this is not possible anymore at 10 Gb/s per wavelength. For the splitter-only infrastructure, very high split ratios result in very high insertion loss exceeding 30 dB. This infrastructure can only be supported by means of coherent ultra-dense WDM (UDWDM). Then ultra-densely spaced (i.e., < 12.5 GHz) wavelengths with dedicated per-client bandwidth of 1 Gb/s can be used. Coherent technology can provide the necessary power budget to cope with the insertion loss. However, it is doubtful that it can achieve the cost points of the much simpler filter-based (or wavelength-routed) WDM-PON. It must be noted that in Fig. 2, we calculated the insertion loss for the ODN power splitters/combiners only. In an ultra-dense WDM-PON (UDWDM-PON), the transceivers in the optical line terminal (OLT, i.e., the central office equipment) have to be combined by means of splitters and/or filters as well. These additional combiners must be considered when designing the respective OLT transceiver array.
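To make the trends behind Fig. 2 concrete, the following sketch (a simplified model, not the authors' calculation) estimates the accumulated ODN loss of a splitter-only ODN versus a hybrid AWG-plus-splitter ODN, and the per-client rate that a shared 10 Gb/s wavelength can guarantee at a given split ratio. The per-stage splitter loss of about 3.5 dB and the flat AWG insertion loss of about 5.5 dB are typical values assumed here for illustration; they are not taken from the article.

```python
import math

SPLIT_STAGE_DB = 3.5   # assumed loss per 1:2 power-splitting stage
AWG_DB = 5.5           # assumed AWG insertion loss, roughly independent of port count

def splitter_only_loss(n_clients: int) -> float:
    """Accumulated loss of a power-splitter-only ODN serving n_clients."""
    return math.ceil(math.log2(n_clients)) * SPLIT_STAGE_DB

def hybrid_loss(per_wavelength_split: int) -> float:
    """AWG routes one wavelength per port; a small splitter follows each port."""
    return AWG_DB + math.ceil(math.log2(per_wavelength_split)) * SPLIT_STAGE_DB

def guaranteed_rate_gbps(per_wavelength_split: int, wavelength_rate_gbps: float = 10.0) -> float:
    """Sustained per-client rate when one TDMA wavelength is shared by k clients."""
    return wavelength_rate_gbps / per_wavelength_split

for n in (32, 128, 512, 1024):
    print(f"splitter-only, {n:4d} clients: {splitter_only_loss(n):5.1f} dB")
for k in (4, 8, 16, 32):
    print(f"hybrid, 1:{k:<2d} per wavelength: {hybrid_loss(k):5.1f} dB, "
          f"guaranteed {guaranteed_rate_gbps(k):.2f} Gb/s per client")
```

With these assumptions, a splitter-only ODN serving several hundred clients exceeds 30 dB of accumulated loss, and per-wavelength splits beyond 1:8 drop below 1 Gb/s of guaranteed per-client rate, in line with the black and white markers discussed above.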
COMPARED SOLUTIONS
Several WDM-PONs and derivatives have been discussed so far [6, 7]. These include plain WDM-PON, which only makes use of wavelength-division multiple access (WDMA), as well as more sophisticated so-called hybrid PON and UDWDM-PON. Hybrid PON refers to combinations of WDM transport and multiple access (WDMA) and additional per-wavelength multiple access mechanisms like time-division multiple access (TDMA), subcarrier multiple access (SCMA), and code-division multiple access (CDMA). WDM-, hybrid, and UDWDM-PON are not equally capable of supporting all next-generation requirements (high client count, high dedicated per-client bandwidth, long reach), regardless of the exact numbers. In addition, they differ significantly in estimated per-client cost (CapEx) and per-client end-to-end energy consumption. These two parameters are very important on a broad scale with millions of subscribers. Hence, a detailed analysis is required, and is provided hereinafter. We restrict our analysis to fiber-based solutions that use WDM filters in the ODN due to the considerations in the last section. The schematic diagrams of the considered architecture alternatives are compared in Fig. 3. The WDM-PON shown in Fig. 3a makes use of the C-, L-, and S-bands, which are accessible via a cyclic AWG. Today, such AWGs (which also cover further bands like the extended L-band) are available for 32 ports and channel spacing in the 100 GHz range. Non-cyclic low-loss AWGs are available that support 96 channels at 50 GHz. We expect that cyclic 50 GHz AWGs with port counts between 64 and 96 will become available very soon. The WDM-PON is further based on highly integrated photonic integrated circuits (PICs), which contain the OLT transceiver arrays, and low-cost tunable lasers in the optical networking units (ONUs, i.e., the client equipment). PICs are considered necessary in order to reduce both power consumption and form factor in the OLT. The tunable ONU transmitters can be based on monolithic multi-section distributed Bragg reflector (DBR) lasers (e.g., SG-DBR or DS-DBR lasers). Uncooled operation without thermo-electric cooling (TEC) of these devices has already been demonstrated [9]. TEC-less operation is required in order to achieve the lowest power consumption. It is also frequently required by large network operators. For lowest cost, the ONU lasers will also not get their own dedicated wavelength lockers. They will then have to be wavelength-tuned via closed-loop control, which incorporates feedback signals from either the OLT or the PON remote node. The active/passive hybrid PON shown in Fig. 3b connects access switches (active part) via passive WDM (passive backhaul part) running 10 Gb/s per wavelength. Tunable extended form-factor pluggables (XFPs) can be used for the WDM backhaul, and lowest-cost grey small form-factor pluggables (SFPs) can be used for the active point-to-point access. The passive WDM part can also be used for generalized backhaul. This combination has potential for
very high per-client and total capacity and long reach in excess of 60 km. It requires, however, the accessibility of the respective active sites for the access switches. Figure 3c shows a hybrid WDM/TDMA-PON. For our analysis, we considered DWDM-colored TDMA at symmetric 10 Gb/s (per pair of wavelengths). This can be regarded as an extension of the stacked XG-PON approach. It can enable both high split ratio and high capacity. However, it leads to the necessity for 10 Gb/s TDMA (burst-mode) transceivers, which have up to 35 dB power budget. In addition, the transceivers must be tunable or be based on seeded reflective technology [6] (i.e., reflective electro-absorption modulator-semiconductor optical amplifier [REAM-SOA] combinations). The latter, however, lack long-distance capability because of Rayleigh crosstalk or require dedicated seed fibers [10]. The OLT again makes use of photonic integration. Finally, Fig. 3d shows a diagram of a coherent UDWDM-PON. Heterodyne detection is used in order to reuse the local oscillator laser also for the respective upstream or downstream signal transmission. In the OLT several channels can be integrated in broadband multichannel transceivers, thus reducing cost. In the ONUs, narrow-line-width lasers are required. If quaternary phase shift keying (QPSK) at 1 Gb/s is used, the line width must not exceed ~200 kHz [11]. The lasers must be very precisely tunable to maintain the ultra-dense spacing. This is achieved by heterodyne closed-loop control and may require additional environmental control. It is therefore doubtful that uncooled low-cost lasers can be used in this scheme. The coherent detection scheme also requires polarization control, diversity, or scrambling (Pol. Scr. in Fig. 3) mechanisms [12], which further increase complexity and cost. On the other hand, power budget of >45 dB is possible which allows the combination of very high customer count (up to and exceeding 1000), dedicated 1 Gb/s per customer, and long reach in excess of 50 km. Note that in Fig. 3d, a hybrid infrastructure with 100 GHz AWGs and subsequent power splitters/ combiners is shown. The AWG first routes groups of tightly spaced wavelengths to the respective ports. These wavelength groups are then split in the splitters/combiners. This approach requires gaps between the wavelength groups to allow for filtering, but leads to significantly decreased insertion loss compared to the splitter/combiner-only configuration.
COST AND ENERGY CONSUMPTION For a comparison of cost and energy consumption, the respective contributions of all major components must be considered. One major difficulty results from the fact that not all components are commercially available today. Parts of these components are not even technically mature yet. This leads to the necessity of predicting both cost and power consumption that these components will have. This prediction has been done based on the complexity (functionality), optical power budget, and bandwidth processing requirements of the respective components.
Figure 3. Schematic diagrams of broadband next-generation access solutions: a) a basic C+L+S-band WDM-PON for 128–192 bidirectional channels — L/C, C/S, and R/B are the respective band filters, with R, B the Red and Blue sub-bands in the S-band; RXA, TXA: receive and transmit arrays, RN: remote node, SFF: small form factor transceiver, including a tunable laser diode (T-LD); note that only the WDM-PON in 3a is based on a filter-only ODN; b) an active/passive WDM hybrid PON, where the backhaul part is based on passive WDM and the last-mile access is based on active (Ethernet) point-to-point; TXFP, SFP: tunable extended and small form-factor pluggables, ARN and PRN: active and passive RN; c) a WDM/TDMA hybrid PON running (symmetrical) 10 Gb/s TDMA on each wavelength pair. MDXM: mux/demux, APD: avalanche photo diode, SOA: semiconductor optical amplifier, CLK Rec.: clock recovery; d) a coherent UDWDM-PON with cascaded filters and power splitters/combiners in the ODN. ADC and DAC: analog-digital and digital-analog conversion, US: upstream, DS: downstream, RF: radio frequency, Pol. Scr.: polarization scrambler.

Where possible, we considered the nearest or most sophisticated components available today, and estimated their cost and power consumption decrease over the next years, assuming mass production (examples: tunable XFPs, grey SFPs). Also note that, for example, the tunable XFPs will be produced in significantly smaller numbers than the lowest-cost tunable ONU transceivers
(TRXs). The respective cost impact has been considered. The cost and power consumption assessment has been performed, to the best of our knowledge, with equally challenging assumptions for all components. The resulting cost and power consumption figures were discussed with component and system vendors, and also network operators at FSAN.
Figure | Part | Component | Energy consumption | Cost
3a | OLT | 40 × REAM/Rx array, plus MFL and circulator | 20 W | 2400
3a | ONU | 1 Gb/s tunable TRX (PIN-PD, no TEC, no WL) | 1.0 W | 75
3b | OLT | 40 λ 1G laser/Rx array | 20 W | 2000
3b | RN | 10 Gb/s tunable XFP (TEC, WL, 25 dB) | 3.5 W | 1200
3b | ONU | 1 Gb/s grey SFP, 10 dB power budget | 0.5 W | 15
3c | OLT | 30 GHz TRX (no TEC, no WL, 32 dB) | 2.5 W | 175
3c | ONU | 10 Gb/s burst-mode tunable TRX (ADP, SOA, FEC, no TEC, no WL, 35 dB) | 2.5 W | 175
3d | OLT | 30 GHz TRX (coherent, TEC, WL, 16 channels, 1G/3 GHz) | 8.0 W | 1600
3d | ONU | 1 Gb/s coherent (heterodyne) TRX (polarization diversity or scrambling) | 2.0 W | 175
3a–3c | ONU | ASIC 1 GHz (ONU) | 1.0 W | 10
3d | OLT | ASIC 50 GHz UDWDMA (OLT, 16 channels) | 16 W | 320
3a | OLT | EDFA booster/pre-amplifier combination (OLT) | 25 W | 2000
All | — | AWG port/power splitter/combiner port | — | 20/10
All | OLT/PoP | Layer-2 switch, per 1 Gb/s | 1.0 W | 5
All | — | Baseline cost per client (CPE housing, OLT shelf, etc.) | 5.0 W | 100
MFL: multi-frequency laser, PD: photo diode, WL: wavelength locker, ADP: avalanche PD, FEC: forward error correction, ASIC: application-specific integrated circuit, EDFA: erbium-doped fiber amplifier, CPE: Customer-Premises Equipment.
Table 1. Cost and energy-consumption parameters.
Table 1 gives an overview of cost and power consumption figures of the most relevant components for the systems shown in Fig. 3. Cost figures have been normalized to the common baseline costs of the solutions, which are the costs for common parts such as shelves and management controllers. From Table 1, the most relevant components can be identified. In the first place, the transceivers, which are required on a per-client basis, determine the resulting cost and also part of the power consumption. With regard to cost, the common contribution from shelves, power supply units, management controllers, interfaces, and so on are in second place. Regarding power consumption, these common components are already the main contributor. Solution-specific application-specific integrated circuits (ASICs), switches, and the per-port contribution from filters (AWGs) and splitters/combiners are of least importance. The results for cost and power consumption of the four solutions compared in Fig. 3 are listed in Table 2. They are split into contributions from the ONU, OLT, and remote node. The results are also visualized in Fig. 4. From the analysis, it can be seen that a pure WDM-PON clearly has the lowest per-client (end-to-end) power consumption, and almost lowest per-client cost. Again, plain WDM-PON refers to a system without any further added
multiple access mechanism or coherent detection, using filters rather than splitters/combiners. The per-client end-to-end connection includes the complete ONU, the respective portion of the remote node (RN; with regard to power consumption, there is only a contribution for the active/passive hybrid PON), and the respective portion of the OLT node. The latter includes transceivers, any electronics necessary for modulation, multiple access and signal processing, and also the respective portion of an aggregation switch. We attribute the WDM-PON performance to the possibility to use cost-effective simple transceivers (27 dB class with PIN photo diodes, 1 Gb/s bandwidth, monolithically integrated lasers, no complicated medium access control [MAC] layer or multiple access). In turn, such transceivers are allowed because filters are used in the ODN rather than power splitters/combiners. According to the numbers in Table 2, the active/passive hybrid system leads to lowest cost (although the difference against the WDM-PON is small), and also has second best power consumption. Per-client power consumption is increased by 0.8 W. Note that this difference is produced mainly in the added active sites. The relatively good cost and power consumption performance of this solution can mainly be attributed to the use of lowest-cost lowest-complexity very-low-power-consuming grey transceivers
Common baseline (all solutions): energy consumption 5 W, cost 100.

Solution-individual contribution, energy consumption per client:
Solution | ONU TRX | OLT TRX port incl. amplification | DSP, switching
WDM-PON | 1.0 W | 0.5 W | 1.0 W
Active/passive hybrid | 1.0 W | 0.3 W | 2.0 W
WDM/TDMA-PON | 2.5 W | 0.3 W | 1.0 W
UDWDM-PON | 2.0 W | 1.5 W | 1.0 W

Solution-individual contribution, relative cost per client:
Solution | AWG + splitter ports | OLT TRX port incl. amplification | ONU TRX | DSP, switching
WDM-PON | 20 | 50 | 75 | 5
Active/passive hybrid | 2 | 100 | 30 | 15
WDM/TDMA-PON | 12 | 25 | 175 | 5
UDWDM-PON | 12 | 100 | 175 | 25
Table 2. Results of cost and power consumption analysis for next-generation access.
(SFPs) for the active access links. Today, such SFPs are available with power consumption going down to 0.4 W and cost going down into the range of US$20. These numbers cannot be achieved with any other transceiver class (10G, added multiple access, high power budget, etc.). We note, however, that this solution will not be accepted by every network operator due to the necessity of active sites. The hybrid WDM/TDMA-PON is higher in cost and power consumption than the first two solutions. Compared to the WDM-PON, cost is ~27 percent higher. This can be regarded as a reasonable cost markup given the fact that the WDM/TDMA-PON can support a larger number of clients over a purely passive ODN. Power consumption, however, is increased by 1.3 W/client. This difference is partly spent in the ONU and is likely to be paid by the customers, not the network operator or service provider. On a global scale, at high take rate of the respective technology, the question remains if such a difference is acceptable in the context of green IT. An advantage of this solution is the support of GPON access infrastructure, together with the filter-based backhaul. It also has to be noted that other WDM hybrid schemes (based on added SCMA or CDMA) from today's perspective do not seem to offer advantages over the WDM/TDMA hybrid PON. According to our analysis, the UDWDM-PON leads to highest cost and also highest power consumption. Cost compared to the simpler DWDM-PON is 165 percent, and per-client power consumption is higher by 2 W. We attribute these differences to the necessary added complexity of coherent detection, which includes tight wavelength control, added (digital or analog) signal processing, ultra-narrow-linewidth lasers, polarization control, diversity, or scrambling, and also the need for balanced receivers with multiple photo diodes. Clearly, a UDWDM-PON has very high optical performance and is able to potentially support more than 1000 clients via a splitter infrastructure or, using WDM filters, over very long access distances in excess of 100 km. Here, the most important question is whether or not this technology is overengineered for the majority of applications or clients (which will require significantly less reach performance than 100 km).
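The per-client end-to-end figures discussed in this section follow directly from summing the common baseline and the solution-individual contributions of Table 2. The short sketch below is a minimal re-computation under that reading of the table; it reproduces the relative numbers quoted in the text, for example roughly 127 percent for the WDM/TDMA-PON and 165 percent for the UDWDM-PON relative to the plain WDM-PON.

```python
# Per-client contributions from Table 2 (power in W, cost in relative units).
BASELINE_W, BASELINE_COST = 5.0, 100

SOLUTIONS = {
    "WDM-PON":               ([1.0, 0.5, 1.0], [20, 50, 75, 5]),
    "Active/passive hybrid": ([1.0, 0.3, 2.0], [2, 100, 30, 15]),
    "WDM/TDMA-PON":          ([2.5, 0.3, 1.0], [12, 25, 175, 5]),
    "UDWDM-PON":             ([2.0, 1.5, 1.0], [12, 100, 175, 25]),
}

# Use the plain WDM-PON as the 100 percent cost reference.
ref_cost = BASELINE_COST + sum(SOLUTIONS["WDM-PON"][1])
for name, (power_parts, cost_parts) in SOLUTIONS.items():
    power = BASELINE_W + sum(power_parts)
    cost = BASELINE_COST + sum(cost_parts)
    print(f"{name:22s} {power:4.1f} W/client, cost {cost:3d} ({cost / ref_cost:.0%} of WDM-PON)")
```

The same sums also reproduce the power deltas mentioned above: +0.8 W/client for the active/passive hybrid, +1.3 W/client for the WDM/TDMA-PON, and +2 W/client for the UDWDM-PON relative to the plain WDM-PON.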
So far, we have not considered means other than PICs and TEC-less lasers for reduction of power consumption in WDM-based PONs. Besides general power consumption reduction in (opto-) electronic components, reduction in WDM-PON can be achieved through so-called doze or (cyclic) sleep modes, where either transmitters are deactivated when not used, or complete transceivers are periodically deactivated. Such power-saving modes are expected to be simpler in implementation and more power-saving-efficient in WDM-PONs than the ones known from GPON since no common MAC layer has to be kept alive.
CONCLUSION We have shown that a simple WDM-PON is most efficient for next-generation access with up to 1 Gb/s (sustained) per client and client numbers not exceeding 128–192. Access distances of 40–60 km can also be supported. This can clearly be attributed to the inherent simplicity of the transceivers and the multiple access mechanism that can still be used for these numbers. Other solutions — hybrid WDM/TDMA, UDWDM, or active-plus-passive hybrid — exist, but either lead to higher cost and power consumption, or require active sites which may contradict site consolidation programs of certain network operators. We conclude that it is essential to carefully clarify the requirements for next-generation access with regard to per-PON client count and maximum reach. The question of whether WDM filters are allowed in the ODN (instead of power splitters/combiners) also has to be answered, and the same is true for active equipment in the access network. In particular, if client count does not exceed 128–192, and a passive filter-based ODN is accepted, the most efficient solution with regard to both cost and power consumption is a simple WDM-PON. This client-count range may even be increased to 256–384 by using interleavers in order to effectively use a 25 GHz DWDM grid.
ACKNOWLEDGMENT The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 249025
(ICT-OASE) and from the German Ministry for Education and Research (BMBF) under Grant 13N10864.
Figure 4. Per-client end-to-end power consumption (top) and relative cost (bottom) of the four solutions, broken down into the main contributions of each. [Bar charts for WDM, active/passive, WDM/TDMA, and UDWDM: power consumption in W (0–10) and relative cost (0–400), with per-bar contributions from the common base, ONU TRX, OLT TRX port, DSP/switch, and AWG + splitter ports.]

REFERENCES
[1] J. Kani and R. Davey, "Requirements for Next Generation PON," ITU-T/IEEE Wksp. NGOA Sys., Geneva, Switzerland, June 2008.
[2] M. Fricke, "Examining the Evolution of the Access Network Topology," IEEE GLOBECOM '09, Honolulu, HI, Dec. 2009.
[3] R. Tucker, "Optical Packet-Switched WDM Networks: A Cost and Energy Perspective," OFC/NFOEC '08, San Diego, CA, Feb. 2008.
[4] OECD, "Broadband Statistics," http://www.oecd.org/sti/ict/broadband.
[5] T. Kuroda, "Current Status on Super HDTV Development in Japan," IEEE Int'l. Symp. Broadband Multimedia Sys. Broadcasting, 2010.
[6] A. Banerjee et al., "Wavelength-Division Multiplexed Passive Optical Network (WDM-PON) Technologies for Broadband Access: A Review," J. Optical Net., vol. 4, no. 11, Nov. 2005, pp. 737–58.
[7] K. Grobe and J.-P. Elbers, "PON in Adolescence: From TDMA to WDM-PON," IEEE Commun. Mag., vol. 46, no. 1, Jan. 2008, pp. 26–34.
[8] R. P. Davey et al., "Long-Reach Access and Future Broadband Network Economics," ECOC '07, Berlin, Germany, Sept. 2007.
[9] Y. Liu et al., "Directly-Modulated DS-DBR Tunable Laser for Uncooled C-band WDM System," OFC '06, Anaheim, CA, Mar. 2006.
[10] G. Talli and P. D. Townsend, "Hybrid DWDM-TDM Long-Reach PON for Next-Generation Optical Access," IEEE J. Lightwave Tech., vol. 24, no. 7, July 2006, pp. 2827–34.
[11] M. Seimetz, "Laser Linewidth Limitations for Optical Systems with High-Order Modulation Employing Feed Forward Digital Carrier Phase Estimation," OFC/NFOEC '08, San Diego, CA, Mar. 2008.
[12] J. M. Fabrega and J. Prat, "New Intradyne Receiver with Electronic-Driven Phase and Polarization Diversity," OFC '06, Anaheim, CA, Mar. 2006.
BIOGRAPHIES

KLAUS GROBE [M '94] received Dipl.-Ing. and Dr.-Ing. degrees in electrical engineering from Leibniz University, Hannover, Germany, in 1990 and 1998, respectively. Since 2000 he has been with ADVA AG Optical Networking. He has authored or co-authored three book chapters on WDM and PON technologies, and more than 70 scientific papers. He is a member of the IEEE Photonics Society, OFC Subcommittee F, the German VDE ITG, and ITG Fachgruppe 5.3.3 (Photonic Networks).

JÖRG-PETER ELBERS received his diploma and Dr.-Ing. degrees in electrical engineering from Dortmund University, Germany, in 1996 and 2000, respectively. In 1999–2001 he was with Siemens AG — Optical Networks, as director of network architecture. In 2001 he joined Marconi Communications as director of technology. Since September 2007 he has been with ADVA AG Optical Networking, where he is vice president of advanced technology. He authored and co-authored more than 70 scientific publications and holds 14 patents.

MARKUS ROPPELT received a diploma degree in electrical engineering from the Karlsruhe Institute of Technology (KIT), Germany, in 2009. He is currently working toward a
Ph.D. degree in electrical engineering at ADVA AG Optical Networking. His current primary research interest is in next-generation optical networks.

MICHAEL EISELT [SM] received his Dr.-Ing. degree in photonics from the Technical University of Berlin in 1994. During his 20-year career in optical communications, he has worked at various companies and research organizations in Germany and the United States. As director of advanced technology at ADVA Optical Networking, he is currently leading physical layer research for high-speed and access applications. He is a Fellow of the Optical Society of America.

ACHIM AUTENRIETH [M] received his Dipl.-Ing. and Dr.-Ing. degrees in electrical engineering and information technology from Munich University of Technology (TUM), Germany, in 1996 and 2003, respectively. From 2003 to 2010 he was with Siemens AG, Siemens Networks GmbH & Co. KG, and Nokia Siemens Networks, last as head of BCS R&D Innovations. Since 2010 he has been with ADVA AG Optical Networking, Advanced Technology. His research interests include multilayer transport networks and control planes. He is a member of VDE/ITG.
ADVANCES IN PASSIVE OPTICAL NETWORKS
Toward Energy-Efficient 1G-EPON and 10G-EPON with Sleep-Aware MAC Control and Scheduling Jingjing Zhang and Nirwan Ansari, New Jersey Institute of Technology
ABSTRACT To rapidly meet increasing traffic demands from subscribers, the IEEE 802.3av task force was charged to study the 10G-EPON system to increase the data rate to 10 times the line rate of 1G-EPON. With the increase of the line rate, the energy consumption of 10G-EPON increases accordingly. Achieving low energy consumption with 10G-EPON has attracted broad research attention from both academia and industry. In this article we first briefly discuss the key features of 10G-EPON. Then, from the perspective of MAC-layer control and scheduling, we discuss challenges and possible solutions to put optical network units into low-power mode for energy saving. More specifically, we detail sleep-mode control and sleep-aware traffic scheduling schemes in two scenarios: sleep for over one DBA cycle, and sleep within one DBA cycle.
INTRODUCTION As one of the major fiber to the home/curb/cabinet/and so on (FTTx) technologies, Ethernet passive optical network (EPON) is developed based on Ethernet technology, and enables seamless integration with IP and Ethernet technologies [1]. Due to the advantages of fine scalability, simplicity, and multicast convenience, as well as the capability of providing full-service access, EPON has been rapidly adopted in Japan and is also gaining momentum with carriers in China, Korea, and Taiwan since the IEEE ratified EPON as the IEEE 802.3ah standard in June 2004. On the other hand, video-centric applications and services such as HDTV are growing and emerging in the network [2]. As compared to traditional voice and data traffic, these multimedia applications are more bandwidth-hungry. For example, one HDTV channel requires as much as 10 Mb/s bandwidth. Motivated by satisfying these emerging high bandwidth demands, the IEEE 802.3av 10G-EPON task force was charged to increase the downstream bandwidth to 10 Gb/s, and to support two upstream data rates: 10 Gb/s and 1 Gb/s. While the line rate is significantly increased
to satisfy subscribers' demands, the power consumption of 10G-EPON may be increased as well [3]. Power consumption of 10G-EPON has become a big concern of network service providers as it contributes to part of their operational expenditure (OPEX). Moreover, energy consumption is becoming an environmental and therefore social and economic issue because one big reason for climate change is the burning of fossil fuels and the direct impact of greenhouse gases on the Earth's environment [4]. Previously, Baliga et al. [5] estimated that the Internet currently consumes around one percent of the total electricity consumption in broadband-enabled countries. It is also shown that currently and in the medium-term future, access networks consume the majority of the energy in the Internet. The analysis in [5, 6] showed that, among various access technologies, such as WiMAX, fiber to the node (FTTN), and point-to-point optical access networks, PON is the most power-efficient solution in terms of energy consumption per transmission bit attributed to the nearest approach of optical fibers to users. Although PON consumes the smallest power among all access network technologies, it is desirable to further reduce its power consumption, especially when the line rate is increased to 10 Gb/s. With the increase in line rate, the optical dispersion increases as well. Compensating for higher dispersion exerts higher requirements on optical lasers, which may incur an increase of power consumption of lasers. In addition, the electronic circuit should be sufficiently powered such that it can process 10 times faster than that of 1G-EPON. Thus, 10G-EPON will consume more energy than 1G-EPON [4, 5]. Reducing power consumption of 10G-EPON requires efforts across both the physical and medium access control (MAC) layers. Efforts are being made to develop optical transceivers and electronic circuits with low power consumption. Besides, multi-power-mode devices with the ability of disabling certain functions can also help reduce the energy consumption of the network. However, low-power-mode devices with some functions disabled may result in degradation of network performance. To avoid service degradation, it is important to properly design
MAC-layer control and scheduling schemes that are aware of the disabled functions. This is the focus of this article. We first discuss the evolution from 1G-EPON to 10G-EPON. Then, we discuss the challenges in empowering optical network units (ONUs) in low-power mode. Finally, we detail our proposed control and scheduling schemes to achieve energy saving without degrading user services.

System | Data rate, upstream (Gb/s) | Data rate, downstream (Gb/s) | FEC | Line coding | Tx type and launch power (dBm)
1G-EPON | 1.25 | 1.25 | Optional RS(255, 239) | 8b/10b | PX10: OLT [–3, 2], ONU [–1, 4]; PX20: OLT [2, 7], ONU [–1, 4]
10G-EPON (symmetric) | 10.3125 | 10.3125 | Enabled RS(255, 223) | 64b/66b | PR10: OLT EML [1, 4], ONU DML [–1, 4]; PR20: OLT EML+AMP [5, 9], ONU DML [–1, 4]; PR30: OLT EML [2, 5], ONU HP DML [4, 9]
10G-EPON (asymmetric) | 1.25 | 10.3125 | Enabled RS(255, 223) | 64b/66b | PRX10: OLT EML [1, 4], ONU DML [–1, 4]; PRX20: OLT EML+AMP [5, 9], ONU DML [–1, 4]; PRX30: OLT EML [2, 5], ONU DML [0.6, 5.6]

Table 1. Comparison between 1G-EPON and 10G-EPON.
EVOLUTION FROM 1G-EPON TO 10G-EPON 10G-EPON supports both symmetric 10 Gb/s downstream and upstream, and asymmetric 10 Gb/s downstream and 1 Gb/s upstream data rates, while 1G-EPON provides only the 1 Gb/s symmetric data rate. With a focus on the physical layer, the IEEE 802.3av Task Force specifies the reconciliation sublayer (RS), symmetric and asymmetric physical coding sublayers (PCSs), physical media attachments (PMAs), and physical media-dependent (PMD) sublayers. Table 1 lists several key physical layer features of 10G-EPON [7]. Instead of using the 8B/10B line coding adopted in 1G-EPON, 10G-EPON employs 64B/66B line coding, with which the bit-to-baud overhead is reduced to as small as 3 percent. To relax the requirements for optical transceivers, Reed-Solomon code (255, 223) is chosen as the mandatory forward error correction (FEC) code in 10G-EPON to enhance the FEC gain, while Reed-Solomon code (255, 239) is specified as optional for 1G-EPON. 10G-EPON defines the PRX power budget for asymmetric-rate PHY of 10 Gb/s downstream and 1 Gb/s upstream, and the PR power budget for symmetric-rate PHY of 10 Gb/s both upstream and downstream. Each power budget further contains three power budget classes: low power budget (PR(X)10), medium power budget (PR(X)20), and high power budget (PR(X)30). PR(X)10 and PR(X)20 power budget classes are defined in 1G-EPON as well, while PR(X)30, which can support 32-split with a distance of at least 20 km, is an additional one defined in 10G-EPON. Due to limited space, we only list the transmitter (Tx) type along with its launch power of 10G-EPON in Table 1. As compared to 1G-EPON, advanced transmitters and higher launch power are employed in 10G-EPON
to guarantee a sufficient signal-to-noise ratio (SNR) at the receiver side for accurate recovery of data with a rate of 10 Gb/s. Because of the increased launch power, the power consumption of the optical transmitter should be increased accordingly. Also, due to the mandatory FEC mechanism and increased line rate, the electronic circuit has to enable more functions and process faster than that in 1G-EPON, thus consequently incurring high power consumption and possibly larger heat dissipation. Therefore, to accommodate 10 Gb/s in the physical layer, the power consumption of the optical line terminal (OLT) and ONU may increase significantly. For the MAC layer and layers above, in order to achieve backward compatibility such that network operators are encouraged to upgrade their services, 10G-EPON keeps the EPON frame format, MAC layer, MAC control layer, and all the layers above almost unchanged from 1G-EPON. This further implies that similar network management system (NMS), PON-layer operations, administrations, and maintenance (OAM) system, and dynamic bandwidth allocation (DBA) used in 1G-EPON can be applied to 10G-EPON as well. Next, with a focus on MAC layer control and DBA, we propose a scheme to reduce energy consumption of 1G-EPON and 10G-EPON. In this article, we focus on reducing the energy consumption of ONUs.
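The effect of the line codes listed in Table 1 on the usable data rate can be checked with a one-line calculation: 8B/10B carries 8 payload bits per 10 line bits (25 percent overhead), while 64B/66B carries 64 per 66 (about 3 percent). The minimal sketch below applies these ratios to the line rates from the table; it is an illustration of the arithmetic, not code from any EPON implementation.

```python
# Effective payload rate = line rate x (payload bits / coded bits).
def payload_rate_gbps(line_rate_gbps: float, payload_bits: int, coded_bits: int) -> float:
    return line_rate_gbps * payload_bits / coded_bits

print(f"1G-EPON : {payload_rate_gbps(1.25, 8, 10):.3f} Gb/s "
      f"(line-coding overhead {(10 - 8) / 8:.1%})")
print(f"10G-EPON: {payload_rate_gbps(10.3125, 64, 66):.3f} Gb/s "
      f"(line-coding overhead {(66 - 64) / 64:.2%})")
```

Both cases come out to an even 1 Gb/s and 10 Gb/s of payload, which is why the line rates in Table 1 are 1.25 and 10.3125 Gb/s.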
CHALLENGES IN SAVING ENERGY OF 1G-EPON AND 10G-EPON Formerly, the sleep mode was proposed to be introduced into ONUs to save energy when ONUs are idle [8–10]. ITU-T Recommendation G.sup45 [11] specified two energy saving modes for ONUs in GPON. One is doze mode, in which only the transmitter can be turned off when possible. Another one is cyclic sleep mode, in which both transmitter and receiver can be turned off. Since the access network traffic is rather bursty [12], ONUs may be idle for quite long periods, implying that putting idle ONUs into sleep mode is an effective way to reduce energy consumption. However, it is challenging to wake up sleeping ONUs in time to avoid service disruption when downstream or upstream traffic arrives in 1G-EPON and 10G-EPON.
The major challenge lies in downstream transmission. In EPON, the downstream data traffic of all ONUs is time-division multiplexed (TDM) into a single wavelength, and is then broadcast to all ONUs. An ONU receives all downstream packets and checks whether the packets are destined for it. An ONU does not know when the downstream traffic arrives at the OLT and the exact time the OLT schedules its downstream traffic. Therefore, without proper sleep-aware MAC control, receivers at ONUs need to be awake all the time to avoid missing their downstream packets. To address this problem, Mandin [9] proposed to implement a three-way handshake process between the OLT and ONUs before putting ONUs to sleep. Since an OLT is aware of the sleep status of ONUs, it can queue the downstream arrival traffic until the sleeping ONU wakes up. However, to implement the three-way handshake process, extended multipoint control protocol (MPCP) is required to introduce new MPCP protocol data units (PDUs). In addition, the negotiation process takes at least several round-trip times, which further incurs large delay. Lee et al. [13] proposed to implement fixed bandwidth allocation (FBA) when the network is lightly loaded. By using FBA, the time slots allocated to each ONU in each cycle are fixed and known to the ONU, and thus ONUs can go to sleep in the time slots allocated to other ONUs. However, since traffic of an ONU changes dynamically and from cycle to cycle, FBA may result in bandwidth under- or overallocation, and consequently degrade services of ONUs to some degree. Besides the downstream scenario, an efficient sleep control mechanism should also consider upstream traffic and MPCP control message transmission. For upstream transmission, the wake-up of a sleeping ONU can be triggered by the arrival of upstream traffic. However, this arrival traffic cannot be transmitted until the ONU is notified of the allocated time by the OLT. Before the OLT allocates bandwidth to an ONU, the newly awake ONU first needs to request upstream bandwidth. To realize this, some periodic time slots may need to be allocated to ONUs to enable them to access the upstream channel in time even when they are asleep. Regarding the MPCP control message transmission, to keep a watchdog timer in the OLT from expiring and deregistering the ONU, both IEEE 802.3ah and IEEE 802.3av specify that ONUs should send MPCP REPORT messages to the OLT periodically to signal bandwidth needs as well as to arm the OLT watchdog timer even when no request for bandwidth is being made. The longest interval between two reports, as specified by report_timeout, is set as 50 ms in both 1G-EPON and 10G-EPON. Besides, the OLT also periodically sends GATE messages to an ONU even when the ONU does not have data traffic. The longest interval between two GATE messages, specified by gate_timeout, is set as 50 ms. Therefore, to comply with MPCP, sleeping ONUs need to wake up every 50 ms to send the MPCP REPORT messages and receive the GATE messages.
Figure 1. The constitution of an ONU. [Block diagram: UNIs, Ethernet switch, NPE/PPE (network/packet processing engine), ONU MAC, and SERDES on the electrical side; laser driver, LD, CDR, APD/PIN, LA, and TIA on the optical side.]
SLEEP-AWARE MAC CONTROL AND SCHEDULING In this section, we discuss our proposals on saving energy in ONUs. Our basic idea is still putting ONUs into sleep mode whenever possible. Different from existing proposals, which put the whole ONU to sleep, we investigate the constitution of an ONU and put different components of an ONU to sleep under different conditions.
SLEEP STATUS OF ONUS Figure 1 illustrates the typical constitution of an ONU. The optical module consists of an optical transmitter (Tx) and an optical receiver (Rx). The electrical module mainly contains serializer/deserializer (SERDES), ONU MAC, network/packet processing engine (NPE/PPE), Ethernet switch, and user-network interfaces (UNIs). When neither upstream nor downstream traffic exists, every component in the ONU can be put to sleep. When only downstream traffic exists, the functions related to upstream transmission can be disabled. Similarly, the functions related to receiving downstream traffic can be disabled when only upstream traffic exists. Even when upstream traffic exists, the laser driver and laser diode (LD) do not need to function all the time, but only during the time slots allocated to this ONU. Thus, each component in the ONU can likely sleep, and potentially higher power savings can be achieved. By putting each component of an ONU to sleep, an ONU ends up with multiple power levels. Figure 2 shows the power levels of an ONU and the transition between different power levels. The wakeup of UNI, NPE/PPE, and switch can be triggered by the arrival of upstream traffic and the forwarding of downstream traffic from the ONU MAC [14]. They are relatively easily controlled as compared to the other components. Thus, we only focus on the ONU MAC, SERDES, Tx, and Rx. As shown in Fig. 2a, two power levels, all:awake and all:sleep, result from putting the whole ONU to sleep. In our proposal two more sleep statuses, Rx:sleep and Tx:sleep, are introduced; thus, four power levels are generated. When the ONU is in the all:awake status, if Tx does not need to work, it enters into Tx:sleep status, and further enters into all:sleep status if Rx does not need to function either. In all:sleep status, besides Rx and Tx, the ONU MAC and SERDES sleep as well. Similarly, transitions happen between the all:awake, Tx:sleep, and Rx:sleep statuses. Transitioning among these statuses should be properly designed so as to maximize energy sav-
ing without degrading services. We present solutions in determining the transitions under two respective scenarios: sleep for more than one DBA cycle, and sleep within one DBA cycle.

Figure 2. Multi-power-level ONUs. [State diagrams: a) two power levels, all:awake (level 2) and all:sleep (level 1); b) four power levels, all:awake, Tx:sleep, Rx:sleep, and all:sleep, with the allowed transitions between the states.]

a: If the Tx has not transmitted traffic for the time duration of idle_threshold
       s = 1;
b:     Tx enters into sleep status;
       sleep_time = 2^(s-1) * idle_threshold + (2^(s-1) - 1) * short_active;
       If sleep_time > 50 ms
           sleep_time = 50 ms
       Endif
       Tx wakes up after the sleep_time duration
       The ONU checks the queue length and reports the queue status
       If there is queued traffic
           Keep Tx awake; s = 0; go to line a;
       Else
           s = s + 1; go to line b;
       Endif
   Endif

Algorithm 1. Decide the transition between all:awake and Tx:sleep.
SCENARIO 1: SLEEP FOR MORE THAN ONE DBA CYCLE In this scenario the transition is decided by the incoming traffic status. Tx/Rx is put to sleep if no upstream/downstream traffic exists for some time. Whether or not downstream/upstream traffic exists can be inferred based on the information of the time allocated to ONUs and queue lengths reported from ONUs, which is known to both OLT and ONUs. If no upstream traffic arrives at an ONU, the ONU requests zero bandwidth in the MPCP REPORT message. Then, the OLT can assume that this ONU does not have upstream traffic. If no downstream traffic for an ONU arrives at the OLT, the OLT will not allocate downstream bandwidth to the ONU. Assume that, out of fairness concerns, an OLT allocates some time slots in a DBA cycle to every ONU with downstream traffic. Then, considering the uncertainty of the exact time allocated to an ONU in a DBA cycle, the ONU can infer that no downstream traffic exists if it does not receive any downstream traffic within two DBA cycles.
The next question is to decide the transition between different statuses. In this scenario, the status Rx:sleep actually does not exist since there will still be some downstream MPCP control packets for an ONU to facilitate the upstream transmission even when no downstream data traffic exists. Hence, we next discuss the transition between all:awake and Tx:sleep, and the transition between Tx:sleep and all:sleep. Formerly, Kudo et al. [10] proposed periodic wakeup with sleep time adaptive to the arrival traffic status. We also decide the sleeping time based on traffic status. More specifically, we set the sleep time as the time duration in which traffic stops arriving. When putting Tx to sleep, for example, Algorithm 1 describes the transition between all:awake and Tx:sleep. We assume that Algorithm 1 is known to the OLT as well. Then, the OLT can accurately infer the time that Tx is asleep or awake. Let idle_threshold be the maximum time duration a transmitter stays idle before being put to sleep, short_active be the time taken for an ONU to check its queue status and send out the report, and sleep_time be the time duration each time an ONU sleeps. If the transmitter is idle for idle_threshold, Tx will be put into sleep status, and the sleep_time for the first sleep equals idle_threshold. Then, Tx wakes up to check its queue status and sends a report to the OLT, which takes short_active time duration. If there is no upstream traffic being queued, Tx will enter sleep status again. Until now, the elapsed time since the last time Tx transmitted data packets equals idle_threshold + time duration of the first sleep + short_active. So for the second sleep, the sleep time duration sleep_time is set as idle_threshold + time duration of the first sleep + short_active. According to MPCP, ONUs send MPCP REPORT messages to the OLT every 50 ms when there is no traffic. Thus, we set the upper bound of sleep_time as 50 ms to be compatible with MPCP and also to avoid introducing too much delay of the traffic which arrives during sleep mode. This process repeats until upstream traffic arrives. For the sth sleep, sleep_time equals idle_threshold + the total time duration of the former s – 1 sleeps + (s – 1) * short_active, which also equals 2^(s–1) * idle_threshold + (2^(s–1) – 1) * short_active. For the transition between Tx:sleep and all:sleep, the transitioning algorithm is similar to Algorithm 1 with the exception that line a should be changed to: "If the Rx has not received downstream traffic destined to the ONU for the time duration of idle_threshold." The remaining steps are similar. Figure 3 shows an example of the sleep time control process with short_active = 2.5 ms and idle_threshold = 10 ms. Then, sleep_times of the first, second, third, and fourth sleeps are as follows:
• First sleep: 10 ms
• Second sleep: idle_threshold + 10 ms + short_active = 22.5 ms
• Third sleep: idle_threshold + 32.5 ms + 2 * short_active = 47.5 ms
• Fourth sleep: min{50 ms, idle_threshold + 80 ms + 3 * short_active} = 50 ms
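A minimal sketch of the sleep-time update used by Algorithm 1, assuming the closed form derived above (each new sleep spans the idle threshold plus all previous sleeps and short active periods, capped at 50 ms); it reproduces the 10, 22.5, 47.5, 50, 50 ms sequence of the example. The function name is ours and purely illustrative.

```python
def sleep_time_ms(s: int, idle_threshold: float = 10.0,
                  short_active: float = 2.5, max_sleep: float = 50.0) -> float:
    """Duration of the s-th consecutive transmitter sleep (ms), capped at 50 ms."""
    t = 2 ** (s - 1) * idle_threshold + (2 ** (s - 1) - 1) * short_active
    return min(t, max_sleep)

# Reproduces the example: [10.0, 22.5, 47.5, 50.0, 50.0]
print([sleep_time_ms(s) for s in range(1, 6)])
```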
In deciding the sleep time, idle_threshold and short_active are two key parameters which are set as follows.

idle_threshold — Setting idle_threshold needs to consider the time taken to transit between sleep and awake. Considering the transition time, the net sleep time will be reduced by the sum of the transit time from awake to sleep and the transit time from sleep to awake. Hence, idle_threshold should be set longer than the sum of two transit times in order to save energy in the first sleep. Currently, the time taken to power up the whole ONU is around 2–5 ms [8]. Hence, idle_threshold should be greater than 4 ms in this case. In addition, we assert that the upstream/downstream traffic queue is empty if no bandwidth is allocated to upstream/downstream traffic for idle_threshold. To ensure this assertion is correct, idle_threshold should be at least one DBA cycle duration, which typically extends less than 3 ms to guarantee delay performance for some delay-sensitive service. So Tx/Rx must sleep for over one DBA cycle with this scheme.

short_active — During the short awake time of Tx, an ONU checks its upstream queue status and reports to the OLT. Thus, short_active should be long enough for an ONU to complete these tasks. In addition, using some upstream bandwidth for an ONU to send a report affects the upstream traffic transmission of other ONUs. In order to avoid interruption of the traffic transmission of other ONUs, we set short_active to be at least one DBA cycle duration such that an OLT can have freedom in deciding the allocated time for an ONU to send its report. For Rx, during the short awake time, the OLT begins sending the queued downstream traffic if there is any. Similar to the Tx case, short_active is set to be at least one DBA cycle to avoid interrupting services of ONUs in the Rx case.
SCENARIO 2: SLEEP WITHIN ONE DBA CYCLE

In the former scenario, the sleep and awake durations of Tx and Rx are longer than one DBA cycle. In this section we discuss the scheme of putting Tx and Rx to sleep within one DBA cycle. Consider a PON with 16 ONUs. During a DBA cycle, on average, only 1/16 of the time is allocated to each ONU. This means that even when upstream/downstream traffic exists, Tx/Rx needs to be awake for only 1/16 of the time and can sleep for the other 15/16 of the time; significant energy savings can therefore be achieved. To enable an ONU to sleep and wake up within a DBA cycle, the transition time between awake and sleep should be less than half the DBA cycle duration, so that the net sleep time is greater than zero and energy can actually be saved. Previously, Wong et al. [15] reduced the transition time to as little as 1–10 ns by keeping some of the back-end circuits awake. Thus, with advances in speeding up the transition, it is physically possible to put an ONU to sleep within one cycle to save energy.
Figure 3. An example of sleep time control of the transmitter.

For the upstream case, waking up Tx can be triggered by the ONU MAC when the allocated time arrives, and Tx can go back to sleep after data transmission. For the downstream case, however, this is difficult to achieve, since Rx does not know when downstream traffic will be sent and would have to check every downstream packet. To address this problem, we propose the following sleep-aware downstream scheduling scheme.

For downstream transmission, an OLT schedules the downstream traffic of ONUs one by one, and the interval between two transmissions to an ONU is determined by the sum of the downstream traffic of all other ONUs. Again, owing to the bursty nature of ONU traffic, the ONU traffic in the next cycle does not vary much from that in the current cycle. Accordingly, we can estimate the traffic of the other ONUs and put this particular ONU to sleep for some time. More specifically, for a given ONU, denote Δ as the difference between the ending time of its last scheduling and the beginning time of its current scheduling. We then set the rule that the OLT will not schedule this ONU's traffic until f(Δ) time after the ending time of the current scheduling. As long as the ONU is aware of this rule, it can go to sleep for a duration of f(Δ).

Figure 4 illustrates one example of putting an ONU to sleep within one DBA cycle. In this example, one OLT is connected to four ONUs, and f(Δ) is set to 0.8 * Δ. The interval between the first two schedulings of ONU 4 is 9. Hence, the OLT will not schedule the traffic of ONU 4 until 7.2 time units later; ONU 4 can sleep for 7.2 time units and then wake up. However, this wakeup is an early wakeup, since the actual transmission of the other ONUs takes 9.5 time units, 2.3 longer than the estimate. Similarly, the duration of the second sleep is set to 7.6; this wakeup is a late wakeup, since the actual time taken to transmit the other ONUs' traffic is 6.5, and the late wakeup incurs 1.1 time units of idle time on the downstream channel. As can be seen from the example, early wakeup and late wakeup are two common phenomena of this scheme. Early wakeup implies that energy could be further saved, while late wakeup results in idle time and thus possible service degradation. From the network service provider's perspective, avoiding late wakeup and the subsequent service degradation is more desirable than avoiding early wakeup.
Figure 4. One example of putting ONUs to sleep within one DBA cycle.

Hence, setting 0 < f(Δ) < Δ is suggested. Even if f(Δ) is set as small as 0.5Δ, an ONU can on average still sleep 15/32 of the time when the PON supports 16 ONUs and the traffic of the ONUs is uniformly distributed. Therefore, significant power savings can be achieved with this scheme.
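As a minimal illustration of the scheduling rule, the short Python sketch below replays the ONU 4 example from Figure 4; the function name and the way early/late wakeup is reported are our own choices, not part of the article's scheme:

    def sleep_aware_schedule(gaps, alpha=0.8):
        """For each observed inter-scheduling gap Delta, the ONU sleeps f(Delta) = alpha * Delta.

        gaps is the sequence of actual intervals (time spent serving the other
        ONUs) between this ONU's successive schedulings."""
        events = []
        for previous_gap, actual_gap in zip(gaps, gaps[1:]):
            sleep = alpha * previous_gap            # f(Delta) applied to the last observed gap
            if sleep < actual_gap:
                events.append(("early wakeup", actual_gap - sleep))
            else:
                events.append(("late wakeup (idle)", sleep - actual_gap))
        return events

    # ONU 4 in Figure 4: observed gaps of 9, 9.5, and 6.5 time units
    print(sleep_aware_schedule([9, 9.5, 6.5]))
    # approximately [('early wakeup', 2.3), ('late wakeup (idle)', 1.1)], up to floating-point rounding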
CONCLUSION

With the increase in line rate, the energy consumption of 10G-EPON may increase significantly compared to that of 1G-EPON. Reducing the power consumption of 10G-EPON is important both for low OPEX and for environmental friendliness. In this article, focusing on saving energy at the ONUs, we have proposed to disable some functions of ONUs whenever possible and introduced four power levels for ONUs. Then, from the perspective of MAC control and scheduling, we have investigated schemes to properly transition between the power levels of ONUs. Two schemes have been proposed for two respective scenarios: sleep for over one DBA cycle, and sleep within one DBA cycle. For the former scenario, we have described a scheme that implicitly conveys sleep information between ONU and OLT, and proposed an algorithm to adapt the sleep time to user traffic. For the latter scenario, to address the challenging downstream case, we have designed a sleep-aware downstream scheduling scheme that achieves energy savings without degrading user service.
REFERENCES
[1] Y. Luo and N. Ansari, “Bandwidth Allocation for Multiservice Access on EPONs,” IEEE Commun. Mag., vol. 43, no. 2, 2005, pp. S16–S21.
[2] K. Cho et al., “The Impact and Implications of the Growth in Residential User-to-User Traffic,” Proc. Conf. Apps., Tech., Architectures, Protocols for Comp. Commun., 2006.
[3] R. Kubo et al., “Sleep and Adaptive Link Rate Control for Power Saving in 10G-EPON Systems,” Proc. IEEE GLOBECOM, 2009, pp. 1573–78.
[4] J. Baliga et al., “Energy Consumption in Optical IP Networks,” IEEE/OSA J. Lightwave Tech., vol. 27, no. 13, 2009, pp. 2391–2403.
[5] J. Baliga et al., “Energy Consumption in Access Networks,” Proc. Optical Fiber Commun. Conf., 2008.
[6] C. Lange and A. Gladisch, “On the Energy Consumption of FTTH Access Networks,” Proc. Optical Fiber Commun. Conf., 2009.
[7] K. Tanaka, A. Agata, and Y. Horiuchi, “IEEE 802.3av 10G-EPON Standardization and Its Research and Development Status,” IEEE/OSA J. Lightwave Tech., vol. 28, no. 4, Feb. 15, 2010, pp. 651–61.
[8] S. Wong et al., “Sleep Mode for Energy Saving PONs: Advantages and Drawbacks,” Proc. IEEE GreenCom, 2009.
[9] J. Mandin, “EPON Power Saving via Sleep Mode,” IEEE P802.3av 10G-EPON Task Force Meeting, 2008.
[10] R. Kubo et al., “Adaptive Power Saving Mechanism for 10 Gigabit Class PON Systems,” IEICE Trans. Commun., vol. E93-B, no. 2, 2010, pp. 280–88.
[11] ITU-T Rec. G.Sup.45.
[12] G. Kramer, B. Mukherjee, and G. Pesavento, “Ethernet PON (ePON): Design and Analysis of an Optical Access Network,” Photonic Net. Commun., vol. 3, no. 3, 2001, pp. 307–19.
[13] S. Lee and A. Chen, “Design and Analysis of a Novel Energy Efficient Ethernet Passive Optical Network,” Proc. 9th Int’l. Conf. Net., 2010.
[14] E. Trojer and P. Eriksson, “Power Saving Modes for GPON and VDSL,” Proc. 13th Euro. Conf. Netw. & Optical Commun., Austria, June 30–July 3, 2008.
[15] S. Wong et al., “Demonstration of Energy Conserving TDM-PON with Sleep Mode ONU Using Fast Clock Recovery Circuit,” Proc. Optical Fiber Commun. Conf., 2010.
BIOGRAPHIES

JINGJING ZHANG [S‘09] received a B.E. degree from Xi’an Institute of Posts and Telecommunications, China, in 2003, and an M.E. degree from Shanghai Jiao Tong University, China, in 2006, both in electrical engineering. She is working toward her Ph.D. degree in electrical engineering at the New Jersey Institute of Technology (NJIT), Newark. Her research interests include planning, capacity analysis, and resource allocation of broadband access networks, QoE provisioning in next-generation networks, and energy-efficient networking. She received the New Jersey Inventors Hall of Fame Graduate Student Award in 2010.

NIRWAN ANSARI [S‘78, M‘83, SM‘94, F‘09] (Nirwan.[email protected]) received his B.S.E.E. (summa cum laude, with a perfect GPA) from the New Jersey Institute of Technology (NJIT) in 1982, his M.S.E.E. degree from the University of Michigan, Ann Arbor, in 1983, and his Ph.D. degree from Purdue University, West Lafayette, Indiana, in 1988. He joined NJIT’s Department of Electrical and Computer Engineering as an assistant professor in 1988, became a tenured associate professor in 1993, and has been a full professor since 1997. He has also assumed various administrative positions at NJIT. He authored Computational Intelligence for Optimization (Springer, 1997, translated into Chinese in 2000) with E. S. H. Hou, and edited Neural Networks in Telecommunications (Springer, 1994) with B. Yuhas. His current research focuses on various aspects of broadband networks and multimedia communications. He has contributed over 350 technical papers, over one third of which were published in widely cited refereed journals/magazines; for example, one of his published works was the sixth most cited article published in IEEE Transactions on Parallel and Distributed Systems, according to the journal’s EIC report in February 2010. He has also guest edited a number of special issues covering various emerging topics in communications and networking, and has been a visiting (chair) professor at several universities. He has served on the editorial and advisory boards of eight journals, including as a Senior Technical Editor of IEEE Communications Magazine (2006–2009). He has served the IEEE in various capacities, such as Chair of the IEEE North Jersey Communications Society Chapter, Chair of the IEEE North Jersey Section, member of the IEEE Region 1 Board of Governors, Chair of the IEEE Communications Society Networking Technical Committee Cluster, Chair of the IEEE Communications Society Technical Committee on Ad Hoc and Sensor Networks, and chair/technical program committee chair of several conferences/symposia. Some of his recent recognitions include a 2007 IEEE Leadership Award from the Central Jersey/Princeton Section, the NJIT Excellence in Teaching Award in Outstanding Professional Development in 2008, a 2008 IEEE MGA Leadership Award, the 2009 NCE Excellence in Teaching Award, several best paper awards (ICNIDC 2009 and IEEE GLOBECOM 2010), a 2010 Thomas Alva Edison Patent Award, and designation as an IEEE Communications Society Distinguished Lecturer (2006–2009, two terms).
ADVANCES IN PASSIVE OPTICAL NETWORKS
Multirate and Multi-Quality-of-Service Passive Optical Network Based on Hybrid WDM/OCDM System Hamzeh Beyranvand and Jawad A. Salehi, Sharif University of Technology
ABSTRACT

In this article we present a new scheme to support multirate and multi-quality-of-service (QoS) transmission in passive optical networks based on a hybrid wavelength-division multiplexing/optical code-division multiplexing (WDM/OCDM) system. The idea is to use multilength variable-weight optical orthogonal codes (MLVW-OOCs) as the signature sequences of a hybrid WDM/OCDM system. To provide the requested classes of service, the code weight and code length of the MLVW-OOCs are designed based on the characteristics of the requested classes of service. In order to mitigate multiple access interference, we propose to utilize a multilevel signaling technique and an interference remover structure based on advanced optical logic gate elements. We show that utilizing such a technique improves the QoS of the proposed scheme.
INTRODUCTION
This work was supported in part by Iran National Science Foundation (INSF).
The increasing growth of the Internet Protocol (IP) and the popularity of the web are altering the traffic pattern in data networks: voice- and text-oriented traffic has given way to data- and image-based traffic. Furthermore, the emergence of IP-based multimedia applications such as VoIP, IPTV, and video conferencing diversifies data traffic, with different data rate and quality of service (QoS) requirements. Therefore, designing a high-capacity network to handle diverse and bulky data traffic is an essential challenge for next-generation data networks. Optical networks, which exploit the tremendous bandwidth of optical fiber, are a promising solution for transmitting bulky data traffic, and various techniques have been proposed to utilize fiber optic capacity in the access and backbone segments of the network. Fiber to the home (FTTH) is an interesting solution proposed to exploit fiber optic capacity in access networks, and the passive optical network (PON) is a promising scheme to implement FTTH cost-effectively [1]. Currently, time-division multiplexing (TDM) PON has been implemented: Ethernet PON (EPON) based on IEEE 802.3ah, asynchronous transfer mode (ATM) PON (APON) based on
ITU-T G.983.1, and Gigabit PON (GPON) based on ITU-T G.984 are typical examples of implemented TDM-PONs [1]. However, due to uplink time sharing, TDM-PON systems are limited in supporting bursty traffic and providing multirate transmission.

Wavelength-division multiplexing (WDM) PON is another technique introduced to resolve TDM-PON's shortcomings. Furthermore, maturing key optical technologies and the emergence of advanced optical devices are reducing the cost of WDM-PON deployment. Therefore, it is expected that WDM-PON schemes will be standardized and widely implemented. However, in WDM-PON the number of available wavelengths is not adequate to support the users of access networks. Moreover, assigning an individual wavelength to each user decreases bandwidth efficiency and increases the coarseness of data granularity.

Optical code-division multiplexing (OCDM) is a viable multiplexing technique that is receiving much attention as a promising access technique to share a common resource among asynchronous users without any central controller [2]. The OCDM technique is becoming an attractive candidate for next-generation optical networks and has been considered for use in PONs. This is mainly due to attractive properties of OCDM such as flexible and asynchronous bandwidth sharing, statistical multiplexing, provisioning of differentiated QoS at the physical layer, and the capability to secure data transmission using a pseudorandom signature. Hybrid OCDM/WDM-PON is another interesting scheme proposed to resolve WDM-PON's limitations and utilize OCDM capabilities in future PONs [3].

In this article we introduce a novel hybrid OCDM/WDM PON scheme to support multirate and multi-QoS transmission in PONs. The idea is based on utilizing multilength variable-weight optical orthogonal codes (MLVW-OOCs) as the signature sequences of the OCDM scheme. The length and weight of the OOCs are designed based on the characteristics of the supported classes of service. Furthermore, in order to improve the throughput of the presented scheme, we propose to employ a multilevel signaling technique and an interference remover structure based on advanced optical logic gates [4, 5].
The rest of the article is organized as follows. In the next section we review conventional TDM-PON, WDM-PON, and WDM/OCDM-PON. Our proposed multirate, multi-QoS WDM/OCDM-PON is then presented. We next introduce multilevel signaling and the interference remover based on optical logic gates. The article is concluded in the final section.
PASSIVE OPTICAL NETWORK

As mentioned above, the ultimate solution for handling increasing data traffic in the access network is FTTH. Basically, three architectures may be used to implement FTTH, as shown in Fig. 1 [1].
Figure 1. Different architectures for FTTH: a) point-to-point; b) active star; c) passive star.
The possible architectures are point-to-point, active star, and passive star. In the point-to-point architecture, an individual fiber runs between the central office (CO) and each end user. Although this architecture provides the ultimate capacity and can support possible future high-data-rate applications, it needs many fibers, which increases the installation cost. Furthermore, for each fiber (home) a terminal is needed at the CO, which complicates the CO architecture and raises scaling and powering issues. In the active star architecture, a single fiber runs between the CO and an active node close to the end users, and end users are connected to the active star by individual branching fibers. Since only a single feeder fiber is needed, with a number of short branching fibers connecting the end users and the active star, the installation cost is reduced; however, because of the active node, the powering issue remains. In the passive star architecture, the active node is replaced by a passive node that acts as a power splitter and combiner: it splits the signal received from the feeder fiber among the branching fibers and aggregates the branching fiber signals into the feeder fiber. In such an architecture, in addition to the cost reduction from using a feeder fiber, the passive power splitter/combiner resolves the powering issue. Therefore, the passive star architecture has received much attention as a cost-effective solution and is becoming the popular architecture for implementing FTTH; it is this passive star architecture that is referred to as a passive optical network (PON).

Based on the multiplexing method used to share the common resource of the feeder fiber among end users, we have three scenarios: TDM-PON, WDM-PON, and OCDM-PON. These scenarios are compared in Fig. 2. As shown in the figure, in the TDM-PON scenario the bandwidth of the feeder fiber is slotted in the time domain, and each user (optical network unit, ONU) is assigned a dedicated time slot. In the WDM-PON scenario, the bandwidth of the feeder fiber is divided into multiple bands, and each user is assigned a dedicated wavelength. In the OCDM-PON scenario, the bandwidth of the feeder fiber is divided among end users in the code space: each user is assigned a specific code that serves as the user address and is employed to transmit bitstreams. In an OCDM system employing on-off keying modulation, to send bit “1” a user transmits the signal encoded by the assigned codeword, and to send bit “0” it transmits no signal. At the receiver front end, the bitstream is extracted using the corresponding decoder.

In comparison to TDM-PON, WDM-PON provides more bandwidth and can support future large amounts of data traffic. However, the number of available wavelengths is not adequate, and the data granularity of the access network is coarse. On the other hand, OCDM-PON provides flexible bandwidth, and users can transmit asynchronously. In OCDM-PON the QoS of the users is limited by multiple access interference (MAI), which is a function of the number
of transmitting users [2]. In order to guarantee the desired QoS, the number of transmitting users needs to be restricted, so in OCDM-PON the number of supported end users is limited by the number of available codewords and by MAI. Hybrid WDM/OCDM-PON is an ultimate solution to resolve the scarcity of available channels and codes as well as the coarseness of the data granularity [6]. To overcome the MAI limit of the OCDM system, we propose to use a recently introduced multilevel signaling technique based on advanced optical logic gates [4, 5].
MULTIRATE AND MULTI-QOS HYBRID WDM/OCDM PON
In hybrid WDM/OCDM-PON, OCDM is used in each WDM channel to share the available bandwidth among end users. Figure 3a presents the bandwidth classification of the hybrid WDM/OCDM-PON: in each wavelength, Nc channels are available, where Nc is the number of available codewords. Generally, OCDM is divided into two types based on its coding principle: coherent and incoherent. In a coherent OCDM scheme, the phase of the optical signal is encoded by bipolar codes such as m-sequences, Gold codes, or Hadamard codes. In an incoherent OCDM scheme, the intensity of the optical signal is encoded by unipolar codes such as OOCs or prime codes. In this article we employ an incoherent OCDM scheme based on OOCs.

An OOC is a family of (0, 1) sequences with good auto- and cross-correlation properties [2]. In the literature an OOC is characterized by (L, w, λ), where L is the code length, w is the code weight (the total number of ones in each codeword), and λ is the maximum value of the shifted auto-correlation and cross-correlation. The number of available OOCs (Nc) is limited by the well-known Johnson bound [2]:

Nc ≤ [(L − 1)(L − 2)…(L − λ)] / [w(w − 1)(w − 2)…(w − λ)]   (1)

Equation 1 indicates that the number of available OOCs, Nc, is a function of the code parameters (L, w, λ). As an example, for L = 101, w = 5, and λ = 1 we have Nc ≤ 5, while for λ = 2 we have Nc ≤ 165. Thus, increasing the maximum correlation increases Nc at the expense of excess interference and performance degradation. It is worth noting that the QoS in such an OCDM system depends on the number of interfering users (NI) and the code parameters (L, w, λ). The code weight has a direct effect on QoS: increasing the code weight improves the QoS of transmitting users but, according to the Johnson bound, reduces Nc. The code length, on the other hand, has an inverse relation with the transmission rate and a direct relation with the number of available codewords, so for a specific bandwidth, the Nc of a high data rate is smaller than the Nc of a low data rate. In conventional OOCs all codes have the same parameters, so all users have the same transmission rate and QoS.
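As a quick sanity check of Eq. 1, the short Python sketch below (names are ours) evaluates the bound and reproduces the two numerical examples given above:

    from math import prod

    def johnson_bound(L, w, lam):
        """Upper bound (Eq. 1) on the number of (L, w, lam) optical orthogonal codes."""
        numerator = prod(L - i for i in range(1, lam + 1))     # (L-1)(L-2)...(L-lam)
        denominator = prod(w - i for i in range(0, lam + 1))   # w(w-1)...(w-lam)
        return numerator / denominator

    print(johnson_bound(101, 5, 1))   # 5.0   -> Nc <= 5
    print(johnson_bound(101, 5, 2))   # 165.0 -> Nc <= 165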
Figure 2. Bandwidth classification in different PON schemes.

Multilength OOCs (ML-OOCs) have been designed to support multirate transmission. In ML-OOCs, codewords are divided into multiple classes; although all codewords have the same weight, each class has a specific code length, so ML-OOCs can support multirate transmission. To support multi-QoS transmission, variable-weight OOCs (VW-OOCs) have been designed: in VW-OOCs all codewords have the same code length, and codewords are divided into multiple classes, each with a specific code weight, so VW-OOCs can support multi-QoS transmission in the access network. To jointly support multirate and multi-QoS transmission, MLVW-OOCs have been designed, in which codewords are divided into multiple classes, and the codewords of each class have a specific code length and code weight. In OCDM based on MLVW-OOCs, high-weight codewords are assigned to high-QoS users, and short-length codewords are assigned to high-rate users. The different OOC families are compared in Fig. 4.

Generally, MLVW-OOCs are characterized by (L = {L1, L2, …, LQ}, w = {w1, w2, …, wQ}, Nc = {Nc1, Nc2, …, NcQ}, Q, Γ), where Li, wi, and Nci denote the code length, code weight, and number of available codes in class i, respectively. In addition, Q denotes the number of classes specified in the network, and Γ is the cross-correlation matrix, defined as Γ = {I(n,m), for n, m = 1, 2, …, Q}. I(n,m) denotes the maximum correlation between class n and
Figure 3. a) Bandwidth classification in WDM/OCDM-PON; b) bandwidth classification in OCDM+WDM/OCDM-PON.
class m codewords. If n = m, I(n,m) is referred to as the intra-cross-correlation, which indicates the maximum cross-correlation between codewords of the same class; if n ≠ m, I(n,m) is referred to as the inter-cross-correlation, which is the maximum cross-correlation between two codes from different classes. It is worth noting that in the proposed multirate hybrid WDM/OCDM-PON the maximum transmission rate is limited by the shortest code length. Furthermore, the maximum transmission rate in hybrid WDM/OCDM-PON is less than that of OCDM-PON. In order to support ultra-high-rate services, we propose a PON scheme utilizing both the OCDM and the hybrid WDM/OCDM scenarios. In this scheme, a number of codewords are used to encode the optical signal across all wavelengths, as in OCDM-PON, while the remaining codewords are used within the wavelength windows, as in WDM/OCDM-PON. The bandwidth sharing in this scheme is
shown in Fig. 3b. As can be seen in the figure, codes C1 up to Ck are used in the OCDM scenario to support ultra-high-rate services, and codes Ck+1 up to CNc are used in the hybrid WDM/OCDM scenario to support high-, medium-, and low-rate services. We refer to this scheme as the OCDM+WDM/OCDM-PON scenario.
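To make the channel bookkeeping of the OCDM+WDM/OCDM-PON scenario explicit, here is a minimal Python sketch (variable names and the example numbers are ours) that enumerates the two kinds of channels for a given wavelength count, codeword count, and number k of codes reserved for the all-wavelength OCDM part:

    def channel_plan(num_wavelengths, num_codes, k):
        """Enumerate channels in the OCDM+WDM/OCDM-PON scenario.

        Codes C1..Ck span all wavelengths (ultra-high-rate OCDM channels);
        codes C(k+1)..CNc are reused within each wavelength window."""
        ocdm_channels = [f"C{c}@all-wavelengths" for c in range(1, k + 1)]
        wdm_ocdm_channels = [f"C{c}@lambda{w}"
                             for w in range(1, num_wavelengths + 1)
                             for c in range(k + 1, num_codes + 1)]
        return ocdm_channels, wdm_ocdm_channels

    ultra, regular = channel_plan(num_wavelengths=4, num_codes=8, k=2)
    print(len(ultra), len(regular))   # 2 ultra-high-rate channels, 24 per-wavelength channels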
MAI MITIGATION USING A MULTILEVEL SIGNALING TECHNIQUE

Multiple access interference is the dominant factor degrading the QoS of an OCDM system. In [4], a multilevel signaling technique has been introduced to mitigate MAI. In a conventional incoherent one-level OCDM system, all users transmit at the same power level; in such a system, tapped delay lines (TDLs) and an AND logic gate (ALG) structure are used as the encoder and decoder, respectively.
Figure 4. Different OOC families: regular OOC (equal weight, equal length), multilength OOC (ML-OOC), variable-weight OOC (VW-OOC), and multilength variable-weight OOC (MLVW-OOC).
Figure 5. a) Multi-stage receiver structure, b) interference remover structure.
In a multilevel signaling technique, users are categorized into multiple groups, and the users of each group transmit at a specific power level. In such a system, a multistage interference remover based on optical logic gates is an essential element to mitigate the interference of users transmitting at other power levels. The structure of a typical receiver based on a multistage interference remover is shown in Fig. 5a; the structure of the interference remover depends on the number of power levels and on the depth, i.e., the number of stages, of interference removal. Figure 5b shows the structure of a two-stage interference remover in a two-level system [4]. In the two-level system, users are divided into two groups, group 1 and group 2, transmitting at power levels P1 and P2, respectively (assume P2 > P1). In the figure, H1 and H2 are the interference removers of the group 1 and group 2 users, respectively. From Fig. 5b we can observe that H1 removes interference at power levels P2 and 2P2, and H2 removes interference at power levels P1 and 2P1. So, in such a two-stage two-level system, pulses at power levels P1 and 3P2 have the same effect for group 1 users, while pulses at power levels P2 and 3P1 have the same effect for group 2 users. In essence, in a multilevel signaling technique, by transmitting at different power levels and utilizing a multistage structure, pulses at the other power levels can be distinguished and removed. Obviously, increasing the number of power levels improves performance, owing to the increased interference mitigation capability of the multistage structure.

Figure 6. Probability of error of one-level and two-level OCDM systems (conventional one-level, two-level one-stage, and two-level two-stage scenarios) versus the number of interfering users.

Figure 6 shows the probability of error of a two-class OCDM system using an MLVW-OOC characterized by (L = {400, 400}, w = {8, 12}, Nc = {12, 12}, Q = 2). As can be observed in the figure, using the multilevel signaling technique decreases the probability of error; furthermore, increasing the number of stages of the interference remover yields further performance improvement.
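The chip-level intuition behind the interference remover can be sketched numerically. The following toy Python model is our own simplification (the actual remover is an optical logic-gate circuit, not software, and the thresholding rule here only mimics its level-based behavior): the group-1 remover H1 blanks chips whose power corresponds to pure group-2 interference, and an AND-type decision then checks the surviving chips of the desired codeword.

    def group1_decision(chip_powers, codeword, P1, P2, tol=1e-9):
        """Toy model of a one-stage interference remover for a group-1 user (P1 < P2).

        chip_powers[t] is the total optical power in chip slot t of one bit period;
        codeword is the user's 0/1 signature. H1 blanks chips at levels P2 or 2*P2,
        and bit '1' is declared only if every marked chip still carries at least P1."""
        def is_level(x, level):
            return abs(x - level) < tol
        cleaned = [0.0 if is_level(p, P2) or is_level(p, 2 * P2) else p
                   for p in chip_powers]
        return all(cleaned[t] >= P1 - tol for t, c in enumerate(codeword) if c == 1)

    # Example (our own numbers): P1 = 1.0, P2 = 3.0; chip 4 carries the user's own
    # pulse plus two group-2 interferers, chip 2 carries pure group-2 interference.
    code = [1, 0, 0, 0, 1, 0, 0, 1]
    powers = [1.0, 0.0, 3.0, 0.0, 7.0, 0.0, 0.0, 1.0]
    print(group1_decision(powers, code, P1=1.0, P2=3.0))   # True: the desired chips survive H1

Note that, as in the article, a chip carrying 3P2 of interference would survive H1 just like a genuine P1 pulse, which is exactly the residual ambiguity the multistage structure is designed to reduce.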
CONCLUSION

In this article we have presented a new scheme to support multirate, multi-QoS transmission in passive optical networks. Utilizing MLVW-OOCs in a hybrid WDM/OCDM-PON, we can provide the requested classes of service; the code weight and code length of the MLVW-OOC are designed based on the characteristics of the requested classes of service. Furthermore, to support ultra-high-rate services, we have proposed to use a combination of the OCDM and WDM/OCDM schemes in the PON, namely OCDM+WDM/OCDM-PON. In order to mitigate MAI and sufficiently increase network throughput, we have utilized a multilevel signaling technique and an interference remover based on advanced optical logic gate elements; in this technique the interference remover mitigates interference based on the power levels of the input signals. We have shown that using this multilevel signaling technique improves the QoS of the system.
REFERENCES
[1] T. Koonen, “Fiber to the Home/Fiber to the Premises: What, Where, and When?” Proc. IEEE, vol. 94, no. 5, May 2006, pp. 911–34.
[2] J. A. Salehi, “Code Division Multiple-Access Techniques in Optical Fiber Networks — Part I: Fundamental Principles,” IEEE Trans. Commun., vol. 37, no. 8, Aug. 1989, pp. 824–33.
[3] K. Kitayama, X. Wang, and N. Wada, “OCDMA Over WDM PON — Solution Path to Gigabit-Symmetric FTTH,” IEEE J. Lightwave Tech., vol. 24, no. 4, Apr. 2006, pp. 1654–62.
[4] B. M. Ghaffari and J. A. Salehi, “Multiclass, Multistage, and Multilevel Fiber-Optic CDMA Signaling Techniques Based on Advanced Binary Optical Logic Gate Elements,” IEEE Trans. Commun., vol. 57, no. 5, May 2009, pp. 1424–32.
[5] H. Beyranvand, B. Ghaffari, and J. A. Salehi, “Multirate, Differentiated-QoS, and Multilevel Fiber-Optic CDMA System via Optical Logic Gate Elements,” IEEE J. Lightwave Tech., vol. 27, no. 19, Oct. 2009, pp. 4348–59.
[6] H. Beyranvand and J. A. Salehi, “All-Optical Multi-Service Path Switching in Optical Code Switched GMPLS Core Network,” IEEE J. Lightwave Tech., vol. 27, no. 12, June 2009, pp. 2001–12.
BIOGRAPHIES

HAMZEH BEYRANVAND ([email protected]) received a B.S. degree (with honors, first rank) in electrical engineering from Shahed University, Tehran, Iran, in 2006 and an M.S. degree from Sharif University of Technology (SUT), Tehran, Iran, in 2008. He is currently working toward a Ph.D. degree in the Department of Electrical Engineering at SUT. Since summer 2007, he has been working as a member of the Optical Networks Research Laboratory (ONRL), SUT.

JAWAD A. SALEHI [M‘84, SM‘07, F‘10] ([email protected]) received a B.S. degree from the University of California, Irvine, in 1979, and M.S. and Ph.D. degrees from the University of Southern California (USC), Los Angeles, in 1980 and 1984, respectively, all in electrical engineering. From 1984 to 1993 he was a member of technical staff of the Applied Research Area, Bell Communications Research (Bellcore), Morristown, New Jersey. He is currently a full professor at the ONRL, Department of Electrical Engineering, SUT.
ADVANCES IN PASSIVE OPTICAL NETWORKS
Passive Optical Network Monitoring: Challenges and Requirements Mohammad M. Rad, University of Waterloo Kerim Fouli, Optical Zeitgeist Laboratory, INRS Habib A. Fathallah, King Saud University Leslie A. Rusch, Université Laval Martin Maier, Optical Zeitgeist Laboratory, INRS
ABSTRACT

As PONs carry increasing amounts of data, issues relating to their protection and maintenance are becoming crucial. In-service monitoring of the PON's fiber infrastructure is a powerful enabling tool to those ends, and a number of techniques have been proposed, some of them based on optical time-domain reflectometry (OTDR). In this work we address the required features of PON monitoring techniques and review the major candidate technologies. We highlight some of the limitations of standard and adapted OTDR techniques as well as non-OTDR schemes. Among the proposed optical-layer monitoring schemes, we describe our novel optical-coding-based reflection monitoring proposal and report on recent progress. We end with a discussion of promising solution paths.
INTRODUCTION

Since the emergence of the passive optical network (PON) as a crucial access technology, a considerable amount of research has focused on fundamental design issues such as resource allocation [1]. PON technologies are constantly advancing toward increased capacity, embodied primarily by high-speed time-division multiplexing (TDM) PONs and wavelength-division multiplexing (WDM) PONs. In addition, important advances have been achieved to extend PON reaches, hence multiplying their subscriber counts. As a consequence, PONs are destined to carry huge amounts of traffic in the near future. The search for practical and cost-effective survivability and maintenance mechanisms is therefore becoming key to the continued development of viable PON solutions.

The standardization of PON survivability mechanisms started within the broadband PON (BPON) standardization effort. International Telecommunication Union — Telecommunication Standardization Sector (ITU-T) G.983.1 described a set of four PON protection configurations that were subsequently narrowed down
to two protection schemes in ITU-T Recommendations G.983.5 (BPON) and G.984.1 (Gigabit PON), Type B and Type C protection. Type B protection duplicates both the feeder fiber and optical line terminal (OLT) interface and uses an N:2 splitter at the remote node (RN), where N is the number of supported optical network units (ONUs). The Type B configuration hence offers protection only against the failure of the OLT interface equipment or a cut in the feeder fiber. In contrast, Type C duplicates the whole PON network infrastructure, including ONU and OLT interfaces, as well as the splitter, thus providing additional protection against ONU equipment failures. EPON has no standardized protection scheme but may adopt Type C protection through the adaptation of Ethernet protection switching defined in ITU-T G.8031 [2]. In both Type B and C protection configurations, automatic protection switching is typically triggered by layer 2 alarms related to the loss of signal intensity or quality. This has two important consequences [3]. First, the physical PON infrastructure is not entirely visible to the network management system (NMS) for fault management operations. Second, failures within the fiber plant are likely to entail service disruption before being detected, leading to revenue losses and customer dissatisfaction. Due to the high capital expenditures incurred by the deployment of such protection mechanisms, operators have resorted to troubleshooting and restoration once faults are detected [4]. Troubleshooting is an important network maintenance function that involves locating and identifying any source of fault in the network. The above-mentioned ITU-T protection configurations make no specific provisions to identify and localize faults within the optical infrastructure and defer the task to maintenance standards (L series). ITU-T L.53 (2003) is the first standard to specifically address the maintenance of PONs by recommending the use of optical time-domain reflectometry (OTDR)-based techniques for troubleshooting. Whether it is for survivability or maintenance
purposes, there is a growing need for monitoring of the PON fiber plant. PON monitoring technology automatically identifies and localizes faults in the in-service PON optical infrastructure. In doing so, it provides the NMS with enhanced optical infrastructure visibility in real time, thus speeding up the detection and localization of faults. Monitoring avoids the operational expenditures (OPEX) and long service restoration times of offline troubleshooting, thus enabling wider service differentiation and stronger QoS guarantees. In addition, it paves the way to potentially enhanced physical layer protection mechanisms. Accordingly, PON monitoring has been receiving increasing attention, and a variety of proposals have emerged [3, 5]. To accommodate the demand for monitoring technology, the ITU-T L.66 (2007) Recommendation standardizes the criteria for in-service maintenance of PONs. It reserves the U-band (1625–1675 nm) for maintenance and lists several methods to implement PON in-service maintenance functions such as OTDR testing, loss testing, and power monitoring (i.e., monitoring a proportion of the signal power).

Note that PONs need to be tested during installation to ensure that all fiber links and components are properly installed and working. Therefore, link characterization and diagnosis during network installation is also of great importance and can easily be performed using one of the aforementioned testing methods. However, there is a growing need to monitor fiber link failures and degradations without disturbing ongoing services. In this article we focus on the monitoring of in-service live PONs (i.e., after installation), where a service interruption due to monitoring is not permissible.

We review and compare the major optical-layer PON monitoring proposals, and address the advantages and challenges of the monitoring techniques for the deployment of high-capacity PONs. In the next section we enumerate the desired features and major requirements of in-service PON monitoring techniques. We then briefly review the basic principles of OTDR for point-to-point monitoring, and outline the challenges and limitations of standard OTDR in PON (point-to-multipoint) applications. Non-OTDR-based techniques are then addressed; we particularly focus on two recently proposed techniques, Brillouin frequency shift assignment and optical-coding (OC)-based reflection monitoring, and address in detail the advantages and disadvantages of each in PONs. Finally, we discuss promising solution paths before concluding in the final section.
REQUIRED FEATURES OF PON MONITORING TECHNOLOGIES

GENERAL REQUIREMENTS

By definition, an effective monitoring technology should be able to both detect a fault and provide the NMS with useful information for root cause analysis. Useful monitoring information enables technicians to perform fast network repair,
hence increasing PON reliability and reducing operational expenses. The most important issue in PON monitoring technology is cost, including capital expenditure (CAPEX, i.e., the initial cost of the monitoring technology per customer) and operational expenditure (OPEX, i.e., the cost of system maintenance). The reason is that the PON market is highly cost-sensitive, especially for the components not shared between customers, such as distributed monitoring nodes. Therefore, an expensive technology, even though it may provide in-service full visibility of the optical infrastructure to the network operator, may not be attractive for PON applications. Consequently, the monitoring technology requires simple design, fabrication, and implementation procedures to minimize the cost.

Capacity, in terms of the number of PON branches or distribution fibers that can be simultaneously monitored, is the second desired feature. Candidate monitoring technologies should be able to support at least the maximum split ratio of current PON standards (e.g., 1:128 for ITU-T G.984 GPON). Accommodating larger split ratios increases the number of supported customers, thus amortizing the expenses of the service provider and generating higher benefits. The monitoring technology should thus be scalable in order to enable seamless and continuous upgrades of the PON infrastructure (i.e., PON capacity, reach, and customer base) at low cost. The simplicity of the monitoring architecture and components directly affects the cost, and is hence an important requirement. In addition, as for any maintenance and protection mechanism, reliability is paramount. Furthermore, to operate in-service, the desired monitoring technology should act transparently to the data band signals such as the L and C bands. Therefore, strict isolation between the data band and monitoring signals is required.
AUTOMATIC AND CENTRALIZED MONITORING

An automatic monitoring technique allows the network operator to detect faults without resorting to in-field technicians or relying on customer equipment or feedback. This feature is highly desirable, as the deployment of in-field personnel is usually equated with increased PON downtime and OPEX. Besides, it allows the operator to enhance customer satisfaction by potentially reacting to faults before service disruption (e.g., through automatic protection switching [APS]). A fully automatic monitoring system is usually centralized, allowing the NMS, from its location in the central office (CO), to remotely acquire complete live network information without requiring the collaboration of customers or their ONUs, as does traditional OTDR in a point-to-point link. Both centralized and distributed approaches have been proposed for monitoring the fiber link quality of a PON [3–5]. In distributed (decentralized) monitoring strategies, active modules are placed inside the ONUs to measure performance and report to the NMS. These modules periodically evaluate the uplink for a specific fiber branch and may be implemented electronically at the ONU.
Although the distributed approach effectively identifies fiber link degradation, it is ineffective when there is an interruption in the fiber link (e.g., a fiber cut) as it requires the real-time collaboration of ONUs. For instance, a missing monitoring signal at the NMS can be interpreted as the result of either a fiber fault or an electronic malfunction at the ONU. While the operator may take advantage of information on link quality provided by the ONU, the case is strong for a separate, independent, and rapid indicator of whether the fault occurred in the client's or the operator's domain. Therefore, a centralized automatic monitoring technology is highly desirable for PON applications.
OPTICAL TIME DOMAIN REFLECTOMETRY

Optical-time-domain-reflectometry-based monitoring was first implemented for optical carriers in long-distance transmission systems. OTDR is an efficient way to characterize an optical link while accessing only one end, as appropriate for point-to-point links. It operates as follows: the OTDR equipment launches a short light pulse into the fiber and measures the backscattered light; Rayleigh scattering and Fresnel reflections are the physical causes of this scattering behavior [4]. From the power measured at the OTDR receiver, a trace of power vs. distance may be computed, representing the impulse response of the link under test, as shown in Fig. 1. This trace can be used to extract information about link faults, including fiber misalignment, fiber mismatch, angular faults, dirt on connectors, macro-bends, and breaks; these faults are usually referred to as events on the OTDR trace. For instance, the jumps in Fig. 1 correspond to the insertion loss of different network components, whereas the power reflection peak at 40 km indicates the Fresnel reflection at the fiber-air interface, signifying the fiber end. After the fiber end, no backscattering is detected, and the trace drops to receiver noise levels.
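For intuition, the following small Python sketch (all numbers are illustrative assumptions, not measurements from the article) builds an idealized trace of the kind sketched in Fig. 1: Rayleigh backscatter decaying with fiber attenuation, step losses at discrete events, and a Fresnel reflection peak at the fiber end.

    def ideal_otdr_trace(length_km, events, alpha_db_per_km=0.2, step_km=0.1,
                         fresnel_peak_db=15.0):
        """Idealized OTDR trace: relative backscattered level (dB) vs. distance.

        events is a list of (position_km, loss_dB) pairs modelling splices,
        connector pairs, or bends; each event appears as a step drop in the trace."""
        num_steps = int(round(length_km / step_km))
        trace = []
        for i in range(num_steps + 1):
            z = i * step_km
            level = -2.0 * alpha_db_per_km * z                       # two-way fiber attenuation
            level -= sum(loss for pos, loss in events if pos <= z)   # accumulated event losses
            if abs(z - length_km) < step_km / 2:                     # Fresnel peak at the fiber end
                level += fresnel_peak_db
            trace.append((z, level))
        return trace

    # 40 km fiber, a 0.5 dB connector pair at 10 km, a 0.3 dB fusion splice at 25 km
    for z, p in ideal_otdr_trace(40.0, [(10.0, 0.5), (25.0, 0.3)])[::100]:
        print(f"{z:5.1f} km  {p:7.2f} dB")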
CHALLENGES OF STANDARD OTDR FOR PON

While providing automatic monitoring and full characterization of the fiber link, OTDR is ineffective for PON point-to-multipoint (PMP) networks [3–6]. This is because a branch backscattering signal in a PON can be partially or totally masked by other branch signals: the total power measured by the OTDR is a linear sum of all powers coming from the different branches. Useful information can be extracted from the global backscattering trace when the returns from individual branches are separated in time; otherwise, extracting the desired information from the OTDR trace may require considerable offline signal processing, or simply be impossible. OTDR analysis for a branched network compares the backscattering trace with reference returns acquired under controlled conditions, and a simulator interprets any deviation from the reference signals [7, 8]. The accuracy of such software depends on the quality of the simulator as well as the uncertainties in both the measured traces and the simulated return based on reference measurements.
Figure 1. Typical trace of OTDR of a fiber link.

In the event of equidistant branch terminations, the challenge is severe, and as the network size increases, the analysis complexity increases, leading to less reliable monitoring. In addition, the huge loss introduced by the passive splitters, typically located at the remote node (RN), leads to a significant drop in measured power. For example, a 1:32 splitter at the RN leads to 15 dB of loss in the total backscattered light from each branch; the RN then resembles a fiber end, and no useful information can be extracted beyond it. In traditional OTDR, losses higher than 3–7 dB are identified as end-of-fiber. However, it has been reported that, by modifying the OTDR analysis, testing can be performed through splitters with losses of up to 20 dB; this type of OTDR is usually referred to as PON-tuned OTDR [6].
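The 15 dB figure follows directly from the split ratio; the one-liner below (a back-of-the-envelope check, not from the article) reproduces it for a few common ratios.

    from math import log10

    for n in (16, 32, 64, 128):
        # Ideal 1:n power splitter: each branch receives 1/n of the power,
        # i.e. 10*log10(n) dB of loss per pass through the splitter.
        print(f"1:{n} splitter -> {10 * log10(n):.1f} dB loss per pass")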
MODIFIED OTDR SOLUTIONS

Reference Reflector — In order to reduce PON OTDR analysis complexity, a variety of solutions have been proposed to distinguish individual fiber branches. The best-known technique is the use of reference reflectors (RR-OTDR) [5] assigned to each fiber branch to render it distinguishable from the others in the total measured OTDR trace. The principle of the reference reflectors is illustrated in Fig. 2. A reflector can be realized by different methods [5]: it can be wavelength-selective and inserted at the input of the ONU connector to act as a stop filter, or it can be a non-wavelength-selective reflector placed on a separate tap (lower part of Fig. 2). Note that the reflectors at each fiber end are identical and all reflect the same wavelength, each producing a reflection for its corresponding branch. To distinguish between the branches, it is critical to adjust the fiber lengths in each branch to avoid temporal overlapping; in this way a single OTDR return will have each branch return located in an isolated time interval. By monitoring the stability and level of the reflections from the reference reflectors placed at each
fiber branch end, the integrity of a specific branch can easily be investigated, and the OTDR can then be exploited for a full characterization of the corresponding fiber branch. The shift in the power level of the reference reflection for a desired branch provides useful information for the OTDR trace analysis. Checking the stability of the strong reflection (located well above the noise level) is faster and easier than analyzing the OTDR trace, and these reflectors are often used as a first fault indicator in most OTDR-based techniques.

In RR-OTDR, the choice of the fiber lengths requires an important trade-off between OTDR sensitivity and resolution. The required fiber length is proportional to the transmitted OTDR pulse width as well as the relative distances between the customers. While for very short pulses only small fiber lengths are required, the OTDR sensitivity is then very poor, limiting the allowable splitter size at the RN. For longer pulses, sensitivity improves; however, significantly longer delay lines are required, leading to lower OTDR accuracy and larger dead zones (i.e., the area of an OTDR trace where events are not distinguishable). The NMS also requires updated information on the customer distribution in the network; otherwise, customer relocations cause false alarms. The RR-OTDR scheme does not scale well with large network sizes.
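To give a feel for the pulse-width trade-off, the short Python sketch below is our own back-of-the-envelope calculation (the article gives no formula): it estimates the two-point spatial resolution c·τ/(2n) of an OTDR pulse of width τ, which is roughly the fiber-length offset two reference reflectors need so that their reflections do not overlap in time.

    C_VACUUM_M_PER_S = 299_792_458.0
    GROUP_INDEX = 1.468          # typical value for standard single-mode fiber (assumption)

    def min_length_offset_m(pulse_width_s):
        """Two-point resolution of an OTDR pulse: c * tau / (2 * n_g)."""
        return C_VACUUM_M_PER_S * pulse_width_s / (2.0 * GROUP_INDEX)

    for tau in (10e-9, 100e-9, 1e-6, 10e-6):
        print(f"pulse {tau * 1e9:8.0f} ns -> reflectors spaced >= {min_length_offset_m(tau):8.1f} m")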
Figure 2. Use of a reference reflector for OTDR-based automatic monitoring of PONs.
Figure 3. OTDR for PON via one monitoring wavelength per ONU.
In fact, due to the huge splitter loss at the RN, it is difficult to extract useful information from the OTDR trace beyond the RN. In addition, as the network size increases, the selection of an optimal delay line becomes more challenging, and the complexity of the OTDR trace increases.

Multi-Wavelength Approach — Another simple approach is to employ a multiwavelength source and an arrayed waveguide grating (AWG) at the RN. This reduces the PON monitoring problem to point-to-point link characterization, as illustrated in Fig. 3. In this case the tunable multiwavelength OTDR source must be very stable for reliable monitoring, and the isolation between the monitoring and data signals must be stricter than for single-wavelength OTDR. In addition to its high cost, this technique also has limited capacity due to practical limitations and very poor spectrum efficiency [9]; its scalability is hence very low. Nevertheless, this approach provides a centralized monitoring system that enables the NMS to both detect and localize faults.

Electronic Solutions — Note that the functionality of an OTDR device can also be implemented within the ONU at the customer side [10]. This approach, known as embedded OTDR, leverages the electronics at the ONU for a cost-efficient solution, such that embedded OTDR within the ONUs becomes an integral part of the monitoring network. In this scheme the monitoring segment transmits an OTDR trace from the ONU over the upstream channel upon request of the NMS at the CO, when the corresponding ONU is idle. This solution therefore relies on in-band upstream signaling. As mentioned earlier, it is inadequate when a fiber cut happens, as all data and control channels linking the NMS to the ONU are disrupted.
CRITICAL ISSUES FOR THE USE OF OTDR IN PONS

As the basic equipment of the above automatic test systems, the OTDR requires suitable technical characteristics. The most important performance characteristics of OTDR-based techniques are spatial resolution, dynamic range, dead zone, wavelength stability, and minimum sensitivity [4–7]. Adequate performance requirements must be met for an OTDR to be an effective monitoring solution for future PONs. For instance, as the splitting ratio increases, larger dynamic ranges are required. Increasing the transmitted pulse width is not an efficient solution, as it decreases the spatial resolution and enlarges the dead zone of the OTDR; the launched power is also limited by nonlinear effects. Generally, the capacity of OTDR-based techniques is limited to tens of customers, and system scalability is a serious concern. Recall that although cost is an important issue, it is not critical, since the OTDR is shared among the network clients. The leakage of monitoring power from the U band into the data bands (C and L) may cause performance degradation for data communications.
Figure 4. Performance monitoring based on Brillouin frequency shift assignment.
Hence, strict isolation between the data and monitoring bands is required; as a result, optical sources with very high sideband suppression ratios and optical filters with high insertion losses are required [11]. Other critical issues for OTDR-based monitoring are the use of optical selectors, filters, reflectors, and WDM devices. These devices should be cost- and dimension-effective (i.e., low cost and high density) in order to be able to monitor a large number of fibers in future access networks. While the ITU Recommendations propose the U band for monitoring applications, the behavior of passive components is not very well investigated in this wavelength regime. Due to the continuous advancement in related fields, OTDR-based techniques are expected to become more reliable in the future.
NON-OTDR-BASED TECHNIQUES

A variety of non-OTDR techniques have recently been proposed for the monitoring of link quality in a PON. In this article we focus on two of the most interesting, Brillouin frequency shift assignment (BFSA) and OC-based PON monitoring, and address their challenges and advantages.
BRILLOUIN FREQUENCY SHIFT ASSIGNMENT

This technique uses Brillouin-based OTDRs (BOTDRs) at the CO [11] and deploys specialty fibers in the distribution segment of the PON, as shown in Fig. 4. Each fiber branch is hence distinguished by a unique Brillouin frequency shift as a signature, and is called an identification fiber. To monitor an individual fiber in the PON, an optical pulse with center frequency ν is launched through the network from the CO using a BOTDR. After the RN, subpulses pass through the different identification fibers, each of which scatters a unique, pre-assigned Brillouin frequency. A specific identification fiber is then selected by monitoring the spectrum of the
received signal. The frequency shifts are designed to have disjoint spectra for the different branches. By observing peaks at the center frequencies fk = ν – νk, as shown in Fig. 4, the status of each identification fiber is monitored. Furthermore, by measuring the filtered backscattered optical signal for a specific branch, the BOTDR obtains a unique trace that is identical to the trace provided by traditional OTDR in a point-to-point link. Similar in principle to the multiwavelength OTDR approach, this centralized technique provides a unique OTDR trace for each fiber branch lying beyond the RN; hence, it is capable of both detecting and localizing a fault at any branch of a PON.

While providing a centralized and complete characterization of the identification fibers, the BFSA technique imposes significant design challenges on the network infrastructure. It requires the identification fibers to be manufactured with different physical characteristics that generate and return different Brillouin frequencies. Each identification fiber, while scattering a unique Brillouin frequency shift, should naturally also operate as a data link to satisfy the data transmission requirements of PONs. In addition to involving high CAPEX, this technique has a dramatic impact on existing fiber network infrastructures, as new fibers have to be designed and all existing distribution fibers replaced. As the capacity of the network increases, so does the number of required identification fibers; this leads to more strenuous constraints on the required frequency shifts, implying the use of more advanced manufacturing technology with higher cost and complexity. This technique is hence not simply scalable and has yet to demonstrate its capability for the monitoring of currently deployed PONs with standard splitting ratios (e.g., GPON with 64 and 128 branches). Furthermore, the use of specialty fiber for subscriber drop cables adds substantially to the cost of network deployment, especially when the subscriber take rate (i.e., the anticipated number of subscribers) is low. For the aforementioned reasons, this technique is very unlikely to be adopted commercially.
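For completeness, here is a toy Python illustration of the BFSA branch-identification step described above; the frequency values and tolerance are made-up assumptions, not figures from the article.

    def identify_branch(peak_shift_ghz, assigned_shifts_ghz, tol_ghz=0.05):
        """Return the branch whose pre-assigned Brillouin shift matches the observed peak."""
        for branch, shift in assigned_shifts_ghz.items():
            if abs(peak_shift_ghz - shift) <= tol_ghz:
                return branch
        return None

    # Hypothetical assignment: four identification fibers with distinct shifts (GHz)
    shifts = {"ONU1": 10.60, "ONU2": 10.75, "ONU3": 10.90, "ONU4": 11.05}
    print(identify_branch(10.91, shifts))   # 'ONU3'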
reasons, this technique is very unlikely to be adopted commercially.
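To make the branch-identification principle concrete, the following sketch is a toy illustration only and does not model any specific BOTDR instrument: it assumes hypothetical per-branch Brillouin shifts νk, synthesizes an idealized backscatter spectrum, and maps the measured peak offset back to a branch. All frequencies and tolerances below are made up for illustration.

```python
import numpy as np

# Hypothetical Brillouin frequency shifts (GHz) assigned to three
# identification fibers; in BFSA these are chosen so that the branches'
# backscatter spectra do not overlap.
branch_shifts_ghz = {1: 10.45, 2: 10.65, 3: 10.85}
tolerance_ghz = 0.05  # half-width of the spectral window per branch

def identify_branch(peak_offset_ghz):
    """Map a measured Brillouin peak offset (probe frequency minus the
    peak center frequency) to the identification fiber it belongs to."""
    for branch, shift in branch_shifts_ghz.items():
        if abs(peak_offset_ghz - shift) <= tolerance_ghz:
            return branch
    return None  # peak falls outside every assigned window

# Toy measurement: a backscatter spectrum whose peak sits at branch 2's shift.
offsets = np.linspace(10.3, 11.0, 701)                # GHz
spectrum = np.exp(-((offsets - 10.65) / 0.01) ** 2)   # idealized peak shape
print(identify_branch(offsets[np.argmax(spectrum)]))  # -> 2
```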
OPTICAL-CODING-BASED PON MONITORING

OC exploits signal-coding techniques (inspired by optical code-division multiplexing) for control- and management-layer signaling operations. In OC-based PON monitoring, passive out-of-band encoders (Encn) are placed at the extremity of each PON distribution fiber to identify and monitor it, as shown in Fig. 5a [12]. The data and monitoring signals occupy separate wavelength bands (Λd and Λm, respectively), consistent with emerging standards. An optical source at the CO transmits the out-of-band pulses downstream; an optical or electronic receiver at the CO processes the aggregate upstream reflected signal. The encoders both reflect and imprint a unique code (i.e., specific to the PON branch) on the source pulses. Waveband separators split the data and monitoring wavebands at the ONU and the OLT. Alternatively, a combination of inline encoders and monitoring band-stop filters may be used at the branch termination points prior to the ONUs, as is the case for RR-OTDR.

Figure 5. OC-based PON monitoring: a) architecture; b) encoder; c) receiver for monitoring.

The use of simple fiber Bragg gratings (FBGs) directly inscribed at the termination of drop fibers may be regarded as a particular case of OC-based PON monitoring. Although the simplest approach, the use of FBGs as wavelength reflectors shares the low scalability and bandwidth efficiency drawbacks of the multiwavelength technique described earlier. Nevertheless, the use of in-fiber FBGs is particularly attractive since it reduces monitoring power losses by removing the requirement for waveband separators at the termination of drop fibers.

While several encoders and receivers have been proposed, the most cost-effective and high-performance solution that has emerged is a combination of periodic codes [13] and an electronic receiver [14], illustrated in Figs. 5b and 5c. Periodic codes were developed exclusively for this application, and offer low loss, low-complexity hardware, and good performance. Previously proposed encoders based on optical orthogonal codes exhibited much lower performance for this application [13].
A particularly attractive feature of this solution is the self-configuring nature of the network. Encoders are installed at the drop fiber ends without concern for the fiber length from the remote node, unlike RR-OTDR. Signal processing at the receiver differentiates returns even for remarkably similar fiber lengths (within meters). Customer relocations can be accommodated without a re-allocation strategy based on previous installations.

One of the challenges in evaluating any monitoring solution is predicting system capacity as it varies with the specific topology of the PON, whether legacy or greenfield. Simulations of specific topologies can be performed; however, they do not probe the generality of the solution. Statistical examinations of topologies can provide outage probabilities for the monitoring system in general.

Several research topics remain to bring this technology to the marketplace. Compact, low-cost periodic encoders are essential. While previously proposed fiber delay lines are simple, mass production is problematic. An integrated solution for the encoder would reduce both cost and bulk. Signal processing challenges also remain to increase the coverage capability of the decoding algorithm. A reduced-complexity maximum-likelihood receiver has been proposed in [14], but it is nonetheless suboptimal and may leave room for performance improvement.

The use of time- or wavelength-domain reflectors to identify PON branches, as in the reference-reflector and wavelength-based OTDR approaches, may be treated as particular cases of OC-based PON monitoring [15]. Compared to wavelength-domain reflection monitoring, code-domain reflection monitoring trades its more complex reflectors for higher scalability and bandwidth efficiency. Compared to time-domain reflection monitoring, it avoids the use of delay lines to differentiate branch fiber lengths and offers potentially higher scalability, particularly in the context of future long-reach PON (LR-PON) applications. Moreover, the extension of OC-based monitoring to LR-PONs may be facilitated through the use of in-line reflectors [15], although this adds further strain to the already stringent power budget of code-domain monitoring.
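The decoding step can be pictured with a simple correlation sketch. This is only a toy illustration and not the reduced-complexity maximum-likelihood receiver of [14]: each branch is assigned a hypothetical bipolar code (real encoders use unipolar on-off optical codes; ±1 chips are used here purely to keep the arithmetic simple), the aggregate return is the delayed sum of the codes of the healthy branches plus noise, and the health of each branch is judged from the correlation with its own delayed code.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 256  # code length in chips (illustrative)

# Hypothetical bipolar codes, one per monitored branch.
codes = {k: rng.choice([-1.0, 1.0], L) for k in range(1, 5)}

def aggregate_return(healthy, delays, noise=0.05):
    """Sum of the circularly delayed codes of the healthy branches."""
    y = np.zeros(L)
    for k in healthy:
        y += np.roll(codes[k], delays[k])
    return y + noise * rng.standard_normal(L)

def branch_correlations(y, delays):
    """Normalized correlation of the aggregate return with each branch's
    delayed code; a value near 1 suggests the branch reflector is seen,
    a value near 0 suggests a fault on that branch."""
    return {k: float(np.dot(y, np.roll(c, delays[k])) / L)
            for k, c in codes.items()}

delays = {1: 3, 2: 11, 3: 20, 4: 35}            # round-trip delays in chips
y = aggregate_return(healthy={1, 2, 4}, delays=delays)
print(branch_correlations(y, delays))           # branch 3 should be near 0
```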
Unlike BOTDR, OC-based monitoring does not require forklift upgrades of the PON distribution infrastructure. Consequently, the design of simplified and more cost-effective encoder and system architectures is a promising research direction [13]. Although OC-based monitoring does not offer complete fault localization (only fault identification on the branch), it is potentially more scalable than BOTDR. Therefore, it is well suited as a component within a hybrid solution, discussed in the next section.
SOLUTION PATHS TO SCALABLE COST-EFFECTIVE PON MONITORING

Our review of proposed optical-layer PON monitoring technologies reveals two distinct and complementary monitoring principles: standard reflectometry and the use of dedicated reflectors at the termination points of distribution fibers. While reflectors are capable of speedy identification of faulty distribution fibers, they lack any accurate fault localization capability. Conversely, whereas standard reflectometry methods are inefficient for distribution fibers, they are capable of yielding highly accurate fault localization in point-to-point settings. In addition, adapting standard reflectometry techniques for PON applications is neither economical nor practical, particularly due to their lack of scalability to larger PON sizes. In contrast, fault detection via the monitoring of reflected signals is potentially simple, cost-effective, and reliable.

Therefore, we expect that comprehensive monitoring methods will integrate both of the aforementioned monitoring principles. To do so, it is necessary to break the monitoring procedure into two separate steps, whereby fault detection and the identification of the faulty distribution fiber are carried out in real time through reflection monitoring, and precise fault localization is implemented subsequently through OTDR. In currently deployed PONs, the implementation of reflection-based monitoring will enable faster troubleshooting, as it will allow the NMS to exclude customer equipment malfunctions as the cause of a loss of signal while indicating the faulty distribution fiber. Technicians equipped with high-resolution OTDR can thus be dispatched immediately for exact fault localization and root cause analysis. Hence, fiber plant degradation may be detected long before transmission errors occur or services fail. In future PON deployments, where protection is expected to play an increasing role, reflection-based monitoring may be integrated with the protection schemes as a trigger for automatic protection switching (APS) mechanisms, leading to reduced downtimes and higher quality of service.
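As a rough sketch of the two-step procedure advocated above, the following Python fragment uses hypothetical callbacks (reflection_status, onu_alive, dispatch_otdr) standing in for NMS hooks that are not defined by any standard; it only illustrates the division of labor between real-time reflection monitoring and on-demand OTDR localization.

```python
def monitoring_cycle(branches, reflection_status, onu_alive, dispatch_otdr):
    """One pass of a hypothetical two-step monitoring loop.

    branches: iterable of distribution-fiber identifiers.
    reflection_status(b): True if branch b's monitoring reflection (code or
        wavelength) is detected at the CO.
    onu_alive(b): True if the ONU on branch b still responds at the data layer.
    dispatch_otdr(b): trigger high-resolution OTDR localization on branch b.
    """
    for b in branches:
        if reflection_status(b):
            continue  # fiber plant of branch b looks healthy
        if onu_alive(b):
            # Reflection lost but data still flows: suspect the reflector or
            # a slow degradation near the subscriber; schedule an inspection.
            print(f"branch {b}: degradation suspected, schedule inspection")
        else:
            # Loss of signal with no reflection: fiber fault on branch b;
            # customer-equipment failure has been excluded.
            print(f"branch {b}: fiber fault detected, dispatching OTDR")
            dispatch_otdr(b)
```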
CONCLUSIONS

Cost effectiveness and scalability are among the major requirements for in-service monitoring of PON fiber infrastructures. OTDR requires costly architectural enhancements to deliver fast automatic fault localization in PON tree topologies. In this work we review some of the most promising OTDR- and non-OTDR-based proposals for PON monitoring, and address the practical challenges facing their potential deployment. Rather than being mutually exclusive, OTDR and alternative technologies such as reflection-based monitoring are complementary. Therefore, hybrid techniques should be investigated as promising solutions for delivering the maintenance and protection functionalities required by current and next-generation PONs. OC-based methods are particularly attractive for implementing reflection monitoring in the context of increasing PON sizes.
REFERENCES

[1] M. P. McGarry, M. Reisslein, and M. Maier, "Ethernet Passive Optical Network Architectures and Dynamic Bandwidth Allocation Algorithms," IEEE Commun. Surveys & Tutorials, vol. 10, no. 3, 3rd qtr. 2008, pp. 46–60.
[2] F. Effenberger et al., "Next-Generation PON-Part III: System Specifications for XG-PON," IEEE Commun. Mag., vol. 47, no. 11, Nov. 2009, pp. 58–64.
[3] K. Yuksel et al., "Optical Layer Monitoring in Passive Optical Networks (PONs): A Review," Proc. ICTON '08, 2008, pp. 92–98.
[4] D. Anderson, L. Johnson, and F. G. Bell, Troubleshooting Optical Fiber Networks: Understanding and Using Optical Time-Domain Reflectometers, Academic Press, 2004.
[5] F. Caviglia and V. C. Biase, "Optical Maintenance in PONs," Proc. ECOC, Madrid, Spain, 1998, pp. 621–25.
[6] EXFO, "Application Notes 110 and 201"; http://exfo.com.
[7] I. Sankawa et al., "Fault Location Technique for In-Service Branched Optical Fiber Networks," IEEE Photonics Tech. Letters, vol. 2, no. 10, Oct. 1990, pp. 766–69.
[8] L. Wuilmart et al., "A PC-Based Method for the Localization and Quantization of Faults in Passive Tree-Structured Optical Networks Using the OTDR Technique," Proc. IEEE LEOS Annual Meeting, vol. 2, Nov. 1996, pp. 121–23.
[9] M. Thollabandi et al., "Tunable OTDR (TOTDR) Based on Direct Modulation of Self-Injection Locked RSOA for Line Monitoring of WDM-PON," Proc. ECOC, Brussels, Belgium, Sept. 2008.
[10] W. Chen et al., "Embedded OTDR Monitoring of the Fiber Plant Behind the PON Power Splitter," Proc. IEEE LEOS Symp., Benelux Chapter, Eindhoven, The Netherlands, 2006, pp. 13–16.
[11] N. Honda et al., "In-Service Line Monitoring System in PONs Using 1650 nm Brillouin OTDR and Fibers with Individually Assigned BFSs," IEEE/OSA J. Lightwave Tech., vol. 27, no. 20, Oct. 2009, pp. 4575–82.
[12] H. Fathallah and L. A. Rusch, "Code-Division Multiplexing for In-Service Out-of-Band Monitoring of Live FTTH-PONs," OSA J. Optical Net., vol. 6, no. 7, July 2007, pp. 819–29.
[13] H. Fathallah, M. M. Rad, and L. A. Rusch, "PON Monitoring: Periodic Encoders with Low Capital and Operational Cost," IEEE Photonics Tech. Letters, vol. 20, no. 24, Dec. 2008, pp. 2039–41.
[14] M. M. Rad et al., "Experimental Validation of Periodic Codes for PON Monitoring," Proc. IEEE GLOBECOM '09, Optical Net. Sys. Symp., Honolulu, HI, Dec. 2009, paper ONS-04.6.
[15] K. Fouli, L. R. Chen, and M. Maier, "Optical Reflection Monitoring for Next-Generation Long-Reach Passive Optical Networks," Proc. IEEE Photonics Society Annual Meeting, Belek-Antalya, Turkey, Oct. 2009.
BIOGRAPHIES

MOHAMMAD M. RAD ([email protected]) received his B.S.E.E. and M.Sc. degrees from Sharif University of Technology in 2003 and 2005, respectively. In September 2006 he joined the Department of Electrical and Computer Engineering, Center for Optics, Photonics, and Lasers (COPL), Université Laval as a Ph.D. candidate. His research interests include fiber-optic communications, long-haul data transmission, multiple access networks, network monitoring, and sensor networks.

KERIM FOULI is a Ph.D. student at Institut National de la Recherche Scientifique (INRS), Montréal, Canada. He received his B.Sc. degree in electrical engineering at Bilkent
University, Ankara, Turkey, in 1998 and his M.Sc. degree in optical communications at Université Laval, Quebec City, Canada, in 2003. He was a research engineer with AccessPhotonic Networks (Quebec City) from 2001 to 2005. His research interests are in the area of optical access and metropolitan network architectures with a focus on enabling technologies. He is the recipient of a two-year doctoral NSERC Alexander Graham Bell Canada Graduate Scholarship for his work on the architectures and performance of optical coding in access and metropolitan networks.

HABIB FATHALLAH [S'96, M'01] received a B.S.E.E. degree (with honors) from the National Engineering School of Tunis in 1994, and M.A. and Ph.D. degrees in electrical engineering from Université Laval in 1997 and 2001, respectively. He initiated the use of Bragg grating technology for all-optical/all-fiber coding/decoding in optical CDMA systems. He was the founder of AccessPhotonic Networks (2001–2006). He is currently with the Electrical Engineering Department, College of Engineering, and Prince Sultan Advanced Technology Research Institute of King Saud University (Riyadh, KSA), and an adjunct professor with the Electrical and Computer Engineering Department of Université Laval. His research interests include optical communications systems and technologies, metro and access networks, optical CDMA, PONs and long-reach PONs, FTTH, network monitoring, and hybrid fiber-wireless (FiWi) systems.

LESLIE A. RUSCH [S'91, M'94, SM'00, F'10] received a B.S.E.E. degree (with honors) from the California Institute of Technology, Pasadena, in 1980, and M.A. and Ph.D. degrees in electrical engineering from Princeton University, New Jersey, in 1992 and 1994, respectively. She has experience in defense, industrial, and academic communications research. She was a communications project engineer for the Department of Defense from 1980 to 1990. While on leave from Université Laval she spent two years
(2001–2002) at Intel Corporation creating and managing a group researching new wireless technologies. She is currently a professor in the Department of Electrical and Computer Engineering at Université Laval performing research on wireless and optical communications. Her research interests include wavelength-division multiple access using incoherent sources for metropolitan area networks; analysis of optical systems using coherent detection; semiconductor and erbium-doped optical amplifiers and their dynamics; and in wireless communications, optical pulse shaping for high-bit rate ultrawide-band systems (UWB), as well as performance analysis of reduced-complexity receivers for UWB. She has served as associate editor for IEEE Communications Letters and on several IEEE technical program committees. She has published over 70 journal articles in international journals (90 percent IEEE/IEE) with wide readership, and contributed to over 100 conferences. Her journal articles have been cited over 750 times per the Science Citation Index (SCI). MARTIN MAIER (
[email protected]) is an associate professor at the INRS. He was educated at the Technical University of Berlin, Germany, and received M.Sc. and Ph.D. degrees (both with distinctions) in 1998 and 2003, respectively. In the summer of 2003 he was a postdoc fellow at the Massachusetts Institute of Technology (MIT), Cambridge. He was a visiting professor at Stanford University, October 2006 through March 2007. He is a co-recipient of the 2009 IEEE Communications Society Best Tutorial Paper Award. His research activities aim at rethinking the role of optical networks and exploring novel applications of optical networking concepts and technologies across multidisciplinary domains, with a particular focus on communications, energy, and transport for emerging smart grid applications and bimodal fiber-wireless (FiWi) networks for broadband access. He is the author of the book Optical Switching Networks (Cambridge University Press, 2008), which was translated into Japanese in 2009.
GUEST EDITORIAL
IMT-ADVANCED AND NEXT-GENERATION MOBILE NETWORKS
Werner Mohr, Jose F. Monserrat, Afif Osseiran, and Marc Werner
Globally, mobile communications are moving toward broadband communication systems to meet the challenges of significantly increasing data traffic such as that for mobile Internet applications. In order to meet this increasing traffic, the International Telecommunication Union Radiocommunication Sector (ITU-R) initiated in 2000 the process toward the next generation of International Mobile Telecommunications systems, referred to as IMT-Advanced systems. ITU-R Recommendation M.1645 specifies the objectives of the future development of the IMT-Advanced family, among them to reach 100 Mb/s for mobile access and up to 1 Gb/s for nomadic wireless access. The research community was asked to develop concepts and system proposals to meet these requirements. These activities were the basis for the preparation of the ITU World Radiocommunication Conference 2007, which identified additional frequency spectrum for mobile and wireless communications. In 2008 ITU-R issued a Circular Letter calling for candidate radio access technologies (RATs) for IMT-Advanced, taking into account the identified frequency bands. In parallel, international specification and standardization bodies were developing technology proposals. Finally, two main technologies were submitted to ITU-R for approval within the IMT-Advanced framework: one based on Third Generation Partnership Project (3GPP) Long Term Evolution (LTE)-Advanced, and the other based on IEEE 802.16m. ITU-R decided in October 2010 that both submitted IMT-Advanced system proposals successfully met all of the established criteria for the first release of IMT-Advanced, qualifying them as the first true fourth-generation (4G) systems. In order to meet these challenging requirements on broadband mobile communication systems in limited available frequency spectrum, different advanced technologies were taken into account: advanced antenna concepts, modulation, coding and scheduling algorithms, radio resource management (including dynamic resource allocation, carrier aggregation, cross-layer optimization, admission control,
congestion control, mobility management, and interoperability), coordinated multipoint schemes, and new deployment elements like relays and femtocells. This feature topic issue provides an overview of major developments of both IMT-Advanced proposals. The five articles included in this issue describe new technology trends that will have an impact on future standardization and the evolution of IMT-Advanced technologies. The first article, “Evolution of LTE toward IMTAdvanced” by Stefan Parkvall et al., provides a high-level overview of 3GPP LTE Release 10 (IMT-Advanced) and an analysis of some of its key technologies. IMT-Advanced based on LTE Release 10 enhances LTE with carrier aggregation, enhanced multi-antenna support, improved support for heterogeneous deployments, and relaying. Simulation results show that LTE Release 10 fulfills and even surpasses the requirements for IMT-Advanced. The second article, “Assessing 3GPP Long Term Evolution (LTE)-Advanced as IMT-Advanced Technology: The WINNER+ Evaluation Group Approach” by Krystian Safjan et al., describes the approach followed by the European research project WINNER+ to successfully complete the performance evaluation of the 3GPP LTE-Advanced proposal as an IMT-Advanced technology candidate. Exemplary analytical and simulation results are provided together with the procedure followed for simulator calibration, which is essential in order to achieve comparable results between different evaluation organizations. The obtained results confirm that the 3GPP LTE Release 10 & Beyond (LTE-Advanced) proposal satisfies all the IMTAdvanced requirements. The third article, “Coordinated Multipoint: Concepts, Performance, and Field Trial Results” by Ralf Irmer et al., shows that coordinated multipoint (COMP) or cooperative multiple-input multiple-output (MIMO) is one of the most promising concepts to improve cell edge user data rate and spectral efficiency beyond legacy systems. Interference can be exploited or mitigated by cooperation between sectors or different sites. Significant gains can be obtained for
both the uplink and downlink. A range of technical challenges is identified and addressed, such as backhaul traffic, synchronization, and feedback design. The feasibility of COMP is demonstrated in two field testbeds with multiple sites and different backhaul solutions between the sites.

The fourth article, "Evolution of Uplink MIMO for LTE-Advanced" by Chester Sungchung Park et al., is about the other main enabling technology for next-generation mobile networks, advanced antenna design. Specifically, the article gives an overview of a MIMO approach that has recently been adopted in 3GPP, including up to four-layer transmission using precoded spatial multiplexing as well as transmit diversity techniques. Receivers suitable for uplink MIMO are presented, and their link-level performances are compared. It is shown that with advanced receivers, single-carrier transmission performs as well as orthogonal frequency-division multiplexing (OFDM).

The fifth article, "A 25 Gb/s/km2 Urban Wireless Network beyond IMT-Advanced" by Sheng Liu et al., presents a survey on the technical challenges of the future radio access network beyond IMT-Advanced, which should offer very high average area throughput in order to support huge data traffic demand and high user density with energy-efficient operation. Several potential enabling technologies and architectures from the controlling/processing, radio resource management, and physical layer perspectives for dense urban cell deployment are investigated to support the aggressive goal of an average area throughput of 25 Gb/s/km2 in beyond-IMT-Advanced systems. The combination of various advanced technologies such as interference mitigation techniques, MIMO, and cooperative communications, as well as cross-layer self-organizing networks (SONs), could potentially offer high-quality mobile services in future urban wireless networks and an experience similar to that of the wired Internet.

Finally, we would like to thank Dr. Steve Gorshe, Joseph Milizzo, Devika Mittra, and Jennifer Porcello for their continuous support and valuable comments to improve this feature topic issue. We hope that the articles in this issue will encourage the readers of IEEE Communications Magazine to contribute to the further development and improvement of IMT-Advanced.
BIOGRAPHIES

WERNER MOHR [SM] (
[email protected]) graduated from the University of Hannover, Germany, with a Master’s degree in electrical engineering in 1981 and a Ph.D. degree in 1987. He joined Siemens AG, Mobile Network Division, Munich, Germany, in 1991. He was involved in several EU funded projects and ETSI standardization groups on UMTS and systems beyond 3G. In December 1996 he became project manager of the European ACTS FRAMES Project until the project finished in August 1999. This project developed the basic concepts of the UMTS radio interface. Since April 2007
he has been with Nokia Siemens Networks GmbH & Co. KG, Munich, Germany, where he is head of Research Alliances. He was the coordinator of the WINNER Project in Framework Program 6 of the European Commission, Chairman of WWI (Wireless World Initiative), and the Eureka Celtic project WINNER+. The WINNER project laid the foundation for the radio interface for IMT-Advanced and provided the starting point for the 3GPP LTE standardization. In addition, he was Vice Chair of the eMobility European Technology Platform in the period 2008–2009 and is now eMobility (now called Net!works) Chairperson for the period 2010–2011. He was Chair of the Wireless World Research Forum from its launch in August 2001 up to December 2003. He is a member of VDE (the Association for Electrical, Electronic & Information Technologies, Germany). In 1990 he received the Award of the ITG (Information Technology Society) in VDE. He was a board member of ITG in VDE for the term 2006–2008 and was re-elected for the term 2009–2011. He is coauthor of the books Third Generation Mobile Communication Systems and Radio Technologies and Concepts for IMT-Advanced.

JOSE F. MONSERRAT [M] (
[email protected]) received his M.Sc. degree with High Honors and Ph.D. degree in telecommunications engineering from the Polytechnic University of Valencia (UPV) in 2003 and 2007, respectively. He was the recipient of the First Regional Prize of Engineering Studies in 2003 for his outstanding student record, also receiving the Best Thesis Prize from the UPV in 2008. In 2009 he was awarded the best young researcher prize of Valencia. He is currently an associate professor in the Communications Department of UPV. His research focuses on the application of complex computation techniques to radio resource management (RRM) strategies and to the optimization of current and future mobile communications networks, such as LTE-Advanced and IEEE 802.16m. He has been involved in several European projects, acting as task or work package leader in WINNER+, ICARUS, COMIC, and PROSIMOS. He also participated in 2010 in one external evaluation group within ITU-R on the performance assessment of the candidates for the future family of standards, IMT-Advanced.

AFIF OSSEIRAN [M] (
[email protected]) received a B.Sc. in electrical and electronics engineering from Université de Rennes I, France, in 1995, a DEA (B.Sc.E.E.) degree in electrical engineering from Université de Rennes I and INSA Rennes in 1997, and an M.A.Sc. degree in electrical and communication engineering from École Polytechnique de Montreal, Canada, in 1999. In 2006 he successfully defended his Ph.D. thesis at the Royal Institute of Technology (KTH), Stockholm, Sweden. Since 1999 he has been with Ericsson, Sweden. In 2004 he joined the European project WINNER as one of Ericsson's representatives. During 2006 and 2007 he led the spatial temporal processing (i.e., MIMO) task in WINNER. From April 2008 to June 2010 he was the technical manager of the Eureka Celtic project WINNER+. His research interests include many aspects of wireless communications with a special emphasis on advanced antenna systems, relaying, radio resource management, network coding, and cooperative communications. He is listed in Who's Who in the World and Who's Who in Science & Engineering. He has published more than 40 technical papers in international journals and conferences. He co-authored a book, Radio Technologies and Concepts for IMT-Advanced (Wiley, 2009). Since 2006 he has been giving a few graduate-level lectures yearly on advanced antennas at KTH.

MARC WERNER (
[email protected]) received his International Diploma degree from Imperial College, London, United Kingdom, in 1997, and his Dipl.-Ing. and Dr.-Ing. degrees in electrical engineering from RWTH Aachen University, Germany, in 1999 and 2006, respectively. He worked as a research scientist at RWTH Aachen University from 1999 to 2005. His research activities included capacity optimization and speech quality improvement for cellular communication systems. He has also worked in multiple industry projects in the area of mobile communication, and as a consultant for the German telecoms regulator. Since January 2006 he has been with QUALCOMM CDMA Technologies GmbH, Nuremberg, Germany. For Qualcomm he worked as work package and task leader in several multinational European research projects such as WINNER II and WINNER+. In WINNER+ he coordinated the simulative evaluation of the IMT-Advanced system proposal by 3GPP, LTE-Advanced.
IMT-ADVANCED AND NEXT-GENERATION MOBILE NETWORKS
Evolution of LTE toward IMT-Advanced Stefan Parkvall, Anders Furuskär, and Erik Dahlman, Ericsson Research
ABSTRACT

This article provides a high-level overview of LTE Release 10, sometimes referred to as LTE-Advanced. First, a brief overview of the first release of LTE and some of its technology components is given, followed by a discussion of the IMT-Advanced requirements. The technology enhancements introduced to LTE in Release 10 (carrier aggregation, improved multi-antenna support, relaying, and improved support for heterogeneous deployments) are then described. The article concludes with simulation results showing that LTE Release 10 fulfills and even surpasses the requirements for IMT-Advanced.
INTRODUCTION

Deployment of fourth-generation (4G) mobile-broadband systems based on the highly flexible Long Term Evolution (LTE) radio access technology [1, 2] defined by the Third Generation Partnership Project (3GPP) is currently ongoing on a broad scale, with the first systems already in full commercial operation. These systems are based on the first release of LTE, 3GPP Release 8, which was finalized in 2008. Release 8 can provide downlink and uplink peak rates up to 300 and 75 Mb/s, respectively, a one-way radio-network delay of less than 5 ms, and a significant increase in spectrum efficiency. LTE provides extensive support for spectrum flexibility, supports both frequency-division duplex (FDD) and time-division duplex (TDD), and targets a smooth evolution from earlier 3GPP technologies such as time-division synchronous code-division multiple access (TD-SCDMA) and wideband CDMA (WCDMA)/high-speed packet access (HSPA), as well as 3GPP2 technologies such as cdma2000.

The LTE radio access technology is continuously evolving to meet future requirements. In Release 9, finalized at the end of 2009, support for broadcast/multicast services, positioning services, and enhanced emergency-call functionality, as well as enhancements for downlink dual-layer beamforming, were added. Recently, 3GPP concluded the work on LTE Release 10, finalized at the end of 2010 and further extending the performance and capabilities of LTE beyond Release 8/9. An important aim of LTE Release 10 is to ensure that LTE fulfills all the requirements for International Mobile Telecommunications (IMT)-Advanced as defined by the International Telecommunication Union (ITU) [3, 4]. The relation to IMT-Advanced is also the reason for the label LTE-Advanced sometimes given to LTE Release 10 and beyond. This article provides a brief overview of LTE Release 8/9 and a short introduction to the IMT-Advanced work. Following this background, the extensions introduced in Release 10 are described. The article is concluded with results from system-level evaluations showing that LTE Release 10 can fulfill and even surpass the IMT-Advanced requirements.
OVERVIEW OF LTE RELEASE 8

LTE is an orthogonal frequency-division multiplexing (OFDM)-based radio access technology, with conventional OFDM on the downlink and discrete Fourier transform spread OFDM (DFTS-OFDM) [1] on the uplink. DFTS-OFDM allows for more efficient power-amplifier operation, thus providing the opportunity for reduced terminal power consumption. At the same time, equalization of the received signal is straightforward with conventional OFDM. The use of OFDM on the downlink combined with DFTS-OFDM on the uplink thus minimizes terminal complexity on the receiver side (downlink) as well as on the transmitter side (uplink), leading to an overall reduction in terminal complexity and power consumption.

The transmitted signal is organized into subframes of 1 ms duration, with 10 subframes forming a radio frame, as illustrated in Fig. 1. Each downlink subframe consists of a control region of one to three OFDM symbols, used for control signaling from the base station to the terminals, and a data region comprising the remaining part and used for data transmission to the terminals. The data transmissions in each subframe are dynamically scheduled by the base station. As seen in Fig. 1, cell-specific reference signals are also transmitted in each downlink subframe. These reference signals are used for data demodulation at the terminal (or user equipment, UE), and for measurement purposes (e.g., for channel status reports sent from the terminals to the base station).

Spectrum flexibility is one of the key properties of the LTE radio access technology. A wide range of different bandwidths is defined, and
both FDD and TDD modes of operation are supported, allowing for operation in both paired and unpaired spectrum. An important requirement in the LTE design has been to avoid unnecessary fragmentation and strive for commonality between the FDD and TDD modes of operation while still maintaining the possibility to fully exploit duplex-specific properties such as channel reciprocity in TDD. Aligning the two duplex schemes to the extent possible not only increases the momentum in the definition and standardization of the technology but also further improves the economy of scale of the LTE radio access technology.

Support for multi-antenna transmission is an integral part of LTE from the first release. Downlink multi-antenna schemes supported by LTE include transmit diversity, spatial multiplexing (including both so-called single-user multiple-input multiple-output [MIMO] and multi-user MIMO), and beamforming.

Figure 1. LTE time-frequency structure.
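The difference between the downlink and uplink waveforms described at the start of this section can be made concrete with a small NumPy sketch; the FFT size, subcarrier allocation, and normalization below are illustrative toy values, not the actual LTE numerology.

```python
import numpy as np

n_fft, m_sc = 64, 12   # toy FFT size and number of allocated subcarriers

# Random QPSK data symbols for one OFDM/DFTS-OFDM symbol.
bits = np.random.randint(0, 2, (m_sc, 2))
qpsk = ((1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])) / np.sqrt(2)

def ofdm_symbol(data, first_sc=0):
    """Conventional OFDM: map the data symbols directly onto subcarriers."""
    grid = np.zeros(n_fft, dtype=complex)
    grid[first_sc:first_sc + len(data)] = data
    return np.fft.ifft(grid) * np.sqrt(n_fft)

def dfts_ofdm_symbol(data, first_sc=0):
    """DFTS-OFDM: DFT-precode the block before subcarrier mapping, which
    restores a single-carrier-like, low peak-to-average-power waveform."""
    return ofdm_symbol(np.fft.fft(data) / np.sqrt(len(data)), first_sc)

papr = lambda x: 10 * np.log10(np.max(np.abs(x)**2) / np.mean(np.abs(x)**2))
print(papr(ofdm_symbol(qpsk)), papr(dfts_ofdm_symbol(qpsk)))
# The DFT-precoded symbol typically shows the lower PAPR, which is what
# enables the more efficient terminal power-amplifier operation.
```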
ITU AND IMT-ADVANCED

IMT-Advanced is the term used by ITU for radio access technologies beyond IMT-2000. An invitation to submit candidate technologies for IMT-Advanced was issued by ITU in 2008 [3]. Along with the invitation, ITU has also defined a set of requirements to be fulfilled by any IMT-Advanced candidate technology [4], some of which are shown in Table 1 together with the corresponding capabilities of LTE Release 10.

Anticipating the invitation from ITU, 3GPP already in March 2008 initiated a study item on LTE-Advanced, with the task of defining requirements and investigating potential technology components for the LTE evolution. This study item, completed in March 2010 and forming the basis for the Release 10 work, aimed beyond IMT-Advanced [5]. In 2010 3GPP submitted LTE Release 10 to ITU and, based on this submission, ITU approved LTE Release 10 as one of two IMT-Advanced technologies. As will be seen, Release 10 will not only fulfill the IMT-Advanced requirements but in many cases even surpass them.
LTE RELEASE 10

LTE Release 10, sometimes known as LTE-Advanced, is not a new radio access technology but the evolution of LTE to further improve performance. Being an evolution of LTE, Release 10 includes all the features of Release 8/9 and adds several new features, the most important of which (carrier aggregation, enhanced multi-antenna support, improved support for heterogeneous deployments, and relaying) are discussed in the following sections. Evolving LTE rather than designing a new radio access technology is important from an operator perspective, as it allows for smooth introduction of new technologies without jeopardizing existing investments. A Release 10 terminal can directly connect to a network of an earlier release, and a Release 8/9 terminal can connect to a network supporting the new enhancements. Hence, an operator can deploy a Release 8 network and later, when the need arises, upgrade to Release 10 functionality where needed. In fact, most of the Release 10 features can be introduced into the network as simple software upgrades.
CARRIER AGGREGATION

Already the first release of LTE, Release 8, provides extensive support for deployment in spectrum allocations of various characteristics, with bandwidths ranging from around 1.4 up to 20 MHz in both paired and unpaired bands. In Release 10 the transmission bandwidth can be further extended by means of so-called carrier aggregation (CA), where multiple component carriers are aggregated and jointly used for transmission to/from a single mobile terminal, as illustrated in Fig. 2.
                                     IMT-Advanced requirement   LTE Release 8   LTE Release 10
Transmission bandwidth               At least 40 MHz             Up to 20 MHz    Up to 100 MHz
Peak spectral efficiency, downlink   15 b/s/Hz                   16 b/s/Hz       16.0 [30.0]* b/s/Hz
Peak spectral efficiency, uplink     6.75 b/s/Hz                 4 b/s/Hz        8.1 [16.1]** b/s/Hz
Latency, control plane               Less than 100 ms            50 ms           50 ms
Latency, user plane                  Less than 10 ms             4.9 ms          4.9 ms

* Value is for a 4 × 4 antenna configuration; value in brackets for 8 × 8.
** Value is for a 2 × 2 antenna configuration; value in brackets for 4 × 4.

Table 1. Requirements and LTE fulfillment.
Up to five component carriers, possibly each of a different bandwidth, can be aggregated, allowing for transmission bandwidths up to 100 MHz. Backward compatibility is catered for as each component carrier uses the Release 8 structure. Hence, to a Release 8/9 terminal each component carrier will appear as an LTE Release 8 carrier, while a carrier-aggregation-capable terminal can exploit the total aggregated bandwidth, enabling higher data rates. In the general case, different numbers of component carriers can be aggregated for the downlink and uplink. With respect to the frequency location of the different component carriers, three different cases can be identified: intra-band aggregation with contiguous carriers (e.g., aggregation of #2 and #3 in Fig. 2), inter-band aggregation (#1 and #4), and intra-band aggregation with noncontiguous carriers (#1 and #2). The possibility to aggregate non-adjacent component carriers enables exploitation of fragmented spectrum; operators with a fragmented spectrum can provide high-data-rate services based on the availability of wide overall bandwidth even though they do not possess a single wideband spectrum allocation. From a baseband perspective, there is no difference between the cases, and they are all supported by LTE Release 10. However, the radio frequency (RF) implementation complexity is vastly different, with the first case being the least complex. Thus, although spectrum aggregation is supported by the basic specifications, the actual implementation will be strongly constrained, including specification of only a limited number of aggregation scenarios, and aggregation over dispersed spectrum only being supported by the most advanced terminals. Although exploitation of fragmented spectrum and expansion of the total bandwidth beyond 20 MHz are two important usages of carrier aggregation, there are also scenarios where carrier aggregation within 20 MHz of contiguous spectrum is useful. One example is heterogeneous deployments, discussed below. Scheduling and hybrid automatic repeat request (ARQ) retransmissions are handled independently for each component carrier (Fig. 2). As a baseline, control signaling is transmitted on the same component carrier as the corresponding data. However, as a complement it is possible to use so-called cross-carrier scheduling
where the scheduling decision is transmitted to the terminal on another component carrier than the corresponding data. To reduce the terminal power consumption, a carrier-aggregation-capable terminal typically receives on one component carrier only, the primary component carrier. Reception of additional secondary component carriers can be rapidly turned on/off in the terminal by the base station through medium access control (MAC) signaling. Similarly, in the uplink all the feedback signaling is transmitted on the primary component carrier, and secondary component carriers are only enabled when necessary for data transmission.
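The carrier aggregation behavior described above can be summarized in a toy model. The class and method names below are purely illustrative and do not mirror 3GPP RRC/MAC signaling; only the numerical limits (component carriers of at most 20 MHz, at most five of them) come from the text.

```python
class ComponentCarrier:
    """A Release 8-compatible component carrier (illustrative model)."""
    def __init__(self, name, bandwidth_mhz, band):
        assert bandwidth_mhz <= 20      # each CC looks like a Release 8 carrier
        self.name, self.bandwidth_mhz, self.band = name, bandwidth_mhz, band

class CaTerminal:
    MAX_CCS = 5                          # up to five CCs, i.e., up to 100 MHz

    def __init__(self, primary):
        self.primary = primary           # always received; carries UL feedback
        self.secondary = {}              # name -> [carrier, activated?]

    def configure_scell(self, cc):
        if 1 + len(self.secondary) >= self.MAX_CCS:
            raise ValueError("no more than five component carriers")
        self.secondary[cc.name] = [cc, False]   # configured but deactivated

    def mac_activate(self, name, on=True):
        self.secondary[name][1] = on     # fast on/off saves terminal power

    def active_bandwidth_mhz(self):
        return self.primary.bandwidth_mhz + sum(
            cc.bandwidth_mhz for cc, on in self.secondary.values() if on)

ue = CaTerminal(ComponentCarrier("CC1", 20, "band A"))
ue.configure_scell(ComponentCarrier("CC2", 10, "band B"))  # inter-band CA
ue.mac_activate("CC2")
print(ue.active_bandwidth_mhz())         # -> 30
```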
ENHANCED MULTI-ANTENNA SUPPORT

LTE supports a rich set of multi-antenna transmission techniques already in the first release. This includes downlink transmit diversity based on space-frequency block coding (SFBC) for the case of two transmit antennas, and SFBC in combination with frequency shift time diversity (FSTD) for four transmit antennas. In addition, downlink codebook-based precoding, including the possibility for multilayer transmission (spatial multiplexing) with up to four layers, is supported in LTE Release 8. This includes the possibility for rank adaptation down to single-layer transmission, leading to codebook-based beamforming, as well as a basic form of multi-user MIMO where different layers in the same time-frequency resource can be assigned to different terminals.

The multi-antenna techniques above rely on the previously mentioned cell-specific reference signals for demodulation as well as to acquire channel-state feedback from the terminal to the base station. In addition, UE-specific reference signals are part of Release 8 to support single-layer beamforming, support that is extended to dual-layer transmission in Release 9. UE-specific reference signals are precoded together with the data, implying that the precoder weights are not restricted to a certain codebook and do not need to be known to the receiver. An important application is beamforming with more than four antennas and, for TDD, reciprocity-based transmission strategies.

In Release 10, downlink spatial multiplexing is expanded to support up to eight transmission
layers together with an enhanced reference signal structure. Relying on cell-specific reference signals for higher-order spatial multiplexing is less attractive, since the reference signal overhead would be proportional not to the instantaneous transmission rank but rather to the maximum supported transmission rank. Hence, Release 10 introduces extensive support of UE-specific reference signals for demodulation of up to eight layers. Furthermore, feedback of channel-state information (CSI) is based on a separate set of reference signals broadcast in the cell, known as CSI reference signals. CSI reference signals are relatively sparse in frequency (every 12th subcarrier, corresponding to 180 kHz spacing) but are regularly transmitted from all antennas at the base station. The periodicity is configurable, but is typically on the order of once per 10 ms. UE-specific reference signals, on the other hand, are denser in frequency and only transmitted when data is transmitted on the corresponding layer. Separating the reference signal structure supporting demodulation from that supporting channel-state estimation helps reduce the reference signal overhead, especially for high degrees of spatial multiplexing, and allows for implementation of various beamforming schemes.

Uplink spatial multiplexing of up to four layers is also part of Release 10. The basis is a codebook-based scheme where the scheduler in the base station determines the precoding matrix to be applied in the terminal. The selected precoding matrix is applied to uplink data transmissions as well as the uplink demodulation reference signals. To facilitate the selection of a suitable precoding matrix, the sounding reference signals are enhanced to support up to four transmit antennas.
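The codebook principle for uplink spatial multiplexing can be illustrated with a short NumPy sketch: the base station estimates the channel from the sounding reference signals, evaluates each precoder in a codebook, and signals the chosen index to the terminal. The random unitary codebook, dimensions, and SNR below are illustrative stand-ins, not the actual 3GPP codebook.

```python
import numpy as np

rng = np.random.default_rng(1)
n_rx, n_tx, n_layers = 4, 4, 2          # illustrative antenna/layer setup

# Toy codebook of precoders with orthonormal columns (not the 3GPP codebook).
codebook = []
for _ in range(8):
    a = rng.standard_normal((n_tx, n_layers)) + 1j * rng.standard_normal((n_tx, n_layers))
    q, _ = np.linalg.qr(a)               # reduced QR -> orthonormal columns
    codebook.append(q)

def sum_rate(h, w, snr=10.0):
    """Achievable rate of y = H W x + n with equal power per layer."""
    hw = h @ w
    g = np.eye(n_layers) + (snr / n_layers) * (hw.conj().T @ hw)
    return float(np.log2(np.linalg.det(g).real))

# Channel estimate, e.g., obtained from the sounding reference signals.
h = (rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)

best = max(range(len(codebook)), key=lambda i: sum_rate(h, codebook[i]))
print("selected precoder index:", best)
```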
IMPROVED SUPPORT FOR HETEROGENEOUS DEPLOYMENTS

With the rapidly growing usage of mobile broadband, the data rates experienced by the users in the network become increasingly important. The end-user data rate in a practical deployment is highly dependent on factors such as the terminal-to-base-station distance, whether the user is indoors or outdoors, and so on. As the possibilities to improve the link performance or increase the transmission power are limited, supporting very high end-user data rates requires a denser infrastructure. Not only does a densified network have the possibility to increase the data rates experienced, it can also increase the overall capacity as the number of sites increases. A straightforward densification of an existing macro network is one possibility, but in scenarios where the users are highly clustered, a potentially attractive approach is to complement a macro cell providing basic coverage with multiple low-output-power pico cells where needed, as shown in Fig. 3. The result of such a strategy is a heterogeneous deployment with two or more cell layers. The idea of multiple cell layers is in itself not new; hierarchical cell structures have been discussed since the mid-1990s, but then for (low-rate) voice users.
Figure 2. Carrier aggregation in LTE Release 10.

It is important to point out that this is a deployment strategy, not a technology component, and as such is possible already in LTE Release 8/9. However, Release 10 provides some additional features, improving the support for heterogeneous deployments.

In a heterogeneous deployment, cell association (i.e., to which cell a terminal should be connected) plays an important role. From an uplink data rate perspective, it is fundamentally beneficial to connect to the cell with the lowest path loss, as this results in a higher data rate at a given transmit power, instead of the traditional approach of connecting to the cell with the strongest received downlink. The best cell for downlink association depends on the load: at low load, connecting to the cell with the strongest received downlink offers the highest data rates, while at high loads, connecting to the low-power node may be preferable as it provides for downlink resource reuse between the cells served by the low-power nodes. The backhaul capacity to the low-power node is also important to consider. Cell association strategies in a heterogeneous deployment are therefore nontrivial, and the overall network performance must be taken into account.

Nevertheless, any cell association strategy not solely based on maximizing the received downlink signal quality can lead to a new interference situation in the network as, in essence, the uplink coverage area can be larger than the downlink coverage area, implying that there is a region around the low-power node (lighter ring in Fig. 3) where downlink transmission from the low-power node to a terminal is subject to strong interference from the macrocell. The signal-to-interference ratio experienced by the terminal at the outermost coverage area of the low-power node is, due to the difference in output power between the high-power macro and the low-power node, significantly lower than in a traditional homogeneous macro network.
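The tension between the two association rules can be seen with a couple of made-up numbers (typical orders of magnitude only):

```python
# Illustrative values only: compare the "strongest received downlink" rule
# with the "lowest path loss" rule at one terminal position.
macro_tx_dbm, pico_tx_dbm = 46.0, 30.0          # macro vs. low-power node
pl_macro_db, pl_pico_db = 120.0, 110.0          # hypothetical path losses

rx_macro = macro_tx_dbm - pl_macro_db           # -74 dBm
rx_pico = pico_tx_dbm - pl_pico_db              # -80 dBm

downlink_rule = "macro" if rx_macro > rx_pico else "pico"
pathloss_rule = "macro" if pl_macro_db < pl_pico_db else "pico"
print(downlink_rule, pathloss_rule)
# -> macro pico: the strongest-downlink rule keeps the terminal on the macro,
#    while the lowest-path-loss rule (better for the uplink) selects the pico;
#    terminals in this situation are exactly those in the lighter ring of Fig. 3.
```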
Figure 3. Heterogeneous deployment with a macro cell overlaying multiple pico cells.
For the data part of a subframe, this is not a serious problem, as the intercell interference coordination (ICIC) mechanism present in LTE already from Release 8 can be used. With ICIC, different cells can exchange information about which frequencies they intend to schedule transmissions on in the near future, thereby reducing or completely avoiding intercell interference. This can be used to more or less dynamically coordinate the resource usage between the cell layers and avoid overlapping resource usage. The control signaling in each subframe is more problematic, as it spans the full cell bandwidth and is not subject to ICIC. To address this, LTE Release 10 provides enhancements to separate the control signaling for the different cell layers in either the frequency or time domain.

Frequency-domain schemes use carrier aggregation to separate control signaling for the different cell layers. At least one component carrier in each cell layer is protected from interference from other cell layers by not transmitting control signaling on the component carrier in question in the other cell layers. For example, referring to Fig. 3, the macro base station transmits control signaling on component carrier f1 but not on component carrier f2, while the situation is the opposite in the low-power nodes located within the macrocell. Since Release 10 introduces cross-carrier scheduling, resources on f2 can be used for data transmission, scheduled by control signaling received on f1, subject to the normal ICIC mechanism. In essence, this creates frequency reuse for the control signaling while still allowing terminals to dynamically utilize the full bandwidth (and thereby supporting the highest data rates) for the data part. For example, an operator with 20 MHz of spectrum may choose to configure two component carriers of 10 MHz each and use carrier aggregation as described above. Note that carrier-aggregation-capable terminals, in addition to the benefit of being able to connect to the low-power
node also in the lighter ring in Fig. 3, will have the same peak data rates as in the case of a single 20 MHz carrier. Release 8/9 terminals can also benefit from seeing a larger picocell, but can obviously only access one component carrier.

Time-domain schemes use a single component carrier f in all the cell layers and separate the control signaling in the different cell layers in the time domain, as seen in Fig. 3. At least some subframes in the low-power cell layer are protected from interference by the macro layer muting the control signaling in those subframes. However, for backward compatibility, cell-specific reference signals still need to be transmitted from the macro cell, resulting in some interference to the terminals. To provide for accurate CSI feedback, Release 10 provides the possibility to configure on which subframes the terminal should base its channel-quality estimates, as the interference experienced by a terminal connected to a low-power node may vary drastically depending on the macrocell activity. Note that in this approach, Release 8/9 terminals will connect to the macro and not to the low-power node in the lighter area in Fig. 3, but can access the full bandwidth of the carrier.

The discussion above assumes that the terminals are allowed to connect to the low-power node. This is known as open access, and typically the low-power nodes are operator-deployed in such a scenario. Another scenario, giving rise to a similar interference problem, is user-deployed home base stations. The term closed subscriber group (CSG) is commonly used to refer to cases when access to such a low-power base station is limited to a small set of terminals (e.g., a family living in a house where the home base station is located). CSG results in additional interference scenarios. For example, a terminal
located close to but not admitted to connect to the home base station will be subject to strong interference, and may not be able to access the macrocell. In essence, the presence of a home base station may cause a coverage hole in the operator’s macro network; a problem that is particularly worrisome as home base stations typically are user deployed, and their locations are not controlled by the operator. Similarly, reception at the home base station may be severely impacted by uplink transmissions from the terminal connected to the macrocell. Therefore, if closed subscriber groups are supported, it is preferable to use a separate carrier for the CSG cells to maintain the overall performance of the radio access network. Interference handling between CSG cells, which typically lack backhaul-based coordination schemes, could rely on distributed algorithms for power control and/or resource partitioning between the cells.
RELAYING

LTE Release 10 also extends the LTE radio access technology with support for relaying functionality (Fig. 4). With relaying, the mobile terminal communicates with the network via a relay node that is wirelessly connected to a donor cell using the LTE radio interface technology. The donor cell may, in addition to one or several relays, also directly serve terminals of its own. The donor-relay link may operate on the same frequency as the relay-terminal link (inband relaying) or on a different frequency (outband relaying).

Figure 4. Relaying.

With the 3GPP relaying solution [6], the relay node will, from a terminal point of view, appear as an ordinary cell. This has the important advantage of simplifying the terminal implementation and making the relay node backward compatible (i.e., also accessible to LTE Release 8 terminals). In essence, the relay is a low-power base station wirelessly connected to the remaining part of the network. One of the attractive features of a relay is the LTE-based wireless backhaul, as
this could provide a simple way of improving coverage, for example, in indoor environments, by simply placing relays at the problematic locations. At a later stage, if motivated by the traffic situation, the wireless donor-relay link could be replaced by, for example, an optical fiber in order to use the precious radio resources in the donor cell for terminal communication instead of serving the relay.

Due to the relay transmitter causing interference to its own receiver, simultaneous donor-to-relay and relay-to-terminal transmission may not be feasible unless sufficient isolation of the outgoing and incoming signals is provided, for example, by means of specific well-separated and well-isolated antenna structures or through the use of outband relaying. Similarly, at the relay it may not be possible to receive transmissions from the terminals simultaneously with the relay transmitting to the donor cell. In Release 10 a gap in the relay-to-terminal transmissions, to allow for reception of donor-to-relay transmissions, is created using MBSFN subframes (see footnote 1), as shown in Fig. 4. In an MBSFN subframe the first one or two OFDM symbols are transmitted as usual, carrying cell-specific reference signals and downlink control signaling, while the remainder of the subframe is not used for transmission to the terminals and can therefore be used for the donor-to-relay communication. The benefit of using MBSFN subframes, compared to blanking transmission in the whole subframe, is backward compatibility with Release 8/9 terminals. Blanking the whole subframe would not be compatible with Release 8/9 terminals, as they assume cell-specific reference signals to be present in (part of) each subframe, while MBSFN subframes are supported already in Release 8. Similar to the downlink gaps obtained through the use of MBSFN subframes, there is a need to create gaps in the terminal-to-relay transmission in order for the relay to transmit to the donor. This is handled by not scheduling terminal-to-relay transmissions in some subframes.
1. Multicast-broadcast single-frequency network (MBSFN) subframes, present already in Release 8, were originally intended for broadcast support but have later come to be seen as a generic tool (e.g., to blank parts of a subframe for relaying support).
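As a rough illustration of how MBSFN subframes create the backhaul gaps, the toy scheduler below marks two subframes per frame as MBSFN; the chosen pattern is arbitrary and not a configuration prescribed by the specification.

```python
# Toy relay behavior over one 10-subframe radio frame: in MBSFN subframes the
# relay transmits only the first OFDM symbols (control and cell-specific
# reference signals) to its terminals and then listens to the donor cell.
mbsfn_subframes = {3, 8}   # arbitrary illustrative pattern

def relay_activity(subframe):
    if subframe % 10 in mbsfn_subframes:
        return ("send control/CRS to own terminals (first 1-2 OFDM symbols), "
                "then receive the donor-to-relay transmission")
    return "serve own terminals (downlink and uplink)"

for sf in range(10):
    print(sf, relay_activity(sf))
```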
Figure 5. Performance results for FDD (top) and TDD (bottom), and downlink (right) and uplink (left): average cell throughput [b/s/Hz/cell] and cell-edge user throughput [b/s/Hz] in the InH, UMi, UMa, and RMa test environments, for downlink 4 × 2 and uplink 1 × 4 antenna configurations.
Since the relay needs to transmit cell-specific reference signals in the first part of an MBSFN subframe, it cannot receive the normal control signaling from the donor cell. Therefore, Release 10 defines a new control channel, transmitted later in the subframe as shown in Fig. 4, to provide control signaling from the donor to the relay. This control channel type, of which multiple instances can be configured, carries downlink scheduling assignments and uplink scheduling grants in the same way as the normal control signaling. As the assignments refer to data in the same subframe and the grants relate to transmissions in a later subframe, early decoding of the former control information is beneficial. For this reason, downlink assignments are transmitted in the first part of the donor-to-relay transmission, while the latter part is used for (less time-critical) uplink grants.
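As a compact restatement of the backhaul-subframe organization just described (our sketch; the new control channel is known in the specifications as the R-PDCCH, and the region boundaries are simplified here for illustration):

```python
# Rough structure of a donor-to-relay (backhaul) downlink subframe as described
# above; region boundaries are illustrative, not taken from the specification.
backhaul_subframe = [
    ("symbols 0-1", "relay transmits its own cell-specific reference signals and control"),
    ("first slot",  "donor-to-relay control: downlink assignments (decoded early, "
                    "since they refer to data in the same subframe), plus data"),
    ("second slot", "donor-to-relay control: uplink grants (less time-critical), plus data"),
]
for region, use in backhaul_subframe:
    print(f"{region:11s} {use}")
```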
PERFORMANCE RESULTS
As discussed in the introduction, ITU has defined basic requirements to be fulfilled by any IMT-Advanced technology [4]. Some of the most basic requirements, together with the corresponding capabilities of LTE [7], are summarized in Table 1. From the table it is seen that already the first release of LTE, Release 8, is capable of meeting all of the requirements except the bandwidth and uplink spectral efficiency requirements. These two requirements are addressed in Release 10 through carrier aggregation and uplink spatial multiplexing, respectively. For the detailed requirements on average and cell-edge spectral efficiency, 3GPP has carried out an extensive evaluation campaign to assess the performance of the LTE radio access technology against the IMT-Advanced requirements. Examples of LTE system performance for the different test environments specified by the ITU (indoor hotspot, urban micro, urban macro, and rural) are provided in Fig. 5. In the downlink, a coordinated beamforming scheme is used with spatial multiplexing of two layers to a single terminal in each beam. Beams are dynamically adapted to limit interference, allowing reuse of time-frequency resources within cells. The beamforming is coordinated between cells belonging to the same site. This can be seen as a simple
form of coordinated multipoint transmission (CoMP) or multi-user MIMO. In the uplink, single-layer transmission is used. For further details on the simulation assumptions, please see [8]. These performance results are achieved without using any of the features introduced in Release 10. The IMT-Advanced requirements on average and cell-edge spectral efficiency can thus already be fulfilled with LTE Release 8. It is important to point out that this does not mean that Release 10 features, such as extended downlink multi-antenna transmission and relaying functionality, are of no use. Rather, these features take the capabilities of the LTE radio access technology even further, beyond IMT-Advanced. Thus, by including more advanced features, such as extended multi-antenna transmission, LTE system performance is enhanced further, beyond what is illustrated above. A wider range of deployment scenarios is also addressed, including deployments with relays and non-contiguous spectrum allocations.
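To illustrate the kind of intra-site coordinated beamforming used in the downlink evaluations above, here is a minimal numerical sketch (ours, not the 3GPP evaluation code): each 4-antenna cell places its two-layer beam in the null space of the channel toward the user scheduled in the co-sited neighbor cell, assuming ideal channel knowledge.

```python
import numpy as np

rng = np.random.default_rng(0)
# 2-antenna receivers, 4-antenna transmitters; flat-fading example channels
H_own = rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4))
H_victim = rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4))

# Projector onto the null space of the victim user's channel
P = np.eye(4) - H_victim.conj().T @ np.linalg.pinv(H_victim.conj().T)
# Two-layer beamforming matrix: matched to the own user inside that null space
W = P @ H_own.conj().T
W /= np.linalg.norm(W)          # total transmit power normalization

print("leakage to victim user :", np.linalg.norm(H_victim @ W))   # ~0
print("useful channel gain    :", np.linalg.norm(H_own @ W))
```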
CONCLUSION
This article has provided a high-level overview of the evolution of LTE towards Release 10. Some of the key components — carrier aggregation, enhanced multi-antenna support, and relaying — are described. Numerical results show that LTE Release 10 fulfills and even surpasses the IMT-Advanced requirements. Given the large momentum behind LTE, this is a very attractive route for an operator to meet future demands on mobile broadband. Clearly, LTE is a very flexible platform and will continue to evolve for many years to come.
REFERENCES
[1] E. Dahlman et al., 3G Evolution: HSPA and LTE for Mobile Broadband, 2nd ed., Academic Press, 2008.
[2] D. Astély et al., "LTE: The Evolution of Mobile Broadband," IEEE Commun. Mag., vol. 47, no. 4, Apr. 2009.
[3] ITU-R SG5, "Invitation for Submission of Proposals for Candidate Radio Interface Technologies for the Terrestrial Components of the Radio Interface(s) for IMT-Advanced and Invitation to Participate in their Subsequent Evaluation," Circular Letter 5/LCCE/2, Mar. 2008.
[4] ITU-R M.2134, "Requirements Related to Technical Performance for IMT-Advanced Radio Interface(s)"; http://www.itu.int/dms_pub/itu-r/opb/rep/R-REP-M.2134-2008-PDF-E.pdf.
[5] 3GPP TR 36.913, "Requirements for Further Advancements for Evolved Universal Terrestrial Radio Access (E-UTRA) (LTE-Advanced)."
[6] 3GPP TS 36.216, "Evolved Universal Terrestrial Radio Access (E-UTRA); Physical Layer for Relaying Operation."
[7] 3GPP TR 36.912, "Feasibility Study for Further Advancements of E-UTRA (LTE-Advanced)."
[8] A. Furuskär, "Performance Evaluations of LTE-Advanced — The 3GPP ITU Proposal," 12th Int'l. Symp. Wireless Personal Multimedia Commun. '09, Sendai, Japan, Sept. 2009.
BIOGRAPHIES
STEFAN PARKVALL [SM] ([email protected]) joined Ericsson Research in 1999 and is currently a principal researcher in the area of radio access, working with research and standardization of cellular technologies. He has been heavily involved in the development of HSPA and LTE and is also co-author of 3G Evolution: HSPA and LTE for Mobile Broadband. In 2009 he received "Stora Teknikpriset" (one of Sweden's major technical awards) for his work on HSPA. He holds a Ph.D. from the Royal Institute of Technology (KTH), Stockholm, Sweden. His previous positions include assistant professor in communication theory at KTH and visiting researcher at the University of California, San Diego.
ANDERS FURUSKÄR is a principal researcher within the field of wireless access networks at Ericsson Research. His current focus is on evolving HSPA and LTE to meet future demands on data rates and traffic volumes. He holds an M.Sc. and a Ph.D. from KTH. He joined Ericsson in 1990.
ERIK DAHLMAN joined Ericsson Research in 1993 and is currently senior expert in the area of radio access technologies. He has been deeply involved in the development and standardization of 3G radio access technologies (WCDMA/HSPA) as well as LTE and its evolution. He is part of the Ericsson Research management team working with long-term radio access strategies. He is also co-author of the book 3G Evolution: HSPA and LTE for Mobile Broadband and, together with Stefan Parkvall, received "Stora Teknikpriset" in 2009 for his contributions to the standardization of HSPA. He holds a Ph.D. from KTH.
IMT-ADVANCED AND NEXT-GENERATION MOBILE NETWORKS
Assessing 3GPP LTE-Advanced as IMT-Advanced Technology: The WINNER+ Evaluation Group Approach
Krystian Safjan, Nokia Siemens Networks — Research
Valeria D'Amico, Telecom Italia
Daniel Bültmann, RWTH Aachen University
David Martín-Sacristán, Universidad Politécnica de Valencia
Ahmed Saadani, Orange-Labs
Hendrik Schöneich, Qualcomm CDMA Technologies GmbH
ABSTRACT
This article describes the WINNER+ approach to performance evaluation of the 3GPP LTE-Advanced proposal as an IMT-Advanced technology candidate. The officially registered WINNER+ Independent Evaluation Group evaluated this proposal against ITU-R requirements. The first part of the article gives an overview of the ITU-R evaluation process, criteria, and scenarios. The second part is focused on the working method of the evaluation group, emphasizing the simulator calibration approach. Finally, the article contains exemplary evaluation results based on analytical and simulation approaches. The obtained results allow WINNER+ to confirm that the 3GPP LTE Release 10 & Beyond (LTE-Advanced) proposal satisfies all the IMT-Advanced requirements, and thus qualifies as an IMT-Advanced system.
INTRODUCTION
The fast growth of mobile traffic volume is one of the main reasons why so-called fourth-generation mobile communication systems are being investigated and standardized. For that reason a call for submission of system candidates for International Mobile Telecommunications-Advanced (IMT-A) was opened by the International Telecommunication Union Radiocommunication Sector (ITU-R), while independent groups were encouraged to register with ITU-R to evaluate candidate systems. IMT-A systems are meant to support low to high user mobility, various data rates, and multiple environments, while having capabilities for high-quality
multimedia applications and providing a significant improvement in performance and quality of service [1]. The predecessors of the WINNER+ project, WINNER I and II, had an important impact on the Long Term Evolution (LTE) roadmap. The WINNER I system concept represented an important contribution toward LTE, while WINNER II was involved in the preparation for the World Radiocommunication Conference 2007 (WRC '07) and had an impact on IMT-A requirements in terms of spectrum demand, minimum requirements, and evaluation methodology. Shortly after WRC '07, ITU-R issued the Circular Letter [2] with a call for submission of IMT-Advanced radio interface technology (RIT) proposals to ITU-R. Since the WINNER+ predecessors were involved in the ITU-R process, WINNER+ covers both the competence and the tools for performing evaluations. In November 2008 WINNER+ registered as an Independent Evaluation Group (IEG) at ITU-R for IMT-Advanced, with a focus on evaluating the Third Generation Partnership Project (3GPP) LTE-Advanced proposal. In total, 14 IEGs from the Americas, Asia, and Europe registered at ITU-R. By highlighting the WINNER+ IEG approach to simulator calibration and evaluation, and providing exemplary evaluation results, this article addresses the challenge of how to pursue a system-level performance check that supplies relevant and reliable performance indicators while keeping the performance analysis feasible and practical. WINNER+ is a consortium of project partners; therefore, many different tools are used for evaluation. Thus, a relevant question appears: is it possible to obtain similar performance results using different
Figure 1. The ITU-R schedule for the IMT-Advanced process mapped to ITU-R WP5D meetings (#1, Feb. 2008, Geneva, through #10 in 2011, with steps 1–8 of the process mapped onto this meeting timeline).
simulation tools of a complex communication system? In this article we present the WINNER+ evaluation group approach to harmonizing this orchestra of simulators, aligning different organizations with a variety of tools to produce converging system performance evaluation results. We also briefly describe the limited set of test scenarios used in the evaluations, which directly correspond to typical usage scenarios of the system under consideration. Finally, a full evaluation of the 3GPP LTE Release 10 & Beyond (LTE-Advanced) candidate is performed, confirming that the proposal satisfies all the IMT-Advanced requirements.
ITU-R FRAMEWORK AND EVALUATION PROCESS
The path toward IMT-Advanced officially started in March 2008, when the Circular Letter was sent out by ITU-R to invite submissions of IMT-Advanced technology proposals. The ITU-R schedule spans the 2008–2011 timeframe and is shown in Fig. 1, as in [3]. The radio interface development process is covered in several steps, the first being the issuance of the Circular Letter (step 1), after which step 2 covers the development of candidate RITs and sets of RITs (SRITs). Step 3 represents the submission/reception of the RIT and SRIT proposals (and acknowledgment of receipt) by Working Party 5D (WP5D), the group within ITU-R responsible for IMT systems. Step 4 indicates the phase in which evaluation of candidate RITs or SRITs by evaluation groups is carried out. Steps 5, 6, and 7 refer to the review and coordination of outside evaluation activities, the review to assess compliance with minimum requirements, and, finally, the consideration of evaluation results, consensus building, and decision. Step 8 refers to the development of radio interface recommendation(s). The timing of these phases can partially overlap, as is clear from the above schedule, and not all the phases are treated within ITU-R. In particular, step 4 is external to ITU-R. Organizations willing to become an IEG have been invited to register with ITU-R.
In November 2008 the European Eureka Celtic project WINNER+ registered as an IEG at ITU-R. WINNER+ has been very active in the IMT-Advanced process since its early stages. WINNER+ has participated in both rounds of workshops organized by the IMT-Advanced proponents in 2009 and 2010, and in the relevant ITU-R WP5D meetings, by submitting several contributions and sharing the adopted work method, intended work plan, and calibration assumptions and results. A dedicated website was activated by WINNER+ [4] to share the updated calibration data status in real time with all the other IEGs. The calibration methodology proposed by WINNER+ has represented a basic guideline for all the IEGs. The alignment of such results across different evaluation groups has been verified, which is beneficial for the robustness of the entire ITU-R process. A correspondence group was also initiated on the ITU-R website to address questions to the proponents and exchange comments among the different evaluation groups. WINNER+ has had a high level of communication with the other groups through this tool. The WINNER+ project, in its 30-month lifetime, has produced consistent research work [5] on optimization of radio interface concepts for IMT-A systems, also thanks to the heritage of activities carried out in the former European Union Framework Program 6 projects WINNER I and WINNER II. In particular, WINNER II strongly influenced the channel model definition for IMT-A [6]. Based on expertise in IMT-A radio technology concepts and link- and system-level simulation tools, the WINNER+ Evaluation Group has considered the 3GPP LTE Release 10 & Beyond (LTE-Advanced) SRIT proposal consisting of a time-division duplexing (TDD) RIT and a frequency-division duplexing (FDD) RIT [7]. The WINNER+ group has evaluated all minimum requirements for IMT-A systems by means of analytical, inspection, and simulation activities in order to perform a full evaluation of the LTE-Advanced candidate technology. For simulation purposes, in order to guarantee the reliability of the results, evaluated characteristics have been assessed by a plurality of partners. During the course of the work, great emphasis has been placed on reflecting realistic
behavior of the system under consideration, by modeling non-ideal aspects (including, e.g., effects of channel estimation errors, CQI measurement errors, and feedback delay as well as a correct modeling of the overhead in the system). Simulators of different partner organizations have been calibrated in order to provide consistent results. The adopted calibration approach, detailed calibration results, and the requirements assessment are provided later.
PERFORMANCE CRITERIA AND EVALUATION SCENARIOS
According to the evaluation process of ITU-R, IMT-A candidate proposals need to fulfill a set of 13 requirements related to technical performance for IMT-A radio interface(s) [8]. The requirements ensure that candidate systems fit into the framework of IMT systems. It is to be checked by IEGs through inspection of the proposal whether the candidate system supports scalable bandwidths in the IMT-A spectrum, a wide range of services, and intersystem handover with at least one IMT-2000 system. Furthermore, candidate systems should be designed to reach certain performance requirements under best-case conditions. Calculations should prove that peak spectral efficiency requirements can be reached, and that user plane and control plane latency as well as handover interruption times meet the requirements. A third set of requirements refers to the efficient use of the radio spectrum under normal operating conditions. Link- and system-level simulations need to demonstrate high cell spectral efficiency while ensuring basic service for cell edge users. A high number of simultaneous voice calls must be supported, and the system should operate at user speeds of up to 300 km/h. For these simulations the ITU-R gives detailed guidelines for evaluation of RITs for IMT-A [9] to ensure comparable simulation results across evaluation groups. According to [10], minimum requirements need to be fulfilled in three of four specific test environments that reflect future use cases of IMT-A systems. Each environment is associated with a deployment scenario that specifies the simulation setup (e.g., intersite distance, carrier frequency, maximum transmit powers, channel model). In particular, the deployment scenarios defined in [9] are:
Indoor hotspot (InH): Small isolated cells at offices or hotspot areas; targets high user throughput or user density for pedestrian users. Two base stations operating at 3.4 GHz with an omnidirectional antenna setup are mounted on the ceiling of a long hall with adjacent offices (cell coverage area 3000 m²).
Urban microcell (UMi): High traffic and user density for city centers and dense urban areas. Outdoor and outdoor-to-indoor propagation characteristics for pedestrian users are assumed. Continuous hexagonal deployment is used with three sectors per site and below-rooftop antenna
mounting. Base stations operate at 2.5 GHz and have an intersite distance of 200 m (cell coverage area 0.035 km²).
Urban macrocell (UMa): Targets ubiquitous coverage for urban areas. A similar hexagonal deployment is used with a larger intersite distance of 500 m and antennas mounted clearly above the rooftops. Non-line-of-sight or obstructed propagation conditions are common for this scenario. Only vehicular users at moderate speed are assumed, suffering from an additional outdoor-to-in-car penetration loss. Base stations operate at 2 GHz (cell coverage area 0.22 km²).
Rural macrocell (RMa): Similar to UMa, but targets larger cells with support for high-speed vehicular users. Base stations have an intersite distance of 1732 m and operate at 800 MHz, which is more suitable for large cells (cell coverage area 2.59 km²).
Suburban macrocell (SMa): An optional scenario for the same test environment as the UMa scenario. The key differences are an increased intersite distance of 1299 m and a mix of indoor and high-speed vehicular users (cell coverage area 1.46 km²).
During the evaluation phase, the Indian evaluation group TCOE India proposed in [10] an additional optional scenario reflecting an important use case, serving rural areas. It can be characterized as follows:
Rural Indian open area: A large-cell coverage scenario. Some parameters of the scenario may take several values (e.g., the carrier frequency, terminal antenna height, and intersite distance). The intersite distance is 30–50 km, corresponding to the typical distance between villages in India. In this scenario terminals are in fixed positions with rooftop directional antennas. Base stations operate at 312–2300 MHz (cell coverage area up to 1962 km²).
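The coverage areas quoted above follow directly from the intersite distances if one assumes a regular hexagonal grid in which the area served from one site is (√3/2)·ISD². A quick numerical check of that assumption (ours, not part of the ITU-R guidelines):

```python
import math

def site_area_km2(isd_m):
    """Area covered per site in a regular hexagonal grid: (sqrt(3)/2) * ISD^2."""
    return (math.sqrt(3) / 2) * isd_m ** 2 / 1e6

for name, isd in [("UMi", 200), ("UMa", 500), ("SMa", 1299), ("RMa", 1732)]:
    print(f"{name}: ISD {isd} m -> {site_area_km2(isd):.3f} km^2")
# UMi 0.035, UMa 0.217, SMa 1.461, RMa 2.598 -- matching the quoted values
```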
WORKING METHOD OF THE WINNER+ EVALUATION GROUP
ASSESSMENT OF THE 3GPP TECHNOLOGY CANDIDATE
In 2008 the 3GPP held two IMT-Advanced workshops. The goal of these workshops was to investigate the main changes that could be brought forward to enhance the evolved universal terrestrial radio access (E-UTRA) radio interface as well as the evolved universal terrestrial radio access network in the context of IMT-A. In particular, the LTE-Advanced Study Item was initiated in order to study the evolution of LTE based on new performance targets. This initiative has been collecting operators' and manufacturers' views in order to develop and test innovative concepts that will satisfy the needs of next-generation communications. The resulting technical report was published in June 2008, and a contribution was sent to ITU-R covering the 3GPP radio access network (RAN) work on LTE-Advanced toward IMT-A. Finally, the 3GPP has contributed to ITU-R toward IMT-A via its
proposal "3GPP LTE Release 10 & Beyond (LTE-Advanced)" [7]. The new technical features of LTE-Advanced are defined in [11]. The main technical features are as follows.
Support of Wider Bandwidth — Carrier aggregation, where two or more component carriers, each with a bandwidth of up to 20 MHz, are aggregated, is considered for LTE-Advanced in order to support downlink transmission bandwidths larger than 20 MHz (e.g., 100 MHz).
Extended Multi-Antenna Configurations — Extension of LTE downlink spatial multiplexing is considered. LTE-Advanced supports spatial multiplexing of up to eight layers in the downlink direction and up to four layers in the uplink direction. Enhanced multi-user MIMO (MU-MIMO) transmission is supported in LTE-Advanced.
Coordinated Multipoint Transmission and Reception — Coordinated multipoint (CoMP) transmission/reception is considered for LTE-Advanced as a tool to improve the coverage of high data rates and the cell-edge throughput, and/or to increase system throughput. Downlink CoMP transmission implies dynamic coordination among multiple geographically separated transmission points. The 3GPP currently considers the following two categories: joint processing and coordinated scheduling/coordinated beamforming. Downlink CoMP transmission should include the possibility of coordination between different cells. Two implementations of CoMP can be considered: intersite CoMP and intra-site CoMP. Initially the focus of CoMP will be on intra-site schemes. In fact, for Release 10 there will be no new standardized interface communication for support of intersite CoMP; therefore, no additional features are specified to support downlink CoMP. Uplink CoMP reception is expected to have very limited impact on the specifications. Uplink CoMP reception can involve joint reception of the transmitted signal at multiple reception points and/or coordinated scheduling decisions among cells to control interference.
Relaying Functionality — Relaying is considered for LTE-Advanced as a tool to improve the coverage of high data rates, group mobility, temporary network deployment, and the cell-edge throughput, and/or to provide coverage in new areas. Relay nodes are placed throughout the macrocell layout, hence modifying the reference layout specified in [9]. Moreover, the channel model to be used for the relay backhaul link was not defined in [9]. For these reasons relay nodes have not been considered as an advanced feature when assessing the IMT-Advanced requirements.
The evaluation guidelines published by ITU-R in [9] are helpful for the evaluation of IMT-A systems, but evaluating beyond-Release 10 systems is still challenging, since there is a need to specify reference scenarios and missing parameters for new features such as CoMP or multilayered networks.
SPLITTING THE WORK: ANALYTICAL, INSPECTION, AND SIMULATION APPROACHES
In its Guidelines for evaluation of radio interface technologies for IMT-Advanced [9], ITU-R defined the characteristics for evaluating IMT-A candidate proposals. The characteristics can be classified based on three different methods of evaluation:
• Analytical
• Inspection
• Simulation (link-level or system-level)
Analytical evaluation comprises all characteristics that can be calculated. It is performed for the characteristics of peak spectral efficiency, control and user plane latency, as well as intra- and inter-frequency handover interruption time. Inspection is a non-numerical check by the IEG that certain requirements are fulfilled and certain capabilities are provided. The characteristics bandwidth, intersystem handover, deployment in at least one of the identified IMT bands, channel bandwidth scalability, and support for a wide range of services are evaluated by inspection. Numerical characteristics that are too complicated to be calculated are evaluated by simulation. These characteristics are cell spectral efficiency, cell edge user spectral efficiency, mobility, and VoIP capacity. The simulation results should respect the guidelines and the deployment scenarios detailed in [9].
PREPARING THE WORK: CALIBRATION OF THE SIMULATORS
In the WINNER+ project the evaluations have been performed by several partners using different simulation tools. To ensure that all tools yield coherent results, key components were calibrated among partners. Specifically, the channel model implementation, which is technology agnostic, and a basic setup of the baseline LTE Release 8 communication system were aligned among partners. The calibration process followed a stepwise approach with three steps: channel model large-scale parameter calibration, channel model small-scale parameter calibration, and baseline system calibration. This calibration work provided high reliability for the WINNER+ IEG main evaluation work, which was focused on the full assessment of the 3GPP LTE Release 10 & Beyond (LTE-Advanced) proposal. The channel model proposed by ITU-R in [9] is far from simple to implement. This is why the WINNER+ IEG addressed calibration of the channel model implementation from the beginning. The channel model calibration process was divided into two steps: large-scale and small-scale parameter calibration. Large-scale calibration (LSC) focuses on the calibration of the channel model implementation without multipath effects (i.e., only with large-scale fading). The metrics used in this calibration are the path gain and the wideband signal-to-interference-plus-noise ratio (SINR). The path gain is defined as the average signal attenuation between a user terminal and its serving base station. The measure includes distance attenuation, shadowing, and antenna gains (both at the base station and at the user terminal), while the effects of fast fading are excluded.
Figure 2. Channel model calibration (steps 1 and 2 of the calibration process) with examples of wideband SINR (left) and angle of departure (right) distributions in the UMi NLoS scenario: CDFs over the contributing organizations, together with the group average.
The downlink wideband SINR, sometimes also called the geometry, is the average power received from the serving cell in relation to the average interference power received from all other cells plus noise. In addition to the evaluation principles and assumptions in [9] and the channel model clarifications that followed, additional assumptions concerning the cell selection mechanism, feeder loss, and base station antenna tilt have been used to derive the path gain and wideband SINR distributions. Exact values are included in [12].
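As a small numerical illustration of these two large-scale metrics (ours, not taken from [12]), the wideband SINR can be computed directly from the average per-cell received powers, with fast fading excluded:

```python
import numpy as np

def wideband_sinr_db(rx_power_mw, serving_idx, noise_mw):
    """Geometry: average serving-cell power over total other-cell power plus noise."""
    p = np.asarray(rx_power_mw, dtype=float)
    signal = p[serving_idx]
    interference = p.sum() - signal
    return 10 * np.log10(signal / (interference + noise_mw))

# Example: one serving cell, three interferers, thermal noise of 1e-10 mW
rx_mw = [1e-7, 2e-8, 5e-9, 1e-9]
print(wideband_sinr_db(rx_mw, serving_idx=0, noise_mw=1e-10))   # about 5.8 dB
```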
Small-scale calibration (SSC) focuses on the calibration of the multipath part of the channel model. Given that the channel model is a stochastic geometric model, the stochastic distributions of several geometric characteristics are calibrated. These characteristics include the delay spread and the departure and arrival angular spreads at the base station and user terminal, respectively (also known as angle of departure [AoD] and angle of arrival [AoA]). The root mean square delay spread and the circular angular spread at the base station and user terminal are calculated for a large number of radio links, and in the calibrations the corresponding distributions are compared. Mathematical definitions of these spread measures are included in [12]. The calibrations are performed separately for line-of-sight (LoS), non-line-of-sight (NLoS), and outdoor-to-indoor (OtoI) propagation conditions. As an example of the calibration data collected in this phase, we provide curves obtained in the UMi deployment scenario in Fig. 2. Results from several partners are included, along with the averaged curves of the group. It can be concluded that calibration has been achieved. The complete calibration data obtained by WINNER+ is available on the WINNER+ IMT-A evaluation web page [4]. WINNER+ has focused on evaluating the 3GPP LTE Release 10 & Beyond (LTE-Advanced) proposal, and in order to prepare the system-level evaluations, a simulator calibration for the baseline configuration was performed in the third step of the calibration process. The reference baseline configuration is illustrated in Fig. 3, and the detailed simulation parameters can be found in [11]. Harmonization of simulators was done by comparing uplink and downlink spectral efficiencies (both cell and cell edge) for a baseline setup.
Figure 3. Baseline system calibration scenario and parameters: 57-sector UMi scenario; duplex method FDD; full-buffer traffic; round-robin downlink scheduling (TDMA) and round-robin uplink scheduling (FDMA); uplink power control with P0 = –106.0 dBm and alpha = 1.0; 10 users per sector. TDMA: time-division multiple access; FDMA: frequency-division multiple access.
Figure 4. Baseline system calibration (step 3 of the calibration process) for the UMi scenario: CDFs of normalized user throughput (b/s/Hz) in the downlink and uplink for the contributing organizations.
Implementations of all major parts of an LTE-compliant protocol stack, such as hybrid automatic repeat request (H-ARQ) retransmissions, the channel status feedback loop, power control, scheduling, and receiver setup, were included. For non-standardized algorithms, baseline assumptions were made. By comparing the normalized downlink and uplink user throughput (user spectral efficiency) distributions in Fig. 4, it can be seen that good alignment between WINNER+ partners was achieved. The presented information and benchmark data have been derived for all IMT-A deployment scenarios and shared with the other IEGs during the evaluation period in order to foster the required coordination and unification of results.
LTE-ADVANCED TECHNOLOGY CANDIDATE RESULTS
This section gives an introduction to a subset of the evaluation characteristics addressed by the WINNER+ IEG for the assessment of the 3GPP LTE Release 10 & Beyond (LTE-Advanced) proposal. The peak spectral efficiency is presented as an example of the analytical method. This is followed by simulation results based on the aforementioned calibration outcome.
Analytical Results — The peak spectral efficiency (PSE) is defined in [8]. It is basically the highest theoretical data rate, normalized by bandwidth, assignable to a single mobile station assuming error-free conditions. The WINNER+ IEG evaluated the PSE for the LTE-Advanced FDD
and TDD modes in uplink and downlink. In addition to the evaluation configuration parameters provided in [9], with up to four Rx and four Tx antennas at the base station and up to four Rx and two Tx antennas at the mobile station, configurations with up to eight antennas were also investigated for informative purposes. From a mathematical point of view, the PSE calculation is not demanding: it is simply the number of data bits that can be transmitted divided by the bandwidth and the time needed for transmission. But LTE-Advanced, like any other mobile radio system, needs overhead that does not contribute to the data rate. Reference and synchronization signals, as well as broadcast channels and control signaling with channels carrying different indicators and control information, form such overhead. Depending on the mode and the direction of transmission, different overhead types have to be taken into account. In TDD mode the guard period (GP) that separates downlink and uplink transmission in the time domain adds additional overhead. For the PSE calculation one may additionally distinguish between overhead types that do or do not count toward the data rate. This topic was raised during a workshop organized by 3GPP for all IEGs at the end of 2009 and was finally clarified by ITU-R in a liaison statement in 2010. A further topic was the handling of the GP duration in TDD mode and its influence on the time normalization for the PSE calculation. The WINNER+ IEG provided multiple PSE
Table 1. Requirements and analytical PSE results for the FDD and TDD RITs (PSE in b/s/Hz).
Downlink: ITU-R requirement 15; FDD RIT assessment 16.3; TDD RIT assessment 15.8.
Uplink: ITU-R requirement 6.75; FDD RIT assessment 8.4; TDD RIT assessment 7.9.
calculations for LTE-Advanced, and all of them clearly fulfilled the IMT-A requirements. The results for four-layer spatial multiplexing are summarized in Table 1. As it is clearly beyond the scope of this article to go into the technical details, the interested reader is referred to the final evaluation report [12], where the calculation is explained in detail.
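A back-of-the-envelope version of this calculation (ours, not the WINNER+ derivation in [12]; the 64-QAM modulation and the flat 20 percent overhead fraction are assumptions made for illustration) already lands close to the downlink FDD figure in Table 1:

```python
def peak_spectral_efficiency(bw_hz, n_prb, layers,
                             bits_per_symbol=6.0, overhead_fraction=0.2):
    """PSE = data bits / (bandwidth * time) for one 1 ms subframe,
    using LTE-like numerology: 12 subcarriers x 14 OFDM symbols per PRB."""
    duration_s = 1e-3
    resource_elements = n_prb * 12 * 14
    data_bits = resource_elements * (1 - overhead_fraction) * bits_per_symbol * layers
    return data_bits / (bw_hz * duration_s)

# 20 MHz carrier (100 PRBs), four spatial layers, 64-QAM, ~20% assumed overhead
print(peak_spectral_efficiency(20e6, 100, layers=4))   # ~16.1 b/s/Hz
```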
Simulation Results — Simulations have been carried out by the participating organizations, and the results are compared to the ITU-R requirements. The assessment is done in the different ITU-R environments, and for the FDD and TDD RITs. The ITU-R guidelines require that for the downlink the number of antennas used should be greater than or equal to n = 4 for the transmitter and m = 2 for the receiver. However, for the uplink, only the receiver should use at least m = 2 antennas. The use of the different transmission schemes allowed by LTE-Advanced and the constraint given by the antenna numbers lead to different simulation results. The following definitions of the transmission schemes hold:
• SIMO: The transmitter uses one antenna and the receiver m antennas. This scheme is called 1 × m single-input multiple-output.
• BF: The transmitter uses n and the receiver m antennas. The transmitter exploits its n antennas to steer the transmit power of the data stream toward the receiver's preferred direction. The scheme is called n × m beamforming.
• SU-MIMO: The transmitter uses n and the receiver m antennas. The transmitter uses all n antennas to transmit one or several data streams to a single receiver. This scheme is called n × m single-user multiple-input multiple-output.
• MU-MIMO: Several receivers having m antennas share the n transmitting antennas and are served on the same time-frequency resources. This scheme is called multi-user MIMO.
Table 2 summarizes the main results in the UMi and UMa environments for the FDD RIT. The results presented in this table for cell spectral efficiency and cell edge spectral efficiency are averaged over results coming from different organizations, evaluated using the same transmission scheme. We note that different LTE-Advanced transmission schemes allow the requirements to be met in the uplink and downlink. The UMi and UMa deployment scenarios are the most challenging, since MU-MIMO is needed to achieve the downlink requirements. However, the InH and RMa requirements are met using an SU-MIMO configuration. Uplink requirements are less demanding than downlink requirements, since they can be achieved with SIMO configurations.
For the mobility assessment, the traffic channel link data rate and the support for mobility classes are addressed. It is shown that these requirements are also achieved for the considered environments. Finally, the voice over IP (VoIP) capacity is assessed, and it is shown that the required number of active users per sector per megahertz is achieved by the LTE-Advanced technology. In general, the addressed requirements are achieved by simulations in all environments for the FDD and TDD RITs. A complete set of assessment results for all ITU-R deployment scenarios derived by the WINNER+ IEG is described in [12]. The obtained results have confirmed that the 3GPP LTE Release 10 & Beyond (LTE-Advanced) proposal satisfies all IMT-A requirements.
CONCLUSIONS
The WINNER+ project responded to the ITU-R call to form an IEG and created its own evaluation group. The evaluation effort has different flavors, ranging from careful study of the proponent's proposal (inspection) through calculation (analytical) to link- and system-level simulations (simulation). Evaluations by simulation were preceded by calibration. The stepwise calibration exercise proved to be a complex and demanding task. During this step, communication among independent evaluation groups was important. Making the results of the WINNER+ IEG publicly available has enabled discussions and the possibility to compare results with other IEGs. Furthermore, WINNER+ gave a hint of one possible approach to coping with calibration. The WINNER+ IEG also promptly reacted to scenarios proposed by other IEGs, as in the case of the rural Indian open area additional test scenario. The WINNER+ IEG response can be seen as an example of an agile approach to the evaluation activity. The WINNER+ evaluation group completed its assessment of the 3GPP LTE-Advanced proposal and submitted its final evaluation report to ITU-R WP5D in June 2010. The main conclusion drawn from the results is that the 3GPP LTE Release 10 & Beyond (LTE-Advanced) proposal satisfies all the IMT-Advanced requirements and thus qualifies as an IMT-Advanced system. There is an expectation that further LTE evolution beyond Release 10 will provide even better performance, since multiple features considered in further releases, such as relaying and coordinated multipoint (CoMP) transmission and reception, were not part of the evaluated proposal.
Table 2. Assessment results for the UMi and UMa environments for the FDD RIT.
Cell spectral efficiency in DL (b/s/Hz/cell): UMi requirement 2.6, assessed 2.88* (4 × 2 MU-MIMO); UMa requirement 2.2, assessed 2.38* (4 × 2 MU-MIMO).
Cell spectral efficiency in UL (b/s/Hz/cell): UMi requirement 1.8, assessed 2.07* (1 × 4 SIMO), 2.41* (2 × 4 BF), 2.59* (2 × 4 SU-MIMO); UMa requirement 1.4, assessed 1.60* (1 × 4 SIMO), 2.94* (2 × 4 BF), 1.97* (2 × 4 SU-MIMO).
Cell edge spectral efficiency in DL (b/s/Hz): UMi requirement 0.075, assessed 0.089* (4 × 2 MU-MIMO); UMa requirement 0.06, assessed 0.067* (4 × 2 MU-MIMO).
Cell edge spectral efficiency in UL (b/s/Hz): UMi requirement 0.05, assessed 0.082* (1 × 4 SIMO), 0.124* (2 × 4 BF), 0.127* (2 × 4 SU-MIMO); UMa requirement 0.03, assessed 0.073* (1 × 4 SIMO), 0.092* (2 × 4 BF), 0.091* (2 × 4 SU-MIMO).
Mobility, traffic channel link data rate in UL (b/s/Hz): UMi requirement 0.75, assessed 1.27* (1 × 4 SU-MIMO); UMa requirement 0.55, assessed 1.36* (1 × 4 SU-MIMO).
Mobility classes supported: UMi requirement stationary, pedestrian, vehicular (up to 30 km/h), assessed yes; UMa requirement stationary, pedestrian, vehicular, assessed yes.
VoIP capacity (active users/sector/MHz): UMi requirement 40, assessed 83*; UMa requirement 40, assessed 66*.
* Mean value over all contributing organizations for the given antenna configuration; the mean does not represent the performance of one particular system setup. Where multiple antenna configurations are listed, the maximum value is taken as the main result. UL: uplink; DL: downlink.
ACKNOWLEDGMENT
This work has been performed in the framework of the CELTIC project CP5-026 WINNER+. The authors would like to acknowledge the contributions of their colleagues in the WINNER+ consortium. The authors wish to thank their colleagues from Ericsson, Per Skillermark and Johan Nyström, for their effort in leading the simulation part of the WINNER+ evaluation group. The work of David Martín-Sacristán was supported by an FPU grant of the Spanish Ministry of Education.
REFERENCES
[1] Doc. IMT-ADV/1-E, "Background on IMT-Advanced."
[2] ITU-R Circular Letter 5/LCCE/2 and Addenda, "Invitation for Submission of Proposals for Candidate Radio Interface Technologies for the Terrestrial Components of the Radio Interface(s) for IMT-Advanced and Invitation to Participate in their Subsequent Evaluation."
[3] ITU-R Doc. IMT-ADV/2 Rev. 1, "Submission and Evaluation Process and Consensus Building."
[4] WINNER+: A European ITU-R Evaluation Group; http://projects.celtic-initiative.org/winner+/WINNER+%20Evaluation%20Group.html.
[5] WINNER+ D4.2, "Final Conclusions on End-to-End Performance and Sensitivity Analysis," June 2010; http://projects.celtic-initiative.org/winner+/WINNER+%20Deliverables/D4.2_v1.0.pdf.
[6] P. Kyösti et al., "WINNER II Channel Models," IST-WINNER D1.1.2 v. 1.1, Sept. 2007; https://www.ist-winner.org/WINNER2/Deliverables/D1.1.2v1.1.pdf.
[7] Doc. IMT-ADV/8, "Acknowledgment of Candidate Submission from 3GPP Proponent (3GPP Organization Partners of ARIB, ATIS, CCSA, ETSI, TTA, and TTC) under Step 3 of the IMT-Advanced Process (3GPP Technology)," Oct. 2009.
[8] ITU-R M.2134, "Requirements Related to Technical Performance for IMT-Advanced Radio Interface(s)," Nov. 2008.
[9] ITU-R M.2135, "Guidelines for Evaluation of Radio Interface Technologies for IMT-Advanced," Nov. 2008.
[10] ITU-R IMT-ADV/16, "Evaluation of IMT-Advanced Candidate Technology Submissions in Documents IMT-ADV/4 and IMT-ADV/8 by TCOE India," 2010.
[11] 3GPP TR 36.814, "Evolved Universal Terrestrial Radio Access (E-UTRA); Further Advancements for E-UTRA Physical Layer Aspects," v. 9.0.0, Mar. 2010.
[12] Doc. IMT-ADV/22, "Evaluation of IMT-Advanced Candidate Technology Submissions in Documents IMT-ADV/6, IMT-ADV/8, and IMT-ADV/9 by WINNER+ Evaluation Group"; http://www.itu.int/ITU-R/index.asp?category=studygroups&rlink=rsg5-imt-advanced&lang=en.
ADDITIONAL READING
[1] ITU-R M.2133, "Requirements, Evaluation Criteria, and Submission Templates for the Development of IMT-Advanced."
BIOGRAPHIES
KRYSTIAN SAFJAN ([email protected]) graduated from the Wroclaw University of Technology, Poland, Faculty of Electronics, Specialization of Telecommunications/Digital Signal Processing, in 2004. After graduation he joined Nokia Siemens Networks Sp. z o.o., Poland, where he is involved in research and development of B3G systems within the Department of Radio System Technology. His main research interests are concentrated on advanced radio resource management methods.
VALERIA D'AMICO received an M.Sc. degree in electronics engineering from the University of Catania, Italy. After an internship at ST Microelectronics, she joined Marconi Mobile. Since 2001 she has been with Telecom Italia, where she has been involved in several activities targeting future-generation communications, contributing to the Italian-funded FIRB PRIMO project and the Eureka Celtic WINNER+ project. Currently she is involved in the EU ARTIST4G project, leading the work package on interference avoidance.
DANIEL BÜLTMANN received his Diploma (Dipl.-Ing.) in electrical engineering from RWTH Aachen University in 2004. Since January 2005 he has been employed as a research assistant with the Research Group ComNets, RWTH Aachen University, where he is working toward his Ph.D. degree. The focus of his research is on LTE-Advanced radio resource management. He was involved in the WINNER II and WINNER+ projects.
DAVID MARTÍN-SACRISTÁN received his M.S. degree in telecommunications engineering from the Polytechnic University of Valencia (UPV) in 2006. He is currently a Ph.D. student in the Institute of Telecommunications and Multimedia Applications (iTEAM), UPV. His research interests are focused on beyond-3G networks, including modeling and simulation, resource management, and link adaptation.
AHMED SAADANI received his engineering degree from Tunisia Polytechnic School in 1999, his Master's degree in 2000, and his Ph.D. degree in digital communications in 2003 from the Ecole Nationale Supérieure des Télécommunications (ENST), Paris, France. He then joined Orange Labs, Issy les Moulineaux, France, as a research engineer, where he worked on advanced receivers and MIMO schemes for 3G/3G+ systems. His current research interests are in cooperative communications, relaying, and distributed MIMO for 4G systems.
HENDRIK SCHÖNEICH received his Dipl.-Ing. degree in 2001 for a diploma thesis on co-channel interference cancellation in the GSM system. He joined the Information and Coding Theory Laboratory (ICT) in 2001, where he worked as a research assistant on different research topics with a focus on interleave-division multiple access (IDMA). He received his Dr.-Ing. degree for a thesis on adaptive IDMA in mobile radio communication systems. Since 2006 he has been with Qualcomm CDMA Technologies GmbH, Nuremberg. His research interests include iterative interference cancellation, turbo equalization and decoding, semi-blind channel estimation, and related resource allocation strategies.
IMT-ADVANCED AND NEXT-GENERATION MOBILE NETWORKS
Coordinated Multipoint: Concepts, Performance, and Field Trial Results
Ralf Irmer, Vodafone
Heinz Droste, Deutsche Telekom
Patrick Marsch, Michael Grieger, and Gerhard Fettweis, Technische Universität Dresden
Stefan Brueck, Qualcomm CDMA Technologies GmbH
Hans-Peter Mayer, Alcatel-Lucent Bell Labs
Lars Thiele and Volker Jungnickel, Fraunhofer Heinrich-Hertz-Institut
ABSTRACT
Coordinated multipoint (CoMP), or cooperative MIMO, is one of the promising concepts to improve cell edge user data rate and spectral efficiency beyond what is possible with MIMO-OFDM in the first versions of LTE or WiMAX. Interference can be exploited or mitigated by cooperation between sectors or different sites. Significant gains can be shown for both the uplink and downlink. A range of technical challenges were identified and partially addressed, such as backhaul traffic, synchronization, and feedback design. This article also shows the principal feasibility of CoMP in two field testbeds with multiple sites and different backhaul solutions between the sites. These activities have been carried out by a powerful consortium consisting of universities, chip manufacturers, equipment vendors, and network operators.
INTRODUCTION
1 A full list of results from all partners is available at http://www.easy-c.com.
2 See ict-artist4g.eu.
High spectral efficiency (i.e., high aggregate cell data rate per unit of spectrum) is especially important for data networks. Mobile data traffic has recently surged due to the availability of affordable data dongles, notebooks, tablet computers with third-generation (3G) radio modules, and smartphones with web-oriented user interfaces. Vodafone, for example, has observed 70 percent growth of data traffic within one year in its European mobile networks. So far, 3G networks have been able to support this traffic growth. Eventually, however, more efficient wireless technology and novel deployment concepts such as small cells and heterogeneous networks will be needed to provide the required capacity. Ubiquitous user experience is key: the end user should have a guaranteed minimum service
quality corresponding to a minimum data rate. Denser network deployments address the low link budget at the cell edge. However, this goes along with larger areas where the transmission is limited by interference. Long Term Evolution (LTE) and mobile WiMAX use multiple-input multiple-output (MIMO) orthogonal frequency-division multiplexing (OFDM) and achieve improved spectral efficiency within one cell. However, inter-cell interference still prevents these technologies from coming close to the theoretical rates for multi-cell networks. There are two fundamental ways to deal with inter-cell interference: coordination of base stations to avoid interference, and constructive exploitation of interference through coherent base station cooperation. Conceptually, we extend single-cell MIMO techniques, such as multi-user MIMO (MU-MIMO), to multiple cells. This article shows results from the EASY-C1 project, which focused on coordinated multipoint (CoMP) from 2007 to 2010 and set up two multisite testbeds for LTE-based CoMP in Dresden and Berlin. ARTIST4G2 and other forthcoming projects will continue to use these platforms. CoMP is a main element on the LTE roadmap beyond Release 9. In LTE Release 11, some simpler CoMP concepts may appear, but it is generally expected that advanced CoMP concepts will take longer to be mature enough for commercial use. The main scope of this article is to outline the basic CoMP concepts and highlight the potentials and technical challenges when introducing them in future mobile networks. Moreover, we sketch practical CoMP schemes for uplink and downlink, assess their performance in large-scale network simulations, and use field trials in urban areas to demonstrate the maturity of CoMP.
COORDINATION AND COOPERATION IN MOBILE NETWORKS
One key element of mobile radio networks is spatial reuse (i.e., the reuse of resource elements such as timeslots or frequency bands) at a geographical distance where the signal strength is reduced due to path loss, shadowing, and so on. Historically, this was achieved using network planning with certain frequency reuse patterns, which, however, have the drawback of poor resource utilization. 3G and 4G technologies use full frequency reuse, which in turn leads to interference between cells. In [1, 2] network coordination has been presented as an approach to mitigate inter-cell interference and hence improve spectral efficiency. Figure 1 shows the cooperation architecture for CoMP. The same spectrum resources are used in all sectors, leading to interference for terminals (user equipment [UE] in Third Generation Partnership Project [3GPP] terminology) at the edge between cells, where signals from multiple base stations are received with similar signal power in the downlink. Multiple sectors of one base station (eNB in 3GPP LTE terminology) can cooperate in intra-site CoMP, whereas intersite CoMP involves multiple eNBs. The sectors at one site can be separate self-sustained units, or different remote radio heads linked via fiber to a central baseband unit. The eNBs may be interconnected by the logical X2 interface. Physically, this could be a direct fast fiber link, or a multi-hop connection involving different backhaul technologies. The cooperation techniques aim to avoid or exploit interference in order to improve the cell-edge and average data rates. CoMP can be applied in both the uplink and downlink. All schemes come at the cost of increased demands on backhaul (high capacity and low latency), higher complexity, increased synchronization requirements, more channel estimation effort, more overhead, and so on. The aim of this article is to highlight the potential of CoMP and the technical challenges to be addressed for introducing it in next-generation mobile networks.
EVALUATION BY SIMULATION AND FIELD TRIALS
Different approaches to CoMP can be analyzed using system-level simulations with hexagonal cells and the evaluation methodologies customary in the 3GPP, Next Generation Mobile Networks (NGMN), and International Telecommunication Union (ITU). Unless otherwise specified, the intersite distance in all computer simulations has been set to 500 m, a terminal speed of 3 km/h is assumed, and the system bandwidth is 10 MHz. The results of such simulations will be presented in this article. However, it is not enough to evaluate the feasibility of an approach solely based on simulations. Field trials are essential to find the critical technical issues, and they encourage an end-to-end view. The EASY-C
project has set up two outdoor testbeds with slightly different underlying technology and focus, as shown in Table 1; see also [3–5].
Figure 1. Base station cooperation: intersite and intra-site CoMP.
UPLINK COORDINATED MULTIPOINT
OVERVIEW
Theoretical work has shown that uplink (UL) CoMP offers the potential to increase throughput significantly [1, 2], in particular at the cell edge, which leads to enhanced fairness overall. Even when modeling practical aspects such as a reasonably constrained backhaul infrastructure and imperfect channel knowledge, UL CoMP promises average cell throughput gains on the order of 80 percent, and roughly a threefold cell edge throughput improvement [6]. The channel information is available in the network without resource-consuming feedback transmissions in the uplink. Also, the terminals need no modifications in order to support UL CoMP. Therefore, base station cooperation may be easier to implement than in the downlink (DL); only the interface between base station sites (X2) needs to be defined. In the case of joint detection in the UL, higher X2 capacity is needed than for joint transmission in the DL. Although UL capacity is not the bottleneck in today's networks, guaranteeing a minimum data rate, especially for cell edge users, improves user experience, and UL CoMP may be used to carry control traffic necessary to implement DL CoMP. In general, the UL CoMP schemes can be classified as follows.
Interference-aware detection: Here, no cooperation between base stations is necessary; instead, base stations also estimate the links to interfering terminals and take spatially colored interference into account when calculating receive filters (interference rejection combining).
Joint multicell scheduling, interference
Joint multicell scheduling, interference prediction, or multicell link adaptation: These schemes require the exchange of channel information and/or scheduling decisions over the X2 interface between base stations [7].
Joint multicell signal processing: Here, degrees of freedom exist in whether decoding of terminals takes place in a decentralized or centralized way, and in the extent to which received signals are preprocessed before information is exchanged among base stations. In general, there is a trade-off between using the backhaul efficiently through a maximum extent of preprocessing (e.g., as in distributed interference subtraction, DIS, where decoded data are exchanged) at the price of a smaller COMP gain, and using a large backhaul capacity (as in a distributed antenna system, DAS, where quantized receive signals are exchanged) to obtain better performance.

Table 1. COMP testbeds developed within the EASY-C project.
Environment: dense urban.
Trial setup: Dresden: 10 sites with up to a total of 28 sectors; Berlin: 4 sites with up to 10 sectors.
Frequency: 2.68 GHz DL, 2.53 GHz UL.
Baseline technology: Dresden: OFDMA in DL and UL, scalable bandwidth 5–20 MHz, transmissions limited to a maximum of 40 resource blocks (PRBs) in UL and 10 PRBs in DL; Berlin: DL 2 × 2 MIMO-OFDMA, UL 1 × 2 SC-FDMA, scalable bandwidth 1.5–20 MHz, full bandwidth usable in both uplink and downlink.
Processing: Dresden: real-time DL transmission, offline processing for uplink COMP, scheduling investigated in quasi-real time; Berlin: real-time PHY, adaptive MIMO multiple access and network layer, PHY extended for DL COMP.
Backhaul and interconnects: Dresden: 5.4/5.8 GHz microwave with a net data rate of 100 Mb/s and 1 ms delay; Berlin: 1 Gb/s Ethernet over optical fiber and free-space optical links.
Testbed scope: Dresden: UL and DL MU-MIMO COMP, relaying, practical issues; Berlin: DL MU-MIMO, COMP, relaying, real-time demos such as high-definition mobile video conferencing.
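To make the first category more concrete, the short Python sketch below (our own illustration, not taken from the EASY-C work; the two-antenna setup, the single dominant interferer, and all numerical values are assumptions) builds an interference rejection combining filter from the estimated interferer channel and compares its output SINR with plain maximum ratio combining.

import numpy as np

rng = np.random.default_rng(0)

N_RX = 2                          # receive antennas at the eNB (assumed)
h = rng.standard_normal(N_RX) + 1j * rng.standard_normal(N_RX)   # desired terminal's channel
g = rng.standard_normal(N_RX) + 1j * rng.standard_normal(N_RX)   # interfering terminal's channel
noise_var = 0.1

# Spatial covariance of interference plus noise (assumed known from channel estimation)
R = np.outer(g, g.conj()) + noise_var * np.eye(N_RX)

w_irc = np.linalg.solve(R, h)     # IRC filter: whitens the colored interference before combining
w_mrc = h                         # MRC filter: ignores the spatial color of the interference

def sinr_db(w):
    sig = np.abs(w.conj() @ h) ** 2
    intf = np.abs(w.conj() @ g) ** 2 + noise_var * np.linalg.norm(w) ** 2
    return 10 * np.log10(sig / intf)

print(f"MRC SINR: {sinr_db(w_mrc):.1f} dB, IRC SINR: {sinr_db(w_irc):.1f} dB")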
SELECTED SIMULATION RESULTS
In the following, selected UL cooperation schemes are introduced. In the performance evaluation, a distinction is made between the gains of intrasite and intersite cooperation, where intersite cooperation requires X2 backhaul capacity.
Uplink Interference Prediction: The basic idea of UL interference prediction [7] is to perform link adaptation based on predicted signal-to-interference-plus-noise ratio (SINR) values that are likely to occur during the associated data transmissions. Prediction is enabled by the exchange of resource allocation information within a cluster of cooperating cells. In addition, the UL receivers provide channel state information not only for their associated terminals, but also for the strongest terminals of neighboring cells. Interference prediction allows more appropriate link adaptation, and hence the performance can be improved. The exchange of resource allocation information between two cells causes only moderate backhaul traffic, in the range of 8 Mb/s.
Whereas the performance gains with intrasite cooperation prove to be rather low, we observe up to 25 percent gain in spectral efficiency and 29 percent gain in cell edge throughput relative to the baseline when intersite cooperation including up to six interfering cells is simulated. The prediction accuracy degrades if the channel state information becomes outdated; therefore, the X2 latency should not exceed 1 ms, even at low terminal speed.
Uplink Joint Detection: Uplink joint detection means that signals received at different sectors are jointly processed [8]. Hence, virtual MIMO antenna arrays may spread over different users as well as different base station sectors on the network side. Most of the information exchange between cooperating cells is caused by sharing the quantized baseband samples received in each cell; channel state information and resource allocation tables are shared in the cooperation cluster as well. First estimates reveal that, even for less than half the cooperation cluster size described above for interference prediction, the cell-to-cell X2 traffic would exceed 300 Mb/s for 10 MHz system bandwidth. This high amount of backhaul traffic motivates the investigation of intrasite joint detection. In the case of intersite joint detection including up to three sectors per terminal, the gains in spectral efficiency and cell edge throughput amount to 35 and 52 percent, respectively (scheme 2 in Fig. 2). Restricting cooperation to intrasite joint detection, the improvements drop only moderately, to 25 percent on average and 24 percent at the cell edge (scheme 3 in Fig. 2).
Combining high throughput and low latency, as required by joint detection, will cause a cost burden for the backhaul, specifically the X2 interface. Therefore, a combination of intrasite joint detection (no X2 needed) and intersite interference prediction (low throughput demand) has been considered. This combination even outperforms the throughput-demanding intersite joint detection, as shown in Fig. 2. However, the burden of low-latency X2 remains.
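The roughly 300 Mb/s figure quoted above can be reproduced with a back-of-the-envelope calculation such as the one below (our own illustration; the number of forwarded receive antennas and the quantizer resolutions are assumptions, and pilot, control, and protocol overheads are ignored).

# Rough X2 traffic estimate for exchanging quantized IQ samples (illustrative assumptions)
SUBCARRIERS = 600        # occupied subcarriers for a 10 MHz LTE carrier (50 PRBs x 12)
SYMBOLS_PER_MS = 14      # OFDM symbols per 1 ms subframe (normal cyclic prefix)
RX_ANTENNAS = 2          # assumed number of receive antennas whose samples are forwarded
IQ_COMPONENTS = 2        # I and Q

for bits_per_component in (6, 10, 12):
    rate_bps = (SUBCARRIERS * SYMBOLS_PER_MS * 1000
                * RX_ANTENNAS * IQ_COMPONENTS * bits_per_component)
    print(f"{bits_per_component:2d} bit/component -> {rate_bps / 1e6:6.0f} Mb/s of X2 traffic")

With these assumptions, 10-bit quantization already yields roughly 340 Mb/s per cell-to-cell link, which is the order of magnitude quoted above.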
Figure 2. Performance of selected uplink COMP schemes (spectral efficiency gain and cell-to-cell X2 throughput requirement vs. cell edge throughput gain): 1) inter-site interference prediction; 2) inter-site joint detection; 3) intra-site joint detection; 4) combining inter-site interference prediction with intra-site joint detection.
SELECTED FIELD TRIAL RESULTS
Joint decentralized and centralized detection of terminals was evaluated in the Dresden testbed [9]. Two terminals, each with one transmit antenna, continuously transmitted sequences of modulation and coding schemes, which were received by two base stations with one receive antenna each (KATHREIN 80010541). The scenario resembled a symmetric cell edge scenario, but the terminals were moved such that the interference conditions changed continuously. The receive signals were recorded so that different cooperation schemes could be applied and evaluated offline.
The result plot in Fig. 3 shows the average rates that could be achieved with the different cooperation strategies vs. the backhaul required; square and round markers are used to distinguish the two UE. In an LTE Release 8 system, where each UE is decoded only by its serving base station, an average rate of about 1.5 b/channel use is possible for UE 1 (square marker). This improves to about 2.2 b/channel use simply by enabling a flexible (i.e., transmission time interval [TTI]-wise) assignment of UE to eNBs, with the option of local decoding with successive interference cancellation (SIC). A further rate improvement for UE 1 is possible if DIS is enabled, where one UE is decoded first at one eNB and the decoded data are then forwarded to the other eNB for interference subtraction, requiring only a small amount of backhaul; this scheme turns out to reduce the outage probability significantly. The remaining points show the performance of a DAS, where the eNBs exchange quantized received signals with either 6 or 12 bits per antenna for both the I and Q signal dimensions. Compared to LTE Release 8, full DAS-based cooperation can improve the average throughput in this scenario by about 70 percent, but the required backhaul is more than two orders of magnitude larger than for decentralized concepts such as DIS.
Figure 3. Achieved rates vs. required backhaul for different uplink cooperation schemes (LTE Rel. 8, flexible assignment, DIS, DAS with 6- and 12-bit quantization), as measured in field trials.
Further measurements have shown that DIS schemes become even more valuable in asymmetric scenarios, so an adaptive use of centralized and decentralized cooperation schemes, depending on the interference situation, appears promising. The presented results provide evidence of the potential benefits of using COMP in specific scenarios.
Figure 4 shows the COMP gains in a large-scale setup in the EASY-C testbed in downtown Dresden with 12 eNBs at five sites. The spacing between the sites is 350–600 m, with antenna heights of 15–35 m. Two UE units are
carried on a measurement bus along a 7.5 km route, as depicted in Fig. 4, which passes through different kinds of surroundings, including an underpass, apartment buildings, a train station, and open spaces such as parking areas. Conventional non-cooperative decoding is compared with cooperative joint decoding. Through cooperation, average spectral efficiency gains of about 20 percent were achieved; in certain areas, however, gains above 100 percent were observed. Furthermore, the variance of the achievable rates at different UE positions was reduced, corresponding to a fairer rate distribution throughout the measurement area.
Figure 4. Uplink COMP gains in the EASY-C testbed in downtown Dresden.
CHALLENGES
From the experience of implementing and testing UL COMP, the following key challenges have become apparent.
Clustering: Suitable clusters of cooperating base stations have to be found, which can be done statically or dynamically, as discussed below.
Synchronization: Cooperating base stations have to be synchronized in frequency such that intercarrier interference is avoided, and in time in order to avoid both intersymbol and intercarrier interference [10]. The maximum distance between cooperating base stations is limited, since the different propagation delays of different terminals may conflict with the guard interval; this can be compensated for by more complex equalization.
Channel estimation: A large number of eNBs in the UL COMP cluster requires a larger number of orthogonal UL pilot sequences. At some cluster sizes, the COMP gains are outweighed by the capacity losses due to the additional pilot effort.
Complexity: The above-mentioned field trials were performed using orthogonal frequency-division multiple access (OFDMA) in the UL, as this enables subcarrier- and symbol-wise MIMO equalization and detection in the frequency domain. If single-carrier (SC)-FDMA were used, as in LTE Release 8, equalization would be more complex.
Backhaul: Backhaul can be a severe issue if centralized decoding is applied; hence, adaptive decentralized/centralized cooperation appears to be an interesting option. Furthermore, source coding schemes are of interest for backhaul compression.
DOWNLINK COORDINATED MULTIPOINT
OVERVIEW
Base station cooperation in the DL can also improve average throughput and, more importantly, cell edge throughput [2]. 3GPP distinguishes between the following categories of DL COMP [11].
Coordinated scheduling/beamforming: User data is available in only one sector, the so-called serving cell, but user scheduling and beamforming decisions are made in coordination among the sectors.
Joint processing COMP: User data to be transmitted to one terminal is available in multiple sectors of the network. A subclass of joint processing is joint transmission, where the data channel to one terminal is transmitted simultaneously from multiple sectors.
Both coordinated scheduling/beamforming and joint transmission have been investigated within the EASY-C project.
SELECTED SIMULATION RESULTS
Coordinated beam selection [12] and co-scheduling are among the investigated COMP schemes. Co-scheduling draws its gains from interference avoidance and is less complex than DL joint transmission. One approach that includes beamforming per cell is presented here; synchronization of the cells is needed, but there is no strict requirement on phase stability as known from coherent techniques.
Multicell coordinated beamforming has been assessed in system-level simulations taking into account the latency of the inter-eNB communication. The method is based on an extended precoding matrix index (PMI): the terminals measure and report the PMIs for their own cells (best companion), and additionally the PMIs of the neighboring cells causing the strongest interference (worst companion), plus the channel quality information for the case that these worst interferers are not used. The multicell scheduler is based on a distributed approach, with overlapping clusters of seven neighboring cells each, and the scheduling is coordinated within the clusters. The following results are given for four closely spaced antennas
at the base station and two UE antennas at 20 MHz system bandwidth. The simulations show significant gains for coordinated DL scheduling, in particular for mobiles at the cell edge. Additionally, the gains were evaluated for different radio channels and different latencies of the communication between the sites. As can be observed in Fig. 5, a latency of 1 ms per hop has only a moderate impact on the gains. Even with highly time-variant channels such as the urban macrocell (UMa) channel at 30 km/h, co-scheduling still provides an appreciable improvement. Assuming 6 ms latency per hop, the gains are still preserved for UE velocities up to 3 km/h. The aggregated additional traffic on the backhaul sums up to approximately 5 Mb/s for 20 MHz of spectrum, so this technique is also economically attractive.
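The following toy sketch (our own illustration, not the system-level simulator used above) captures the basic best-companion/worst-companion mechanism for two cells: each terminal reports its preferred beam index for the serving cell and the neighboring-cell beam that would interfere most, and the coordinated scheduler steers the neighbor away from that beam. The grid-of-beams codebook and all channel values are assumptions.

import numpy as np

rng = np.random.default_rng(1)
N_TX = 4

# Grid of beams: columns of a scaled DFT matrix as candidate precoding vectors (illustrative)
beams = np.fft.fft(np.eye(N_TX)) / np.sqrt(N_TX)

# H[cell, ue] is the 1 x N_TX channel from "cell" to the UE served by cell "ue"
H = rng.standard_normal((2, 2, N_TX)) + 1j * rng.standard_normal((2, 2, N_TX))
noise = 0.01

def gains(cell, ue):
    return np.abs(H[cell, ue] @ beams) ** 2          # gain of each beam toward that UE

reports = []
for ue in (0, 1):
    best = int(np.argmax(gains(ue, ue)))             # preferred PMI in the serving cell
    worst = int(np.argmax(gains(1 - ue, ue)))        # most harmful beam of the neighbor cell
    reports.append((best, worst))

def sinr(pmi0, pmi1):
    pmis = (pmi0, pmi1)
    out = []
    for ue in (0, 1):
        sig = gains(ue, ue)[pmis[ue]]
        intf = gains(1 - ue, ue)[pmis[1 - ue]]
        out.append(10 * np.log10(sig / (intf + noise)))
    return out

# Uncoordinated: each cell simply uses its own UE's preferred beam
uncoord = sinr(reports[0][0], reports[1][0])

# Coordinated: a cell avoids the beam its neighbor's UE flagged as the worst companion
def coordinated_pmi(ue):
    ranked = np.argsort(gains(ue, ue))[::-1]         # own beams, best first
    forbidden = reports[1 - ue][1]                   # worst companion reported by the other UE
    return int(next(b for b in ranked if b != forbidden))

coord = sinr(coordinated_pmi(0), coordinated_pmi(1))
print("SINR per UE without coordination [dB]:", np.round(uncoord, 1))
print("SINR per UE with coordination    [dB]:", np.round(coord, 1))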
SELECTED FIELD TRIAL RESULTS
Since joint transmission is regarded as the most challenging COMP technique from an implementation perspective, it has been implemented in both testbeds to investigate the feasibility of coherent transmission for intra- and intersite COMP [13, 14]. Significant throughput gains have been demonstrated for specific interference scenarios. The same techniques have also been assessed in wide-area system-level simulations to study more complex scenarios. The following enabling features were essential for the trials:
• Sufficient timing and frequency synchronization accuracy: in the trials GPS was used, although network-based approaches such as IEEE 1588v2 could also provide sufficient accuracy
• Low-phase-noise radio frequency (RF) oscillators
• Cell-specific reference signals
• Time-stamped CSI feedback
• Synchronous exchange of data and channel state information (CSI) between eNBs over the X2 interface
• Distributed precoding and the provision of precoded pilots
An example of the DL COMP experiments conducted in the Berlin testbed is shown in Fig. 6 (top left). A distributed implementation of joint transmission has been demonstrated with synchronized base stations and cell-specific pilots. Terminals estimate the multicell channel and feed the CSI back to their serving cells. The base stations exchange CSI as well as data and independently perform precoding with the goal of maximizing the desired signals while minimizing the mutual interference. Quantization and compression of the CSI are important topics, but outside the scope of this trial setup. CSI is fed back from the terminals using UL resources at a data rate of 4.6 Mb/s; the feedback interval and precoding delay are 10 and 20 ms, respectively. The X2 interface between base stations is realized using a 1 Gb/s Ethernet connection over copper, fiber, or free-space optics, depending on the setup. The bidirectional load is 300 Mb/s, realized with 0.5 ms latency.
Measurements were taken in the laboratory [5] and over the air in both indoor and outdoor environments (Fig. 6, bottom left).
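As a rough illustration of the joint transmission principle described above, the sketch below implements a centralized zero-forcing variant for two single-antenna base stations and two terminals (our own simplification; the trial itself used distributed precoding with quantized, delayed CSI feedback and many practical refinements not modeled here).

import numpy as np

rng = np.random.default_rng(2)

# H[k, b]: flat-fading channel from base station b to terminal k (single antennas)
H = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

# Zero-forcing joint transmission: invert the compound two-cell channel, then normalize
# each precoding column so that every stream uses unit transmit power (illustrative choice)
W = np.linalg.inv(H)
W = W / np.linalg.norm(W, axis=0)

x = np.array([1 + 1j, -1 + 1j]) / np.sqrt(2)    # one QPSK symbol per terminal
noise = 0.01 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))

y = H @ W @ x + noise                            # what the two terminals receive

print("effective channel H @ W (off-diagonal terms are the residual interference):")
print(np.round(H @ W, 3))
print("received symbols:", np.round(y, 3))

Because H @ W is diagonal, each terminal sees only its own stream plus noise, which is the mutual interference cancellation effect observed in the trials.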
Figure 5. Downlink co-scheduling: spectral efficiency vs. cell edge throughput for ITU UMa and SCME radio channels and different backhaul latencies.
It was observed that the interference situation experienced by a terminal is indeed critical at the cell edge if both base station signals are received with similar average strength at full frequency reuse. The signal and interference links fade independently: sometimes the signal is stronger than the interference, while after a very short distance the opposite can be true. This is the origin of the high outage probabilities observed at the cell edge in the interference-limited case (Fig. 6, bottom right). Once DL COMP is switched on, significantly higher data rates can be realized in both cells simultaneously, due to the mutual interference cancellation; moreover, the outage probability is remarkably reduced. Our experiments have shown that COMP gains are significant for simple interference scenarios, and that the implementation challenges can be overcome.
In reality, non-cooperating cells would surround the cluster of cooperative cells, leading to a residual interference floor not present in our trials. The presence of such external interference has been studied in wide-area system-level simulations using essentially the same COMP concept tested in the field. In these simulations, two users are randomly placed in each cell and each user receives only one stream; in all cells, only those user sets requesting the same cooperation cluster are investigated. In Fig. 6 (top right) we observe that there is no gain from using explicit CSI feedback in the serving cell, exploited for multi-user DL beamforming: performance is equivalent to a fixed grid of beams as in LTE Release 8 if the terminal additionally estimates the surrounding interferers coherently, applies interference rejection combining, and provides implicit frequency-selective feedback on interference-aware PMI and CQI, and the base station applies score-based scheduling [4]. Explicit CSI feedback is useful for COMP: with increasing cluster size, the interference floor is reduced and the performance is enhanced accordingly, at the cost of additional overhead and backhaul. For more details, see [5].
Figure 6. Top left: distributed implementation of joint transmission COMP. Top right: performance and backhaul traffic vs. cluster size obtained from system-level simulations. Bottom left: intra- and inter-site test scenarios in Berlin [5]. Bottom right: measured throughput with full frequency reuse in a two-cell scenario without external interference, relative to the case of isolated cells.
CHALLENGES
Our results indicate that the complexity of DL COMP can be managed in real-world scenarios and that significant gains can be realized by forming small cooperation clusters in large-scale networks. However, solutions to the following issues are needed before DL COMP can be integrated into next-generation mobile networks:
• Reduced cost of base station synchronization and low-phase-noise transmitters
• Efficient feedback compression
• Reduced feedback delay
• Efficient channel prediction at the precoder
• Flexible formation of cooperation clusters
• Handling of interference from outside the cluster
• Efficient multi-user selection
• Flexible networking behind COMP
• Integration of COMP into higher layers
CLUSTERING OF CELLS
As demonstrated in the previous sections, COMP can enhance spectral efficiency and cell edge throughput significantly. However, COMP requires additional signaling overhead on the air interface and, in the case of intersite cooperation, over the backhaul.
Therefore, in practice only a limited number of base stations can cooperate in order to keep the overhead manageable. The cooperating cell clusters should be set up adaptively based on RF channel measurements and UE positions in order to exploit the advantages of COMP efficiently at limited complexity. A key requirement for any adaptive clustering algorithm is that it fit into the architecture of the radio access and/or core network of LTE. The 3GPP standard already offers a framework for self-organizing networks (SONs) to support automatic configuration and optimization of the network. Within EASY-C, an adaptive mobile-station-aware clustering concept has been designed that can be integrated with minor changes to the existing network architecture and the SON concept of LTE.
In order to evaluate the performance of the adaptive clustering concept, system-level simulations were run using the hexagonal network layout shown in Fig. 7a. The scenario was configured with 19 three-sector sites at 500 m intersite distance. The 3GPP UMa spatial channel model (SCM) at 2 GHz was used, and the shadow fading standard deviation was set to 2 dB. One hundred UE units were placed at random locations within each of four hotspot areas.
Figure 7. Cell layout with UE positions and selected clusters: a) no clustering; b) adaptive clustering.
Figure 7b shows the result of the designed clustering algorithm, which was configured to obtain the optimal solution for a disjoint set of clusters with up to three sectors; the colors represent the different clusters. The clustering algorithm took into account only long-term average received power measurements from the UE, and only if they were higher than –120 dBm. It is apparent from the figure that this concept managed to form clusters around the UE hotspots and avoided clusters in regions where they were not needed. The mean geometry gain due to adaptive clustering was about 6 dB for this scenario compared to LTE Release 8.
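A greedy sketch in the spirit of such UE-aware clustering is shown below (our own simplification; the actual EASY-C algorithm, its optimization criterion, and its SON integration are not specified here). Sectors that jointly appear among the strongest, above-threshold sectors of many UE are grouped into disjoint clusters of at most three sectors; sectors no UE needs remain unclustered.

import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
N_SECTORS, N_UE = 9, 40
THRESH_DBM = -120.0

# Long-term average received power per UE and sector (synthetic values for illustration)
P = rng.uniform(-130, -70, size=(N_UE, N_SECTORS))

def relevant(u):
    """Sectors a UE would like to see cooperating: its strongest sectors above the threshold."""
    order = np.argsort(P[u])[::-1][:3]
    return frozenset(int(s) for s in order if P[u, s] > THRESH_DBM)

demand = [relevant(u) for u in range(N_UE)]

# Score every candidate cluster of up to 3 sectors by the number of UE it would fully serve
candidates = []
for size in (3, 2):
    for cluster in combinations(range(N_SECTORS), size):
        score = sum(1 for d in demand if d and d <= set(cluster))
        candidates.append((score, set(cluster)))
candidates.sort(key=lambda c: -c[0])

# Greedily pick disjoint clusters; sectors that serve no demand stay unclustered
used, clusters = set(), []
for score, cluster in candidates:
    if score > 0 and not (cluster & used):
        clusters.append(sorted(cluster))
        used |= cluster

print("selected disjoint clusters:", clusters)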
BACKHAUL FOR COMP
ARCHITECTURE AND TECHNOLOGIES
COMP approaches need to exchange information directly between cells, with different requirements on the necessary backhaul throughput and latency. Intrasite COMP can be realized without any impact on the backhaul. In deployments with remote radio units connected to a centralized baseband processing unit via Ethernet or fiber links, COMP backhaul requirements should likewise pose no obstacle.
For connectivity between sites, the logical X2 interface can be used. This can be either a direct physical link or a multihop link, depending on the network's backhaul architecture. The delay depends on the network topology, the processing delay of the network nodes, and the line delay (essentially the propagation time at the speed of light in the medium). Fiber links offer Gigabit Ethernet speeds of up to 10 Gb/s and delays of 0.1–20 μs, with additional delays due to switching equipment. Other suitable candidates are conventional and millimeter-wave microwave, with speeds of up to 800 Mb/s or 10 Gb/s, respectively, and delays as low as 150 μs per hop.
LATENCY REQUIREMENTS
COMP has to be integrated with the hybrid automatic repeat request (HARQ) process, so the backhaul latency puts limits on the schemes; without modifications to the LTE standard, a maximum latency of about 1 ms is implied. Another impact of backhaul latency is that the exchanged channel information becomes outdated. For example, only a minor performance degradation was estimated for coordinated scheduling with an X2 latency of 6 ms, while in [15] a DL COMP capacity gain reduction of 20 percent is estimated for joint transmission with 5 ms backhaul latency at 3 km/h.
CAPACITY REQUIREMENTS
COMP schemes require the exchange of channel state information, control data, user data, and received signals in a preprocessed or quantized format. As shown earlier and in [16, 17], the backhaul requirements vary strongly, from a few megabits per second up to 4 Gb/s for different COMP approaches on a 10 MHz LTE X2 link, and they also depend on the cluster size. Earlier we showed an example of how the backhaul can be reduced significantly without major performance losses. To conclude, state-of-the-art backhaul technology can support COMP in principle; however, the cost of additional backhaul has to be balanced against the access capacity gains in a network deployment.
CONCLUSIONS AND OUTLOOK
This article has shown that coordination of cells in wide-area systems is not only beneficial for average spectral efficiency and cell edge data
rates, but can also be implemented. COMP was demonstrated for the uplink and downlink in two testbeds in urban areas. COMP schemes for the UL range from joint multicell scheduling to more complex joint detection, and can be centralized or decentralized. In the DL, the schemes range from less complex coordinated scheduling to more challenging joint processing approaches.
From the technical as well as economic point of view, intrasite cooperation will be much easier to realize. However, intersite cooperation will be needed in order to exploit the full interference reduction potential of base station cooperation. The combination of joint processing at one site with joint scheduling between the sites is of great interest, as it provides promising gains with limited backhaul.
The following challenges need to be addressed in order to benefit from the promising COMP gains:
• Backhaul with low latency and high bandwidth: today's backhaul technologies can support COMP, but more effort is needed to reduce the amount of data exchanged between the sites.
• Clustering and multisite scheduling.
• Channel estimation and efficient feedback (for DL COMP).
• Synchronization between sites: this is feasible today, but the cell area where COMP can be applied may be limited by the length of the cyclic prefix.
• Combination of UL and DL COMP and their integration into the LTE standard.
This article, and the EASY-C project, have already given some answers on COMP. Ongoing efforts to address these challenges in the research community, such as the ARTIST4G project and 3GPP standardization, are important to gain more insight into the achievable spectral efficiency gains and the complexity of the different approaches.
ACKNOWLEDGMENT
The authors acknowledge the excellent cooperation of all project partners within the EASY-C project and the support of the German Federal Ministry of Education and Research (BMBF).
REFERENCES
[1] P. Marsch, S. Khattak, and G. Fettweis, “A Framework for Determining Realistic Capacity Bounds for Distributed Antenna Systems,” Proc. IEEE Info. Theory Wksp. ’06, Chengdu, China, Oct. 22–26, 2006.
[2] K. M. Karakayli, G. J. Foschini, and R. A. Valenzuela, “Network Coordination for Spectrally Efficient Communications in Cellular Systems,” IEEE Wireless Commun., vol. 13, no. 4, Aug. 2006, pp. 56–61.
[3] P. Marsch and G. Fettweis, “On Multi-Cell Cooperative Transmission in Backhaul-Constrained Cellular Systems,” Annales des Télécommun., vol. 63, no. 5–6, May 2008.
[4] V. Jungnickel et al., “Interference Aware Scheduling in the Multiuser MIMO-OFDM Downlink,” IEEE Commun. Mag., vol. 47, no. 6, June 2009.
[5] V. Jungnickel et al., “Field Trials using Coordinated Multi-Point Transmission in the Downlink,” 3rd IEEE Int’l. Wksp. Wireless Distrib. Net., IEEE PIMRC, Sept. 2010.
[6] P. Marsch, Coordinated Multi-Point under a Constrained Backhaul and Imperfect Channel Knowledge, Ph.D. thesis.
[7] A. Müller and P. Frank, “Cooperative Interference Prediction for Enhanced Link Adaptation in the 3GPP LTE Uplink,” IEEE VTC–Spring, 2010.
[8] A. Müller and P. Frank, “Performance of the LTE Uplink with Intra-Site Joint Detection and Joint Link Adaptation,” IEEE VTC–Spring, 2010.
[9] M. Grieger et al., “Field Trial Results for a Coordinated Multi-Point (CoMP) Uplink in Cellular Systems,” Proc. ITG/IEEE Wksp. Smart Antennas ’10, Bremen, Germany, Feb. 23–24, 2010.
[10] V. Kotzsch and G. Fettweis, “Interference Analysis in Time and Frequency Asynchronous Network MIMO OFDM Systems,” IEEE WCNC ’10, Sydney, Australia, Apr. 18–21, 2010.
[11] 3GPP TR 36.814, “Further Advancements for E-UTRA Physical Layer Aspects,” Release 9, v. 9.0.0, Mar. 2010.
[12] J. Giese and M. A. Awais, “Performance Upper Bounds for Coordinated Beam Selection in LTE-Advanced,” Proc. ITG/IEEE Wksp. Smart Antennas ’10, Bremen, Germany, Feb. 23–24, 2010.
[13] G. Fettweis et al., “Field Trial Results for LTE-Advanced Concepts,” Proc. IEEE ICASSP ’10, Dallas, TX, Mar. 14–19, 2010.
[14] L. Thiele, V. Jungnickel, and T. Haustein, “Interference Management for Future Cellular OFDMA Systems Using Coordinated Multi-Point Transmission,” IEICE Trans. Commun., Special Issue on Wireless Distributed Networks, Dec. 2010.
[15] S. Brueck et al., “Centralized Scheduling for Joint-Transmission Coordinated Multi-Point in LTE-Advanced,” Proc. ITG/IEEE Wksp. Smart Antennas ’10, Bremen, Germany, Feb. 23–24, 2010.
[16] C. Hoymann, L. Falconetti, and R. Gupta, “Distributed Uplink Signal Processing of Cooperating Base Stations based on IQ Sample Exchange,” Proc. IEEE ICC ’09, Dresden, Germany, June 14–18, 2009.
[17] L. Falconetti, C. Hoymann, and R. Gupta, “Distributed Uplink Macro Diversity for Cooperating Base Stations,” Proc. IEEE ICC ’09, Dresden, Germany, June 14–18, 2009.
ADDITIONAL READING
[1] R. Irmer et al., “Multisite Field Trial for LTE and Advanced Concepts,” IEEE Commun. Mag., vol. 47, no. 2, Feb. 2009, pp. 92–98.
BIOGRAPHIES
RALF IRMER [SM] (
[email protected]) received his Dipl.-Ing. and Dr.-Ing. degrees from Technische Universität Dresden in 2000 and 2005, respectively. He joined Vodafone Group R&D in 2005, where he leads the Wireless Access Group, which is responsible for the evolution of LTE, WiFi, and other technologies, and for defining Vodafone's future network architecture. Before that, he worked for five years as a research associate at TU Dresden. He holds several patents and has published more than 30 conference and journal papers. He had a leading role in several research projects, including WIGWAM, WINNER, and EASY-C. He is a member of VDE and IET. HEINZ DROSTE (
[email protected]) received his Dipl.-Ing. degree in 1991 from the Open University, Hagen. Since then he has been working for Deutsche Telekom on a variety of mobile-communication-related R&D projects. His fields of expertise include antennas and radio wave propagation as well as system-level simulation and radio network planning; more recently he has extended his expertise to techno-economic evaluations. In the framework of EASY-C he coordinates the partner activities in Working Group 1, “Algorithm and Concepts.” PATRICK MARSCH (
[email protected]) received his Dipl.-Ing. and Dr.-Ing. degrees from Technische Universität Dresden in 2004 and 2010, respectively, after completing an apprenticeship at Siemens AG and studying at TU Dresden and McGill University, Montréal, Canada. After an internship with Philips Research East Asia in Shanghai, P.R. China, he joined the Vodafone Chair in 2005. He is the technical project coordinator of EASY-C and is currently heading a research group on the analysis and optimization of cellular systems.
MICHAEL GRIEGER received his Dipl.-Ing. from DHBW Stuttgart in 2005 and his M.Sc. from Technische Universität Dresden in March 2009. In 2008, funded by the Herbert Quandt/ALTANA Foundation, he studied at CTU, Prague. During his Master's thesis, he conducted research in Prof. John Cioffi's group at Stanford University on multicell signal processing, which continues to be his major research focus today. An aspect of his research is the comparison of information-theoretic results with those of the “real world” obtained through field trials.
GERHARD FETTWEIS [F] (
[email protected]) earned his Dipl.-Ing. (1986) and Ph.D. (1990) degrees from Aachen University of Technology (RWTH), Germany. From 1990 to 1991 he was a visiting scientist at the IBM Almaden Research Center, San Jose, California, working on signal processing for disk drives. From 1991 to 1994 he was with TCSI Inc., Berkeley, California, responsible for signal processor development. Since 1994 he has held the Vodafone Chair at TU Dresden. He is coordinating the research project EASY-C. HANS-PETER MAYER (
[email protected]) received his Ph.D. degree in physics from the University of Tübingen in 1987. He joined Alcatel-Lucent and worked on high-speed optoelectronic and WDM components until 1995. From 1996 to 1999 he was responsible for early UMTS system studies, followed by the realization of the first UMTS and HSPA trial systems. Within Bell Labs, he is currently responsible for the Advanced MAC department, with a focus on projects related to LTE-Advanced. STEFAN BRUECK (
[email protected]) studied mathematics and electrical engineering at the University of Technology Darmstadt, Germany, and Trinity College Dublin, Ireland. He received his Dipl.-Math. and Dr.-Ing. degrees in
1994 and 1999, respectively. From 1999 to 2008 he worked for Lucent Technologies and Alcatel-Lucent in Bell Labs and UMTS Systems Engineering, where he was responsible for the MAC layer design of the HSPA base station. In May 2008 he joined Qualcomm Germany, and he currently leads the Radio Systems R&D activities in the Corporate R&D Centre Nuremberg. He is involved in several research projects on LTE-Advanced.
LARS THIELE [S'05] received his Dipl.-Ing. (M.S.) degree in electrical engineering from TU Berlin in 2005. He is currently working toward his Dr.-Ing. (Ph.D.) degree at the Fraunhofer Heinrich Hertz Institute (HHI), Berlin. He has contributed to receiver and transmitter optimization under limited feedback, performance analysis of MIMO transmission in cellular OFDM systems, and fair resource allocation. He has authored and co-authored about 40 conference and journal papers in the area of mobile communications.
VOLKER JUNGNICKEL [M'99] (
[email protected]) received a Dr. rer. nat. (Ph.D.) degree in physics from Humboldt Universität zu Berlin in 1995. He worked on semiconductor quantum dots and laser medicine and joined HHI in 1997. He is a lecturer at TU Berlin and head of the cellular radio team at HHI. He has contributed to high-speed indoor wireless infrared links, 1 Gb/s MIMO-OFDM radio transmission, and initial field trials for LTE and LTE-Advanced. He has authored and co-authored more than 100 conference and journal papers on communications engineering.
IMT-ADVANCED AND NEXT-GENERATION MOBILE NETWORKS
Evolution of Uplink MIMO for LTE-Advanced Chester Sungchung Park, Y.-P. Eric Wang, George Jöngren, and David Hammarwall, Ericsson Research
ABSTRACT
The evolution of LTE uplink transmission toward MIMO has recently been agreed upon in 3GPP, including support of up to four-layer transmission using precoded spatial multiplexing as well as transmit diversity techniques. In this article, an overview of these uplink MIMO schemes is presented, along with their impact on reference signals and DL control signaling. Receivers suitable for uplink MIMO are presented, and their link performance is compared.
INTRODUCTION
The 3GPP Long Term Evolution (LTE) standard continues to evolve to add more capabilities and enable transmission and reception of even higher data rates [1]. One of the key features to be introduced in LTE-Advanced (Release 10 and beyond) is multiple-input multiple-output (MIMO) transmission for the uplink (UL). MIMO helps achieve the LTE-Advanced requirements, which include a UL peak spectrum efficiency of 15 b/s/Hz and a UL average spectrum efficiency of 2.0 b/s/Hz/cell [2]. With four transmit antennas and four receive antennas, spatial multiplexing on the data channel achieves the UL peak spectrum efficiency. Given the availability of multiple antennas, transmit diversity for the control channel provides robust signaling and improves cell coverage. In addition, multi-user MIMO (MU-MIMO) helps attain the UL average spectrum efficiency.
In this article, the evolution of UL-MIMO schemes is described. Precoded spatial multiplexing for the UL data channel, the physical uplink shared channel (PUSCH), is explained, emphasizing the cubic-metric-preserving (CM-preserving) codebook design and its precoding gain. Transmit diversity for the UL control channel, the physical uplink control channel (PUCCH), is also described, together with its orthogonal resource allocation scheme. Additionally, the UL-MIMO aspects of reference signals and downlink (DL) control signaling are discussed, in particular the impact of interlayer interference on channel estimation and a new reference signal multiplexing scheme.
To maximize the benefit of MIMO, a baseband receiver capable of untangling the multiple spatially multiplexed signals received simultaneously is desired. In the LTE UL, single-carrier frequency-division multiple access (SC-FDMA) introduces intersymbol interference (ISI) in dispersive channels; thus, a suitable MIMO receiver should address both ISI and spatial-multiplexing interference. We present a number of receiver options, including linear minimum mean square error (MMSE) equalization, successive interference cancellation (SIC), and turbo equalization receivers. Link simulation results are provided to demonstrate the performance of the UL data channel and to compare the performance of these different receivers.
SINGLE-CARRIER TRANSMISSION
UL multiple access in LTE is based on SC-FDMA, where a discrete Fourier transform (DFT) is followed by orthogonal frequency-division multiplexing (OFDM) [3, 4]. Since the DFT spreads each time domain (TD) modulation symbol over all the assigned subcarriers, it enables single-carrier transmission, as opposed to OFDM.
The main objective of single-carrier transmission in the UL is to relax the user equipment (UE) power backoff requirement [5]. Since the power amplifier of the UE is in general nonlinear, the output power tends to saturate around the peak power. Therefore, in order to avoid nonlinear distortion, the power amplifier needs to operate far below the saturation point. This inevitably limits the maximum transmit power of the UE, thereby limiting cell coverage. The amount of power backoff depends on how much the amplitude of the transmitted signal fluctuates around its average value, which is measured by the peak-to-average power ratio (PAPR) or the CM. Since the required power backoff is practically determined by the adjacent channel interference caused by nonlinear power amplification (dominated by the cubic term), the CM is the more accurate measure of the power backoff requirement. The more the amplitude of the transmitted signal fluctuates, the more power backoff the power amplifier demands and the lower the power it can radiate. Since an SC-FDMA signal resembles a conventional single-carrier signal, SC-FDMA has a lower CM than OFDM, leading to less power backoff and thus improved coverage.
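This effect is easy to verify numerically. The sketch below (a simplified illustration with arbitrary parameters; it computes PAPR rather than the cubic metric used in 3GPP) compares a plain OFDM symbol with a DFT-precoded, SC-FDMA-like symbol carrying the same QPSK data.

import numpy as np

rng = np.random.default_rng(4)
N_DATA, N_FFT, TRIALS = 300, 1024, 2000   # occupied subcarriers, IFFT size, symbols (assumed)

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

papr_ofdm, papr_scfdma = [], []
for _ in range(TRIALS):
    qpsk = (rng.choice([-1, 1], N_DATA) + 1j * rng.choice([-1, 1], N_DATA)) / np.sqrt(2)

    # Plain OFDM: map the QPSK symbols directly onto contiguous subcarriers
    grid = np.zeros(N_FFT, complex)
    grid[:N_DATA] = qpsk
    papr_ofdm.append(papr_db(np.fft.ifft(grid)))

    # SC-FDMA: DFT-spread the symbols first, then map them onto the same subcarriers
    grid[:N_DATA] = np.fft.fft(qpsk) / np.sqrt(N_DATA)
    papr_scfdma.append(papr_db(np.fft.ifft(grid)))

print(f"median PAPR  OFDM: {np.median(papr_ofdm):.1f} dB   "
      f"SC-FDMA: {np.median(papr_scfdma):.1f} dB")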
Figure 1. Precoded spatial multiplexing for the UL data channel: a) transmitter and receiver; b) illustration of the precoding operation (2 transmit antennas, 1 layer).
MIMO TECHNIQUES
UL DATA CHANNEL: PRECODED SPATIAL MULTIPLEXING
A block diagram of the transmitter and receiver for the UL data channel is presented in Fig. 1a. It consists of a turbo encoder/decoder, a scrambler/descrambler, a modulation mapper/demodulator, a layer mapper/codeword mapper, a DFT spreader/despreader, a precoded spatial multiplexer/frequency domain (FD) equalizer, a resource mapper/demapper, and an inverse fast Fourier transform (IFFT)/FFT. Analogous to the transmitter for the Release 8 DL data channel, the physical downlink shared channel (PDSCH), up to two codewords can be transmitted
in one subframe with separate control of the modulation and coding scheme (MCS) and hybrid automatic repeat request (HARQ) functionality. One or two codewords are mapped to up to four independent data streams (referred to as layers hereafter) by the same codeword-to-layer mapping as for DL-MIMO [6]. DFT spreading is applied to each layer separately, and precoded spatial multiplexing is applied in the frequency domain. Specifically, the precoder may spread each layer across multiple transmit antennas, as exemplified in Fig. 1b. The number of layers R (referred to as the rank hereafter) is equal to or smaller than the number of transmit antennas NT. Finally, the IFFT is performed on a per-antenna basis. The receiver simply reverses the above operations, as shown in Fig. 1a.
The basic principles of the precoded spatial multiplexing scheme for the UL data channel are similar to those of DL-MIMO [6]. The precoding operation is mathematically expressed as a left-multiplication of a DFT-spread layer signal vector (R × 1) by a precoding matrix (NT × R), which is chosen from a predefined set of matrices, a so-called codebook, exemplified in Fig. 2.
(Note that the rth column vector of the precoding matrix represents the antenna spreading weights of the rth layer.) The main purpose of this kind of precoding is to match the precoding matrix to the channel properties in order to increase the received signal power and also, to some extent, reduce interlayer interference, thereby improving the signal-to-interference-plus-noise ratio (SINR) of each layer. Assuming that the eNodeB is equipped with NR receive antennas, the capacity-achieving precoding, referred to as eigen-beamforming, precodes each layer with one of the min(NR, NT) eigenvectors of the channel correlation. In practice, however, the precoder selection needs to be signaled from the eNodeB to the UE; thus, the number of precoding matrices (i.e., the codebook size) is limited by the available amount of DL control signaling. The use of a finite-sized codebook results in a reduced precoding gain. Additionally, as shown in Fig. 2, the alphabet for the nonzero elements of the precoding matrices is confined to quaternary phase shift keying (QPSK) (±1, ±j). This eases the selection of the precoding matrix, since it greatly simplifies the matrix multiplication. Moreover, the constant-modulus property of the QPSK alphabet enables all the power amplifiers to transmit with equal power level; this balanced power helps maximize power amplifier efficiency, regardless of which precoding matrix is used. A drawback of this alphabet constraint is a further reduction of the precoding gain.
A major difference from the precoder codebook used for DL-MIMO is that the UL codebook is carefully designed not to increase the CM compared to single-antenna transmission. (In contrast, the Householder codebook of DL-MIMO linearly combines multiple layers [6] and thus inevitably increases the CM.) A design constraint of the CM-preserving codebook is that each row should have at most one nonzero element; in other words, each transmit antenna is not allowed to convey more than one layer, so no linear combination is allowed. This CM-preserving constraint gives rise to a further reduction of the precoding gain.
One of the objectives of the UL precoder design is to optimize the precoding gain under the CM-preserving constraint. It is well known that a codebook maximizing the minimum chordal distance makes sense from an information-theoretic capacity point of view [7]. In practice, however, UE antennas are likely to be correlated; thus, the UE antenna configuration and indexing shown in Fig. 2 also need to be considered, taking into account antenna spacing and polarization. Since each layer is assigned to a disjoint group of UE antennas, antenna grouping with lower interlayer correlation is preferred, as it leads to lower interlayer interference. This is also consistent with eigen-beamforming in that the eigenvectors of the channel correlation tend to have elements with larger amplitude in one antenna group than in the other antenna groups. For example, in Fig. 2 the antenna grouping (1,2) + (3,4) is preferred to (1,3) + (2,4), since two closely located or co-polarized antennas are grouped together in the former grouping. Under the above constraints, it easily follows
that the full-rank codebooks consist of the identity matrices alone. For non-full-rank codebooks, it is desirable to maximize the precoding gain while preserving the CM.
The design of the 2-TX codebooks is relatively simple. The rank-1 codebook consists of four constant modulus vectors and two antenna selection vectors. The inclusion of constant modulus vectors enables the signals from multiple UE antennas to be combined constructively at the receiving eNodeB by introducing a relative phase shift of 0°, 90°, 180°, or 270° between the UE antennas. In the example shown in Fig. 1b, a phase shift of 270° maximizes the signal-to-noise ratio (SNR) of the received signal due to constructive combining, whereas a phase shift of 90° minimizes it due to destructive combining. The inclusion of antenna selection vectors, on the other hand, reduces the transmit power and is intended for UE power saving: for example, if the UE antennas experience a significant gain imbalance, one of the antenna selection vectors can be used to switch off the low-gain antenna.
The design of the 4-TX codebooks is more complicated. As presented in Fig. 2, the rank-1 codebook consists of 16 constant modulus vectors for constructive combining and 8 antenna selection vectors for UE power saving. For the constant modulus vectors, four relative phase shifts are considered between the first two antennas; for each of these, only two relative phase shifts are considered between the last two antennas. Additional relative phase shifts are applied between the first and third antennas, and between the second and fourth antennas, accounting for imperfect antenna polarization. The rank-2 codebook covers every possible antenna grouping: a total of three antenna groupings for two layers, with a larger portion involving the antenna grouping (1,2) + (3,4) (i.e., the grouping with lower intergroup correlation) than the other two groupings (1,3) + (2,4) and (1,4) + (2,3). For the rank-3 codebook, the first layer is assigned two TX antennas, whereas layers 2 and 3 are each assigned one antenna. Every antenna grouping for the first layer (six in total) is covered, along with an intragroup phase shift of 0° or 180°. It should be noted that no intergroup phase shift is considered in the rank-2 and rank-3 codebooks, since it does not contribute to layer separation.
Our simulation results show that, compared to eigen-beamforming, the codebook for the UL data channel experiences up to 2 dB loss for the 4 × 4 spatial channel model (SCM) of [8]. On the other hand, it tends to perform as well as the Householder codebook used in DL-MIMO unless the UE antenna correlation is extremely high. Note that the Householder codebook has a similar codebook size, while its alphabet is 8-PSK and the CM is not preserved. It can therefore be concluded that the loss of precoding gain (compared to eigen-beamforming) largely comes from the codebook size constraint rather than from the alphabet restriction and the CM-preserving constraint.
The precoder is defined by the rank and precoding matrix, which are selected by the eNodeB based on uplink channel measurements and conveyed to the UE through the rank indicator (RI) and precoding matrix indicator (PMI) fields in an uplink grant via the DL control channel, the physical downlink control channel (PDCCH).
Figure 2. CM-preserving codebook and antenna configuration (4 transmit antennas): rank-1 to rank-4 codebooks, shown for a uniform linear array (ULA), two pairs of ULA elements, and two pairs of cross-polarized antennas.
The rank and precoding matrix may be selected, together with the MCS, to maximize throughput. In general, the higher the SINR the eNodeB measures on the link, the higher the rank it selects. Link quality measurements may be carried out using so-called sounding reference signals (SRSs), which are explained in more detail in the next section.
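A minimal sketch of this selection step is given below (our own illustration; the actual eNodeB metric and SRS processing are not specified in the article, and the 2-TX codebook normalization shown here is only an assumption). Each codebook entry is evaluated against the measured channel, and the rank/matrix pair maximizing a simple capacity proxy is kept.

import numpy as np

rng = np.random.default_rng(5)
snr = 10.0                                        # assumed post-SRS link SNR (linear)

# 2-TX UL codebook as described above: six rank-1 vectors and the rank-2 identity
rank1 = [np.array([[1], [p]]) for p in (1, 1j, -1, -1j)]
rank1 += [np.array([[1], [0]]), np.array([[0], [1]])]
codebook = [v / np.sqrt(2) for v in rank1] + [np.eye(2) / np.sqrt(2)]

H = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))   # 2 RX x 2 TX channel

def rate(W):
    """Capacity proxy for a given precoder with equal power over its layers."""
    R = W.shape[1]
    Heff = H @ W
    M = np.eye(2) + (snr / R) * (Heff @ Heff.conj().T)
    return float(np.log2(np.linalg.det(M).real))

best = max(range(len(codebook)), key=lambda i: rate(codebook[i]))
print(f"selected precoder index {best}, rank {codebook[best].shape[1]}, "
      f"rate proxy {rate(codebook[best]):.2f} b/s/Hz")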
UL CONTROL CHANNEL: TRANSMIT DIVERSITY
Multiple transmit antennas are used to provide diversity gain for the UL control channel. A 2-TX transmit diversity (TxD) scheme for hybrid automatic repeat request (HARQ) and scheduling request signaling (format 1/1a/1b) transmits the same control information from different UE antennas using different orthogonal resources (thus providing no multiplexing gain). This is referred to as space orthogonal-resource transmit diversity (SORTD) and is illustrated in Fig. 3.
For the Release 8 UL control channel, orthogonal resources are available for the control information through either the cyclic shifts of length-12 phase-rotated sequences or orthogonal code covering with length-4 orthogonal sequences [6]. The demodulation reference signal (DM-RS) for the UL control channel also uses orthogonal resources, through the cyclic shifts of length-12 phase-rotated sequences or orthogonal code covering with length-3 orthogonal sequences. (The details of the DM-RS are given later.) Thus, in principle, a total of 36 orthogonal resources are available for the UL control channel in each subframe. Assuming perfect orthogonality, the signals from multiple transmit antennas are received separately and combined constructively; Fig. 3 shows that a 2-TX TxD scheme together with maximum ratio combining provides twofold transmit diversity.
In the case of four transmit antennas, the UE, from an eNodeB point of view, transmits using two orthogonal
resources (cyclic shift and orthogonal code covering), and it is up to the UE implementation if and how the transmission is spread over the four antennas. One possibility is to let the third and fourth antennas use the same orthogonal resources allocated to the first and second antennas, respectively, and transmit the same information. Another possibility is to transmit on only two of the four UE antennas. The same TxD scheme is also applied to the channel quality information report (format 2/2a/2b).
Figure 3. Transmit diversity for the UL control channel: HARQ and scheduling request (2 transmit antennas).
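The orthogonality that SORTD relies on can be illustrated directly. In the toy sketch below (our own illustration; the base sequence and channel values are arbitrary, and the "cyclic shifts" are modeled as phase ramps across the 12 subcarriers rather than the exact Release 8 sequences), the two antenna branches occupy orthogonal resources, are separated at the eNB by despreading, and are then combined.

import numpy as np

N = 12                                           # one PRB: 12 subcarriers
n = np.arange(N)
u = 5
base = np.exp(-1j * np.pi * u * n * (n + 1) / N) # constant-modulus base sequence (illustrative)

# "Cyclic shifts" realized as phase ramps across the subcarriers
alpha1, alpha2 = 0, 6                            # shift indices assigned to the two antennas
s1 = base * np.exp(2j * np.pi * alpha1 * n / N)
s2 = base * np.exp(2j * np.pi * alpha2 * n / N)

print("normalized cross-correlation:", abs(np.vdot(s1, s2)) / N)   # ~0: orthogonal resources

# Both antennas carry the same BPSK-modulated HARQ bit, each on its own resource
h1, h2 = 0.8 * np.exp(1j * 0.3), 0.5 * np.exp(-1j * 1.1)           # flat per-antenna channels
ack = -1.0
rng = np.random.default_rng(6)
rx = ack * (h1 * s1 + h2 * s2) + 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Despread each resource separately, then combine the two branches (maximum ratio combining);
# the true channels are used here for simplicity (in practice they are estimated from DM-RS)
z1, z2 = np.vdot(s1, rx) / N, np.vdot(s2, rx) / N
decision = np.conj(h1) * z1 + np.conj(h2) * z2
print("combined decision statistic (sign gives the bit):", np.round(decision.real, 3))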
REFERENCE SIGNALS
DM-RS AND CHANNEL ESTIMATION
The reception of the UL data/control channel generally requires channel estimation, which is facilitated by the DM-RS in Release 8 [6]. The DM-RS is defined in the frequency domain by a cell-specific base sequence and its time-domain cyclic shift. The ISI introduced by SC-FDMA corresponds to a version of the dispersive channel resampled by a factor of N/M when N out of M subcarriers are assigned (N ≤ M). Consequently, different cyclic shifts provide orthogonal resources as long as the resampled delay spread is smaller than the minimum spacing of the cyclic shifts. However, the introduction of UL-MIMO requires an increase in the number of orthogonal resources to facilitate the estimation of the multiple spatially multiplexed channels.
This reduces the spacing between the cyclic shifts, and as a result the orthogonality of the DM-RS is no longer guaranteed in highly dispersive channels.
The same precoding is applied to both the UL data channel and the DM-RS in order to provide the precoding gain for channel estimation. Therefore, the required number of orthogonal resources for each UE is equal to the number of layers, which may be smaller than the number of transmit antennas. For demodulation purposes, the channels seen by all layers need to be estimated, and each channel can be estimated separately thanks to the orthogonality of the DM-RS; this enables a low-complexity implementation of MIMO channel estimation. Specifically, as shown in Fig. 4a, in slightly dispersive channels such as the Extended Pedestrian A (EPA) channel [9], after multiplying the received signal with the conjugate of a DM-RS (e.g., the DM-RS of the first layer), it is possible to separate all the channels, since the resampled delay spread of each channel is confined to a pair of two consecutive cyclic shifts. However, in highly dispersive channels such as the Extended Typical Urban (ETU) channel [9], per-layer channel estimation tends to experience a significant loss, since the resampled delay spread exceeds the minimum spacing of the cyclic shifts, causing interlayer interference, as illustrated in Fig. 4b.
Figure 4. Per-layer channel estimation (4 layers, system bandwidth: 20 MHz): a) EPA channel, 5 MHz; b) ETU channel, 5 MHz; c) EPA channel, 1.4 MHz; d) ETU channel, 1.4 MHz.
Furthermore, interlayer interference becomes more detrimental for narrower user bandwidths (i.e., when N/M becomes smaller), since the interpolation (sinc) filter involved in the resampling operation decays more slowly, as illustrated in Figs. 4c and 4d. In order to improve inter-DM-RS orthogonality, orthogonal code covering over the two slots of a subframe is additionally used. By spreading the DM-RS across the two slots within a subframe with the two orthogonal codes (+1, +1) and (+1, –1), DM-RS orthogonality between layers can be maintained even in highly dispersive channels. Since the orthogonality of the code covering assumes that the channel remains constant between the two slots, it is inherently vulnerable to high-mobility scenarios. However, considering that UL-MIMO (especially rank-4 transmission) is mostly targeted at low-mobility scenarios, orthogonal code covering remains attractive as an alternative DM-RS multiplexing scheme. Another
IEEE Communications Magazine • February 2011
advantage of orthogonal code covering is that it enables MU-MIMO pairing of UE units with different user bandwidths. (Note that cyclic-shifted DM-RS cannot guarantee the orthogonality in case of different user bandwidth.)
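To make the cyclic shift and orthogonal code covering mechanics concrete, here is a minimal sketch in Python. It uses an invented CAZAC-like base sequence and flat per-layer channels purely for illustration (not the 3GPP-specified sequences or parameters), and shows how despreading with (+1, +1) and (+1, –1) over the two slots separates the per-layer DM-RS, provided the channel is constant across the subframe.

```python
import numpy as np

# Minimal sketch (not the 3GPP-specified sequences): two layers share the same
# base sequence but use different cyclic shifts and different orthogonal cover
# codes (OCC) over the two slots of a subframe.
N = 12                                        # subcarriers in the illustration
n = np.arange(N)
base = np.exp(1j * np.pi * n * (n + 1) / N)   # placeholder CAZAC-like sequence

def dmrs(alpha):
    """DM-RS of one layer: base sequence with cyclic shift alpha (phase ramp)."""
    return base * np.exp(1j * 2 * np.pi * alpha * n / N)

occ = {1: np.array([+1, +1]), 2: np.array([+1, -1])}   # per-layer cover codes
h = {1: 0.8 + 0.3j, 2: -0.5 + 0.9j}                    # flat channels, assumed static over the subframe

# Received DM-RS in the two slots (noise omitted for clarity)
rx = [h[1] * occ[1][s] * dmrs(0) + h[2] * occ[2][s] * dmrs(6) for s in range(2)]

# OCC despreading recovers each layer's (channel x DM-RS) even when the cyclic
# shifts alone could not be separated (as in a highly dispersive channel).
for layer in (1, 2):
    despread = (occ[layer][0] * rx[0] + occ[layer][1] * rx[1]) / 2
    h_est = np.mean(despread * np.conj(dmrs(0 if layer == 1 else 6)))
    print(layer, np.round(h_est, 3))          # matches h[layer]
```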
SRS AND PRECODER SELECTION

Since DM-RS is precoded together with the UL data channel, it is impossible to select the precoder (i.e., the rank and precoding matrix) based on DM-RS, because only a subset of the spatial channel dimensions is observed. Instead, non-precoded SRS transmission is the conventional way for the eNodeB to efficiently select an appropriate precoder for the UL data channel. SRS is antenna-specific, as opposed to layer-specific in the case of DM-RS; thus, the required number of orthogonal resources is the same as the number of transmit antennas. There is much similarity between DM-RS and SRS: SRS is derived from the same cell-specific base sequence as DM-RS, and SRS orthogonality is provided through different cyclic shifts [6]. The difference from DM-RS is that two additional orthogonal resources are available in an FDMA fashion by using every other subcarrier [6].
IMPACT ON DL CONTROL SIGNALING

The UL scheduling grant is conveyed in the DL control channel from the eNodeB to the UE [6], and it enables dynamic resource allocation for both the data and control channels on a per-subframe basis. The introduction of UL-MIMO requires additional fields in the DL control channel payload to inform the UE of the relevant resource allocation, as follows (a rough bit count is sketched after this list):
• PMI/RI: The 2-TX and 4-TX codebooks consist of 7 and 53 precoding matrices, respectively.
• MCS/redundancy version (RV): MCS and RV are jointly signaled, as in the Release 8 UL data channel [10]. Up to two codewords can be transmitted in one subframe with per-codeword MCS and HARQ control; thus, an additional MCS-RV field for the second codeword is necessary.
• New data indicator (NDI): Per-codeword HARQ control is assumed, so an additional NDI is required for the second codeword.
It is worth mentioning that the DM-RS resource allocation (i.e., the cyclic shift and the orthogonal code covering) should be signaled without additional fields in the DL control channel payload, for both SU-MIMO and MU-MIMO. The cyclic shifts for multiple layers can be derived from a single cyclic shift field of the DL control channel together with semi-statically signaled cyclic shift offsets. The orthogonal code covering can likewise be derived from the single dynamically signaled cyclic shift, if the orthogonal code covering index is directly mapped from the cyclic shift field. With this approach, the additional DL control channel signaling required by UL-MIMO amounts to approximately 9 and 12 bits for the 2-TX and 4-TX cases, respectively, which justifies new DL control formats for the spatial multiplexing modes. Since the number of blind decodings is determined by the number of DL control formats, adding DL control formats increases the computational complexity of DL control channel reception at the UE. Thus, it is desirable to limit the number of new DL control formats (e.g., by sharing the payload size), and a single new DL control format (format 4) is assigned to support UL-MIMO. HARQ acknowledgment (ACK)/negative ACK (NACK) is signaled on the physical HARQ indicator channel (PHICH) from the eNodeB to the UE [6]. In order to support per-codeword HARQ, two orthogonal resources need to be allocated to each user.
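A rough accounting of where the 9 and 12 bits come from is sketched below. This is only an illustration under stated assumptions (a 5-bit MCS/RV field as in Release 8, a 1-bit NDI, and a PMI/RI field sized by the codebook); it is not the exact DCI format 4 field layout.

```python
from math import ceil, log2

# Rough accounting (an illustration, not the exact DCI format 4 field layout):
# the extra fields relative to the Release 8 UL grant are a PMI/RI field sized
# by the codebook, a second-codeword MCS/RV field (5 bits assumed, as in
# Release 8), and a second-codeword NDI (1 bit).
MCS_RV_BITS = 5   # assumed field width for the second codeword
NDI_BITS = 1

for n_tx, codebook_size in ((2, 7), (4, 53)):
    pmi_ri_bits = ceil(log2(codebook_size))          # 3 bits for 2-TX, 6 bits for 4-TX
    extra = pmi_ri_bits + MCS_RV_BITS + NDI_BITS
    print(f"{n_tx}-TX: ~{extra} additional bits")    # ~9 and ~12 bits
```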
RECEIVERS FOR UL DATA CHANNEL

MMSE EQUALIZER

A widely used and simple solution is a linear MMSE equalizer, which produces an MMSE estimate for each TD symbol. Due to the linearity and unitary properties of the DFT and IDFT, an equivalent approach is to obtain an FD MMSE estimate for each FD symbol and then take the IDFT to get the TD MMSE symbol estimates. The MMSE estimate for each FD symbol is obtained by linearly combining the FD received signals collected from multiple receive antennas using the MMSE combining weights

W(k) = (H(k)H^H(k) + R_U(k))^(–1) H(k),    (1)

where k is the subcarrier index, H(k) is a vector representing the frequency response for the layer signal of interest (one element per receive antenna), and R_U(k) is the impairment covariance matrix capturing the spatial correlation between the impairment components. This simple MMSE equalizer achieves performance very close to the theoretical capacity in a SIMO channel. However, in a MIMO channel, linear MMSE equalizer performance is far from capacity due to the presence of spatial-multiplexing interference.
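As a small illustration of Eq. 1, the following Python sketch applies per-subcarrier MMSE combining for one layer of interest and then takes the IDFT back to the time domain. All dimensions, channel values, and the white-noise impairment covariance are invented for the example.

```python
import numpy as np

# Per-subcarrier FD MMSE combining following Eq. 1 for one layer of interest
# (illustrative shapes and values, not tied to any LTE numerology).
rng = np.random.default_rng(0)
n_rx, n_sc = 4, 12                       # receive antennas, subcarriers
H = rng.standard_normal((n_sc, n_rx)) + 1j * rng.standard_normal((n_sc, n_rx))
noise_var = 0.1

x_fd = np.exp(1j * np.pi / 4) * np.ones(n_sc)           # FD symbols of the layer
Y = H * x_fd[:, None]                                   # received signal per antenna
Y += np.sqrt(noise_var / 2) * (rng.standard_normal(Y.shape) + 1j * rng.standard_normal(Y.shape))

x_hat_fd = np.empty(n_sc, dtype=complex)
for k in range(n_sc):
    h = H[k]                                            # H(k): one element per rx antenna
    R_u = noise_var * np.eye(n_rx)                      # impairment covariance (white noise here)
    w = np.linalg.solve(np.outer(h, h.conj()) + R_u, h) # Eq. 1: (H H^H + R_U)^-1 H
    x_hat_fd[k] = w.conj() @ Y[k]                       # MMSE combine across antennas

x_hat_td = np.fft.ifft(x_hat_fd)                        # back to TD symbol estimates
```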
SIC RECEIVER

A SIC receiver, as considered in [11], is a good candidate for MIMO reception. It was shown that SIC receivers with per-layer MCS control in fact achieve MIMO capacity in non-dispersive channels [11]. The operation of SIC is simple: the receiver first detects the signal sent by the first layer, and after successful detection it cancels the interference contributed by the detected signal before detecting the other layers. The process is repeated until all layers are detected. Typically, a layer is cancelled only when its cyclic redundancy check (CRC) passes. With perfect per-layer MCS control, the transmission rate for each layer is chosen to satisfy a sufficiently low block error rate (BLER); thus, SIC can most likely cancel the interference contributed by previously detected layers. A SIC receiver also consists of a number of linear MMSE equalizers, whose role is to suppress the residual interference. The cancellation of other layers is reflected in the impairment correlation, and thus in the MMSE combining weights in Eq. 1. Cancelling the detected layers allows the MMSE equalizer to use its available degrees of freedom to better suppress the remaining dominant interfering signals, resulting in a reduced interference level and a higher SINR. However, there are some limitations of SIC receivers in practice. First, SIC does not address the ISI problem. Although frequency-domain MMSE equalization is effective in suppressing ISI, there is an opportunity for a small performance improvement by better cancelling ISI [12]. Moreover, turbo encoding and CRC checking are not necessarily performed on a per-layer basis, since per-codeword (as opposed to per-layer) MCS control and CRC checking are used for UL-MIMO. For example, for rank-4 transmission, the first and second layers are mapped to the same codeword and share the same CRC bits. Hence, these two layers need to be decoded jointly, rather than individually. Thus, prior to the decoding stage, the second-layer receiver cannot cancel the interference contributed by the first layer, resulting in high intra-codeword interference.
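The detect-cancel-repeat loop can be sketched as follows. This toy example uses an uncoded, flat 2 × 2 channel with QPSK and hard symbol decisions standing in for turbo decoding and the CRC check, so it only illustrates the control flow, not a real SIC implementation.

```python
import numpy as np

# A stripped-down SIC pass over a flat 2x2 MIMO channel with QPSK layers
# (illustrative only: no coding, CRC, or frequency selectivity).
rng = np.random.default_rng(1)
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
noise_var = 0.05
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

x = qpsk[rng.integers(0, 4, size=2)]                     # one symbol per layer
y = H @ x + np.sqrt(noise_var / 2) * (rng.standard_normal(2) + 1j * rng.standard_normal(2))

detected = {}
remaining = [0, 1]                                       # layer indices still to detect
while remaining:
    Hr = H[:, remaining]                                 # channels of the uncancelled layers
    W = np.linalg.solve(Hr @ Hr.conj().T + noise_var * np.eye(2), Hr)  # MMSE weights
    est = W.conj().T @ y
    best = int(np.argmax(np.linalg.norm(Hr, axis=0)))    # detect the strongest layer first
    layer = remaining[best]
    sym = qpsk[np.argmin(np.abs(qpsk - est[best]))]      # hard decision (stands in for decode + CRC)
    detected[layer] = sym
    y = y - H[:, layer] * sym                            # cancel the detected layer
    remaining.remove(layer)

print(detected, x)
```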
Figure 5. Two key components in a turbo equalization receiver: a codeword detector that includes an MMSE-DFE receiver and a layer regenerator.
TURBO EQUALIZATION RECEIVER

Ultimately, an optimal receiver would form joint hypotheses on all the overlapping symbols. However, this is practically impossible due to the use of higher-order modulation and the large number of overlapping symbols introduced by SC-FDMA and MIMO. A practically attractive solution, in terms of the performance/complexity trade-off, is soft-cancellation-based MMSE turbo equalization. Such a receiver was first proposed for GSM/EDGE [13] and recently considered for the LTE UL [14]. The architecture of the turbo equalization receiver is similar to SIC in the sense that each codeword is successively detected, regenerated, and then cancelled from the received signal. However, unlike SIC, the cancellation of a detected layer takes place whether or not the CRC passes. Furthermore, each layer may be detected multiple times in a round-robin manner. There are three basic elements in a soft-cancellation-based MMSE turbo equalization receiver.

Turbo Operation — This refers to the information exchange between several detection blocks in the receiver. For each codeword, information exchange occurs between the FD equalizer/demodulator and the turbo decoder. In essence, both the equalizer/demodulator and the turbo decoder estimate probabilistic information (usually a log-likelihood ratio [LLR]) about each encoded bit. The sign of the estimated probabilistic information indicates whether a bit is likely to be 0 or 1, while the magnitude indicates the likelihood or reliability level. First, the equalizer and demodulator utilize the modulation constellation structure to derive an estimate of the probabilistic information from the received signal. Based on this probabilistic information, the decoder utilizes the forward error correction (FEC) code structure to further improve the probabilistic estimate of the encoded bits. This involves a simple modification of the turbo decoder to generate LLRs for the parity bits, in addition to those for the systematic bits, giving rise to a so-called soft-input soft-output (SISO) turbo decoder [13]. Such probabilistic information from the decoder is fed back to the demodulator, as a form of a priori information, for the demodulator to further improve the probabilistic estimate. Estimates of the received bits become more accurate as more information exchanges take place. Furthermore, not only does the equalizer benefit from the probabilistic information of the decoder that is estimating the same set of encoded bits, it also benefits from accessing probabilistic information from the decoders that are estimating other sets of encoded bits (e.g., encoded bits mapped to the other layers). This probabilistic information can be used to form soft symbols for the cancellation of both interlayer interference and ISI (see below).

Soft Symbol Subtraction — The equalizer utilizes the probabilistic information from the decoder to obtain MMSE symbol estimates of the interfering symbols. From [15], such an estimate is given in the form of a conditional mean. Assuming the bits mapped to a TD constellation symbol are mutually independent, the symbol probability is simply the product of the bit probabilities. Averaging the product of the symbol probability and the corresponding modulation value over the entire modulation constellation yields the MMSE symbol estimate. After obtaining the MMSE estimates of the TD symbols, MMSE estimates of the FD symbols are obtained by taking the DFT, as illustrated in the layer regenerator in Fig. 5. These interfering symbols, after channel filtering, are subtracted from the FD received signals.

Soft Decision Feedback MMSE Equalization — In fact, not only are interfering symbols from other layers cancelled, soft symbols from the layer of interest are also removed. This feature is referred to as a soft decision-feedback equalizer (DFE) [13], since in this process the ISI component due to time dispersion is removed. The problem can be formulated as an MMSE-DFE problem; an MMSE-DFE structure is shown inside the codeword detector in Fig. 5. The objective of the MMSE-DFE is to produce an MMSE symbol estimate that maximizes post-equalization SINR, based on knowledge of the channel response and the a priori information of the symbols under estimation. ISI is mitigated through feedback filtering (FBF) using the regenerated layer and cancellation of the FBF output from the feed-forward filter (FFF) output. The FFF and FBF are jointly designed
based on the MMSE criterion, and the exact expressions were derived in [13].
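As a concrete illustration of the soft symbol modulator in the layer regenerator described above, the following sketch computes conditional-mean (MMSE) symbol estimates from decoder LLRs for Gray-mapped QPSK and transforms them to the frequency domain. The bit-to-symbol mapping and LLR sign convention are assumptions for the example, not the exact LTE mapping.

```python
import numpy as np

# Soft symbol generation for the layer regenerator, shown for QPSK with Gray
# mapping (bit b -> amplitude (1 - 2b)/sqrt(2) on each of I and Q, with
# LLR = ln P(b=0)/P(b=1)). The conditional-mean estimate per bit then reduces
# to tanh(LLR/2); this is a generic textbook relation, not the LTE mapping.
def soft_qpsk_symbols(llr_i, llr_q):
    """MMSE (conditional-mean) TD symbol estimates from per-bit LLRs."""
    return (np.tanh(llr_i / 2) + 1j * np.tanh(llr_q / 2)) / np.sqrt(2)

llr_i = np.array([+6.0, -0.4, +1.2])      # confident 0, weak 1, moderate 0
llr_q = np.array([-5.0, +0.1, -2.0])
soft_td = soft_qpsk_symbols(llr_i, llr_q)
soft_fd = np.fft.fft(soft_td)             # regenerated layer in the frequency domain
print(np.round(soft_td, 3))
```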
PERFORMANCE EVALUATION

To illustrate the performance of UL-MIMO in an LTE system, we simulated link performance in the Extended Vehicular A (EVA) channel [9]. We consider a scenario where 25 resource blocks (5 MHz) are allocated to a user, and full-rank transmission is assumed. For each SNR, we evaluate the average throughput over 500 subframes. We assume no antenna correlation. The performance of 2 × 2 MIMO is shown in Fig. 6a for the MMSE, SIC, and turbo equalization receivers. Here the set of MCSs defined for the Release 8 UL data channel [10] is used for each layer, and the highest MCS in this case has a transmission rate of around 18.75 Mb/s per layer. As in [14], an MCS is chosen based on long-term average throughput at each SNR point; thus, each curve in Fig. 6 corresponds to the envelope of the throughput curves of all 29 MCSs defined in [10]. We see that turbo equalization offers significant performance gain over both MMSE and SIC: 4–5 dB over MMSE and 2–3 dB over SIC at medium to high SNR. Furthermore, all the receivers suffer similar degradation when practical channel estimation is considered. With antenna correlation, a similar performance gain of the turbo equalization receiver over the linear MMSE equalizer was observed in [14]. The performance of 4 × 4 MIMO is shown in Fig. 6b. The gain of the turbo equalization receiver over SIC increases compared to the 2 × 2 case. The 4 × 4 case has two layers mapped to one codeword, and thus the larger gain here is due to better cancellation of intra-codeword interference.

Figure 6. Performance of UL-MIMO (EVA channel, 5 MHz): a) 2 transmit antennas, 2 receive antennas; b) 4 transmit antennas, 4 receive antennas.
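The way each plotted curve is formed can be mimicked with a few lines: for every SNR point, take the best long-term average throughput over the per-MCS curves, i.e., their envelope. The per-MCS curves below are arbitrary placeholders, not the simulated results of Fig. 6.

```python
import numpy as np

# Envelope-over-MCS selection: at each SNR, pick the MCS with the best
# long-term average throughput. Curves here are placeholders for illustration.
snr_db = np.arange(0, 41, 2)
rates = np.linspace(1.0, 18.75, 29)                          # Mb/s per layer, 29 MCSs
# hypothetical per-MCS throughput: rate x a logistic stand-in for (1 - BLER)
thr = np.array([r / (1 + np.exp(-(snr_db - 1.5 * i))) for i, r in enumerate(rates)])
envelope = thr.max(axis=0)                                   # one point per SNR
best_mcs = thr.argmax(axis=0)                                # MCS index chosen at each SNR
```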
CONCLUSION

An overview of UL-MIMO for LTE-Advanced has been presented in this article. The evolution of the Release 8 UL transmission scheme toward MIMO has been agreed on in 3GPP, as detailed here; it includes precoded spatial multiplexing, transmit diversity, and modifications to the reference signals and DL control signaling to support UL-MIMO. The performance of a number of receivers for the UL data channel was evaluated using link simulations, which show that significant performance gains are achievable when an advanced receiver such as the turbo equalization receiver is used at the eNodeB.
ACKNOWLEDGMENT

The authors would like to thank the anonymous reviewers for their helpful comments and suggestions.
REFERENCES
[1] E. Dahlman et al., 3G Evolution: HSPA and LTE for Mobile Broadband, 2nd ed., Academic Press, 2008.
[2] 3GPP TR 36.913, "Requirements for Further Advancements for E-UTRA," v. 8.0.1, Mar. 2009.
[3] U. Sorger, I. De Broeck, and M. Schnell, "Interleaved FDMA — A New Spread-Spectrum Multiple-Access Scheme," Proc. IEEE ICC '98, Atlanta, GA, June 1998, pp. 1013–17.
[4] H. G. Myung, J. Lim, and D. J. Goodman, "Single Carrier FDMA for Uplink Wireless Transmission," IEEE Vehic. Tech. Mag., vol. 1, no. 3, Sept. 2006, pp. 30–38.
[5] M. Noune and A. Nix, "Frequency-Domain Precoding for Single Carrier Frequency-Division Multiple Access," IEEE Commun. Mag., vol. 48, no. 5, June 2009, pp. 68–74.
[6] 3GPP TS 36.211, "Evolved Universal Terrestrial Radio Access (E-UTRA); Physical Channels and Modulation," v. 8.9.0, Dec. 2009.
[7] D. J. Love and R. W. Heath, Jr., "Limited Feedback Unitary Precoding for Spatial Multiplexing Systems," IEEE Trans. Info. Theory, vol. 51, no. 8, Aug. 2005, pp. 2967–76.
[8] 3GPP TR 25.814, "Physical Layer Aspects for Evolved UTRA," v. 7.1.0, Sept. 2006.
[9] 3GPP TS 36.104, "Evolved Universal Terrestrial Radio Access (E-UTRA); Base Station (BS) Radio Transmission and Reception," v. 8.10.0, June 2010.
[10] 3GPP TS 36.213, "Evolved Universal Terrestrial Radio Access (E-UTRA); Physical Layer Procedures," v. 8.8.0, Sept. 2009.
[11] M. K. Varanasi and T. Guess, "Optimum Decision Feedback Multiuser Equalization with Successive Decoding Achieves the Total Capacity of the Gaussian Multiple-Access Channel," Proc. Asilomar Conf. Signals, Sys., Comp., Monterey, CA, Nov. 1997, pp. 1405–9.
[12] G. Berardinelli et al., "Improving SC-FDMA Performance by Turbo Equalization in UTRA LTE Uplink," Proc. IEEE VTC, Singapore, May 2008, pp. 2557–61.
[13] C. Laot, R. Le Bidan, and D. Leroux, "Turbo Equalization: Adaptive Equalization and Channel Decoding Jointly Optimized," IEEE JSAC, vol. 19, no. 9, Sept. 2001, pp. 965–74.
[14] G. Berardinelli et al., "Turbo Receiver for Single User MIMO LTE-A Uplink," Proc. IEEE VTC, Barcelona, Spain, Apr. 2009, pp. 26–29.
[15] H. Stark and J. W. Woods, Probability and Random Processes with Application to Signal Processing, 3rd ed., Prentice Hall, 2002.
BIOGRAPHIES

CHESTER SUNGCHUNG PARK received a Ph.D. degree in electrical engineering from the Korea Advanced Institute of Science and Technology (KAIST), Daejeon, in 2006. Since 2007 he has been with Ericsson Research, USA, working on digital baseband and front-end algorithms for LTE. His research interests include algorithm and hardware architecture design for MIMO-OFDM, error correction codes, and software-defined radios. From 2006 to 2007 he was with Samsung Electronics Inc., Giheung, Korea.
Y.-P. ERIC WANG received a Ph.D. degree from the University of Michigan, Ann Arbor, in 1995. Since then he has been a member of Ericsson Research, USA. His research interests include coding, modulation, synchronization, multiple inputs-multiple outputs, channel equalization, and interference cancellation and suppression. He was an Associate Editor for IEEE Transactions on Vehicular Technology from 2003 to 2007. GEORGE JÖNGREN received M.Sc. and Ph.D. degrees in electrical engineering from the Royal Institute of Technology (KTH), Stockholm, Sweden, in 1998 and 2003, respectively. He was a postdoctoral researcher at KTH for one year before joining Qualcomm CDMA Technologies GmbH, Nuremberg, Germany. Since September 2005 he has been with Ericsson Research, Sweden, working on smart antenna research and implementation, as well as 3GPP standardization. He was elected Teacher of the Year in Electrical Engineering at KTH in 1997. DAVID HAMMARWALL received a Ph.D. degree in telecommunications in 2007 from KTH, where he was awarded an “Excellent Graduate Student Position” from the President’s office. In 2007 he joined Ericsson Research in Stockholm, where he is active in research and standardization of LTE. His research interests include wireless communications, receiver algorithms, resource optimization, beamforming, and scheduling.
IMT-ADVANCED AND NEXT-GENERATION MOBILE NETWORKS
A 25 Gb/s(/km2) Urban Wireless Network Beyond IMT-Advanced Sheng Liu and Jianjun Wu, Huawei Technologies Chung Ha Koh and Vincent K. N. Lau, Hong Kong University of Science and Technology
ABSTRACT

In this article we present a survey of the technical challenges of future radio access networks beyond LTE-Advanced, which should offer very high average area throughput to support a huge demand for data traffic and high user density with energy-efficient operation. We highlight various potential enabling technologies and architectures to support the aggressive goal of an average area throughput of 25 Gb/s/km2 in beyond-IMT-Advanced systems. Specifically, we discuss the challenges and solutions from the control/processing perspective, the radio resource management perspective, and the physical layer perspective for dense urban cell deployment. Using various advanced technologies such as interference mitigation, MIMO, and cooperative communications, as well as cross-layer self-organizing networks, we show that future urban wireless networks could potentially offer high-quality mobile services and an experience similar to the wired Internet.
INTRODUCTION

With the emergence of numerous smart mobile devices such as handheld smartphones and netbooks, data usage on mobile networks is growing exponentially. As shown in Fig. 1, mobile data traffic is expected to grow, relative to the end of 2009, more than 50 times by 2015 and even 500 times by 2020, corresponding to about 57 Gbytes per month for the average subscriber. For comparison, about 60 Gbytes per month is generated by an asymmetric digital subscriber line (ADSL) subscriber at a constant bit rate of 2 Mb/s over the 2009 average online time of 136 min per day. This expectation thus means that mobile users in 2020 will expect a user experience similar to that of current wireline users. It is anticipated that all Internet applications used via fixed Internet access should also be supported on the mobile access platform. Indeed, some applications (e.g., social networks) may even be accessed more frequently via mobile access than via fixed access. At the same time, it is predicted that by 2020 more than 50 billion devices (compared to 4.9 billion devices at the end of 2009) will be connected with mobile broadband connections, and every one of us will be surrounded by an average of 10 devices. Such explosive growth of wireless devices also drives a shift of applications from current man-to-machine communications to future machine-to-machine (M2M) communications. This new paradigm of M2M applications further contributes to the increasing demand for mobile data applications.

The rapidly increasing mobile traffic puts a huge demand on network capacity and quality of service (QoS) in mobile networks. The wireless network is evolving from current third-generation (3G) technologies to various fourth-generation (4G) systems such as International Mobile Telecommunications (IMT)-Advanced systems. Although IMT-Advanced technologies (e.g., Long Term Evolution Advanced [LTE-A] in the Third Generation Partnership Project [3GPP]) have achieved a remarkable capacity increase compared to current 3G systems, they still cannot satisfy the explosive increase of mobile data traffic projected for 2020. For instance, assuming an average bit rate of 1 Mb/s per user during the busy hour (BH), a typical user density of 25,000 users/km2 in dense urban regions implies a demand for an average area capacity of 25 Gb/s/km2. To achieve this capacity, about 230 MHz of bandwidth is required even if the cell average throughput of 3.7 b/s/Hz/cell is achieved, as in LTE-A with 200 m intersite distance (ISD) [1]. It is clear that we need a more fundamental breakthrough to increase the wireless capacity in urban areas beyond LTE-A technology.

In this article we discuss the various technical challenges involved as well as potential advanced technologies to achieve the aggressive target of 25 Gb/s/km2 area throughput: interference mitigation techniques, cooperative multiple-input multiple-output (MIMO) techniques, and cross-layer self-organizing networks (SONs). In the following we discuss the advantages and technical issues associated with urban small cell deployment and the associated key performance targets. Specifically, the urban small cell environment has the potential to provide more spatial degrees of freedom in the network, which facilitates efficient resource reuse, user plane/control plane separation, and dynamic spatial path selection. We propose two radio access network (RAN) architectures, the cloud RAN and the self-organization RAN, for reducing operation and control costs. We also elaborate on various advanced physical layer techniques, such as interference mitigation and cooperative MIMO communications, which contribute to fulfilling the high demand in future wireless networks.
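A quick back-of-the-envelope check of the ~230 MHz figure quoted above, assuming idealized hexagonal cells at 200 m ISD (an assumption made for this sanity check, not the authors' exact deployment model):

```python
import math

# Area capacity demand and the per-cell bandwidth it implies.
user_density = 25_000                                   # users/km^2
busy_hour_rate = 1e6                                    # b/s per user
area_capacity = user_density * busy_hour_rate           # 25 Gb/s/km^2

isd_km = 0.2                                            # intersite distance
cell_area = math.sqrt(3) / 2 * isd_km ** 2              # hexagonal cell area (assumption)
cells_per_km2 = 1 / cell_area                           # ~29 cells/km^2

spectral_eff = 3.7                                      # b/s/Hz/cell, LTE-A average
bw_needed = area_capacity / cells_per_km2 / spectral_eff
print(f"{bw_needed / 1e6:.0f} MHz per cell")            # ~230 MHz
```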
Figure 1. Growth of transferred data in Western Europe.

URBAN DENSE SMALL CELL DEPLOYMENT

There are various advantages associated with urban small cell deployment from the network capacity and energy efficiency perspectives. For instance, small cell deployment allows more efficient spatial reuse, which contributes to increased network capacity. In addition, the deployment of small cells also has benefits in terms of energy efficiency. Energy-efficient design for green radio has become a trend for both handsets and network infrastructure in order to lower the total cost of ownership (TCO) for mobile operators and reduce CO2 emissions [2]. One example is the Green Radio project in Mobile VCE [3], which aims at a 100-fold reduction in power consumption over current wireless communication networks. Moving the access network closer to the user is a key approach to reduce the required transmit power. Although urban small cell deployment has the potential to achieve higher system capacity, there are various crucial technical challenges that must be overcome.

Backhaul and Installation Cost — When the cell size shrinks, the number of radio access points increases tremendously, which leads to huge increases in backhaul cost and in the real estate cost of installing the access points. Therefore, cost-efficient backhauling schemes and easy-to-install access points must be considered.

Interference Management — For dense small cell deployment, intercell interference will become more severe than in conventional macrocellular systems, since the cell edge areas and the number of interference sources might become larger. In macrocell networks, MIMO-based intercell interference mitigation techniques such as coordinated multipoint transmission (CoMP) have proven very effective for handling interference [4]. However, such schemes may not be practically feasible in dense small cell networks due to the increased number of interference sources and radio access points.

Mobility Management — A small cell network implies frequent handover, which gives rise to heavy signaling load in mobility management and an increase in the probability of dropped calls.

Resource Scheduling — The radio propagation environment in a small cell network is very different from that in a macrocell network. For example, in a macrocell network the base station locations are usually carefully planned, and users are partitioned into cell center and cell edge users. However, in a dense small cell network the locations of the access points may not be as carefully planned as in the macrocell scenario, and there is no distinct user partitioning since the cell radius is very small; therefore, the differences in signal strengths among users may not be very large. Additionally, in a traditional macrocell network there are usually two to four adjacent cells that dominate the intercell interference, and the impact of remote cells can be ignored due to signal attenuation over long propagation distances. However, in small cell networks, interference may come not only from the first-tier neighboring cells but also from other cells. As a result, radio resource (power, frequency, time, and space) scheduling and optimization become much more complicated, and conventional centralized solutions may not be viable in such complex networks.

High Operational Expenditures — The number of nodes in dense small cell networks is significantly larger than in conventional macrocell networks. As a result, the cost of site maintenance will be very high if each node does not support self-organizing operations such as self-configuration, self-optimization, and self-healing.

Figure 2 illustrates a hierarchical architecture of a future dense small cell network, consisting of dense small cells and macrocells. Specifically, the network equipment within the service range of a macrocell base station includes a large number of small cell access points (SAPs), which are controlled by the AP managers. The SAPs might form a wireless backhaul by connecting with neighboring SAPs via the wireless medium and relaying traffic for each other. On the other hand, the SAPs might have centralized control signaling according to the control management policy of the capacity enhancement technologies. While the small cell concept has been around for a long time in 2G, 3G, and LTE-A systems, the notion of small cells in current systems is quite different from the dense small cell network discussed above. For instance, small cells in current systems are expected to play a complementary role rather than a major role
in providing capacity and coverage. Hence, the physical layer signal processing, resource management algorithms, and architecture of current systems are not optimized for dense small cell deployment, and these existing technologies cannot deal with the above challenges. In this article we elaborate on various advanced enabling technologies to support the vision of dense small cell networks.

Figure 2. Small cell network architecture.

Table 1. System performance targets.
Metric | Potential target
Average spectrum efficiency (b/s/Hz) | DL: 5.5 b/s/Hz, UL: 3.7 b/s/Hz
Peak spectral efficiency (b/s/Hz) | DL: 45 b/s/Hz, UL: 25 b/s/Hz
Peak data rate (b/s/cell) | DL: 4.5 Gb/s/cell, UL: 2.5 Gb/s/cell
Average areal capacity (b/s/km2) | 25 Gb/s/km2
TARGET PERFORMANCE

Table 1 summarizes the major performance targets of dense small cell networks. We aim to support an average user data rate of up to 1 Mb/s, which is similar to the current ADSL-like user experience. For a typical user density of 25,000 users/km2, this implies an average area capacity of 25 Gb/s/km2. In addition, we consider various per-cell performance targets.1 For example, the target average and peak downlink spectrum efficiencies are about 5.5 b/s/Hz and 45 b/s/Hz, respectively. These per-cell targets represent about a 50 percent improvement over LTE-A systems in micro- or picocellular environments. While these targets are rather challenging, they could potentially be achieved via various advanced enabling technologies in network architecture, cross-layer self-organizing networks, and advanced interference mitigation techniques in the physical layer.

1 Per-cell metrics (average cell throughput, average spectrum efficiency, peak spectral efficiency, etc.) are widely used for characterizing the system performance of macrocell systems. However, these metrics focus more on the performance gain achieved by physical layer signal processing and radio resource control techniques, and less on the evolution of the network architecture.
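For what it is worth, the peak data rate and peak spectral efficiency rows of Table 1 are mutually consistent if one assumes the 100 MHz maximum aggregated bandwidth mentioned later for carrier aggregation; that bandwidth assumption is ours, not a derivation stated with the table.

```python
# Consistency check of Table 1 under an assumed 100 MHz aggregated bandwidth.
bandwidth_hz = 100e6
for link, peak_se in (("DL", 45), ("UL", 25)):              # b/s/Hz
    print(link, peak_se * bandwidth_hz / 1e9, "Gb/s/cell")  # 4.5 and 2.5
```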
RADIO ACCESS NETWORK ARCHITECTURE

The future mobile network is a heterogeneous network that supports macrocells, picocells, femtocells, relays, and distributed antenna systems (DAS) in the same spectrum. Such a network enables huge system capacity by allowing vertical coverage and optimal usage of local and wide area cells. For indoor applications, femtocells are an efficient scheme to enhance indoor coverage and throughput. For urban and dense small cell environments, it is envisioned that dense small cells will play a major role in supporting high system capacity. The dense small cells should be supported by the control, management, and maintenance operations in the RAN architecture. A key factor is the availability of abundant physical fiber, wavelength-division multiplexed passive optical network (WDM-PON), or ultra-wideband microwave, based on which the following two RAN architectures are considered.

Centralized processing: For smaller-scale networks, or when low-cost backhaul transmission resources are widely available, the radio frequency (RF) processing can be distributed at remote radio units (RRUs) such as SAPs, while the baseband and radio resource management (RRM) processing can be centralized at the network controller. Based on this centralized architecture, the system capacity can be maximized with network MIMO signal processing and global RRM. Performance-wise, the centralized RAN architecture is one of the most efficient ways to overcome the interference and resource management issues in small cell deployment. However, the issue is scalability, due to the computational complexity and signaling overhead involved. Depending on the scale of the network, cost reduction and load sharing may be accomplished by cloud computing technology [5].

Distributed processing: For large-scale networks, or when backhaul transmission is expensive, distributing some RAN computation and processing at the SAPs is preferred. In this case the RAN consists of dense small cells with compact micro/pico base stations, which take care of the baseband processing and perhaps part of the RRM control, whereas the network controller takes care of registration and maintenance of the network. The advantage of this architecture is reduced computational and signaling load in the network controller; as a result, this architecture is more scalable. However, the distributed processing requirement poses great challenges on the robustness and effectiveness of the control algorithms.

In the following we illustrate two implementation examples of centralized and distributed RANs: the cloud RAN and the distributed SON network, respectively.
CENTRALIZED CLOUD RAN

Figure 3a illustrates a typical implementation of the cloud RAN [6]. A cloud RAN is a radio access network that installs many small RRUs and centralized processing units (CPUs) based on a software-defined radio (SDR) multiprotocol platform. It uses virtualized baseband processing by pooling all radios and scheduling the computing resources. As Fig. 3a shows, the CPUs are connected to the RRUs (which correspond to the SAPs in an urban wireless network) via WDM-PON and control the RRUs based on the full knowledge available to centralized processing. The digitized I/Q radio signal requires a very high rate, up to several gigabits per second or higher when MIMO RRUs are used and the bandwidth exceeds 20 MHz. Thus, optical transmission is usually necessary for this architecture. It is generally agreed that a WDM-based access network will enable next-generation optical broadband access in the near future. In contrast to a time-division multiplexing (TDM)-based passive optical network (PON), which offers only tens of megabits per second, WDM-PON will enable the delivery of much higher capacity services to subscribers, since each optical network unit (ONU) will be served by a dedicated wavelength channel to communicate with the central office or optical line terminal (OLT). As a result, the deployment of the centralized architecture for outdoor applications will become popular. When the signals of distributed antennas are connected together, joint signal processing and cooperative RRM become easier and more flexible. With the rapid development of multicore processors, a cloud-computing-based platform will be feasible for carrying out all physical layer and medium access control (MAC) layer processing. Because the cell size is small, signals from hundreds or even thousands of cells can be centralized without long-distance optical transmission.
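To see why the fronthaul rate reaches several gigabits per second, here is a rough estimate of the digitized I/Q rate between an RRU and the CPU. The sample rate, sample width, and 8b/10b line coding are typical assumptions (e.g., 20 MHz LTE sampled at 30.72 Msps with 15-bit I and Q), not a specific fronthaul interface specification.

```python
# Rough fronthaul rate estimate for digitized I/Q between an RRU and the CPU.
sample_rate = 30.72e6        # samples/s for a 20 MHz LTE carrier
bits_per_sample = 2 * 15     # I and Q, 15 bits each (assumed)
line_coding = 10 / 8         # 8b/10b overhead (assumed)
antennas = 4

rate = sample_rate * bits_per_sample * line_coding * antennas
print(f"{rate / 1e9:.2f} Gb/s")   # ~4.6 Gb/s for a 4-antenna RRU
```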
Figure 3. RAN architectures: a) centralized cloud; b) distributed self-organized.
DISTRIBUTED SELF-ORGANIZATION RAN

Figure 3b illustrates an example implementation of a distributed SON [5]. The SON is a solution for simpler operation and better maintenance of networks, which gives the network elements self-organizing functions that allow the system to operate and configure itself with less human intervention. Distributed SON allows the lower-level network entities to have SON functions, whereas centralized SON allows only upper-level network entities to operate SON functions. In urban wireless networks established by distributed SON, each SAP operating SON functions collects information about environment changes (e.g., the installation of a new SAP and neighboring SAP actions) and makes self-optimization decisions for mobility robustness, energy savings, coverage optimization, and so on. Moreover, the SAPs exchange information through the interface to help the reconfiguration of neighboring SAPs. It can be a key enabler to manage and operate several layers for interworking and giving faultless service at lower cost. This would be particularly useful for a dense small cell network comprising a large number of SAPs. The main SON functions include self-configuration, self-optimization, and self-healing. Following are detailed descriptions of the SON functions and requirements with consideration of urban small cells.

Self-Configuration — SAPs should be automatically configured to provide wireless service when connecting to the core network. After the installation of a new SAP, the automatic configuration process comprises connection to a core network entity, authentication, and recognition of neighbor SAPs. The initial parameter setting for reliable initial service without interfering with other SAPs and the macro base transceiver station (BTS) is also an important feature of self-configuration.

Self-Optimization — In order to maximize network performance and keep the system reliable, optimization considers the urban environment characteristics of scattered channels, user mobility, and high density. Self-optimization encapsulates the procedures of the monitoring mode for detecting variance, updating the neighbor list, reconfiguring system parameters, and exchanging configured information; thus, it should be carefully operated so that the workload stays small.

Interaction between Entities — Self-optimization of SAPs needs sensing and detection procedures of neighbor environments. This might lead to establishing interactions between SAPs and between an SAP and upper network equipment, especially when end equipment has its own SON function. The interface and control signaling are necessary to support the SON function.

Fault Management — Failure detection and localization belong to the self-healing procedures. If a failure happens in SON procedures, such as negative effects or false settings of parameters, SAP and network entities should contain self-disabling capability and self-healing procedures.

Figure 4. Illustration of user plane/control plane separated hierarchical cell structure.
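A toy rendering of the self-configuration steps listed above (connect to the core, authenticate, discover neighbors, choose non-conflicting initial parameters) is sketched below; the states, function, and channel model are invented for illustration only.

```python
from enum import Enum, auto

class SapState(Enum):
    POWERED_ON = auto()
    CONNECTED = auto()
    AUTHENTICATED = auto()
    NEIGHBORS_KNOWN = auto()
    IN_SERVICE = auto()

def self_configure(neighbor_channels, available_channels):
    """Walk a new SAP through the self-configuration steps described above."""
    state = SapState.POWERED_ON
    state = SapState.CONNECTED        # connect to a core network entity
    state = SapState.AUTHENTICATED    # authenticate with the network
    state = SapState.NEIGHBORS_KNOWN  # learn neighbor SAPs over the inter-SAP interface
    # pick an initial channel that avoids interfering with neighbors, if possible
    free = set(available_channels) - set(neighbor_channels)
    channel = min(free) if free else min(available_channels)
    state = SapState.IN_SERVICE
    return state, channel

print(self_configure(neighbor_channels={1, 2}, available_channels={1, 2, 3, 4}))
# -> (SapState.IN_SERVICE, 3)
```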
RADIO RESOURCE MANAGEMENT IN DENSE SMALL CELL NETWORKS

Efficient radio resource management (RRM) schemes for urban wireless networks play an important role in utilizing the limited radio spectrum resources. General RRM involves strategies and algorithms for controlling parameters such as transmit power, data/control channel allocation, and load balancing. Here we focus on RRM approaches for a densely populated environment: in our urban model, the typical user density is 25,000 users/km2, and it is envisioned that an efficient distributed approach to resource allocation will have a significant influence on achieving high system capacity. In this section we describe advanced RRM features adapted to the urban wireless environment, starting with a plane separation scheme based on the hierarchical cell structure of urban wireless networks. We then introduce traffic distribution and caching approaches for efficient resource usage under high user density.
USER PLANE/CONTROL PLANE SEPARATED HIERARCHICAL CELL STRUCTURE

Figure 4 shows the proposed user plane/control plane separated hierarchical cell structure, where a macro-BTS and a number of SAPs share the same spectrum to form two-tier coverage. Signaling channels, as well as traffic channels for delay-sensitive services with small data volume (e.g., voice over IP [VoIP] and gaming), are offered by the macro-BTS; meanwhile, other data traffic is transmitted over the channels established by the SAPs. As illustrated in Fig. 4, UE1 has traffic channels offered by SAP1 and SAP2, and a signaling channel with the macro-BTS. When UE1 moves within the coverage of the macro-BTS, only the traffic channels set up with the SAPs may be changed; the signaling channel with the macrocell remains connected. Since the signaling link is free of frequent handover, the signaling load for mobility management can be alleviated and call drops of real-time services avoided. Thanks to orthogonal frequency-division multiple access (OFDMA), if orthogonal time-frequency resource blocks are allocated to the macro-BTS and the SAPs, respectively, the macro and the SAPs can share the same spectrum almost without interference. Moreover, this two-tiered structure has another advantage: the macro-BTS can provide wireless backhaul for the small cells. If in-band relaying is employed, the SAP actually acts as a relay node. Since the antenna of the macro-BTS is mounted high enough, there may be a line-of-sight (LOS) path between the macro and an SAP, which makes the use of point-to-point microwave links possible. Together with smart traffic distribution and local content caching, this structure will greatly reduce the cost of backhauling.
SMART TRAFFIC DISTRIBUTION AND LOCAL CONTENT CACHING

Note that mobile data traffic is differentiable. A lot of traffic is low value-added data for Internet services that occupies most mobile network resources but treats the network merely as a pipeline, while only high value-added data such as IP Multimedia Subsystem (IMS) services, VoIP, mobile gaming, and mobile TV/music requires QoS guarantees and core network services. In addition, most areas where massive mobile data traffic occurs also have convenient Internet access via a fixed access network. Usually the backhaul link connecting RANs and mobile core networks is more expensive than the links offered by metropolitan area networks, because the backhaul link conveys not only user traffic but also control signaling, so it must be low-latency and highly secure, and may even have to provide special functions such as timing and frequency references. Additionally, video services and web browsing account for a big portion of mobile data traffic; the contents of such services can be prefetched or selectively stored close to the access point so that they can be accessed without being redirected to the application servers through the core network. Hence, we can integrate a deep packet inspection (DPI) function in the base station, so that smart traffic distribution and local content caching can be implemented at the RAN side. By offloading the low value-added traffic to fixed access networks, the bandwidth for backhauling can be effectively reduced: only the small part of the data traffic that relies on QoS guarantees and core network services, plus the control signaling, passes through the backhaul link to the core network. Moreover, contents including web objects, downloadable objects (media files, software, and documents), real-time media streams, and so on are fetched from local caches, which further alleviates the backhaul load. In fact, smart traffic distribution and local content caching also have the benefit of improving QoS and user experience due to reduced delay.
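A minimal routing policy reflecting the traffic distribution idea above might look like the following; all flow labels and the cache contents are invented, and a real DPI-based classifier would of course be far richer.

```python
# Toy routing policy: high value-added flows go through the mobile core, bulk
# Internet flows are offloaded to the fixed access network, and cacheable
# content is served from a local cache at the RAN. Labels are illustrative.
HIGH_VALUE = {"voip", "ims", "mobile_tv", "gaming"}
CACHEABLE = {"video", "web", "software_download"}

def route(flow_type: str, cache: set) -> str:
    if flow_type in HIGH_VALUE:
        return "backhaul_to_core"          # needs QoS and core network services
    if flow_type in CACHEABLE and flow_type in cache:
        return "serve_from_local_cache"    # no backhaul needed at all
    return "offload_to_fixed_access"       # low value-added bulk traffic

cache = {"video", "web"}
for f in ("voip", "video", "web", "file_sync"):
    print(f, "->", route(f, cache))
```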
INTERFERENCE MITIGATION TECHNIQUES IN DENSE SMALL CELL NETWORKS

In outdoor environments, interference management and effective transmission schemes are essential for enhancing capacity, and they should be adapted to the characteristics of urban channels and deployment environments. The desired policies and algorithms for urban outdoor wireless networks, which differ from those for indoor and macrocellular environments, should consider the following aspects.

Usage of a complex propagation environment: In contrast to a macro base station, whose antenna is tower-mounted, the antenna height of a pico base station is usually 5–10 m. In urban regions such an antenna height implies numerous complicated scatterers and shadows in the propagation paths, so the coverage area of each picocell becomes very irregular, and the picocells overlap each other. This results in a complex propagation environment but also brings rich spatial degrees of freedom, which implies potential capacity that can be exploited.

Necessity of global optimization: Due to the complex propagation environment explained above, the system model actually becomes a partial interference channel for most users. Thus, interference management cannot be handled in each cell separately, but should be considered from the viewpoint of global optimization.

Interference characteristics: In a dense small cell network, the separation between outer cell and inner cell is not clear due to the small cell radius. Moreover, intercell interference is generated universally, to and from not only one-tier neighboring cells but also two- or three-tier cells.

In this section we present an advanced approach of dynamic spatial selection adapted to the urban wireless environment. We then introduce various cooperative transmission approaches that can overcome the limitations of LTE-A technologies in urban small cell deployment.
DYNAMIC SPATIAL PATH/BEAM SELECTION IN A DENSE SMALL CELL NETWORK

A simple scheme is to avoid interference through dynamic spatial path selection. As illustrated in Fig. 5, UE1 will select path 1 because the interference signal from SAP1 to UE2 is weak enough to be ignored. Similarly, UE2 selects path 2 under the same principle. This is actually an altruistic algorithm and is only feasible when there are many candidate paths to select from. In order to apply this algorithm, each UE simply reports its worst path (i.e., the one from which it hears almost nothing). In fact, physical-layer-based interference processing, which depends heavily on accurate estimation of real-time channel parameters varying every frame, usually needs interlinks among SAPs. However, MAC layer processing relies only on long-term channel averages, and thus the intersite links for cooperation are not necessary. Since beamforming is an efficient way to enrich the spatial degrees of freedom, coordinated beamforming, already adopted in LTE-A [1], can be used together with the selection of spatial paths (i.e., joint spatial path and beam selection) for interference mitigation.
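The altruistic path selection rule can be demonstrated with a toy two-UE, two-SAP example matching the scenario of Fig. 5; the gain values are made up, and the pairing logic shown only works for this two-user illustration.

```python
import numpy as np

# Toy altruistic path selection: each UE reports the SAP it hears worst; a SAP
# reported as "worst" by the other UE is a safe serving path for the remaining
# UE. Gains below are made up (rows: UEs, columns: SAPs).
gain = np.array([
    [0.9, 0.3],    # UE1 hears SAP1 strongly, SAP2 moderately
    [0.05, 0.8],   # UE2 hears almost nothing from SAP1
])
worst = gain.argmin(axis=1)          # per-UE "worst path" report: [1, 0]
# serve each UE from the SAP that the *other* UE reported as its worst path
assignment = {ue: int(worst[1 - ue]) for ue in range(2)}
print(assignment)                    # {0: 0, 1: 1}: UE1 <- SAP1, UE2 <- SAP2
```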
Figure 5. Illustration of dynamic spatial path selection.

COOPERATIVE TRANSMISSION IN DENSE SMALL CELL NETWORKS

Small cell size brings more spatial degrees of freedom, which facilitates cooperative transmission among multiple network nodes. When the destination is far from the source node, cooperative transmission provides a capacity benefit through the relaying function of other nodes, so the information flow is transferred via them instead of over the direct air link. One novel technique for cooperation in LTE-Advanced is CoMP, which directly improves cell-edge user throughput by avoiding or eliminating intercell interference; however, due to its complexity it is not a cost-efficient approach to
combat interference in a dense small cell network. Cooperative techniques have the following advanced approaches for enhancing capacity and reducing cost overhead in urban wireless networks. Cooperative Relaying — Although relaying also consumes radio resources, it can still improve system capacity due to interference localization. Based on cooperation among multiple relay nodes or relay nodes and the source node, cooperative relaying can further improve system throughput and reliability due to exploiting spatial degrees of freedom. Furthermore, relay also helps reduce terminal power consumption, which is significant to green radio. Mobile Terminal Cooperation — At the same time, the number of subscriptions is increasing dramatically with the rapid development of mobile computing and the advent of M2M service. As the mobile terminal becomes more powerful, sophisticated cooperative communication can be carried out by the terminals. The mobile terminal either acts as a relay node just like a relay station or establishes a direct air link with other terminals to enable device-to-device (D2D) transmission. Thus, we can achieve performance improvement in terms of throughput and reliability without increased infrastructure cost. Despite the fact that the capacity gain of mobile cooperation diminishes fast with increased distance, mobile cooperation still plays a big role in enabling M2M services in future mobile networks since sensors must have a very low transmit power limit, but handsets or cellular modems are distributed everywhere and close to these sensors.
Other Approaches — MAC layer scheduling, including distributed frequency reuse (DFR) and distributed power control, is also essential for cooperative radio resource management in the distributed architecture. In a small cell network there is no distinct partition of cell center and cell edge users, which means that fractional frequency reuse (FFR), successful in macrocell networks, may not be appropriate for the dense small cell network. Hence, radio resources including time, frequency, beam, power, and spatial path should not be scheduled independently by each cell but preferably in a cooperative way.
PHYSICAL LAYER ENHANCEMENT

Various advanced physical layer techniques have been introduced in LTE-A systems to enhance the physical layer data rate in macrocell scenarios, such as carrier aggregation and enhanced MIMO. Carrier aggregation supports both contiguous and non-contiguous spectrum and asymmetric bandwidth for frequency-division duplex (FDD), with a maximum of 100 MHz aggregated spectrum in LTE-A systems. However, the efficiency of bandwidth aggregation is limited by the available spectrum and the maximum site power constraint. On the other hand, with a maximum of eight antennas, enhanced MIMO in LTE-A systems supports up to eight-stream spatial multiplexing in the downlink (and up to four streams in the uplink). However, in the small cell network, the antenna height of a pico base station is usually 5–10 m, and there is very limited space for antenna mounting, which implies that the number of antennas per pico base station is difficult to increase. In addition, there is a diminishing return on spatial multiplexing gain in high-order MIMO when the overhead of pilot preambles and the associated signaling is taken into account. As a result, we cannot merely rely on the existing techniques in LTE-A; careful design of physical layer techniques that exploit the unique characteristics of a dense small cell network is needed to meet the aggressive area capacity targets of future networks. In addition to MIMO and carrier aggregation, the following physical layer enhancement techniques can be effective in boosting the spectral efficiency of dense small cell networks.

High-order modulation: Exploiting the interference mitigation techniques and the small propagation loss in a small cell network, mobile users may be able to operate at very high signal-to-interference-plus-noise ratio (SINR); hence, we could exploit very high-order modulation (128-quadrature amplitude modulation [QAM] and 256-QAM) to achieve high peak rate and average throughput.

Filter-bank-based multicarrier: In addition, filter-bank-based multicarrier (FBMC) [7] may be a better choice of multiple access technique in dense small cell networks than the OFDMA or SC-FDMA of LTE. The main advantages of FBMC include very low out-of-band frequency leakage and cyclic prefix (CP)-free operation. Note that in the user plane/control plane separated hierarchical cell structure, the macro- and small cells share the same spectrum; the delay spread of signals from a small cell is usually small, but that from the macro is relatively large. Clearly, the CP length should cover the delay spread of signals from the macrocell, which is a redundancy for the small cell and thus causes a loss in spectral efficiency. Adopting FBMC can overcome this problem at the cost of somewhat increased processing complexity.
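For a rough sense of the CP overhead at stake, the snippet below uses LTE OFDM numerology as a reference point (66.7 μs useful symbol, ~4.7 μs normal CP, ~16.7 μs extended CP); these are reference values we assume, not figures taken from this article.

```python
# Rough CP overhead comparison. In the shared-spectrum structure above, the CP
# would have to cover the macrocell delay spread, so the small cell pays the
# larger overhead even though it does not need it.
useful_us = 66.7
for name, cp_us in (("normal CP", 4.7), ("extended CP", 16.7)):
    overhead = cp_us / (useful_us + cp_us)
    print(f"{name}: {overhead:.1%} of airtime spent on CP")   # ~6.6% and ~20%
```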
CONCLUSIONS

Based on the above discussions, it is seen that future radio access networks should offer very high average areal throughput to support a vast amount of mobile data traffic and a high user density, and should be energy-efficient. Although wireless technologies seem to be approaching fundamental limits, we can still move forward with advanced radio access network architectures and novel techniques for small cell networks.
REFERENCES
[1] T. Abe, "3GPP Self-Evaluation Methodology and Results — Assumptions," 3GPP TSG-RAN1 tech. rep., NTT DoCoMo, 2009.
[2] R. Irmer and S. Chia, "Signal Processing Challenges for Future Wireless Communications," Proc. ICASSP, 2009, pp. 3625–28.
[3] Mobile VCE, "CORE-5 Research Area: Green Radio"; http://www.mobilevce.com/infosheets/GreenRadio.pdf.
[4] T. Nakamura, "Proposal for Candidate Radio Interface Technologies for IMT-Advanced Based on LTE Release 10 and Beyond (LTE-Advanced)," ITU-R WP 5D 3rd Wksp. IMT-Advanced, Oct. 15, 2009.
[5] 3GPP TR 32.821, "Study of Self-Organizing Networks (SON) Related Operations, Administration and Maintenance (OAM) for Home Node B (HNB)," v. 9.0.0, 3GPP TSG-RAN, June 2009.
[6] China Mobile Research Institute, "C-RAN — The Road Towards Green RAN," White Paper, v. 1.0.0, Apr. 2010.
[7] T. Ihalainen et al., "Filter Bank Based Multi-Mode Multiple Access Scheme for Wireless Uplink," EUSIPCO '09, Aug. 2009, pp. 1354–58.
BIOGRAPHIES

SHENG LIU ([email protected]) received B.S., M.S., and Ph.D. degrees in electrical engineering from the University of Electronic Science and Technology of China (UESTC) in 1992, 1995, and 1998, respectively. From 2001 to 2005 he served as a senior architect in the UTStarcom Shenzhen R&D Center, where he worked on RAN architecture and RRM algorithms. Since 2005 he has been with Huawei Technologies Corporation, where he has led several standards research projects including HSPA+, UMB, and 802.16m. Currently he is the system architect of the NG-Wireless Program in Huawei. His research interests include MIMO, interference alignment, cloud RAN, heterogeneous networks, and cooperative communications. He is the inventor of 16 U.S. and 50+ China granted patents or patent applications.

JIANJUN WU ([email protected]) graduated from Southwest Jiaotong University in April 2001. He joined Huawei as a wireless engineer in 2001. From 2001 to 2003 he was engaged in the development of the NodeB for WCDMA systems, and during this time he also designed and developed smart antennas for WCDMA and CDMA2000 systems. From 2003 to 2004 he worked on the Huawei B3G project as a system engineer, responsible for system design. Since July 2005 he has led Huawei's WiMAX research project, and was responsible for standards research within the IEEE and the WiMAX Forum. Since May 2007 he has been responsible for system and architecture evolution research.
CHUNG HA KOH (
[email protected]) received her B.S., M.S., and Ph.D. degrees from the Department of Electrical and Electronic Engineering of Yonsei Univerisy, Seoul, Korea, in 2004, 2006, and 2010, respectively. Since May 2010 she has been with Hong Kong University of Science and Technology (HKUST) as a research associate. Her current research interests are resource allocation, self-organizing networks, cross layer optimization, delay optimal systems, and femtocell protocols. VINCENT K. N. LAU (
[email protected]) received his B.Eng. (Distinction, first class honors, ranked 2nd) from the Department of Electrical and Electronics Engineering, University of Hong Kong, in 1992. After graduation he joined Hong Kong Telecom (PCCW) for three years as a system engineer, responsible for transmission system design. He obtained the Sir Edward Youde Memorial Fellowship, a Rotaract Scholarship, and a Croucher Foundation award in 1995, and went to the University of Cambridge for a Ph.D. in mobile communications. He completed his Ph.D. degree in two years and joined Bell Labs, Lucent Technologies, New Jersey, in 1997 as a member of technical staff. He has worked on various advanced wireless technologies such as IS-95, 3G1X, and UMTS, as well as wideband CDMA base station ASIC design and post-3G technologies such as MIMO and HSDPA. He joined the Department of ECE, HKUST, as an associate professor in August 2004 and was promoted to professor in July 2010. He has also been a technology advisor and consultant for a number of companies such as ZTE, Huawei, and ASTRI, leading several R&D projects on B3G, WiMAX, and cognitive radio. He is the founder and co-director of the Huawei–HKUST Innovation Lab.
GUEST EDITORIAL
SYNCHRONIZATION OVER ETHERNET AND IP IN NEXT-GENERATION NETWORKS
Stefano Bregni and Ravi Subrahmanyan
Network synchronization deals with the distribution of time and frequency over a network of clocks, including clocks spread over a wide area. The goal is to align the time and frequency scales of all the clocks by using the communications capacity of links between nodes. A synchronization network is the facility that implements network synchronization. The basic elements of a synchronization network are nodes (autonomous and slave clocks) and communication links interconnecting them. Since the 1970s and '80s, most telecommunications operators have set up synchronization networks to synchronize their switching and transmission equipment. Over this time, network synchronization has been gaining increasing importance in telecommunications. As a matter of fact, the quality of many services offered by network operators to their customers depends on network synchronization performance. Since the introduction of early digital switching systems, network synchronization has been needed to avoid slips in circuit-switched voice and data networks. The deployment of synchronous digital hierarchy/synchronous optical network (SDH/SONET) networks imposed new and more complex requirements on the quality of synchronization systems. To study those new problems, international standards bodies established specific work groups, which culminated in the '90s with the release of a new series of International Telecommunication Union — Telecommunication Standardization Sector (ITU-T) Recommendations on synchronization of digital networks (G.810, G.811, G.812, and G.813), as well as their counterparts released by the Alliance for Telecommunications Industry Solutions (ATIS) and Telcordia (e.g., GR-1244) in the United States and by the European Telecommunications Standards Institute (ETSI) in Europe. More recently, it has been recognized that the importance of network synchronization goes even further: asynchronous transfer mode (ATM) and cellular mobile telephone networks (Global System for Mobile Communications [GSM], General Packet
Radio Services [GPRS], Universal Mobile Telecommunications Services [UMTS]) are two striking examples where the availability of network synchronization references has been proven to significantly affect the quality of service. Traditionally, synchronization has been distributed to telecommunications network nodes using circuit-switched links in time-division multiplexing (TDM). In particular, E1 and DS1 circuits have been most commonly used over European and North American standard plesiochronous digital hierarchy (PDH) systems, respectively. The recent migration of network operators to the packet-switched next-generation network (NGN) once again poses newer and even more difficult problems for network synchronization. Today, as fixed and mobile operators migrate to NGN infrastructures based on IP packet switching, Ethernet transport is becoming increasingly common. This trend is driven by the prospect of lower operation costs and the convergence of fixed and mobile services. However, migrating trunk lines to IP/Ethernet transport poses significant technical challenges, especially for circuit emulation and synchronization of network elements. Therefore, the network evolution toward IP packet switching has led to increased interest on the part of communications engineers in synchronization distribution using packet-based methods. After a few years of declining research, considerable new investigation activity on network synchronization has restarted in both industry and academia. International standard bodies have also resumed significant levels of activity on this subject. Since 2004 the ITU-T has been developing a new set of Recommendations, specifically for synchronization on packet-switched networks, beginning with ITU-T Recommendation G.8261/Y.1361, “Timing and Synchronization Aspects in Packet Networks.” In 2002 IEEE released a new “Standard for a Precision Clock Synchronization Protocol for Networked Measurement and Control Systems” (IEEE 1588, revised in 2008).
GUEST EDITORIAL At this point, it is worth pointing out that the traditional model, in which synchronization distribution is engineered carefully for optimal performance and survivability, may give way to scenarios in which there is greater expectation of automatic, self-configured operation while still maintaining adequate synchronization quality. This model is similar to that of Ethernet “plug and play,” in which Ethernet equipment may be connected into a network without significant, or any, a priori configuration, yet expected to come up and work satisfactorily. As NGN synchronization is transported increasingly via packet networks, there are indications that such an expectation will arise for synchronization as well. This consideration widens the scope of interest in synchronization beyond specialists, reaching the wider audience of telecommunications engineers in general. An example is the distribution of synchronization to next-generation wireless base stations, which are connected to the core network only via packet-switched networks, but still require highly accurate synchronization to meet standard quality of service expectations. Toward this end, this special issue aims to introduce readers to some notable changes in network synchronization technology, which have recently arisen from the evolution to NGN. Two articles (Ferrant et al. and Garner et al.) review the current state of standardization activity in the ITU-T and IEEE. Two articles (Cosart and Shenoi) deal with aspects of characterization and measurement of synchronization performance in packet networks. Finally, one article (Ouellette et al.) describes the use of IEEE 1588 for time synchronization. In further detail, the article by J.-L. Ferrant and S. Ruffini summarizes the work done by ITU-T Q13/15 over the last six years to standardize the transport of timing over packet networks, including a summary of the relevant documents published by the ITU-T. It also provides insight into the future work in ITU-T Q13/15 on the transport of timing in packet networks. The article by G. M. Garner and H. Ryu presents the Audio/Video Bridging (AVB) project in the IEEE 802.1 working group, focused on the transport of time-sensitive traffic over IEEE 802 bridged networks. The IEEE 802.1AS is the AVB standard that will specify requirements to allow for transport of precise timing and synchronization in AVB networks. This article provides a tutorial on IEEE 802.1AS and also new simulation results for timing performance. The article by L. Cosart describes techniques for performance data measurement and analysis of NGN packet network synchronization. It introduces some of the new metrics that are used for performance evaluation of packet timing. The article by K. Shenoi presents metrics and analytical methods suitable for specifying timing requirements in NGN packet networks. It provides a brief overview of timing fundamentals, followed by an explanation of how packet-based methods transfer timing. Two groups of metrics, the TDEV and MTIE families, are discussed. Finally, the article by M. Ouellette, K. Ji, S. Liu, and H. Li describes the use of IEEE 1588 and boundary clocks for time distribution in telecommunications networks. This
technology is primarily used to serve the radio interface synchronization requirements of mobile systems such as WiMAX and LTE, and to avoid the dependence on GPS systems deployed in base stations. It also presents some preliminary field trial results, which indicate that it is possible to transfer accurate phase/time across a telecom network for meeting the requirements of mobile systems. This issue does not include an update on the topic of synchronous Ethernet. Recent activity in the standards bodies has recognized that no single method is likely to achieve acceptable results for both time and frequency distribution, and a combination of methods will be required. Research activity is now focused on using synchronous Ethernet to transfer frequency, and then using a different protocol as an overlay to distribute time. (IEEE 1588 is one example of a protocol that may be used in combination with synchronous Ethernet, although this has yet to be proven to work well enough). Ferrant et al. (IEEE Communications Magazine, 2008) recently provided a review of synchronous Ethernet. Another article will be published in a forthcoming issue of IEEE Communications Magazine providing a further update on this topic, including aspects of how synchronous Ethernet may be used together with other protocols for time distribution.
BIOGRAPHIES STEFANO BREGNI [M’93, SM’99] (
[email protected]) is an associate professor at Politecnico di Milano, where he teaches telecommunications networks and transmission networks. In 1990 he graduated in telecommunications engineering at Politecnico di Milano. Beginning in 1991, he worked on SDH and network synchronization issues, with special regard to clock stability measurement, first with SIRTI S.p.A (1991–1993) and then with CEFRIEL (1994–1999). In 1999 he joined Politecnico di Milano as a tenured assistant professor. Since 2004 he has been a Distinguished Lecturer of the IEEE Communications Society, where he holds or has held the following official positions: Member at Large on the Board of Governors (2010–2012), Director of Education (2008–2011), Chair of the Transmission, Access and Optical Systems (TAOS) Technical Committee (2008–2010; Vice-Chair 2002–2003, 2006–2007; Secretary 2004–2005) and Member at Large of the GLOBECOM/ICC Technical Content (GITC) Committee (2007–2010). He is or has been Technical Program Vice-Chair of IEEE GLOBECOM 2012, Symposia Chair of GLOBECOM 2009, and Symposium Chair for eight other ICC and GLOBECOM conferences. He is Editor of the IEEE ComSoc Global Communications Newsletter and Associate Editor of IEEE Communications Surveys and Tutorials. He was tutorial lecturer for four IEEE ICC and GLOBECOM conferences. He has served on ETSI and ITU-T committees on digital network synchronization. He is an author of about 80 papers, mostly in IEEE conferences and journals, and of the book Synchronization of Digital Telecommunications Networks (Wiley, 2002). His current research interests focus mainly on traffic modeling and optical networks. RAVI SUBRAHMANYAN [M’89, SM’97] (
[email protected]) is a senior design engineer with Immedia Semiconductor, Andover, Massachusetts, where he is involved in the architecture and design of high-performance video products focusing on next-generation multimedia delivery. He was previously a systems & applications engineering manager at National Semiconductor Corp. He was also with AMCC in Andover, where he was involved in the design of communications ICs and multicore PowerPC CPUs. He has participated in ITU-T and ATIS on topics related to timing and synchronization in communications networks, most recently working on synchronization over packet networks. His interests are in high speed and custom design (currently as applied to communications integrated circuits and video codecs), signal processing, timing recovery, and communications network architectures. He received M.S. and Ph.D. degrees in electrical engineering from Duke University, and a B.Tech. degree (also in electrical engineering) from the Indian Institute of Technology, Bombay. He has 50 publications including conference presentations and papers in refereed journals. He also has 20 issued or pending patents. He has served on various conference committees, and has been involved with ComSoc’s TAOS TC since 2008, where he currently serves as Vice-Chair. He is also active on the organizing committee of the Workshop on Synchronization in Telecommunications Networks (WSTN).
SYNCHRONIZATION OVER ETHERNET AND IP NETWORKS
Evolution of the Standards for Packet Network Synchronization Jean-Loup Ferrant, Calnex Solutions Stefano Ruffini, Ericsson
ABSTRACT This article summarizes the work done by ITU-T Q13/15 over the last six years to standardize the transport of timing over packet networks. It provides a summary of the published documents in this area from ITU-T while providing some of the background that went into each document including the specification of synchronous Ethernet and IEEE 1588 telecom profiles. Finally, it provides insight into the future work on the transport of timing in packet networks in ITU-T Q13/15.
INTRODUCTION In 2001, all time-division multiplexing (TDM) hierarchies — plesiochronous digital hierarchy (PDH), synchronous digital hierarchy (SDH), and optical transport networks (OTNs) — were fully standardized. For the synchronization aspect, the jitter and wander of PDH interfaces were specified in G.823 and G.824; the reference clock of digital networks was specified in G.811; and G.810 provided the definitions related to synchronization in TDM networks. SDH was defined as the new synchronization network, with the definition of slave clocks in G.812 (synchronization supply unit [SSU]) and in G.813 (SDH equipment clock [SEC]). The jitter and wander of STM-N interfaces was specified in G.825, and the SDH synchronization layer was specified in G.781. The last TDM hierarchy, OTN, has been designed as an asynchronous network, and the reference timing signal should be carried by the SDH clients. The only requirement assigned to OTN was that STM-N signals should be transported by OTN without degradation of their timing characteristics; at this time almost nobody thought that the transport of synchronization by signals other than STM-N could raise any issues in the future. All these documents specified the transport of a network reference frequency through TDM networks; the transport of time was not specified. The existence of Network Time Protocol (NTP) was considered good enough to transport time with an
accuracy of 0.5 s for time stamping of events; there was no need to distribute a more accurate time reference over the network. The main requirement for client synchronization at that time came from mobile networks using frequency-division duplex (FDD) mode; the transport of a reference frequency with an accuracy of 50 parts per billion (ppb) could be done via SDH synchronization networks and traditional 2048/1544 kb/s interfaces. The evolution of transport and access networks emphasizing data led to the definition of new synchronization requirements. In order to reduce the cost of mobile backhauling networks, the transport network between the mobile switches and the base stations could be based on Ethernet and IP links. The first approach in the migration from TDM to packet networks was based on timing recovery carried by circuit emulation services (CES). The use of an adaptive clock recovery technique in the CES timing recovery process, however, raised some concerns about the quality of the recovered timing, especially in the case of highly loaded packet networks. In order to increase the quality of the timing carried in packet networks, some operators then pushed within the International Telecommunication Union — Telecommunication Standardization Sector (ITU-T) for the transport of a frequency reference over the physical layer of Ethernet links. ITU-T SG15 Question 13 developed this concept and defined synchronous Ethernet, based on previous work to define the transport of timing through SDH networks. Some mobile network techniques required the transport of accurate time through networks; therefore, in parallel with the work in ITU-T, the IEEE worked on a new version of its Precision Time Protocol (PTP), IEEE 1588-2008, in order to transport time information with high accuracy through wide area networks (WANs). ITU-T decided to use this time protocol to transport accurate time and frequency over telecom networks, and Q13 has defined a series of Recommendations addressing the transport of frequency and time as summarized in Fig. 1.
[Figure 1. Current structure of ITU-T Q13/15 Recommendations for synchronization in packet networks.]
SYNCHRONIZATION ASPECTS IN PACKET NETWORKS: ITU-T G.8261, G.8262, G.8264 Starting in 2004, the transport of TDM through CES was the first work item addressed by ITU on the synchronization aspects in packet networks. At that time, the focus was on CES and its ability to properly interwork with legacy TDM networks. Moreover, the CES requirements include synchronization since many applications, including base stations, need a reference frequency via their traffic interface. The synchronization reference is required in order to generate a radio signal at the output of the base station with both long-term frequency accuracy and short-term stability (measured over a very short period, in the sub-millisecond range) within 50 ppb. The work on the CES requirements was completed with the approval of the first revision of G.8261, which contained the following items related to CES: • A network model, combining a TDM network and CES, and allowing the definition of the wander budget that can be allocated to the CES part in different network scenario cases. • A guideline (included in Appendix VI, G.8261) that could provide assistance in the acquisition of performance characteristics. In particular, a testing method was defined to evaluate the performance of CES systems in the presence of different types of network loads generating packet delay vari-
ation (PDV). Note: It was not possible to directly specify PDV patterns representing the stress in real networks due to the lack of network measurements at that time. This guideline is only informative and is not a normative part of G.8261. • A model of CES equipment including packetizer and depacketizer interworking functions. These functions are applicable to the different modes of CES: network-synchronous solutions, and differential and adaptive methods. The most common clock recovery mode used with CES is the adaptive mode (also known as adaptive clock recovery, ACR) as normally a reference network clock is not available at the edges of the packet network. Depending on the clock characteristics of the base station, the performance of the CES might not be good enough to ensure the delivery of 50 ppb accuracy in any network load condition; therefore, operators asked ITU to work on a more robust solution. To address the problems with adaptive methods, ITU-T proposed to transport a reference frequency within the physical layer of Ethernet signals, and it was immediately agreed on. This approach was identified with the new term synchronous Ethernet. ITU decided to provide full interworking between SDH and synchronous Ethernet networks, defining hybrid equipment with both types of interfaces at the boundary between these networks. At the network level, interoperability means that the G.803 synchronization reference chain could be done with SDH network entities (NEs) or synchronous Ethernet equip-
ment, and the restoration of the synchronization network operates in the same way, independent of the type of equipment. The network architecture for synchronous Ethernet was defined in Annex A of G.8261.
[Figure 2. Measured packet delay (copy of the left-hand side of Figure I.2/G.8260).]
The compatibility of SDH NEs and synchronous Ethernet equipment in the synchronization reference chain led to the conclusion that both types of equipment must be built with a clock with identical main parameters (e.g., noise generation, noise transfer, noise tolerance, phase discontinuity, and transient responses). This enabled the approval of synchronous Ethernet in a reasonable period of time, since there was no need to perform all the network simulations that were done to validate SDH. But this also led to the decision to specify two options for the synchronous equipment clock in G.8262, in the same way as had been done in G.813 for SDH, although a convergent solution would have been preferable. The identical behavior of SDH NEs and synchronous Ethernet equipment in case of synchronization network protection led to the following consequences:
• The holdover characteristics of G.8262 and G.813 are identical.
• The restoration principles of synchronous Ethernet networks are identical to those defined for SDH: the protection of a chain of 20 G.8262 clocks must not generate more than 1 μs of phase error, and the use of the synchronization status message (SSM) is mandatory to propagate the quality level (QL) information between directly linked equipment.
In this respect a fundamental characteristic of the restoration via SSM of the physical-layer-based synchronization is that the timing information is propagated equipment by equipment, and the SSM is never passed transparently through these network elements. While it was relatively simple to reuse the SDH clock specification (G.813) to specify the synchronous Ethernet clock specification (G.8262), this was not the case for the transport of the SSM. In SDH, the SSM code is regularly transmit-
ted in a dedicated slot of each frame (125 μs); this periodicity was short enough to provide a fast restoration mechanism and to protect against bit errors on the SSM message. For Ethernet, a new transmission of the SSM over packets had to be created. The SSM propagation method requires that the SSM generated by one NE of the chain be sent to the next NE of the chain, and only to that one; the SSM must never be transparently passed through an NE. According to G.813 and G.781, the round-trip propagation of a message through a chain of 20 NEs must not exceed 15 s; otherwise, a better holdover characteristic would be required for G.8262. In addition, the restoration must not be blocked or delayed by lost packets. In order to fulfill all these requirements, it was decided to define in G.8264 an Ethernet synchronization messaging channel (ESMC) able to carry two types of messages: a heartbeat message sent periodically, typically every 1 s, carrying the QL value, and an event message sent once whenever the QL changes. This QL value is coded in the SSM transported by the ESMC. These messages are transported via an IEEE 802.3 Organization Specific Slow Protocol (OSSP) provided by IEEE to ITU. Another emerging aspect that was addressed during the initial development of the packet timing standards is the definition of a generalized packet timing distribution where the timing could be carried by specific protocols such as NTP [1] or PTP [2, IEEE 1588]. Also in this case the actual clock recovery technique is based on adaptive methods. The use of these techniques, similar to the case of CES with adaptive clock recovery, required a specific investigation of the performance aspects, as described in the following section. In addition, the use of IEEE 1588 packets required the development of a specific telecom profile.
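To make the ESMC mechanism just described more concrete, the following minimal Python sketch models a per-port sender that transmits a heartbeat message carrying the current QL roughly once per second and an event message immediately when the QL changes. It is only an illustration of the logic, not the normative G.8264 PDU encoding or OSSP framing; the class and field names (EsmcSender, EsmcPdu, and the QL labels) are invented for this example.

```python
import time
from dataclasses import dataclass

# Hypothetical QL labels; the real values are SSM code points defined in G.781/G.8264.
QL_PRC, QL_SSU_A, QL_SEC, QL_DNU = "PRC", "SSU-A", "SEC", "DNU"

@dataclass
class EsmcPdu:
    event: bool   # True = event PDU (sent on QL change), False = heartbeat PDU
    ql: str       # quality level carried in the SSM TLV

class EsmcSender:
    """Per-port ESMC transmit logic: ~1/s heartbeat plus an event PDU on QL change."""
    HEARTBEAT_PERIOD = 1.0  # seconds, matching the typical 1 s heartbeat rate in the text

    def __init__(self, send_pdu):
        self.send_pdu = send_pdu          # callback that puts the PDU on the wire
        self.current_ql = QL_DNU
        self.last_heartbeat = 0.0

    def set_ql(self, ql):
        """Called when the selected synchronization source (and hence the QL) changes."""
        if ql != self.current_ql:
            self.current_ql = ql
            self.send_pdu(EsmcPdu(event=True, ql=ql))   # immediate event PDU

    def tick(self, now=None):
        """Called periodically; re-advertises the QL as a heartbeat once per second."""
        now = time.monotonic() if now is None else now
        if now - self.last_heartbeat >= self.HEARTBEAT_PERIOD:
            self.send_pdu(EsmcPdu(event=False, ql=self.current_ql))
            self.last_heartbeat = now

# Example: a port initially advertising "do not use", then upgraded to PRC quality.
if __name__ == "__main__":
    sender = EsmcSender(send_pdu=lambda pdu: print("TX", pdu))
    sender.tick()          # heartbeat with QL = DNU
    sender.set_ql(QL_PRC)  # event PDU announcing the new QL
```

Note that, consistent with the hop-by-hop behavior described above, such a sender would run independently on every synchronous Ethernet port; the QL is regenerated at each NE and never forwarded transparently.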
PACKET TIMING PERFORMANCE ASPECTS: ITU-T G.8260, G.8261.1, G.8263
The distribution of timing via packets has raised some new concerns due to the strong relationship between the performance that can be achieved by these methods (in particular, if based on adaptive methods) and the packet network characteristics. In these cases the key parameter is the PDV introduced by the network. These aspects were first discussed in ITU-T when the adaptive clock recovery method was introduced several years ago. The adaptive clock recovery method supports synchronization of CES (e.g., to recover the clock of constant bit rate [CBR] services carried over asynchronous transfer mode [ATM] adaptation layer 1 [AAL1] in ATM networks [3]). In addition, the European Telecommunications Standards Institute (ETSI) published a related document, "TR 101 685, Timing and Synchronization Aspects of Asynchronous Transfer Mode (ATM) Networks" [4]. This report provided
some initial hints on the performance characteristics of clock recovery when using adaptive methods. Further work was required in order to better understand the behavior of real packet networks and the effect on clock recovery. After these initial studies, significant improvements were made in clock recovery technology, based on a better understanding of packet network behavior. The characterization of packet networks via PDV statistical distributions gave the group some understanding of the impairment that could be present. However, the most significant progress in these studies was made only when data from real network were made available. Figures 2 and 3 provide an example of the data presented during the last Study Period in Q13/15 meetings and recently included in the recommendations as an example of PDV measured in a packet network [5, G.8260]. In particular, Fig. 2 provides an example of the packet delay variation measured over several hours and Fig. 3 provides the corresponding histogram. One important aspect that was addressed during the related studies is the introduction of the concept of a network clock carried by packetbased methods (e.g., using NTP or PTP packets). It was recognized that from a performance point of view this case is analogous to the CES clock recovery using adaptive method (i.e., ACR). Indeed, the basic principle in the packetbased clock is to compare the time of arrival of a packet as calculated by the local clock (slave clock) with the expected arrival time of the packet generated by a master. The method in this case is to use time stamps to transmit the times of departure and arrival. The comparison of local time of arrival with the content of the time stamp as generated by the master corresponds to a measurement offset in a one-way packet transfer and is analogous to the phase error measurements obtained in CES adaptive clock recovery methods where the expected arrival time is defined by the periodicity of the packets. The current status of the discussions is twofold: • Characterization and modeling of the packet clock (packet-based equipment clock, PEC), which is planned to be included in the G.8263 draft • The definition of PDV metrics and related network limits (G.8260 and G.8261.1, respectively) The work on the characterization of a packet clock is quite complex due to the fact that multiple implementations are indeed possible. A logical model for the packet clock was finally agreed on and included in the G.8263 draft. Some other important aspects will need to be further discussed before G.8263 is finalized: bandwidth, holdover, and PDV tolerance. In particular, the PDV tolerance and the metrics used to describe it are among the most controversial points in this discussion. This work is also known by the generic term PDV metrics. In particular, this term indicates a method to measure the significant characteristics
of the delay variation in the network and build the actual requirements in terms of PDV. In particular, the goal is to formulate packet-based stability quantities (metrics) that will provide a means of estimating the physical-based stability quantities for the packet clock output.
[Figure 3. PDV histogram (copy of the right-hand side of Figure I.2/G.8260).]
The initial results of the PDV metrics discussion are included in ITU-T G.8260, recently consented by SG15. The related network limits are planned to be included in G.8261.1. An initial draft has been prepared, but it may take some more time before it is completed.
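As a rough illustration of the packet-based timing principle and the PDV characterization discussed in this section, the sketch below turns master departure time stamps and slave arrival times into one-way offset observations, and then references each observed delay to the delay floor. This is a simplified, non-normative example; the function names and the floor-based estimate are assumptions made for illustration, not any metric defined in G.8260.

```python
def offset_samples(departure_ts, arrival_ts):
    """One-way offset observations: slave arrival time minus master time stamp.

    Each sample contains the (unknown) offset between the two clocks plus the
    one-way packet delay; PDV shows up as sample-to-sample variation.
    """
    return [t_arr - t_dep for t_dep, t_arr in zip(departure_ts, arrival_ts)]

def pdv_relative_to_floor(samples):
    """Delay variation of each packet relative to the fastest packet observed."""
    floor = min(samples)
    return [x - floor for x in samples]

def floor_based_offset_estimate(samples):
    """A crude estimate: assume the fastest packets saw (almost) no queueing, so the
    minimum observation approximates clock offset plus the minimum path delay."""
    return min(samples)

# Example with invented numbers (seconds): ~10 ms nominal path delay plus queueing.
departures = [0.000, 0.125, 0.250, 0.375]       # master time stamps
arrivals   = [0.0102, 0.1357, 0.2603, 0.3951]   # slave local-clock arrival times
obs = offset_samples(departures, arrivals)
print([round(v * 1e6, 1) for v in pdv_relative_to_floor(obs)], "us of PDV per packet")
print(round(floor_based_offset_estimate(obs) * 1e3, 3), "ms offset-plus-floor-delay estimate")
```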
THE FIRST IEEE 1588 TELECOM PROFILE: ITU-T G.8265, G.8265.1 The use of packet-based protocols to distribute a network clock was initially described in G.8261 (SG8261)). As described in this Recommendation (clause 7), different protocols (e.g., NTP, PTP) might be used to distribute a frequency synchronization reference signal end to end (i.e., without support from the network), leading to similar performances. Remember, the initial focus of the new protocols was to distribute frequency only in order to support the interworking of packet and TDM networks, and applications that require a synchronization reference (CES and wireless). The next step is to address the needs of applications also requiring accurate time synchronization references (e.g., in the sub-microsecond range). To meet these requirements ongoing standardization activities are focusing on PTP (IEEE 1588), Transparent Clocks and Boundary Clocks. Although the IEEE 1588 standard was finally released in 2008 [2], it was not sufficient to be used for the deployment of the related synchronization solutions in Telecom. One main aspect is that the IEEE 1588-2008 specification in reality provides a list of options (e.g., mapping over Ethernet or over IP/UDP, master selection process, use of unicast or multi-
cast mode, etc.), and each application has to define its own specific settings. The list of these details is called a "profile" by IEEE 1588. This means that ITU-T had to define a telecom profile (or profiles) in order to use IEEE 1588. A second major aspect is that compliance with IEEE 1588 does not imply any performance guarantee. The actual performance that can be achieved depends on aspects such as the network architecture, whether or not boundary clocks/transparent clocks are supported in the network, the quality of the clocks in the various types of equipment, and so on. In particular, when there is no timing support (e.g., the transport nodes do not support the boundary clock, which filters the timing it receives before propagating it), the slave clock must filter any PDV that is introduced by the network.
[Figure 4. Packet-based methods (copy of Figure 3/G.8261).]
The work on the IEEE 1588 telecom profile and related performance aspects was initiated in 2008 with a focus on frequency synchronization needs. The initial plan was to include the specification of the telecom profiles as a series of documents under the G.8264 umbrella. To improve the structure of the Recommendations, it was then agreed that the work on the IEEE 1588 profile should be addressed in Recommendations clearly separated from the documents dealing with synchronous Ethernet (G.8264 is mostly known as the "SyncE SSM Recommendation"). A second decision was also made to separate the general aspects (e.g., architecture) that might be applicable to different packet protocols (e.g., including NTP) from the actual telecom profile. This approach was also found useful in order to allow the definition of several (frequency synchronization) profiles in the future. The architecture Recommendation is now called G.8265 [5]. The profile(s) are specified by G.8265.1 [5]. A decision was taken to initially address the case of frequency synchronization distribution end-to-end without timing support from the network (see the general architecture in Fig. 5), considered the simplest one. According to this scenario a packet master clock distributes timing packets toward the connected slaves over an IEEE 1588-unaware packet network (i.e., according to ITU-T terminology, "without timing support"). Despite the relatively simple environment, the definition of this telecom profile required lengthy and careful discussions. The IEEE 1588 protocol was originally designed for use in a LAN, almost in a plug-and-play approach (see IEEE 1588-2002). A second version was then released to add some features that should have made it more suitable for use in telecom (and other) applications. One main addition was the possibility to use unicast mode (as opposed to the default multicast mode used in the initial IEEE 1588 applications). Nevertheless, despite the IEEE 1588-2008 revision, several concerns still remained about using the PTP protocol in a telecom environment; they were carefully addressed during several Q13 meetings based on a large number of contributions. One key aspect is that the common practice
[Figure 5. General packet network timing architecture (copy of Figure 1/G.8265). Note: the reference may be from a PRC directly, from GPS, or via a synchronization network.]
[Figure 6. Model of a telecom slave required to define the alternate BMCA (copy of Figure 3/G.8265.1).]
for telecom operators is to have full control of the operation of the network: for instance, the master priority is statically defined, and the master selection is decided by the slaves. On the other hand, the basic principle of IEEE 1588 redundancy is to provide some automatic planning of the network at startup, as well as to support automatic restoration after failures (based on the best master clock algorithm [BMCA]). In particular, according to the IEEE 1588 default approach, the master could decide whether or not it is the "grandmaster for the network." The discussion on the BMCA was probably the most controversial one. A final agreement resulted in the scheme shown in Fig. 6, where several instances of the PTP slave are required to allow the packet slave to be connected to several masters at the same time, supporting the appropriate redundancy requirements. In order to simplify the discussion, it was also agreed to focus on a fully unicast approach. The first telecom profile was finally agreed on at the June 2010 SG15 meeting and is included in G.8265.1 [5, G.8265.1]. Future frequency synchronization profiles (e.g., supporting a mixed multicast/unicast environment) might be addressed in future versions of G.8265.1.
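The following simplified sketch illustrates the spirit of the telecom slave selection just described: the grandmaster list and its priorities are statically provisioned by the operator, masters affected by packet timing signal fail (PTSF) or advertising an unusable QL are discarded, and the best remaining grandmaster is chosen. It is a non-normative approximation of the G.8265.1 alternate BMCA; the QL ranking and all identifiers are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import Optional

# Lower rank = better quality. Illustrative ordering only; the real values come from
# the SSM/clockClass mapping in G.8265.1. PTSF = packet timing signal fail.
QL_RANK = {"PRC": 1, "SSU-A": 2, "SSU-B": 3, "SEC": 4, "DNU": 99}

@dataclass
class GrandmasterState:
    name: str
    priority: int          # statically configured by the operator (1 = most preferred)
    ql: str                # quality level received from this grandmaster
    ptsf: bool = False     # True if announce/sync reception or timestamps have failed

def select_grandmaster(gms: list) -> Optional[GrandmasterState]:
    """Simplified telecom-slave selection: ignore failed or DNU masters, then pick
    the best QL; among equal QLs, use the operator-configured priority."""
    usable = [g for g in gms if not g.ptsf and g.ql != "DNU"]
    if not usable:
        return None  # no usable grandmaster: the slave falls back to holdover
    return min(usable, key=lambda g: (QL_RANK.get(g.ql, 99), g.priority))

# Example: GM1 is preferred by configuration but currently failed (PTSF),
# so the slave locks to GM2, the best remaining grandmaster.
gms = [
    GrandmasterState("GM1", priority=1, ql="PRC", ptsf=True),
    GrandmasterState("GM2", priority=2, ql="PRC"),
    GrandmasterState("GM3", priority=3, ql="SSU-A"),
]
print(select_grandmaster(gms).name)  # -> GM2
```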
TIME AND PHASE SYNCHRONIZATION AND FUTURE WORK
Time synchronization has traditionally been required to support functions such as billing and alarming. In this case the requirements are on the order of tens or hundreds of milliseconds.
An effective and convenient distribution of the time synchronization reference (time of day [ToD]) calls for a hierarchical time synchronization network and a protocol that can read a server clock, transmit the data to one or more clients, and adjust each client clock. For this purpose in IP networks NTP [1] has been chosen. More stringent time synchronization requirements are related to the support of the correct generation of the signals on the radio interface. In the past this was mainly required for instance in case of code-division multiple access (CDMA) technology (the time synchronization requirement is for ± 3 μs with respect to the CDMA system time, which in its turn is traceable and synchronous to UTC). In this case the traditional approach has been to deploy a GPS receiver in every Base Station. The deployment of time-division synchronous CDMA (TD-SCDMA) technology, as well as the foreseen wider use of time-division duplex (TDD) technology in the future, is increasing the need to deliver accurate time or phase synchronization in the networks. Particularly in the case of LTE TDD, Third Generation Partnership Project (3GPP) TS 36.133 specifies 3 μs or 10 μs maximum time difference between base stations for small and large cells, respectively. In case of TD-SCDMA, 3GPP TR 25.836 specifies 3 μs maximum time difference between Base Stations. Additional examples are related to the support of functions such as multimedia broadcast/multicast service over a single frequency network (MBSFN) or coordinated multipoint transmission (also known as network multipleinput multiple-output [MIMO]), also with
requirements on the order of microseconds or sub-microseconds. This situation is driving the definition of alternative solutions to the use of GPS or, more generally, a global navigation satellite system (GNSS). In fact sometimes the use of a GPS receiver might not be feasible or effective. Note, however, that the use of GNSS will remain important in the future: it will provide the reference to the packet masters, and allow a mixed architecture where some nodes may directly receive time reference from the GNSS, and others will get the reference through the networks from these nodes. A significant number of activities are planned addressing various aspects of the distribution of time over packet networks and in general over any relevant transport technology (OTN, GPON, VDSL2, etc.). In this respect Q13 has defined a new series of Recommendations, G.827x, which will cover all relevant aspects: network requirements, architecture, PTP profile, and clocks.
CONCLUSIONS
Synchronization networks have adapted to the widespread use of packet networks, as needed for different types of networks and applications. Synchronous Ethernet is being deployed and is able to interwork with SDH synchronization networks. For IEEE 1588, the new telecom profile for the transport of frequency is now consented, and other frequency profiles might be specified in the future. The main area of study for ITU-T Q13/15 is now the transport of time and phase through packet networks, as required by some mobile technologies (e.g., LTE TDD). A new area of work is to define an architecture where the transport of time and phase can be supported by networks where a reference frequency is available, in order to obtain a more flexible time synchronization solution. New issues that did not exist for the transport of frequency, such as the asymmetry of the two transmission directions, have to be considered for time synchronization. Many aspects need to be resolved in the next few years, and several Recommendations (the G.827x series) will have to be completed. Q13 has shown itself to be a very active group and expects to overcome these new challenges as it has the previous technical issues.
REFERENCES
[1] RFC 5905, "Network Time Protocol Version 4: Protocol and Algorithms Specification."
[2] IEEE 1588-2008, "Standard for a Precision Clock Synchronization Protocol for Networked Measurement and Control Systems."
[3] I-ETS 300 353, "Broadband Integrated Services Digital Network (B-ISDN); Asynchronous Transfer Mode (ATM); Adaptation Layer (AAL) Specification — Type 1."
[4] ETSI TR 101 685, "Timing and Synchronization Aspects of Asynchronous Transfer Mode (ATM) Networks."
[5] ITU-T G.826x series: G.8260, "Definitions and Terminology for Synchronization in Packet Networks"; G.8261, "Timing and Synchronization Aspects in Packet Networks"; G.8262, "Timing Characteristics of a Synchronous Ethernet Equipment Slave Clock (EEC)"; G.8263, "Timing Characteristics of Packet-Based Equipment Clocks (PEC) and Packet-Based Service Clocks (PSC)" (only draft available); G.8264, "Distribution of Timing Information through Packet Networks"; G.8265, "Architecture and Requirements for Packet-Based Frequency Delivery"; G.8265.1, "Precision Time Protocol Telecom Profile for Frequency Synchronization."
ADDITIONAL READING
[1] ITU-T G.81x series: G.810, "Definitions and Terminology for Synchronization Networks"; G.811, "Timing Characteristics of Primary Reference Clocks"; G.812, "Timing Requirements of Slave Clocks Suitable for Use as Node Clocks in Synchronization Networks"; G.813, "Timing Characteristics of SDH Equipment Slave Clocks (SEC)."
[2] ITU-T G.82x series: G.823, "The Control of Jitter and Wander within Digital Networks which are Based on the 2048 kb/s Hierarchy"; G.824, "The Control of Jitter and Wander within Digital Networks which are Based on the 1544 kb/s Hierarchy"; G.825, "The Control of Jitter and Wander within Digital Networks which are Based on the Synchronous Digital Hierarchy (SDH)."
BIOGRAPHIES JEAN-LOUP FERRANT (
[email protected]) graduated from INPG Grenoble (France), joined Alcatel in 1975, and worked on analog systems, PCM, and digital cross-connects. He has been working on SDH synchronization since 1990 and on SDH and OTN standardization for more than 15 years in ETSI TM1 and TM3 and ITU-T SG13 and SG15. He has been rapporteur of SG15 Q13 on network synchronization since 2001. He was one of the Alcatel-Lucent experts on synchronization in transport networks until he retired in March 2009. He is still rapporteur of SG15 Q13, sponsored by Calnex Solutions.
[email protected]) joined Ericsson in 1993 and has been working on synchronization aspects for about 15 years (currently as Expert R&D and member of the Research & Innovation Team, Ericsson). He has represented Ericsson in various standardization organizations (including ETSI, ITU, 3GPP, and IETF) and is currently actively contributing to ITU-T SG15 Q13 (serving as associate rapporteur and editor) and to other relevant standardization bodies. He is one of the Ericsson experts involved in the definition of the equipment and network synchronization solutions.
SYNCHRONIZATION OVER ETHERNET AND IP NETWORKS
Synchronization of Audio/Video Bridging Networks Using IEEE 802.1AS Geoffrey M. Garner and Hyunsurk (Eric) Ryu, Samsung Advanced Institute of Technology
ABSTRACT The Audio/Video Bridging project in the IEEE 802.1 working group is focused on the transport of time-sensitive traffic over IEEE 802 bridged networks. Current bridged networks do not have mechanisms that enable meeting these requirements under general traffic conditions. IEEE 802.1AS is the AVB standard that will specify requirements to allow for transport of precise timing and synchronization in AVB networks. It is based on IEEE 1588-2008, includes a PTP profile that is applicable to full-duplex IEEE 802.3 transport, and adds specifications for timing transport over IEEE 802.11, IEEE 802.3 EPON, and CSN media. This article provides a tutorial on IEEE 802.1AS that updates earlier descriptions, and new simulation results for timing performance.
INTRODUCTION The Audio/Video Bridging (AVB) project in the IEEE 802.1 working group is focused on the transport of time-sensitive traffic over IEEE 802 bridged networks. The initial emphasis was on consumer audio/video (A/V) applications. As the project developed, the focus broadened to include professional A/V, industrial automation, and automotive applications. Additional AVB applications are expected to include wireless communications and smart grid. All of these applications have stringent timing requirements. Current bridged networks do not have mechanisms that enable meeting these requirements under general traffic conditions. Goals of the AVB project, driven mainly by the initial emphasis on consumer applications but also useful for the later applications, are that the bridges be low-cost, and that AVB systems require minimal configuration and are as near plug-and-play as possible. Regarding the former, the philosophy has been to make the bridges as inexpensive as possible and place any high cost in the respective end devices. This ensures that the cost of applications with stringent requirements is borne only by those applications. Regarding the latter, the result has been to specify as few options as possible. The carrying of time-sensitive traffic requires three main functions. First, precise timing and
synchronization is needed so that individual traffic streams will meet their respective jitter, wander, and time synchronization requirements on egress. Second, a mechanism for applications to reserve the necessary network resources is needed. Finally, bridge forwarding and queueing mechanisms are needed so that latency requirements are met. These functions are provided by three AVB standards: IEEE 802.1AS (precise timing and synchronization) [1], IEEE 802.1Qat2010 (“Virtual Bridged Local Area Networks, Amendment 14: Stream Reservation Protocol [SRP]”), and IEEE 802.1Qav-2009 (“Virtual Bridged Local Area Networks, Amendment 12: Forwarding and Queueing Enhancements for Time-Sensitive Streams”). A fourth AVB standard, IEEE 802.1BA [2], specifies “AVB profiles” (i.e., the parameters and options of the other three standards needed to transport traffic streams of each respective AVB application). The initial versions of the three main AVB standards are either completed or nearly completed. IEEE 802.1Qat and IEEE 802.1Qav are published. IEEE 802.1AS has completed the initial sponsor ballot and one recirculation, and is currently in a second recirculation; completion is expected in early 2011. The AVB networks focus on IEEE 802 technologies. In the current versions of the AVB standards, these include full-duplex IEEE 802.3 (Ethernet) operating at rates of 100 Mb/s or higher, 802.11 (WiFi) operating at rates of 100 Mb/s or higher (which means that 802.11 transport must use IEEE 802.11n), and 802.3 Ethernet passive optical network (EPON). However, an informative annex of IEEE 802.1AS describes transport over a coordinated shared network (CSN). Examples of such networks, also described in the Annex, are those based on the Multimedia over Coax Alliance (MoCA) standard and International Telecommunication Union — Telecommunication Standardization Sector (ITU-T) Recommendation G.hn. This article focuses on synchronization of AVB networks using IEEE 802.1AS. It is intended as a tutorial on AVB synchronization. AVB synchronization has been described previously [3–6]. However, [3–5] were prepared well before completion of IEEE 802.1AS and are not up to date, and [6] was a presentation without an accompanying paper or tutorial. In addition, this
article contains new simulation results for AVB network timing performance that correspond to up-to-date default parameters specified in 802.1AS. The article is organized as follows. The next section gives an overview of IEEE 802.1AS. We then describe how synchronization is transported using IEEE 802.1AS. We then describe how best master selection is done using IEEE 802.1AS. We then describe the Precision Time Protocol (PTP) profile that is part of IEEE 802.1AS and the performance requirements for 802.1AS timeaware systems. We then describe the new simulation results for 802.1AS timing performance. The final section presents conclusions.
OVERVIEW OF IEEE 802.1AS
A bridge or end station that meets the requirements of IEEE 802.1AS, and therefore is able to transport synchronization, is referred to as a time-aware bridge or end station, respectively. Since IEEE 802.1AS operates over a variety of media, the 802.1AS architecture divides a time-aware system (system refers generically to bridge or end station) into media-independent and media-dependent entities. All the entities are located in a layer above the IEEE 802.1 medium access control (MAC), MAC relay, and logical link control (LLC) sublayers. IEEE 802.1AS relies on the transfer of time stamps using mechanisms that are media-dependent. For the case where the medium is full-duplex Ethernet, 802.1AS uses a subset of IEEE 1588-2008 [7]. Specifically, IEEE 802.1AS includes an IEEE 802-specific profile of IEEE 1588-2008. For 802.11 links, 802.1AS uses timing facilities, developed initially for location determination, defined in IEEE 802.11v [8]. For EPON links, 802.1AS uses the timing facilities defined in the IEEE 802.3 multipoint control protocol (MPCP). For CSNs, it is possible to use inherent timing facilities or the PTP profile. The protocol defined by IEEE 1588 is referred to as PTP, and the IEEE 1588 profiles are referred to as PTP profiles. By analogy, the protocol defined by IEEE 802.1AS is referred to as the generalized PTP (gPTP); gPTP includes transport of synchronization over all media (i.e., not only those where transport is part of the PTP profile). The links that connect ports of time-aware systems are at least logically point-to-point; that is, gPTP information sent by a gPTP port is received by a specific gPTP port at the other end of the logical link. Whether the links are also physically point-to-point is media-dependent. The specific subset of IEEE 1588 used in the PTP profile contained in IEEE 802.1AS, and the differences between PTP and gPTP, are described in more detail later. IEEE 802.1AS establishes a synchronization hierarchy within an AVB network using an algorithm that is very similar to the default best master clock algorithm (BMCA) of [7]. This algorithm is part of the media-independent layer. It operates autonomously, and results in the selection of one time-aware system as the grandmaster and all ports of all the time-aware systems having master, slave, or passive roles. Each time-aware system, except for the grand-
master, receives synchronization information on its single slave port and transmits synchronization information on any master ports. The synchronization hierarchy forms a synchronization spanning tree, and ports are placed in the passive role to break loops. The transport of synchronization is hop by hop; each time-aware system except the grandmaster uses incoming synchronization information on its slave port to synchronize to the grandmaster. Each time-aware system transmits synchronization information on any master ports. IEEE 802.1AS requires that all bridges and end stations be time-aware. The enforcement of this requirement is media-dependent. A fullduplex Ethernet port uses the IEEE 1588 peer delay mechanism to determine whether the system at the other end of the link is time-aware. This is done by detecting whether the system at the other end of the link responds to peer delay messages and, if so, whether the measured link delay exceeds a specified threshold. An IEEE 802.11 link determines that gPTP cannot run on the link based on information provided by the 802.11v protocol. An EPON can always run gPTP because the MPCP timing facilities are always present. A CSN that uses its inherent timing facilities can always run gPTP; alternatively, if the CSN uses the PTP profile, the IEEE 1588 peer delay mechanism is used to determine whether the system at the other end of the link is time-aware.
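As a small illustration of the check described above for full-duplex Ethernet ports, the sketch below treats a neighbor as time-aware only if it answers the peer delay messages and the measured link delay stays below a configured threshold. The threshold value and function names are assumptions made for illustration (the parameter is loosely modeled on the 802.1AS neighborPropDelayThresh variable), not normative figures.

```python
def as_capable(pdelay_responses_received: int, measured_delay_ns: float,
               neighbor_prop_delay_thresh_ns: float = 800.0) -> bool:
    """Decide whether gPTP can run across this full-duplex Ethernet link.

    The neighbor must answer the peer delay messages, and the measured link
    delay must not exceed the configured threshold (a large delay suggests that
    a non-802.1AS device, e.g. an ordinary bridge, sits between the two ports).
    The 800 ns default used here is only an illustrative value.
    """
    if pdelay_responses_received == 0:
        return False                      # neighbor does not speak the peer delay protocol
    return measured_delay_ns <= neighbor_prop_delay_thresh_ns

print(as_capable(pdelay_responses_received=3, measured_delay_ns=120.0))   # True
print(as_capable(pdelay_responses_received=0, measured_delay_ns=0.0))     # False: no responses
print(as_capable(pdelay_responses_received=3, measured_delay_ns=5000.0))  # False: delay too large
```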
TRANSPORT OF SYNCHRONIZATION IN IEEE 802.1AS Synchronization transport by an 802.1AS timeaware system is functionally equivalent to synchronization transport by an IEEE 1588 boundary clock (or ordinary clock) that uses the peer delay mechanism. It may be shown that the transport is also functionally equivalent to transport by an IEEE 1588 peer-to-peer transparent clock. The time synchronization model in IEEE 802.1AS assumes that a time-aware system has a free-running local clock it uses to time-stamp the departure and arrival of various time synchronization messages. There must be a single common local clock for all time-stamping in the node, but otherwise the clock is free-running (i.e., there is no requirement to physically adjust the frequency of this clock, although it is not prohibited). Each node uses the arrival and departure time stamps for the various messages and time synchronization information carried by the messages from upstream nodes to determine grandmaster time that corresponds to any desired local clock time A time-aware system is required to measure propagation delay on each logical link attached to each gPTP port. This measurement is mediadependent. For full-duplex Ethernet, the measurement uses the IEEE 1588 peer delay mechanism. The peer delay measurement is illustrated in Fig. 1 (adapted from [1, 7]). The port that wishes to measure propagation delay, termed the peer delay initiator, sends a Pdelay_Req message and time-stamps the depar-
[Figure 1. Link delay measurement using peer delay mechanism.]
[Figure 2. Transport of time synchronization information over full-duplex Ethernet media.]
[Footnote 1] An error remains due to any difference between the initiator and grandmaster frequency; however, the effect of this error can be shown to be negligible [1].
ture of that message (time t1). The message arrives at the node at the other end of the link, termed the peer delay responder, which time-stamps the arrival of the message (time t2). The peer delay responder sends a Pdelay_Resp message to the peer delay initiator at a later time, and time-stamps the departure (time t3). The Pdelay_Resp message also conveys the time t2 to the initiator. The initiator time-stamps the arrival of the Pdelay_Resp message (time t4). Finally, the responder conveys the time t3 to the initiator in a Pdelay_Resp_Follow_Up message, which is not time-stamped. At the conclusion of this message exchange, the peer delay initiator can compute the propagation delay under the assumption that the link is symmetric (i.e., the delay is the same in both directions). The propagation delay is equal to half the difference between the interval t4 – t1 and the interval t3 – t2, but with both intervals referred to a common time base (since, in general, the frequencies of the local clocks at the time-aware systems will be different). The referencing of both intervals
to a common time base is done by multiplying the interval t3 – t2 by the measured ratio of the initiator to responder local clock frequency.1 A peer delay initiator uses the departure and arrival times of the successive Pdelay_Resp messages to measure the ratio of its local clock frequency to that of the peer delay responder. This rate ratio is used in the link delay computation, as described above, and in computing synchronized (i.e., grandmaster) time, as described shortly. IEEE 802.1AS does not prescribe the specific algorithm for the rate ratio measurement; any algorithm is allowed provided that the measurement can be made to within ±0.1 ppm. An example is given where rate ratio is computed as the ratio of the interval between the arrivals of successive Pdelay_Resp messages to the interval between the departures of the same messages. Every full-duplex Ethernet port of every time-aware system periodically initiates a peer delay measurement. As indicated above, the result of this measurement is used to determine if gPTP can be run on the link. In addition, the link delay and rate ratio will be continuously known to all ports of all links that can run gPTP, which allows faster reconfiguration if there is a grandmaster or network topology change. Note that while the link delay is expected to be relatively static for full-duplex Ethernet links, the frequencies of the local clocks of the endpoint time-aware systems, and therefore the rate ratio, may vary over time (e.g., due to temperature changes). A time-aware system synchronizes to the grandmaster using time synchronization information received on its slave port. Here, the term synchronize means that the time-aware system is able to compute the grandmaster time corresponding to any desired local clock time. If the time-aware system can do this, it can provide the grandmaster time (i.e., the network synchronized time) whenever desired. There is no requirement that the local clock frequency be physically adjusted to match the grandmaster frequency (although, as indicated earlier, this is not prohibited). Time synchronization is performed as follows. Periodically, a time-aware system that is not the grandmaster receives time synchronization information on its slave port. This information consists of a grandmaster time and a corresponding local clock time. The period for the sending of this information by a master port is termed the Sync interval, following IEEE 1588 terminology. The format of this information and the messages used to convey the information are media-dependent. For example, in full-duplex Ethernet media the PTP messages Sync and Follow_Up [7] are used, and the correspondence between grandmaster and local clock time is obtained using the timestamp information from the upstream time-aware system carried in the Follow_Up message and the timestamp of the arrival of the Sync message. The process is described in more detail below. In IEEE 802.11 media, the information is conveyed by a single IEEE 802.11v Timing Measurement Action Frame. IEEE 802.1AS causes this frame to be transmitted and obtains the information from a
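Returning to the peer delay exchange of Fig. 1, the arithmetic described above is small enough to show in a few lines. The following Python sketch is illustrative only (the function and variable names are not taken from the standard): it estimates the neighbor rate ratio from two successive Pdelay_Resp exchanges and then computes the link delay from a single exchange.

```python
def neighbor_rate_ratio(t4_prev, t4_curr, t3_prev, t3_curr):
    """Ratio of the initiator's local clock frequency to the responder's,
    estimated as (interval between arrivals of two successive Pdelay_Resp
    messages, initiator clock) / (interval between their departures,
    responder clock).  One allowed scheme; 802.1AS only requires the
    estimate to be accurate to within +/-0.1 ppm."""
    return (t4_curr - t4_prev) / (t3_curr - t3_prev)

def link_delay(t1, t2, t3, t4, rate_ratio):
    """Propagation delay of an (assumed symmetric) link from one peer delay
    exchange.  t1 = Pdelay_Req departure and t4 = Pdelay_Resp arrival
    (initiator clock); t2 = Pdelay_Req arrival and t3 = Pdelay_Resp
    departure (responder clock).  The responder interval t3 - t2 is first
    rescaled into the initiator's time base, as described in the text."""
    return ((t4 - t1) - rate_ratio * (t3 - t2)) / 2.0
```

In a real implementation the four time stamps would be captured in hardware close to the PHY; the sketch only shows how the time stamps and the rate ratio combine.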
Attribute | Default value | Required supported values
gPTP domain number | 0 | There is a single gPTP domain, with domain number 0
logAnnounceInterval | 0 | 0, 127*
logSyncInterval | –3 | –3, 127*
logPdelayReqInterval | 0 | 0, 127*
announceReceiptTimeout | 3 | 3
priority1 | 246 (network infrastructure time-aware systems), 250 (portable time-aware systems), 248 (other time-aware systems) | 246 (network infrastructure time-aware systems), 250 (portable time-aware systems), 248 (other time-aware systems)
priority2 | 248 | 248
Observation interval for offsetScaledLogVariance | 0.125 s | 0.125 s
*When the value of logAnnounceInterval, logSyncInterval, or logPdelayReqInterval is 127, the port does not send Announce, Sync, or Pdelay_Req messages, respectively.
Table 1. Default values and required supported ranges for PTP attributes.
In EPON media, the correspondence is conveyed using an organization-specific slow protocol (OSSP) message. IEEE 802.1AS causes this message to be sent and obtains information from a received message via service interface primitives. The EPON MPCP counter is, in effect, the local clock, and the setting of this counter is done within the EPON MPCP layer (i.e., outside of IEEE 802.1AS). The transport of synchronization information for IEEE 802.11 and IEEE 802.3 EPON media is described in more detail in [1, clauses 12 and 13].
Figure 2 (taken from [1]) illustrates the synchronization process for the case of full-duplex Ethernet media. The figure shows three time-aware systems, labeled A, B, and C. A master port of A sends a PTP Sync message to B, and time-stamps the message at ts,A relative to the local clock of A. The Sync message is received and time-stamped on the slave port of B at local time tr,B. At a later time, A sends a PTP Follow_Up message, which contains (among other information) a preciseOriginTimestamp, a correctionField, and the measured cumulative rateRatio of the frequency of the local clock of A relative to the grandmaster frequency (this latter field is contained in a standard organization TLV). The computation of these values will become clear after the set of computations done by time-aware system B is described. The preciseOriginTimestamp value is the same value that was sent by the grandmaster when this particular synchronization information originated. The correctionField value is chosen so that the sum of the preciseOriginTimestamp and correctionField is the grandmaster time that corresponds to local time ts,A. At time ts,B, B sends and time-stamps a Sync message. At a later time, B sends a Follow_Up message.
The fields of the Follow_Up message are set as follows:
• The preciseOriginTimestamp is set equal to the preciseOriginTimestamp of the most recently received Sync and Follow_Up message on the slave port.
• The cumulative rateRatio is set equal to the cumulative rateRatio of the most recently received Sync and Follow_Up message on the slave port, multiplied by the current neighbor rate ratio measured by the slave port.
• The correctionField is set equal to the sum of the correctionField of the most recently received Sync and Follow_Up message on the slave port, the link delay measured by the slave port, and the time interval ts,B – tr,B multiplied by the newly computed cumulative rateRatio.
The sum of the preciseOriginTimestamp and correctionField of the Follow_Up message sent by B is the grandmaster time that corresponds to the local time ts,B at which the Sync message is sent. The time interval ts,B – tr,B multiplied by the cumulative rateRatio is the PTP residence time, and the fact that a time-aware system alters only the correctionField and not the preciseOriginTimestamp is analogous to the way in which an IEEE 1588 peer-to-peer transparent clock transports synchronization. However, gPTP could have specified that the full grandmaster time (except for any sub-nanosecond portion, which in PTP must be carried in the correction field) be carried in the preciseOriginTimestamp field, as would be done by an IEEE 1588 boundary clock. In fact, the boundary clock and peer-to-peer transparent clock are functionally equivalent in the manner in which they transport synchronization (since they differ only in how the grandmaster time is distributed between the preciseOriginTimestamp and correctionField). The key difference between the boundary clock and peer-to-peer transparent clock is that the former invokes the BMCA and PTP state machine and the latter does not; for this reason, the gPTP time-aware system is equivalent to a PTP boundary clock. This topic is discussed in more detail in [9].
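The three forwarding rules can be condensed into a short Python sketch. It is illustrative only: the field names are simplified, the scaled integer encodings used by the actual TLVs are ignored, and rateRatio is taken here as the grandmaster frequency divided by the local clock frequency, so that multiplying a local interval by it yields grandmaster time, consistent with the description above; the exact sign and field conventions of the standard are abstracted away.

```python
from dataclasses import dataclass

@dataclass
class SyncInfo:
    precise_origin_timestamp: float  # grandmaster time at origination (s)
    correction_field: float          # accumulated corrections (s)
    rate_ratio: float                # grandmaster frequency / local clock frequency (assumed convention)

def forward_sync(rx: SyncInfo, t_rx_slave: float, t_tx_master: float,
                 link_delay: float, neighbor_rate_ratio: float) -> SyncInfo:
    """Information placed in the Sync/Follow_Up sent onward by a time-aware bridge.
    t_rx_slave  - local arrival time of the incoming Sync on the slave port
    t_tx_master - local departure time of the outgoing Sync on the master port
    link_delay  - peer delay measured on the slave port
    neighbor_rate_ratio - upstream neighbor's clock frequency / local clock frequency."""
    rate_ratio = rx.rate_ratio * neighbor_rate_ratio
    residence = (t_tx_master - t_rx_slave) * rate_ratio  # residence time expressed in grandmaster time
    return SyncInfo(
        precise_origin_timestamp=rx.precise_origin_timestamp,  # passed through unchanged
        correction_field=rx.correction_field + link_delay + residence,
        rate_ratio=rate_ratio,
    )
```

The sum precise_origin_timestamp + correction_field of the returned object is then the grandmaster time corresponding to t_tx_master, as stated above.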
Parameter | Value
Number of time-aware systems, including grandmaster | 8 (7 hops)
Link medium | Full-duplex IEEE 802.3 (Ethernet)
Sync interval | 0.125 s
Peer delay interval | 1.0 s
Free-running local clock tolerance | ±100 ppm
Residence time, Pdelay turnaround time | 1 ms (cases 1 and 2), 10 ms (case 3), 50 ms (case 4)
Link propagation delay | 500 ns
Phase measurement granularity of local oscillator | 40 ns
Local clock wander generation | Not modeled (case 1); FFM at level of TDEV mask of [1, Fig. B-1]; see performance requirement 4 in article (cases 2–4)
Endpoint filter bandwidth | 1 Hz, 10 mHz, 1 mHz (for each of cases 1–4)
Endpoint filter gain peaking | 0.1 dB
Simulation time | 10 010 s
Maximum time step | 0.001 s
Table 2. Parameters for simulation cases.
BEST MASTER SELECTION IN IEEE 802.1AS
The description of best master selection in this section is a summary of the description in [4]; a more detailed description is contained in [4] and the references therein (especially references 11–14 of [4]). The 802.1AS BMCA is very similar to the default IEEE 1588 BMCA. Within an IEEE 802.1AS network, all time-aware systems are required to invoke the BMCA, and best master selection information is conveyed using PTP Announce messages on all media. As indicated above, the BMCA is part of the media-independent layer. The gPTP BMCA differs from the default PTP BMCA as follows:
• In gPTP, a time-aware system need not be grandmaster-capable, regardless of the number of ports, while in PTP only an ordinary clock (i.e., a single-port clock) need not be grandmaster-capable (i.e., can be slave-only).
• In gPTP, there is no foreign master qualification as in PTP; all Announce messages received on a port are used immediately.
• In gPTP, a port whose role is determined to be master becomes master immediately; there is no pre-master state as in PTP.
The gPTP BMCA is expressed using a subset of the formalism for the Rapid Spanning Tree Protocol (RSTP) (see IEEE 802.1Q-2005, "IEEE Standard for Local and Metropolitan Area Networks, Virtual Bridged Local Area Networks"). This is possible because both the BMCA and RSTP create spanning trees. The root of the spanning tree created by the BMCA is the grandmaster, unless no time-aware system in the network is grandmaster-capable. The IEEE 1588 attributes priority1, clockClass, clockAccuracy, offsetScaledLogVariance, priority2, and clockIdentity are concatenated, as unsigned integers in that order, into the overall attribute systemIdentity. The first part of the IEEE 1588 dataset comparison algorithm [7, Fig. 27] is expressed in terms of a comparison of systemIdentities. Six different, but related, priority vectors are defined, which are set and compared in four interacting state machines; these machines also set the role (i.e., PTP state) of each port. The operation of these state machines is equivalent to the dataset comparison and state decision algorithms of [7].
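Because the attributes are concatenated as one unsigned quantity, the comparison amounts to a lexicographic ordering. The short Python sketch below illustrates the idea; the priority1 values are taken from Table 1, the remaining values are purely illustrative, and the real state machines of course operate on encoded octet fields rather than Python tuples.

```python
def system_identity(priority1, clock_class, clock_accuracy,
                    offset_scaled_log_variance, priority2, clock_identity):
    """Concatenation of the six attributes in the order used to form
    systemIdentity.  Representing it as a tuple gives the same ordering as
    comparing the concatenated unsigned integer, because Python compares
    tuples lexicographically (smaller means 'better')."""
    return (priority1, clock_class, clock_accuracy,
            offset_scaled_log_variance, priority2, clock_identity)

# A network-infrastructure system (priority1 = 246, Table 1) is preferred over
# a portable device (priority1 = 250) regardless of the remaining fields.
infra    = system_identity(246, 248, 0xFE, 0x4100, 248, 0x0001_02FF_FE03_0405)
portable = system_identity(250, 248, 0xFE, 0x4100, 248, 0x0001_02FF_FE03_0406)
assert min(infra, portable) is infra
```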
PTP PROFILE CONTAINED IN IEEE 802.1AS AND DIFFERENCES BETWEEN PTP AND IEEE 802.1AS
IEEE 802.1AS includes a PTP profile, which is used for transport over full-duplex IEEE 802.3 links. The PTP profile specifies attribute values and the subset of IEEE 1588 options used in IEEE 802.1AS. However, IEEE 802.1AS also contains specifications beyond PTP, such as the specifications for transport over IEEE 802.11 and 802.3 EPON, as well as those related to
time-aware system performance.
The default values and required supported values for time-aware system attributes are given in Table 1. Note that the required supported values are minimum requirements for an AVB network (i.e., all AVB applications can assume that these values are supported); however, a particular AVB application may require that additional values be supported. The PTP options used in this PTP profile are:
1. The BMCA is an alternate BMCA that differs from the default BMCA of [7] as described above.
2. The management mechanism uses a Simple Network Management Protocol (SNMP) Management Information Base (MIB).
3. The path delay mechanism is the peer delay mechanism.
4. The transport mechanism is full-duplex Ethernet.
5. A time-aware system is a PTP ordinary clock or boundary clock, depending on whether it has one gPTP port or more than one, respectively. All time-aware systems are two-step clocks (as defined in IEEE 1588-2008).
6. Each gPTP port of a time-aware system measures the frequency offset of its neighbor at the other end of the attached link relative to itself. The frequency offset, relative to the grandmaster, is accumulated in a standard organization TLV that is attached to the Follow_Up message.
7. The PTP path trace feature is required.
8. A standard organization TLV is defined that allows a port of a time-aware system to request that its neighbor slow down or speed up the rate at which it sends Sync/Follow_Up, peer delay, and/or Announce messages.
9. The acceptable master table feature is used with IEEE 802.3 EPON links to ensure that the optical line terminal (OLT) is master and the optical network units (ONUs) are slaves.
In addition to containing the above PTP profile, IEEE 802.1AS specifies performance requirements for time-aware systems:
1. The fractional frequency offset of the local clock, relative to the TAI frequency, shall be within ±100 ppm.
2. The local clock frequency shall be 25 MHz or greater (i.e., the time measurement granularity is no worse than 40 ns).
3. The jitter generation of the local clock shall not exceed 2 ns peak-to-peak, measured through a 10 Hz first-order high-pass filter (and having the low-pass characteristic specified in [1]).
4. The wander generation of the local clock shall be within the time deviation (TDEV) mask of [1, Fig. B-1].
5. The residence time (i.e., the time interval between the sending of a Sync message on a master port and receipt of the most recent Sync message on the slave port) shall not exceed 10 ms.
6. The Pdelay turnaround time (i.e., the time interval between the receipt of a Pdelay_Req message and the sending of the corresponding Pdelay_Resp message) shall not exceed 10 ms.
7. The error inherent in any scheme used to measure neighbor rate ratio shall not exceed 0.1 ppm.
Figure 3. MTIE results for cases 1-6, node (time-aware system) 2.
SIMULATION RESULTS FOR TIMING PERFORMANCE
References [5, 6] present simulation results for 802.1AS timing performance. The results are given in the form of maximum time interval error (MTIE; see ITU-T Rec. G.810, "Definitions and Terminology for Synchronization Networks") for transport over one and seven hops, respectively. An eight-node network was simulated because one objective for AVB is that end systems may be separated by up to seven hops. However, these simulations did not consider local clock wander generation (see requirement 4 in the previous section). In addition, since those simulations were done, the residence time and Pdelay turnaround time requirements (see requirements 5 and 6 of the previous section) were increased from 1 to 10 ms. New simulations were performed, which include local clock
wander generation and reflect the increased residence and Pdelay turnaround times. The simulator and model used in [5, 6] were used here, but with the addition of a noise model that generates wander at the level of the TDEV mask of performance requirement 4 of the previous section. This mask has a flicker frequency modulation (FFM) characteristic, and its values at 0.05 s and 10 s are 0.25 ns and 50 ns, respectively (it is specified only for times between 0.05 s and 10 s; see [1] for details). The noise is simulated using the technique of [10]. The simulator is described in [5]. Parameters for the simulation cases are given in Table 2. Case 1 is from [5, 6], and is repeated here for comparison with the new cases. The new cases include:
• Addition of local clock wander generation at the level of the 802.1AS requirement
• Addition of clock wander generation and increase of residence and Pdelay turnaround times to 10 ms (the 802.1AS requirement)
• Addition of wander generation and increase of residence and Pdelay turnaround times to 50 ms (i.e., exceeding the 802.1AS requirement by a factor of 5)
As in [5, 6], a single run was made for each endpoint filter bandwidth, for each case. The free-running clock frequency offset for each time-aware system was initialized randomly, from a uniform distribution over the frequency tolerance range. MTIE results for nodes (time-aware systems) 2 and 8 are given in Figs. 3 and 4, respectively, and are compared with MTIE requirements (masks) derived from jitter, frequency offset, and frequency drift requirements for the respective audio/video and femtocell technologies. MTIE is peak-to-peak phase variation, as a function of observation interval; see [5, 6] and references therein for more detail, including derivation of the MTIE masks. The results show that the addition of clock wander generation, as well as the increase of residence and Pdelay turnaround times, has little effect on the resulting MTIE; in fact, increasing the residence and Pdelay turnaround times to 50 ms has little impact. The results indicate that a 1 mHz endpoint filter bandwidth enables all the requirements to be met. Increasing the bandwidth to 10 mHz enables all the requirements except those for uncompressed SDTV to be met, and increasing the bandwidth to 1 Hz enables only the audio requirements to be met. Note that [5, 6] also gave results indicating that a 10 Hz endpoint filter bandwidth will allow only the professional audio requirements to be met. While the new cases were not run with a 10 Hz endpoint filter, it is expected that this result would be the same.
Figure 4. MTIE results for cases 1-6, node (time-aware system) 8.
CONCLUSIONS
This article has provided a tutorial on IEEE 802.1AS, and has presented new simulation results based on requirements in the 802.1AS sponsor ballot draft. IEEE 802.1AS is based on a subset of IEEE 1588-2008. It includes a PTP profile that is applicable to full-duplex IEEE 802.3 transport, and adds specifications for timing transport over IEEE 802.11, IEEE 802.3 EPON, and CSN media. The requirements of IEEE 802.1AS were chosen to provide for low-cost bridges, while still allowing application performance requirements to be met. Applications with more stringent timing requirements will use narrower-bandwidth endpoint filters. IEEE 802.1AS has very few user-configurable options, consistent with the goal that AVB networks be plug-and-play. The new simulations were performed to confirm the level of support that can be provided for various end applications. The results of simulations show that all application requirements considered in 802.1AS can be met with the use of a 1 mHz filter bandwidth. With a 10 mHz bandwidth, all requirements are met with the exception of uncompressed SDTV. With a 1 Hz bandwidth, requirements for audio are met.
REFERENCES
[1] IEEE P802.1AS/D7.0, "Draft Standard for Local and Metropolitan Area Networks — Timing and Synchronization for Time-Sensitive Applications in Bridged Local Area Networks," Mar. 23, 2010.
[2] IEEE P802.1BA/D2.0, "Draft Standard for Local and Metropolitan Area Networks — Audio Video Bridging (AVB) Systems," Aug. 13, 2010.
[3] G. M. Garner et al., "IEEE 802.1 AVB and its Application in Carrier-Grade Ethernet," IEEE Commun. Mag., Dec. 2007, pp. 126–34.
[4] M. D. Johas Teener and G. M. Garner, "Overview and Timing Performance of IEEE 802.1AS," Proc. IEEE ISPCS '08, Ann Arbor, MI, Sept. 22–26, 2008, pp. 49–53.
[5] G. M. Garner, A. Gelter, and M. D. Johas Teener, "New Simulation and Test Results for IEEE 802.1AS Timing Performance," Proc. IEEE ISPCS '09, Brescia, Italy, Oct. 12–16, 2009, pp. 109–15.
[6] G. M. Garner, "Synchronization of Audio/Video Bridging Networks using IEEE 802.1AS," NIST-ATIS-Telcordia Wksp. Synchronization in Telecommun. Sys., Mar. 9–11, 2010.
[7] IEEE Std. 1588-2008, "IEEE Standard for a Precision Clock Synchronization Protocol for Networked Measurement and Control Systems," Revision of IEEE Std. 1588-2002, IEEE Instrumentation and Measurement Society, July 24, 2008.
[8] IEEE P802.11v/D14.0, "Draft Standard for Information Technology — Telecommunications and Information Exchange between Systems — Local and Metropolitan Area Networks — Specific Requirements — Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications, Amendment 8: IEEE 802.11 Wireless Network Management," Aug. 2010.
[9] G. M. Garner, M. Ouellette, and M. D. Johas Teener, "Using an IEEE 802.1AS Network as a Distributed IEEE 1588 Boundary, Ordinary, or Transparent Clock," Proc. IEEE ISPCS '10, Portsmouth, NH, Sept. 29–Oct. 1, 2010, pp. 109–15.
[10] J. A. Barnes and C. A. Greenhall, "Large Sample Simulation of Flicker Noise," 19th Annual Precise Time and Time Interval Apps. Planning Meeting, Dec. 1987.
BIOGRAPHIES
GEOFFREY M. GARNER ([email protected]) graduated from MIT (S.B., 1976; S.M., 1978; Ph.D., 1985). Since 1993 his work has focused on network timing, jitter, and synchronization; network performance and quality of service; standards development; and simulation. He has been a consultant since 2003; current projects include work on the IEEE Audio/Video Bridging standard for precise timing transport, IEEE 802.1AS (for Samsung Electronics), and simulation of timing performance for new OTN clients (for Huawei Technologies). He is editor of IEEE P802.1AS and is a member of the IEEE Registration Authority Committee. Prior to 2003, he was a Distinguished Member of Technical Staff in Bell Labs, Lucent Technologies.
HYUNSURK (ERIC) RYU ([email protected]) graduated from POSTECH (B.S., 1992; M.S., 1994; Ph.D., 1998). He has been a member of technical staff at Samsung Advanced Institute of Technology, Samsung Electronics, since receiving his Ph.D. He has been project leader of multiple networking technology R&D projects and a contributor to related standardization in ITU-T SG15, IEEE 802.1, and IEEE 802.3. In addition, he has been actively involved in the development of the new IEEE 802.1 Audio Video Bridging standard from the beginning. Recently he has expanded his research interests to bio-mimetic computing, communication, and cognitive applications.
SYNCHRONIZATION OVER ETHERNET AND IP NETWORKS
NGN Packet Network Synchronization Measurement and Analysis
Lee Cosart, Symmetricom, Inc.
ABSTRACT
As the transport of data across the network relies increasingly on Ethernet/IP methods and less on the TDM infrastructure, the need for packet methods of synchronization transport arises. Evaluation of these new packet methods of frequency and time transport requires new approaches to timing measurement and analysis. This article describes these new packet measurement techniques and introduces some of the new metrics being used for packet timing data analysis.
INTRODUCTION
Traditionally, timing measurements involve sampling signal edges of an oscillator or network equipment timing signal. Phase (phase deviation) in this context is usually represented in units of time, in which case it is also referred to as time interval error (TIE). It is computed by comparing time-stamped edges to a mathematically derived ideal signal, and frequency is calculated by counting signal edges between each pair of time stamps and taking the ratio of the count to the sample duration. Other useful quantities can be derived from these basic calculations, including Allan deviation (ADEV), modified Allan deviation (MDEV), phase power spectral density (PPSD), time deviation (TDEV), and maximum time interval error (MTIE) [1–3]. The latter two are of particular importance to telecom: MTIE, which describes the maximum phase swings over a time window, and TDEV, which assesses noise processes, have both been used to set network and equipment synchronization limits in the standards bodies. It should be emphasized that these traditional timing measurements are still important for the characterization of packet network timing. Packet network equipment such as a packet slave device produces signals that may be characterized in the same way as those described above, and the definitive way of assessing the performance of such a device is to measure that output timing signal. There is also, however, a great need for studying the timing characteristics of the packet network itself; this is required both for
understanding the behavior of packet network clocks and for designing these devices optimally. This involves a measurement of a different kind. Rather than timing signal edges, packets are time-stamped as they traverse two nodes in a network. The packets could just as well originate or terminate at one of these nodes. Just as for timing signal edges, a primary reference is required in the ideal case for timing packets. In addition, a common timescale at the two nodes is required in the ideal case. This can be provided by a global navigation satellite system (GNSS) such as GPS. In contrast, for the traditional synchronization measurement, only a primary frequency reference is required, as frequency stability and accuracy, not absolute time, are the quantities under study. To summarize, the traditional synchronization measurement is a single-node measurement requiring precision time-stamping of the signal edges with a single primary frequency reference, while the packet timing measurement is a two-node measurement requiring precision time-stamping of the packets, ideally with primary time references at both nodes. Regarding the study of packet timing, there are two aspects of the measurement result that are of interest: the nominal time it takes to transit the network, and the variation in that transit time. The former quantity is referred to as latency, while the latter is called packet delay variation (PDV). The focus of the metrics described below is on the characterization and understanding of PDV, which is also the quantity that presents the greatest challenge for packet timing equipment.
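For readers who prefer to see the traditional quantities in code form, the following minimal Python sketch (illustrative only; the names are not drawn from any standard) derives TIE from time-stamped edges and evaluates MTIE for one observation window length. It is a direct, unoptimized reading of the definitions above.

```python
import numpy as np

def tie_from_edges(edge_times, nominal_period):
    """Time interval error: each time-stamped edge is compared with the
    corresponding edge of an ideal signal constructed from the first edge
    and the nominal period."""
    edges = np.asarray(edge_times, dtype=float)
    ideal = edges[0] + np.arange(edges.size) * nominal_period
    return edges - ideal

def mtie(tie, window):
    """Maximum time interval error for one window length (in samples):
    the largest peak-to-peak TIE excursion over all windows of that length."""
    worst = 0.0
    for i in range(len(tie) - window + 1):
        seg = tie[i:i + window]
        worst = max(worst, float(seg.max() - seg.min()))
    return worst
```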
PACKET TIMING PROBE
Packet timing measurements involve the production or selection of special probe packets within the overall mix of traffic packets in the network. These probe packets could be IEEE 1588 Precision Time Protocol (PTP) packets or Network Time Protocol (NTP) packets, for example. A series of these probe packets is time-stamped at two nodes in the network, most generally in both the forward and reverse directions. There are two approaches to the design of a packet timing probe, one resulting in a passive
probe and the other resulting in an active probe [4, 5]. The passive probe relies on other network equipment to produce the packet stream and serves only to time-stamp the packets as they pass through the probe via a network tap or mirrored port. The active probe establishes a protocol with another paired device as well as time-stamping the probe packets. In the case where an active probe is used to measure packet delay in both directions, each of the two paired devices serves to originate probe packets in one direction and terminate probe packets in the other direction. Examples of both probe types are shown in Fig. 1. In this case, IEEE 1588 PTP packets are used as the probe packets. Note that the setup for the passive probe is more complicated, with a slave device needed to establish the packet protocol and an Ethernet tap needed to send a copy of the packet stream going into and out from the slave into the passive probe. The active probe both sets up the session and time-stamps the packets, and thus requires neither additional piece of equipment. The IEEE 1588 grandmaster supplies PTP packets and time-stamps packets in both directions, just as the probe does. All grandmaster time stamps can be transported to the probe, so it can serve as the central measurement data collection point.
PACKET TIMING MEASUREMENT DATA
It is instructive to look at a sequence of raw packet time stamp pairs and describe how packet delay samples are derived from them. The probe packets that traverse a network under study are emitted from a source at some specified rate; in the example shown in Fig. 2, the rate is 64 Hz. Each individual packet is time-stamped at two nodes in the network. If both forward and reverse directions are being studied, there are two probe packet sequences, denoted in Fig. 2 as F and R time stamp pairs. The F and R time stamps are used to construct forward and reverse packet delay sequences by computing the differences between the time stamps in each pair. To take one example, the first forward packet delay value is the difference between the time stamps in the line F,00167: 1223305830.490552012 – 1223305830.488078908 = 2.473 ⋅ 10–3 s. The placement of each individual packet delay sample is set by the originating time stamp, in this case in Unix time (UTC seconds since January 1, 1970). In the example shown in Fig. 2, the first time stamp for both the forward and reverse sequences occurs at 2009/10/06 15:10:30, which has been assigned a time of 0.0000. The next sample occurs approximately 1/64 s (0.015625 s) later for both the forward and reverse sequences. Thus, two packet delay sequences are produced, forward and reverse, and these are measurements of one-way packet delay. Generally, these two sets of data are analyzed separately, particularly when packet-based frequency transport is the goal. When time transport via a two-way packet timing protocol is of interest, both directions must be considered together. Metrics applicable to both of these situations are discussed in turn below.
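The construction of a delay sequence from time stamp pairs is a one-line subtraction. The sketch below reproduces the first two forward samples of Fig. 2; Decimal is used because nanosecond-resolution Unix time stamps exceed the precision of an ordinary double, and the function name is illustrative.

```python
from decimal import Decimal

def delay_sequence(timestamp_pairs):
    """Convert (source, destination) time stamp pairs into
    (placement, one-way delay) samples, with placement measured from the
    first packet's originating time stamp."""
    t0 = Decimal(timestamp_pairs[0][0])
    return [(Decimal(src) - t0, Decimal(dst) - Decimal(src))
            for src, dst in timestamp_pairs]

# First two forward samples of Fig. 2: delays of 2.473e-3 s and 2.330e-3 s,
# spaced roughly 1/64 s apart.
forward = delay_sequence([
    ("1223305830.488078908", "1223305830.490552012"),
    ("1223305830.503473436", "1223305830.505803244"),
])
```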
Figure 1. Passive and active packet timing probes using IEEE 1588 PTP packets as probe packets.
Figure 2. Packet time stamp pairs, and the corresponding forward and reverse packet delay.
PACKET DELAY STATISTICS
Unlike the phase (TIE) data from a traditional synchronization measurement, where current phase values are generally correlated with neighboring preceding and subsequent phase values because of oscillator stability, the data in a packet delay sequence can vary considerably from point to point. Often a relatively short packet delay measurement will exhibit peak-to-peak behavior similar to that of a much longer measurement; this is true for other statistics, such as the mean and standard deviation, as well. Given these typical characteristics of packet delay sequence data (PDV phase), it is often useful to construct a histogram from the collection of packet delay samples. In other situations, statistics might vary over the duration of a packet delay measurement, in which case dynamically tracking a statistic such as the minimum, mean, or standard deviation can provide insight into the data.
Figure 3. a) Histogram of a six-day packet delay sequence; b) plot of the packet delay sequence (PDV phase) over six days; c) plot tracking standard deviation of the PDV phase over the six days.
PACKET DELAY DISTRIBUTION AND SUMMARY STATISTICS
An example of a histogram formed from a packet delay sequence (PDV histogram) is shown in Fig. 3. This figure shows the raw packet delay sequence from the six-day measurement taken on a production Ethernet network spanning hundreds of kilometers (Fig. 3b) and the corresponding histogram (Fig. 3a). Statistics based on this distribution are shown to the right of the histogram plot: the mean is 4.55 ms, the standard deviation is 13.26 μs, and the peak-to-peak is 145 μs. The shape of the distribution is an important aspect of this approach to the analysis. Of particular note is the asymmetry; this histogram is essentially a one-sided distribution, that is, the tail exists only to the right, since packet transit delays are bounded below by some minimum value and are concentrated near it. There is a much greater probability that packets experience a shorter transit delay than a longer one. Such a one-sided distribution is fairly common for a one-way packet delay measurement. Sometimes particular network equipment (such as firmware-based enterprise routers) will alter this, as will high levels of load, particularly when a network becomes congested. In that case, very few packets, if any, are able to traverse the network in the minimum possible time, and the distribution will exhibit greater symmetry.
DYNAMIC PACKET DELAY STATISTICS
Packet networks are, of course, dynamic by nature, and stationarity cannot be assumed in general, particularly in the long term. While conditions in the packet delay sequence (Fig. 3b) appear to be fairly constant throughout the six days, tracking the standard deviation over 100 s intervals reveals variations in traffic load with clear, repeating 24-hour cycles (Fig. 3c). The cycles peak at approximately 4 p.m. local time and bottom out at approximately 8 a.m. local time.
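A tracked statistic like the one plotted in Fig. 3c can be produced with a few lines of code. The following sketch is illustrative (the window size and choice of statistic are parameters, not fixed by any standard): it evaluates the standard deviation over consecutive, non-overlapping blocks of the delay sequence.

```python
import numpy as np

def track_stat(delays, samples_per_window, stat=np.std):
    """Evaluate a statistic over consecutive non-overlapping blocks of a
    packet delay sequence; 6400-sample blocks correspond to 100 s windows
    at a 64 Hz probe rate."""
    delays = np.asarray(delays, dtype=float)
    n_blocks = len(delays) // samples_per_window
    blocks = delays[:n_blocks * samples_per_window].reshape(n_blocks, -1)
    return np.array([stat(b) for b in blocks])
```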
FREQUENCY TRANSPORT (ONE-WAY) PACKET DELAY METRICS
When a histogram is constructed from a packet delay sequence, any temporal characteristics present are hidden, since the time at which a sample occurred is not preserved; the sample value is rendered into a bin count somewhere in the histogram. Tracking a statistic as described above begins to reveal temporal characteristics, but one can imagine that some of the stability analysis provided for TIE data by calculations such as ADEV, MDEV, MTIE, and TDEV, or related calculations, could be useful for packet delay sequence analysis. Systematic noise processes appear in simple
and complex networks, both because of the technologies employed and because of the characteristics and design of the network devices, such as switches, routers, access equipment (e.g., digital subscriber line [DSL] and gigabit passive optical network [GPON]), and transport equipment (e.g., microwave systems). If a packet clock is to recover frequency optimally in such a situation, an understanding of these temporal characteristics is important. Just as packet slave clock algorithms must select and reject packets to optimize timing performance, analysis algorithms themselves can provide greater insight through packet selection. Modification of two stability metrics used for TIE analysis, TDEV and maximum average time interval error (MATIE, also known as ZTIE), has proved fruitful for packet timing data analysis [6]. These are parallel to the standard TDEV and MTIE metrics used for TIE data analysis; TDEV focuses on systematic noise, and MTIE/MATIE focus on frequency offsets. For packet timing data, large short-term movement reduces MTIE to an essentially single-dimensional quantity, a flat line or nearly flat line at the overall peak-to-peak phase; this is the rationale for choosing the averaging in MATIE for packet timing data. A normalized version of MATIE, called maximum average frequency error (MAFE), is derived from MATIE by dividing each MATIE(τ) value by the applicable τ. Two approaches have been taken to incorporating packet selection into these calculations: preprocessed packet selection and integrated packet selection. A number of packet selection algorithms are possible. Three are described in detail below in the discussion of preprocessed and integrated packet selection: min, percentile, and band. All three can be applied using either the preprocessed or integrated approach. These packet selection methods are related to the client/slave algorithms themselves: in order for a packet slave to optimally recover time and frequency, packet selection is essential. In this way, calculations with packet selection provide insight into how a packet slave algorithm might perform under the studied network conditions.
PREPROCESSED PACKET SELECTION
With preprocessed packet selection, a packet selection algorithm is applied to the packet delay sequence (PDV phase), and the standard TDEV, MATIE, or MAFE calculation is run on the modified packet delay sequence. If x is the original packet delay sequence and x′ is the modified packet delay sequence, the idea is that TDEV, MATIE, or MAFE operates on the x′ sequence rather than the original x sequence. There are several important parameters or options that must be chosen when applying preprocessed packet selection. One is the selection time window duration. Another is whether or not to overlap the time windows. As an example, a 100 s time window might be chosen with non-overlapping windows. If the measurement duration is 50,000 s, the sequence would be divided into 500 segments of 100 s, with each segment producing a single value based on packet selection. If overlapping windows are chosen instead, more points are generated in the x′ sequence. If
the window advances point by point, there are nearly as many points in the x′ sequence as there were in the original x sequence. As indicated above, a number of packet selection algorithms are possible. A fairly simple one, min, is minimum selection, which is effective for sequences with a good population of minimum-delay packets. Drawing on the 100 s time window example above, and assuming a 64 Hz probe packet rate, the minimum packet delay would be selected from the 6400 samples in a 100 s window, with each selected minimum over 100 s producing a single sample in the x′ sequence. A similar selection algorithm, percentile, takes a number of packets at or near the minimum and averages them together to produce an x′ sequence sample; for example, the minimum 10 packet delay samples could be found in a 100 s window and averaged together. Generalizing the percentile selection method further, the band selection method first sorts all the data in a time window from minimum to maximum, then selects a cluster of points in some chosen range, say between the 20th and 30th percentiles, and averages them together to produce a sample in the x′ sequence.
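The three selection methods can be expressed compactly. The sketch below is illustrative (the parameter names and defaults are not taken from [6, 7]); it produces the preprocessed x′ sequence for min, percentile, or band selection over either disjoint or sliding windows.

```python
import numpy as np

def preprocess(x, window, select="min", lo=0.2, hi=0.3, overlap=False):
    """Reduce a packet delay sequence x to a selected sequence x':
    'min'        -> minimum delay in each window,
    'percentile' -> mean of the samples below the `hi` fraction of the window,
    'band'       -> mean of the samples between the `lo` and `hi` fractions.
    With overlap=False the windows are disjoint; otherwise the window slides
    one sample at a time.  window is given in samples (e.g., 6400 samples
    for a 100 s window at a 64 Hz probe rate)."""
    x = np.asarray(x, dtype=float)
    step = 1 if overlap else window
    out = []
    for i in range(0, len(x) - window + 1, step):
        w = np.sort(x[i:i + window])
        if select == "min":
            out.append(w[0])
        elif select == "percentile":
            out.append(w[:max(1, int(hi * len(w)))].mean())
        else:  # band, e.g. 20th to 30th percentiles with the defaults above
            a = int(lo * len(w))
            b = max(a + 1, int(hi * len(w)))
            out.append(w[a:b].mean())
    return np.asarray(out)
```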
INTEGRATED PACKET SELECTION
TDEV and MATIE (and, by extension, MAFE) all contain averaging as a component of the calculation. Integrated packet selection replaces the averaging with a selection process such as the min, percentile, and band selections described above. As a result, a new self-contained metric is formed, such as minTDEV, percentileTDEV, bandTDEV, minMAFE, percentileMAFE, or bandMAFE. These metrics and packet selection procedures have been discussed in the Alliance for Telecommunications Industry Solutions (ATIS) and the International Telecommunication Union (ITU), and are included in recently published documents [6, 7]. The metrics based on the band selection process are the most general; in fact, the other selection processes and the standard calculation are all special cases of the band metric. As an example, consider the TDEV group. Standard TDEV is bandTDEV with indexing based on the 0 and 100 percentiles. The minTDEV metric is bandTDEV with both indices chosen at the 0 percentile. The percentileTDEV calculation is bandTDEV with the lower index based on the 0 percentile. To clarify this, definitions of TDEV, minTDEV, and bandTDEV are shown in Eqs. 1, 2, and 3, which also serve to clarify the min and band selection methods. For the bandTDEV calculation as defined below, the x′ sequence represents the sorted phase sequence, from minimum to maximum, over the range i ≤ j ≤ i + n – 1. The indices a and b are set based on two selected percentile levels, A and B. The averaging is then applied to the x′ variable indexed from a to b. The number of averaged points m is related to a and b: m = b – a + 1.
$$\mathrm{TDEV}(\tau) = \sqrt{\frac{1}{6}\left\langle \left[\frac{1}{n}\sum_{i=1}^{n} x_{i+2n} - \frac{2}{n}\sum_{i=1}^{n} x_{i+n} + \frac{1}{n}\sum_{i=1}^{n} x_{i}\right]^{2}\right\rangle} \qquad (1)$$

$$\mathrm{minTDEV}(\tau) = \sqrt{\frac{1}{6}\left\langle \left[x_{\min}(i+2n) - 2\,x_{\min}(i+n) + x_{\min}(i)\right]^{2}\right\rangle}, \quad x_{\min}(i) = \min_{i \le j \le i+n-1} x_{j} \qquad (2)$$

$$\mathrm{bandTDEV}(\tau) = \sqrt{\frac{1}{6}\left\langle \left[\bar{x}_{\mathrm{band}}(i+2n) - 2\,\bar{x}_{\mathrm{band}}(i+n) + \bar{x}_{\mathrm{band}}(i)\right]^{2}\right\rangle}, \quad \bar{x}_{\mathrm{band}}(i) = \frac{1}{m}\sum_{j=a}^{b} x'_{j+i} \qquad (3)$$

where ⟨·⟩ denotes an average over the index i, and n is the number of samples spanning the integration time τ.
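Read literally, Eqs. 2 and 3 translate into only a few lines of code. The sketch below is an illustrative, unoptimized transcription that assumes an evenly spaced delay sequence (so that τ corresponds to n samples); setting lo = hi = 0 reproduces minTDEV, and lo = 0, hi = 1 reproduces the standard TDEV form of Eq. 1.

```python
import numpy as np

def band_tdev(x, n, lo=0.0, hi=1.0):
    """bandTDEV for one integration time spanning n samples (Eq. 3):
    within each n-sample window the sorted samples between the lo and hi
    fractions are averaged, and the TDEV second difference is applied to
    the resulting sequence.  Requires len(x) > 3*n."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    xb = np.empty(N - n + 1)
    for i in range(N - n + 1):
        w = np.sort(x[i:i + n])
        a = int(lo * (n - 1))
        b = max(a, int(hi * (n - 1)))
        xb[i] = w[a:b + 1].mean()          # band mean of the window starting at i
    d = xb[2 * n:] - 2.0 * xb[n:-n] + xb[:-2 * n]   # second difference over n-sample strides
    return np.sqrt(np.mean(d ** 2) / 6.0)
```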
Again, relating the other TDEV calculations to bandTDEV:
1) TDEV is bandTDEV (0.0 to 1.0).
2) minTDEV is bandTDEV (0.0 to 0.0).
3) percentileTDEV is bandTDEV (0.0 to B), with B between 0.0 and 1.0.
The effectiveness of packet selection is illustrated in the two measurement examples shown in Fig. 4. The first measurement (Fig. 4a) was taken on a laboratory network composed of five carrier-class switches, with multiple-stream traffic generation composed of small, medium, and large packets: 64 bytes, 576 bytes, and 1518 bytes, respectively. In this example, where 80 percent of the packet delay data is at or very near the minimum, minTDEV noise levels are below TDEV noise levels for all values of integration time τ. Clearly, a packet clock algorithm based on minimum packet selection would likely have great advantages over one that utilizes all data without any packet selection in this situation. Similarly, in a measurement taken on a network of cascaded routers (Fig. 4b), where the packet delay data is concentrated in a band above the minimum, advantages can be achieved by focusing the TDEV analysis on that band. This is accomplished using the bandTDEV calculation with an appropriate choice of lower and upper indices based on chosen lower and upper percentiles; in this case, the 40th and 60th percentiles were chosen. In so doing, the bandTDEV calculation shows lower noise than the standard TDEV calculation throughout most of the range of integration time τ.
Figure 4. a) PDV histogram followed by TDEV compared to minTDEV for data with a well-populated minimum; b) TDEV compared to bandTDEV for data concentrated above the minimum.
TIME TRANSPORT (TWO-WAY) PACKET DELAY METRICS
Packet-network-based time transport requires more than the assessment of variations in packet delay required for frequency transport. While packet network frequency transport can benefit from a two-way protocol, a stream in a single direction could be used. Time transport, by contrast, requires a two-way protocol in order to estimate one-way packet delay using a measurement of a round trip. In such a situation, asymmetry between the upstream and downstream paths has a direct impact on the ability to accurately transport time; thus, one of the goals of two-way transport metrics is to assess asymmetry. The measurement setup for two-way packet timing demands that both forward and reverse packet flows are measured at the same time and that there is a common, stable time reference at both measurement points. The one-way measurement for the frequency transport metrics requires only that the two clocks run at the same rate, since packet delay variation is the critical quantity. Further, for frequency transport, one-way packet flows can be studied independently.
TWO-WAY PACKET TIMING MEASUREMENT DATA
The first step toward performing two-way packet timing analysis is the construction of a two-way data set from a simultaneously measured pair of upstream and downstream one-way packet delay sequences. The procedure for doing so is outlined in the top and middle parts of Fig. 5. The forward and reverse packet delay sequences shown in Fig. 5 are themselves derived from four time stamps, two time stamp pairs, such as those shown in the top part of Fig. 2.
Based on the time at which individual samples occur, forward and reverse packet delay sequence samples are combined into samples with three components: time, forward packet delay, and reverse packet delay. Organizing the data in this way makes it convenient to perform two-way packet timing calculations, such as investigating asymmetry in packet transport.
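In code, the combination step is trivial once the two sequences are aligned. The sketch below assumes the forward and reverse probes time-stamp on the same nominal grid, as in Fig. 5; real captures would first be matched on their originating time stamps.

```python
def combine_two_way(forward, reverse):
    """Merge simultaneously measured forward and reverse packet delay
    sequences, each a list of (time, delay) pairs, into
    (time, forward_delay, reverse_delay) samples.  Assumes the two
    sequences are already aligned sample for sample."""
    return [(t, f, r) for (t, f), (_, r) in zip(forward, reverse)]
```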
TWO-WAY PACKET TIMING BASIC CALCULATIONS
There are two fundamental quantities for two-way packet timing analysis: round-trip and offset. These form the basis for the metrics involving packet selection discussed in the next section. The round-trip calculation sums the forward and reverse packet delay, and the offset calculation takes the difference of the two. It is useful to consider these from the perspective of one-way delay, since one-way packet delay is often estimated as half of the round-trip packet delay for devices that can only measure round-trip delay. The bias produced by a packet algorithm is related to half the full offset, since the client/slave device is dividing its locally made full round-trip measurements by two. For instance, if the actual forward and reverse one-way delays in a network were 99 μs and 101 μs, a packet slave would estimate a one-way delay of 100 μs, which would result in a 1 μs bias for each direction, half of the 2 μs difference. This kind of normalization is accomplished by dividing the full round-trip and full offset by two; these will be referred to as normalized round-trip and normalized offset. The importance of this normalization for the offset calculation is illustrated by the measurement example shown in Fig. 6, where a normalized packet offset predicts the 2 μs bias in the slave device.
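The two basic quantities and their normalization reduce to two lines of code; the short sketch below (names are illustrative) reuses the 99/101 μs case from the text.

```python
def two_way_metrics(fwd_delay, rev_delay):
    """Normalized round-trip and normalized offset for one two-way sample.
    Half the round-trip is the one-way delay a slave would estimate, and
    half the forward/reverse difference is the bias that estimate carries."""
    norm_roundtrip = (fwd_delay + rev_delay) / 2.0
    norm_offset = (fwd_delay - rev_delay) / 2.0
    return norm_roundtrip, norm_offset

# 99 us forward and 101 us reverse delay give a 100 us one-way estimate and
# a 1 us bias magnitude (half of the 2 us asymmetry).
rt, off = two_way_metrics(99e-6, 101e-6)   # rt = 100e-6, off = -1e-6
```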
TWO-WAY PACKET TIMING CALCULATIONS WITH PACKET SELECTION
Just as one-way packet timing analysis can benefit from packet selection, so can two-way analysis. Figure 5 shows construction of a new set of two-way data by applying min packet selection to both the forward and reverse packet delay sequences. In this case, for the purpose of illustration, the time window is set to three samples, and non-overlapping windows are used; the resulting new data set is thus one-third the size of the original data set. Normalized offset and normalized round-trip can be calculated using a min-packet-selection version of the two-way data, producing new metrics minRoundtrip and minOffset. These are then the basis for other metrics. One such metric is produced by plotting minOffset against minRoundtrip as a scatter plot to form minimum time dispersion (minTDISP). Such a relation does not produce a plot with unique values of minRoundtrip mapping to single values of minOffset; that is, this is not a plot of minOffset as a function of minRoundtrip. This is why the data is best plotted as a scatter plot rather than by connecting the dots.
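Combining window-based min selection with the normalized quantities gives the points of a minTDISP scatter plot. The sketch below (window length and names are illustrative) mirrors the 3-sample example of Fig. 5.

```python
def min_selected(two_way, window):
    """Apply min selection separately to the forward and reverse components
    of a two-way data set (non-overlapping windows of `window` samples),
    then form the normalized (minRoundtrip, minOffset) pairs that make up
    a minTDISP scatter plot."""
    pts = []
    for i in range(0, len(two_way) - window + 1, window):
        chunk = two_way[i:i + window]
        f_min = min(f for _, f, _ in chunk)
        r_min = min(r for _, _, r in chunk)
        pts.append(((f_min + r_min) / 2.0, (f_min - r_min) / 2.0))
    return pts
```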
Figure 5. Constructing a two-way data set from forward and reverse packet delay sequences, and then f′ and r′ from f and r (the two-way data set) with a 3-sample time window.
As is the case for all calculations based on minOffset, a time window must be chosen as an input parameter. The calculation could be repeated for different time windows, each of which would produce a different minTDISP scatter plot. The ideal situation for minTDISP is a cluster of data sitting on the x-axis with little dispersion either above or below the x-axis (little variation in minOffset away from zero) and little dispersion along the x-axis (limited variation in minRoundtrip). The two-way packet measurement in Fig. 6, taken on an Ethernet network used for wireless backhaul, is represented as a minTDISP plot (Fig. 6a). The apex of the minTDISP data converges to a minOffset of –2 μs. Thus, a clear asymmetry between forward and reverse channels is seen in this packet network measurement. A measurement of a slave clock operating under these asymmetrical conditions (Fig. 6b) not surprisingly shows the same 2 μs error when recovering time. Other two-way metrics have been proposed and are being studied in industry and within standards bodies. Since asymmetry is of particular interest, in many cases these are based on minOffset. Examples are minOffset mean, minOffset standard deviation, and minOffset percentile plotted as a function of time window. In contrast to minTDISP, where a particular packet selection time window is chosen in advance of performing the calculation, these minOffset mean/standard deviation/percentile calculations start with a small time window to produce a single numerical result, then increase the time window to produce additional numerical values, finally plotting these results against the time window value. The values themselves are a chosen statistic of a minOffset sequence, such as the mean, standard deviation, or 95th percentile. Extensions to these can be imagined by integrating other packet selection techniques such as percentile or band; another possible extension would be to plot some other statistic, such as one based on the Allan deviation family of metrics, against time window.
Figure 6. a) Ethernet wireless backhaul asymmetry; b) IEEE 1588 slave 1PPS under these asymmetrical network conditions.
CONCLUSIONS As this article has discussed, a number of new metrics are currently being studied for the analysis of packet delay variation data. Just as was the case for traditional synchronization measurements, these metrics serve several purposes. First, they are tools for gaining insight into the behavior of the timing characteristics of packet networks. Second, some of them, perhaps in conjunction with other related new metrics, can form the basis for setting packet network limits, much has been done with MTIE and TDEV for traditional synchronization measurements. At this stage, standards committees have been studying and defining the new packet metrics. The next step will be to use these metrics to define limits based on application requirements for the synchronization signals produced by the packet client/slave device and understanding of the essential characteristics of the packet algorithms that convert packet timing into synchronization signals. It is worth pointing out that these synchronization signals themselves can be measured in the traditional way with network limits such as MTIE and TDEV masks with the choice of mask based on the particular application requirements. For example, if an application demands better than 15 parts per billion frequency offset, the measurement of the packet slave synchronization output signal can be analyzed for this. As discussed above, these new metrics incor-
154
22.7 hours
porating packet selection provide insight into packet algorithms since the algorithms themselves rely on packet selection to produce optimal timing performance. The need for synchronization in packet networks arises as the telecommunications infrastructure migrates from circuit-switched networks to packet-switched networks. Applications relying on precise time and frequency drive the requirement for packetbased synchronization and the necessity for packet network timing analysis.
REFERENCES [1] S. Bregni, Synchronization of Digital Telecommunications Networks, Wiley, 2002. [2] K. Shenoi, Synchronization and Timing in Telecommunications, BookSurge Publishing, 2009. [3] ITU-T Rec. G.810 “Definitions and Terminology for Synchronization Networks,” 1996. [4] L. Cosart, “Precision Packet Delay Measurements Using IEEE 1588v2,” ISPCS ‘07, Vienna, Austria, Oct. 2007. [5] L. Cosart, “Packet Network Timing Measurement and Analysis Using an IEEE 1588 Probe and New Metrics,” ISPCS ‘09, Brescia, Italy, Oct. 2009. [6] ATIS-0900003.2010 Technical Report “Metrics Characterizing Packet-Based Network Synchronization,” 2010. [7] ITU-T Rec. G.8260 “Definitions and Terminology for Synchronization in Packet Networks: Appendix I,” 2010.
BIOGRAPHY

LEE COSART ([email protected]) is a senior technologist with Symmetricom, Inc. A graduate of Stanford University, he worked as an R&D engineer at Hewlett-Packard/Agilent prior to joining Symmetricom in 1999. His R&D activities have included measurement algorithm development and mathematical analysis for a variety of test equipment for which he holds several patents. He serves on the ATIS and ITU-T committees responsible for network synchronization standardization as chair, contributor, and editor.
SYNCHRONIZATION OVER ETHERNET AND IP NETWORKS
Performance Aspects of Timing in Next-Generation Networks
Kishan Shenoi, Shenoi Consulting
ABSTRACT

Circuit-switched networks based on time-division multiplexing require synchronization to deliver information, whereas packet-switched networks can deliver information in an asynchronous environment. However, all real-time services require that synchronization and timing information be delivered over the network. Performance of timing distribution is quantified using particular metrics, and adherence to requirements is determined by using masks. The traditional metrics, TDEV and MTIE, have extensions to packet-switched networks for addressing the corruption of timing information by packet delay variation. The principles of metrics and masks and these extensions are presented here.
BACKGROUND

The principal requirement for synchronization and timing in telecommunications networks is to support real-time services such as voice and video communications, particularly of an interactive or conversational nature. For example, for signals such as speech, voice-band data (modem/fax), and video, the information signal (analog) at the source end is converted into digital format at a particular sampling rate. At the destination there is a conversion back to analog, and if the conversion rates are not equal, the quality of experience (QoE) degrades. In traditional circuit-switched telecommunications networks the need for network-wide synchronization arises from the nature of the multiplexing and switching methods employed, and is required to ensure proper communication of the information (bits) itself. In time-division multiplexing (TDM) networks the transmitted signals themselves are often suitable for carrying timing information [1]. This enables the creation of a synchronization network with each node provided timing information traceable to a primary reference clock (PRC) [1, 2]. In traditional networks the service requirements are thus piggybacked on transport requirements and thereby available by default. In next-generation networks (NGN) that are based on packet switching and statistical multiplexing
principles, network-wide synchronization is not required to keep the transmission pipes operating in a bit-error-free mode. However, the need for synchronization remains for the services provided, particularly in the case of real-time signals and the circuit emulation of constant bit rate data streams. Whereas good synchronization will improve the functioning of a packet network as a whole, good timing is essential at all points in the network at which service is delivered or where a format conversion is required for converting between packet-switched and circuit-switched formats. Equipment performing such circuit-to-packet (C2P) conversion functions is referred to as an interworking function (IWF) or gateway, and synchronization is required to reliably support real-time applications such as circuit emulation services (CES), voice over IP, video over IP, IPTV, and so on. The notion of a synchronization network can be realized in packet networks by embedding timing information in the physical layer, as exemplified by Synchronous Ethernet. Alternatively, timing information can also be carried at a higher layer, as exemplified by methods based on the Precision Time Protocol (PTP) [3] or the Network Time Protocol (NTP) [4]. Quantifying timing requirements requires the development of suitable metrics and analytical methods. These and related performance aspects are the subject of this article. In the ensuing sections a brief overview of timing fundamentals is provided, followed by an explanation of how packet-based methods transfer timing. Two groups of metrics, the TDEV and MTIE families, are discussed to clarify how they quantify the departure of a timing signal from ideal.
TIMING FUNDAMENTALS

ITU-T Rec. G.810 [5] defines a clock as “an equipment that provides a timing signal” and further explains that in the context of telecommunications networks a clock can be viewed as a signal generator that provides the appropriate signals to other devices in the network, effectively synchronizing the network. This definition can be extended to include the subsystem within a network element that governs the temporal behavior of other functions of the network element,
especially related to output signals and functions such as analog-to-digital and digital-to-analog conversions. Implicit in the notion of a timing signal is the information that can be utilized to generate a clock signal, most often pictorially viewed as a pulse train where a rising edge of the waveform identifies the instant of interest.

Figure 1. Illustrating the notion of clock noise (time error and packet delay variation).
CLOCK NOISE AND PACKET DELAY VARIATION

The notion of clock noise, or time error, is illustrated in Fig. 1a. When the principal item of interest is the frequency of the clock, the noise in the clock output is obtained as the time (phase) difference between the timing edge of the clock and the corresponding edge of an ideal or reference clock that is known to be of much higher quality than the clock under test. The sequence {x(n)} (an equivalent notation is {xn}) represents the time error. In practice it is a measured quantity, and various metrics can be computed on the vector {x(n); n = 0, 1, 2, …, (N – 1)}. When the principal characteristic of interest is time (sometimes referred to as wall-clock time), the time error is simply the difference between the wall-clock reading of the clock under test and that of the reference clock. In such a case the time interval between samples of this time error computation need not be uniform, but it is important that the wall-clock readings apply to the same instant in time. In packet-based methods the timing information is based on the time of arrival and time of departure of packets. If Tn is the time of departure of timing packet #n, and the time of arrival at the destination is Sn, the transit time
of the packet is simply v(n) = (Sn – Tn). The packet delay variation (PDV) is defined as the difference between the actual transit delay and a reference transit delay. There are other definitions of PDV that are targeted to other applications [6]. The specific choice of reference delay is application-dependent; common choices are the delay of the first packet of the session, the minimum delay, the average delay, and so on. The PDV sequence is analogous to the time error sequence, and the same collection of metrics can be computed. In traditional TDM architectures physical layer methods are employed to deliver timing information. For example, the transmission schemes in optical networks based on the synchronous digital hierarchy (SDH) are isochronous, with suitable signal features that allow the receiver of the signal to extract a recovered clock that represents the characteristics, from a timing perspective, of the clock employed in the transmitter. In packet-based methods, a timing packet flow is established between the master (source of timing) and slave (recipient of timing). In fact there could be packet flows in either or both directions. Timing information in this scenario comprises the time-stamps associated with the instants a packet leaves one clock and arrives at the other. Clock noise is introduced in the form of packet delay variation and the effects of imprecision in the manner in which the time-stamps representing the times of arrival and departure are struck. Nevertheless, even in packet-based methods, a clock output is generated that can be of the form in Fig. 1a.
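As a concrete illustration of these definitions, the following minimal sketch computes the transit-time sequence v(n) = Sn – Tn and a PDV sequence relative to a chosen reference delay. The function name and structure are illustrative only, not part of any standard or of the article:

```python
import numpy as np

def pdv_sequence(departure_times, arrival_times, reference="min"):
    """Transit times v(n) = S(n) - T(n) and PDV relative to a reference delay."""
    T = np.asarray(departure_times, dtype=float)   # times-of-departure T(n)
    S = np.asarray(arrival_times, dtype=float)     # times-of-arrival S(n)
    v = S - T                                      # per-packet transit delay
    if reference == "min":
        ref = v.min()                              # minimum delay
    elif reference == "mean":
        ref = v.mean()                             # average delay
    else:
        ref = v[0]                                 # delay of the first packet of the session
    return v, v - ref                              # transit delays and PDV sequence
```

The resulting PDV sequence can then be fed to the same metrics that are applied to a time error sequence.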
CLOCK NOISE MODEL
The time error is modeled as a combination of systematic and random components and expressed as:

x(t) = α + β ⋅ t + 0.5 ⋅ D ⋅ t² + φ(t)   (1)
Systematic components include a constant, α (the initial phase error); a possible linear function of time, β ⋅ t (β is the fractional frequency offset); and in some cases a quadratic term, 0.5 ⋅ D ⋅ t² (D is the frequency drift); φ(t) is the random component. For quantifying the error in frequency alignment any constant phase error (i.e., α) can be ignored; for time alignment (or phase alignment) the constant term should be (nominally) zero. A similar model can be postulated for delay in packet networks. However, if we assume that the network does not create or destroy packets, then on a long-term basis β and D should be zero. This assumption is valid since packets used for timing purposes can be sequence numbered, and consequently duplicate and/or lost packets can be identified. Clearly, the ideal value of time error (or packet delay variation) is identically zero. All timing metrics quantify the non-zero behavior of the time error. Note that the time error is essentially a random process and, in practice, metrics are computed on the (measured) time error sequence {x(n)}. Thus the computed metrics are akin to estimates of statistical expectations. An implicit assumption is that performance estimates based on data obtained from one sufficiently long experiment are adequate to characterize the statistical behavior of the clock output or timing signal. This is definitely true for static conditions such as in the study of the behavior of traditional (TDM) network equipment clocks. For dynamic conditions such as those encountered in packet networks the packet delay variation may not be a stationary process. Nevertheless, metrics are invaluable for purposes of analysis. However, the analytical thought processes involved are somewhat more complex than in the static case and require a good understanding of the metric, its strengths and weaknesses, and its areas of applicability. Additional explanation and observations regarding metrics for evaluating timing performance are available in [1, 7–9].
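The systematic components in Eq. 1 can be estimated and removed from a measured time error sequence before stability metrics are computed. A minimal sketch, assuming uniformly sampled data and an ordinary least-squares quadratic fit (the function and variable names are hypothetical, not from the article):

```python
import numpy as np

def remove_systematic(x, tau0):
    """Fit x(t) ~ alpha + beta*t + 0.5*D*t^2 and return the residual phi(t).

    x    : measured time error sequence {x(n)}, in seconds
    tau0 : nominal sampling interval, in seconds
    """
    t = np.arange(len(x)) * tau0
    # np.polyfit returns highest-order coefficient first: [0.5*D, beta, alpha]
    half_D, beta, alpha = np.polyfit(t, x, 2)
    systematic = alpha + beta * t + half_D * t**2
    return {"alpha": alpha, "beta": beta, "D": 2.0 * half_D,
            "residual": np.asarray(x) - systematic}
```

The fitted β gives an estimate of the fractional frequency offset, and the residual approximates the random component φ(t).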
PACKET-BASED TIMING METHODS

Physical layer methods used in traditional TDM networks are well documented [1, 7]. Here the emphasis is placed on methods that are suitable for NGNs and are based on the transfer of packets between two devices. The Precision Time Protocol (PTP) [3] and the Network Time Protocol (NTP) [4] are two protocols that are being studied for use in packet-based timing methods for telecommunications applications. From a time-transfer perspective the two methods are identical in principle. Both protocols specify mechanisms for communicating timing information comprising the time-of-arrival and time-of-departure of designated packets, and provide specific formats for the time-stamps used to convey this timing-related information.
Synchronization of slave to master necessitates that the notion of 1 second be the same (or nearly so) in both devices. This implies that the two are syntonized (aligned in frequency). Syntonization implies that the wall-clock values progress at the same rate, and thus the wall-clock difference, or time offset, will be constant. Whereas there are other methods of ranging to obtain the time offset (ε), PTP and NTP devices estimate ε from a two-way packet exchange under the assumption that the one-way delay is one-half the round-trip delay. That is, there is an implicit assumption that the delay is the same in the two directions. Any asymmetry will result in an error in the offset estimate (εerr). Figure 2 indicates the calculation in the case of a single exchange of packets. In practice a flow of timing packets is set up between the devices in order to address packet delay variation and local oscillator drift. Delivery of timing information across a packet network is based on some fundamental premises:
• Every path between source and destination has the notion of a nominal delay. The delay can change from packet to packet, and it is this packet delay variation (from nominal) that introduces the clock noise that impacts clock recovery.
• Missing or duplicated packets can be detected. Consequently, on a long-term basis at least, the rate at which packets leave the source matches the rate at which packets are (perceived to be) received at the destination.
• The local timekeeping mechanism is based on a local oscillator that is reasonably stable. Generally speaking, a longer observation interval provides a better estimate of the appropriate oscillator frequency correction. This is true provided the local oscillator remains constant over the interval. Any drift in the oscillator (frequency) will corrupt the estimate.
• The network attempts to deliver each packet, individually, in a timely manner. Store-and-forward schemes can protect the content of packets but could disrupt the timing information.
• For clock recovery purposes, it is permissible to select, from the population of packets, a subset that exhibits particular properties. This selection process is part of the “secret sauce” of algorithms.
A common misconception is that timing information is carried solely by time-stamps embedded in the packet directly or by an association mechanism. Actually, time-stamps are just one aspect, albeit a very important one, of timing information. The time-of-departure and time-of-arrival instants are equally important, and timing information implies the association of time-stamps with these events. Any error, or lack of precision, in associating the time-stamp with the representative event is not distinguishable from network-introduced packet delay variation.

Figure 2. Principles of timing transfer over packet networks. (From the figure: ε ≅ [(τ2 − T1) − (T4 − τ3)]/2; εerr = (ΔMS − ΔSM)/2 under path asymmetry.)
THE CLOCK RECOVERY FUNCTION

In order to better appreciate the role of metrics and masks, a brief explanation of the principles of clock recovery is provided.
In essence, clock recovery can be viewed as the process of regenerating a clock that matches the source (master) by filtering out the accumulated clock noise in the timing reference. The general structure of the signal processing associated with clock recovery is depicted in Fig. 3. The slave generates a control signal for adjusting its local time-base by comparing its output to the timing reference extracted from the signal sent by the master. In the TDM case, the reference is obtained by extracting a recovered clock from the physical layer transmission medium; in packet-based methods the reference is extracted based on time-stamps in the packets and time-of-arrival/departure measurements based on the local clock. This model of a locked loop (e.g., phase-locked) is common to all clock recovery schemes. It is important to observe that in both the TDM and packet-based cases, the clock noise of the master and the transit-time variation introduced by the transmission medium affect the slave clock in much the same manner, in principle. Two important differences are the rate of transfer, typically very high in TDM, and the scale of time error, usually much higher in packet-based methods (see [7, 9] for more related information). From a signal processing perspective, the loop presents a low-pass filter characteristic between the timing reference input and the clock output; between the local oscillator and the output the effective transfer function is high-pass in nature. The high-pass and low-pass characteristics have the same (nominal) cut-off frequency. The implication is that very narrow-band low-pass characteristics require high performance oscillators. In actual packet-based clock recovery algorithms, sophisticated packet pre-processing techniques are employed to reduce the power of the clock noise in the timing reference as seen
by the loop and thereby, albeit to a limited extent, relax the oscillator performance requirements (and therefore cost). One consequence of this low-pass behavior is that the clock noise in the output contains all the low-Fourier-frequency components present in the timing reference fed to the loop. This implies that the tolerance limit of the loop and the clock output requirements are almost the same for large observation intervals (i.e., low Fourier frequencies), with some allowance for locally generated noise. The observation interval that qualifies as “large” is related to the bandwidth (or time constant) of the clock recovery loop.

Figure 3. Clock recovery processing.
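To make the low-pass/high-pass behavior concrete, the sketch below simulates a hypothetical discrete-time first-order locked loop; it is an illustration of the principle, not the article's servo design. The loop output tracks the low-frequency content of the noisy timing reference while high-frequency reference noise is attenuated, and the free-running local oscillator error is high-pass filtered onto the output:

```python
import numpy as np

def first_order_loop(ref_error, osc_error, bandwidth_hz, fs_hz):
    """Illustrative first-order clock recovery loop.

    ref_error : time error seen on the timing reference (including PDV), s
    osc_error : cumulative time error of the free-running local oscillator, s
    bandwidth_hz : nominal loop (cut-off) bandwidth
    fs_hz        : loop update rate
    """
    g = 2.0 * np.pi * bandwidth_hz / fs_hz        # proportional gain per update (g << 1)
    y = np.zeros(len(ref_error))                  # time error of the recovered clock
    for n in range(1, len(ref_error)):
        osc_step = osc_error[n] - osc_error[n - 1]       # oscillator contribution this update
        y[n] = y[n - 1] + osc_step + g * (ref_error[n - 1] - y[n - 1])
    return y
```

A narrower bandwidth (smaller g) suppresses more reference noise but lets more of the oscillator noise through, which is why narrow-band loops require better oscillators.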
THE TDEV FAMILY OF METRICS

Time Variance (TVAR) and its square root, Time Deviation (TDEV), are metrics that have been used within the timing community for several decades [1, 7]. TVAR can be interpreted as a spectral decomposition of the clock noise power. Whereas a power spectrum is a function of Fourier frequency, f, TVAR is a function of a time variable τ, usually referred to as the observation interval; f and τ are (proportional to) reciprocals of each other. The value of τ determines the center frequency and bandwidth of a filter, and TVAR(τ) is the power observed when the given time-error signal is passed through this filter. Assuming the nominal sampling interval is τ0, TVAR (and TDEV) are computed for values of τ that are integer multiples of τ0 (i.e., τ = n ⋅ τ0). TDEV, and other metrics of the TDEV family, provide insight into the stability of the timing signal (see [1, 7, 8] for additional discussion of clock stability metrics). These are developed on measured values of the time error and, in packet network scenarios, are based on measurements of
the transit delay of (timing) packets over the network between the master and slave. These metrics are, generically, xTDEV(τ = n ⋅ τ0), where τ0 is the (nominal) sampling interval of measurement, typically the (nominal) packet interval, and are computed over the measurement data set comprising N values, {x(kτ0); k = 0, 1, …, (N – 1)}. For a given value of the observation interval τ = n ⋅ τ0, a representative value, Xm, is developed for the n samples in each observation window spanning the indices k = m through k = (m + n – 1). The x in xTDEV identifies the manner in which this representative value is obtained. xTDEV is a measure of the standard deviation of {Xm} after removing any constant and linear contribution, as is clear from the double difference (Xm+2 – 2 ⋅ Xm+1 + Xm) in Eq. 2 (see [7] for additional information):

xTDEV(τ = n ⋅ τ0) = sqrt{ [1/(6(M – 2))] ⋅ Σ_{m=0..M–3} [Xm+2 – 2 ⋅ Xm+1 + Xm]² }   (2)

where M is the number of representative values Xm.
For regular TDEV, the representative value is computed as the average of the n samples in the observation interval. Different choices for x that have been proposed include:
• min (minTDEV): The representative value, Xm, is the minimum of the n samples.
• band (bandTDEV): Given two probabilities (expressed as percentages), α and β with β > α, Xm is the average of the population of samples in the β-percentile of least delay, after removing the samples in the α-percentile of least delay, within the n samples in the mth observation window. percentileTDEV is a special case of bandTDEV with α = 0.
• cluster (clusterTDEV): In this case Xm is the average of the samples that fit a particular rule, called the clustering rule. That is, clusterTDEV is a class of TDEV defined by a selection process. Numerous choices for the clustering rule can be proposed.
Generally speaking, xTDEV provides a measure of the stability of the timing information derived from the packet transit delay sequence if the rule x is applied in the selection process to select a representative sample over an observation interval of duration τ = n ⋅ τ0.
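A minimal sketch of this family of computations, assuming Eq. 2 operates on the per-window representative values and using either the mean (regular TDEV) or the minimum (minTDEV) as the selection rule x. The code is illustrative, not a standardized implementation:

```python
import numpy as np

def x_tdev(x, n, rule="mean"):
    """xTDEV at observation interval tau = n*tau0 (illustrative, per Eq. 2).

    x    : measured time error / packet transit delay sequence {x(k*tau0)}
    n    : window length in samples
    rule : 'mean' (regular TDEV) or 'min' (minTDEV)
    """
    x = np.asarray(x, dtype=float)
    select = np.mean if rule == "mean" else np.min
    # Representative value X_m for each window of n samples starting at k = m
    X = np.array([select(x[m:m + n]) for m in range(len(x) - n + 1)])
    M = len(X)
    if M < 3:
        raise ValueError("not enough samples for this observation interval")
    dd = X[2:] - 2.0 * X[1:-1] + X[:-2]          # double difference, m = 0..M-3
    return np.sqrt(np.sum(dd**2) / (6.0 * (M - 2)))
```

Other selection rules (band, percentile, cluster) can be substituted for the `select` function without changing the rest of the computation.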
THE MTIE FAMILY OF METRICS

Consider the hypothetical situation where data is being written into a buffer under control of clock A and is being read out of the buffer under control of clock B. If the two clocks are perfectly synchronized, then over any interval of time (observation interval) the buffer fill level will remain nominally constant. Any relative wander between the two clocks will be reflected in a variation of the fill level, and consequently excessive wander will result in buffer overflow or underflow. The peak-to-peak variation of the
(relative) time error then provides guidance on the sizing of buffers and/or the propensity for data loss. Maximum Time Interval Error (MTIE) provides a quantitative measure that indicates this peak-to-peak behavior as a function of observation interval. Additional description, including the mathematical formulas, is available in [1, 7, 10]. With some abuse of notation, denote by MT(n) or MT(τ) the MTIE value for observation interval τ = n ⋅ τ0; MT(τ) is the maximum peak-to-peak variation over all intervals of duration ≤ τ. First, it is clear that MT(τ) is a monotonically non-decreasing function of τ. Second, if the clock error is dominated by systematic components (frequency offset and/or frequency drift), then for large τ MT(τ) exhibits a linear/quadratic behavior. One very common situation is where the two clocks being compared are very stable (≈ zero frequency drift) but may have a small frequency offset; in this situation the relationship MT(τ) ≈ β ⋅ τ applies for large observation intervals. If the two clocks are locked, then there is no frequency offset and the MTIE becomes constant for large τ. In traditional clock measurements the time error sequence is generally smooth, and the non-decreasing property of MTIE is not an impediment to understanding the relative wander between the clocks. In the case where the time error represents packet delay variation, it is not uncommon to see sharp and rapid changes in the time error. Since it is known a priori that there will be some filtering of the time error in the (slave) clock recovery system, a variant of MTIE called MATIE has been proposed [7]. The time error is filtered using a first-order rectangular-window low-pass filter prior to analysis, and this is the source of the A (for average) in the acronym MATIE. The metric is generally used to ascertain whether there is any systematic component in the time error (i.e., PDV). The presence of a frequency offset will result in a linear behavior of MATIE for large τ, and the slope will be the (fractional) frequency offset. The metric Maximum Average Frequency Error (MAFE) was defined to point out this behavior and provide a numerical value for this offset (in fractional frequency units); MAFE and MATIE are essentially equivalent.
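As an illustration of the definition above, a straightforward (deliberately unoptimized) sketch that computes MTIE from a measured time error sequence; efficient implementations typically use hierarchical decompositions, but the brute-force form mirrors the definition directly:

```python
import numpy as np

def mtie(x, n):
    """MTIE at observation interval tau = n*tau0: the largest peak-to-peak
    time error observed over any window of n+1 consecutive samples."""
    x = np.asarray(x, dtype=float)
    worst = 0.0
    for m in range(len(x) - n):
        window = x[m:m + n + 1]
        worst = max(worst, window.max() - window.min())
    return worst
```

Evaluating this for a range of n values and plotting against τ = n ⋅ τ0 on a log-log scale produces the familiar MTIE curve that is compared against a mask.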
MASKS

The notion of a mask, as associated with a metric, is essentially the limit beyond which the equipment is deemed to be non-conformant to a particular standard or application. That is, the mask is an expression of what is good enough. For example, an MTIE mask will correspond to a predetermined line on the log-log plot used for displaying MTIE; the measured MTIE should lie below this line for compliance. It is common to break down the masks into two major areas of study. These two areas relate to:
• Clock related masks: In both cases, i.e., TDM or packet network architectures, there is an implicit or explicit notion of a clock signal that is used to time processes or other signals. Clock related masks apply
to these signals and determine whether the clock output is fit for purpose, regardless of whether the network is packet-switched or circuit-switched.
• PDV related masks: Networks add packet delay variation, which is, practically speaking, clock noise since it impairs the ability of the sink to recreate a suitable timing signal that mimics the source. PDV related masks therefore pertain to the measurement of a network and ascertain the ability of the network to support packet-based time/timing transfer. Although peculiar to packet networks, PDV metrics used for quantifying the strength of the PDV often have a parallel with a TDM counterpart.
The development of PDV metrics and masks is a new area of study in the industry, and standards are still in development. There are several masks available in standards documents. These are usually targeted towards specific equipment or applications. Here we address MTIE masks because MTIE is often the more relevant mask in the field. The TDEV behavior is also important, but requires much more subject matter expertise in order to interpret the TDEV results. In all telecommunications networks there is the notion of good enough in terms of frequency accuracy. In traditional TDM networks the goal is to have all network elements aligned to a primary reference clock (PRC) whose innate fractional frequency accuracy is better than 1 × 10⁻¹¹. However, clock performance requirements at the network edge can be tailored to the application or end-point equipment. The most common example of this situation is the case of wireless base stations that derive their timing from the backhaul signal. The primary requirement in this case is (fractional) frequency accuracy of better than 50 × 10⁻⁹, when measured (hypothetically) at the radio interface. To allow for some margin in the base-station circuitry, it is common to impose a requirement of X on the timing recovered from the network (backhaul
signal). X is typically in the range between 1 × 10⁻⁹ and 15 × 10⁻⁹. It is straightforward to generate a suitable MTIE mask that merges the traditional and application-specific requirements. Suppose the traditional network limit is expressed by a mask described by the function μT(τ). It is clear that a fractional frequency offset of y corresponds to an MTIE curve that is a linear function of τ, namely MT(τ) = y ⋅ τ. Then the appropriate MTIE mask that merges the two limits is given by M(τ), where

M(τ) = max{MT(τ); μT(τ)}   (3)
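A small sketch of Eq. 3, merging a traditional traffic mask with a frequency-offset line and checking a measured MTIE curve against the result. The traditional mask is passed in as a callable; none of the values here are the actual G.824 figures:

```python
import numpy as np

def merged_mtie_mask(tau, traditional_mask, y):
    """M(tau) = max{ y*tau, traditional_mask(tau) }, per Eq. 3.

    tau              : array of observation intervals, seconds
    traditional_mask : callable returning the traditional limit at tau, seconds
    y                : allowed fractional frequency offset (e.g., 15e-9)
    """
    tau = np.asarray(tau, dtype=float)
    return np.maximum(y * tau, traditional_mask(tau))

def compliant(measured_mtie, mask_values):
    """Measured MTIE must lie below the mask at every observation interval."""
    return bool(np.all(np.asarray(measured_mtie) <= np.asarray(mask_values)))
```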
Wireless base stations often derive their timing from the backhaul signal, and it is common to invoke the conventional masks that are used in TDM for expressing network limits. However, these masks often have a 1 × 10⁻¹¹ asymptote, which may be overly stringent for the wireless case. Figure 4 shows the conventional traffic mask from ITU-T Rec. G.824 (for DS1) [11], modified to account for 1 × 10⁻⁹ and 15 × 10⁻⁹ offset allowances. The MTIE computed for a synthetic sequence is shown in Fig. 4 solely for illustrative purposes; the synthetic sequence includes a frequency offset of 5 × 10⁻⁹.

Figure 4. MTIE masks for 1 × 10⁻⁹ (1 ppb) and 15 × 10⁻⁹ (15 ppb) allowance combined with the G.824 traffic mask.

The notion of a tolerance mask is the following. If the PDV across a network is measured, it should be possible to determine whether a slave clock will be able to generate a clock output that meets a certain (application-specific or otherwise prescribed) mask. To determine whether the slave clock will meet an output MTIE mask, there will probably be a tolerance mask at the input that is based on a metric of the MTIE family. Likewise, a slave clock output TDEV mask will determine a tolerance mask for a metric of the TDEV family. These tolerance masks are important since they actually determine network limits for the PDV.
CONCLUDING REMARKS

The intent of this article is to provide an introduction to performance criteria associated with delivering timing over packet networks. The corresponding body of knowledge for TDM networks is quite mature, as exemplified by the numerous standards governing clock behavior, so it is possible to leverage the parallel between clock noise in TDM architectures and clock noise in packet-based methods as manifested in the form of packet delay variation. The development of suitable standards for packet-based methods in telecommunications is an ongoing activity in the ITU-T and ATIS standards bodies. The principal metrics used in TDM are MTIE and TDEV. For application in packet networks
these need to be extended. The MTIE and TDEV families of metrics described here represent the state of understanding for quantifying clock noise in packet-switched networks. There have been several contributions suggesting that these metrics have uses far beyond simple clock noise characterization. For example, monitoring packet flows and computing simple metrics can provide network operators with a real-time view of network loading, and can prove to be useful information for network management purposes.
ACKNOWLEDGMENTS

The author would like to acknowledge the excellent comments and suggestions from the reviewers. These were very constructive and improved the content and flow of the article.
REFERENCES
[1] S. Bregni, Synchronization of Digital Telecommunications Networks, Wiley, 2002.
[2] ITU-T Rec. G.811, “Timing Characteristics of Primary Reference Clocks,” 1997.
[3] IEEE Std. 1588-2008, “IEEE Standard for a Precision Clock Synchronization Protocol for Networked Measurement and Control Systems,” Nov. 2008.
[4] IETF RFC 5905, “Network Time Protocol (Version 4): Protocol and Algorithms Specification,” June 2010.
[5] ITU-T Rec. G.810, “Definitions and Terminology for Synchronization Networks,” 1996.
[6] ITU-T Rec. Y.1540, “Internet Protocol Data Communication Service — IP Packet Transfer and Availability Performance Parameters,” 2007.
[7] K. Shenoi, Synchronization and Timing in Telecommunications, BookSurge Publishing, 2009.
[8] S. Bregni, “Clock Stability Characterization and Measurement in Telecommunications,” IEEE Trans. Instrumentation and Measurement, vol. 46, no. 6, Dec. 1997.
[9] R. Subrahmanyan, “Timing Recovery for IEEE 1588 Applications in Telecommunications,” IEEE Trans. Instrumentation and Measurement, vol. 58, no. 6, June 2009.
[10] S. Bregni, “Measurement of Maximum Time Interval Error for Telecommunications Clock Stability Characterization,” IEEE Trans. Instrumentation and Measurement, vol. 45, no. 5, Dec. 1996.
[11] ITU-T Rec. G.824, “The Control of Jitter and Wander within Digital Networks which are Based on the 1544 kb/s Hierarchy,” 2000.
BIOGRAPHY

KISHAN SHENOI [M’76, SM’09] ([email protected]) received his B.Tech. degree from the Indian Institute of Technology (IIT) Delhi in 1972, an M.S. from Columbia University in 1973, and a Ph.D. from Stanford University in 1977, all in electrical engineering. He has been active in the telecommunications field since 1977, and since 2006 has been the principal consultant at Shenoi Consulting in Saratoga, California. Prior to this, he worked at Symmetricom, DSC Communications, and the ITT Advanced Technology Center. He is named on 40 U.S. patents and has authored several papers. He has published two books, Digital Signal Processing in Telecommunications (Prentice Hall, 1995) and Synchronization and Timing in Telecommunications (BookSurge Publishing, 2009). He serves as co-chair of the Technical Committee for the annual NIST-sponsored workshop on Synchronization in Telecommunications Systems (WSTS).
SYNCHRONIZATION OVER ETHERNET AND IP NETWORKS
Using IEEE 1588 and Boundary Clocks for Clock Synchronization in Telecom Networks
Michel Ouellette, Kuiwen Ji, and Song Liu, Huawei Technologies Inc., Ltd.
Han Li, China Mobile Research Institute
ABSTRACT

This article describes the use of IEEE 1588 and boundary clocks for clock distribution (phase/time transfer) in telecom networks. The technology is primarily used to serve the radio interface synchronization requirements of mobile systems such as WiMAX and LTE, and to reduce the deployment of and dependence on GPS systems in base stations. We discuss the most important functions that are necessary for phase/time transfer and present some initial field trial results using a chain of cascaded boundary clocks and synchronous Ethernet links across a packet and optical transport network that spans tens of kilometers and tens of network elements. The results indicate that it is possible to transfer accurate phase/time in a telecom network and meet the requirements of mobile systems. The article also discusses some of the challenges and highlights the ongoing activities in standardization bodies so that IEEE 1588 can be used as a technology in telecom networks.
INTRODUCTION

The explosive growth of mobile data devices and applications is driving the industry to develop higher-bandwidth and more efficient radio technologies like WiMAX, time-division synchronous code-division multiple access (TD-SCDMA), Long Term Evolution (LTE), and LTE-Advanced. Current mobile radio technologies such as Global System for Mobile Communications (GSM), Universal Mobile Telecommunications System (UMTS), and CDMA2000 are used to provide voice mobility services, and although these technologies have been augmented with data connectivity, they do not provide the same scale that the latest radio technologies mentioned above offer. As mobile data traffic increases, operators are being forced to rethink their backhaul network infrastructure. In many cases the current backhaul is based on time-division multiplexing (TDM) and asynchronous transfer mode (ATM)
network technologies, and is seen as a bottleneck to offering higher mobile data connectivity and services. For instance, E1/T1 interfaces and synchronous digital hierarchy/synchronous optical network (SDH/SONET) interfaces are typically used to interconnect base stations with the core wireless switching centers. These interfaces have served the demand for voice and data services well, but are now seen as not having enough flexibility and capacity to meet the predicted growth of mobile data traffic. Several operators are now adopting packet transport network (PTN) and optical transport network (OTN) technologies as a way to accommodate the growth in mobile data traffic. However, there is one aspect for which PTN and OTN (when compared to a TDM network) were not initially designed, and that is the transfer of both frequency and phase/time signals across the network (as opposed to a TDM network, which was primarily designed for the transfer of accurate frequency). Mobile systems such as GSM and UMTS require a frequency reference at the base station with a frequency error of at most ±0.05 parts per million (ppm) [1]. This requirement has been practically met for many years using the transfer of E1/T1 circuits through the backhaul network, where the E1/T1 circuits are traceable to a primary reference clock (e.g., a cesium clock). The latest radio technologies such as TD-SCDMA and LTE-Advanced are not just forcing the need for the transfer of frequency, but also for the transfer of phase/time across a telecom network. Current mobile systems such as CDMA2000 require both frequency and phase/time at the base station (in contrast to GSM/UMTS, which require only frequency). CDMA2000 is successfully served today by GPS receivers deployed in every base station. The GPS receiver, equipped with sophisticated algorithms and oscillators, is used to maintain microsecond-level accuracy traceable to GPS system time. For instance, CDMA2000 requires that base stations be synchronized to GPS system time with an accuracy of ±3 μs in normal operating mode and no more than ±10 μs when
GPS signals become unavailable. The ±10 μs must be met for up to 8 h, although some vendors advertise a longer autonomy period as a product differentiator. Although GPS has many merits, there are network operators seeking to minimize the use of GPS within their networks, especially at base station sites. This is driven by the fact that the number of base stations to be deployed is expected to increase significantly in the coming years, as well as the need to use alternative methods to transfer time through the network. Other factors include that the base station coverage area is becoming smaller, and some base stations will be deployed indoors where there is difficulty in obtaining satellite signals. The cost and operational effort of installing receiver antennas, mounts, cabling, and so on must also be taken into account. These are some of the reasons the telecom industry is actively looking at the IEEE 1588 protocol [2] for transferring accurate phase/time (as well as frequency) across packet and optical networks. Considerable work has been done by the industry since the initial proposals [3, 4] for applying IEEE 1588 to the telecom environment were made. The next section of this article discusses the aspect of frequency transfer, while the following section presents the most important IEEE 1588 functions necessary for the transfer of phase/time in telecom. We then discuss field trial results. Finally, we provide an overview of challenges and future work.
FREQUENCY TRANSFER

This section summarizes some of the latest developments in the industry for the transfer of frequency (syntonization) over packet and optical networks. International Telecommunication Union — Telecommunication Standardization Sector (ITU-T) Study Group (SG) 15/Question (Q) 13 has spent a considerable amount of time studying the aspects of frequency transfer over packet networks. ITU-T Recommendation G.8261 [5] provides general information related to packet network timing solutions. One of the technologies specified in ITU-T is called synchronous Ethernet, or SyncE [5–7], and was primarily designed for use in packet networks (Ethernet systems). SyncE is a physical-layer node-by-node frequency transfer method that inherits many of the SDH/SONET properties, such as equipment clocks. SyncE also specifies a messaging channel to exchange information on the quality of the clock being distributed in the network (analogous to SSM in SDH/SONET). SyncE is known to provide guaranteed frequency performance well within what is required by mobile radio technologies (i.e., ±0.05 ppm). In normal operating mode (when traceable to a primary reference clock), SyncE can offer long-term frequency accuracy of about ±0.00001 ppm. It is also possible now to transport GE/10GE (and beyond) SyncE signals directly over an asynchronous OTN network where the bit and timing integrity are preserved as they traverse the end-to-end OTN network [8]. This is useful for operators that want to transfer a frequency reference across the core and metro portions of their network or in a multi-operator environment.
Another technology for frequency transfer developed in ITU-T is based on the use of IEEE 1588. The first IEEE 1588 profile [9, 10] developed by ITU-T specifies the functions and interoperability requirements for the transfer of frequency (not phase/time) between a Packet Master Clock (e.g., an IEEE 1588 grandmaster) and a Packet Slave Clock (e.g., a base station). The profile can be said to offer packet-layer end-to-end frequency transfer, as opposed to SyncE, which is physical-layer node-by-node frequency transfer. The profile does not specify performance aspects and assumes that only the Packet Master and Packet Slave (the end-points of the network) provide syntonization functions (i.e., the network elements do not offer any IEEE 1588 assistance). It is important to note that the original intention behind IEEE 1588 was not to provide end-to-end frequency transfer but rather node-by-node time transfer. This article addresses the necessary building blocks for providing IEEE 1588 node-by-node phase/time transfer in telecom environments.
TIME TRANSFER BASED ON IEEE 1588

IEEE 1588 OVERVIEW

The use of IEEE 1588 for clock synchronization is seeing considerable uptake in various fields. Originally developed for the test and measurement community, it is not uncommon today to see IEEE 1588 being applied to other fields such as automation, power systems, military, and telecom. This section describes some of the building blocks necessary to support node-by-node time transfer based on IEEE 1588, with the intent of addressing telecom and mobile backhaul needs. ITU-T is currently engaged in developing a second IEEE 1588 profile. The profile will specify that all nodes be IEEE 1588 capable (e.g., boundary clocks), since this is seen as necessary for mitigating the effects of latency and delay variation as well as for developing and providing guaranteed performance bounds. Industrial-type applications, power-based systems, and consumer/professional studio type applications [11] rely on such an approach.
IEEE 1588 BEST MASTER CLOCK ALGORITHM

The Best Master Clock Algorithm (BMCA) is used to create a master-slave hierarchy within a set of distributed clocks, with the goal of producing a tree-based topology. The BMCA is a distributed algorithm running in every IEEE 1588 node, and is functionally similar to the Spanning Tree Protocol used to build the data spanning tree of bridged Ethernet LANs. The master-slave hierarchy is composed of a root at the top of the tree (the grandmaster clock, the most accurate clock), branch/forking points forming the tree (boundary clocks), and leaves (ordinary slave clocks), the least accurate clocks. The BMCA advertises the clock quality of IEEE 1588 nodes via a message called the Announce message. These messages travel through the network and are used by an IEEE 1588 node to compare its own clock quality to any received clock quality and to decide what kind of role
(i.e., root, branch/forking point, or leaf) the node will take within the hierarchy. Figure 1 shows a tree-based master-slave hierarchy of IEEE 1588 clocks that provides a timing loop-free topology. The BMCA is also capable of handling network topology changes (e.g., a link or node failure) and provides re-convergence of the master-slave hierarchy during such events. Alternative approaches to creating a master-slave hierarchy of clocks are based on manual configuration, or possibly on link state protocols, as these would permit distributed computation of the trees in a manner similar to the multicast trees produced by IEEE 802.1aq Shortest Path Bridging. These approaches are being studied.

Figure 1. BMCA master-slave hierarchy (clock tree). M: master port; S: slave port; P: passive port.
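To illustrate the kind of comparison the BMCA performs, the following sketch ranks candidate clocks by the attributes carried in Announce messages (priority1, clock class, accuracy, variance, priority2, and finally clock identity as a tie-breaker). The dataclass is a simplified stand-in for the announced data set, not the full IEEE 1588 state machine:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AnnouncedClock:
    priority1: int        # user-configurable priority (lower is better)
    clock_class: int      # e.g., traceability to a primary reference
    accuracy: int         # encoded clock accuracy (lower is better)
    variance: int         # offset scaled log variance
    priority2: int
    clock_identity: bytes # EUI-64-based identity, final tie-breaker

    def rank(self):
        # Lexicographic ordering used when ranking candidate masters
        return (self.priority1, self.clock_class, self.accuracy,
                self.variance, self.priority2, self.clock_identity)

def best_master(candidates):
    """Return the candidate a node would select as its best master."""
    return min(candidates, key=AnnouncedClock.rank)
```

Each node repeats such a comparison on every port, which is what ultimately produces the master/slave/passive port states shown in Fig. 1.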
TIMESTAMPING

Hardware timestamping is another important function of IEEE 1588. The level of time accuracy transferred across a network is related to the precision at which timestamps can be measured in every node along the master-slave hierarchy. Although IEEE 1588 does not strictly specify how timestamps are captured, it does point out that to achieve a high level of accuracy timestamps should be captured at the physical layer (or as close to the medium as possible) in order to minimize any delay variability such as that introduced by protocol stacks. There is ongoing work in IEEE 802.3 (the IEEE 802.3bf task force) to specify how the MAC/PHY can be used to provide timestamp indications. The proposed model consists of the definition of a time sync service interface (TSSI) that sits between the medium access control (MAC) and physical (PHY) layers, and an external time sync client. Timestamp indications are provided to the client every time a message timestamp point (the start-of-frame delimiter, SFD) crosses the reconciliation sublayer (RS). There are also provisions made for PHY vendors to specify the inbound and outbound latency values (the latency between the RS layer down to the medium). Such values are necessary to minimize any delay uncertainty and help construct a more accurate delay budget across the network, as explained later. Although the work in IEEE 802.3bf primarily addresses the needs of the IEEE 802.1AS protocol, it is also applicable to telecom.
IEEE 1588 SYNCHRONIZATION PROCESS

The IEEE 1588 protocol defines a set of periodic message exchanges used to interrogate the propagation delay of the network and calculate the time offset between master and slave ports that are geographically separated across links/nodes. This is done by exchanging timestamps and is known as the synchronization process, or two-way time transfer. Figure 2 shows the synchronization process. The master sends a Sync message to the slave at time t1, which is the timestamp based on its own clock. The Sync message is received by the slave at time t2, based on the slave's own clock. The difference between t2 and t1 is the clock offset including any propagation delay between the master and slave port. The slave then sends a Delay_Request message to the master, which is transmitted at time t3 as measured by the slave clock, and received at time t4 as measured by the master. The master then transmits a Delay_Response message to the slave with all appropriate timestamps. The slave can then determine the master-slave clock offset and the propagation delay of the network, and use these values to align its timebase to the master. This is done based on the system of equations shown in the figure. However, the propagation delay of the network is calculated based on the assumption that the two directions (forward M → S and reverse S → M) are symmetrical. If the network is not symmetric, a time error will be produced. The magnitude of the error will be proportional to half the difference in the delay between the forward and reverse directions. The standard defines the message and synchronization exchange process but does not specify how to apply the correction and adjust the clock's timebase.

Figure 2. Synchronization process: Clock_offset = [(t2 − t1) − (t4 − t3)]/2; Propagation_delay = [(t2 − t1) + (t4 − t3)]/2.
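A minimal sketch of the offset and delay computation from a single exchange, following the relations shown in Fig. 2; a real slave would filter many such exchanges before steering its clock:

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Two-way time transfer from one Sync/Delay_Request exchange.

    t1: Sync departure (master clock)      t2: Sync arrival (slave clock)
    t3: Delay_Request departure (slave)    t4: Delay_Request arrival (master)
    Assumes the forward and reverse path delays are equal; any asymmetry
    appears directly as an error in the returned offset.
    """
    clock_offset = ((t2 - t1) - (t4 - t3)) / 2.0
    propagation_delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return clock_offset, propagation_delay
```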
SYNTONIZATION PROCESS

The section above described the synchronization process, but also of importance for the transfer of phase/time is the syntonization process. Syntonization implies that the frequency of the clocks used to maintain time must be well behaved during the exchange of messages, as frequency is a prerequisite for time transfer. If the frequency of a slave port is faster or slower than that of the master port, an accumulation of time error will be produced. The IEEE 1588 standard is fairly silent on the syntonization process, but describes an approach that may be used to compute the frequency offset between master and slave ports based on the interarrival and interdeparture times of Sync messages. This allows a slave to measure the frequency offset of its local oscillator relative to the master and physically adjust the frequency of the local oscillator. IEEE 1588 also discusses other approaches where physical signals may be used to syntonize clocks, and one such example could be synchronous Ethernet, as explained earlier. In both cases, it is well known that jitter and wander can accumulate in a reference chain of syntonized clocks [12]; to limit this accumulation, proper parameterization of the phase-locked loops (PLLs) must be specified.
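A sketch of the syntonization idea described above: estimating the fractional frequency offset of the slave's oscillator from the interdeparture times of Sync messages (master timestamps) and the corresponding interarrival times (slave timestamps). This assumes PDV is small or has already been filtered; the names are illustrative:

```python
def fractional_frequency_offset(master_tx_times, slave_rx_times):
    """Estimate (f_slave - f_master)/f_master from a run of Sync messages.

    master_tx_times : Sync departure timestamps struck by the master
    slave_rx_times  : corresponding arrival timestamps struck by the slave
    """
    master_span = master_tx_times[-1] - master_tx_times[0]  # interdeparture span
    slave_span = slave_rx_times[-1] - slave_rx_times[0]     # interarrival span
    return (slave_span - master_span) / master_span
```

A positive result indicates the slave oscillator is running fast relative to the master; the slave can then correct its oscillator or scale its timebase accordingly.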
SYNCHRONIZATION AND SYNTONIZATION PLANES

One trend in the industry is to combine the use of IEEE 1588 for the time function (synchronization process) and synchronous Ethernet for the frequency function (syntonization process), especially in the case where an operator has already deployed SyncE. These two processes can be seen as operating in their own planes and are to some extent independent in terms of their protocols, functions, protection mechanisms, and so on. Figure 3 shows the two planes; the bottom one is the SyncE frequency sync plane and the top one is the IEEE 1588 time sync plane. One item worth discussing is the interaction and coordination of both planes. Since both are independent, one can imagine situations where the reference chains of network elements may differ to some extent and lead to an accumulation of time error that is well outside requirements. For instance, it can be difficult to manage two different planes so as to result in congruence of the frequency and time reference chains, and Fig. 3 illustrates an example where the reference chains of the two planes do not exactly follow the same path through the network. It can also be difficult today to guarantee that the convergence time of both planes would be the same during failure events. There could also be cases where a loss of frequency produces a reconfiguration of the SyncE reference chain, but is not seen by the IEEE 1588 reference chain (i.e., the default BMC algorithm described above is not invoked). It is important to note that all these issues would be minimized if IEEE 1588 were used simultaneously for both the time and frequency functions (without the use of SyncE). In such a case congruence of both planes would be guaranteed, and since frequency and time would be distributed node-by-node, it could be envisaged that frequency performance would be on par with SyncE. The decision on how to operate the planes should ultimately be left to the network operator and will likely be studied by ITU-T.

Figure 3. Synchronization and syntonization planes.
BOUNDARY CLOCKS

Earlier we discussed the role of the BMCA in establishing the master-slave hierarchy. It was noted that the root of the tree is known as the grandmaster clock and that boundary clocks (BCs) are used to logically segment the network by creating branches in the synchronization tree. Typically BCs have multiple ports, and a separate copy of the protocol state machine and data set must be maintained for each port. This information is used to derive the master-slave hierarchy, and only one port in an IEEE 1588 node can be in the slave state at any given time, while the others are in the master or passive state. The BC timestamps incoming and outgoing IEEE 1588 messages, and uses the timestamp information to synchronize its timebase to the grandmaster (GM) timebase. BCs can include PLL filtering to limit the accumulation of error through a chain of cascaded nodes, although this is not specified in IEEE 1588. A model and description of a boundary clock are presented in [13], and conceptually (excluding the timestamp
part) a BC is similar to the type of clock systems found in SDH and SyncE network elements.
ONE PULSE-PER-SECOND INTERFACE

The so-called one pulse-per-second (1PPS) signal is an electrical signal used for precise timekeeping and time measurements. The rising edge of the signal is used to precisely indicate the rollover of the Coordinated Universal Time (UTC) second, and its accuracy generally ranges from several nanoseconds to a few microseconds. The 1PPS signal can be found on several devices such as test and measurement equipment, GPS receivers, and current cellular base stations such as CDMA2000 and WiMAX. The 1PPS signal is also used in various types of interfaces such as PTTI (military/defense) and NMEA-0183 (navigation), where the interface specifies not just the electrical signal but also a data communication
channel. The data channel is typically a unidirectional serial interface that defines the characteristics of a protocol to transmit data from a transmitter to a receiver. The data channel can transmit information beyond the rollover of a second such as GPS time, day of year, satellite ID, frequency information, and estimated quality of time. Grandmaster clocks, for instance, can receive their time reference signal via a 1PPS interface of a GPS receiver; likewise, boundary clocks and ordinary clocks can recover a 1PPS signal to transfer time to an application or for test and measurement purposes.
TIME ERROR BUDGET

In order to transfer phase/time, it is important that the various sources of error be known and
quantified (e.g., through simulations). Many systems make use of a budget to describe these errors and their magnitude; therefore, a time error budget over a hypothetical network reference model is essential. Such modeling is necessary to derive the proper network limits and ensure that the application's requirements can be met. For instance, a mobile backhaul can be decomposed into several high-level parts with clear demarcation points: the GM clock generating time, the packet transport network used to transfer time, and the recipient recovering and using time. Each part must be assigned a portion of the application's allowable time error budget, and each part can then be further subdivided into several subparts. For example, the GM's time error budget might be related to the error produced by the GPS receiver, the length of cables, and so on. The packet transport network time error might be related to aspects such as timestamping error, delay difference between the forward and reverse directions, PLL filtering, PHY latency, thermal variations, and clock synthesis. The recipient of time might also share the same factors as the packet transport network, in addition to internal and external aspects of transferring time to the application.
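As a purely hypothetical illustration of such a budget (every figure below is a placeholder, not a value from the article or from any standard), the per-part allocations can be summed and checked against the application's limit:

```python
# Hypothetical time error budget check; all numbers are illustrative placeholders.
application_limit_ns = 1500          # assumed +/-1.5 us end-to-end target

budget_ns = {
    "grandmaster (GPS receiver, cabling)": 100,
    "per-node timestamping + PLL (x 15 boundary clocks)": 15 * 50,
    "link asymmetry (uncompensated fiber)": 300,
    "end recipient (recovery, output interface)": 150,
}

total_ns = sum(budget_ns.values())
print(f"allocated {total_ns} ns of {application_limit_ns} ns "
      f"({'within' if total_ns <= application_limit_ns else 'over'} budget)")
```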
Figure 4. Laboratory testbed.
LABORATORY AND FIELD TRIAL RESULTS

This section presents some of the initial laboratory results and field trials conducted in China about a year and a half ago. The field trial evaluated the transfer of phase/time to TD-SCDMA mobile base stations, based on the use of IEEE 1588 boundary clocks embedded in packet transport network equipment, as well as the use of synchronous Ethernet to transfer frequency. It is the authors' belief that no other field trials of this scale have been reported in the industry, except in some very recent contributions to ITU-T.
LABORATORY RESULTS
Figure 5. SyncE performance (top figure) and 1PPS performance (bottom figure).
One laboratory test topology, used for benchmarking performance, is shown in Fig. 4. The GPS antenna signal is connected to the GM device. The jitter on the GPS signal is filtered using a rubidium oscillator embedded in the GM in order to produce highly stable 2 MHz and 1PPS reference signals. Device under test (DUT) #1 and DUT#2 were configured as PTP boundary clocks and used OCXO oscillators. DUT#1 was phase-aligned to the GM using the 1PPS interface. The link connecting DUT#1 and DUT#2 was a synchronous Gigabit Ethernet (SyncE) link, and there was no background traffic. The time and frequency measurement equipment compares the recovered time output (1PPS) and recovered frequency output (2 MHz) of DUT#2 to their respective reference signals. Figure 5 shows the frequency (2 MHz) and time (1PPS) performance. The top figure shows that the TIE (time interval error, with a sampling rate of 30 measurements/s) of SyncE varied between –2.5 and 2.0 ns over 24 h, and shows that PRC frequency traceability was achieved. This is the expected behavior. The bottom figure shows that the peak-to-peak jitter of the recovered
1PPS (top trace) compared to the reference 1PPS (bottom trace) was about 2 ns, but the result also shows a static time error of about 3 ns between the recovered 1PPS and reference 1PPS. Note: Due to the resolution of the figure, the legend in the bottom right corner does not appear properly; for reference, the x-axis scale was 5 ns/division and 1 ns/subdivision. This static offset was due to the cables used in the test and measurement, as the length of the 1PPS cables going to the test equipment was not taken into account and compensated for.
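The post-processing behind numbers such as these is straightforward; the sketch below computes the peak-to-peak variation and the static (mean) offset of a set of 1PPS time error samples and then removes a known cable delay. The sample values, the 0.6 m cable length, and the roughly 5 ns per metre propagation figure are illustrative assumptions, not measurements from this test.

# Sketch of the post-processing applied to 1PPS time error samples:
# peak-to-peak variation, mean (static) offset, and removal of a known cable
# delay. All values below are illustrative assumptions.
samples_ns = [3.4, 2.1, 4.0, 2.8, 3.6, 2.5, 3.9, 3.1]   # hypothetical 1PPS errors

peak_to_peak_ns = max(samples_ns) - min(samples_ns)
static_offset_ns = sum(samples_ns) / len(samples_ns)

cable_length_m = 0.6     # extra 1PPS cable to the test set (assumed)
ns_per_metre = 5.0       # rough coax/fiber propagation delay (assumed)
compensated_offset_ns = static_offset_ns - cable_length_m * ns_per_metre

print(f"peak-to-peak: {peak_to_peak_ns:.1f} ns")
print(f"static offset before/after cable compensation: "
      f"{static_offset_ns:.1f} / {compensated_offset_ns:.1f} ns")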
FIELD TRIAL TOPOLOGY AND RESULTS
The initial field trial topology is shown in Fig. 6 and includes several of the characteristics and functions discussed earlier. The field trials were conducted in a city in China. The GPS devices provide 1PPS reference signals to the IEEE 1588 grandmaster devices (GM#1 and GM#2). The network consists of a metro 10GE ring and an access GE ring. There are two time synchronization reference chains of 15 switches each (Chain#1 and Chain#2), which were used for protection and diversity of time transfer. Each network switch was configured as an IEEE 1588 BC for time transfer and SyncE for frequency transfer, and the switches were deployed in various central offices across the city. Chain#1 and Chain#2 spanned approximately 52 and 43 km, respectively. Background traffic included live mobile calls as well as 80 percent load (on and off) during the measurement period. The BMCA was used to establish the master-slave hierarchy of both synchronization reference chains. The state of each port is shown in the figure, and the ports shown as passive (dashed lines) eliminate timing loops. The time measurement equipment (time tester) was placed at the last network switch to measure the cumulative time error of the reference chain, and the measurement was done by comparing the recovered 1PPS signal of the last switch with a reference 1PPS also obtained from GPS.

Figure 6. Field trial topology.

Cumulative Time Error of Reference Chain #1
Figure 7 shows the time error (recovered 1PPS signal) of Chain#1 for a period of 15 h. The time error varied from approximately 79 to 123 ns. The short-term noise of the 1PPS was primarily due to hardware timestamping error, SyncE frequency error, and time offset calculation and adjustments. The long-term noise of the 1PPS was primarily due to the oscillator and time-PLL used in the switches. The result also shows a static time error of about 100 ns, which was due to the fiber asymmetry along the 52 km reference Chain #1. In this initial field trial, fiber asymmetry between switches was not compensated for. What is important to note is that the results were well within the ±3 μs performance requirement for TD-SCDMA mobile base stations, and further improvements have been made since these initial field trials.

Cumulative Time Error during Rearrangement of the Reference Chain
To create a switch of the reference chain (i.e., a switch from reference Chain #1 to Chain #2), the 1PPS signal at GM#1 was disconnected. The execution of the BMC algorithm elected GM#2 as the new GM, and a new master-slave hierarchy was created with an appropriate change of port states. In this field trial the last network switch changed its right-side port to the slave state (the dashed line became a thick line) and its left-side port to the passive state (the thick line became a dashed line). As shown in Fig. 8, the rearrangement of the reference chain occurred at about 350 s. A time error transient of about 60 ns was produced during the convergence of the BMC algorithm. This 60 ns was due to a combination of holdover and time offset computation during the rearrangement period. The result also shows a static time error of about 80 ns, which was due to the fiber asymmetry along the 43 km reference Chain #2.
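The election described above is driven by the Announce information exchanged between ports. A much-simplified sketch of the data set comparison at the heart of the BMCA is given below: attributes are compared field by field in the order defined by IEEE 1588-2008, with lower values winning. The full algorithm also compares stepsRemoved and port identities and drives the per-port (master/slave/passive) state machine, all of which is omitted here, and the grandmaster attribute values shown are hypothetical.

from dataclasses import dataclass

@dataclass(order=True)
class AnnounceDataset:
    # Field order mirrors the IEEE 1588-2008 comparison order; lower wins.
    priority1: int
    clock_class: int
    clock_accuracy: int
    offset_scaled_log_variance: int
    priority2: int
    clock_identity: str

gm1 = AnnounceDataset(128, 6, 0x21, 0x4E5D, 128, "00:11:22:ff:fe:33:44:55")
gm2 = AnnounceDataset(128, 6, 0x21, 0x4E5D, 128, "00:11:22:ff:fe:66:77:88")

# With all other attributes equal, the tie is broken on clockIdentity.
best = min(gm1, gm2)
print("elected grandmaster:", best.clock_identity)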
Figure 7. Cumulative time error.

Figure 8. Reference chain rearrangement: Chain #1 time error (about 100 ns static time error due to fiber asymmetry), rearrangement from Chain #1 to Chain #2, and Chain #2 time error (about 80 ns static time error due to fiber asymmetry).

ADDITIONAL FIELD TRIAL RESULTS
In another field trial, the network topology was significantly augmented in terms of number of nodes, and live TD-SCDMA base stations were used. The reader is referred to [14] for more details on the network topology. The base station was the last device of the reference chain and was capable of recovering time by two methods: via a fast Ethernet interface where the base station supported IEEE 1588, or via a 1PPS input interface provided by the collocated switch/BC. Another notable difference compared to the previous results was that the fiber asymmetry between each pair of switches was manually measured and compensated for (in order to remove the static time error observed in Figs. 7 and 8). Figure 9 shows the results of the recovered 1PPS using method #1 (top graph) and method #2 (bottom graph). The top graph shows that method #1 (using IEEE 1588 to synchronize the base station) yielded better performance and a smaller static time error than method #2 (using 1PPS to synchronize the base station) in the bottom graph. However, the static time error in the bottom graph was due, at the time of the trial, to the base station vendor not being able to properly compensate for the length of the cable used to transfer the recovered 1PPS between the switch and the base station. This produced an arbitrary static time error, as shown in the bottom graph. It is important to note that every cable used in time transfer (e.g., for transferring time to an application or for measurement purposes) must be compensated for. Indeed, both methods #1 and #2, if properly implemented, should deliver the same level of performance. Figure 10 shows the cumulative time error result for a period of approximately 16 h measured at one vendor's base station. The results
indicate that the peak-to-peak 1PPS performance was kept roughly within +80 ns and –50 ns over the course of 16 h, which was well within the ±3 μs performance requirement for TD-SCDMA mobile base stations. More analysis of this behavior is currently being conducted, covering aspects related to short-term buffering, timestamping functions, and the oscillator used in the base station. Finally, some road tests were also conducted to verify whether there was any impact on voice and data services when some base stations were recovering phase/time using IEEE 1588 and others were using GPS. The road tests consisted of driving between base stations and verifying that successful handoffs and call completions could be made. In [14] the initial results showed that the handoff and call completion ratios were not impacted by the use of IEEE 1588. In addition, the average voice mean opinion score (MOS) was measured to be 3.46 (out of 5), which is typical of mobile voice quality. In summary, it is worth pointing out that the technology proposed here is based on node-by-node phase/time transfer, where each network node supports boundary clocks. For this reason, packet delay variation (PDV) in the network does not impact phase/time transfer. Because PDV does not have to be filtered out, the control algorithm to recover frequency and time is less complex, since corrections are made in every network node between the GM and the slave. Note that frequency in this testbed was obtained via synchronous Ethernet and time via PTP, but it is believed that the results would be similar if PTP were used in every network node for frequency recovery rather than synchronous Ethernet.
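For reference, the per-hop correction mentioned above relies on the standard IEEE 1588 two-way exchange: from the four timestamps of a Sync/Delay_Req exchange, each boundary clock estimates its offset from the upstream master under the assumption of a symmetric path. The sketch below shows that calculation with hypothetical timestamp values.

# Two-way offset computation performed by each boundary clock toward its
# upstream master (standard IEEE 1588 exchange; timestamp values are
# hypothetical). t1/t4 are taken by the master, t2/t3 by the slave.
def offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    # Returns (offset_from_master, mean_path_delay), assuming a symmetric path.
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2.0
    offset_from_master = (t2 - t1) - mean_path_delay
    return offset_from_master, mean_path_delay

# Hypothetical timestamps in nanoseconds:
offset, delay = offset_and_delay(t1=1_000, t2=1_650, t3=2_000, t4=2_550)
print(offset, delay)   # offset = 50 ns, mean path delay = 600 ns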
FUTURE WORK
There is still work to be done on applying IEEE 1588 in the telecom environment, but as shown in this article, node-by-node clock synchronization is necessary to provide guaranteed performance. Topics include evaluating the implications of deploying IEEE 1588 functionality in existing network infrastructure. Other important topics include studying link/fiber asymmetry and mitigation techniques to minimize this impairment, since measuring and compensating for every link/fiber in a network is practically infeasible or cost-prohibitive. The allocation of a time error budget, the modeling and simulation of boundary clocks, protection aspects, and the mapping and transfer of IEEE 1588 into the OTN are other topics worth addressing. These topics will most probably be studied in great detail by ITU-T SG15.
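As a simple illustration of why asymmetry matters, the sketch below uses the usual rule that a two-way protocol assuming a symmetric path converts a forward/reverse delay difference into half as much time offset error; the roughly 5 ns per metre fiber delay and the 10 m length difference are assumptions chosen only to show the magnitudes involved.

NS_PER_METRE_FIBER = 5.0   # approximate one-way group delay of standard fiber (assumed)

def asymmetry_time_error_ns(extra_fiber_m: float) -> float:
    # A symmetric-path assumption turns a delay difference into half as much time error.
    delay_difference_ns = extra_fiber_m * NS_PER_METRE_FIBER
    return delay_difference_ns / 2.0

print(asymmetry_time_error_ns(10.0))   # 10 m of extra fiber one way -> ~25 ns error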
CONCLUSION
It is not uncommon to hear in the industry that IEEE 1588 can achieve sub-microsecond accuracy. Unfortunately, this statement is not always true; it depends primarily on how IEEE 1588 is deployed and how its various functions are used within the network. The use of node-by-node time transfer (the default mode of operation of IEEE 1588) with boundary clocks is a strong requirement in order to provide guaranteed performance, compared to end-to-end frequency and time transfer where there is no support for IEEE 1588 within the telecom network elements [15]. Using node-by-node time transfer (where each node is IEEE 1588 capable) removes the dependence on packet delay variation and reduces the complexity and proprietary nature of the control algorithms required for frequency and phase/time recovery. This article has presented some of the most important building blocks that are necessary for using IEEE 1588 in a telecom and mobile environment, as well as initial field trial results. The use of IEEE 1588 and boundary clocks provides an alternative solution for operators that want to minimize the use of GPS within their networks, especially as some of them will be deploying a large number of base stations in the coming years that require a source of accurate phase/time.
Figure 9. Time error performance via method #1 (top graph) and method #2 (bottom graph).

Figure 10. Time error (ns) measured at the base station over 16 hours.

REFERENCES
[1] 3GPP TS 25.402 v7.5.0, "Synchronization in UTRAN Stage 2," section 4.2, Dec. 2007.
[2] IEEE Std. 1588-2008, "IEEE Standard for a Precision Clock Synchronization Protocol for Networked Measurement and Control Systems," July 2008.
[3] G. Algie, "Proposal for IEEE 1588 Use over Metro Ethernet Layer 2 VPNs," Proc. IEEE 1588 Conf. '03, Gaithersburg, MD, 2003.
[4] G. Algie and M. Ouellette, "IEEE 1588 Time Service Enablers for Metro Ethernet Solutions," Proc. NIST-ATIS T1X1 Wksp. Synchronization Telecommun. Sys., Broomfield, CO, 2004.
[5] ITU-T Rec. G.8261, "Timing and Synchronization Aspects in Packet Networks," Apr. 2008.
[6] ITU-T Rec. G.8262, "Timing Characteristics of Synchronous Ethernet Equipment Slave Clock (EEC)," Aug. 2007.
[7] ITU-T Rec. G.8264, "Distribution of Timing through Packet Networks," Oct. 2008.
[8] J. L. Ferrant et al., "OTN Timing Aspects," IEEE Commun. Mag., vol. 48, no. 9, Sept. 2010, pp. 62–69.
[9] ITU-T Rec. G.8265.1, "ITU-T PTP Profile for Frequency Distribution without Timing Support from the Network (Unicast Mode)," consented June 2010.
[10] J. L. Ferrant et al., "Development of the First IEEE 1588 Telecom Profile to Address Mobile Backhaul Needs," IEEE Commun. Mag., vol. 48, no. 10, Oct. 2010, pp. 118–26.
[11] G. M. Garner, M. D. Teener, and A. Gelter, "New Simulation and Test Results for IEEE 802.1AS Timing Performance," Proc. ISPCS '09, Brescia, Italy, Oct. 2009.
[12] S. Bregni, Synchronization of Digital Telecommunications Networks, Wiley, 2002.
[13] J. C. Eidson, Measurement, Control and Communication Using IEEE 1588, Springer-Verlag, 2006.
[14] M. Ouellette, "Examples of Time Transport," Joint ITU-T/IEEE Wksp. on the Future of Ethernet Transport, Geneva, Switzerland, May 2010; http://www.itu.int/ITU-T/worksem/tfet/programme.html.
[15] R. Subrahmanyan, "Time Recovery for IEEE 1588 Applications in Telecommunications," IEEE Trans. Instrumentation and Measurement, vol. 58, no. 6, June 2009, pp. 1858–69.

BIOGRAPHIES
MICHEL OUELLETTE [M] ([email protected]) is a project manager and technical leader in Huawei's IP Network Solutions and Clock Lab, where he focuses on mobile backhaul networks and the development and analysis of packet network architectures/protocols for accurate frequency and phase/time transfer. He actively participates in ITU-T and IEEE 802.3 standardization. Prior to joining Huawei, he spent 12 years at Nortel focusing on ATM/TDM pseudowires, synchronous Ethernet, clock algorithms for base stations, TCP/IP active queue management, and ATM switching. He has been granted 10 patents, has published in more than 10 international journals, and has generated more than 50 ITU contributions. He received his B.A.Sc. and M.A.Sc. in electrical/computer engineering from the University of Ottawa in 1995 and 1997, and from l'Ecole Nationale Superieure des Telecommunications.

KUIWEN JI is the technical leader for synchronization solutions and has worked at Huawei since 2001. He focuses on synchronization solutions for SDH/OTN/IP products. He also attends and participates in ITU-T, IETF, and IEEE standardization, and is a steering committee member of the WSTS workshop on synchronization.

LIU SONG is a system engineer in Huawei's Clock Lab in Shenzhen, China, where he focuses on network products' synchronization solutions and implementation. He participates in ITU-T and actively contributes to the development of the frequency profile and time profile based on the IEEE 1588v2 protocol. He has five years of work experience in the telecom network and product synchronization field since joining Huawei in 2004. He received a bachelor's degree from the University of China in 2002.

HAN LI graduated from Beijing University of Posts and Telecommunications (BUPT) and obtained his Ph.D. in 2002. He has been working at China Mobile Research Institute since 2004. He is currently the deputy director of research and in charge of the transport and access area. He has profound knowledge of OTN, PTN, PON, and time synchronization technology, and has published more than 50 articles, applied for 20 patents, and delivered more than 60 ITU-T contributions.