Programmable Networks for IP Service Deployment
For a listing of recent titles in the Artech House Telecommunications Library, turn to the back of this book.
Programmable Networks for IP Service Deployment

Alex Galis
Spyros Denazis
Celestin Brou
Cornel Klein
Artech House, Inc. Boston • London www.artechhouse.com
ISBN: 1-58053-745-6
Cover design by Igor Valdman

© 2004 ARTECH HOUSE, INC.
685 Canton Street
Norwood, MA 02062

All rights reserved. Printed and bound in the United States of America. No part of this book may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without permission in writing from the publisher.

All terms mentioned in this book that are known to be trademarks or service marks have been appropriately capitalized. Artech House cannot attest to the accuracy of this information. Use of a term in this book should not be regarded as affecting the validity of any trademark or service mark.

International Standard Book Number: 1-58053-745-6
Library of Congress Catalog Card Number:

10 9 8 7 6 5 4 3 2 1
CONTENTS

Foreword
Preface
Acknowledgments

1 Introduction
  1.1 The Importance of Programmable Networks
  1.2 Structure of the Book
  1.3 The FAIN Project and Consortium

2 Programmable Networks: Background
  2.1 Motivation
  2.2 Trends and Expected Evolution
  2.3 Open Signaling
    2.3.1 The IEEE P1520
    2.3.2 The IETF ForCES
  2.4 DARPA Active Networks
  2.5 Node Operating Systems
  2.6 Execution Environments
  2.7 Conclusions
  References

3 Programmable Networks’ Security: Background
  3.1 Introduction
  3.2 Requirements for Security
  3.3 Programmability Versus Security
  3.4 Programming Language or Operating System?
  3.5 Trusted Networking Requires Trusted Computing
  3.6 Authorization in the Absence of Identities
  3.7 Resource Controls
  3.8 Putting It All Together
  3.9 Conclusion and Thoughts for the Future
  References

4 Programmable Network Management and Services: Background
  4.1 State of the Art
    4.1.1 Network and Element Management
    4.1.2 Active Service Provisioning
  4.2 Trends and Expected Evolution
    4.2.1 Element and Network Management
    4.2.2 Active Service Provisioning
  References

5 SwitchWare Active Platform
  5.1 Introduction
  5.2 Why SwitchWare?
  5.3 Precedents and Possibilities
  5.4 Switch Versus Capsule: A Misleading Dichotomy
  5.5 It Starts with the Node: Active Bridging, ALIEN, SANE, SQOSH, and RCANE
  5.6 Active Packet Languages: PLAN, SNAP, and Caml
  5.7 Results
  5.8 Reflections and Conclusions
  References

6 Peer-to-Peer Programmability
  6.1 Introduction
  6.2 What Are P2P Services?
    6.2.1 Architectural Concepts
    6.2.2 Components, Structure, and Algorithms of Peer-to-Peer Services
  6.3 Requirements for P2P Programmability
  6.4 Objectives and Requirements for P2P Overlay Management
  6.5 P2P Overlay Management Using Application-Layer Active Networking
    6.5.1 The Active Virtual Peer Concept
    6.5.2 Implementation of AVPs
  6.6 Conclusion
  References

7 Programmable Networks’ Requirements
  7.1 Introduction
  7.2 Operators’ Expectations of Active Networks
    7.2.1 Overview
    7.2.2 Speeding Service Deployment and Customization
    7.2.3 Leveraging Network and Service Management
    7.2.4 Decreasing Vendor Dependency
    7.2.5 Integrating Information Networks and Services
    7.2.6 Diversification of Services and Novel Business Opportunities
  7.3 FAIN Enterprise Model
    7.3.1 Roles
    7.3.2 Reference Points
  7.4 Network Programmability and Active Applications
    7.4.1 Introduction
    7.4.2 Active Web Services
    7.4.3 Active Multicasting
    7.4.4 Active VPN
  7.5 Generic Requirements for the FAIN Architecture
    7.5.1 Service Architecture
    7.5.2 Service Access Requirements
    7.5.3 Service-to-Network Adaptation/Management
    7.5.4 IP-Based Network Models
    7.5.5 Service Level Agreements
    7.5.6 Quality of Service
    7.5.7 Charging/Billing
    7.5.8 Security
    7.5.9 Active Node/Network Control
    7.5.10 Generic Framework Requirements
  7.6 Requirements from Operators’ Expectations
    7.6.1 Impact of Speeding Service Deployment and Customization
    7.6.2 Impact of Leveraging Network and Service Management
    7.6.3 Impact of Decreasing the Dependence on Vendors
    7.6.4 Impact of Networks and Service Integration and Information Networking
    7.6.5 Impact of Diversifying Services and Business Opportunities
  7.7 Application Requirements
    7.7.1 RP1: SCP–SP
    7.7.2 RP2: SP–ANSP
    7.7.3 RP3: ANSP–NIP
    7.7.4 RP4: Consumer–SP
    7.7.5 RP5, RP6, and RP7: Federation Among SPs, ANSPs, and NIPs
  7.8 Conclusion
  References

8 FAIN Network Overview
  8.1 FAIN Enterprise Model
    8.1.1 Roles
    8.1.2 Reference Points
  8.2 FAIN Reference Architectural Model
    8.2.1 Discussion on the FAIN Reference Architecture
  8.3 FAIN Networking Architecture
    8.3.1 Networking Issues in FAIN
    8.3.2 Components in the FAIN Programmable Network
  8.4 FAIN Active Service Provisioning
    8.4.1 Introduction
    8.4.2 FAIN Approach
    8.4.3 Actors
    8.4.4 Use Cases
    8.4.5 ASP Architecture
  8.5 FAIN Testbed
    8.5.1 Network Topology and Interconnection
    8.5.2 Sites Overview
  8.6 FAIN Scenarios
    8.6.1 DiffServ Scenario
    8.6.2 WebTV Scenario
    8.6.3 Web Service Distribution Scenario
    8.6.4 Video on Demand Scenario
    8.6.5 Mobile FAIN Demonstrator
    8.6.6 Managed Access
    8.6.7 Security Scenario
  8.7 Concluding Remarks
  References

9 Virtual Environments and Management
  9.1 Requirements
  9.2 Design
    9.2.1 Basic Component
    9.2.2 Configurable Component
    9.2.3 Component Manager
    9.2.4 Template Manager
    9.2.5 Resource Manager
    9.2.6 Special Managers
  9.3 Implementation
    9.3.1 Basic Component
    9.3.2 Port
    9.3.3 IIOP Port
    9.3.4 SNMP Port
    9.3.5 Configurable Component
    9.3.6 Component Manager
    9.3.7 Resource Manager
    9.3.8 Virtual Environment
    9.3.9 Virtual Environment Manager
    9.3.10 Security Context
    9.3.11 Security Manager
    9.3.12 Execution Environment
    9.3.13 Java Execution Environment
    9.3.14 Java Execution Environment Manager
    9.3.15 PromethOS Execution Environment
    9.3.16 PromethOS Execution Environment Manager
    9.3.17 SNAP Execution Environment
    9.3.18 SNAP Execution Environment Manager
    9.3.19 Channel
    9.3.20 Channel Manager
    9.3.21 DiffServ Controller
    9.3.22 DiffServ Manager
    9.3.23 Traffic Controller
    9.3.24 Traffic Manager
  9.4 Use Cases
    9.4.1 Booting the Management Layer
    9.4.2 Creating a Virtual Environment
    9.4.3 Deploying a Service
  9.5 Conclusion
  References

10 Demultiplexing
  10.1 Introduction to De/MUX
  10.2 Requirements
    10.2.1 Requirements for Active Packet Format for De/Multiplexing
    10.2.2 Requirements for De/MUX Mechanism
  10.3 Active Packet Format
    10.3.1 VE ID Option Data
    10.3.2 EE ID Option Data
  10.4 Framework, Components, Interfaces
    10.4.1 Active Channel
    10.4.2 Data Channel
    10.4.3 Interface Between De/MUX Components and Security Component
  10.5 Conclusions
  References

11 Security Management
  11.1 Introduction
  11.2 System Relationships and Entities
  11.3 Threats, Security Requirements, and Architecture Goals
  11.4 Security Issues
    11.4.1 Authorization and Policy Enforcement
    11.4.2 Authentication
    11.4.3 Packet Integrity
    11.4.4 System Integrity
    11.4.5 Code and Service Verification
    11.4.6 Limiting Resource Usage
    11.4.7 Accountability
  11.5 High-Level Security Architecture
    11.5.1 FAIN Architectural Model and Security Architecture
  11.6 Security Architecture Design and Implementation
    11.6.1 Building the Components’ Security Context
    11.6.2 Enforcement Layer, Authorization, and Policy Enforcement
    11.6.3 External Security Representation
    11.6.4 Cryptographic Subsystem and Secure Store
    11.6.5 Connection Manager
    11.6.6 Verification Manager
  11.7 General Active Packet Security Events
  11.8 Security Architecture Performance
  11.9 Architecture Applicability
  11.10 Evaluation of the Security Architecture
  11.11 Conclusions
  References

12 Resource Control Framework
  12.1 Requirements
  12.2 RCF Design
  12.3 RCF Main Functionalities
    12.3.1 Admission Control
    12.3.2 Resource Control
  12.4 Model RCF Implementation
    12.4.1 Traffic Control and Management for Linux
    12.4.2 DiffServ Control and Management for a Gigabit Router
  12.5 Conclusions
  References

13 Control Execution Environments
  13.1 Introduction
    13.1.1 Management for Evolving and Adapting Networks
    13.1.2 Extending the Control Plane
    13.1.3 Operation of the Control EE
    13.1.4 Safety, Predictability, and Security
  13.2 Active Packet Interceptor
    13.2.1 Intercepting and Injecting
    13.2.2 Executing
    13.2.3 IP Protocols as Active Packets
    13.2.4 Constrained Language: Forward Branching Languages
  13.3 Operational Design of SNAP Interpreter
    13.3.1 Instruction Classes
    13.3.2 Marshaling and Execution in Place
    13.3.3 Segments
    13.3.4 Stack and Heap Addressing
    13.3.5 Expanding Execution Buffers
    13.3.6 The Send Primitive
  13.4 SNAP Activator
    13.4.1 Packet Interception Mechanisms
    13.4.2 Other Services
    13.4.3 SNMP Interface
  13.5 Security in the Control EE
    13.5.1 Introduction
    13.5.2 Active Networks Authentication
    13.5.3 FAIN Solution
  13.6 Control EE in DiffServ
  13.7 Conclusion
  References

14 High-Performance Execution Environments
  14.1 Motivation
  14.2 Initiatives in High-Performance Active Networking
    14.2.1 Practical Active Network: The First Step Toward High Performance
    14.2.2 Active Network Node with Hardware Support
    14.2.3 Simple Active Router Assistant
    14.2.4 Cluster-Based Active Node
    14.2.5 Composable Active Network Elements
    14.2.6 Active Packets Edition
    14.2.7 Protocol Boosters: Programmable Protocol Processing Pipeline
    14.2.8 Kernel Services
    14.2.9 AMP
    14.2.10 Magician: Resource Management and Allocation
    14.2.11 AMnet: Flexinet Project
    14.2.12 Safe and Nimble Active Packets
    14.2.13 TAGS: Optimizing Active Packet Format
  14.3 Toward an Architecture of High-Performance Active Networks and Nodes
    14.3.1 Proposing an Architecture for a High-Performance Active Network
    14.3.2 Proposing an Architecture for a High-Performance Active Node
  14.4 Tamanoir: A Practical Framework for High-Performance Active Networking
    14.4.1 High-Level Multithreaded Execution Environment
    14.4.2 User Space and Implementation Issues
    14.4.3 Kernel Space Execution Environment
    14.4.4 Distributed Service Processing: Tamanoir on a Cluster
  14.5 Tamanoir Performance Evaluation
    14.5.1 Hardware and Software Descriptions of the Testbeds
    14.5.2 Latency Measures
    14.5.3 Data Path Optimization in a Tamanoir Active Node
    14.5.4 Throughput Measures
  14.6 Conclusion
  References

15 Network Management
  15.1 Introduction
  15.2 Design and Functionality
  15.3 The FAIN PBNM Core Components Description
    15.3.1 Common Use Cases
    15.3.2 Core Components
    15.3.3 ANSP Proxy
    15.3.4 PDP Manager
    15.3.5 PDP
    15.3.6 Monitoring System
    15.3.7 Policy Parser
    15.3.8 Policy Repository
  15.4 Network-Level Management System
    15.4.1 Use Cases
    15.4.2 NMS Components
  15.5 Element-Level Management System
    15.5.1 Use Cases
    15.5.2 EMS Components
  15.6 Conclusion
  References

16 Service Deployment in Programmable Networks
  16.1 ASP Functionalities
    16.1.1 Actors
    16.1.2 Use Case Diagrams
  16.2 Design Overview
  16.3 Service Description
    16.3.1 Basic Concepts
    16.3.2 Network-Level Service Descriptor
    16.3.3 Node-Level Service Descriptor
  16.4 ASP Components
    16.4.1 Network ASP
    16.4.2 Node ASP
  16.5 Conclusion
  References

17 DiffServ Scenario
  17.1 Introduction
  17.2 Architecture
    17.2.1 Traffic Controller
    17.2.2 DiffServ Controller
  17.3 Scenario
    17.3.1 HIT/HEL Testbed Configuration
    17.3.2 FHG Testbed Configuration
    17.3.3 Active Proxy Configuration
  17.4 Conclusion
  References

18 WebTV Scenario
  18.1 Motivation and Key Concepts
  18.2 General Description
  18.3 FAIN PBNM and ASP Revisited: Detailed Scenario Description
  18.4 WebTV Components
    18.4.1 Reconfiguration of the Transcoder
    18.4.2 How the Controller Works
    18.4.3 Testbed Configuration for WebTV Demonstration
  18.5 Conclusions
  References

19 The Outlook
  19.1 Reference Architecture for Programmable Service Networks
  19.2 Requirements Analysis for Further Development in Programmable Service Networks
  19.3 Expected Key Novel Features and Benefits
  References

About the Editors

Index
Preface

The integration of the Internet, Web technologies, software technologies, traditional telecommunications technologies, and broadcasting technologies has always been a challenge for network and service operators, as far as service deployment and management are concerned. Different frameworks and architectural approaches have been proposed in the research literature and in commercial work.

This book addresses programmable Internet protocol (IP) networks and their management, and is expected to be the first of several on this subject. The purpose of this book is to introduce the reader to the current state of the art and the future challenges of programmable networks as an enabling step toward rapid, autonomic, and flexible service deployment; to present a novel programmable network and management approach; and to present part of the research results developed in the FAIN project. It also includes contributions and experiences from other sources, notably from DARPA/NSF-funded projects in the United States.

This book has benefited greatly from the contributions, reviews, and experience of many people, who generously gave of their time. The editorial tasks that were necessary to select and shape the material were a joint effort between the following people and us.

Editors and coauthors:
Stamatis Karnouskos - Fraunhofer Institute FOKUS - Germany
Spyros Denazis - Hitachi Europe Ltd. - United Kingdom
Jonathan M. Smith - University of Pennsylvania - United States
Julio Vivero - Universitat Politecnica de Catalunya - Spain
Joan Serrat - Universitat Politecnica de Catalunya - Spain
Chapters:
Chapter 2 - Programmable Networks: Background
Chapter 3 - Programmable Networks’ Security: Background¹
Chapter 4 - Programmable Network Management and Services: Background

¹ Work in Chapter 3 was supported by DARPA under Contracts #N66001-96-C-852 and #DABT63-95-C-0073, and the National Science Foundation under Grants #ANI-9906855, #ANI-9813875, #ANI-0082386, and #CSE-0081360.
Jonathan M. Smith - University of Pennsylvania - United States
Scott M. Nettles - University of Texas - United States
Hermann de Meer - University of Passau - Germany
Kurt Tutschku - University of Würzburg - Germany
Drissa Houatra - France Télécom - France
Cornel Klein - Siemens AG - Germany
Alex Galis - University College London - United Kingdom
Spyros Denazis - Hitachi Europe Ltd. - United Kingdom
Célestin Brou - Fraunhofer Institute FOKUS - Germany
Cornel Klein - Siemens AG - Germany
Thomas Becker - Fraunhofer Institute FOKUS - Germany
Toshiaki Suzuki - Hitachi Europe - United Kingdom
Dusan Gabrijelcic - Jozef Stefan Institute - Slovenia
Arso Savanovic - Jozef Stefan Institute - Slovenia
Antonis Lazanakis - National Technical University of Athens - Greece
George Karetsos - National Technical University of Athens - Greece
Chapter 5 - SwitchWare Active Platform²
Chapter 6 - Peer-to-Peer Programmability
Chapter 7 - Programmable Networks’ Requirements
Chapter 8 - FAIN Network Overview
Chapter 9 - Virtual Environments and Management
Chapter 10 - Demultiplexing
Chapter 11 - Security Management
Chapter 12 - Resource Control Framework

² Work in Chapter 5 was supported by DARPA under Contracts #N66001-96-C-852 and #DABT63-95-C-0073, and the National Science Foundation under CAREER Grant #CCR-9702107 and Grants #ANI-0081360, #ANI-9906855, #ANI-9813875, #ANI-0082386, and #CSE-0081360.
Walter Eaves - University College London - United Kingdom
Jonathan Moore - University of Pennsylvania - United States
Lawrence Cheng - University College London - United Kingdom
Laurent Lefevre - INRIA - France
Jean-Patrick Gelas - INRIA - France
Epifanio Salamanca - Universitat Politecnica de Catalunya - Spain
Edgar Magaña - Universitat Politecnica de Catalunya - Spain
Joan Serrat - Universitat Politecnica de Catalunya - Spain
Célestin Brou - Fraunhofer Institute FOKUS - Germany
Marcin Solarski - Fraunhofer Institute FOKUS - Germany
Toshiaki Suzuki - Hitachi Europe - United Kingdom
Thomas Becker - Fraunhofer Institute FOKUS - Germany
Lawrence Cheng - University College London - United Kingdom
Alvin Tan - University College London - United Kingdom
Marcin Solarski - Fraunhofer Institute FOKUS - Germany
Alex Galis - University College London - United Kingdom
Spyros Denazis - Hitachi Europe Ltd. - United Kingdom
Célestin Brou - Fraunhofer Institute FOKUS - Germany
Cornel Klein - Siemens AG - Germany
Chapter 13 - Control Execution Environments
Chapter 14 - High-Performance Execution Environments
Chapter 15 - Network Management
Chapter 16 - Service Deployment in Programmable Networks
Chapter 17 - DiffServ Scenario
Chapter 18 - WebTV Scenario
Chapter 19 - The Outlook
This book is based on the experiences of the researchers in the FAIN IST project, and particularly the following people, listed as contributors:

University College London - United Kingdom: Lawrence Cheng, Alex Galis, Walter Eaves, Wilson Lim, Drissa Houatra, Yannick Carlinet, Bruno Dumant, Remi Kerboul, Richard Lewis, Alvin Tan, Kun Yang
Eidgenössische Technische Hochschule - Switzerland: Matthias Bossardt, Placi Flury, Bernhard Plattner, Lukas Ruf, Burkhard Stiller, Rolf Stadler
Jozef Stefan Institute - Slovenia: Borka Jerman Blazic, Dusan Gabrijelcic, Tomaz Klobucar, Franci Mocilar, Arso Savanovic
Koninklijke KPN NV, KPN Research - Netherlands: Jan H. Laarhuis, Jerry van der Leur, Herman Pals
Fraunhofer Institute FOKUS - Germany: Thomas Becker, Célestin Brou, Elisa Boschi, Georg Carle, Fawzi Daoud, Richard Gold, Hui Guo, Stamatis Karnouskos, Eun-Mok Lee, Eckhard Moeller, Marcin Solarski, Richard Sinnott, Ming Yin
National Technical University of Athens - Greece: George Karetsos, Antonis Lazanakis, Yiannis Nikolakis, Odysseas Pyrovolakis, Ermolaos Zymboulakis
Hitachi Europe Ltd. - United Kingdom: Masahiro Abe, Spyros Denazis, Chiho Kitahara, Fumihiko Mori, Christos Tsarouchis, Toshiaki Suzuki
Hitachi Ltd. - Japan: Kiminori Sugauchi, Osamu Takada, Satoshi Yoshizawa
France Télécom/R&D - France: Yvon Gourhant, Bertrand Mathieu, Jamel E. Meddour
IKV++ Technologies AG - Germany: Christoph Bäumer, Jürgen Dittrich, Christoph Weckerle
Universitat Politecnica de Catalunya - Spain: Edgar Magaña, Epifanio Salamanca, Joan Serrat, Julio Vivero
Integracion Y Sistemas De Medida, SA - Spain: Juan Luis Mañas González, Gema Esteban, Mercedes Urios

Siemens AG - Germany: Peter Graubmann, Cornel Klein, Evelyn Pfeuffer, Reiner Schmid

University of Pennsylvania - United States: Jessica Kornblum, Sotiris Ioannidis, Jonathan Moore, Jonathan Smith
This book could not have been completed without the enthusiastic support of every individual listed here. We thank all contributors and invite you to consider and make use of the concepts and technological results presented in this book.
Acknowledgments

This book is the result of the work of many people. First of all, we want to mention the people whose valuable contribution to the editorial task of this book was the key to our final objective. They are:

Thomas Becker (Fraunhofer Institute FOKUS - Germany)
Lawrence Cheng (University College London - United Kingdom)
Walter Eaves (University College London - United Kingdom)
Dusan Gabrijelcic (Jozef Stefan Institute - Slovenia)
Jean-Patrick Gelas (INRIA - France)
Drissa Houatra (France Télécom - France)
George Karetsos (National Technical University of Athens - Greece)
Stamatis Karnouskos (Fraunhofer Institute FOKUS - Germany)
Antonis Lazanakis (National Technical University of Athens - Greece)
Laurent Lefevre (INRIA - France)
Edgar Magaña (Universitat Politecnica de Catalunya - Spain)
Hermann de Meer (University of Passau - Germany)
Jonathan Moore (University of Pennsylvania - United States)
Scott M. Nettles (University of Texas - United States)
Epifanio Salamanca (Universitat Politecnica de Catalunya - Spain)
Arso Savanovic (Jozef Stefan Institute - Slovenia)
Joan Serrat (Universitat Politecnica de Catalunya - Spain)
Marcin Solarski (Fraunhofer Institute FOKUS - Germany)
Jonathan M. Smith (University of Pennsylvania - United States)
Toshiaki Suzuki (Hitachi Europe - United Kingdom)
Alvin Tan (University College London - United Kingdom)
Kurt Tutschku (University of Würzburg - Germany)
The contents of this book represent the work done by the research and engineering staff who participated in the IST FAIN project, cofunded by the European Union; in the two DARPA projects, cofunded by the National Science Foundation; and in national French and German research projects. We would like to thank all those people; it was a pleasure to work with them during the last three years.
We would like to acknowledge the help and enthusiastic support received from Dr. Julie Lancashire and Tiina Ruonamaa, the two Artech House officers who conducted the book review and publication process.

We thank David Sutherland for his thorough review and helpful comments. We thank Professor Chris Todd (University College London) for his support and encouragement in writing this book. We thank Taina Galis and Richard Lewis for their helpful comments on improving the readability of the book.

Finally, we would like to thank Pertti Jauhiainen, the European Union project coordinator, for his support, wisdom, and encouragement of the work of the FAIN project. He modulated the evolution of the project, and thereby favorably affected the content of this book.

Alex Galis
Spyros Denazis
Celestin Brou
Cornel Klein

London, England
April 2004
Chapter 1

Introduction

1.1 THE IMPORTANCE OF PROGRAMMABLE NETWORKS

Since its beginning, the Internet’s development has been founded on a basic architectural premise: a simple network service as a universal means to interconnect intelligent end systems. The end-to-end argument has served to maintain this simplicity by pushing complexity into the end points, allowing the Internet to reach an impressive scale in terms of interconnected devices. While the scale has not yet reached its limits, the growth in functionality (the ability of the Internet to adapt to new functional requirements) has slowed with time. In addition, the ever-increasing demands of audiovisual applications and network services currently face the relative inflexibility of telecommunication infrastructures. In this sense, the pervasive use of the Internet to support such applications has revealed important deficiencies in current IP networks.

We see new network architectures evolving both above and below the Internet. Underneath the Internet, all-optical networks without buffering will dominate the core, while wireless local area networks (LANs), subject to high packet error rates, and asymmetric digital subscriber line (ADSL) access networks will dominate the edges. Above the Internet are systems that exploit and demand mobility, the ability to deliver continuous media, and private address spaces and access control. While some of these requirements were met quickly with ad hoc solutions and de facto standards, others remain unresolved: quality of service (QoS), Internet protocol version 6 (IPv6), and a well-designed multicast service, despite having standards, are still not widely deployed.

In the world of new network architectures, we are experiencing a significant paradigm shift resulting from new technologies and approaches. The motivation behind this shift is the still-elusive goal of rapid and autonomous service creation, deployment, activation, and management, driven by new and ever-changing customer and application requirements. Research activity in this area has clearly focused on the synergy of a number of concepts: programmable networks, managed networks, network virtualization, open interfaces and platforms, and
increasing degrees of intelligence inside the network.

Next generation networks must be capable of supporting a multitude of service providers that exploit an environment in which services are dynamically deployed and quickly adapted over a heterogeneous physical infrastructure, according to varying and sometimes conflicting customer requirements.

Programmable networks have been proposed as a solution for the fast, flexible, and dynamic deployment of new network services. Programmable networks are networks that allow the functionality of some of their network elements to be dynamically programmed. These networks aim to ease the introduction of new network services by adding dynamic programmability to network devices such as routers, switches, and application servers. Dynamic programming refers to executable code that is injected into the network element in order to create new functionality at run time. The basic idea is to enable third parties (end users, operators, and service providers) to inject application-specific services (in the form of code) into the network. Applications may utilize this network support in terms of optimized network resources and, as such, they become network aware.

Programmable networks thus allow dynamic injection of code as a promising way of realizing application-specific service logic, or performing dynamic service provision on demand. As such, network programming provides unprecedented flexibility in telecommunications. However, viable architectures for programmable networks must be carefully engineered to achieve suitable trade-offs between flexibility, performance, security, and manageability. The key question, from the points of view of public fixed and mobile operators and Internet service providers (ISPs), is how to exploit this potential flexibility for the benefit of both the operator and the end user without jeopardizing the integrity of the network.
The answer lies in the promising potential that emerges with the advent of programmable networks in the following aspects:

• Rapid deployment of new services;
• Customization of existing service features;
• Scalability and cost reduction in network and service management;
• Independence of network equipment manufacturer;
• Information network and service integration;
• Diversification of services and business opportunities.
While in the initial stages of research in this area the focus was on the basic mechanisms needed for dynamically installing and moving service code in the network, research and development is now directed toward the main scientific and commercial benefits that programmable network technology can provide for both fixed and mobile networks: service network programmability and autonomic service deployment architectures, bringing just the right services to the customer in just the right context (e.g., time, location, preferences, qualities).

1.2 STRUCTURE OF THE BOOK

The main objective of this book is, first, to introduce the reader to the current state and challenges of programmable networks as an enabling step toward rapid, autonomic, and flexible service deployment; second, to present a novel programmable network and management approach; and third, to present the research results developed in the European Union IST research project FAIN. The book addresses the rapid deployment of services, whose solution is fundamental in supporting the widespread deployment of new telecommunications networks and services in a deregulated market. We considered it necessary to present these results from the perspective of current and future challenges of network technologies.

The book presents the background to programmable network technologies and management systems and, as such, introduces the reader gradually to its main subjects. This technology perspective is presented in the first part of the book, Chapters 2–6. The central part of the book, Chapters 7–18, describes the problems and novel solutions for programmable networks and their management. In Chapter 19 we provide an outlook on the necessary research and development in service programmability.

Chapter 2 is a review of current research and development in programmable network technologies and platforms. Chapter 3 deals with the security challenges and architectures for programmable networks. Chapter 4 is a review of the management technologies and solutions applicable to programmable networks. Chapter 5 deals with the realization details of the SwitchWare platform and its influence on programmable network research and development activities. Chapter 6 is a review of peer-to-peer technologies as applied to network programmability.
Chapter 7 presents a closer look at the programmable networks’ requirements in relation to a business and enterprise model. Chapter 8 presents the main concepts and systems conceived in the FAIN project. Chapter 9 presents the concept of virtual environments and their management. Chapter 10 presents the (de)multiplexing in programmable networks. Chapter 11 presents some solutions to programmable network security challenges. Chapter 12 presents the programmable network’s resource control mechanisms and framework. Chapter 13 presents the design of execution environments applicable to the control plane. Chapter 14 presents the design of high-performance execution environments. Chapter 15 presents management solutions to programmable networks. Chapter 16 presents service deployment mechanisms in programmable networks. Deployments in programmable networks of two example services, DiffServ and WebTV services, are presented in Chapters 17 and 18, respectively.
Finally, Chapter 19 provides an outlook on the expected evolution of next generation networks, and identifies some of the research and development issues that underpin the migration from programmable networks toward programmable service networks.

1.3 THE FAIN PROJECT AND CONSORTIUM

The FAIN project (www.ist-fain.org) was a research project operating under the umbrella of the IST program, cosponsored by the European Commission and the National Science Foundation. The project duration was 38 months: May 2000–July 2003.

The FAIN project developed an open, flexible, programmable, and dependable (reliable, secure, and manageable) network architecture based on novel active and programmable node concepts. The FAIN network architecture was validated by deploying and exercising interoperable, active IP network nodes in a testbed environment. It supports new dynamic models of network control and management, and a wide range of distributed applications and services. It proposes a new generic architecture for active and programmable networks and their management.

The FAIN consortium is composed of the following companies and universities:
• University College London, United Kingdom (prime contractor): www.ee.ucl.ac.uk
• Josef Stefan Institute, Slovenia: www.ijs.si
• National Technical University of Athens, Greece: www.telecom.ntua.gr
• Universitat Politecnica De Catalunya, Spain: www.upc.es
• T-Nova Deutsche Telekom Innovationsgesellschaft Mbh, Germany: www.berkom.de
• France Télécom R&D, France: www.rd.francetelecom.fr
• Koninklijke KPN NV, KPN Research, Netherlands: www.kpn.com
• Hitachi Europe Ltd, United Kingdom: www.hitachi-eu.com
• Hitachi Ltd., Japan: www.hitachi.co.jp
• Siemens AG, Germany: www.siemens.de
• Eidgenössische Technische Hochschule Zürich, Switzerland: www.ethz.ch
• Fraunhofer Institute FOKUS, Germany: www.fokus.fraunhofer.de
• IKV++ Technologies AG, Germany: www.ikv.de
• Integracion Y Sistemas De Medida, SA, Spain: www.integrasys-sa.com
• University of Pennsylvania, United States: www.upenn.edu
Chapter 2

Programmable Networks: Background

2.1 MOTIVATION

In the last decade, the business world has witnessed the rapid development of the Internet and IP networks in private and corporate areas. The wide acceptance of IP originates from its unparalleled ability to provide ubiquitous access at a low price, whatever the underlying networking technology. Moreover, the existing best-effort IP transport service allows new application services to be offered on a global scale by almost anyone, simply by connecting a new Web server to the Internet. Today, IP is uniquely able to bridge diverse application/user requirements with broadband transfer capability. Various research initiatives, such as Next Generation Internet (NGI), CANARIE, and Internet2, are moving toward providing unlimited bandwidth for Internet users. In parallel, based on the conventional Internet architecture, the Internet Engineering Task Force (IETF) is undertaking a “bottom-up” development of Internet protocols and techniques to fulfill upcoming requirements from applications, users, and providers. However, the development and deployment of new network services, that is, services that operate on the IP layer, is too slow, constrained as it is by best practice and standardization. It is unable to match the rate of development of many applications, such as multimedia multiparty communication, which has led to calls for quality of service, reliable multicast, or Web proxies/caches/switches/filters. Like the intelligent network (IN) architecture in the public switched telephone network (PSTN) world, the current Internet architecture needs to be enhanced in order to allow for a more rapid introduction of such services. Programmable and active networks (AN) have been proposed as a solution for fast and flexible deployment of new network services. The core idea is to enable third parties (end users, operators, and service providers) to inject application-specific services in the form of code into the network.
This allows applications to utilize these services to obtain required network support (for example, in terms of performance), making them network aware. So, active networks allow dynamic injection of code to enable application-specific service logic, or to perform dynamic service provision on demand. The problem is that the dynamic injection of code is only acceptable to network providers if it does not compromise the integrity, the performance, and/or
the security of networks. So, viable architectures for active networks must be carefully engineered to achieve suitable trade-offs between flexibility, performance, security, and manageability. The need for rapid deployment and customization of offered services led to the introduction of programmability in the network elements. This first occurred in the field of telecommunications, with intelligent networks [28] and advanced intelligent networks [4], and spread to the data communication community with the emergence of open signaling and active networks. These advances have been driven by a service-oriented market that needs granularity, openness, and reduced time to market.
Figure 2.1 Active and programmable networks problem space. (Axes: the communication model, covering packet forwarding, QoS, and routing control; and the computational model, covering operating systems, object orientation, and programming languages; active and programmable networks occupy the combined space.)
The term “programmable networks” is used widely by the Opensig [37] community to characterize networks built on the principles they promote. But the definition of programmable networks may cover more ground than the one defined within Opensig, as the authors admit. The networking research community has realized for some time now that there is a need for more flexible and dynamically customizable networks. The next logical step in network evolution seemed to be the extension of the one-dimensional networking model based on the communication model (realized by packet header processing and forwarding) to a two-dimensional one, with the addition of the computational model. A programmable network realizes this by allowing a third party to customize and process the packets that pass through the network interface, by calling open interfaces that reconfigure the node, or that even execute programs on that node. These programs are predefined and have limited capabilities that are realized via predefined interfaces or with specific parameters that bring the node to deterministic states. This approach is safer than the one taken by the active network community. In active networks, the nodes are able to compute on the data they receive via the execution of injected user code. ANs
provide a new network model with its pros and cons, many of which are analyzed in this work. The two-dimensional model depicted in Figure 2.1 shows the networking paradigm shift we are witnessing, from the flat communication model to a new two-dimensional space composed of a mix of communication and computational models. Traditional parts of both models, like packet header processing, forwarding, and quality of service [54] in the communication model, interact with technologies of the computational model, such as programming languages and distributed programming. The result is that the network elements, once barricaded behind proprietary interfaces (e.g., routers, firewalls, switches), are now places where advanced customized computation may take place. Two different schools of thought are dealing with this new problem space: the Opensig [37] community and DARPA [53, 59]. The last two years have seen a significant involvement of the international community as active networks gain momentum. The number of projects in this domain has increased exponentially, and several research efforts have been published.

2.2 TRENDS AND EXPECTED EVOLUTION

Programmability in network elements (switches, routers, and so forth) was introduced over a decade ago as the basis for rapid deployment and customization of new services. The next generation of heterogeneous networks is engineered to facilitate the integration and delivery of a variety of services. Advances in programmable networks have been driven by a number of requirements that have given rise to a new business model, new business actors, and roles [9, 19, 31]. We are moving away from the “monolithic” approach, where systems are vertically integrated, toward a component-based approach, where systems are made of multiple components from different manufacturers, which may interact with each other through open interfaces to form a service [10].
The result is a truly open service platform representing a marketplace wherein services and service providers compete with each other, while customers may select and customize services according to their needs. The problem space of programmable networks is well represented by a two-dimensional model (Figure 2.1). Along the first dimension is the communication model, where programmability has been exercised by introducing service models such as ATM or DiffServ [39] in the transport plane, and then using the control plane to customize them, resulting in different forwarding behaviors as perceived by the users. In the second dimension, the computational model consists of “active” technologies whose origins can be traced to the areas of programming languages, object-oriented and distributed programming, and operating systems. Recently, new hardware technologies such as network processors [23, 27] have pushed the computational model even lower and closer to the physical interfaces of the network elements. The computational model encourages higher amounts of
computation and processing than the communication model, as a means of pushing additional functionality inside the network to meet customer requirements. Programmability along this dimension is exerted by treating the network element (e.g., router, firewall, switch) as a programming environment, wherein service components may be deployed. The Opensig community and DARPA schools of thought have, at first sight, divergent solutions to this problem space, which are heavily dependent on their underlying networking technology and implementation. The Opensig community was established through a series of international workshops [37], while the second school, active networks [16], is the result of a series of projects under the auspices of DARPA [53]. However, there is strong evidence of common features, and work that has tried to combine both domains [57]. Putting these schools of thought together creates a picture that we believe is representative of “programmable networks.” Recently, such features have become the main focus of standardization activities, in particular the IEEE P1520 [9] and the IETF ForCES protocol working group [26]. We will identify a number of the common features of Opensig and DARPA, and argue that they are the basic ingredients of the next generation network element (NE) architecture, capable of realizing the otherwise elusive rapid service deployment. These features are: the execution environment (EE), the building block approach, and the principle of separation across the different operational planes necessary to support interoperability. Our analysis is based on the state of the art and is a basis for the FAIN project. Furthermore, due to strong evidence of the versatility of the building block approach, we argue that there is a specific type of EE that is going to be dominant in the next generation of programmable networks, assisted by the widespread adoption of network processors.
2.3 OPEN SIGNALING

As noted above, the term “programmable networks” is used widely by the Opensig [37] community to characterize networks built on the principles it promotes. The following subsections describe these principles and the standardization work they inspired.

2.3.1 The IEEE P1520

The original motivation behind Opensig networks came from the observation that monolithic and complex control architectures could be restructured as a minimal set of layers, allowing the services residing in each layer to be accessible through open interfaces, providing the basis for service creation (composition). Results from the Opensig community were formalized by the IEEE Project 1520 standards initiative for programmable network interfaces and its corresponding reference model [9]. This IEEE P1520 reference model (RM) provides a general framework for mapping programming interfaces and operations of networks over any given
networking technology. Mapping diverse network architectures and their corresponding functionality is essential to the P1520 RM.

Figure 2.2 The P1520 reference model and the L-interface abstraction model. (After: [9].) (The figure shows users above the V-interface, applications invoking methods on objects below the U-interface, and the L-interface abstraction model layering service-specific building blocks over resource building blocks over base building blocks, with the CCM interface above the controller hardware and other resources.)
The IEEE P1520 reference model, depicted in Figure 2.2, defines the following four interfaces:

• CCM interface: The connection control and management (CCM) interface is a collection of protocols that enable the exchange of state and control information at a very low level between the network element and an external agent.
• L-interface: This defines an application program interface (API) that consists of methods for manipulating local network resources abstracted as objects. The abstraction isolates upper layers from hardware dependencies or other proprietary interfaces.
• U-interface: This mainly provides an API that deals with connection setup issues. The U-interface isolates the diversity of connection setup requests from the actual algorithms that implement them.
• V-interface: This provides a rich set of APIs to write highly customized software, often in the form of value-added services.
CCM and L-interfaces fall under the category of NE interfaces, whereas U- and V-interfaces constitute networkwide interfaces.
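As an illustration only, the four interface levels can be pictured as a call chain from a value-added service down to the element hardware. Every class and method name below is invented for this sketch; none of it comes from the P1520 specification.

```python
# Illustrative sketch of the P1520 interface layering (all names hypothetical).
# A request made at the V-interface is translated downward until it reaches
# the CCM level, which exchanges state with the network element itself.

class CCMInterface:
    """Low-level exchange of state/control information with the element."""
    def write_state(self, key, value):
        return f"CCM: set {key}={value}"

class LInterface:
    """Manipulates local resources abstracted as objects."""
    def __init__(self, ccm):
        self.ccm = ccm
    def configure_resource(self, resource, value):
        return self.ccm.write_state(resource, value)

class UInterface:
    """Handles connection setup, isolated from the algorithms below."""
    def __init__(self, l_iface):
        self.l = l_iface
    def setup_connection(self, src, dst, bandwidth):
        return self.l.configure_resource(f"conn:{src}->{dst}", bandwidth)

class VInterface:
    """Rich API for value-added services."""
    def __init__(self, u_iface):
        self.u = u_iface
    def deploy_service(self, name, src, dst, bandwidth):
        result = self.u.setup_connection(src, dst, bandwidth)
        return f"service {name}: {result}"

v = VInterface(UInterface(LInterface(CCMInterface())))
print(v.deploy_service("video", "A", "B", "10Mbps"))
```

The point of the layering is that each level only knows the one directly beneath it, so hardware details never leak above the L-interface.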
Initial work, through the asynchronous transfer mode (ATM) subworking group (P1520.2), focused on telecommunication networks based on ATM, and introduced programmability in the control plane [24]. Later, the IP subworking group extended these principles to IP networks and routers. The model depicted in Figure 2.2 also suggests a possible mapping of the P1520 RM to IP routers. However, the group’s efforts aim at a generalized design framework for interfaces, not just for routers but for any NE whose core functionality is the forwarding of traffic; for example, a switch or gateway [8]. We focus on the activities of the IP working group for the remainder of this section, as these are the most relevant to this document. At first, the IP subworking group (P1520.3) faced two critical questions: (1) which RM interface is the most important in terms of maximizing the openness of the RM, and (2) what is the right approach for achieving this openness. Eventually, the group decided that the NE interfaces (CCM and L) are the most critical, as they abstract the functionality and the resources found in the NE, thereby creating a kind of interoperability layer among different vendors’ equipment and, most importantly, allowing the requirements of network services residing in higher layers to be mapped in many different ways onto the capabilities of heterogeneous NEs. The second question was the more complex of the two. Traditional packet and flow processing has been the default network behavior for a long time but, with increasing intelligence being pushed into NEs, emerging devices will perform multiple functions, thereby defining a new class of network elements that extend behavioral functionality within the network transport. Thus, traditional routers and switches are going to be subsumed within next generation NEs capable of dynamically adapting to multifunction requirements.
They are expected to include address translation, firewall enforcement, advanced flow differentiation, proxy activation, load balancing, and advanced monitoring. To keep up with technological advances both in network devices and services/applications, specification of a standard should be based on a software architecture that allows extensive reusability of its modules. In other words, the development effort (proprietary or otherwise) required to extend the API should be minimized, so as not to hinder or delay deployment of emerging technological advances. The fundamental requirement levied on the standardization process of the API is that the standardization itself should not interfere with the future advancement and development of related technologies. Furthermore, this extensible nature must be available at all levels of abstraction within the API hierarchy. The standard API must be extensible to accommodate new network devices, and also able to accommodate newly developed network services and applications. For example, the former might include a proprietary hardware mechanism that accelerates a particular function which, in a “conventional” IP router, would be realized in software. In summary, to make a standard extensible so as to keep up with the pace of innovation and differentiation, one must make the composition mechanism part of the standard, enabling seamless extensions of the API in the future.
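The composition idea, elaborated in the following paragraphs through base, resource, and service-specific building blocks, can be sketched in a few lines. All class names here are invented for illustration and are not an API from the P1520 specification.

```python
# Sketch of composition as part of the standard (hypothetical names):
# base building blocks (BBB) carry only the composition machinery; resource
# building blocks (RBB) inherit from them; service-specific building blocks
# (SSBB) are compositions of RBBs whose API is the union of the RBB APIs.

class BaseBuildingBlock:
    """BBB: no service or resource meaning, only composition support."""
    def api(self):
        # Expose every public method as part of the block's API.
        return [m for m in dir(self)
                if not m.startswith("_") and m != "api"]

class Classifier(BaseBuildingBlock):       # an RBB
    def classify(self, packet):
        return packet.get("dscp", 0)

class Meter(BaseBuildingBlock):            # another RBB
    def meter(self, packet):
        return len(packet.get("payload", b"")) <= 1500

class ServiceSpecificBlock(BaseBuildingBlock):
    """SSBB: composed from RBBs; its API is the collection of their APIs."""
    def __init__(self, *blocks):
        self.blocks = blocks
    def api(self):
        return sorted(m for b in self.blocks for m in b.api())

diffserv = ServiceSpecificBlock(Classifier(), Meter())
print(diffserv.api())   # ['classify', 'meter']
```

Because the base class (and nothing else) is what the standard fixes, new resource blocks can be added by inheritance without touching the specification, which is exactly the extensibility argument made above.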
Programmable IP Networks: Background
11
Following this decision, P1520.3 also selected the L-interface as its initial target for specification. The solution it took for the second question was based on a building block approach, which consists of three layers of abstraction that define a model for specifying the API [46, 47]. The model enables network device programmability from two complementary perspectives, corresponding to the layers of the L abstraction model, primarily service and resource specific. For example, this allows upper level interfaces to create or program completely new network services using generic resource abstractions or to modify existing services using service specific abstractions, which are themselves built on generic resource abstractions. The third layer is introduced to facilitate common device programmability by means of composition, via a standard set of base building block abstractions, on which both the service specific and resource layers are built. More specifically, the upper part of the L-interface is the service specific abstraction layer of the NE. The service-specific building block (SSBB) abstractions at this layer expose “sub”interfaces associated with underlying behaviors or functions, state or policies on the local node that have concrete meaning within the context of a particular supported service (e.g., differentiated services). The idea here is that an administrator or Internet service vendor (ISV) need only program the device within the context of the service (i.e., preferably to an industry standard), rather than deal with low-level abstractions associated with fundamental resources of the network device (e.g., scheduler, dropper). So, he or she need only modify, update or provision the service abstraction at the level that they understand or have a need (or privilege) to supplement, to deliver the required service specific behavior. Alternatively, the middle part of the L-interface abstraction model is the resource abstraction layer of the NE. 
The abstractions here are termed resource building blocks (RBB), from which primitive behaviors [e.g., DiffServ per hop behavior (PHB) [39]] or new behaviors can be built. We envision the programmer as a sophisticated developer or network software architect, who is aware of underlying resource abstractions (not the implementation) of an NE (e.g., router), and can construct new behaviors or functions, or change state or policies within the context of the generic abstraction, without specific knowledge of the underlying vendor device implementation. The maximum degree of abstraction is achieved at the lowest layer of the abstraction model. At this layer, the composition mechanism is abstracted and becomes part of the standard. The idea behind the base building blocks (BBB) is to have abstractions that have no service or resource significance from an NE behavioral or packet processing perspective. These base blocks serve the needs of the programmer only in an inheritance fashion, such that the abstractions above the base layer (namely, resource or service specific) can be designed appropriately to create new functional service behaviors or resources or modify (enhance) existing ones in a consistent, standard object-oriented manner. As a result of the approach, a number of APIs are defined at each layer, with the ones at the BBB layer providing methods that allow RBBs to be composed in
such a way that they form SSBB constructs. By using the APIs of the BBB layers and defining as RBBs components like classifier, meter, shaper, queue, and scheduler, one can create a differentiated services (DiffServ) SSBB, the API of which will be the collection of the APIs of the individual RBBs. To this end, by standardizing a small set of RBBs, one can create any SSBB that is required by specific network services. Alternatively, new RBBs may be introduced by means of inheritance from the BBB layer, and deployed in the NE in order to support new network service requirements or enhance/extend existing functionality.

2.3.2 The IETF ForCES

The Opensig community has long advocated the benefits of a clear distinction between the control and transport planes. Recently, a working group of the IETF, called Forwarding and Control Element Separation (ForCES), was formed with an objective similar to that of P1520, namely, “to define a set of standard mechanisms for control and forwarding separation, ForCES will enable rapid innovation in both the control and forwarding planes. A standard separation mechanism allows the control and forwarding planes to innovate in parallel while maintaining interoperability” [25, 26].
Figure 2.3 ForCES architectural representation of an NE. (The figure shows a network element containing control elements CE 1–CE 3 and forwarding elements FE 1–FE 3, interconnected through the Fr, Fp, and Fi reference points.)
According to [47], the NE is a collection of components of two types: control elements (CE) and forwarding elements (FE), operating in the control and forwarding (transport) planes, respectively. CEs host control functionality like routing and signaling protocols, whereas FEs perform operations on packets passing through them, like header processing, metering, and scheduling. CEs and FEs may be interconnected with each other in every possible combination (CE-CE, CE-FE, and FE-FE), thus forming arbitrary types of logical topologies (see Figure 2.3). Every distinct combination defines a reference point, namely, Fr, Fp, and Fi. Each of these reference points may define a protocol or a collection thereof, but the ForCES protocol is only defined for the Fp reference point.
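The reference-point classification can be illustrated with a small sketch; the element naming and the function below are invented for this example, and the real protocol of course operates on far richer state.

```python
# Sketch of the ForCES NE composition model (names are illustrative).
# CEs and FEs may be interconnected in any combination; each combination
# maps to a reference point: Fr (CE-CE), Fp (CE-FE), Fi (FE-FE).

def reference_point(a, b):
    kinds = {a[0], b[0]}          # element names like "CE1", "FE2"
    if kinds == {"C"}:
        return "Fr"               # CE-CE
    if kinds == {"F"}:
        return "Fi"               # FE-FE
    return "Fp"                   # CE-FE: the only point ForCES defines a protocol for

links = [("CE1", "CE2"), ("CE1", "FE1"), ("FE1", "FE2")]
for a, b in links:
    print(a, b, reference_point(a, b))
```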
However, FEs do not represent the smallest degree of granularity of NE functionality, and, as they implement the ForCES protocol, they must allow CEs to control them by abstracting their capabilities, which in turn may be accessed by the CEs. It is at this point that the ForCES group faced a similar challenge to the IP working group in P1520, which they formulated as follows: “Since FEs may manifest varying functionality in participating in the ForCES NE, the implication is that CEs can make only minimal assumptions about the functionality provided by its FEs” [49]. As a result, CEs must first discover the capabilities of the FEs before they can actually control them. The solution they suggest is captured in an FE model [49]; two of its requirements are that the model must pertain to the problem, and that it must be an extensible standard. The first mandates that the FE model should provide the means to describe existing, new, or vendor-specific logical functions found in the FEs, while the latter demands that it describe the order in which these logical functions are applied in the FE [31]. The ForCES FE model uses an approach similar to the building block approach of the P1520.3 working group, by encapsulating distinct logical functions in an entity called the “FE block.” When this FE block is treated outside the context of a logical function, it becomes equivalent to the base building blocks; when one looks inside the FE block, it becomes a resource building block. Similarly, FE blocks are eventually expected to form an FE block library, in principle extensible, which will be part of the standard and the basis for creating complex NE behaviors, although dynamic extensions thereof may be possible. Of course, there are differences between the two initiatives, but the main ideas are very close, so we expect their full convergence in the future. A model like the FE model is useful when CEs attempt to configure and control FEs.
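The capability-discovery precondition stated above might be sketched as follows. The class and method names are assumptions for illustration, not the ForCES protocol itself.

```python
# Sketch of FE capability discovery (hypothetical API): before a CE can
# control an FE, it must learn which FE blocks the FE implements, since
# it can make only minimal assumptions about FE functionality.

class ForwardingElement:
    def __init__(self, name, blocks):
        self.name = name
        self._blocks = set(blocks)          # e.g. {"classifier", "meter"}
    def capabilities(self):
        return sorted(self._blocks)

class ControlElement:
    def discover(self, fes):
        return {fe.name: fe.capabilities() for fe in fes}
    def can_configure(self, fe, block):
        # Refuse to program a function the FE never advertised.
        return block in fe.capabilities()

ce = ControlElement()
fe1 = ForwardingElement("FE1", ["classifier", "scheduler"])
fe2 = ForwardingElement("FE2", ["meter"])
print(ce.discover([fe1, fe2]))
print(ce.can_configure(fe2, "scheduler"))   # False: FE2 lacks a scheduler
```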
ForCES has identified three levels of control and configuration, namely, static FE, dynamic FE, and dynamically extensible FE control and configuration. The first assumes that the structure of the FE is already known and fixed; the second allows the CE to discover and configure the structure of the FE, although selecting from a fixed FE block library; while the third, the most powerful, allows CEs to download additional functionality, namely FE blocks, onto FEs at run time. Currently, ForCES is mainly focusing on the first level of control and configuration.

2.4 DARPA ACTIVE NETWORKS

Active networks transform the store-and-forward network into the store-compute-and-forward network. The innovation here is that packets are no longer passive. Rather, they are active in the sense that they carry executable code together with their data payload. This code is dispatched and executed at designated (active) nodes that perform operations on the packet data, as well as change the current state of the node, to be found by the packets that follow. In this context, two approaches can be differentiated based on whether programs and data are carried
discretely; namely, within separate packets (out-of-band) or in an integrated manner (in-band). In the discrete case, the job of injecting code into the node and the job of processing packets are separated. The user or network operator first injects his or her customized code into the routers along a path. When the data packet arrives, its header is examined and the appropriate preinstalled code is loaded to operate on its contents [18, 48]. Separate mechanisms for loading and executing may be required for the control thereof. This separation enables network operators to dynamically download code to extend a node’s capabilities, which in turn becomes available to customers through execution. At the other extreme lies the integrated approach, where code and data are carried by the same packet [31]. In this context, when a packet arrives at a node, code and data are separated, and the code is loaded to operate on the packet’s data or change the state of the node. A hybrid approach has also been proposed [2].
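The two injection styles can be contrasted with a minimal sketch. The packet fields and dispatch logic below are invented for illustration.

```python
# Sketch contrasting the two code-injection styles (formats invented here).
# Out-of-band: code is installed first; data packets carry only a reference.
# In-band: each capsule carries both the code and its data payload.

installed_code = {}   # per-node code store for the discrete (out-of-band) case

def inject_code(code_id, code):
    installed_code[code_id] = code            # operator preinstalls the code

def handle_discrete_packet(packet):
    code = installed_code[packet["code_id"]]  # header names preinstalled code
    return code(packet["data"])

def handle_capsule(capsule):
    code, data = capsule["code"], capsule["data"]   # separated on arrival
    return code(data)

inject_code("to_upper", lambda d: d.upper())
print(handle_discrete_packet({"code_id": "to_upper", "data": "payload"}))
print(handle_capsule({"code": lambda d: d[::-1], "data": "payload"}))
```

The sketch makes the trade-off visible: the discrete case separates loading from execution (and so can be controlled separately), while the capsule case pays the cost of carrying and dispatching code on every packet.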
Figure 2.4 The active node architecture. (The figure shows applications using the network API exposed by multiple execution environments, EE1 through EEn, which sit above the node API of the NodeOS within the router.)
Active networks have also proposed their own reference architecture model [13], depicted in Figure 2.4. This model views an active network as a mixture of active and legacy (nonactive) nodes. The active nodes run a node operating system (NodeOS), not necessarily the same one on every node, while a number of execution environments coexist at the same node. Finally, a number of active applications (AA) make use of services offered by the EEs. The NodeOS undertakes the task of simultaneously supporting multiple EEs. Its major functions are to provide isolation among EEs through resource allocation and control mechanisms, and to provide security mechanisms to protect EEs from each other. In addition, it provides other basic facilities, like caching or code distribution, that EEs may use to build higher abstractions to be presented to their AAs. All these capabilities are encapsulated by the node interface through which
EEs interact with the NodeOS. This is the minimal fixed point at which interoperability is achieved [38]. In contrast, EEs implement a very broad definition of a network API, ranging from programming languages, to virtual machines and byte codes (like the Spanner VM in smart packets), to static APIs in the form of a simple list of fixed-size parameters [14]. To this end, an EE takes the form of a middleware toolkit for creating, composing, and deploying services. Finally, the AN reference architecture [13] is designed for simultaneously supporting a multiplicity of EEs at a node. Furthermore, only EEs of the same type are allowed to communicate with each other, whereas EEs of different types are kept isolated from each other.

2.5 NODE OPERATING SYSTEMS

In the area of active networks, two distinct approaches that have been around for quite some time are the pure capsule [48] approach and the router plug-in [17] approach. While the former can be seen as an extreme in terms of how program code is injected into the network, the latter resembles a transition from upgradeable router architectures toward programmable, high-performance active network nodes. With the capsule approach, every packet carries code that is executed at each node. For example, the functionality provided in the capsules may handle packet-routing requests or payload modifications to be carried out on a node. Plug-ins are code components that are installed out of band on an active node. Usually, they serve long-lived flows; that is, they extend the base functionality for a large set of packets. Thus, the overhead is smaller with the plug-in approach than with the capsule approach. Capsules commonly make use of a virtual machine that interprets the capsule’s code to execute it safely on a node. In order to ensure security, the virtual machines must restrict the address space a particular capsule might access, thus limiting the application of capsules.
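A toy sketch of the restricted-execution idea follows. Everything here is invented for illustration; real capsule systems use purpose-built virtual machines rather than anything like this.

```python
# Toy illustration of why capsule execution needs a restricted virtual
# machine: the interpreter only exposes a whitelisted set of operations,
# limiting what a capsule can touch on the node (all details invented).

ALLOWED_OPS = {
    "ttl_dec": lambda pkt: {**pkt, "ttl": pkt["ttl"] - 1},
    "mark":    lambda pkt: {**pkt, "marked": True},
}

def run_capsule(ops, packet):
    for op in ops:
        if op not in ALLOWED_OPS:
            raise PermissionError(f"operation {op!r} not permitted")
        packet = ALLOWED_OPS[op](packet)
    return packet

pkt = run_capsule(["ttl_dec", "mark"], {"ttl": 64})
print(pkt)
```

The interpretation step on every packet is exactly the per-packet cost that the performance argument below is about.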
On the assumption that network links will run at 10 Gbps or faster in the near future, and with an optimistic average packet size [43] of 512 bytes for IP traffic, a router must process 2.6 million packets per second on every port, or one packet in less than 380 nanoseconds. Some of this, at least, is often done in hardware in current routers. Even if we assume that a significant fraction of the forwarded packets do not require active processing and can be handled in hardware, it seems obvious that active network architectures based on virtual machines are not well suited to a multigigabit scenario. However, they may be relevant for tasks such as network management, where only sporadic configuration and management tasks are to be carried out. Multimedia communication over heterogeneous network infrastructures will drive the requirements for particular network protocols with per-customer adaptation. A classical example is the situation in which a customer receives a multimedia stream that needs data adaptation, such as downscaling, at the gateway between the Internet and the mobile device.
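The per-packet budget can be checked with simple arithmetic. The plain calculation below ignores framing overhead, so it lands a little below the quoted 2.6 million packets per second, but it is in the same ballpark.

```python
# Back-of-the-envelope check of the per-packet time budget quoted above.
# Exact figures depend on framing-overhead assumptions; a plain 512-byte
# packet at 10 Gb/s gives a budget close to the quoted numbers.

link_rate_bps = 10e9
packet_bytes = 512

packets_per_sec = link_rate_bps / (packet_bytes * 8)
ns_per_packet = 1e9 / packets_per_sec

print(f"{packets_per_sec / 1e6:.2f} Mpps")   # ~2.44 Mpps
print(f"{ns_per_packet:.0f} ns per packet")  # ~410 ns
```

At a few hundred nanoseconds per packet there is no room for interpreting capsule code on the fast path, which is the point the text makes about virtual-machine-based architectures.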
A very important observation here is that the deployment of multimedia data sources and applications (e.g., real-time audio/video, IP telephony) will produce longer-lived packet streams (flows), with more packets per session than is common in today’s Internet. For these kinds of applications in particular, active networking with the plug-in approach offers very promising possibilities: media gateways, data fusion and merging, and sophisticated application-specific congestion control. We are convinced that network code will never be programmed by common users, but rather by specialists [19]. Service code providers are expected to deliver components that are freely interconnectable. These components are stored in a central repository and fetched by the management systems to be deployed and configured on an active network node. Even though programmed by specialists, components may be erroneous and thus compromise node stability and security. Based on these observations and assumptions, the following conditions may be defined:
• Management and control functionality that is not time critical can be carried out in virtual machines without sacrificing overall node performance.
• Deployment and configuration of decoupled service components require extended, time-consuming steps.
• Resource control is required for safely multiplexing physical resources on a node.
• High-performance active network node architectures are required for flexibility of application-specific code processing at the link speed of routers.
Implementing these requirements calls for an active network NodeOS that provides the outlined functionality. To put the approach we pursued in FAIN into context, we focus on high-performance NodeOSs that are based on the plug-in approach.

Table 2.1 NodeOS Summary Table

Add-ons to Linux, NetBSD, and MS Windows:
  o Pronto
  o Bowman
  o SILK
  o PromethOS
  o Crossbow/ANN
  o Lara++

Proprietary NodeOSs:
  o Scout
  o Nemesis
  o Exokernel
  o Moab/Janos
In this area, a basic distinction must be made between add-ons to widely used legacy operating systems such as NetBSD [36], Linux [33], or Microsoft Windows [42], and proprietary NodeOSs that have been designed and implemented from scratch. Next, we give a short overview of current research (depicted also in Table 2.1), and provide a very brief discussion of the goals pursued by the proposed candidates.
• Scout: Scout [35], based on the x-kernel v2, implements the path abstraction. It is a single-address-space research operating system without resource control mechanisms.
• Nemesis: Nemesis [32] is a research operating system for multimedia, low-latency communication. It introduced the fbuf structure to allow interprocess communication with zero-copy mechanisms. Nemesis was extended by the Resource Controlled Active Network Environment (RCANE) to address resource control issues.
• Exokernel: Exokernel [53] multiplexes physical resources by providing a so-called library operating system that exports interfaces that are as close to the hardware as possible, thus introducing as little overhead as possible. A clear disadvantage is code redundancy and the need to reimplement basic functionality found in legacy operating systems.
• Moab/Janos: Moab [37, 58] is a research operating system based on the OSKit. It exports the interface required by the Janos [44] operating system. Janos creates a Java virtual machine with resource control in mind.
• Pronto: Pronto [22] provides a framework for node programmability. It is based on Linux and runs in kernel space. It follows the plug-in approach by providing its own execution environment.
• Bowman: Bowman [34] implements the active network interface specification in the user space of Linux.
• SILK: SILK [3] extends the Linux kernel by providing the path abstraction in kernel space; through SILK, the Scout architecture has been ported to a widely used operating system. Basically, the architecture can be compared to the Linux netfilter architecture for packet mangling. The NodeOS interface specification is mapped from user space onto the Linux kernel by SNOW. Scout and SILK intercept the system call interface of Linux for communication with user space; the GENI interface on the other end is connected to the input port of the Linux network stack. However, SILK does not provide resource control mechanisms. VERA provides an interface to the Intel IXP1200 network processor; it is attached to SILK as a virtual device in Linux. With VERA, the microengines in the IXP1200 can be programmed.
• PromethOS: PromethOS [30] extends the standard netfilter architecture of Linux by adding, at run time, programmability and extensibility to the Linux kernel for previously unknown components. The whole framework is inherently portable, strictly following the interfaces of netfilter. The performance of the PromethOS framework is comparable to the standard Linux networking environment; only one additional hash table lookup has been introduced to schedule the PromethOS plug-ins.
• Crossbow/ANN: Crossbow [18] follows the ideas of Scout and the path abstraction. Flows can be bound to plug-in chains. Crossbow forms the conceptual basis of PromethOS. No resource control mechanisms are foreseen. Due to its implementation, Crossbow is deeply bound to a specific release of NetBSD.
• Lara++: Lara++ [42] provides an active node operating system implemented on Windows. It provides an execution environment (called a processing environment) into which active components are loaded. These components are interconnected to form a graph, similar to the concepts of Scout.
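The plug-in approaches surveyed above share a common pattern: a classifier binds a flow to a chain of processing components, and the packets of that flow traverse the chain in order. The following Python sketch is a hypothetical illustration of that pattern (all names are ours; this is not the API of PromethOS, Crossbow, or SILK). It also shows why dispatch can cost as little as one extra hash lookup per packet, as reported for PromethOS:

```python
from typing import Callable, Dict, List, Tuple

Packet = dict                      # toy packet: header fields kept in a dict
Plugin = Callable[[Packet], Packet]
FlowKey = Tuple[str, str, int]     # (source addr, destination addr, dest port)

class PluginChainNode:
    """Flows are bound to plug-in chains; unbound flows pass through unchanged."""

    def __init__(self) -> None:
        self._chains: Dict[FlowKey, List[Plugin]] = {}

    def bind(self, flow: FlowKey, chain: List[Plugin]) -> None:
        self._chains[flow] = chain

    def forward(self, pkt: Packet) -> Packet:
        key = (pkt["src"], pkt["dst"], pkt["dport"])
        for plugin in self._chains.get(key, []):   # the single hash lookup
            pkt = plugin(pkt)
        return pkt

# Example: bind a DSCP-marking plug-in to one flow.
def mark_ef(pkt: Packet) -> Packet:
    pkt["dscp"] = 46               # DiffServ expedited forwarding codepoint
    return pkt

node = PluginChainNode()
node.bind(("10.0.0.1", "10.0.0.2", 80), [mark_ef])
out = node.forward({"src": "10.0.0.1", "dst": "10.0.0.2", "dport": 80, "dscp": 0})
```

In a real kernel implementation the chain elements are compiled modules and the classifier runs on every received frame, but the control structure is essentially the one shown.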
In summary, we identify a lack of resource-controllable, widespread, flexible, and high-performance node operating systems (NodeOSs). Also lacking are support for currently available and future network processors, as well as mapping strategies for components. Based on this brief overview of operating systems suitable for active networking, the following issues can be identified that could be addressed in standardization efforts or in future research projects:
• Service composition and control is needed. The interface to trigger service composition, as well as its configuration, should be standardized so that management and control components become reusable across different NodeOSs.
• Interfaces to specify resources at a very fine granularity are required.
• Interfaces for intercomponent and inter-EE communication are mandatory to ease the creation of decomposed services.
2.6 EXECUTION ENVIRONMENTS

The active network community has designed and embraced an architectural framework for active networks [13] that defines a three-layer stack on each active node, the second layer of which is represented by one or more execution environments that define a programming model for writing active applications. In addition, programming models applied to application-level networking are detailed in [55, 56]. Several well-known efforts that provide EEs based on this approach include ANTS, PLAN, SNAP, CANES, and ASP. Finally, a management EE [e.g., the Smart Environment for Network Control, Monitoring, and Management (SENCOMM) [29]] and a management AA are usually initiated at bootstrap, in order to offer management services at the EE or AA level, respectively. ANTS: ANTS [4] is a Java-based toolkit for constructing an active network and its applications. ANTS is based on an aggressive capsule approach, in which code is
associated with packets and run at selected, extensible IP routers. The latest version of ANTS (version 2.x) relies on the Janos Java NodeOS. PLAN: PLAN [20] is a functional scripting language (based on Caml) with limited capabilities, designed to execute on routers. The fundamental construct in the language is the remote evaluation of delayed functional applications [21]. PLAN is designed to be a public, authentication-free layer in the active network hierarchy, so it has limited expressive power, to guarantee that all programs terminate. PLAN can also be used as a “glue” layer that allows access to higher level services (which may or may not require authentication). This combination allows much of the flexibility of active networking without sacrificing security. SNAP: SNAP [41] is an active networking system in which traditional packet headers are replaced with programs written in a special-purpose programming language. The SNAP language has been designed to be practical, with a focus on efficiency, flexibility, and safety. Compared to PLAN, SNAP offers significantly greater resource usage safety, and achieves much higher performance at the cost of flexibility and usability. A PLAN-to-SNAP compiler has also been developed [21]. CANES: CANES [15] seeks an approach to active networks that supports high performance while also permitting dynamic modification of network behavior to support specific applications and/or provide new services. The CANES execution environment runs on the Bowman [34] implementation of the NodeOS specifications, and is specifically built for composing services within the network [50]. ASP: ASP [12] implements the “strong EE model” by offering a user-level operating system to the AAs via a Java-based programming environment.
The underlying capabilities of the NodeOS and Java are enhanced (e.g., Netiod [6] is used to replace the socket interface), and complex control plane functionality such as signaling and network management can be realized. A customized security manager, out-of-band code loading, and the routing of active packets via IP and virtual connectivity [user datagram protocol/Internet protocol (UDP/IP) tunneling] are further features of the ASP EE. Tamanoir EE: The Tamanoir EE is based on the Java language. Tamanoir active nodes (TANs) provide persistent active nodes that are able to handle different applications and various data streams at the same time [51, 52]. Two main transport protocols [transmission control protocol (TCP) and user datagram protocol (UDP)] are supported by the TAN for carrying data. The active network encapsulated protocol (ANEP) format is used to send data over active networks. The injection of new functionalities, called services, is independent of the data stream: services are deployed on demand when streams reach an active node that does not hold the required service. There are two ways of deploying a service: with a service repository, to which TANs send all requests for downloading required services; and without one, in which case the TAN queries the active node that sent the stream for the service. When the service is installed in memory, it is ready to process the stream. It is worth noting that a stream can equally cross a classical router, though obviously without any processing actions. TAN has an original approach to on-the-fly storage in the network, as it takes advantage of the Internet backplane protocol (IBP) project results concerning network storage management; a project that aims to expose network storage resources in an Internet-style way. FAIN EE: One of the key concepts defined by the future active IP networks (FAIN) architecture is the execution environment, and the virtual environment (VE) as a group of EEs. In FAIN, drawing on an analogy with the concepts of class and object in object-oriented systems, we distinguish between the EE type and the EE instances thereof. An EE type is characterized by the programming methodology and the programming environment that is created as a result of the methodology used. The EE type is free of any implementation details. In contrast, an EE instance represents the realization of the EE type in the form of a run-time environment, by using a specific implementation technology; for example, a programming language and binding mechanisms to maintain operation of the run-time environment. Accordingly, any particular EE type may have multiple instances, while each instance may be based on a different implementation. This distinction allowed us to address the principles that must govern next generation NEs, and the properties that they must possess, separately from the issue of how to build such systems. The programming methodology used as part of the FAIN EE type was the building block approach, in which services break down into primitive, distinct blocks of functionality, which may then be bound together in meaningful constructs. To this end, services can be rebuilt from these primitive forms of functionality, that is, the building blocks, while the building blocks may be reused and combined in a series of different arrangements, as dictated by the service itself. In FAIN, we have built two different EE instances of this particular EE type: a Java EE and a Linux kernel-based EE [39].
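As a rough illustration of the building block methodology just described, the sketch below composes a service from primitive, reusable blocks. It is hypothetical Python of our own devising, not FAIN's actual Java or Linux kernel EE code; the blocks here are toy stand-ins for real processing stages:

```python
from functools import reduce
from typing import Callable, List

# A primitive building block: a function from packet payload to payload.
Block = Callable[[bytes], bytes]

def compose(blocks: List[Block]) -> Block:
    """Bind building blocks into a single service (left-to-right application)."""
    return lambda data: reduce(lambda acc, blk: blk(acc), blocks, data)

# Two reusable primitive blocks (toy stand-ins for, e.g., transcoding stages).
def downscale(data: bytes) -> bytes:
    return data[::2]                        # crude "downscaling": drop half the bytes

def frame(data: bytes) -> bytes:
    return b"\x7e" + data + b"\x7e"         # wrap payload in delimiter bytes

# The same blocks recombined into different services, as the service dictates:
service_a = compose([downscale, frame])     # downscale, then frame
service_b = compose([frame, downscale])     # frame, then downscale
```

The point of the exercise is the reuse property: `downscale` and `frame` are written once, while `service_a` and `service_b` are merely different bindings of the same primitives.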
The FAIN architecture also allows EEs to reside in any of the three operational planes; namely, transport, control, and management, while they may interact and communicate with each other either across the planes or within a single plane. In fact, it is not the EEs that communicate, but rather distributed service components hosted by them—a subset of the deployed network services that can be accessed by applications or higher level services by means of the network API they export. EEs (instances) are the place where services are deployed. Services may well be extensible in the sense that the programming methodology and the corresponding environment (EE type) support service extension, while they can access services offered by other EEs to achieve their objectives and meet customer demands. For example, a service uses the code distribution mechanism to download its code extensions. The extension API then becomes part of the overall service interface. Furthermore, FAIN separates the concept of the EE from that of the virtual environment. We argue that the concept of an EE and that of a VE are orthogonal to each other. In fact, a VE is an abstraction that is used only for resource management and control. Therein services may be found and may interact with each other. From the viewpoint of the
operating system, the VE is responsible for the consumption and use of resources; it is the recipient of sanctions in the event of policy violations, and the entity that can legally receive authorization when other services access the control interfaces. Similar conclusions may be found in [31, 45]. In other words, a VE provides a container in which services may be instantiated and used by a community of users or groups of applications, while remaining isolated from components residing in different VEs. Within a VE, many types of EEs, with their instances, may be combined to implement and/or instantiate a service. Another property of the reference architecture is that it makes no assumptions about how “thin” a VE is. It may take the form of an application, a specialized service environment (for example, video on demand), or even a fully fledged network architecture, as proposed in [7, 11]. Finally, a VE may coincide with an implementation (EE instance) that is based on only one technology, such as Java. In either case, this is a design decision dictated by customer requirements and/or the VE owner. Out of all the VEs residing in a node, there must be a privileged one that is instantiated automatically when the node boots up, and that serves as the controlled mechanism through which subsequent VEs may be created via the management plane. This privileged VE should be owned by the network provider, who has the access rights to instantiate a requested VE on behalf of a customer through a VE manager (VEM). From this viewpoint, the creation of VEs becomes a kind of meta service. In summary, ASP, ANTS, and PLAN are existing execution environments in the active network backbone (ABone [1]). ASP and ANTS offer, in general, a Java-based sub-OS for executing AAs. PLAN and SENCOMM, at the EE level, interpret scripts (carried in AAs) to invoke a fixed function library. Finally, the CANES approach calls plug-in modules in order to realize a generic processing function.
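The VE concept described above (a privileged VE creating further VEs as a meta service, each VE acting as the unit of resource control for the EE instances it contains) can be caricatured in a few lines of Python. This is an illustrative model with invented names, not FAIN's VEM implementation:

```python
from typing import Dict, List

class EEInstance:
    """A run-time environment realizing some EE type (e.g., a Java EE)."""
    def __init__(self, ee_type: str) -> None:
        self.ee_type = ee_type
        self.services: List[str] = []

class VirtualEnvironment:
    """A VE: the unit of resource control, isolating its EEs and services."""
    def __init__(self, owner: str, cpu_share: float) -> None:
        self.owner = owner
        self.cpu_share = cpu_share
        self.ees: Dict[str, EEInstance] = {}

class VEManager:
    """Run inside the privileged VE: creating further VEs is the meta service."""
    def __init__(self, total_cpu: float = 1.0) -> None:
        self.available = total_cpu
        self.ves: Dict[str, VirtualEnvironment] = {}

    def create_ve(self, name: str, owner: str, cpu_share: float) -> VirtualEnvironment:
        if cpu_share > self.available:
            raise ValueError("insufficient resources for requested VE")
        self.available -= cpu_share          # the VE is charged for its resources
        ve = VirtualEnvironment(owner, cpu_share)
        self.ves[name] = ve
        return ve

vem = VEManager()
customer_ve = vem.create_ve("vod", owner="customer-a", cpu_share=0.25)
customer_ve.ees["java"] = EEInstance("java")
```

The sketch captures only the accounting side of the VE abstraction; enforcement (sanctions on policy violation, isolation between VEs) is exactly the part that requires NodeOS support.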
It is worth mentioning that almost all active and programmable network approaches developed in the various projects around the world also host one or more EEs in their architecture. These EEs are approach specific.

2.7 CONCLUSIONS

The analysis of the state of the art presented in this chapter aims to identify the ingredients that can serve as building materials and principles for the next generation NE architecture. First and foremost, we consider the concept of the EE as the basis of the next generation NE architecture, which greatly facilitates the definition of a reference architecture. Such an architecture enables service deployment algorithms to make decisions about where service components can be deployed, the appropriate implementation technology for these components, how the deployed components are linked with existing ones already running in the NE, and so on. But what exactly is an EE, what elements does it comprise, and are these elements part of the architecture or part of its chosen implementation? Furthermore, is it possible to identify specific types of EEs that are
implementation independent? Scanning the literature, we can trace a variety of answers regarding the exact characteristics of an EE. Conceptually, an EE is the active network's programming environment [40], which, when instantiated, becomes the run-time environment of a process, or a process itself [5]. This programming environment may be centered on a particular language and may export some API that encompasses elements like a Java virtual machine [13, 40], toolkits used for building AAs (services) [5, 48], or even interfaces to access generic services that AAs may customize to build value-added services. EEs have also been proposed as extensions of the NodeOS for those that are allowed to be extensible [38]. The latter has an impact on where to draw the boundary between EE and NodeOS, known as the node interface. The fact that the AN reference architecture [13] simultaneously supports multiple EEs implies that EEs are also treated as principals, on which authentication, authorization, and resource control take place. Services and users that use an EE are represented by this principal, which is the only valid entity allowed to access NodeOS facilities. To this end, the EE concept is overloaded with the characteristics of a virtual environment. Prototypes proposed in [31, 45] may be interpreted in this way. Finally, EEs have been characterized not by the choice of technologies, but rather by the services they offer and the architectural plane they operate at; namely, control, management, and transport [7, 11, 60-63]. Evidently, in the majority of cases, the boundaries between architecture and implementation are blurred. This makes it very difficult to come up with a clear definition of an EE. The lack of an unambiguous definition impedes efforts to propose a reference NE architecture that not only encompasses most recent research, but is also instrumental in designing middleware for service creation and deployment.
This has been the subject of one of the research activities in FAIN. The second ingredient is finding the right approach to the design of EEs. We have argued that the approach must satisfy the requirements of composability, extensibility, and vendor independence. We believe that the building block approach is the right one for designing EEs. Recently, a new research activity has been reported in [12] that uses a similar approach, applied to the redesign of protocols that do not imply a layered IP architecture. The final ingredient deals mainly with the problem of interoperability and the NE itself. It comes in the form of the separation principle among the different operational planes, and the abstraction of the functionality at each of the planes by means of open interfaces. Finally, the programmable networks approach also extends naturally to the emerging domain of GRID management and services, as described in [60]. The following chapters detail the technological perspective of programmable networks as follows: the security challenges and architectures for programmable networks (Chapter 3); the management technologies and solutions applicable to programmable networks (Chapter 4); the SwitchWare platform's influence on programmable network research and development activities (Chapter 5); and programmable peer-to-peer technologies (Chapter 6).
References

[1] ABone Testbed, http://www.isi.edu/abone/.
[2] Alexander, D. S., et al., “The SwitchWare Active Network Architecture,” IEEE Network Special Issue on Active and Controllable Networks, Vol. 12, No. 3, 1998, pp. 29-36. http://www.cis.upenn.edu/~switchware/papers/switchware.ps.
[3] Bavier, A., et al., SILK: Scout Paths in the Linux Kernel, Technical Report 2002-009, Uppsala University, February 2002.
[4] AIN Release 1 Service Logic Program Framework Generic Requirements, Bell Communications Research Inc., FA-NWT-001132.
[5] Berson, S., Braden, B., and Ricciulli, L., Introduction to the ABone, June 15, 2000. http://www.isi.edu/abone/DOCUMENTS/ABoneIntro.ps.
[6] Berson, S., Braden, B., and Gradman, E., The Network I/O Daemon – Netiod, October 11, 2001, draft version. http://www.isi.edu/abone/DOCUMENTS/netiod.ps.
[7] Bhattacharjee, S., “Active Networks: Architectures, Composition, and Applications,” Ph.D. Thesis, Georgia Tech, July 1999.
[8] Biswas, J., et al., Proposal for IP L-interface Architecture, IEEE P1520.3, P1520/TS/IP013, 2000. http://www.ieee-pin.org/doc/draft_docs/IP/p1520tsip013.pdf.
[9] Biswas, J., et al., “The IEEE P1520 Standards Initiative for Programmable Network Interfaces,” IEEE Communications, Special Issue on Programmable Networks, Vol. 36, No. 10, October 1998. http://www.ieee-pin.org/.
[10] Bjorkman, N., et al., “The Movement from Monoliths to Component-Based Network Elements,” Special Issue on Telecommunications Networking at the Start of the 21st Century, IEEE Communications, Vol. 39, No. 1, January 2001. http://www.msforum.org/techinfo/IEEEcomMag200101_monoliths.pdf.
[11] Braden, B., et al., Introduction to the ASP Execution Environment (Release 1.5), November 30, 2001. http://www.isi.edu/active-signal/ARP/DOCUMENTS/ASP_EE.ps.
[12] Braden, B., Faber, T., and Handley, M., “From Protocol Stack to Protocol Heap – Role Based Architecture,” HotNets I, Princeton University, October 2002. http://www.cs.washington.edu/hotnets/papers/braden.pdf.
[13] Calvert, K. L. (ed.), Architectural Framework for Active Networks, Draft version 1.0, July 27, 1999. http://protocols.netlab.uky.edu/~calvert/arch-latest.ps.
[14] Calvert, K., et al., “Directions in Active Networks,” IEEE Communications Magazine, 1998. http://www.cc.gatech.edu/projects/CANEs/papers/Comm-Mag-98.pdf.
[15] Composable Active Network Elements Project (CANEs). http://www.cc.gatech.edu/projects/canes/.
[16] DARPA Active Network Program. http://www.darpa.mil/ato/programs/activenetworks/actnet.htm.
[17] Decasper, D., et al., “Router Plug-ins: A Software Architecture for Next Generation Routers,” Proc. of ACM SIGCOMM '98, Vancouver, Canada, September 1998.
[18] Decasper, D., et al., “A Scalable, High-Performance Active Network Node,” IEEE Network, January/February 1999.
[19] FAIN Deliverable 1 – Requirements Analysis and Overall AN Architecture, May 2001. http://www.ist-fain.org.
[20] Hicks, M., et al., “PLAN: A Packet Language for Active Networks,” Proc. of the Third ACM SIGPLAN International Conference on Functional Programming Languages, ACM, September 1998, pp. 86-93. http://www.cis.upenn.edu/~switchware/papers/plan.ps.
[21] Hicks, M., Moore, J. T., and Nettles, S., “Compiling PLAN to SNAP,” IWAN '01, September/October 2001. http://www.cis.upenn.edu/~jonm/papers/plan2snap.ps.
[22] Hjalmtysson, G., “The Pronto Platform – A Flexible Toolkit for Programming Networks Using a Commodity Operating System,” OpenArch 2000.
[23] IBM Network Processors. http://www-3.ibm.com/chips/products/wired/products/network_processors.html.
[24] Standard for Application Programming Interfaces for ATM Networks, IEEE P1520.2, Draft 2.2. http://www.ieee-pin.org/pin-atm/intro.html.
[25] IETF ForCES, draft-ietf-forces-framework-04.txt, December 2002. http://www.ietf.org/Internet-drafts/draft-ietf-forces-framework-04.txt.
[26] IETF ForCES. http://www.ietf.org/html.charters/forces-charter.html.
[27] Intel IXP family of network processors. http://www.intel.com/design/network/products/npfamily/.
[28] Principles of Intelligent Network Architecture, ITU-T Recommendation Q.1201, 1992.
[29] Jackson, A. W., et al., The SENCOMM Architecture, Technical Report, BBN Technologies, April 26, 2000. http://www.ir.bbn.com/projects/sencomm/doc/architecture.ps.
[30] Keller, R., et al., “PromethOS: A Dynamically Extensible Router Architecture Supporting Explicit Routing,” Proc. of the Fourth Annual International Working Conference on Active Networks (IWAN 2002), Lecture Notes in Computer Science 2546, Springer Verlag, Zurich, Switzerland, December 4-6, 2002.
[31] Khosravi, H., and Anderson, T., Requirements for Separation of IP Control and Forwarding, January 2003. http://www.ietf.org/Internet-drafts/draft-ietf-forces-requirements-08.txt.
[32] Leslie, I., et al., The Design and Implementation of an Operating System to Support Distributed Multimedia Applications. http://www.cl.cam.ac.uk/Research/SRG/netos/oldprojects/nemesis/documentation.html.
[33] Linux. http://www.kernel.org.
[34] Merugu, S., et al., “Bowman: A Node OS for Active Networks,” Proc. of IEEE Infocom 2000, Tel Aviv, Israel, March 2000. http://www.cc.gatech.edu/projects/CANEs/papers/bowman.pdf.
[35] Montz, A., et al., “Scout: A Communications-Oriented Operating System,” IEEE HotOS Workshop, May 1995.
[36] NetBSD. http://www.netbsd.org.
[37] Open Signaling Working Group. http://www.comet.columbia.edu/opensig/.
[38] Peterson, L. (ed.), NodeOS Interface Specification, AN NodeOS Working Group, November 30, 2001. http://www.cs.princeton.edu/nsg/papers/nodeos-02.ps.
[39] An Architecture for Differentiated Services, RFC 2475, 1998. http://www.ietf.org/rfc/rfc2475.txt.
[40] Smith, J. M., et al., “Activating Networks: A Progress Report,” IEEE Computer, Vol. 32, No. 4, April 1999, pp. 32-41. http://www.cs.princeton.edu/nsg/papers/an.ps.
[41] SNAP: Safe and Nimble Active Packets. http://www.cis.upenn.edu/~dsl/SNAP/.
[42] Schmid, S., et al., “Flexible, Dynamic, and Scalable Service Composition for Active Routers,” Proc. of the Fourth Annual International Working Conference on Active Networks (IWAN 2002), Lecture Notes in Computer Science 2546, Springer Verlag, Zürich, Switzerland, December 2002.
[43] Thompson, K., Miller, G., and Wilder, R., “Wide-Area Internet Traffic Patterns and Characteristics,” IEEE Network, November/December 1997.
[44] Tullmann, P., Hibler, M., and Lepreau, J., “Janos: A Java-Oriented OS for Active Networks,” IEEE Journal on Selected Areas in Communications, Vol. 19, No. 3, March 2001.
[45] Van der Merwe, J. E., et al., “The Tempest – A Practical Framework for Network Programmability,” IEEE Network, Vol. 12, No. 3, May/June 1998, pp. 20-28. http://www.research.att.com/~kobus/docs/tempest_small.ps.
[46] Vicente, J., et al., L-interface Building Block APIs, IEEE P1520.3, P1520.3TSIP016, 2001. http://www.ieee-pin.org/doc/draft_docs/IP/P1520_3_TSIP-016.doc.
[47] Vicente, J., et al., “Programming Internet Quality of Service,” 3rd IFIP/GI International Conference on Trends Toward a Universal Service Market, Munich, Germany, September 12-14, 2000. http://comet.ctr.columbia.edu/~campbell/papers/usm00.pdf.
[48] Wetherall, D., et al., “ANTS: A Toolkit for Building and Dynamically Deploying Network Protocols,” Proc. of IEEE OPENARCH '98, April 1998.
[49] Yang, L., et al., ForCES Forwarding Element Functional Model, March 2003.
[50] Zegura, E. (ed.), Composable Services for Active Networks, AN Composable Services Working Group, September 1998. http://www.cc.gatech.edu/projects/CANEs/papers/cs-draft0-3.ps.gz.
[51] Gelas, J., and Lefevre, L., “TAMANOIR: A High-Performance Active Network Framework,” Active Middleware Services, Kluwer Academic Publishers, August 2000.
[52] Gelas, J.-P., and Lefèvre, L., “Toward the Design of an Active Grid,” Lecture Notes in Computer Science, Computational Science – ICCS 2002, Vol. 2230, April 2002, pp. 578-587.
[53] “DARPA Active Networks Conference and Exposition,” DANCE Proc., IEEE Computer Society Number PR01564, May 2002.
[54] Marshall, I. W., and Roadknight, C. M., “Provision of Quality of Service for Active Services,” Computer Networks, Vol. 36, No. 1, June 2001, pp. 75-87.
[55] Marshall, I., et al., “Application-Level Programmable Internet Work Environment,” BT Technology Journal, Vol. 17, No. 2, April 1999, pp. 82-95.
[56] ANDROID Project, Active Network Distributed Open Infrastructure Development. www.cs.ucl.ac.uk/research/android.
[57] Karnouskos, S., Guo, H., and Becker, T., “Trade-Off or Invention: Experimental Integration of Active Networking and Programmable Networks,” Special Issue on Programmable Switches and Routers, IEEE Journal of Communications and Networks, Vol. 3, No. 1, March 2001, pp. 19-27.
[58] Moab. http://www.cs.utah.edu/flux/janos/moab.html.
[59] Wetherall, D., and Tennenhouse, D., “The ACTIVE IP Options,” Proc. of the 7th ACM SIGOPS European Workshop, September 1996.
[60] Galis, A., et al., “Programmable Network Approach to Grid Management and Services,” International Conf. on Computational Science 2003, LNCS 2658, June 2-4, 2003, pp. 1103-1113. www.science.uva.nl/events/ICCS2003/.
[61] Brunner, M., et al., “Management in Telecom Environments That Are Based on Active Networks,” Journal of High Speed Networks, March/April 2001.
[62] Brunner, M., “Tutorial on Active Networks and Its Management,” Annals of Telecommunications, 2002.
[63] Brunner, M., et al., “Service Management in Multi-Party Active Networks,” IEEE Communications Magazine, March 2000.
Chapter 3

Programmable Networks’ Security: Background1

1 Work in this chapter was supported by DARPA under Contracts #N66001-96-C-852 and #DABT63-95-C-0073, and the National Science Foundation under grants #ANI-9906855, #ANI98-13875, #ANI00-82386, and #CSE-0081360.

3.1 INTRODUCTION

Security has been described as the “humanities of computer science” for its lack of metrics and quantifiable comparisons, or even simple definitions [2]. It is certainly clear that distributed applications have many different requirements for controlling access to data and their decision-making logic. At its most abstract, an application’s security requirements are merely a specialization of its correctness requirements, where correctness can include a specification of getting the right information to the right person at the right place at the right time. Under this assumption, any deviation from the specification is defined as insecure. It is remarkable how well this simple statement of security works; consider the requirements usually offered for security, such as confidentiality, integrity, and availability (CIA), against the specification. Any violation of these requirements also violates the more network-specific requirements about location and authorization. In a network, we typically desire reliable, timely, and confidential message delivery among one or more participants. There are a variety of architectural approaches to this problem. Before examining active networks, it is useful to examine some traditional networks to see how their architectures address security issues. The traditional telephone network [1] provides a relatively simple service: band-limited voice. All communication, including signaling, is
accomplished with a telephone handset. The communications unit is a “call,” consisting of a setup, interaction between the caller and callee, and a teardown. Since there are some technical advantages that accrue from in-band signaling, this technique was used when the telephone system was originally designed. Unfortunately, various interfaces to this control plane, such as call status and billing, were discovered by curious and capable individuals (“phone phreaks”). In response, the telephone system’s signaling was moved out-of-band. The current system, SS7, is out-of-band, and forms a separate network infrastructure for direct-dialed telephony, providing considerable insulation between the users of the network and its control and operation by carriers. Most of the security risks associated with telephony are not weaknesses associated with network control, but weaknesses such as ease of wiretap. (Wiretap can be addressed with encrypting modems, such as one from CSC that uses a digital signal processor to encode a triple-DES-encrypted stream of voice in a voice channel.) In contrast to the call model, with its weaknesses, for interactions among computers the Internet uses a packet-switching model to accommodate the behavior of computer applications, which have a statistically bursty style: intense bursts of data traffic followed by idle periods. Statistical multiplexing thus makes efficient use of the link. When a variety of technologies for data traffic were interconnected, a general strategy was adopted of conversion to and from a standard packet format, where this format and the addressing standard are chosen to be minimal. Minimal here refers to a minimum expectation of service, so that it is relatively straightforward for most communications networks to convert to and from this format.
To obtain the most current network state conveniently, devices called routers are used not only to store and forward IP packets [46, 48, 49, 50, 58, 59, 66] but also to exchange the information used for routing those packets through the Internet. The Internet’s designers have been applying a rule of thumb here called the “end-to-end argument”—a form of late binding for the creation of network services. The idea behind the “end-to-end argument” is that services that must be provided at the end points need never be duplicated in the network, as they would become redundant. As with the choice of a simplistic bearer service, the simplicity of this scheme makes it easier to architect a generic packet service.

The “end-to-end argument” has had unfortunate security consequences. First, and probably most importantly from a network security perspective, security was observed to be an end-to-end property for many of the applications that the Internet pioneers envisioned. So security was relegated to the end points of the architecture, such as hosts. Unfortunately, at the same time, an operating system designed for disconnected personal computers became dominant on the Internet, with a bias toward usability, away from the traditional virtual machine model developed for timesharing use, as in UNIX. When such systems were connected to the Internet, features for convenient interconnection of applications became, in
effect, levers that could elevate one security failure in an application into complete control of a machine. Since the Internet’s control architecture is decentralized, it is very difficult to impose any control on such a machine, for example through recovery or bandwidth limitations. Thus, in practice, the machine has become a weapon that can subvert the Internet: when attached to the Internet, the machine becomes part of its control architecture, as a participant in such distributed control algorithms as transmission control protocol/Internet protocol (TCP/IP) congestion control and route selection. The Internet’s control architecture is quite vulnerable to attacks, such as incorrect route advertisements and false data injected into border gateway protocol (BGP) information exchanges. An additional consequence of decentralized control is that it is difficult to address large-scale attacks on availability. One consequence for security is that it is difficult to provide end-to-end security when intermediate nodes are outside the control of the end points in the system.

While the basic router infrastructure is quite simple and therefore fairly robust, the Internet architecture itself is notably being extended by “middleboxes” such as network address translators (NATs) [19, 20, 35, 36, 41], firewalls, load-sharing boxes, and other devices that directly affect the behavior of the packet service provided by the network. Such devices provide attackers with indirect, but effective, access to the Internet control plane.

3.2 REQUIREMENTS FOR SECURITY

The basic requirements of active network architecture are centered on the notion of dynamically loading code into a network element, that is, a device embedded in a network. One model, that of Wetherall [43], proposes that active network security need not be any greater than that of a passive network, a level of security that people find acceptable, probably due to ignorance.
In fact, though it may be preferable to set a higher standard, it may not be possible to define “higher” due to the lack of metrics and quantification noted in the introduction to this chapter. We attempt to address this with a set of qualitative criteria, and emphasize that comparing two systems with a security “checklist” may not be entirely satisfactory. (Interestingly, “privacy,” which is closely related to, but not the same as, “security,” suffers from some of the same standardization and terminological challenges.)

All networking systems provide one or more networking services. For example, IP networks provide a best-effort packet delivery service. For an active network, an additional service is a “meta service”: provision of a facility for defining new services. The basic requirements we see are:

1. Addressing and message delivery;
2. Support for confidentiality;
3. Support for integrity;
4. Support for resource controls/controlled multiplexing;
5. Requirement for authorization for definition of new services.
Any of these requirements can be relaxed if a user or community so desires, but they must be present if the system is to be controlled by its users. If any of these requirements is missing from an architecture, it will be impossible to encompass the full range of security definitions for applications. For example, consider control of multiplexing: time-centric resources (i.e., those specified with a rate), such as bandwidth, must be so overprovisioned as to make starvation impossible, or else resource availability may be denied to some subset of applications that require regular service. An example of such a service might be a “heartbeat” service in a reliable distributed system, where timely availability of bandwidth is required to ensure that the distributed protocol operates correctly.

In the next section of this chapter, we address some of the trade-offs between flexibility, security, performance, and usability by focusing on the abstract relationship between programmability and security.

3.3 PROGRAMMABILITY VERSUS SECURITY

What is the trade-off between programmability and security? While this question is likely to stimulate lively debate, with intuition suggesting a smooth correlation, the truth is, of course, more subtle. A completely programmable network element (such as an “open” computer running a variety of programming environments with no required authorizations or authentication) is insecure under all but the most unrestrictive security policies. In a completely benevolent community, this policy may indeed be acceptable, but at the level at which network elements operate, it is unclear what assumptions about community behavior should be made. Certainly, malicious users will be present in the modern Internet with its many interconnected networks.
From a security perspective, an analogy may be drawn with the evolution of personal computing from a disconnected set of machines servicing individual users to the entry of these machines into an interconnected distributed system of services. While access to some of these services improves the user experience, the risks of an operating system environment designed without a model of competing interests, such as viruses, spyware, and zombies, have become painfully apparent. Likewise, a network environment where benevolence is assumed, such as the congestion control strategy embedded in TCP/IP, is at risk when malicious users are present. In an active network, there are many forms that malice [2, 5, 10, 47, 51, 54, 64, 65, 68] can take, ranging from a corrupted state for the next program to interference among concurrently executing programs. The central tension is
between the desire to do all things and the desire to share a node. If use of the node is relatively unrestricted, then the node will be quite flexible and the programming model will be quite rich, enabling access to many of the system’s resources. The cost of this comes when shared resources are used, as unprotected access to these resources may induce failure modes on some of the sharers (for example, due to inadequate resources). Restrictions of various types [10, 45] are added to the system to limit the range of what programs can accomplish. These restrictions are intended to maximize flexibility while minimizing security risks, such as integrity failures induced via shared state.

It is an oversimplification to think that a system becomes less flexible when more restrictions are placed on it. The real issue is whether the restrictions are minimal enough to enable flexible networking, while at the same time ensuring that the security problems considered in the introduction to this chapter are addressed. Confidentiality seems relatively straightforward, as it is guaranteed locally with access control, and across connections between local nodes with encryption [3, 12, 30]. In fact, encryption might be used as the means of local restriction where a trusted path of some sort exists for key material. Addressing [9] can be linked to both authentication and integrity checking, which can be supported by cryptographic techniques (for an example, see Section 3.6, particularly Figure 3.3). Restrictions on resources [9, 10, 25, 31, 42, 46, 52, 53, 55-57, 60-63, 67, 69, 70] have proven very tricky to achieve at the programming environment level, not least because of the programming environment designer’s assumptions about the presence of an operating system to manage such details.
To be more explicit, there is always some first foundational layer of software charged with abstracting and managing machine resources; typically this has been an operating system. If this layer does not provide the necessary facilities, such as access to a soft real-time scheduling service, it is very difficult to provide services above the layer that make any timing or proportional guarantees. The desire to provide such services stimulated the design of new operating systems such as Nemesis [24], which provides strong facilities for rate-controlled service delivery. If the programming environment were operating “on the hardware,” it could provide such resource management itself. In passing, we can observe that most programming languages have extremely poor support for dealing with time and timed activities, with most merely providing library interfaces to timer services such as C’s gettimeofday().

3.4 PROGRAMMING LANGUAGE OR OPERATING SYSTEM?

A key question here is whether restrictions should be provided by a restricted programming environment, a virtual machine provided by a protected operating
system kernel, or by some division of labor between the two. Most implementations have used the third approach. In the DARPA model of active network architecture, these functional divisions take place at the division of roles between execution environments and a NodeOS, as shown in Figure 3.1.

To some degree, recent progress in operating systems has centered on concepts similar to active networks. For example, Bershad’s SPIN system [24] was extensible, and vertically structured operating systems such as Exokernel and Nemesis [24] provide minimal services beneath a protection barrier that allow programmers to construct their own services, including transport protocols and (some) roles for device drivers. The role of the operating system in these vertically structured operating systems is minimalist, so that applications can put together services as they choose. Such an operating system design is well suited to active networking trade-offs between programmability and security, since it defines protected resources, but largely allows users to decide what the style of interaction will be, through new code or through choices of library code within a defined protection domain.
[Figure: three applications run atop execution environments (e.g., ALIEN, ANTS), which in turn run atop a node operating system (e.g., Scout, Nemesis, Linux, Windows NT).]

Figure 3.1 Components in DARPA’s active network architecture.
The challenge for a pure operating system-based approach is to allow access to enough resources so that the user’s application system can be written in a reasonable fashion. For example, the use of the root or “super-user” identity is common in cases where universal access to shared resources is required under UNIX. A better way to operate, and one being pursued in the context of secure operating systems such as OpenBSD (see http://www.openbsd.org), is to define a more object-oriented model of privilege, where resources such as a mail queue are
controlled by a mail login with appropriate privileges. The problem the operating system has as a universal resource manager is that it must assume the most malicious behavior possible from a system user, which often makes the more common benevolent uses harder than they would otherwise be.

The challenge of a pure language-based approach is that either applications must be written in the programming language, or multiple programming languages must provide the same restrictions on access to shared resources. The extreme limit of the latter case is an operating system. For example, many features of the portable common run time (PCR) system are indistinguishable from those of an operating system.

3.5 TRUSTED NETWORKING REQUIRES TRUSTED COMPUTING

All guarantees at the node level are meaningless unless the integrity of the node’s protection system is ensured. For example, this means that a programming language-based approach where the run-time system is changeable cannot make guarantees. A system based on role separation between programming environment and operating system will break if the operating system component, trusted by the programming language environment, is subverted.

It is absolutely reasonable to divide trusted (or “trustworthy”) operation into two phases. In the first phase, the system under study is validated and verified with all of the tools available, until it has achieved either an acceptable or a desirable level of trustworthiness. Typically, this is an intense, thorough, time-consuming, and expensive process. The second phase consists of the system in use. In this phase, the integrity of the trustworthy system must be preserved. This should be far easier than the assessment of trustworthiness, and so more readily performed.
In the case of an active network node, the integrity of the system can be validated through what is called a “secure bootstrap,” which carefully passes control to higher layers of the system based on verification of a cryptographic checksum of the higher layer. This can be surprisingly complex, as many points in modern computer architecture allow programmability. Systems perceived as “hardware” are often believed immutable by programmers, but this is in fact not the case: many system boards provide writable flash memory, and many network elements may use field-programmable gate arrays (FPGAs) rather than application-specific integrated circuits (ASICs) in initial implementations of line cards. Such components can introduce security problems of their own (see [18] for a surprising experimental result on FPGAs) when they are programmable or modifiable (i.e., where programmable systems depend on their state). Embedded systems of this type are likely to become increasingly common; see [17] for an outline of such systems.

Arbaugh [11] developed a new principle called “chaining layered integrity checking” to address this problem. In brief, his AEGIS architecture begins by
dividing a system into a set of layers, in which each layer depends on the integrity of the layers below it. The process of a system beginning operation in the AEGIS architecture consists of a base layer (assumed to be immutable) computing an integrity checksum on the next layer up. If the integrity checksum fails, either the system halts or a recovery procedure can be initiated, where trustworthy recovery consists of retrieving a correct copy of the damaged layer from a trusted source. This architecture has been patented, and underpins the Trusted Computing Platform Alliance (TCPA) technology that is deployed on many PCs. Figure 3.2 shows AEGIS in operation [11, 12].
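The chained check can be sketched in a few lines. The following Python sketch is illustrative only: the layer names, image contents, and trusted digest table are hypothetical stand-ins for AEGIS’s actual storage and mechanisms, which operate below the operating system.

```python
import hashlib

# Hypothetical boot layers, ordered from the immutable base upward.
LAYERS = ["recovery_rom", "post", "expansion_roms", "boot_block", "os"]

def sha256(data):
    return hashlib.sha256(data).hexdigest()

# Known-good images and their digests; the digests are assumed to be
# held in trusted (immutable) storage, per the AEGIS base-layer assumption.
GOOD_IMAGE = {name: f"{name}-image-v1".encode() for name in LAYERS}
EXPECTED = {name: sha256(img) for name, img in GOOD_IMAGE.items()}

def aegis_boot(read_layer, fetch_trusted_copy=None):
    """Verify each layer bottom-up before 'passing control' to it.

    On a checksum mismatch, either recover the layer from a trusted
    source or halt, mirroring the two outcomes described in the text.
    """
    for name in LAYERS:
        image = read_layer(name)
        if sha256(image) != EXPECTED[name]:
            if fetch_trusted_copy is None:
                raise RuntimeError(f"integrity failure in layer {name!r}")
            image = fetch_trusted_copy(name)        # recovery transition
            assert sha256(image) == EXPECTED[name]
        # control transition: layer `name` now runs and checks the next one
    return "booted"

# A tampered boot block is detected and repaired from the trusted source.
def tampered(name):
    return b"evil" if name == "boot_block" else GOOD_IMAGE[name]

print(aegis_boot(tampered, fetch_trusted_copy=GOOD_IMAGE.get))  # -> booted
```

Without a trusted source for recovery, the same mismatch halts the boot, which is the other outcome the architecture permits.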
[Figure: a layered boot sequence: Level 0, Initiate POST; Level 1, Recovery ROM/Trusted POST; Level 2, POST; Level 3, Expansion ROMs; Level 4, Boot Block; Level 5, Operating System; with User Programs and the Network Host above. The legend distinguishes control transitions from recovery transitions.]

Figure 3.2 The AEGIS secure bootstrap for trusted network nodes.
Once the node has bootstrapped successfully, integrity checks and correctness are maintained by the topmost layer of the system, which is the result of completion of the AEGIS process. Trustworthy nodes can then interoperate, using a variety of protocols and protections to offer a trustworthy network service. Some
examples are offered later in this chapter, where we discuss the secure active network environment (SANE) as an example application of AEGIS (there are others; see, for example, [14]).

3.6 AUTHORIZATION IN THE ABSENCE OF IDENTITIES

Authorization in the absence of identities is the problem that so-called trust management [13, 22] technology was designed to address, exemplified by the KeyNote system, which represents assertions as credentials with authorizers, licensees, and conditions. Public key technologies are used to build the web of trust, and a compliance-checking process is used to test requested actions against the credentials. Consider public keys for rmn and jms77, where jms77’s key is the licensee, rmn’s key is the authorizer, the conditions are

$file_owner == "rmn" && $filename ~= "/home/rmn/[^/]*" && $hostname == "ouse.cl.cam.ac.uk" -> "true";

and the signature is made with rmn’s key. Then jms77 is authorized by rmn to access files in rmn’s home directory on a particular host at the University of Cambridge. This architecture provides capability-like control of resources, robust delegation of authority, and many other desirable features, in spite of distributed control, through its use of cryptography to authenticate and authorize remote operations. There is only space here for a brief overview of the advanced applications that such credentials [37] enable: data provenance, support for micropayment systems of various flavors, authorization for network control, code loading, resource allocation, and digital rights management. Some of these are discussed in Section 3.8.

Given the many possible paths through a network and the many possible ephemeral users of a network node, it is unlikely that any set of identities will be complete. In addition, an identity is a target through which sets of authorizations are obtained, and the principle of least privilege suggests that the focus of a system should be on necessary authorizations rather than identities.
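The compliance check can be illustrated with a small sketch. This is not the KeyNote implementation or its assertion language: the credential structure, key names, and request environment below are simplified stand-ins, and signature verification is elided entirely.

```python
import re

# A toy trust-management credential, loosely modeled on the example in
# the text (keys abbreviated to labels; the signature check is elided).
credential = {
    "authorizer": "rmn_public_key",
    "licensees": ["jms77_public_key"],
    "conditions": lambda env: (
        env["file_owner"] == "rmn"
        and re.fullmatch(r"/home/rmn/[^/]*", env["filename"]) is not None
        and env["hostname"] == "ouse.cl.cam.ac.uk"
    ),
}

def comply(requester_key, env, credentials, trusted_root):
    """Approve the action iff some credential issued by the trusted root
    licenses the requester and its conditions hold for this request."""
    return any(
        c["authorizer"] == trusted_root
        and requester_key in c["licensees"]
        and c["conditions"](env)
        for c in credentials
    )

env = {"file_owner": "rmn",
       "filename": "/home/rmn/notes.txt",
       "hostname": "ouse.cl.cam.ac.uk"}
print(comply("jms77_public_key", env, [credential], "rmn_public_key"))  # True
```

Note that the decision is driven by what the credential authorizes, not by any account or identity on the node; an unlisted key, a file outside the home directory, or a different host all fail the check.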
In the Active Network Encapsulation Protocol (ANEP) [8], considerable focus was put on developing a ubiquitous and useful Authentication Header. This header has much in common with the design of IP security protocol (IPsec), and the concepts are quite portable. From the very beginning the goal was to build in integrity and the means for authorization on a per-packet basis (see Figure 3.3).
[Figure: an ANEP packet comprising the packet headers, an SPI and replay counter, one or more authenticators (each carrying authentication data), and the packet payload.]

Figure 3.3 Authentication infrastructure in an ANEP packet.
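The per-packet idea can be conveyed with a sketch: an SPI selects a shared key, a replay counter is covered by the authenticator, and the authenticator travels with the packet. The field layout, key setup, and MAC choice here are illustrative assumptions, not the ANEP wire format.

```python
import hashlib
import hmac
import struct

# SPI -> shared key, assumed to be established out of band.
KEYS = {1: b"shared-secret-for-spi-1"}

def protect(spi, counter, payload):
    """Prepend SPI, replay counter, and an HMAC covering all of them."""
    header = struct.pack("!II", spi, counter)
    mac = hmac.new(KEYS[spi], header + payload, hashlib.sha256).digest()
    return header + mac + payload

def verify(packet, last_counter):
    """Check integrity and reject replays; return (counter, payload)."""
    spi, counter = struct.unpack("!II", packet[:8])
    mac, payload = packet[8:40], packet[40:]
    expected = hmac.new(KEYS[spi], packet[:8] + payload, hashlib.sha256).digest()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("integrity check failed")
    if counter <= last_counter:
        raise ValueError("replayed packet")
    return counter, payload

pkt = protect(1, 7, b"active code")
counter, payload = verify(pkt, last_counter=6)   # accepted; counter == 7
```

A tampered payload changes the expected MAC and is rejected, and re-presenting the same packet fails the counter check, giving per-packet integrity and replay protection in the manner the header was designed to support.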
3.7 RESOURCE CONTROLS

Resource controls [e.g., for memory, central processing unit (CPU), and bandwidth] were relatively neglected as an area of security research in active networking, although some prototype systems and active network NodeOS prototypes addressed the issue. Not surprisingly, one of the major issues in the design of network systems is sharing. The economics of networking only make sense because the shared resource, whether a bus, a switched network, or the Internet, can be used effectively by multiple users in preference to a multiplicity of dedicated links. So it is important to consider some of the places where sharing occurs in an active network system.

First and foremost, there is multiplexing of the links interconnecting active network nodes, and that, of course, has been well studied, for example in the literature on Ethernet performance and ATM switch performance, as well as extensively in the more abstract literature on queuing systems. Next are multiplexing issues in network elements; in this case, active network elements. Here, the multiplexing requirements can vary based on the node architecture. If we refer to the architecture illustrated in Figure 3.1, we can see a number of other possible multiplexing points. First (moving bottom up) is the NodeOS—the operating system that multiplexes physical resources such as disks, memory, and network adapters. Systems such as Menage’s Resource Controlled Active Network Environment (RCANE) [25] utilize a system such as Nemesis [24] to partition resources among scheduling domains, which gives each scheduling domain a “share” of the system resources. A similar model was later followed in NodeOS [31] work such
as the University of Arizona’s Scout system and the University of Utah’s FLUX system. It would be expected that the operating system would operate one or more execution environments, which, in our discussion, correspond to a programming environment and its run-time system. It is absolutely reasonable to expect multiple instances of an execution environment, even if a single language is used, since it is likely that versioning will occur in these systems just as it does elsewhere with software systems. In a system like RCANE, each execution environment would be placed in a distinct scheduling domain to avail itself of the partitioning offered by the underlying NodeOS (Nemesis). Execution environments could in fact simply be replicated and assigned portions of the machine.

Finally, there is multiplexing and scheduling that takes place within a single execution environment. For example, multiple threads executing in an environment may share the same heap. This might be an issue even in a Java system used for packet-by-packet interpretation, unless great care was taken to serialize execution, keep all state ephemeral, and run garbage collection between active packet executions. This seems unlikely in practice. So effort must be invested in partitioning heaps, as was shown to be effective in RCANE [10, 25].

Controlled sharing of resources is necessary for security, as it is the only remedy to resource overuse attacks such as “denial of service” attacks. A crucial element in the design of any resource allocation system is the authorization system for the use of resources. This was examined in the Secure Quality of Service Handling (SqoSH) [9] system designed at the University of Pennsylvania. There, the general scheme used for access control (capability-like, digitally signed credentials) was applied to a set of resource control “tuning knobs,” which allocated shares of resources in the underlying system.
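One concrete flavor of such a knob is rate policing. The sketch below is a minimal virtual-clock policer in the spirit of Zhang’s algorithm: the flow’s virtual clock advances by packet size divided by allocated rate, and a packet that would push the clock further ahead of real time than a burst tolerance is rejected. The class name, parameters, and admission rule are illustrative, not the SqoSH/Piglet implementation.

```python
class VirtualClockPolicer:
    """Per-flow rate policing via a virtual clock (illustrative sketch)."""

    def __init__(self, rate_bytes_per_s, burst_tolerance_s):
        self.rate = rate_bytes_per_s
        self.tol = burst_tolerance_s
        self.vclock = 0.0   # the flow's virtual service time

    def admit(self, now, packet_bytes):
        # Each packet would advance the virtual clock by its service time.
        candidate = max(self.vclock, now) + packet_bytes / self.rate
        if candidate - now > self.tol:
            return False            # over the allocated rate: drop or delay
        self.vclock = candidate     # charge the flow for this packet
        return True

p = VirtualClockPolicer(rate_bytes_per_s=1000, burst_tolerance_s=0.25)
burst = [p.admit(0.0, 100) for _ in range(4)]
print(burst)              # [True, True, False, False]: burst clipped at 2 packets
print(p.admit(1.0, 100))  # True: by t=1.0 the flow is back within its rate
```

The tolerance parameter is exactly the kind of per-flow setting that a credential-controlled “tuning knob” interface would expose and protect.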
The design was evaluated using an experimental operating system called Piglet [28, 29]. Piglet was used to implement a version of Lixia Zhang’s “virtual clock” rate-policing algorithm, and the controls were protected behind the SwitchWare system’s module thinning scheme and its remote support via SANE. In summary, access control proved quite effective in practice for protecting a structured interface to the resource controls.

3.8 PUTTING IT ALL TOGETHER

It is worth examining a single system that “puts it all together” to understand what is possible. As it is familiar, the SwitchWare [4] system, designed, implemented, and experimentally evaluated at the University of Pennsylvania, will be considered. The goal of the SwitchWare project was to build a secure, flexible node architecture with which a larger network infrastructure could be constructed.
Given that advances in programming language technology enhanced the potential safety and security contributions of the programming environment at a low cost to flexibility, we explored the use of a member of the ML [26] family of languages, OCaml [23], as a development language. This turned out to be both a benefit and a hindrance. The benefit was that the environment delivered much of the desired code safety, and was able to deliver surprisingly respectable performance compared to competing programming environments such as Java. The hindrance was that a relatively small community of programmers existed, which reduced the impact of the system in terms of reuse of the code base, follow-on technologies, and so on.

The presence of “module thinning,” a facility through which a program’s namespace could be selectively restricted, turned out to have great power. As with sparse capability-like access systems, the extent of a program’s access was dictated by what it could name. Thus, the programming language provided the elementary support for resource access control, and the use of cryptographic capabilities extended this namespace for use by remote nodes upon presentation of the credentials. By way of example, this would be used when one of the active packets shown at the top of Figure 3.4 required access to resources designated as privileged in the node’s security policy. Figure 3.4 provides an illustration of the design concepts underlying the SwitchWare node architecture.

The first realization of the node architecture was Alexander et al.’s “Active Bridge” [7], later developed as the ALIEN [6] node architecture. The basic principle applied was that the immutable code would be minimized and instantiated in a module called the loader, which loaded the mutable portions of the system. The loader was, of course, privileged, following our arguments about integrity earlier in this chapter.
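Module thinning can be illustrated with a toy analogue. In SwitchWare the restriction is expressed through OCaml module signatures; the Python sketch below only mimics the idea, and the module and function names (a stand-in “network” module with a privileged raw-Ethernet operation) are hypothetical.

```python
import types

def thin(module, allowed_names):
    """Return a 'thinned' view of a module exposing only permitted names.

    Loaded code given only this view cannot reach the privileged
    operations, because its access is bounded by what it can name.
    """
    view = types.SimpleNamespace()
    for name in allowed_names:
        setattr(view, name, getattr(module, name))
    return view

# A stand-in "network" module mixing safe and privileged operations.
net = types.SimpleNamespace(
    send_packet=lambda payload: f"sent {len(payload)} bytes",
    sniff_raw_ethernet=lambda: "all traffic on the wire",  # privileged
)

# An unprivileged active program receives only the thinned view.
unprivileged_net = thin(net, ["send_packet"])
print(unprivileged_net.send_packet(b"hello"))           # sent 5 bytes
print(hasattr(unprivileged_net, "sniff_raw_ethernet"))  # False
```

A program presenting the right credentials would simply be handed a wider view, which is how the cryptographic-capability extension described above maps onto the same namespace mechanism.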
Since it is both reasonable and desirable to change administrative code over time (e.g., after a security flaw has been discovered), the privilege boundary extends above the loader, separating the types of code that might be present on a node. The ALIEN system provided certain services using privileged modules. An example of such a service is access to the raw Ethernet packets when an active network node is connected to an Ethernet. Since the confidentiality of some applications may be violated by “packet sniffing,” this access must be controlled and not be available to all active programs on the node. ALIEN used the “module thinning” technique, available in the OCaml implementation of the ML programming language, to restrict access to active services designated as privileged. This is an instantiation of namespace security, in the style of capability-based access controls.

The analogy to capabilities is employed in extending the name-based controls to remote ALIEN nodes. In an extension of the ALIEN system called the Secure Active Network Environment [3, 10], cryptographic techniques [30] were used to provide remote authorizations for access to active services. The use of credentials
and a trust management architecture enables considerable flexibility and scaling, while preserving the elegance of the programming model. Figure 3.5 illustrates the structure of the SANE system, including the system hardware. The SANE system can be mapped quite readily to the abstract architecture of Figure 3.4, but is oriented toward the construction of networks of programmable nodes that are trusted and integrity checked.
[Figure: active packets at the top use ephemeral functions and state belonging to individual packets; below an administrative privilege boundary sit active extensions (persistent state and functions used by many active packets) and the loadlet (minimal static functionality for persistent state and functions).]

Figure 3.4 An abstract view of the SwitchWare architecture.
Note especially the secure bootstrap, discussed previously in Section 3.5, which operates as follows: before the next layer of the bootstrap process is executed, a cryptographic checksum is performed on the layer and compared with a stored value. If they match, then the system is what it was (even if insecure), but if the checksums do not match, then there is clearly an integrity failure.

Node integrity is obviously crucial in an active network, where commercial deployments would result in scenarios similar to today’s large ISPs, where huge racks of almost unattended servers operate continuously in multiple-acre rooms—sometimes called “telecom hotels.” Such unattended operation means not only that security must be preserved, but also that there must be a robust automated path to recovery.
[Figure: loadable modules pass through module checking, with remote authentication of modules, into a card runtime loader running in a Linux process VM; beneath a memory protection boundary lie the OS (e.g., Linux), card ROMs and CMOS, and the BIOS, with integrity dependencies secured by secure bootstrap and recovery via AEGIS.]

Figure 3.5 The organization of roles and modules in the SANE system.
It is notable that out of the secure node architecture research in SwitchWare came a secure bootstrap technology, which serves as a basis for the TCPA now being standardized for the computer industry.

3.9 CONCLUSION AND THOUGHTS FOR THE FUTURE

As research efforts in active networking were initiated in the mid-1990s, concerns about security were an alarm rung against the architectural promise of programmability. Research in response to these concerns, sketched in this chapter, has shown that a carefully reasoned and implemented approach to addressing security concerns can lead to a system that is more secure than existing systems, particularly in managing resources such as bandwidth, and in providing an active response to the threats that are inevitable with distributed control, such as distributed denial of service (DDoS) attacks. Notably, all techniques argued to address DDoS [32, 34] attacks have a strong active networking flavor, and in fact many have been developed by researchers engaged in active networking research. The future looks bright, and the impact of our research in SwitchWare, FAIN, and its subsidiary, GAIN, has only begun to penetrate commercial systems. The
opportunities are great. Three will be considered here: overlay networks, middleboxes, and mobile systems.

Overlay networks are active networks, and in some sense FAIN [52] and similar projects have provided thought leadership in this area, seeking to provide a programmable control overlay for the future of the Internet. The effort has proven quite successful. The approach of overlay networks [37, 38, 40, 44] seeks to avoid some of the interoperability challenges associated with early active network efforts [33, 39], which attempted to change the Internet. Instead, overlay networks treat the Internet as a “black box,” over which networking infrastructure is constructed in a tabula rasa style, allowing any and all forwarding, routing, and service definitions to be explored. While in this sense overlays weaken the active networking vision crucially, they are proving easy to deploy—something that has been a major issue with active networking. Distributed algorithms and security of overlay networks [21] are becoming an area of study, and results from active networking research in trust management and integrity checking will increasingly come to the forefront, as malicious peers seek to frustrate the assumptions of other nodes in a peer-to-peer architecture.

Middleboxes [15] have arisen in response to the challenges to the Internet, such as slow evolution and demands for rapid service introduction, that active networking in general and FAIN specifically were intended to address. While a complete enumeration of middleboxes is impossible, some effort has been put into cataloging these systems by researchers in the Internet community in [15], which gives a more complete set of middlebox examples [19, 20, 35, 36, 41] and a clear analysis of what the research issues and long-term architectural goals are (although the document is seemingly ignorant of active networking). Examples include firewalls [16], intrusion detection systems, and load balancing systems.
Each of these examples illustrates several key points. First, each has a domain-specific “language.” Second, each performs packet-by-packet interpretation of the headers and data passing by. Third, each is integral to the distributed security architecture scaffolded over the Internet. Given these observations, it is clear that a general architecture could be used for each of these specialized functions, and any optimizations and security enhancements would then benefit all middleboxes, rather than merely selected examples.

Finally, mobile systems provide exciting opportunities for security techniques and technologies for mobile code. The mobile telephony industry is exploring options such as the Java-like “Brew” for advanced mobile phones, and the convergence of personal digital assistants (PDA) and mobile telephony continues unabated by hiccups in the telecommunications marketplace. An even more interesting possibility appears to be software radio [14, 27], where programmability can extend to the manipulation of the radio system’s functionality, to the point of defining waveforms. We believe all of these systems will benefit significantly from the security lessons learned in active networks research.
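The shared match-action structure observed above can be made concrete with a small sketch: a firewall, a NAT, and similar middleboxes all reduce to per-packet match-action rules, so one generic engine can host all of them. This is only an illustration; all names and rule contents here are invented, not drawn from any of the systems cited.

```python
# Sketch: a generic middlebox engine hosting firewall- and NAT-style
# behavior as match-action rules. All names are illustrative.

from dataclasses import dataclass, field
from typing import Callable, List, Optional, Tuple

@dataclass
class Packet:
    src: str
    dst: str
    dport: int

@dataclass
class Middlebox:
    # Each rule pairs a header predicate with an action; an action may
    # drop (return None), rewrite, or pass the packet through.
    rules: List[Tuple[Callable, Callable]] = field(default_factory=list)

    def add_rule(self, pred, action):
        self.rules.append((pred, action))

    def process(self, pkt: Packet) -> Optional[Packet]:
        for pred, action in self.rules:
            if pred(pkt):
                return action(pkt)      # first matching rule wins
        return pkt                      # default: forward unchanged

# Firewall-style rule: drop inbound telnet.
fw = Middlebox()
fw.add_rule(lambda p: p.dport == 23, lambda p: None)

# NAT-style rule: rewrite private source addresses.
nat = Middlebox()
nat.add_rule(lambda p: p.src.startswith("10."),
             lambda p: Packet("198.51.100.1", p.dst, p.dport))
```

The point of the sketch is the one made above: because the engine is generic, hardening or optimizing `process` benefits every specialized function at once.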
References

[1] Engineering and Operations in the Bell System, 2nd ed., Murray Hill, NJ: AT&T Bell Laboratories, 1983.
[2] Alexander, D. S., et al., “Safety and Security of Programmable Network Infrastructures,” IEEE Communications, Vol. 36, No. 10, October 1998, pp. 84-92.
[3] Alexander, D. S., et al., “A Secure Active Network Environment Architecture: Realization in SwitchWare,” IEEE Network, Special Issue on Active and Programmable Networks, Vol. 12, No. 3, May/June 1998, pp. 37-45.
[4] Alexander, D. S., et al., “The SwitchWare Active Network Architecture,” IEEE Network, Special Issue on Active and Programmable Networks, Vol. 12, No. 3, May/June 1998, pp. 29-36.
[5] Alexander, D. S., et al., “Security in Active Networks,” Secure Internet Programming: Security Issues for Mobile and Distributed Objects, Vitek, J., and Jensen, C., (eds.), New York: Springer-Verlag, 1999, pp. 433-451.
[6] Alexander, D. S., and Smith, J. M., “The Architecture of ALIEN,” Proc. First International Workshop on Active Networks, Springer-Verlag, Germany, June 30-July 2, 1999, pp. 1-12.
[7] Alexander, D. S., et al., “Active Bridging,” Proc. ACM SIGCOMM Conference, Cannes, France, October 1997, pp. 101-111.
[8] Alexander, D. S., et al., Active Network Encapsulation Protocol (ANEP), Active Networks Group, DARPA Active Network Project, August 1997.
[9] Alexander, D. S., et al., “Secure Quality of Service Handling (SQoSH),” IEEE Communications, Vol. 38, No. 4, April 2000, pp. 106-112.
[10] Alexander, D. S., et al., “The Price of Safety in an Active Network,” Journal of Communications and Networks (JCN), Special Issue on Programmable Switches and Routers, Vol. 3, No. 1, March 2001, pp. 4-18.
[11] Arbaugh, W. A., Farber, D. J., and Smith, J. M., “A Secure and Reliable Bootstrap Architecture,” IEEE Security and Privacy Conference, Oakland, CA, May 1997, pp. 65-71. (An early version is available as Technical Report MS-CIS-96-35, CIS Dept., University of Pennsylvania, December 2, 1996.)
[12] Arbaugh, W. A., et al., “Security for Virtual Private Intranets,” IEEE Computer, Special Issue on Broadband Networking Security, Vol. 31, No. 9, September 1998, pp. 48-55.
[13] Blaze, M., Feigenbaum, J., and Lacy, J., “Decentralized Trust Management,” Proc. IEEE 17th Symposium on Security and Privacy, 1996, pp. 164-173.
[14] Bose, V., “Virtual Radios,” Ph.D. Dissertation, MIT, 1999.
[15] Carpenter, B., and Brim, S., “Middleboxes: Taxonomy and Issues,” Internet Engineering Task Force, RFC 3234, February 2002.
[16] Cheswick, B., and Bellovin, S., Firewalls and Internet Security: Repelling the Wily Hacker, Reading, MA: Addison-Wesley, 1994.
[17] Embedded, Everywhere: A Research Agenda for Networked Systems of Embedded Computers, Washington, D.C.: National Academy Press, 2001.
[18] Hadzic, I., Udani, S., and Smith, J. M., “FPGA Viruses,” Proc. of 9th International Workshop on Field-Programmable Logic and Applications, FPL’99, Springer, August 1999.
[19] Hain, T., “Architectural Implications of NAT,” Internet RFC 2993, November 2000.
[20] Holdrege, M., and Srisuresh, P., “Protocol Complications with the IP Network Address Translator,” Internet RFC 3027, January 2001.
[21] Keromytis, A., Misra, V., and Rubenstein, D., “SOS: Secure Overlay Services,” Proc. ACM SIGCOMM Conf., 2002, pp. 20-30.
[22] Keromytis, A., et al., “The STRONGMAN Architecture,” 3rd DARPA Information Survivability Conference and Exposition (DISCEX), April 2003.
[23] Leroy, X., The Caml Special Light System (Release 1.10), INRIA, France, November 1995.
[24] Leslie, I. M., et al., “The Design and Implementation of an Operating System to Support Distributed Multimedia Applications,” IEEE Journal on Selected Areas in Communications (JSAC), Vol. 14, No. 7, September 1996, pp. 1280-1297.
[25] Menage, P. B., “Resource Control of Untrusted Code in an Open Programmable Network,” Ph.D. Dissertation, University of Cambridge Computer Laboratory, 2000.
[26] Milner, R., Tofte, M., and Harper, R., The Definition of Standard ML, Cambridge, MA: MIT Press, 1990.
[27] Mitola, J., III, “Software Radios,” Proc. IEEE National Telesystems Conference, IEEE, May 1992.
[28] Muir, S. J., and Smith, J. M., “Supporting Continuous Media in the Piglet OS,” 8th International Workshop on Network and Operating Systems Support for Digital Audio and Video, 1998, pp. 99-102.
[29] Muir, S. J., “Piglet: An Operating System for Network Appliances,” Ph.D. Dissertation, CIS Department, University of Pennsylvania, 2001.
[30] Needham, R., and Schroeder, M., “Using Encryption for Authentication in Large Networks,” Communications of the ACM, Vol. 21, No. 12, 1978, pp. 993-999.
[31] Peterson, L., et al., “An OS Interface for Active Routers,” IEEE Journal on Selected Areas in Communications (JSAC), Vol. 19, No. 3, March 2001, pp. 473-487.
[32] Savage, S., et al., “Practical Support for IP Traceback,” Proc. ACM SIGCOMM Conf., 2000, pp. 295-306.
[33] Smith, J. M., et al., “Activating Networks: A Progress Report,” IEEE Computer, Vol. 32, No. 4, April 1999, pp. 32-41.
[34] Snoeren, A. C., et al., “Hash-Based IP Traceback,” Proc. ACM SIGCOMM Conference, 2001, pp. 3-14.
[35] Srisuresh, P., and Holdrege, M., “IP Network Address Translator (NAT) Terminology and Considerations,” Internet RFC 2663, August 1999.
[36] Srisuresh, P., and Egevang, K., “Traditional IP Network Address Translator (Traditional NAT),” Internet RFC 3022, January 2001.
[37] Stoica, I., et al., “Chord: A Scalable Peer-to-Peer Lookup Protocol for Internet Applications,” Proc. ACM SIGCOMM Conf., 2001, pp. 149-160.
[38] Stoica, I., et al., “Internet Indirection Infrastructure,” Proc. ACM SIGCOMM Conf., 2002, pp. 10-20.
[39] Tennenhouse, D. L., et al., “A Survey of Active Network Research,” IEEE Communications, Vol. 35, No. 1, January 1997, pp. 80-86.
[40] Touch, J., “Dynamic Internet Overlay Deployment and Management Using the X-Bone,” Computer Networks, July 2001, pp. 117-135.
[41] Tsirtsis, G., and Srisuresh, P., “Network Address Translation - Protocol Translation (NAT-PT),” Internet RFC 2766, February 2000.
[42] Tullmann, P., Hibler, M., and Lepreau, J., “Janos: A Java-Oriented OS for Active Network Nodes,” IEEE Journal on Selected Areas in Communications (JSAC), Vol. 19, No. 3, March 2001.
[43] Wetherall, D. J., Guttag, J., and Tennenhouse, D. L., “ANTS: A Toolkit for Building and Dynamically Deploying Network Protocols,” Proc. IEEE OpenArch ’98, San Francisco, CA, 1998, pp. 117-129.
[44] White, B., et al., “An Integrated Experimental Environment for Distributed Systems and Networks,” Proc. USENIX OSDI Conf., December 2002.
[45] Zander, J., and Forchheimer, R., “Softnet—An Approach to Higher Level Packet Radio,” Proc. AMRAD Conf., 1980.
[46] Agere Network Processors, http://www.agere.com/enterprise metro access/network processors.html.
[47] Application Level Programmable Inter-Network Environment Project Web Page, http://www.cs.ucl.ack.uk/alpine/.
[48] Biagioni, E., “A Structured TCP in Standard ML,” Proc. 1994 SIGCOMM Conf., 1994, pp. 36-45.
[49] Feldmeier, D. C., et al., “Protocol Boosters,” IEEE Journal on Selected Areas in Communications, Special Issue on Protocol Architectures for 21st Century Applications, Vol. 16, No. 3, April 1998, pp. 437-444.
[50] IETF Forwarding and Control Element Separation Working Group Home Page, http://www.ietf.org/html.charters/forces-charter.html.
[51] Freed, N., “Behavior of and Requirements for Internet Firewalls,” Internet RFC 2979, October 2000.
[52] Galis, A., et al., “A Flexible IP Active Networks Architecture,” Proc. 2nd IWAN, Springer, 2000, pp. 1-15.
[53] Hicks, M., and Keromytis, A. D., “A Secure PLAN,” Proc. First International Workshop on Active Networks, Springer-Verlag, Germany, June 30-July 2, 1999, pp. 307-314.
[54] Hicks, M., and Keromytis, A., “A Secure PLAN (Extended Version),” IEEE Trans. on Systems, Man and Cybernetics, 2003.
[55] Hicks, M., et al., “PLAN: A Packet Language for Active Networks,” Proc. International Conf. on Functional Programming, 1998.
[56] Hicks, M., et al., “PLANet: An Active Internetwork,” Proc. of the 18th IEEE Computer and Communication Society INFOCOM Conference, 1999, pp. 1124-1133. (Also available online at http://www.cis.upenn.edu/~switchware/papers/planet.ps.)
[57] Hicks, M., Moore, J. T., and Nettles, S., “Compiling PLAN to SNAP,” IWAN ’01, September/October 2001, http://www.cis.upenn.edu/~jonm/papers/plan2snap.ps.
[58] IBM PowerNP Network Processors, http://www.3.ibm.com/chips/products/wired/products/network processors.html.
[59] Intel IXP Architecture Network Processors, http://www.intel.com/design/network/products/npfamily/.
[60] Kakkar, P., et al., “Specifying the PLAN Network Programming Language,” Electronic Notes in Theoretical Computer Science, September 1999.
[61] Marcus, W., et al., “Protocol Boosters: Applying Programmability to Network Infrastructures,” IEEE Communications, Vol. 36, No. 10, October 1998, pp. 79-83.
[62] Moore, J. T., Hicks, M., and Nettles, S., “Practical Programmable Packets,” Proc. of the Twentieth IEEE Computer and Communication Society INFOCOM Conference, April 2001, pp. 41-50. (Also available online at http://www.cis.upenn.edu/~switchware/papers/snap.pdf.)
[63] Moore, J. T., “Practical Active Packets,” Ph.D. Dissertation, University of Pennsylvania, September 2002.
[64] Needham, R. M., “Denial of Service: An Example,” Communications of the ACM, Vol. 37, No. 11, November 1994, pp. 42-46.
[65] Partridge, C., et al., “FIRE: Flexible Intra-AS Routing Environment,” Proc. ACM SIGCOMM Conference, 2000, pp. 191-203.
[66] Peterson, L., et al., “A Blueprint for Introducing Disruptive Technology into the Internet,” Proc. of the 1st ACM Workshop on Hot Topics in Networks, October 2002.
[67] PLAN home page, http://www.cis.upenn.edu/~switchware/PLAN.
[68] Smith, J. M., and Nettles, S. M., “Active Networking: One View of the Past, Present and Future,” IEEE Trans. on Systems, Man and Cybernetics, 2003.
[69] Stehr, M. O., and Talcott, C., “PLAN in Maude: Specifying an Active Network Programming Language,” Fourth International Workshop on Rewriting Logic and Its Applications (WRLA 2002), Pisa, Italy, September 19-21, 2002. Also in Electronic Notes in Theoretical Computer Science, Vol. 71, 2002, http://www.elsevier.nl/locate/entcs/volume71.html.
[70] Wall, D. W., “Messages as Active Agents,” Proc. Ninth Annual POPL, 1982, pp. 34-39.
Chapter 4

Programmable Network Management and Services: Background

This chapter reviews the main research projects relevant to the management of active and programmable networks, and summarizes the range of solutions proposed. We divide the analysis into two important management topics: network management (specifically, policy-based management of active networks) and active service provisioning (ASP). At the end of the chapter we elaborate on the direction that current research and standardization activities seem likely to take in the near future.

4.1 STATE OF THE ART

Recent research activities have clearly focused on the synergy of three concepts: network virtualization, open interfaces and platforms, and intelligence in the network. The network is no longer an entity providing basic connectivity, but rather is treated as a service-enabling platform, which is open, intelligent, and offers a variety of virtualized facilities under the authority of different communities. Consequently, the management of such a network must take into account a whole new range of problems and complexity. To this end, it must support the coexistence of different management strategies, thus facilitating customization, interoperability with different vendors’ equipment, and dynamic extensibility of its functionality to support new services and their deployment.

4.1.1 Network and Element Management

This review of the state of the art of programmable network management is divided into two main sections: the first deals with nonpolicy-based proposals, and the second focuses on policy-based management of programmable networks. The goal is to give a clear overview of the different alternatives proposed to manage programmable networks and, at the same time, to focus on those that have chosen an approach similar to ours; that is, policy-based management.
4.1.1.1 Nonpolicy-Based Management of Programmable Networks

There are a number of research projects that cover the field of programmable network management using nonpolicy-based approaches. Most of these projects use programmable network techniques to achieve more efficient management. We briefly review some of them below in order to demonstrate the wide range of solutions proposed.

ABone Management

The ABone [1] is a DARPA-funded virtual testbed for the active networks research program. It is composed of a set of computer systems configured into virtual active networks. The ABone nodes are administered locally, but can be used by remote users to start up execution environments (EEs) and launch active applications (AAs). Each core ABone node is configured with seven UNIX-specific accounts. Each account runs an instance of the Anetd active network management daemon. These daemons allow remote EE and AA developers to install, configure, and control EE instances in these nodes. The Anetd client, sc, can be used to communicate with Anetd on these machines and perform the required functions. Anetd performs two major functions: the deployment, configuration, and control of network software, in particular EE prototypes; and the demultiplexing of active network packets, encapsulated using the Active Network Encapsulation Protocol (ANEP), to multiple EEs located on the same network node.

ABLE

The Active Bell Labs Engine (ABLE) [2, 7] proposes a novel active network architecture that primarily addresses the management challenges of modern complex networks. Its main component is an active engine that is attached to any IP router to form an active node. The active engine is designed and implemented to execute programs that arrive from the network. The engine facilities and executed programs are oriented to the monitoring and control of the attached router. The active code is implemented in Java, and active packets are encapsulated in a standard ANEP header over UDP.
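ANEP encapsulation, used here and by several of the systems below, can be sketched roughly as follows. The layout follows the fixed header described in the ANEP specification (version, flags, type ID, header length in 32-bit words, total packet length); the type ID value chosen and the absence of options are illustrative simplifications, not a conformant implementation.

```python
# Sketch of ANEP-style encapsulation/decapsulation of active code.
# Fixed header only (no options); values are illustrative.

import struct

ANEP_VERSION = 1

def anep_encapsulate(type_id: int, payload: bytes, flags: int = 0) -> bytes:
    header_words = 2                        # fixed 8-byte header = 2 words
    pkt_len = header_words * 4 + len(payload)
    # Network byte order: version, flags, type ID, header len, packet len.
    header = struct.pack("!BBHHH", ANEP_VERSION, flags, type_id,
                         header_words, pkt_len)
    return header + payload

def anep_decapsulate(datagram: bytes):
    version, flags, type_id, hdr_words, pkt_len = struct.unpack(
        "!BBHHH", datagram[:8])
    if version != ANEP_VERSION:
        raise ValueError("unknown ANEP version")
    # Skip the header (and any options) to recover the active payload.
    return type_id, datagram[hdr_words * 4:pkt_len]
```

The type ID is what lets a node demultiplex a packet to the right EE, as Anetd does for the ABone.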
The authors of ABLE claim that ABLE offers efficient access to the local state of the router and a secure system for modifying the router’s behavior, as well as easy-to-use programming abstractions and interfaces.

AVNMP

The Active Virtual Network Management Prediction (AVNMP) [3, 6] algorithm is a proactive management system; in other words, it provides the ability to solve a
potential problem before it impacts the system, by modeling network devices within the network itself, and running that model ahead of real time. Predictions range from network performance to possible network or node faults. Such a proactive management approach is particularly useful in many applications. For example, in the case of handovers in a mobile environment, if the handover is prepared in advance, service quality degradation is minimized. Similarly, in QoS-sensitive applications, particularly those affected by excessive or variable delay, the management system can avoid congestion before it actually happens.

The system is composed of different types of active nodes with different targets. Some active nodes make predictions based on the information they have, and publish them on the network. These predictions can be either about the network or about an offered service. A second type of active node captures these predictions and feeds them into the management algorithms that have been implemented. The algorithms basically compare the actual state of the network with previous predictions. If a previous prediction was incorrect, the configuration actions caused by that prediction are removed from the network. This correction is done through a special kind of message called an antimessage.

Smart Packets

Smart packets [4] focus on applying active network technology to network management and monitoring without placing undue burden on the nodes in the network. The management applications developed are oriented to diagnostic reporting and fault detection. The framework is based on active packets carrying programs that are executed at nodes on the path to one or more target hosts. Smart packet programs are written in a tightly encoded safe language (spanner), specifically designed to support network management and to avoid dangerous constructs and accesses.
The spanner code is obtained by compiling a program written in sprocket, a high-level programming language created specifically for the project. Smart packets are generated by management or monitoring applications and are encapsulated in ANEP. The ANEP daemon is responsible for receiving and forwarding smart packets correctly. Security is achieved through the limitations imposed by the tightly encoded safe language, and through prudent execution of smart packet code: if the virtual machine does not know how to proceed with the code, it stops execution. Additional security checks, such as user authentication and data integrity verification, are also performed.
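The fail-stop execution model described above can be sketched with a tiny stack machine that simply halts when it meets an instruction it does not recognize, or when a step budget is exhausted. The opcode set and the toy MIB are invented for illustration; the real spanner encoding is far richer.

```python
# Sketch of "prudent execution": a restricted VM that halts on any
# unknown opcode or when its step budget runs out. Names are invented.

SAFE_OPS = {
    "PUSH": lambda stack, arg: stack.append(arg),
    "ADD":  lambda stack, _: stack.append(stack.pop() + stack.pop()),
    "GET":  lambda stack, arg: stack.append(NODE_MIB.get(arg, 0)),
}

NODE_MIB = {"ifInOctets": 1200, "ifOutOctets": 800}   # toy node state

def run(program, max_steps=100):
    """Execute a smart-packet-like program; stop on anything unknown."""
    stack = []
    for step, (op, arg) in enumerate(program):
        if step >= max_steps or op not in SAFE_OPS:
            return None, "halted"        # prudent execution: just stop
        SAFE_OPS[op](stack, arg)
    return stack, "ok"

# A diagnostic program: total octets through the interface.
stack, status = run([("GET", "ifInOctets"),
                     ("GET", "ifOutOctets"),
                     ("ADD", None)])
```

The step budget plays the role of the resource limits a real node would enforce; a packet that tries an unsupported operation is dropped rather than rejected with an error a sender could exploit.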
50
Programmable Networks for IP Service Deployment
SENCOMM

The main objective of the Smart Environment for Network Control, Monitoring and Management (SENCOMM) [46] framework is to implement a network control, management, and monitoring environment using active networks. SENCOMM is in some sense a continuation of smart packets, since it reuses much of the smart packets system. User-written network management and monitoring programs generate smart probes, which are encapsulated in ANEP frames. The probes are demultiplexed to the local SENCOMM management EE, which injects the smart probes into the network. A probe can be sent to be executed only at the destination, or at every active node running the SENCOMM management EE (so that measurements and control operations can be taken in a single packet’s traversal of the network). A probe contains directives to access loadable libraries of functions on the node, and can register to receive incoming packets that meet a filter specification, optionally injecting them back into the network. Probe packets can be sent to either unicast or multicast addresses. The information content returned by probes to the management center can be tailored in real time to the current interests of the center.

VAN

The virtual active network (VAN) management framework [5] allows customers, on the one hand, to access and manage a service in a provider’s domain and, on the other hand, to outsource a service and its management to a service provider. VAN supports generic (that is, service-independent) interfaces for service provisioning and management, as well as customized service abstractions and control functions, according to a customer’s requirements. Only two types of EE exist in the management architecture: the management EE, which works on the management plane, and the service provider EE, which works on the data transfer as well as on the control plane. The tasks of the management EE are limited to node configuration and the management of virtual active networks in the active network provider’s domain.
Note that in this context VAN management means the creation, modification, monitoring, and termination of virtual active networks. The management EE is not concerned with the management of active services running in the virtual active networks. In the VAN architecture, a service and the corresponding service management run in the same instantiation of a service provider EE.

4.1.1.2 Policy-Based Management of Programmable Networks

Policy-based management is an emerging technology for the management of telecommunications networks that can be adapted, as has been demonstrated
within FAIN and within a number of other projects briefly described below, to deal with active networks. More specifically, policy-based network management technology eases the handling of active networks’ specific requirements. For example, policies are particularly suited to delegating management responsibility, which is essential for enabling the customization of network resources. Also, the device-independence of policies makes them well suited to the management of heterogeneous network technologies. Finally, policies permit a more automated and distributed approach to management, making decisions based on locally available information according to a set of rules.

Both the Internet Engineering Task Force (IETF) [8] and the Distributed Management Task Force (DMTF) [9] are currently working on the definition of standards for policy-based network management (PBNM). The DMTF is mainly focused on the representation of policies and the specification of a corresponding information model and schema [10]. The IETF is also working in this field, in cooperation with the DMTF [11], while also trying to define a general framework for a PBNM system, as well as a protocol that could be used for implementing one [12].

Aside from the standardization activities, many research projects have covered the field of policy-based management. Among these, the most relevant ones for FAIN are the Ponder and Jasmin projects. The Ponder project has been one of the first technology- and manufacturer-independent policy-based management frameworks. It defines a policy specification language from which many concepts have been reused in FAIN. The Jasmin project, although not an active networks-related project, explores the automation and distribution of policies and policy decisions; properties that are of the highest relevance to FAIN.
When focusing on policy-based management of active networks, we find that up to now there have not been many efforts to analyze the synergies that can be obtained from marrying active and policy-driven networking technologies. Some of the more widely known and accepted works are Seraphim, ANDROID, PxP, A-PBM, policy networking using active networks, and policy specification for programmable networks.

Ponder

The Ponder project [13] has had good acceptance within the research community, and its results have been used in many research projects using policy-based management. Ponder defines a language and framework for specifying security policies that map onto various access control implementation mechanisms for firewalls, operating systems, databases, and Java. It supports obligation policies; that is, event-triggered condition-action rules for policy-based management of networks and distributed systems. Ponder can also be used for security management activities, such as the registration of users, or the logging and auditing of events related to access to critical resources or security violations.
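The event-triggered condition-action model behind obligation policies can be illustrated with a small sketch. This is not Ponder's actual syntax, which is declarative; the event name, condition, and action below are invented purely to show the evaluation model.

```python
# Sketch of event-triggered condition-action (obligation) policies.
# Event types, fields, and actions here are illustrative only.

from dataclasses import dataclass
from typing import Callable

@dataclass
class ObligationPolicy:
    on: str                          # triggering event type
    when: Callable[[dict], bool]     # condition over event attributes
    do: Callable[[dict], str]        # management action to perform

policies = [
    # "After three failed logins, disable the account."
    ObligationPolicy(
        on="loginFailed",
        when=lambda ev: ev["attempts"] >= 3,
        do=lambda ev: f"disable({ev['user']})"),
]

def dispatch(event_type: str, data: dict):
    """Fire every policy whose event matches and whose condition holds."""
    actions = []
    for p in policies:
        if p.on == event_type and p.when(data):
            actions.append(p.do(data))
    return actions
```

Grouping such rules by role, and relating roles to one another, is then what Ponder's language-level constructs add on top of this basic evaluation loop.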
Key concepts of the language include roles, to group policies relating to a position in an organization; relationships, to define interactions between roles; and management structures, to define a configuration of roles and relationships pertaining to an organizational unit such as a department.

Jasmin

The Jasmin project [14] aims to evaluate, enhance, and implement the distribution and invocation of network management scripts within distributed network management applications. The implementation supports multiple languages and run-time systems. As part of the Jasmin project, a set of classes has been added to support policy-based configuration management of Linux DiffServ nodes. In particular, general policy management language extensions, domain-specific policy management language extensions, and drivers realizing the mappings between domain-specific policies and the underlying device-level mechanisms have been realized.

Seraphim

One of the first projects to work with policies in active networks was the Seraphim project [15]. It enables the extension of its security mechanisms by allowing active code to dynamically install its own application-specific security functions. These code fragments, which are encapsulated inside active packets, are named active capabilities (AC). An AC is able to carry not only the active code, but also the security policies customized for a particular application, and even the code needed to make a policy decision. Hence, the user “can” (in some way) establish security policies in the active node.

ANDROID

The Active Network DistRibuted Open Infrastructure Development (ANDROID) project [16] proposes a policy- and event-driven architecture for the management of Application Layer Active Networking (ALAN) [17] networks. The project is mainly focused on the management of active servers, where programmability up to the application level is allowed.
Nonetheless, it also considers a limited form of management of the active routers; that is, the configuration of users’ routes toward their assigned active servers. The ANDROID management framework is policy based and event driven. Policies and events are carried in a new eXtensible Markup Language (XML) document type called a notification. When a user wants to install a new service on an active server, the user sends an event to the network operator. The operator initiates the resource and security checks, based on available policies, and then loads the active service. Active services are continuously monitored, so that if
unexpected behavior is detected, corrective policies can be enforced.

Policy Extension by Policy

Kanada [18] suggests a method for the dynamic extension of a policy-based management system by means of policies in active networks. The policy extension by policy (PxP) proposal is limited to the extension method, so it must be embedded within another management architecture. The method defines two types of policy for realizing this extension: policy definition (PD) policies and policy extension (PE) policies. On the one hand, PD policies allow a user to add a new type of policy to the policy server, specifying the correct syntax and restrictions. Then, through PE policies, users can specify the corresponding methods for translating the new policy types into commands on different types of network nodes. Both PD and PE policies are defined either by network operators or by an application. The architecture in which this extension method has been conceived is the general policy-based management architecture shown in Figure 4.1, containing a graphical user interface (GUI), a policy manager (or policy server), a database, and policy agents.
Figure 4.1 Policy extension by policy basic architecture.
When a user introduces a new policy, the policy manager verifies the correctness of the policy (both syntactic and semantic) against the information contained in the corresponding PD policy.
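The PD-policy check just described can be sketched as a schema lookup followed by syntactic and semantic validation. The policy type, field names, and value ranges below are hypothetical; PxP does not prescribe this representation.

```python
# Sketch of PD-policy-driven validation: each registered policy type
# (a "PD policy") names required fields and a semantic constraint.
# The "diffserv-mark" type and its fields are invented for illustration.

pd_policies = {
    "diffserv-mark": {
        "required": {"flow", "dscp"},           # syntactic requirements
        "valid": lambda p: 0 <= p["dscp"] <= 63,  # semantic constraint
    },
}

def verify(policy: dict):
    """Check a submitted policy against its PD policy, as the manager would."""
    pd = pd_policies.get(policy.get("type"))
    if pd is None:
        return False, "unknown policy type"
    if not pd["required"] <= policy.keys():
        return False, "missing fields"
    if not pd["valid"](policy):
        return False, "semantic check failed"
    return True, "accepted"
```

Only after a policy passes this gate would the PE-policy machinery translate it into device commands.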
The policy agent translates the new policy into managed-device commands following the instructions contained in the corresponding PE policy. Consequently, a PE policy depends on the managed node on which the policy is to be enforced. The way policies should be translated is described within PE policies by means of templates. These templates are completed with the policy information using “fillers.” The fillers specify what information should be retrieved from the policy to complete the template. A program interpreter can be included inside each policy agent to evaluate fillers; that is, to allow fillers to specify certain processing of the policy data before it is included in the template.

Active Policy-Based Management

Fonseca [19] proposes a framework to allow interoperability between different ISP management domains while satisfying end-to-end requirements given by users. The proposed framework extends the policy-based management framework proposed by the IETF by including capsules for the communication between the different components of the framework. Capsules represent user requirements and are used for service negotiation and network element configuration. The framework defines three types of capsule: one to request decisions from the policy enforcement point (PEP) to the policy decision point (PDP), one to notify decisions from the PDP to the PEP, and a third one to negotiate between ISPs.

Policy Networking Using Active Networks

Kato and Sheba [20] propose a management framework designed to reduce management traffic by allowing network elements to make decisions. This is done by defining active packets, which may contain policy parameters and code, and which are executed inside network elements. This allows network elements to make autonomous, intelligent decisions. Figure 4.2 shows the Active Program Execution System (APES), which provides an environment to execute and control programmable packets.
The policy is edited in the GUI and executed in APES. APES is responsible for carrying out the actions specified by the policy at a given time; for example, contacting another APES. Programmable packets are autonomously routed through all nodes to be managed.
Figure 4.2 APES architecture.
Policy Specification for Programmable Networks

Finally, in [21], the research group responsible for the Ponder project analyzes and suggests an approach and a framework for specifying policies related to programmable networks. The proposal is mainly based on Ponder concepts, such as the grouping of policies according to role.

4.1.2 Active Service Provisioning

In this section, we discuss existing active and passive network systems from a service deployment perspective. The systems were selected to show the range of possible design decisions. Service deployment in a network can be subdivided into two levels:
• The network level, where components have knowledge of the network topology and of the capabilities of the active nodes, enabling them to choose the active nodes into which to deploy the software;
• The node level, where the software must actually be deployed within the node environment.
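The split between the two levels can be sketched as follows: a network-level step that selects target nodes from topology and capability information, and a node-level step that actually installs the code. The node attributes and thresholds here are invented for illustration.

```python
# Sketch of two-level service deployment: network-level node selection
# followed by node-level installation. Names/thresholds are illustrative.

from dataclasses import dataclass, field

@dataclass
class ActiveNode:
    name: str
    cpu_load: float                    # 0.0 (idle) to 1.0 (saturated)
    edge: bool                         # is this an edge router?
    installed: set = field(default_factory=set)

    def install(self, service: str):   # node-level deployment step
        self.installed.add(service)

def deploy(service, nodes, need_edge=True, max_load=0.8):
    """Network-level step: pick suitable nodes, then install on each."""
    targets = [n for n in nodes
               if n.cpu_load < max_load and (n.edge or not need_edge)]
    for n in targets:
        n.install(service)
    return [n.name for n in targets]
```

Because the two steps only meet at the `install` call, each level's mechanism can be redesigned largely independently, which is the loose coupling noted below.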
A comprehensive framework must support service deployment at both levels. Due to the loose coupling between the two levels, the design of the service deployment mechanisms at each level can be tackled to a great extent independently. The design space is similar for both levels [22].

The first decision that designers of service deployment systems must make is between in-band and out-of-band service deployment. In in-band deployment, the service logic is distributed along the same path as the payload data. In-band deployment is often used for widespread code deployment to many active nodes (the nodes in the deployment path), so it is useful for deploying a given service across an entire infrastructure; in this sense it may be compared to a content delivery network (CDN) [23] architecture, where the service is deployed onto all the nodes of the architecture. Out-of-band deployment, on the other hand, refers to an architecture where service deployment and payload data use logically and/or physically distinct communication channels. That is, service deployment information is exchanged via the control or management plane. Out-of-band deployment focuses more on deployment onto specific, targeted active nodes. In this case, the service is deployed onto particular nodes depending on their location (edge routers, for instance), on the network topology, or on the nodes’ capabilities (type of execution environment, current CPU load, available bandwidth, and so on).

Historically, the in-band approach was the first deployment method to be designed and implemented [the Active Node Transfer System (ANTS), for instance], but we may argue that the out-of-band approach is now more frequently used and will be the predominant method of deploying code on new active platforms, mainly because of security considerations.
In the in-band approach, the deployment request (or the service reference) is carried inside an active packet, but the code itself may be retrieved from a code repository in an out-of-band fashion; it is not always conveyed in the data payloads. Active platforms using this approach include the Active Network Node (ANN) [41], ANTS [40], and PLANet/SwitchWare [42]. The out-of-band approach is somewhat different, as it assumes that a "network entity" is used to deploy the service onto active nodes. This network entity may be the service provider itself, a network operator, or a dedicated deployment component operating at the network level. Whereas in the first two cases the code is sent to a specific, already chosen node, the last case (a network deployment component) is more generic, since the network component may have a view of the network topology and may know the status of the active nodes (CPU load, available bandwidth, and so forth); it might then deploy the service on given nodes depending on these capabilities. For instance, the ALAN [24] platform can be classified in the first category, because the actor in charge of the deployment must know onto which active node to deploy the service. In contrast, the FAIN platform can be classified in the second category, because a network deployment component is used [25]. The latter is in charge of choosing
Programmable Network Management and Services: Background
57
the appropriate active nodes depending on constraints such as the topology, the load of the nodes, and so on.

A second dimension of the service deployment design space distinguishes between centralized and distributed service deployment mechanisms. In this context, the design choice refers to the method of processing deployment information. We observe that active networks typically use a distributed, in-band approach at the network level. Greater variety can be found in the approaches for the node level. The chosen approach depends on the targeted services, as well as on performance and security considerations. The following sections discuss the approach to service deployment adopted by existing active network systems.

ANN

The Active Network Node [41] architecture makes use of active packets to deploy services. These packets carry a reference to a router plug-in, which contains the service logic. If not cached locally on a node, router plug-ins are fetched from a code server and installed on the node. Because it uses active packets, the service deployment mechanism of this system can be classified as a distributed, in-band approach at the network level. It is distributed because the service logic is installed on the active network nodes traversed by active packets; whether an active network node is traversed or not depends on the forwarding tables in the nodes, which are set up by distributed routing algorithms. It is in-band because the necessary information is contained in active packets, which also carry payload data. At the node level, an out-of-band mechanism is used: router plug-ins are fetched from a code server, using a different logical communication channel. As plug-ins may modify all fields of active packets, the choice between a centralized or distributed approach is left to the service designer.

ANTS

From a service deployment viewpoint, ANTS [40] is similar to ANN. It also uses active packets, which contain references to code groups, that is, the service logic.
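The node-level resolution step shared by ANN and ANTS (check a local cache for the referenced code, fetch it on a miss) can be sketched as follows. This is a toy illustration; the names are invented, and the dictionary stands in for a remote code server or a neighboring node's cache.

```python
# Toy sketch of node-level code resolution in ANN/ANTS-style systems.
# CODE_SERVER stands in for a remote code server or a neighbor's cache.

CODE_SERVER = {"ttl_filter": "<service logic>"}

class ActiveNode:
    def __init__(self):
        self.cache = {}       # locally cached service logic
        self.fetches = 0      # out-of-band fetches performed

    def process(self, packet):
        ref = packet["code_ref"]
        if ref not in self.cache:         # cache miss: fetch out-of-band
            self.cache[ref] = CODE_SERVER[ref]
            self.fetches += 1
        return f"ran {ref} on {packet['payload']}"

node = ActiveNode()
node.process({"code_ref": "ttl_filter", "payload": "p1"})
node.process({"code_ref": "ttl_filter", "payload": "p2"})  # served from cache
```

A second packet referencing the same code is served from the cache, so only one out-of-band fetch occurs for the whole stream.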
At the network level, the same comments as for ANN apply. At the node level, however, it is interesting to note that while using an out-of-band approach, ANTS efficiently mimics an in-band deployment mechanism. That is, if an active packet arrives at a node, the service logic is, if not cached locally, retrieved from the cache of the previously visited node. Therefore, active packets and service logic generally follow the same path. The out-of-band approach, however, allows for efficient use of network bandwidth, because the service logic follows only the first active packet of a stream. Subsequent active packets of the same stream will make use of the cached service logic. Therefore, it is not
necessary to transmit the service logic with each active packet. As in the case of ANN, the choice between a centralized and a distributed approach is left to the service designer.

PLANet/SwitchWare

SwitchWare [42] is an architecture that combines active packets with active extensions to define networked services. Active packets contain code that may call functions provided by active extensions, and active extensions may be dynamically loaded onto the active node. PLANet implements this architecture using a safe, resource-bounded scripting language (called PLAN) in the active packets, and a more expressive language to implement active extensions.

At the network level, service deployment is implemented in a distributed, in-band way, using active packets, similar to ANTS and ANN. An interesting service deployment characteristic of the SwitchWare architecture is found at the node level: both in-band and out-of-band service deployment are used, to combine the advantages of both worlds. Active packets contain code (in-band) that can be used as glue to combine services offered by dynamically deployed active extensions (out-of-band). As with ANN and ANTS, the content of active packets may be modified by active extensions; therefore, the choice between centralized and distributed deployment is left to the service designer.

HIGCS

The hierarchical iterative gather-compute-scatter procedure (HIGCS) [25] uses a distributed, out-of-band approach to service deployment at the network level. Similar to hierarchical routing schemes, nodes build clusters and elect a cluster leader to aggregate information (e.g., node capabilities) and to represent it to the upper hierarchy level. The information exchange is based on a specific control protocol (e.g., an extension to a hierarchical routing protocol), and is therefore an out-of-band approach.
It is distributed because cluster leaders process (aggregate and distribute) the information relevant to service deployment within their cluster. As HIGCS is intended for network-level deployment, a discussion of node-level service deployment is not applicable. Furthermore, HIGCS is targeted toward programmable networks and, as a consequence, does not deal with code deployment.
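One gather-compute-scatter round can be illustrated with a toy example. The data structures below are invented; in HIGCS the exchange runs over a control protocol between real nodes, not over in-process lists.

```python
# Toy sketch of one HIGCS-style hierarchy level; all names are hypothetical.

def gather(cluster):
    """Cluster leader gathers capability reports from its members."""
    return [member["free_cpu"] for member in cluster]

def compute(reports):
    """Leader aggregates the reports, here advertising the best capability."""
    return max(reports)

def aggregate_level(clusters):
    """Each leader represents its cluster to the upper hierarchy level."""
    return [compute(gather(cluster)) for cluster in clusters]

clusters = [
    [{"node": "a", "free_cpu": 0.1}, {"node": "b", "free_cpu": 0.6}],
    [{"node": "c", "free_cpu": 0.4}, {"node": "d", "free_cpu": 0.2}],
]

# The upper level sees one aggregated value per cluster; the scatter phase
# would then descend into the most capable cluster to place the service.
summary = aggregate_level(clusters)
best_cluster = summary.index(max(summary))
```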
SENCOMM

The Smart Environment for Network Control, Monitoring and Management [43] is a prototype system that:

• Responds to user, application, and system operator specialized requests for custom network monitoring information;
• Provides the capability to investigate dynamic and unplanned events by enabling flexible probes that can execute throughout the network and make local decisions;
• Facilitates the automation of routine and repetitive tasks, moving people outside the network control loop;
• Dynamically installs specialized monitoring and control mechanisms in targeted network elements.
NESTOR

As its name indicates, Network Self Management and Organization (NESTOR) [44, 45] is an architecture for network self-management and organization. The NESTOR system seeks to replace labor-intensive configuration management with one that is automated and software intensive. Configuration management is automated by policy rules that access and manipulate the respective network elements via a resource directory server (RDS). The RDS provides a uniform object-relationship model of network resources and represents consistency in terms of constraints; it supports atomicity and recovery of configuration-change transactions, along with mechanisms to ensure consistency through changes. The RDS pushes configuration changes to network elements using a layer of adapters that translate operations on its object-relationship model into actions on the respective elements. NESTOR has been implemented in two complementary versions, and is now being applied to automate several configuration management scenarios of increasing complexity, with encouraging results.

4.2 TRENDS AND EXPECTED EVOLUTION

4.2.1 Element and Network Management

In our context, network management [26, 46, 47] means deploying and coordinating resources in order to administer and operate active networks, with the objective of achieving the required quality of service, thus fulfilling the expectations of both the owners and the users of the network [27]. Methods to
predict [28] or rapidly detect failures [29] and alert the relevant personnel to take remedial action can substantially reduce user inconvenience. Based on experience in FAIN, there are two main areas of interest in the combined research area of network management and active networking:

• The need for a management system that can manage an active network;
• The role of an active network in reducing the load on any management system and, by doing so, improving the management process in a specific manner compared to nonactive approaches.
The general trend in network management is to achieve scalability in functionality. The research community constantly comes up with novel ideas for optimizing efficiency and functionality, only to fall short of providing global end-to-end management capabilities. The simple network management protocol (SNMP) is still the de facto management protocol on the Internet. In recent years, new management paradigm proposals have tried to overcome some of the key deficiencies of SNMP. The management by delegation (MbD) [30] paradigm proposes a distributed hierarchy of managers, which addresses the problem of the polling distance between the manager and the agent. MbD was expected to be a more scalable proposition than the SNMP model: if data analysis is conducted only at the management station (as is the case for SNMP), it requires data access and processing rates that do not scale up for large and complex networks (e.g., the Internet).

While the MbD approach is a trend away from a centralized approach, pushing intelligence from the management system to the managed element (using mobile agents for code mobility), the policy-based approach is a trend toward simplification of configuration by means of high-level rules. The introduction of automation of management tasks is the most significant change with respect to current implementations of management tools with existing technologies (e.g., SNMP). Mobile agent [31] and active networking [32] technologies have been extensively investigated over the past several years with this primary interest. The programmable networking paradigm offers the possibility of utilizing dedicated plug-ins for per-flow monitoring. Automation in the network environment [33] has been proposed many times during the past 15 or so years (for example, in routing at switches), and it is arguable that large-scale adoption and implementation is near. However, operators have been nervous about adopting extensive automation.
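The scaling argument behind MbD can be made concrete with a small sketch: instead of shipping every sample to the manager, a delegated check runs at the agent and reports only exceptions. The counter name and threshold below are invented for illustration.

```python
# Illustrative contrast between centralized polling and management by delegation.

agent_counters = {"ifInErrors": [3, 5, 2, 40, 4]}    # data held at the managed element

def centralized_poll(counters):
    """SNMP-style model: every raw sample crosses the network to the manager."""
    samples = list(counters["ifInErrors"])
    return samples, len(samples)          # values, messages transferred

def delegated_check(counters, threshold):
    """MbD: a delegated script analyzes locally and reports only alarms."""
    alarms = [v for v in counters["ifInErrors"] if v > threshold]
    return alarms, len(alarms)            # only exceptions cross the network

raw, polled_msgs = centralized_poll(agent_counters)
alarms, mbd_msgs = delegated_check(agent_counters, threshold=10)
```

Here five messages cross the network under polling, but only one under delegation, which is the essence of the scalability claim.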
To address interoperability, middleware technologies such as CORBA and Java remote method invocation (RMI) are used for interdomain management [34].
4.2.2 Active Service Provisioning

A number of ongoing standardization activities might have an impact on service deployment within the active networks field. Among these, the Open Service Gateway Initiative (OSGi) [35], although slightly different from active networks, may be a source of inspiration for the deployment of active networks [36]. Even though OSGi focuses on gateways, such as those used in home networks (both to manage the home network and to interact with the broadband network), the mechanism it has designed is component based: a service is seen as a set of components offering interfaces, and deployment is performed using this "bundle." Another interesting standardization activity is that of the Object Management Group (OMG) [37], which standardizes the Common Object Request Broker Architecture (CORBA) technology and has designed the CORBA component model (CCM) [38], which describes the definition of a component and the interfaces it offers to others. We may expect that component-based deployment is the next step in the deployment process for active networks as well, and that it can draw ideas from these earlier efforts.

Network operators consider active packet-based networks to be unsafe and insecure, because they do not have sufficient control over the code to be deployed. Therefore, a component-based approach, as proposed by FAIN and others, is necessary to convince operators of the technology [39]. This opens a field for standardization of component behavior; that is, component names should have well-defined semantics. In the FAIN approach, this information has been defined in XML to specify the characteristics of a service (type of execution environment, code location, service provider, network topology constraints, and so forth). Chapters 16 and 17 detail examples of active service provisioning.

References

[1] Introduction to the ABone, http://www.isi.edu/abone/intro.html.
[2] ABLE: The Active Bell Labs Engine, http://www.cs.bell-labs.com/who/ABLE/.
[3] Bush, S. F., and Kulkarni, A., Active Networks and Active Network Management: A Proactive Management Framework, Norwell, MA: Kluwer Academic/Plenum Publishers, 2001.
[4] Schwartz, B., et al., "Smart Packets for Active Networks," OpenArch '99, March 1999.
[5] Brunner, M., Plattner, B., and Stadler, R., "Service Creation and Management in Active Telecom Environments," Communications of the ACM, March 2001.
[6] Galtier, V., et al., "Prediction and Controlling Resource Usage in a Heterogeneous Active Network," MILCOM 2001, October 2001.
[7] Kornblum, J., Raz, D., and Shavitt, Y., "The Active Process Interaction with Its Environment," IWAN 2000, October 2000.
[8] Internet Engineering Task Force, http://www.ietf.org.
[9] Distributed Management Task Force, http://www.dmtf.org.
[10] Common Information Model Standards, http://www.dmtf.org/standards/standard_cim.php.
[11] Moore, B., "Policy Core Information Model (PCIM) Extensions," RFC 3460, January 2003.
[12] Resource Allocation Protocol IETF WG, http://www.ietf.org/html.charters/rap-charter.html.
[13] Damianou, N., et al., "The Ponder Specification Language," Workshop on Policies for Distributed Systems and Networks (Policy 2001), HP Labs Bristol, January 29-31, 2001, http://www.doc.ic.ac.uk/~mss/Papers/Ponder-Policy01V5.pdf.
[14] The Jasmin Project, http://www.ibr.cs.tu-bs.de/projects/jasmin/policy.html.
[15] Seraphim Project homepage, Seraphim: Building Dynamic Interoperable Security Architecture for Active Networks, http://choices.cs.uiuc.edu/Security/seraphim/.
[16] Active Network DistRibuted Open Infrastructure Development (ANDROID), http://www.cs.ucl.ac.uk/research/android/.
[17] Application-Level Active Networks, http://dmir.it.uts.edu.au/projects/alan/.
[18] Kanada, Y., "Dynamically Extensible Policy Server and Agent," Proc. Policies for Distributed Systems and Networks, 2002, pp. 236-239.
[19] Fonseca, M., Agoulmine, N., and Cherkaoui, O., "Active Networks as a Flexible Approach to Deploy QoS Policy-Based Management," http://citeseer.nj.nec.com/483138.html.
[20] Kato, K., and Shiba, S., "Designing Policy Networking System Using Active Networks," Second International Working Conference on Active Networks (IWAN 2000), Tokyo, Japan, October 2000.
[21] Sloman, M., and Lupu, E., "Policy Specification for Programmable Networks," International Working Conference on Active Networks (IWAN '99), Berlin, Germany, June-July 1999.
[22] Bossardt, M., et al., "Integrated Service Deployment for Active Networks," Proc. Fourth Annual International Working Conference on Active Networks (IWAN 2002), Zürich, Switzerland; also in Lecture Notes in Computer Science 2546, Springer Verlag, December 2002.
[23] Hull, S., Content Delivery Networks: Web Switching for Security, Availability, and Speed, New York: McGraw-Hill, February 2002.
[24] Fry, M., and Ghosh, A., "Application-Level Active Networking," Computer Networks, Vol. 31, No. 7, 1999, pp. 655-667.
[25] Mathieu, B., et al., "Deployment of Services into Active Networks," Proc. WTC-ISS 2002, Paris, France, September 2002.
[26] Haas, R., Droz, P., and Stiller, B., "Distributed Service Deployment over Programmable Networks," DSOM 2001, France, 2001.
[27] Sloman, M. (ed.), Network and Distributed Systems Management, Reading, MA: Addison-Wesley, 1994.
[28] Galtier, V., et al., "Prediction and Controlling Resource Usage in a Heterogeneous Active Network," MILCOM 2001, McLean, VA, October 2001.
[29] Hood, C. S., and Ji, C., "Proactive Network Fault Detection," INFOCOM '97, Kobe, April 1997.
[30] Goldszmidt, G., and Yemini, Y., "Distributed Management by Delegation," Fifteenth International Conf. on Distributed Computing Systems, Vancouver, June 1995.
[31] Sugauchi, K., et al., "Efficiency Evaluation of Mobile Agent-Based Network Management System," Lecture Notes in Computer Science, April 1999, p. 527.
[32] Kawamura, R., and Stadler, R., "Active Distributed Management for IP Networks," IEEE Communications, April 2000, pp. 114-120.
[33] Greenwood, D., and Gavalas, D., "Using Active Processes as the Basis for an Integrated Distributed Network Management Architecture," First International Working Conference on Active Networks (IWAN '99), Berlin, June 1999.
[34] Interdomain Management: Specification Translation and Interaction Translation, Technical Standard C802, The Open Group, January 2000.
[35] Open Service Gateway Initiative, http://www.osgi.org.
[36] OSGi Service Platform, Release 2, October 2002.
[37] Object Management Group, http://www.omg.org.
[38] CORBA Components, full specification v3.0, Document formal/02-06-65.
[39] Solarski, M., Bossardt, M., and Becker, T., "Component Based Deployment and Management of Services in Active Networks," Proc. Fourth Annual International Working Conference on Active Networks (IWAN 2002), Zürich, Switzerland; also in Lecture Notes in Computer Science 2546, Springer Verlag, December 2002.
[40] Wetherall, D., et al., "ANTS: A Toolkit for Building and Dynamically Deploying Network Protocols," Proc. IEEE OPENARCH '98, April 1998.
[41] Decasper, D., et al., "A Scalable, High Performance Active Network Node," IEEE Network, January/February 1999.
[42] Alexander, D. S., et al., "The SwitchWare Active Network Architecture," IEEE Network Special Issue on Active and Controllable Networks, Vol. 12, No. 3, 1998, pp. 29-36, http://www.cis.upenn.edu/~switchware/papers/switchware.ps.
[43] SENCOMM Project, http://www.ir.bbn.com/projects/sencomm/sencomm-index.html.
[44] NESTOR Project, http://www1.cs.columbia.edu/dcc/nestor.
[45] Yemini, Y., Konstantinou, A., and Florissi, D., "NESTOR: An Architecture for NEtwork SelfmanagemenT and Organization," IEEE Journal on Selected Areas in Communications, Vol. 18, No. 5, 2000.
[46] Smart Environment for Network Control, Monitoring and Management, http://www.ir.bbn.com/projects/sencomm/sencomm-index.html.
[47] Stevenson, D. W., "Network Management: What It Is and What It Isn't," white paper, April 1995.
Chapter 5

SwitchWare Active Platform

5.1 INTRODUCTION

As discussed in many places in this book, active networks offer some degree of programmability to users and network administrators. Other chapters cover a variety of issues, from security to the realization of various services in the FAIN [31] system. As one of the earliest active network architectures, the SwitchWare project [5], designed and implemented at the University of Pennsylvania, charted a variety of new directions and made a variety of research discoveries. The work described in this chapter was supported by the Defense Advanced Research Projects Agency (DARPA).1,2

It has now been several years since the SwitchWare project finished, and it is closing in on a decade since the work was first conceived. This chapter focuses on the realization of SwitchWare as it moved from concept to architecture to working system. We feel, at this stage, that a set of significant lessons was learned from this realization; the great majority of these are positive.

The next section reminds the reader of the motivations for SwitchWare and, more generally, active networking. Sadly, many of these problems are still with us; perhaps work inspired by active networks, such as overlay networks, will make further progress. Section 5.3 discusses work that preceded SwitchWare, and shows how this basis illustrated what was possible, but also how the passing of time had offered new possibilities not available to early researchers. Section 5.4 discusses the unfortunate distinction made [76] between the switch model and the capsule model, which obscured many of the common architectural issues. Section 5.5 discusses our first attempt to bring it all together, Scott Alexander's active bridge [8], and the systems that evolved from it, such as ALIEN [7]. Section 5.6 looks at packet languages, and Section 5.7 covers some of the major SwitchWare accomplishments. Section 5.8 concludes the chapter, including our assessment of the mistakes we made in the project.

1. CIS Department, University of Pennsylvania's work was supported by DARPA under contracts #N66001-96-C-852 and #DABT63-95-C-0073, and the National Science Foundation under grants #ANI-9906855, #ANI98-13875, #ANI00-82386, and #CSE-0081360.
2. ECE Department, The University of Texas at Austin's work was supported by DARPA under contracts #N66001-96-C-852 and #DABT63-95-C-0073, and the National Science Foundation under CAREER Grant #CCR-9702107 and #ANI-0081360.

5.2 WHY SWITCHWARE?

The mid-1990s were an auspicious time for networking researchers. After several decades of research, the Internet was coming into its own as a result of the World Wide Web that it supported. The Gigabit Testbed3 project of DARPA [9] and the National Science Foundation (NSF) had stimulated the technologies required for the broadband Internet, and excitement was in the air. While some of this was because of the marketplace's intense new interest in networking, some of us also saw the Internet's commercialization as auguring a new era for networking research. First, we had had a big success and influenced the world. Second, since this had happened, we needed to identify new problems demanding research solutions. A major challenge that presented itself was the slow rate of change of the network itself, and, in work with David Feldmeier (then at Bell Communications Research [1]), we had convinced ourselves that the problem was hidden deep within the Internet architecture, in its need for standardization.

This is worth reviewing, and easy to understand. The Internet is an example of a virtual infrastructure: all participating devices and systems have to use and accept a standard packet format and addressing convention, which are "overlaid" on disparate components. When the standard is well chosen, it can be supported by a plethora of systems, enabling the interoperability intended for the Internet.
However, this very requirement for standardization demands that any changes in the network layer be standardized, and standardization is a political process subject to all the issues of scaling and competing interests one might expect. That the standards involved are quite low level, specifying bits and bytes, only exacerbates the problem. The damping effect of standardization, then, slows the introduction of new services, which bring new uses, new users, and new sources of revenue for providers.

Smith had devised a network element model, store-translate-and-forward (STF) [69], that augmented the traditional packet-switching model with an additional operation: executing a translator on the packet. Such a design generalized packet switching: a "null" translator would result in a very simple packet-forwarding device (e.g., a buffered repeater), while adding more functionality (such as state for drop lists and spanning tree protocols to prevent loops) could enable devices such as self-learning bridges [67]; further complexity in the state and state machines might result in devices such as routers and firewalls [23]. If one made the translator programmable, the device could be configured to assume any of these roles or, as with any general-purpose system, many more roles unforeseen by the designers. It was the latter possibility that attracted us to programmable network elements as a way to attack the network evolution problem. We saw programmable systems as potentially offering an application program interface that might be exploited by vendors or even directly by users. Our earlier experience with "protocol boosters" [28, 49, 50] had given us considerable exposure to the technical literature, but operating under the belief that an hour in the library can save a week in the laboratory, we began to read and think through what had been done in the past on the way toward our realization.

3. See http://www.cnri.reston.va.us/gigafr/index.html.

5.3 PRECEDENTS AND POSSIBILITIES

"An hour in the library is worth a month in the laboratory."

An investment that we made early was to gain an understanding of what had already been done, since the lessons learned by others had been learned at substantial cost, which we could avoid if we understood them. There was very little literature on programmable networks (or an infrastructure to support them), since this approach was largely eschewed by the computer networking community in which we were working. The telephony industry, on the other hand, had seen considerable advantages from programmability and flexibility, exemplified by the Advanced Intelligent Network (AIN) architecture [17]. In this architecture, within the constraints of the call model for telephony, there is considerable programmability, but it is inaccessible to users.
It does lead to enhanced services; for example, the integration of "800" number service with geographic information systems allows a call for a pizza to be routed to the local shop. However, the call model requires that the configuration and flexibility sit in the "front end," and AIN is limited by this requirement. It was clear that a different model was needed, and we were motivated by the development of the store-translate-and-forward model discussed in our paper [69] reflecting on active networking as a research agenda. Using STF to frame the research, we asked what technologies were needed to move programmable networking from science fiction to science fact. Of particular interest to us were advances in operating systems and programming languages. The former were interesting because at least a few researchers, most notably Brian Bershad at the University of Washington, had noted the rather static nature of operating system services, and had addressed this by studying architectures to
safely load code into an operating system [18]. The SPIN4 operating system used Modula-3, an example of the "safe" programming language technology that we found in the programming language community. Although strongly typed, statically checked programming languages had existed for some time, the simple type systems of languages like Modula-3 were being augmented by much more sophisticated and flexible type systems in languages like those of the ML [52] family. Programming languages [33] had evolved considerably, in particular in the area of safety, where strong type checking and garbage collection contributed to a programming model strong enough that users of some programming languages such as ML observed that if the program compiles (i.e., passes all of the strong type checking), it almost always works. If only this were true for C!

This strong type-checking idea turns out to be very clever software engineering, and is worth addressing briefly here. It has been a long-standing observation among practitioners that proving programs correct is very unlikely to become common practice. This observation is based on the traditional requirement that a specification of the program be run through an automatic system, and that the specification accurately reflect the behavior of the system under study. This accuracy requirement, unfortunately, has the consequence that the programmer (or somebody else) must write two versions of the program: the specification, and the one that will be compiled and executed. If the specification could be compiled, it would be a different story. Type systems are basically general specifications that apply to all programs; furthermore, they can be checked by algorithms that do not require human guidance, unlike general-purpose theorem provers. We believed that the type-checking approach would allow us to check security and safety properties once, before the network program was run, rather than many times while it executed.
The difficulty, a major technical challenge, was the development of one or more "type theorems" that could be added to the complement used to examine programs in the standard language. One could imagine (and at one point we did) that one or more "security" type theorems could be developed to augment ML, and that programs, after compiling, could be declared "secure," at least to the degree that the types captured the relevant properties.

Another programming language influence was that of mobile code: programs or program fragments that are executed remotely. Java, which was becoming popular at the beginning of the project, is an obvious example of this approach (as well as an example of a strongly typed language), but there were other examples, including the Caml [47] dialect of ML. Furthermore, Java had sparked a significant amount of research into safe mobile code, and we hoped to ride the coattails of this work. In addition, networking had begun to interest the programming language community [19], so that it was likely there would be considerable collaboration.

4. SPIN is a dynamically extensible operating system that allows user applications to safely change the operating system's interface and implementation.

When combined with the research from SPIN, mobile code approaches offered one recipe for active networking. In addition, Bryan Lyles, then of Xerox PARC, pointed us to the Softnet [84] work done by Forchheimer and Zander at the University of Linkoping. The importance of this early work (1979!) cannot be overstated: it provided a clear model for both a programmable node architecture and a custom programming environment that allowed its packet radio subsystem [53] to be modified on the fly. We are all indebted to these researchers, and in some sense have come full circle, as many of the most exciting applications seem to be mobile applications. There was also early work done by David Wall [80], then of Pennsylvania State University, which looked at network/distributed programming in a different way: one where messages were active agents, anticipating mobile code systems and active network approaches such as "active packets."

Finally, some research in operating systems was focused on extensible operating systems. Two basic directions were appearing. First, and perhaps most interesting to us, were the on-the-fly enhancements of Bershad's aforementioned SPIN operating system. SPIN took a basic operating system and enhanced it with the ability to dynamically add functionality using new code loaded into the operating system. An important characteristic of the system was that a safe programming language, Modula-3, was used to write the enhancements, so that they could be presumed not to bring the rest of the operating system down. The second direction being explored was that of vertically structured operating systems: operating systems that provide a very lightweight kernel focused solely on protection of hardware resources and some multiplexing support (such as scheduling), leaving the rest of the definition of the operating system's functions to applications, which choose their desired support from libraries.
For example, MIT's Exokernel [26] and the Cambridge Nemesis [48] operating system shared this basic design paradigm. What all three designs cited demonstrated was both a desire and a methodology for user-defined and user-extensible functionality.

In the network architecture space, Ritchie's streams [65] architecture demonstrated that a protocol stack could be extended on the fly. Streams used the model of a dynamic stack to which modules could be pushed, and from which they could be popped. This allowed applications an unprecedented degree of control over their network infrastructure. The Arizona x-Kernel [41], while nominally an operating system, provided a novel architecture for constructing protocol graphs from protocol elements, allowing users to create paths through the graph to create their desired protocol. The protocols created could be more sophisticated than the streams stack, but lacked the dynamics of the simpler stack structure, as the paths were maintained once created. Protocol boosters [28] combined the best features of both of these methods, the dynamics of streams and the flexibility of x-Kernel, while providing a new degree of dynamics in adaptation by changing the protocol structure itself on the fly.
Programmable Networks for IP Service Deployment
Our experience implementing [49] protocol boosters, and the store-translate-and-forward network model [69] led us to a "software switch" model,5 with which we believed that network evolution could be significantly accelerated. Our initial conceptualization drew upon some of the technologies discussed previously to attempt to realize an infrastructure that could support protocol boosters as well as many other networking applications. In particular, we hoped to use advanced programming language technology as a source of significant leverage against the safety and security issues so obviously associated with active networking [3].

Our collaborator Carl Gunter proposed a three-layer architecture (later revised), consisting of three languages. This was a very nice logical split of the issues, and it is interesting to observe what actually arose as opposed to what we initially modeled. The three languages were a switchlet language, a wire language, and an infrastructure language. The switchlet language was intended for users, and our intention from the very start was that this language would use the formal semantics ideas associated with modern programming languages to restrict the behavior of network programs to those deemed safe. Our design target, referring to presentations from that period, was a verifiable subset of ML we were thinking of as "ML—." The wire language provided a language for communicating; for example, for enforcing formal semantics across boundaries. To preserve the integrity of the strong type checking, we intended to use encryption, resulting in an "encrypted verified intermediate language."6 The infrastructure language was intended to define a virtual machine, so that the formal semantics of the switchlet language would be enforced all the way to the hardware; this might for example be the C language that the language run-time system was written in.
Our goals initially were far more ambitious; we intended to build an ML runtime system that provided all the necessary features for managing hardware resources and multiplexing. These ambitions were reflected in our internal name, “ML++,” reflecting the addition of the sophisticated run-time support. The utility of this model was that it forced us to identify what had to be implemented where in the system. Pragmatically, we can view this architecture as foreshadowing the active packet plus security architecture system, which was written in the Caml dialect of ML. The run-time system evolved somewhat, but can be best seen as an ML-implemented infrastructure using Linux for low-level resource management. Thus perhaps the correct retrospective view, as we will see below, is that we replaced ML++ with ML+Linux.
5. See http://www.cis.upenn.edu/~jms/SoftSwitch.html.
6. It is indeed fortunate that we did not rigorously follow this model, if for no other reason than to avoid the appalling acronym.
5.4 SWITCH VERSUS CAPSULE: A MISLEADING DICHOTOMY

An initial survey of active network research [76] that one of us was involved in described active network research as falling into roughly two camps. One, the capsule (later "active packet") camp, took the stance that all packets were programs. Thus, we moved from passive packets of data, preceded by headers interpreted by devices such as bridges and routers, to packets that specified some significant degree of their processing. The second camp focused on a switch model, researching the nature of the systems necessary to interpret downloaded code, including that which determined the role of the system. The focus here was thus the "active extensions" designed to enhance the network element. Rather than the holistic view maturity has now led us to take, the survey misleadingly characterized the camps and approaches as more aggressive (i.e., capsules) or less aggressive (i.e., switch), based on little else than opinion and greater familiarity with switch concepts. This continuum gives one the false impression that so-called capsule research was more ambitious than work in the environments that executed the capsules. This was not as true as it appeared to the survey authors at the time.

First, mobile code had been researched, beginning with either the Wall [80] work, or even earlier if "process migration" [27] can be viewed as mobile code, and a variety of mobile code environments had been devised and studied (note that we need mobile code for both active packets and active extensions). Second, the application of mobile code to networking was the specialization of functions and restriction of functionality necessary for flexibility, security, performance, and extensibility; exactly the same issues that are the focus of the switches. A far better model is to view the central goal of an active network as allowing a safe form of programmability to some population of administrators or users.
Then the portions of the system that run on the nodes for sequences of packets can be viewed as active extensions, while the portions that are executed on a per-packet basis can be viewed as active packets. This is a much cleaner representation, and is the one that we are now using to think about active and programmable networks. This model has the same basic dichotomy, but captures the subtlety of real systems. It is also clear that, at the limit, these models are differentiated more on the basis of state than they are on the basis of functionality. For example, an active extension could be contained in a single packet and used only once. Likewise, an active packet might be held in a cache indefinitely, providing services to other packets if available. The continuum seems to us at this point to be a question of state and persistence, and the design assumptions associated with different choices in the design space. An example might be the willingness to accept heavyweight security authorizations to execute code that is executing for a long time, versus a desire to avoid such overheads in an active packet system.

In any case, we began our experimental investigation with a system to support active extensions, and this line of work is discussed in the next section.
Programmable Networks for IP Service Deployment
5.5 IT STARTS WITH THE NODE: ACTIVE BRIDGING, ALIEN, SANE, SQOSH, AND RCANE

Considerable skepticism of active networking existed in the Internet community, for which the "end-to-end argument" represented an architectural principle not to be violated. The end-to-end argument is a well laid out argument for late binding of functionality to network locations, as under the assumptions of the Internet architects the universal-services Internet should not commit to in-network optimizations that benefit any particular subset of applications. Nonetheless, we had begun implementation of an active router, with the goal of building an extensible network platform. It was implemented in Caml, which ran on the Linux system preferred by our graduate students, and which had support for mobility in the form of a byte-code representation, much like Java. This choice represented our commitment to using modern programming languages.

A key question was what to implement first, and, with a network element perspective, there were many choices, ranging from bridges and routers to firewalls. Since a central goal was showing advantages of on-the-fly extensibility, D. Scott Alexander proposed a LAN bridge [67] as a prototyping experiment, and, after some discussion, we convinced ourselves that this was not only a reasonable approach, but also a good one. Bridges were well enough understood to be convincing to the networking community, and complex enough that a nice extensibility example could be developed. In addition, it was easy to see how to measure performance.

A bridge interconnects two local area networks transparently. While this is not the place to discuss complete bridge functionality, it is easy to see how to build a transparent bridge in steps. First, one would build a buffered repeater that would copy every frame received on a network interface to all other network interfaces.
Then a self-learning feature would be deployed to record source addresses of frames, and to send packets destined to that source out only on that network interface. Finally, since bridges are transparent (meaning they do not modify packets, and packets thus cannot be deleted using techniques such as "hop counts"), and packets might collapse the network by circulating forever, a spanning tree protocol (STP) is used to preserve an acyclic topology.

Beyond this, since it was clear that extensions could be buggy, we wanted a simple demonstration of failure recovery, so we built two STP implementations and impaired one of them, adding a recovery module to detect impairments and keep the bridge operating in spite of the failed extension. Our vision at this point was rapid recovery from software failures, and this system was implemented and reported on in ACM's SIGCOMM conference [8]. The performance reported, even at that point, was quite competitive and adequate for our lab environment, providing some evidence that implementation of functionality in software written in a modern programming language was not entirely quixotic.
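The staged construction described above (repeater, then self-learning, then spanning tree) is easy to visualize for the self-learning step. The actual active bridge was written in Caml; the following is merely an illustrative Python sketch of the learning logic, with all names ours:

```python
class LearningBridge:
    """Minimal model of a self-learning transparent bridge.

    It starts out behaving like a buffered repeater (flooding every
    frame to all other ports) and learns which port each source
    address lives on, so later frames to that address leave on a
    single port only.
    """

    def __init__(self, ports):
        self.ports = ports        # e.g. ["eth0", "eth1", "eth2"]
        self.table = {}           # learned: source address -> port

    def receive(self, src, dst, in_port):
        """Return the list of ports the frame should be sent out on."""
        self.table[src] = in_port          # learning step
        out = self.table.get(dst)
        if out is None:
            # Unknown destination: fall back to repeater behavior (flood).
            return [p for p in self.ports if p != in_port]
        # Known destination: drop if it is on the arrival LAN, else
        # forward on exactly one port.
        return [] if out == in_port else [out]


bridge = LearningBridge(["eth0", "eth1", "eth2"])
print(bridge.receive("A", "B", "eth0"))   # B unknown: flood
print(bridge.receive("B", "A", "eth1"))   # A was learned: one port
```

The spanning tree and failure-recovery machinery sit on top of exactly this kind of forwarding core; the learned table is what the recovery module must keep consistent when an extension is replaced.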
This system also provided the beginnings of a platform for essentially all of our later work, both in active packets and active extensions. We will discuss the active extensions work in this section, and active packets in the next. The ALIEN [7, 10, 11] system was the culmination of our experimental investigation of the node architecture issues that would have to be addressed to build flexible, safe, and well-performing active extensions. ALIEN used the active bridge as a basis, but was restructured to reflect a new understanding of node architecture we gained through the experimental implementation. In particular, we realized that the two properties of privilege and mutability could be usefully separated and combined in different ways. Privilege reflected the notion that access to certain resources, such as a raw Ethernet socket, should not be available to arbitrary code on the system. Mutability reflected the notion that some code would be changeable,7 and, in an ideal system, this would be the overwhelming majority of the code comprising the node architecture. The system was separated into two halves by a privilege boundary: privileged and unprivileged. Our experience with operating systems such as UNIX made us realize that two levels of protection were both necessary and sufficient for providing protected operations. The privileged portion was further separated into an immutable part (the loader) and a mutable part, which was far larger, called the “core switchlet,” reflecting in the name our initial goal of a “software switch for active networks.” The unprivileged portion was entirely mutable, as an unprivileged immutable portion of the node architecture made no sense to us. The role of the loader was to load the core switchlet, and that was it. That modeled very well the typical bootstrap process by which an operating system is loaded onto a hardware platform for execution, and in turn runs applications requested by users. 
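The loader/core-switchlet split can be caricatured in a few lines. ALIEN itself was written in Caml; this hypothetical Python sketch (the names `loader`, `CoreSwitchlet`, and the two namespace dictionaries are ours, not ALIEN's API) shows only the shape of the idea: a tiny immutable privileged step installs a mutable privileged core, which decides what the unprivileged namespace exposes.

```python
# Illustrative sketch of the ALIEN-style privilege/mutability split.
PRIVILEGED = {}     # names only the core may bind (raw devices, etc.)
UNPRIVILEGED = {}   # names exported to loaded, untrusted code


def loader(core_module):
    """The sole immutable, privileged step: install the core switchlet,
    much as a bootstrap loads an operating system."""
    core_module.install(PRIVILEGED, UNPRIVILEGED)


class CoreSwitchlet:
    """Mutable but privileged: wraps protected resources and chooses
    which mediated entry points appear in the unprivileged namespace."""

    def install(self, privileged, unprivileged):
        privileged["raw_send"] = lambda pkt: f"sent {pkt!r} on raw socket"
        # Export only a mediated version (here, a hypothetical MTU clamp).
        unprivileged["send"] = lambda pkt: privileged["raw_send"](pkt[:1500])


def run_unprivileged(code, namespace):
    """Loaded user code sees *only* the unprivileged namespace."""
    return code(namespace)


loader(CoreSwitchlet())
print(run_unprivileged(lambda ns: ns["send"](b"hello"), UNPRIVILEGED))
```

Everything above the privilege boundary, and even the core itself, can be swapped out; only `loader` is fixed, mirroring the text's goal of maximizing the mutable portion of the node.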
As with such a bootstrap, the loader was extremely simple and compact, keeping with our goals of maximizing the extent of the system that could be changed. The core switchlet comprised functions that needed access to protected system elements, and the issue of applications' expectations was addressed with an unprivileged set of Caml library functions that would reasonably be expected to be present on every node. In the Caml implementation, access control was achieved by restricting the namespaces accessible to loaded code. Namespace restriction was accomplished using the "module-thinning" feature of OCaml, the Caml [47] dialect we used in our system. This idea offered considerable benefit, and a reasonably accurate analogy can be drawn to database "views" provided to different applications—an application was provided a namespace that represented its rights, and, therefore, its privileges. Since Caml is a higher level programming language, with no pointers, naming is sufficient to protect the system. Now that the internal architecture of the system had been worked out, it was important to extend the system in ways that would enable safe access to its services. The language-based security architecture
7. This separation was also made in the design of PLAN; see Section 5.6.
required attention to two major details. First, since the node architecture relied on the language system to protect it, the integrity of the language protection mechanisms had to be ensured. Second, since the namespace enforcement was inherently local, a scheme was needed to extend the namespace protection scheme to remote nodes so that services could be invoked by code loaded from these nodes, with an interesting special case being active packets (but more on this in the next section).

The Secure Active Network Environment [4, 6, 11] addressed both of these problems in ways that turned out to have impact well beyond solving ALIEN's immediate problems. The AEGIS secure bootstrap [14, 16], developed by Bill Arbaugh [16], guaranteed node integrity and thus that the ALIEN node protection mechanisms, if they were correct, were in place when the node was initialized. The remote namespace extension system was developed by Angelos Keromytis and Bill Arbaugh, and relied on cryptographically protected credentials to authorize access to namespaces by remote nodes.

AEGIS [14] is described in Chapter 3 so we will not go into it in detail, but try to put it in the context of SwitchWare. We intended SwitchWare to automate many of the management problems that beset network operators, including long-term unattended operation (as in remote switching centers or even in modern ISPs or colocation facilities). To achieve this, and to rest our security guarantees on the foundation of a programming environment, we had to ensure two things: first, that the programming environment was the one we expected, and second, that we had a way to recover where it was not. Arbaugh's Chained Layered Integrity Checking (CLIC) [16] architecture operated by defining a base case which was assumed trustworthy. (Ideally, this would be a hardware component, but we used the Level 1 BIOS to ease and accelerate prototyping.
Interestingly, such chips are now being developed as part of the TCPA,8 which AEGIS anticipated.) This foundational element next checked, using a cryptographic checksum, the integrity of any code it would execute, before passing control to it. This meant that either the code that was executed was unchanged, or that the checksum had been forged, which was extremely unlikely. If the checksum failed, we needed to recover automatically; definition of a trusted third party from which code could be retrieved enabled that. Thus, with Arbaugh's algorithm, one could bootstrap from the lowest operational layers of the system to the active networking environment, recovering from failures and ensuring that the namespace protection system operating was the one that the node owner expected.

The remote namespace extension system used the notion of credentials. A credential is simply a set of strings with semantics defined by the applications using them. One example might be strings that comprise a namespace defined by a node for access by another node. By appropriate applications of digital signatures, it becomes possible for a node to check authorization for use of a name by
8. See http://www.tcpa.org.
checking the signatures, and the nodes are then able to delegate rights in the style of capabilities. A key virtue to note here is that like capabilities, credentials authorize access directly [20], with no notion of identity. On a network node with millions of potential users [45] (consider the number of "identities" sending packets through core Internet routers), this is the only way to scale authorization. The nodes mutually authenticate themselves using a station-to-station (STS) protocol. The important discovery from this implementation was that the namespace security notion developed in ALIEN was readily extended from node to network.

We were further able to exploit the remote namespace extension system in extending the ALIEN nodes to provide more facilities for resource control. Our original ALIEN system was implemented on Linux, which offered a familiar and full-featured programming environment, including the Caml dialect of ML. While an excellent environment for programming and development, Linux and most host operating systems are optimized for maximizing throughput and minimizing response time in some complex trade-off depending upon workload. Yet some network applications, particularly those that transport multimedia data such as voice or video, have a strong desire for more or less constant interarrival delay. An ideal model for such applications is one that partitions their networking resources into streams or flows, each with a specified quality of service. SQoSH [10] used the access control protections to protect interfaces to resource controls as if they were privileged features of the system. To provide resource controls, a new operating system architecture called Piglet [57, 58] was used to provide managed traffic. In a dual-processor configuration, one processor would run Piglet and manage network devices, while the other would focus more on traditional tasks. Piglet was meant to foreshadow devices such as programmable line cards.
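The credential idea described above — a set of names plus a signature, granting access directly with no notion of identity — can be sketched briefly. The real system used public-key digital signatures between mutually authenticated nodes; this Python sketch substitutes a shared-key HMAC purely for brevity, and its function names are ours:

```python
import hashlib
import hmac


def issue_credential(key, names):
    """A credential: a set of names plus a signature over them.
    Whoever presents it gets exactly those names; identity never enters."""
    msg = ",".join(sorted(names)).encode()
    return {"names": set(names),
            "sig": hmac.new(key, msg, hashlib.sha256).hexdigest()}


def authorize(key, credential, requested_name):
    """Grant access iff the signature verifies and the name is listed."""
    msg = ",".join(sorted(credential["names"])).encode()
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(credential["sig"], expected)
            and requested_name in credential["names"])


node_key = b"node-secret"
cred = issue_credential(node_key, {"send", "route_lookup"})
print(authorize(node_key, cred, "send"))         # listed and signed
print(authorize(node_key, cred, "raw_socket"))   # name was never granted
```

Because the check is over the name set itself, a holder cannot quietly widen a credential: adding a name invalidates the signature, which is what makes capability-style delegation safe to pass from node to node.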
Finally, we implemented, in collaboration with Paul Menage of the University of Cambridge, the Resource Controlled Active Network Environment [11, 51]. RCANE used a combination of the Nemesis operating system and run-time modifications, such as heap-space resource bounds, to isolate applications from each other. The advantage of running on Nemesis or a Nemesis-like system for the SwitchWare architecture is that Nemesis is far more suited to the many needs of a switch than the host-oriented Linux operating system, particularly in terms of resource management, resource bounds, and traffic-shaping sorts of activities. Interestingly, a variety of new systems (for example, the XenoServer system and the Xen virtual machine architecture, both based on the ability to run untrusted code, the goal of Menage's Ph.D. work) are now beginning to emerge and to be used in large-scale overlay networks.
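The heap-space resource bounds mentioned above are the simplest of these isolation mechanisms to illustrate. RCANE enforced them inside the language run-time; the following is only a toy Python accounting sketch of the principle (per-application quotas, so a runaway program exhausts its own budget and no one else's), with hypothetical names:

```python
class HeapQuotaExceeded(Exception):
    pass


class BoundedHeap:
    """Per-application heap accounting in the spirit of RCANE's
    heap-space resource bounds."""

    def __init__(self, quota_bytes):
        self.quota = quota_bytes
        self.used = 0

    def alloc(self, nbytes):
        # Refuse the allocation rather than let one application starve
        # the others; its own budget is the only thing it can exhaust.
        if self.used + nbytes > self.quota:
            raise HeapQuotaExceeded(f"{self.used + nbytes} > {self.quota}")
        self.used += nbytes
        return bytearray(nbytes)

    def free(self, buf):
        self.used -= len(buf)


app_a, app_b = BoundedHeap(4096), BoundedHeap(4096)
app_a.alloc(3000)
try:
    app_a.alloc(2000)            # exceeds A's quota...
except HeapQuotaExceeded:
    pass
app_b.alloc(2000)                # ...while B is unaffected
print(app_a.used, app_b.used)    # 3000 2000
```

The real system additionally bounded CPU and network bandwidth per application, which is why a Nemesis-style vertically structured kernel, with its per-flow scheduling, was the natural host.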
5.6 ACTIVE PACKET LANGUAGES: PLAN, SNAP, AND CAML

After completion of the active bridge (but before the beginning of work on ALIEN itself), several of us (Nettles and Gunter) began to consider the idea of active packet programming. Our first instinct was to avoid designing a new programming language, but rather to take an existing and well-understood language and modify it to be suitable for packet programming. Nettles already had considerable experience with this approach in the context of adding transactions to Standard ML [83]. Such a modification to Java is at the heart of the ANTS system [81]. However, when we began considering modifying a general-purpose language, we ran into problems concerning security. In particular, we had two goals that seemed difficult to achieve with this approach. First, we believed that for programmable packets to be as generally useful as possible, doing cryptographic authentication [59, 60] or authorization should not be required on every packet. After all, IP packets do not require such heavyweight operations when being forwarded. Second, we very much wished to apply formal methods to help us make guarantees about the effects of packet programs. This seemed to require a formal specification of the packet language, and one that was tractable to modify and use as part of a basis for formal proof. At this time, only one general-purpose language had a fairly complete specification, Standard ML, and that specification ran to about 100 pages [52].

So, instead, we began to consider the possibility of designing a new, but very simple, language just for packet programming. We quickly became convinced that such a language was feasible and would give us significant leverage on our security concerns. Furthermore, by making it as simple and limited as possible, not only would we facilitate formal reasoning, but we would also make informal reasoning about packet program behavior much simpler.
This approach eventually led to our design for the Packet Language for Active Networks [37, 64, 73].

PLAN: The initial PLAN design [35, 36, 37] was done by Michael Hicks, Jon Moore, and Pankaj Kakkar, and was led by Scott Nettles. Our basic guiding principle was to make the language as simple as possible, adding features only if they were clearly needed to implement the active packet programs we were interested in. Another principle was to use IP as a guide; we wanted to be able to do what IP did, only more flexibly. For this reason, ping became our canonical active packet program. We also wanted to rule out certain misbehaving programs. The canonical example here was that we wanted it to be impossible to use the network to factor arbitrarily large numbers. As our design matured, several key architectural features appeared. First, to model IP's unreliable packet transmissions, PLAN would have a facility for unreliably sending a program to another node and evaluating it there. This remote evaluation feature was much like a function call, except that while the arguments to the function would be evaluated on the sending node, the function call itself
would be sent, along with the arguments and code, to the remote node for evaluation. Second, we wanted PLAN to be very limited, but we also wanted to make a wide variety of network programs implementable. Thus we needed an “escape hatch” that allowed code not written in PLAN to be called from PLAN. We called these more general functions services. Services are logically at a different level of privilege than PLAN packets, and calling them requires crossing a protection boundary much like a system call. This marked the first introduction of this distinction in SwitchWare. Broadly, we began to view PLAN as a language that glues together node-resident services, much as UNIX shell scripts glue together UNIX programs. Finally, we took a radical view with respect to other language features. No user-defined data structures are available, and PLAN data is immutable. Most radically, we decided to experiment with leaving out recursion (or equivalently general looping). Thus PLAN programs are guaranteed to terminate, but are strictly less powerful than most programming languages, since the termination guarantee means PLAN is not Turing complete. The resulting design has proven to be surprisingly robust. There have been four different implementations in three different languages and the design has been formally specified, although the uses of this specification have been more limited than we would have hoped [44]. The first implementation (in Caml) resulted in the first public demonstration of an active packet program (of ping, naturally), and the second implementation (in Java) was the first active packet system to be released to the public. Finally, to implement the PLANet [38] active internetwork described in the next section, only one new feature, chunks [54], needed to be introduced. Despite its deliberate limitations, PLAN has proven to be powerful enough for our purposes. 
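The interplay of remote evaluation, services, and guaranteed termination can be conveyed with a toy model. This is emphatically not PLAN's concrete syntax (PLAN is an ML-style functional language, and its implementations were in Caml and Java); the Python sketch below, with names entirely ours, just mimics the semantics: a packet carries a small loop-free expression, evaluation may hop to another node, and "services" are the escape hatch to node-resident code.

```python
class Node:
    def __init__(self, name, network):
        self.name = name
        self.network = network   # name -> Node, our stand-in for links
        self.log = []
        # Node-resident "services": the escape hatch to code not
        # written in the packet language itself.
        self.services = {
            "this_host": lambda node, args: node.name,
            "log": lambda node, args: node.log.append(args[0]),
        }

    def eval_packet(self, expr):
        # Expressions are literals or tuples; nothing here can express
        # a loop or recursion, so every packet's evaluation terminates.
        if not isinstance(expr, tuple):
            return expr                       # literal value
        op, *args = expr
        if op == "on_remote":                 # remote evaluation
            dest, body = args
            return self.network[dest].eval_packet(body)
        if op == "call":                      # cross into a service
            name, *rest = args
            return self.services[name](
                self, [self.eval_packet(a) for a in rest])
        raise ValueError(f"unknown form: {op}")


net = {}
net["a"], net["b"] = Node("a", net), Node("b", net)

# A ping-like packet: evaluate at b, logging which host ran the program.
ping = ("on_remote", "b", ("call", "log", ("call", "this_host")))
net["a"].eval_packet(ping)
print(net["b"].log)        # ['b']
```

Note how the arguments to `call` are evaluated before the service runs, while `on_remote` ships the whole unevaluated body to the destination — the split between local argument evaluation and remote function evaluation described in the text.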
SNAP: One significant outgrowth of PLAN is Jon Moore's Safe and Nimble Active Packets system [55, 56]. Although SNAP is described elsewhere in this book, it is worth considering its relation to PLAN and SwitchWare here. Not surprisingly, SNAP draws many lessons from PLAN, retaining key aspects of PLAN and improving on many of PLAN's problems. SNAP shares with PLAN the key idea of using a special-purpose domain-specific language for packet programming. It also retains the PLAN security model, based upon unprivileged packet programs and (potentially) privileged service routines. Also, although it is expressed somewhat differently, remote evaluation is still at the heart of SNAP support for transporting packets. However, unlike PLAN, SNAP is a low-level, byte-code-based language. This, coupled with several careful engineering choices that eliminate copying and memory management overheads, allows SNAP to significantly outperform PLAN. In fact, SNAP comes within a few percent of IP's performance, essentially achieving our goal of IP-like performance for IP-like tasks. Although SNAP retains many aspects of PLAN's security model [39], it betters PLAN in the area of resource safety. Although PLAN programs terminate, SNAP programs are guaranteed a priori to execute in
time, space, and bandwidth in proportion to the length of the packet. To achieve this, SNAP byte codes are only allowed to branch forward. Thus SNAP is an even more serious exploration of how limited an active packet language can be, and still be more useful than PLAN (for example, a modified SNAP is used in some IBM network processors; see [32]).

ALIEN/Caml: ALIEN was the basis of the final SwitchWare experiment with active packet languages. In this case, Caml served as the packet language. Although Caml's type safety provided some key protection, the general-purpose nature of Caml meant that cryptographic techniques were needed to ensure that the packet had not been modified en route, and that its namespace access had been properly authorized. Unfortunately, the performance impact of this was severe [11], validating PLAN's assumption that cryptography needed to be avoided if active packets were to be used for basic packet transport. Nevertheless, this work showed that a general-purpose programming language could be used for both active packets and active extensions, if the architecture had given the problem sufficient forethought.

5.7 RESULTS

Figure 5.1 captures an abstract view of the SwitchWare architecture. The combination of active packet and active extension technologies was unique to SwitchWare. The most substantial investigation of this combination occurred in the context of our active internetwork implementation, PLANet.

PLANet: After the design and first two implementations of PLAN were complete, we believed that it was important to use PLAN [64] to build a substantial networking application. After some consideration, we decided to implement an internetwork (PLANet [38]) using PLAN. We made this choice because we felt that internetworking was "network complete"; if we could build an internetwork, we could build essentially any network system.
In fact, our goals were more ambitious than just testing PLAN; our goal was to test the SwitchWare architecture itself. The idea was simple. So as not to reinvent, we would take the basic design and engineering strategies of the IP-based Internet and attempt to reimplement them in a system where all packets were active and all nodes could be extended with active extensions. For active packets we implemented a new Caml version of PLAN and for active extensions we used ALIEN, which was just beginning to emerge. As we designed PLANet, we discovered that one key addition was needed in PLAN. Packets needed to be fragmented and reassembled as well as checksummed. But PLAN had no way to treat packets as data. To provide this facility, we introduced chunks [54], which are essentially PLAN packets treated as
data. Not only can chunks be treated much like byte arrays, they can also be passed to functions and used as arguments to other chunks. Thus, chunks also provided an elegant language mechanism for expressing and generalizing encapsulation (and, when executed, de-encapsulation).

PLANet also served as a platform to experiment with using AN for a variety of tasks. One important example was flow-based adaptive routing (FBAR) [38]. FBAR demonstrated that a few control packets with nontrivial algorithms embedded in them could be used to enhance the performance of a much larger flow of less sophisticated (or even nonactive) packets. One final addition to PLANet was extending the namespace-based security ideas we had been exploring to PLAN itself, using chunks and SANE [35, 36]. The resulting system gives sophisticated and fine-grained control over security for active packets.
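The dual nature of chunks — programs when evaluated, byte arrays when handled — is what makes fragmentation, checksumming, and encapsulation fall out naturally. The sketch below is an illustrative Python rendering of that idea only (PLAN's actual chunks carried typed code and bindings; these names are ours):

```python
import zlib


def make_chunk(program_bytes):
    """A chunk: a packet program reified as data, with a checksum."""
    return {"code": program_bytes, "cksum": zlib.crc32(program_bytes)}


def encapsulate(inner_chunk, header: bytes):
    """Wrap a chunk inside an outer chunk carrying extra header bytes;
    evaluating the wrapper would de-encapsulate."""
    return make_chunk(header + inner_chunk["code"])


def fragment(chunk, size):
    """Chunks are data, so they fragment like any byte array."""
    code = chunk["code"]
    return [code[i:i + size] for i in range(0, len(code), size)]


def reassemble(parts, expected_cksum):
    code = b"".join(parts)
    assert zlib.crc32(code) == expected_cksum, "corrupt reassembly"
    return make_chunk(code)


inner = make_chunk(b"print(this_host())")
outer = encapsulate(inner, b"HDR:")
parts = fragment(outer, 8)
print(reassemble(parts, outer["cksum"])["code"])   # b'HDR:print(this_host())'
```

Because a chunk can also be passed as an argument to another chunk, the same mechanism generalizes from encapsulation to handing whole programs around as first-class values, which is how PLANet expressed its protocol layering.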
Figure 5.1 An abstract view of the SwitchWare architecture. [The figure shows, from top to bottom: active packets; ephemeral functions and state used by individual active packets; the administrative privilege boundary; active extensions (persistent state and functions used by many active packets); and the loadlet, providing minimal static functionality for persistent state and functions.]
Other observations: SwitchWare generated the first active application, that of active bridging. While not a “user” application in the normal sense, it demonstrated the ease and value of dynamic updating of an infrastructure’s role in a network. SANE provided the first secure node environment, and lessons from it remain applicable to the architecture of any reprogrammable components embedded [25]
in a network. Notably, the secure bootstrap process used for SANE is having a very broad impact as the basis for the TCPA. Telcordia showed that the SwitchWare infrastructure could easily support a number of important and practical applications of a publish/subscribe form. Perhaps more interestingly, a 1999 demonstration of interoperation among SwitchWare, the protocol boosters implementation done at Telcordia [28], the Netscript implementation done at Columbia [24], and the University of Washington Detour [13] system showed how easily programmable infrastructures could be composed and demonstrated (this composition was mainly done by Bill Marcus, then at Telcordia Technologies).

5.8 REFLECTIONS AND CONCLUSIONS

We have several things that we now see as mistakes. First, the goal of active networking was to accelerate network evolution, and, while we believe we chose the right tools and approach, the use of purist language technology that was unfamiliar to most programmers meant that it had to jump two hurdles—language and technology—to pick up momentum in building a community. Second, we had planned on a 6-year active networking program, and an eventual evolution toward the goal of programmable networking, and that was the plan promulgated by DARPA in the original, aggressive research program. Unfortunately, a midcourse shift in the winds created pressures for demonstrations of technologies that had not been completely worked out, and proved to be a significant distraction. Third, at the time of this writing, we have become convinced that our project was before its time. The currently popular overlay network [35, 46, 52, 54, 58, 74, 75, 77] research is, somewhat cynically, a weaker form of active networking.
Purpose-built middleboxes [12, 15, 16, 21, 22, 24, 30, 34, 40, 50, 51, 55, 57, 61-63, 65-73, 78, 79, 82] pervade the network, and new technologies such as network processors [2, 42, 43] are appearing at many points in the network, particularly in high-performance routers [29]. Finally, the mobile device (mobile phone, PDA, and so forth) market is rapidly evolving toward on-the-fly service introduction in the style of SwitchWare; for example, Qualcomm's Brew system (see http://www.qualcomm.com/brew/about/aboutbrew.html). In conclusion, SwitchWare had an enormous impact in many areas of network and security technology, as people realized that the solutions we developed for active networking were portable to many other computing and networking environments.
References

[1] Engineering and Operations in the Bell System, 2nd ed., AT&T Bell Laboratories, Murray Hill, NJ, 1983.
[2] Agere Network Processors, http://www.agere.com/enterprise_metro_access/network_processors.html.
[3] Alexander, D. S., et al., “Safety and Security of Programmable Network Infrastructures,” IEEE Communications, Vol. 36, No. 10, October 1998, pp. 84-92.
[4] Alexander, D. S., et al., “A Secure Active Network Environment Architecture: Realization in SwitchWare,” IEEE Network, Special Issue on Active and Programmable Networks, Vol. 12, No. 3, May/June 1998, pp. 37-45.
[5] Alexander, D. S., et al., “The SwitchWare Active Network Architecture,” IEEE Network, Special Issue on Active and Programmable Networks, Vol. 12, No. 3, May/June 1998, pp. 29-36.
[6] Alexander, D. S., et al., “Security in Active Networks,” in Secure Internet Programming: Security Issues for Mobile and Distributed Objects, J. Vitek and C. Jensen (eds.), New York: Springer Verlag, 1999, pp. 433-451.
[7] Alexander, D. S., and Smith, J. M., “The Architecture of ALIEN,” Proc. First International Workshop on Active Networks, Springer Verlag, Berlin, June 30-July 2, 1999, pp. 1-12.
[8] Alexander, D. S., et al., “Active Bridging,” Proc. ACM SIGCOMM Conference, Cannes, France, October 1997, pp. 101-111.
[9] Alexander, D. S., et al., Active Network Encapsulation Protocol (ANEP), Active Networks Group, DARPA Active Network Project, August 1997.
[10] Alexander, D. S., et al., “Secure Quality of Service Handling (SQoSH),” IEEE Communications, Vol. 38, No. 4, April 2000, pp. 106-112.
[11] Alexander, D. S., et al., “The Price of Safety in an Active Network,” Journal of Communications and Networks (JCN), Special Issue on Programmable Switches and Routers, Vol. 3, No. 1, March 2001, pp. 4-18.
[12] Application Level Programmable Inter-Network Environment Project Web Page, http://www.cs.ucl.ac.uk/alpine/.
[13] Savage, S., et al., Detour: A Case for Informed Internet Routing and Transport, University of Washington, TR #UW-CSE-98-10-05.
[14] Arbaugh, W. A., Farber, D. J., and Smith, J. M., “A Secure and Reliable Bootstrap Architecture,” IEEE Security and Privacy Conference, Oakland, CA, May 1997, pp. 65-71. (An early version available as Technical Report MS-CIS-96-35, CIS Dept., University of Pennsylvania, December 2, 1996.)
[15] Arbaugh, W. A., et al., “Security for Virtual Private Intranets,” IEEE Computer, Special Issue on Broadband Networking Security, Vol. 31, No. 9, September 1998, pp. 48-55.
[16] Arbaugh, W. A., “Chaining Layered Integrity Checks,” Ph.D. thesis, Computer and Information Science Dept., University of Pennsylvania, 1999.
[17] Bell Communications Research, Inc., AIN Release 1 Service Logic Program Framework Generic Requirements, Report FA-NWT-001132.
[18] Bershad, B., et al., “Extensibility, Safety and Performance in the SPIN Operating System,” Proc. of the Fifteenth ACM Symposium on Operating System Principles (SOSP-15), Copper Mountain, CO, December 1995, pp. 267-284.
[19] Biagioni, E., “A Structured TCP in Standard ML,” Proc. 1994 SIGCOMM Conf., 1994, pp. 36-45.
[20] Blaze, M., Feigenbaum, J., and Lacy, J., “Decentralized Trust Management,” Proc. IEEE Seventeenth Symposium on Security and Privacy, 1996, pp. 164-173.
[21] Bose, V., “Virtual Radios,” Ph.D. dissertation, MIT, 1999.
[22] Carpenter, B., and Brim, S., “Middleboxes: Taxonomy and Issues,” Internet Engineering Task Force, RFC 3234, February 2002.
[23] Cheswick, B., and Bellovin, S., Firewalls and Internet Security: Repelling the Wily Hacker, Reading, MA: Addison-Wesley, 1994.
[24] DaSilva, S., http://www.cs.columbia.edu/~dasilva/netscript.html, Columbia University Computer Science Dept., 1995.
[25] Embedded, Everywhere: A Research Agenda for Networked Systems of Embedded Computers, Computer Science and Telecommunications Board (CSTB) of the National Research Council (NRC), Washington, D.C., National Academy Press, 2001.
[26] Engler, D. R., Kaashoek, M. F., and O’Toole, J., Jr., “Exokernel: An Operating System Architecture for Application-Level Resource Management,” Proc. of the Fifteenth ACM Symposium on Operating System Principles (SOSP-15), Copper Mountain, CO, December 1995.
[27] Farber, D. J., “The Distributed Computing System,” Proc. 1973 IEEE COMPCON, 1973.
[28] Feldmeier, D. C., et al., “Protocol Boosters,” IEEE Journal on Selected Areas in Communication, Special Issue on Protocol Architectures for Twenty-First Century Applications, Vol. 16, No. 3, April 1998, pp. 437-444.
[29] IETF Forwarding Control Element Separation Working Group Home Page, http://www.ietf.org/html.charters/forces-charter.html.
[30] Freed, N., “Behavior of and Requirements for Internet Firewalls,” Internet RFC 2979, October 2000.
[31] Galis, A., et al., “A Flexible IP Active Networks Architecture,” Proc. Second IWAN, No. 1942, Springer, 2000, pp. 1-15.
[32] Haas, R., et al., “Creating Advanced Functions on Network Processors: Experience and Perspectives,” IEEE Network, July/August 2003, pp. 46-54.
[33] Hadzic, I., Udani, S., and Smith, J. M., “FPGA Viruses,” Proc. of Ninth International Workshop on Field-Programmable Logic and Applications, FPL’99, Springer, August 1999.
[34] Hain, T., Architectural Implications of NAT, Internet RFC 2993, November 2000.
[35] Hicks, M., and Keromytis, A. D., “A Secure PLAN,” Proc. First International Workshop on Active Networks, Berlin, Germany: Springer Verlag, June 30-July 2, 1999, pp. 307-314.
[36] Hicks, M., Keromytis, A., and Smith, J. M., “A Secure PLAN (Extended Version),” IEEE Trans. on Systems, Man and Cybernetics, 2003.
[37] Hicks, M., et al., “PLAN: A Packet Language for Active Networks,” Proc. International Conference on Functional Programming, 1998.
[38] Hicks, M., et al., “PLANet: An Active Internetwork,” Proc. of the Eighteenth IEEE Computer and Communication Society INFOCOM Conference, 1999, pp. 1124-1133. Also on-line at: http://www.cis.upenn.edu/~switchware/papers/planet.ps.
[39] Hicks, M., Moore, J. T., and Nettles, S., “Compiling PLAN-to-SNAP,” Proc. of the Third International Working Conference on Active Networks, Lecture Notes in Computer Science, Marshall, I. W., Nettles, S., and Wakamiya, N. (eds.), Vol. 2207, Springer Verlag, October 2001, pp. 134-151. Also available on-line at: http://www.cis.upenn.edu/~mwh/papers/plansnap.ps.
[40] Holdrege, M., and Srisuresh, P., Protocol Complications with the IP Network Address Translator, Internet RFC 3027, January 2001.
[41] Hutchinson, N. C., and Peterson, L. L., “The x-Kernel: An Architecture for Implementing Network Protocols,” IEEE Trans. on Software Engineering, Vol. 17, No. 1, January 1991, pp. 64-76.
[42] IBM PowerNP Network Processors, http://www-3.ibm.com/chips/products/wired/products/networkprocessors.html.
[43] Intel IXP Architecture Network Processors, http://www.intel.com/design/network/products/npfamily/.
[44] Kakkar, P., et al., Specifying the PLAN Network Programming Language, Electronic Notes in Theoretical Computer Science, September 1999.
[45] Keromytis, A., Misra, V., and Rubenstein, D., “SOS: Secure Overlay Services,” Proc. ACM SIGCOMM Conf., 2002, pp. 20-30.
[46] Keromytis, A., et al., “The STRONGMAN Architecture,” Proc. Third DARPA Information Survivability Conference and Exposition (DISCEX), April 2003.
[47] Leroy, X., The Caml Special Light System (Release 1.10), INRIA, France, November 1995.
[48] Leslie, I. M., et al., “The Design and Implementation of an Operating System to Support Distributed Multimedia Applications,” IEEE Journal on Selected Areas in Communications (JSAC), Vol. 14, No. 7, September 1996, pp. 1280-1297.
[49] Mallet, A., Chung, J., and Smith, J. M., “Operating System Support for Protocol Boosters,” Proc. 1997 HIPPARCH Workshop, Uppsala, Sweden, 1997 (earlier version available as UPENN CIS TR# MS-CIS-96-13).
[50] Marcus, W., et al., “Protocol Boosters: Applying Programmability to Network Infrastructures,” IEEE Communications, Vol. 36, No. 10, October 1998, pp. 79-83.
[51] Menage, P. B., “Resource Control of Untrusted Code in an Open Programmable Network,” Ph.D. Dissertation, University of Cambridge Computer Laboratory, 2000.
[52] Milner, R., Tofte, M., and Harper, R., The Definition of Standard ML, Cambridge, MA: MIT Press, 1990.
[53] Mitola, J., III, “Software Radios,” Proc. IEEE National Telesystems Conference, May 1992.
[54] Moore, J. T., Hicks, M., and Nettles, S. M., “Chunks in PLAN: Language Support for Programs as Packets,” Proc. of the Thirty-Seventh Annual Allerton Conference on Communication, Control, and Computing, September 1999.
[55] Moore, J. T., Hicks, M., and Nettles, S., “Practical Programmable Packets,” Proc. of the Twentieth IEEE Computer and Communication Society INFOCOM Conference, April 2001, pp. 41-50. Also available on-line at: http://www.cis.upenn.edu/~switchware/papers/snap.pdf.
[56] Moore, J. T., “Practical Active Packets,” Ph.D. dissertation, CIS Dept., University of Pennsylvania, September 2002.
[57] Muir, S. J., and Smith, J. M., “Supporting Continuous Media in the Piglet OS,” Proc. Eighth International Workshop on Network and Operating Systems Support for Digital Audio and Video, 1998, pp. 99-102.
[58] Muir, S. J., “Piglet: An Operating System for Network Appliances,” Ph.D. dissertation, CIS Dept., University of Pennsylvania, 2001.
[59] Needham, R., and Schroeder, M., “Using Encryption for Authentication in Large Networks,” Communications of the ACM, Vol. 21, No. 12, 1978, pp. 993-999.
[60] Needham, R. M., “Denial of Service: An Example,” Communications of the ACM, Vol. 37, No. 11, November 1994, pp. 42-46.
[61] Partridge, C., et al., “FIRE: Flexible Intra-AS Routing Environment,” Proc. ACM SIGCOMM Conference, 2000, pp. 191-203.
[62] Peterson, L., et al., “An OS Interface for Active Routers,” IEEE Journal on Selected Areas in Communications (JSAC), Vol. 19, No. 3, March 2001, pp. 473-487.
[63] Peterson, L., et al., “A Blueprint for Introducing Disruptive Technology into the Internet,” Proc. of the First ACM Workshop on Hot Topics in Networks, October 2002.
[64] PLAN home page, http://www.cis.upenn.edu/~switchware/PLAN.
[65] Ritchie, D. M., “A Stream Input-Output System,” AT&T Bell Laboratories Technical Journal, Vol. 63, No. 8, Part 2, October 1984, pp. 1897-1910.
[66] Savage, S., et al., “Practical Support for IP Traceback,” Proc. ACM SIGCOMM Conf., 2000, pp. 295-306.
[67] Sincoskie, W. D., and Cotton, C. J., “Extended Bridge Algorithms for Large Networks,” IEEE Network, Vol. 2, No. 1, January 1988, pp. 16-24.
[68] Smith, J. M., et al., “Activating Networks: A Progress Report,” IEEE Computer, Vol. 32, No. 4, April 1999, pp. 32-41.
[69] Smith, J. M., and Nettles, S. M., “Active Networking: One View of the Past, Present and Future,” IEEE Trans. on Systems, Machines and Cybernetics, 2003.
[70] Snoeren, A. C., et al., “Hash-Based IP Traceback,” Proc. ACM SIGCOMM Conference, 2001, pp. 3-14.
[71] Srisuresh, P., and Holdrege, M., IP Network Address Translator (NAT) Terminology and Considerations, Internet RFC 2663, August 1999.
[72] Srisuresh, P., and Egevang, K., Traditional IP Network Address Translator (Traditional NAT), Internet RFC 3022, January 2001.
[73] Stehr, M. O., and Talcott, C., “PLAN in Maude: Specifying an Active Network Programming Language,” Proc. Fourth International Workshop on Rewriting Logic and Its Applications (WRLA’2002), Pisa, Italy, September 19-21, 2002. Also in Electronic Notes in Theoretical Computer Science, Vol. 71, Elsevier, 2002, http://www.elsevier.nl/locate/entcs/volume71.html.
[74] Stoica, I., et al., “Chord: A Scalable Peer-to-Peer Lookup Protocol for Internet Applications,” Proc. ACM SIGCOMM Conference, 2001, pp. 149-160.
[75] Stoica, I., et al., “Internet Indirection Infrastructure,” Proc. ACM SIGCOMM Conference, 2002, pp. 10-20.
[76] Tennenhouse, D. L., et al., “A Survey of Active Network Research,” IEEE Communications, Vol. 35, No. 1, January 1997, pp. 80-86.
[77] Touch, J., “Dynamic Internet Overlay Deployment and Management Using the X-Bone,” Computer Networks, July 2001, pp. 117-135.
[78] Tsirtsis, G., and Srisuresh, P., Network Address Translation–Protocol Translation (NAT-PT), Internet RFC 2766, February 2000.
[79] Tullmann, P., Hibler, M., and Lepreau, J., “Janos: A Java-Oriented OS for Active Network Nodes,” IEEE Journal on Selected Areas in Communications (JSAC), Vol. 19, No. 3, March 2001.
[80] Wall, D. W., “Messages as Active Agents,” Proc. Ninth Annual Principles of Programming Languages, Vol. 19, No. 3, 1982, pp. 34-39.
[81] Wetherall, D. J., Guttag, J., and Tennenhouse, D. L., “ANTS: A Toolkit for Building and Dynamically Deploying Network Protocols,” Proc. IEEE OpenArch 98, San Francisco, CA, 1998, pp. 117-129.
[82] Lepreau, W. B., et al., “An Integrated Experimental Environment for Distributed Systems and Networks,” Proc. USENIX OSDI Conf., December 2002.
[83] Wing, J. M., et al., “Extensions to Standard ML to Support Transactions,” ACM SIGPLAN Workshop on ML and Its Applications, June 1992.
[84] Zander, J., and Forchheimer, R., “Softnet—An Approach to Higher Level Packet Radio,” Proc. AMRAD Conference, 1980.
Chapter 6

Peer-to-Peer Programmability

6.1 INTRODUCTION

The evolution of the Internet can be classified broadly into three phases. The first phase was initiated in the early 1970s and lasted until the early 1990s. Remarkably, the Internet started as an overlay on the existing PSTN infrastructure. Peering routers enabled worldwide connectivity, creating a global network almost unrecognized by the public. The architecture was designed with the principles of efficiency and simplicity, was built on the well-known TCP/IP protocol stack, and was characterized by static IP addresses.

In the second half of the 1990s, the World Wide Web (WWW) became the dominant application of the Internet. At its peak, up to 80% of the total traffic load was attributed to the WWW. The WWW was based on a centralized architecture, heavily biased toward a client/server model. A relatively small number of very powerful servers provide services to a large number of much less powerful clients in a strongly asymmetric way. Among the main challenges are the provisioning of high performance, redundancy, and load balancing facilities in a centralized fashion.

Currently, a departure from these characteristics has become noticeable. Peer-to-peer (P2P) services have surpassed the WWW in popularity, at least in terms of traffic volume. Backbone operators, Internet service providers, and access providers consistently report P2P-type traffic volumes exceeding 50% of the total traffic load in their networks [15], sometimes even reaching 80% at nonpeak times.

The success of peer-to-peer applications is strongly related to other observable trends. The former clients have become very powerful machines in their own right, that are in a position to take over functionalities that were traditionally devoted to servers in a centralized manner.
This is supported by a substantial increase in access capacities through technologies like wireless LAN (WLAN), Ethernet in the last mile, and asymmetric digital subscriber line (ADSL). The new trends in technology have been accompanied by new modes of operation such as “always on” and “flat rate” business models. On the other hand,
mobility and roaming have triggered a quest for dynamic IP addresses. The architectural changes, together with new dominant applications, justify the identification of a new phase, the third phase, in the evolution of the Internet. We are facing this paradigm shift at the moment.

P2P services have evolved to become one of the most popular applications in today’s Internet. In particular, P2P networks have become very popular amid the relentless spread of the Gnutella [6, 19, 20, 21, 25], Kazaa [13], and eDonkey [1] file sharing applications. Remarkably, only very simple protocols and almost no support from the transport network were required to make these distributed services operable on a large scale in very little time. One of the main reasons for their notable success is that P2P networks operate on the application level, and typically form application-specific overlays. Overlays work without specific network or transport support, and can be run completely at the edge of a network. While P2P overlays do implement a certain type of group communication structure, they do not suffer from the same deployment difficulties as multicast did in the past. However, ease of deployment comes at a cost: the lack of central servers, or of any central control, predictably leads to a huge amount of uncontrolled signaling traffic being generated and transmitted. It is the goal of this work to deal with this specific problem.

This chapter provides a brief survey of P2P networks and an analysis of their most prominent properties. The analysis then lends itself to the conclusion that a flexible and self-organizing management system should be developed and integrated, with the following properties: support for optimization of the resource usage of P2P overlays, and its adaptation on demand to changing patterns of request profiles and traffic loads, and to new application type requirements.
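The scale of this uncontrolled signaling traffic can be illustrated with a back-of-the-envelope sketch. The fanout and TTL values below are illustrative assumptions in the spirit of early Gnutella-style flooding, not measurements from the text:

```python
def flood_messages(fanout: int, ttl: int) -> int:
    """Upper bound on messages caused by one flooded query: each peer
    forwards to `fanout` neighbors until the TTL expires (duplicate
    suppression is ignored), giving the geometric sum fanout + ... + fanout**ttl."""
    return sum(fanout ** hop for hop in range(1, ttl + 1))

# With an assumed TTL of 7 and 4 neighbors per peer, a single keyword
# search can trigger on the order of tens of thousands of messages.
print(flood_messages(4, 7))  # 21844
```

Since every peer may issue such queries at will, the aggregate signaling volume grows with both the overlay size and the query rate, which is exactly the problem a management system must contain.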
Adaptation should be possible both in time and space, and the adaptation pattern should reflect the granularity of significant structural overlay changes and the timescale of such events. Extensive measurement studies have revealed profiles of timescales ranging from tens of minutes, over hours, to multiple months. The use of mobile code with an appropriate programming environment appears to be an attractive vehicle for solving some of the most urgent open questions under the given circumstances. In particular, application-level active networking seems most promising as an enabler of integrated management of P2P overlay networks.

6.2 WHAT ARE P2P SERVICES?

6.2.1 Architectural Concepts

P2P relates inherently to a special kind of network architecture. P2P services in particular are networked applications that take advantage of this type of architecture. A main feature of P2P is the immediate interaction among equal
partners that are called “peers.” In a P2P network, each peer has equivalent capabilities and responsibilities. This is in contrast to the traditional client/server network architecture, in which one or more nodes are dedicated to serving the others. In addition, in P2P architectures the networked nodes are highly autonomous: Peers may leave or join a P2P network arbitrarily. Another major characteristic of P2P services is that they provide a simple and efficient mechanism to pool and share exchangeable resources like disk space, audio/video files, or CPU cycles. These features allow any peer to be removed without any resulting loss of service. While these are the properties in an ideal world, in reality P2P services have evolved with mixed appearances that cover a whole range of the architectural spectrum.
Figure 6.1 Architectural spectrum of P2P services, ranging from pure P2P through hybrid P2P and hybrid client/server (grid) to pure client/server.
Architectural Spectrum of P2P

Architectures of P2P services vary between two extremes: pure P2P and pure client/server distributed architectures; see Figure 6.1. A pure P2P architecture is completely decentralized. Peers operate in a highly autonomous mode and are de jure equal entities. De facto, however, if left to themselves, most P2P architectures evolve into a hybrid structure, where some peers are “more equal than others.” Hybrid structures may form themselves due to the preferential attachment of peers to distinct peers. An involuntary instantiation of structure can be based on forms of application-level attractiveness of peers (e.g., content or metadata), or on network-level attractiveness of peers (e.g., bandwidth). Figure 6.2 depicts an example of the forming of structure due to application-level attractiveness in a pure/hybrid P2P network. It shows highly connected peers in the Gnutella network. These distinct peers provide more stable behavior and higher capacity in relaying signaling and search messages than other Gnutella peers.

The lack of resource optimization has often been blamed for the nonscalability of pure or hybrid P2P architectures [14, 23]. Hence, many structuring approaches have been suggested to optimize computational or network resource usage. As a result, even more structure and controls have been induced and, henceforth, almost all successful P2P services can be classified as exhibiting some form of hybrid architecture. While, in theory, peers of a P2P service are equal, in practice, some structure may be evolving involuntarily or may be
imposed externally to ensure scalability. Many successful classes of P2P architectures apply “more-equal-than-others” concepts like “superpeers” (e.g., Kazaa [13]), “ultrapeers” (e.g., Gnutella [8]), or server mediation (e.g., eDonkey2000 [1]) to increase scalability. Such hybrid P2P architectures contain at least one central, or “more distinct,” entity. However, the instantiation of the distinct nodes is often exclusively based on application-level or content optimization, as with Kazaa’s “superpeers”; it is rarely based on network-level resource optimization. Hybrid P2P architectures can further be classified into user-centric (e.g., instant messaging) or data-centric (e.g., search for content) types [16].
Figure 6.2 The Gnutella network topology, available at http://www.ececs.uc.edu/~mjovanov/Research/gnutella.html.
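The involuntary emergence of such highly connected peers can be reproduced with a minimal preferential-attachment simulation. This is a sketch only: the peer count, seed, and attachment rule are illustrative assumptions, not the actual Gnutella join protocol:

```python
import random

def grow_overlay(n_peers: int, seed: int = 0) -> dict[int, set[int]]:
    """Grow an overlay in which each joining peer attaches to an existing
    peer chosen with probability proportional to its degree."""
    random.seed(seed)
    adj = {0: {1}, 1: {0}}   # start from a single overlay connection
    endpoints = [0, 1]       # one list entry per edge endpoint, so uniform
                             # sampling from it is degree-proportional
    for new in range(2, n_peers):
        target = random.choice(endpoints)
        adj[new] = {target}
        adj[target].add(new)
        endpoints += [new, target]
    return adj

overlay = grow_overlay(500)
degrees = sorted((len(nbrs) for nbrs in overlay.values()), reverse=True)
# A handful of hub peers accumulate many connections while most peers keep
# one or two, mirroring the highly connected Gnutella peers of Figure 6.2.
print("top degrees:", degrees[:5], "median degree:", degrees[len(degrees) // 2])
```

Even this crude rule produces the skewed degree distribution in which a few hubs carry a disproportionate share of the signaling load.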
Since the forming of structure is advantageous for P2P architectures, a complementary classification of pure and hybrid P2P into unstructured and structured systems is suggested. Structured P2P architectures are systems where distinct roles and responsibilities of the peers are inherent to the architecture. The spectrum of structured P2P architectures ranges from superpeers, as outlined above, to P2P architectures applying distributed hash tables (DHT) [9]. In DHT-based P2P systems, certain nodes take on the responsibility to hold, or to be able to locate, specific content. Externally imposed structure increases routing and location efficiency and scalability, but may imply additional overhead for adaptation, and therefore may reduce the autonomy and self-organization capability of peers. There is clearly a trade-off if centralized methods such as DHT are introduced. Popular examples of P2P architectures using DHT are Pastry [10], Chord from the University of California at Berkeley and the Massachusetts Institute of
Technology (MIT) [11], or the Content Addressable Network (CAN) from the International Computer Science Institute (ICSI) at the University of California at Berkeley [12]. Figure 6.3 displays an example of CAN, where distinct peers are responsible for certain rectangles in the two-dimensional space of the hash function. The rectangles represent regions where similar content has comparable hash function values. Obviously, a priori structured P2P architectures like Chord or CAN decrease in their capability for self-organization. In particular, they rarely consider network-level resources as the objective for self-control.

Figure 6.3 Two-dimensional coordinate space of a CAN hash function.
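The core idea behind Chord-style DHTs can be sketched in a few lines. This illustrates only the successor placement rule on a one-dimensional identifier ring; real Chord additionally maintains finger tables for O(log n) routing, and the ring size and peer names below are arbitrary assumptions:

```python
import hashlib

def ring_id(name: str, bits: int = 16) -> int:
    """Map a peer or content name onto the identifier ring via a hash."""
    digest = hashlib.sha1(name.encode()).digest()
    return int.from_bytes(digest, "big") % (2 ** bits)

def successor(node_ids: list[int], key_id: int) -> int:
    """Chord's placement rule: a key belongs to the first node whose
    identifier is equal to or follows the key identifier on the ring."""
    for nid in sorted(node_ids):
        if nid >= key_id:
            return nid
    return min(node_ids)  # wrap around the ring

nodes = [ring_id(f"peer-{i}") for i in range(8)]
key = ring_id("some-file.mp3")
print(f"key {key} is held by node {successor(nodes, key)}")
```

Because responsibility is fixed by the hash function rather than negotiated among peers, lookups become deterministic and scalable, which is precisely the loss of self-organization noted above.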
As control becomes increasingly centralized, P2P and client/server architectures become less distinguishable. In fact, hybrid client/server architectures are based on centralized control while still enabling resource sharing, a property shared with P2P services. For example, Seti@Home can be classified as a hybrid client/server type architecture, exhibiting resource sharing as a P2P property [17]. As another example, grid services can be regarded as being based on hybrid client/server architectures [18] where control is typically centralized, while resources are shared on a large scale. In contrast, pure client/server architectures are strictly centralized and do not usually allow for sharing of scattered resources [22]. To summarize, P2P services are inherently distributed, and can be categorized according to the degree of decentralization incorporated, both with respect to the implementation of control and the location of shared resources.
6.2.2 Components, Structure, and Algorithms of Peer-to-Peer Services
In this subsection, P2P services are related to and distinguished from other important concepts, such as overlay networks, self-organizing systems, and ad hoc networks. P2P services have some features in common with all these concepts, but are also substantially different in parts. Identifying commonalities and highlighting differences helps our understanding of important properties.

Overlay networks are virtual network structures that take advantage of an underlying physical network. An overlay network has different semantics from the underlying physical network as far as neighborhood relations, address spaces, and so on are concerned. Security, resource management, and group communication are among the most prominent challenges for the operation of overlay networks, particularly where an overlay involves several administrative domains. While P2P networks share all these concerns to varying degrees, they also pose additional challenges, which stem from P2P networks operating in a very large-scale environment under unstable conditions. Peers come and go as they please in a large, unmanaged system, and so form highly variable topologies. Figure 6.4 shows a typical snapshot of the distribution of overlay connection holding times in Gnutella [2]. The observed average holding time was 405 seconds, a value which is very low compared to the stability of other overlay architectures, such as virtual private networks (VPN).
Figure 6.4 Overlay connection holding time distribution in Gnutella [2].
The distribution from Figure 6.4 reveals two modes, one at approximately 1 second and the other at around 500 seconds. These modes are related to the exchange of host information (short mode) and search requests (long mode). The
distribution shows that P2P services operate over a wide range of timescales, to which different operational or application requirements also apply. The implied dynamics of P2P overlays further increase the challenges in operating P2P services. On one hand, it is necessary to provide overlay stability on various timescales, while on the other hand, adaptivity is needed to operate P2P services efficiently. These constraints call for an adaptive mechanism on the application level, which we will investigate in Section 6.5.

Self-organization is a well-known principle in science. It refers to the spontaneous emergence of coherence or structure without externally applied coercion or control. An order at large may emerge out of local interactions by mutual reinforcement, without a detailed building plan. Obviously, P2P services inherently exploit self-organization, albeit on different levels and to various degrees. P2P services could be left completely to themselves, some local interaction pattern may be enforced (e.g., by imposing superpeer-like structures), or control could be partially centralized (search) and partially self-organizing (topology forming). Self-organization can even be incorporated on several levels within a hierarchically structured system, allowing the realization of a system that relies on a mix of local strategies, loose coupling, adaptation, and simplicity of communication protocols.

Ad hoc networks constitute another class of self-organizing and decentralized networks. Ad hoc nodes are themselves responsible for discovering each other, and for subsequent cooperation in an otherwise infrastructureless environment. Ad hoc networks provide an improvised and often impromptu networked system of mostly local scale. Although these properties seem similar to P2P characteristics, some substantial differences must be noted.
While ad hoc networks provide transport layer connectivity in a wireless and likely mobile environment, P2P overlays offer application-layer connectivity. In Open System Interconnection (OSI) terminology, P2P services are built on top of the transport layer and comprise the upper three layers of the stack: the session, presentation, and application layers. In contrast, ad hoc networks specialize in offering connectivity under specific conditions and provide an instantiation of the physical, data link, and network layers. While ad hoc networks tend to be smaller in size and constrained by physical positioning and neighborhood relations, P2P overlays can easily attain a global dimension and can form application-centric topologies, irrespective of the physical connectivity. Ad hoc and P2P networks can both be characterized as infrastructureless networks that need to self-organize discovery and routing (search), albeit under different constraints and with different methods. P2P networks often apply content addressing schemes, based on one- or multidimensional hash tables, for routing or for searching for content in a vast search space. In contrast, ad hoc networks are more concerned with balancing the computation of proactive and reactive connectivity information, depending on a given mobility pattern and the quality of the physical
infrastructure, such as by minimizing the “memory footprint” and the consumption of energy.

6.3 REQUIREMENTS FOR P2P PROGRAMMABILITY

Decentralization and self-organization have helped to increase the popularity of P2P services. For scalability and resource optimization, methods for structuring overlay connectivity and search operations have been introduced. The problem with these structuring methods lies in their limited flexibility. While the methods have tended to be optimized for special cases, P2P services and applications seem to be permanently evolving. Connections come and go with lifespans that range from a few minutes to many hours. The overlay topology is also highly variable, and changes as a result of peer connectivity variations. On a longer timescale, user behavior changes due to the impact of legal battles and the introduction of new technologies. More often than not, new or modified overlay and search techniques arise and subside within a few weeks’ time, leaving the structure obsolete. Even completely new P2P applications arise in relatively short periods of time, and can easily reach a global distribution span without much effort.

What is needed is a highly flexible, and possibly programmable, infrastructure that allows new software modules to be launched on demand to dynamically control overlay creation and management, and to optimize content search (routing) operations. These software modules should be incremental, composable, and self-organizing, so as to adapt to varying operational conditions. Flexibility is needed in terms of service placement, as well as in terms of variable functionality that might evolve over time. Services should be enabled to spawn or replicate themselves to places where they are needed. As an additional constraint, the appealing property of nondisruptiveness of P2P services should be preserved as far as possible.
In other words, optimizing structure should evolve as an integrated constituent of evolving P2P services and applications. Nondisruptiveness is closely related to layering within architectures. Since P2P services and applications tend to evolve on the application layer without much interference with lower layers, it may be tempting to require optimization and control methods to be located on the same layers for minimum disruption effects.

6.4 OBJECTIVES AND REQUIREMENTS FOR P2P OVERLAY MANAGEMENT

P2P services are effective in providing solutions for a large range of applications, due to their distributed nature and the focus they give to the resources found at the edges of the network. However, it has become evident over the past few years that some form of control is necessary to tackle issues such as the use of the service, the separation between the P2P overlay and the network layer, the short and unpredictable life cycles of peer relations, and the high signaling traffic generated [2]. We believe that there are four areas where the enforcement of control will be beneficial for such applications. The first area is access control. Participants of P2P overlays are typically granted access to all resources offered by the peers. These resources are valuable, so the resource provider, either a content provider or a network provider, needs to identify peers and regulate admission to the overlay. For P2P file-sharing applications in particular, access control should block off P2P applications or enable controlled content sharing. The second area is resource management. The resources of individual peers must be treated with care; for example, low-bandwidth connected peers should not be overloaded with download requests. For P2P file-sharing applications, content caching capabilities will improve performance while reducing the stress imposed on the network. A third area of interest is overlay load control. Overlay load control copes with the traffic flows inside the overlay. Its goal is to balance the traffic and load in order to maintain sufficient throughput inside the overlay, while also protecting other network services by mapping this load in an optimal way onto the underlying network infrastructure. Finally, the fourth area of control is adaptive topology control. Overlay connections may be established or broken arbitrarily by the peers, since they can join or leave the virtual network at any time. Topology control may enforce redundant connections, thus increasing the reliability of the service. In addition, topology control may force the structure of the virtual network to be more efficient and faster in locating resources when broadcast protocols are used. The last two areas support the aim of having adaptive and application-suited management strategies for P2P services.
The control objectives outlined here might violate the populist concept of unlimited access to free resources in P2P services, but the control mechanism governed by these objectives increases the stability of P2P services based on overlays. The proper trade-off must be found between regulation and autonomy in P2P overlays. Having identified the objectives of control for a P2P overlay, it is important to examine how adaptive and unsupervised control mechanisms need to be implemented, without diminishing the virtues of the P2P model or introducing further complexity and overhead in the network. We believe it is vital to preserve the autonomy of the peers inside a P2P network. Additional control loops, which adapt to the behavior of a P2P overlay, must not interfere with the autonomous nature of any P2P application. To achieve this goal, we suggest implementing control through an additional support infrastructure. This infrastructure will provide all the necessary tools and interfaces to implement the desired forms of control, and at the same time protect a P2P application. Finally, the mechanisms in this infrastructure permit self-organization or constraint-based self-organization.
The support infrastructure should be formed of self-organized interworking modules that may resemble a P2P network on their own.

6.5 P2P OVERLAY MANAGEMENT USING APPLICATION-LAYER ACTIVE NETWORKING

6.5.1 The Active Virtual Peer Concept

The main element of the suggested support infrastructure is the active virtual peer (AVP). As its name suggests, an AVP is a virtual entity that interacts with other peers inside a P2P network. An AVP is a representative of a community of peers; its purpose is to enhance and control the P2P relations inside that community, and to make them more efficient. AVPs enable flexibility and adaptivity through the use of self-organization. An AVP consists of various distributed and coordinated components that facilitate different forms of control. By combining these components, based on network conditions or administrative policies, we can create AVPs of different functionality. An AVP performs certain functions that are not expected of an ordinary peer. These AVP functions are arranged in horizontal layers as well as in vertical planes; see Figure 6.5. The horizontal layers correspond to the layers on which an AVP imposes control; the vertical separation describes the functional planes of AVPs. These architectural planes have been examined in detail in [2, 5].

Figure 6.5 The AVP structure.

Figure 6.6 The AVP realm.
The upper horizontal layer of an AVP is called the application optimization layer (AOL). It controls and optimizes the peer-to-peer relations on the application level. The AOL may apply application-specific routing in conjunction with access policies. The routing performed by the AOL is based on metrics such as the state of the peers ("virtual peer state") or the state of the links between peers ("virtual overlay link state"). Thus, it changes the peer load and overlay link characteristics such as packet drop rate, throughput, or delay. In addition, the AOL allows for active overlay topology control, which is accomplished in two ways. An active virtual peer may initiate, accept, or terminate overlay connections based on access restrictions or topology features. Topology characteristics, such as the number of overlay connections or the characteristic path length, can be enforced or may govern the overlay structure. Furthermore, the AOL also makes use of the ALAN control mechanisms, examined below, for implementing its self-organization features. Within the AOL, modules implementing AOL functions can be instantiated whenever and wherever needed. New AOL instances can be spawned and relocated on demand. These features enable an adaptation of virtual overlay structures to varying demands, traffic patterns, and connectivity requirements by launching new overlay connections and new virtual peers. These self-organization features of the AOL enable the creation of a very flexible architecture. The middle layer of an AVP is denoted the virtual control cache (VCC). The VCC provides content caching on the application level, similar to conventional proxies. By maintaining often-requested content in close proximity, such as inside an ISP's domain, efficiency in resource usage and performance gains can be achieved. In addition, the VCC may offer control flow aggregation functions. The lower layer of an AVP is denoted the network optimization layer (NOL). Its main task is the implementation of dynamic traffic engineering capabilities that map the P2P traffic onto the network layer in an optimized way. The mapping is performed in accordance with the performance control capabilities of the applied transport technology.
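The virtual peer and overlay link states monitored by the AOL can be pictured as simple metric records. The following Python sketch is illustrative only: the field names, thresholds, and the way the metrics are combined into a single health score are assumptions for the sake of example, not part of the AVP specification.

```python
from dataclasses import dataclass

@dataclass
class VirtualLinkState:
    """Metrics of one overlay link, as an AOL proxylet might observe them."""
    drop_rate: float        # fraction of overlay packets dropped, in [0, 1]
    throughput_kbps: float  # observed throughput on the overlay link
    delay_ms: float         # observed delay on the overlay link

def link_health(s: VirtualLinkState,
                max_delay_ms: float = 500.0,
                min_throughput_kbps: float = 64.0) -> float:
    """Combine the metrics into one score in [0, 1]; 1.0 means healthy.

    The weighting here is hypothetical; a real AOL could use any
    policy-defined combination of these (and further) metrics.
    """
    delay_score = max(0.0, 1.0 - s.delay_ms / max_delay_ms)
    tput_score = min(1.0, s.throughput_kbps / min_throughput_kbps)
    return (1.0 - s.drop_rate) * delay_score * tput_score

# A lightly loaded link scores near 1; a lossy, slow link scores near 0.
good = VirtualLinkState(drop_rate=0.01, throughput_kbps=512, delay_ms=50)
bad = VirtualLinkState(drop_rate=0.40, throughput_kbps=32, delay_ms=400)
assert link_health(good) > link_health(bad)
```

A score of this kind could then drive the routing and topology decisions described above, for example by preferring overlay links whose score exceeds a policy threshold.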
The AVP architecture may apply traffic engineering for standard IP routing protocols [7], as well as for explicit QoS-enabled mechanisms such as multiprotocol label switching (MPLS) [8]. As a notable feature of our approach, many instantiations of an AVP may run concurrently, within an administrative domain or across domains. AVPs and their constituents can be spawned and relocated on demand in response to a looming or existing performance bottleneck. Bottleneck information is disseminated through application-layer routing functions implemented by the AVPs themselves. The launching of mobile code modules is enabled by the underlying ALAN infrastructure. Figure 6.6 depicts a scenario where two AVPs, AVP 1 and AVP 2, are located within a single administrative domain. AVP 1 consists of three AOL modules and one VCC component, while AVP 2 comprises two AOL modules. Multiple ordinary peers, denoted by "peer," maintain connections to them. The two AVPs maintain overlay connections to each other, and the AOL modules of the AVPs are in command of these connections. In this way, the AVPs can impose control on the overlay connections. The AVP concept, based on the ALAN infrastructure, supports the introduction of structure on demand, as well as the removal of structure when it is no longer needed. Such an approach contrasts with structure that is introduced for optimization purposes but statically, once and for all, as with superpeers or DHT mechanisms. Application-level active networking can thus be considered as combining flexibility with least disruptiveness. ALAN seems to match quite well the scale of variability of P2P systems in time and space. As a result, self-organization can be maintained as the governing principle, even for optimized and structured systems.

Implementation Support

The current instance of the AVP technology is based on the ALAN concept [3, 4]. The ALAN infrastructure allows the rapid deployment of network services and their on-demand provision to specified users or communities. ALAN is based on an overlay technique: Active nodes, which operate on the application level, are strategically placed within the network. These nodes, called execution environments for proxylets (EEP), enable the dynamic loading and execution of active code elements, denoted as proxylets, from designated servers. The resulting services may interfere with data transport and control. ALAN provides mechanisms for EEP discovery, application-specific routing, and service creation by deploying a web of proxylets across the physical infrastructure. In this way, ALAN facilitates the creation of an application-specific connectivity mesh, and the dynamic forming of topology regions. Finally, ALAN provides the basic administrative mechanisms necessary for managing such an architecture. The AVP layer modules are implemented by single or multiple interconnected proxylets. This allows the implementation of the layered AVP architecture in separate components. For instance, one proxylet may execute the AOL functions, whereas an additional proxylet may materialize the virtual control cache or the network optimization layer. This approach facilitates better flexibility and efficiency under the continuously changing conditions of a P2P overlay.
Different configurations of AVPs can be deployed in parts of the network that experience different characteristics, or even in the same network at different times of the day, when conditions have changed. In addition, different proxylets may exist that implement the same layer functions in different ways. This gives further choice over the functionality of the AVP. Having identified earlier the objectives for the control of a P2P overlay, it is time to see how the AVP facilitates these control issues. As shown in Figure 6.6, the AVPs create a realm wherein they continuously exchange information. Each AVP consists of multiple AOL and VCC proxylets, which communicate and collaborate. The exchange of information allows for coordinated control of the overlay. A realm of AVPs is more suitable for evaluating the conditions inside a particular part of a P2P overlay than a single entity, and this knowledge is distributed in order to achieve better results. Again, this capability promotes the flexibility and adaptivity of the AVP approach. An AVP imposes control by providing effectors at the connection level; see Figure 6.7. The effectors include the router module and the connection manager module. The connection manager enforces control by manipulating the connections that peers maintain with each other. This is significantly different from most P2P applications, where the way peers connect to each other is random. By applying connection management, the AVP can enforce different control schemes.
Figure 6.7 The concept of effectors inside the AVP.
The router module governs the relaying of messages on the application level according to local or federated constraints, such as access restrictions or virtual peer state information. The sensor module provides state information for the distributed and collaborative control scheme. In the remainder of this section, we discuss in detail how the suggested effectors are implemented [24].

6.5.2 Implementation of AVPs

The AVP concept is not based on any particular P2P application and does not require any specific P2P components in order to operate. Furthermore, the AVP does not address issues found only in P2P file-sharing applications, but provides a generic performance management framework suitable for any type of P2P application that uses overlays. Nevertheless, for evaluating our software, we use the Gnutella P2P file-sharing protocol as a vehicle and test environment. We chose Gnutella because it is a well-tested, open source, fully distributed P2P network with thousands of users; it is therefore ideal for realistic experiments. Furthermore, through Gnutella, we are able to illustrate several realistic showcases where the AVP technology can provide solutions; some of them are presented below. The showcases presented in the next section are all representative experiments carried out at the University of Würzburg and University College London. In these experiments, as described below, Gnutella protocol version 0.6 [6] was used during participation in the Gnutella network (GNet).

Access Control

One of the core capabilities of the AVPs is access control. An AOL component can create areas of control inside a P2P overlay, where all communications between the controlled domain and the global Gnutella network are examined and managed by the AOL. Its goal is to control who can access the peers and their resources inside the domain of interest. An AOL proxylet imposes access control by blocking and modifying Gnutella packets communicated between the controlled domain and the global Gnutella network. The result is that peers inside the controlled domain see only each other, and become invisible to any peer outside that domain. At the same time, the AOL proxylet becomes the single point of contact between the controlled domain and the global network. Access control as implemented by AOL proxylets can be illustrated by the scenario depicted in Figures 6.8 and 6.9. In Figure 6.8, peers 1 to 5 reside inside the global Gnutella overlay. Peer 2 sends out a Gnutella ping message in order to discover other peers. Under the Gnutella protocol, a ping must be forwarded by the receiving peer, that is, peer 5, to any peer in its vicinity, and every peer that receives the ping must respond with a pong message. Thus, peer 2 receives pongs from peers 1, 3, 4, and 5. In the access-controlled scenario (see Figure 6.9), an AVP has the duty to impose control on peers 1 and 2, which form the controlled domain (CD). In order to facilitate access control, the AOL proxylet establishes connections with all peers. When peer 2 sends out a ping to discover other peers, the AOL proxylet intercepts the ping message and forwards it unmodified to peer 1, which is also part of the CD.
In addition, the AOL proxylet modifies that ping so that it seems to have been initiated by the proxylet; that is, it changes the source connection information of the message: the IP address and the globally unique identifier (GUID). The AOL proxylet then relays the modified message to the outside world.

Figure 6.8 Gnutella conventional forwarding.

Figure 6.9 New routing by AOL proxylet.
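The ping rewriting performed by the AOL proxylet can be sketched as follows. The class and field names below are hypothetical simplifications of a Gnutella v0.6 descriptor, intended only to illustrate how a proxylet might substitute its own GUID and address on outbound messages and later map responses back to the inside originator; this is not the FAIN implementation.

```python
import os

class GnutellaMessage:
    """Minimal stand-in for a Gnutella v0.6 descriptor (illustrative names)."""
    def __init__(self, guid: bytes, kind: str, src_addr: str):
        self.guid = guid          # 16-byte globally unique identifier
        self.kind = kind          # "ping", "pong", "query", ...
        self.src_addr = src_addr  # IP:port of the sending servent

class AccessControlProxylet:
    """AOL proxylet mediating between a controlled domain and the global GNet."""
    def __init__(self, own_addr: str):
        self.own_addr = own_addr
        self.guid_map = {}  # proxylet-assigned GUID -> (original GUID, origin)

    def rewrite_outbound(self, msg: GnutellaMessage) -> GnutellaMessage:
        """Make an inside-originated message appear to come from the proxylet."""
        new_guid = os.urandom(16)
        self.guid_map[new_guid] = (msg.guid, msg.src_addr)
        return GnutellaMessage(new_guid, msg.kind, self.own_addr)

    def restore_inbound(self, msg: GnutellaMessage) -> GnutellaMessage:
        """Map a response from the global overlay back to the inside originator.

        Responses carry the GUID of the request they answer, so the stored
        mapping identifies the peer inside the controlled domain.
        """
        orig_guid, orig_addr = self.guid_map[msg.guid]
        restored = GnutellaMessage(orig_guid, msg.kind, msg.src_addr)
        restored.dest_addr = orig_addr  # deliver inside the controlled domain
        return restored

# Peer 2's ping leaves the domain carrying the proxylet's identity.
aol = AccessControlProxylet("10.0.0.1:6346")
ping = GnutellaMessage(os.urandom(16), "ping", "192.168.1.2:6346")
out = aol.rewrite_outbound(ping)
assert out.src_addr == aol.own_addr and out.guid != ping.guid
```

With this mapping in place, a pong arriving from the global overlay with the rewritten GUID can be restored and routed back to peer 2, which never becomes visible outside the controlled domain.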
Peer 2 receives pongs from peer 1 and the AOL proxylet, and concludes that only these two peers comprise its neighborhood. The AOL proxylet captures all messages originating from the global Gnutella network, modifies them if necessary, and forwards them inside the controlled domain. In this way, the AOL proxylet gathers information about the global Gnutella network that can be indirectly utilized by the peers inside the CD, if so desired.

Routing Control and Load Balancing

The AOL router (gateway) represents the core mechanism for the application of control. One of its main features is the ability to handle several different protocols at the same time. To facilitate the use of these different protocols in an effective and expandable way, the implementation of the router is divided into multiple, partly autonomous elements. In the current version of the AOL proxylet, two different mechanisms and protocols have been implemented: the Gnutella protocol version 0.6 [6], and an AOL intercommunication protocol, denoted as the AOL-to-AOL protocol. A major feature of the AOL-to-AOL protocol is the tunneling of other protocol messages between AOL proxylets; in our example, the Gnutella packets. The routing of Gnutella packets follows the specification of Gnutella version 0.6, but is significantly enhanced. The major enhancement lies in the "probabilistic routing" module, which drops broadcast packets, for example, query messages and ping messages, based on a random value compared to a threshold configured per connection. If the random value drawn for a packet is smaller than the threshold of a connection, the packet is discarded on that connection; the threshold thus acts as a per-connection drop probability. As a result, certain links become more lightly loaded than others. Since the Gnutella protocol is based on event-triggered responses, discarding a limited number of packets does not sacrifice the file-locating capability of the system, as long as sufficient responses remain available, such as responses received on multiple paths and from multiple sources.
An example of probabilistic routing is depicted in Figure 6.10. In this example, four peers (1, 2, 3, 4) are directly connected to an AOL proxylet. The proxylet has configured different threshold values for the links to peer 2 (threshold 0.3), peer 3 (threshold 0.6), and peer 4 (threshold 0.0). Peer 1 sends a message, for example a query (a Gnutella protocol message containing search criteria, used to search the P2P network for files [5]), to the AOL proxylet. The proxylet determines a random value of 0.5 for this packet; without loss of generality, the random value is uniformly distributed over the interval [0, 1]. Since the random value is smaller than the threshold value on the link to peer 3, the AOL proxylet drops the packet along this connection, while it keeps the packet on the links to peers 2 and 4.
Figure 6.10 Operation of the "probabilistic routing" feature of the AOL.
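The per-connection drop decision can be captured in a few lines. The following sketch (function and peer names are illustrative, not the FAIN code) reproduces the Figure 6.10 example: one random value is drawn per packet and compared against each link's threshold, and a link with threshold 0.0 never drops.

```python
import random

def probabilistic_forward(thresholds, rng=random.random):
    """Decide, per overlay connection, whether to relay one broadcast packet.

    `thresholds` maps a neighbor name to that connection's drop threshold.
    One random value is drawn per packet; the packet is dropped on every
    link whose threshold exceeds that value, so the threshold acts as a
    per-connection drop probability.
    """
    r = rng()  # one random value per packet, uniform in [0, 1)
    return {peer: r >= t for peer, t in thresholds.items()}

# Reproducing the Figure 6.10 example with a random value of 0.5:
decisions = probabilistic_forward(
    {"peer2": 0.3, "peer3": 0.6, "peer4": 0.0}, rng=lambda: 0.5)
# Forwarded to peers 2 and 4; dropped on the link to peer 3.
assert decisions == {"peer2": True, "peer3": False, "peer4": True}
```

Adjusting the threshold map at runtime is what the adaptive load control described next amounts to: a degraded link simply receives a higher drop threshold.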
The AOL constantly monitors and evaluates the condition of the overlay. For example, it measures and analyzes the virtual link state and the virtual peer state. If these states degrade, the AOL may adjust the thresholds on the different proxylets and overlay connections. Through adaptive probabilistic routing, the AOL thus performs dynamic load control (for details, refer to [2]). It is important to note that general probabilistic routing of Gnutella packets may lead to superfluous traffic in a federation of AOL proxylets, if inappropriately applied. For example, if a ping message is accepted into a federation of AOL proxylets that form an AVP, and is not dropped, it will generate a pong response. If the pong is later dropped by an AOL proxylet of this federation, then the transmission of the ping message was clearly superfluous. A distinct handling of implied events (i.e., responses such as pongs and query-hits) and initial events (i.e., requests such as pings and queries) may overcome this inefficiency. For initial events, the federation of AOL proxylets must keep track of which peer generated the initial event, including message details such as the message identification (ID). This is achieved by maintaining a table at the ingress AOL proxylet that contains this information, and by adding to initial event packets the location of this ingress information. Initial events, however, are not protected from being dropped in the AOL federation; this enables load-sensitive forwarding of the packets. The table at the ingress node is protected from overflow by being a round-robin table. In contrast to initial events, implied events are exempted from probabilistic routing: They are relayed along the shortest path through the AOL network back to the ingress AOL proxylet.
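The distinct handling of initial and implied events can be sketched as follows. The names and the bounded-table implementation are illustrative assumptions: initial events (pings, queries) may be probabilistically dropped but are recorded at the ingress, while implied events (pongs, query-hits) bypass probabilistic routing and are routed back toward the recorded ingress.

```python
from collections import OrderedDict
import random

class IngressTable:
    """Fixed-size ("round-robin") table at the ingress AOL proxylet that
    remembers which peer originated each initial event; the oldest entry
    is overwritten when the table is full."""
    def __init__(self, capacity: int = 1024):
        self.capacity = capacity
        self.entries = OrderedDict()  # message ID -> originating peer

    def record(self, msg_id, peer):
        if msg_id in self.entries:
            self.entries.move_to_end(msg_id)
        self.entries[msg_id] = peer
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict the oldest entry

def handle(msg_kind, msg_id, peer, table, drop_threshold, rng=random.random):
    """Forwarding decision at an ingress proxylet (a sketch, not FAIN code).

    Initial events may be dropped (load-sensitive forwarding) but are
    recorded; implied events are never dropped and are returned the
    recorded ingress peer for relaying back to the originator.
    """
    if msg_kind in ("ping", "query"):       # initial event
        if rng() < drop_threshold:
            return None                     # probabilistic drop
        table.record(msg_id, peer)
        return "forward"
    return table.entries.get(msg_id)        # implied event: route to ingress
```

A bounded table like this matches the overflow protection described above: entries for old initial events are silently recycled, at the cost of occasionally losing the route for a very late response.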
Topology Control

As mentioned earlier, topology control as enabled by AVPs enforces optimal P2P relations inside an overlay, based on a variety of metrics such as the virtual peer state and the virtual link state. AOL proxylets achieve topology control by selectively setting up or closing connections to other AVPs and to ordinary peers. By shaping the way peers are connected and communicate inside an overlay, self-coordinating AVPs can mediate better performance and greater stability. Based on the virtual peer state, the AOL can initiate or terminate overlay connections between AOL proxylets in order to maintain good connection characteristics inside the overlay; for example, more durable overlay connections. The virtual peer state can be monitored using parameters such as the number of overlay connections maintained, the routing capability of the peers, or the processing load. Consider the following scenario, depicted in Figures 6.11 and 6.12. Figure 6.11 shows three AVPs and two peers in that part of the overlay. The link between AVP 1 and AVP 2 is significantly degraded, affecting the stability and performance of the information exchange between the two peers.

Figure 6.11 Dynamic overlay topology control (before).

Figure 6.12 Dynamic overlay topology control (after).
AVP 1 discovers that the virtual link is degraded, and decides to restructure the overlay topology in order to maintain good overlay characteristics. AVP 1 therefore establishes a link with AVP 3, which is in close proximity (see Figure 6.12), and shuts down the overlay connection to AVP 2. In this way, AVP 1 manages to maintain the connection between the two peers at the desired levels of connectivity and quality, without any knowledge or action on their part. This scenario shows how the AOL proxylets create and terminate overlay connections in order to enable dynamic topology management of the overlay by means of self-organization. Similar schemes, in which certain peers have some influence on the way the overlay is formed, have been proposed elsewhere, such as the Gnutella "ultrapeer" concept. However, an AVP achieves improved adaptivity and flexibility due to its self-organization features, and the coordination between multiple AVPs or multiple AOL proxylets.
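The restructuring step in this scenario can be sketched as a small control loop. The function and AVP names, and the [0, 1] health scale, are assumptions for illustration only: a degraded overlay link is closed and replaced by a link to a nearby AVP, exactly as AVP 1 does in the scenario above.

```python
def restructure(overlay_links, candidates, degrade_below=0.3):
    """Close degraded overlay links and open links to nearby AVPs instead.

    `overlay_links` maps a connected AVP to the current health of the
    virtual link toward it (1.0 is healthy, 0.0 is dead); `candidates`
    lists nearby AVPs that could serve as replacements.
    """
    actions = []
    for avp, health in list(overlay_links.items()):
        if health < degrade_below and candidates:
            alt = candidates.pop(0)        # pick a nearby, healthy AVP
            del overlay_links[avp]         # shut down the degraded connection
            overlay_links[alt] = 1.0       # establish the replacement link
            actions.append((avp, alt))
    return actions

# AVP 1's view: its link to AVP 2 has degraded; AVP 3 is available nearby.
links = {"AVP2": 0.1}
assert restructure(links, ["AVP3"]) == [("AVP2", "AVP3")]
assert "AVP3" in links and "AVP2" not in links
```

Running such a loop periodically in every AOL proxylet, with the health values fed by the monitoring described earlier, yields the self-organizing topology management of the scenario without any central coordinator.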
Resource Management and Caching Using the VCC

An AVP may contain a VCC proxylet, whose task is to provide content caching on the application level by maintaining often-requested content in close proximity. It is envisaged that an AVP will maintain one or more VCC proxylets. This feature is illustrated in Figure 6.13. An AVP that has spawned and configured a VCC controls a domain of peers by applying routing and access controls, as shown previously. All messages generated by the peers inside the domain are monitored by the AOL. Each time a query is made by a peer inside the domain, it is visible only to the other peers in the domain and to the VCC. Moreover, the AOL does not forward the query message outside the domain, but modifies it accordingly, so that peers outside the domain see the AOL as the actual initiator of the query. In this way, the peers inside the domain receive query-hit replies only from other peers in the domain and from the VCC. If the content is available locally, a direct download connection may be established. Otherwise, the AOL, upon receiving a query-hit from outside the domain, downloads the content on behalf of the VCC, where it is ultimately stored. The AOL then sends the peer that requested the content a query-hit pointing to the VCC. If the file is requested again in the future, it can be retrieved directly from the VCC.

Figure 6.13 Caching by the VCC proxylet.
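The query handling with a VCC can be sketched as follows. Class and function names are illustrative assumptions, and `fetch_from_global` stands in for the AOL-mediated download from the global overlay described above; this is not the FAIN implementation.

```python
class VCCProxylet:
    """Application-level cache of often-requested content (illustrative)."""
    def __init__(self):
        self.store = {}

    def lookup(self, name):
        return self.store.get(name)

    def insert(self, name, content):
        self.store[name] = content

def handle_query(name, vcc, fetch_from_global):
    """Serve a query from the VCC when possible; otherwise download the
    content on behalf of the VCC and store it there for future requests."""
    content = vcc.lookup(name)
    if content is None:
        content = fetch_from_global(name)  # AOL acts as query initiator
        vcc.insert(name, content)          # cached inside the domain
    return content

# The second request for the same file never leaves the controlled domain.
calls = []
def fake_fetch(name):
    calls.append(name)
    return b"data-" + name.encode()

vcc = VCCProxylet()
handle_query("song.mp3", vcc, fake_fetch)
handle_query("song.mp3", vcc, fake_fetch)
assert calls == ["song.mp3"]
```

The cache-miss path mirrors the figure: the AOL downloads once on behalf of the VCC, and all later query-hits for the same content point at the VCC inside the ISP's domain.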
6.6 CONCLUSION

P2P services have recently become very popular amid the relentless spread of P2P file-sharing applications. They owe this popularity to highly appealing features: immediate interaction among equal partners, highly autonomous peers, and efficient mechanisms for sharing and pooling resources. P2P services come in many forms of distributed architectures. The spectrum varies from "pure" P2P, to "hybrid" P2P, to "pure client/server" architectures. In addition, pure P2P and hybrid P2P architectures can both be classified as "structured" or "unstructured" systems, depending on whether or not well-organized storage and search mechanisms, such as distributed hash tables, are in place. Any predefined structuring concept, however, may compromise originally attractive features of P2P systems; for example, the equality and autonomy of peers. As an alternative, this chapter has suggested an approach that aims at maintaining the autonomy and adaptive behavior of P2P systems, while simultaneously providing a means for system optimization. ALAN is a natural approach for the dynamic control and management of resources in P2P overlay networks. Mobile software modules running on an ALAN infrastructure can interact flexibly with P2P services on the application layer with minimum disruptiveness. Existing optimization techniques, such as superpeers or distributed hash tables, can be incorporated on demand, or modified and combined flexibly with new, complementary methods. The programmability infrastructure allows the launching and spawning of methods and software modules when and where needed. Spawning, in particular, is based on self-organizing application-layer routing mechanisms and does not need manual or external intervention. The ALAN approach is a compromise between maximum flexibility and minimum disruptiveness, itself a condition for flexibility. ALAN combines the flexibility of classical layered architectures with that of loading mobile code on demand. It seems well suited for the control and management of P2P services and applications. P2P services have a limited lifespan with a relatively high degree of variability.
Static methods of control are limited in their potential for imposing optimizing structure, and tend to lag behind in dealing with continuing evolutions of new application types due to ever shorter cycles of innovation. ALAN seems to be an appropriate vehicle for enabling a degree of flexibility that matches the granularity of evolutionary steps and dynamics, as has been observed with P2P networks. A series of case studies centered round the concept of active virtual peers has demonstrated the potential of this approach. References [1]
MetaMachine Inc., http://www.edonkey2000.com.
[2]
De Meer, H., Tutschku, K., and Tran-Gia, P., “Dynamic Operation in Peer-to-Peer Overlay Networks,” Praxis der Informationsverarbeitung und Kommunikation, (PIK Journal), Special Issue on Peer-to-Peer Systems, June 2003.
[3]
Ghosh, A., Fry, M., and Crowcroft, J., “An Architecture for Application Layer Routing,” Active Networks, May 2000.
[4]
Fry, M., and Ghosh, A., “Application-Level Active Networking,” Computer Networks, Vol. 31, No. 7, 1999, pp. 655-667.
106
Programmable Networks for IP Service Deployment
[5]
De Meer, H., and Tutschku, K., “Dynamic Operation in Peer-to-Peer Overlays,” Poster Presentation Supplement to the Proceedings of Fourth Annual International Working Conference on Active Networks, Zurich, Switzerland, December 4-6, 2002.
[6]
Klingberg, T., and Manfredi, R., The Gnutella Protocol Version 0.6 Draft, Gnutella Developer Forum, 2002, http://groups.yahoo.com/group/the_gdf/files/Development/.
[7]
Fortz, B., and Thorup, M., “Internet Traffic Engineering by Optimizing OSPF Weights,” Proc. of IEEE INFOCOM, 2002, pp. 519-528.
[8]
Xiao, X., et al., “Traffic Engineering with MPLS in the Internet,” IEEE Network, Vol. 14, No. 1, 2000, pp. 28-33.
[9]
Balakrishnan, H., et al., “Looking Up Data in P2P Systems,” Communications of the ACM, Vol. 43, No. 2, February 2003.
[10] Castro, M., et al., “One Ring to Rule Them All: Service Discovery and Binding in Structured Peer-to-Peer Overlay Networks,” SIGOPS European Workshop, France, September 2002.
[11] Stoica, I., et al., “Chord: A Scalable Peer-to-Peer Lookup Service for Internet Applications,” ACM SIGCOMM ’01, San Diego, CA, September 2001.
[12] Ratnasamy, S., “A Scalable Content-Addressable Network,” Ph.D. Thesis, U.C. Berkeley, October 2002.
[13] Kazaa Media Desktop, http://www.kazaa.com.
[14] Ritter, J., Why Gnutella Can’t Scale. No, Really. Available at http://www.darkridge.com/~jpr5/doc/gnutella.html.
[15] Azzouna, N., and Guillemin, F., “Analysis of ADSL Traffic on an IP Backbone Link,” GLOBECOM 2003, San Francisco, CA, December 2003.
[16] Schollmeier, R., “A Definition of Peer-to-Peer Networking for the Classification of Peer-to-Peer Architectures and Applications,” First International Conf. on Peer-to-Peer Computing (P2P2001), Linköping, Sweden, 2001.
[17] Korpela, E., et al., “SETI@home: Massively Distributed Computing for SETI,” Computing in Science and Engineering, Vol. 3, No. 1, January 2001.
[18] Keahey, K., et al., “Computational Grids in Action: The National Fusion Collaboratory,” Future Generation Computer Systems, Vol. 18, No. 8, October 2002, pp. 1005-1015.
[19] Anonymous, “Gnut: Console Gnutella Client for Linux and Windows,” http://www.gnutelliums.com/linux_unix/gnut/, 2001.
[20] Singla, A., and Rohrs, C., “Ultrapeers: Another Step Towards Gnutella Scalability,” Gnutella Developer Forum (http://groups.yahoo.com/group/the_gdf/files/Proposals/Ultrapeer/Ultrapeers_1.0_clean.html), 2002.
[21] Stokes, M., “Gnutella2 Specification Document – First Draft,” Gnutella2 Web site (http://www.gnutella2.com/gnutella2_draft.htm), 2003.
[22] Barabasi, A. L., and Albert, R., “Emergence of Scaling in Random Networks,” Science, Vol. 286, 1999.
[23] Cohen, E., and Shenker, S., “Replication Structures in Unstructured Peer-to-Peer Networks,” ACM SIGCOMM, 2002.
[24] Birman, K. P., et al., Isis–A Distributed Programming Environment: User’s Guide and Reference Manual Version 2.1, Dept. Computer Science, Cornell Univ., Ithaca, NY, September 1990.
[25] Lv, Q., Ratnasamy, S., and Shenker, S., “Can Heterogeneity Make Gnutella Scalable?” Proc. of the 1st International Workshop on Peer-to-Peer Systems (IPTPS '02), Cambridge, MA, March 2002.
Chapter 7

Programmable Networks’ Requirements

This chapter presents the results of the FAIN [4] project in identifying and defining requirements for programmable networks. These requirements are based on business, technical, and application-oriented considerations. The main sections of this chapter therefore deal with:

1. The network operator’s expectations of network programmability and active networks.
2. An enterprise model of active networks.
3. The description of three reference networking applications that can benefit from active networks in one way or another.
4. The identification of a set of technical FAIN system requirements based on the operator’s expectations, the enterprise model, and the reference applications.

7.1 INTRODUCTION

The requirements for active networks guided the implementation activities in the FAIN project; a comprehensive definition and analysis of these requirements is provided in [5]. This section introduces the need to define a set of requirements for network programmability and active networks, and then states the public network operator’s current expectations of these technologies. These expectations have guided work within the overall FAIN project. Next, the enterprise model (EM) is defined and demonstrated by a simple example. The EM defines the actors involved, the roles these actors play, and the relationships among these roles. Of particular importance in FAIN are the network infrastructure provider, the active network service provider, and the service provider, which provides active services to its customers based on service components obtained from a service component provider. Although similar approaches are used in related work (e.g., in TMN [16, 24, 34, 35] and in
Telecommunication Information Networking Architecture (TINA) [12, 23, 32, 33]), the FAIN EM [5] is, to our knowledge, the first one designed for active networks. Moreover, requirements for the FAIN system [6, 7, 8, 9, 10] and architecture are also derived by following an application-oriented approach. For this purpose, three applications that can benefit from active networks [1, 2, 3, 6, 17, 20, 21] are presented: Web service distribution, reliable multicasting, and virtual private networks. Besides being convincing applications for the usefulness of active networking technology, it is expected that, by following this application-centered approach, the resulting architecture will support a large variety of applications.

7.2 OPERATORS’ EXPECTATIONS OF ACTIVE NETWORKS

7.2.1 Overview

Active and programmable networks are networks in which the functionality of (some of) their network elements is dynamically programmable; that is, executable code can be injected into a network element (by any of several methods not discussed here), thereby creating new functionality at run time. By allowing this, active networks have the potential for unprecedented flexibility in telecommunications. The key question from the (public network) operator’s point of view is: How can we exploit this flexibility for the benefit of both the operator and the end user? The answer lies in the promise that active networks hold for the following aspects:

• Rapid deployment of new services;
• Customization of existing service features;
• Scalability and cost reduction in network and service management;
• Independence of the network equipment manufacturer;
• Information network and service integration;
• Diversification of services and business opportunities.
The benefit of active networks for each of these aspects is explained below, illustrated by one or more real-life examples.

7.2.2 Speeding Service Deployment and Customization

Currently, the deployment of new services and service features by the network operator in its network can only be carried out with the consent of the equipment
manufacturer, due to the equipment’s closed nature: Current network elements are closed in the sense that their functionality is “baked in” during manufacture by the vendor. Any change to the functionality of a network element generally requires a lengthy standardization phase and a technology diffusion phase before becoming operational. This results in undesirable delays when planning the deployment of new services and features. The active network approach, by contrast, with its open nature based on standardized interfaces, will allow the network operator to freely select different service developers to implement and deploy new services and features according to market-driven requirements. Depending on the service design and implementation time, this can be done nearly on demand. Interestingly, vendors could also use active network properties to upgrade network elements with their latest network software releases.

Besides the introduction of new services, active networks can be used to modify an existing service at run time. This property is extremely useful for enabling an operator to offer highly personalized services to its customers. Note that service customization is not about configuring the parameters of a service, but about modifying the behavior of a basic resident service according to the requirements of the end user. There are various ways to pass the user requirements and the modifying code to the appropriate nodes and resident services; for example, by means of (router) plug-ins. An interesting example of service customization is multicast: At active nodes there is a resident, basic multicast forwarding service, which could be enhanced with, for instance, reliability, security, or real-time operation.

7.2.3 Leveraging Network and Service Management

In this section we argue that active networks make network and service management more feasible.
Key mechanisms to achieve this are the outsourcing and distribution of management tasks (decisions). This directly results in better, more scalable solutions, and leads to cost reductions. These aspects are illustrated below using the deregulated wholesale telecommunications market as an example; the example is therefore of special interest to incumbent operators.

From a regulatory point of view, the former public network operators (exPNOs) that now dominate the network provider market are forced to open their network infrastructure to third-party service providers on the basis of the European Union Open Network Provision (ONP) directive. This directive mainly addresses traditional telephony networks, but its application to the information technology (IT) communications infrastructure in general is to be anticipated. In order to comply with the ONP directive, a network operator will allow third-party service providers to use its network infrastructure under service-provider-specific circumstances regarding, for example, accounting and billing. Due to the proprietary nature of the current network infrastructure (hardware, protocols), the management of each third-party
service provider’s specific requirements must remain with the network operator (the third-party service provider has no access to network element management functions). This results in complex and costly operation and maintenance overhead, as there will be very large numbers of third-party service providers whose network usage requirements must be individually supported by the network operator. These additional costs cannot be passed on to the third-party service providers because of the ONP directive; nor can they be reflected in the network operator’s pricing structure, as this would reduce competitiveness.

The active network concept allows the network operator to delegate full management responsibility to the third-party service providers, thereby complying with regulatory demands while avoiding the management overhead described above. The existence of third-party service providers (in addition to exPNO network operators) is a cost-effective differentiation of the standard value chain in providing IT communications and services. The cost argument outlined above applies generally to any network operator with a proprietary network infrastructure facing potentially very large numbers of third-party service providers. By means of an active network approach based on open, standardized protocols and interfaces to network management, the network operator may achieve substantial cost reductions through delegation of management responsibility to the third-party service providers.

7.2.4 Decreasing Vendor Dependency

An important aspect of the active network concept is that it employs open, standardized interfaces. Here lies a fundamental difference between the active network concept and the current telecommunications paradigm: In active networks, the way functionality is to be implemented is prescribed (by the API), but the functionality itself is not prescribed.
In the current paradigm, the functionality itself is prescribed (by protocols), while its implementation in network elements is left for the vendor to choose. The important consequence of this paradigm shift is that communicating entities no longer have to adhere to the rigid communication patterns imposed by protocols. Instead, they can dynamically introduce the required functionality at network nodes by injecting executable code into them. As a result, network platform provisioning becomes increasingly decoupled from network software provisioning, ultimately leading to two corresponding, separate types of network vendor. Relatedly, the active network concept will make operators less dependent on any particular vendor, owing to increased competition in the vendor market, where network software vendor A develops innovative services and features for vendor B’s network platform.
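The idea of introducing functionality by injecting executable code, rather than by standardizing a new protocol, can be sketched in a few lines. This is a toy illustration only: the node class, handler names, and packet format below are our own assumptions, not part of any FAIN or standard interface.

```python
# Illustrative sketch: a toy "active node" whose packet-handling behavior
# is extended at run time by injected code, without a vendor upgrade.

class ActiveNode:
    def __init__(self):
        # Resident default behavior: plain forwarding.
        self.handlers = {"default": lambda pkt: ("forward", pkt)}

    def inject(self, name, code):
        """Install injected code (any callable) as a new packet handler."""
        self.handlers[name] = code

    def process(self, pkt, handler="default"):
        return self.handlers.get(handler, self.handlers["default"])(pkt)

node = ActiveNode()

# A service provider injects new functionality -- here, a handler that
# duplicates packets (e.g., toward two next hops) -- at run time.
def duplicate_handler(pkt):
    return ("duplicate", [pkt, pkt])

node.inject("dup", duplicate_handler)

action, result = node.process({"dst": "10.0.0.1"})              # resident behavior
dup_action, copies = node.process({"dst": "10.0.0.1"}, handler="dup")
```

The point of the sketch is that the node prescribes only the handler interface (the API); what a handler does is supplied by whoever injects it.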
The operator’s independence of a single network equipment manufacturer, which is currently not possible with proprietary network infrastructure, is particularly relevant for network (element) management. The network operator will be able to select different manufacturers of switches, routers, and so on, according to market-driven requirements such as price, availability, and functionality; managing them collectively then takes place via a standardized management interface that is downloaded onto each network element.

7.2.5 Integrating Information Networks and Services

Over roughly the past 5 years, a new paradigm has been adopted for the design, deployment, and operation of telecommunication networks. Specifically, the paradigm shift concerns the use of distributed object-oriented platforms [CORBA [18], the distributed component object model (DCOM), and the Java virtual machine/remote method invocation (JVM/RMI)] and service frameworks as a new software basis for networks and information services. One of the major consequences of this new paradigm is to enable the integration of networks and information services within the same systems software platform.

Until recently, networks and information services have almost always been designed and implemented as separate systems that are then integrated during operation. For example, the existing intelligent network (IN) service control functions (SCF) and service management functions (SMF) are designed without a common conceptual framework, and they are implemented using different system supports (service nodes), both in terms of hardware and system service components. Seldom is there a service framework to facilitate the interactions between services running on SCF and SMF supporting systems. Therefore, the design and implementation of services that require interactions between SCF and SMF supporting systems is sometimes very difficult.
With the demand for new services, requirements of this kind are becoming more important. Another example is the provision of services that allow customers to access and manage their accounts over the Internet: Typically, customers would like to access their account from the Web whenever they want, in order to review the cost of their communications, change parameters and service options, and so on. Providing this kind of service requires interactions between IN service nodes and a Web-based information service provided by operators; a requirement that is sometimes hard to meet due to the structure of the existing networks and information services.

We expect active networks to facilitate the realization of service platforms based on the new paradigm. The integration of information and network services in particular can be facilitated by active networks, via the definition, implementation, and use of standard open application programmable network interfaces. Beyond the need for interactions between service components of the same operator, such interfaces can also be used for interactions between
service components that belong to different operators, and even between network operators and some of their clients; that is, service providers. The potential impact of open and programmable interfaces provided by an active network on telecommunication networks and information services will be similar to that of socket interfaces and other Internet programming standards on the recent development of the Internet and the Web.

7.2.6 Diversification of Services and Novel Business Opportunities

Due to new regulatory environments and market demands, network operators need to invent new services and push their traditional limits in terms of the types of services they can provide. Existing networks have limitations with respect to new types of services: Because of the structure of the networks, it is usually very difficult, and sometimes impossible in practice, to provide certain new services, and where it is possible it may be wasteful in terms of network resources. For example, an operator can provide video-on-demand services with an acceptable QoS based on resource reservation, but problems will arise if the operator wants to extend the capability of its network to support more users or customers. Likewise, it will be very hard for a network operator to provide VPN services to client companies with varying QoS requirements in terms of reliability, dependability, timely delivery, scalability, or security; a service profile that operators can regard as a very important business opportunity. The same holds for policy-based network management, that is, the management of networks taking into account QoS requirements specified by client organizations. Other examples, such as the provision of e-commerce platforms, remote sensing, and control operations for industrial plants, are also business opportunities for network operators; such services generally require variability and dynamics in QoS requirements.
The ability of active networks to adapt network services to the needs of target applications (by allowing application programs to substitute their own communication services, which can control and manage network resources adequately), and the use of application programming interfaces to access network elements, can be seen as key networking features that will facilitate the diversification of services and enable new business opportunities.

7.3 FAIN ENTERPRISE MODEL

In this section, we introduce the FAIN enterprise model. From an abstract point of view, an enterprise model consists of: (1) actors, that is, people, companies, or organizations of any kind that own, use, support, and/or administer parts of an (active) networking system; (2) the various (business) roles these actors play; and (3) the contractual relationships between these roles, that is, the reference points.
The term “business role” refers to the specific behavior, properties, and responsibilities assigned to an actor with respect to an identifiable part of the networking system, and to the related activities that are necessary to keep this part of the system operational. The set of FAIN actors is mapped onto the set of business roles occurring in the FAIN context; that is, each role is understood to be played by at least one actor, and an actor may perform more than one role. The relationships between roles are expressed as reference points, which determine the interactions (interfaces and action sequences, together with related properties and constraints, granted rights, and so on) between the actors in the respective roles.

The FAIN actors are the “usual suspects”: (1) the providers, that is, companies operating networks (by managing the respective hardware and software resources), companies offering services over the network (employing their own resources or those of the traditional network operators), and so on; (2) the users, that is, individuals or companies that use the proffered connectivity and services; and (3) the manufacturing companies that support the providers by developing (and vending) the necessary hardware and software components. The manufacturers are not the main focus of the FAIN context, even though their role is an important one; they are briefly reflected in the enterprise model in order to make the picture complete.

The FAIN enterprise model describes only the business roles that may be taken by the above-listed actor companies (and, obviously, each actor company may act in more than one role if this suits its business). The purpose of defining business roles is to partition an active networking system in such a way that responsibilities are identified, clearly separated, and assigned to different actors.
Consequently, the enterprise model must determine the interactions that are expected to take place between the actors acting in the respective business roles, in order to allow for independence and “composition” of the partial systems provided by the actors. The reference points are the means to describe these interactions. They comprise the administrative relations, such as how to establish and handle the involved business relations and legal issues (the “business model” in the terminology of the TeleManagement Forum (TMF) [14]), and the technically necessary interaction schemes that must be supported by the respective interacting components: their functionality and capabilities, the qualities they must guarantee, and so on. In short, these are the technical (interface-related) contracts that must be established between the components in the realm of the various actors in their respective roles.

7.3.1 Roles

The business model for active network service provisioning is depicted in Figure 7.1. The gray-shaded business roles are considered to be within the focus of the FAIN project; the other roles in the figure are not detailed beyond the brief
description given in the text that follows. The FAIN business model is composed of the following roles:

Consumer (C): The consumer is the end user of the active services offered by a service provider. In FAIN, a consumer may be located at the edge of the information service infrastructure (i.e., be a classical end user), or it may be an Internet application, a connection management system, and so on. In order to find the required service among those proffered by the service providers, the consumer may contact a broker and, for example, use its service discovery features.

Service provider (SP): A service provider composes services, including active components delivered by a service component provider; deploys these components in the network via the active network service providers; and offers the resulting service to consumers. The services provided by a service provider may be end user services or communication services. A service provider may join with other service providers in order to build more complex services. Descriptions of the offered services are published via the broker service.

Active network service provider (ANSP): An active network service provider provides facilities for the deployment and operation of active components in the network. It provides one or more virtual environment facilities for active service components to service providers. Together with the network infrastructure provider, it forms the communication infrastructure.

Network infrastructure provider (NIP): A network infrastructure provider provides managed network resources (bandwidth, memory, and processing power) to active network service providers.
It offers a network platform, including AN components, to the active network service providers, who can build their own execution environments, and proffers basic IP connectivity,1 which may be based upon traditional transmission technologies as well as emerging ones (both wired and wireless).2

Service component provider (SCP): A service component provider is a repository of active code that may be used by the consumer and the service provider. A service component provider may publish its components via a broker.
1 Since FAIN focuses on Internet technology, a basic network infrastructure has the following characteristics: the presence of a physical network; an IPv4 and/or IPv6 forwarding mechanism; a default IP routing protocol; if RP7 is implemented, a peering service to other NIPs or ISPs; and an element management interface.
2 The service offered by the NIP may also be used by current Internet service providers.
Service component manufacturer (SCM): A service component manufacturer builds service components and active code for active applications and offers them to service providers, consumers, and service component providers in an appropriate form (e.g., as binary or as source code). The service component manufacturer is not a genuine FAIN business role, since it is not directly involved in the main FAIN business process.

Middleware manufacturer (MM): An active middleware manufacturer provides platforms for active services (including the execution environments) to the active network service provider. The active middleware manufacturer is not a genuine FAIN business role, since it is not directly involved in the main FAIN business process.
[Figure 7.1 The FAIN business model. The figure shows the roles in the focus of FAIN (Consumer, Service Provider/Retailer, Active Network Service Provider, and Network Infrastructure Provider), connected by reference points RP1-RP7 among themselves and to the Broker, Service Component Provider, Service Component Manufacturer, Active Middleware Manufacturer, and Hardware Manufacturer.]
Hardware manufacturer (HM): A hardware manufacturer produces programmable networking hardware for the network infrastructure provider. The hardware manufacturer is not a genuine FAIN business role and will not be detailed further in the FAIN enterprise model.
Retailer (RET): The retailer [following the OMG Telecommunications Service Access and Subscription (TSAS) model] acts as the access point to the services provided by service providers to customers. It must be capable of negotiating and managing the end user contract (e.g., handling authentication and security issues as well as accounting and billing), and it must allow service lifecycle management by the user (subscription and customization of a service, administrative interactions, and so on).

Broker (BR): The broker collects, stores, and distributes information about: (1) available services (i.e., those offered by the service providers); (2) the administrative domains in the active networking system; and (3) service components (i.e., code offered by the service component providers). The FAIN broker basically resembles the “broker” role in the TINA business model.

7.3.2 Reference Points

Among the business roles, the following reference points are defined for inclusion in the FAIN architecture as open interfaces:

RP1 (SCP-SP): This reference point provides contracting, subscription, and transfer of components in a client/server situation; information about execution environments should be exchanged to determine which components to transfer.

RP2 (ANSP-SP): The SP delivers components to be injected into the network by the ANSP; the SP requests generic network services, processing resources, and code injection from the ANSP; the ANSP allocates resources depending on availability; the ANSP provides information about the capabilities of network services; the SP is responsible for service session management, according to the agreed service level; the ANSP is responsible for network service management, according to the agreed service level.

RP3 (NIP-ANSP): The NIP provides and allocates physical active networking resources; the ANSP requests resources (network, processing, transmission, and so on) from the NIP in order to provide network services, and is responsible for the injection of active code.
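To make the RP2 (ANSP-SP) interactions more concrete, the following sketch models the resource request and code injection exchange between the two roles. The classes, method names, and resource units are our own illustration under stated assumptions; they do not come from the FAIN specification.

```python
# Hypothetical sketch of RP2 (ANSP-SP): the service provider requests
# processing resources and code injection; the ANSP allocates resources
# depending on availability.

class ANSP:
    def __init__(self, capacity):
        self.capacity = capacity   # abstract processing resources available
        self.deployed = []         # injected service components

    def allocate(self, amount):
        # "ANSP allocates resources depending on availability."
        if amount > self.capacity:
            return False
        self.capacity -= amount
        return True

    def inject_component(self, component):
        self.deployed.append(component)

class ServiceProvider:
    def deploy_service(self, ansp, component, resources):
        # RP2: request resources, then hand the component to the ANSP.
        if not ansp.allocate(resources):
            return "rejected"
        ansp.inject_component(component)
        return "deployed"

ansp = ANSP(capacity=100)
sp = ServiceProvider()
status = sp.deploy_service(ansp, "vpn-service-logic", resources=40)
over = sp.deploy_service(ansp, "video-cache", resources=80)  # exceeds remainder
```

The second request fails, illustrating that allocation decisions remain with the ANSP even though service session management stays with the SP.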
RP4 (SP-C): The consumer requests and uses services offered by the service provider; the SP provides the service with its functionality. This reference point also subsumes the role of the retailer as service access point (i.e., it determines the service contract).

In addition, RP5 (SP-SP), RP6 (ANSP-ANSP), and RP7 (NIP-NIP) deal with federation among roles of the same kind. For instance, a service provider may provide features of the services of other cooperating service providers, and it may
use these service providers to offer facilities of different service component providers with which it is not directly associated.

7.4 NETWORK PROGRAMMABILITY AND ACTIVE APPLICATIONS

7.4.1 Introduction

In this section, an application-oriented approach is used to derive requirements for active networks. The idea is to select applications that we expect to benefit from the distinguishing property of active networks, that is, the possibility to download code into the network and to distribute intelligence flexibly. Among the multitude of applications that have been presented in the literature and discussed in FAIN, we have selected the following:

• Virtual private networks: Virtual private networks are private networks built on top of a public network like the Internet. Due to the diversity of requirements for virtual private networks, no uniform standard for VPNs has been established so far; for this reason, the IETF has stopped standardization activities in this area. Nevertheless, several building blocks required for VPNs are standardized and can be used: techniques for tunneling, support for quality of service, encryption, and routing. From active networks, we expect to be able to flexibly compose these basic building blocks in order to implement the requirements of a particular customer, and to deploy the resulting VPN service logic into the network.

• Reliable multicasting: Protocols for unreliable best-effort multicasting have been available both in standards and in products for more than 10 years, and intensive research and development into various extensions (multicast using the properties of underlying link layers such as ATM, reliable multicasting, and multicasting with QoS) has been conducted. Nevertheless, none of the proposals, not even unreliable best-effort multicasting, has been deployed on a larger scale. In our opinion, this is partly due to the diversity of application requirements, which makes the development, deployment, and operation of a uniform, one-size-fits-all multicasting protocol hard or even impossible. Active networks may help in developing and deploying application-specific multicast protocols with characteristics tailored to the requirements of the application. A prominent example is networked games, which impose challenging requirements; for example, with respect to QoS, reliability, and group management.

• Web service distribution: The World Wide Web is certainly the most popular and important application of the Internet. The underlying
technologies, like HTML, XML, and HTTP, are not only important for “Internet surfing,” but for mission-critical business applications as well. However, protocols like HTTP were originally designed as a pure client/server approach, making scalable, reliable, high-performance implementation of Web services difficult. Current approaches address particular issues in an ad hoc way (e.g., distribution of content via content distribution networks), but without addressing the issue on an architectural basis. We expect to benefit from activeness in the network in two ways. First, activeness can be used to distribute code that filters out Web traffic: The downloaded code constantly monitors network-internal conditions (bandwidth, availability of servers) and distributes the filtered traffic appropriately to meet objectives such as high performance or availability. Second, activeness can be beneficial for distributing not only content, but also the service logic of Web services, which is clearly desirable for scalable Web services with multimedia content.

In the following sections, these applications are elaborated in more detail, using a use-case-oriented approach. Each of these sections is structured as follows:

• Introduction: The introduction briefly describes the application, justifies the use of active networking technology for implementing it, and gives references to related work.

• Embedding of the application: In this section, the relevant actors are described and mapped onto the actors of the FAIN enterprise model. In addition, the networking resources necessary for running the application are stated.

• Use cases: This section contains several use cases modeling the application. For each use case, we start with a summary, describe the problem to be solved, and describe the resources and preconditions required to run the use case. In addition, one or several execution scenarios are given in the form of a sequence of execution steps, which are mapped onto the respective reference points of the enterprise model. Where necessary, these may also contain alternative scenarios or exceptional situations. Also, an assessment of the relevance of the different use cases is given.
Throughout, the emphasis is on modeling the applications so as to work out the AN-specific aspects.
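As a concrete illustration of the building-block composition anticipated for the VPN application above, the following sketch composes tunneling, encryption, and QoS-marking stages into per-customer service logic. The building blocks are toy stand-ins (the “cipher” merely reverses the payload), and all names, addresses, and DSCP values are illustrative assumptions.

```python
# Sketch: composing standardized VPN building blocks (tunneling,
# encryption, QoS marking) into customer-specific VPN service logic.

def encrypt(pkt):
    # Stand-in for a real cipher: reverse the payload string.
    pkt = dict(pkt)
    pkt["payload"] = pkt["payload"][::-1]
    return pkt

def mark_qos(pkt, dscp):
    # Stand-in for DiffServ marking: attach a DSCP value.
    pkt = dict(pkt)
    pkt["dscp"] = dscp
    return pkt

def tunnel(pkt, endpoint):
    # Encapsulate the packet for transport across the public network.
    return {"outer_dst": endpoint, "inner": pkt}

def build_vpn_pipeline(endpoint, dscp, secure=True):
    """Compose building blocks according to one customer's requirements."""
    def pipeline(pkt):
        if secure:
            pkt = encrypt(pkt)
        pkt = mark_qos(pkt, dscp)
        return tunnel(pkt, endpoint)
    return pipeline

# One customer's VPN logic, ready to be deployed into the network.
customer_vpn = build_vpn_pipeline(endpoint="192.0.2.1", dscp=46)
out = customer_vpn({"payload": "abc", "dst": "10.1.1.1"})
```

A different customer (say, one without encryption but with a different QoS class) would simply get a differently composed pipeline, which is exactly the flexibility the VPN application expects from active networks.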
7.4.2 Active Web Services

In the application “active Web services” (AWS), active network technology is used to provide scalable, reliable, large-scale Web services. In the following subsections, we describe and motivate this application, which will be modeled by use cases later on.

7.4.2.1 Application Description

Web technology has been the single most important technology responsible for the explosive growth of the Internet. It is not only the uniform technology that accommodates Internet surfing by end users; it is also mission critical for business applications, both between and within enterprises. In this context, a number of requirements need to be addressed:

• To accommodate an increasing number of customers.
• To provide scalable, reliable, and efficient Web services.
• To be able to quickly add new services and modify existing services.
• To reduce end-to-end traffic and server load.
To meet this diversity of requirements coming from customers and service providers, we propose to design and implement an environment for Web services based on active networks. In order to present our ideas, we first revisit the state of the art in Web technology. Web technology (e.g., HTTP, HTML) was developed originally as a pure client/server architecture, where end users are connected with the Web service provider site via an application-unaware IP network. Usually, such an IP network is composed of an access network for both the end user and the Web service provider site, and an IP core network (see Figure 7.2).
Figure 7.2 Physical network architecture of Web services.
This simple approach, where the “intelligence” is mainly located in the client and the server, was one of the main reasons for the enormous success of the Internet. However, it is also well known that over the public Internet, this pure client/server model on top of a “dumb” IP network has been shown to lack important characteristics that are required for current and future Internet applications. These include reliability, performance, and scalability, as well as the possibility of taking network-internal conditions (e.g., the available bandwidth for particular end users) into account in service provisioning. In order to overcome these deficiencies, the network infrastructures have already been extended in various ways. Some solutions, which have been proposed and deployed in response to the growing demands of existing and new applications, are:

• Web caching, where static content is stored in caches within the network to reduce the response time for subsequent requests;
• Load balancing/layer 7 switching, where requests are distributed to several servers in a server farm. To clients, this server farm appears as one virtual Web server, reachable at one IP address, which in reality is served by multiple servers;
• Content distribution networks, where large-volume content such as images or video is pushed onto dedicated content distribution servers distributed worldwide. These servers all have their own IP address and domain name server (DNS) name, and traffic is redirected based on features of the HTTP protocol.
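The layer 7 switching idea in the list above can be sketched as follows. This is an illustrative sketch only, not part of the FAIN architecture: the backend addresses and the choice of a hash-based sticky policy versus round robin are assumptions made for the example.

```python
from itertools import cycle

# Hypothetical server farm behind one virtual IP address; these
# addresses are illustrative assumptions, not from the book.
BACKENDS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

_round_robin = cycle(BACKENDS)

def pick_backend(client_ip: str, sticky: bool = True) -> str:
    """Layer-7 switch policy sketch: with sticky sessions, hash the
    client address so a given user keeps reaching the same real server;
    otherwise spread requests round robin across the farm."""
    if sticky:
        return BACKENDS[hash(client_ip) % len(BACKENDS)]
    return next(_round_robin)
```

A real layer 7 switch would inspect HTTP headers (cookies, URLs) rather than only the client address, but the dispatch decision has this shape: many real servers, one virtual server visible to clients.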
See also Figure 7.3. However, these can only be considered ad hoc solutions to specific problems. For instance, caches only deal with static content. Important features of today’s Web services, such as the observation of page hits of a particular page by the service provider or a third party, can still only be implemented by relying on centralized servers. We therefore believe that today’s Web services can already benefit from an architectural framework that supports an easy, rapid, and uniform way of deploying new solutions that need intelligence within the network. In addition, more network services are requested by customers and designed by service creators, which clearly shows that there is a demand for extensibility and flexibility in the ISP’s and operator’s infrastructure. A good example is a personalized stock quote service. Stock quotes are frequently changing data that can hardly be cached. If, in addition, a service for distributing stock quotes is to be personalized (e.g., with respect to the portfolio of a certain user), it becomes inevitable that with current approaches a high network and computing load is generated on a centralized server. A distributed architecture [19] would permit personalization and information distribution within the network, and is clearly more scalable.
Figure 7.3 Web caches, content, and load distribution servers.
The basic idea of our active Web service infrastructure is to exploit the capabilities of activeness in the following two ways:

• On the one hand, so-called “service nodes” within the network are implemented using AN technology. These service nodes are active Web servers that can be programmed by the Web service provider. They provide, for instance, persistent storage, to allow the service provider to store content locally, and an environment to execute Web service logic (e.g., JavaBeans). The service provider downloads content and service logic onto them, and can in this way implement features such as content distribution and personalization, aggregation of user replies, or fast response times.
• On the other hand, so-called “redirection nodes” are also implemented using AN technology. They provide features that allow the node to filter out HTTP traffic, to build service sessions (i.e., to deal with per-user state), and to forward the traffic of a particular session to a service node. Onto these nodes, code is downloaded that observes the network load and the load and availability of servers, and, based on this information, determines a strategy for redirecting traffic to the service node that is most suitable for a given Web service/user.
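The redirection strategy that such downloaded code might compute can be sketched as follows. The node records and the least-loaded policy are illustrative assumptions; a real redirection node would learn load and availability from its own monitoring code and could weigh network distance as well.

```python
def choose_service_node(nodes):
    """Redirection-node sketch: among the service nodes currently
    reported as available, forward the session to the least-loaded one.
    Raises if no service node can take the session."""
    candidates = [n for n in nodes if n["available"]]
    if not candidates:
        raise RuntimeError("no service node available for this session")
    return min(candidates, key=lambda n: n["load"])

# Hypothetical monitoring snapshot of three service nodes.
nodes = [
    {"name": "sn1", "load": 0.7, "available": True},
    {"name": "sn2", "load": 0.2, "available": True},
    {"name": "sn3", "load": 0.1, "available": False},  # down: skipped
]
```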
Of course, both kinds of functionality may be combined within one physical node, and there may also be nodes that provide a mixture of both. However, we usually expect them to be separate, both physically and logically. On the one hand, programming a network (e.g., load-balancing algorithms, routing algorithms) requires different skills than programming Web services and, hence, will be carried out by different actors. On the other hand, both kinds of nodes will usually be implemented with different design objectives: redirection nodes will be some sort of router with an emphasis on network throughput; service nodes will require at least some of the functionality of Web servers (i.e., large computing power, persistent storage). Also note that both kinds of activeness are complementary and can be implemented independently. The redirection of traffic can also be beneficial if a distributed set of Web servers is implemented and operated completely independently of AN technology. Conversely, a distributed, AN-based implementation of a Web service infrastructure can also be used with conventional techniques for load balancing. In the following paragraphs, we call any Web service that is implemented using some kind of activeness an “active Web service.” The overall scenario is depicted in Figure 7.4. Both redirection nodes and service nodes will usually be physically located within the access network of the end user, within the access network of the Web service provider, or connected via a separate access network. For this reason, we may also call both of them “active Web gateways.” Note that we assume that the core network is not an active one, and only provides basic IP connectivity. One design goal is that the overall setting is fully transparent to the end user; that is, the end user uses the Web service via an ordinary Web browser, using standard protocols such as IP or HTTP. This assumption takes into account that updating several hundreds of millions of Web clients is not only infeasible, but also unnecessary. To sum up, the proposed application can be viewed as a continual evolution of the existing Web infrastructure:

• Traditionally, the Web has been based on a “dumb” IP network, offering only pure best-effort IP routing and forwarding capabilities.
• Recently, the Web has been enhanced by installing caches, switches, and content distribution networks. However, the “intelligence” (i.e., the service logic) still resides on centralized servers operated by the Web service providers.
• In the future, the Web might be a fully distributed computing infrastructure, with service intelligence distributed on the IP and/or the application layer.
Figure 7.4 Service nodes and redirect servers implement active Web services.
7.4.2.2 Motivation for the Use of Active Network Concepts

With active network technology, Web traffic can be handled flexibly within the network. Potential benefits include, among others:

• Network-aware Web services: Contrary to existing Web services, such services can operate based on information such as the available bandwidth on access and network links, the network topology, and the load of servers. Because this information is not available at the client or server side, such services must be implemented with AN technology within the network.
• Ease of service programming and management: Active networks eliminate the need for cumbersome ad hoc solutions to specific problems that require separate management; they provide a common ground for the deployment of new services and a mechanism inside the network that unifies the management of these mechanisms and services.
• Distribution of service logic: Service logic is executed at several locations, including specific points inside the active network, which is potentially advantageous for large-volume services, fine (per-user) granularity of services, and new service features. With existing solutions such as caches or content distribution networks, only content, but not service logic, can be located within the network.
• Dynamic, autonomous adaptation: The task of operators on a service site (service provider, network provider) grows as the number of network customers/consumers grows, so provisioning should be dynamic. Besides, active nodes may cooperate with each other without operator (human) intervention.
7.4.3 Active Multicasting

7.4.3.1 Overview

The FAIN multicast application [22] use case specifies requirements related to network infrastructure services and the business roles for the design, implementation, and use of an active IP network that will support various kinds of multicast communications. We intend to pave the way both for the reengineering of state-of-the-art multicast IP technologies [37], and for the development of new multicast communication services on active IP networks. We also describe some of the technical capabilities of the next generation of multicast communications, in light of requirements expressed by networked game platforms and applications. Active IP networks are expected to enable the design and implementation of new forms of multicast protocols, services, and applications that could not (or could hardly) be implemented using the existing traditional network technologies. This is especially true when properties such as flexibility of communication service support (e.g., through modification of associated computations), control of communications through external agents (outside the network), or the provision of multiple and variable QoS properties are required. Also, it is important for network operators to provide a smooth transition of their infrastructures to meet the requirements of new networking capabilities. In that context, it is important to take into account the possibility of reengineering existing services using new networking technologies. The use case specification in this document also deals with the necessity of integrating existing and next generation services within the same active network platform, by describing the way state-of-the-art multicast communication services can be implemented using active IP networks.

7.4.3.2 Next Generation Multicasting and QoS

It is now well accepted that scalable and flexible multicasting support is increasingly needed for emerging communication systems.
The demand is driven by multimedia applications, especially collaborative applications; the same holds for applications such as scientific computing and distributed simulation. For many of these applications, group communication is a natural paradigm, so proper support within the communication subsystem is required. This demand is acknowledged by various recent approaches that address transport protocols for reliable multicast. The focus of most current proposals is on the support of reliability. In most of them, all group members experience the same level of service, independent of their network attachment and end-system equipment; thus, all participants in the group are provided with a homogeneous quality of service. Furthermore, within the last few years, communication environments have become increasingly heterogeneous. This imposes new challenges on communication support for multimedia and collaborative applications. Because of this, we need active multicast protocols to provide multipoint communication support for large-scale groups with heterogeneous receivers. Active multicasting nodes inside the network include so-called QoS filters that remove information from continuous media streams, in order to reduce the data rate for low-end receivers without affecting high-end receivers.

The Needs of Active Multicast Services with QoS

Early Internet-based multicast service specifications were limited to the ability to deliver the same data packets and messages to the receivers in a multicast group, with very few guarantees regarding the properties of message delivery. With the development of the Internet, reliable data communications (multicast file transfer), real-time multimedia communications, and (increasingly) distributed applications over the Internet, the need for multicast services with various guarantees is becoming one of the principal requirements for future Internet services. QoS requirements related to logical time, the ordering of messages, real-time timeliness, reliability, or fault tolerance are some of the most important properties that future generations of multicast services must support.
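The QoS filter mentioned earlier in this section, which thins a media stream for low-end receivers, can be sketched as follows. The packet format (a per-packet scalability-layer tag) is an illustrative assumption; a real filter would operate on an actual layered-coding scheme.

```python
def qos_filter(stream, max_layer):
    """Active-node QoS filter sketch: each media packet carries a
    scalability-layer tag (0 = base layer).  For a low-end receiver,
    forward only packets up to max_layer, reducing the data rate for
    that branch without touching high-end receivers' copies."""
    return [pkt for pkt in stream if pkt["layer"] <= max_layer]
```

In an active multicast tree, such a filter would run at the branching node serving the slow link, so the full-rate stream never crosses the constrained branch.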
Multicast services with real-time delivery, reliable delivery, or flexible group control and management mechanisms have been studied largely in the context of traditional IP networks, with limited success. It seems that some of the difficulties in the design of multicast protocols with such properties lie in the very basics of the design of these traditional IP networks, which do not allow manipulation of network resources or modification/superposition of computations. Active IP networks should ease the enhancement of multicast protocols with these properties (or even more complex ones) and enable new applications, provided that multicast issues are dealt with appropriately at the initial design stage. Concerning next generation multicast services, the studies presented in this section deal with the provision of QoS related to real-time delivery, reliability, and the ordering of active packets. These topics have been chosen for their popularity in the multicast research community, and for their use in networked games in particular. Other topics, such as security or flexible group management, could equally illustrate the use of active network technologies for next generation multicast services.
QoS Related to Reliable Delivery

Reliable multicast protocols are often approached in networking research as protocols that take into account reliability in message delivery. Usually, reliable multicast protocols can be characterized by the way they tackle reliable delivery, and by their performance in terms of the degree (or ratio) of reliable delivery they can offer. The reliability ratio can range from 0% (totally unreliable protocols) to 100% (guaranteed or totally reliable). Protocols that lie between these two extremes are sometimes called semireliable protocols. On the other hand, almost all of the protocols developed in research fields such as computer networking or fault-tolerant distributed computing not only provide guaranteed reliability but also impose additional reliable delivery semantics to make the protocols more suitable for fault-tolerant distributed computing services. The reliable multicast protocol specifications most often used in this case impose the following conditions:

• Validity: A message sent to a multicast group is eventually received by all the receivers of this group without alteration;
• Agreement: If a receiver of a multicast group receives a message, then the same message is eventually received by all the receivers of this group;
• Integrity: A message sent to a multicast group is received at most once by any receiver of this group, and only if it has been previously sent to this group.
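The integrity condition's at-most-once part is typically enforced at the receiver by deduplication. A minimal sketch, assuming messages are identified by a (sender, sequence number) pair, which is an assumption of this example rather than a requirement of the specifications above:

```python
class IntegrityFilter:
    """Receiver-side sketch of the 'integrity' condition: deliver each
    message at most once, identifying messages by (sender, seq)."""

    def __init__(self):
        self._seen = set()

    def deliver(self, sender, seq, payload):
        """Return the payload on first receipt; None for a duplicate."""
        key = (sender, seq)
        if key in self._seen:
            return None  # duplicate: drop, preserving at-most-once
        self._seen.add(key)
        return payload
```

Validity and agreement, by contrast, are liveness-style guarantees about eventual receipt by all group members, and require retransmission machinery rather than a local filter.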
Networked game applications seem to need all of these classes of reliable multicast protocols, a requirement that can cause performance problems or that is sometimes impossible to satisfy with the underlying network. We believe that active network services can be used to overcome these limitations.

QoS Related to Timely Delivery

Timeliness constraints appear in multicast protocols for audio and video streaming, and in data communications between entities that interact in a networked game. As in the case of reliable delivery, protocols here are required to provide a variable level of real-time guarantees, ranging from 0% (communications without timing constraints) to 100% (communications that guarantee that all timing constraints will be met). Several multicast protocols have been developed for audio/video streaming over the Internet that provide varying levels of real-time guarantees, depending on their performance and the supporting network. However, very few of them can provide stringent real-time guarantees, due in part to the impossibility of having external control over network resources. Active and programmable network services will allow the implementation of multicast protocols with better control over real-time requirements.

QoS Related to Ordering

Beyond the reordering of data packets received at the transport level according to their sequence numbers, some applications need more complex ordering mechanisms that apply to messages sent or received by applications. The following ordering semantics are usually required by distributed applications in general, and by networked games in particular:

• First-in-first-out (FIFO) multicast: This is a reliable multicast service that satisfies a FIFO order, defined on the flow of messages sent and received in a multicast service session. For example, the FIFO order may require that messages be delivered to the receivers of a multicast group according to the order in which they were sent to that group.
• Causal multicast: This is a reliable multicast service that satisfies the causal order; that is, the partial order that results from the causality relation between message emission and reception events introduced in [15].
• Atomic multicast: This is a reliable multicast service that satisfies the total order, defined as follows: If two receivers p and q receive both messages m and m’, then p receives m before m’ if and only if q receives m before m’. The atomic order ensures the same view of the sequence of messages received in the system.
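The first of these semantics, FIFO delivery, can be sketched with a small receiver-side hold-back buffer. The per-sender sequence numbering is an assumption of this illustration; causal and atomic multicast need strictly more machinery (vector clocks, or an agreed total order) on top of the same buffering idea.

```python
class FifoChannel:
    """Per-sender FIFO delivery sketch: messages carry a sender-assigned
    sequence number, and out-of-order arrivals are held back until the
    gap before them closes."""

    def __init__(self):
        self._next = {}  # sender -> next expected sequence number
        self._held = {}  # sender -> {seq: message} awaiting delivery

    def receive(self, sender, seq, msg):
        """Record an arrival; return the messages now deliverable,
        in FIFO order for that sender."""
        self._held.setdefault(sender, {})[seq] = msg
        expected = self._next.get(sender, 0)
        deliverable = []
        while expected in self._held[sender]:
            deliverable.append(self._held[sender].pop(expected))
            expected += 1
        self._next[sender] = expected
        return deliverable
```

For example, if message 1 from a sender arrives before message 0, it is held; the arrival of message 0 then releases both, in order.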
Variants of these ordering semantics can also be defined, as can new ones and combinations of them. Several past attempts to implement multicast protocols with such ordering semantics led to frustration (performance problems; impracticability), due in part to the lack of an adequate network infrastructure. The use of active network services can lead to a better ability to deal with these problems.

Networked Game Application Framework

Networked games that allow multiple players to interact in real time over the Internet, as shown in Figure 7.5, will emerge in the near future. An important challenge for these games is their ability to scale to support thousands of end users spread around the world, while maintaining an acceptable QoS. The QoS requirements themselves can vary a lot, depending on the semantics of the games and the processing and communications required by parts of the games. A new trend is to build a distributed communication and processing platform that will allow one to implement virtual environment support for these games. Networked games may also be used as application frameworks for research in several distributed simulation systems; for example, military simulation applications, concurrent engineering, virtual marketplaces, and so on. The availability of multicast protocols with various technical properties is one of the key requirements for networked games, and, beyond the networked game application framework, such protocols will allow the implementation of applications with more stringent communication needs than those that can be satisfied by today’s Internet. During the past decades, multicast research and the software industry produced various protocol specifications, standards, prototypes, and commercial products. However, almost all of the resulting products have been tailored to a specific application domain, and/or they satisfy a relatively limited set of technical properties. For example, the multicast IP protocol allows a host station to send the same packet to a set of receiving hosts over the Internet, with no guarantee in terms of reliable delivery, delay of reception, or order of reception of different packets. Other research and products provided multicast protocols with reliable delivery and/or a particular order of delivery, using a classical point-to-point transport protocol, but most such protocols cannot scale to support thousands of receivers, and they do not respect timely delivery. Here, we are concerned with very general multicast protocol frameworks that are open with regard to the QoS properties that can be ensured, and that can be tailored or adapted to meet the needs of specific applications.
Figure 7.5 Networked game infrastructure.
7.4.3.3 Motivation for the Use of Active Network Concepts

Past experience with multicast services shows that it is very difficult to implement protocols that satisfy many of these features or technical requirements at the same time. For example, there seems to be a trade-off between reliable delivery and real-time delivery, or between ordering mechanisms and scalability techniques, and so on. It is very hard to implement protocols that can satisfy all these requirements at the same time using the existing network architecture, and sometimes it is impossible due to initial architectural constructs. For example, it is impossible to guarantee hard real-time delivery constraints with the existing packet switching networks, simply because, by design, the network has no means to provide such guarantees. On the other hand, applications such as distributed game systems that need protocols with all these requirements are emerging. One solution is to provide multiple-protocol frameworks; that is, networks and support services that allow one to use several protocols within the same multicast service session. Another solution is to develop protocols (or protocol frameworks) that allow one to manipulate messages carrying individual technical properties. The latter approach seems particularly interesting. It has been investigated in the application-level framing (ALF) [1] approach for some time, but it seems that there is a lack of adequate network support for the full implementation of the ALF protocol. We expect active networks to provide an adequate architecture and support services for this.

7.4.4 Active VPN

7.4.4.1 Overview

A virtual private network (VPN) is a service that provides a private network configured within a public network. For years, common carriers have built VPNs that appear as private national or international networks to the customer, but physically share backbone trunks with other customers.
VPNs enjoy the security of a private network via access control and encryption, while taking advantage of the economies of scale and built-in management facilities of large public networks. VPNs have been built over X.25, switched integrated services digital network (ISDN), frame relay, and ATM technologies. Today, there is tremendous interest in VPNs over the Internet, especially due to the constant threat of hacker attacks. The VPN adds an extra layer of security, and a huge growth in VPN use is expected. Protocol technologies used include PPTP, L2F, L2TP, IPsec, and PVCs for security, and a transparent LAN service (VPN definition from http://www.techweb.com). The term “virtual private” means that the offered service retains at least some aspects of a privately owned customer network. A virtual private network is a secure network that runs over an IP-based public network (e.g., the Internet); security here includes confidentiality, integrity, and authentication. The term virtual private network refers to the interconnection of customer sites, making use of a shared network infrastructure. Multiple sites of a customer/consumer network may therefore be interconnected via the public infrastructure, in order to facilitate the operation of the private network. An example can be seen in Figure 7.6. This section describes some of the use cases for a dynamic virtual private network over an active networks environment (i.e., active virtual private network services). It serves as the first iteration of the requirement capture process.
Figure 7.6 shows a VPN over the current IP network: a client router/firewall at the regional branch and a server router/firewall at the headquarters connect the two private networks through a VPN tunnel across the public IP network. The figure’s annotations note the tunneling protocols in use (SSH+PPP, IPsec, PPTP, CIPE, L2TP); that a tunnel encapsulates the original IP packet inside an “outer” IP packet, creating a completely new packet with the original packet as its payload; and that the current VPN inherits problems from the Internet: best effort gives no guarantee of quality of service (QoS), the user has no control over packets once they are outside the internal network, and there is no guarantee that every packet will reach its destination.
Figure 7.6 Current IP-VPN model.
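The tunnel encapsulation noted in Figure 7.6 can be sketched as follows. This is a minimal illustration of IP-in-IP encapsulation (IP protocol number 4), not production packet handling: the header checksum is left zero, options and fragmentation are ignored, and the addresses are raw 4-byte values supplied by the caller.

```python
import struct

def encapsulate(inner_packet: bytes, src: bytes, dst: bytes) -> bytes:
    """Build an outer IPv4 header whose payload is the complete original
    packet, so the result is a new packet carrying the old one."""
    total_len = 20 + len(inner_packet)  # outer header + whole inner packet
    outer_header = struct.pack(
        "!BBHHHBBH4s4s",
        0x45,       # version 4, IHL 5 (20-byte header, no options)
        0,          # DSCP/ECN
        total_len,  # total length
        0,          # identification
        0,          # flags / fragment offset
        64,         # TTL
        4,          # protocol 4 = IP-in-IP
        0,          # header checksum (left zero in this sketch)
        src, dst)   # tunnel endpoint addresses, 4 bytes each
    return outer_header + inner_packet
```

Decapsulation at the far tunnel endpoint is the inverse: strip the first 20 bytes and forward the recovered inner packet into the private network.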
7.4.4.2 Background

CPE-Based Versus Network-Based VPNs

The term customer premises equipment-based virtual private network (CPE-based VPN) refers to an approach in which (ignoring management systems) knowledge of the customer network is limited to customer premises equipment. In a classical CPE-based VPN, the service provider is oblivious to the existence of the customer network. The provider may be offering a simple IP service, an ATM service, or a frame relay service. However, it is common for a service provider to take on the task of managing and provisioning the customer edge equipment, in order to reduce the management requirements of the customer. This results in provider-provisioned CPE-based VPNs. In CPE-based VPNs, the customer network is supported by tunnels, which are set up between CPE equipment. If the provider offers an ATM or frame relay service, the tunnels may consist of simple link layer connections. If the provider offers an IP service, then the tunnels may make use of various encapsulations to send traffic over IP [such as generic routing encapsulation (GRE), IP-in-IP, IPsec, layer 2 tunneling protocol (L2TP), or multiprotocol label switching (MPLS) tunnels]. For classical CPE-based VPNs, provisioning and management of the tunnels is up to the customer network administration; typically this may make use of manual configuration of the tunnels. For provider-provisioned CPE-based VPNs, provisioning and management of the tunnels is up to the service provider. For CPE-based VPNs (whether classical or provider provisioned), routing in the customer network considers the tunnels as simple point-to-point links, or in some cases as broadcast LANs. A network-based VPN (NBVPN) is one in which equipment in the SP network provides the VPN. This allows the existence of the VPN to be hidden from the CPE equipment, which can operate as if it were part of a normal customer network.
In NBVPNs, the customer network is supported by tunnels, which are set up between PE equipment. The tunnels may make use of various encapsulations to send traffic over the SP network (such as GRE, IPsec, IP-in-IP, or MPLS tunnels). There are many different types of NBVPN, which may be distinguished by the service offered:

• Layer 2 services: The provider forwards packets based on layer 2 addresses (such as frame relay, ATM, or MAC addresses) and/or on the basis of the incoming link. There are three major types of protocols in use: the point-to-point tunneling protocol (PPTP), L2TP, and GRE.
• Layer 3 services: The provider forwards packets based on layer 3 information, as well as on the basis of the incoming link. The most commonly used protocol is IPsec.
7.4.4.3 Motivation for the Use of Active VPN Concepts

The concept of an active virtual private network (AVPN) can be discussed from three perspectives: the consumer’s, the provider’s, and the ANSP’s.

Consumer Perspective

From the perspective of the consumer, the notion of AVPN presents a flexible, pragmatic, and well-understood form of requesting closed user group communication services. Consumers (or their agents, retailers/service providers) are relieved of the burden of having to accurately pre-estimate their communication needs in terms of the amount of bandwidth and quality required. By virtue of the active nature of the AVPN service, the bandwidth required to realize the communication between users is adjusted dynamically, according to the actually demanded traffic, so that the desired quality is achieved. Further, customers have the flexibility of declaring the sites that must be in communication at a certain period of time (according to the requirements of their workflow system), rather than running a VPN connecting all possible customer sites.

Service Provider Perspective

From the perspective of service providers, the notion of AVPN is also useful. It is attractive to consumers and leaves wide margins for pursuing scalability, cost effectiveness, and differentiation among providers. The realization of active VPNs has more potential to scale than traditional VPNs. In active VPNs there is no need to provide permanent connectivity between all sites, as in traditional VPNs (implying persistent reservation and consumption of resources such as bandwidth, identifiers, and routing entries), but only between those sites that need to communicate at the time (based on the customer’s workflow requirements). The provision of active VPNs therefore has the potential to result in better utilization of network resources than traditional VPNs.
This is because the bandwidth in the network is actually allocated when communication between two specific sites is in effect, without needing to be preallocated between any VPN sites as in the traditional VPNs. However, the provision of active VPNs may add complexity and overhead that is required for their management (especially in network planning). ANSP Perspective This defines the notion of AVPN from the perspectives of an active network service provider (ANSP). An AVPN is defined as an active VPN. As a VPN, an AVPN connects specific consumer sites according to certain communication requirements (with respect to bandwidth, quality of service, and so on). However, an AVPN is
different from traditional VPNs by being active. The term active characterizes the underlying network means and services required for establishing, administering, and managing the communication between the consumer sites. In an active communication context, the required means and services are actively and dynamically realized and customized when and as they are needed. To be precise, an AVPN has the following properties over traditional VPNs:

•	The administration and management of the communication between VPN sites (e.g., configuration and bandwidth management, statistics, accounting, alarm reporting) can be customized to the consumers' preference in an active manner that is completely dynamic according to well-defined conditions (service level agreements, or SLAs, and/or management policies), and not in a semistatic (request-based) manner as in traditional VPNs.
•	Connectivity between the sites of an active VPN is in effect only when it is required (i.e., when two sites need to communicate), and not on a permanent basis between all consumer sites as in traditional VPNs. In active VPNs, the connectivity of the VPN may be driven by the workflow system of the customer. Since the tunnel connection path may be different each time, and is created only when it is needed, this increases the utilization of network resources and the security of the connection.
•	Once connectivity between specific sites of an active VPN is in effect, the required bandwidth may actually be demand driven within certain limits, and not permanently fixed as in traditional VPNs. That is, the bandwidth of a link of an active VPN (customer's view) or of the trail (provider's view) can adjust dynamically (within certain limits) to the actually demanded bandwidth, so that communication stays within the desired quality levels.
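The on-demand character described here can be illustrated with a minimal sketch. The class and method names below are illustrative assumptions, not part of any FAIN interface: an AVPN manager creates a tunnel only when two sites actually need to communicate, clamps its bandwidth to the SLA-agreed limits as demand changes, and releases the resources when communication ends.

```python
# Illustrative sketch only: AVPN connectivity driven by demand, within SLA limits.
# All names (AVPNManager, connect, adjust, disconnect) are hypothetical.

class AVPNManager:
    def __init__(self, min_bw, max_bw):
        self.min_bw, self.max_bw = min_bw, max_bw  # SLA-agreed bandwidth limits (Mb/s)
        self.tunnels = {}                          # (site_a, site_b) -> allocated bandwidth

    def connect(self, site_a, site_b):
        """Create a tunnel only when the two sites actually need to communicate."""
        key = tuple(sorted((site_a, site_b)))
        self.tunnels.setdefault(key, self.min_bw)
        return self.tunnels[key]

    def adjust(self, site_a, site_b, demanded_bw):
        """Adapt allocated bandwidth to the actually demanded traffic, clamped to the SLA."""
        key = tuple(sorted((site_a, site_b)))
        if key not in self.tunnels:
            raise KeyError("no active tunnel between these sites")
        self.tunnels[key] = max(self.min_bw, min(self.max_bw, demanded_bw))
        return self.tunnels[key]

    def disconnect(self, site_a, site_b):
        """Release the tunnel (and its resources) as soon as communication ends."""
        self.tunnels.pop(tuple(sorted((site_a, site_b))), None)
```

In contrast to a traditional VPN, no tunnel (and no bandwidth) exists for site pairs that are not currently communicating.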
According to the workflow requirements, the required connectivity is provided, and the necessary resources are made available to transport the user traffic within the specified quality constraints. In addition, by using encryption and tunnel encapsulation, a VPN can achieve the goals of allowing secure remote access, and of linking up the corporate intranet and extranet. With the flexibility of active networks, it is easier and quicker to deploy a secure and reliable private network on the fly. Private links, or "tunnels," can be dynamically created and terminated according to the protocols used and the SLAs between the consumer and the service provider.

7.4.4.4 Use of Active CNM with Active VPN Creation and Management

Active customer network management (CNM) enables the dynamic creation and management of the active VPN. Active CNM provides support for network management services that:

•	Dynamically configure business objects' communications services;
•	Dynamically configure workflow management engines, such as integration with service and network management, and processing management and information management via dynamic brokerage (via the broker actor from the FAIN enterprise model).
The application includes the business-level management solution (owned by the consumer) and includes workflow components.

7.5 GENERIC REQUIREMENTS FOR THE FAIN ARCHITECTURE

7.5.1 Service Architecture

The service provision architecture needs to be flexible enough to cope with various underlying connectivity networks, and to support multidomain/multiservice provider scenarios. It must make optimal use of functions provided by the underlying transport networks. With respect to service definition and deployment, the architecture should allow the dynamic programming, deployment, and activation of services, both by end users and by service providers. This involves the data plane as well as the related management facilities. For service definition, the architecture should be based on components, which can be specialized and enhanced. This capability enables service providers to differentiate themselves through their own service development. With the market opening up for communications, it is expected that one operator or service provider will serve an even greater number of users than before. At the same time, there might be markets for highly customized services targeted at small user groups. Worldwide markets, combined with the capability to customize products for small groups, put strong scalability requirements on all software systems. This requires that services be created with scalability in mind; they should be made to scale well (i.e., be efficient enough to support from as few as 10 users to over a million). Additional requirements related to the service architecture include service personalization and mobility of terminals, networks, and services.

7.5.2 Service Access Requirements

A set of requirements can be identified related to the access of services. These provide commonly used utilities necessary for the network capabilities to be accessible, secure, resilient, manageable, available, and independent of any particular type of service.
For example, mutual authentication between users and the network requires authorization of access to resources and services; service provisioning based on service level agreements requires handling of resources that
are dynamically registered from the network and can in turn be dynamically discovered by applications; and subscription to events such as alarms or call setups. The OSA [28] specifies interfaces for interaction with network-level functions.

7.5.3 Service-to-Network Adaptation/Management

The service-to-network adaptation requirement enables the user to access any service, regardless of the underlying network technology. Thus, in order to make the service accessible within different networks, the service must be adaptable to the different network capabilities. The requirements related to the AN architecture include:

•	Reservation of QoS resources;
•	Selection of suitable networks;
•	Dynamic negotiation of QoS parameters, for example, based on changing network conditions.
7.5.4 IP-Based Network Models

FAIN is targeted toward IP-based networks. This gives rise to several requirements related to the Internet protocol (IP). Most importantly, the IP connectivity service as presented to the service provider should be "upward compatible"; that is, it should support active IP as well as the basic IP-based connectivity service classes specified by the IETF Integrated Services (IntServ) and Differentiated Services (DiffServ) working groups [11]. Moreover, service management should be based on a good information model of IP connectivity, which gives the service provider a service-level view. The principles of abstraction and information hiding should be used to provide an essential, core set of connectivity services, which are to be used and expanded for more application-level services. Carrier-grade IP features, which require more rigorous treatment of fault, configuration, accounting, performance, and security (FCAPS) issues, should be provided.

7.5.5 Service Level Agreements

The service level agreement (SLA) dictates the terms of service and payment for the subsequent service. The SLA is the result of negotiation between two business entities. The SLA is effective over a period, which is also a part of the terms of service dictated by the SLA. Quality of service may be one part of an SLA relating to connectivity services. The format of the service level agreement should allow online specification, processing, and fulfillment of service level parameters, and automated (re)negotiation of the SLA to allow the dynamic establishment of service.
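The requirement that an SLA be an online-processable, automatically renegotiable record with a validity period can be sketched as follows. The field names are assumptions chosen for illustration; FAIN does not prescribe an SLA schema.

```python
# Illustrative sketch: an SLA as an online-processable record that can be
# renegotiated automatically. Field names are hypothetical.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class SLA:
    provider: str
    customer: str
    bandwidth_mbps: int      # a QoS term of the connectivity service
    valid_from: int          # validity period (e.g., as Unix timestamps)
    valid_until: int

def renegotiate(sla: SLA, **new_terms) -> SLA:
    """Automated (re)negotiation: derive a new agreement from the old one."""
    return replace(sla, **new_terms)

def in_force(sla: SLA, now: int) -> bool:
    """The SLA is effective only within its agreed period."""
    return sla.valid_from <= now < sla.valid_until
```

Making the record immutable (`frozen=True`) reflects the contractual nature of an SLA: renegotiation produces a new agreement rather than silently mutating the old one.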
7.5.6 Quality of Service

End user traffic should be transported according to the SLA-negotiated QoS level. The end user should receive alarms/trouble reports on, for example, service interrupts and service degradation. The SP should receive, scheduled or on demand, service logs on, for example, traffic statistics, capacity levels, resource usage, periods of service interrupt, and periods of service degradation. Moreover, the end user should have access to information that allows them to assess the QoS.

7.5.7 Charging/Billing

Service providers should provide customers with flexible choices in receiving and paying bills. The main idea behind making the billing process flexible is to cut costs through greater control over service usage. In particular, this is related to hot billing or real-time billing, where the customer may want to receive bills within a few minutes of the end of service usage (or may have a prepay service where the credits must be deducted at the same time as the service usage).

7.5.8 Security

End user traffic should be protected according to the SLA-negotiated security level. The end user should receive, scheduled or on demand, service logs on, for example, security violations and violation attempts, while the SP is responsible for ensuring the confidentiality of information received from the end customer.

7.5.9 Active Node/Network Control

In active networks, virtual nodes can be created on a set of nodes in the network to form a virtual network. An independent control plane should control each virtual network (i.e., the virtual nodes that make up the network). This control plane should not be aware that it is controlling virtual nodes rather than physical ones. Similarly, each virtual network can be managed by a separate network manager that does not need to be aware of the fact that it is managing a virtual network and not a physical one. This leads, for example, to the following requirements:

•	Creation and management of multiple execution environments on a node, which are completely isolated from each other.
•	Provision by the network infrastructure provider of an open network infrastructure, in a way that networking resources can be divided virtually between different active network service providers.
•	Higher-level network management systems that allow the creation, modification, and deletion of virtual nodes.
•	Separation of control software (e.g., signaling) from the transport resources (e.g., forwarding functions) using standard interfaces.
•	Requirements related to the sharing of resources by multiple controllers, processes, and users.
•	Requirements related to the policy-based management of node resources (node-global resources as well as resources of a virtual node). This also includes the ability to perform execution of multiple instances of FCAPS management logic on a node.
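The requirement that networking resources be divided virtually among ANSPs, without oversubscription, can be sketched in a few lines. Class and method names here are hypothetical, not FAIN interfaces; a real node would partition many resource types, not just CPU shares.

```python
# Illustrative sketch: a node partitions its resources among isolated
# virtual environments (VEs), one per ANSP. Names are hypothetical.

class ActiveNode:
    def __init__(self, total_cpu_shares):
        self.free = total_cpu_shares
        self.ves = {}   # ANSP name -> allocated CPU shares

    def create_ve(self, ansp, cpu_shares):
        """Carve out an isolated slice; fail rather than oversubscribe."""
        if cpu_shares > self.free:
            raise ValueError("insufficient node resources")
        self.free -= cpu_shares
        self.ves[ansp] = cpu_shares

    def delete_ve(self, ansp):
        """Return the slice to the node-global pool."""
        self.free += self.ves.pop(ansp)
```

Each control plane or manager would then operate only on its own VE, unaware that the slice is virtual.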
7.5.10 Generic Framework Requirements

The most comprehensive set of management building block requirements currently available is being worked on in the TM Forum's Application Component Team (ACT) [14]. This has already seen some application in the use of the information networking architecture (INA) principle in the work of TINA-C [12, 13]. These requirements identify a building block as an abstract notion of a distributed computing entity that aids in the discussion of the deployment and interoperability aspects of server-weight software with multiple interfaces. More simply, a building block is a deployable unit of interoperating software.

7.6 REQUIREMENTS FROM OPERATORS' EXPECTATIONS

In this section, we translate the expectations that network operators have of active networks into requirements for the system as a whole, and into requirements for the different reference points identified and described in the FAIN enterprise model.

7.6.1 Impact of Speeding Service Deployment and Customization

The requirements to speed up service deployment and customization influence the whole value chain. NIPs must offer sufficient managed resources on top of which service execution environments run. These EEs must be flexible; for example, node resources must be allocated/deleted flexibly when new services are defined or terminated (RP3, RP7). Moreover, with respect to ANSPs, this requirement relates to the requirement that open interfaces should allow the flexible selection of different service developers for implementing services, as well as making the modification and update of services possible at run time (RP4). Service component providers must provide service components that are sufficiently generic (e.g., multicast) but can also be supplemented with additional properties such as reliability, security, and real-time operation (RP1).
Based on this, SPs are required to be able to deploy and integrate services, to negotiate with other SPs to offer end-to-end services, and to offer personalized services to their customers (RP1, RP4, and RP5). Moreover, consumers may wish to make service level agreements (RP4). Active middleware manufacturers (AMMs) are required to provide the necessary infrastructure with which ANSPs can run active services and adapt or modify service behavior; the platform can be upgraded with their latest network software release. HMs are required to make their network element products flexible enough to accept modification or upgrade by active network features, and to reduce the lengthy standardization and technology diffusion phases before becoming operational.

7.6.2 Impact of Leveraging Network and Service Management

In order to realize these requirements, NIPs are required to provide network infrastructures for multiple third-party SPs, with network management by ANSPs. ANSPs in turn are required to allow SPs to use the network infrastructure under service provider-specific circumstances regarding, for example, legislative requirements. This also requires the use of a standardized protocol/interface for full network management (FCAPS; RP2). SPs are required to select and use network services in multiple network infrastructures. The roles of HMs, consumers, SCPs, and AMMs are largely unaffected by this requirement.

7.6.3 Impact of Decreasing the Dependence on Vendors

Vendor independence has always been a strong requirement for network operators and, hence, needs to be taken seriously for active networking infrastructures. For instance, NIPs are required to provide basic IP connectivity among heterogeneous products, to provide open/standardized interfaces [26, 27, 30, 31] for ANSPs, and to provide facilities to dynamically introduce the required functionality at network nodes by means of injecting executable code into the node (RP3).
ANSPs are required to select flexibly between different manufacturers of switches and routers according to market-driven requirements (e.g., based on price, availability, or functionality), while SPs are required, for example, to integrate services or to provide solutions using flexible functionality that is not prescribed (RP4). HMs are required to support standardized interfaces on their network hardware. The roles of SCPs, consumers, and AMMs, however, do not contribute directly to decreasing vendor dependency; AMMs may, though, supply platforms with execution environments that enable ANSPs to realize network management.
7.6.4 Impact of Networks and Service Integration and Information Networking

In order to accommodate the needs related to network and service integration, the transport networks provided by NIPs must offer a wide range of service classes and sufficient bandwidth to support several simultaneous broadband communications. Related to the specifics of ANs, networks must provide sufficiently powerful processors to support the execution of arbitrary routing or switching programs. In particular, processors dedicated to specific routing or switching programs should be avoided in order to preserve the ability to support the execution of arbitrary transport services. ANSPs, for example, must offer the means to develop integrated network and information services, that is, services that are able to combine information and processing coming from network operation and from applications in the same framework. They must support the execution and the isolation of network services provided by several service providers, and emulate the execution of network services that normally run on other platforms (RP3, RP6, and RP2). SCPs are required to develop active network services specified by multiple service providers, according to certain development guidelines (e.g., the use of distributed object platforms [25, 29] and service frameworks, and support for certain mechanisms for remote access, execution, control, and management of services). In order to facilitate information networks and service integration, SPs must provide mechanisms that allow other service providers and information services to override network service components with new arbitrary network services, provided that some minimal consistency requirements are met. A consumer is any actor with systems that request active network services. The role of a consumer in facilitating information networks and service integration is very complex.
For that reason, we have derived only a very limited subset of the requirements that come from a nondetailed analysis. For instance, consumers should form special interest groups, per application domain, so as to specify information service frameworks and active network service requirements for their application domain. The interest groups must involve actors concerned with the application domain, especially active network service providers. For example, consumers such as next generation telecommunication service providers, with an interest in next generation IN service control and management platforms, could form a specification group in order to define and standardize a new service framework based on active networks. HMs, AMMs, retailers, and brokers are not directly concerned with active network technology, but their activity could be influenced by the development of active networks. For instance, hardware providers might have to develop new generations of network processors or transmission devices that better take into account the needs of active networks.
7.6.5 Impact of Diversifying Services and Business Opportunities

For the diversification of services and business opportunities, NIPs must develop mechanisms for the integration of equipment from several manufacturers, the integration and use of several network infrastructures, or the integration and use of different active middleware software. SCPs must develop a wide range of technical competence in network service technologies, and commercial relations with their suppliers (service providers). SPs are required to provide platforms that are able to run programs developed by several service component providers. Finally, the consumer must also be able to use services from several active network service providers, or to use several network services with different service qualities in the same application.

7.7 APPLICATION REQUIREMENTS

After generic and operator requirements, the third category of requirements is derived from a set of reference applications. In this section, we describe how the different reference points (RPs) are affected by the reference applications we considered.

7.7.1 RP1: SCP–SP

At this reference point the interactions are between the SCP and the SP. The requirements for the system that are derived from the applications are the following:

•	An interface should be provided in order for the two different roles (SP and SCP) to communicate. Over this interface, the SP requests service modules according to the service description, and the SCP provides these modules, including specifications about the requirements of the service module. In this way, the SCP informs the SP about the resources needed (VE/EE) for the service module to execute properly.
•	The SCP must organize the service modules into categories according to the functionality provided by the service component (service descriptions). In this way, the SP can find the proper service modules, based on a formal description of the requirements for the service modules, including the resources needed in order to have the modules function properly inside a VE/EE. For instance, a customer-specific VPN may require specific modules for administration and setup of the VPN in the customer premises VE/EEs.
•	A check mechanism must be provided to detect possible conflicts between the service modules.
•	A flexible and scalable management solution that will reduce management traffic and human intervention for running the network and managing the application services must be provided.
7.7.2 RP2: SP–ANSP

This RP concerns the interactions between the SP and the ANSP. The following requirements of this RP are mainly derived from the three reference applications:

•	An interface should be established between these two roles in order to allow them to communicate.
•	A dynamic service deployment mechanism should be provided that enables the SP to install new modules and upgrade components and protocols onto the ANSP EE. For instance, the multicasting application requires the deployment of sender agents, receiver agents, multicast router agents, multicast high-quality application agents, and CORBA [25] object multicast tables.
•	The ANSP checks the service modules according to their service description. If no problem appears, they are deployed in the appropriate EE. If conflicts or other problems occur, the ANSP informs the SP that the execution cannot be done.
•	During the execution of the service modules, a mechanism should be established to check and monitor whether the ANSP can reserve the appropriate resources from the corresponding nodes. In cases where the ANSP identifies that the consumer's request cannot be satisfied (lack of resources, low QoS, and so on), this mechanism should also be able to give information about the reasons that have generated the problem.
•	Authorization and authentication mechanisms should also be provided for security reasons. This means that every SP that would like to interact with the ANSP should declare its identity, so that it can prove that it is authorized to ask the ANSP to reserve resources.
•	The SP must have the ability to contact more than one ANSP (the interaction being made by the associated ANSPs), in order to choose the most eligible one (as in the use case of a receiver in a different domain).
•	There should be a mechanism to prioritize resources and therefore reduce resource conflicts, and to allow the ANSP to "borrow" more resources from a neighboring NIP should the need arise (e.g., to satisfy all the consumer's QoS needs in the case of the VPN application).
•	A mechanism to account for the resources a user consumes should also be provided. Besides networking resources, resource types shall also include persistent storage, for example, for the active Web service application.
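The check-then-deploy-or-reject interaction required at this reference point can be sketched minimally. The module descriptor and EE registry below are assumptions for illustration, not FAIN-defined structures.

```python
# Illustrative sketch of the ANSP-side check at RP2: verify a service module's
# description against the target EE before deployment, and report the reason
# back to the SP on failure. All structures here are hypothetical.

def deploy(module, ee_resources, deployed):
    """Return (ok, reason): deploy the module into the EE or explain the refusal."""
    if module["name"] in deployed:
        return (False, "conflict: module already deployed")
    if module["needed_memory"] > ee_resources["free_memory"]:
        return (False, "lack of resources: insufficient EE memory")
    ee_resources["free_memory"] -= module["needed_memory"]
    deployed.add(module["name"])
    return (True, "deployed")
```

The key point is that the refusal carries a reason, so the SP learns why the request (conflict, lack of resources, and so on) could not be satisfied.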
7.7.3 RP3: ANSP–NIP

This reference point concerns the interactions between the active network service provider and the network infrastructure provider. The requirements are:

•	An interface should be established between these two roles in order for the ANSP to use the NIP. Over this interface, the NIP must provide a physical active networking resource computation mechanism to help in the allocation of resources.
•	The ANSP must be informed if the NIP cannot provide the appropriate physical resources, for example, for building the appropriate QoS-aware multicast trees.
•	The NIP, through monitoring, could inform the ANSP about the resource usage per EE in order to make it possible to map the service module onto the appropriate EE (i.e., optimal resource reservation at different multicast group levels).
•	The NIP must be able to install and remove code within a VE in order to satisfy new requirements by the application running in the multiple EEs (e.g., path reconfiguration in the multicasting application).
•	There should be a mechanism for the ANSP to deliver the active code to the NIP, which will perform the actual injection of the code.
•	In the case of the active VPN, the ANSP should have control over ports on the node in order to create consumer-specific packets and to route them to their destination. Moreover, the nodes should have the ability to encapsulate an internal active VPN IP address into each packet in order to create a "tunnel" environment (IP masquerade). This environment enables the two separate networks to function as if they were one network.
•	The granularity of IP packet monitoring should be not only at the level of the IP source/destination address, but also at higher levels, for example, HTTP for the active Web service, or even within the application.
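The deliver-then-inject division of labor between the ANSP and the NIP can be sketched as follows. Everything here is hypothetical; a real system would also authenticate, verify, and sandbox the delivered code before injection.

```python
# Illustrative sketch: the ANSP delivers active code to the NIP, which performs
# the actual injection into a VE on the node (RP3). Names are hypothetical.

class NIPNode:
    def __init__(self):
        self.injected = {}   # VE id -> list of injected code module names

    def inject(self, ve_id, code_name):
        """NIP side: perform the actual injection of delivered code into a VE."""
        self.injected.setdefault(ve_id, []).append(code_name)

    def remove(self, ve_id, code_name):
        """Remove code from a VE, e.g., when requirements change."""
        self.injected[ve_id].remove(code_name)

def ansp_deliver(node, ve_id, code_name):
    """ANSP side: hand the code to the NIP; only the NIP touches the node."""
    node.inject(ve_id, code_name)
```

Keeping injection on the NIP side mirrors the requirement that the ANSP delivers the code while the NIP retains control of its own nodes.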
7.7.4 RP4: Consumer–SP

At this RP the interactions are between the consumer and the SP. The requirements are:

•	An interface should be established between the SP and the consumer. Through this interface, the consumer can determine and request the available SP services.
•	The consumer should also be able to specify the parameters (SLA/QoS) under which he wants a particular service to be provided.
•	There must be a mechanism to ID-tag the consumer sessions and map them onto the SP part of the ANSP EE in order for the packet handler to distinguish among different consumer packets.
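The session-tagging requirement can be sketched minimally. The class and field names below are hypothetical assumptions; the point is only that a session ID carried in each packet lets the packet handler demultiplex consumer traffic onto the right SP partition of the EE.

```python
# Illustrative sketch: ID-tagging consumer sessions so the packet handler can
# map packets onto the SP part of the ANSP EE. Names are hypothetical.
import itertools

class SessionMapper:
    def __init__(self):
        self._ids = itertools.count(1)
        self.by_id = {}   # session id -> (consumer, SP partition)

    def open_session(self, consumer, sp_partition):
        """Tag a new consumer session and record its SP partition."""
        sid = next(self._ids)
        self.by_id[sid] = (consumer, sp_partition)
        return sid

    def dispatch(self, packet):
        """Packet handler: use the session tag to find the SP partition."""
        consumer, partition = self.by_id[packet["session_id"]]
        return partition
```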
7.7.5 RP5, RP6, and RP7: Federation Among SPs, ANSPs, and NIPs

These reference points deal with federation, that is, interactions between instances of the same role. Since we considered federation to be of second priority with respect to FAIN, we did not make an in-depth analysis of these RPs. However, we identified some requirements:

•	The provisioning of an interface that will allow the roles to federate. Using this interface, the roles should be able to cooperate in order to provide interdomain communication.
•	The SP first authenticates the other SP (RP5), and the two SPs confirm with each other the identity of the two consumers of a particular service. Similar requirements can be made for RP6 and RP7.
•	There must be a mechanism for communication between service modules deployed by different SPs for the same service. For instance, in the active Web service scenario, one SP (Web service distribution provider) may negotiate with another type of SP (Web service provider) in order to offer its Web sites for advertising.
•	Mechanisms for end-to-end resource management, monitoring, and accounting should be provided.
7.8 CONCLUSION

Based on three sources for requirements—operators, applications, and related projects—we have provided in this chapter an overview of the requirements identified for active and programmable networks. These requirements guided the implementation activities in the FAIN project. The comprehensive requirements definition and analysis are provided in [5]. These requirements are based on business, technical, and application-oriented considerations, as follows:

•	The network operators' expectations of network programmability and active networks;
•	An enterprise model of active and programmable networks;
•	The description of three reference networking applications that can benefit from active networks in one way or another. The three examples of active applications are Web service distribution, reliable multicasting, and virtual private networks;
•	Identification of a set of technical FAIN system requirements based on the operators' expectations, the enterprise model, and the reference applications.
References

[1] Application-Level Active Networks, http://dmir.it.uts.edu.au/projects/alan/.
[2] Calvert, K. L., (ed.), Architectural Framework for Active Networks, Draft version 1.0, July 27, 1999, http://protocols.netlab.uky.edu/~calvert/arch-latest.ps.
[3] DARPA Active Network Program, http://www.darpa.mil/ato/programs/activenetworks/actnet.htm.
[4] FAIN Project, http://www.ist-fain.org.
[5] FAIN Project Deliverable D1 - Requirements Analysis and Overall Architecture, http://www.ist-fain.org/deliverables.
[6] FAIN Project Deliverable D7 - Final Active Network Architecture and Design, http://www.ist-fain.org/deliverables.
[7] FAIN Project Deliverable D8 - Final Specification of Case Study Systems, http://www.ist-fain.org/deliverables.
[8] FAIN Project Deliverable D40 - FAIN Demonstrators and Scenarios, http://www.ist-fain.org/deliverables.
[9] FAIN Project Deliverable D14 - Overview FAIN Programmable Network and Management Architecture, http://www.ist-fain.org/deliverables.
[10] Galis, A., et al., "A Flexible IP Active Networks Architecture," Proc. International Workshop on Active Networks, Tokyo, October 2000; also in Active Networks, Springer-Verlag, October 2000.
[11] Internet Engineering Task Force, http://www.ietf.org.
[12] Inoue, Y., Lapierre, M., and Mossotto, C., (eds.), The TINA Book: A Cooperative Solution for a Competitive World, Upper Saddle River, NJ: Prentice Hall, 1999.
[13] TINA-CMC - TINA Consortium: Computational Modeling Concepts, Version 3.2, 1997, http://www.tinac.com/.
[14] Telecom Management Forum (TMF), http://www.tmforum.org.
[15] Lamport, L., "Time, Clocks, and the Ordering of Events in a Distributed System," Communications of the ACM, Vol. 21, No. 7, July 1978, pp. 558-565.
[16] Aidarous, S., and Plevyak, T., (eds.), Telecommunications Network Management: Technologies and Implementations, New York: IEEE Press, 1997.
[17] ABone Testbed, http://www.isi.edu/abone/.
[18] CORBA Components, Version 3.0, full specification, OMG Document formal/02-06-65.
[19] Distributed Management Task Force, http://www.dmtf.org.
[20] Denazis, S. G., and Galis, A., "Open Programmable & Active Networks: A Synthesis Study," Proc. IEEE IN 2001 Conf., Boston, May 6-9, 2001.
[21] FAIN Project Deliverable D9 - Evaluation Results and Recommendations, http://www.ist-fain.org/deliverables.
[22] Houatra, D., and Zimboulakis, E., "Network Model for the Provision of Active Multicast Services," XVIII World Telecommunications Congress, Paris, France, September 22-27, 2002.
[23] Mulder, H., (ed.), TINA Business Model and Reference Points, Version 4.0, TINA Consortium, 1997, http://www.tinac.com/.
[24] NMF BMP - Network Management Forum: A Service Management Business Process Model, Network Management Forum, NJ, 1996.
[25] Object Management Group, http://www.omg.org.
[26] Open Signaling Working Group, http://www.comet.columbia.edu/opensig/.
[27] Open Service Gateway Initiative, http://www.osgi.org.
[28] Open Service Access, http://www.3gpp.org/ftp/TSG_CN/WG5_osa/.
[29] OMG-TSAS - Telecom Service Access & Subscription Specification, Version 1.0, Object Management Group, October 2000, http://www.omg.org/techprocess/meetings/schedule/TSAS_FTF.html.
[30] PARLAY, http://www.parlay.org.
[31] IEEE P1520, http://www.ieee-pin.org/.
[32] TINA-BM - TINA Consortium: Reference Points and Business Model, Version 3, June 1996, http://www.tinac.com/.
[33] TINA-SA - TINA Consortium: TINA-C Service Architecture, Version 5.0, 1997, http://www.tinac.com/.
[34] Y.110 - ITU-T Recommendation Y.110: Global Information Infrastructure Principles and Framework Architecture, June 1998.
[35] Y.120 - ITU-T Recommendation Y.120: Global Information Infrastructure Scenario Methodology, June 1998.
[36] Clark, D. D., and Tennenhouse, D. L., "Architectural Considerations for a New Generation of Protocols," Proc. of ACM SIGCOMM '90, September 1990.
[37] Floyd, S., et al., "A Reliable Multicast Framework for Light-Weight Sessions and Application Level Framing," IEEE/ACM Trans. on Networking, Vol. 5, No. 6, December 1997, pp. 784-803.
Chapter 8

FAIN Network Overview

In the world of networking we are experiencing a significant paradigm shift resulting in new technologies and architectures. The motivation behind this shift is the still-elusive goal of rapid and autonomous service creation, deployment, activation, and management, to meet new customer and application requirements. Research activity in this area has clearly focused on the synergy of a number of concepts: programmable networks, managed networks, network virtualization, open interfaces and platforms, and increasing degrees of intelligence inside the network (see Chapters 2 and 4). Next generation networks must be capable of supporting a multitude of service providers that exploit an environment in which services are dynamically deployed and quickly adapted over a common heterogeneous physical infrastructure, according to varying and sometimes conflicting customer requirements.

In this chapter, we describe a new programmable network architecture and its management, which has been designed and implemented as part of the Future Active IP Networks (FAIN) European Union research and development IST project [19]. The main objective of the FAIN project is to develop an active network architecture [29] oriented toward dynamic service deployment in heterogeneous networks. This architecture encompasses the design and implementation of programmable nodes that support different types of execution environments, policy-based network management, and a platform-independent approach to service specification and deployment. The architecture is deployed and evaluated in a pan-European testbed. The functionality deployed in the FAIN testbed and demonstrations highlights the key achievements of the project:

•	Development and integration of the FAIN active routers and FAIN management systems;
•	Installation and scenario-based runs of the FAIN testbed.
The FAIN active routers are deployed in the FAIN testbed as active nodes, which provide flexibility to the user for network management and service provisioning. The defining characteristic of the FAIN active router is the ability for users to load and manage software components dynamically and efficiently.

In Section 8.1 we briefly introduce the FAIN enterprise model, which specifies the business relationships of the FAIN networks. In Section 8.2 we present the FAIN reference architectural model on which the network architecture is based. In Sections 8.3, 8.3.2.1, and 8.3.2.2 we give an overview of the FAIN networking architecture, FAIN programmable nodes, and FAIN management nodes, respectively. The FAIN service provisioning, testbed, and testbed scenarios are presented in Sections 8.4, 8.5, and 8.6, respectively. In Section 8.7 we provide our conclusions. Further design and implementation details of the FAIN programmable nodes can be found in Chapters 9 through 14. Further design and implementation details of the FAIN management nodes and FAIN service provisioning can be found in Chapter 15 and Chapter 16, respectively. Further implementation details of two of the FAIN testbed scenarios, namely, the DiffServ and WebTV scenarios, can be found in Chapters 17 and 18, respectively.

8.1 FAIN ENTERPRISE MODEL

In this section, we elaborate the FAIN enterprise model, which was introduced in Chapter 7. From an abstract point of view, an enterprise model consists of: (1) actors, that is, people, companies, or organizations of any kind that own, use, support, and/or administer parts of an (active) networking system; (2) the various (business) roles these actors play; and (3) the contractual relationships between these roles, that is, the reference points. The term "business role" refers to the specific behavior, properties, and responsibilities assigned to an actor with respect to an identifiable part of the networking system, and the related activities that are necessary to keep this part of the system operational.
The set of FAIN's actors is mapped onto the set of business roles occurring in the FAIN context; that is, each role is understood to be played by at least one actor, who may, however, perform more than one role. The relationships between roles are expressed as reference points, which determine the interactions (interfaces and action sequences, together with related properties and constraints, granted rights, and so on) between the actors in the respective roles.

The FAIN actors are the "usual suspects": (1) the providers, that is, companies operating networks (by managing the respective hardware and software resources), offering services over the network (employing their own resources or those of the traditional network operators), and so on; (2) the users, that is, individuals or companies that use the proffered connectivity and services; and (3) the manufacturing companies that support the providers by developing (and vending) the necessary hardware and software components. The manufacturers are not the main focus of the FAIN context, even if their role is an important one; they are, however, briefly reflected in the enterprise model in order to make the picture complete.

The FAIN enterprise model describes only the business roles that may be taken by the above-listed actor companies (and, obviously, each actor company may act in more than one role if this suits its business). The purpose of defining business roles is to partition an active networking system so that responsibilities are identified, such that they can be clearly separated and assigned to different actors. Consequently, the enterprise model must determine the interactions that are expected to take place between the actors acting in the respective business roles, in order to allow for independence and "composition" of the partial systems provided by the actors. The reference points are the means to describe these interactions. They comprise the administrative relations, such as how to establish and handle the business relations and legal issues, called the "business model" in the terminology of the Telecom Management Forum (TMF), and the technically necessary interaction schemes that must be supported by the respective interacting components: their functionality and capabilities, the qualities they must guarantee, and so on. In short, these are the necessary technical (interface-related) contracts that must be established between the components in the realm of the various actors in their respective roles.
[Figure: role boxes for the consumer, service provider/retailer, active network service provider, and network infrastructure provider (the roles in the focus of FAIN), together with the broker, service component provider, service component manufacturer, active middleware manufacturer, and hardware manufacturer, linked by reference points RP1 through RP7.]

Figure 8.1 The FAIN enterprise model.
8.1.1 Roles

The enterprise model for active network/service provisioning is depicted in Figure 8.1. The gray-shaded business roles are considered to be within the focus of the FAIN project; the other roles in the figure are not further detailed beyond the brief description given in the text that follows. The FAIN business model is composed of the following roles:

Consumer (C): The consumer is the end user of the active services offered by a service provider. In FAIN, a consumer may be located at the edge of the information service infrastructure (i.e., be a classical end user) or it may be an Internet application, a connection management system, and so on. In order to find the required service among those proffered by the service providers, the consumer may contact a broker and, for example, use its service discovery features.

Service provider (SP): A service provider composes services, including active components delivered by a service component provider; deploys these components in the network via the active network service providers; and offers the resulting service to the consumers. The services provided by a service provider may be end user services or communication services. A service provider may join with other service providers in order to build more complex services. Descriptions of the offered services are published via the broker service.

Active network service provider (ANSP): An active network service provider provides facilities for the deployment and operation of active components into the network. It provides virtual environment facilities (one or more) for active service components to service providers. Together with the network infrastructure provider, it forms the communication infrastructure.

Network infrastructure provider (NIP): A network infrastructure provider provides managed network resources (bandwidth, memory, and processing power) to active network service providers. It offers a network platform, including AN components, to the active network service providers, who can build their own execution environments, and proffers basic IP connectivity (1), which may be based upon traditional transmission technology as well as emerging ones, both wired and wireless (2).

(1) Since FAIN focuses on Internet technology, a basic network infrastructure consists of the following characteristics: the presence of a physical network; an IPv4 and/or IPv6 forwarding mechanism; a default IP routing protocol; if RP7 is implemented, a peering service to other NIPs or ISPs; and an element management interface.
(2) The service offered by the NIP may also be used by current Internet service providers.
Service component provider (SCP): A service component provider is a repository of active code that may be used by the consumer and the service provider. A service component provider may publish its components via a broker.

Service component manufacturer (SCM): A service component manufacturer builds service components and active code for active applications and offers them to service providers, consumers, and service component providers in appropriate form (e.g., as binary or as source code). The service component manufacturer is not a genuine FAIN business role, since it is not directly involved in the main FAIN business process.

Middleware manufacturer (MM): An active middleware manufacturer provides platforms for active services (including the execution environments) to the active network service provider. The active middleware manufacturer is not a genuine FAIN business role, since it is not directly involved in the main FAIN business process.

Hardware manufacturer (HM): A hardware manufacturer produces programmable networking hardware for the network infrastructure provider. The hardware manufacturer is not a genuine FAIN business role and will not be detailed further in the FAIN enterprise model.

Retailer (RET): The retailer [according to the OMG Telecommunications Service Access and Subscription (TSAS) model] acts as an access point to the services provided by service providers to customers. It must be capable of negotiating and managing the end user contract (e.g., handling authentication and security issues as well as accounting and billing), and it must allow service lifecycle management by the user (subscription and customization of a service, administrative interactions, and so on).

Broker (BR): The broker collects, stores, and distributes information about: (1) available services (i.e., those offered by the service providers); (2) the administrative domains in the active networking system; and (3) service components (i.e., code offered by the service component providers). The FAIN broker basically resembles the "broker" role in the TINA business model.

8.1.2 Reference Points

Among the business roles, the following reference points are defined:
RP1 (SCP-SP): This provides contracting, subscription, and transfer of components in a client/server situation; information about execution environments should be exchanged to determine which components to transfer.

RP2 (ANSP-SP): The SP delivers components to be injected in the network by the ANSP; the SP requests generic network services, processing resources, and code injection from the ANSP; the ANSP allocates resources depending on availability; the ANSP provides information about the capabilities of network services; the SP is responsible for service session management, according to the agreed service level (ANSP-SP); the ANSP is responsible for network service management, according to the agreed service level (ANSP-SP).

RP3 (NIP-ANSP): The NIP provides and allocates physical active networking resources; the ANSP requests resources (network, processing, transmission, and so on) from the NIP in order to provide network services, and is responsible for injection of active code.

RP4 (SP-C): The consumer requests and uses services offered by the service provider. The SP provides the service with its functionality. It also subsumes the role of the retailer as service access point (i.e., it determines the service contract).

In addition, RP5 (SP-SP), RP6 (ANSP-ANSP), and RP7 (NIP-NIP) deal with federation among roles of the same kind. For instance, a service provider may provide features of the services of other cooperating service providers, and it may use these service providers to offer facilities of different service component providers with which it is not directly associated.

8.2 FAIN REFERENCE ARCHITECTURAL MODEL

In this section we present the FAIN reference architectural model on which the network architecture is based. The FAIN network element (NE) reference architecture is depicted in Figure 8.2. It describes how the ingredients identified in Section 8.1 can be combined synergistically to build next generation NEs capable of seamlessly incorporating new functionality, or of being dynamically configured to change their behavior according to new service requirements [16]. It applies both to application-level active networking [3, 45] and to active routers [4, 9, 14, 52].

One of the key concepts defined by the FAIN architecture is the execution environment. In FAIN, drawing on an analogy with the concepts of class and object in object-oriented systems [46], we distinguish between an EE type and the EE instances thereof. An EE type is characterized by the programming methodology and the programming environment that is created as a result of the methodology used. The EE type is free of any implementation details. In contrast, an EE instance represents the realization of the EE type in the form of a run-time environment, using a specific implementation technology, for example, a programming language and the binding mechanisms needed to maintain operation of the run-time environment. Accordingly, any particular EE type may have multiple instances, while each instance may be based on a different implementation. This distinction allowed us to separate the issue of the principles that must govern next generation NEs, and the properties they must possess, from the issue of how to build such systems.
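The class/object analogy can be made concrete. An EE type fixes only the programming methodology; each EE instance realizes that type with a particular implementation technology, and may sit in a different operational plane. The following sketch is purely illustrative (the class and field names are ours, not a FAIN API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EEType:
    """An EE type: a programming methodology, free of implementation detail."""
    name: str
    methodology: str              # e.g., "building-block composition"

@dataclass
class EEInstance:
    """A realization of an EE type using a concrete technology."""
    ee_type: EEType
    technology: str               # e.g., "Java", "Linux kernel module"
    plane: str = "transport"      # transport | control | management

# One EE type, two instances with different implementations, mirroring
# FAIN's Java EE and Linux kernel-based EE of the same EE type.
bb = EEType("FAIN-EE", "building-block composition")
java_ee = EEInstance(bb, "Java", plane="management")
kernel_ee = EEInstance(bb, "Linux kernel", plane="transport")

assert java_ee.ee_type is kernel_ee.ee_type   # same type, different realizations
```

The single shared `EEType` object mirrors the point made above: the Java EE and the Linux kernel EE are two realizations of one and the same EE type.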
[Figure: virtual environments, including a privileged VE hosting the VEM, contain EE instances (EE 1, EE 2, EE 3) spanning the transport, control, and management planes; beneath them, the extended NodeOS provides resource access control and the AN node facilities (security, ASP, management, demultiplexing) on top of the programmable NE hardware.]

Figure 8.2 The FAIN NE reference architecture.
The programming methodology that was used as part of the FAIN EE type was the building block approach [54], according to which services break down into primitive, distinct blocks of functionality, which may then be bound together in meaningful constructs [17]. To this end, services can be rebuilt from these primitive forms of functionality (i.e., the building blocks), while building blocks may be reused and combined together in a series of different arrangements, as dictated by the service itself. The result of this process is the creation of a programming environment like the one depicted in Figure 8.3. In FAIN we have built two different EE instances, a Java EE and a Linux kernel-based EE, of this particular EE type [21].

The FAIN architecture also allows EEs to reside in any of the three operational planes, namely, transport, control, and management, while they may interact and communicate with each other either across the planes or within a single plane. In fact, it is not the EEs that communicate, but rather the distributed service components hosted by them as part of deployed network services, which can be accessed by applications or higher level services by means of the network API they export. EEs (instances) are where services are deployed. Services may well be extensible, in the sense that the programming methodology and the corresponding environment (EE type) support service extension, while they can access services offered by other EEs to achieve their objectives and meet customer demands. For example, a service uses the code distribution mechanism to download its code extensions. The extension API then becomes part of the overall service interface.
[Figure: building blocks A1 through A7 bound together into composite components C1 and C2. Legend: component, a composite block built using basic blocks; building block, a basic element that performs a specified action as packets flow through it; binding.]

Figure 8.3 EE type: The programming environment.
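The building-block methodology can be sketched as function composition over a packet path: primitive blocks each perform one action, a bind operation turns a sequence of blocks into a composite component, and components themselves can be bound again into a service. All names here are illustrative, not the FAIN interfaces:

```python
from typing import Callable

Packet = dict
Block = Callable[[Packet], Packet]

def bind(*blocks: Block) -> Block:
    """Bind building blocks into a composite: packets flow through in order."""
    def component(pkt: Packet) -> Packet:
        for b in blocks:
            pkt = b(pkt)
        return pkt
    return component

# Primitive blocks, each performing one action on a passing packet.
def classify(pkt):
    pkt["class"] = "active" if pkt.get("anep") else "data"
    return pkt

def mark(pkt):
    pkt["dscp"] = 46 if pkt["class"] == "active" else 0
    return pkt

def count(pkt):
    pkt["hops"] = pkt.get("hops", 0) + 1
    return pkt

# Reuse and recombine the same blocks in different arrangements:
c1 = bind(classify, mark)      # a composite component (like C1 in Figure 8.3)
service = bind(c1, count)      # components bound again into a service

out = service({"anep": True})
assert out["dscp"] == 46 and out["hops"] == 1
```

Because `bind` accepts both primitive blocks and composites, the same block can appear in several arrangements, which is the reuse property the building-block approach is after.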
Furthermore, FAIN separates the concept of the EE from that of the virtual environment. We argue that the concept of an EE as defined previously and that of a VE are orthogonal to each other. In fact, a VE is an abstraction that is used only for resource management and control. Therein services may be found and may interact with each other. From the viewpoint of the operating system, the VE is the principal component responsible for the consumption and use of resources, the recipient of sanctions in the event of policy violations, and the entity that is permitted to receive authorization when services access control interfaces. Similar conclusions may be found in [39, 58]. In other words, a VE provides a place where services may be instantiated and used by a community of users or groups of applications, while staying isolated from others residing in different VEs. Within a VE, many types of EEs with their instances may be combined to implement and/or instantiate a service.

Another property of the reference architecture is that it makes no assumptions about how "thin" a VE is. It may take the form of an application, or a specialized service environment, for example, video on demand, or even a fully fledged network architecture as proposed in [7, 8]. Finally, a VE may coincide with an implementation (EE instance) that is based on only one technology; for example, Java. In either case this is a design decision dictated by customer requirements and/or the VE owner.

Out of all the VEs residing in a node there must be a privileged one that is instantiated automatically when the node is booted up, and serves as a back door through which subsequent VEs may be created through the management plane. This privileged VE should be owned by the network provider, who has access rights to instantiate the requested VE on behalf of a customer through a VE manager (VEM). From this viewpoint the creation of VEs becomes a kind of meta service.
[Figure: a network element hosting EE instances (EE 1, EE 2, EE 3) at different parts of the node: an NP forwarding engine, a conventional forwarding engine, an NP connection module with queues, a control processor, and the output ports.]

Figure 8.4 The network element representation.
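The role of the privileged VE as a "meta service," described above, can be sketched as follows: a VE manager owned by the network provider is the only entry point for instantiating further VEs, and it performs a simple resource check before admitting a new one. The interface and the resource model are hypothetical, for illustration only:

```python
class VEManager:
    """Sketch of a VEM: the privileged VE's entry point for creating customer VEs."""

    def __init__(self, capacity: int):
        self.capacity = capacity       # abstract resource units on the node
        self.ves = {"privileged": 0}   # VE name -> allocated units

    def create_ve(self, name: str, units: int) -> bool:
        # Simple admission check: accept only if enough resources remain.
        if sum(self.ves.values()) + units > self.capacity:
            return False
        self.ves[name] = units
        return True

vem = VEManager(capacity=100)
assert vem.create_ve("customer-A", 60)
assert not vem.create_ve("customer-B", 50)   # would exceed node capacity
assert vem.create_ve("customer-B", 40)
```

The point of the sketch is the asymmetry: customer VEs never create themselves; only the pre-instantiated privileged VE, via its manager, may do so on their behalf.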
The other major and most important component of the reference architecture is the NodeOS [49]. It offers all those facilities (3) that are necessary to keep all the other components together, and provides resource control, security, management, active service provisioning of service components, and demultiplexing. More details may be found in [21, 22]. All these facilities in the NodeOS cooperate to deliver the overall functionality of the NodeOS and to achieve its goals.

(3) We use here the word "facilities" to refer to services offered by the NodeOS to VEs, and distinguish them from services found inside EEs.

Between the VEs and the NodeOS lies the node interface, which encapsulates all the capabilities offered by the NE. Its objective is to create programmable abstractions of the underlying NE resources, whereby third-party service providers, network administrators, network programmers, or application developers can exert or extend node control through the use of higher level APIs. This interface coincides with the L-interface [36], and its specification must be implemented by EEs in order to achieve interoperability among different NEs. Finally, between the NodeOS and the hardware NE there might be the open router interface. Its scope coincides with the scope of the Connection Control and Management (CCM) interface of P1520.

The FAIN reference architecture is the starting point from which a detailed node architecture specification follows. Accordingly, it is complemented by the system architecture requirements, design, and specification. This, together with customer/user/application requirements, determines the degree of programmability to be built into the NE, and the choice of technologies. The previous two ingredients, namely, the EE instances and the open interfaces, require an NE in which to reside. Packets arriving at the node must follow different data paths inside the node. At every part of the node, EEs have been instantiated, implementing the programming methodology of their corresponding EE types, with some of them creating component-based programming environments. This gives rise to a new generation of network elements with architectures that are component based. Such a trend has been accelerated by the advent of innovative network products like network processors (NP) [34, 38], which are capable of hosting an EE without the cost of performance degradation. Figure 8.4 depicts this new situation in the form of a possible NE representation. In FAIN we have designed and built a prototype of an AN node that adopts the scenario above. Instead of an NP, we have built one EE in the kernel space, and another one in the user space. Both EEs support the building block approach, and receive packets and direct them to specific components for processing. A more detailed description may be found in [21].

8.2.1 Discussion of the FAIN Reference Architecture

The FAIN NE reference architecture serves as a way to manage and control overall service deployment.
Based on the ability to combine different EEs as part of service creation and deployment, not only may specific service components be deployed, but also a whole programming environment (EE instance) that is bound with existing EE instances. To this end, different functional models may be mapped onto the same physical NE infrastructure. One example could be that an EE instance is deployed in an NP, while another is preprogrammed in an Application Specific Integrated Circuit (ASIC). This constitutes a departure from the active networks reference architecture, where only EE instances of the same type are allowed to communicate. Furthermore, the separation between VE and EE allowed us to decouple resource control from the specifics of the technology used by EEs, so that multiple EEs may be hosted by one VE and still allocate resources as these are assigned to the VE.

Returning to the ForCES working group [35], and in particular its architectural representation of an NE built around CEs and FEs, as well as the proposed FE model [25], it is clear that the EE definition in FAIN is also valid as an FE definition, as inferred from the current state of the IETF working group [37]. In addition, an EE that resides in the control plane may well represent CEs, since such control EEs are used for controlling EEs in the transport plane. Accordingly, the issues of FE control and configuration, especially those that pertain to dynamically extensible FEs, are identical to those in FAIN. As such, the mechanisms for service deployment built in FAIN, which facilitate configuration and control of EEs (in the transport plane), may also be used for the same purposes within the context of the ForCES activity.

In the sections that follow we describe in detail how the reference architecture was realized to create the FAIN network and node architecture.

8.3 FAIN NETWORKING ARCHITECTURE

In this section we present an overview of the FAIN networking architecture, FAIN programmable nodes, and FAIN management nodes.

8.3.1 Networking Issues in FAIN

The FAIN active network architecture defines active nodes (see Figure 8.5), which provide full flexibility to the user for network management and service provisioning. The defining characteristic of an active node is the ability for users to load and manage software components dynamically and efficiently. This can be achieved safely, since customers who are sharing the same active node are provided with VPN-like resource partitioning.

[Figure: a network containing active nodes and passive nodes, together with active element management nodes and an active network management node.]

Figure 8.5 FAIN active networks.
Packets requiring active processing are marked to allow correct handling by active routers. This allows discrimination between active and conventional packets, and the selection of an active node. Routing and node resource configuration in the active nodes can be achieved by setting policies at the network management level (element and network management nodes). Access to this functionality is controlled, and only possible via a well-defined API.

8.3.2 Components in the FAIN Programmable Network

The FAIN reference architecture consists mainly of the following two entities: node systems and management systems.

8.3.2.1 FAIN Programmable Nodes

The FAIN node systems consist of:

• Active applications/services, which are applications executed in active nodes.
• Execution environments, which are environments where application code is executed. A privileged EE manages and controls the active node, and provides the environment where network policies are executed. Multiple and different types of EE are envisaged in FAIN. EEs are classified into virtual environments, where services can be found and interact with each other. VEs are interconnected to form a truly virtual network.
• NodeOS, which is an operating system for active nodes, and includes facilities for setting up and managing inter-EE and AA/EE communication channels, managing the router resources, providing APIs for AAs/EEs, and isolating EEs from each other.

Through its extensions, the NodeOS offers facilities through the following components:

• Resource control facilities (RCF): Through resource control, resource partitioning is provided, and VEs are guaranteed that consumption stays within the contract agreed during an admission control phase, whether static or dynamic.
• Security facilities (SF): The main security aspects are authentication and authorization to control the use of resources and other objects of the node, such as interfaces and directories. Security is enforced according to the policy profile of each VE.
• Application/service code deployment facilities (ASP support): As flexibility is one of the requirements for programmable networks, partly realized as static or dynamic service provisioning, the NodeOS must support code deployment.
• Demultiplexing (DEMUX) facilities: As flows of packets arrive at the node, demultiplexing filters, classifies, and diverts active packets to the appropriate VE, and consequently to the destination service inside the VE.
• Node management (NM) facilities: The main aspects are the initiation and maintenance of VEs, control and management of the RCF and SF, and management of the mapping of external policies into node resource and security configurations.

Figure 8.6 describes the main design features and the components of the FAIN nodes.
[Figure: the FAIN active node: VEs and management VEs hosting AAs and management AAs (MAAs) run on top of the NodeOS and its extensions, which provide resource control facilities, security facilities, and fast forwarding; policies enter the node, and notifications and events are emitted.]

Figure 8.6 FAIN active node.
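The DEMUX facility's task, to filter, classify, and divert active packets to the appropriate VE while letting conventional traffic take the fast path, can be sketched as a first-match lookup over per-VE channels. The packet fields, filter layout, and VE names are illustrative only, not the FAIN implementation:

```python
def demux(pkt: dict, channels: list) -> str:
    """Deliver a packet to the first channel whose filter matches, else the fast path."""
    for chan in channels:
        if all(pkt.get(k) == v for k, v in chan["filter"].items()):
            return chan["ve"]          # divert to the service inside this VE
    return "fast-forwarding"           # conventional packets bypass active processing

# Channels are created on request by VEs, each with its own packet filter.
channels = [
    {"filter": {"proto": "ANEP", "dst": "10.0.0.1"}, "ve": "ve-1/serviceA"},
    {"filter": {"proto": "ANEP"}, "ve": "ve-2/serviceB"},
]

assert demux({"proto": "ANEP", "dst": "10.0.0.1"}, channels) == "ve-1/serviceA"
assert demux({"proto": "ANEP", "dst": "10.0.0.9"}, channels) == "ve-2/serviceB"
assert demux({"proto": "TCP"}, channels) == "fast-forwarding"
```

Ordering matters in such a scheme: the more specific filter is listed first so that it shadows the generic ANEP channel.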
In FAIN, node prototypes include a high-performance active node, with a target of 150 Mbps, and a range of flexible and very functional active nodes/servers, with the potential for multiple VEs hosting different EEs. Figure 8.7 provides an overview of the major FAIN nodes, components, and their corresponding interfaces that comprise the FAIN node architecture.

More specifically, the privileged virtual environment has been enhanced with a new component, called the VE manager, that implements the VE management framework. This component is the most crucial one, as it offers a number of node services that are deemed necessary to configure and set up the node. It is used for instantiating new VEs and deploying EEs and components therein, as well as for control interfaces that allow services inside VEs to customize resources according to application-specific requirements. In addition, the proposed framework allows the implementation of other components, like resource managers in the RCF [32] or channel managers in the DEMUX, to be easily integrated with the implementation of the framework by means of a set of classes from which these components inherit. Finally, the VEM specifies another set of interfaces, namely the template manager and the component manager, that facilitate communication and integration with types of EEs other than the one the VEM used for its own implementation. This enables future integration with other implementation instances that are currently under development.
[Figure: the FAIN active node, comprising the privileged VE with the VEM, ASP, SEC, RCF, and DMUX components on top of the NodeOS, together with the FAIN management node (PBNM), interconnected in the active network testbed.]

Figure 8.7 Overview of AN node architecture.
The DEMUX has been enhanced with a channel manager that is capable of creating different types of channels. These are used to receive different types of packets (for example, data packets or ANEP packets) and to deliver them to the proper services running inside different VEs. These channels are created by the DEMUX components and are given to the requesting VEs, which control them. To this end, VEs may request the creation or deletion of channels, as well as configure these channels to receive certain packets; for example, those for a specific IP address. The DEMUX thus provides the two ends (input and output) of a plug-and-play data path that is supported by the component-based AN node architecture.

Two new entities were added, namely, the security manager [26, 50, 51] and the connection manager. The former enhances the authorization functionality, which now supports multiple authorization engines, and exports security interfaces (policy and credentials) to the node. The latter provides security support for hop-by-hop data integrity over connections with adjacent nodes. To this end, the security-related options of the ANEP header were also specified, and used in scenarios that involve this aspect of security functionality.

The AN resource control framework (RCF) has adopted the VEM framework, whereby the resources and their corresponding resource managers of the original RCF architecture in [21] are encapsulated in the component manager of the VEM. Accordingly, the RMs can be deployed and controlled like any other regular service component, through the interfaces inherited from the component manager. The RCF has also been extended with an admission control entity that is responsible for deciding whether a new VE creation request may be accepted, provided that there are resources left available in the AN node.

The AN node architecture has been implemented in a Java EE on top of a Linux operating system, using netfilter for packet classification and forwarding. This implies that the management functionality of the node has also used a Java EE. However, another aspect of the architecture is the simultaneous support of multiple EEs provided by the VEM framework. To this end, we have built two different types of EEs: one high-performance EE in the kernel space of the Linux operating system, and one control EE [11, 18], called the active SNMP EE [53] (see Chapter 13). The high-performance EE is capable of dynamically deploying service components in the data path on behalf of different VEs. The active SNMP EE is an example of an in-band signaling [47] EE that enables valid users to control node resources by communicating with an SNMP agent. Other examples are in [44]. Further details and discussions about the active node are provided in the remainder of this chapter and in Chapters 9 through 13.

8.3.2.2 FAIN Management Nodes

Introduction

The management approach in the FAIN project is policy-based. The FAIN management systems consist of:
• Policies: A description of the policies required to manage the active nodes and network.
• A node management component: Management components within the active nodes execute policies within an active node and monitor local node resource usage. The execution of policies means mapping target policies into node resource configurations.
• Management stations: A set of management nodes that provide mechanisms to enable network administrators to manage the active networks as a whole, including network policy setup and processing, as well as managing the network service provisioning process.
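The statement that executing a policy means mapping a target policy into node resource configurations can be illustrated with a small translation step. The policy fields and configuration keys below are invented for the example; they are not FAIN's policy schema:

```python
def execute_policy(policy: dict) -> dict:
    """Map a high-level target policy onto a node-level resource configuration."""
    config = {}
    if "bandwidth" in policy:
        # e.g., "this VE gets 10 Mbps" becomes a queue/scheduler setting
        config["scheduler"] = {"ve": policy["ve"], "rate_mbps": policy["bandwidth"]}
    if policy.get("secure"):
        # a security clause becomes a packet-filter setting for that VE
        config["filter"] = {"ve": policy["ve"], "require_auth": True}
    return config

cfg = execute_policy({"ve": "customer-A", "bandwidth": 10, "secure": True})
assert cfg["scheduler"]["rate_mbps"] == 10
assert cfg["filter"]["require_auth"] is True
```

The essential property is that the input speaks in management terms (a target and a goal) while the output speaks in node terms (concrete resource and security settings), which is exactly the gap the node management component bridges.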
As the delivery of services will require the cooperation of a number of active nodes, the network providers will need the means to manage the active nodes as a group of nodes and not individual nodes. They will need monitoring mechanisms for checking that the correct policies are being defined and used in relation to the network, before they are sent to the actual network. They will need to know what policies are currently loaded in the active nodes, and what impact these are having on the network. They will also need to protect and monitor the security of the
network. Therefore, the network/service providers need a set of management mechanisms that will enable them to manage the network as a whole. In FAIN we see the need for two types of management nodes [64] to provide these mechanisms: element management stations (EMS), and network management stations (NMS). The main difference in functionality between these two types of management nodes lies in the policy types that they can process and manage, in the subnetworks that they cover, and in the creation of management domains for different types of users, as shown in Figure 8.8.
Figure 8.8 Active network management.
Furthermore, the relationships between the EMS, the NMS, and the active nodes with regard to the policy flow are shown in Figure 8.8.

FAIN PBNM Approach

Network management, whether of telecommunication or data networks, has traditionally followed the manager-agent model [1], and deals with three fundamental aspects: (1) functionality, grouped according to five areas, namely fault, configuration, accounting, performance, and security (FCAPS); (2) information modeling, by which network and network element resources are identified and abstracted in a way that underpins specific operational semantics; and (3) the communication method between managers and agents. Realizing and implementing this model gave rise to a series of different network management architectures, each the result of limitations of the previous ones, but most notably of emerging network architectures and the new demands imposed on their management. The latter resulted from the combined effect of rapidly changing customer requirements, enabling technologies, and the new
market forces. Such management architectures have been categorized as centralized, hierarchical, and distributed, while a hybrid between hierarchical and distributed is also possible [15, 25]. Although one of the original motivations for making the management architecture hierarchical was the need to reduce the management data flows from the agent to the central manager, the management application still ran in the manager, away from the network element (NE), treating the agent as a mere implementation of the communication protocol, unable to make any decisions. Management by delegation (MbD), first introduced as a concept in [31], was conceived in an attempt to transfer the management logic from the central management system closer to the managed entity, thus relieving the central management system of much of the management burden.

Traditional network management techniques focused on the procedures that are required to carry out the FCAPS functions. They built the communication protocols and information models that are pervasive across a network, so that the network can be managed as smoothly as possible. However, they did not provide a framework for high-level, automated network management based on well-defined rules that capture the semantics of the procedures the network should adhere to. Policy-based network management fills this gap. The following requirements are considered [62] to be increasingly consistent across different market segments, ranging from small and medium enterprises to large enterprises, as well as service providers' operation support system (OSS) environments:

• Ease of use and implementation;
• Ease of integration;
• Dynamic adaptability;
• Scalability;
• Reliability;
• Cost/value.
Policy-based network management [13, 41, 62] is also instrumental in meeting most of the aforementioned requirements, as it provides a common "language" [7] in which to communicate these requirements to the network infrastructure, which is then configured accordingly. Policies capture the semantics of a second, more advanced level of management logic, which can be associated with different uses of the network resources at the same time. Figure 8.9 abstracts the main entities that participate in policy-based networking. Central to this figure are the policy enforcement point (PEP), which resides in the NE, and the policy decision point (PDP), which resides in a policy server [61]. A database, generally known as a policy repository, stores the policies that the NEs must adhere to.
The PDPs and PEPs collaborate by exchanging messages by means of a standard protocol (such as COPS) in order to support policy control. Such messages transport configuration information, syntactically and semantically defined in the policy information base (PIB). Two common models are used for policy control: outsourcing and configuration [10].

Figure 8.9 IETF policy-based networking.
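The difference between the two models can be sketched in a few lines of Python (class names, rule format, and actions here are hypothetical illustrations, not FAIN code): in the outsourcing model the PEP defers each decision to the PDP as events arrive, while in the configuration model the PDP provisions decisions on the PEP in advance.

```python
# Sketch of the two IETF policy-control models (hypothetical names).

class PolicyRepository:
    """Stores the policies that the NEs must adhere to."""
    def __init__(self):
        self.rules = []                      # (condition, action) pairs

    def add(self, condition, action):
        self.rules.append((condition, action))

class PDP:
    """Policy decision point, residing in a policy server."""
    def __init__(self, repository):
        self.repository = repository

    def decide(self, request):
        # Outsourcing model: consulted by the PEP for every request.
        for condition, action in self.repository.rules:
            if condition(request):
                return action
        return "reject"

    def push_config(self, pep):
        # Configuration model: pre-install the decisions on the PEP.
        pep.config = list(self.repository.rules)

class PEP:
    """Policy enforcement point, residing in the network element."""
    def __init__(self, pdp):
        self.pdp = pdp
        self.config = []

    def handle(self, request):
        # Use pre-installed configuration if present, else outsource.
        for condition, action in self.config:
            if condition(request):
                return action
        return self.pdp.decide(request)

repo = PolicyRepository()
repo.add(lambda r: r.get("dscp") == 46, "priority-queue")
pdp = PDP(repo)
pep = PEP(pdp)

outsourced = pep.handle({"dscp": 46})   # decision made remotely by the PDP
pdp.push_config(pep)
configured = pep.handle({"dscp": 46})   # decision now made locally
```

Either way the same policy is enforced; the models differ only in where and when the decision is taken.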
In either of the two models, a large number of management operations may be automated, making network management simpler and more scalable. In addition, policy-aware NEs offer vendor independence, making integration and interoperation feasible without prohibiting product differentiation. Finally, by treating PEPs and PDPs as execution environments that can be extended, like the elastic server in the MbD paradigm, functionality can be added dynamically, thereby adapting the network to new demands in the form of new policies. This was one of our motivations for basing the FAIN management design on the policy framework.

The FAIN policy-based network management (PBNM) architecture [28, 40] is designed as a hierarchically distributed architecture consisting of two levels (a two-tiered architecture): the network management level, which encompasses the network management system, and the element management level, which encompasses the element management system. This approach was extended in [27] to cover grid networks and services.

Furthermore, the defined policies have been categorized according to the semantics of management operations, which may range from QoS operations to
service specific operations. Accordingly, policies that belong to a specific category are processed by dedicated policy decision points and policy enforcement points (see Figure 8.10).
Figure 8.10 The hierarchical FAIN management architecture.
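As a rough illustration of this two-tier processing, the following sketch (with a hypothetical policy format and function names, not the FAIN policy language) shows a network-level policy being mapped into element-level policies, one per EMS, and each element-level policy being turned into a node-level enforcement action.

```python
# Sketch: two-tier policy processing (hypothetical policy format).

def nms_process(network_policy, ems_list):
    """NMS PDP decides whether to enforce; the NMS PEP then maps the
    network-level policy into element-level policies, one per EMS."""
    if not network_policy.get("enforce", True):   # NMS PDP decision
        return []
    return [{"ems": ems,                          # NMS PEP mapping
             "action": network_policy["action"],
             "params": network_policy["params"]}
            for ems in ems_list]

def ems_process(element_policy):
    """EMS PDP/PEP: turn an element-level policy into the enforcement
    action executed at the AN node PEP."""
    return ("enforce", element_policy["ems"],
            element_policy["action"], element_policy["params"])

net_policy = {"action": "set-dscp",
              "params": {"flow": "video", "dscp": 46},
              "enforce": True}
element_policies = nms_process(net_policy, ["ems-1", "ems-2"])
actions = [ems_process(p) for p in element_policies]
```

The point of the hierarchy is visible even in this toy form: one policy enters at the top, and per-element work is fanned out and executed close to the network elements.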
The NMS is the entry point of the management architecture. It is the recipient of policies that may result from network operator management decisions, or from service level agreements [58] between the ANSP and an SP, or between an SP and a C. Such SLAs require reconfiguration of the network, which is automated by means of policies sent to the NMS. Network-level policies are processed by the NMS PDPs [30], which decide when policies can be enforced. When enforced, they are delivered to the NMS PEPs, which map them to element-level policies that are, in turn, sent to the EMSs. The EMS PDPs perform similar processes at the element level. Finally, the AN node PEPs execute the enforcement actions at the NE. The use of the policy control configuration model [10] in a hierarchically distributed management architecture combines the benefits of management automation with a reduction of management traffic and a distribution of tasks. As the FAIN management architecture is based on the FAIN business model, the relationship among the three main actors, namely the ANSP, the SP, and the C, is
projected directly onto the architecture. Accordingly, each one of these actors may request and get his own (virtual) management architecture, through which he is enabled to manage the resources allocated to the virtual environments (VE) of his virtual network (see Figure 8.11). In this way, each actor is free to select and deploy his own model of managing the resources; namely, his own management architecture, which can be centralized, hierarchical, policy based, or nonpolicy based. The complexity of the virtual network and the types of service that are deployed in it dictate the particular choice of management architecture by its owner. In addition, different management architectures simultaneously coexist in the same physical network infrastructure, as they may be deployed by different actors. To this end, we create an environment that is capable of accommodating opposing requirements, an accomplishment that is beyond the capabilities of the traditional approach of monolithic architectures. Our model extends the Tempest approach [58] to the management plane; Tempest was the first to advocate the simultaneous support of (virtual) control architectures for ATM networks. It also extends the scope of management by delegation [31], as it allows delegation of the network management responsibility to a third party, for example an SP, which can be deployed and hosted in a separate physical location from the NMS of the owner of the network, for example, the ANSP. Figure 8.11 illustrates the aforementioned discussion. Starting with the management architecture of the network operator, namely the ANSP: the ANSP instantiates and registers a new management instance (MI), which is delegated to one of its customers; that is, the SP. This management instance will host the SP's management architecture. The SP has the option to buy from the ANSP an instance of the ANSP's architecture; in our case, a policy-based one.
To this end, the network management architecture developed by the ANSP is used not only for managing the network elements; it also becomes a commodity, creating another important source of income for the ANSP. Furthermore, the ability of the ANSP to generate and support multiple management domains may create additional business opportunities. For example, the ANSP may build an OSS hosting facility in which SPs instantiate their own management architectures. In this way, the ANSP may sell both its expertise in running and operating an OSS and the architecture itself, together with its corresponding implementation. The SP, in turn, does not need to build its management architecture from scratch, but can customize an existing one according to the services it intends to run. Alternatively, the SP may deploy its own management architecture using the OSS hosting facility provided by the ANSP, thus reducing the cost of managing the network.
In FAIN we have focused on and experimented with the automated instantiation of management architectures using as a blueprint the PBNM system of the ANSP to instantiate another management system for the SP. Note also that this instantiation relationship can be recursive in the sense that the SP may further delegate its own instances to a consumer.
Figure 8.11 FAIN management instances and their components.
Finally, the MI architecture used by the ANSP has been designed in such a way that it is dynamically extensible in terms of its functionality, as a result of using active networks technology. The ANSP's management architecture can be extended in two distinct ways:

• By deployment of a whole new pair of PDPs/PEPs that implement new management functionality;
• By extension of the inherent functionality of existing PDPs/PEPs.
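The first of these two extension paths can be sketched as follows (hypothetical names; in FAIN the actual fetching and deployment of components is performed by the active service provisioning system): a PDP manager keeps a table of functional domains and, on receiving a policy for an unknown domain, triggers deployment of a new PDP/PEP pair for it.

```python
# Sketch: a PDP manager that extends a management instance with new
# PDP/PEP pairs on demand (hypothetical names, not FAIN code).

class PDPManager:
    def __init__(self):
        self.domains = {}            # functional domain -> (pdp, pep)

    def dispatch(self, domain, policy):
        if domain not in self.domains:
            # Unknown domain: trigger deployment of a new PDP/PEP pair.
            self.domains[domain] = self.deploy_domain(domain)
        pdp, pep = self.domains[domain]
        return pep(pdp(policy))      # decide, then enforce

    def deploy_domain(self, domain):
        # Stand-in for fetching the real components via the ASP.
        pdp = lambda policy: ("accept", policy)
        pep = lambda decision: f"enforced {decision[1]} in {domain}"
        return pdp, pep

mi = PDPManager()
result = mi.dispatch("qos", "mark-dscp-46")
```

A subsequent policy for the same domain reuses the already deployed pair, which is the behavior the dynamically extensible MI is after.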
The former is triggered by the PDP manager whereas the latter is achieved by the PDPs themselves. The execution of the extension; namely, fetching and deploying the requested functionality, is the responsibility of FAIN’s active service provisioning system [22]. One important assumption underlying the previously described virtual management architectures is that well-established open interfaces and protocols must be provided by the NEs. This may seem from the outset to be a demanding
condition, but there is convincing evidence of a strong push toward ubiquitous open interfaces. Initiatives like the IEEE P1520 [5, 36] and, lately, the IETF ForCES [35] working group serve as proof of such claims. Furthermore, the programmable and active networks paradigm relies on similar assumptions [21]. These requirements apply to non-FAIN active nodes; the FAIN AN follows this assumption. Further design details of the FAIN PBNM architecture can be found in [56, 57, 59, 60] and in Chapter 14.

8.4 FAIN ACTIVE SERVICE PROVISIONING

8.4.1 Introduction

In this section we provide a description of FAIN service provisioning. Active service provisioning is understood in the context of the FAIN project as a system aimed at deploying active services in the FAIN network. In general, active service deployment is the process of making a service available in the active network so that the service user can use it. The deployment process is usually seen as a number of preparatory activities before the phase of service operation. The typical activities include releasing the service code, distributing the service code to the target location, installing it, and activating it.

Since the mid-1990s many efforts have been made to develop active networks technology to enable more flexibility in provisioning services in networks. By defining an open environment on network nodes, this technology allows us to rapidly deploy new services that would otherwise need a long time and the adoption of additional hardware. The remainder of this section describes the FAIN approach, the actors, the use cases, and the ASP architecture.

8.4.2 FAIN Approach

The FAIN project follows an approach in which a number of existing and emerging active network technologies are integrated. With regard to deployment, it proposes a novel approach to deploying services in heterogeneous active networks [6]. In particular, the FAIN approach to deployment is characterized by the following:

• On-demand service deployment support: The ASP supports deploying a service whenever it is needed. A service deployment may be explicitly requested by a service provider, by another service already deployed, or by a management component.

• A component-based approach: Deploying and managing high-level services requires an appropriate service model. While fully fledged component-based service models are an integral part of many enterprise computing architectures (e.g., Enterprise JavaBeans, the CORBA component model, Microsoft's .NET), this is not the case in many approaches developed by the active networking community. The FAIN deployment framework is designed on top of a component-based service model similar to the CORBA Component Model. The service model is hierarchical in that service components may recursively include subcomponents. This allows for a fine-grained service description and composition.

• Network- and node-level architecture: To deal with the complexity of deployment issues in active networks, the active service provisioning has been designed according to the object-oriented concept of the "separation of concerns." The network-level ASP copes with network issues, which include finding the nodes of the target environment for a given service, considering topological service requirements as well as network link QoS requirements; for instance, bandwidth. The node-level ASP, on the other hand, is concerned with node-specific requirements, including technology and other service dependencies.

• Integrated service deployment and management: The FAIN approach to service deployment is tightly integrated with FAIN service and network management. On one hand, the ASP depends on the service management framework implementing EE-specific deployment mechanisms, including installation and instantiation. On the other hand, the target environment in which the service is to be deployed is codetermined by the network management system. The target environment is defined to be a virtual active network (VAN) that is established by the FAIN network management system. The VAN is created by the management system according to the service requirements.

• Selective code deployment: The service code distribution is done by the selective downloading of the required code modules from a code repository. The decision as to which code modules are needed is made by the ASP components at the target active nodes [43].

• Support for heterogeneous services and networks: The ASP has been designed to enable service deployment in heterogeneous networks. This is achieved by specifying a unified interface to the node capabilities, and a unified notation for describing service specification and implementation requirements. Whereas CORBA [12] technology is used to define the unified API to the node, XML technology is used to define the unified service description.
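Since FAIN's implementation is Java/CORBA/XML based, the following Python sketch is only illustrative of the hierarchical service model (class and module names are hypothetical): a component recursively aggregates subcomponents, and flattening the tree yields the full set of code modules that a deployment of the whole service must cover.

```python
# Sketch: a recursive, component-based service model (hypothetical
# classes; FAIN's model is similar in spirit to the CORBA Component Model).

class ServiceComponent:
    def __init__(self, name, code_modules=(), subcomponents=()):
        self.name = name
        self.code_modules = list(code_modules)
        self.subcomponents = list(subcomponents)

    def all_code_modules(self):
        """Flatten the component tree into the list of code modules
        that must be deployed for the whole service."""
        modules = list(self.code_modules)
        for sub in self.subcomponents:
            modules.extend(sub.all_code_modules())
        return modules

# A toy service with two subcomponents (names are illustrative).
webtv = ServiceComponent("webtv", ["webtv-core.jar"], [
    ServiceComponent("transcoder", ["transcoder.jar"]),
    ServiceComponent("duplicator", ["duplicator.jar"]),
])
```

The recursion is what gives the fine-grained description mentioned above: any subcomponent can itself be described, deployed, or replaced independently.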
8.4.3 Actors

The main actors communicating with the ASP system are the:

• Service provider: The SP composes services that include active components, deploys these components in the network via active service provisioning, and offers the resulting service to consumers. The service provider is responsible for releasing and withdrawing a service, which includes a service version update or a complete removal of the service from specific nodes or from the complete active network, respectively. Furthermore, the SP may be represented by the FAIN network management system with regard to the initiation of service deployment or service reconfiguration.

• Active network service provider: The ANSP provides facilities for the deployment and operation of the active components in the network. Such facilities come in the form of active middleware support of new technologies. The ANSP is represented by active nodes, which are the target environment in the context of deployment. This means that services may be deployed in these nodes and use the node resources made available to them by the ANSP.

These roles are described in the FAIN enterprise model [20] in more detail.
8.4.4 Use Cases

The main use cases of the ASP system are:

• Releasing a service: The service provider that decides to offer its service in the active network must first release it there. The service is released by making the service meta information and the service code modules available to the ASP system.

• Deploying a service: After the service is released in the network, the service provider may want to deploy it so that it can be used by a given service user. This means finding the target nodes that are most suitable for the given service installation, determining a mapping of the service components to the available execution environments of the target nodes, downloading the appropriate code modules, and, finally, installing and activating them.

• Reconfiguring a service: The service provider, or the network management system on its behalf, may request a change to the current configuration of the service. This may include modifying component bindings, deploying additional service components, or redeploying components that have already been deployed.

• Removing a service: The service provider may ask to remove a deployed service from the environment in which it was deployed. The ASP identifies the installed service components, and removes them from the EEs of the target environment.

• Withdrawing a service: A service released in the active network may be withdrawn so that it is no longer available to be deployed. The ASP removes the service meta information, and discards the service code modules.
8.4.5 ASP Architecture

The FAIN ASP system has a two-layered architecture: the network level and the node level. The network-level functionality is concerned with finding the target nodes for the service to be deployed, coordinating the deployment process at the node level, and providing a service code retrieval infrastructure. At the node level, the necessary service components are identified (through code dependency resolution), and the deployment mechanisms, including service installation, activation, and preconfiguration, are provided.

Network-Level ASP Design

The network-level ASP system consists of three components, depicted in Figure 8.12: the network ASP manager, the service registry, and the service repository.

The network ASP manager serves as an access component to the ASP system. In order to initiate the deployment of a particular service, a service provider contacts the network ASP manager and requests a service to be deployed as specified by the service descriptor.

The service registry is used to manage service descriptors. Service descriptors are stored in it when a service component is released in the network. The network ASP manager and the service creation engine (described later) may contact the service registry to fetch service descriptors. Finally, service descriptors are deleted from the service registry if a service is withdrawn from the network. In Figure 8.12 only one service registry is shown in the network; of course, several instances with possibly different content could be deployed in a network.

The service repository is a server for code modules. A code module is stored in the service repository when a service descriptor referencing the particular code module is released in the network. The code manager, which is part of the node-level ASP system and is described below, may fetch code modules from the service repository. A code module is deleted if a service descriptor referencing the particular code module is withdrawn.
As is the case for the service registry, several service repositories may coexist in a big network.
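The release/withdraw bookkeeping between the service registry (descriptors) and the service repository (code modules) can be sketched as follows (hypothetical in-memory data structures; the real registry and repository are network servers):

```python
# Sketch: release/withdraw bookkeeping for the service registry and
# service repository (hypothetical structures, not the FAIN protocol).

class ServiceRegistry:
    """Holds service descriptors, keyed by service name."""
    def __init__(self):
        self.descriptors = {}

class ServiceRepository:
    """Holds code modules, keyed by module reference."""
    def __init__(self):
        self.modules = {}

def release(registry, repository, descriptor, modules):
    """Make a service available: store its descriptor and every code
    module the descriptor references."""
    registry.descriptors[descriptor["name"]] = descriptor
    for ref in descriptor["modules"]:
        repository.modules[ref] = modules[ref]

def withdraw(registry, repository, name):
    """Remove a released service: delete the descriptor and discard
    the code modules it referenced."""
    descriptor = registry.descriptors.pop(name)
    for ref in descriptor["modules"]:
        repository.modules.pop(ref, None)

reg, repo = ServiceRegistry(), ServiceRepository()
release(reg, repo,
        {"name": "webtv", "modules": ["transcoder.jar"]},
        {"transcoder.jar": b"\x00bytecode"})
```

Note that withdrawal is driven by the descriptor: a module disappears from the repository only when the descriptor referencing it is withdrawn, mirroring the text above.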
Node-Level ASP Design

On the node level, the following components make up the ASP system, as shown in the node ASP block in Figure 8.12: the node ASP manager, the service creation engine, and the code manager.

Figure 8.12 FAIN active service provisioning.
The node ASP manager is the peer component to the network ASP manager at the node level. The network ASP manager communicates with the node ASP manager in order to request the deployment, upgrading, and removal of service components. The requests are dispatched to the service creation engine or the code manager, respectively, which implement the corresponding methods. The service creation engine plays a major role in the node-level deployment of service components [55]. Its main task is to select the appropriate code modules to be installed on the node in order to provide the requested service functionality.
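This selection step can be sketched as follows (the descriptor format and all names are hypothetical, not FAIN's XML descriptors): each component descriptor lists candidate code modules together with the node capabilities they require, plus dependencies on other components; the engine picks a module the node supports and resolves dependencies recursively, yielding the service tree that is handed to the code manager.

```python
# Sketch: matching component requirements against node capabilities and
# resolving dependencies into a "service tree" (hypothetical formats).

DESCRIPTORS = {
    "transcoder": {
        "modules": [
            {"code": "transcoder-native.so", "requires": {"promethos"}},
            {"code": "transcoder-java.jar", "requires": {"java-ee"}},
        ],
        "depends": ["duplicator"],
    },
    "duplicator": {
        "modules": [{"code": "duplicator-java.jar", "requires": {"java-ee"}}],
        "depends": [],
    },
}

def resolve(component, node_caps, tree=None):
    """Pick one installable code module per component, recursively
    including dependencies; returns the service tree as a dict."""
    tree = {} if tree is None else tree
    if component in tree:
        return tree                              # already resolved
    descriptor = DESCRIPTORS[component]
    for module in descriptor["modules"]:
        if module["requires"] <= node_caps:      # node supports it?
            tree[component] = module["code"]
            break
    else:
        raise ValueError(f"no installable module for {component}")
    for dep in descriptor["depends"]:
        resolve(dep, node_caps, tree)
    return tree

# A node offering only a Java EE gets the Java implementation.
service_tree = resolve("transcoder", {"java-ee"})
```

Because the matching runs on the node itself, a manufacturer could bias the choice toward proprietary modules (here, the native one) whenever the node advertises the corresponding capability.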
The service creation engine matches service component requirements against node capabilities, and performs the necessary dependency resolution. Since the service creation engine is implemented on each active node, active node manufacturers are able to optimize the mapping process for their particular node. In this way, it is possible to exploit proprietary, advanced features of an active node. The selection of service components is based on service descriptors that are retrieved from the service registry. As a result, information about the code modules that are to be installed on the particular node (a so-called service tree) is passed to the code manager.

The code manager performs the EE-independent part of service component management. During the deployment phase, it fetches the code modules identified by the service tree from the service repository. It also communicates with node management to perform the EE-specific part of the installation and instantiation of code modules. The code manager maintains a database containing information about installed code modules and their association with service components. If a particular service component needs to be removed, this database is consulted in order to find out which code modules are associated with the component and, as a consequence, must be removed as well.

Please note that information fetched from the service registry and the service repository is stored locally in respective caches (a local service registry and a local service repository) in order to optimize recurrent service deployment requests.

8.5 FAIN TESTBED

This section provides an overview of the FAIN active testbed, which serves as a permanent experimental network for active network technologies. A similar approach is described in [2]. The testbed is fully operational.
In the remainder of this section we will describe the structure of the testbed, the precise location of the different facilities and components, and the types of nodes running at the various sites.

8.5.1 Network Topology and Interconnection

This section describes the configuration of the FAIN testbed, showing the physical location of the nodes at partner sites, as well as describing the logical (overlay) networks and key components such as the domain name service (DNS).
Figure 8.13 FAIN testbed topology.
8.5.1.1 Testbed Topology

Figure 8.13 depicts the current topology of the FAIN testbed, with four sites (Zurich, Berlin, London, and Ljubljana) forming two core triangles, and the rest of the sites connected as leaves to one of the core nodes. The decision about which nodes would act as core nodes, and to which core node the other nodes would attach, was made after measurements were taken of the bandwidth and link quality between the different sites. Essentially, this is a three-level hierarchical tree topology with cross connections at the second level of the tree. The advantage of this topology in comparison with a full mesh is that the latter provides only single-hop paths between active nodes, while it may be more interesting to test applications over multihop paths. On the other hand, a tree with cross connections provides alternate paths between nodes, which is not the case with a simple tree topology. Finally, contrary to a full or partial mesh, a carefully constructed tree topology accommodates the fact that some partners have a lower bandwidth
connection to the testbed, due either to technical limitations or to corporate security policy.

8.5.1.2 Tunnel Configuration

The FAIN testbed has been set up as an overlay (i.e., virtual) network on the existing network infrastructure. The overlay network is based on IP tunneling, and is realized by appropriately configuring point-to-point tunnels between specific nodes, as shown in Figure 8.14. There are several different tunneling technologies, and the choice among them depends on the requirements. In FAIN, we have employed simple IP GRE tunneling (originally developed by Cisco), since the major requirement is to prevent the interference of experimental traffic with the production traffic. We do not consider testbed traffic to be of a sensitive nature (confidential), so there is no need to use IPsec tunneling to protect this traffic while in transit over the public Internet.
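For one end of such a point-to-point GRE tunnel, the Linux iproute2 commands can be generated along the following lines (a sketch only: the interface name, public addresses, and exact commands are illustrative assumptions, not the actual FAIN configuration, though the 10.0.p.0/24 testbed addressing follows the convention described below).

```python
# Sketch: generate iproute2 commands for one GRE tunnel end point
# (illustrative names and addresses; not the actual FAIN configuration).

def gre_tunnel_cmds(ifname, local_pub, remote_pub, local_priv, remote_subnet):
    """Return the shell commands that would set up one tunnel end point."""
    return [
        f"ip tunnel add {ifname} mode gre local {local_pub} "
        f"remote {remote_pub} ttl 255",
        f"ip addr add {local_priv} dev {ifname}",
        f"ip link set {ifname} up",
        f"ip route add {remote_subnet} dev {ifname}",
    ]

cmds = gre_tunnel_cmds("fain0",
                       "192.0.2.10",      # public address of this site
                       "198.51.100.20",   # public address of the peer
                       "10.0.11.1/24",    # testbed address (partner 11)
                       "10.0.12.0/24")    # peer's testbed subnet
```

The mirror-image commands, with local and remote swapped, would be run at the peer site; once both ends are up, the tunnel behaves as the point-to-point link described next.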
Figure 8.14 FAIN testbed overlay network.
For the two tunnel end points, a properly configured tunnel looks the same as a physical point-to-point link; that is, the nodes "think" they are directly connected, even though they use the public Internet to communicate with each other.

8.5.1.3 Partner Network Data/Properties

Each partner site has:

• At least one node connected to the public Internet, acting as a tunnel end point;
• A testbed subnet behind the tunnel end point, with an address range of the form 10.0.p.0/24, where p is the partner number (in order of appearance in the FAIN consortium partner list).
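This addressing convention can be captured mechanically; the sketch below (with hypothetical helper names) derives a partner's testbed subnet from its partner number using Python's standard ipaddress module, and checks whether a given host address belongs to it.

```python
import ipaddress

# Sketch: the 10.0.p.0/24 per-partner addressing convention
# (helper names are hypothetical).

def partner_subnet(p):
    """Testbed subnet assigned to partner number p."""
    return ipaddress.ip_network(f"10.0.{p}.0/24")

def in_partner_subnet(addr, p):
    """True if addr falls inside partner p's testbed subnet."""
    return ipaddress.ip_address(addr) in partner_subnet(p)

berlin = partner_subnet(12)      # partner 12 hosts the primary DNS
```

For instance, the primary DNS server address 10.0.12.12 given below falls, as expected, inside partner 12's subnet.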
Partners can freely use the addresses from the private address range assigned to them.

8.5.1.4 Domain Name Service

There is a DNS service running within the testbed. The primary DNS server is hosted and maintained in Berlin, and its IP address is 10.0.12.12. A secondary DNS server is hosted in Zurich, and has the IP address 10.0.11.11. All nodes and hosts within the testbed use .fa as the top-level domain. The fully qualified domain names (FQDN) for hosts within the testbed have the following form:

hostname.FAIN-partner-code.fa

For example, the host "onizuka" located in Berlin is called onizuka.fhg.fa.

8.5.2 Sites Overview

Figure 8.14 shows the actual status of the FAIN testbed. The bold lines represent the tunnels, whereas the lighter lines are the links in the private networks at the partners' sites. The FAIN testbed comprises different types of FAIN nodes:

• FAIN active network nodes: either type A (i.e., a node that contains the following components: VEM + DEMUX + RCF + SECOND + ASNMP + PromethOS), or type B (i.e., a hybrid router, which combines a commercial router with an active node provided by a physically separate, attached PC);
• A FAIN element management station;
• A FAIN network management station.

The EMS and NMS can be installed on either active or passive nodes. The FAIN testbed is constantly monitored using a "ping-based" tool. It checks whether the tunnel end points and the most important active nodes are up, and whether the ports on which the most important services run are open, and it monitors the average delay, loss rate, and bandwidth on the links.

8.6 FAIN SCENARIOS

In this section we introduce the FAIN scenarios exercised on the testbed. A number of testbed application scenarios have been developed in the FAIN project [23, 24, 33]. The purpose of the application scenarios is to exercise the functional concepts of FAIN in an intuitive and realistic manner. The FAIN application scenarios are therefore placed in circumstances that reflect the requirements and demands of reality. The application scenarios defined in FAIN are the:

• DiffServ scenario;
• WebTV scenario;
• Web service distribution scenario;
• Video on demand scenario;
• Mobile FAIN demonstrator scenario;
• Managed access scenario;
• Security scenario.
Brief descriptions of these application scenarios are given in the following sections. DiffServ and WebTV are discussed in detail in Chapters 17 and 18.

8.6.1 DiffServ Scenario

In this application scenario [63], a service provider agrees on a contract with an ANSP. The contract ensures three levels of transmission priority, each identified by a distinct DiffServ code point (DSCP). The SP interconnects the three parties A, B, and C, as shown in Figure 8.15. Traffic entering or leaving A and C passes through the hybrid routers HANN-1 and HANN-2, respectively.

Party A sends a video to party C with low transmission priority. Sometime later, party B starts sending "jamming" traffic to party C. The jamming traffic is identified by a DSCP guaranteeing higher priority over the video traffic. As long as the jamming traffic does not exceed the output bandwidth of HANN-2, nothing
happens. As soon as it exceeds the output bandwidth, user C sends an active packet to A. The active packet is authenticated and authorized at each active node that it traverses (i.e., at HANN-2 and HANN-1). Upon arrival at A’s local node, a SNAP program changes the DSCP of the video stream, so that it gets higher priority over the jamming traffic. A
C HANN-1
HANN-2
Video Client
(1)Video Send Active Proxy V
Video
Active Proxy V
GR2000
Server AP
GR2000 AP
Linux
V
Active
Jam Traffic Receiver
Router AP
Jam Traffic (3)AP Send Receiver V
: Video Packet
AP
: Active Packet
Jam Traffic (2)Jam Traffic Send
Sender
: Jam Packet B
Figure 8.15 DiffServ demonstration scenario.
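The effect of the SNAP program at A's node — remarking the video flow to a higher-priority code point — can be illustrated with a small sketch. The packet representation and the choice of code points are assumptions for illustration; the real scenario executes SNAP code, not Python.

```python
def set_dscp(tos_byte, dscp):
    """Replace the upper six bits (DSCP) of the IPv4 TOS/DS field,
    keeping the two low-order ECN bits intact."""
    if not 0 <= dscp < 64:
        raise ValueError("DSCP is a 6-bit value")
    return (dscp << 2) | (tos_byte & 0x03)

# Illustrative code points: the video starts low, the jamming traffic is
# marked higher, and the active packet promotes the video above it.
DSCP_LOW, DSCP_JAM, DSCP_HIGH = 10, 18, 46

def remark_video_flow(packets):
    """Model of the SNAP program's effect at A's node: every packet of the
    video flow gets the high-priority code point."""
    return [dict(p, tos=set_dscp(p["tos"], DSCP_HIGH)) for p in packets]
```

After remarking, HANN-2's scheduler would serve the video ahead of the jamming traffic, which is the observable outcome of the demonstration.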
8.6.2 WebTV Scenario

An SP, called the WebTV SP, offers a WebTV service to its customers by broadcasting a video program over the Internet. The customers are able to watch the video irrespective of their terminal capabilities. In order to do so, the WebTV SP requests from the ANSP the setup of an active virtual private network (AVPN) wherein it can deploy services that are customized to customer requirements. Customers need only subscribe to the WebTV service by contacting a server of the WebTV SP. In the scenario, one customer uses a terminal that is not capable of displaying the video stream correctly; for example, a handheld device with low processing power and low access bandwidth. The WebTV SP preprocesses the video stream for this particular customer by transcoding the stream into a format understood by the handheld device.

Based on an SLA between the ANSP and the SP, policies are sent to the ANSP MI. As a result, the ANSP PBNM receives a QoS policy and enforces it on both the NMS and all appropriate EMSs. The active node management framework creates a new virtual environment for the WebTV SP. If the VE creation is successful, the ANSP PBNM enforces a delegation policy through the NMS and all appropriate EMSs. The enforcement requests the active node management system to activate the newly created VE.
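The policy chain just described — the ANSP-level decision point pushing first a QoS policy and then a delegation policy down to enforcement points on the NMS and EMSs — might be sketched roughly as follows. The class and station names are invented for illustration and do not reflect the actual FAIN interfaces.

```python
class PEP:
    """Policy enforcement point: applies policies on one station (NMS or EMS)."""
    def __init__(self, name):
        self.name, self.applied = name, []
    def enforce(self, policy):
        self.applied.append(policy)

class PDP:
    """Policy decision point: decides which stations a policy concerns
    and pushes it to their enforcement points."""
    def __init__(self, peps):
        self.peps = peps
    def handle(self, policy):
        targets = policy.get("scope", list(self.peps))
        for name in targets:
            self.peps[name].enforce(policy)
        return targets

# The ANSP-level chain from the text, with made-up station names:
peps = {n: PEP(n) for n in ("NMS", "EMS-1", "EMS-2")}
pdp = PDP(peps)
pdp.handle({"type": "qos", "dscp": 46})               # QoS policy first
pdp.handle({"type": "delegation", "ve": "webtv-sp"})  # then VE activation
```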
The ANSP creates a management instance in all the appropriate EMS stations for this WebTV SP, and sets access rights at the active nodes' interfaces. The WebTV SP is now ready to configure its AVPN by sending policies that are specific to customers. The WebTV SP also installs the transcoder and duplicator service components. In addition, the WebTV SP deploys service-specific policies in the WebTV SP PDP of its MI, so that it can define its own service-specific policies that will be enforced in the active node. As the last step, the monitoring system is used for the reconfiguration of the transcoder at run time. This is used when, for example, the access bandwidth changes dramatically and the end user needs a different transcoding format for the video stream. The transcoder and the controller components are deployed via the ASP mechanism, upon request, in the SP's VE. The deployment is triggered by the SIP proxy when the SIP request from the customer arrives.

The creation of the AVPN for the WebTV SP entails the creation and configuration of a management instance: one MI at the network level, and one at the element level. At the beginning, only the PDP manager is instantiated inside each MI. When requests are forwarded to the MI, the PDP manager may decide to trigger the deployment of specialized functional domains by contacting the ASP, in order to deploy the SP PDP into the EMS, and the SP PEP into the SP VE. At the bootstrap of this SP PDP, a set of alarms/events is configured by means of policies, which configure the monitoring system to receive those alarms/events from the transcoder controller.

8.6.3 Web Service Distribution Scenario

In the Web service distribution scenario, Web (i.e., HTTP) traffic is distributed and redirected within the network among several distributed servers, in order to provide reliability, performance, and scalability for Web services. The Web service infrastructure exploits the capabilities of AN technology as follows:

• A Web service provider has access to active nodes, called "service nodes" or "active Web servers." The provider can install content and service logic onto those nodes, and by such means he can implement features such as content distribution and personalization, aggregation of user replies, or fast response times.
• Other active nodes, called "redirection nodes," are also available. They provide features that allow the filtering of HTTP traffic, the building of service sessions (i.e., dealing with per-user states), and the forwarding of the traffic of a particular session to a service node. Code is downloaded onto these nodes, and it observes the network load and the load and availability of the servers. Based on this information it determines a strategy for redirecting the traffic to the service node that is most suitable for a given Web service/user.

The overall scenario is depicted in Figure 8.16. Both redirection nodes and service nodes are usually physically located within the access network of the end user, within the access network of the Web service provider, or connected via a separate access network. For this reason, we may also call both of them "active Web gateways." Other approaches are presented in [48]. Note that we assume that the core network is nonactive and only provides basic IP connectivity. The overall setting is fully transparent to the end user; that is, the end user uses the Web service via an ordinary Web browser, using standard protocols such as IP and HTTP.

Figure 8.16 Service nodes and redirect servers implement active Web services.
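The redirection-node strategy — observe server load, pin each user session to one service node, and send new sessions to the most suitable node — can be sketched as follows. The "least loaded wins" policy and the node names are illustrative assumptions; the FAIN code could apply any strategy over the observed load and availability data.

```python
class Redirector:
    """Redirection-node sketch: pins each user session to a service node,
    choosing the least-loaded node for new sessions."""
    def __init__(self, loads):
        self.loads = dict(loads)   # service node -> observed load (0..1)
        self.sessions = {}         # session id -> chosen service node
    def route(self, session_id):
        if session_id not in self.sessions:
            # New session: pick the currently least-loaded service node.
            self.sessions[session_id] = min(self.loads, key=self.loads.get)
        return self.sessions[session_id]

r = Redirector({"svc-a": 0.7, "svc-b": 0.2})
r.route("user-1")            # new session goes to the least-loaded node
r.loads["svc-b"] = 0.9
r.route("user-1")            # sticky: the session stays on its node
```

The stickiness matters because the redirect node maintains per-user state, as the text notes; rebalancing applies only to new sessions.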
The benefits of AN technology shown by the scenario include, among others:

• Network-aware Web services: Contrary to existing Web services, they can operate based on information such as available bandwidth on access and network links, network topology, and load of servers. Because this information is not available at the client or server side, such services must be implemented with AN technology within the network.
• Ease of service programming and management: Active networks eliminate the need for cumbersome ad hoc solutions to specific problems, which require separate management; active networks provide common ground for the deployment of new services and mechanisms inside the network, and unify the management of these mechanisms and services.
• Distribution of service logic: Service logic is executed at several locations, including specific points inside the active network, which is potentially advantageous for large-volume services, fine (per-user) granularity of services, and new service features. With existing solutions such as caches or content distribution networks, only content, but not service logic, can be located within the network.
• Dynamic, autonomous adaptation: The workload of network and service provider staff grows as the number of network customers/consumers grows, so provisioning should be dynamic. Besides, active nodes may cooperate with each other.
8.6.4 Video on Demand Scenario

PromethOS [21] is a Linux [42] kernel-space NodeOS for active nodes. It is managed from execution environments in user space. Since the virtual environment manager (VEM), which is an EE in user space, is also concerned with node management issues, an integration of the VEM and PromethOS is indispensable for the interoperability of different EE types. The video on demand scenario demonstrates this integration. It shows how the management interface of PromethOS' user-space library has been enhanced so that the VEM can control PromethOS. The active network nodes have been designed to support multiple VEs, EEs, and EE instances. The scenario therefore also demonstrates how PromethOS supports multiple VEs, and how it is able to differentiate among customers by using those VEs. For the scenario, one EE per VE is instantiated in kernel space. The EE runs a wave video plug-in, which scales its functionality according to the different requirements of the customer.

At one site (e.g., a service provider), an active node and a source for a video stream are located. At a different location, the video receiver is installed. Between the source and the active node, a high-bandwidth link models a high-bandwidth backbone. The active network node and the video receiver are connected by two links with different bandwidths; the capacities of the links reflect different service level agreements. The active network node is supposed to adapt the high-bandwidth input stream to the preset capacities of the output streams.

At boot time, the ANN runs the VEM and the management components of PromethOS. As a customer request arrives at the ANN, the VEM orders the deployment of a VE; that is, it assigns resources to the customer, and it orders the deployment of an EE where the wave video scaling plug-in is installed.
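The deployment step — fetching the wave video plug-in from the code server the first time, and reusing the node's local cache for later VE instantiations — can be sketched as follows. The names and the string stand-in for module code are illustrative assumptions; the real mechanism deploys kernel modules via the VEM and service creation engine.

```python
class CodeCache:
    """Node-local plug-in cache: fetch a module from the code server once,
    and serve subsequent VE instantiations from the cache."""
    def __init__(self, fetch):
        self.fetch = fetch      # callable that contacts the code server
        self.store = {}         # plug-in name -> cached module
        self.fetches = 0        # how many times the code server was contacted
    def deploy(self, plugin):
        if plugin not in self.store:
            self.store[plugin] = self.fetch(plugin)
            self.fetches += 1
        return self.store[plugin]

cache = CodeCache(fetch=lambda name: f"<binary for {name}>")
cache.deploy("wave-video")   # first customer: fetched from the code server
cache.deploy("wave-video")   # second customer: served from the local cache
```

This mirrors the scenario's second customer request, which creates a second instance without any additional code fetching.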
The request to initiate service deployment is implemented by the service specification, which is parsed and resolved by the service creation engine. The code required for the wave video plug-in is fetched from the code server and deployed on the ANN. As a second request from a different customer arrives, the ANN creates a second instance with the same configuration but with different resources assigned to the VE. No additional code fetching is required, since the code is available from the node's local cache. The creation process is fully implemented and controlled by the VEM. As the process completes, the video source gets a signal to begin the transmission of the video flows.

8.6.5 Mobile FAIN Demonstrator

This generic application scenario introduces a "wireless LAN" showcase implemented within FAIN. It shows how mobile networks benefit from FAIN concepts, especially mobile wireless networks where bandwidth is not abundant. The scenario demonstrates how, in a challenging wireless environment, load balancing with active networks can be used to avoid bottlenecks and to provide quality of service to real-time services. To realize the load balancing and load reduction concepts, some additional software components are implemented and installed on the servers. In total, there are three different use cases that demonstrate load balancing and load reduction. Each use case is illustrated by demonstrations. The use cases and the related demonstrations are listed in the following:

• Use Case 1: This shows redirection of a connection. The connection to a heavily loaded access point is terminated, and a new one is built up between the connecting client and an access point that is not so busy during peak-period demand.
• Use Case 2: This is a similar use case. It shows a simple redirection of a connection request. The connection is built up between the requesting client and a less burdened access point. The load at the burdened access point results from multiple requests for video streaming.
• Use Case 3: This generic application scenario shows how a personal mobile software proxy contributes to load balancing through the use of a load reduction mechanism.
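The access-controller decision behind use cases 1 and 2 — handing a client over from a heavily loaded access point to a less burdened one — can be sketched as follows. The load threshold and access-point names are assumptions for illustration, not values from the FAIN demonstrator.

```python
def choose_access_point(aps, threshold=0.8):
    """WLAN access-controller sketch: if a client's current AP is over the
    load threshold, hand the client over to the least-loaded alternative."""
    def target(current):
        if aps[current] <= threshold:
            return current              # AP is fine: no handover
        return min(aps, key=aps.get)    # overloaded: pick the least-loaded AP
    return target

aps = {"ap-1": 0.95, "ap-2": 0.30}      # observed (or simulated) AP loads
pick = choose_access_point(aps)
pick("ap-1")   # overloaded AP: the client is handed over to ap-2
pick("ap-2")   # within threshold: the client stays put
```

As the text notes, the loads can come from a load generator or be simulated directly in the WLAN access controller.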
Each of the use cases relies on the following components:

• A WLAN access controller, which is the central control unit. It is a software component and is deployed on a FAIN PromethOS active node.
• At least two WLAN access points, to which the clients are linked by a wireless link. Via these, the clients receive data from a content server.
• At least one FAIN active node, which is PromethOS capable. Here, the following components are deployed:
  • PromethOS: A framework for the manipulation of Linux kernel modules (PromethOS modules) and for redirecting network traffic to such a PromethOS module.
  • netfilter: A framework for the manipulation of network packets. netfilter is used by PromethOS.
  • iptables: The user-space part of netfilter. It is used to load and configure netfilter modules and to redirect network traffic to such a module. PromethOS uses an extended version of iptables, which is able to load and configure PromethOS modules.
  • PromethOS modules used by the WLAN access controller: A PromethOS module is used for the actual manipulation and monitoring of the network traffic.
  • User interface: It is used to configure and monitor the PromethOS module.
• A server providing content used for the demonstration of actual data transfer via the WLAN access points. In the context of FAIN Dino Park, such a content server is dedicated to an exponent, or specifically, a designated server for registration procedures at the entrance.
• At least two terminals, which are notebooks or may be PDAs, each with WLAN capabilities. The terminals are equipped with a FAIN terminal daemon.
Optionally, a load generator creates the background traffic that is used to trigger decisions on load balancing. Instead of using a load generator, the handover between access points may be demonstrated by simulating the load in the WLAN access controller. The general infrastructure of the demonstrations is shown in Figure 8.17.
Figure 8.17 General infrastructure of the mobile demonstrations.
8.6.6 Managed Access

A host joining a network is assigned a set of network services according to some SLA; for example, all hosts are allowed to use DNS, but only a subset of them is allowed to use the SNMP services provided by the SP. Currently, most access providers manage only their own equipment. They assume that the customer has his own arrangements with the rest of the network and the service providers. In this generic application scenario, active packet technology is used to manage packet filters across networks that belong to different SPs.

There are three road warrior laptops, each wanting a different type of network access than the others. The road warriors access the private IP network using a WaveLAN access point. These access points generate SNMP traps when a new host has been assigned an IP address. The IP address for a host is allocated by a Dynamic Host Configuration Protocol (DHCP) server. The DHCP server operates on a network controller host. This host also supports an SNMP trap collector and an injector. The injector is a process that can be used to inject active packets into the network. The router of the access network performs packet filtering and conditioning. It has an interface to the backbone network, which has two routers, for egress and ingress. The former performs network address translation of packets with private IP source addresses to the address of the public interface of the egress router. The ingress router performs NAT for packets from elsewhere to services hosted in the private network.

The active packet is injected and reconfigures all the routers to accept packets for different services from the road warriors on the access network. The active packet could reconfigure all of the routers if need be. The value added by using active packets in this scenario is that the network elements can be controlled on demand. Normally, network administrators would put a static configuration in place.
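The injector's job — an active packet that reconfigures the packet filters of every router it traverses — can be modeled minimally as follows. The rule format and class names are invented for illustration, and a real node would authenticate and authorize the packet before evaluating it, as described in the security scenario.

```python
class Router:
    """Router with a simple allow-list packet filter."""
    def __init__(self):
        self.allowed = set()   # (host, service) pairs permitted through
    def evaluate(self, active_packet):
        # The active packet carries its own (tiny) program: here reduced to
        # a list of filter rules to install on this router.
        self.allowed.update(active_packet["rules"])

def inject(routers, packet):
    """Injector sketch: the active packet traverses and reconfigures
    every router on its path, on demand."""
    for r in routers:
        r.evaluate(packet)

path = [Router(), Router(), Router()]   # access, egress, and ingress routers
inject(path, {"rules": {("laptop-1", "dns"), ("laptop-2", "snmp")}})
```

The on-demand aspect is the point: a new SNMP trap (new host, new SLA) can trigger a fresh injection instead of a manual reconfiguration of each router.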
8.6.7 Security Scenario

A pure security scenario can be quite artificial, especially since security is an integral part of the FAIN architecture. For this reason the security scenario has been kept very general. As such it may easily become a component of any of the above-presented generic application scenarios. The security scenario shows how an active packet is passed through a node and how it triggers security concepts. The scenario is applicable to any active packet approach in the transport, control, or management plane. Any network topology with three or more active nodes will be sufficient. Requirements are the installation of the basic FAIN node services; that is, of the management, DEMUX, and security facilities. Credentials and related key pairs must be generated, and security associations (SAs) between the nodes must be established before the scenario. If policies are used, they must be supplied during the service setup or set by the network management framework. The scenario consists of the following steps, which are repeated on every network node that the packet traverses:

• The DEMUX intercepts the packet and invokes the security receive check function with the ANEP packet, with UDP protocol information data, and with the local information (service) where the packet is headed.
• The ANEP packet is parsed.
• The hop integrity option is evaluated:
  • The correct security association is chosen.
  • The packet hop replay information is validated.
  • The integrity token is verified.
• If the packet contains one or more credential options:
  • The principal credentials are looked up in the local cache.
  • If they are not already there, they are fetched from the previous node.
  • The credential path is validated.
  • The digital signature of the static part of the packet is verified with the trusted public key of the principal.
  • The credential option timeframe is checked.
• If the packet contains active code and verification is required (e.g., due to a policy), the code is verified.
• The packet security context(s) is/are built from the principal credentials and the results of the packet code verification.
• The security context of the packet is compared to the security context of the packet destination.
• If the packet destination (service) has defined a policy, the security context(s) is/are used to authorize access to the service.
• If the access is authorized, the packet is returned to the DEMUX, and the DEMUX passes the packet to the service.
• The packet data (variable part and payload, code or data) is evaluated in the service.
• If the evaluation results in an action regarding the node, this action is authorized and the policy, if one exists, is enforced.
• The packet is returned to the DEMUX and the security check function is invoked.
• The active packet is built.
• The next hop integrity option is built:
  • The right SA is chosen for the packet's next-hop destination.
  • The replay protection value is added.
  • An integrity token is built and the hop integrity option is added to the packet.
• If everything went well, the packet is returned to the DEMUX in order to be sent to the next hop.
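The hop integrity option in the steps above — a replay-protection value plus an integrity token computed under the per-hop security association — can be sketched with a keyed hash. The byte layout and field names are a simplification of the actual ANEP option, assumed only for illustration.

```python
import hmac, hashlib

def build_hop_option(sa_key, seq, static_part):
    """Sender side: replay counter plus an HMAC integrity token over the
    static part of the packet, keyed with the hop's SA key."""
    token = hmac.new(sa_key, seq.to_bytes(8, "big") + static_part,
                     hashlib.sha256).digest()
    return {"seq": seq, "token": token}

def verify_hop_option(sa_key, last_seq, option, static_part):
    """Receiver side: reject replayed sequence numbers, then verify
    the integrity token with the same SA key."""
    if option["seq"] <= last_seq:
        return False   # replay: sequence number not fresh
    expected = hmac.new(sa_key, option["seq"].to_bytes(8, "big") + static_part,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, option["token"])

key, payload = b"per-hop-sa-key", b"ANEP static part"
opt = build_hop_option(key, 7, payload)
verify_hop_option(key, 6, opt, payload)   # True: fresh and intact
verify_hop_option(key, 7, opt, payload)   # False: replayed sequence number
```

Building the next-hop option before forwarding simply repeats `build_hop_option` with the SA shared with the next node.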
8.7 CONCLUDING REMARKS

The FAIN active router is one of the first prototypes that has a clear separation of the data, control, and management planes while running multiple types of EEs. Our demonstration on the testbed exhibits a level of integration that achieves an AN router in terms of functionality, where a node operating system complete with integrity checks, a resource monitoring facility at the active node, and an execution environment for dynamic service provisioning are proven to be running.

The FAIN management system is also one of the first prototypes of network management systems oriented to the full management of active networks. Our demonstration on the testbed represents one of the earliest efforts in managing active network routers using a policy-based approach. Our management system showed high flexibility (being able to adapt to new functionality at run time), downloading new PDP/PEP pairs only when needed. Additionally, it enabled the distribution of policies via active packets. The management system is shown to provide an efficient framework for the management of active networks, since it supported its main requirements: customizability and support for delegation.

Our testbed-based scenario demonstrations exhibit the following main concepts:

• Delegation of management functionality.
• Creation of an active virtual network.
• Dynamic downloading of service-specific management components.
• Dynamic extensibility of the management stations' functionality by downloading new PDPs when they are needed.
• Dynamic installation of an active service within the active router in the control and data planes.
• Multiple execution environments grouped and managed as virtual environments.
• The clear separation of data, control, and management planes.
• Hop-by-hop authentication and integrity checks of packets before they are delivered to the VE.
• Run-time configuration of the demultiplexing/dispatching and multiplexing functionality. The demultiplexing component dispatches packets to the configured services.
Today, there have been assertions and proposals within the research community about finding the "killer application" that can be created by active networking technology. In our demonstration, we note that AN technology should be viewed as a means to an end, and not the endgame itself. This can be seen from the service deployment scenario, which is a dynamic process initiated by the user. In particular, we show how AN can provide a novel approach to instantiate the transcoder application. We also illustrated an active shaper service, which is used to limit the cross traffic entering an AVPN link, as well as the creation of active channels within the node that redirect flows to an active service running within a particular VE.

One of the salient features of our testbed was the instantiation of a VE on an AN node. This further enabled the introduction of management enforcement points within the node VEs to reduce management traffic. In our implementation, the SP obtained a restricted delegation of management functionality to manage its resources. It was even able to do so with its proprietary code. Hence, we feel that the delegation approach was proven to be adaptable to different business models and user needs. The PDP manager component, in collaboration with the FAIN ASP system, dynamically detected the needed functionality, downloaded it, and installed it at run time, hence extending the functionality supported by the management framework. The SP in the use case was able to manage its allocated resources and tailor them to the needs of the implemented service. It was even able to customize the management framework for its needs with its own management code. Additionally, at the element level, PEPs that run within the active node enhanced the distribution of management functionality over the network. This property aims to solve scalability problems inherent in centralized policy-based management architectures.
References

[1] Aidarous, S., and Pevyak, T., (eds.), Telecommunications Network Management: Technologies and Implementations, New York: IEEE Press, 1997.
[2] ABone Testbed, http://www.isi.edu/abone/.
[3] Application-Level Active Networks, http://dmir.it.uts.edu.au/projects/alan/.
[4] Calvert, K. L., (ed.), Architectural Framework for Active Networks, Draft version 1.0, July 27, 1999, http://protocols.netlab.uky.edu/~calvert/arch-latest.ps.
[5] Biswas, J., et al., Proposal for IP L-interface Architecture, IEEE P1520.3, P1520/TS/IP013, 2000, http://www.ieee-pin.org/doc/draft_docs/IP/p1520tsip013.pdf.
[6] Becker, T., et al., "Enabling Customer Oriented Service Provision by Flexible Code and Resource Management in Active and Programmable Networks," IEEE International Conf. on Telecommunications, Bucharest, Romania, June 4-7, 2001.
[7] Braden, B., et al., Introduction to the ASP Execution Environment (Release 1.5), November 30, 2001, http://www.isi.edu/active-signal/ARP/DOCUMENTS/ASP_EE.ps.
[8] Bhattacharjee, S., "Active Networks: Architectures, Composition, and Applications," Ph.D. thesis, Georgia Tech, July 1999.
[9] Campbell, A. T., et al., "A Survey of Programmable Networks," ACM Computer Communications Review, April 1999, http://www.acm.org/sigcomm/ccr/archive/1999/apr99/ccr-9904campbell.pdf.
[10] Chan, K., et al., COPS Usage of Policy Provisioning, IETF RFC 3084, March 2001.
[11] Cheng, L., et al., "Strong Authentication for Active Networks," SoftCOM 2003, Split, Croatia, October 7-10, 2003, http://www.fesb.hr/SoftCOM.
[12] CORBA Components, v3.0 full specification, Document formal/02-06-65.
[13] Damianou, N., et al., "The Ponder Specification Language," Workshop on Policies for Distributed Systems and Networks (Policy 2001), HP Labs Bristol, January 29-31, 2001, http://www.doc.ic.ac.uk/~mss/Papers/Ponder-Policy01V5.pdf.
[14] DARPA Active Network Program, 1996, http://www.darpa.mil/ato/programs/activenetworks/actnet.htm.
[15] Distributed Management Task Force, http://www.dmtf.org.
[16] Denazis, S. G., and Galis, A., "Open Programmable and Active Networks: A Synthesis Study," IEEE IN 2001 Conf., Boston, MA, May 6-9, 2001.
[17] Denazis, S., et al., "Component Based Execution Environments of Network Elements and a Protocol for Their Configuration," IEEE Trans. on Systems, Man and Cybernetics, Special Issue on Technologies that Promote Computational Intelligence, Openness and Programmability in Networks and Internet Services, 2000.
[18] Eaves, W., et al., "Resource Control in Active Networks," IEEE GLOBECOM 2002, Taipei, Taiwan, November 17-21, 2002, http://www.globecom2002.com.
[19] FAIN Project, http://www-ist-fain.org.
[20] FAIN Project Deliverable D1 - Requirements Analysis and Overall Architecture, http://www.ist-fain.org/deliverables.
[21] FAIN Project Deliverable D7 - Final Active Network Architecture and Design, http://www.ist-fain.org/deliverables.
[22] FAIN Project Deliverable D8 - Final Specification of Case Study Systems, http://www.ist-fain.org/deliverables.
[23] FAIN Project Deliverable D9 - Evaluation Results and Recommendations, http://www.ist-fain.org/deliverables.
[24] FAIN Project Deliverable D40 - FAIN Demonstrators and Scenarios, http://www.ist-fain.org/deliverables.
[25] Martin-Flatin, J. P., and Znaty, S., "A Simple Typology of Distributed Network Management Paradigms," Proc. of Eighth IFIP/IEEE International Workshop on Distributed Systems: Operations and Management (DSOM '97), Sydney, Australia, October 1997.
[26] Gabrijelcic, D., Savanovic, A., and Blazic, B. J., "A Security Architecture for Future Active IP Networks," CMS 2002 Conf., Portoroz, Slovenia, September 26-27, 2002.
[27] Galis, A., et al., "Programmable Network Approach to Grid Management and Services," International Conf. on Computational Science 2003, Melbourne, Australia, June 2-4, 2003, http://www.science.uva.nl/events/ICCS2003/.
[28] Galis, A., et al., "Management of Active and Programmable Networks," IEEE Design of Reliable Communication Networks 2001 Conf. Proc., Budapest, Hungary, October 7-10, 2001, http://www.hit.bme.hu/drcn2001/; also published in Hungarian in Hiradastechnika (Journal of Telecommunications), Budapest, Hungary, October 2002.
[29] Galis, A., et al., "A Flexible IP Active Networks Architecture," Proc. of the International Working Conference on Active Networks (IWAN 2000), Tokyo, Japan, October 2000; also in Active Networks, Springer-Verlag, October 2000.
[30] Galis, A., et al., "Policy-Based Network Management for Active Networks," IEEE ICT 2001 Conf. Proc., Bucharest, Romania, June 4-7, 2001.
[31] Goldszmidt, G., and Yemini, Y., "Distributed Management by Delegating Mobile Agents," 15th International Conf. on Distributed Computing Systems, Vancouver, British Columbia, June 1995, http://www.cs.columbia.edu/~german/papers/icdcs95.ps.Z.
[32] Houatra, D., "Design of DENES, a Resource Control Support for Active IP Networks," Seventh International Conf. on Intelligence in Next Generation Networks (ICIN 2001), Bordeaux, France, October 1-4, 2001.
[33] Houatra, D., and Zimboulakis, E., "Network Model for the Provision of Active Multicast Services," XVIII World Telecommunications Congress, Paris, France, September 22-27, 2002.
[34] IBM Network Processors, http://www3.ibm.com/chips/products/wired/products/network_processors.html.
[35] IETF ForCES, http://www.ietf.org/html.charters/forces-charter.html.
[36] IEEE P1520.2, Draft 2.2, Standard for Application Programming Interfaces for ATM Networks, http://www.ieee-pin.org/pin-atm/intro.html.
[37] Internet Engineering Task Force, http://www.ietf.org.
[38] IXP Family of Network Processors, Intel, http://www.intel.com/design/network/products/npfamily/.
[39] Kounavis, M. E., et al., "The Genesis Kernel: A Programming System for Spawning Network Architectures," IEEE Journal on Selected Areas in Communications (JSAC), Special Issue on Active and Programmable Networks, Vol. 19, No. 3, March 2001, pp. 49-73, http://comet.ctr.columbia.edu/genesis/papers/jsac2001.pdf.
[40] Kitahara, C., et al., "Delegation of Management for QoS Aware Active Networks," IEEE/IEICE International Conf. on Quality Design and Management (CQR 2002), Okinawa, Japan, May 14-16, 2002, http://www.ieice.org/cs/cq/CQR2002/public/index.html.
[41] Kato, K., and Shiba, S., "Designing Policy Networking System Using Active Networks," Second International Working Conf. on Active Networks (IWAN 2000), Tokyo, Japan, October 2000.
[42] Linux, http://www.kernel.org.
[43] Mathieu, B., et al., "Download of Services into Active Network," XVIII World Telecommunications Congress, Paris, France, September 22-27, 2002.
[44] Moore, J., Kornblum, J., and Nettles, S., "Predictable, Lightweight Management Agents," IWAN 2002, http://www.iwan2002.org.
[45] Fry, M., and Ghosh, A., "Application-Level Active Networking," Computer Networks, Vol. 31, No. 7, 1999, pp. 655-667.
[46] Object Management Group, http://www.omg.org.
[47] Open Signaling Working Group, http://www.comet.columbia.edu/opensig/.
[48] Open Service Gateway Initiative, http://www.osgi.org.
[49] Peterson, L., NodeOS Interface Specification, AN Node OS Working Group, November 30, 2001, http://www.cs.princeton.edu/nsg/papers/nodeos-02.ps.
[50] Savanovic, A., Gabrijelcic, D., and Mocilar, F., "Security Framework for Active Networks," IEEE International Conf. on Telecommunications, Bucharest, Romania, June 4-7, 2001.
[51] Savanovic, A., Gabrijelcic, D., and Blazic, B. J., "An Active Networks Security Architecture," MIPRO, Computers in Telecommunications, Vol. 5, 2002, pp. 20-24.
[52] Smith, J. M., et al., "Activating Networks: A Progress Report," IEEE Computer, Vol. 32, No. 4, April 1999, pp. 32-41, http://www.cs.princeton.edu/nsg/papers/an.ps.
[53] SNAP: Safe and Nimble Active Packets, http://www.cis.upenn.edu/~dsl/SNAP/.
[54] Vicente, J., et al., L-interface Building Block APIs, IEEE P1520.3, P1520.3TSIP016, 2001, http://www.ieee-pin.org/doc/draft_docs/IP/P1520_3_TSIP-016.doc.
[55] Solarski, M., Bossardt, M., and Becker, T., "Component Based Deployment and Management of Services in Active Networks," IWAN 2002, http://www.iwan2002.org.
[56] Tan, A., et al., "A Framework for Delegated and Policy Management of Active Networks," IEEE OpenArch 2002, http://www.comet.columbia.edu/openarch.
[57] Tsarouchis, C., et al., "A Policy-Based Management Architecture for Active and Programmable Networks," IEEE Network, Special Issue on Network Management of Multiservice, Multimedia, IP-Based Networks, June 2003, http://www.comsoc.org/pubs/net/ntwrk/special.html.
[58] Van der Merwe, J. E., et al., "The Tempest - A Practical Framework for Network Programmability," IEEE Network, Vol. 12, No. 3, May/June 1998, pp. 20-28, http://www.research.att.com/~kobus/docs/tempest_small.ps.
[59] Vivero, J., et al., "Network Management in Active Network Context: Problem and Solutions," IEEE 5th World Multi-Conf. on Systemics, Cybernetics and Informatics (SCI 2001), Orlando, FL, July 22-25, 2001.
[60] Vivero, J., et al., "The FAIN Management Framework: A Management Approach for Active Network Environments," IEEE International Conf. on Networks (ICON 2002), Singapore, August 27-30, 2002, http://icon2002.calendarone.com/papers.htm.
[61] Yavatkar, R., Pendarakis, D., and Guerin, R., A Framework for Policy-Based Admission Control, IETF RFC 2753, January 2000.
[62] Drogseth, D., "The New Management Landscape," PACKET (Cisco Systems Users Magazine), Third Quarter, 2002.
[63] Suzuki, T., et al., "Dynamic Deployment & Configuration of Differentiated Services Using Active Networks," Proc. of the Fourth Annual International Working Conference on Active Networks (IWAN 2003), Kyoto, Japan, December 10-12, 2003, http://www.iwan2003.org.
[64] Nikolakis, Y., et al., "A Policy-Based Management Architecture for Flexible Service Deployment on Active Networks," Proc. of the Fourth Annual International Working Conference on Active Networks (IWAN 2003), Kyoto, Japan, December 10-12, 2003, http://www.iwan2003.org.
Chapter 9

Virtual Environments and Management

The concept of virtual environments was introduced in FAIN to overcome the problem of having several execution environments implemented in various technologies, and providing different abstractions, interfaces, and so on. For example, in FAIN we implemented three different kinds of execution environment, based on Java/CORBA, SNAP, and PromethOS (see Chapter 14). Although there are approaches defining a virtual overlay for active networks [8, 9], FAIN details this at the node level by the introduction of virtual environments. Several virtual environments situated at different network nodes can be assigned to a particular identity, and are then grouped to form a virtual active network.

The advantage of having an explicit notion of a virtual environment is that it provides a generic means to manage access and resource control at the node level. While execution environments support the installation, instantiation, and configuration of active service code in various ways, the virtual environment puts a uniform management layer on top. This allows external clients to interact with services through the interface of the virtual environment in a generic way; the interactions are mapped to the specific interfaces of the execution environments. Several execution environments can be attached to a virtual environment, in the same way as other resources.

This leads to another aspect of virtual environments: the partitioning of resources. As defined in the FAIN business model [1] (Chapter 7), there may be a number of service providers acting as the customers of a network provider. The network provider can set up virtual environments on selected network nodes and assign them to a particular service provider, in order to offer a virtual active network. Access to the virtual environments will be made available to the respective service provider so that it can manage its own virtual network.
The resource partitioning implemented among virtual environments prevents interference with other service providers and, additionally, allows accounting per service provider.

A special virtual environment, also called the privileged virtual environment, is started when an active node is booted. This environment belongs to the network provider and contains fundamental services such as the management of basic resources (CPU, memory, bandwidth), as well as the management of virtual environments and of the different kinds of execution environments. The privileged virtual environment also provides a means for the network provider to manage the nodes inside an active network.

Several virtual environments belonging to the same service provider but running on different active network nodes form a virtual network, which the service provider uses to deploy services and make them available to customers. In order to know which virtual environments belong to a particular virtual network, the environments are tagged with a special network identifier.

To summarize, the concept of virtual environments enables several aspects:

• A generic way of deploying and managing active services, independent of the technology of the underlying execution environment;
• A generic way to manage (i.e., monitor and control) active nodes, for service providers as well as for network providers;
• Partitioning of resources among several service providers;
• Accounting of resource usage per service provider;
• Delegation of service management to the service providers.
In the following sections, the requirements for node-level management and its design are presented in more detail.

9.1 REQUIREMENTS

The major requirement in the design of the interface of a virtual environment is to provide a generic way to deploy and manage services on active network nodes offering different execution environments. Further, deployment and management should be possible in a flexible and dynamic way, to facilitate the creation, configuration, and reconfiguration of services on demand. Interaction between services should be supported in a safe and controlled way. Nonetheless, the development of services should be easy and should concentrate on service logic, avoiding the need to reimplement commonly needed functionality. Last but not least, the implementation of the node-level management layer (i.e., the virtual environments) should be portable between different operating systems, and should interoperate with external components potentially implemented in different programming languages. In the following sections we will show how these requirements were met.

9.2 DESIGN

In FAIN, services are described using a component-based approach. This approach allows the creation of new services by combining already available components and, potentially, by configuring them. However, there must be some means of implementing any missing functionality. Consequently, a component-based approach has been chosen for the deployment and management of services, as well as for the development of services. Legacy systems missing a component-oriented interface can be wrapped by components residing in the node management layer (see Figure 9.1).
Figure 9.1 A simplified example service consisting of several components (management, control, transport, and a wrapped legacy system).
A component-based run-time environment for services makes it easier to develop services. Aspects such as life-cycle management, configuration, access control, monitoring, and so on, can be implemented by the supporting framework. Thus, the developer of a service can concentrate on the service logic, and frequently needed aspects do not have to be implemented over and over again.

By providing the means to dynamically interconnect components, a high degree of flexibility can be reached. After the initial setup, services may be reconfigured during run time by adding new components, removing components, or changing connections between components. This applies not only to high-level services but also to the basic services offered by an active node. For example, a specific service component may be connected to a so-called channel, which dispatches the packets belonging to a particular flow while also controlling the bandwidth.

Components are created and deleted by component managers. A component manager is responsible for one type of component and, besides creation and deletion, it offers methods for activating, deactivating, and finding component instances. Thus, a component manager implements the so-called factory and finder patterns.
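The combination of the factory and finder patterns in a component manager can be sketched in plain Java. This is an illustrative reduction, not the actual FAIN API: class and method names are hypothetical, and the real managers work on CORBA references rather than local objects.

```java
import java.util.*;

// Illustrative sketch of the factory/finder pattern described above:
// a manager creates component instances (factory) and locates them
// again by identifier or owner (finder). All names are hypothetical.
class ServiceComponentManager {
    static final class Component {
        final String id, owner;
        Component(String id, String owner) { this.id = id; this.owner = owner; }
    }

    private final Map<String, Component> instances = new HashMap<>();
    private int nextId = 0;

    // Factory: create a new instance and return its unique identifier.
    String create(String owner) {
        String id = "comp-" + (nextId++);
        instances.put(id, new Component(id, owner));
        return id;
    }

    // Finder: locate an instance by its unique identifier.
    Component find(String id) { return instances.get(id); }

    // Finder: locate all instances belonging to a given owner.
    List<Component> findByOwner(String owner) {
        List<Component> result = new ArrayList<>();
        for (Component c : instances.values())
            if (c.owner.equals(owner)) result.add(c);
        return result;
    }

    void delete(String id) { instances.remove(id); }
}
```

The point of the pattern is that clients never construct component instances directly; they always go through the manager, which can therefore enforce policies and keep track of every instance it has handed out.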
In order to deploy a service that needs components not already available on a particular network node, the appropriate component managers must be loaded into an appropriate execution environment. Since a component manager functions as a template from which instances can be created, the execution environments are template managers, which allow component managers to be installed, uninstalled, and found. As virtual environments provide an abstraction over particular execution environments, they also need to implement the template manager pattern.

Specialized component managers are used to manage specific resources. In addition to the factory and finder patterns, a resource manager offers methods for monitoring parameters such as memory usage or CPU cycles.
Figure 9.2 Hierarchy of component abstractions.
Figure 9.2 depicts the hierarchy of abstractions used for service components. The main abstraction is the basic component, from which all others are derived. The configurable component adds functionality to modify the internal state of a component and its interconnections with other components. While component managers handle component instances, template managers deal with instances of component managers; that is, they manage the managers of component instances. There is an extra abstraction for resource managers, which also manage component instances, but have the ability to put them under resource control.

In the following sections the component abstractions will be presented in more detail. The implementation of the abstractions makes up the run-time support framework for service components, and is described in Section 9.3.
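The derivation relationships of Figure 9.2 can be expressed as a small Java interface hierarchy. This is only a sketch: the method sets are reduced to essentials, and the actual FAIN interfaces are richer and accessed over CORBA ports rather than local method calls.

```java
// Sketch of the abstraction hierarchy from Figure 9.2 as Java
// interfaces. Method names are illustrative.
interface BasicComponent {
    String identifier();   // unique identifier
    String owner();        // defined owner
}

interface ConfigurableComponent extends BasicComponent {
    void setProperty(String name, String value);
    String getProperty(String name);
}

// Handles instances of one component type (factory/finder patterns).
interface ComponentManager extends ConfigurableComponent {
    String create(String profile);            // standby instance, returns id
    void activate(String id, String setup);
    void deactivate(String id);
    void delete(String id);
}

// Deals with instances of component managers ("managers of managers").
interface TemplateManager extends ConfigurableComponent {
    String install(String templateDescription);
    void uninstall(String templateId);
}

// A component manager whose instances are under resource control.
interface ResourceManager extends ComponentManager {
    long currentUsage(String dimension);      // e.g., "memory", "cpu"
}
```

Note that the template manager and the resource manager sit on different branches: a template manager is a configurable component that hosts component managers, while a resource manager is itself a specialized component manager.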
9.2.1 Basic Component

The objective in the design of the FAIN component model has been generality: It should be possible to cover all other models with this one. Thus, the FAIN model is quite simple: A basic component has a defined owner and a unique identifier and, optionally, offers a number of ports through which its specific functionality can be accessed. A port has a particular format and address so that it can be accessed from the outside. The values of the format and address are expressed as arbitrary character strings, which are transparent to the service run-time framework but must be understood by the component itself and its communication peers. At least one port is offered by all components: the initial port. This port is used by clients to query other supported ports and to get access to them. In order to get access to a port, a client must authenticate itself.

9.2.2 Configurable Component

A configurable component is derived from a basic component and, additionally, offers a configuration port. This port is used to get and set the configuration of the component in the form of properties; that is, pairs of names and values. Interested clients can connect a callback port to receive notifications when selected properties change their values. Further, the configuration port allows connecting the ports of the respective component with ports of other components. This is used when a service is deployed and components must be interconnected.

9.2.3 Component Manager

A component manager is derived from a configurable component. It offers a port for managing component instances, comprising the following:

• Creation of instances, with the specification of a profile: The component manager will create a new instance in a standby mode, and store it together with the profile. The result is a unique identifier for the new instance. If no activation occurs for the new instance within a specific timeframe, the instance will be deleted automatically.
• Activation of instances, with the specification of initial setup parameters: The component manager will put the new instance into action and initialize it with the setup parameters. The new instance is now ready to interact with its environment.
• Deactivation of instances: The component manager will put the instance back into standby mode. After a specific timeframe, the instance must be activated again or it will be deleted.
• Deletion of instances: The component manager will simply delete the instance.
• Discovery of instances: The component manager will return a reference to a desired instance or a set of instances based on several criteria; for example, the instance's unique identifier, an owner, or other properties.
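The standby/active life cycle described above can be sketched as a small state machine. The sketch is illustrative: a logical clock and an explicit sweep method stand in for the real timers, and the timeout value is an assumption.

```java
import java.util.*;

// Sketch of the instance life cycle: created instances start in
// STANDBY and must be activated within a timeout, otherwise a sweep
// removes them. Deactivated instances return to STANDBY and are
// subject to the same timeout. Names and values are illustrative.
class LifecycleManager {
    enum State { STANDBY, ACTIVE }
    static final long TIMEOUT = 10;   // illustrative timeout, in ticks

    static final class Instance {
        State state = State.STANDBY;
        long standbySince;
        Instance(long now) { standbySince = now; }
    }

    private final Map<String, Instance> instances = new HashMap<>();
    private int nextId = 0;

    String create(long now) {
        String id = "inst-" + (nextId++);
        instances.put(id, new Instance(now));
        return id;
    }
    void activate(String id) { instances.get(id).state = State.ACTIVE; }
    void deactivate(String id, long now) {
        Instance i = instances.get(id);
        i.state = State.STANDBY;
        i.standbySince = now;
    }
    void delete(String id) { instances.remove(id); }
    boolean exists(String id) { return instances.containsKey(id); }

    // Remove instances that stayed in standby longer than TIMEOUT.
    void sweep(long now) {
        instances.values().removeIf(
            i -> i.state == State.STANDBY && now - i.standbySince > TIMEOUT);
    }
}
```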
9.2.4 Template Manager

A template manager is derived from a configurable component. It offers a port for managing templates, where a template corresponds to a particular component manager instance. There are exactly two kinds of template managers: execution environments and virtual environments. While execution environments are responsible for putting component managers into action, the task of a virtual environment is to dispatch requests to an appropriate execution environment. The role of the privileged virtual environment is to dispatch requests to the virtual environment owned by the appropriate service provider; it can thus be used as the initial point of contact for any client. Managing templates comprises:

• Installation of templates, with the specification of a template description: The environment will take the required steps to put the corresponding component manager into action. An installation request will eventually arrive at the appropriate execution environment. The execution environment will use a specific means for installing a component manager, depending on the underlying technology; for example, instantiating a new Java class loader or copying object files to appropriate locations. The template description includes a name, a version, the identifiers of the target virtual and execution environments, the path to the template's code base, and the entry point for starting the corresponding component manager. The result of the installation is a unique identifier for the new template.
• De-installation of templates, with the specification of the template's unique identifier: The environment will delete the corresponding component manager. It is specific to the template (and thus part of the implementation of the component manager) whether running component instances should be deleted, too.
• Discovery of templates: The environment will return a reference to a component manager or a set of component managers according to various criteria; for example, the template's unique identifier, its name, version, or identifiers of environments.
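The dispatching role of the virtual environment can be illustrated with a short sketch: the virtual environment selects the execution environment named in the template description's target field and forwards the installation request to it. The class names, fields, and identifier format are assumptions made for illustration.

```java
import java.util.*;

// Sketch of a virtual environment dispatching template installation
// to one of its attached execution environments, based on the target
// EE identifier carried in the template description.
class TemplateDescription {
    final String name, version, targetEE, codeBase, entryPoint;
    TemplateDescription(String name, String version, String targetEE,
                        String codeBase, String entryPoint) {
        this.name = name; this.version = version; this.targetEE = targetEE;
        this.codeBase = codeBase; this.entryPoint = entryPoint;
    }
}

class ExecutionEnv {
    final String id;
    final List<String> installed = new ArrayList<>();
    ExecutionEnv(String id) { this.id = id; }
    // Technology-specific in FAIN (class loader, object files, ...).
    String install(TemplateDescription d) {
        String templateId = id + "/" + d.name + "-" + d.version;
        installed.add(templateId);
        return templateId;
    }
}

class VirtualEnv {
    private final Map<String, ExecutionEnv> ees = new HashMap<>();
    void attach(ExecutionEnv ee) { ees.put(ee.id, ee); }
    // Dispatch to the EE named in the description's target field.
    String install(TemplateDescription d) {
        ExecutionEnv ee = ees.get(d.targetEE);
        if (ee == null) throw new IllegalArgumentException("no such EE");
        return ee.install(d);
    }
}
```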
9.2.5 Resource Manager

A resource manager is derived from a component manager. It offers a port for monitoring component instances, and for registering callback ports to get notifications when certain thresholds are reached. The methods offered by a component manager are extended with the following functionality:

• Creation of instances, with the specification of a resource profile: The resource manager can use this profile to check the availability of the resources needed to put the new instance into action. The required resources will be kept in a standby mode for a certain amount of time, in order to be usable by the new instance when it gets activated. If no activation occurs within the timeframe, the resources will be released.
• Activation of instances, with the specification of initial setup parameters: The new instance is now ready to use the assigned resources, and the resources are bound to the instance.
• Deactivation of instances: The resource manager puts the resources assigned to the instance back into standby mode. After a specific timeframe, the instance must be activated again, or the resources will be freed.
• Deletion of instances: The resource manager frees all resources assigned to the instance.
• Registering a callback: Interested clients can register a callback port in order to receive notifications when the usage of resources by an instance reaches particular limits. This can be done for upper or lower limits.
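The threshold-notification behavior of the last item can be sketched as follows. Clients register a callback for an upper or lower limit on a resource dimension, and the monitor fires the callback when a usage update crosses that limit. The class and its edge-triggered crossing logic are illustrative assumptions, not FAIN's actual mechanism.

```java
import java.util.*;
import java.util.function.Consumer;

// Sketch of threshold notification in a resource manager: callbacks
// registered for upper or lower limits fire when an update crosses
// the limit. Names are illustrative.
class ResourceMonitor {
    static final class Threshold {
        final String dimension; final long limit; final boolean upper;
        final Consumer<Long> callback;
        Threshold(String d, long l, boolean u, Consumer<Long> cb) {
            dimension = d; limit = l; upper = u; callback = cb;
        }
    }

    private final List<Threshold> thresholds = new ArrayList<>();
    private final Map<String, Long> usage = new HashMap<>();

    void register(String dim, long limit, boolean upper, Consumer<Long> cb) {
        thresholds.add(new Threshold(dim, limit, upper, cb));
    }

    // Called by the framework whenever measured usage changes; fires
    // callbacks only on crossings, not on every update.
    void update(String dim, long value) {
        long previous = usage.getOrDefault(dim, 0L);
        usage.put(dim, value);
        for (Threshold t : thresholds) {
            if (!t.dimension.equals(dim)) continue;
            boolean up = t.upper && previous <= t.limit && value > t.limit;
            boolean down = !t.upper && previous >= t.limit && value < t.limit;
            if (up || down) t.callback.accept(value);
        }
    }
}
```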
9.2.6 Special Managers

During the FAIN active node boot procedure, the privileged virtual environment is started, together with an attached execution environment. When a new virtual environment is created, it needs some basic resources in order to support template installation and component instantiation. For this reason, various resource managers are installed inside the privileged virtual environment during the boot procedure. The basic resource managers are:

• A virtual environment manager for the creation of new virtual environments: This manager examines the resource profile and tries to create any referenced resource using the other basic managers. The resulting resource components are attached to the new virtual environment.
• A number of execution environment managers for the creation of specific execution environments: Since all templates must be installed inside one, and running instances can exist only inside execution environments, there must be at least one execution environment attached to a virtual environment. Specific execution environments are described, for example, in Chapters 13 and 14.
• A security manager for the creation of security contexts: A security context holds information about the identity and security policies of the owner of an environment; that is, the network provider for the privileged virtual environment, or a service provider for other virtual environments. The security context is used to check interactions with components belonging to the respective environment.
• A channel manager for the creation of channels: Component instances running in an execution environment can connect to a channel to receive and send packets from and to the network. The channel manager is responsible for dispatching packets to the appropriate channels.
• A traffic manager for the creation of traffic controllers: A traffic controller can be used by component instances to control particular packet flows. For example, a traffic controller may offer methods for setting up a guaranteed bandwidth, or a specific packet scheduling.

Figure 9.3 Initial setup of the management layer of a FAIN active node.
Figure 9.3 shows the initial setup of the management layer of an active node. The privileged execution environment runs in the context of the privileged virtual environment. Inside the privileged execution environment are the resource managers for the basic services: the virtual environment manager, the execution environment managers, the security manager, the channel manager, and the traffic manager. They will be used to create resources for other virtual environments. Details of the implementation are described in Section 9.3 below.

9.3 IMPLEMENTATION

Java was chosen as the implementation language for the node-level management layer for its fast prototyping power, its portability, and because it was well known within the FAIN consortium. Consequently, a Java execution environment had to be developed in order to execute the management layer.
The Java execution environment is an implementation of the concept of an execution environment as described earlier in this chapter. It provides run-time support for service components implemented in Java [2], together with support for intercomponent communication based on CORBA [3] and SNMP [7]. The run-time environment consists of a collection of generic classes and some helper classes. The generic classes can be extended by component implementations, and offer an internal interface to the framework as well as callback methods that can be overridden.

Access to the CORBA ports is secured by the secure sockets layer (SSL). Clients must provide certificates used for authentication. All interactions with a CORBA port can then be checked against policies maintained by the respective security context. By using CORBA portable interceptors and portable object adapters together with servant locators, it is possible to maintain the client's identity throughout a chain of interactions.

In particular, the generic classes offered by the Java execution environment have been extended toward a node-level management framework, as presented in more detail below. The Java execution environment is most suitable for service components implemented in Java and using CORBA, but it can equally be used to implement wrappers for legacy systems, and it allows for alternative intercomponent communication besides CORBA. The domain of the Java execution environment is the support of portable, easy-to-develop service components, usually on the control or management layer, but also on the transport layer for low packet rates. However, the Java execution environment is not suited for directly running components targeted at high-performance packet processing.

In the following, the framework classes forming the Java execution environment, and some basic services implemented therein, will be presented. The framework classes implement the concepts presented in Section 9.2.
They can be used to derive specific implementations for particular service components. Figure 9.4 shows the hierarchy of classes that make up the implementation of the Java execution environment. As indicated by their names, some of them map one to one to the abstractions presented in the design section of this chapter. Other classes extend them in order to provide support for specific functionality needed for active services. These can be grouped into three categories:

• Support for specific execution environments, providing a mapping to the particular technology of an execution environment; for example, the Java or SNAP active packet interpreter;
• Support for specific communication ports; for example, using the Internet Inter-ORB Protocol (IIOP) or SNMP;
• Support for basic active node services; for example, traffic management, packet demultiplexing, and security.
In the following paragraphs, the classes that make up the Java execution environment will be described one by one in more detail.

Figure 9.4 Class hierarchy for the Java execution environment.
9.3.1 Basic Component

This class implements the abstraction of the basic component. It sets up the component's initial port, and provides an internal interface to the derived implementation to add or remove additional specific ports. This is done by providing a port description. Later, when clients try to get access to a specific port, this description will be used to create a port entity bound to the client. There are special methods to add or remove CORBA ports. The method for the creation of a port entity can be overridden by the derived implementation in order to provide specific handling for particular kinds of ports not based on CORBA.

The internal interface provides methods for getting the unique identifier of the component and the name of the owner. Further, there is a callback method for the initialization of the component, called by the appropriate component manager during the creation of the component. This method can be overridden when special steps must be taken by the derived implementation. At this point the specific ports should be added.

9.3.2 Port

This class is a generic superclass for port classes. It defines two methods for allocating and freeing the represented port, which should be overridden by the specific subclasses. In general, when a port is allocated, a reference to the port is created and an implementation is bound to the reference. In the case of a CORBA port, for example, the reference would be an Interoperable Object Reference (IOR) and the implementation a servant.

9.3.3 IIOP Port

This class represents a CORBA port. It makes use of the portable object adapter to create port references. Together with a servant locator, the references are mapped to the appropriate ports' implementations. Using a servant locator allows us to perform access control prior to invoking the ports' implementations.

9.3.4 SNMP Port

This class represents an SNMP port. Components can use such ports to set or get values from a management information base (MIB), based on an object identifier. Additionally, an SNMP port will receive traps with a defined enterprise identifier, and invoke a callback on the respective port's implementation. In contrast to the IIOP port, the SNMP port's reference is only valid in the same execution environment and cannot be exported.

9.3.5 Configurable Component

This class implements the abstraction of the configurable component. It is derived from the basic component class, and additionally implements the configuration port plus the handling of properties and property observers. The internal interface provides methods for setting, getting, and changing properties, as well as callback methods for the events of changes in the state of properties. Further, this class implements the handling of connections between ports of the represented component instance and ports of other component instances. There is a callback method that will be called by the framework when ports are about to be connected. This method should be overridden in order to store the address of the target port, and to implement the specific handling of connecting to non-CORBA ports.

9.3.6 Component Manager

This class implements the abstraction of the component manager.
It is derived from the configurable component class, and additionally implements the component manager port. Internally, it handles the management of component instances and their associated profiles. It defines callbacks for the creation, activation, deactivation, and deletion of instances, which must be overridden by the derived implementations.
9.3.7 Resource Manager

This class implements the abstraction of the resource manager. It is derived from the component manager class, and additionally implements the resource manager port. Internally, it handles the supported resource dimensions and the current resource usage. Further, it sends notifications to registered clients when particular thresholds are crossed. It defines internal methods for defining supported dimensions and for updating the current usage.

9.3.8 Virtual Environment

This class implements the abstraction of the virtual environment. It is derived from the configurable component class, and additionally implements the template manager port. Whenever a virtual environment is created via the virtual environment manager, it gets references to its associated execution environments. Those references will be used to dispatch requests concerning the management of templates to the appropriate execution environment.

9.3.9 Virtual Environment Manager

This class is derived from the component manager class and manages instances of virtual environments. It extends the resource manager port by providing a method for retrieving instances based on the virtual network identifier.

9.3.10 Security Context

This class is derived from the configurable component class. It offers a port for checking the access to other ports based on the identity of the current client. It maintains policies that define particular identities and their access rights.

9.3.11 Security Manager

This class is derived from the component manager class and manages instances of security contexts. For more details on the internals of security management, see Chapter 11.

9.3.12 Execution Environment

This class implements the concept of the execution environment on a generic level. It is derived from the configurable component class and (as does the virtual environment) implements the template manager port. It stores templates and their descriptions. Specific implementations can be derived from this class, and override the callbacks for installation and deinstallation.
9.3.13 Java Execution Environment

This class is derived from the execution environment class and employs Java class loaders for the installation of templates.

9.3.14 Java Execution Environment Manager

This class is derived from the resource manager class. When a new Java execution environment is activated, the manager starts a new process on the operating system level. The manager will monitor the CPU and memory usage of the process, and ensure compliance with the environment's resource profile. In the current implementation, the manager simply kills the environment process in the case where the resource usage exceeds the defined quotas.

9.3.15 PromethOS Execution Environment

This class is derived from the execution environment class. It wraps a PromethOS kernel-space execution environment [10], and maps requests concerning the management of templates to the PromethOS interface. Additionally, there are wrapper classes for PromethOS components and the respective component managers.

9.3.16 PromethOS Execution Environment Manager

This class is derived from the component manager class. It simply manages instances of PromethOS execution environments.

9.3.17 SNAP Execution Environment

This class is derived from the configurable component class. It represents an extended SNAP daemon [11], started as a user-space process. Though called an execution environment, it currently does not support the management of templates. The SNAP execution environment features the execution of active packets, and uses SNMP for communication with other component instances. Thus, it is also called the "active SNMP activator." For more details on the internals of the active SNMP activator, see Chapter 13.

9.3.18 SNAP Execution Environment Manager

This class is derived from the component manager class. It simply manages instances of SNAP execution environments.
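The kill-on-quota policy of the Java execution environment manager can be sketched as follows. The process handle is abstracted behind an interface here; FAIN monitors a real operating-system process. The interface, class names, and quota values are illustrative assumptions.

```java
// Sketch of the quota-enforcement policy described above: the manager
// periodically samples an environment process's CPU and memory usage
// and kills the process if either exceeds the resource profile.
class QuotaEnforcer {
    // Stand-in for an OS-level process handle (illustrative).
    interface EnvProcess {
        long cpuMillis();
        long memoryBytes();
        void kill();
    }

    final long cpuQuota, memQuota;   // from the environment's resource profile
    QuotaEnforcer(long cpuQuota, long memQuota) {
        this.cpuQuota = cpuQuota;
        this.memQuota = memQuota;
    }

    // One monitoring sample; returns true if the process was killed.
    boolean sample(EnvProcess p) {
        if (p.cpuMillis() > cpuQuota || p.memoryBytes() > memQuota) {
            p.kill();   // current policy: simply terminate the environment
            return true;
        }
        return false;
    }
}
```

A production version would throttle rather than kill, but the sketch matches the policy the chapter describes for the current implementation.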
9.3.19 Channel

This class is derived from the configurable component class. It is used to forward packets from the network to component instances, and to take packets back and send them to the network. A channel is created per virtual environment. Particular component instances or execution environments belonging to the virtual environment can connect their ports with the ports of the respective channel. This can be done for the exchange of data packets as well as of active packets. The receiving component instances must specify a condition, based on which the channel forwards packets to them. Currently, the channel class supports ports for CORBA communication and plain UDP sockets.

9.3.20 Channel Manager

This class is derived from the component manager class and manages channel instances. It uses operating-system-specific means (e.g., Linux netfilter) to intercept packets from the forwarding path, and dispatches them to a channel instance with a matching condition. For more details on the internals of the demultiplexing done by the channel manager, see Chapter 10.

9.3.21 DiffServ Controller

This class is derived from the configurable component class, and offers a port for configuring packet handling based on the model of differentiated services [4].

9.3.22 DiffServ Manager

This class is derived from the component manager class and manages instances of DiffServ controllers. In the current implementation, it demonstrates how to wrap the functionality provided by a legacy router (the Hitachi GigabitRouter2000) [5]. In order to receive configuration requests, this class offers an SNMP-based port for communication with the SNAP execution environment. For more details on the internals of the DiffServ manager, see Chapter 12.

9.3.23 Traffic Controller

This class is derived from the configurable component class, and offers a port for configuring packet handling based on different queuing models.
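The condition-based demultiplexing performed by the channel manager can be sketched as follows. Conditions are simplified to protocol/port pairs; FAIN intercepts packets with Linux netfilter and supports richer filter rules, so everything below is an illustrative reduction.

```java
import java.util.*;

// Sketch of channel demultiplexing: intercepted packets are handed to
// the first channel whose condition matches; unmatched packets would
// stay on the normal forwarding path. Names are illustrative.
class PacketDemux {
    static final class Packet {
        final String protocol; final int destPort; final byte[] payload;
        Packet(String protocol, int destPort, byte[] payload) {
            this.protocol = protocol;
            this.destPort = destPort;
            this.payload = payload;
        }
    }

    static final class Channel {
        final String protocol; final int destPort;
        final List<Packet> delivered = new ArrayList<>();
        Channel(String protocol, int destPort) {
            this.protocol = protocol;
            this.destPort = destPort;
        }
        boolean matches(Packet p) {
            return p.protocol.equals(protocol) && p.destPort == destPort;
        }
    }

    private final List<Channel> channels = new ArrayList<>();
    void register(Channel c) { channels.add(c); }

    // Returns true if some channel accepted the packet.
    boolean dispatch(Packet p) {
        for (Channel c : channels)
            if (c.matches(p)) { c.delivered.add(p); return true; }
        return false;
    }
}
```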
9.3.24 Traffic Manager This class is derived from the component manager class, and manages instances of traffic controllers. It also demonstrates how to wrap the functionality of a legacy
Virtual Environments and Management
209
system, but, instead of using a hardware router, it makes use of Linux traffic control [6]. For more details on the internals of the traffic manager, see Chapter 12.

9.4 USE CASES

The prototype implementation of the Java execution environment and the integrated basic services and wrappers were used in various scenarios for demonstration purposes; in particular:

• A video-on-demand scenario demonstrating the interworking of the management layer and the PromethOS execution environment;
• A WebTV scenario demonstrating the interworking of the management layer, the demultiplexing, and active service components;
• A differentiated services scenario demonstrating the interworking of active packets executed in the SNAP execution environment, the management layer, and the DiffServ controller running in the Java execution environment and interfacing with legacy router hardware.
Some basic use cases, which are parts of the previously mentioned scenarios, will be informally presented here.

9.4.1 Booting the Management Layer

Booting the management layer is done by starting the privileged virtual environment and installing the basic services, such as the management of environments, security, demultiplexing channels, traffic, and so on. The privileged virtual environment will then publish the reference to its initial port on a well-known TCP port.

9.4.2 Creating a Virtual Environment

A virtual environment is created for a service provider by the node owner so that the former can deploy services for its customers on the node. After the reference to the privileged virtual environment's initial port is obtained, the template manager port is accessed. There the initial port of the virtual environment manager is retrieved, and the component manager port is accessed. At this port a new virtual environment is created and activated. When a virtual environment is created, a resource profile must be specified. The profile defines all required resources to be attached to the new virtual environment. The virtual environment manager will contact the respective resource managers to create and activate the required resources.
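As a rough illustration of this use case, the following Java sketch models the creation and activation of a virtual environment against a resource profile. All names here are hypothetical (the real interfaces are CORBA ports defined by the FAIN prototype); only the flow, and the reservation of VE IDs 0 and 1, come from the text.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical in-memory model of the "create a virtual environment" use case:
// the VE manager, reached via the privileged VE's initial port, creates a VE
// from a resource profile and then activates it.
public class CreateVeSketch {

    static class ResourceProfile {
        final Map<String, Integer> resources = new HashMap<>();
        ResourceProfile with(String name, int amount) {
            resources.put(name, amount);
            return this;
        }
    }

    static class VirtualEnvironment {
        final long id;
        final ResourceProfile profile;
        boolean active = false;
        VirtualEnvironment(long id, ResourceProfile profile) {
            this.id = id;
            this.profile = profile;
        }
    }

    // Stands in for the virtual environment manager.
    static class VeManager {
        private long nextId = 2; // VE ID 0 is reserved, 1 is the privileged VE

        VirtualEnvironment create(ResourceProfile profile) {
            // The real manager would contact the resource managers named in
            // the profile to create and activate the required resources.
            return new VirtualEnvironment(nextId++, profile);
        }

        void activate(VirtualEnvironment ve) { ve.active = true; }
    }

    public static void main(String[] args) {
        VeManager manager = new VeManager();
        ResourceProfile profile = new ResourceProfile().with("bandwidthKbps", 512);
        VirtualEnvironment ve = manager.create(profile);
        manager.activate(ve);
        System.out.println("VE " + ve.id + " active=" + ve.active);
    }
}
```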
9.4.3 Deploying a Service

Typically, a service provider will deploy services in its virtual environments and make them available to its customers. Deployment requires a reference to the initial port of the service provider's virtual environment. To get this reference, the virtual environment manager must be contacted as described in the previous use case. Then the template manager port of the virtual environment is used to install one or more service components, which involves the instantiation of the respective component managers. Later, the component managers will be used to create and activate instances of the service components. The initial configuration of a service is done by accessing the individual components, setting their properties, and interconnecting them.

9.5 CONCLUSION

This chapter presented the management layer at the node level. The introduction of virtual environments permitted the integration of several execution environments with different potential implementation technologies. Further, physical resources could be partitioned among several node users—the service providers—with the help of virtual environments. To achieve a flexible and fine-grained control over service deployment and management, a component-based approach was chosen for the node-level management layer. With the introduction of properties and ports for components, a means was found for the dynamic reconfiguration of services, in that service components' properties and the interconnections between service components could be changed during the service's run time. This chapter also presented a collection of classes that make up the Java execution environment. This environment provides support for service components implemented in Java, and using CORBA or SNMP for communication. Further, it is extensible for other types of communication; for example, the channel class implements ports based on plain UDP sockets for other components to connect to.
This collection of classes is used to implement the node-level management layer, based on the requirements and concepts defined in Chapter 2. Additionally, it serves as an extensible framework that allows us to derive specific service components without having to care about aspects like access control, resource monitoring, component deployment and activation, and so on. For environments that are not implemented in Java (for example, a hardware router or a kernel-based environment), wrappers can be used to map the respective abstractions into the node-level management layer. Additionally, some basic use cases were shown informally in order to explain how the components presented are used to start the privileged virtual environment
on a FAIN active node, to create new virtual environments for service providers, and, finally, to deploy services inside virtual environments.

References

[1] FAIN Deliverable 1 - Requirements Analysis and Overall AN Architecture, May 2001, http://www.ist-fain.org.

[2] Sun Microsystems, http://java.sun.com.

[3] Common Object Request Broker Architecture, Object Management Group, http://www.omg.org/technology/documents/corba_spec_catalog.htm.

[4] Blake, S., et al., "An Architecture for Differentiated Services," IETF RFC 2475, December 1998, http://www.ietf.org/rfc/rfc2475.txt.

[5] Hitachi Internetworking, http://www.internetworking.hitachi.com.

[6] Hubert, B., et al., Linux Advanced Routing and Traffic Control HOWTO, http://lartc.org.

[7] NET-SNMP community, http://net-snmp.sourceforge.net.

[8] Brunner, M., and Stadler, R., "Virtual Active Networks—Safe and Flexible Environments for Customer-Managed Services," Tenth IFIP/IEEE International Workshop on Distributed Systems: Operations and Management (DSOM'99), Zurich, Switzerland, October 1999, pp. 66, 133.

[9] Virtual Active Networks, Distributed Computing and Communications (DCC) Laboratory, Columbia University, http://www1.cs.columbia.edu/dcc/van.

[10] Keller, R., et al., "PromethOS: A Dynamically Extensible Router Architecture Supporting Explicit Routing," Proc. of the Fourth Annual International Working Conf. on Active Networks (IWAN 2002), Springer-Verlag, Lecture Notes in Computer Science 2546, December 4-6, 2002, http://www.promethos.org.

[11] Eaves, W., Cheng, L., and Galis, A., "SNAP Based Resource Control for Active Networks," IEEE GLOBECOM 2002, Taipei, November 17-21, 2002, http://www.globecom2002.com.
Chapter 10

Demultiplexing

This chapter presents the FAIN demultiplexing and multiplexing (De/MUX) framework [2] for incoming and outgoing packet data, respectively. These processes are executed for both passive and active packet data. Section 10.1 is the introduction. Section 10.2 describes the requirements for the De/MUX components, Section 10.3 defines the FAIN active packet format, and Section 10.4 presents the detailed framework of the De/MUX. The last section contains the conclusions.

10.1 INTRODUCTION TO DE/MUX
The FAIN framework supports multiple VEs, and multiple EEs running within those VEs; packets must therefore be delivered to the right entity inside a node. To this end, a packet must carry all the information that the De/MUX component needs in order to forward it to its destination inside the node. For active packet data, this means specifying both environments, the VE and the EE, in which the actual processing takes place. FAIN adopted ANEP [1] for its active packet data, and extended its definition by introducing two new option formats: one for the VE identifier, and one for the EE identifier. In active networks, an active node receives packet data and processes it. To achieve this, the packet data must be transmitted to an appropriate processing environment within the node. The De/MUX therefore first classifies received packet data, and then transmits it to an appropriate processing environment based on the resulting class. To be classified, packet data must carry an identifier: it might contain an explicit processing environment identifier, or classification might be performed on IP header information. Some senders will include the environment ID in their packet data, while others will not; the FAIN active node [4] must therefore handle packets both with and without the ID. In addition, even if an IP datagram carries an environment ID in its payload, once the datagram is fragmented only the first fragment contains that ID; the active node must therefore handle fragmented packet data. Finally, the
processing environment of the packet data might be dynamically changed; therefore, the active node must support the dynamic updating of policies that include relations between conditions and the receiver of the packet data. The objective of the FAIN De/MUX framework is to provide a mechanism that enables dynamic updating of De/MUX policies and processing of packet data, regardless of the existence of a specific ID of the processing environment, for both incoming packet data and outgoing packet data. The scope of the FAIN De/MUX framework includes providing interfaces for dynamically updating these policies, and for transmitting packet data to an appropriate processing environment after classifying the data.

10.2 REQUIREMENTS
10.2.1 Requirements for Active Packet Format for De/Multiplexing

To realize the previous functionalities, the active packet format should meet the following requirements:

• It should include an identifier that distinguishes active packet data from other packet data.
• It should include an identifier that indicates to which processing environment the packet data should be dispatched, to execute specific processing.
10.2.2 Requirements for De/MUX Mechanism

The De/MUX should be able to provide the following functions:

• Configuring filter conditions to intercept valid packet data from networks.
• Classifying intercepted packet data and selecting a valid receiver.
• Transmitting the packet data to a valid receiver.
• Sending packet data from the client to outside networks.
• Handling both active packet data and nonactive packet data.
• Calling security functions for checking received data, and retransmitting packet data according to the result.
• Calling a security function to insert security information into the active packet data.
 0               8              16                              31
+---------------+---------------+-------------------------------+
|    Version    |     Flags     |            Type ID            |
+---------------+---------------+-------------------------------+
|      ANEP Header Length       |      ANEP Packet Length       |
+-------------------------------+-------------------------------+
|                            Options                            |
+---------------------------------------------------------------+
|                            Payload                            |
+---------------------------------------------------------------+

Figure 10.1 ANEP packet format.
10.3 ACTIVE PACKET FORMAT

Figure 10.1 shows the ANEP [1] packet format, which we have adopted as the FAIN active packet format. We have therefore added a new type ID for the FAIN active network, and created several options for our framework. In our framework, resources are partitioned and virtual environments are created from them. In addition, a user of a VE can instantiate multiple execution environments in that VE. We have therefore defined a VE ID and an EE ID to distinguish each environment. The fields of the ANEP packet format are as follows:

Version
This is the version of the header format in use. Currently, the value of the version is 1. This field is 8 bits long.

Flags
In version 1, only the most significant bit (MSB) is used. If the MSB of this field is 1, the node should discard the packet; if it is 0, the node tries to forward the packet. This field is 8 bits long.

Type ID
This field identifies the evaluation environment of the data. A type ID value for FAIN had to be selected; our implementation uses 10,561 as the FAIN type ID.
ANEP Header Length
This field specifies the size of the ANEP packet header in 32-bit words. The ANEP header extends from the Version field to the Options field.

ANEP Packet Length
This field specifies the size of the entire ANEP packet in 32-bit words.

Options
This field is used when there is option data.

Payload
Active code, policy data, and data being processed are examples of payload data.
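The field layout above can be made concrete with a small encoder. The following Java sketch is our own illustration, not FAIN code; it packs the fixed 8-byte part of an ANEP header, using the FAIN type ID of 10,561 mentioned in the text, with the length fields expressed in 32-bit words as the format requires.

```java
import java.nio.ByteBuffer;

// Minimal ANEP fixed-header encoder/decoder, following the layout of
// Figure 10.1: Version(8) | Flags(8) | Type ID(16), then
// ANEP Header Length(16) | ANEP Packet Length(16), big-endian.
public class AnepHeader {
    public static final int FAIN_TYPE_ID = 10561; // value used by the FAIN prototype

    // Encodes the 8-byte fixed part of the header (no options, no payload).
    public static byte[] encode(int version, int flags, int typeId,
                                int headerWords, int packetWords) {
        ByteBuffer buf = ByteBuffer.allocate(8); // big-endian by default
        buf.put((byte) version);
        buf.put((byte) flags);
        buf.putShort((short) typeId);
        buf.putShort((short) headerWords);
        buf.putShort((short) packetWords);
        return buf.array();
    }

    // True if the MSB of the Flags field says "discard the packet".
    public static boolean shouldDiscard(byte[] header) {
        return (header[1] & 0x80) != 0;
    }

    public static int typeId(byte[] header) {
        return ((header[2] & 0xFF) << 8) | (header[3] & 0xFF);
    }

    public static void main(String[] args) {
        // A FAIN packet whose header is 2 words (8 bytes) and whose total
        // length is 4 words.
        byte[] h = encode(1, 0, FAIN_TYPE_ID, 2, 4);
        System.out.println("typeId=" + typeId(h) + " discard=" + shouldDiscard(h));
    }
}
```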
10.3.1 VE ID Option Data

Figure 10.2 shows the format of the virtual environment identifier. This option is required when the type ID of the ANEP packet carries the FAIN type ID.

FLG
The owner of the VE ID defines the value of the flag (FLG).

Option Type
The value of the option type for the environment identifier is 101, which was defined for the FAIN VE environment.

Option Length
The value of the option length is 2, measured in 32-bit (4-byte) words.

VE ID
This is an identifier for sending active packets to an appropriate VE. This field is 32 bits long. The active network service provider assigns a VE ID when an SP requests creation of a new VE. The value 0 is reserved for future use, and 1 is assigned to the privileged VE.

+---------------+---------------+-------------------------------+
|      FLG      |  Option Type  |         Option Length         |
+---------------+---------------+-------------------------------+
|              Virtual Environment (VE) ID (32-bit)             |
+---------------------------------------------------------------+

Figure 10.2 Virtual environment identifier.
10.3.2 EE ID Option Data

Figure 10.3 shows the format of the execution environment identifier.

FLG
The owner of the EE ID defines the value of the flag.

Option Type
The value of the option type for the environment identifier is 102, which was defined for the FAIN EE environment.

Option Length
The value of the option length is 2, measured in 32-bit (4-byte) words.

EE ID
This is an identifier for sending active packets to the appropriate EE. It is 32 bits long. Each VE owner assigns the EE ID.

+---------------+---------------+-------------------------------+
|      FLG      |  Option Type  |         Option Length         |
+---------------+---------------+-------------------------------+
|             Execution Environment (EE) ID (32-bit)            |
+---------------------------------------------------------------+

Figure 10.3 Execution environment identifier.
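Continuing the illustration from the previous section, the two FAIN options can be encoded as follows. This is again our own sketch: the option types 101 and 102 and the option length of 2 words come from the text, while the byte-level packing of FLG and option type follows the byte columns of Figures 10.2 and 10.3.

```java
import java.nio.ByteBuffer;

// Encoder for the FAIN VE ID / EE ID ANEP options: one 32-bit word holding
// FLG, option type, and option length (2 words), followed by the 32-bit ID.
public class FainAnepOptions {
    public static final int OPTION_TYPE_VE_ID = 101;
    public static final int OPTION_TYPE_EE_ID = 102;

    public static byte[] encodeIdOption(int flg, int optionType, long id) {
        ByteBuffer buf = ByteBuffer.allocate(8); // 2 x 32-bit words
        buf.put((byte) flg);
        buf.put((byte) optionType);
        buf.putShort((short) 2); // option length: 2 x 32-bit words
        buf.putInt((int) id);    // VE ID or EE ID
        return buf.array();
    }

    // Extracts the 32-bit identifier from an encoded option, unsigned.
    public static long idOf(byte[] option) {
        return ByteBuffer.wrap(option, 4, 4).getInt() & 0xFFFFFFFFL;
    }

    public static void main(String[] args) {
        // VE ID 1 is the privileged VE; 0 is reserved for future use.
        byte[] opt = encodeIdOption(0, OPTION_TYPE_VE_ID, 1);
        System.out.println("length=" + opt.length + " id=" + idOf(opt));
    }
}
```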
10.4 FRAMEWORK, COMPONENTS, INTERFACES

The packet data is delivered to an appropriate VE or to a specific service component by the De/MUX function. The packet data includes both ANEP packets and other data packets (passive packets). The ANEP packet delivers active packet data, while the passive packets deliver nonactive data to be processed by the node. Figure 10.4 depicts a block diagram of packet data delivery.

• Active packet data (ANEP) delivery (see Figure 10.5):
(1) At first, a client requests a channel manager to create a new active channel for receiving ANEP packet data by registering a VE ID, an EE ID, and an object reference of itself or a socket port number. (2) The channel manager creates the active channel by registering an active consumer object, which includes the VE ID, the EE ID, and the reference or the socket port number, into an internal table for active packets. (3) The netfilter [3] transmits received ANEP packet data to the
channel manager, since the manager sets conditions at boot time to intercept ANEP packets. (Netfilter is part of the framework inside the Linux 2.4.x kernel that enables packet filtering, network address translation, and other packet mangling; it is a set of hooks inside the kernel's network stack that allows kernel modules to register callback functions, called every time a network packet traverses one of those hooks.) (4) The channel manager calls a security function to check the ANEP packet before sending it to a valid client. (5, 6) After the security check, the channel manager sends the ANEP packet data to a valid client through an appropriate active channel, retrieving the target from the active packet database (see Table 10.1). (7) If there is ANEP packet data to be sent to another node, the client sends it to the appropriate active channel. (8) The active channel inserts the security information into the ANEP packet by interacting with the security component before sending it to the outside network. (9) After that, the active channel transmits the ANEP packet data to the outside network.
[Figure 10.4 is a block diagram of the De/MUX: clients register a VE ID, an EE ID, and an object reference or port number with the channel manager, which connects active channels and data channels between the clients, the security component, and the network (Linux netfilter); the numbered arrows (1) to (17) correspond to the steps described in the text.]

Figure 10.4 Block diagram of packet delivery.
• Nonactive packet data delivery (see Figure 10.4):
(10) At first, a client requests the channel manager to create a new data channel for receiving a nonactive packet (data packet) by registering flow conditions and an object reference of itself or a socket port number for receiving. (11) The channel manager sets the filter conditions, which are given by the flow conditions such as a
source IP address, a destination IP address, a protocol, and so forth, to the netfilter. The filter conditions determine which data packets should be sent to the client. (12) Then the channel manager creates a data channel by registering a data consumer object, which includes the flow conditions and the reference or the socket port number, into an internal table for data packets. (13) The netfilter transmits the data packet to the channel manager, since the manager has set flow conditions to intercept data packets. (14, 15) The channel manager sends the data packet to a valid client through an appropriate data channel by getting a target to transmit from the table for data packets. (16) If there is a data packet to be sent to another node, the client sends it to the appropriate data channel. (17) The data channel transmits the data packet to the outside network.

[Figure 10.5 shows clients 1 through n attached to active channels 1 through n, which the channel manager feeds using the active channel database, on top of the demultiplexer and the network (Linux netfilter).]

Figure 10.5 Active packet transmission.
10.4.1 Active Channel
Figure 10.5 shows how an active packet (ANEP packet) is transmitted to a valid receiver (client). The ANEP format is described in Figure 10.1. When a new client is instantiated, it needs to register its VE ID and EE ID with an object reference or a socket port number. The channel manager stores them into the database for the active channel. Table 10.1 shows the database for the active channel. When the channel manager receives the active packet, it checks the VE ID and EE ID, which are included as option data in the active packet as shown in Figure 10.2 and Figure
10.3, and searches the database, using the VE ID and EE ID, for the target reference or port number to which the active packet should be retransmitted. After getting the target, the channel manager checks whether the target is an object reference or a socket port. If the target is an object reference, the channel manager calls an appropriate method using that reference. If the target is a socket port number, the channel manager sends the active packet data over UDP to the specified port.

Table 10.1 Database for the Active Packet
No.   VE ID   EE ID   Target
1     1       2       Port = 9995
2     1       3       Reference = XYZ
3     2       2       Port = abc
...   ...     ...     ...
n     L       M       Reference or Port
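The lookup behind Table 10.1 can be sketched as a map keyed by the (VE ID, EE ID) pair, with a target that is either an object reference or a UDP port. The names below are our own (the real implementation uses CORBA references); only the lookup logic follows the text.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the active channel database: the channel manager maps a
// (VE ID, EE ID) pair to a delivery target, which is either an object
// reference or a UDP socket port number.
public class ActiveChannelDatabase {

    // A target holds exactly one of: an object reference, or a socket port.
    static final class Target {
        final String reference; // null if delivery is via UDP
        final Integer port;     // null if delivery is via object reference
        Target(String reference, Integer port) {
            this.reference = reference;
            this.port = port;
        }
    }

    private final Map<String, Target> table = new HashMap<>();

    private static String key(long veId, long eeId) { return veId + "/" + eeId; }

    // Called when a client registers its VE ID and EE ID with the manager.
    public void register(long veId, long eeId, Target target) {
        table.put(key(veId, eeId), target);
    }

    // Called for each incoming ANEP packet after its option data is parsed;
    // null means no client has registered for this (VE ID, EE ID) pair.
    public Target lookup(long veId, long eeId) {
        return table.get(key(veId, eeId));
    }

    public static void main(String[] args) {
        ActiveChannelDatabase db = new ActiveChannelDatabase();
        db.register(1, 2, new Target(null, 9995));
        db.register(1, 3, new Target("XYZ", null));
        Target t = db.lookup(1, 2);
        System.out.println("deliver via UDP port " + t.port);
    }
}
```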
10.4.2 Data Channel
Figure 10.6 shows how a data packet is transmitted to a valid receiver (client). When a new client is instantiated, the client needs to register flow conditions with an object reference or a socket port number. The channel manager stores them into the database for the data channel. Table 10.2 shows the structure of the database for the data channel. When the channel manager receives the data packet, it determines the flow conditions by checking the data packet header; namely, a source IP address, a destination IP address, a protocol number, a source port number, and a destination port number; and searches the database for the target reference or the port number of the data packet to be retransmitted. After getting the target, the channel manager checks if the target is an object reference or a socket port. If the target is an object reference, the channel manager calls an appropriate method. If the target is a socket port number, the channel manager sends the data packet to the port specified by the UDP socket.
[Figure 10.6 shows clients 1 through n attached to data channels 1 through n, which the channel manager feeds using the data channel database, on top of the demultiplexer and the network (Linux netfilter).]

Figure 10.6 Data packet transmission.
Table 10.2 Database for the Data Packet
No.   Source_IP   Dest_IP   Protocol   Source_Port   Dest_Port   Data Ch
1     sip-1       dip-1     p-1        sp-1          dp-1        Reference
2     sip-1       dip-1     p-1        sp-1          dp-2        Port
...   ...         ...       ...        ...           ...         ...
m     i           j         k          l             m           Ref./Port
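A minimal version of the flow classification behind Table 10.2 might look like the following Java sketch. The names are ours, and for brevity the channel is represented by a string; the real database also distinguishes object references from socket ports, as in the active channel case.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

// Sketch of the data channel database: each registered flow condition is a
// five-tuple (source IP, destination IP, protocol, source port, destination
// port) mapped to a data channel.
public class DataChannelDatabase {

    static final class FlowCondition {
        final String srcIp, dstIp;
        final int protocol, srcPort, dstPort;
        final String channel;
        FlowCondition(String srcIp, String dstIp, int protocol,
                      int srcPort, int dstPort, String channel) {
            this.srcIp = srcIp; this.dstIp = dstIp; this.protocol = protocol;
            this.srcPort = srcPort; this.dstPort = dstPort; this.channel = channel;
        }
        boolean matches(String sip, String dip, int proto, int sp, int dp) {
            return Objects.equals(srcIp, sip) && Objects.equals(dstIp, dip)
                    && protocol == proto && srcPort == sp && dstPort == dp;
        }
    }

    private final List<FlowCondition> conditions = new ArrayList<>();

    public void register(FlowCondition c) { conditions.add(c); }

    // Classify a packet header against the registered conditions; the first
    // matching entry names the receiving data channel (null = no match, so
    // the packet is not intercepted).
    public String classify(String sip, String dip, int proto, int sp, int dp) {
        for (FlowCondition c : conditions) {
            if (c.matches(sip, dip, proto, sp, dp)) return c.channel;
        }
        return null;
    }

    public static void main(String[] args) {
        DataChannelDatabase db = new DataChannelDatabase();
        db.register(new FlowCondition("10.0.0.1", "10.0.0.2", 17, 4000, 5000, "ch-1"));
        System.out.println(db.classify("10.0.0.1", "10.0.0.2", 17, 4000, 5000));
    }
}
```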
10.4.3 Interface Between De/MUX Components and Security Component

10.4.3.1 Interface for the Incoming ANEP Packet

The interface used by the DEMUX to request a security check on an incoming ANEP packet is shown in the following code.
/////////////////////////////////////////////
//
// Interface from DEMUX to Security Manager for incoming ANEP packets
// veid : Virtual Environment ID
// eeid : Execution Environment ID
//
interface iSecDemuxR {
  tActivePacket receiveCheck(
    in tActivePacket active_packet,
    in string veid,
    in string eeid
  );
};

/////////////////////////////////////////////
// Active Packet Structure
//
// verdict: decision of the security check
//
struct tActivePacket {
  boolean verdict;
  string saddr;
  string daddr;
  short protocol;
  short sport;
  short dport;
  tAnep anep_packet;
  tOctetStream anep_stream;
};

/////////////////////////////////////////////
// Octet Stream Definition
//
typedef sequence<octet> tOctetStream;

/////////////////////////////////////////////
// ANEP Packet Structure
//
// version: ANEP version
// flags: ANEP flags
// type_id: ANEP type ID
// aneph_length: ANEP header length
// anepp_length: ANEP packet length
// options: list of options
// payload: ANEP packet payload
//
struct tAnep {
  short version;
  short flags;
  long type_id;
  long aneph_length;
  long anepp_length;
  tOptionList options;
  tOctetStream payload;
};

/////////////////////////////////////////////
// ANEP Option Structure
//
// flag_0: flag 0 of the ANEP option data
// flag_1: flag 1 of the ANEP option data
// option_type: ANEP option type
// option_length: ANEP option length
// option_data: payload of the ANEP option
//
struct tOption {
  short flag_0;
  short flag_1;
  long option_type;
  long option_length;
  tOctetStream option_data;
};

typedef sequence<tOption> tOptionList;
10.4.3.2 Interface for Outgoing ANEP Packet

The interface called by the MUX to insert security information into an outgoing ANEP packet is shown below.

/////////////////////////////////////////////
//
// DEMUX-Security Manager Interface for outgoing ANEP packets
//
interface iSecDemuxS {
  tActiveStream sendCheck(
    in tActiveStream active_stream
  );
};

/////////////////////////////////////////////
// Active Data Stream Structure
//
struct tActiveStream {
  boolean verdict;
  string saddr;
  string daddr;
  short protocol;
  short sport;
  short dport;
  tOctetStream anep_stream;
};
10.5 CONCLUSIONS

This chapter describes the architecture of the FAIN De/MUX, as summarized below. The FAIN framework supports multiple VEs and multiple EEs running in VEs. To this end, packets must carry all the necessary information based on which the De/MUX component may forward the packet to its destination inside the node. FAIN adopted the Active Network Encapsulation Protocol (ANEP) [1] for its active packet data, and extended its definition by introducing two new option formats: one for the VE identifier, and one for the EE identifier. The De/MUX intercepts packet data and dispatches it to an appropriate receiver client. It provides a virtual channel for each VE, to transmit packet data between the De/MUX and the receiver client. To realize this function, the channel manager and the channel classes are implemented. The channel manager creates one virtual channel object for each VE. The virtual channel comprises multiple active channels and data channels: the active channels are used for active packet data transmission, and the data channels for nonactive packet data transmission. When a receiver client receives packet data from the De/MUX, it needs to register a target object that includes the receiving method and flow definitions. Currently, the De/MUX supports the CORBA interface and a simple socket interface as packet data transmission methods. Concerning flow definitions, especially those of active packet data, the client must register the VE identifier and the EE identifier; these identifiers should be included in the ANEP packet as option data. In the case of nonactive packet data, the client can register a five-tuple (source IP address, destination IP address, protocol number, source port number, and destination port number) for classifying packet data. The DEMUX retransmits intercepted packet data to the correct receiver client, according to this flow categorization.
In addition, it calls a security check function to approve transmission of the active packet. If the security check fails, the active packet is discarded. In the case of outgoing packet data, the De/MUX calls security functions to insert security information as ANEP option data, and then transmits the outgoing packet data to outside networks. In conclusion, the De/MUX can provide flexible and secure demultiplexing and multiplexing functions to the client.

References

[1] Alexander, D., et al., Active Network Encapsulation Protocol (ANEP), RFC Draft, http://www.cis.upenn.edu/~switchware/ANEP/docs/ANEP.txt.

[2] Denazis, S., et al., Final Active Node Architecture and Design, FAIN Deliverable 7, May 2003, http://www.ist-fain.org/.

[3] Netfilter, http://www.netfilter.org/.

[4] FAIN Project Deliverables, http://www.ist-fain.org/deliverables: D1 - Requirements Analysis and Overall AN Architecture; D2 - Initial Active Network and Active Node Architecture; D3 - Initial Specification of Case Study Systems; D4 - Revised Active Network Architecture and Design; D5 - Specification of Revised Case Study Systems; D6 - Definition of Evaluation Criteria and Plan for the Trial; D7 - Final Active Network Architecture and Design; D8 - Final Specification of Case Study Systems; D9 - Evaluation Results and Recommendations; D40 - FAIN Demonstrators and Scenarios.
Chapter 11

Security Management

11.1 INTRODUCTION
Programmable and active networks enable their users to program and extend network elements to fulfill their specific communication needs. FAIN [14] aims to develop a flexible, high-performance and secure active network node. FAIN architecture [20] allows various active networking technologies to be used in the same node, so as to allow the implementation of various services for its users in the transport, control, and management planes. The flexibility of such a system raises serious security concerns. Security has already been an area of intensive research for more than half a decade in active networking. In general, security solutions can be divided into two distinct approaches: architectural based and language based. Architectural-based solutions aim to provide complete security solutions such as SANE [2] or active networking security working group security architecture [3, 27]. Language-based solutions rely on safe language and interpreter design, and they achieve security by strictly limiting the ability of the programs that can be injected in the network [22, 26]. FAIN [37] aims to develop a complex environment for flexible service provisioning, and management of these services. In contrast, existing security approaches are targeted toward simpler structured environments: They do not cover the collaboration of multiple execution environments, services, and service components; they neglect management issues; they do not provide a clear view of system entities; and they cover only a part of the tasks that the security architecture should perform. A security architecture is a set of principles, services, and mechanisms required to meet its users’ needs. It must prevent intentional and unintentional threats, and provide a set of system elements that implement the services. In Section 11.2, we define system relationships and entities. 
Section 11.3 concerns security threats, relationships, and architecture goals; and Section 11.4 describes the active and programmable networking security issues. In Section 11.5, we propose a high-level security architecture, and introduce system elements. Section 11.6 describes security architecture design and implementation. In Section 11.7, we provide a general scenario of an active packet passing a node, and in Section
11.8 we report on system performance. Section 11.9 presents the case for the existing active networking approach. In Section 11.10, the whole approach is evaluated. Finally, we present our conclusions and suggested directions for future work in Section 11.11.

11.2 SYSTEM RELATIONSHIPS AND ENTITIES
The basis of the node is defined using the FAIN reference model [20]. It decomposes the node into four layers: the router or hardware, the node operating system (NodeOS), virtual environments, and services. We will briefly describe some of these terms: NodeOS is a collection of basic node services that perform tasks of demultiplexing, resource control, active service provisioning, security, and management. NodeOS functionality is exported through the NodeOS API to execution environments, which reside in VEs. VEs are resource- and user-related abstractions on the node. More than one EE can reside in each VE. Services are collections of components performing an application for their users. Services and service components are defined by a service descriptor, which is resolved in the network and on the node into one or more code modules that can be instantiated into certain EE(s), where the component(s) become run time instance(s). By way of example, the FAIN node supports the Java-based EE, and the high-performance active network node [9] as an EE. These environments can be used in a synergistic way to support or complement each other. High-performance environments can be supported by a Java-based EE for management and control of the former, and SNAP [26] is extended with the Java environment to manage SNMP-enabled [8] devices. Packets are interpreted as requests, and are evaluated on the node within a service, in one or more components. This can result in zero, one, or more packets on the output of the node. Each packet can be either active or passive. Active packets are considered to be those that contain an ANEP header, as defined in the ANEP draft standard [1]. Packet content or the state on the node can be changed during evaluation. Code or data in the packet can result in actions regarding the API defined by the dedicated component execution environment or the NodeOS API. A service can exist in a single node or can span multiple nodes in the network.
Depending on the nature of the service, packets exchanged between communicating entities can be processed either at the sender and receiver only, or on any suitable node in between. For ease of explanation, we will assume a simple network structure as shown at the bottom of Figure 11.1. It is important to remember that the structure is virtual, and there can also be passive nodes in the network. Code modules that extend the programming environment of the node are a result of service deployment in the network, and service resolution on the node.
This approach is called out-of-band because the code is not transferred together with data. To gain additional flexibility, FAIN also uses an in-band approach in active SNMP service, reusing the existing active networking approach, SNAP. In the case of FAIN, the majority of the code can be considered out-of-band. From the system relationships, we can deduce the following set of entities in the system: network users, network nodes, and services. Here, the execution environment in the processing model is a technical term that refers to the system elements necessary for successful evaluation, and is not a real system entity in itself. FAIN uses virtual environment abstraction so as to be able to abstract the resources available on the node of the active network user, and also those resources its services are using. VE represents resources on the node needed for the service operation. VE is a flexible abstraction that can overlap tightly with a service.
Figure 11.1 System entities and relationships.
The relationships between entities are shown in Figure 11.1. The figure shows two types of relationship: ownership (shown with arrows) and system entity relations on the node regarding the resources (shown with circles). The set of users on the left “owns” resources on the node. User C is an owner of the node and thus also an owner of the privileged VE (pVE). PVE is a special VE abstraction that holds basic node services, including security-related services, which will be explained later in Section 11.5. User B is an example owner of a VE that can run one or more services associated with a VE. Users D and E “own” the same service; this case represents the ownership of a packet evaluated in the context of a service. The maximum number of VEs on each node is limited by the available resources on the node. On the right side of Figure 11.1, the system entities are shown: pVE corresponds to the node owner; VE corresponds either to the service provider or a
group of related services. Services can either be user or application related, and users of these services can be service, VE, or node managers, users, observers, and so on. These entities can easily be related to those defined in the FAIN business model [15]. In this system, there can also be a number of “external” entities that support the system operation, such as code producers, certification and attribute authorities, and so on.

11.3 THREATS, SECURITY REQUIREMENTS, AND ARCHITECTURE GOALS

Threats in the active system extend beyond those in traditional “passive” systems, where evaluation is basically limited to forwarding a packet through the network element. The consequences of threats are the same as for traditional systems. As categorized in [33], they are: disclosure, deception, disruption, and usurpation. Threat actions that can cause a threat consequence are presented in [3]. Section 11.2 presents all entities in the system that can be a source of a threat action, which can result in a series of threat consequences. The pVE represents a threat to all other entities in the system. While basic pVE services (security, resource control) can prevent threat actions between other entities, the threat represented by the pVE can be prevented with strict control over which nodes can process and evaluate packets.

Basic security-related problems occur in the following areas: start-up of the node and node connection to the network; partitioning of resources on the node and in the network; deployment of new services, related code, configuration, and policies in the network and on the node; accessing, managing, controlling, and observing the services and the related data; control of which data (packets) a service can access; naming of the resources in the network; providing traceability of the node state; and protection of the basic node resources.

The security architecture designed in FAIN has the following general goals and requirements:
• Authorized use to protect the network element and user resources in the network. Network element resources can be functional, computational, or communicational. User resources are the packet content and the possible states on the network nodes. Only authorized users should be able to access these resources, and only authorized nodes can process the packets.
• Separation between different VEs and their related services regarding access and resource usage. This should be enforced in the network and on the network elements.
• Verification. Code brought to the node must be verified, either statically or dynamically, and must be protected on the node, together with code-related configuration and data, against intentional or unintentional changes.
• Accountability of security-related events. An audit service should be provided on the nodes and in the network.
• Protection in transit. The security architecture perimeter is a network element; communication between network elements should be protected and should provide, in a secure manner, sufficient information about communicating entities in the system for security architecture operation.
• Common treatment regarding the security of: network elements, like the end, intermediate, and management nodes; VEs, services, and components; EEs and code modules; and communication between and across multiple network elements. The same security mechanisms should be used in the management, control, and transport planes.
• Transparency. Operation of the security architecture should be transparent to its users and developers, either through well-defined interfaces, protocol headers, or the architecture's implicit operation, and should require minimal user intervention.
• Flexibility. Multiple trust management approaches should be supported between entities in the system; multiple types of security policy should be supported; and the security architecture proposed should be general enough that it can be used for all technologies developed in FAIN, and also in existing established technologies.
• Sufficiency and extensibility. Basic security services should be sufficient for safe network element operation, but it should be possible for certain VEs or services to extend the security architecture to fulfill their specific needs.
Security architecture goals can be achieved primarily through a range of services: authentication, authorization and policy enforcement, the system services, code and packet integrity services, code verification, limiting users’ resource usage on the node and in the network, audit services, the right choice of selected security mechanisms, and system design. These issues are discussed in the next section.
11.4 SECURITY ISSUES

11.4.1 Authorization and Policy Enforcement
All access to the node and user resources should be subject to authorization. Node and user resources can be hardware, such as CPU, memory, storage, and link bandwidth; or functional, such as special-purpose files, routing tables, policy and credentials entries and databases, VEs and service-related data, and so on. Important resources (from the point of view of vulnerability) are possible service states on the nodes that can be shared among multiple users, and user packet content. Authorization is a process that provides an authorization decision about access of the subject to the object. When the subject accesses the object, the enforcement engine suspends the request, and asks the authorization engine for an authorization decision. Information passed to make the authorization decision is the security context of the subject and object. The authorization engine returns an authorization decision based on this information, that is then enforced by the enforcement engine. The security context refers to all security-relevant data available in the system regarding subject and object. Clearly, proper authentication is needed to provide authorization. Policies control which users have access to which resources and in what manner. Policies should be detached from the system, and should not be hardwired in the application. The system should be flexible enough to support multiple kinds of policies, as generally used policies can vary. In the policy context, users can be defined either by their identity or by attributes like roles or groups. Both are user attributes, and users in the system should possess certain credentials stating them. Multiple types of credentials should be allowed, to enable various ways of trust management between users. A scalable approach should be provided in the system, to enable nodes to access user credentials. When transferring credentials through the network, integrity must be provided. 
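The enforcement/authorization split described above can be sketched as follows. The policy format, class names, and method signatures are hypothetical simplifications for illustration, not the FAIN NodeOS API.

```java
import java.util.Map;

// Sketch of a reference-monitor-style split: the enforcement engine suspends a
// request, asks the authorization engine for a decision based on the security
// contexts of subject and object, and then enforces it. Names are illustrative.
public class ReferenceMonitor {
    /** Security context: the security-relevant attributes of a subject or object. */
    public static final class Context {
        public final String principal;
        public final String role;
        public Context(String principal, String role) { this.principal = principal; this.role = role; }
    }

    /** Authorization engine: decides on (subject, object, action) from the contexts. */
    public static boolean authorize(Context subject, Context object, String action,
                                    Map<String, String> policy) {
        // Hypothetical policy format: "role:action" -> permitted object owner ("*" = any).
        String allowedOwner = policy.get(subject.role + ":" + action);
        return allowedOwner != null
                && (allowedOwner.equals("*") || allowedOwner.equals(object.principal));
    }

    /** Enforcement engine: suspends the request, obtains a decision, then enforces it. */
    public static void enforce(Context subject, Context object, String action,
                               Map<String, String> policy, Runnable request) {
        if (!authorize(subject, object, action, policy)) {
            throw new SecurityException(action + " denied for " + subject.principal);
        }
        request.run(); // decision granted: the suspended request proceeds
    }
}
```

Note that the decision logic lives entirely in the policy map, not in the application code, matching the requirement below that policies be detached from the system.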
If credentials are referenced or included in the packet, such information must be bound to the packet content. A replay protection mechanism must be available to prevent packet replays. It should be possible for the system to generate and enforce certain policies by itself, independently of the fine-grained user policies. Such policies can be understood as mandatory access policies, enabling a separation between system entities like VEs and services, and preventing their unnecessary interaction. Certain kinds of policies may require that the state of previous authorization decisions be known. The system should be able to define such a state, and to keep it available for later authorization decision provisioning. It is unrealistic to expect that all policies and all users can be globally known. But we can assume that the users and the policy can be known in a
single administrative domain, or in the context of a certain service. The system should be able to provide functionality that can enable replacement of the user credentials on administrative domain borders. The enforcement process must have the classical attributes of a reference monitor: it must be nonbypassable, tamperproof, and analyzable [10]. Besides, as is noted in [3], it must be nonspoofable. The security context must be bound to objects and subjects in such a way that it cannot be forged in the system. The process of building a security context must be examined carefully, so that system authorization decisions are based only on data we can trust.

11.4.2 Authentication
Authentication is a service needed for authorization and also for other security services in the system. Authentication must be provided per packet. The major problem for an authentication service in active networking approaches is how to authenticate an active packet passing through the network, and being evaluated on many nodes. Existing protocols or architectures, like SSL [4] or IPsec [24], are not designed for such a requirement. For authentication, symmetric or asymmetric cryptography-based solutions can be used. Using symmetric cryptography solutions requires that the session key be negotiated or provided for two or more parties in communication. Asymmetric-based approaches do not require such a step, but trust must be managed between the active packet sender and the nodes that the packet traverses. Symmetric approaches do not provide nonrepudiation; this fact is important if two or more nodes use the same session key, and any of them can become a source of the authenticated packet. In such a setup, there can be a serious security problem if one node is compromised. Besides, the session key can be seen as a hard state on the node: If it is not available, communication will fail. Symmetric techniques that negotiate separate session keys with each node in the path and provide authentication data for every node in the packet are too costly in terms of bandwidth and negotiation time. This approach has the same problems with packet integrity as using asymmetric techniques, as described in Section 11.4.3. Symmetric cryptography approaches are still very useful when used properly; neighbor nodes can identify each other in this way after they have established a trusted relationship and negotiated a security association. Besides authentication of a peer, they can provide integrity and confidentiality for the packets exchanged, depending on the protection mechanisms used in the SA.
We call such types of communication a “session.” Sessions are useful for internode communication or for communication of a user with a node. The concept of sessions can be supported in various ways; for example, by IPsec or SSL. Data origin authentication is related to data integrity. There can be no data origin authentication if the data integrity service for exchanged data is not also provided.
11.4.3 Packet Integrity
Packet content can be legally changed in the network. In this context, we will focus on active packets. While passive packets can also be evaluated and changed in certain services, care must be taken not to interfere with end-to-end security solutions like IPsec. Changing the packet content raises the question of data origin: If the packet originates at one node but content is added to the packet (for example, data is collected at nodes), or something is removed (for example, some lines of code), this data as a whole cannot be authenticated for data origin on the nodes that the packet passes. So it is logical to split the packet into the part that can change, that is, the variable part, and the part that is static during the packet lifetime in the network. The variable part of the packet can be used by the service to store or modify packet content. We must be careful to protect this variable part from unauthorized or malicious modification. We propose two countermeasures:

• The first countermeasure is to control which nodes can process and evaluate such a packet. With this approach, we can avoid unauthorized modification between two authorized nodes. Per-hop protection contrasts with the protection that can be applied end to end for the static parts of the packet, and is similar to the sessions introduced in Section 11.4.2. It can be understood as system-level protection of active packets exchanged between two nodes.
• The second countermeasure concerns how to control additions or modifications. It is probably necessary to shift this responsibility from the system layer to the service layer, because of the diversity of approaches and the service's internal knowledge of the data structure.
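The static/variable split can be illustrated with a minimal sketch: only the static part is covered by the end-to-end signature, so a node may legally rewrite the variable part without invalidating the signature. The class names are illustrative, and SHA256withRSA is an assumed algorithm choice, not one fixed by the chapter.

```java
import java.security.*;

// Sketch of the static/variable packet split: the sender signs only the static
// part, so intermediate services can rewrite the variable part while data-origin
// authentication of the static part survives end to end. Names are illustrative.
public class SplitPacket {
    public byte[] staticPart;   // e.g., code and addresses: immutable in the network
    public byte[] variablePart; // e.g., service state, collected data: mutable per hop

    public SplitPacket(byte[] staticPart, byte[] variablePart) {
        this.staticPart = staticPart;
        this.variablePart = variablePart;
    }

    public byte[] signStatic(PrivateKey key) throws GeneralSecurityException {
        Signature sig = Signature.getInstance("SHA256withRSA");
        sig.initSign(key);
        sig.update(staticPart);   // note: variablePart is deliberately NOT covered
        return sig.sign();
    }

    public boolean verifyStatic(PublicKey key, byte[] signature) throws GeneralSecurityException {
        Signature sig = Signature.getInstance("SHA256withRSA");
        sig.initVerify(key);
        sig.update(staticPart);
        return sig.verify(signature);
    }
}
```

The variable part is then left to the two countermeasures above: per-hop protection at the system layer, and service-layer control of modifications.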
If an active packet contains intermixed code and data, the integrity of the packet is hard to guarantee, and it is hard to know which parts of the packet data are static and which are variable. Protection of such packets between hops is not adequate. In Section 11.9, we propose a solution for an active SNMP system.

11.4.4 System Integrity
System integrity service is a core network element service. It must be provided from the ground up—from the first piece of code run on the node. System integrity guarantees that the node will perform its intended function in an unimpaired manner, in tandem with other security services; notably, authorization, policy enforcement, and code and service verification. The task of system integrity is threefold: first, to store the state of the code after authorization and possible verification, when the node programming
environment is extended; second, to provide mechanisms that prevent malicious, unintentional, and unauthorized system changes during node operation; and third, to keep previous states of the code and related data, so that extensions and modifications of the node environment can be traced or rolled back if required.

11.4.5 Code and Service Verification
Code and service verification is a security service that verifies whether a service and its code modules operate correctly. We can divide verification into two broad groups: static and dynamic. Static verification is done prior to injection of the service into the network. Dynamic verification occurs prior to or during the evaluation; it must be extremely fast in contrast to static verification. As discussed in Section 11.2, the majority of the code comes into a node out-of-band. For out-of-band approaches, it is possible to verify the code in a static way. Many kinds of verification are possible, like source code inspection, code testing, generating proofs [28] that can be verified on the node prior to code installation and usage, and so on. For static verification, it is important to establish trust between the verifier and the code user (node, VE). Dynamic verification is usually performed by interpreters, as in the case of Java byte code verification. When the interpreter model is extended, the verification also must be extended. Dynamic verification is important in the case of in-band deployment of the code. Specific to service verification in FAIN is how the services are composed; many code modules can be composed into a single service when the service descriptor is resolved on the particular active network node. While a service can be verified for the desired properties at the network level, final verification of the service must be done at the node level. Verification is needed for every piece of code, and a common approach should be chosen, because FAIN uses a variety of active networking technologies.

11.4.6 Limiting Resource Usage
Resource usage is a difficult problem because active service resource usage cannot be clearly determined in advance. In today's networks, basic IP packet forwarding is proportional to the packet length, but any processing that must be shifted from the hardware path to the network element processor is a burden for the network element. This is even more true in active networking. Undetermined resource usage applies both to in-band and out-of-band approaches; neither can guarantee strict resource bounds in all cases for the packet(s) that trigger the service. The basic approach is to limit resource usage per active network user. The user resource box is a collection of the available communication or computational resources. If a resource usage limit can be applied per user on the active node, the network resource usage cannot exceed the user resource limit.
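A minimal sketch of such a per-user resource box, together with the division of a packet's remaining budget among child packets; the units, names, and even splitting strategy are illustrative assumptions.

```java
// Sketch of per-user resource limiting: each user owns a "resource box" whose
// budget is debited on each node, and a packet's remaining budget is divided
// among the child packets it spawns. Names and units are illustrative.
public class ResourceBox {
    private long remaining; // e.g., abstract CPU/bandwidth units granted to this user

    public ResourceBox(long limit) { remaining = limit; }

    /** Debit usage on this node; refuse work once the user's limit is exhausted. */
    public synchronized boolean consume(long units) {
        if (units > remaining) return false;
        remaining -= units;
        return true;
    }

    /** Split a packet's remaining budget evenly among the child packets it creates. */
    public static long[] splitBudget(long packetBudget, int children) {
        long[] shares = new long[children];
        java.util.Arrays.fill(shares, packetBudget / children);
        return shares;
    }
}
```

Because every debit comes out of one bounded box, a user's services cannot exceed the granted limit no matter how many packets or child packets they generate.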
Resource usage must also be limited network wide for the active service, which is triggered by matching packets. This can be achieved by tracking a packet-integral resource usage count, or the number of nodes the packet has passed and its proportional usage, and how many packets a single request produces on the node. In this case, the packet-available resource limit must be divided among the child packets.

11.4.7 Accountability
Accountability is an important property of the system. Accountability enables us to track and to analyze possible security breaches through an audit service. Accountability should not be provided only on the node, as active services span many nodes and can be influenced by many external subsystems. In a single administrative domain, accountability data should be gathered so that analysis is possible from a central point.

11.5 HIGH-LEVEL SECURITY ARCHITECTURE
We propose a high-level security architecture [21, 30, 31] to fulfill our security requirements, as shown in Figure 11.2. The architecture addresses the requirements placed on the system as presented in Section 11.3, and as discussed in Sections 11.2 and 11.4.
Figure 11.2 High-level security architecture.
Basic security services are positioned in the privileged VE for the following reasons: We want to treat all possible technologies and their implementations (implementing VEs and services) in a uniform manner, reducing the risk of multiple implementations. As such, the services offered in the pVE are protected with the same security services and mechanisms. This does not preclude VEs or services from implementing their own security services or mechanisms when it is reasonable to do so. Resource control is not strictly part of the security architecture. It is a necessary element of any reasonable network node and, when using such an element, the security architecture can efficiently enforce separation of user data.

11.5.1 FAIN Architectural Model and Security Architecture
The core of the FAIN architecture is the active network node. Basic functions of the NodeOS were decomposed into the following subsystems: demultiplexing, a resource control framework (RCF), active service provisioning, management, and security. Each subsystem is also related to security:
• The DEMUX subsystem (discussed in Chapter 10) is responsible for the management of input and output channels to VEs and services. This is the point where the security perimeter of the architecture starts, and the security context of the incoming packets is built. At the output channels, the packets leave the perimeter, and it is here that external security representations must be added to the packet. DEMUX exports interfaces to set up or tear down the channels; these interfaces are part of the NodeOS API.
• The RCF subsystem is responsible for resource allocation and enforcement of resource usage. It enables separation of system entities regarding communication and computation resources. A guaranteed share of the resources must be provided to the pVE, so the basic services can operate uninterrupted. RCF-exported interfaces enable resource reservation and report resource usage.
• The ASP subsystem is responsible for deploying the code for service operation to the node; it must cooperate with the security subsystem to ensure system integrity and static service verification.
• The management subsystem is responsible for management of the basic node services, VEs, and their services and service components. It exports interfaces for their initialization, setup, control, suspension, observation, and termination.
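The exported interfaces listed above might be rendered in Java roughly as follows. The real FAIN interfaces are defined in CORBA IDL; these names and signatures are assumptions for illustration only, with a tiny in-memory DEMUX stub so the sketch is runnable.

```java
// Illustrative Java rendering of the interfaces each subsystem exports through
// the NodeOS API. The actual FAIN interfaces are CORBA IDL; all names and
// signatures here are assumptions for the sketch.
public class NodeOsApi {
    public interface Demux {                    // input/output channel management
        int setupChannel(String veId, String serviceId, String filterSpec);
        void teardownChannel(int channelId);
    }
    public interface ResourceControl {          // RCF: reservation and accounting
        boolean reserve(String veId, long bandwidth, long cpuShare);
        long usage(String veId);
    }
    public interface ServiceProvisioning {      // ASP: code deployment to the node
        boolean deploy(String serviceDescriptor, byte[] codeModule);
    }
    public interface Management {               // lifecycle of VEs, services, components
        void start(String componentId);
        void suspend(String componentId);
        void terminate(String componentId);
    }

    /** Tiny in-memory Demux stub, just to make the sketch executable. */
    public static class InMemoryDemux implements Demux {
        private final java.util.Map<Integer, String> channels = new java.util.HashMap<>();
        private int next = 1;
        public int setupChannel(String veId, String serviceId, String filterSpec) {
            int id = next++;
            channels.put(id, veId + "/" + serviceId + "/" + filterSpec);
            return id;
        }
        public void teardownChannel(int channelId) { channels.remove(channelId); }
        public int openChannels() { return channels.size(); }
    }
}
```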
The high-level security architecture was decomposed into system elements performing necessary tasks and mechanisms, as follows:
• The principal manager is responsible for principal-related operations: adding principals to, removing them from, and searching for them in the principal database. Principal entries are collections of principal-related data: principal attributes, a list of the principal's credentials, and pointers to the principal secure store.
• The credential manager manages principal-related credentials in a uniform way, irrespective of credential type. It is responsible for parsing and validating credentials and extracting principal-related credential information: user attributes, credential time validity, or possible policies embodied in the credential. The credential manager also provides utilities for generating user credentials and keystores.
• The policy manager manages various policies on the node in a uniform way. The policy manager also provides policy engine(s) that can provide authorization decisions related to a particular policy.
• The security manager is the central point of the subsystem. It is responsible for the construction of the security context of subjects and objects on the node. Security contexts are kept in the security subsystem, never leaving the subsystem. The security manager is the only security element exporting security-area-related interfaces.
• The authorization engine provides authorization decisions to node enforcement engines. An authorization decision is based on one or more policy engine decisions. Authorization engines also maintain the state of the authorization, which can be exported to the audit subsystem or stored if certain policies require such a state.
• The enforcement layer enforces the authorization engine decisions. The enforcement layer is separated into enforcement engines, where the authorization decision is enforced, and the mechanism that enables secure gathering of the subject and object information on the node.
• The audit subsystem provides an audit service on the node through audit channels to the audit database.
• The system integrity subsystem collects all code module, service, and EE-related data; and reacts to code or service changes, code time validity, or code-related policy changes.
• The verification manager enables dynamic verification of the code or services.
• The cryptographic subsystem offers the mechanisms necessary for all cryptographic operations.
• The principal secure store holds principal-related cryptographic information, and provides asymmetric cryptography mechanisms, together with the cryptographic subsystem, in such a manner that the principal-related private keys never leave the store.
• The external security representation subsystem builds and extracts principal-related security context information (e.g., from a request packet) in a uniform manner, and provides mechanisms to fetch the principal-related credentials on the node.
• The connection manager can build security associations with neighboring nodes in a secure and trusted way, to provide hop protection between nodes.
• The integrity subsystem ensures packet integrity between two neighbor nodes.
Further details can be found in [16, 19]. The above system elements represent the FAIN high-level security architecture. The implementation of such security architecture is presented in the following section.
Figure 11.3 Security context, VE, and service start-up.
11.6 SECURITY ARCHITECTURE DESIGN AND IMPLEMENTATION
The FAIN active node supports various technologies: the operating system used is Linux, the core of the node is coded in Java, and CORBA is used for communication between node subsystems through a set of well-defined interfaces in Interface Definition Language (IDL). Subsystems like SNAP, the high-performance network node, and Linux subsystems like netfilter and operating system resource control features were wrapped with Java and CORBA to be able to access, control, and manage these environments. FAIN testbed scenarios are presented in [17, 19], as well as in Chapters 17 and 18. From a security perspective, it is important that a component-oriented model be used for the main system abstraction. In this model, everything on the node, including base pVE services, VEs, EEs, services and their components, and active packets, is treated equally. Modularity, equal treatment of all system components, and the fine granularity of the approach are beneficial to security.

11.6.1 Building the Components' Security Context
For each component in the system, the security context of the component is built during component initialization and start-up. As shown in Figure 11.3, the process of service or VE start-up begins with transition and labeling. Relevant pointers to VE, service, and parent VE data are attached and, if a specific component policy is specified, this policy is set. The security context data is stored in the security context database in the node security area. During initialization, each component gets a node-local unique ID that points to the context.

11.6.2 Enforcement Layer, Authorization, and Policy Enforcement
When the components communicate, the CORBA interceptor is used to pass the security identifier (SID) transparently from the subject to the object. When accessed, the component interface is protected by an authorized call, which is passed to the component authorize method. The security manager interface is called for authorization, with the SIDs of the communicating components, the action, and the possible environment of the call as parameters. The authorization engine first compares the VE and service identifiers and, if they are identical, it evaluates the action and possible environment of the call in the policy engine, as specified by the policy type. The authorization decision is then passed on to the caller. An active packet is treated in the context of the node in exactly the same way as a component. When the active packet's external representation is accessible, as described in Section 11.6.3, the security context of the packet is built. The packet security context is used to provide authorization and policy enforcement regarding the packet-related requests to system resources. In the same manner, access to the packet or parts of the packet can be controlled in the node. When policies are written for basic components, and the authorizations are in place, all inherited components need only take care of their extensions to the model, which are component specific. The developer of the component must take care that its abilities can be protected, leaving to the management environment the task of preparing proper policies regarding the security model.
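The two-step authorization described above — comparing VE and service identifiers first, then consulting the policy engine — can be sketched as follows. All names, and the policy-engine stub in particular, are illustrative.

```java
import java.util.Map;

// Sketch of the component authorization path: the caller's SID travels with the
// invocation (in FAIN, via a CORBA interceptor); the authorization engine first
// enforces VE/service separation, then delegates to a policy engine. All names
// are illustrative.
public class ComponentAuth {
    public static final class SecurityContext {
        public final String veId, serviceId, role;
        public SecurityContext(String veId, String serviceId, String role) {
            this.veId = veId; this.serviceId = serviceId; this.role = role;
        }
    }

    /** Policy engine stub: permitted "role:action" entries. */
    static boolean policyPermits(Map<String, Boolean> policy, String role, String action) {
        return policy.getOrDefault(role + ":" + action, false);
    }

    public static boolean authorize(SecurityContext subject, SecurityContext object,
                                    String action, Map<String, Boolean> policy) {
        // Step 1: mandatory separation -- subject and object must share VE and service.
        if (!subject.veId.equals(object.veId) || !subject.serviceId.equals(object.serviceId)) {
            return false;
        }
        // Step 2: fine-grained decision delegated to the policy engine.
        return policyPermits(policy, subject.role, action);
    }
}
```

The identifier comparison acts as a cheap mandatory check that never reaches the policy engine when two VEs or services would otherwise interact.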
In the security context, sessions and per-hop protection between nodes are treated in a similar way to components. Their security context is created from the connecting entity's credentials, and assigned to proxies on the node. In the case of per-hop protection, the security context is assigned to specific channels related to the node intercommunication services. An example of such a service is the connection manager case explained in Section 11.6.5. Sessions to the node are also supported; in this case, they support management stations and their connections to the node. For such connections, we use CORBA over SSL. One or more connections are possible, with different security contexts between the management station(s) and the node. The security context of such sessions is attached to the component ports returned to the client acting as a proxy for the connected user. The context itself is built from user-supplied credentials during SSL session negotiation (X.509 certificates).

11.6.3 External Security Representation
As a basis for external security representation, we use the ANEP [1] encapsulation protocol. This solution is applicable to any active networking approach that can be encapsulated with the ANEP protocol. Based on the requirements of Section 11.4, we defined six new options to carry security-related information over untrusted connections. These options carry the VE and service identifier, hop protection and credential option-related information, service variable data, and resource usage information. Only the source and destination addresses are used from the original ANEP options. Hop protection is defined by the security association identifier, which points to the correct association with a neighbor node; a sequence field that protects against replays; and a keyed hash [25]. The keyed hash covers the entire ANEP packet except the keyed hash itself. Hop protection protects all active packets exchanged between two neighbor active nodes. As a system-layer protection, these fields are removed from the packet after successful checks. Only information about the previous hop node is kept for the packet. If the packet leaves the node, a new hop option is built to allow the next hop. The credential option is defined by a credential identifier and type; a location field, specifying where the credentials can be fetched; a target field where the user can specify specific targets such as nodes, the system layer, or the packet itself; an optional time stamp that protects against replays; and the digital signature. The digital signature covers only the static data of an active packet: the first 32 bits of the ANEP header (protecting the Type ID of active network nodes), the source address, the VE and service ID, the ANEP payload, and the credential option itself, except for the digital signature data. A time stamp in the credential option is an additional measure of protection against faulty or subverted node services; per-hop replay protection in this case is not sufficient.
In such a service, it is easy to store and replay the packet later. Roughly synchronized node clocks are needed for protection in the network.
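The hop-protection fields described above (SA key, sequence field, keyed hash) can be sketched as follows, assuming HMAC-SHA256 as the keyed hash — the chapter does not fix the algorithm — and illustrative names throughout.

```java
import java.security.GeneralSecurityException;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Sketch of the hop-protection option: a keyed hash over the packet, computed
// with the security association's shared key, plus a strictly increasing
// sequence number checked by the receiver to reject replays. The real option
// also carries an SA identifier; names and algorithm choice are illustrative.
public class HopProtection {
    private final SecretKeySpec key;
    private long lastSeen = -1; // highest sequence number accepted so far

    public HopProtection(byte[] sharedKey) {
        key = new SecretKeySpec(sharedKey, "HmacSHA256");
    }

    public byte[] mac(long sequence, byte[] packet) throws GeneralSecurityException {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(key);
        mac.update(java.nio.ByteBuffer.allocate(8).putLong(sequence).array());
        mac.update(packet); // in ANEP: everything except the keyed-hash field itself
        return mac.doFinal();
    }

    /** Accept only fresh sequence numbers carrying a valid keyed hash. */
    public boolean verify(long sequence, byte[] packet, byte[] tag) throws GeneralSecurityException {
        if (sequence <= lastSeen) return false; // replayed or stale: drop
        if (!java.security.MessageDigest.isEqual(mac(sequence, packet), tag)) return false;
        lastSeen = sequence;
        return true;
    }
}
```

As in the text, this protection is strictly per hop: the fields are stripped after a successful check and rebuilt for the next hop under that hop's own SA.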
There can be zero, one, or more credential options in a single active packet. On each node the packet passes, the credentials related to a credential option are fetched, the certification path is validated, and the digital signature in the option is verified. The digital signature mechanism enables authentication of the data origin, provides a data integrity service for the covered data end to end, and enables nonrepudiation. From the credential option(s), if present in the packet, a security context(s) is built on each passing node, which is later used for authorization and policy enforcement. Credential types can vary from X.509 certificates or attribute certificates [23], to simple public key infrastructure (SPKI) certificates [13] or keynote credentials [5]. Credentials can be fetched in multiple ways if they are not included directly in the packet: from the domain name server (DNS) [11], the lightweight directory access protocol (LDAP) [35], or any other suitable store. We have designed and implemented a simple protocol that enables the fetching of credentials from the previous hop node. This protocol is used for the transmission of credentials to the active nodes, for validating active packets as they traverse the node. A node credentials cache supplies credentials on the intermediate nodes. After successful validation, the credentials are cached on the node for the time of their validity, or in line with the cache policy on cache size and the maximum time period of a cache entry. Caching credentials has other benefits: If the cache entry is valid, there is no need to validate the credentials again. In this way, we can reduce the required digital signature validation to only one per credential option in the packet, which results in a significant speed improvement after the first packet of a principal has passed the node. Additionally, we cache bad credentials in a separate cache when the credentials cannot be verified. Packets with such credentials are discarded immediately.
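The two-cache scheme can be sketched as follows; the validation step itself (path building, signature verification) is stubbed out, and all names are illustrative.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the two credential caches: validated credentials are cached until
// they expire so repeated signature validations are skipped, while credentials
// that failed validation sit in a separate "bad" cache so packets carrying them
// are dropped at minimal per-packet cost. Names are illustrative.
public class CredentialCache {
    public enum Status { VALID, BAD, UNKNOWN }

    private final Map<String, Long> good = new HashMap<>(); // id -> expiry time
    private final Map<String, Long> bad = new HashMap<>();  // id -> drop-until time

    public Status lookup(String credentialId, long now) {
        Long badUntil = bad.get(credentialId);
        if (badUntil != null && now < badUntil) return Status.BAD; // drop cheaply
        Long expiry = good.get(credentialId);
        if (expiry != null && now < expiry) return Status.VALID;   // skip re-validation
        return Status.UNKNOWN; // must fetch and validate the credential
    }

    public void cacheValid(String credentialId, long expiry) { good.put(credentialId, expiry); }
    public void cacheBad(String credentialId, long until) { bad.put(credentialId, until); }
}
```

A hit in the bad cache rejects the packet before any cryptographic work is done, which is what keeps the per-packet cost of blocking a misbehaving principal very low.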
The same mechanism could be used, with a supporting protocol for the exchange of bad credentials, to prevent access to the node, temporarily or permanently, for certain principals, at very low cost per packet. The VE and service identifiers are used by the demultiplexer to divert packets to the right service. A variable option is used by the service to store data that can change in the network (e.g., its state, collected data, and so on).

11.6.4 The Cryptographic Subsystem and Secure Store
For the cryptographic subsystem, we have used a Java-based, Sun Java cryptography extension (JCE) compliant cryptographic library: the Bouncy Castle Java crypto library (http://www.bouncycastle.org). Part of the cryptographic operations is performed inside a secure store that wraps the digital-signature-related operations and Java keystore functionality in such a way that users' private keys never leave the store. Only pVE security subsystem components have access to the stores; user stores can be managed directly by the users, but are additionally protected with a password.
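The key property of the secure store, that callers can request signatures but can never read the private key, can be illustrated with a small sketch. HMAC-SHA1 stands in here for the RSA signature primitive that the real system obtains through the JCE provider, and all names are illustrative:

```python
import hashlib
import hmac
import os

class SecureStore:
    """Sketch of a secure store: signing happens inside the store, and the
    private key material is never returned to the caller. The store itself
    is password protected, as in the text."""

    def __init__(self, password):
        self.__password = password
        self.__keys = {}  # user -> private key bytes (never exposed)

    def generate_key(self, user):
        self.__keys[user] = os.urandom(32)

    def sign(self, user, password, data):
        if password != self.__password:
            raise PermissionError("store is password protected")
        # The key is used internally only; there is no accessor for it.
        return hmac.new(self.__keys[user], data, hashlib.sha1).digest()

    def verify(self, user, password, data, signature):
        return hmac.compare_digest(self.sign(user, password, data), signature)
```

A caller can thus produce credential-option signatures for ANEP packets without ever holding the key, which is the point of wrapping the keystore this way.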
11.6.5 The Connection Manager
The connection manager is responsible for setting up secure associations (SAs) between neighbor nodes (neighbors in the sense of the virtual network topology in Figure 11.1; such a topology must be built and managed dynamically, work that is not covered by the security architecture). It exports interfaces so that the SAs can be managed by the network management system, either manually or by triggering a key exchange. Additionally, the protocol was designed to exchange the keys automatically. The protocol reuses the same mechanisms and possible credentials as discussed in Section 11.6.3, and the station-to-station protocol described in [32]. The protocol is modified to the extent that entire protocol messages are covered by the digital signature in the credential option. Protocol messages are addressed to a dedicated channel, which does not provide hop protection, but access to which is authorized. The two nodes establishing an SA, as protocol entities, must supply credentials that contain suitable authorization information with regard to the channel policy.

11.6.6 The Verification Manager
A digital signature mechanism was chosen for static verification. Static verification is used in the process of out-of-band code deployment to the node, in conjunction with the node-level ASP manager. The cryptographic hash of the code is digitally signed, and the code digital certificate is issued by the code producer, a verifier, or a trusted archive. The verification manager verifies the credentials certification path and the signature; the authorization decision about possible code deployment is made with the common authorization mechanism developed, against the local node code repository policy. Dynamic verification can be added as part of the verification manager functionality, as described in Section 11.9 for the active SNMP system.

11.7 GENERAL ACTIVE PACKET SECURITY EVENTS
In general, active packet security events can be divided into three parts: entry-level checks, evaluation-level checks, and exit-level checks.

The entry-level checks are the following: After an active packet is diverted by the demultiplexing subsystem and recognized as an active packet, it is passed to the security subsystem. Based on the SA identifier in the hop protection option, the right SA is selected, and sequence replay protection is checked. The option keyed hash is verified. If verification is successful, the resource option is checked for the maximum number of hops. After that, each credential option is parsed, and the credentials are fetched from the previous node if they are not already in the credentials cache. If the credentials are successfully validated, they are stored in the cache, and the digital signature in the option is verified. If a credential timestamp is present, it is compared to the local clock. A security context is built from the credential option; the procedure is repeated for every credential option in the packet. The VE and service identifiers in the packet are compared to those stated in the credentials. If they match, the packet, as a request, is authorized against the security context of the input channel. At least one security context must be authorized positively for the packet; otherwise the packet is dropped. If the incoming channel requires code verification, the code is verified. If the packet passes all checks, it is returned to the demultiplexer, which sends it to the service.

Evaluation-level checks are performed if the packet evaluation results in access to NodeOS interfaces, the service state, or the packet itself. In these cases, actions are authorized, and service or node policy is enforced, as described in Section 11.6.2.

Regarding exit-level checks: When the packet is sent by the service to the exit channel, the demultiplexing subsystem invokes the security send check interface; the packet resource counter is increased, the right SA for the packet's next-hop destination is selected, and the hop option is built and inserted in the packet. The packet is returned to the demultiplexing subsystem, and sent to the wire. Further exit-level checks are possible: The exit channel can have a policy set that is evaluated against the packet's security context(s), and the resource counter is divided among outgoing packets when the evaluation on the node results in multiple packets.
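The entry-level sequence can be sketched as a short pipeline. The data structures and helper names below are illustrative, not FAIN interfaces, and the credential fetch-and-validate step is reduced to a cache lookup:

```python
import hashlib
import hmac

def entry_checks(packet, sa_table, seen_seq, cred_cache, channel_policy):
    """Sketch of the entry-level checks; returns 'accept' or 'drop'."""
    # 1. Hop protection: select the SA, check replay, verify the keyed hash.
    sa_key = sa_table.get(packet["sa_id"])
    if sa_key is None or packet["seq"] <= seen_seq.get(packet["sa_id"], -1):
        return "drop"
    seen_seq[packet["sa_id"]] = packet["seq"]
    expected = hmac.new(sa_key, packet["payload"], hashlib.sha1).digest()
    if not hmac.compare_digest(expected, packet["keyed_hash"]):
        return "drop"
    # 2. Resource option: maximum number of hops.
    if packet["hops_used"] >= packet["max_hops"]:
        return "drop"
    # 3. Credential options: every option must yield a validated context
    #    (fetching from the previous node is elided; only the cache is used).
    contexts = []
    for cred_id in packet["credentials"]:
        if cred_cache.get(cred_id) != "good":
            return "drop"
        contexts.append(cred_id)
    # 4. VE/service identifiers must match the credentials, and at least one
    #    context must be authorized against the input channel policy.
    if not any(channel_policy(ctx, packet["ve"], packet["svc"]) for ctx in contexts):
        return "drop"
    return "accept"  # back to the demultiplexer, which sends it to the service
```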
11.8 SECURITY ARCHITECTURE PERFORMANCE
Our main interest in security architecture performance concerns the case when an active packet passes the node. Here we assume that the suitable VEs and services are already set up on the node, that the user can access the secure store, and that his credentials and related key pair enable him to create ANEP packets and the corresponding credential options. The originating node should already have established an SA with its neighbor node. On the nodes that the packet passes, security costs correspond to the general scenario described in Section 11.7. The testing environment was set up on a commodity PC with an Intel P4 2.2-GHz processor, 512 MB of random access memory (RAM), Red Hat Linux 8.0 with kernel 2.4.18-14, Java software development kit (SDK) 1.3.1_3, Bouncy Castle crypto-library version 1.17, and the network-node-related FAIN code. Figure 11.4 shows the security-related cost of an active packet passing the active node.
The active packet in the use case contains the basic ANEP header, the hop option, the option for the VE and service identifiers, one full credential option, the resource option, and a zero-length variable option and payload. The hop option keyed hash is a hashed message authentication code with the secure hash algorithm (HMAC-SHA1) [25], on both the receiving and sending side. The credential used in the example case is an X.509-based certificate with an RSA encryption and MD5 hash signature. V3 X.509 extensions were used to encode user attributes, and VE and service identifiers. The signature in the credential option, computed on the originating node, was RSA encryption with an SHA-1 hash. The RSA key length used was 768 bits, and the certification path length was one.

The left-hand side of Figure 11.4 depicts a case where the user credential is contained in the packet and is validated on the node, together with all other checks, as explained in Section 11.7. The hop part represents the costs of validating the hop option on the receiving side, and building a new hop option on the sending side. Encoding/decoding represents the costs of decoding a packet and encoding it with a new hop option. Other costs relate to the process of building a security context on the node, verifying user statements about VE and service IDs against those in the credential, and access control decisions regarding the security context of the input channel with a simple policy. The signature costs refer to the costs of validating a credential, and verifying the digital signature in the packet. In this case we can pass over the node at 396 packets per second.

Figure 11.4 Security-related packet costs. (Two charts break down per-packet costs into encode/decode, hop, signature, and other costs: on the left with credential validation, on the right with cached credentials; both with RSA768.)
The right-hand side of Figure 11.4 shows the packet-related security cost of exactly the same operations, where the user credential is already cached on the node. Security costs are lowered by a factor of three, and 1,190 packets per second can be passed over the node. Packet-related costs are not drastically affected by the RSA key size in the current setup: In the case of RSA512 the node can pass 1,340 packets per second, and in the case of RSA1024 it can pass 1,003 packets per second.

Per-hop costs are much lower than signature-related costs, as can be seen from Figure 11.4. As discussed in Section 11.4.2, per-hop protection also authenticates the sending node. This approach can be used for internode communication, such as routing updates. The node can handle more than 7,000 per-hop protected requests per second. The results reported for ANTS [36] and Bees [34] in a similar environment range from 6,000 to 11,000 packets per second, for simpler packet structures than in our case; but no security-related costs were calculated or reported there.

The size of the variable option has an impact on the decoding/encoding costs and the per-hop protection costs, which are proportional to the time needed to compute a hash of the packet data. In addition, the payload size increases the cost of calculating the hash used to verify the digital signature in the packet. By way of comparison, results with OpenSSL [29] show that the OpenSSL library is more than two times faster than the Bouncy Castle library for digital signature verification (1,428 versus 3,550 verifications per second). Cryptographic accelerators, such as the Broadcom-based BCM5821 [6], show additional speed improvements with very light utilization of the main CPU (8,142 signature verifications per second, at less than 1% CPU). Using native libraries or accelerators should improve the performance of the security architecture with regard to active packets to a few thousand packets per second.
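As a quick sanity check on these figures, the measured rates can be converted into per-packet costs; the numbers below simply restate the reported 396 and 1,190 packets per second:

```python
# Back-of-envelope per-packet costs implied by the measured rates
# (396 packets/s with credential validation, 1,190 packets/s with
# cached credentials; both RSA768).
validated_pps = 396
cached_pps = 1190

cost_validated_ms = 1000.0 / validated_pps  # ~2.5 ms per packet
cost_cached_ms = 1000.0 / cached_pps        # ~0.8 ms per packet

# The gap is dominated by credential (certification path) validation on a
# cache miss; caching removes it, giving the reported "factor of three".
validation_overhead_ms = cost_validated_ms - cost_cached_ms  # ~1.7 ms
speedup = cached_pps / validated_pps                         # ~3.0
```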
11.9 ARCHITECTURE APPLICABILITY
The security architecture designed and implemented in FAIN should be applicable to any active networking approach. As a means of evaluating this assumption, we have applied the security architecture to the case of active SNMP, also developed in FAIN [12]. Active SNMP is a SNAP-based [26] solution for controlling and managing active node resources. SNAP is used in this solution as a carrier and a finite state machine to program a series of active network nodes through SNMP-enabled network devices. SNAP itself provides a high level of safety, and even provides resource usage guarantees per packet. However, evaluation of a SNAP packet can result in invocations of systems that have known security problems [7], and the actions requested can be security critical. Every action should be authorized, and policy should be enforced for the action on the nodes that the packet traverses.

There are two distinct problems in integrating the SNAP activator in the FAIN framework. The first is how to protect SNAP packets while they are in transit over the network. SNAP follows a pure active networking approach, and intermixes code and data in the same packet. SNAP packets can legally change in the network during evaluation on a node; therefore, for the packet as a whole, its data origin cannot be cryptographically verified on the nodes that the packet passes. The second issue is how to integrate the SNAP daemon into the node environment, and how to use FAIN-based mechanisms for authorization and policy enforcement.

SNAP packet integrity issues were tackled in the following way: After compiling a SNAP program, the originating node produces a fingerprint of the program, extracted from the static parts of the SNAP packet data. The static parts are the SNMP commands and related data that will be invoked on the node during packet evaluation. The fingerprint is stored in the ANEP packet payload, while the entire SNAP packet is put into a variable option. The originating node builds a credential option and digitally signs the static parts of the packet, including the fingerprint in the packet payload. At the intermediate nodes, the general scenario of the packet passing the node is exactly the same as described in Section 11.7. The packet fingerprint, whose data origin is now known, is submitted to the verification subsystem together with the SNAP program itself; a fingerprint of the program is produced again and compared to the one that was verified. If they match, the packet is injected into the system; otherwise it is dropped. Verification in this way ensures that the security-critical parts of the SNAP code have not changed in the network, and that the commands and the data, together with their position and their occurrence, are protected against unauthorized modifications. The general protection of the packet designed in FAIN is achieved in the following way: SNAP packets can be protected in the network with per-hop protection, so that only trusted nodes can process the packet; and end-to-end authentication is possible through data origin authentication of the program fingerprint and other packet options, and verification of the fingerprint. Integration with the FAIN node services is in part covered by the integration of the SNAP daemon with the management framework.
In this way, the SNAP daemon is treated like any other node component with regard to security issues; for example, in its installation, initialization, and management. The SNAP daemon environment on the node was extended with a trap system that intercepts SNAP packet requests and invokes the corresponding actions on the node. Additionally, two helper components were designed and implemented that take care of resubmitting and intercepting the packets going to and coming from the SNAP daemon. These two components also take care of synchronizing SNAP packet evaluation with its security context, built from the active packet's external representation. The SNAP packet actions can be authorized in the general way, as described in Section 11.6.2: The SNAP packet security context is compared with the security context of the trap system, and the authorization decision is enforced accordingly.
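The fingerprint-based verification described above can be sketched as follows. The static-part extraction is simplified to "SNMP commands with their operands and position," the digital signature over the fingerprint is elided (it travels in the credential option), and all names are illustrative:

```python
import hashlib

def is_static(command):
    # Illustrative rule: SNMP invocations and their data are static parts;
    # mutable packet state (stack values, collected data) is not.
    return command.startswith("snmp")

def fingerprint(snap_program):
    """Hash the static parts of a SNAP program: each security-critical
    command and its operands, together with its position in the program."""
    h = hashlib.sha1()
    for pos, (command, operands) in enumerate(snap_program):
        if is_static(command):
            h.update(f"{pos}:{command}:{operands}".encode())
    return h.hexdigest()

def verify_on_node(snap_program, verified_fingerprint):
    """At an intermediate node: recompute the fingerprint from the SNAP
    program carried in the variable option and compare it to the
    (already signature-verified) fingerprint from the ANEP payload."""
    return fingerprint(snap_program) == verified_fingerprint
```

This captures the key property from the text: tampering with an SNMP command, its data, its position, or its occurrence changes the fingerprint and causes the packet to be dropped, while legal in-network changes to dynamic packet state do not.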
11.10 EVALUATION OF THE SECURITY ARCHITECTURE

The security architecture was evaluated against a number of properties: flexibility, security, reliability, performance, and scalability.

We addressed flexibility in many parts of the security architecture design, starting from the general aim that the architecture be applicable to all active networking approaches, even existing ones (as shown in Section 11.9). It is possible to support different trust management approaches through the choice of unidirectional authentication with a digital signature mechanism, public key cryptography, and the design of the credential option, which supports multiple types of credentials. Multiple credential-related options are possible in a single packet, so that the described relationships can be even broader, spanning multiple domains. On domain borders, it is possible for trusted services to replace user credentials with domain credentials, which can be used in other domains. User involvement in security can be kept minimal: It should be enough that the service, its parameters, and its business model are negotiated in advance, and that suitable credentials are issued. Given a user-initiated key pair, access to the user secure store should be enough to enable the user to access the service and use it in accordance with the negotiated contract. The architecture can be used in the transport, management, and control planes, although its usage may be limited in the transport plane because of performance issues, as we discuss later.

The decision to build the security architecture at the NodeOS layer, transparent to the services, enhances the reliability of the system. The architecture is designed and implemented only once, rather than repeatedly for different protocols on different system layers. This simplification also applies to the integration of the component model and security.
All system components are treated equally with regard to security issues for installation, initialization, termination, and run time operations. The RCF can guarantee a reserved share of the computational and communication resources, as discussed in Section 11.4.6, which helps the system to operate reliably even when node resources are scarce. Separation of VEs and services also adds to the reliability of the system as a whole: If a service in a VE is compromised, the impact should be limited to this service only, and in no way affect other services in the same or in other VEs.

The performance evaluation presented in Section 11.8 shows that initial performance should be sufficient for applications in the management and control planes. Performance is not adequate for the transport plane without improvement; for example, by tuning the developed software, using cryptographic accelerators, or adopting some form of performance trade-off.

In terms of scalability, the mechanisms selected, namely per-hop protection and end-to-end protection, scale well. Per-hop protection can handle a sufficient number of neighbor nodes and requests. End-to-end protection can be used while passing a large number of nodes, because it is unidirectional. The real problem is performance related, due to the cost of digital signature verification. The system can support different types of credentials and trust management approaches, and multiple types of policies could be used.

In terms of security, we can evaluate the architecture using the security-related high-level architecture goals. Authorized use can be enforced for important system and user resources. Separation between VEs and services is enforced by the system itself. Active packets can be protected in transit, but this feature is implemented only in the Java-based EE, so the transport plane of the PromethOS EE is not supported. Both environments, PromethOS and active SNMP, with either a commercial router such as the Hitachi GR2000 or a Linux-based router, are managed with active packets through a component-based node environment, and so are secured with the same security mechanisms as the rest of the node. Verification of the code deployed to the node in the static case is achieved by interworking with the node-level ASP. Dynamic verification cases can be supported, as shown in the case of the active SNMP system. Common treatment is achieved through the unified component model; the same mechanisms used for securing sessions and active packets; common authorization and policy enforcement based on the security context; and the common verification mechanism. The three planes cannot yet be treated equally, due to performance differences. In the example of active SNMP, we have shown that the base security services should be sufficient, and that the system can be extended with additional security mechanisms, such as SNAP program fingerprinting and verification.

Finally, the security architecture can be applied to the types of nodes developed in FAIN: a Linux router with a Java-based EE; a PromethOS node extended with FAIN basic services; and a hybrid node, that is, an active node with a commercial router (the Hitachi GR2000), together with a full FAIN-developed node and active SNMP system. All three node types can be supported by the security architecture in the management and control planes.
Transport plane support, though with low performance, is possible with the Java-based EE. Security architecture network experiments with two types of nodes, the Linux router and the hybrid node, were done in the pan-European testbed established by FAIN [18].

11.11 CONCLUSIONS

The FAIN security architecture was designed as a general and flexible system. We have shown that strong security on a flexible and heterogeneous network node is possible. Through experiments, we have shown that this security architecture can be applied to three types of nodes in the management plane, and that its performance should be sufficient for operations in the control plane. The node component model should be formalized, together with the security architecture operations. The flat security model, which assumes single system entities in interaction, should be extended to multiple entities, and the proposed model should treat them as compound principals. The security state of the node
that can be exported through the authorization engine should be kept on the node and continuously analyzed. The same mechanisms should be used for security-related protocols and the operation of the security subsystems, such as the verification manager, system integrity subsystem, connection manager, and so on.

References

[1] Alexander, D. S., et al., Active Network Encapsulation Protocol (ANEP), Active Network Group draft, July 1997, http://www.cis.upenn.edu/~switchware/ANEP/.

[2] Alexander, D. S., et al., "Security in Active Networks," Secure Internet Programming: Issues in Distributed and Mobile Object Systems, New York: Springer Verlag, 2000.

[3] Active Networks Security Working Group, Security Architecture for Active Nets, May 2001.

[4] Blake-Wilson, S., et al., Transport Layer Security (TLS) Extensions, RFC 3546, June 2003.

[5] Blaze, M., et al., The KeyNote Trust-Management System, Version 2, RFC 2704, September 1999.

[6] Broadcom-Based Cryptographic Accelerator, http://www.broadcom.com/products/5821.html.

[7] CERT, Multiple Vulnerabilities in Many Implementations of the Simple Network Management Protocol, Advisory CA-2002-03, February 2002.

[8] Case, J., et al., Simple Network Management Protocol (SNMP), IETF, RFC 1157, May 1990.

[9] Decasper, D., et al., "A Scalable, High Performance Active Network Node," IEEE Network, February 1999.

[10] Trusted Computer System Evaluation Criteria, DoD standard, December 1985.

[11] Eastlake, D., and Gudmundsson, O., Storing Certificates in the Domain Name System (DNS), RFC 2538, March 1999.

[12] Eaves, W., et al., "SNAP Based Resource Control for Active Networks," Proc. IEEE GLOBECOM 2002, November 2002.

[13] Ellison, C., et al., SPKI Certificate Theory, RFC 2693, September 1999.

[14] FAIN Project, http://www.ist-fain.org.

[15] FAIN Project Deliverable D1 - Requirements Analysis and Overall Architecture, http://www.ist-fain.org/deliverables.

[16] FAIN Project Deliverable D7 - Final Active Network Architecture and Design, http://www.ist-fain.org/deliverables.

[17] FAIN Project Deliverable D8 - Final Specification of Case Study Systems, http://www.ist-fain.org/deliverables.

[18] FAIN Project Deliverable D9 - Evaluation Results and Recommendations, http://www.ist-fain.org/deliverables.

[19] FAIN Project Deliverable D40 - FAIN Demonstrators and Scenarios, http://www.ist-fain.org/deliverables.

[20] Galis, A., et al., "A Flexible IP Active Networks Architecture," Proc. of International Workshop on Active Networks, Tokyo, October 2000; also in Active Networks, Springer Verlag, October 2000.

[21] Gabrijelcic, D., Savanovic, A., and Blazic, B. J., "Toward Security Architecture for Future Active IP Networks," in Blazic, B. J., and Klobucar, T. (eds.), Advanced Communication and Multimedia Security, Boston, MA: Kluwer Academic Publishers, 2002, pp. 183-195.

[22] Hicks, M., and Keromytis, A. D., "A Secure PLAN," Proc. of International Workshop for Active Networks (IWAN) 1999, June/July 1999, pp. 307-314.

[23] ITU-T X.509 (2000) | ISO/IEC 9594-8:2000 - Information Technology - Open Systems Interconnection - The Directory: Public-Key and Attribute Certificate Frameworks, final draft international standard, June 2000.

[24] Kent, S., and Atkinson, R., Security Architecture for the Internet Protocol, RFC 2401, November 1998.

[25] Krawczyk, H., Bellare, M., and Canetti, R., HMAC: Keyed-Hashing for Message Authentication, RFC 2104, February 1997.

[26] Moore, J. T., Hicks, M., and Nettles, S., "Practical Programmable Packets," INFOCOM 2001 Proc., April 2001.

[27] Murphy, S., et al., "Strong Security for Active Networks," Proc. IEEE OPENARCH 2001, April 2001.

[28] Necula, G. C., "Compiling with Proofs," Ph.D. thesis, School of Computer Science, Carnegie Mellon University, September 1998.

[29] OpenSSL, http://www.openssl.org.

[30] Savanovic, A., Gabrijelcic, D., and Mocilar, F., "Security Framework for Active Networks," IEEE International Conf. on Telecommunications, Bucharest, Romania, June 4-7, 2001.

[31] Savanovic, A., et al., "An Active Networks Security Architecture," Informatica, Vol. 26, No. 2, 2002, pp. 211-221.

[32] Schneier, B., Applied Cryptography: Protocols, Algorithms, and Source Code in C, New York: John Wiley and Sons, 1996.

[33] Shirey, R., Internet Security Glossary, RFC 2828, May 2000.

[34] Stack, T., Eide, E., and Lepreau, J., Bees: A Secure, Resource-Controlled, Java-Based Execution Environment, December 2002.

[35] Wahl, M., Howes, T., and Kille, S., Lightweight Directory Access Protocol, Version 3, RFC 2251, December 1997.

[36] Wetherall, D. J., "ANTS: A Toolkit for Building and Dynamically Deploying Network Protocols," OpenArch 1998, San Francisco, CA, April 1998, pp. 117-129.

[37] FAIN Project Deliverable D14 - Overview FAIN Programmable Network and Management Architecture, http://www.ist-fain.org/deliverables.
Chapter 12

Resource Control Framework

In this chapter the resource control framework (RCF) for the FAIN nodes is presented and described. The RCF, a key component of the FAIN architecture [1], is responsible for the control and management of the resources on the node. The part of the RCF that manages the resources belongs to the virtual environment management framework described in Chapter 9, while the run time control of resources is performed by other node components. The RCF is considered very important, as it supports one of the major concepts of the FAIN project: the binding of virtual environments in the FAIN ANs with the required resource capacities. The RCF supports this binding, isolates the various VEs on the same node, and dynamically supports the lifecycle of each VE by allocating, controlling, and releasing resources accordingly.

12.1 REQUIREMENTS

The main objective of the active networking of FAIN is to provide the necessary infrastructure for the dynamic deployment of new services, and to give the user the ability to customize the network with his own code. This dynamic aspect of active networks has an impact on the requirements for resource management on the active node. In comparison to a traditional network node, the active node offers the ability to dynamically inject code that implements a new service. The code is injected into an execution environment that belongs to a VE, which presents an abstraction of the node to the active application. In order to be able to support these services, the VE should have guaranteed access to the necessary resources of the node. Hence, the RCF should provide the necessary mechanisms to enforce resource sharing among the various users. Every resource allocation in the form of a VE should be isolated and independent from the other VEs. Furthermore, the dynamic creation, deletion, and reconfiguration of VEs should be supported at any time, when the status of the resources allows.
Another important rule that should be followed during control of FAIN AN resources is that admission of a new VE creation should not affect the contracts and agreements of the existing VEs. Moreover, apart from the access that the RCF provides to the allocated resources, a set of control and management mechanisms is provided to the VE owner. These mechanisms are necessary for the VE owner to be able to use and manage these resources according to his own needs. This indicates the need for an open interface that includes the above functionality. This interface should be exported for use by the VE owner or other VE user entities, such as active applications. Other important resource control requirements can be found in [5, 6]. CLARA [5] is an architectural prototype of cluster-based routing, with the node coming from the Journey project, aimed at deploying routers with processing capabilities. A CLARA node is based on distributed computing resources aggregated in a cluster. MAGICIAN [1] is an active network prototype that takes most of the concepts of the ANTS active network approach, with packets called SmartPackets. It proposes management of resources by limiting the CPU and memory access of the threads. A scheduling mechanism for the user threads is based on queues applying different kinds of priority: high, medium, and low. Memory allocation is controlled by the number of calls to the Java new instruction.

12.2 RCF DESIGN

In Figure 12.1, the high-level RCF design is depicted. For every controlled resource, a resource controller (RC) is responsible for its run time control, and a resource manager (RM) manages the partition of the resource among the VEs. Finally, for every VE, a resource controller abstraction (RCA) exists that represents part of the RC functionality to the VE client; specifically, the part of the resource that has been allocated to the VE.

Figure 12.1 RCF architecture. (The figure shows clients and the VEM accessing per-VE RCAs, with RMs and RCAs configuring the RCs that control the underlying node resources.)
In detail, the main categories of RCF components are the following:

• Resource controller: The RC is the entity responsible for the run time control of a resource inside the FAIN active node. The RC can be a component running in the kernel space of the node for a software router, or it can be a specific hardware router device. Moreover, an RC can vary from a simple scheduler (e.g., a CPU scheduler) to a more complex framework that controls a whole set of physical resources; for example, for a Linux-based AN it can be the netfilter framework or the traffic control framework. Every RC has an interface that enables its run time configuration, which includes the allocation and monitoring of the resource or resources for which it is responsible.

• Resource manager: For every RC, an RM exists in the user space. It is responsible for the configuration of the corresponding RC, in order to enforce resource partitioning among the various VEs. Moreover, RMs are responsible for the RCAs' creation, configuration, and management. Among other things, the RMs are responsible for admission control (AC) of the incoming requests for new allocations, and for realization of the allocation by configuring the corresponding RCs.

• Resource controller abstraction: For every resource capacity that is allocated to a VE, an RCA of that resource is created. The RCA represents the part of the RC that controls the resources allocated to the VE, and it is bound to a specific part of the whole resource. The RCAs export interfaces and accept requests from VE owners and/or users for resource access. Resource access includes requests for resource consumption and management. RCAs check those requests against the resource status and the requesting entities' privileges, and enforce the valid requests by configuring the corresponding RCs accordingly.
The RMs and RCAs are part of the virtual environment management framework (see Chapter 9): RMs are specific component managers, and RCAs are specific configurable components. The virtual environment manager (VEM) is the highest manager in the hierarchy and manages all the RMs; in a sense, the VEM is itself an RM that manages the high-level resource called the VE. RCs are generally not part of the virtual environment management framework. They are mainly platform-dependent entities and mechanisms whose mission is to control one resource or a set of resources. Each of them has a configuration interface, which gives the corresponding RMs and RCAs the ability to dynamically determine the way the RC controls the resource, by changing its configuration.
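To make the division of labor concrete, here is a minimal sketch of the RC/RM/RCA relationship for a single resource. The class and method names are illustrative, not the FAIN interfaces, and bandwidth is used as an example resource:

```python
class BandwidthRC:
    """Run time controller for one resource (e.g., link bandwidth).
    In a real node this could be a kernel scheduler or a hardware device."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.shares = {}  # ve_id -> share allocated to that VE

    def configure(self, ve_id, share):
        self.shares[ve_id] = share

class RCA:
    """Per-VE abstraction: accepts consumption/management requests and
    checks them against this VE's share of the resource only."""
    def __init__(self, rc, ve_id):
        self.rc, self.ve_id = rc, ve_id

    def request(self, amount):
        return amount <= self.rc.shares[self.ve_id]

class BandwidthRM:
    """User-space manager: admits requests, partitions the resource among
    the VEs by configuring the RC, and creates one RCA per allocation."""
    def __init__(self, rc):
        self.rc = rc

    def allocate(self, ve_id, amount):
        if sum(self.rc.shares.values()) + amount > self.rc.capacity:
            return None  # admission control refuses the allocation
        self.rc.configure(ve_id, amount)
        return RCA(self.rc, ve_id)
```

The point of the layering is visible in the sketch: only the RM ever repartitions the resource, and a VE holding an RCA can act only on its own share.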
Programmable Networks for IP Service Deployment
12.3 RCF MAIN FUNCTIONALITIES

In order for the RCF to handle the corresponding resources and their usage requirements, it acts in two different ways. The first is admission control of any new VE creation request. The second is control and management of the resource usage of the admitted VEs.

12.3.1 Admission Control
The creation of a new VE in the FAIN node cannot always be accepted, because of the limited amount of resources. This requires the existence of an admission control mechanism within the RCF of the FAIN active node. Admission control in the FAIN active node comprises the set of actions taken by the RCF during a VE's creation phase (or renegotiation phase) in order to determine whether a VE creation (or reconfiguration) request should be accepted or rejected. A new VE can be admitted to the node only if its resource requirements can be satisfied without violating any resource commitments already made to the existing VEs. The final decision on whether to accept a new VE creation request thus depends on the unreserved resources and the resource needs of the new VE. In parallel, the node's utilization should be increased by accepting as many VEs as possible; in other words, the RCF should not refuse to create a VE that the node status (in terms of resources) indicates can be admitted.

The admission control decision for every resource is based on a specific algorithm, not necessarily the same for every resource. The RCF at this point does not define specific algorithms for specific resources, as it aims to be a generic framework, and the admission control algorithms can vary according to the resource. It defines the mechanisms, the involved components, the interactions between them, and, above all, the admission control decision point for the FAIN AN.

12.3.1.1 Admission Control Model

The VE comprises a set of different resources for which different RMs are responsible. Therefore, admission control is not performed by a central object, but by the various involved RMs independently. Of course, the component responsible for the overall decision on whether a new VE can be admitted is the VE manager.
In order to make the final decision, the VEM contacts the involved RMs and asks them whether they can satisfy the new VE's resource requirements. The RMs perform admission control independently and, if the responses from all of them are positive, the VEM's decision will be positive as well. Otherwise, the VEM rejects the VE creation request.
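The interaction just described can be sketched as a small simulation. The class and method names below are illustrative, not FAIN's actual CORBA interfaces, and the preallocation timeout is omitted:

```python
# Illustrative sketch of the FAIN two-phase admission control decision:
# the VEM asks every resource manager to preallocate; all must agree.
# Names are hypothetical -- the real FAIN components use CORBA interfaces.

class ResourceManager:
    def __init__(self, capacity):
        self.capacity = capacity       # total resource units managed
        self.preallocated = {}         # ve_id -> units held in "standby"

    def preallocate(self, ve_id, amount):
        """Admission control for one resource: hold units without activating."""
        free = self.capacity - sum(self.preallocated.values())
        if amount > free:
            return False
        self.preallocated[ve_id] = amount   # RCA created in standby mode
        return True

    def withdraw(self, ve_id):
        self.preallocated.pop(ve_id, None)  # release; RCA erased

class VEManager:
    def __init__(self, rms):
        self.rms = rms                 # one RM per resource type

    def create_ve(self, ve_id, profile):
        """profile maps resource name -> required amount."""
        asked = []
        for name, amount in profile.items():
            rm = self.rms[name]
            if rm.preallocate(ve_id, amount):
                asked.append(rm)
            else:
                # One negative reply rejects the VE; tell the other RMs
                # to release what they have already preallocated.
                for prev in asked:
                    prev.withdraw(ve_id)
                return False
        return True   # admitted; resources stay preallocated until activation

vem = VEManager({"bandwidth": ResourceManager(100), "cpu": ResourceManager(50)})
print(vem.create_ve("ve1", {"bandwidth": 60, "cpu": 20}))   # True
print(vem.create_ve("ve2", {"bandwidth": 60, "cpu": 10}))   # False: bandwidth short
print(vem.create_ve("ve3", {"bandwidth": 40, "cpu": 10}))   # True
```

Note that a rejection leaves no residue: the RMs that had already said yes release their standby RCAs, matching the withdraw behavior described above.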
Resource Control Framework
Figure 12.2 depicts the relationships and the hierarchy between the components involved in the admission control process. The entity that requests VE creation is a client from the perspective of the RCF admission control functionality; hence, from now on it will be called a "client." According to the FAIN business model (Chapter 7), a client can be nothing other than a service provider, an SP's agent, or the network management system (Chapter 15) acting on behalf of an SP. Of course, the client's nature does not in any way affect the AC process.

Figure 12.2 Hierarchy of components involved in admission control (the client sits above the VEM, which sits above the resource managers R1M, R2M, …, RnM).
In FAIN we adopted a two-phase approach to admission control: the creation phase and the activation phase. During the creation phase, the client requests the creation of a new VE. The request includes a resource profile that describes the client's resource requirements. The VEM receives the request and breaks it down into individual resource requirements. The VEM then passes these requirements to the corresponding RMs. Every RM decides whether the required allocation can be made and, if the reply is positive, preallocates the resource and replies positively to the VEM. The preallocation includes the creation of an RCA for that part of the resource, but in a "standby" mode; no configuration of the RC for the actual allocation of the resource occurs. That means that the VE client cannot yet use the preallocated part of the resource, but neither can this part be allocated to someone else. The resources remain preallocated until an activation or a withdraw request message arrives, or until the expiration of a specific timeout period.

When the VEM has collected all the replies from the RMs, it can decide whether or not to admit the new VE. If any of the RMs replies negatively, the VEM's reply will be negative as well, and it then contacts the rest of the RMs to inform them that the new VE will not be created, so they must release the preallocated resources. In that case the RMs erase the previously created RCAs, and the preallocated resources become available again. When all the replies from the RMs are positive, the VEM replies positively as well. However, even then, the VEM does not activate the newly created VE, and the resources remain preallocated. The reason is that generally every VE is part of a virtual private active network (VPAN). During the creation of a VPAN, different VE creation requests are made to different ANs. It is possible for the request in
one or more of the nodes to be rejected, in which case the creation of the VPAN would not be feasible, and the already admitted and created VEs should be removed. Moreover, the resources preallocated on every node for the admitted VEs should be shielded from anyone else simultaneously trying to set up a VPAN. If the creation of all the VEs of a VPAN has succeeded, each created VE must be activated before it is ready for use. At the node level, the activation request arrives at the VEM. The VEM contacts all the involved RMs to activate the RCAs and configure the RCs accordingly, in order to enforce the appropriate resource allocations.

12.3.2 Resource Control
The other major responsibility of the RCF is the partitioning of resources among the various users and the real-time control of their use. Every actor wishing to make use of the node must request specific resources, which the RCF assigns to the actor's VE. In addition, the RCF is responsible for exporting interfaces to the VEs' clients, giving them access to their resources and management capabilities.

12.3.2.1 Resource Control Model

The RCF supports a multilevel, hierarchical resource sharing model, as shown in Figure 12.3. The first level of sharing is among the various VEs. The RCF exports control interfaces to every VE (one for each allocated resource) that allow the further partitioning of the resources according to their owner's wishes. For example, in Figure 12.3 the node is shared by three VEs. VE1 is divided into three parts, VE2 is not partitioned at all, and VE3 is divided into two parts, one of which is further divided at an additional (lower) level.

Figure 12.3 Multilevel hierarchical resource sharing (the FAIN AN hosts VE1 with subdivisions S11 and S12, VE2, and VE3 with subdivisions S31 and S32, the latter split into S321 and S322).
The RCF is responsible for the first level of partitioning among the VEs, in the sense that it does not define how the resources will be further partitioned. On the other hand, the RCF supplies the VEs' owners with the appropriate mechanisms to divide their resources further according to their wishes. For the first level of partitioning, namely, the partition of the overall node resources among VEs, we use the fixed allocation approach. Apart from being the simplest solution, there are three important reasons for that choice:

• First, according to the FAIN enterprise model (Chapter 7), a FAIN network is owned by a network provider. Multiple service providers request the use of the network infrastructure in order to deploy their services. The NP provides this by setting up VEs in a number of nodes, with each VE representing some of the nodes' resources. In other words, an NP commercially operates a FAIN network and sells network resources to SPs.
• Second, as an SP is charged for the resources, he expects to have guaranteed access to them. Only the fixed allocation of resources guarantees this.
• Finally, an SP is responsible for managing his VEs, so he is able to use resource management techniques that lead to high utilization. If we had chosen an overallocation technique, an SP's resource requirements could be left unsatisfied.
Fixed resource allocation is therefore considered to be the best choice, although the generic RCF architecture does not exclude other resource management approaches where particular conditions and/or requirements warrant them. Apart from allocating part of the resources, the NP provides the SP with resource control mechanisms that permit the resources to be managed efficiently. This gives the SP the ability to subdivide his resources further, using any of the various available resource sharing algorithms. Obviously, every SP aims to utilize the whole node capacity that he has been charged for.

12.3.2.2 Components and Interfaces

Figure 12.4 depicts the involved components and interfaces, and the interactions between them during the resource control and management process. The component responsible for the run-time control of every resource is the corresponding resource controller; for every controlled resource there exists a separate RC. Every RC has an interface (Figure 12.1) that allows its configuration, and the resource manager and controller abstractions have access to that interface. The RM uses that interface to partition the resource among the various VEs. For every VE allocation, the RM creates a resource controller abstraction. The RM uses the configuration interface (see Figure 12.4) of the new RCA to configure it, in order
to have access to the appropriate part of the resource. The RCA exports a control interface (interface 3 in Figure 12.4) to the VE client, in order to make the use of the resource possible. In addition, that interface allows, to some extent, the building of resource control mechanisms that determine how this part of the resource will be used, without of course having any effect outside the VE. This is ensured by the RCAs, which intercept and check every configuration request by the VE clients: All configuration requests from the client to the RCA are checked for validity, and valid requests are forwarded to the corresponding RC.
Figure 12.4 Components and interfaces involved in resource control (the RM configures the RC via interface 1 and the RCA via interface 2; the client in the VE uses the RCA's control interface 3; the RCA and the RM both drive the RC, which controls the resource).
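The intercept-check-forward pattern can be sketched as follows; the names and the simple capacity check are hypothetical stand-ins for the FAIN interfaces:

```python
# Sketch of the RCA intercept-validate-forward pattern (hypothetical names).
# The RCA owns a slice of the resource; client requests that stay within
# the slice are forwarded to the RC, all others are refused.

class ResourceController:
    """Platform-dependent mechanism; here it just records configurations."""
    def __init__(self):
        self.allocations = {}          # (ve_id, sub_id) -> units

    def configure(self, ve_id, sub_id, units):
        self.allocations[(ve_id, sub_id)] = units

class RCAbstraction:
    def __init__(self, rc, ve_id, share):
        self.rc = rc
        self.ve_id = ve_id
        self.share = share             # units allocated to this VE
        self.used = {}                 # sub_id -> units

    def request(self, sub_id, units):
        """Client-facing control interface: check, then forward to the RC."""
        committed = sum(v for k, v in self.used.items() if k != sub_id)
        if committed + units > self.share:
            return False               # would exceed the VE's slice: refuse
        self.used[sub_id] = units
        self.rc.configure(self.ve_id, sub_id, units)
        return True

rc = ResourceController()
rca = RCAbstraction(rc, "ve1", share=10)
print(rca.request("flow-a", 6))   # True
print(rca.request("flow-b", 6))   # False: 6 + 6 exceeds the share of 10
print(rca.request("flow-b", 4))   # True
```

The key point is that the RC itself never sees an invalid request: the RCA is the enforcement point for the VE's slice of the resource.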
12.4 MODEL RCF IMPLEMENTATION

Many different types of resource in an active node need to be controlled, from network resources such as link bandwidth to computational resources such as CPU cycles and memory space. From all of these we selected outbound bandwidth for the initial RCF implementation to control, because bandwidth is still considered the most valuable resource in every existing network architecture. In addition, its control could be readily implemented using the Linux traffic control (TC) framework [2]. Linux TC is a standard component of the kernel in every recent Linux distribution, and it is a very powerful and flexible framework for the control of traffic.
12.4.1 Traffic Control and Management for Linux
The general RCF framework and the virtual environment management implementation (see Chapter 9) form the basis of a traffic manager and a traffic controller abstraction implementation that controls and manages the outbound bandwidth of the FAIN ANs. These two components run in user space, and their implementation is based on the component model of the virtual environment management framework. The traffic manager derives from the component manager, and the traffic controller abstraction from the configurable component. As the platform we use Linux, whose traffic control subsystem is a very powerful and flexible tool for building the bandwidth control mechanism; it is also fully aligned with the requirements of the FAIN RCF for resource controllers. Linux TC is built on the following major conceptual components:

• Queuing disciplines: These control how packets queued on a network device (e.g., a network interface) are treated.
• Classes (within a queuing discipline): These are used in queuing disciplines to determine different treatments for different kinds of traffic.
• Filters: These are used to distinguish between the different kinds of traffic.
• Policing: This is used in policing filters so that they match traffic only up to a certain bandwidth.
In FAIN traffic control we make wide use of the first three concepts. As the queuing discipline we chose class-based queueing (CBQ) [3]. For every VE we create a different class, bounded to a specific bandwidth and isolated from the rest of the classes. Generally, CBQ allows bandwidth borrowing between different classes, but we chose to disable that feature for the VEs' classes. The VE owner can then choose between the different resource sharing models that Linux TC supports. For example, he can create multiple classes, each with a specific bandwidth, and, by the use of specific filters, assign each of them to a different flow; or, using the packet's type of service (TOS) field or differentiated services code point (DSCP) bits, he can filter his traffic so that aggregates of flows are mapped to different bandwidth classes. In any case, the basic concept is that the VE owner is free to choose whatever traffic management model he wishes for the bandwidth that he owns. The decision is up to the VE owner, but the RCF is responsible for checking the validity of every request, something that Linux TC itself does not support. In cases where the client's requests are not feasible or break some rule, the traffic controller refuses to realize them. Moreover, through synergy with the FAIN security framework (see Chapter 11), every access to Linux TC is checked against the authorization and privileges of the requesting entity.
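As an illustration of the kind of configuration the RCF drives, the following sketch generates per-VE CBQ command lines in the style of the LARTC HOWTO [2]; the device name, class handles, and rates are example values, not FAIN's actual output:

```python
# Illustrative generation of Linux TC commands for per-VE CBQ classes,
# roughly what the traffic manager issues over its command line interface.
# Syntax follows the LARTC HOWTO [2]; values here are examples only.

def cbq_setup(dev, link_mbit):
    """Root CBQ qdisc for the outbound device."""
    return (f"tc qdisc add dev {dev} root handle 1: "
            f"cbq bandwidth {link_mbit}Mbit avpkt 1000")

def ve_class(dev, link_mbit, classid, rate_mbit):
    """A bounded, isolated class: no bandwidth borrowing to or from other VEs."""
    return (f"tc class add dev {dev} parent 1: classid 1:{classid} "
            f"cbq bandwidth {link_mbit}Mbit rate {rate_mbit}Mbit "
            f"allot 1514 avpkt 1000 bounded isolated")

commands = [cbq_setup("eth0", 100),
            ve_class("eth0", 100, 10, 30),   # VE1: 30 Mbit
            ve_class("eth0", 100, 20, 20)]   # VE2: 20 Mbit
for c in commands:
    print(c)
```

The `bounded` and `isolated` keywords are what disable CBQ's borrowing for the VEs' classes, as described above; child classes and filters created by a VE owner would be attached beneath the VE's own class.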
Figure 12.5 depicts the interactions between the components that have been implemented.

Figure 12.5 Traffic control for Linux-based FAIN active nodes (in user space, the VEM requests traffic controller creation from the traffic manager via interface 3; the traffic manager configures the traffic controller, the TC abstraction, via interface 2; the client in the Java EE VE uses the open control interface 4; and the traffic manager and traffic controller configure Linux TC in kernel space via the command line interface 1, shaping the outgoing traffic).
As shown in Figure 12.5, four main interfaces exist:

• Interface 1: A command line interface, used by the traffic manager and the traffic controller to configure Linux TC.
• Interface 2: A CORBA communication interface defined by the VE management framework, used by the traffic manager to configure the traffic controller.
• Interface 3: Like interface 2, a CORBA communication interface defined by the VE management framework. It is used by the VEM to request from the traffic manager the creation of a new traffic controller or the reconfiguration of an existing one.
• Interface 4: Also a CORBA interface; this is the open control interface that the RCF provides to the VE client. It offers a set of operations that allow the VE client to create finer-grained bandwidth allocations by creating new child classes of the VE class and adding classification rules to assign traffic to those classes.
The traffic manager is the component responsible for managing the overall outbound bandwidth of the FAIN AN. First of all, it is responsible for the initial configuration of Linux TC during the bootstrapping of the node. In addition, it divides the bandwidth among the various VEs by creating an isolated and bounded CBQ class for each of them. Moreover, it creates a traffic controller
for each VE. Finally, it decides whether the bandwidth requirements of a new VE can be satisfied, and therefore whether to admit the new creation. Every traffic controller is an abstraction of Linux TC for its VE client: It is the entity that manages the TC class inside Linux TC that corresponds to the VE. In addition, the open control interface (Figure 12.5, interface 4) exposes to the VE client the part of the Linux TC functionality required to manage the VE's bandwidth. When an authorized client calls an operation of the interface, the traffic controller checks the validity of the request, checks whether the current configuration of the VE's TC class allows the request to be satisfied, and then translates the request into an appropriate sequence of TC commands. The execution of these commands configures Linux TC in a way that satisfies the client's request.

12.4.2 DiffServ Control and Management for a Gigabit Router
For the sake of the DiffServ scenario (see Chapter 17) with the use of a gigabit router (GR) such as the Hitachi GR2000 or TC-100, a DiffServ controller and its corresponding manager have been implemented. The DiffServ manager's functionality is very simple: It just initializes a DiffServ controller for every new VE that will be used as part of a virtual private DiffServ network. A description of the DiffServ controller for a gigabit router and its functionality follows.

12.4.2.1 DiffServ Controller for a Gigabit Router

The DiffServ controller component, as shown in Figure 12.6, is responsible for the dynamic configuration of a gigabit router (e.g., a GR2000 or TC-100) based on traffic conditions. To this end, the component provides two main functions: configuration and monitoring. The DiffServ controller creates a mapping of DSCP values to specific flows by configuring the gigabit router accordingly. The configuration function uses the Broadband Active Network Generation (BANG) [4] API. The monitoring function consists of the SNMP trap handler's interaction with the monitoring system of the FAIN PBNM. The SNMP traps from GRs are captured through the VEM and analyzed in the DiffServ controller. The monitoring system is then notified, based on filters, and eventually the GRs are reconfigured as required by the network conditions.

Interface for a Gigabit Router

We have implemented interface code (a wrapper) to configure GRs on top of the BANG API. This interface is used to send commands using a telnet client. Init() establishes a connection between the gigabit routers and an active proxy, and then opens a configuration file of the router, whereas closeConnect() saves and closes
the configuration file and finally terminates the connection. BindFlow2DSCP() is called by the DiffServ controller to set the DSCP value on specific flows; for example, a video stream flow. SetSNMP() configures the basic parameters (e.g., the SNMP community name) that a GR requires for SNMP to function on it. SetEvent() and setAlarm() set filters for the GRs in order to detect events.

Figure 12.6 DiffServ controller for a gigabit router (the VEM and the Java EE VE drive the DiffServ manager and DiffServ controller, which configure the gigabit router's traffic control through the wrapper over the BANG API and CLI, and receive SNMP traps from the router's SNMP agent).
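A toy sketch of the controller's binding and validity checking follows; the real component emits router configuration through the BANG API wrapper, which is not reproduced here, and the class and method names are illustrative:

```python
# Toy model of the DiffServ controller's flow-to-DSCP mapping and request
# checking. The real component drives a gigabit router via the BANG API;
# that step is only indicated by a comment below.

class DiffServController:
    def __init__(self):
        self.bindings = {}             # flow id -> DSCP value

    def bind_flow_to_dscp(self, flow, dscp):
        """Map a flow to a DSCP value; DSCP is a 6-bit field (0-63)."""
        if not 0 <= dscp <= 63:
            return False               # invalid request: refuse, do not configure
        self.bindings[flow] = dscp
        # ...here the wrapper would emit the router configuration commands
        return True

ctl = DiffServController()
print(ctl.bind_flow_to_dscp("video-stream", 46))   # True: 46 is the EF codepoint
print(ctl.bind_flow_to_dscp("bulk", 99))           # False: out of the 6-bit range
```

As with the traffic controller, the point is that validity checking happens in the controller, before any configuration reaches the router.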
12.5 CONCLUSIONS

In this chapter the RCF module of the FAIN AN was described. The RCF partitions the resources among the VEs. It is responsible for ensuring that the resource consumption of the VEs remains within the agreed contract. In addition, it performs admission control, deciding whether a new VE can be created based on the resource requirements of the new VE and the availability of the FAIN AN resources. The RCF architecture we introduced is a generic and flexible framework that can support the control and management of various resources of the FAIN AN. We implemented an RCF prototype that controls and manages outbound bandwidth using Linux TC. For that purpose, two RCF classes were implemented, namely the traffic manager and the traffic controller, as part of the VEM framework. Finally, we implemented two special RCF classes, the DiffServ manager and the DiffServ controller, which are used to dynamically configure the GR2000 gigabit router so that it acts as part of a DiffServ network. With the model implementation, the applicability of the RCF architecture's concepts was successfully proven, and the need for a flexible and
open mechanism that gives controlled access to the resources to the node's users was demonstrated.

References

[1] Denazis, S., et al., FAIN Deliverable 7 - Final Active Node Architecture and Design, May 2003.

[2] Hubert, B., Linux Advanced Routing and Traffic Control HOWTO, http://lartc.org/.

[3] Floyd, S., and Jacobson, V., "Link-Sharing and Resource Management Models for Packet Networks," IEEE/ACM Transactions on Networking, Vol. 3, No. 4, August 1995.

[4] BANG Project, http://www.fokus.gmd.de/research/cc/glone/projects/bang/entry.html.

[5] Ott, M., Welling, G., and Mathur, S., "CLARA: A Cluster-Based Active Router Architecture," Proc. of Hot Interconnects VIII, Stanford University, CA, August 2000.

[6] Wolf, T., and Turner, J. S., "Design Issues for High-Performance Active Routers," IEEE Journal on Selected Areas in Communications, Vol. 19, No. 3, March 2001, pp. 404-409.
Chapter 13

Control Execution Environments

The FAIN project placed particular emphasis upon developing an execution environment that could be used for configuring and monitoring network elements. The goal of this development was to extend system capabilities into the fields of programmable and ad hoc networking [7]; the result became known within the project as the control plane EE, or simply the control EE. The control EE can be defined as an on-line execution environment supported by a number of active extensions. The execution environment was realized with the SNAP interpreter [22]. The active extensions were realized using an extensible SNMP [16] agent, which provided many networking services using Perl implementations of a number of custom management information bases. The on-line EE was able to execute SNMP primitives, which made use of the active extensions within the SNMP agents. The FAIN project often referred to the control EE (the on-line SNAP interpreter and the extensions) as the SNAP activator.

The breakdown of this chapter is as follows: It starts with an explanation of the fundamental concepts of the active packet interceptor paradigm, followed by a detailed description of the FAIN SNAP interceptor design. The actual implementation of the design is then presented. The chapter then focuses on one of the major security issues related to the control EE, namely authentication: The authentication challenges are discussed, and the authentication engine (known as the ANEP-SNAP packet engine) is presented. Lastly, a brief introduction to the role and participation of the control EE in the FAIN DiffServ scenario is given.

13.1 INTRODUCTION

Active networking as a research field has suffered from a lack of application areas [14, 15]. With the control execution environment, the FAIN project has used active networking to realize the concept of a network management interceptor [18].
Such interceptors are already known in IP networking; for example, the Resource Reservation Protocol (RSVP) [5] relies upon a router being able to act upon a router alert [20] to bring a packet to the attention of the router's RSVP subsystem.
The control EE has introduced a degree of programmability to this: It allows simple state machines to be executed across network domains. 13.1.1
Management for Evolving and Adapting Networks
A data network has three planes. A typical relationship among them is shown in Figure 13.1. 13.1.1.1 Data Plane This plane transfers the data in packets from end point to end point. The data plane is not consistent in quality or topology. The key physical entities are the routers and switches that route, translate, and block packets as they progress from end point to end point. Agreements Agreements
Figure 13.1 Network planes (owners in the management plane exchange agreements and push states and policies down to the control planes; each control plane, spanning access, edge, and core control networks, configures part of the data plane, which carries packets between end point networks).
13.1.1.2 Control Plane

There are many control planes superposed upon the data plane. Each control plane configures and operates some of the routers and switches of the data plane. A control plane usually manages a particular type of network: access, edge, or core. Each control plane is managed by one administrative entity that can consistently apply configurations to the routers and switches in its part of the control plane. The key physical entities within the control planes are the management workstations. These are uniquely privileged to address the routers and switches of their domain.

The principal functions of the control plane are: the activation and deactivation of interfaces; the management of access rights to, and the traffic quality across, an interface; and the assignment of addresses to interfaces. The control plane is also the only means by which management daemons can be controlled. The management daemons propagate information about the interfaces a control plane controls; routing daemons and naming services are immediate examples.

A control network is shown in Figure 13.2. The set of devices in this network (the VPN access router, router, and switch) are effectively one device: the firewall with a tunnel. The daemons and key server are ancillary services that support the control network.
Figure 13.2 Control for a firewalled tunnel (a private IP control plane connects a management workstation, naming and routing daemons, and key servers through a firewall with a tunnel, an Ethernet switch, a patch panel, a bridge/router, and a VPN access router to department/workgroup servers, desktops, and laptops).
13.1.1.3 Management Plane

The management plane connects the control planes together so that they apply a policy. The most important function fulfilled by the management plane is to maintain the topology of the network. It also controls the extent of a control plane: It can change access agreements between control planes, and it manages naming by managing the hierarchies of name servers. The management plane exists only as a set of policies that are expressions of multilateral agreements between the administrative domains that manage the control planes.

13.1.1.4 Some Notes

The relationship between the planes is shown in Figure 13.1, which shows two end point networks. The one to the left has a control plane that can consistently control the access network of the end points, the edge control networks between the access networks, and a part of the core. This might be typical of a large corporation that has invested in a private wide area network. The arrangement to the right is typical of a large company that has leased access to the Internet from an ISP. The ISP controls some part of the core; the large company operates the access and edge networks that connect it to the core. The management plane is largely an enterprise concept. It consists of network owners who reach agreements with one another. The management planes set policies and receive notifications of changes of state of the underlying control networks that may require a change in policy.

13.1.1.5 Some Practicalities

Most routers and switches support a number of management information bases. These can control the instantiation of interfaces and the assignment of addresses to them. They can also be used to manage routing daemons and other network information entities. MIBs are standardized by the IETF against a uniform tree-structured naming space. Private enterprise-specific MIBs for network elements can also be specified within the naming space. A network operator can interact with MIBs using SNMP.
The controlling software agent is called an SNMP agent. For routers and switches, these are implemented in firmware and are difficult to extend, but some SNMP agents are implemented in software, and can be extended to support many MIBs using the SNMP Multiplexing protocol (SMUX). SNMP versions 1 and 2 can be used within a physically secure control network (one that is free from packet sniffers), and most SNMP agents support application-level packet filtering.
SNMP version 3 provides a secure interaction protocol suitable for interactions across control planes.

13.1.2 Extending the Control Plane

The difficulty presented by the network given in Figure 13.1 is that changing the network configuration consistently requires that the control plane be managed as one entity. It is difficult for network administrators to grant management rights to other operators, so, usually, reconfiguration is agreed out-of-band between the network administrators, and they also agree to implement a particular policy collectively. This is usually too inflexible to respond to network demands quickly enough.

The goal of IntServ is to construct a single network between two networks under different administrations. Once that is done, packets can be marked using DiffServ and coerced to a particular policy. The right to apply an IntServ-managed network configuration is granted in advance to the network administrators, and they are able to instantiate the IntServ network and destroy it on demand. IntServ addresses the problems of traffic quality management.

The control EE aims to provide a more abstract facility: one that can be used to implement mutual firewalling policies, and to construct virtual private networks. It will also be seen that the control EE provides a means to address network routers and switches at the level of device management. This is something that IntServ and RSVP do not directly address. So another goal of the design of the control EE is to provide an implementation language for IntServ.

13.1.3 Operation of the Control EE
The control EE executes active packets that are injected into a network's data plane, making use of the interceptor paradigm. Because this is discussed in more detail later, it remains only to state that the right to execute, on other people's machines, programs that reprogram their networks is not one that is granted lightly, and a network administrator granting that right would want very strong assurances that no damage will be caused by the foreign program.

13.1.4 Safety, Predictability, and Security
The control EE is implemented by means of the SNAP interpreter. This provides a safe computational environment for packets that are intercepted at routers. The main emphasis of the design, and of the first implementation of the SNAP interpreter, has been upon computational safety. Other aspects of data security have not been ignored, however: Authentication and confidentiality can be assured by constructing a channel from stubs [18], and the authentication used by the
control EE is discussed in detail later. First, the concepts of interceptors and computational safety will be developed.

13.2 ACTIVE PACKET INTERCEPTOR

Active networking is based on the premise of store, execute, and forward. The execute part of the cycle has caused some concern; the sequence of the operations is given for two interceptors in Figure 13.3.

13.2.1 Intercepting and Injecting
This is a simple data flow diagram that shows how an external entity injects a data flow, d, that is intercepted; as a result, a control signal, c, is raised in some access network, which injects a control signal, c', that is combined by an injector with the original data flow, d, to produce a composite flow, d + c'. This composite signal continues on the same path as the data signal and arrives at another interceptor that removes the control signal, c', and sends the data signal, d, on to the acceptor. The initiator and the acceptor had no knowledge that the data flow would be used to carry an additional control signal.

            Access Network                      Access Network
              ^        |                              ^
            c |        | c'                           | c'
              |        v                              |
  Initiator --d--> Interceptor --> Injector --d + c'--> Interceptor --d--> Acceptor

Figure 13.3 Interceptor data flow.
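The flow of Figure 13.3 can be mimicked by a small simulation. Every name below is illustrative; none of it is FAIN code:

```python
# Sketch of the Figure 13.3 data flow: the first interceptor raises a
# control signal toward the access network, an injector combines the
# returned control signal c' with the data flow d, and a second
# interceptor strips c' off before the acceptor sees the data.

def interceptor_in(d, access_network):
    c_prime = access_network(d)            # control signal raised for this flow
    return d, c_prime

def injector(d, c_prime):
    return {"data": d, "control": c_prime}  # composite flow d + c'

def interceptor_out(composite):
    return composite["data"], composite["control"]  # split d from c'

# The access network decides what control signal to inject for a flow.
access = lambda d: f"police({d})"

d = "video-stream"
d1, c = interceptor_in(d, access)
composite = injector(d1, c)
d2, c2 = interceptor_out(composite)

assert d2 == d                             # the acceptor sees d unchanged
assert c2 == "police(video-stream)"
```

The point of the sketch is the last assertion but one: the data flow that leaves the second interceptor is identical to the one the initiator sent, so neither endpoint needs to know a control signal was ever carried.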
13.2.2 Executing
Clearly, one would not want to allow arbitrary programs to execute within as critical a component as a network router. Active networking research has
Control Execution Environments
273
developed a number of approaches to providing a safe computational environment for active packets bearing programs:

• Sandboxes: This method of safely executing programs has been used with some success for Java applets running in Web browsers. ANTS (see Chapter 2) and other Java-based systems have used sandboxes running under Java security managers [19]. Unfortunately, security managers cannot stop programs from looping indefinitely and consuming computational resources.

• Proven code: This method usually uses a functional programming language to prove that the code may not exceed a given amount of space or execution time. The OCaml systems are an example of this technique [27] and, with them, unlike sandboxes and security managers, it is possible to prove that a program will not loop.

• Constrained languages: This is the method used by the SNAP interpreter within the control EE. The language used to implement the programs in the active packets is not Turing complete: it cannot loop or recurse. One can think of this as a sandbox; but because the program may not loop, it is also a provable language, so it combines both the sandbox and the proven-code techniques. A constrained language does impose great limitations on what can be achieved with it, but it also provides the certainty that a SNAP program will not cause irreparable damage to a system.
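Why a constrained, forward-branch-only language must terminate can be shown in a few lines: if every branch target is strictly greater than the current program counter, the counter increases on every step and execution is bounded by the program's length. The tiny instruction set below is invented for illustration and is far smaller than SNAP's:

```python
# Toy forward-branching interpreter: branch targets must lie strictly
# ahead of the current instruction, so the program counter always
# increases and execution takes at most len(program) steps.
# Opcodes here are illustrative, not SNAP's.

def run(program, stack):
    pc = 0
    steps = 0
    while pc < len(program):
        op, arg = program[pc]
        steps += 1
        if op == "push":
            stack.append(arg)
            pc += 1
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
            pc += 1
        elif op == "bne":                    # branch forward if top != 0
            target = pc + arg
            assert target > pc, "backward branches are forbidden"
            pc = target if stack.pop() != 0 else pc + 1
        elif op == "exit":
            break
    return stack, steps

prog = [("push", 1), ("push", 2), ("add", None),
        ("bne", 2),                          # skip the next instruction
        ("push", 99),                        # skipped when top was nonzero
        ("exit", None)]
stack, steps = run(prog, [])
assert stack == []                           # 3 was popped by bne
assert steps <= len(prog)                    # one visit per instruction at most
```

The final assertion is the termination guarantee in miniature: no input program, however constructed, can make this interpreter visit more instructions than the program contains.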
Before discussing the SNAP programming language in detail, one should gain a new insight into protocols: one can view the operation of a router as the interpretation of a program.

13.2.3 IP Protocols as Active Packets
Alan Turing showed, with his Universal Turing Machine,1 that there is no fundamental difference between program and data. Indeed, David Wall [24] extended this idea further to network protocols, noting that there was no expressive difference between protocol implementations that exchange protocol data, and messages carrying protocol code and acting as protocol agents.
1 Turing machines are described in many basic texts such as [6] that describe the forward-branching Turing machine that is the theoretical basis of the SNAP interpreter.
The ICMP protocol carried in an IP packet:

    | IP | ICMP | Protocol Data |

An active IP protocol in a UDP packet that implements ICMP:

    | IP | UDP | Protocol Code = ICMP | Protocol Data |

Figure 13.4 ICMP and active ICMP.
IPv4 packet headers serve as a good example. These can be thought of as programs written in a programming language specified by the IPv4 protocol standard. The programs that can be written are not particularly complex, being essentially of the form “forward me to the following destination,” perhaps modified by setting TOS or option fields, but they are programs nonetheless. When a router receives an IP packet, it can be thought of as interpreting the program contained in the header. If the packet is malformed, then this is a syntax error, or the program is otherwise incorrect (the destination address might be unknown), and it does not get forwarded. An illustration of this idea is given in Figure 13.4.

The Internet control message protocol (ICMP) must be supported at each router, and a protocol number is assigned to it. When a router receives such a packet, it can decode the payload of the packet and pass it as parameters to a program implemented on the router. Were one to implement an active ICMP, then the program to implement an ICMP operation could be carried as a code payload, and the data to be used by the program would be passed to the program once loaded.

The obvious attraction of this idea is that new protocols need not be implemented at each router. It is only necessary to provide a set of operating system calls that the program could use. Clearly, to achieve this, a new programming language to be carried in active packets must be defined, since IP is not expressive enough for our purposes. There are two important characteristics this new programming language must have:
• Termination: Since a protocol must reach a final state, protocol programs written in our new language should always terminate.

• Flexibility: The new language should provide more flexibility than the programming languages defined by existing network-layer protocol standards.
The SNAP programming language fulfills these needs for a control EE. SNAP provides somewhat stronger resource guarantees than are required for a signaling language (program execution time depends on its length), and it provides a lightweight execution framework suitable for in-band operation [22].

13.2.4 Constrained Language: Forward Branching Languages
As noted above, the key innovation of the SNAP interpreter is that it implements a constrained language: the form of the constraint is simply that a SNAP program can only branch forward (it cannot loop) and so will always terminate. It has no direct support for recursion; the form of recursion supported is to allow the program to terminate and then reload it, and this can only be performed a limited number of times, depending on the value of the resource bound. An example is shown below:

    #src 10.104.104.13
    #dst 10.104.105.12
    #rb 32

    main:
        forw
        bne finish - pc
        push 1
        getsrc
        forwto
    finish:
        demux
        exit

    #data |4|
    #data 7800
    #data 0
Forward-branching languages are so constrained that there are very few implementations; the C preprocessor and the M4 macro processor can be thought of as examples of such languages. In appearance, the SNAP language is almost identical to assembly language.
13.2.4.1 An Active ICMP Ping SNAP Program

This well-documented example performs the equivalent of an ICMP ping request and response. The lines starting with # are preprocessor directives; they state, in turn, the source and destination addresses of the packet and the resource bound. The destination attribute determines the behavior of the “forw” primitive on the first line of the program. The resource bound is a hop count for the program: every time a SNAP interpreter executes the program, the resource bound is decremented by one. The last lines of the program are also preprocessor directives, and add three values to the stack. The bottom value is “4” and will be used to carry a sequence number; the next value is 7800, the port number the result will be written to by the “demux” primitive. The topmost value is a control variable used to indicate if the program is on its outgoing or return leg.

The first line of the program proper is the label “main,” denoted by the following colon, as is typical of assembly language. The first instruction, “forw,” is unique to the SNAP programming language. Its behavior is this: if one of the IP addresses of the current host matches that specified as the destination attribute of the program, then execute the next instruction; if not, halt execution of the program and forward the program to the next interpreter.

When the program does reach its destination, it performs a branch if the topmost value on the stack is not equal to zero. On the outward leg of the program’s execution, the topmost value is zero, placed there by the data statement; the program therefore does not jump to the finish label, but rather continues. The branch operation performs a pop of the stack, and the program now pushes a new value (1). The “getsrc” and “forwto” operations effectively send the packet back to its source address: the behavior of “getsrc” is to push the source address onto the stack, so the topmost value is now that of the source address; the behavior of “forwto” is to set the destination address to the topmost value of the stack, halt program execution, and send the packet to its destination. The “forw” primitive completes when the packet arrives at its new destination address; in this case, its source address. The “bne” operation now succeeds, and the program jumps (forward) to the “finish” label. The “demux” operation is executed, the behavior of which is to write the second value on the stack to the port number given by the first value on the stack.

The net effect of this is that a packet has traversed a network from host 10.104.104.13 to 10.104.105.12, carrying a sequence number, and the packet returns with it. The controlling program that sent the packet would have written the sequence number to the packet and recorded a timestamp with the sequence number. It would then listen on port 7800. When the packet is received, it would generate a new timestamp, look up the timestamp associated with the sequence number, and
subtract the one looked up from the current one. It can then display the amount of time taken to traverse the network.

13.2.4.2 Summary

The basic idea of the SNAP language and its mode of operation should be clear from the discussion in this section. The simple ping program given is a fairly concise introduction to the SNAP language. The other primitives are given in [21].

13.3 OPERATIONAL DESIGN OF SNAP INTERPRETER

SNAP programs execute as byte code on a stack-based virtual machine (VM). The primitives of the language [21] allow a packet program to obtain information about itself and its environment, perform simple computations, make decisions, and send new packets. One important restriction in SNAP is that all branch instructions must go forward; this restriction is essential for meeting the termination requirement for local packet executions. Another important characteristic of a SNAP packet is its resource-bound (RB) field. This field is decremented for each network hop, like IPv4’s time-to-live (TTL) field. Furthermore, SNAP provides a conservation-of-resource-bound property, whereby a packet must donate some of its own RB to any child packets it sends out.

13.3.1 Instruction Classes
A SNAP program consists of a sequence of byte-code instructions, a stack, and a heap. The data types supported are integers, floating point numbers, addresses, exceptions, byte arrays, and tuples of the preceding types. Each SNAP instruction consists of an opcode and, optionally, an immediate argument. SNAP instructions fall into seven classes, listed in Table 13.1.

Table 13.1 SNAP Instruction Classes

    Instruction class     Examples
    Network control       forw forwto send demux
    Flow control          bne beq ji paj
    Stack manipulation    pint pop pull
    Environment query     getsrc getrb here ishere
    Simple computation    add addi xor eq
    Tuple manipulation    mktup nth
    Service access        calls svc
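The conservation-of-resource-bound property mentioned above can be sketched as follows; `Packet` and `send` are illustrative stand-ins, not the interpreter's actual structures:

```python
# Sketch of SNAP's resource-bound conservation: a packet must donate
# part of its own RB to every child it sends, so the total RB in the
# network never grows.  Names are illustrative.

class Packet:
    def __init__(self, rb):
        self.rb = rb

def send(parent, donated):
    """Spawn a child packet, paying for it out of the parent's RB."""
    if donated <= 0 or donated > parent.rb:
        raise ValueError("child RB must come out of the parent's RB")
    parent.rb -= donated
    return Packet(donated)

p = Packet(rb=32)
c1 = send(p, 10)
c2 = send(p, 5)
assert (p.rb, c1.rb, c2.rb) == (17, 10, 5)
assert p.rb + c1.rb + c2.rb == 32   # total RB is conserved
```

Combined with the per-hop decrement, this invariant bounds the total work a packet and all of its descendants can cause in the network.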
13.3.2 Marshaling and Execution in Place
The main concerns for SNAP’s packet format are compactness and the avoidance of any unnecessary marshaling overhead. The former is to allow large SNAP programs to be contained in smaller packets; the latter is to ensure that SNAP programs can be executed quickly enough to provide in-band control of a data stream. Previous systems have suffered from overly large program representations [2] or from high marshaling costs [17, 26]. As such, the current format is designed to allow a SNAP program to be executed in place in a packet buffer, often avoiding expensive marshaling, unmarshaling, or copying.

In-place execution is achieved with the packet format shown in Figure 13.5, which shows both the SNAP packet format and the encapsulating IPv4 header (including the router alert IP option). Parts of the IPv4 header are reused, most notably the source and destination addresses. The time-to-live field is reused to provide SNAP’s resource bound, and the protocol field is set to indicate that the IP payload contains a UDP packet.

    | IP | IP options = Router Alert | UDP | SNAP Header | SNAP Code | SNAP Data and Stack |

Figure 13.5 SNAP packet format.
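The SNAP portion of this layout can be sketched as a parser. The field widths below are assumptions for illustration, not SNAP's actual wire format: a one-byte version, one-byte flags, a two-byte source port, and four 16-bit fields for the segment lengths and the entry point:

```python
import struct

# Illustrative SNAP header layout (field widths are assumptions):
# version, flags, source port, then the four fields needed for
# execution -- code/heap/stack lengths and the entry point.
HDR = struct.Struct("!BBHHHHH")   # network byte order

def parse_snap(buf):
    ver, flags, sport, code_len, heap_len, stack_len, entry = HDR.unpack_from(buf)
    off = HDR.size
    code = buf[off:off + code_len]
    off += code_len
    heap = buf[off:off + heap_len]
    off += heap_len
    stack = buf[off:off + stack_len]
    # Structural checks before in-place execution can begin:
    assert entry < code_len, "entry point outside code segment"
    assert off + stack_len <= len(buf), "segments exceed buffer"
    return code, heap, stack, entry

pkt = HDR.pack(1, 0, 7800, 4, 2, 2, 0) + b"CODE" + b"HP" + b"ST"
code, heap, stack, entry = parse_snap(pkt)
assert (code, heap, stack, entry) == (b"CODE", b"HP", b"ST", 0)
```

Note that the parser never copies the segments into separate objects with their own representation; it only slices the received buffer, which is the essence of the in-place design described above.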
The SNAP header itself contains version and flag fields (not currently used), and a source port address similar in function to a UDP source port field. The remaining four fields in the SNAP header are the ones required for execution. The
code length, heap length, and stack length fields define the sizes of the variable-length code, heap, and stack segments that follow the SNAP header. The entry point indicates which instruction in the code segment should be the first executed (with the first instruction of the code segment being numbered zero).

13.3.3 Segments
Following the SNAP header are the code, heap, and stack segments. The packet is laid out in this order to permit execution in place, without additional copying. As a result, packet execution can begin almost immediately upon arrival, following a few structural checks; for example, that the entry point is within the code and that the various segment lengths do not exceed the buffer size. This is in contrast to systems like PLAN and ANTS, which do not directly interpret the packet’s wire format; instead, values must be unmarshaled into corresponding OCaml or Java objects before execution, and serialized again before transmission.

A SNAP program can be generally represented as follows. The code section consists of an array of uniformly sized instructions. Stack values are also uniformly sized, consisting of a tag and a data field; the tag indicates the type, and the data field contains the actual value. For values that are too large to fit on the stack (like tuples or byte arrays), the data resides in the heap and is pointed to by the stack value. This pointer is implemented as an offset, relative to the base of the heap. This allows the packet to be arbitrarily relocated in memory, at the cost of an extra calculation during interpretation, and eliminates the large fixed cost of adjusting pointers in the code and stack before execution. Heap objects each contain a header with length and type information.

13.3.4 Stack and Heap Addressing
The encoding of the packet was designed to use as little space as possible. All instructions and stack values are one 32-bit word, and heap objects have a one-word header. Stack values are divided into an n-bit tag and a (32-n)-bit data part. Integer precision is reduced as a result, and network addresses and floating point values must be allocated in the heap. This works well in some cases, as floats are used infrequently, and integers almost never require high precision in the context of simple packet programs. Unfortunately, addresses are used often, thereby resulting in extensive heap allocation even for fairly simple programs, as will be seen below. Instructions are similarly an n-bit opcode and a (32-n)-bit immediate argument, corresponding to the data part of a stack value. There are 102 distinct instructions in our final encoding; therefore, we set n to 7 bits for tags and opcodes, meaning that our integer precision is 25 bits.
13.3.5 Expanding Execution Buffers
The interpretation of most of the instructions is extremely straightforward, with the exception of “send” and the other network operations, as explained later. The interpreter is constructed as a loop around a large switch statement, with one case for each opcode. Most instructions extract arguments from the stack or heap, perform some computation, and then push the result back on the stack.

If the initial packet buffer is sufficiently large, packet execution may occur in place. That is, with the packet at the front of the buffer, the stack is allowed to grow toward the end of the buffer during execution. Heap allocation takes place within a second heap region, situated at the end of the buffer and growing toward the stack.2 In the kernel implementation developed by the University of Pennsylvania, the buffer received from the kernel is not much bigger than the packet itself. If the stack attempts to grow beyond this buffer, or if heap allocation is attempted, the packet must be copied into a maximally sized buffer. A feature of the language is that the interpreter is able to calculate the maximum amount of memory a program will require, in the absence of unsafe services.3 This on-demand approach allows us to avoid copying the packet for programs that do minimal stack manipulation or no heap allocation.4

13.3.6 The Send Primitive
Send creates a new packet containing its parent’s code, subsets of its stack and heap, and some of the parent’s resource bound. Creating this new packet presents two difficulties. First, if any heap allocation has taken place, then the parent packet has two heaps that must be consolidated into a single heap in the child packet. Second, it is preferable to include only the portions of the parent packet that will be needed by the child packet. Both issues are addressed by employing a scheme similar to that of garbage collection by copying [1]. This process ensures that only the portions of the two parent heaps that are reachable from the child’s stack will be copied into the child heap, and it adjusts any heap offsets (whether in the code, stack, or heap) to point to the correct locations in the child heap.

While this approach is general, it is both computationally and memory intensive. Fortunately, it can be made less so in certain common cases. First, if no heap allocation has been performed, the parent heap and stack subset can be copied directly to the new packet, without requiring heap offset fix-ups, since the position of heap objects will not have changed. The result is faster packet creation times but potentially larger packets, since any objects that are unreachable will not have been removed. Even better, if the stack required by the new packet consists of the entire stack of the current packet, it can be wholly reused; the current packet’s buffer will then require only modifications to its header, but only if transmission occurs before further modifications take place. Since a program terminates after executing “forw” or “forwto,” the simple routing of active packets may occur without any significant marshaling costs.

2 Although this separate heap area requires some copying to get a contiguous buffer for packet sends, an appropriate low-level scatter/gather interface would allow us to avoid this extra copy.
3 Even in the presence of unsafe services, it is quite likely this maximum size will be a good estimate, so it is possible to avoid any further copying to resize the buffer.
4 For example, the ping program described elsewhere simply executes a forw on intermediate nodes, thereby avoiding a packet copy. However, on the destination host, the getsrc instruction causes a heap allocation, requiring a packet buffer copy.

13.4 SNAP ACTIVATOR

For the FAIN project, the SNAP interpreter was extended in four ways:

• An SNMP management interface was added;
• SNMP services were added to the interpreter;
• Other utility services were provided;
• A variety of netfilter-based [4] packet interception mechanisms were added.

In addition, a number of support utilities were added to better integrate a SNAP interpreter with the network management system:

• Java native interface support for packet assembly and injection;
• A Java tuple interpreter.

13.4.1 Packet Interception Mechanisms
The original SNAP system developed at the University of Pennsylvania by Jonathan Moore and his colleagues used the same packet interception mechanism as RSVP: the router alert [20] appears as an IP packet option. The FAIN project’s implementation of the SNAP interpreter also made use of destination network address translation (DNAT). When a packet arrived, a filter based on the encapsulating UDP packet’s port number could send the packet to another port (and on to another host, if a hybrid active node was used). It was also possible for SNAP packets to be detected by another filter (again based on the UDP destination port number), and passed to the user-space packet library “libipq” using the QUEUE target.

The router alert and DNAT interception methods could operate simultaneously in the same SNAP EE. The QUEUE target method requires that the EE be loaded with a special configuration option; this would be negotiated by its VE at creation. The choice of operating mode is decided at run time: once given, it cannot be changed while that instance of the EE is running.
13.4.2 Other Services
A variety of services were added to the SNAP interpreter; these can be called by SNAP programs as they pass through the interceptor. Services are the principal means of extending the SNAP interpreter. Unfortunately, they must be linked into the executable as a library, so adding new services ordinarily requires a reload. Dynamic loading of shared objects could have been implemented, but this is not a particularly portable solution.

The control EE development makes use of SNMP agents to implement policies. So, to execute a sequence of commands that might reprogram an intermediate firewall, the code will be loaded into an SNMP agent by a SNAP program, and will be executed in parallel to the SNAP program.

An example piece of SNAP code for invoking a service is given below. It is identical in structure to the ping program shown earlier, and is in fact an implementation of the “get” operation of SNMP. The program traverses the network until it reaches its destination, and then invokes the SNAP interpreter service “snmpget,” which invokes an SNMP get in the local network. Unlike a conventional SNMP get operation, this is invoked in the local network, not across the whole network from the client network. In this way, the program acts as a mobile agent; it can apply its own logic to find a particular host, side-stepping firewall packet-filtering rules, and then invoke SNMP commands.

To step through the logic: the “forw” operation is as before, but the finish label processing is not shown. A parameter for the “snmptarget” service is pushed onto the stack. The service is invoked with the “calls” primitive; the parameter passed to this opcode is the name of the service to be invoked. The “eqi” primitive checks if the service is available, and the program jumps to the failure label if it is not. The result is popped. Then the other SNMP get parameters are pushed: the object identifier and the community. Finally, the service that invokes a real SNMP get on “localhost” is invoked. It is passed an authority string, which is not currently used. This logic is shown below:

    move:
        forw
        bne finish - pc
        push "localhost"
        calls "snmptarget"
        eqi E_35
        bne failure - pc
        pop                 ; Just pop one if you test
        push "system.sysUpTime.0"
        calls "snmpoid"
        popi 2              ; Pop two if you do not
        push "public"
        calls "snmpcommunity"
        popi 2
        ;; Pass some authority to the final invocation method
        push "authoritystring"
        calls "snmpget"
        popi 2
        svcv "snmpget"
        push 1
        getsrc
        forwto
SNAP service primitives are designed to be used asynchronously: the SNMP get implementation would run in parallel to the SNAP program. Currently, the implementation is wholly synchronous, so the result may as well be collected immediately. This is done with the “svcv” primitive, which collects the result stored in the register by the “snmpget” service. Ideally, SNAP services would be invoked asynchronously to the SNAP program, and the results collected and distributed by another packet; this would again make it more likely that the SNAP program would be able to execute inline with a data stream. The code above is only a fragment of a larger program. Most of the services added to the SNAP interpreter are used within it.

13.4.2.1 SNMP Services

The following SNMP operations have been implemented: get, set, and trap. Associated with the services are a set of variables that contain the host name of the target of the SNMP command, the community string to use, and the object identifier. For the set and trap operations, there are additional registers. An SNMP operation can also be invoked all at once, by presenting it as a universal resource indicator; this service is “snmp.”

13.4.2.2 Node Registers

A number of registers were added, one for each data type. The data type of a register is nominal.

13.4.2.3 Tuple to String Conversion

Tuples are a nested data type, similar to the nested list feature of the List Processor language (LISP). This simple service allows a tuple to be rendered as a string. In conjunction with the demux operation, a tuple can be written to an arbitrary listening UDP port on the local host.
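The tuple-to-string service can be sketched as follows. The LISP-like textual format is an assumption for illustration, not FAIN's actual encoding, and the parser here plays the role that the Java parser plays in FAIN:

```python
# Sketch of tuple-to-string conversion: nested tuples are rendered as
# a LISP-like string, which the receiving side parses back into a
# nested structure.  The textual format is an illustrative assumption.

def render(value):
    if isinstance(value, tuple):
        return "(" + " ".join(render(v) for v in value) + ")"
    return str(value)

def parse(text):
    tokens = text.replace("(", " ( ").replace(")", " ) ").split()
    def read(i):
        if tokens[i] == "(":
            items, i = [], i + 1
            while tokens[i] != ")":
                item, i = read(i)
                items.append(item)
            return tuple(items), i + 1
        tok = tokens[i]
        return (int(tok) if tok.lstrip("-").isdigit() else tok), i + 1
    value, _ = read(0)
    return value

t = (1, ("sysUpTime", 7800), (2, 3))
s = render(t)
assert s == "(1 (sysUpTime 7800) (2 3))"
assert parse(s) == t        # round-trips back to the nested structure
```

Once rendered as a flat string, the tuple can be handed to demux and delivered to any listening UDP port, which is exactly the pairing the text describes.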
There is a Java parser that can reduce the string to a nested object, which can be easily searched and processed within a Java application.

13.4.2.4 Naming Service

This service provides a means of resolving a domain name to an IP address. It allows generic domain names to be carried in SNAP packets; these can then be resolved, and packets sent to an IP address. This is particularly useful for SNMP services. It might be expedient for a SNAP program to announce, by means of an SNMP trap, that a customer of a particular company has appeared on a network with a particular IP address. The edge interceptor would inject a packet, and ask to resolve the name of the company’s admissions controller in each domain.

13.4.2.5 Epoch Service

This provides an accurate timing service for packets. It also provides other features; for example, it can be configured to reset the service heap. There is an example of its use in the SNMP get example program.

13.4.3 SNMP Interface
The SNAP daemon itself supports an SNMP interface. This has been implemented using the SMUX facility of SNMP agents; the implementation is largely copied from the Zebra routing daemon suite. The SNAP daemon MIB is included as an appendix, but a brief summary of its features is given here.

13.4.3.1 Service Variable Access

As noted in the example programs, each of the services has a service variable associated with it. It is possible with the SNMP interface to read the states of these variables. This is particularly useful when used in conjunction with the SNMP trap. To continue the example of a mobile telephone subscriber announcing his or her IP presence on a network: a SNAP packet would be injected by an edge interceptor, and this would contain authentication information to be stored in one of the host registers maintained by the SNAP daemon. As a packet passes through the daemon, it writes the authentication information to a register, and raises a trap on the vodafone-access address. This SNMP agent catches the trap and reads the register for its current value. The result of this would be that the vodafone-access information servers would be aware of which IP networks their customers are using. This facility is the one used in the DiffServ scenario discussed at great length in Chapter 17.
13.4.3.2 Heap Management

The amount of heap a SNAP protocol has access to can be limited for each epoch it starts. If a heap becomes exhausted, it is possible, using the SNMP interface, to obtain a copy of the heap and inspect it. From this, it might be possible to determine why the protocol has failed.

13.4.3.3 Operational State and Traps

Of course, it would be necessary to alert some other entity in the system that the heap is exhausted. This is achieved using a “heap exhausted” trap. The destination of the trap notification is a configurable option. The MIB has other traps that can be raised: one can be raised every time a program is executed, or every time a packet arrives with a previously unknown source or destination.

13.5 SECURITY IN THE CONTROL EE

13.5.1 Introduction
Authentication in passive networks often refers to the creation of an association between a claimed identity and a client. Authentication in active networks faces a completely different set of challenges, particularly in enforcing strict access control and authenticating active packets at intermediate active nodes. FAIN provides a generic security solution for active packet authentication at intermediate active nodes. Authentication and authorization of all requests to the node’s API, based on security policies previously set by an authorized principal, are handled by the security manager (SEC) on FAIN active nodes. The following sections present an active packet authentication facility on FAIN active nodes known as the ANEP-SNAP Packet Engine (ASPE). ASPE provides strong authentication for SNAP active packets that are generated by the control EE; for example, the active SNMP EE on FAIN active nodes.

13.5.2 Active Networks Authentication
13.5.2.1 Challenges

In passive networks, authentication is usually end-to-end based, whereby an end user would be authenticated by an end server (and vice versa). Due to the dynamic
nature of active networks, they require hop-to-hop authentication as well as end-to-end authentication. Thus active packets must be authenticated at each recipient (an intermediate active node, the active node at the destination, and so on), based on the identity of the (intermediate) nodes on which the packets were modified [13, 23]. Hop-to-hop authentication is needed because the contents of an active packet may change while it is in transit across active networks. For instance, an intermediate node may process an active packet, and then push the results of the code execution back onto the packet before forwarding the packet to its next hop. Thus, in order to avoid contaminated nodes and spoofed packets, hop-to-hop authentication (as well as node and link integrity) must be enforced in active networks.

13.5.2.2 Cryptographic Authentication

The use of cryptographic techniques for enforcing active packet authentication was proposed by the developers of the Packet Language for Active Networks [17]. Cryptographic techniques can be divided into two types: symmetric and asymmetric. Asymmetric authentication is not a feasible solution for active network authentication. In asymmetric authentication, a private key is used for signing active packets. This arrangement works fine on conventional networks, as passive packets need be signed at the source only; that is, signed once. However, as mentioned previously, active packets are dynamic. Upon receiving and processing an active packet, an intermediate node is unable to reproduce the principal’s signature, since it does not have the principal’s key. To solve this problem, we may assign a unique private key to each active node; however, under this arrangement, the new signature may overwrite the principal’s signature.
Moreover, digitally signing packets at each intermediate node, for each modification made to the packet, would have a significant performance impact on active nodes. Symmetric authentication, on the other hand, seems to be a more appropriate solution: every pair of neighboring nodes would share a preassigned private key for signing active packets. Later in this chapter, the use of symmetric techniques to protect authenticity, integrity, and confidentiality in active networks will be presented.

13.5.2.3 Secure Active Node Transfer System

In the Secure Active Node Transfer System (SANTS) [23, 25], a solution for authenticating active packets was proposed. Active packets are split into static and dynamic parts, and the split packets are then encapsulated into ANEP [3]. Static parts are digitally signed by the principal; dynamic parts are protected by hop-to-hop protection. Credential references are used to protect the end-to-end and hop-to-hop authenticity of active packets and their clients, whereas HMAC-SHA1 was
Control Execution Environments
287
enforced for protecting link integrity. However, SANTS may suffer from several drawbacks: (1) Scalability: In SANTS, a modified version of the ANEP packet format was used; as a consequence, each node must be equipped with a specialized SANTS packet handler to intercept SANTS packets; (2) Efficiency: The ANTS payload contains both static and dynamic data. As a result of the lack of a clear distinction between its static and dynamic data, separation of the ANTS payload consumes significant system resources. The latter problem of SANTS is related to the ANTS packets’ structure. Later in this chapter, we discuss the FAIN solution to the active network authentication problem by providing strong authentication to FAIN SNAP packets (that are generated by the active SNMP EE). SNAP packets in FAIN are protected while they are in transit across active networks. At the same time the scalability and efficiency drawbacks experienced in SANTS are eliminated. It should be noted that SNAP has a clear distinction between its dynamic data and static code structure. Thus the use of SNAP overcomes existing efficiency problems. SNAP packets are encapsulated according to the standard ANEP format in FAIN, to avoid the scalability problem. 13.5.3
FAIN Solution
As depicted, SNAP [21, 22] is used as the active packet language in the FAIN control EE, that is, the active SNMP EE [7]. The SNAP developers present SNAP as an efficient programming language (SNAP forwarding latencies are no more than 10% higher than IP's [21, 22]) that provides a high level of safety for active packets. Recall from the previous discussion that SNAP packet programs consist of a series of (static) byte code instructions, a stack, and a heap. The byte codes include, for instance, forw (move on if not at the destination), here (push the current node address onto the stack), and getsrc (get the source field) [21, 22]. The byte codes in SNAP are defined as static contents: once the SP creates these instructions, they are executed at each node without being subjected to any further modification. Small values such as addresses and integers are stored on the SNAP stack and heap; as the packet traverses the network, network status can be pushed on and off the stack/heap. As a consequence, the data in the stack and heap are dynamic. It should be noted, however, that as SNAP is designed to be a lightweight and simple active packet language, SNAP itself provides no facility for authentication at all. The first line of defense of SNAP is that it should not be used to exert control over a node [21, 22]. Thus, additional security protection must be enforced on SNAP packet programs; this is one of the major functionalities provided by the SEC component (Chapter 11) in order to use SNAP in a practical manner.
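The static/dynamic split can be illustrated with a toy model. This is a sketch of the idea, not the real SNAP interpreter: the class and function names are invented, and the instruction semantics are heavily simplified. The byte code tuple is fixed by the SP, while the stack collects state as the packet moves.

```python
from dataclasses import dataclass, field

@dataclass
class SnapPacket:
    code: tuple                                  # static: fixed by the SP
    stack: list = field(default_factory=list)    # dynamic
    heap: dict = field(default_factory=dict)     # dynamic

def run_at_node(pkt: SnapPacket, node: str, dest: str, src: str) -> str:
    """Execute the static byte code at one node (simplified semantics)."""
    for instr in pkt.code:          # the code itself never changes
        if instr == "forw" and node != dest:
            return "forward"        # keep moving toward the destination
        if instr == "here":
            pkt.stack.append(node)  # push the current node address
        if instr == "getsrc":
            pkt.stack.append(src)   # push the source field
    return "deliver"
```

At an intermediate node, forw immediately forwards the packet, leaving the stack untouched; at the destination, the remaining instructions run and push dynamic values onto the stack while the code stays intact.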
Despite the fact that SNAP is insecure and additional security measures must be provided, a major advantage of SNAP is that it clearly separates its static contents (the byte codes) from its dynamic contents (the stack and heap values). In this way, the inefficiency problem experienced in SANTS is avoided. Figure 13.6 shows that the active SNMP EE comprises two components: the SNAP activator and the ASPE [7, 8, 9, 10, 11, 12, 13]. The active SNMP EE generates SNAP packet programs that carry various SNMP commands in the form of byte codes (for the purpose of service control on FAIN active nodes). The ASPE provides the necessary ANEP encapsulation for the SNAP packet programs generated by the SNAP activator. The resulting SNAP-carrying ANEP packets are known as ANEP-SNAP packets.
Figure 13.6 The FAIN active node architecture. The figure shows the active SNMP EE (control EE), comprising the SNAP activator (SNAP interpreter and SNAP assembler) and the ASPE (SNAP analyzer, encapsulator/de-encapsulator, communication manager, and digester), connected through the DeMUX to the SEC, with packets leaving the communication manager toward the next hop.
The splitting of SNAP packet programs is performed by the SNAP analyzer in the ASPE. The static contents of the SNAP packet program are encapsulated into the ANEP payload field, while the entire SNAP packet program (including its dynamic data) is kept in one of the ANEP option fields. The communication manager keeps a temporary record of the security ID, which is crucial during the de-encapsulation and encapsulation process: when a checked ANEP-SNAP packet arrives from the SEC (via the DeMUX), the packet carries a unique security ID assigned by the SEC. During de-encapsulation, this ID is retained in the ASPE, since the SEC needs it later to recognize the packet. Once the extracted SNAP packet has been processed by the active SNMP EE, the communication manager attaches the corresponding security ID to the packet again, so that the SEC can determine which packet is which when the packet is returned from the ASPE (via the DeMUX).
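The ID bookkeeping can be sketched as follows. The class name, handle scheme, and trivial packet handling are hypothetical; the point is only that the SEC-assigned security ID survives the round trip through the execution environment.

```python
import itertools

class CommunicationManager:
    """Retains SEC-assigned security IDs across de-/re-encapsulation."""

    def __init__(self):
        self._pending = {}                 # handle -> security ID
        self._handles = itertools.count()  # simplistic unique handles

    def de_encapsulate(self, security_id: int, snap_packet: bytes):
        handle = next(self._handles)
        self._pending[handle] = security_id   # ID retained in the ASPE
        return handle, snap_packet            # SNAP part goes to the EE

    def re_encapsulate(self, handle: int, processed: bytes):
        # Reattach the ID so the SEC can tell which packet is which.
        return self._pending.pop(handle), processed
```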
It should be noted that FAIN uses the standard ANEP header format to avoid the scalability problem that is experienced in SANTS. Figure 13.7 shows the resultant packet format of an ANEP-SNAP packet:

  Word N     Byte 4N+0   Byte 4N+1   Byte 4N+2   Byte 4N+3
  0          Version     Flags       Type ID
  1          Header length           Packet length
  2          Option flag             Option length
  3 ... m    SNAP packet (the entire packet, carried in the option)
  m+1        Payload length
  ... n      SNAP static content (the ANEP payload)

Figure 13.7 FAIN ANEP-SNAP packet format.
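A sketch of the encapsulation implied by Figure 13.7 follows. The field widths track the figure's word layout, but the option flag/type value, the padding rule, and the payload-length word are assumptions for illustration, not FAIN's exact wire format.

```python
import struct

def encapsulate_anep_snap(snap_packet: bytes, static_content: bytes,
                          type_id: int = 1) -> bytes:
    """Illustrative ANEP-SNAP encapsulation following Figure 13.7.

    The entire SNAP packet is carried in an ANEP option; the SNAP
    static content forms the ANEP payload.
    """
    version, flags, option_flag = 1, 0, 0   # placeholder values
    # Lengths are expressed in 32-bit words, so pad the option body.
    pad = (-len(snap_packet)) % 4
    option_body = snap_packet + b"\x00" * pad
    option_len = 1 + len(option_body) // 4      # words, incl. option header
    option = struct.pack("!HH", option_flag, option_len) + option_body
    header_len = 2 + option_len                 # two fixed words + option
    payload = struct.pack("!I", len(static_content)) + static_content
    packet_len = 4 * header_len + len(payload)  # total length in bytes
    header = struct.pack("!BBHHH", version, flags, type_id,
                         header_len, packet_len)
    return header + option + payload
```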
13.6 CONTROL EE IN DIFFSERV

Section 13.5.3 described the procedures for the creation of SNAP packet programs, the ANEP encapsulation process, and the execution of ANEP-SNAP packet programs. These procedures are demonstrated in the DiffServ scenario in Chapter 17, and are briefly outlined here. Once an ANEP-SNAP packet is generated at the ASPE, it is forwarded to a local SEC (via the DeMUX) for security provisioning. The DeMUX then forwards the protected ANEP-SNAP packet to its next hop. Note that per-hop protection is enforced by the SEC in order to protect ANEP-SNAP packets from unauthorized or malicious modifications; the security manager uses a solution similar to the Authentication Header (AH) in IPsec for hop protection. When the packet arrives at its next hop (e.g., an intermediate node), the DeMUX on that node passes the packet to a local SEC for security checks. If the checks are successful, the DeMUX forwards the checked packet from the SEC to the ASPE. The ASPE then extracts the encapsulated SNAP packet from the appropriate ANEP option field, and forwards it to a local SNAP activator for execution. The encapsulation/de-encapsulation process repeats until the packet reaches its final destination.

13.7 CONCLUSION

The control EE developed within the FAIN project is a particularly useful example of what active networking can achieve. It is one of the first control EE examples. The SNAP interpreter provides an in-line execution environment that is a
realization of the interceptor paradigm. The active extensions provided by MIBs and Perl implementations running within an SNMP agent demonstrate how policies can be enforced across networks. The control EE is satisfyingly lightweight, and suitable for deployment in an embedded system. Its components are proven, safe, and open source. This chapter has argued that hop-to-hop authentication should be enforced in active networks, whereas end-to-end authentication should be enforced on the static data of active packets only. Two drawbacks of existing solutions, in terms of scalability and efficiency, were identified, and a generic security architecture that provides strong active packet authentication in FAIN active nodes was then presented. The advantages of the FAIN solution are: (1) Architectural independence: the FAIN security architecture follows a generic approach that uses the standard ANEP packet format; (2) Efficiency: SNAP has a clear structure for data and code, so SNAP packets can be split easily. The functionalities of the control EE were demonstrated in the DiffServ scenario, in which SNAP was used as the underlying active packet language in the control EE, enabling various service control activities to be performed on FAIN active nodes. Integration between the SNAP activator and the ASPE is ongoing; integrating the two components would enable a more efficient packet splitting process, in which packets are split while they are being generated rather than afterward.

References

[1] Abdullahi, S., and Ringwood, G., "Garbage Collecting the Internet: A Survey of Distributed Garbage Collection," ACM Computing Surveys, Vol. 30, No. 3, September 1998, pp. 330-373.
[2] Alexander, D., et al., "Secure Quality of Service Handling: SQoSH," IEEE Communications Magazine, Vol. 38, No. 4, April 2000, pp. 106-112.
[3] Alexander, S., "Active Network Encapsulation Protocol (ANEP)," http://www.cis.upenn.edu/~switchware/ANEP/.
[4] Bandel, D., "netfilter 2: In the POM of Your Hands," Linux Journal, Vol. 97, May 2002, pp. 64, 66-68, 70.
[5] Braden, B., et al., Resource ReSerVation Protocol (RSVP), Version 1 Functional Specification, IETF RFC2205, September 1997.
[6] Davis, M., Computability, Complexity, and Languages: Fundamentals of Theoretical Computer Science, New York: Academic Press, 1994.
[7] Eaves, W., Cheng, L., and Galis, A., "SNAP Based Resource Control for Active Networks," IEEE GLOBECOM 2002.
[8] FAIN Project Deliverable D1 - Requirements Analysis and Overall Architecture, http://www.ist-fain.org/deliverables.
[9] FAIN Project Deliverable D7 - Final Active Network Architecture and Design, http://www.ist-fain.org/deliverables.
[10] FAIN Project Deliverable D8 - Final Specification of Case Study Systems, http://www.ist-fain.org/deliverables.
[11] FAIN Project Deliverable D9 - Evaluation Results and Recommendations, http://www.ist-fain.org/deliverables.
[12] FAIN Project Deliverable D14 - Overview FAIN Programmable Network and Management Architecture, http://www.ist-fain.org/deliverables.
[13] FAIN Project Deliverable D40 - FAIN Demonstrators and Scenarios, http://www.ist-fain.org/deliverables.
[14] Galis, A., "Active Networks Management Framework," International Conference on Information Society Technologies for Broadband Europe, Bucharest, Romania, October 9-11, 2002.
[15] Galis, A., et al., "A Flexible IP Active Networks Architecture," Proc. of International Workshop on Active Networks, Tokyo, October 2000; also in Active Networks, Springer Verlag, October 2000.
[16] Harrington, D., An Architecture for Describing Simple Network Management Protocol (SNMP) Management Frameworks, IETF RFC3411, December 2002.
[17] Hicks, M., "A Secure PLAN," IWAN 1999, Vol. 1653, July 1999.
[18] "Open Distributed Processing Reference Model Architecture," Information Technology, BS ISO/IEC, Vol. 10746, No. 3:1996, February 15, 1997.
[19] Kassab, L., and Greenwald, S., "Toward Formalizing the Java Security Architecture of JDK 1.2," ESORICS 1998, Berlin, Germany, Springer Verlag, 1998.
[20] Katz, D., "IP Router Alert Option," IETF RFC2113, February 1997.
[21] Moore, J., Hicks, M., and Nettles, S., SNAP (Safe and Nimble Active Packets), http://www.cis.upenn.edu/~dsl/SNAP/.
[22] Moore, J., Hicks, M., and Nettles, S., "Practical Programmable Packets," Proc. IEEE INFOCOM 2001.
[23] Murphy, S., "Strong Security for Active Networks," IEEE OpenArch 2001.
[24] Wall, D., "Messages as Active Agents," 9th Annual ACM Symposium on Principles of Programming Languages, Albuquerque, NM, January 1982.
[25] Wetherall, D., "ANTS: A Toolkit for Building and Dynamically Deploying Network Protocols," OpenArch 1998, San Francisco, CA, April 1998, pp. 117-129.
[26] Wetherall, D., Legedza, D., and Guttag, J., "Introducing New Internet Services: Why and How," IEEE Network, Vol. 12, No. 3, May-June 1998, pp. 12-19.
[27] Remy, D., "Applied Semantics International Summer School," APPSEM, Vol. 2395, 2002, pp. 413-536.
Chapter 14

High-Performance Execution Environments

14.1 MOTIVATION

Integrating dynamic new technologies into the shared network infrastructure is a challenging task, and the growing interest in the active networking field [22] can be seen as a natural response to it. In the active and programmable networking vision, routers can perform computations on user data in transit, and users can modify the behavior of the network by supplying programs, called services, that perform these computations. These routers are called active nodes (or active routers), and offer greater flexibility in deploying new functionalities adapted to the requirements of the architecture, users, and service providers. Currently, active network designers must face two major problems: the security of deploying services inside equipment, and the performance of on-the-fly processing. This chapter focuses on existing and prospective solutions for the design of a high-performance software execution environment. Most active network researchers find that providing active services with high-level languages (Java), inside UNIX user space, is too costly, because of the latency added to packet processing: numerous active network experiments rely on the ANTS toolkit [23] (based on Java), with peak performance of less than 10 Mbit/s of raw bandwidth. So while most current backbones face gigabit challenges, most software active networks remain confined to low-bandwidth networks and local platforms. We aim here to describe the design of a software active architecture able to support current network requirements. This node must be deployable around high-performance gigabit backbones. This chapter presents the state of the art in high-performance active node research, and defines an architecture targeted at the design of a gigabit active node. This active node must be able to process and route
active streams coming from gigabit backbones. We define an active network execution environment as an environment able to load and deploy network services, and able to direct packets toward the required service via appropriate header filtering. High-performance challenges indicate that active services must be deployed at various levels depending on the required resources (processing capabilities or memory consumption) and intelligence (flexibility of the execution environment). In order to provide an adapted EE for each type of service, and to minimize packet processing by higher layers of software, we have designed an active node architecture on four levels: The network interface card (NIC), kernel space, user space, and distributed resources (see Figure 14.1). This layered architecture proposes solutions for the dynamic embedding of active services optimally deployed on suitable levels: • • • •
Ultra lightweight services in network programmable cards (packets marking, dropping, and filtering services); Lightweight services in kernel space level (packets counting, QoS, management services, intelligent dropping, and state-based services); Middle services in the user space level (packets monitoring, reliable multicast, packets aggregating, and data caching); High-level services in distributed architecture (compression and multimedia transcending on the fly).
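The four-level mapping can be made concrete with a hypothetical dispatcher; the thresholds and predicate names are invented for illustration and are not part of any of the architectures discussed here.

```python
def deployment_level(cpu_cost: int, needs_flexibility: bool,
                     needs_state: bool) -> str:
    """Map a service's needs onto one of the four node levels."""
    if cpu_cost > 100:         # heavy work, e.g. transcoding, compression
        return "distributed"
    if needs_flexibility:      # monitoring, reliable multicast, caching
        return "user"
    if needs_state:            # counting, QoS, intelligent dropping
        return "kernel"
    return "nic"               # marking, dropping, filtering
```

The intent is simply that heavier and more flexible services move up the stack, while per-packet services with minimal logic stay as close to the wire as possible.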
Our approach also considers the strategic deployment of active network functionalities around the backbone, in access layer networks, and the provision of a high-performance dedicated architecture (see Figure 14.2). Our active network model is focused on active edge equipment, located around the core network between backbone and access networks. Core networks are mainly optical, and must remain fast and simple. Access networks must operate with heterogeneous equipment and protocols, and could benefit from the deployment of dynamic network services [31]. To illustrate the quest for performance in active networks, we present the Tamanoir [8] active node software suite, based on widely used components and tools: Myrinet and Gigabit Ethernet at the NIC level, Linux netfilter [19] for kernel space support, Java at the user space level [28, 29, 30], and the Linux Virtual Server [26] for the distributed infrastructure. The Tamanoir software is deployed and executed on various local and long-distance platforms.
Figure 14.1 The execution environment of an active node architecture.
Figure 14.2 Active equipment deployed in the access layer, between the core network and access networks.
14.2 INITIATIVES IN HIGH-PERFORMANCE ACTIVE NETWORKING Since active networks were first conceived, there have been numerous research projects, but few on the topic of high-performance active networking. This section
provides an overview of the main works in the field of performance in active networks.

14.2.1 Practical Active Network: The First Step Toward High Performance
The practical active networking (PAN) project [14] was inspired by the ANTS [23] project, with an exclusive orientation toward the design of a high-performance active node to support real experiments. The project focuses on the data critical path in order to remove potential sources of overhead: memory copies, code interpretation, code loading, and kernel/user space switches. Written in C, PAN nodes reach good raw performance. Two versions of the implementation are available: one running in user space, the other in kernel space (as a module). Measurements were taken over UDP with packet sizes ranging from 128 to 1,500 bytes. PAN experiments show that memory copying for small packets is not expensive; for packets of 1,500 bytes, the native EE running in user space is five times more expensive than the kernel version. The PAN version running in kernel space is able to saturate a Fast Ethernet link with a 1,500-byte packet size and an overhead of only 13% to process each packet. This result is obtained thanks to the limitation of memory copies, and to the intensive use of native (processor-specific) code. The use of native code is a real problem for the portability, safety, and security of a service, which should be executable on any architecture of network equipment.

14.2.2 Active Network Node with Hardware Support
The active network node (ANN) project [4] aims to provide support for ATM gigabit networks. The implementation of the ANN concept is built on a hardware approach connected to an ATM network, an operating system optimized for the target architecture, and a specific EE required by this architecture to manage the active code downloaded onto a node. The ANN approach is concerned more with active stream processing than with isolated packet processing. From a hardware point of view, ANN uses modules called active network processing elements (ANPE), each composed of one processor (Pentium), memory, and an ATM Port Interconnect Controller (APIC) for the network interface. Each packet contains a reference to an active module, called a service, stored on a trusted code server: distributed code caching for active networks (DAN). Modules are dynamically linked and executed as native code on the node. ANN supports ANTS [23], which provides a facility for designing prototypes for active network experiments.

14.2.3 Simple Active Router Assistant
The simple active router assistant (SARA) [11, 12] project's objective is to provide a framework that supports the main active network functions and both IPv4 and IPv6, with performance sufficient for an industrial network equipment provider.
SARA consists of a router assistant connected to a commercial router. The SARA framework is based on a dedicated-processor-based router for routing functions, management, and fast packet forwarding between network interfaces, and a general-purpose-processor-based assistant for the deployment and execution of active applications. Executing an active application generates many interrupt requests (IRQs). To reduce their number, the EE and AAs are deployed on the router assistant, so active processing is done outside the router: It is delegated to an assistant, or a pool of assistants, directly connected to the commercial router through a fast local area network. In this active network context, router functionality is reduced to identifying and hijacking active packets toward the assistant. For that, SARA relies on the IP router alert option available in the IPv4 and IPv6 headers, which tags the packet as "must be processed." From an implementation perspective, the SARA project is written in the Java and C languages (C for socket tuning). If an active application (AA) (service) is not available on the assistant, it is downloaded from a code server. AAs are native threads able to filter and send active packets, to consult the router's status and MIB, and to use improved sockets. An AA's lifetime depends on the active packets. Communication between router and assistant uses the Java Native Interface and is encapsulated in UDP datagrams.

14.2.4 Cluster-Based Active Node
The cluster-based active node (CLARA) [18] is an architectural prototype of cluster-based routing, with the node coming from the Journey project, whose aim is to deploy routers with processing capabilities. A CLARA node is based on distributed computing resources aggregated in a cluster; one dedicated node of this cluster is used as a front end. With CLARA, data packets are called media units. A media unit can contain a group of pictures (GoP) from a Moving Picture Experts Group (MPEG) stream, and each media unit is considered independent. The employed model lets the processing of media units be determined by local conditions and availability; decisions are taken independently of the other routers, and no control messages pass between routers to process one stream. This model does not guarantee that each packet will reach its destination after being processed: It is a best-effort approach, like an IP network. To determine whether a packet has been processed, CLARA uses the IP router alert option. If processed, the option is unset, and the packet is directly routed; if unprocessed, the option is set, and the packet is loaded by CLARA. The architecture of a CLARA node is composed of one PC for routing, and several other PCs for processing, all connected through a very high-speed network. Media units are directed toward the processing PCs by a simple round-robin algorithm for load distribution. By increasing the
number of processing PCs, it seems possible to increase the global processing power of a CLARA node, unless the routing PC becomes a bottleneck first.

14.2.5 Composable Active Network Elements
The composable active network elements (CANEs) [15] project aims to provide network-based capabilities that enhance the communication service and performance seen by users of the network, with mechanisms such as reacting to congestion, transparently caching information in network nodes, and supporting multicast video distribution to heterogeneous end users. CANEs is an execution environment running on the Bowman [16] NodeOS, which implements a subset of the NodeOS interface.

14.2.6 Active Packet Edition
The active packet edition (APE) [21] project is based on a dual architecture: a software part for processing active packets, and a hardware part for efficient packet processing. The hardware part is based on a field programmable gate array (FPGA) and content addressable memory (CAM). The packet editor, called "Gorilla," performs packet classification and modification according to preestablished rules. The software part, called "Chimp," allows dynamic configuration of the packet editor through the execution of instructions contained in ANEP active packets [1], and manages the complex active processing that cannot be done by the packet editor. The objective of the APE architecture is to provide very fast reconfiguration mechanisms for functions or applications dealing with the transport layer, such as packet classification, address translation (NAT), tunnel mode, multicast, and proxying. These functions involve only simple processing of the header, and must be applied as fast as possible to sustain the link speed. Reconfiguration is done with streams of active packets, which dynamically reconfigure a table of rules in the CAM. The originality of the APE approach lies in its two levels of processing: a fast part and a slow part; the rules configuration part, for example, is not required to be particularly fast. Even though the APE project supports active applications that need more complex processing than simple packet editing, APE does not try to speed up these applications. The project provides a security mechanism to build a secure path before sending a data stream; the path is established by the first active packet, which contains a key. Experiments with a simple forwarding service over the gigabit Ethernet link of a Cisco 7500 show that a throughput of 900 Mbps can be achieved.
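APE's split between a fast rule-driven editor and a slower reconfiguration path can be sketched as a toy model. The class and method names are invented, and a Python dict stands in for the CAM; the real editor matches rules in hardware at link speed.

```python
class PacketEditor:
    """Fast path applies CAM-like rules; slow path reconfigures them."""

    def __init__(self):
        self._rules = {}                   # match key -> action

    def reconfigure(self, match: str, action):
        # Slow path: driven by instructions carried in active packets.
        self._rules[match] = action

    def edit(self, packet: dict) -> dict:
        # Fast path: one rule lookup per packet.
        action = self._rules.get(packet["dst"])
        return action(packet) if action else packet
```

The design point is that every packet pays only for the cheap lookup, while the comparatively rare reconfiguration can afford to be slow.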
14.2.7 Protocol Boosters: Programmable Protocol Processing Pipeline
The programmable protocol processing pipeline (P4) [5, 10] project aims to increase protocol performance by dynamically removing unproductive processing. Protocols are usually designed for the worst case; a protocol booster reduces a protocol's complexity by optimizing the communication protocols dynamically. The project uses a hardware approach based on FPGAs to reach the required flexibility. Experiments with a protocol booster prototype, which dynamically adds or removes forward error correction (FEC), have been carried out over a 155-Mbps ATM link by accelerating the TCP/IP protocol.

14.2.8 Kernel Services
Various projects have studied the deployment of active services in kernel space. The first architecture of an active node able to embed active services directly in the Solaris operating system kernel is described in [20]. References to the functions applied to packets are embedded inside IP options; each reference must be unique for a given function. Each function can be parameterized on the node, or its parameters can be given in the header of each packet. Each node allows access to its status, and holds a table of pointers to its available functions. When a packet arrives, the node calls the processing function to apply to the packet. The objective of this project was to attain good performance for service execution. PromethOS [27] extends the standard netfilter architecture of Linux by adding, at run time, programmability and extensibility to the Linux kernel for previously unknown components. The whole framework is inherently portable, strictly adhering to the netfilter interfaces. The performance of the PromethOS framework is comparable to the standard Linux networking environment; only one additional hash table look-up has been introduced to schedule the PromethOS plug-ins.

14.2.9 AMP
The AMP [3] project concerns the development of a new software architecture that allows high-performance active code to be executed securely and safely. The AMP system provides a fast and lightweight execution environment for active network nodes: By enforcing resource usage limitations, active code cannot interfere with the rest of the active node. AMP takes advantage of the exokernel project, which delegates the management of physical resources to application-level software. Experiments have been done on Fast Ethernet networks through a Netgear FS516 switch with Pentium-based AMP nodes (PIII/750 MHz). The user space active forwarding service for ANEP UDP packets sustains 6,900 packets per second in the Java version and 17,600 packets per second in the C version.
14.2.10 Magician: Resource Management and Allocation

The Magician [25] active network prototype takes most of its concepts from the ANTS active network approach, with packets called SmartPackets. It is based on the Java 1.1 serialization mechanism, and provides a demultiplexing mechanism for ANEP packets to support different EEs running on the same node. It proposes the management of resources by limiting the CPU and memory access of threads. A scheduling mechanism for user threads is based on queues with different priorities: high, medium, and low. Magician also provides thread classification depending on the activities of a node. Memory allocation is controlled through the number of calls to the Java new instruction.

14.2.11 AMnet: Flexinet Project

The AMnet [7] project provides a programmable platform and applications for multicast communications with heterogeneous receivers, security enforcement, and mobility support. AMnet uses service modules available in managed service module repositories (SMR); for example, authentication, authorization, and accounting modules. The service module code is distributed in different formats to support the various hardware architectures of the active equipment. The SMR keeps updated resource requirements (hardware, memory, throughput, access rights) for the modules, which can be interconnected to provide complex services. In addition, AMnet defines two kinds of service: application services used by client applications, and infrastructure services used by the network administrator. Performance evaluation has been made over three different services [6]: multicast of MP3, HTTP compression, and video transcoding for personal digital assistants. The multicast service achieves a throughput of 68 Mbps between the transmitter and a receiver across an active node connected through a 100-Mbps (Ethernet) network.
The HTTP compression service optimizes the transmission of large text files, and fits perfectly for low-throughput networks, such as current wireless networks. The speed improvement is about 25%, and reaches 40% for very large files. Limited performance means that the AMnet node is only able to support one video adaptation service for a PDA (dealing with only one stream).

14.2.12 Safe and Nimble Active Packets

The SNAP [17] project proposes a new kind of active packet based on a low-level language. The objective is to provide more flexibility to the IP protocol without compromising its safety, security, and efficiency. Experiments show that SNAP is able to reach performance close to that of a software IP router.
14.2.13 TAGS: Optimizing Active Packet Format

TAGS [24] is a joint project focusing on the packet demultiplexing bottleneck. In active networking equipment, each packet must be demultiplexed not only to the network layer, but also to the application-level execution environment. To speed up this demultiplexing stage, TAGS implements a new active packet format called the simple active packet format (SAPF). Measurements show that SAPF packets can be processed 30% faster than regular IP packets that use the traditional ANEP header. Since the mid-1990s, this research area has been explored widely. The work surveyed above highlights real bottlenecks in security, performance, and heterogeneity support, and the proposed solutions are heterogeneous and difficult to compare. In the next section, we propose and define a generic architecture for high-performance networks and nodes, which takes advantage of these first initiatives in the area. This architecture is designed to support the performance requirements of current and future networks.

14.3 TOWARD AN ARCHITECTURE OF HIGH-PERFORMANCE ACTIVE NETWORKS AND NODES

This section proposes an architecture for a high-performance active network, and a detailed description of a high-performance active node model. We give the essential features and functionalities that an active architecture must support to be sustainable and to give good performance.

14.3.1 Proposing an Architecture for a High-Performance Active Network
14.3.1.1 Entities Involved

In our model, an active network is composed of the following three elements: active equipment, a service repository, and a client.

Active equipment, or active nodes, must be installable in a standard data network such as the Internet. They should replace or assist network equipment such as routers and gateways. In this active equipment, active services are deployed and dynamically loaded. Active nodes should be able to communicate with one another, and should support discovery mechanisms, as existing routers do (e.g., BGP), to optimize the transport of active streams from one node to another.

An active service, also referred to as an active application, is a piece of code running in the active node EE and applied to packet streams.
An active stream refers to a data stream whose packets are prefixed with a specific header. This header identifies the service that should be applied to the transported data. Services can be mono node, sequential, multiblock, or parallel:

• A mono node service is applied only once, by only one active node.
• A sequential service can be applied every time the active stream passes through an active node.
• A multiblock or parallel service can be composed of different subservices. Each subservice can be applied on one active node (multiblock or sequential) or on different active nodes (parallel).
A service repository is a dedicated infrastructure connected to the network that provides a storage space containing the code of active services for active equipment. A piece of active equipment addresses its download requests to this repository. Security solutions and trust domains can be associated with the repository in order to secure service deployment.

Clients of the active network are the end hosts that run user applications. They can be heterogeneous (e.g., PDA, cellular phone, workstation) and connected to equipment over heterogeneous links [e.g., copper wire (Ethernet), wireless (WiFi, Bluetooth), optical fiber (gigabit Ethernet)].

14.3.1.2 Active and Passive Streams

A high-performance active network architecture must quickly differentiate an active stream from a passive one, using fast mechanisms supported by current routers and gateways. This distinction mechanism enables the filtering of active streams and their redirection toward deployed active equipment. For passive streams, the performance impact of these filtering actions should be limited.

14.3.1.3 Transport Protocols

The most common protocols (such as TCP, UDP, ICMP, RTP, and RTCP) must be supported by active equipment. While UDP and RTP are used for multimedia or real-time applications, support for TCP streams is required by applications that need reliable communication, such as file transfer, Web traffic, and grid applications. Active streams should be transported over existing common protocols like UDP or TCP, to allow the fast deployment of an active node in the current infrastructure. It is easy to transport active data over a connectionless protocol such as UDP, and most existing projects make this choice. For a connection-oriented protocol like TCP, there are two possibilities. Both can be integrated into the current network, but one of them breaks the end-to-end model of TCP, as discussed later.
14.3.1.4 Proxy or End-to-End Mode

Active equipment can act as a high-level application gateway or as low-level intelligent routing equipment. The proposed architecture must provide two kinds of integration into existing networks:

• Proxy mode: Active equipment is explicitly requested by applications (like a Web browser and a Web proxy). Active services can be associated with dedicated ports of the active equipment. IP tunnels are built between the active network elements (active nodes and clients).
• E2E mode: Active equipment is hidden inside the network. This equipment catches packets on the fly and applies active services. This approach can be considered best effort: if active equipment is available on the data path, services are applied.
14.3.1.5 Dynamic Services Deployment

Our high-performance architecture provides a dynamic deployment mechanism to support the installation of new services on the active stream path, so a service is deployed on demand by an active node that the active stream transits. Three solutions must be supported:

• Local deployment: The EE checks its local cache to detect whether the required service is locally available. If so, the service is dynamically loaded in memory and applied to the stream.
• From a service repository: The first active node crossed by an active stream coming from an application is allowed to download services from a certified service repository. Service distribution from a client host is forbidden for security reasons.
• From node to node: Because services are deployed along the data path, and to avoid a bottleneck at the service repository, services must also be deployable from node to node.

14.3.2 Proposing an Architecture for a High-Performance Active Node
We define an active network execution environment as an environment able to load and deploy a service on an active node. It must also be able to direct packets toward the required service through appropriate header filtering. Ideally, an EE directs packets to the service as transparently as possible, without adding overhead.
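This header-filtering and dispatch step can be sketched as follows. The sketch is illustrative only: the 4-byte service key and all class names are assumptions made for the example, not a real active packet layout.

```java
import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.Map;
import java.util.function.UnaryOperator;

// Minimal model of an EE dispatching packets to services by a header key.
class Demultiplexer {
    private final Map<Integer, UnaryOperator<byte[]>> services = new HashMap<>();

    // Register a service under its key.
    void plug(int key, UnaryOperator<byte[]> service) { services.put(key, service); }

    // Assumed packet layout for the example: 4-byte service key, then payload.
    byte[] process(byte[] packet) {
        ByteBuffer buf = ByteBuffer.wrap(packet);
        int key = buf.getInt();                    // service key read from the header
        byte[] payload = new byte[buf.remaining()];
        buf.get(payload);
        UnaryOperator<byte[]> svc = services.get(key);
        return svc == null ? payload : svc.apply(payload); // unknown key: forward unchanged
    }
}
```

A transparent EE would spend as little time as possible in `process` before handing the payload to the service.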
14.3.2.1 Language of the Services: Dynamic, Efficient, and Portable

The choice of a language for service design depends on the following criteria:

• It must be executable on every hardware architecture running the EE. The distributed aspect of active networks imposes the use of a portable language. While a first approach could be an interpreted language, we cannot use this solution for performance reasons.
• The language must support the efficient execution of active services, to limit the impact of latency on active packets.
• It must provide a mechanism for dynamically loading new functions in memory while the application is running. The EE must be able to discover a service when an active stream requires it, and be able to deploy it dynamically.
• It must provide a strong exception-handling mechanism and type-safe data, which forbids arbitrary casts.
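In Java, the language Tamanoir ultimately adopts (Section 14.4.2), the dynamic-loading criterion is met by run-time class loading. The sketch below is illustrative only; a real EE would use a dedicated ClassLoader to load service bytecode fetched over the network, rather than a class already on the classpath.

```java
// Illustrative sketch: resolving and instantiating a class by name at run time,
// while the application keeps running.
class DynamicLoader {
    static Object instantiate(String className) {
        try {
            Class<?> cls = Class.forName(className);           // resolve at run time
            return cls.getDeclaredConstructor().newInstance(); // type-safe instantiation
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("cannot load service " + className, e);
        }
    }
}
```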
14.3.2.2 Multiservices Architecture

An active node should be able to process many different streams at the same time. As streams run through the network in parallel, services must be executed in parallel, through multithreaded and parallel solutions. An active node is not specialized or reserved for a specific service: the EE must be able to embed both generic and dedicated services. In order to provide an EE adapted to each kind of service, and to limit packet overhead, we design an active node architecture on four levels: the network interface card (NIC), kernel space, user space, and distributed resources (Figure 14.1).

Network Interface Card Level

Programmable network interface cards like Myrinet [2] embed CPUs, RAM, and direct memory access (DMA) engines. On these cards, software network protocols can be executed to optimize communications between host and network. We take advantage of this flexibility to deploy low-level network services on these NICs. Running services directly on programmable network interface cards allows us to run services as close as possible to the wire; this is conceptually not far from a network processor. The classes of service must be restricted to ultralightweight ones (packet marking, packet dropping, packet counting), so as not to impact the processing time per packet or the NIC memory space allocation.

14.3.2.3 Kernel Space Level

In kernel space, the OS runs time-sensitive operations: the scheduler, protocol stacks, and drivers. At this level, an active node can deploy efficient lightweight
services that require memory and processing capabilities from the host. This deployment is especially useful when NICs are not programmable. The kernel space level is perfectly suited to lightweight services such as QoS services or intelligent packet dropping. Furthermore, services running at the kernel space level can benefit from the kernel's routing functionalities, and can also use zero-copy or OS-bypass techniques to communicate with user space. This approach requires an open kernel and easy access to the network protocol stack. Running a service in kernel space allows very fast execution, and takes advantage of the host system's resources (fast CPU, system memory). Clearly, this approach requires an open operating system [like Linux or Berkeley Software Design (BSD)]. This system must provide tools to direct active packets to kernel lightweight services (like netfilter in Linux). Kernel space is a critical part of the active equipment and services, and must be restricted to code that is time sensitive in terms of processing time per packet.

14.3.2.4 User Space Level

The user space level can provide all the safety, flexibility, and ease of running a full-featured execution environment. Services executed at this level can access all system resources (memory, disk, dedicated hardware), and can be written in high-level languages (like Java). However, the overhead introduced by processing packets at the user space level, and the cost of copying data from kernel space to user space, must be taken into account in order to reduce the impact on raw performance. The EE must execute as fast as possible and therefore be efficiently implemented: it should not be interpreted during execution, and must be either compiled or use just-in-time compilation techniques.

14.3.2.5 Distributed Resources Level

Active streams requiring heavy processing functions, like compression, cryptography, or conversion on the fly, require heavy processing capabilities in the active node.
These services must be supported by a parallel architecture. We explore the design of a parallel active node depending on execution environment requirements and the available architecture:

• Shared memory approach: The first approach consists of distributing services on various processing units [Figure 14.3(a)]. These services can be executed in parallel and can access packets through a shared memory (on an SMP architecture), or a distributed shared memory (on a cluster of machines [13]). Packets reaching the active node are placed in queues located in shared memory. One queue is associated with each available service, and services are the consumers of these queues. This approach allows the easy migration of services between processing units for the purpose of load balancing.
• Message-passing approach: This approach is dedicated to distributed computing resources (clusters of machines) communicating through messages. As in the shared memory approach, services are distributed on various machines. The execution environment is mapped on a dedicated node. In Figure 14.3(b), the EE receives packets and directs them toward the node holding the required service. Message-passing libraries, such as the parallel virtual machine (PVM) and the message passing interface (MPI), could be used. The node then processes the packet with a suitable service before its retransmission.
• Replicated EEs approach: This pragmatic approach consists of replicating the EE and services on distributed resources [Figure 14.3(c)]. We call these nodes back ends. A front-end machine is added to the architecture to feed active packets to the replicated EEs. This front end is dedicated to the distribution of streams to the back ends. This approach requires less modification to execution environments and services, and supports scalable deployment.
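The front-end role in the replicated-EEs approach can be sketched as follows (all names are hypothetical): each new stream is pinned to a back end chosen in plain round-robin order, so that every packet of a stream reaches the same replica.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of a front end distributing streams over replicated EEs.
class FrontEnd {
    private final List<String> backEnds;                          // addresses of the replicas
    private final Map<String, String> streamTable = new HashMap<>(); // streamId -> back end
    private int next = 0;

    FrontEnd(List<String> backEnds) { this.backEnds = backEnds; }

    // Returns the back end that must process this stream; a stream keeps
    // its back end so that stateful services see all of its packets.
    String dispatch(String streamId) {
        return streamTable.computeIfAbsent(streamId, id -> {
            String chosen = backEnds.get(next);
            next = (next + 1) % backEnds.size(); // plain round-robin policy
            return chosen;
        });
    }
}
```

A weighted round-robin or least-connection policy would only change the selection line inside the lambda.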
Figure 14.3 Approaches to design a parallel active node architecture: (a) shared memory, (b) message passing, and (c) replicated EEs.
These three approaches (shared memory, message passing, and replicated EEs) provide parallelism in stream processing (Figure 14.3). Another advantage of using a distributed architecture is the fault tolerance of the active node: a back-end node can be stopped for maintenance without stopping the whole active node and its services. It is also possible to upgrade the performance of an active node by adding more back ends. In Figure 14.3, in order to avoid a single point of failure on the EE (b) or on the front end (c), these components can be replicated on various nodes. All distributed solutions must also take into account the load balancing of heterogeneous resource-consuming
services, and support distribution policies [from the round-robin (RR) algorithm to more complex solutions such as weighted RR and least-connection RR].

14.4 TAMANOIR: A PRACTICAL FRAMEWORK FOR HIGH-PERFORMANCE ACTIVE NETWORKING

The Tamanoir [8, 9] project aims to design a high-performance execution environment that validates the architecture described in Section 14.3.2. Development of the high-performance EE in Tamanoir was achieved in several steps: the design of an EE running in user space, the investigation of kernel space active modules, and, finally, a distributed computing approach.

14.4.1 High-Level Multithreaded Execution Environment
The Tamanoir suite is a complete software environment dedicated to deploying active routers and services inside the network. Tamanoir active nodes (TAN) provide persistent active routers able to handle different applications and various data streams (audio, video, grid, HTTP, and so forth) concurrently (the multithreaded approach). Both main transport protocols, TCP and UDP, are supported by a TAN. Data is encapsulated inside ANEP [1] packets or raw IP packets (Figure 14.4). The execution environment relies on a demultiplexer that receives active packets and redirects them toward the appropriate service, by reference to a hash key contained in the packet headers. New services are plugged into the TAN dynamically. The active node manager (ANM) is dedicated to the deployment of active services and to updating routing tables. The injection of new functionalities (called services) is independent of the data stream: services are deployed on demand when streams reach an active node that does not hold the required service. Two service deployment methods are available: the first uses a service repository (Figure 14.5), to which TANs send all requests for downloading required services; in the second, a service is deployed from TAN to TAN (the TAN queries the active node that sent the stream for the service). To avoid a single point of failure, a service repository can be mirrored and replicated. Once the service is available on a node, it is ready to process the stream.
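The deployment logic just described (local availability first, then the certified repository, or the upstream TAN) can be sketched like this. Class and method names are illustrative, not the actual Tamanoir API; plain maps stand in for the remote repository and the upstream node.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the service lookup cascade of an active node.
class ServiceResolver {
    private final Map<String, byte[]> localCache = new HashMap<>();
    private final Map<String, byte[]> repository;   // stands in for the service repository
    private final Map<String, byte[]> upstreamNode; // stands in for the previous TAN

    ServiceResolver(Map<String, byte[]> repository, Map<String, byte[]> upstreamNode) {
        this.repository = repository;
        this.upstreamNode = upstreamNode;
    }

    // Returns where the service code was found, caching it for later streams.
    String resolve(String serviceName) {
        if (localCache.containsKey(serviceName)) return "local";
        byte[] code = repository.get(serviceName);              // certified repository
        if (code == null) code = upstreamNode.get(serviceName); // node-to-node fallback
        if (code == null) return "unavailable";
        localCache.put(serviceName, code); // ready for the next stream
        return repository.containsKey(serviceName) ? "repository" : "node";
    }
}
```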
Figure 14.4 Functional view of a Tamanoir active node.
14.4.2 User Space and Implementation Issues
The Tamanoir execution environment running in user space is written in Java, which provides great flexibility and ships with a rich standard library. User space active services are also written in Java and inherit from a generic class called Service, which itself extends the Java Thread class; each service is thus executed in an independent thread. For a given service with TCP active streams, a service thread is dedicated to each stream, while with UDP a single dedicated thread processes all the streams. A given service can be applied to TCP or UDP active streams without change.
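A minimal sketch of this class hierarchy, with hypothetical names and a simplified packet-processing hook: the generic Service extends Thread, and a concrete service only supplies its per-packet behavior.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of the Service/Thread hierarchy; not the real Tamanoir classes.
abstract class Service extends Thread {
    private final BlockingQueue<byte[]> in = new LinkedBlockingQueue<>();
    final BlockingQueue<byte[]> out = new LinkedBlockingQueue<>();

    void accept(byte[] packet) { in.add(packet); }

    // Per-service packet processing, supplied by the concrete service.
    abstract byte[] process(byte[] packet);

    @Override public void run() {
        try {
            while (true) out.add(process(in.take())); // one packet at a time
        } catch (InterruptedException e) {
            // node shutdown: let the thread terminate
        }
    }
}

// A trivial monitoring-style service counting the packets it forwards.
class CountingService extends Service {
    int count = 0;
    @Override byte[] process(byte[] packet) { count++; return packet; }
}
```

Because each service is its own thread, several streams and services run concurrently, matching the multithreaded TAN design.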
Figure 14.5 Active service deployment: from a code repository or from a Tamanoir active node.
14.4.3 Kernel Space Execution Environment
Tamanoir allows the deployment of lightweight services inside the kernel space of the operating system. The goal is to deport active functionality efficiently from the high-level execution environment (the JVM) into the OS kernel. Recent versions of the Linux kernel (2.4.x) are well furnished with networking functionalities and protocols: QoS, firewalls, routing, and packet filtering. Netfilter is a framework for packet modification outside the normal Berkeley socket interface [19]. For the IPv4 protocol, netfilter provides five hooks, which are defined points on the path of an IP packet. These hooks facilitate the development and execution of kernel-level modules written in C. The function nf_register_hook is used to attach a personalized function to a specific hook; when a packet reaches the hook, it is automatically passed to this function. The various modules set up in the OS kernel can be modified dynamically by active services. A Tamanoir active service, running inside the JVM, configures the netfilter module by sending control messages (Figure 14.6). These messages are captured by the netfilter module and used to parameterize lightweight services (forward, packet marking, drop). This on-the-fly configuration allows the dynamic deporting of personalized functions into the kernel.
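The format of these control messages is not specified here, so the encoding below is purely an assumption made for illustration: a fixed-layout message, built on the JVM side, telling the kernel module which lightweight action to apply to a flow. The real module defines its own message format and control channel.

```java
import java.nio.ByteBuffer;

// Hypothetical encoding of a user-space-to-kernel control message.
class KernelControl {
    static final byte FORWARD = 0, MARK = 1, DROP = 2; // assumed action codes

    // Builds a message: 4-byte flow id, 1-byte action, 4-byte argument
    // (e.g., the mark value). Layout is an assumption for this sketch.
    static byte[] configure(int flowId, byte action, int argument) {
        return ByteBuffer.allocate(9)
                .putInt(flowId)
                .put(action)
                .putInt(argument)
                .array();
        // In the real node this buffer would then be sent to the kernel
        // module over the communication module shown in Figure 14.6.
    }
}
```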
Figure 14.6 User space service (S1u) and kernel space service (S1k) communicate through the communication module (c).
14.4.4 Distributed Service Processing: Tamanoir on a Cluster
High-level and application-oriented active services (compression, cryptography, transcoding on the fly) require intensive computing resources. To support these services, Tamanoir implements the replicated EEs architecture described in Section 14.3.2: a Tamanoir active node embeds a dedicated cluster to efficiently support parallel services on streams. The Linux virtual server (LVS) [26] software suite offers the best performance in terms of throughput and availability by supporting distributed
internal servers (ftp, Web, mail). LVS is able to transmit packets in three different ways:

• LVS-NAT, based on network address translation (NAT);
• LVS direct routing (LVS-DR), where the packets' media access control (MAC) addresses are changed and the packets are transmitted to a real server;
• LVS tunneling (LVS-TUN), where packets are IP-IP encapsulated (i.e., IP datagrams encapsulated within IP datagrams) and transmitted to a back-end machine.
We adapt LVS for active networking and use it in the Tamanoir EE. A Tamanoir LVS is a collection of TAN execution environments running on a cluster of machines linked together by a high-performance network (Myrinet or gigabit Ethernet). A dedicated machine is configured as a front end, and is used to route packets from the Internet to the back-end machines. The front end is seen by an external client (on the Internet) as a single server, and is dedicated to distributing connections to each node of the cluster in a round-robin or weighted round-robin way. The Tamanoir execution environment is replicated on each back-end machine.

14.5 TAMANOIR PERFORMANCE EVALUATION

Many projects propose new execution environments for active networks. Most are very experimental, with a complex implementation process, and not all are freely available. In this section, we first describe the different testbeds used to conduct our experiments. Next, we provide some results regarding latency and raw throughput with standalone and distributed Tamanoir active nodes. Finally, we show some measurements obtained on a wide area Tamanoir platform.

14.5.1 Hardware and Software Descriptions of the Testbeds
Software platform: Tamanoir experiments were carried out with the Java virtual machine J2RE 1.4.2 provided by Sun Microsystems, including HotSpot technology. All the measurements presented here were made on nodes running GNU/Linux (Debian distribution) with a 2.4.19 kernel.

Hardware platforms:

Platform 1 (P1): Our first experimental platform consists of dual-processor 1-GHz Pentium III machines for TANs, and 1-GHz AMD Athlon machines for client hosts. Active
nodes and clients are connected through a dedicated Fast Ethernet (100 Mbps) network. This platform was used to measure latency in kernel and user space.

Platform 2 (P2): This is a 7-rack cluster of Compaq DL360 (G2) machines, each a 1.4-GHz dual Pentium III on a 66-MHz PCI bus. Racks are interconnected through a gigabit Ethernet network (RJ45 connectors) on a top-of-the-range Foundry gigabit switch.

Platform 3 (P3): This is a 16-rack cluster of Sun LX50 machines, also 1.4-GHz dual Pentium IIIs on a 66-MHz Peripheral Component Interconnect (PCI) bus. Nodes are interconnected by two networks: a standard Fast Ethernet network, and a very high-throughput, low-latency Myrinet [2] network with optical fiber on a Myricom switch.

Platform 4 (P4): This is based on a long-distance high-performance backbone: the vraiment très haut débit (VTHD) platform. It is a high-throughput experimental network (2.5 Gbps), using a giga router able to switch about 10 Gbps. This network links many French research centers, including INRIA, ENS, Sun Labs Europe, ENST, Eurecom, and France Telecom R&D.

We deploy two types of service:

• A lightweight service used to count and monitor the packets of each stream. It consumes no significant memory or CPU resources.
• A heavy service used to compress packet payloads. It is an intensive consumer of CPU. Memory consumption is limited to the allocation of an area the size of each packet's payload; once the payload has been compressed, the allocated memory area is freed.
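The heavy service can be approximated with standard java.util.zip, following the memory behavior described above (a scratch area roughly the size of the payload, released after each packet). This is a sketch under those assumptions, not the measured implementation.

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;

// Sketch of a per-packet payload compression service.
class CompressionService {
    static byte[] compress(byte[] payload) {
        Deflater deflater = new Deflater();
        deflater.setInput(payload);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream(payload.length);
        // Scratch area about the size of the payload; garbage after the packet.
        byte[] buffer = new byte[Math.max(64, payload.length)];
        while (!deflater.finished()) {
            out.write(buffer, 0, deflater.deflate(buffer));
        }
        deflater.end();
        return out.toByteArray();
    }
}
```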
Our tests required the implementation of two simple tools in Java. The first tool is a UDP or TCP stream receiver; the received data stream is by default redirected to the standard output (stdout). The second tool sends a data stream in ANEP format: its first option always sends the same packet, while a second option allows a binary or text file to be sent. Parameters select the transport protocol (UDP or TCP) to use, the payload size of each ANEP packet, and the name of the service to apply in the active nodes crossed by the stream. The merged throughput is the sum of all the individual stream throughputs. In the following, we use the proxy mode of Tamanoir.

14.5.2 Latency Measures
Latency refers to the time in which an ANEP packet is processed and routed to its next destination by a Tamanoir active node. Measurements on the first experimental platform were enabled by netfilter: when a packet reaches the node we start a timer, and we stop it when the same (now processed) packet leaves the node. Packets crossing the Tamanoir active node that remain in the Linux kernel
layer spend around 7 microseconds on basic forwarding operations with TCP (Figure 14.7) on platform 1. User space measurements were made on a 1-GHz Athlon PC with different JVMs (IBM and Sun 1.3, JIT enabled) and with a version compiled with the GNU compiler for Java (GCJ). At the kernel level, the size of the ANEP packet does not affect performance. As expected, performance is affected more by a service running in user space (in the JVM) than by one in kernel space. Results obtained with the standard Java virtual machines (Sun or IBM) are quite similar. GCJ, the GNU compiler for Java, produces native code from Java sources or byte code (.class) files; the code is then linked with the libgcj library. The compiled execution environment obtained with GCJ, also running in user space, does not improve performance. With TCP transport (Figure 14.7), the latency obtained with small packets (packet sizes of less than 4,096 bytes) remains at around 16 ms. Meanwhile, we obtain better results with bigger packets (around 4.4 ms for 4-Kb TCP ANEP packets with a JVM, and around 10.5 ms with the GCJ version), as the small-packet aggregation policy was originally designed to improve data transmission. As shown in [9], we obtain better results, between 0.5 and 1.25 ms, over UDP with small packets.
[Figure: two log-scale plots of latency (µs) versus ANEP payload size (0 to 25,000 bytes), comparing the Linux kernel path with Sun SDK 1.3, IBM SDK 1.3, and GCJ 3.0, over TCP (left) and UDP (right).]

Figure 14.7 Latency added by an active node on an ANEP packet over TCP or UDP on platform 1.
14.5.3 Data Path Optimization in a Tamanoir Active Node
As shown earlier, a mechanism is needed to interconnect a Tamanoir service written in Java running in user space and a minimal service running in kernel space. The first of the following experiments shows how we can configure a Tamanoir service running in kernel space. Figures 14.8 and 14.9 present the results obtained with a lightweight service first executed in the user space EE and then inside a netfilter module. ANEP packets need 7 microseconds to be processed and routed by a service running in kernel space. Running some services in kernel space improves performance for
active packet transport, and low-level services executed in kernel space minimize the load on the JVM (and user space).

[Figure: log-scale plot of latency (µs) versus ANEP packet number (20 to 30), for ANEP payloads of 1; 200; 4,096; 10,000; 20,000; and 25,000 bytes.]

Figure 14.8 Throughputs reached over TCP.
[Figure: log-scale plot of latency (µs) versus ANEP packet number (100 to 1,000), for ANEP payloads of 20,000 bytes and 1 byte.]

Figure 14.9 Throughputs reached over UDP. The first 500 packets are processed inside the JVM; the rest remain in the kernel.
14.5.4 Throughput Measures
Results are given in megabits per second (Mbps). They represent the average speed of data processing in a Tamanoir active node. These measurements were obtained with the latest release of Tamanoir, still written in Java.
14.5.4.1 Throughput Measures over a Fast Ethernet Network

The measurements shown in Figure 14.10 were obtained on platform 3 (the Sun cluster). We use a lightweight service, vary the number of TCP streams from one to two, and then send a single UDP stream. Over TCP, as soon as the payload size reaches 1 Kb, we obtain a throughput greater than 70 Mbps for a single stream, and 80 Mbps for two streams. UDP behaves quite well, apart from strange behavior with packets that are too big (64 Kb). Note that the current Tamanoir implementation does not support the symmetric multiprocessing (SMP) architecture for services applied to UDP streams. The maximum throughput available on a Fast Ethernet network is reached very quickly with a lightweight service and a small number of TCP streams. The throughputs shown in Figure 14.11 were also measured on platform 3. We apply a heavy service to one or two data streams; this service is more efficient for a big packet size (8 Kb). With two streams, we take advantage of the SMP architecture.
Figure 14.10 Throughputs over a Fast Ethernet network with a lightweight service on TCP and UDP.
Figure 14.11 Throughput over a Fast Ethernet network with a heavy service.
14.5.4.2 Throughput Measures over a Gigabit Ethernet Network

The measurements shown in Figures 14.12, 14.13, and 14.14 were obtained with the same system used above, but interconnected through a local gigabit Ethernet network (1,000 Mbps) and a high-performance switch (platform 2). Figures 14.12 and 14.13 show the throughputs reached with a lightweight service for one or two streams, over TCP and UDP respectively. To make the most of the bandwidth, it seems better to use packet sizes ranging from 8 to 32 Kb. The 64-Kb packet size does not seem well adapted, because the throughput is lower (especially for UDP). Note that we do not saturate the link, even with two streams. Figure 14.14 shows the throughputs reached with a heavy service applied to one or two streams over TCP. This service suits large packets really well: from 8 Kb, we double the aggregated throughput. In this experiment we really take advantage of the Tamanoir active node's dual-processor (SMP) architecture. Note that for one stream we reach about 90 Mbps, which is the Fast Ethernet limit, whereas in Figure 14.11 the maximum throughput of a single stream is only 55 Mbps, even though the service is the same and consumes the same resources. This difference shows the overhead introduced by interface access, due to the network interface cards and their drivers.
Figure 14.12 Bandwidth with Gigabit Ethernet and TCP with a lightweight service.
Figure 14.13 Bandwidth with Gigabit Ethernet and UDP with a lightweight service.
Figure 14.14 Bandwidth with Gigabit Ethernet and TCP with a heavy service.
14.5.4.3 Throughput Measures over Myrinet and a Tamanoir Cluster

The benefits of distributing resources inside an active node with a cluster-based Tamanoir node were considered in [26]. The local experimental platform consists of 12 clients and a Tamanoir LVS node (Figure 14.15 shows only six clients). On the left are the active packet senders; on the right, the receivers. Streams are routed by the front-end node acting as a director (stream dispatcher); three back ends are attached to provide distributed resources. The results reported in this section were measured on a gigabit Myrinet network.
Figure 14.15 Platform topology with clients and a cluster-based TAN.
Figures 14.16 and 14.17 show performance results achieved experimentally on the P3 platform. Figure 14.16 presents the performance obtained with a three-node cluster-based Tamanoir. In this configuration, Tamanoir supports gigabit performance (1.1 Gbps for 8-Kb packets) for a monitoring service applied to 24 active streams. We exceed the 1-Gbps limit thanks to the high bandwidth provided by Myrinet networks. Figure 14.17 summarizes the best results obtained. All these results show that our active node needs to process many streams to exploit the full potential of its processing resources. With heavy or high-level services like the Gzip service (on-the-fly data compression of the 2.4.19 Linux
kernel source files), active node resources are used more extensively, and throughput is reduced. With heavy services, a three-back-end Tamanoir active node is still able to process up to about 240 Mbps of active packets.
Figure 14.16 Throughput with a three-node cluster-based TAN applying a lightweight service.
Figure 14.17 Throughput with a three-node cluster-based TAN applying a heavy service.
14.5.4.4 Deployment over Long Distance Gigabit Networks

This section briefly presents a pragmatic deployment of a Tamanoir high-performance infrastructure. The Tamanoir active infrastructure relies on the adaptation of visualization tools from the DataGrid project: MapCenter. This tool, initially designed for grid management, helps to maintain Tamanoir active nodes distributed around backbones. During the French Réseau National des Technologies Logicielles (RNTL) Etoile project, Tamanoir was deployed on the high-performance VTHD backbone (2.5 Gbps). Various distant sites were involved in this experiment (see Figure 14.18).
Figure 14.18 Physical topology of a VTHD platform.
To manage active nodes and their services, each node can be graphically tested (see Figures 14.19 and 14.20) in terms of pure network functionality (ping, TCP/UDP packets sent). Moreover, users can manage active service deployment through Web interfaces. The Tamanoir active service API provides an information method through which active services can export vital data about the service (number of processed packets, service version, author). This display can be dynamically updated using Web technology. To validate the large-scale deployment and scalability of the infrastructure, the Tamanoir framework provides a fully distributed active packet generator that can generate numerous active streams through active nodes.

14.5.4.5 Experimental Conclusions

Regular performance measurements are essential to check the overhead introduced by modifications in the hardware configuration or by software tweaking. Evaluating the raw platform performance is important: the results it achieves are those we should reach with an active technology. For example, if the raw performance of a platform is 90 Mbps, we must find the minimal configuration that reaches this same value (90 Mbps), but in an active context. Most measurements are done with a lightweight service (low CPU and memory requirements). Our objective here is not to provide a high-performance service for an on-the-fly processing context, but to provide an optimal, fast, and efficient EE to support all active network services.
Programmable Networks for IP Service Deployment
Figure 14.19 Geographical deployment of Tamanoir active nodes around a VTHD platform.
Figure 14.20 Available active nodes with main information (IP address, ICMP test, alarms, and so on).
14.6 CONCLUSION

It is quite a challenge to provide efficient execution environments for programmable and active networks. The main approaches have shown that software active nodes can meet the high-performance requirements of applications. We illustrate these approaches with the Tamanoir project, which implements a high-performance, software-based active node. It is a particularly concrete and freely available example of what high-performance active networks can achieve. Of course, these environments must be exploited through the efficient design of active services.

Although many EE proposals have been made all over the world, efficient prototypes are difficult to find. Most rely on a specific framework (NodeOS) or on specific hardware. An available set of high-performance EEs is needed for benchmarking solutions adapted to application requirements. The search for high performance marks the beginning of a new inquiry in the area of active networks. High performance is mandatory if we want to see active network devices deployed in real industrial networks. Most of the available active network projects are based on software EEs (except the ANN and P4 projects in this chapter). This chapter has focused on software-based high-performance active nodes. It is now time for industrial companies to propose hardware-supported equipment, such as network processors [25], that can be pragmatically deployed in current networks.

References

[1] Alexander, S., et al., Active Network Encapsulation Protocol (ANEP), RFC draft, July 1997.
[2] Boden, N., et al., “Myrinet: A Gigabit Per Second Local Area Network,” IEEE Micro, Vol. 15, No. 1, February 1995, pp. 29-36.
[3] Dandekar, H., Purtell, A., and Schwab, S., “AMP: Experiences with Building an Exokernel-Based Platform for Active Networking,” 2002 DARPA Active Networks Conference and Exposition (DANCE’02), May 2002, p. 77.
[4] Decasper, D., et al., “A Scalable, High-Performance Active Network Node,” IEEE Network, Vol. 13, January 1999.
[5] Feldmeier, D., et al., “Protocol Boosters,” IEEE Journal on Selected Areas in Communications, Vol. 16, No. 3, April 1998, pp. 437-444.
[6] Fuhrmann, T., et al., “Results on the Practical Feasibility of Programmable Network Services,” ANTA 2003, Osaka, Japan, May 2003, p. 141.
[7] Fuhrmann, T., et al., “AMnet 2.0: An Improved Architecture for Programmable Networks,” Proc. of the Fourth Annual International Working Conference on Active Networks (IWAN 2002), Zurich, Switzerland, December 4-6, 2002.
[8] Gelas, J.-P., and Lefèvre, L., “Tamanoir: A High-Performance Active Network Framework,” Active Middleware Services, Ninth IEEE International Symposium on High Performance Distributed Computing, Pittsburgh, PA, August 2000, pp. 105-114.
[9] Gelas, J.-P., and Lefèvre, L., “Mixing High Performance and Portability for the Design of an Active Network Framework with Java,” Third International Workshop on Java for Parallel and Distributed Computing, International Parallel and Distributed Processing Symposium (IPDPS 2001), San Francisco, CA, April 2001.
[10] Hadzic, I., and Smith, J. M., “P4: A Platform for FPGA Implementation of Protocol Boosters,” FPL’97, LNCS Vol. 1304, September 1997, pp. 438-447.
[11] Larrabeiti, D., et al., “A Practical Approach to Network-Based Processing,” AMS, July 2002.
[12] Larrabeiti, D., et al., SARA: A Simple Active Router-Assistant Architecture, Technical Report, Univ. Carlos III, Madrid, November 2001.
[13] Lefèvre, L., and Reymann, O., “Combining Low Latency Communication Protocols with Multithreading for High-Performance DSM Systems on Clusters,” Eighth Euromicro Workshop on Parallel and Distributed Processing, Rhodes, Greece, IEEE Computer Society Press, January 2000, pp. 333-340.
[14] Nygren, E. L., Garland, S. J., and Kaashoek, M. F., “PAN: A High-Performance Active Network Node Supporting Multiple Mobile Code Systems,” IEEE OPENARCH’99, March 1999.
[15] Merugu, S., et al., “Bowman and CANEs: Implementation of an Active Network,” Thirty-Seventh Annual Allerton Conference, Monticello, IL, September 1999.
[16] Merugu, S., et al., “Bowman: A Node OS for Active Networks,” IEEE INFOCOM 2000, March 2000.
[17] Moore, J. T., Hicks, M., and Nettles, S. M., “Practical Programmable Packets,” IEEE INFOCOM ’01, April 2001.
[18] Ott, M., Welling, G., and Mathur, S., “CLARA: A Cluster-Based Active Router Architecture,” Proc. of Hot Interconnects VIII, Stanford University, CA, August 2000.
[19] Russell, R., and Welte, H., “Linux netfilter Hacking HOWTO: netfilter Description and Usage,” July 2000, http://www.iptables.org/documentation/HOWTO/netfilter-hacking-HOWTO.html.
[20] Bhattacharjee, S., Calvert, K., and Zegura, E., “An Architecture for Active Networking,” HPN’97, White Plains, NY, April 1997.
[21] Takahashi, N., Miyazaki, T., and Murooka, T., “APE: Fast and Secure Active Networking Architecture for Active Packet Editing,” IEEE OPENARCH, June 2002.
[22] Tennenhouse, D., and Wetherall, D., “Towards an Active Network Architecture,” Computer Communications Review, Vol. 26, No. 2, April 1996, pp. 5-18.
[23] Wetherall, D., Guttag, J., and Tennenhouse, D., “ANTS: A Toolkit for Building and Dynamically Deploying Network Protocols,” IEEE OPENARCH’98, April 1998.
[24] Wolf, T., and Decasper, D., “Tags for High-Performance Active Networks,” IEEE OPENARCH 2000, Tel Aviv, Israel, March 2000.
[25] Wolf, T., and Turner, J. S., “Design Issues for High-Performance Active Routers,” IEEE Journal on Selected Areas in Communications, Vol. 19, No. 3, March 2001, pp. 404-409.
[26] Zhang, W., “Linux Virtual Server for Scalable Network Services,” Ottawa Linux Symposium, 2000.
[27] Keller, R., et al., “PromethOS: A Dynamically Extensible Router Architecture Supporting Explicit Routing,” Proc. of the Fourth Annual International Working Conference on Active Networks (IWAN 2002), Zurich, Switzerland, December 4-6, 2002.
[28] GCJ: The GNU Compiler for the Java Programming Language, http://sourceware.cygnus.com/java/.
[29] IBM Java Developer Kit for Linux, http://www.alphaworks.ibm.com/tech/linuxjdk.
[30] The Java Programming Language, http://java.sun.com/.
[31] Wijata, Y., Resource Management in Active Networks, Technical Report, University of Kansas, May 1997.
Chapter 15

Network Management

The network management system designed and implemented during the FAIN project is a hierarchically distributed policy-based network management (PBNM) architecture, which is an important result of this project. The policy-based network management system, with appropriate policies, performs fine-grained management of the FAIN active node resources, and delegates management capability to third parties, according to the FAIN business model service chain. Thus the active network service provider (ANSP) delegates management functionality to its registered service providers who, in turn, delegate particular management tasks to their customers. Security and the isolation of resource usage are assured by mechanisms developed by the FAIN project and deployed in the FAIN active node. The uniform API offered by the FAIN node allows the deployment of a manufacturer-independent, networkwide solution.

15.1 INTRODUCTION

Management, being a key component of a network architecture, must also be considered and designed around the same concepts. To this end, the management architecture must:

• Support the coexistence of different management strategies, facilitating customization;
• Interoperate with different vendors’ equipment;
• Be dynamically extensible to support the deployment and operation of new services.
The FAIN management architecture encapsulates the three aforementioned concepts, and is built in accordance with the IETF’s policy-based management framework [1], used in an active network environment. As a consequence, it inherits the features of this enabling technology, which are then applied to this new problem space.
In this chapter, we describe the management aspects of a new network architecture, which was designed and implemented during the FAIN European Union IST research and development project [2]. This architecture encompasses the design and implementation of active nodes that support different types of execution environments, policy-based network management, and a platform-independent approach to service specification and deployment. The architecture is deployed and evaluated in a pan-European testbed.

The FAIN management architecture is the realization of the business model proposed in FAIN. Accordingly, a brief description of the most important actors, and of the relationships between them in the context of the FAIN business model, is essential to understand the motivation and objectives of the FAIN management architecture. The main actors of the business model are the ANSP, the service provider (SP), and the consumer (C).

The ANSP is the primary owner of the network resources, and provides facilities for the deployment and operation of the active components in the network. The ANSP offers secure and isolated access to such facilities, as well as part of its network resources, to potential customers such as service providers or large corporate customers. Network operators may assume this role.

The SP buys network resources from the ANSP, and creates services comprising active components delivered by a service component provider. It then deploys these components in the network, and offers the resulting service to consumers.

The consumer is the end user of the active services offered by an SP. A consumer may be located at the edge of the information service infrastructure (i.e., be a classical end user), or it may be an Internet application, a connection management system, or even another SP. FAIN has focused mainly on the relationships and interactions between ANSP and SP, and between SP and consumer, with respect to service deployment and management.
The FAIN management system is also one of the first prototypes of a network management system oriented to the full management of active networks.

15.2 DESIGN AND FUNCTIONALITY

Active networking technology impacts network management functionality in two main ways:

1. As an enabling technology, it offers new capabilities to achieve more flexible, distributed, and efficient management [3].
2. As a consequence of the above, it puts an additional burden on network management, which must now account for the additional complexity introduced by this technology; for example:
• Network management systems must handle functionalities dynamically installed in active routers.
• They must manage the increasing customization requirements that consumers impose on ANs.
• They must be able to manage heterogeneous network technologies, even nonactive ones, since a progressive introduction of active routers into strategic places within the network is foreseeable.
PBNM copes with some of the management requirements imposed by active networks. In fact, it offers more autonomous and flexible management facilities, as well as the possibility of customizing network behavior to user requirements. In particular, policies seem to be an appropriate mechanism for the delegation of management functionality, which is a requirement that stems from the implementation of ANs. However, a policy-based approach on its own does not meet the other requirements imposed by ANs, such as the dynamic extensibility of management functionality, or the support of service-specific management code. These requirements imply the dynamic deployment of either new network management components in the management station, or of service-specific management code; that is, some activeness. The synergy of both technologies is obvious here. In FAIN, active service provisioning collaborates tightly with the network management system to meet this requirement in our PBNM.

Additionally, the centralized policy-based management approaches developed nowadays suffer from scalability problems [4, 5, 6, 12]. This is due to the number of policies that such a system must process to manage overall network behavior, and to the amount of management traffic flowing between the managed devices and the policy manager. Moreover, in active networks the number of resources to be managed increases significantly (i.e., CPU, memory, disk, security access, and so forth); different users may want to manage network behavior to tailor it to their needs; and a potentially huge number of different application services run within the network and must be managed. Taking all this into account, a centralized management approach appears unattainable, which introduces the idea of a two-tier architecture to cope with this issue.

Active and policy-based networking technologies are both relatively new.
Hence, until now there has been no sound approach that analyzes the synergies that can be obtained from joining these technologies to realize a global network management architecture for active networking environments. The FAIN management architecture does exactly that. It is an innovative, distributed, two-tier, policy-based management architecture specifically designed for managing active and programmable networks, while taking advantage of the inherent facilities provided by these technologies.

To summarize, the FAIN architecture solves the scalability problems of centralized policy-based management systems using a two-tier approach, targeting network- and element-level management concerns, based on telecommunications management network (TMN) concepts. This results in reduced redundant traffic and information processing, more dynamic fine-grained management tailoring, and dynamic adaptability to heterogeneous network technologies and functionalities.

Policies in FAIN have been categorized according to the semantics of management operations, which may range from QoS operations to service-specific operations. Accordingly, policies that belong to a specific category are processed by dedicated policy decision points (PDPs) and policy enforcement points (PEPs). Figure 15.1 gives an overview of the FAIN management system. The network management system (NMS) is the entry point of the management architecture. It is the recipient of policies that may result from network operator management decisions or from service level agreements (SLAs) between the ANSP and SP, or between the SP and consumer. These SLAs require reconfiguration of the network, which is automated by means of policies sent to the NMS. Network-level policies are processed by the NMS PDPs, which decide when policies can be enforced. When enforced, they are delivered to the NMS PEPs, which map them to element-level policies; these are, in turn, sent to the element management systems (EMSs). EMS PDPs perform similar processes at the element level. Finally, the AN node PEPs execute the enforcement actions at the network elements.
Figure 15.1 A hierarchical view of the FAIN network management architecture.
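As a rough illustration of the two-tier flow in Figure 15.1, the sketch below models an NMS PDP that admits a network-level policy, an NMS PEP that maps it onto element-level policies, and per-node EMS PDP/PEP pairs that enforce them. All class and field names are hypothetical; the actual decision and admission logic is elided.

```python
from dataclasses import dataclass

@dataclass
class Policy:
    scope: str    # "network" or "element"
    target: str   # node the policy applies to ("*" = all nodes)
    action: str

class ElementPEP:
    """Executes enforcement actions on one active node."""
    def __init__(self, node):
        self.node, self.enforced = node, []
    def enforce(self, policy):
        self.enforced.append(policy.action)

class ElementPDP:
    """One EMS-level decision point per active node."""
    def __init__(self, node):
        self.pep = ElementPEP(node)
    def decide(self, policy):
        self.pep.enforce(policy)  # trivially admit in this sketch

class NetworkPEP:
    """Maps an admitted network-level policy onto element-level policies."""
    def __init__(self, ems_pdps):
        self.ems_pdps = ems_pdps
    def enforce(self, policy):
        for node, ems in self.ems_pdps.items():
            if policy.target in ("*", node):
                ems.decide(Policy("element", node, policy.action))

class NetworkPDP:
    def __init__(self, pep):
        self.pep = pep
    def decide(self, policy):
        self.pep.enforce(policy)  # real decision logic elided

ems = {"nodeA": ElementPDP("nodeA"), "nodeB": ElementPDP("nodeB")}
nms = NetworkPDP(NetworkPEP(ems))
nms.decide(Policy("network", "*", "reserve-1Mbps"))
print(ems["nodeA"].pep.enforced)  # ['reserve-1Mbps']
```

The point of the two tiers is visible even in this toy: only one network-level policy crosses the NMS, while the fan-out to individual nodes happens below it.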
The use of this policy control configuration model [1] in a hierarchically distributed management architecture combines the benefits of management automation with a reduction of management traffic and a distribution of tasks.

As the FAIN management architecture is based on the FAIN business model, the relationship among the three main actors, namely, ANSP, SP, and C, is projected directly onto the architecture. Accordingly, each of these actors may request and get his own (virtual) management architecture, through which he is able to manage the resources allocated to the virtual environments of his virtual network. In this way, each actor is free to select and deploy his own model of managing the resources; namely, his own management architecture, which can be centralized or hierarchical, policy-based or not. The complexity of the virtual network, and the types of service deployed in it, dictate the particular choice of management architecture by its owner. In addition, different management architectures may simultaneously coexist in the same physical network infrastructure, as they may be deployed by different actors. To this end, we create an environment that is capable of accommodating opposing requirements; an accomplishment that is beyond the capabilities of the traditional approach of monolithic architectures.

Our model extends the Tempest approach [7] to the management plane, which was the first to advocate the simultaneous support of (virtual) control architectures for ATM networks. It also extends the scope of management by delegation [7], as it allows delegation of network management responsibility to a third party (for example, an SP), which can be deployed and hosted in a physical location separate from the NMS of the network owner (for example, the ANSP).
In order to allow the ANSPs to manage this virtual infrastructure, the network infrastructure provider (NIP) also creates for the ANSPs new, conveniently restricted instances of the management framework, which we call management instances (MIs). Figure 15.2 illustrates this architecture. Starting with the management architecture of the network operator, namely the ANSP, it instantiates and registers a new management instance, which is delegated to one of its customers; that is, the SP. This management instance will host the SP’s management architecture. The SP has the option of buying from the ANSP an instance of the ANSP’s architecture; in our case, a policy-based one.

To this end, the network management architecture developed by the ANSP is not only used for managing the network elements, but becomes a commodity, thus creating another important source of income for the ANSP. Furthermore, the ability of the ANSP to generate and support multiple management domains may create additional business opportunities. For example, the ANSP may build an operations support system (OSS) hosting facility for SPs to instantiate their own management architectures. In this way, the ANSP may sell both its expertise in running and operating an OSS, and the architecture and its corresponding implementation. In contrast, the SP does not need to build his management architecture from scratch, but can customize an existing one according to the services he intends to run. Alternatively, the SP may deploy his own management architecture using the OSS hosting facility provided by the ANSP, thereby reducing the cost of managing the network.

In FAIN we have focused on, and experimented with, the automated instantiation of management architectures, using the PBNM system of the ANSP as a blueprint to instantiate another management system for the SP. Note also that this instantiation relationship can be recursive, in the sense that the SP may further delegate his own instances to a consumer. Finally, the architecture of the MI used by the ANSP has been designed in such a way that it is dynamically extensible in terms of its functionality, as a result of using active network technology. The ANSP’s management architecture can be extended in two distinct ways: (1) by deployment of a whole new pair of PDPs/PEPs that implement new management functionality, or (2) by extension of the inner functionality of existing PDPs/PEPs. The former is triggered by the PDP manager, whereas the latter is achieved by the PDPs themselves. The execution of the extension, namely fetching and deploying the requested functionality, is the responsibility of FAIN’s active service provisioning (ASP) system.

The FAIN prototype implementation is deployed on the pan-European FAIN testbed: an overlay network connecting 10 different sites. Initial trials have focused mainly on the functional evaluation of our management system and, in particular, on the creation and usage of MIs and their extensibility features. The remainder of this chapter describes the design choices and mechanisms that allow the FAIN management system to:

• Create an active virtual network for an SP;
• Delegate to the SP some management functionality;
• Dynamically download service-specific management components from the SP;
• Dynamically extend the management stations.
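The recursive delegation of management instances described above (ANSP to SP, and SP onward to a consumer), with each delegated instance conveniently restricted to a slice of its owner's resources, can be sketched as follows. The class and the single bandwidth figure are illustrative assumptions, not the FAIN interfaces.

```python
class ManagementInstance:
    """One (virtual) management architecture, delegated to a single actor.
    Delegation is recursive: an SP's instance may spawn consumer instances,
    but a child may never control more resources than its owner holds."""

    def __init__(self, owner, bandwidth_mbps, parent=None):
        self.owner = owner
        self.bandwidth_mbps = bandwidth_mbps
        self.parent = parent
        self.children = []

    def delegate(self, customer, bandwidth_mbps):
        allocated = sum(c.bandwidth_mbps for c in self.children)
        if allocated + bandwidth_mbps > self.bandwidth_mbps:
            raise ValueError("delegation exceeds the owner's resources")
        child = ManagementInstance(customer, bandwidth_mbps, parent=self)
        self.children.append(child)
        return child

ansp = ManagementInstance("ANSP", bandwidth_mbps=100)
sp = ansp.delegate("SP-1", 60)            # ANSP hands an MI to an SP
consumer = sp.delegate("Consumer-7", 10)  # which may delegate further
print(consumer.parent.owner)  # SP-1
```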
15.3 THE FAIN PBNM CORE COMPONENTS DESCRIPTION

We now present the details of the FAIN policy-based management architecture. Following the FAIN enterprise model described in Chapter 7, the first instance of the management architecture that is created is that of the ANSP. As the NMS and EMSs of the ANSP instance have similar functionality and components with regard to the policy-based approach, we factor these common parts into a set of components we call the core components of the management system.
Figure 15.2 FAIN management instances and their components.
15.3.1 Common Use Cases

This section describes the main common use cases of the core management system, which are shown in the use case diagram in Figure 15.3.
Figure 15.3 Use case diagram for core components.
All the functionalities represented by these use cases are supported by the core policy-based management framework, and therefore by both the network-level and element-level management systems that extend from it.

15.3.1.1 Provision Policy

This is probably the most important use case for a policy-based management system. It represents the basic policy processing functionality. That is, the “provision policy” use case encompasses all functionality realized in our management framework each time a policy is introduced into the system. The activity diagram in Figure 15.4 shows the main functionality within the provisioning use case.
Figure 15.4 Activity diagram for policy provisioning.
First, the preprocessing functionality, realized outside any management instance,¹ checks the identity (through its credentials) of the actor that intends to use the management system, and demultiplexes the policy to the corresponding management instance. Once the policy is dispatched to a particular management instance, the steps to be followed are:

• Check the actor’s rights within the management instance. Each management instance has an associated profile, in which we define what the actor is allowed to do and the maximum amount of resources that can be allocated. This profile has been implemented as an XML schema used to validate the incoming request in the form of XML policies.
• Extend the management functionality, through the download of new components, to correctly process the policy. This feature is explained by the use case called “deploy management functionality.”
• Where necessary, extend the management functionality of the PDP by upgrading the action and condition interpreters. The “enhance PDP’s policy knowledge” use case further explains this functionality.
• Execute the core policy functionality (the list below identifies the most common functionality in a policy-based system):
  a) Check the policy syntax and semantic conflicts;
  b) Store the policy in the repository;
  c) Make decisions about when a policy should be enforced, based on events received through the event processing functionality;
  d) Enforce decisions.
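A minimal sketch of the provisioning steps above: identity check, demultiplexing to a management instance, access-rights validation against the MI profile, on-demand deployment of a missing functional domain, and storage pending a triggering event. The `provision` function and all dictionary keys are hypothetical; real FAIN requests are XML policies validated against an XML schema.

```python
def provision(policy, identities, instances):
    """Walk a policy through the 'provision policy' pipeline; each step
    mirrors one activity in Figure 15.4.  Returns the steps performed."""
    steps = []
    if policy["actor"] not in identities:               # check identity
        return steps + ["reject: unknown actor"]
    steps.append("identity ok")
    mi = instances.get(policy["actor"])                 # demultiplex to MI
    if mi is None or policy["action"] not in mi["profile"]:
        return steps + ["reject: access rights"]        # profile check
    steps.append("access rights ok")
    if policy["domain"] not in mi["installed_domains"]: # extend if needed
        mi["installed_domains"].add(policy["domain"])
        steps.append("deployed domain " + policy["domain"])
    mi.setdefault("repository", []).append(policy)      # store the policy
    steps.append("stored; decision deferred to a triggering event")
    return steps

sp_mi = {"profile": {"reserve-bandwidth"}, "installed_domains": set()}
result = provision(
    {"actor": "SP-1", "action": "reserve-bandwidth", "domain": "qos"},
    identities={"SP-1"},
    instances={"SP-1": sp_mi},
)
print(result)
```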
15.3.1.2 Deploying Management Functionality

Another basic feature of an active network management system is its ability to extend itself with functionality unforeseen at development time. For this reason, it requires an appropriate mechanism for adding new functionality at run time. Once the management system detects that it must be extended, it requests the ASP framework to deploy the required functional domain; thereafter, the forwarding of the request proceeds.
¹ A management instance can be seen as a sandbox where all running components have the same owner. Each management instance has at least one component: the PDP manager. For the ANSP, it comprises a PDP manager, a QoS PDP/PEP, and a delegation-of-access-rights PDP/PEP. All these components are described in more detail later on.
15.3.1.3 Enhancing the PDP’s Policy Knowledge

This use case is an enhancement of the previous one. The management system is able to accept new policies for an already existing functional domain by triggering the deployment of new action/condition interpreters. The FAIN policy rules are policy core information model (PCIM)² compliant. The system extracts the appropriate fields to determine which class is responsible for interpreting the condition and action fields included in the incoming policy request. If the system detects that the required class is not present in the local system, it issues a request to the ASP to transport the corresponding code package from the network code repository to the local code repository. Thereafter, the processing of the request resumes.

15.3.2 Core Components

Figure 15.5 illustrates the components of the core policy-based management system, derived from the functional requirements captured in the common use case section. The core components are used at both management levels, together with level-specific components, which may extend the core functionality in order to realize level-specific behavior.

15.3.3 ANSP Proxy

The proxy has been introduced to enhance the security of the ANSP and/or of its customers, the SPs. It provides authentication of the incoming requests (policies), and forwards the policies to the correct management instances. The ANSP proxy can accept policies coming from both the ANSP and the SP, and even directly from customers³ (end users). It is also responsible for delivering to the policy owner the reports on policy enforcement status sent by the underlying components.

15.3.4 PDP Manager

Figure 15.6 illustrates the main functionality of the PDP manager. The PDP manager is responsible for forwarding received policies to the appropriate policy decision point. If the corresponding PDP is not installed, the PDP manager requests the ASP system to download and install it, thereby extending the management functionality of the system as required.

² The policy core information model describes the generic policy entities (policy groups, rules, conditions, and actions) and their relationships in a domain-independent manner. Appropriate extensions are required in order to apply the PCIM to specific domains, such as QoS or security.
³ Users may be allowed to use their own facilities to create FAIN-compliant policies and send them to the ANSP proxy directly.
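The interpreter lookup described above (resolve the class responsible for a PCIM condition or action, and fall back to an ASP-mediated download from the network code repository when it is absent locally) might look like the following sketch. The registry class and the `fetch` callback are assumptions, not the FAIN API.

```python
class InterpreterRegistry:
    """Maps PCIM condition/action type names to interpreter callables.
    The 'fetch' callback stands in for the ASP transporting a code package
    from the network code repository to the local one (hypothetical API)."""

    def __init__(self, fetch):
        self.local = {}   # locally installed interpreter classes
        self.fetch = fetch

    def resolve(self, type_name):
        if type_name not in self.local:        # not in the local system:
            self.local[type_name] = self.fetch(type_name)  # ask the ASP
        return self.local[type_name]

# Toy stand-in for the network code repository:
REMOTE = {"qos-condition": lambda policy: policy.get("bandwidth", 0) > 0}

reg = InterpreterRegistry(fetch=REMOTE.__getitem__)
check = reg.resolve("qos-condition")  # first call triggers the download
print(check({"bandwidth": 5}))        # True: the condition holds
```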
The PDP manager also acts as a control point, as it has all the necessary information to track the policy processing state. As an example, imagine that two different policies must be deployed, but the second may only be deployed if the first is successfully enforced. In this case, the PDP manager keeps the second one in a halt state until it receives notification of the first policy’s successful enforcement.
Figure 15.5 Architectural model for the core system.
Such a situation occurs, for instance, when an SP requests the instantiation of a new virtual network and its corresponding management instance. In this case, the PDP manager receives two different types of policy, the QoS policy and the access rights delegation policy. Following the described procedure, it then installs the QoS policy (an action that requires admission control) and only when there are sufficient available resources (it receives the success notification) does it attempt to install the delegation policy. Only when both installations are completed successfully, does it instantiate the new MI and hand it over to the SP. Figure 15.7 illustrates the main components comprising the PDP manager.
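The ordering constraint in the example above (install the delegation policy only after the QoS policy's enforcement is confirmed) is essentially a halt-until-notified queue. A minimal sketch, with hypothetical names:

```python
class PdpManager:
    """Holds a dependent policy in a halt state until the prerequisite one
    is reported as successfully enforced (illustrative, not the FAIN API)."""

    def __init__(self):
        self.enforced = set()   # policies reported as successfully enforced
        self.halted = {}        # prerequisite -> policies kept on hold
        self.installed = []

    def submit(self, policy, after=None):
        if after is not None and after not in self.enforced:
            self.halted.setdefault(after, []).append(policy)  # halt state
            return "halted"
        return self._install(policy)

    def notify_success(self, policy):
        self.enforced.add(policy)
        for waiting in self.halted.pop(policy, []):
            self._install(waiting)               # release dependents

    def _install(self, policy):
        self.installed.append(policy)  # real enforcement via a PDP elided
        return "installed"

mgr = PdpManager()
mgr.submit("qos")                      # admission control runs first
mgr.submit("delegation", after="qos")  # kept in a halt state
mgr.notify_success("qos")              # now the delegation policy goes in
```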
Figure 15.6 PDP manager use case diagram.
Figure 15.7 PDP manager interfaces.
The forward controller (FwC) subcomponent implements the functionality of the “control of forwarding of policy sets” use case. When a policy arrives, it checks whether it is a single policy or a policy set. In the case of a policy set, the FwC splits the set into individual policies, obtains the set-forwarding mode, and forwards the individual policies accordingly.

The PDP manager (PdpMgr) subcomponent coordinates the core behavior of the PDP manager. It uses the domain manager (DMgr) subcomponent to realize the “find PDP” use case and, if the PDP is not found, requests its installation from the domain manager. The PdpMgr subcomponent is also responsible for forwarding the policy to the PDP once it is installed. It then realizes the “register policy validity period” use case using the policy life-cycle (PDPLC) subcomponent.

The policy life-cycle subcomponent implements the functionality of the “check PDP life cycle” use case. It periodically checks whether a PDP has expired. If so, it contacts the PDP uninstaller in order to remove it, thus fulfilling the “uninstall PDP” use case.

The domain manager subcomponent implements the functionality of the “find PDP,” “control life-cycle functional domain,” “deploy functional domain,” and “release functional domain” use cases. It allows the management framework to dynamically upgrade itself with new management capabilities; that is, it allows the management system to extend itself by downloading and creating an instance of a new functional domain. The use cases realized by this component are:

Find PDP: As previously described, when the PDP manager wants to retrieve the reference of a specific PDP that belongs to a particular functional domain,⁴ it asks the domain manager for this reference. The domain manager then looks in its local cache to see if the PDP is running on the system. If so, it returns its reference, the PDP’s interoperable object reference (IOR); otherwise, it tries to create a new instance.
If the code is already downloaded into the classpath, it then returns a reference to the new instance, but if it does not have this code (i.e., it is not in the classpath), then it requests the ASP to initiate deployment of the code that implements the functional domain requested. Once the ASP has downloaded the code requested, it notifies the domain manager which then creates a new instance and returns the reference of the newly instantiated PDP to the PDP manager. Control life-cycle functional domain: The domain manager is responsible for triggering deployment of functional domains using “deploy functional domain.” It also maintains the references of the currently instantiated functional domains, and 4
4. The functional domain must be understood as all the components required for processing policies whose conditions and actions conceptually address a common management goal, such as QoS, delegation of access rights, or performance.
338
Programmable Networks for IP Service Deployment
provides these references to surrounding components, which use them to access the components that comprise the functional domain (PDP/PEP).
Instantiate functional domain: A functional domain can be composed of more than one component; for example, a PDP and a PEP. Currently, the PDP is instantiated within the management station (NMS, EMS). The PEP can be instantiated either in the management station or in a VE inside a FAIN active node; in the latter case, the ASP system is responsible for instantiating the component.
Deploy functional domain: As already explained, when a PDP or PEP is not found locally, the domain manager obtains the corresponding code from the ASP system. The policy carries enough information to identify the functional domain responsible for processing the incoming request. This information is used to determine whether the components of the particular functional domain are already running in the system or, instead, are stored locally in the management station.
Release functional domain: In order to increase the overall performance of the management station, the domain manager deactivates all components that compose a functional domain once all the policies processed by it have expired. It requests the removal of the PDPs and PEPs instantiated in the management system by contacting the PDPLC subcomponent. If a PEP is located on a FAIN active node, it contacts the ASP to remove it.

15.3.5 PDP

The PDP is the main component in a policy-based management architecture. According to its use case diagram (see Figure 15.8), its main functionality is to check for possible syntactic and semantic conflicts in policies (and, sometimes, even try to resolve these conflicts). Another role of the PDP is to decide when a policy should be enforced, for which purpose the PDP needs to receive information from the monitoring system. The third important function is to forward decisions to the PEP components for enforcement.
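These three responsibilities can be sketched in Java. All interface and method names below are hypothetical illustrations, not the FAIN API; conflict checking is reduced here to rejecting duplicate policy identifiers.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a PDP's three duties: conflict checking,
// decision making, and forwarding decisions to the PEPs.
public class PdpSketch {

    interface Policy { String id(); boolean conditionsHold(); }
    interface Pep { void enforce(Policy p); }

    private final List<String> installedIds = new ArrayList<>();
    private final List<Pep> peps;

    PdpSketch(List<Pep> peps) { this.peps = peps; }

    // Syntactic conflict check: here, simply reject duplicate policy ids.
    static boolean conflicts(List<String> installedIds, String incomingId) {
        return installedIds.contains(incomingId);
    }

    // Decide and enforce: reject conflicting policies; enforce a policy
    // immediately only when its (monitored) conditions hold.
    boolean process(Policy p) {
        if (conflicts(installedIds, p.id()))
            throw new IllegalArgumentException("conflict: " + p.id());
        installedIds.add(p.id());
        if (!p.conditionsHold()) return false;  // decision deferred
        peps.forEach(pep -> pep.enforce(p));    // forward the decision
        return true;
    }
}
```

In the real system the decision step draws on monitoring-system events rather than a boolean flag, and enforcement crosses a CORBA interface to the PEPs.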
As illustrated in Figure 15.9, each PDP consists of a set of subcomponents. The controller (also known as the core) is responsible for receiving the incoming request and, after checking that the incoming policy will not generate any conflict, sends it to the evaluation subcomponent to be processed.
[Figures 15.8 and 15.9: the PDP use case diagram and its subcomponents. The PDP manager submits policies; the PDP checks conflicts and makes decisions, using input from the monitoring system, and forwards them through the i_pep interface to the PEPs.]
The conflict checker is responsible for checking for any possible conflict that the incoming request could generate. The core component offers only detection functionality, which can resolve syntactic conflicts, but not semantic ones. Solving the latter requires in-depth knowledge about the type of policy, so semantic conflicts are managed by the extended conflict checker subcomponent. The core offers basic mechanisms that allow its functionality to be extended by adding specific conflict interpreters.
The evaluation subcomponent is responsible for deciding whether the incoming request must be enforced immediately or deferred. Since the interpretation of a policy depends on the sort of policy to be processed, the core component offers the basic mechanisms to determine which interpreters are required, and to dynamically retrieve and use them. If they are not registered locally, it requests the ASP to download them.
The condition/action interpreter subcomponents provide action and condition processing logic for those policy types that are handled by the PDP. Each PDP has at least one instance of each type, but they can be dynamically extended to accommodate more interpreters capable of processing new actions and conditions conveyed by the policies. The generic condition and action interpreter uses a particular field inside the policy to decide which specific action and condition interpreter should be used to process it. This processing logic is specific to each policy, and hence to each functional domain and each management level; that is, there are condition/action interpreters specific to the element and network levels.

15.3.6 Monitoring System

Policy decisions rely on both local and global network status information. While PEPs are unsurpassable sources of device-specific data, a monitoring system is required to provide an overall picture of the network state.
It is a challenge in terms of extensibility, scalability, and efficiency to obtain such a picture in an active network, where new modules, service components, and resource abstractions are constantly incorporated. These properties are architecturally addressed within the monitoring system by various subsystems at both the network and element levels. Grouping these concerns by responsibility led to the adoption of a layered architecture, as shown in Figure 15.10. The three layers reflect the different aspects of the monitoring activity: while the acquisition layer gathers and processes data coming from network entities (offered by the active nodes through resource abstraction interfaces), the distribution layer permits the efficient delivery of such information to the PDPs through an extended notification channel. The policy-based control
layer aims to make decisions affecting the way the monitoring operations are carried out.
Figure 15.10 The FAIN monitoring system architecture.
Altogether, these layers apply a set of strategies that guarantee an immediate response either to the appearance of new network elements or to the need for new information processing methods, making the monitoring system inherently extensible. These strategies roughly pertain either to the group of interface decoupling techniques or to the application of the building-block concept.
The first of these strategies, applied within the acquisition layer, relies on dynamically discovering the interfaces of the target to be monitored, and analyzing how to access them. Then, a set of parameters contained within run-time information (provisioned by the PDPs) is used to set up the monitoring operation. Even external metering blocks are subject to this type of configuration. This approach makes it feasible to access components that were not initially foreseen, and immediately extends the policy-based configuration mechanisms to virtually any measurement element. Dynamic analysis techniques are also applied to the events produced by the event sources: their event structures are traversed and the information they contain is reorganized for automatic adaptation and delivery as structured events. The events thus become capable of further filtering evaluation within the notification channel. In the higher abstraction layers, the use of an event channel as the only means to interchange both events and configuration orders (embedded in filtering constraints) guarantees interface independence between the PDPs and the monitoring system.
Applying the building-block concept to the design of the data processing elements also pursues extensibility. First, a set of basic and generic data manipulation blocks has been defined (threshold surveillance, statistics blocks, and so on). Second, appropriate rules for connecting such blocks into manipulation chains have been specified. Finally, the configuration of the manipulation chains is flexibly defined in monitoring policies.
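As an illustration of the building-block idea (a sketch, not the FAIN code), the following chains a generic statistics block into a threshold-surveillance block; in FAIN, the layout of such a chain would be defined declaratively in a monitoring policy.

```java
import java.util.function.DoublePredicate;
import java.util.function.DoubleUnaryOperator;

// Illustrative data-manipulation blocks composed into a chain.
public class MonitoringChain {

    // Statistics block: exponential moving average with smoothing factor alpha.
    static DoubleUnaryOperator ema(double alpha) {
        double[] state = { Double.NaN };
        return x -> {
            state[0] = Double.isNaN(state[0]) ? x : alpha * x + (1 - alpha) * state[0];
            return state[0];
        };
    }

    // Threshold-surveillance block chained after an upstream block:
    // fires when the processed sample exceeds the limit.
    static DoublePredicate threshold(DoubleUnaryOperator upstream, double limit) {
        return x -> upstream.applyAsDouble(x) > limit;
    }
}
```

A chain built as `threshold(ema(0.5), 10.0)` raises an alarm only when the smoothed load, not a single raw sample, crosses the limit — the kind of composite behavior the manipulation rules make possible.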
Altogether, these techniques are used to create new data processing functionalities. Flexibility is achieved through special features of the distribution layer. Through the use of a notification channel, this layer decouples the PDPs from the network entities that generate the events. Furthermore, the distribution layer ensures that event traffic remains within limits. This feature and the hierarchical
arrangement of notification channels within the FAIN network promote scalability by preventing unsolicited element-level events from reaching the network-level management PDPs.

15.3.7 Policy Parser

The policy parser converts XML-encoded policies into their Java component counterparts. It provides mechanisms for ensuring policy correctness, and enables the automatic generation of the XML code corresponding to the different policy elements. The parser design distinguishes a set of structural classes from a set of classes actually holding the information. The former provide the connection points between the diverse information elements, whereas the latter offer, through their respective interfaces, the fields defined in the XML policy schemas.
One of the major features of this design is its extensibility. The structural classes contain unmarshaling methods, which allow dynamic population of the policy structures with information classes not even foreseen at design time. The information class look-up is performed based on data held in the XML policy itself, and is therefore a self-sufficient mechanism. Extensibility is also improved by the fact that each class holds its own marshaling and unmarshaling code, so that each new class embodies an ad hoc parser able to interpret the corresponding XML document.
In order to keep the design sufficiently simple and robust, only two main API subsets have been defined: the accessor methods for retrieving and storing the policy information, and the methods for realizing the marshaling and unmarshaling procedures. Nevertheless, after analyzing the API from the user perspective, it was found useful to provide an additional interface for performing the direct parsing of a complete XML document, making use of the methods offered by the rest of the policy classes. This entry point facilitates the correct use of the API, completely isolating applications from XML document management.
The parser provides an additional easy-to-use method for validating the XML policy against a specified schema. Simplifying the use of the parser has been a major design goal, and led to the definition of an API that reproduces the fields defined in the XML schema. The use of accessor methods and the implementation of the serializable interface favor dynamic discovery of the policy properties through introspection.
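The "each class embodies its own ad hoc parser" idea can be sketched as follows. The element and attribute names are illustrative, and the real parser is built on JDOM rather than the W3C DOM used in this sketch.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Element;

// Illustrative policy class that carries its own marshal/unmarshal code.
public class SimpleCondition {
    final String variable;
    final String value;

    SimpleCondition(String variable, String value) {
        this.variable = variable;
        this.value = value;
    }

    // Marshal: this class produces its own XML fragment.
    String marshal() {
        return "<SimplePolicyCondition variable=\"" + variable
                + "\" value=\"" + value + "\"/>";
    }

    // Unmarshal: rebuild the object from its XML form.
    static SimpleCondition unmarshal(String xml) {
        try {
            Element e = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)))
                    .getDocumentElement();
            return new SimpleCondition(e.getAttribute("variable"), e.getAttribute("value"));
        } catch (Exception ex) {
            throw new RuntimeException(ex);
        }
    }
}
```

Because every class knows its own XML form, the structural classes can populate a policy with information classes that did not exist at design time, exactly as described above.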
Figure 15.11 Policy components class diagram.
Figure 15.11 displays the main relationships between the classes involved in supporting the policy information. Each policy holds a list of ConditionReferences and ActionReferences, as well as the set of attributes defined in the IETF specifications. The ConditionReferences and ActionReferences point to PolicyConditions and PolicyActions, respectively, thus acting as general containers. Each policy class maintains a Java document object model (JDOM) element as a private attribute, which is joined to the particular part of the main XML document associated with the policy. The policy parser maintains an updated copy of the XML document, minimizing the amount of memory required to hold the data field values. This strategy also benefits the overall performance in two ways: first, the unmarshaling operations are delayed until the moment the information is actually required; and second, the marshaling procedures are carried out as soon as the information is available, which decreases the time required to provide the complete XML document once it is requested. Finally, the parser classes implement the storable interface defined as part of the database package. Through this interface, the classes' hierarchical relationship information is provided to the policy database controller in a flexible way.
Different types of policy conditions are represented as subclasses of PolicyCondition. The creation of complex policy conditions is automatically managed by the CompoundFilterConditions, which store the resulting set of ConditionReferences. An actual implementation of conditions is provided by the SimplePolicyConditions, which maintain the relationship between PolicyVariables and PolicyValues. PolicyActions follow a similar approach. Actions that are specific to different technology domains extend the fainSimplePolicyAction, which includes basic code for storing and manipulating the JDOM element.
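As an illustration only — the element names below mirror the class names just described, not the actual FAIN XML policy schema — a policy document following this structure might look like:

```xml
<Policy name="goldVAN">
  <ConditionReference>
    <SimplePolicyCondition>
      <PolicyVariable>availableBandwidth</PolicyVariable>
      <PolicyValue>2048</PolicyValue>
    </SimplePolicyCondition>
  </ConditionReference>
  <ActionReference>
    <SimplePolicyAction>allocateResources</SimplePolicyAction>
  </ActionReference>
</Policy>
```

Each element corresponds to one of the Java classes in Figure 15.11, which is what allows the class-by-class marshaling and unmarshaling described above.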
15.3.8 Policy Repository

The policy repository is supported on a Lightweight Directory Access Protocol (LDAP) directory, which provides content-based policy searches and distribution transparency. These features make it suitable for providing scalable storage solutions in networkwide systems such as the FAIN management system. In general, accessing LDAP directories from Java applications is enabled through the Java naming and directory interface (JNDI) API. JNDI offers an abstract view of the LDAP directory, hiding the LDAP-specific operations under a standardized interface suitable for interacting with different storage services that follow a similar approach. Figure 15.12 illustrates the basic blocks building the repository.
[Figure 15.12 layers: policy repository interface, over the JNDI API, over the naming manager and its drivers, over LDAP and DNS services.]
Figure 15.12 Policy repository access interface structure.
The LDAP directory might be accessed directly from the PDP internal components, or through an intermediate cache that would improve the overall efficiency. A reduced set of requirements has determined the policy repository design; namely:
• Access to the repository shall be technology transparent: neither LDAP- nor JNDI-specific issues shall be exposed to the outside components visiting the policy database.
• A centralized component (the controller) shall organize the directory look-ups.
• The policy directory shall provide searching mechanisms based on policy-, condition-, or action-specific attributes.
As a design decision to enhance performance, only essential look-up attributes are stored in the directory. Since only Java applications are intended to access the directory, the impact of this decision on interoperability is limited. The policy directory basically consists of a database access controller and a series of state factories and object factories appropriate for the different policy classes. As stated in [8], the database access controller provides a simple query interface for efficient database searching operations. The main responsibility of the controller is to "locate the requested policies, retrieve them, and return them in an appropriate format." The controller is also in charge of directing the storage process according to the specified hierarchical relationships. The state factories and object factories are merely format translators: the former convert Java objects into LDAP entries, and the latter perform the reverse translation.
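The content-based look-up can be sketched as a filter builder. The object class and attribute names (`fainPolicy`, `policyName`) are assumptions for illustration; the resulting string would be handed to a JNDI `DirContext.search()` call by the controller.

```java
import java.util.Map;
import java.util.stream.Collectors;

// Sketch: build an LDAP "AND" search filter from policy attributes,
// e.g. (&(objectClass=fainPolicy)(policyName=goldVAN)).
public class PolicyFilter {

    static String build(Map<String, String> attrs) {
        String clauses = attrs.entrySet().stream()
                .map(e -> "(" + e.getKey() + "=" + e.getValue() + ")")
                .collect(Collectors.joining());
        return "(&(objectClass=fainPolicy)" + clauses + ")";
    }
}
```

Keeping the filter construction in one place is what lets the controller remain the single component that knows about LDAP, as the requirements above demand.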
Each class that may be stored in the repository must implement the Storable interface, which contains operations for providing the hierarchical information not contained in the schema, and its identifier (a distinguished name that must be obtained so that there is no collision when storing the entry in the directory). The policy schema defines the types of policy classes that can be stored in the LDAP database, and the valid attributes for each of them. The hierarchical relationships existing between the classes are not reflected in the schema, but are maintained in the directory structure itself (as in the case of a file system).

15.4 NETWORK-LEVEL MANAGEMENT SYSTEM

Following the FAIN management system description, this section describes the main functionality of the network management system, captured through a use case diagram, and the components that realize it. The design of the NMS in the FAIN project takes into account three kinds of functional requirements: network provisioning, network maintenance and restoration, and network data management. Each of these has specific targets, as described in [9].

15.4.1 Use Cases

This section describes the use cases that are specific to network-level functionality; those related to both levels are covered in the element management system section. The use cases to be covered are delegate management functionality, manage service, interdomain management, use management instance, and signaling. Figure 15.13 shows the use case diagram of the NMS.

15.4.1.1 Delegate Management Functionality

The delegate management functionality use case is conceptually almost identical to the provision policy use case explained in the core use cases in Section 15.2.2.
The only difference is that, in this case, the provisioning actions are the creation and activation of a virtual active network and of a management instance for a new actor with certain access rights. As stated in the diagram, this use case can only be realized by the NIP, ANSP, or SP actors, and not by the consumer, since the consumer cannot delegate management functionality to any other actor.
15.4.1.2 Manage Service

A new function included within the FAIN management framework is reflected in the "manage service" use case. As stated in the use case diagram, the NIP, ANSP, and SP can realize this use case each time the SP wants to deploy a service through the service manager (SM, described later). The SM coordinates the creation of a VAN for deploying a given service as follows:
• Retrieve from the ASP all topological requirements associated with the given service.
• Generate the appropriate policies for allocating resources along the VAN path to be created.
• Generate the appropriate policies for delegating management functionality on the appropriate virtual environment nodes that constitute the VAN.
• Communicate with the ASP to trigger deployment of the given service within the already created/activated VAN.
15.4.1.3 Interdomain Management

This use case is mainly used to allow a FAIN ANSP to manage requests across FAIN domains. This happens when an SP wants to deploy a service over different administrative domains. For instance, it allows the resource manager (RM) to provide the best route across different domains.

15.4.1.4 Use Management Instance

As stated in the use case diagram, the SP and authorized consumers can make use of the functionality described by the provision policy use case to manage the allocated resources and services.

15.4.1.5 Request Decision Through Signaling

The signaling approach is another basic feature of a policy-based system, in which the managed device (AN) requests, through the policy enforcement point (EMS), a set of resources from the decision point (NMS). Depending on the available resources and on the policies available in the system, the policy decision point decides whether this request should be accepted, and thus whether the resources are allocated or the request is rejected.

15.4.2 NMS Components

Since some of the NMS components have been described in Section 15.1, and some of them will be described in Section 15.5, we are focusing on the
components of the NMS that perform functionality specific to this level, or on those components whose implementation is not needed at other levels. Examples of these components are the service manager, the resource manager, and the interdomain manager (IDM), which support service deployment, decision making with regard to resource control, and interdomain communication, respectively. These components deal with networkwide issues. The use case diagram below (see Figure 15.13) illustrates the management system's components and how the SM, IDM, and RM are integrated within the NMS.
Figure 15.13 NMS use case diagram (PBANMS; actors: NIP/ANSP/SP, SP/consumer, ASP, EMS, peer-domain IDM and SM; use cases: manage service, provision policy, request decision through signaling, delegate management functionality, use management instance, interdomain management, calculate VAN, create VAN, activate VAN).
Figure 15.14 shows the NMS architectural model, further explained below.

15.4.2.1 Policy Editor

The policy editor permits the creation and modification of management policies in a graphically assisted environment, as well as supervising the deployment of such policies within the network. To enhance manipulation of the policy information, it incorporates interpreters enabling the translation of the XML structures into graphical elements that represent each of the policy components. Such graphical elements are then hierarchically arranged and displayed, providing a view of the policy layout. In the policy view, it is possible to conduct fine-grained operations; for example, selecting a tree element causes its associated attributes to be displayed in a property sheet, so that they can be viewed or changed. In the tree view, new elements may be added directly into the policy structure. This process is
easily performed by selecting one of the available policy components (rule, condition, or action) in the toolbar, and subsequently clicking on the desired point in the tree. Once the policy is considered to be complete, deployment is initiated from the policy editor menu. A validation process is automatically carried out before actually deploying the policy, and the user is informed of any problems. The policy editor then contacts the ANSProxy and forwards the resulting XML document through the i_ANSProxy interface. During deployment, the editor gathers and displays the reports sent by the different entities involved in the enforcement chain, which allows the detection and location of any fault or conflict that may arise, and facilitates its solution.
Figure 15.14 Architectural model for the NMS (the ANSP management instance contains the policy editor, interdomain manager, service manager, ANSP proxy, PDP manager, resource manager, access rights check, monitoring system, policy repository, and the access rights delegation and QoS PDP/PEP pairs; it interfaces with the ASP, other SP management instances, and delegated management architectures).
15.4.2.2 Service Manager

The service manager is responsible for setting up a VAN for a particular SP and service. It receives as input the SLA (agreed between the ANSP and SP), the sites to interconnect, and the services to deploy. We refer to the first input as a static requirement, while the second and third are known as context information, which are essentially dynamic requirements. The SM uses this information, together with the topological service requirements imposed by the service (if relevant), which are retrieved from the NetASP, to generate the appropriate set of
network-level (NL) QoS and delegation of access rights policies. As a result of the enforcement of these policies, a VAN is created, and the SM contacts the NetASP to trigger the deployment of the service on the newly created VAN. Figure 15.15 illustrates this sequence of events.

Figure 15.15 Setup of a VAN for service deployment (sequence between SP, ANSP, service manager, NetASP, and NMS: negotiate SLA; register services; deploy service; retrieve SLA; retrieve service topological requirements; merge requirements; generate QoS and delegation policies; deploy policies; report about deployment; deploy service).
15.4.2.3 Interdomain Manager

In order to understand the functionality of the interdomain manager, we define a domain as a collection of nodes managed by a single administrative entity, where security and management policies are uniformly applied. The interdomain manager is thus the component of the management system in charge of implementing the end-to-end negotiation of service deployment onto active nodes that belong to different administrative domains, managed by different organizations across the Internet. When the network management system receives a customer's service deployment request that involves a target node outside its domain, the IDM contacts an equivalent IDM of another domain, and negotiates service deployment in one or more of that domain's active nodes, for use by the customer of the first domain. Figure 15.16 shows the interaction between two IDMs that belong to different domains, and the entities involved in interdomain service deployment.
In our design, each element management system manages only a single active node. A network-level management system (NMS) oversees these EMSs, and the IDM is located within the NMS.

Figure 15.16 An abstract depiction of the interdomain manager (the IDMs of domain 1 and domain 2, each atop an NMS overseeing several EMSs, interact in four steps: 1, initiation; 2, negotiation; 3, agreement; 4, enforcement; traffic leaves domain 1 through an egress edge router and enters domain 2 through an ingress edge router).
15.4.2.4 Resource Manager

The resource manager is responsible for maintaining a global view of connectivity and of the availability of resources in the managed domain, and for establishing an end-to-end path for the installation of active services, taking into account the specific service requirements obtained from the QoS policies and the service descriptor, which is a component of the ASP described in Chapter 16. In order to keep a current view of network connectivity, the RM stores information about all the nodes and links of the active network. For each active node, the available information is: its public IP address, its private IP address, the types of virtual environments supported by the node, the links attached to the node, and, finally, the IP address of the element-level manager responsible for the node. For each link, the available properties are: the IP addresses of the start and end nodes, the total capacity of the link, and the currently available bandwidth of the link. Additionally, the RM stores a list of the
virtual active networks that have been established in the system. Each VAN is associated with a unique identifier indicated by the service manager.
The main role of the RM is to determine a suitable path for the installation of an end-to-end service. The procedure is triggered by the QoS PDP, which calls a request path operation, providing all the resource and topological requirements of the service. Looking at its internal database, the RM tries to find suitable paths in the network that satisfy the requirements given by the QoS PDP. If the search is not limited by other constraints (for example, choosing the shortest path), a set of different paths will result. All of these paths are candidates for the creation of the new VAN that will accommodate the service. The selected paths fulfill the resource and topological requirements of the service. However, a service can have additional requirements that can be extracted from the service descriptor. The resource manager does not have access to such information, as this lies within the domain of the ASP, so the ASP network manager makes the final decision. The resource manager sends the list of valid paths to the ASP network manager via the calculateBestCandidates operation, and gets as output the most suitable path. For each path, the resource manager also includes information about the properties of its nodes and links, based on which the ASP network manager will be able to perform its requirements matching and identify the most suitable path. When the ASP network manager returns the final path, the resource manager stores the new VAN, and updates all related information to reflect the establishment of the new VAN. Finally, the information about the new VAN is sent back to the QoS PDP, which will enforce the necessary actions for the establishment of the VAN.
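The candidate-search step can be sketched as follows; the types and field names are illustrative, not the actual RM database, and the final choice among the surviving paths is left to the ASP network manager as described above.

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of the RM's candidate search: keep only those paths whose
// links all still offer the bandwidth requested by the QoS PDP.
public class PathSearch {

    static class Link {
        final String from, to;
        final double availableMbps;
        Link(String from, String to, double availableMbps) {
            this.from = from; this.to = to; this.availableMbps = availableMbps;
        }
    }

    // Return the candidate paths satisfying the bandwidth requirement.
    static List<List<Link>> candidates(List<List<Link>> paths, double requiredMbps) {
        return paths.stream()
                .filter(p -> p.stream().allMatch(l -> l.availableMbps >= requiredMbps))
                .collect(Collectors.toList());
    }
}
```

Topological constraints (required VE types, mandatory nodes) would be expressed as further predicates in the same filter chain.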
If for some reason the new VAN cannot be successfully set up, the QoS PDP can roll back the process by setting the status of this VAN to "FAILED," which leads to the removal of all information associated with it. The resource manager also supports the extension of an existing VAN; for example, when a service provider has already obtained a VAN, but wishes to install additional service components to offer his service to new customers. VAN extension follows the same procedure as VAN creation, except that existing nodes are also taken into consideration.

15.4.2.5 Monitoring System at the Network Level

The monitoring system at the network level, shown in Figure 15.17, is connected to the diverse notification channels at the element level, like any other consumer, and provides the event correlation capabilities required for obtaining a precise picture of the active network status. The definition and design of appropriate filters avoid event flooding and provide the necessary degree of scalability. The monitoring
policies at the network level define the composite events as a set of simple events produced in a specified order. From the monitoring policy, appropriate filters are generated by a composite event controller and sent to the EMS event channels. At the same time, appropriate components are configured in the NMS in order to detect the appearance of a succession of events coming from either the same or different EMS stations.
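The composite-event detection just described can be sketched as follows, assuming events are identified by a type string (an illustration, not the FAIN event model):

```java
import java.util.List;

// Sketch of a composite-event instance: it consumes a fixed sequence of
// simple events and raises the composite event only once all of them
// have arrived in the specified order.
public class CompositeEvent {
    private final List<String> expected; // one row of the "event matrix"
    private int next = 0;

    CompositeEvent(List<String> expected) { this.expected = expected; }

    // Feed one simple event; returns true when the composite event fires.
    boolean onEvent(String type) {
        if (next < expected.size() && expected.get(next).equals(type)) next++;
        return next == expected.size();
    }
}
```

In the real system such an instance subscribes to the element-level notification channels, and completion of the matrix raises the network-level alarm.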
Figure 15.17 The main NMS monitoring components (the monitoring PDP and the composite event controller in the NMS; extended event channels in the EMSs; numbered interactions: 1, PIB; 2-3, filters; 4, events).
The composite event controller generates composite event instances that become event consumers, subscribed to receive the events of which the composite event is composed. The configuration of the composite event leads to the definition of an event matrix. Whenever an event is received by the composite event instance, it checks the order of appearance (relative to the other events) and stores it in the event matrix. When the event matrix is completed, a composite event is raised, and appropriate alarms are generated at the network level. The NMS monitoring system may also act as a server that simply gathers all the events following a certain pattern or associated with a given technology domain.

15.4.2.6 Quality of Service PDP

The quality of service policy decision point (QoS PDP) is an adaptation of the core PDP for QoS configuration. That is, policies processed by this PDP are oriented to the differentiation of certain flows, or groups of flows, with an enhanced quality of service. At the network level, this PDP component plays a fundamental role within the policy-based management architecture.
To deliver these capabilities, the QoS PDP needs to exchange information with the monitoring system and resource manager components, and to make proper decisions based on policy conditions and network or node status. In addition, the QoS PDP contains two types of subcomponents: the condition and action interpreters.
Figure 15.18 Policy processing by the QoS PDP (sequence: the PDP manager calls setPolicies() on the QoS PDP, which performs Evaluation(); the fainQoSAllocAction interpreter runs interpretAction() and executeAction(), calling getPath() on the resource manager, which in turn calls calculateBestRoute() on the ASP).
They provide action and condition processing logic for the policy types handled by this PDP. The QoS PDP can be dynamically extended, by contacting the ASP system, to accommodate additional interpreters capable of processing new actions and conditions conveyed by the policies. The most important interactions of the component are shown in Figure 15.18.
As part of our management system design, we have split the VAN specification into three main parts: QoS parameters, computational QoS parameters, and specific service requirements. When the QoS PDP receives a policy, only the information relative to QoS parameters (such as bandwidth and priority) is used. As previously mentioned, this data is obtained from the SLA between the ANSP and the SP. The corresponding information of the other two parts is obtained from the resource manager, through the monitoring system and the ASP. The action interpreter contains the functionality to interact with the resource manager and monitoring system components, in order to request from them the path of active nodes that will form part of the VAN, as well as the information relating to computational QoS parameters and service requirements. This request needs to carry additional service information; mainly, the name of the service, the virtual network ID (VNID), and the action mode (i.e., active, remove, modify, and so forth).
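The action interpreter's call into the resource manager (steps 3 to 5 of Figure 15.18) can be sketched with hypothetical signatures; the real operations and their parameters differ.

```java
import java.util.List;

// Sketch of a QoS action interpreter requesting a VAN path from the
// resource manager, passing the service name, VNID, and action mode.
public class QosActionInterpreter {

    interface ResourceManager {
        List<String> getPath(String service, String vnid, String mode);
    }

    private final ResourceManager rm;

    QosActionInterpreter(ResourceManager rm) { this.rm = rm; }

    // Execute an allocation action: obtain the path of active nodes that
    // will be attached to the VAN requirements forwarded to the QoS PEP.
    List<String> executeAction(String service, String vnid, String mode) {
        return rm.getPath(service, vnid, mode);
    }
}
```

New action types would be handled by downloading further interpreters through the ASP, as the text describes.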
Network Management
355
Once the resource manager sends back the chosen path, the QoS PDP adds this information to the rest of the VAN requirements, and integrates them in a structure that is forwarded to the QoS PEP, in order to generate and distribute element-level policies.

15.4.2.7 Quality of Service PEP

The quality of service policy enforcement point (QoS PEP) component at the network level has, from the conceptual point of view, the same functionality as that at the element level, except that signaling support is not provided at the network level, for the reasons given in the common components section. However, the actual processes needed to realize the functionality (i.e., translation of policy decisions into commands that are understandable by the target) vary significantly, since at the network level these processes are translations from network-level to element-level policies. This fact is reflected in the use case diagram shown in Figure 15.19.
[Figure 15.19: use case diagram of the network-level QoS PEP, showing the PDP and the use cases "enforce decision," "map action to interface," and "demux policies to EMSs."]
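The network-to-element translation performed by the QoS PEP can be sketched as a fan-out of one network-level policy into one element-level policy per node on the chosen path, each of which would then be demultiplexed to the corresponding EMS. The field names below are assumptions for illustration.

```python
def translate_to_element_policies(network_policy, path):
    """Map one network-level QoS policy onto each active node of the
    path chosen by the resource manager (a sketch, not the FAIN code)."""
    element_policies = []
    for node in path:
        element_policies.append({
            "target_node": node,                       # EMS to receive it
            "vnid": network_policy["vnid"],            # carried through
            "bandwidth": network_policy["bandwidth"],  # same per-hop share here
        })
    return element_policies

eps = translate_to_element_policies(
    {"vnid": 7, "bandwidth": "10Mbps"}, ["AN1", "AN3", "AN4"]
)
assert [p["target_node"] for p in eps] == ["AN1", "AN3", "AN4"]
```

A real PEP would apply per-hop admission logic rather than copying the bandwidth field verbatim; the sketch only shows the demultiplexing structure.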
[Listing omitted: node-level service descriptor for the service transcoder1 from provider FT, version 0.1, specifying mainClassName org.ist_FAIN.services.transcoder1.TranscoderManager; mainCodePath /usr/local/jmf2.1.1/lib/jmf.jar:/usr/local/jmf-2.1.1/lib/sound.jar:/usr/local/jmf2.1.1/lib:code/demux.jar; AdmissionTimeOut 30000; JVM 1.3.1; and code module jvm.transcoder1.FT.transcoder1.jar.]
Figure 16.5 (b) Example of a node-level service descriptor.
Service Deployment in Programmable Networks
385
16.4 ASP COMPONENTS

In this section, the design and implementation details of the ASP components are given.

16.4.1 Network ASP

To deploy a service optimally in an active network, the network characteristics must be considered during the deployment process. The previous work on deployment in the FAIN project focused on element-level aspects, and thus extensions at the network level were required. This section describes the design and implementation of the network-level ASP, which has been significantly extended since the last public description of the FAIN ASP system [1].

16.4.1.1 Network ASP Manager

The network ASP manager is introduced as the representative of the network ASP: it is the ASP component with access to the network view. The network ASP is the network-level counterpart of the node ASP. The network ASP manager introduced in FAIN deliverable 5 [2] did not have such a network view, since the ASP part of that document focused on the node ASP design only. There, the network ASP manager was used as a simple initiating component at the network level for the deployment of a service to a single node. It had no network knowledge of any kind; it only knew about one single node, where it initiated deployment of a single service. The current design of the network ASP includes such a network view. This network view is restricted to service-specific information, so the network ASP has exclusive knowledge about services, and contributes it at specific stages of the service deployment process, to support decisions that must be taken at the network level first, so that the service deployment process can then continue at the element level.

16.4.1.2 Service Registry

The service registry sends information to the ASP network manager on request. The ASP network manager is responsible for combining this information to fulfill the user request. No further processing is needed in the case of (un)register requests.
There is more work to do for a deployment of the user's service: resolving service dependencies and asking the service registry for other information, asking for a download of the right code at the right code repository, downloading the code to the appropriate active node, and so on.
The service registry must register new services, unregister old services, and find services (based on the name). The service name must be unique (in order to clearly distinguish services): It is composed of the name of the service concatenated with the name of the service provider (for example, VideoTranscoder_FTR&D). No public attribute is necessary. Only four methods are public:

• The registerService method: In order to register a service into the service registry, the service name must be passed with a descriptor, describing the service. This descriptor is an XML file, mapping the Chameleon requirements. If the service is already registered (a previous version has already been registered), then the service registry registers this new request as a new version of the service, and increments the version number.
• The unregisterService method: Only the service name is passed to the service registry, and the latter removes it from the database, together with all the descriptions related to it.
• The fetchService method: Only the service name is passed to the service registry, and the latter is responsible for retrieving the XML descriptors in the database and sending them back to the client (ASP network manager or local service registry). If there are several versions of the service (and hence several XML descriptors), all the XML descriptors are sent back.
• The getServicesList method: No input parameter is given. When receiving this request, the service registry sends back all the registered services.
Some exceptions are also defined: checking the correctness of the name, the syntax of the XML descriptor, and so on. In the initial implementation, this registry is stored in regular files rather than in a database (because the data volume is small). We have decided to pass the XML file to/from the service registry as a string, on the assumption that a string can be long enough to represent the XML descriptor. Another option was to implement an HTTP server, and to request a GET/PUT method on this server, but in this case we would no longer be in a CORBA architecture (as defined by the FAIN members).

16.4.1.3 Service Repository

The service repository contains the implementation components for the services that are available in the network. These components can be specific to an implementation from a particular vendor or for a specific EE type. The repository stores only the code files. Additional information, about which components are required for the service or how these must be configured, is stored in the service registry. In the FAIN network architecture, there is a central service repository for each administrative domain.
The main requirement of the service repository is to function as a file server containing the code modules that can be injected into the active nodes. The repository can be implemented using existing file transfer mechanisms. In the initial implementation, an HTTP server is used as the service repository. An alternative implementation using plain sockets is also under development.

16.4.2 Node ASP

The node ASP must run on each active node where services are to be deployed. It is contacted by the network ASP either to check whether a service component is deployable on the given node, or to perform the actual deployment. On the node level, the following components make up the ASP system: the node ASP manager, the service creation engine, the code manager, the local service registry, and the local service repository.

16.4.2.1 Node ASP Manager

The node ASP manager is implemented as a stationary agent running in a Grasshopper agent system. The connection of the node ASP manager to the DEMUX component is realized by using the communication service facilities of Grasshopper. A (mobile) deployment agent, encoded in an ANEP packet, is received via that communication service from the network level (i.e., from the network ASP manager).

16.4.2.2 Local Service Registry

The local service registry (LSR) is responsible for managing service descriptions locally inside the active node. Its role is to fetch service descriptions and to store them in a cache. When the code manager (CM) wants to deploy a service, it asks the LSR for the descriptions of this service. If this service has already been deployed (or previously requested by the CM), the LSR keeps the descriptions in its cache, and can give them back to the CM directly. If a description is not known locally, the LSR contacts the network service registry and fetches the descriptions for the service. The LSR then stores them locally (keeping them in the cache), and sends this information back to the CM. The LSR can also reply to the CM if the CM wants to get the list of all available services. This option is unlikely to be used, since listing services is not the role of the CM, but it is possible.
If the CM makes such a request, the LSR will contact the network service registry to retrieve the list of services. This list is not cached, in order to always receive the up-to-date list. No public attribute is necessary. Only two methods are public:

• The fetchService method: Only the service name is passed to the local service registry, and the latter is responsible for retrieving the XML descriptors (locally if stored, or remotely with a CORBA invocation to the network service registry if not) and sending them back to the code manager. If there are several versions of the service (and hence several XML descriptors), all the XML descriptors are returned.
• The getServicesList method: No input parameter is given. When receiving this request, the local service registry asks the network service registry for this information, and sends back all the registered services (received from the service registry) to the CM.
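The LSR's two methods follow a cache-aside pattern: descriptors are cached, while the service list is always forwarded so it is never stale. The sketch below assumes a stub in place of the CORBA invocation to the network service registry; all names are illustrative.

```python
class LocalServiceRegistry:
    """Cache-aside sketch of the LSR: descriptors are cached locally,
    the service list is always fetched from the network registry."""

    def __init__(self, network_registry):
        self._network = network_registry
        self._cache = {}  # service name -> list of descriptors

    def fetch_service(self, name):
        if name not in self._cache:  # cache miss: go to the network registry
            self._cache[name] = self._network.fetch_service(name)
        return self._cache[name]

    def get_services_list(self):
        # Never cached, so the CM always sees the up-to-date list.
        return self._network.get_services_list()

class StubNetworkRegistry:
    """Stand-in for the remote (CORBA) network service registry."""
    def __init__(self):
        self.fetch_calls = 0
    def fetch_service(self, name):
        self.fetch_calls += 1
        return ["<descriptor/>"]
    def get_services_list(self):
        return ["svc_A"]

net = StubNetworkRegistry()
lsr = LocalServiceRegistry(net)
lsr.fetch_service("svc_A")
lsr.fetch_service("svc_A")  # second call served from the cache
assert net.fetch_calls == 1
```

The asymmetry (cache descriptors, never cache the list) mirrors the trade-off stated in the text: descriptors rarely change, while the registered-service list must be fresh.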
Some exceptions are also defined: checking the correctness of the name, the syntax of the XML descriptor, and so on.

16.4.2.3 Local Service Repository

The local service repository caches the service implementation components that have been recently downloaded to the active node, so that if they are requested in the future, it will not be necessary to retrieve them from the network. The number of components that are cached depends on the available storage space on the node. If the available space is exhausted, a replacement algorithm is applied to delete an unnecessary component from the cache in order to store a new one. The repository itself does not have the logic for these checks; this is the responsibility of the code manager. In this context, the local cache can be considered a simple back end of the code manager.

16.4.2.4 Service Creation Engine

The service creation engine (SCE) [3] selects the appropriate code modules to be installed on the node in order to provide the requested service functionality. The SCE matches service component requirements against node capabilities, and performs the necessary dependency resolution. Since the service creation engine is implemented on each active node, active node manufacturers can optimize the mapping process for their particular node. In this way it is possible to exploit proprietary, advanced features of an active node. The selection of service components is based on service descriptors. Moreover, service descriptors describe how service components are bound to each other. This type of information is extracted by the SCE and passed to the code manager.
Install service component flow: The SCE is responsible for mapping a service component name to code modules suitable for the local node environment. The SCE starts with a service component name that stands for a specific type or functionality. Based on the service component name, the SCE requests a list of matching service component descriptors. From this list, the SCE selects—based on the available EEs—the appropriate service component descriptor. If a service component descriptor contains a nonempty list of service component names that it depends on, the resolution process continues in a recursive manner. A service component descriptor might contain a reference to a code module. If such a service component is selected by the SCE, the necessary information is stored in the serviceComponents data structure. The resolution process terminates when all dependencies are resolved. The binding between the code modules is determined based on the information available from the different service descriptors, and is stored in the connectionInfos data structure. The SCE subsequently requests the download and installation of the compound implementations and implementations from the code manager. The necessary information is in the installation map, which is passed to the code manager using the fetchAndInstallServiceComponents method.
Check service flow: The checkService flow works basically the same way as the installServiceComponent method, except that the code manager is not contacted to trigger the download and installation of code modules.

16.4.2.5 Code Manager

The code manager performs the EE-independent aspects of service component management. During the deployment phase, it fetches the code modules identified by the service tree from the service repository. It also communicates with node management to perform the EE-specific part of the installation and instantiation of code modules. The code manager maintains a database containing information about installed code modules and their association with service components. The code manager is a node-level ASP component that maintains the information about the code modules installed on the node. This component is contacted after the service creation engine has resolved the dependencies of the service component whose deployment was requested.
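The recursive resolution performed by the SCE in the install flow can be sketched as a depth-first traversal that produces an installation map. The descriptor schema below (`depends_on`, `code_module`) is an illustrative simplification of the XML descriptors, not the actual Chameleon format.

```python
def resolve(component, descriptors, installation_map=None, seen=None):
    """Recursively resolve service component dependencies into an
    installation map (dependencies first), as the SCE install flow does."""
    if installation_map is None:
        installation_map, seen = [], set()
    if component in seen:          # already resolved on another branch
        return installation_map
    seen.add(component)
    desc = descriptors[component]
    for dep in desc.get("depends_on", []):   # resolve dependencies first
        resolve(dep, descriptors, installation_map, seen)
    if desc.get("code_module"):              # descriptor references a module
        installation_map.append((component, desc["code_module"]))
    return installation_map

descs = {
    "transcoder": {"depends_on": ["jmf"], "code_module": "transcoder.jar"},
    "jmf": {"depends_on": [], "code_module": "jmf.jar"},
}
assert resolve("transcoder", descs) == [("jmf", "jmf.jar"),
                                        ("transcoder", "transcoder.jar")]
```

The resulting list plays the role of the installation map that the SCE would pass to the code manager via fetchAndInstallServiceComponents; the `checkService` variant would run the same traversal without triggering downloads.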
The service component information comprises:

• Service component dependencies:
  a) Resolved intercomponent dependencies, in the form of a list of all the service components that the given service component depends on.
  b) Environment dependencies; that is, the dependencies on the execution environment that the code module associated with a service component is supposed to run in.
• Service component local installations:
  a) Expiration date;
  b) VE identifier and EE identifier, where the code modules are installed.

The code manager holds the information about the installed service components in a data structure forming a directed acyclic graph (DAG). The nodes in the data structure represent the service components installed on the node, whereas the edges represent the dependencies between these components. The DAG has two levels: The first level consists of nodes representing service components whose installation was requested by the SCE. Nodes on the second level represent the service components that the first-level components directly or indirectly depend on. The information maintained by the code manager is updated by:

• The SCE, when it requests the fetching and installation of a service component.
• The code manager itself, whenever a service component expires and needs to be uninstalled from a given target environment.
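A minimal sketch of the code manager's installed-component records might look like the following. The structure (a dict of nodes with dependency edges and an expiration timestamp) is our own illustration of the DAG described above; timestamps are abstract integers rather than real dates.

```python
class CodeManagerDB:
    """Sketch of the code manager's database: installed service
    components, their dependency edges, and expiration dates."""

    def __init__(self):
        self.nodes = {}  # component -> {"deps": [...], "expires": timestamp}

    def install(self, component, deps, expires):
        self.nodes[component] = {"deps": list(deps), "expires": expires}

    def dependencies(self, component):
        return self.nodes[component]["deps"]

    def expired(self, now):
        """Components whose expiration date has passed and which the
        code manager itself would therefore uninstall."""
        return sorted(c for c, n in self.nodes.items() if n["expires"] <= now)

db = CodeManagerDB()
db.install("transcoder", deps=["jmf"], expires=100)  # first-level node
db.install("jmf", deps=[], expires=200)              # second-level node
assert db.dependencies("transcoder") == ["jmf"]
assert db.expired(now=150) == ["transcoder"]
```

A production version would also record the VE and EE identifiers per installation and check that an expired component is not still depended on before uninstalling it; the sketch omits those checks.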
16.5 CONCLUSION

The active service provisioning system enables on-demand deployment of heterogeneous, distributed, component-based active services in the FAIN network. The main features of the system are:

• A two-layered architecture: The rationale for choosing this architecture was a separation of concerns in the service deployment problem space. Whereas the network-level ASP deals with network issues, including identifying the nodes of the target environment for a given service with regard to the topological service requirements and the network quality of service requirements, the node-level ASP is concerned with node-specific requirements, including technology and other service dependencies.
• Heterogeneous active service support: The ASP enables deployment of active services independently of the implementation technology they are based on. As long as a service is structured in terms of components, and described using universal service descriptors defined in XML, it can be deployed using the ASP in the same way.
• MultiEE node support: The ASP allows for deployment of active services on top of multiexecution environment nodes. A service may consist of components to be deployed in different execution environments on a node. The decision as to which EE should be chosen depends on the execution capability and the availability of a suitable component implementation.
• Deployment support for service components in different planes: The ASP is designed for deploying service code independent of its purpose. In the same way, it is possible to deploy components to run in the management, control, and data planes.
• A hybrid two-phase process for the selection of a target environment: The selection of active nodes suitable for the deployment of active services is designed as a hybrid of a centralized preselection of the candidate nodes to be used for service deployment, and a decentralized check that the actual capabilities of the candidate nodes suffice for the service needs.
• A universal service meta-information description: The service descriptors are expressed in XML, a commonly used standard generalized markup language (SGML)-based language standardized by the World Wide Web Consortium (W3C). By applying this language, the descriptors are easy for service providers to write, easy for programs to process (e.g., to generate automatically while developing a service, or to parse) and, last but not least, easy to extend. The common availability of parsers also makes software that processes XML-based service descriptors easy to port to other platforms.
• The binding of service components: The FAIN ASP also supports the binding of the service components forming a service. A service descriptor enables one to describe the way in which the components should be connected to each other, and the node-level ASP interprets this information and performs the necessary actions.
The network-level ASP has been significantly extended from its rudimentary form to allow the deployment of active services on a preselected number of nodes. This was achieved by adding the automatic selection of the target nodes, based on matching the service requirements and the locations of the service components with the network capabilities. At the node level, the ASP code has benefited from more thorough testing through the more advanced test case scenarios required for deploying more complex services. These services consist of components deployed on different active nodes distributed in the network.
Future work will focus on providing improved support for service reconfigurability. Deployment algorithms and a more optimized target environment selection algorithm will also be investigated.

References

[1] Solarski, M., Bossardt, M., and Becker, T., “Component Based Deployment and Management of Services in Active Networks,” IWAN’02, Zürich, Switzerland, December 2002.
[2] FAIN Deliverable D8 - Final Specification of Case Study Systems, May 2003, http://www.ist-FAIN.org.
[3] Bossardt, M., et al., “A Service Deployment Architecture for Heterogeneous Active Network Nodes,” Seventh Conference on Intelligence in Networks (IFIP SmartNet 2002), Saariselkä, Finland, April 2002.
Chapter 17
DiffServ Scenario

In the context of the DiffServ scenario [1, 2] it was decided to create a new resource management component, namely DiffServ management, to be a part of the VE management system [3] and to support DiffServ functionality. The differentiation of services, which is the basic concept of DiffServ, includes the special treatment of services with respect to the resources that they consume. Therefore, and taking into consideration the nature of the application used for demonstration purposes, the involvement of traffic management [4], which is responsible for the outbound link, is considered mandatory. The objective of this chapter is to describe the relationship between DiffServ management and traffic management in the context of VE management, and to describe the interactions that take place between them as part of the DiffServ scenario.

17.1 INTRODUCTION

This chapter describes the differentiated services (DiffServ) scenario, full details of which can be found in [1, 2]. The first generic scenario is a WebTV demonstration. We can demonstrate the WebTV scenario as a separate demonstration. However, if the QoS of the network for data transmission is not guaranteed, the video program data may not be delivered properly to a receiver client. An SP gets some resources from an ANSP to create a VE or VAN in order to provide the WebTV service. For example, 10 Mbps of bandwidth is requested to create the VAN that includes multiple active nodes. In this case, the SP will use the 10 Mbps to provide multiple services that include the WebTV service [6]. In other words, the 10 Mbps is shared by multiple services. This means that other types of flows may damage a real-time flow. To avoid this kind of damage, resources for a specific flow must be assigned. However, a service that uses the real-time transport protocol (RTP) selects one of the available ports dynamically. Therefore, we cannot identify the specific flow before the service selects the port number for the RTP flow. In this sense, we need to assign some resources after the service has selected the port for the RTP flow.
To perform this kind of resource allocation, an active packet is useful. Since only resources on the routing path for transmitting the video data should be reserved, the active packet that is sent from a server to a receiver client can easily detect which active nodes are involved in transmitting the video data to the receiver client. Therefore, the active packet can assign the resources for the video transmission to the appropriate active nodes. That is one example of the use of the active packet. Another example is as follows. During Internet shopping, if the protocol is changed from hypertext transfer protocol (HTTP) to hypertext transfer protocol over secure socket layer (HTTPS), the service provider may wish to reserve some bandwidth for that HTTPS connection. In that case as well, dynamic resource allocation by the active packet is useful. In the following sections, the use of the active packet is described, and the final section provides a summary of our proposals.

17.2 ARCHITECTURE

DiffServ management and the traffic manager belong to the family of resource managers and therefore have similar functionality. In fact, they are component managers, with the only difference being the resource components that they manage. In the case of the traffic manager, the managed components are the traffic classes, while in the case of the DiffServ manager they are the DiffServ classes. The VEs have a one-to-one relationship with every resource component. The managers are responsible for the creation of the VE, while in fact the VE is an aggregation of components. Furthermore, they are responsible for the reconfiguration of the VE, in that they reconfigure the components that constitute it. A DiffServ class is required to enable the VEs to control their services according to the DiffServ privileges. This is in addition to the traffic class, which represents the outbound bandwidth that has been allocated.
The DiffServ class associates the different flows with specific DSCP numbers. The DSCP numbers correspond to different levels of service. For example, DSCP-1 corresponds to a better service level than DSCP-2. The mapping between the DSCPs and the service levels is not common to the whole FAIN network, or even among the different VEs in the same FAIN node. Every VE has a different mapping between the DSCP numbers and the service levels, and also has a different range of service levels. For example, a VE may have five different service levels, while a different VE of the same node may use only two service levels; and the DSCP-3 of the first VE may have a better service level than the DSCP-1 of the latter. Thus, for every DiffServ VE, we need a different DiffServ class, which maps between the service levels and DSCP numbers that apply to that VE. The actual service level that corresponds to a specific DSCP number of a specific VE depends on the capacity of the resources that the services (which belong to that VE and have that DSCP number) have access to. In our case, the
only resource that can be managed is the outbound bandwidth; therefore, the mapping between the different service levels and the resources is the mapping between the different service levels and the outbound bandwidth (BW). The traffic class of a DiffServ VE is responsible for mapping between the different DSCP numbers and the bandwidth. Of course, the sum of the bandwidth that corresponds to the different DSCP numbers of a VE cannot exceed the total bandwidth that has been allocated to the VE. A simple example that shows the way the outbound bandwidth is shared among the different VEs and DSCP numbers, and the mapping of DSCP numbers to specific parts of the BW, is illustrated in Figure 17.1.
[Figure omitted: diagram showing the node outbound bandwidth split between VE-1 BW (25%) and VE-2 BW (75%), with each VE's share further subdivided among its DSCP numbers (DSCP-1, DSCP-2, DSCP-3) in the indicated percentages.]
Figure 17.1 Example of bandwidth sharing between two DiffServ-enabled VEs.
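The partitioning in Figure 17.1 can be sketched as a simple function that subdivides a VE's bandwidth among its DSCP numbers, enforcing the constraint (stated below) that the DSCP subpartitions cannot exceed the bandwidth allocated to the VE. The function and its percentage-based interface are illustrative assumptions.

```python
def partition_ve_bandwidth(total_kbps, dscp_percent):
    """Partition a VE's outbound bandwidth among its DSCP numbers.
    `dscp_percent` maps DSCP number -> percentage of the VE's total;
    the percentages may not sum to more than 100."""
    if sum(dscp_percent.values()) > 100:
        raise ValueError("DSCP shares exceed the VE's bandwidth allocation")
    # Integer arithmetic avoids floating-point rounding surprises.
    return {dscp: total_kbps * pct // 100 for dscp, pct in dscp_percent.items()}

# Suppose VE-1 holds 25% of a 40-Mbps outbound link, i.e., 10,000 kbps.
alloc = partition_ve_bandwidth(10_000, {1: 50, 2: 30, 3: 20})
assert alloc == {1: 5000, 2: 3000, 3: 2000}
```

Requesting shares that sum beyond 100% raises an error, mirroring the rule that the sum of the DSCP bandwidths cannot exceed the VE's total.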
The actual mapping between DSCP numbers and subparts of the bandwidth of a VE is ready immediately after the creation of the VE, and cannot be changed unless a VE reconfiguration takes place. By contrast, after the creation of the VE there is no assignment of flows to specific DSCP numbers. The VE owner does this on the fly, by accessing the interface that the DiffServ class provides to the users of the VE. During VE creation, the traffic class of the VE creates the bandwidth partition and assigns the DSCP numbers. The VE manager gets a resource profile for the creation of a new VE that includes the bandwidth that is required. In the case of a DiffServ VE, the VEM gets the mapping between the DSCP numbers and the VE’s bandwidth. The VEM requests the traffic manager to create the traffic class that will have access to the required bandwidth partition, and will map between the DSCP numbers and the bandwidth subpartitions. Furthermore, the VEM requests of the DiffServ manager the creation of a DiffServ class, which will know only the
range of the DSCP numbers that it will be able to use, and not which flows will belong to which DSCP number. After the VE is ready, the user of the VE can access the interface that is provided by the DiffServ class, in order to request the assignment of a specific DSCP number to a flow. As previously mentioned, DiffServ and traffic classes are members of the resource classes family. Consequently they have similar design and functionality. Figure 17.2 illustrates the general resource class and the functionality that it supports.
[Figure omitted: internals of a resource controller, showing (1) an initialization API and (2) an open control API, with internal blocks for the security check (connected to the security API), validity check, request processing, and platform adaptation (connected to the platform).]
Figure 17.2 Internals of a resource controller.
Resource controllers have two different interfaces: an initialization API that is used by the corresponding manager to configure the resource controller during its creation, and an open control API that is exported to the VE in order to give the VE owner access to the resource. Apart from the security check, the rest of the internal functionality has a different meaning for the DiffServ and traffic controllers, respectively. After the open control interface of a resource class is accessed, the same sequence of actions takes place. First, the request is checked for security by accessing the authorization operation of the security framework. Then the validity of the request is checked and, if needed, the request is processed. Finally, the platform is accessed and all the appropriate actions and configurations take place in order to satisfy the request. A description of the DiffServ and traffic classes follows.

17.2.1 Traffic Controller
The traffic class represents the part of the outbound bandwidth that has been allocated to a VE. In the case of the GR2000 router, this is done by the creation of
a GROUP, and in the case of Linux this is done by the creation of a bounded and isolated bandwidth class in the Linux traffic controller. In the case that a DiffServ VE will use this bandwidth, the traffic class is responsible for partitioning the bandwidth into smaller parts, and assigning these parts to specific DSCP numbers. The mapping is kept internally, and achieved by the configuration of the platform (GR2000 or Linux TC). For example, a mapping between bandwidth and DSCP numbers can be as shown in Table 17.1.

Table 17.1 Mapping Between DSCP Numbers and Bandwidth

DSCP Number    Bandwidth
1              2 Mbps
2              1 Mbps
3              500 kbps
The mapping is ready after VE creation and cannot be changed unless the traffic manager reconfigures the traffic class during the reconfiguration phase. For the DiffServ scenario, the open control interface of the traffic class is not used.

17.2.2 DiffServ Controller
The DiffServ manager creates a DiffServ class for every DiffServ VE, and configures it to use only the range of DSCP numbers available for the VE. The DiffServ class exports an open interface to the VE, in order to be able to assign the DSCP numbers to specific flows. The mapping is kept internally. An example is shown in Table 17.2.

Table 17.2 Mapping Between Flows and DSCP Numbers

Flow ID    DSCP Number
1          2
2          3
3          1
4          3
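The per-VE DiffServ class can be sketched as follows; it knows only the DSCP range granted to its VE and maps flows to DSCPs on request, performing the validity check before touching the platform. The class shape is an illustration, not the FAIN interface; the example data reproduces Table 17.2.

```python
class DiffServClass:
    """Sketch of the per-VE DiffServ class: it is configured with only
    the DSCP numbers granted to its VE, and assigns flows to them."""

    def __init__(self, allowed_dscps):
        self.allowed_dscps = set(allowed_dscps)
        self.flow_map = {}  # flow id -> DSCP number (kept internally)

    def assign(self, flow_id, dscp):
        # Validity check: the VE may only use its own DSCP range.
        if dscp not in self.allowed_dscps:
            raise ValueError(f"DSCP {dscp} not granted to this VE")
        self.flow_map[flow_id] = dscp
        # A real implementation would now configure the platform
        # (telnet to the GR2000, or shell scripts driving Linux TC).

dsc = DiffServClass(allowed_dscps=[1, 2, 3])
for flow, dscp in [(1, 2), (2, 3), (3, 1), (4, 3)]:  # Table 17.2
    dsc.assign(flow, dscp)
assert dsc.flow_map == {1: 2, 2: 3, 3: 1, 4: 3}
```

The security check of Figure 17.2 (a call to the SEC authorization interface) would precede the validity check; it is omitted here for brevity.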
The user or the owner of the VE requests that a DSCP number be assigned to a flow, using the interface that has been exported by the DiffServ class. The DiffServ class checks the request for security and validity, and then accesses the
platform to make the appropriate configuration changes, which depend on the platform.

17.3 SCENARIO

Figure 17.3 shows the components and the interfaces that are exercised by the DiffServ scenario.
[Figure omitted: component diagram of the DiffServ scenario, showing the VE with a SNAP EE, the VE Mgr, DiffServ Mgr, Traffic Mgr, SEC, DiffServ class, traffic class, and the GR2000 DiffServ controller / Linux TC platform, connected by the numbered interfaces (1)–(10) listed below.]
Figure 17.3 Components and interfaces involved in the DiffServ scenario.
The interfaces that are shown in Figure 17.3 are the following:

1. The node interface (ORB interface¹): This is used for VE creation and reconfiguration. A request for a new VE, which will include a resource profile, is made to this interface.
2. The DiffServ manager interface (ORB interface): The VEM uses this interface to request the creation of a DiffServ class during VE creation.
3. The traffic manager interface (ORB interface): The VEM uses this interface to request the creation of a traffic class during VE creation.
4. The DiffServ class initialization interface (ORB interface): The DiffServ manager uses this interface to configure the DiffServ class.
5. The DiffServ class open control interface (ORB interface): This is not used by the DiffServ scenario.
6. The traffic class initialization interface (ORB interface): The traffic manager uses this interface to configure the traffic class.
7. The DiffServ class open control interface (socket interface): This is used by the VE user (in our case the SNAP EE) in order to request the assignment of a DSCP number to a flow.
8. The SEC authorization interface (ORB interface): This is used by the traffic and DiffServ classes to check whether the entities that use the open interfaces have the access rights to do so.
9. The platform DiffServ interface (Linux or GR2000 interface): This is a telnet interface in the GR2000 case, while in the Linux case it is a set of shell scripts.
10. The platform TC interface (Linux or GR2000 interface): This is a telnet interface in the GR2000 case, while in the Linux case it is a set of shell scripts.

¹ CORBA object request broker interface.
In the DiffServ scenario, the creation of a DiffServ VE takes place first. The VEM receives a VE creation request that includes a resource profile for the new VE. The profile includes the amount of bandwidth and the mapping between the bandwidth and DSCP numbers. The VEM requests that the traffic manager create a traffic class, which allocates that amount of bandwidth and creates the mapping. The VEM also requests that the DiffServ manager create a DiffServ class, which exports an interface that gives the user of the VE the ability to assign DSCP numbers to flows.

Figure 17.4 depicts an example of the DiffServ configuration. Let us assume that the video data is transmitted to the receiving client by way of active nodes 1, 3, and 4. As described, the video server first decides on a port number on which to transmit the video data to the client over RTP. The server then sends the active packet, which includes information such as a request for resources and a flow specification, including the specific port number. In this case, active nodes 1, 3, and 4 are configured by the active packet, whereas active node 2 is not, since the video data is not transmitted by way of active node 2. These procedures should be executed to guarantee resources for the video transmission from the start. If resources are not guaranteed from the start and the video flow is impaired at run time, resources can also be guaranteed dynamically by sending an active packet.
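The resource profile in the VE creation request is described above only as an amount of bandwidth plus a bandwidth-to-DSCP mapping. A minimal sketch of such a profile follows; the structure, the 10-Mbps total (taken from the testbed description later in the chapter), and the per-DSCP shares are all assumptions:

```python
# Illustrative resource profile for a DiffServ VE creation request; the
# exact schema is not given in the text, so this structure is an assumption.
profile = {
    "bandwidth_mbps": 10,                       # total bandwidth rented by the SP
    "dscp_shares": {0: 0.2, 8: 0.3, 56: 0.5},   # share of bandwidth per DSCP
}

def bandwidth_for(profile, dscp):
    """Bandwidth (in Mbps) that the traffic class reserves for one DSCP."""
    return profile["bandwidth_mbps"] * profile["dscp_shares"].get(dscp, 0.0)
```

The traffic class would allocate the total bandwidth and enforce the per-DSCP split, while the DiffServ class only exposes the flow-to-DSCP assignment interface.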
Figure 17.4 Example of the DiffServ configuration.
Programmable Networks for IP Service Deployment
In this DiffServ demonstration scenario (Figure 17.5), a service provider tries to build a priority transmission network by renting network resources from an operator (ANSP). The SP makes a contract with the ANSP to rent three levels of priority transmission. We assume that those three levels are DSCP-0, DSCP-8, and DSCP-56, which correspond to TOS-1, TOS-32, and TOS-224, respectively. The SP connects branch office A and the head office through two hybrid active network nodes (HANN-1 and HANN-2), as shown in Figure 17.5. In addition, it connects branch office B and the head office through HANN-2. The SP then assigns the TOS-1 and TOS-224 transmission qualities between branch office A and the head office, and the TOS-32 transmission quality between branch office B and the head office.

Initially, user A in branch office A sends video data to user C in the head office through the network with TOS-1 transmission quality. Then user B in branch office B sends "jamming" traffic to the same user C with a transmission quality of TOS-32. The priority of TOS-32 is higher than that of TOS-1. If the combined amount of video data and jam traffic exceeds the output bandwidth of the network node (HANN-2), the video data transmission is impaired, since the priority of the video is lower than that of the jam traffic. User A then changes the priority of the video data from TOS-1 to TOS-224 by means of an active packet (a SNAP program). The active packet is sent from user C to user A. The authenticity and authority of the active packet is checked at each HANN. After the priority of the video data is changed, the video transmission is no longer impaired.
Figure 17.5 DiffServ demonstration scenarios.
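The DSCP/TOS pairings used above follow from the fact that the six-bit DSCP occupies the upper bits of the old IPv4 TOS octet, so the TOS byte value is the DSCP shifted left by two bits. A quick check in Python (note that this rule yields TOS-0 for DSCP-0, whereas the scenario text uses the value TOS-1):

```python
def dscp_to_tos(dscp: int) -> int:
    """Map a 6-bit DSCP to the value of the whole (former) IPv4 TOS byte."""
    if not 0 <= dscp <= 63:
        raise ValueError("DSCP is a 6-bit field")
    return dscp << 2

assert dscp_to_tos(8) == 32    # DSCP-8  -> TOS-32
assert dscp_to_tos(56) == 224  # DSCP-56 -> TOS-224
```

This is why the active packet that raises the video priority from TOS-1 to TOS-224 is, from the DiffServ point of view, a remarking from the lowest rented class to DSCP-56.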
17.3.1 HIT/HEL Testbed Configuration
Figure 17.6 shows the current network configuration of the Hitachi Ltd./Hitachi Europe Ltd. (HIT/HEL) testbed.
Figure 17.6 HIT/HEL testbed configurations.
17.3.2 FHG Testbed Configuration
Figure 17.7 shows the current network configuration of the Fraunhofer Institute FOKUS (FHG) testbed. In this configuration, we used 10-Mbps bandwidth for the output of the TC-100; specifically, the T2 port of the TC-100 router.
Figure 17.7 FHG testbed configurations.
17.3.3 Active Proxy Configuration
Figure 17.8 shows the major interfaces in an active network node:

1. Interface (1) between SNAP and VEM.
2. Interface (2) between VEM and DEMUX. This is a CORBA interface that has been defined.
3. Interface (3) between Security and DEMUX. There are two interfaces between DEMUX and Security: one for incoming active packets, and one for outgoing active packets.
4. Interface from DEMUX to Security for incoming ANEP packets.
5. Interface from DEMUX to Security for outgoing ANEP packets.
6. Interface (4) between DEMUX and SNAP.
7. Interface from DEMUX to SNAP (ASPE_D). This is a UDP socket interface over which an ANEP packet is transmitted to the SNAP EE.
8. Interface from SNAP (ASPE_E) to DEMUX. This is a UDP socket interface over which an ANEP packet is transmitted to the DEMUX.
9. Interface (5) between SNAP and DiffServ. The tuple (src IP address, dst IP address, src port, dst port, protocol, TOS) is sent to DiffServControlGR2K when the SNMPPort obtains the value.
10. Interface (6) between DiffServ and GR2000. We have prepared interface code, as shown in Figure 17.8 [5], to configure the GR2000. This interface sends commands via the telnet function. Init() establishes a connection between the GR2000 and an active proxy and then opens a configuration file of the router, whereas closeConnect() saves and closes the configuration file and then terminates the connection. ConfigDSCP() is called by TrafficControllerGR2K in order to configure qos_ip_list. BindFlow2DSCP() is called by DiffServControllerGR2K (GR2000 router) in order to set the DSCP value (TOS=224) for a specific flow; for example, the video stream flow. SetSNMP() is used to configure basic parameters, e.g., the SNMP community name, that are required by the GR2000 in order for SNMP to function on it. setEvent() and setAlarm() set parameters for the GR2000 to detect and send events (traps).
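The wrapper functions above drive the GR2000 through its telnet CLI. The actual GR2000 command syntax is not reproduced in the text, so the Python sketch below only illustrates the command-building side of such a wrapper, with an invented qos_ip_list syntax; the real Init()/closeConnect() calls would wrap these strings in a telnet session against the router's configuration file.

```python
# Sketch of the command-building half of the GR2000 I/F wrapper. The CLI
# syntax used here is an assumption; only the shape of the calls mirrors
# the ConfigDSCP()/BindFlow2DSCP() functions described in the text.
class GR2000Commands:
    def config_dscp(self, list_name, dscp):
        # called by TrafficControllerGR2K to configure a qos_ip_list entry
        return f"qos_ip_list {list_name} dscp {dscp}"

    def bind_flow_to_dscp(self, flow, tos):
        # called by DiffServControllerGR2K to mark one flow, for example
        # the video stream, with TOS=224
        src_ip, dst_ip, src_port, dst_port, proto = flow
        return (f"flow qos ip {src_ip}:{src_port} {dst_ip}:{dst_port} "
                f"{proto} tos {tos}")

cmds = GR2000Commands()
line = cmds.bind_flow_to_dscp(("10.0.12.2", "10.0.8.30", 5004, 5004, "udp"), 224)
```

On the Linux side, the same calls would instead run the set of shell scripts mentioned under interface 10 in Section 17.3.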
Figure 17.8 Interfaces between components.
Figure 17.9 The interactions of components.
The interaction among the SNAP EE, the SNMP daemon (SNMPD), and the DiffServ components is depicted in Figure 17.9. When the SNAP daemon (SNAPD) receives a packet relevant to the active proxy, the sequence is as follows:

1. The SNAPD stores a value that is carried by a SNAP packet.
2. The SNAPD raises a trap in order to notify the arrival of new data.
3. After the SNMPPort receives the trap, it notifies the DiffServController of the arrival.
4. The DiffServController retrieves the actual value from the SNMPD.
5. The DiffServController then sets a configuration value in the GR2000 using the I/F wrapper.
6. The GR2000 also has SNMP functions and can raise traps. Process (6) is one of them, and this trap is analyzed by the DiffServController for monitoring purposes.

17.4 CONCLUSION

This chapter presents the dynamic and autonomic deployment of differentiated services in the FAIN testbed. The FAIN node supports the dynamic deployment and instantiation of multiple active VEs; each one of them capable of hosting multiple execution environments, and supporting communication among different EEs in the same node. The EEs may, in turn, be deployed and instantiated on demand, thereby introducing new features and functionality in the node according to new requirements and changing needs. We tested the FAIN active network by developing and dynamically deploying a control EE, which was designed and tested for the QoS configuration of the DiffServ-enabled FAIN testbed. In addition, we have already tested dynamic configurations of the GR2000/TC-100 by the SNAP EE.

References

[1] Denazis, S., et al., WP5 Network Scenarios WP3 Input, WP3-HEL-048-Rxx-Int.doc, July 2002.
[2] Suzuki, T., DiffServ Demonstration Scenario, WP5-HEL-051-DiffServ-Intv.doc, October 2002.
[3] Becker, T., Virtual Environments and Management, WP3-FHG-031-D4-VEM.doc, March 2002.
[4] Lazanakis, A., Resource Control Framework, WP3-NTU-FT-007-RCF-Intv1.doc, April 2002.
[5] Java Media Framework (JMF), http://java.sun.com/products/java-media/jmf/2.1.1/download.html.
[6] Videolan, http://www.videolan.org/.
Chapter 18

WebTV Scenario

This chapter presents a case study that was demonstrated on the FAIN [1] project testbed [5]. It integrates the functionalities of the active node components, as well as the management system, into a single realistic application scenario [6], hence validating the project objectives in designing and implementing a programmable and fully managed network node for dynamic service creation.

18.1 MOTIVATION AND KEY CONCEPTS

The main objective of this scenario is the dynamic deployment of service components [4], that is, the duplicator and the transcoder, so that different formats of video streams can be converted to suit the end users' needs, and their performance monitored. The key concepts to be highlighted are:

• VAN creation and activation: The creation of virtual environments that constitute a topology for a virtual active network, where a VAN is a network of VEs. Each VE is sandboxed; that is, it is only permitted to use predetermined computational (CPU and memory) and communicational (relative to the adjoining VE) resources. The novelty of this idea is that VAN creation (and extension) is performed on demand, triggered by the service provider.

• Delegation of management tasks: Dedicated management components for the created VEs to manage their own services; for example, monitoring facilities after the service is up and running. This sets up an innovative approach to network management, where the active network service provider passes on service-specific management responsibilities to the SP.

• Flexible service component deployment: The deployment of service and management components (e.g., QoS PDP/PEP, delegation PDP/PEP, monitoring PDP/PEP, service-specific PDP/PEP), as required. It is difficult to predict what management capabilities will be required in the future. With this flexibility, this uncertainty is somewhat alleviated, and the ANSP's
management system is more adaptable to the SP's future service requirements.

18.2 GENERAL DESCRIPTION

In the context of the FAIN enterprise model (see Figure 18.1), the active network service provider owns both the PBNM and the ASP system and, beyond resource reservation and delegation of management responsibilities, it is unaware of the WebTV service. The service provider is the WebTV provider. It has a delegated management instance to manage its own service. The consumer subscribes to the WebTV service. An SP wants to offer the consumers (i.e., the end users) a WebTV service. It broadcasts programs via the Internet for end users to watch, regardless of their terminal capabilities.
Figure 18.1 FAIN enterprise model [2].
The WebTV SP requests the ANSP to set up a customized private network in order to meet the customer requirements. End users then subscribe to this WebTV service by directly contacting the WebTV SP server. In our scenario, illustrated in Figure 18.2, one of the end users uses a terminal that is not capable of correctly displaying the video stream of the WebTV content.
For example, this particular end user may use a handheld device with low processing power and a low access bandwidth. In this case, the WebTV SP can individually select and process the video stream destined for this customer by deploying an audio/video transcoder in the network, so that the video stream received by the handheld device is in the correct format.
Figure 18.2 General depiction of the WebTV scenario.
¹ The SIP proxy is in charge of routing SIP calls between end users. We have implemented a component that intercepts these calls to retrieve the multimedia information of the client's session, and to request the deployment of the right transcoder depending on the information fetched.

18.3 FAIN PBNM AND ASP REVISITED: DETAILED SCENARIO DESCRIPTION

The WebTV SP first advertises a program, and the consumer connects to the WebTV SP's program portal (i.e., its Web site). A session initiation protocol (SIP)¹ negotiation occurs between the consumer's machine and the WebTV SP via this portal, which results in the WebTV SP signaling a request for the deployment of the transcoder and the duplicator. It does this by contacting the PBNM; that is, the WebTV SP contacts the service manager, providing the parameters of the consumer (e.g., expected video format, bandwidth, and so on). In particular, it provides the service manager component with the consumer's context information; for example, service descriptors, and source and destination IP addresses. The service manager contacts the network ASP manager (NetASP) to retrieve the topology requirements associated with this new service, and generates policies (QoS and delegation) in order to enforce them on the NMS. Consequently, the ANSP's management instance on its PBNM receives a QoS policy, and enforces it on both the NMS and all relevant EMSs. This invokes the active node
management framework to create a new VE for the WebTV SP. If the VE is created successfully, the ANSP PBNM enforces the delegation policy through the NMS and (again) on all appropriate EMSs; that is, the VE resource profiles are refreshed and the VE is created and activated. Now that the resource profiles are in place on the active node, the ANSP will delegate further management responsibilities to the WebTV SP. For this reason, the ANSP creates an MI in all the pertinent EMSs² for this WebTV SP, and assigns the access rights to the interfaces on the active nodes.

Once the VAN has been fully set up, the service manager triggers the deployment of the given service (e.g., the transcoder) by contacting the network ASP manager. Upon receiving the request from the service manager, the ASP network manager creates a deployment agent. The agent fetches the service deployment descriptor of this transcoder from the service registry via a CORBA interface. The ASP network manager processes it to find an optimal code distribution at the network level. The agent containing the service descriptor moves to the chosen active node, and asks the service creation engine to install this service. The SCE parses the node-level deployment descriptor, and resolves the dependencies between the service code modules of the service. It asks the code manager for other service descriptors (if needed), or for code modules to instantiate. The code manager is in contact with the local service registry and the local service repository (which caches the information locally); if they do not have the information, they contact their network counterparts. The SCE matches the service code modules' requirements with the available node resources by contacting the node manager. Finally, the SCE starts the EE-specific installation and instantiation of the service code modules by sending EE-specific requests to finalize the deployment.
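The SCE's dependency resolution over the node-level deployment descriptor can be pictured as a topological ordering of service code modules. The descriptor format below is an assumption (the FAIN descriptors are not reproduced here); the sketch only shows the ordering step, without cycle detection:

```python
# Toy dependency resolution over a node-level deployment descriptor:
# every module is installed after the modules it requires (no cycle
# detection; the descriptor schema is an assumption).
def resolve(descriptor, name, order=None):
    if order is None:
        order = []
    for dep in descriptor[name].get("requires", []):
        resolve(descriptor, dep, order)
    if name not in order:
        order.append(name)
    return order

descriptor = {
    "transcoder": {"requires": ["rtp-lib", "controller"]},
    "controller": {"requires": ["rtp-lib"]},
    "rtp-lib": {},
}
install_order = resolve(descriptor, "transcoder")
# rtp-lib and controller come before the transcoder itself
```

In the real system, each resolved module name would additionally be matched against node resources via the node manager before instantiation.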
² Each element management system manages only a single active node; the network-level management system oversees these EMSs.

After a successful service deployment, the NetASP returns to the service manager a structure containing the appropriate service component references, which in turn is sent back to the WebTV SP. The WebTV SP is now ready to configure its private network by sending policies that are consumer specific. In addition, the SP deploys service-specific PDPs in its dedicated MI. In this way, the SP can define its own service-specific policies that will be enforced by the active node. The monitoring system is then used for the reconfiguration of the transcoder at run time when, for instance, the access bandwidth changes dramatically and the consumer needs a different format of the video stream. Finally, all incoming RTP (audio/video) data that arrives at the active node is transcoded by the active service and sent to the client. When a new client with the same features wants to watch the same WebTV, the transcoder is not deployed again in the active node (it is already running); instead, an active packet is sent to the
controller in order to configure the transcoder to send the transcoded data to this new client.

Extension of the VAN: If a second consumer³ wants to subscribe to this service, and the resource manager (RM) detects that this targeted consumer is not within the same domain, the RM contacts the interdomain manager to establish an agreement with the peer domain. VAN calculation, creation, and activation occur in the peer domain and, thus, a full end-to-end path is set up for the WebTV SP. Setting up a VAN involves the alignment of requested and available resources. However, before the neighboring ANSP can commit to the requested resource profile, it must be able to ascertain that the requests adhere to its own local policies. After the VAN has been extended, the service manager contacts the NetASP for the deployment of the second transcoder service. The policies of the WebTV SP are sent and enforced by the appropriate EMS. This results in the deployment of the WebTV SP PDP/PEP and, if required, the monitoring probe, and in the configuration of the service component.

18.4 WEBTV COMPONENTS

The transcoder and the controller components [4] are deployed via the ASP mechanism when requested by the service manager, as shown in Figure 18.3. The service manager is triggered by the SIP proxy when the SIP request from the consumer (wishing to watch the WebTV) is processed by the SIP proxy. As a consequence of creating a VAN for this particular SP, a management instance is created and configured; there is one MI at the network level and one at the element level. At the beginning, only the PDP manager is instantiated inside each MI. After an incoming request is forwarded to this MI, the PDP manager should decide to trigger the deployment of the specialized functional domain, contacting the ASP with a request for it to deploy the SP PDP into the EMS, and the SP PEP into the SP VE.
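Of the two service components, the duplicator has the simpler job: fan the same stream out to every registered client, while the transcoder additionally converts formats (e.g., MPEG to H.263, as in Figure 18.3). A minimal, non-networked sketch of the duplicator's fan-out logic; the class and method names are assumptions, and the real component would write to RTP sockets instead of returning pairs:

```python
# Minimal fan-out logic of a duplicator component: each incoming payload
# is forwarded to every registered client address.
class Duplicator:
    def __init__(self):
        self.clients = []

    def add_client(self, addr):
        if addr not in self.clients:
            self.clients.append(addr)

    def on_packet(self, payload):
        # return the (payload, destination) pairs that would be sent
        return [(payload, addr) for addr in self.clients]

d = Duplicator()
d.add_client(("10.0.0.7", 5004))
d.add_client(("10.0.0.8", 5004))
sends = d.on_packet(b"frame")
```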
Figure 18.3 WebTV service components.
³ We assume here that the second transcoder needs to be deployed in the peer domain; hence, the WebTV SP's existing VAN needs to be extended.
At the bootstrap of this SP PDP, a set of alarms/events should be configured by means of policies, which will configure the monitoring system to receive those events/alarms from the transcoder controller. These policies would be as follows:

Policy #1:
If ( packet_loss > 20% ) { removeVideo( client_1 ); }

Policy #2:
If ( packet_loss < 10% and removedVideo( client_1 ) == TRUE ) { addVideo( client_1 ); }
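Taken together, the two policies implement a hysteresis loop: the video is removed when the packet loss is high, and re-added only once the loss has dropped below a lower threshold, so the stream does not flap around a single boundary. A sketch of this decision logic (the class and method names are assumptions):

```python
# Hysteresis implied by the two policies: remove the video above the high
# threshold, re-add it below the low one, and do nothing in between.
class VideoPolicy:
    def __init__(self, high=20.0, low=10.0):
        self.high, self.low = high, low
        self.removed = False

    def on_packet_loss(self, pct):
        """Return the action the PDP would push to the PEP, if any."""
        if pct > self.high and not self.removed:
            self.removed = True
            return "removeVideo"
        if pct < self.low and self.removed:
            self.removed = False
            return "addVideo"
        return None

p = VideoPolicy()
actions = [p.on_packet_loss(x) for x in (25, 15, 5)]
# removeVideo at 25% loss, no action at 15%, addVideo once loss is 5%
```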
The incoming request to the MI can be sent when the deployment of the transcoder is requested: the PDP/PEP and transcoder/controller are closely related, so we can imagine deploying both at the same time. The deployment request is sent by the SP (SIP proxy) to the service manager. The latter can trigger the deployment of the SP's PDP/PEP at the same time.

18.4.1 Reconfiguration of the Transcoder
Up to now, one instance of the controller monitors the real-time control protocol (RTCP) sessions (one for the audio stream, and one for the video stream) for one client. Therefore, there are as many controller instances as clients. A controller instance is created when a new client is added to the transcoder. Upon instantiation, the controller takes a list of thresholds as parameters. These thresholds represent the percentage of packets lost for the session. It is then possible to define several levels of QoS. In our example, there are only two levels, corresponding to the two actions: remove video, or re-add video. Up to now, the only reconfiguration that is possible for one client is:

• To stop sending the video stream when the quality of the link is bad (for instance, if the access network is overloaded);
• To restart sending the video stream when the quality of the link is good again.
18.4.2
411
How the Controller Works
First, the transcoder is deployed and instantiated in an active node. When started, the transcoder opens two RTP sessions (one for audio, one for video) with the WebTV emitter, and does no transcoding (because there is no client yet). Then, the transcoder is configured to transcode the WebTV emitter formats into the client format for this given client (i.e., audio format, video format, IP address, IP port, and size of the video). When the client is added, an instance of the controller is created. This instance is in charge of monitoring the RTP sessions between this client and the transcoder. All monitoring information is carried within the RTCP that is associated with the RTP (RTP for the data itself; RTCP for the control). The RTCP ports are calculated automatically as follows (add 1 to the RTP ports): Video RTCP Port = Video RTP Port + 1; Audio RTCP Port = Audio RTP Port + 1. When this controller instance is created, it will receive the monitoring information sent by the client by listening on the RTCP ports. To monitor the link, the number of packets lost is taken into account. Based on the total number of packets received and the number of packets lost, a percentage of packets lost is calculated. As seen in the previous section, when the instance has been created, the parameters have been passed as arguments. These parameters help to determine which level of QoS is associated with the percentage of packets lost. 18.4.2.1 Interface: Controller – Probe After gathering the service configuration policies, the service-specific SP PDP accesses the monitoring system hosted in the ANSP management instance, in order to register its interest in receiving the appropriate RTP events (delegation of functionality). The monitoring system uses the information contained in the associated filters to configure the data acquisition layer. As a consequence, the monitoring system will request the deployment of a probe in the target EE. 
The ASP is responsible for installing and connecting the probe with the transcoder controller. Appropriate access rights should have been set up (delegation of access rights). In addition, the monitoring system adjusts the threshold level for the packet loss variable through the monitoring interface exposed by the controller. Such behavior makes it possible to provide fine-grained control of the quality levels offered to the customers. Finally, when the percentage of packets lost reaches the threshold, then an event will be sent to the SP PDP. An example interface is shown here: module Events { // Exceptions definitions exception EventNotSupported { };
  // Low-level event definitions
  valuetype Event { };

  valuetype RTPEvent : Event {
    public controller::ClientParameters parameters;
    public unsigned short level;
  };

  // Interface definitions
  interface i_Probe {
    void notifyEvent ( in Event event ) raises ( EventNotSupported );
  };
};
Here, "level" represents the level of packet loss. Level 0 means that no packets are lost, and level 1 means that the percentage of packets lost is greater than the threshold defined in the policy. Such information will be packed in a CORBA structured event, and subsequently delivered to the notification channel so that it can be distributed among the interested entities. The resulting interface is generic enough to be used in different monitoring scenarios, while at the same time it provides the necessary flexibility to define a wide variety of event structures to be sent to the probe.

18.4.2.2 Interface: SP PDP – SP PEP

When the PDP makes a decision about a policy, it should pass this decision to the PEP. The PEP will be in charge of processing the request. The interface offered by the PEP is shown here:

module org {
  module ist_fain {
    module apbm {
      struct t_Parameter {
        string name;
        any value;
      };
      typedef sequence<t_Parameter> t_ParameterList;

      struct t_request {
        string command;
        t_ParameterList parameterList;
      };

      module pep {
        interface i_pep {
          oneway void decision (in t_request request);
        };
      }; // pep
    }; // apbm
  }; // ist_fain
}; // org
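On the wire, a decision is a t_request: a command name plus a list of named parameters. Mirrored as plain Python data for illustration; the parameter names below are assumptions, since the IDL above only fixes the envelope:

```python
# Build the Python-side equivalent of a t_request (a command plus a
# t_ParameterList); the parameter names are illustrative assumptions.
def make_decision(command, **params):
    return {
        "command": command,
        "parameterList": [{"name": k, "value": v} for k, v in params.items()],
    }

req = make_decision("removeVideo", clientIp="10.0.12.49", videoPort=5004)
```

The use of a generic name/any parameter list is what lets the same i_pep interface carry any future service-specific command without changing the IDL.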
When the PEP receives a valid request, it performs the required action. For this purpose, the PEP maintains a table of all the controller/client references. The transcoder also maintains a table of clients (a transcoder instance manages its own list of clients, but it does not know the clients of other transcoder instances), and ensures that the video stream is off before re-adding it, or that it is on before removing it.

18.4.2.3 Interface: SP PEP – Controller

When receiving a command from the PDP, the PEP must enforce this policy. This results in an invocation of the controller interface. An example interface is shown here:

module org {
  module ist_fain {
    module services {
      module controller {
        struct ClientParameters {
          string IpAddress;
          boolean video;
          string videoFormat;
          short videoPort;
          string videoSize;
          boolean audio;
          string audioFormat;
          short audioPort;
        };

        interface i_Controller {
          // return true if OK
          boolean removeVideo (in ClientParameters thisClient);
          // return true if OK
          boolean addVideo (in ClientParameters thisClient);
          // sets the threshold above which the packet loss
          // percentage is considered too high
          void setPacketLossPercentage (in ClientParameters thisClient,
                                        in short percentage);
        };
      }; // controller
    }; // services
  }; // ist_fain
}; // org
When the controller instance receives an action to process from the SP PEP, it reconfigures the transcoder (removes or re-adds video) for the client given as a parameter.
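The controller instance that carries out these actions monitors the client over RTCP. As noted in Section 18.4.2, its listening ports and the packet loss percentage follow directly from the RTP ports and the RTCP counters:

```python
# RTCP monitoring arithmetic used by the controller: RTCP ports are the
# RTP ports plus one, and the loss percentage is computed from the packet
# counters before being compared against the policy thresholds.
def rtcp_ports(video_rtp: int, audio_rtp: int):
    return video_rtp + 1, audio_rtp + 1

def packet_loss_pct(received: int, lost: int) -> float:
    total = received + lost
    return 0.0 if total == 0 else 100.0 * lost / total

ports = rtcp_ports(5004, 5006)           # -> (5005, 5007)
loss = packet_loss_pct(received=180, lost=20)  # -> 10.0
```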
18.4.3 Testbed Configuration for WebTV Demonstration
Requirements: As illustrated in Figure 18.4, three laptops represent the video source and the two consumers with SIP video clients. These are Windows machines with JMStudio⁴ installed. Three active nodes [3] are needed in the service topology; in particular, a high bandwidth is required between the video emitter and the active node on which the transcoder resides. During actual experiments and demonstrations, the transcoders are deployed on VEs of onizuka.fhg.fa (the active node in Berlin) and komsys-pc-zurich.tik.fa (the active node in Zurich), while the duplicator is deployed on jorg.ucl.fa (the active node in London).
Figure 18.4 Segment of the FAIN testbed used for the WebTV scenario.
⁴ JMStudio is a standalone Java application that uses the JMF 2.0 API to play, capture, transcode, and write media data.

18.5 CONCLUSIONS

This chapter presents the provisioning of WebTV services in the FAIN testbed. The FAIN node supports the dynamic deployment and instantiation of multiple
active VEs; each one of them capable of hosting multiple execution environments and supporting communication among different EEs in the same node. The EEs may, in turn, be deployed and instantiated on demand, thereby introducing new features and functionality in the node according to new requirements and changing needs.

We tested VAN creation and activation, where we set up VEs that constitute a topology for the SP on demand. In terms of our policy-based management system, we enabled the SP's access to the ANSP management functionality (within the node and the element manager). The SP used its own code to manage the allocated resources in order to offer a service. We demonstrated the delegation of management tasks, where we allowed dedicated management components to be downloaded onto the VEs, in order for the SP to manage its own services; for example, monitoring facilities after a service is up and running.

The ASP system enabled the dynamic deployment of heterogeneous, distributed, component-based active services in the FAIN network. In this WebTV scenario, we used the ASP for deploying active services on a preselected number of nodes. We appended logic that enabled the automatic selection of the target nodes, based on matching the service requirements regarding the locations of the service components against the capabilities of the network available to the service provider at deployment time.

References

[1] FAIN Project, http://www-ist-FAIN.org.
[2] FAIN Project Deliverable D1 - Requirements Analysis and Overall Architecture, http://www.ist-FAIN.org/deliverables.
[3] FAIN Project Deliverable D7 - Final Active Network Architecture and Design, http://www.ist-FAIN.org/deliverables.
[4] FAIN Project Deliverable D8 - Final Specification of Case Study Systems, http://www.ist-FAIN.org/deliverables.
[5] FAIN Project Deliverable D9 - Evaluation Results and Recommendations, http://www.ist-FAIN.org/deliverables.
[6] FAIN Project Deliverable D40 - FAIN Demonstrators and Scenarios, http://www.ist-FAIN.org/deliverables.
Chapter 19

The Outlook

Active and programmable networks aim at enabling the easy introduction of new network services by adding dynamic programmability to network devices such as routers, switches, and application servers. While in the initial stages of research in this area the focus was on the basic mechanisms needed for dynamically installing and moving service code in the network, research and development is now needed toward the main scientific and commercial benefits that programmable network technology may provide for both fixed and mobile networks. This means service network programmability and autonomic service deployment architectures, bringing just the right services to the customer with just the right context (e.g., time, location, preferences, qualities, and so on). This chapter identifies some of the research and development issues that underpin the migration from programmable networks toward programmable service networks.

19.1 REFERENCE ARCHITECTURE FOR PROGRAMMABLE SERVICE NETWORKS

Since its beginning, the Internet's development has been founded on a basic architectural premise: A simple network service is used as a universal means to interconnect intelligent end systems. The end-to-end argument has served to maintain this simplicity, by pushing complexity into the end points that require it, allowing the Internet to reach an impressive scale in terms of interconnected devices. However, while the scale has not yet reached its limits, the growth in functionality (the ability of the Internet to adapt to new functional requirements) has slowed with time. We see new network architectures evolving both above and below the Internet. Underneath the Internet, all-optical networks without buffering will dominate the core, while wireless LANs subject to high packet error rates, as well as asynchronous digital subscriber line (ADSL) access networks, will dominate the edges.
Above the Internet are systems that exploit and demand mobility, and that have the ability to deliver continuous media, private address spaces, and access control. While some of the requirements have been met quickly
with ad hoc solutions and de facto standards, others have received painful, long-lasting, and still unsuccessful attention: QoS, IPv6, and a well-designed multicast service, despite having standards, are still not widely deployed.

Another unfortunate consequence of the existing architectures is the division between network engineers, who adopt the viewpoint of protocols, and software engineers, who advocate a greater use of object-oriented technologies and methodologies in engineering the network. To complicate the problem space even further, the responsibility for managing and controlling the end-to-end path is fragmented into different administrative domains, with providers who own parts of the network applying different business models and marketing policies, and using a variety of technologies, as mentioned above. Accordingly, delivering an application to the end user (the customer) is not a trivial task, as it requires complex coordination among these different players over a heterogeneous network.

We strongly believe that the next generation network, which we call the polymorphic Internet, must be built on middleware engineered in such a way that it allows all the different parties to coexist without impeding future evolution. The key to this is rapid application/service creation and deployment, as these are the true commodities that must be promoted by providers and purchased by customers, thereby creating a vast fuel tank that will constantly supply network expansion. In order to achieve this, it is essential that we bring together the two communities of engineers, and foster a networking paradigm in which both contribute in a synergistic way.
In addition to technological drivers such as link technologies and application software, new and interesting technologies have appeared in the realm of network architecture, and we believe that they can be applied to a new discipline of networking in which the end-to-end argument's logic is still maintained, but which avoids an overly simplistic and rigid interpretation of it, as in today's Internet. A common conceptual model unites the most advanced work in each of these areas, and it is captured in the reference model shown in Figure 19.1.

Our reference model expands across three dimensions that synergistically make up the whole next generation Internet environment. More specifically, the first dimension (the front side of the cube) argues for a layered architecture comprising four main layers, with a specific scope assigned to each. The functionality provided by each layer is exposed through open interfaces. Components running in any layer may access additional functionality available at the lower layer in a seamless way. To this end, services and their logic may be distributed across different layers, or may be confined within only one layer. Thus, for example, in the all-optical core we propose, the core might be source routed if packet switched, or circuit switched. The network communication services would react to the particular design choice made to cope with the current limitations of photonics, and similar architectural choices might be made for
shared wireless links at the edge. (Incidentally, we expect the great majority of network enhancement software to operate on network elements lying between the wireless mobile edge and the stationary all-optical core.)

Figure 19.1 Conceptual and reference model. [The figure depicts a cube: its front face stacks the four architectural layers (network element hardware layer, network element abstraction layer, network communication services layer, and network application layer); its top face carries the service specification, service creation, service deployment, and service logic execution areas; and its side face carries the management, control, and transport planes.]
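As a rough illustration of the layering idea in Figure 19.1 (not from the book; all class, method, and value names here are hypothetical), each layer can be modeled as a component that reaches lower-layer functionality only through the open interface directly beneath it:

```python
class NetworkElementHardware:
    """Bottom layer: stands in for a physical router, switch, or gateway."""
    def __init__(self):
        self.table = {}  # destination prefix -> output port

class NetworkElementAbstraction:
    """Open interface that abstracts the NE's functionality upward."""
    def __init__(self, hw):
        self._hw = hw
    def set_route(self, prefix, port):
        self._hw.table[prefix] = port

class NetworkCommunicationServices:
    """NCSL: standard packet-transfer plus advanced support services."""
    def __init__(self, ne):
        self._ne = ne
    def establish_path(self, prefix, port, qos=None):
        self._ne.set_route(prefix, port)
        return {"prefix": prefix, "port": port, "qos": qos or {}}

class NetworkApplication:
    """NAL: user-specific, end-to-end logic, decoupled from transport."""
    def __init__(self, ncsl):
        self._ncsl = ncsl
    def deliver(self, prefix):
        # The application never touches hardware directly; it only sees
        # the open interface of the communication services layer.
        return self._ncsl.establish_path(prefix, port=1, qos={"class": "EF"})

hw = NetworkElementHardware()
app = NetworkApplication(
    NetworkCommunicationServices(NetworkElementAbstraction(hw)))
path = app.deliver("10.0.0.0/8")
```

The point of the sketch is structural: a service's logic may live in any layer, but its effect on the network element always passes through the abstraction layer's open interface, which is what makes the layers independently replaceable.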
The choice of these particular layers, and the scope thereof, is the result of converging conclusions collected from state of the art forums such as IEEE P1520 [2], IETF ForCES [1], MSF [4], OSWG [6], OGSA [5], OSGI [7], PARLAY [8], and WWRF [9], and from the FAIN [3] experience.

The bottom two layers, namely the network element hardware layer and the network element abstraction layer, refer to the large family of network elements (or components thereof), such as routers, optical switches, and gateways, found in abundance in every part of the network. These NEs participate in forwarding and processing flows that belong to specific services and their users. The scope of these layers is thus the functionality provided by the NE. The behavior of these NEs is influenced by means of open interfaces that abstract the NE functionality in the NE abstraction layer.

In contrast, the upper two layers, the network communication services layer (NCSL) and the network application layer (NAL), break down the functionality of network-deployed services into a communication part and an application part, respectively. The NCSL deals with how communication is established by the distributed entities of a service. The types of services found in this layer are
standard services that enable the transfer of packets in a network, and advanced support communication services, which provide certain extra features and support to service and application providers, according to their requirements. In contrast, the NAL is user specific and deals with end-to-end issues. In this way we decouple the communication aspect from the end user application; the "network" becomes accessible by means of the interface defined between the two layers.

The second dimension (the top side of the cube) refers to the concept of service creation and execution environments (SC&EE) that will be deployed across the layers above, and host and run these services. We identify four main conceptual areas for developing an SC&EE:

• Service specification;
• Service creation;
• Service deployment;
• Service logic execution.
As part of the first area, a service must be able to be specified in an unambiguous way that identifies not only the service components that collectively define the service, but also where these components should be deployed. In contrast, service creation addresses issues of creating complex services from existing, simpler services. Consequently, service deployment deals with the way that the service and its components are deployed. Again, decoupling the specification from the creation and deployment mechanisms allows work in these areas to progress independently, with each offering its functionality by means of open interfaces. Finally, service logic defines the operations and management of a service component and system. Such operations may access the open interfaces of the first dimension, since these must be visible inside the SC&EE, or exchange messages with other components that are not necessarily part of the same service.

We also believe that a common service specification and a message-driven communication mechanism will greatly facilitate the integration of network services contributed and deployed by different parties on different platforms and heterogeneous networks. Protocols like the simple object access protocol (SOAP), as well as new XML-based languages, are deemed essential in achieving this goal.

Finally, the two dimensions, namely the architectural layers and the SC&EE, may span all three operational planes (management, control, and transport), where we argue for a clear separation among them. For instance, an SC&EE could be designed and built exclusively for one of the operational planes, by taking into account its requirements, limitations, and the suitable available technology. We note here that the reference model of Figure 19.1 may also imply a new business model.
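To make the four conceptual areas concrete, here is a toy sketch of a service passing through specification, creation, deployment, and execution. It is purely illustrative and not from the book: the function, component, and node names are hypothetical, and a real SC&EE would exchange XML/SOAP descriptors rather than Python dictionaries.

```python
def specify(name, components, placement):
    """Service specification: the components plus where they should run."""
    return {"name": name, "components": components, "placement": placement}

def create(*specs):
    """Service creation: compose a complex service from simpler ones."""
    return {
        "name": "+".join(s["name"] for s in specs),
        "components": [c for s in specs for c in s["components"]],
        "placement": {k: v for s in specs for k, v in s["placement"].items()},
    }

def deploy(spec, nodes):
    """Service deployment: map each component onto a target node."""
    return {c: spec["placement"].get(c, nodes[0]) for c in spec["components"]}

def execute(deployment, messages):
    """Service logic execution: message-driven operation of components."""
    log = []
    for msg in messages:
        node = deployment[msg["to"]]
        log.append(f"{msg['to']}@{node}: {msg['body']}")
    return log

# A composite service built from two simpler ones, then deployed and run.
cache = specify("cache", ["cache-ee"], {"cache-ee": "edge-node"})
monitor = specify("monitor", ["probe"], {"probe": "core-node"})
service = create(cache, monitor)
placed = deploy(service, nodes=["edge-node", "core-node"])
trace = execute(placed, [{"to": "probe", "body": "start"}])
```

Note how each stage consumes only the output of the previous one: this is the decoupling argued for above, which lets specification, creation, and deployment mechanisms evolve independently behind their own interfaces.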
By creating these layers and conceptual areas we intend to unleash new innovations and resulting market forces, promoting healthy competition and stimulating new products for specific areas of the model, either alone or jointly. As a consequence, new products or prototypes in the form of
hardware, middleware, algorithms, designs, frameworks, services, applications, interface specifications, software, demonstrations, testbeds, and so on, will be developed and integrated, giving flesh and bones to the reference model of Figure 19.1. We aspire to develop a new discipline of service network programming that would enable users or service providers to create their own service networks, or to extend and configure existing ones.

19.2 REQUIREMENTS ANALYSIS FOR FURTHER DEVELOPMENT IN PROGRAMMABLE SERVICE NETWORKS

In spite of huge investment, there is a striking discord between the great intellectual successes of research and advanced development in networking, computing, and other related technologies, and the challenging financial environment in the industry. How can this be? We argue here that the problem is caused by too great a concentration on the technologies themselves, with insufficient attention paid to the needs of the paying customer, and to the business needs of the providers. We need to change the customers' perception of the network¹ by improving the quality of its support for the services they want to use (customized and controlled by themselves) and are willing to pay for, while maintaining a strong awareness of the broader business issues. We are envisaging such a network, called the polymorphic Internet, which is targeted at exactly this issue: the rapid, easy, and cost-effective introduction of new value added services, and the customized configuration of the networks to support them. This implies that:
• The shared network resources should be tailored specifically for each application's requirements, doing away with the current "lowest common denominator" approach.
• The network elements should be programmable, in order to allow the provision of the necessary service flexibility in networks, and access to this programmability should be provided to users.

There is a need to identify and fully understand the issues involved in this transformation, whether evolutionary or revolutionary. Unless these issues are addressed, the networking industry will remain mired in its financial tar pit, eventually quashing innovation as resources for research and development become scarce. The following are key factors that need to be addressed.
¹ Throughout this chapter, the term "network" is used to describe the customer's end-to-end path experience, which comprises a number of different networks: telecommunication networks, data networks, or otherwise.
• Network heterogeneity: The network is heterogeneous, both in components and in administrative policy. What is perceived by the end user as a single "network" is, in fact, composed of networks of different technologies (access, edge, core, and so on), administered by different authorities and owned by a variety of providers (e.g., access providers, ISPs, content providers, application providers) with diverse or competing business models and strategies.
• Commodity: Obtaining a good return on investment when introducing new access technologies such as ADSL relies on selling useful value added services to customers. Bandwidth alone requires a significant capital investment, but is easily commoditized, and offers relatively low returns. There is therefore a pressing need to ensure that innovative commercial applications can be effectively supported by public networks.
• Inflexible service deployment: Attempts to introduce and rapidly deploy new applications or value added services are hampered by network heterogeneity and the lack of configurability, so current deployment is restricted to homogeneous subnetworks, which in turn limits the customer base.

The polymorphic Internet would contribute to the pervasive service vision by making networks service driven and enabled. It would also contribute to a new view of networking: from a generic commoditized "pipe" for bit transport, to a new, richer platform that can support services in a newly enabled market.

19.3 EXPECTED KEY NOVEL FEATURES AND BENEFITS

Figure 19.2 summarizes the key novel characteristics of the polymorphic Internet.
[Figure 19.2 is a matrix that contrasts three properties of the polymorphic Internet (π programmability, π autonomy, and the π end-to-end view) along characteristics such as target, results, systems, what initiates change, and what drives it: entries range from network and node modification and reconfigurability, through autonomic and dynamic reconfiguration of layered overlays and peer layers, to changes initiated by network, service, and business requirements and driven by local/global optimization, self-organization, and business organization.]
Figure 19.2 Key characteristics of the polymorphic Internet.
Next generation networks (NGN) are heterogeneous and multipurpose communication networks. The polymorphic Internet (π) is an NGN service network that fully supports ubiquitous computing and communications. In this context, the polymorphic Internet's key novel features and benefits are:

• π is an IP-based network that is capable of dynamic reconfiguration and autonomic adaptation to service requirements, user activity, and network conditions. However, reconfiguration and adaptation are not restricted to the IP layer, but may span all network layers if necessary.
• π networks have a high degree of autonomy, in that they are reconfigured and adapted on demand by autonomous service and network management components.
• π networks provide resources to network users, applications, and operators (communication, storage, computing, and content resources) at any place and at any time.
• π networks provide support for ubiquitous applications; for example, the provision of storage and computing resources, mobility, and security in a very flexible and customizable way (ubiquitous computing).
• π is a set of cooperative, service-specific network overlays of physical and logical resources. π networks comprise nested, cooperative networking nodes with highly dynamic middleware environments that provide value added services over a range of wired and wireless network technologies (ubiquitous communications).
• π networks offer application service providers a set of interfaces and tool kits that facilitate the flexible creation, rapid deployment, and (re)configuration/personalization of new services.
• π networks offer the user the opportunity to customize the creation of, and participate in the management of, adaptable connectivity services and audiovisual services.

References

[1] IETF ForCES, http://www.ietf.org/html.charters/forces-charter.html.
[2] Standard for Application Programming Interfaces for ATM Networks, IEEE P1520.2, Draft 2.2, http://www.ieee-pin.org/pin-atm/intro.html.
[3] FAIN Project, http://www.ist-FAIN.org.
[4] MSF, Multiservice Switching Forum, http://www.msforum.org.
[5] OGSA, An Open Grid Services Architecture for Distributed Systems Integration, Global Grid Forum, http://www.ggf.org/ogsi-wg/drafts/ogsa_draft2.9_2002-06-22.pdf.
[6] OSWG, Open Signaling Working Group, http://www.comet.columbia.edu/opensig/.
[7] OSGI, Open Service Gateway Initiative, http://www.osgi.org.
[8] PARLAY, Parlay Group, http://www.parlay.org.
[9] WWRF, Wireless World Research Forum, Book of Vision, http://www.wireless-world-research.org/.
About the Editors
Alex Galis is a visiting professor in the Telecommunications Systems Research Group in the Department of Electronic and Electrical Engineering at University College London, United Kingdom. He is the author and coauthor of more than 100 papers and technical reports on telecommunications subjects, including network and service management, active and programmable networks and services, grid networks, context networking, intelligent and mobile agents in networks and services, broadband networks, and global connectivity. He has been an editor of or contributor to three books: Deploying and Managing IP over WDM Networks (Artech House, 2003), Multi-Domain Communication Management Systems (CRC Press, 2000), and On the Way to Information Society, edited by T. Magedanz et al. (IOS Press, 2000). He was the leader of the Future Active IP Networks (FAIN) research project. He also conducted the collaborative research and development MISA project, focused on network management for broadband networks. He has been and is currently involved in other European Union research projects, including IST WINMAN, IST HARP, IST MANTRIP, IST CONTEX, IST ENEXT, and IST AMBIENT NETWORKS, which focus on network management, security, mobile software in networks, context management, and ambient networking research.

Spyros Denazis received a B.Sc. in mathematics from the Department of Mathematics, University of Ioannina, Greece, in 1987. In 1993, he received a Ph.D. in computer science from the University of Bradford, United Kingdom, where he worked on the performance modeling and evaluation of computer networks. At the University of Bradford, Dr. Denazis worked as a research assistant, concentrating on the performance evaluation of ATM switches, and as a research fellow, researching the effects of correlated traffic on ATM network nodes. He has also worked as a project leader for Intracom S.A. in Greece.
There he was involved in a number of ESPRIT and ACTS European projects, namely ALCOM-IT, ISIS, COMMITY, and WAND. In 1998, Dr. Denazis joined the Information and Telecommunications Laboratory (ITL) Department at Hitachi Europe Ltd. in Cambridge, United Kingdom. In that position he was responsible for managing a number of projects between Hitachi Europe Ltd. and the Centre for Communication Systems Research at the University of Cambridge. Since 2003, Dr. Denazis has been working as an exclusive consultant of the Hitachi Sophia
Antipolis Laboratory in France. His current interests include research on active and programmable networks, differentiated services, and adaptive QoS. He has recently completed the IST project FAIN, in which he acted as a work-package leader. Dr. Denazis is the author and coauthor of more than 25 scientific papers in conferences and journals, and the holder of one patent.

Célestin Brou is a senior researcher at Fraunhofer/FOKUS in the PLATIN Competence Center, where he deals with distributed object computing. He has a Ph.D. in computer science from the Paris VI University and an M.S. in software engineering from the National Polytechnic School of Côte d'Ivoire (Ivory Coast). His Ph.D. dissertation, for the Network and Multimedia Services Department at the Ecole Nationale Superieure de Bretagne, was about the optimization of network resource usage for multimedia application distribution on demand. Dr. Brou has worked at the French national research institute INRIA-Rocquencourt, analyzing and testing OSI Transactional Processing (ISO 10026). He has also worked on several European and international projects in which Fraunhofer/FOKUS was involved. Among those are the EURESCOM P610 (Management of Multimedia Services) and P806 (EQoS Framework) projects. Dr. Brou was also involved in OSCAR, a joint project of Hitachi, NTT, NTT Data, and Fraunhofer/FOKUS. Since 2001, he has been involved with the IST FAIN (Future Active IP Networks) project, in which he has been leading a work package that designed and implemented a policy-based active network management application. He is presently leading a work package in the IST multimedia Content Discovery and Delivery (mCDN) project. His research interests include network services, network management, programmable and active networks, software engineering, and distributed processing.

Cornel Klein is a principal engineer at Siemens AG, Corporate Technology, Software and Engineering, in Munich, Germany.
He is responsible for the strategic direction of research projects in the area of technologies and architectures for distributed systems, and for the respective internal consulting projects for Siemens' business units. Having started his career in 1998 in the Strategic Product Planning Department of Siemens Information & Communication Networks, he has a strong background in telecommunication networks for carriers, with a focus on IP-related technologies, broadband ISDN, software architectures for network elements, network management, service architectures, and mobile agents. Dr. Klein was a work-package leader in the FAIN project and led Siemens' participation in FAIN. He is the author and coauthor of more than 20 scientific papers and holds several patents. Dr. Klein received an M.S. in 1992 and a Ph.D. in 1997 from the Munich Institute of Technology, where he worked on formal models and specification techniques for distributed systems.
Index A ABone 21, 48 Access rights delegation policy 337 Acquisition layer components 371 ACT 138 Active applications 15, 19, 22, 48, 116, 118, 144, 153, 160, 254, 297, 298 Active Bell Labs engine 48 Active bridge 38, 65, 72, 75 Active Network Encapsulation Protocol 20, 35, 36, 48, 49, 50, 162, 163, 188, 213, 215, 216, 217, 219, 221, 222, 223, 224, 228, 241, 242, 244, 245, 247, 298, 300, 301, 307, 311, 312, 387, 402 Active network management daemon 48 Active network service provider 109, 115, 116, 118, 132, 133, 137, 138, 139, 140, 141, 142, 143, 152, 153, 154, 168, 169, 170, 173, 180, 181, 182, 216, 327, 328, 330, 331, 332, 334, 336, 348, 349, 351, 356, 358, 359, 364, 373, 375, 376, 393, 400, 409, 410, 411, 412, 413, 415, 418 Active network 5, 6, 7, 8, 14, 15, 16, 17, 18, 19, 20, 21, 22, 27, 29, 30, 32, 33, 35, 36, 38, 39, 40, 41, 47, 48, 49, 50, 51, 52, 53, 55, 57, 58, 60, 61, 62, 65, 67, 68, 69, 70, 71, 72, 74, 75, 78, 79, 88, 96, 98, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 123, 124, 125, 126, 127, 128, 130, 131, 133, 134, 135, 137, 138, 139, 140, 142, 144, 149, 151, 152, 153, 154, 158, 159, 160, 162, 163, 164, 168, 170, 171, 172, 173, 174, 176, 179, 182, 183, 184, 185, 189, 190, 195, 196, 213, 215, 216, 224, 227, 228, 229, 233, 235, 236, 237, 241, 242, 246, 247, 248, 253, 254, 255, 256, 262, 263, 264, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 307, 310, 319, 321, 327, 328, 329, 330, 332, 335, 342, 349, 353, 354, 360, 361, 365, 372, 373, 374, 375, 376, 385, 400, 402, 404, 409, 410 Active networks environment 131 Active packets edition 298 Active service provisioning 19, 21, 47, 56, 62, 157, 161, 170, 171, 172, 173, 174, 175, 182, 190, 228, 237, 243, 249, 329, 332, 335, 336, 339, 340, 342, 349, 353, 354, 356, 367, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 385, 386, 387, 389, 390, 391, 410, 411, 412, 413, 415, 418 Active SNMP EE 163
Active SNMP 163, 207, 229, 234, 243, 246, 249 Active virtual network management prediction 48 Active virtual peer 96, 97, 98, 99, 100, 102, 103, 104, 105 Active VPN 130, 132, 133, 134, 143 Active Web gateway 123, 183 Active Web services 120, 124, 183 Admission control model 256 Admission control 161, 163, 255, 256, 257, 264, 337 Advanced intelligent network 6, 67 AEGIS 33, 34, 73 ALAN 52, 57, 97, 98, 105 ALF 130 ALIEN 38, 65, 71, 72, 73, 74, 75, 77, 78 AMnet 300 AMP 299 ANDROID 51, 52, 53 ANN 57, 58, 59, 184, 185, 296, 321 ANTS 19, 21, 57, 58, 59, 75, 246, 254, 293, 296, 300 A-PBM 51 APES 55 Application optimization layer 96, 97, 98, 100, 101, 102, 103, 104 Audit subsystem 238 Authentication 19, 22, 30, 31, 35, 36, 49, 117, 130, 135, 142, 153, 161, 190, 203, 231, 232, 233, 234, 242, 245, 247, 248, 300, 336 B Base building blocks 11, 13 Basic component 198, 199, 204, 205, 241 Bowman 17, 19, 298 BR 62, 115, 116, 117, 134, 152, 153, 398 C Caml 19, 68, 70, 71, 73, 74, 76, 77 CAN 91 CANARIE 5 CANES 19, 22 Capsules 15, 54, 70 CCM interface 9 Chained layered integrity checking 73
Channel manager 162, 202, 208, 217, 218, 219, 220, 224 Channel 28, 58, 162, 197, 202, 208, 210, 217, 218, 219, 220, 224, 243, 244, 245, 342, 343, 364, 366, 367, 368, 370, 415 Chord 90 Cluster-based active node 254, 297 Code manager 174, 175, 176, 377, 378, 381, 387, 388, 389, 390, 412 Component manager 162, 163, 197, 198, 199, 200, 201, 204, 205, 206, 207, 208, 209, 210, 255, 261, 394 Configurable component 198, 199, 200, 205, 206, 207, 208, 255, 261, 370 Connection manager 99, 163, 239, 241, 243, 250 Consumer 27, 31, 65, 68, 70, 115, 116, 118, 131, 132, 133, 134, 138, 140, 142, 143, 152, 153, 154, 168, 170, 180, 217, 219, 229, 255, 296, 297, 300, 309, 311, 328, 330, 331, 332, 348, 354, 383, 384, 400, 410, 411, 412, 413, 420 Control elements 12, 13 Control life-cycle functional domain 339 CORBA 61, 62, 113, 141, 172, 195, 203, 204, 205, 208, 210, 224, 240, 241, 262, 373, 374, 386, 388, 398, 402, 412, 415 Core switchlet 72, 73 Credential manager 238 Crossbow/ANN 17, 18 Cryptographic authentication 75 Cryptographic subsystem 238, 239, 243 Customer network management 134 D DARPA 7, 8, 14, 27, 32, 48, 65, 66, 79 Data manipulator chain 372 Delegation of access rights PDP 358, 364, 365 Delegation of access rights PEP 359, 364, 365 Demultiplexing facilities 161, 162, 179, 188, 189, 213, 214, 217, 221, 222, 223, 224, 237, 387, 402 Deploy functional domain 339, 340 Deploying a service 171, 173, 375 Differentiated services DiffServ scenario 180 Differentiated services DSCP numbers 394, 395, 397, 399 Differentiated services management 393 Differentiated services 3, 7, 11, 12, 52, 136, 150, 180, 181, 208, 209, 261, 263, 264, 364,
393, 394, 395, 396, 397, 398, 399, 400, 402, 404 DiffServ Class 394, 395, 397, 398, 399 DiffServ controller 208, 209, 263, 264, 397 DiffServ manager 208, 263, 264, 394, 395, 397, 398, 399 DiffServ scenario 263, 393, 397, 398, 399 Distributed management task force 51 Domain manager 339, 340 E EDonkey 88 Element management station 164, 165, 168, 179, 180, 182, 330, 340, 349, 354, 361, 362, 363, 366, 367, 368, 370, 413 Enforcement layer 238 Enterprise model 3, 7, 87, 109, 114, 115, 117, 120, 134, 138, 144, 150, 151, 152, 153, 168, 173, 190, 195, 230, 248, 257, 259, 327, 328, 331, 332, 372, 375, 410, 418, 421, 422 Etoile project 318 Execution environment user space 17, 18, 158, 184, 186, 255, 261, 293, 294, 296, 300, 304, 305, 307, 308, 309, 311, 312 Execution environment 3, 8, 15, 17, 18, 19, 20, 21, 22, 23, 32, 37, 48, 50, 62, 98, 116, 118, 137, 138, 139, 141, 142, 143, 149, 152, 153, 154, 155, 156, 157, 158, 159, 160, 163, 167, 172, 173, 176, 184, 185, 189, 190, 195, 196, 198, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 213, 215, 217, 219, 220, 222, 224, 227, 228, 229, 238, 249, 253, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 312, 319, 321, 328, 364, 374, 376, 378, 379, 380, 381, 383, 384, 386, 389, 390, 391, 398, 402, 404, 412, 415, 418, 420 Execution environments for proxylets 98 Exokernel 17, 32, 69 External security representation subsystem 239 F FAIN active router 149, 150, 159, 160, 189 FAIN EE 20, 155, 217 FAIN management system 149, 150, 159, 163, 189, 328, 330, 332, 346, 348, 373 FAIN policy-based network management 51, 165, 167, 169, 171, 181, 263, 327, 329, 332, 410, 411
FAIN reference architecture 150, 154, 158, 160 FAIN testbed scenarios 150, 240 FAIN testbed 149, 150, 176, 177, 178, 179, 180, 240, 332, 404, 418 FAIN 3, 4, 8, 16, 20, 21, 23, 40, 41, 51, 57, 61, 62, 65, 109, 110, 114, 115, 116, 117, 118, 120, 125, 134, 135, 136, 138, 143, 144, 149, 150, 151, 152, 153, 154, 155, 156, 158, 159, 160, 161, 162, 163, 164, 165, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 185, 186, 188, 189, 190, 195, 196, 199, 201, 202, 211, 213, 214, 215, 216, 217, 224, 227, 228, 229, 230, 231, 235, 237, 239, 240, 245, 246, 247, 249, 253, 255, 256, 257, 259, 261, 262, 263, 264, 327, 328, 329, 330, 331, 332, 333, 335, 336, 340, 343, 346, 348, 349, 358, 362, 363, 364, 367, 370, 372, 373, 374, 375, 379, 380, 382, 384, 385, 386, 387, 390, 391, 394, 404, 409, 410, 411, 418, 419 Flexible service component deployment 409 Flexinet 300 Forwarding elements 12, 13, 159 Fp reference point 13
Inter-domain manager 349, 350, 352 Internet2 5 IntServ 136 J Janos Java nodeOS 17, 19 Jasmin 51, 52 K Kazaa 88, 90 Kernel space 17, 158, 163, 184, 255, 294, 296, 299, 304, 305, 307, 309, 312 L Lara++ 17, 18 L-interface 9, 10, 11, 158 M
G GENI 18 Gnutella 88, 89, 90, 92, 99, 100, 101, 102, 103 H Hardware manufacturer 117, 153 HIGCS 59
Magician 300 Management by delegation 61, 165, 167 Management instance 169, 170, 181, 182, 332, 337, 349, 362, 364, 412, 413, 414 Mapcenter 318 Master sensor registry 366, 367, 370 Middleware manufacturer 116, 153 ML 38, 68, 70, 74, 75 Moab/Janos 17 Monitoring PDP 366, 367, 368, 370, 409 Monitoring PIB 369 Monitoring system 342, 354, 366
I N IEEE P1520 8, 9, 10, 11, 12, 13, 158, 170, 419 IETF ForCES 8, 12, 13, 159, 170, 419 IETF 5, 8, 12, 51, 54, 119, 136, 159, 167, 170, 327, 346, 419 IIOP port 205 Infrastructure language 70 Instantiate functional domain 339 Integrated services 130, 136, 138 Integrity subsystem 239 Intel IXP1200 18 Intelligent networks 5, 6, 35, 67, 113, 140, 421
Nemesis 17, 31, 32, 36, 37, 69, 75 Network application layer 420 Network ASP manager 174, 175, 377, 378, 385, 387, 411, 412 Network communication services layer 420 Network infrastructure provider 109, 116, 117, 118, 137, 142, 152, 153, 154, 331, 348, 362, 375 Network management station 164, 165, 168, 169, 180, 181, 330, 331, 332, 340, 348, 349, 350, 351, 352, 354, 355, 411 Network optimization layer 97
430
Programmable Networks for IP Service Deployment
NGI 5 Node ASP code manager 174, 175, 176, 377, 378, 381, 387, 388, 389, 390, 412 Node ASP local service registry 176, 377, 378, 386, 387, 388, 412 Node ASP node ASP manager 175, 377, 387 NodeOS 15, 16, 17, 18, 19, 22, 32, 36, 37, 157, 160, 161, 184, 189, 228, 237, 244, 248, 298, 321
Programmable protocol processing pipeline 245, 299, 311, 321 PromethOS execution environment manager 207 PromethOS 17, 18, 179, 184, 186, 195, 207, 209, 249, 299, 364 Pronto 17 PxP 51, 53 Q
O OCaml 38, 73 Open control interface 262, 263, 396, 397, 398 Open Service Gateway Initiative 62, 419 Opensig 6, 7, 8, 12 P P2P 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 103, 104, 105 Pastry 90 PLAN 19, 21, 58, 72, 75, 76, 77, 78 PLANet 57, 58, 76, 77, 78 Policy decision point 55, 166, 167, 170, 182, 189, 190, 330, 332, 334, 335, 336, 337, 338, 339, 340, 341, 342, 347, 349, 353, 354, 355, 356, 358, 359, 362, 363, 364, 365, 366, 368, 409, 413, 414, 415, 416 Policy definition 53, 54 Policy editor 350 Policy enforcement point 54, 166, 167, 182, 189, 330, 335, 339, 340, 349, 357, 358, 359, 360, 361, 363, 364, 366, 367, 368, 370, 373, 409, 413, 414, 416, 417 Policy extension 53, 54, 132 Policy information base 166, 367, 368, 369, 371, 372 Policy manager 238, 329 Policy parser 344, 346 Policy repository 166, 346, 347, 361, 366, 367, 368, 370 Ponder 51, 56 Portable common runtime 33 Practical active network 257, 296 Principal manager 238 Principal secure store 238, 239 Probabilistic routing 101, 102 Programmable networks 1, 2, 3, 4, 5, 6, 7, 8, 23, 27, 47, 48, 51, 56, 59, 67, 71, 109, 110, 144, 149, 161, 329, 373, 417
QoS PEP 356, 357, 358, 363, 366 QoS policy 181, 337, 411 R Reconfiguring a service 173, 376 Redirection node 123, 182, 183 Release functional domain 340 Releasing a service 173, 375 Removing a service 173, 376 Resource building blocks 11 Resource control facilities 161, 162, 163, 179, 237, 248, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 264 Resource controlled active network environment 17, 36, 37, 71, 75 Resource controller abstraction 254, 255, 257, 259 Resource manager 32, 162, 163, 198, 201, 202, 206, 207, 209, 254, 255, 259, 349, 353, 354, 355, 356, 359, 394, 412 Retailer 117, 118, 153, 154 RP1 118, 138, 141, 154 RP2 118, 139, 141, 154 RP3 118, 138, 139, 142, 154 RP4 118, 138, 139, 143, 154 RP5 118, 138, 143, 154 S Scout 17, 18, 37 Secure Active Network Environment 35, 37, 38, 39, 40, 71, 73, 78, 79, 227 Secure Quality of Service Handling 37 Secure sockets layer 203, 233, 241 Security accountability 231, 236 Security architecture policy manager 53, 54 Security architecture 41, 70, 73, 227, 230, 231, 236, 237, 238, 239, 243, 244, 246, 248, 249, 250
Security association 4, 188, 189, 233, 239, 241, 243, 244, 245
Security code and service verification 235
Security context 188, 202, 203, 206, 232, 233, 237, 238, 239, 240, 241, 242, 244, 245, 248, 249
Security facilities 161, 188, 419
Security identifier 240
Security manager 20, 163, 202, 206, 222, 223, 238, 240
Security packet integrity 231, 233, 234, 239, 247
Security policy enforcement 166, 167, 231, 232, 235, 240, 242, 247, 249, 330, 349, 357, 358, 363
Security system integrity 234, 235, 237, 238, 250
SENCOMM 19, 22, 50, 60
Seraphim 51, 52
Service component manufacturer 116, 153
Service component provider 109, 115, 116, 117, 118, 138, 141, 152, 153, 154, 328, 375
Service creation 1, 8, 23, 98, 149, 158, 174, 175, 185, 377, 378, 387, 388, 389, 409, 412, 418, 420
Service creation engine 174, 175, 185, 377, 378, 387, 388, 389, 412
Service deployment 3, 8, 20, 22, 56, 57, 58, 59, 62, 110, 138, 141, 149, 158, 159, 171, 172, 173, 176, 185, 190, 210, 229, 302, 308, 319, 328, 349, 352, 373, 374, 375, 378, 380, 385, 390, 391, 412, 417, 420, 422
Service descriptor node level 174, 176, 228, 235, 353, 354, 377, 378, 379, 380, 381, 382, 383, 384, 388, 389, 391, 411, 412
Service level agreement 133, 135, 136, 137, 138, 143, 168, 181, 184, 187, 330, 351, 356
Service logic execution 420
Service node 113, 123, 124, 182, 183
Service provider 2, 5, 7, 21, 47, 50, 57, 62, 87, 109, 111, 112, 113, 115, 116, 117, 118, 120, 122, 123, 124, 125, 131, 132, 133, 134, 135, 136, 137, 139, 140, 141, 142, 143, 149, 152, 153, 154, 157, 164, 165, 168, 169, 171, 173, 174, 180, 181, 182, 183, 184, 187, 190, 195, 196, 200, 202, 209, 210, 211, 216, 230, 257, 259, 293, 327, 328, 330, 331, 332, 336, 337, 348, 349, 351, 354, 356, 358, 364, 365, 373, 375, 376, 379, 380, 386, 391, 393, 394, 400, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 421, 423
Service registry 174, 176, 377, 378, 380, 385, 386, 387, 388, 412
Service repository 20, 174, 176, 301, 302, 303, 307, 377, 378, 380, 381, 386, 387, 388, 389, 412
Service specification 74, 126, 149, 172, 185, 328, 374, 420
Service-specific building block 11, 12
SILK 17
Simple Active Router Assistant 297
Slave sensor registry 367, 368, 370
Smart Environment for Network Control 50, 60
Smart packets 15, 49, 50
SNAP 19, 75, 76, 181, 195, 203, 207, 208, 209, 228, 229, 240, 246, 247, 249, 301, 398, 400, 402, 404
SNAP activator 247
SNAP execution environment 207, 208, 209
SNAP execution environment manager 207
SNMP port 205
Spanner 15
Special manager 201
SPIN 32, 67, 68, 69
Streams 16, 20, 69, 74, 126, 184, 294, 298, 302, 304, 305, 306, 307, 308, 309, 314, 315, 317, 319, 409
Switchlet language 69
SwitchWare 3, 23, 37, 38, 39, 40, 57, 58, 59, 65, 73, 75, 76, 77, 78, 79
System integrity subsystem 238, 250

T
TAGS 301
Tamanoir 20, 294, 307, 308, 309, 310, 311, 312, 313, 314, 315, 317, 318, 319, 320, 321
Tamanoir active nodes 307, 310, 317, 318
Template manager 162, 198, 200, 206, 209, 210
Traffic class 260, 261, 262, 263, 264, 394, 395, 396, 397, 398, 399, 401, 404
Traffic controller 202, 208, 261, 262, 263, 264, 364, 396, 397
Traffic manager 202, 208, 209, 261, 262, 264, 394, 395, 397, 398, 399
Trusted Computing Platform Alliance 34, 40, 74, 79
TSAS 117, 153

U
U-interface 9
Updating a service 376
V
VAN 48, 50, 172, 182, 195, 348, 349, 351, 352, 353, 354, 356, 374, 393, 409, 412, 413, 418
VE manager 21, 157, 162, 163, 179, 184, 185, 255, 256, 257, 258, 262, 263, 264, 395, 398, 399, 402
VERA 18
Verification manager 238, 243, 250
V-interface 10
Virtual control cache 97, 98, 104
Virtual environment 3, 20, 21, 22, 116, 129, 141, 142, 152, 156, 157, 158, 160, 161, 162, 163, 169, 181, 182, 184, 185, 190, 195, 196, 198, 200, 201, 202, 206, 208, 209, 210, 213, 215, 216, 217, 219, 220, 222, 224, 228, 229, 230, 235, 237, 239, 240, 241, 242, 244, 245, 248, 253, 254, 255, 256, 257, 258, 259, 261, 262, 263, 264, 331, 340, 349, 353, 361, 362, 363, 364, 366, 373, 390, 393, 394, 395, 396, 397, 398, 399, 409, 411, 413
Virtual machine 15, 16, 17, 22, 28, 31, 49, 70, 113, 306, 309, 310, 312
Virtual private active network 257, 258
VTHD 311, 318, 319, 320

W
WebTV scenario, delegation of management tasks 409, 418
WebTV scenario, VAN creation and activation 409, 418
Wire language 69
Withdrawing a service 173, 174, 375, 376
Workflow 133, 134

X
Xen 75
XenoServer 75
X-Kernel 69
XML 53, 62, 119, 172, 335, 344, 346, 350, 374, 377, 379, 383, 384, 386, 388, 391, 420