Governance of Communication Networks
Contributions to Economics
www.springer.com/series/1262
Brigitte Preissl · Jürgen Müller (Editors)
Governance of Communication Networks
Connecting Societies and Markets with IT
With 93 Figures and 55 Tables
Physica-Verlag A Springer Company
Series Editors
Werner A. Müller
Martina Bihn

Editors
Dr. Brigitte Preissl
Deutsche Telekom AG
Konzernzentrale PIR 4
Friedrich-Ebert-Allee 140
53113 Bonn
Germany
E-mail: [email protected]

Professor Jürgen Müller
The Berlin School of Economics
Badensche Straße 50–51
10825 Berlin
Germany
E-mail: [email protected]
ISBN-10 3-7908-1745-7 Physica-Verlag Heidelberg New York
ISBN-13 978-3-7908-1745-4 Physica-Verlag Heidelberg New York
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Physica-Verlag. Violations are liable for prosecution under the German Copyright Law.
Physica-Verlag is a part of Springer Science+Business Media
springer.com
© Physica-Verlag Heidelberg 2006
The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
Typesetting: Camera ready by the author
Cover: Erich Kirchner, Heidelberg
Production: LE-TEX, Jelonek, Schmidt & Vöckler GbR, Leipzig
SPIN 11796732
Printed on acid-free paper – 88/3100 – 5 4 3 2 1 0
Table of Contents
Introduction
Brigitte Preissl ........................................................................................................ 1

Part 1 – Regulation: Making Networked Systems Function
Rewriting U.S. Telecommunications Law with an Eye on Europe
James B. Speta ....................................................................................................... 11
A Quadratic Method for Evaluating the New Hungarian Act on Electronic Communications with Respect to the Policy and Regulatory Objectives
Gyula Sallai ........................................................................................................... 37
The Status of Regulation and Competition in Poland in the Advent of the Accession to the EU
Jerzy Kubasik ........................................................................................................ 57
Regulatory Framework and Industry Clockspeed
Jarkko Vesa ........................................................................................................... 79

Part 2 – Technical Aspects and Standardisation
A Comparison of ENUM Field Trials
Dieter Elixmann, Annette Hillebrand, Ralf G. Schäfer ......................................... 93
3G: Standardisation in a Techno-Economic Perspective
Anders Henten, Dan Saugstrup ........................................................................... 111
Architectural, Functional and Technical Foundations of Digital Rights Management Systems
Vural Ünlü, Thomas Hess ................................................................................... 129

Part 3 – Making the Market Fly: Critical Mass and Universal Service
Service Universalisation in Latin America: Network Evolution and Strategies
Arturo Robles Rovalo, José Luis Gómez Barroso, Claudio Feijóo González ..... 149
Sustainability of Community Online Access Centres
Peter Farr, Franco Papandrea ............................................................................ 165
The SMS Bandwagon in Norway: What Made the Market?
Kjetil Andersson, Øystein Foros, Frode Steen .................................................... 187
How to Achieve the Goal of Broadband for All
Morten Falch, Dan Saugstrup, Markus Schneider .............................................. 203
Estimating the Demand for Voice over IP Services: A Contingent Valuation Approach
Paul Rappoport, Lester D. Taylor, James Alleman ............................................. 227

Part 4 – Integrating Citizens and Consumers in the Information Economy Master Plan
The Transformation of Media – Economic and Social Implications
Benedikt von Walter, Oliver Quiring ................................................................... 243
Pluralism in Digital Broadcasting: Myths, Realities and the Boundaries of EU Action
Monica Ariño ....................................................................................................... 273
New Perspectives on Mobile Service Development
Jan Edelmann, Jouni Koivuniemi, Fredrik Hacklin, Richard Stevens ................. 295
“I-Mode” in Japan: How to Explain Its Development
Arnd Weber, Bernd Wingert ................................................................................ 309
Demand for Internet Access and Use in Spain
Leonel Cerno, Teodosio Pérez Amaral ................................................................ 333

Part 5 – Integration of Markets
European Integration and Telecommunication Productivity Convergence
Elisa Battistoni, Domenico Campisi, Paolo Mancuso ......................................... 357
Investment by Telecommunications Operators and Economic Growth – A Fenno-Scandinavian Perspective
Tom Björkroth ..................................................................................................... 379
European Union Mobile Telecommunications in the Context of Enlargement
Jason Whalley, Peter Curwen .............................................................................. 403
Fourier-based Study of the Oscillatory Behaviour of the Telecommunications Industry
Federico Kuhlmann, Maria Elena Algorri, Christian N. Holschneider Flores .... 427
The CAPEX-to-SALES TRAP
Matthias Pohler, Jens Grübling ........................................................................... 439
Modelling Regulatory Distortions with Real Options: An Extension
James Alleman, Paul Rappoport .......................................................................... 459
Introduction

Brigitte Preissl
Deutsche Telekom AG, Germany
E-mail: [email protected]
Undoubtedly, information and communication technology (ICT) bears great potential to connect individuals, firms and organisations. Whether this potential will actually result in the integration of markets and societies is a different issue. The articles collected in this book deal with various aspects of the complex challenge of turning ICT into a tool that connects individuals, organisations and countries with different paths of development, different regulatory settings, different consumer preferences and – last but not least – different cultural backgrounds. The focus is, thus, on aspects that emphasise inclusion and integration, access to networks and conditions under which networks develop their self-reinforcing powers.

In a broad perspective this comprises attempts to harmonise frameworks for regulation, strategies of firms to become international players, and telecommunication markets in New EU Member States merging with ‘old’ EU markets. However, it also touches upon aspects of the inclusion of customers in product and service strategies, and access to advanced technology and networks for all groups in society regardless of their social status or geographical location. New technologies that offer new ways to communicate can substantially reduce barriers to universal communication and facilitate the connection of societies and markets beyond traditional boundaries.

If ICT is supposed to connect societies on a truly global scale, the integration of developing countries, of remote areas and of marginal groups in a society will be a big challenge. In order for the technical possibilities for an integrated international market to be transformed into socio-economic reality, a huge gap has to be closed. Universal service – still a major issue in many countries – is a key concept for guaranteeing a socially viable approach on the path to the information society. However, innovative tools to reach universally accessible services, such as community centres or mobile devices, are emerging which promise quick, but not always easy, solutions.

Many telecommunication markets still struggle with the transition from monopoly to competition. In these cases the relevance of sensible regulation for solving bottleneck problems and for governing the transition is generally accepted. If regulation succeeds in establishing sustainable competition, an important condition for creating markets that allow the potential of ICT to unfold is fulfilled. However, intriguing regulatory problems remain unresolved and keep emerging in each new round of technology and market development.
Over-regulation threatens to hinder innovation and technical development and, thus, to impose new obstacles to market development. Regulatory intervention needs to be as strict as necessary to achieve its goals and as light-handed as possible in order not to disturb market development and the dynamics of technological development. This book will not make an effort to enter into these discussions at length. The papers on regulatory issues included here deal with selected topics which are concerned with making productive use of resources in order to allow a large number of users to benefit from networked systems.

The book is divided into five sections. The first section discusses problems of telecommunications regulation from the perspective of generating optimal conditions to develop the connecting forces of IT. The second section addresses technical issues that are essential to enhance the connectivity of IT systems. Problems of critical mass and universal service are addressed in the third section, while the fourth section deals with an appropriate consideration of consumer and societal needs in the conception of IT systems. Finally, section five comprises contributions which discuss market developments.

Regulation: Making Networked Systems Function

Two phenomena may justify regulatory intervention. The first is that the technical characteristics of networks lead to sub-optimal results under competition: the resulting natural monopolies, standardisation needs and bottlenecks generate problems that require regulatory solutions (including self-regulation schemes). The second is the transition from monopolistic regimes to competition, where – for a while – market entry has to be supported by regulating the former monopolist’s behaviour. The latter type of regulation is supposed to accompany the transition and to fade out once markets function reasonably well; that is, regulation that accompanies the establishment of competitive markets works best if it abolishes itself in due time. Telecommunication markets should then be subject to anti-trust monitoring like any other product or service market. However, common experience is that once regulatory institutions have been established, they do not tend to die. Therefore, finding a way to actually achieve this fading out of regulation is a major challenge for economic policy.

There are various views on how to reach this state. Different approaches are being adopted in the European and the US regulatory systems. James B. Speta proposes to use the European way as an inspiration for the reform of certain aspects of the American regulatory system. He discusses the situation in the US, reflecting on it in the light of the European Commission’s approach to market analyses, regulation requirements and competition law.

Whereas in many industrialised countries the liberalisation of telecommunication markets dates back to the last century, problems of introducing the right regulatory regime to support markets and to establish sustainable competition have recently arisen in the transformation processes of former socialist countries. Not only do these countries strive to develop the potential of their internal communication markets; their integration in global communication networks also requires open markets and smooth and seamless communication systems.
This integration of networks is crucial for the success of the EU25 economic, social and political union. Two chapters discuss this problem from different angles. Gyula Sallai provides a conceptual framework for a systematic evaluation and comparison of regulatory rules, which he applies to the Hungarian Telecommunications Act. His approach allows a quick assessment of the state of regulatory efficiency and effectiveness. Applying his method to other regulation schemes might help to significantly improve regulation and to achieve its goals more quickly. Jerzy Kubasik documents ‘the Polish way’ of dealing with the issue of markets in transition from monopoly to competition in his paper ‘The Status of Regulation and Competition in Poland in the Advent of the Accession to the EU’. At the time the article was written, this was a story of unfinished tasks rather than of regulatory achievements. However, Kubasik suggests remedies and makes a number of sensible proposals to end the standstill in Polish telecommunications regulation.

Although regulation can have a crucial role in making competition work in the presence of sunk costs, technological bottlenecks and network externalities, the downside is that it does not always produce the most effective and most economically desirable results. Regulation interferes with market forces under conditions of imperfect information and can, thus, give the wrong signals. It may hinder technical progress by reducing incentives to innovate. Technological development has its own rhythm, which may not correspond with that of regulatory action. Jarkko Vesa’s paper analyses the interdependence between regulation and the pace of technical progress, which generates a complex problem of governance. He uses the example of mobile data services in Finland where, according to his findings, the deployment of new technologies has been slowed down by wrongly phased regulatory intervention.

Technical Aspects and Standardisation

Although technology develops at breathtaking speed, far from all technical questions of connecting communication lines have been resolved in a satisfactory manner. Hence, some of the papers in this book discuss tools that enable people to communicate more efficiently and more comfortably. They include the organisation of networks and the standardisation and harmonisation of technical devices, such as transmission protocols or telephone numbering schemes. Technical and organisational solutions that are internationally valid help firms and individuals to operate in foreign markets and, thus, serve the internationalisation of communication and economic activity. For example, the conception of consistent numbering schemes that can be adopted in fixed network and mobile communication, as well as in traditional telephony and voice over IP, seems trivial at first sight, but is a major challenge for the technicians and regulators in charge. Dieter Elixmann, Annette Hillebrand and Ralf G. Schäfer report the results of field trials testing a system which allows the transformation of telephone numbers into internet domains. The paper shows that numbering is not only a technical problem, but a feature of the organisation of telecommunication systems which determines
market power and the distribution of chances among actors in the market. Anders Henten and Dan Saugstrup open up yet another techno-economic perspective on standardisation issues in their analysis of standards for 3G mobile networks. Due to the network character of communication there will probably be only one winning standard that prevails in a certain geographic region and over a considerable period of time. Henten and Saugstrup present various alternatives and discuss their respective chances of becoming the ‘common’ standard.

International communication networks offer previously unforeseen possibilities to distribute information at very low cost. As information and other media content are expensive to generate but – due to modern ICT – extremely cheap to copy and distribute, a problem arises with respect to intellectual property rights. On the one hand, an individual’s or a company’s intellectual property needs to be protected against unauthorised use in order to create incentives for the creation of information and knowledge. On the other hand, universal networks develop their full potential of connecting societies if they integrate as many people as possible in the international knowledge system. Affordable access to knowledge is therefore crucial to reap the benefits of knowledge diffusion. In the absence of convincing solutions for the protection of intellectual property rights in electronic systems, it is useful to take stock of the existing options. This is what Vural Ünlü and Thomas Hess do in their paper on ‘Architectural, Functional and Technical Foundations of Digital Rights Management Systems’.

Making the Market Fly: Critical Mass and Universal Service

Communication networks have the interesting characteristic that they are the more valuable for each network partner the more individuals they connect. After reaching a critical mass of users, network usage expands at an accelerated pace of growth. Hence, it is particularly important – apart from the social aims pursued – to integrate as many people as possible in the network. Universal service, the obligation of service providers to reach a certain coverage of the national territory or the population, is one instrument to reach this state. Arturo Robles Rovalo, José Luis Gómez Barroso and Claudio Feijóo González present the history of universal service in Latin American countries and show a surprising variety of different approaches. A similar concern gave rise to the research done by Peter Farr and Franco Papandrea. Local community centres that offer access to advanced communication technologies in Australia are a means of connecting people to the global communication world if they have no chance of getting connected from their homes. Farr and Papandrea discuss the conditions under which these centres have been successful and will develop sustainable operations in the future. Cooperative networks have been identified as one success factor, and access to support services from central or regional units that offer help in the case of operational or technical problems as another.
Communication that encompasses large communities develops its own dynamics by creating a critical mass of users beyond which an accelerated growth path can be achieved. The paper by Kjetil Andersson, Øystein Foros and Frode Steen discusses this phenomenon using a concept of ‘bandwagon’ diffusion for the analysis of SMS development in Norway. They emphasise the importance of a liberal unregulated market and the excess functionality of the system for the successful launch of SMS services in Norway.

Critical mass can be promoted with the deployment of essential infrastructures. The debate about universal access has been revived by the emergence of broadband technology, which is a prerequisite for the usage of many advanced communication services. Bringing broadband to each household is a major infrastructure project which is risky and costly for a single private enterprise to undertake. However, as the technology is the basis for migration into the next stage of the information society, many governments promote and support infrastructure investment. Even with the networks in place, however, there are barriers to broadband penetration that need to be analysed. This is the topic of the paper presented by Morten Falch, Dan Saugstrup and Markus Schneider. Another technology with a high potential to connect societies across the globe in radically new ways is Voice over IP, the realisation of phone calls using Internet technology. Paul Rappoport, Lester D. Taylor and James Alleman (‘Estimating the Demand for Voice over IP Services: A Contingent Valuation Approach’) have generated a model that allows for estimating the potential of Voice over IP by assessing demand and demand elasticities.
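The critical-mass mechanism invoked throughout this section can be made concrete with a small simulation. The following sketch is purely illustrative – it is not taken from any of the chapters, and all parameter values (network benefit, price, adjustment speed) are invented assumptions – but it shows why a network market that starts below its critical mass tends to unravel while one that starts above it races towards saturation:

def simulate(n0, a=1.0, c=0.3, speed=0.5, periods=25):
    """n is the adopter share of a population normalised to 1.
    Each user's network benefit is a * n and the price of joining is c,
    so the critical mass is n* = c / a (here 0.3)."""
    n = n0
    for _ in range(periods):
        target = 1.0 if a * n > c else 0.0  # is joining worthwhile now?
        n += speed * (target - n)           # partial adjustment
    return n

for n0 in (0.25, 0.35):
    print(f"initial share {n0:.2f} -> long-run share {simulate(n0):.2f}")

Starting at a 25% adopter share – just below the assumed critical mass of 30% – the market collapses towards zero, while a 35% start carries the bandwagon to full adoption: the threshold behaviour that the SMS and broadband chapters discuss.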
Integrating Citizens and Consumers in the Information Economy Master Plan

A distinctive characteristic of new tools of communication is the convergence of individual communication with the mass media sector. It enhances the generation of media content and makes it more easily available to large numbers of customers. In the future, thus, societies will not only be connected via physical and radio networks, but also because they share the same content offered over worldwide networks. To make this happen, content and distribution turn into a commodity which is produced and sold according to the rules of the market. However, media and communication largely remain a social and cultural phenomenon; a combined effort of sociological and economic research is needed to understand the economic and social implications of an internationalised media market. Benedikt von Walter and Oliver Quiring propose a concept for interdisciplinary research on new media that takes into account the different rationales of the economic and socio-cultural driving forces of media markets.

Convergence of markets and new technologies that allow for making better use of spectrum and of transmission capacity in networks offer a new dimension to broadcasting markets. The widespread consumption of media content makes this market a powerful cultural and political instrument which provides opportunities to enhance education, tolerance and mutual understanding in quite different societies. On the other hand, if governed badly, cultural heritage can be destroyed and the pluralism of opinions can be suppressed. Many media economists are therefore concerned about the right regulatory framework for internationalised media markets. The paper by Monica Ariño on ‘Pluralism in Digital Broadcasting: Myths, Realities and the Boundaries of EU Action’ discusses these issues at the European level.

Telecommunication services have long been characterised by strong technological determinism: technicians make an innovation and impose the outcome on customers. This has resulted in services that do not respond to user needs and whose adoption is rarely user-friendly. In their analysis of future mobile services Jan Edelmann, Jouni Koivuniemi, Fredrik Hacklin and Richard Stevens argue that users are not interested in technologies but in services, and that this should lead to better integration of services in order to offer service products that satisfy customer needs.

A phenomenon which has surprised players as well as observers of mobile service markets, and not only in Asia, is the success of i-mode in Japan: the network character of communication technologies set in motion a self-reinforcing mechanism which gave a decisive push to the diffusion of the service. The crucial question remains, however, which factors give the initial impulses that make this chain develop its driving forces. One answer that Arnd Weber and Bernd Wingert suggest is that cultural factors have created a favourable atmosphere for i-mode in Japan. By considering customers’ specific culturally rooted needs, service providers in Europe might be able to offer more successful services. However, the second decisive factor, a high intensity of competition in the market, should not be forgotten as an important explanation for the success of i-mode in Japan.

As network-based markets develop with accelerated speed once certain thresholds have been overcome, the stimulation of demand is a crucial factor for market dynamics. In order to assess the potential of markets and the speed of exploitation of this potential, reliable demand forecasts are needed. The internet provides a communication tool that has particularly wide impacts on communication systems. Therefore, research that shows how the demand for internet services is going to unfold is particularly important to grasp the relevance of advanced communication tools at a worldwide level. Leonel Cerno and Teodosio Pérez Amaral have developed an econometric model for the assessment of demand for internet access and use in Spain. They ask who uses the technology and thus provide information on the potential users and the prospects of connecting Spain globally via modern internet technology.

Integration of Markets

In order to become a technology which connects societies and markets, ICT needs to be an economic success story. Whether this will happen (or has happened) depends on the impact of ICT-based infrastructures and services on economic dynamics, on current business cycles and growth rates. A cyclical dependency emerges: ICT development depends on the dynamics of growth in an economy and, on the other hand, is itself a substantial driver of growth.
Several papers deal either with the dynamics of ICT markets or with the impact of ICT on users’ markets or on the economy as a whole. ICT enhances the convergence of networks and services at the technological level. These unified technological conditions can stimulate the harmonisation of processes of production in economic terms. Elisa Battistoni, Domenico Campisi and Paolo Mancuso analyse these phenomena for EU countries. They use two different approaches – a stochastic frontier approach and a data envelopment analysis – to model catching-up processes.

The importance of ICT for economic growth can theoretically be derived from the economic impact of new technologies on innovation intensity, from investment in networks and equipment and – last but not least – from changes induced and stimulation provoked on the users’ side. The general conviction that private investment in telecommunication infrastructure not only creates more efficient and more comprehensive communication opportunities but also contributes to economic growth is challenged by Tom Björkroth in his paper ‘Investment by Telecommunications Operators and Economic Growth – A Fenno-Scandinavian Perspective’. He concludes that the direct relationship between telecommunication investment and growth holds for some countries, but not for others. In the Scandinavian countries and in Finland, growth impulses deriving from the use of ICT seem to be more important than those related to investment in ICT supply.

Markets for ICT-related goods and services have been considerably expanded by the opening up of telecommunication markets in the New EU Member States. The expansion of telecommunication providers into these countries will enhance their integration in world telecommunication systems, but it is also a major business opportunity for operators that engage in foreign direct investment. Jason Whalley and Peter Curwen analyse the strategies of mobile operators in the New Member States in a detailed study of licence ownership, market structure and concentration. In addition, they discuss the strategic options of operators that enter these new markets.

Sustainability of companies and markets is a prerequisite of trust in telecommunication systems. However, the performance and behaviour of markets and industries often show fluctuations that challenge this sustainability. Two papers discuss related issues: Federico Kuhlmann, Maria Elena Algorri and Christian N. Holschneider Flores analyse the ‘oscillatory behaviour’ of the telecommunications industry, and Matthias Pohler and Jens Grübling discuss changes in investment planning and risk assessment under changing regulatory regimes in their paper ‘The CAPEX-to-SALES TRAP’. A similar concern has led to the paper by James Alleman and Paul Rappoport, ‘Modelling Regulatory Distortions with Real Options: An Extension’. The paper asks how companies can deal with the uncertainty involved in investment under conditions of regulated markets. It assesses the impact of regulatory constraints on investment valuations.

The papers in this book discuss a number of features that all contribute to understanding the dynamics of telecommunication markets and their connecting powers.
While rules and models of economic integration follow a fairly straightforward, though not smooth and even, path, the integration of societies shows a more complex and difficult, but also a more diversified and humanly challenging pattern.
Part 1 – Regulation: Making Networked Systems Function
Rewriting U.S. Telecommunications Law with an Eye on Europe

James B. Speta
Northwestern University School of Law, USA
E-mail: [email protected]
I am grateful for comments from J. Scott Marcus, Martin Cave, Richard Crawley, Natali Helberger, Alexander Scheuer, and Alexander de Streel.
Abstract
The United States needs a new communications law, one that replaces obsolete service-specific regulatory categories with a law that recognises converging technologies and increasing competition. The European Union’s 2002 New Regulatory Framework provides an important example of such a law. This paper discusses a new U.S. law with an eye on the Framework, arguing that the U.S. should generally follow the Framework’s use of competition law reasoning as the trigger for regulation and its emphasis on maintaining interconnection. The paper notes that E.U. competition law, as embodied in the Framework, adopts theories of joint market dominance and monopoly leveraging that do not fit well with U.S. models. And the paper argues that a new U.S. statute must reform spectrum law to eliminate government allocation and uses, as the Framework does not, and take a more limited approach to universal service than does the Framework.
Introduction

Much has been written about the theoretical and pragmatic clashes between U.S. antitrust law and the competition law of the European Union. A little over ten years ago, Diane Wood declared real international antitrust to be the “impossible dream”, and, since then, only inconsistent substantive harmonisation (albeit more significant procedural cooperation) has occurred between the U.S. and Europe (Wood 1992, 2002). Nevertheless, despite the occasional American criticism heaped on European authorities (for example, following the EU’s blocking of the GE/Honeywell merger), sentiment also exists that “several aspects of EU competition law are noteworthy, indeed praiseworthy”. In particular, EU law reflects
“a more compact and intellectually appealing taxonomy than that which currently afflicts American antitrust law” (McChesney 2003, p. 1432). Owing in part to its more recent beginnings, European antitrust doctrine does not exhibit an accretion of separate rules for what are (from an economic perspective) only artificially different kinds of behaviours. In a similar vein, Europe has recently re-conceived communications law in an attempt to respond to developing convergence and competition.

America must soon face the need to do so as well. This paper addresses some of the significant policy questions that a re-write of U.S. communications law would have to face, and it looks to the European model for comparative guidance. The paper then considers the ways in which certain aspects of the European approach might be received when viewed through the specific economic principles that currently dominate U.S. thinking about communications regulation. In this regard, the dual intent is, first, to begin thinking about U.S. telecommunications reform by using the European model, and, second, to further the dialogue between the U.S. and Europe on telecommunications policy. As Scott Marcus has noted, “the E.U. framework and the U.S. regulatory environment tend to address similar issues in similar ways, but not necessarily because of equivalent methodologies, but because our policy objectives, broadly stated, are similar” (Marcus 2003, p. 193).

The time may, in fact, be ripe for an overhaul of U.S. communications law. U.S. telecommunications regulation has long been criticised for too much adherence to a “silos” approach – where each type of communications service (broadcasting, telephony, cable television, information services) is subject to its own regulatory structure (Werbach 2001). The Telecommunications Act of 1996, which notably embraced competition as the governing philosophy for (almost) all communications markets, did little to dissolve regulatory separation among services (Price and Duffy 1997, pp. 983–84). More than ever before, however, technological advances in electronic switching, data compression, and service protocols threaten to make meaningless the historic connection between types of communications technologies and the services they are used to deliver. Voice-over-Internet-protocol (VoIP), which allows both cable television plant and WiFi to provide plain-old voice service, is the current, extreme example of both convergence and the increasing competition that convergence can provide. Indeed, a growing consensus in the United States – including leading Senators and the Chairman of the Federal Communications Commission – seems to embrace the need to rewrite telecommunications law from the ground up (Speta 2005).

The unfolding European approach, which attempts a unified approach to an “information society”, is therefore of particular relevance. Over the past 18 years, Europe has undertaken a comprehensive agenda of privatising formerly state-owned communications companies, eliminating market entry barriers, and harmonising Member-State law (EC 1999; LaRouche 2002). Of particular interest to this paper, the March 2002 “New Regulatory Framework” adopts competition law-based reasoning for regulation of communications markets – with the ultimate goal of the elimination of sector-specific regulation (Directive 2002/21). This new European approach therefore provides a structure that is appealing in the United States. Notably, the Framework Directive adopts a market analysis and
regulatory structure familiar to U.S. antitrust economics. Prior to the Framework Directive, European law subjected any market participant with over 25% market share to mandatory access and unbundling rules.3 The Framework Directive, by contrast, premises most regulation upon an affirmative finding that an entity has “significant market power”. And the prescribed approach to determining significant market power – market definition (considering demand and supply substitutability) followed by an assessment of the ability to raise price by restricting output without incurring a significant loss of sales or revenues – echoes the U.S. merger guidelines and antitrust economics generally.4 The Framework Directive therefore suggests a profitable model for U.S. telecommunications regulation seeking to transcend service-bound approaches and to complete a transition to reliance on economic principles of antitrust.

Nevertheless, other aspects of the Framework Directive are based on economic assumptions not compatible with U.S. approaches, both at the theoretical level of competition law and as recently applied in communications law itself. For example, the Framework Directive and its implementing Directives embrace presumptions of joint action, monopoly leveraging, and abuse of dominant position that do not enjoy similar currency in the United States. Quite to the contrary, the U.S. courts have increasingly required proof of joint action, even in concentrated industries, and U.S. regulators have largely rejected rules such as cable ISP open access that might have been based on a monopoly leveraging theory.

Part I of this paper very briefly revisits the service-bound approach of U.S. communications law and notes some developing political consensus to change it. Part II identifies some key areas of reform – in network competition policy, spectrum policy, media regulation, and universal service – and discusses the approach stated in the new regulatory framework in Europe. The Part discusses the elements that might be imported into a new U.S. law, and also identifies European presumptions that seem less consistent with current American policy developments. Finally, Part III concludes, suggesting a model of limited interconnection regulation as perhaps the best policy for a new U.S. framework.
3 See Council Directive 92/44/EEC of 5 June 1992 on The Application of Open Network Provision to Leased Lines, 1992 O.J. L 165/27, Article 2(3); Directive 97/33/EC of the European Parliament and of the Council of 30 June 1997 on Interconnection in Telecommunications With Regard to Ensuring Universal Service and Interoperability Through Application of the Principles of Open Network Provision (ONP), 1997 O.J. L 199/32, Article 4(3); Directive 98/10/EC of the European Parliament and of the Council of 26 February 1998 on The Application of Open Network Provision to Voice Telephony and on Universal Service for Telecommunications in a Competitive Environment, 1998 O.J. L 101/24, Article 2(2)(I).
4 See generally Framework Directive, art. 14(2); European Commission, Guidelines on Market Analysis and the Calculation of Significant Market Power under the Community Regulatory Framework for Electronic Communications Networks and Services, 2002 O.J. C 165/6 (July 11, 2002); Jens-Daniel Braun & Ralf Capito, The Framework Directive, in EC Competition and Telecommunications Law 309, 312–13 (C. Koenig, et al. eds., 2003).
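The market-analysis logic that the Framework Directive shares with the U.S. merger guidelines – the “hypothetical monopolist” or SSNIP test – can be illustrated with a toy calculation. The sketch below is not drawn from either document; the linear demand response and all price, cost and elasticity figures are invented for illustration only:

def ssnip_profitable(price, cost, elasticity, increase=0.05):
    """True if a hypothetical monopolist over the candidate market
    gains from a small but significant (here 5%) price rise.
    Demand is normalised to 1 at the starting price and falls
    linearly with the assumed own-price elasticity."""
    q0 = 1.0
    q1 = q0 * (1 - elasticity * increase)
    return (price * (1 + increase) - cost) * q1 > (price - cost) * q0

for e in (0.5, 2.0, 6.0):
    print(f"elasticity {e}: 5% rise profitable? "
          f"{ssnip_profitable(price=10.0, cost=6.0, elasticity=e)}")

When customers switch readily to substitutes (the high-elasticity case), the price rise is unprofitable and the candidate market is too narrow – substitutes must be added before market power can sensibly be assessed.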
Current regulatory landscape in the United States

As hinted at in the introduction, the United States does not have an integrated communications law in the manner of the Framework Directive or the Television without Frontiers (TWOF) Directive, in which technologically different but nevertheless competing forms of communications are dealt with under a single umbrella. Rather, the United States continues to be largely controlled by service distinctions begun in the 1934 Communications Act and added to that law as new technologies developed. Even the Telecommunications Act of 1996 did not attempt to integrate the internet into the telecommunications sections of the Act.

Thus, Title II of the Communications Act regulates wireline common carriers, and it embodies traditional common carrier and public utility regulation (Speta 2002; Robinson 1989). Carriers providing these services (today called “telecommunications services”) must provide them to all customers upon request, at just and reasonable prices, and on a non-discriminatory basis. The 1996 Act, announced as a pro-competition statute, largely continued this regulation, and even increased regulation of incumbent local network facilities (Krattenmaker 1996).

Standing opposed to “telecommunications services” are services where the provider also determines the content of the communication, or processes the user’s content, or otherwise provides some service other than raw transport. The Act defines these as “information services”, without specifying any particular regulatory structure for the category. The FCC has defined most internet-based transmission and services into this category, leaving them unregulated and pre-empting any attempted state or municipal regulation. Indeed, the FCC generally prefers to leave “new” services unregulated, and so it attempts to classify any new service as an “information service” where possible (Powell 2004). Of course, the FCC’s flexibility is being tested by the onset of voice-over-internet-protocol telephony, which the FCC would prefer to remain unregulated, but which operates from the user’s perspective exactly like traditional telephony.

Title III of the law gives the FCC plenary authority to issue spectrum licenses in the “public interest”.5 Section 301 of the Act requires that the federal government maintain ownership of the spectrum and generally provides that the FCC should define and regulate licensees’ use of the spectrum.6 Nevertheless, with the exception of bars on indecent programming, the FCC has weakened its public interest regulation of broadcasting to the point that there are few affirmative requirements that broadcasters provide news or educational programming. Moreover, unlike in Europe, “broadcasting” encompasses only free over-the-air service.
5 47 U.S.C. § 309(j).
6 47 U.S.C. § 301 (“It is the purpose of this chapter, among other things, to maintain the control of the United States over all the channels of radio transmission; and to provide for the use of such channels, but not the ownership thereof, by persons for limited periods of time, under licenses granted by Federal authority, and no such license shall be construed to create any right, beyond its terms, conditions and periods of the license.”).
That is, any service (such as DBS) that is encoded and sold on a subscription basis is not considered to be a broadcast service (Shelanski 1997).

Title VI of the Act gives the FCC certain powers over cable companies, but it forbids the FCC to regulate cable companies as common carriers and it largely precludes rate regulation of cable services.7 More importantly, its terms have been applied only to legacy video programming services, with cable modem internet services now treated as deregulated information services.8

Although federal law does not give plenary authority to the FCC to regulate communications markets, federal law also does not leave individual states free to regulate where the FCC cannot. One of the most important sections of the 1996 Act provided that the states could not “prohibit” or adopt laws that “have the effect of prohibiting” the entry of any entity into telecommunications services.9 Federal law likewise forbade states and municipalities from granting exclusive licenses for cable television service.10 And federal authority over spectrum licensing is exclusive. No one may operate any radio transmitter without FCC authorisation (unless it falls within one of the spectrum bands specifically left open to the public for unlicensed use, with approved equipment).11

As should be immediately obvious, this continued use of service-specific categories has created significant controversy in recent years. For one example, the FCC initially litigated the cases involving open access regulation of cable modem service on the premise that such high-speed internet service over cable lines was a “cable service”.12 This stood in contrast to its earlier position that telephone companies’ DSL services were interstate telecommunications services (or, alternatively, information services).13 After an appellate court rejected its position that cable modem service was a “cable service”, the FCC then opined that it was best considered an unregulated interstate information service, which later view the Supreme Court eventually affirmed.

Given the Act’s regulatory structure, service classification determines the type of regulation, and, in the case of cable modem services, the difference is striking. If cable modem services are “cable services”, then they are exempt from common carrier regulation and state and local governments are without any regulatory jurisdiction (Speta 2000). If they are “information services”, then the FCC has apparently unlimited, but also totally undefined, regulatory authority. Finally, if cable modem services are telecommunications services, then the cable companies must offer interconnection and the FCC must ensure that services are provided at just and reasonable prices and on non-discriminatory terms. The FCC may waive these features of public utility regulation only if it finds that there is competition in the market for high-speed internet access. But, given that high-speed internet access is
7 47 U.S.C. § 541(c).
8 Nat’l Cable & Telecoms. Ass’n v. Brand X Internet Servs., 125 S. Ct. 2688 (2005).
9 47 U.S.C. § 253(a).
10 47 U.S.C. §§ 541(a), 546.
11 47 U.S.C. § 301.
12 See AT&T Corp. v. City of Portland, 216 F.3d 871 (9th Cir. 2000).
13 See id. at 873–75.
dominated by only two entities – the incumbent cable companies and the incumbent telephone companies – such a finding would be difficult to sustain.

The FCC is facing similar classification problems with emerging VoIP services. From the consumer’s perspective, VoIP provides exactly the same service as traditional telephony, but over an internet network rather than the traditional telephone network. VoIP, however, probably cannot be treated as a “telecommunications service”, for VoIP does not itself provide any “transmission service”. VoIP is simply an application riding on a general purpose IP network, and it is the IP network that provides the transport. From the perspective of the network, VoIP is no different from a website. Thus, VoIP does not fit comfortably within any provisions of the Act. But, if it is left entirely outside of the regulatory apparatus, widespread adoption could threaten the system for funding universal service – which today depends entirely on a tax on “telecommunications services”.

It is for these reasons that key policy makers in the United States have begun taking the prospect of re-writing the U.S. communications law seriously. Senate Commerce Committee Chairman John McCain recently declared that “we began the 108th Congress with a hearing on the state of competition in the industry and I reminded the public, the FCC Commissioners, and my colleagues then of my long held belief that the 1996 Act is a fundamentally flawed piece of legislation. Since then, some of my colleagues have joined me in expressing the need for Congress to take a serious look at reforming the Act.”14 FCC Chairman Michael Powell agreed: “Whether it’s now or in the near future, it is my responsibility as your expert agency to tell you, I think the days are numbered on the way we’re doing this under the current statute. I do believe there is going to have to be a statute in the future that recognises these dramatic technical changes and gets us out of the buckets of the ’96 Act.”15
The direction of U.S. regulatory reform and lessons from Europe

Reform of U.S. telecommunications policy, if it can be accomplished politically, needs to focus on four major areas: (1) network competition policy, (2) spectrum reform, (3) mass media, and (4) universal service. In this section, I sketch the basic direction for U.S. policy in each area and discuss the extent to which the European Directives can provide useful starting points. The analysis focuses on the European Directives themselves, as opposed to their implementation by the Member States.16
14 Senate Commerce Committee Hearing, Voice over Internet Protocol, Feb. 24, 2004 (text available on Lexis).
15 Id.
16 The analysis therefore also excludes the consultation process between the Commission and the National Regulatory Authorities pursuant to Framework Directive, Article 7. This consultation process does provide important limits on the Member States’ implementation, by allowing the Commission to review any finding that an operator has significant market power in some market.
The focus is concededly incomplete, for the true measure of the Directives as a regulatory system will depend on their operation in practice. Nevertheless, that implementation is still in the early stages. Moreover, as the project is the reform of the basic U.S. laws governing telecommunications, the Directives seem an appropriate starting place. The regulatory superstructure guides the implementation; the important legislative task is to define and confine the bounds of the regulators’ actions.

Network competition policy

The service-bound categories of U.S. law must be replaced by a more general approach to network competition policy. Drafting a new law requires answers to a number of sequenced questions: (1) With developing competition, is sector-specific regulation still necessary? (2) If so, what should be the domain of that sector-specific regulation? (3) What are the triggers for the use of regulatory authority? (4) And, what are the tools the regulator may use to accomplish its regulatory goals?

My view is that sector-specific regulation continues to be necessary and that the potential domain of that regulation should be expanded to encompass emerging services. (I set aside social, as opposed to economic, regulation for the moment.) In this regard, the EU Framework is quite similar. Nevertheless, triggers for regulatory action should be set quite high, and the competition law of the EU presents theories of market power that the U.S. has not and probably should not endorse. Finally, the remedies contemplated by the Framework Directive grant regulators too much discretion, even in circumstances in which market power is found.

1. The Burden To Justify Sector-Specific Regulation. The threshold question in communications law is whether sector-specific regulation continues to be necessary at all. A number of academics have taken the position that antitrust and common law can provide all of the regulation necessary (Huber 1997). For a time, New Zealand followed this approach, although it has recently reinstated its regulator. The Telecommunications Act of 1996 seemed to embrace the notion that regulation could soon fall away, in deference to antitrust law’s general rules, through its remarkable grant to the FCC of authority to terminate any provision of the Act when competition took hold (Speta 2004).

Nevertheless, the need for continued sector-specific regulation remains, to address some of the traditional bases of communications regulation. In particular, although competition and technology continue to advance, communications markets are not free from bottlenecks.17 Moreover, although most see developing technologies as a cure for bottleneck monopolies, it is also possible that new technologies could, in some circumstances, create important new bottlenecks.

17 In residential and small business markets, ILEC market share of voice services continues to be over 90%, and cable companies and ILECs together have over 95% of the high-speed internet access market. See FCC 2004.
For example, interactive video products, such as movies with interactive elements or high-speed gaming or multi-media distance education, might require up- and downstream bandwidths that only cable television can supply (Speta 2004). Similarly, interactive or multimedia applications for wireless devices could tip the market toward an early provider of high-bandwidth mobile wireless, creating an entry barrier to a market (2G wireless telephony) that currently is fairly competitive (Speta 2002a).

The Framework Directive and the Access Directive are significantly premised on the continuing need for “ex ante obligations to ensure the development of a competitive market”.18 Europe’s embrace of ex ante regulation is not, of itself, transferable to the U.S., for the level of competition in particular markets is an empirical question, and the state of telecommunications competition in Europe is, in general, weaker than in the United States. Still, current U.S. law premises the regulation of entire service categories on the existence of monopoly: the common carrier and the cable television chapters of the U.S. law both have their beginnings in concerns over monopoly power – of the Bell System on the one hand and of the local cable companies on the other. The European requirement that “ex ante regulatory obligations should only be imposed where there is not effective competition” should be the wave of the future, as opposed to the current U.S. law that requires an affirmative finding of competition prior to deregulation.

Apart from structural market power in an identifiable service market, the European approach grounds regulation on the need for affirmative media policy, spectrum allocation, and access to certain network management functions.19 Media policy and spectrum allocation are discussed below, and access to network management functions is only a specific example of the more general problem of market power. A telecommunications law keyed to market power, however, seems to beg the question of why antitrust (competition law) cannot accomplish the task. There are two justifications, each reflected in aspects of the EU structure.

First, even where the market for telecommunications services is structurally competitive, each individual carrier will have a “terminating monopoly” on services delivered from other carriers or networks to that individual carrier’s customers. As Jean-Jacques Laffont and Jean Tirole have shown, even competitive carriers will have the incentive to raise off-network termination charges, resulting in inefficient multiple marginalisation (Laffont and Tirole 2000, p. 184). Price-setting regulation, or mandatory bill-and-keep rules, can increase efficiency.
18 Framework Directive, recital 25.
19 See Framework Directive, recitals 6, 19, 31–33.
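The multiple-marginalisation problem that the Laffont and Tirole citation refers to can be illustrated with a textbook chain-of-monopolies example. The sketch below assumes linear demand and invented cost figures; it illustrates the general mechanism, not any model specific to telecommunications:

# Linear demand P = A - Q; an originating carrier buys termination
# from a terminating network, and each firm adds its own markup.
A, c_up, c_down = 100.0, 10.0, 10.0  # demand intercept, unit costs

# Integrated firm: choose Q to maximise (A - Q - c_up - c_down) * Q.
q_int = (A - c_up - c_down) / 2
p_int = A - q_int
profit_int = (p_int - c_up - c_down) * q_int

# Chain: the downstream carrier takes the termination charge w as given
# and sells q(w) = (A - w - c_down) / 2; the upstream network then picks
# w to maximise (w - c_up) * q(w), giving w = (A + c_up - c_down) / 2.
w = (A + c_up - c_down) / 2
q_chain = (A - w - c_down) / 2
p_chain = A - q_chain
profit_chain = (w - c_up) * q_chain + (p_chain - w - c_down) * q_chain

print(f"integrated: price {p_int:.0f}, total profit {profit_int:.0f}")
print(f"chain:      price {p_chain:.0f}, total profit {profit_chain:.0f}")

With these numbers the chain charges 80 against the integrated price of 60 and earns less in total (1,200 against 1,600): both firms and consumers are worse off, which is why price regulation or bill-and-keep can increase efficiency here.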
Second, government may wish to assure that network competition does not eliminate fundamental interconnection. Two-way telecommunications networks, such as telephone, internet, and integrated data networks, exhibit direct network effects. If network competition is simultaneous, with numerous relatively small communications networks competing against one another, then each network will have a strong incentive to interconnect with the others, ensuring that all consumers can reach one another as well as reaching all services and content available on other networks (Laffont and Tirole 2000, p. 190). But, if competition among networks is monopolistic or serial, then network effects suggest that denial of interconnection may be a strategic tool in inter-network competition (Besen and Farrell 1994). Regulation to maintain interconnection may increase total welfare (or serve non-economic goals, such as maintaining a single community of speakers and access to information), even if it cabins the dimensions on which competition can occur. In particular, mandatory interconnection rules seem valuable at the physical and logical layers of communications networks – so that competition is channeled to the quality of service and price dimensions and away from the possibility of fragmenting an integrated communications network. Although such interconnection could potentially entrench certain kinds of networks, the social and economic benefits of maintaining an interoperable network probably outweigh the risks of entrenchment (Speta 2002).

In this regard, the European approach emphasises the need for a minimum of regulation even where bottlenecks are not yet in evidence. The Access Directive makes clear that all public communications networks must interconnect with one another, and all access providers must interconnect with other networks in order to ensure the provision of a single, interoperable network. These obligations (on the part of networks and access providers) and this regulatory power (granted to Member State national regulatory authorities) are not dependent on a finding of significant market power in a particular market.20 This stands in contrast to the likely trend of U.S. reform, where most commentators argue that regulation cannot be justified unless a bottleneck is shown to exist and to impede competition. By contrast, the Access Directive’s minimum, but mandatory, interconnection requirements ensure that competition does not lead to fragmentation of those parts of the communications network that ought to remain integrated.

2. Delimiting the Regulated Domain. Following from the foregoing, a new communications law for the United States ought to take as its limited domain two particular areas: those communications markets in which there is continuing bottleneck power, and those areas in which network interconnection is an overriding social value. The European approach largely follows this design. The Framework Directive initially sweeps within its grasp all “electronic communications networks” and “electronic communications services”, which include all systems and services “conveying signals by wire, by radio, by optical or by other electromagnetic means, including satellite networks, fixed (circuit- and packet-switched, including internet) and mobile terrestrial networks”.21 This remarkably broad definition, with the exception of the mandatory interconnection and access obligations just described, is then limited by the significant market power requirement – that national regulatory authorities may, in general, regulate only in those markets in which an undertaking has significant market power.22
Access Directive, arts. 4(1), 5(1). Framework Directive, art. 2(a). 22 Framework Directive, art. 8. 21
20
James B. Speta
– presents two potential difficulties.23 First, the Framework Directive includes all communications services within its regulatory ambit, instead of identifying a more limited domain in which bottleneck problems are particularly likely to arise and as to which regulation is therefore particularly justified. Second, the Framework Directive embraces certain notions of market power that are not as well accepted in the United States.

a. Limiting Regulators’ Discretion. Despite the service-specific categories of the U.S. Communications Act, the FCC and a number of commentators have, from time to time, asserted an authority to regulate any sort of communications by wire or radio – an authority that would be as broad as the Framework Directive’s initial cut (Weiser 2003). The Supreme Court has recently seemed to support this scope of FCC authority.24 I disagree with this broad interpretation of the FCC’s current authority (Speta 2003), but, whatever the answer to this interpretive question, there are important reasons to ensure that the regulators’ authority is limited to only those areas in which regulation is likely to be necessary to protect consumers. The mere prospect of regulation creates business uncertainty, an uncertainty to which new entrants are particularly vulnerable. Similarly, regulatory processes create the possibility of strategic action by incumbents to protect markets. At a minimum, regulation creates costs; at worst, regulators are captured and the process itself hurts competition. Thus, I would prefer a communications statute that limits regulators’ powers both to those markets in which interconnection is required to prevent network fragmentation and to the minimum regulatory tools necessary to solve interconnection disputes. (The matter of appropriate regulatory tools is addressed infra.) Under this approach, the regulatory domain would include only those networks and services providing what used to be called “message service” – i.e., those networks in which horizontal (two-way) interconnection is the essence of the good. This would include telecommunications and data services – no matter what technology is used to provide them – but would not include cable television, broadcast, satellite, or any other emerging “one-way” services.

To be sure, the Framework Directive contains important limits on the discretion of national regulatory authorities. The substantive significant market power requirement is the most important. But there are others as well. The Framework Directive contains procedural requirements that NRAs be established with a degree of regulatory independence, provide notice of intent to regulate,25 and provide supporting reasons and evidence for regulatory decisions. And the structure of the Framework Directive flows in part from the domain of the EU itself and the relevant treaties, issues that are beyond the scope of this paper to explore. Nevertheless, a more limited grant of regulatory authority ensures that market participants can proceed with less uncertainty.

23 Here, I am addressing only the SMP portion of the directive. As discussed below, the NRF also seems to allow significant utility-type regulation to further universal service.
24 Nat’l Cable & Telecoms. Ass’n v. Brand X Internet Servs., 125 S. Ct. 2688 (2005).
25 Framework Directive, art. 3.
b. Finding Significant Market Power. Keying regulatory authority to a finding of significant market power is common ground between the best of the proposed U.S. approaches and the Framework Directive. Indeed, the Commission Guidelines on the finding of significant market power largely mirror similar guidance contained in the U.S. antitrust authorities’ merger guidelines (USDOJ 1992). Both documents define markets through an economic lens, taking account of demand substitution, supply substitution, and other market characteristics. And each document defines market power as a company’s ability to increase profits by increasing prices unilaterally (without an offsetting demand reduction). Two features of the Framework Directive’s approach to market power, however, do not resonate in current U.S. communications policy – the approach to collective dominance and the seeming approach to monopoly leverage. These differences mirror a debate between the U.S. and the EU on antitrust law, of course, but it bears setting out how these economic theories have fared in U.S. communications law.

First, the Framework Directive permits the regulators very wide latitude in finding that firms “jointly” have significant market power.26 The specific criteria state that “two or more undertakings can be found to be in a joint dominant position ... if, even in the absence of structural or other links between them, they operate in a market the structure of which is considered to be conducive to coordinated effects.”27 Indeed, the Commission’s guidance seems to endorse a finding of joint market power based only on “an oligopolistic or highly concentrated market whose structure alone in particular, is conducive to coordinated effects on the relevant market.”28 This is consistent with an important strain of EU competition law, which finds a violation where companies are members of a “dominant oligopoly” and “adopt on a lasting basis a common policy on the market with the aim of selling at above competitive prices” even “without having to enter into an agreement or resort to a concerted practice.”29

As a matter of competition law doctrine, the U.S. does not condemn simple oligopoly. U.S. antitrust law is reluctant to find tacit collusion (and hence an antitrust violation) absent specific evidence of a facilitating practice or other evidence that confirms in the strongest terms that firms are acting jointly and not independently (Baker 1993). The famous statement is that an antitrust plaintiff must provide evidence “excluding the possibility that the alleged conspirators acted independently.”30 Moreover, the Supreme Court has clearly stated that an oligopolist’s

26 See Framework Directive, Article 14(2) (“An undertaking shall be deemed to have significant market power if, either individually or jointly with others, it enjoys a position equivalent to dominance, that is to say a position of economic strength affording it the power to behave to an appreciable extent independently of competitors, customers and ultimately consumers.”) (emphasis supplied).
27 Id., annex II.
28 Commission Guidelines, § 3.1.2.1, para. 94.
29 Airtours plc v. Commission, 2002 ECR II-2585 (2002). See generally Papadias 2004; Rey 2004.
30 Matsushita Elec. Indus. Corp. v. Zenith Radio Corp., 475 U.S. 574, 588 (1986).
ing similarly to other participants is “not in itself unlawful” even if “firms in a concentrated market might in effect share monopoly power, setting their prices at a profit-maximising, supracompetitive level by recognising their shared economic interests.”31 The contrast here is quite stark. In the communications law context, U.S. courts have been pushing the FCC to justify any presumptions that joint action or other marketplace harm can be inferred merely from concentration. A typical case is Time Warner Ent. Co. v. FCC, in which the United States Court of Appeals for the D.C. Circuit struck down the FCC’s long-standing horizontal concentration limits for cable companies.32 That rule had “imposed a 30% limit on the number of subscribers that may be served by a multiple cable system operator.”33 It was based on a statutory requirement that the FCC set rules to ensure that no single “cable operator or group of operators can unfairly impede ... the flow of video programming from the video programmer to the consumer.”34 The FCC chose 30% to ensure that the market for program purchasing would have at least three cable companies acting in the market for programming. The Court, however, held that the FCC had not proved why collusion was likely if only two cable companies participated in the market. The FCC could not “simply posit the existence of the disease sought to be cured.”35 The Time Warner decision itself applied heightened scrutiny to the FCC’s decision, because the structural rules implicated the free speech rights of cable companies. (More on free speech concerns below.) But the D.C. Circuit also make clear in the context of the AT&T Consent Decree that it would not assume collusive action by the regional Bell companies simply because few companies dominated the market.36 Apart from whether coordinated behaviour should be presumed in concentrated markets, the U.S. is also struggling to decide whether regulation is justified simply because the market is oligopolistic. In merger review, the U.S. holds that a merger that creates an oligopoly or that increases concentration in an already oligopolistic market raises concerns under the “substantially lessens competition” standard.37 As one communications-market example, this was the reason that the Justice Department blocked the Echostar/DirecTV merger. The merger would have reduced 31
Brooke Group Ltd. v. Brown & Williamson Tobacco Corp., 509 U.S. 209, 227 (1993); see also Reserve Supply Corp v. Owens-Corning Fiberglas Corp., 971 F.2d 37, 55 (7th Cir. 1992) (no conspiracy can be found where evidence “is as consistence with independent action in an interdependent market as it is with an agreement to fix prices”). 32 Time Warner Ent., L.P. v. FCC, 240 F.3d 1126 (D.C. Cir. 2001). 33 Id. at 1129. 34 47 U.S.C. § 533(f)(2)(A). 35 240 F.3d at 1133. 36 United States v. Western Elec. Co., 900 F.2d 283, 296 (D.C. Cir. 1990) (Triennial Review Opinion) (“the district judge's speculation that the BOCs could impede competition by way of illegal (and perhaps criminal) collusion to divide markets among them according to territory, would, in the absence of supporting evidence, seem to qualify only as a theoretical possibility”). See also generally Baker 2002. 37 1992 Merger Guidelines, § 1.51. Such a proposed merger is not automatically blocked, of course; the antitrust authorities subject it to additional scrutiny.
Rewriting U.S. Telecommunications Law with an Eye on Europe
23
the number of competing multichannel video operators from three to two,38 and this reduction in competition was deemed too significant – notwithstanding that the satellite carriers argued that the merger would increase their ability to compete with cable companies.39

But, outside merger review, many are sceptical that oligopoly justifies economic regulation – and especially more burdensome and costly forms of regulation such as price controls. When the Federal Trade Commission attempted to use its broader regulatory authority to bar parallel conduct in oligopolistic markets, the courts struck down its rules.40 The presence of multiple companies suggests an absence of barriers to entry, and the possibility that oligopolies will break down – through cheating or through the development of powerful purchaser interests – weighs further against regulation. The Framework Directive and the Commission’s Guidelines do acknowledge that regulating pure oligopoly is sometimes unnecessary, and so the difference between the U.S. and the European framework may be more one of approach than of serious doctrinal difference. Moreover, I have contended above that concentrated market structures may justify interconnection rules, although such rules would respond to the possibility of strategic competition, not joint action. Nevertheless, the European approach is much broader, and it seems unlikely that similar provisions on joint action will be included in a new U.S. statute.

Second, both the Framework Directive and the Commission’s guidelines for implementing the significant market power analysis seem largely to adopt monopoly leveraging theory. The Framework Directive states that “where an undertaking has significant market power on a specific market, it may also be deemed to have significant market power on a closely related market, where the links between the two markets are such as to allow the market power held in one market to be leveraged into the other market, thereby strengthening the market power of the undertaking.”41 The Commission states that “this is often the case in the telecommunications sector, where an operator often has a dominant position on the infrastructure market and a significant presence in the downstream, services market.”42 Indeed, the Access Directive, in implementing the framework, seems to contemplate the imposition of interconnection, unbundling, access, or tariffing regulation on most infrastructure companies to respond to the risk of leveraging.43

In the U.S., by contrast, monopoly leveraging as the basis for communications access rules is gaining very little traction. The FCC has largely refused to adopt

38 The two DBS companies and the local cable company were, in most areas, the only sources of multi-channel video programming. The FCC has long held that broadcast television is not a substantial competitor to cable, as evidenced by the fact that nearly 90% of all American households subscribe to either cable or DBS.
39 By increasing their ability to use spot-beams to carry terrestrial channels.
40 E.I. duPont de Nemours & Co. v. FTC, 729 F.2d 128, 139 (2d Cir. 1984).
41 Framework Directive, Article 14(3).
42 Commission Guidelines, § 3.1.1, para. 84.
43 See Directive 2002/19/EC of the European Parliament and of the Council of 7 March 2002, on Access to, and Interconnection of, Electronic Communications Networks and Associated Facilities, 2002 O.J. L 108/7, Article 5.
rules granting access to high-speed internet facilities.44 The Commission rejected the imposition of open-access rules, which would have required cable companies to unbundle or sell capacity at wholesale to competing ISPs.45 On the telephone side, the FCC has reversed its prior course on high-frequency unbundling (also called line sharing).46 And, although competing DSL providers may still purchase local loops at regulated rates from the incumbent local telephone companies, a significant roll-back in those rules is imminent. In its most recent decisions vacating FCC rules, the United States Court of Appeals for the D.C. Circuit has held that the FCC must limit its unbundling rules – even for local loops – to markets in which an affirmative showing of insufficient competition can be made. The court’s opinion strongly suggests that unbundling requirements should be far less common than they currently are. The D.C. Circuit’s decisions, as noted, are controversial, but the Bush administration decided not to seek review in the U.S. Supreme Court.

These specific regulatory debates reflect the continuing theoretical debate in the U.S. about the rationality of monopoly leveraging theory. In the 1970s, Richard Posner, Robert Bork, and other members of the so-called Chicago School posited the “one monopoly rent” theory. Under this theory, monopoly leveraging (in general) is not economically rational, for any rent earned in a secondary, leveraged market simply dissipates the rent earned in the primary (leveraging) market.47 This theory has significantly colonised antitrust law in the U.S., even as more recent work has suggested potential flaws. As competition law has increasingly influenced telecommunications law, the one-monopoly-rent theory has similarly come along. The open-access debate in the U.S., in particular, was conducted explicitly in terms of monopoly leveraging theory. Other internet policy debates similarly turn on whether one views monopoly leveraging as common or uncommon – meaning whether one views it as economically rational in general or whether one views it as economically rational in only a small number of circumstances (Farrell and Weiser 2003).

I will not here attempt to resolve the debate over monopoly leveraging (if a final resolution is even possible), although I have previously been among those who argue that it is too soon to conclude that internet markets will be characterised by leveraging by infrastructure providers into other markets (Speta 2000). Following the lead of Judge Easterbrook, however, I believe that “the economic system corrects monopoly more readily than it corrects regulatory errors” (Easterbrook 1984, p. 15). A monopoly leveraging theory permits a much wider scope of

44 See High Speed Access to the Internet over Cable and Other Facilities, 17 FCC Rcd. 4798 (2002) (refusing to require open access to cable modem infrastructure), rev’d in part, Brand X Internet Servs. v. FCC, 345 F.3d 1120 (9th Cir. 2003); FCC Triennial Review Opinion, Aug. 21, 2003 (removing line sharing rules for DSL).
45 Id.
46 See U.S. Telecom Ass’n v. FCC, 359 F.3d 554 (D.C. Cir. 2004).
47 All acknowledge that a price-regulated monopolist has the incentive to leverage, because it is not earning its full rent in its primary market. But, in general, emerging telecommunications services (including wireless and Internet access) are not price regulated.
regulation, and regulation at a minimum creates costs for regulated entities. At worst, of course, regulation creates the possibility of industry capture and the use of legal process to erect artificial barriers to entry. What Judge Easterbrook means when he says that “the economic system corrects monopoly” is nothing more than the simple economic truth that the presence of monopoly profits attracts entry. The hope is that monopoly profits not only attract new companies in the short run, but also provide incentives for companies to invest in research and development if new technology is the answer to correcting monopoly. One need not be an internet utopian to note that technological advance in the telecommunications industry now proceeds at a much faster pace than it did before the digital era. In economic terms, the digital “long run” is not nearly so long.

This truism, however, highlights two of the most difficult and interrelated problems in telecommunications policy – attempting to predict the future, and grappling with oligopoly (as opposed to monopoly). In fact, the topics just discussed, joint market power and monopoly leveraging, raise precisely these issues. If one or more entities have current market power, a regulator must decide whether that market power is likely to persist – for the costs of regulation cannot be justified if market developments would eliminate the monopoly naturally. Even more importantly, the choice to regulate is not costless, for imposing regulation – especially economic regulation such as rate caps – can eliminate market incentives. Similarly, leveraging into a complementary market will be contested by companies in the competitive market; indeed, they will have an incentive not only to attack the leveraging practice (such as the tie), but also to try to break the monopoly power in the primary market. Last, the presence of oligopoly, as opposed to monopoly, makes these issues particularly salient, for oligopolies are notoriously unstable, especially when oligopolists face large and sophisticated purchasers, as often happens in telecommunications markets where infrastructure companies face media and other content providers with significant weight.

The Framework Directive and the Commission’s Guidelines address these issues directly, by requiring that national regulatory authorities take account of several factors, including the relative level of technological innovation in the market, the possible presence of countervailing buying power, and the likely persistence of significant market power.48 Nevertheless, both the Directive and the Guidelines seem quite willing to regulate oligopoly, and this tendency is reinforced by the Commission’s list of markets that justify ex ante regulation, a list that includes virtually all communications markets. The Communications Act of 1934 did not seem to contemplate the regulation of oligopoly in wireline communications, but only because both telephone and cable were assumed to be natural monopolies. In broadcasting, the FCC has regulated the networks based on potential oligopoly characteristics. That regulation was substantially lifted, however, even before the development of video competition from cable and DBS companies. To my mind,

48 See, e.g., Commission Guidelines, § 3.1, paras. 78, 81, 84, 96.
oligopoly might justify limited regulation, but that is intimately intertwined with the question of regulatory remedies, to which I now turn.

3. The Regulators’ Toolbox. Under the EU Framework, a finding of significant market power permits the regulator (at the Member State level) to employ all of the traditional tools of public utility regulation, including transparency and tariffing rules, non-discrimination requirements, accounting separation, access rules, wholesale and retail price control, and cost accounting.49 Although the Access Directive advises that the regulator should choose the lightest degree of regulation necessary to control market power, the Directive does not actually limit the regulators’ discretion.50

As suggested above, I believe that reform in the United States should grant the sector-specific regulator greater jurisdiction over two-way communications services, but such an increased jurisdiction ought to be accompanied by a much more limited set of regulatory tools than the EU structure contemplates. Mandatory interconnection rules, coupled (but only when necessary) with wholesale pricing rules, should control oligopoly. No restrictions on vertical integration are necessary, as antitrust rules generally seem adequate to prevent the possibility of foreclosure.51

My attitude here is informed by three considerations. First, with the advance of digital technologies, it seems unlikely that any service will be a natural monopoly. Although VoIP depends upon a broadband internet connection (and therefore is not in precisely the same market as traditional telephone service), VoIP and wireless services are presenting a substantial threat to incumbent wireline voice providers. Thus, oligopoly, in which competition of some sort is more likely, and not monopoly, is the principal concern. Second, if substantial spectrum reform occurs, this will increase the likelihood of three competing platforms for most services. And, third, universal service can be better accomplished by subsidies, rather than by regulation of carriers.

Spectrum reform

In the U.S., demand for new spectrum to provide new broadband services, both fixed and mobile, has created a substantial push for spectrum policy reform. The FCC convened a spectrum policy task force that acknowledged the need for additional spectrum and offered a number of proposals to change the “command and control” process of administering spectrum to a more flexible regime (FCC 2002). An influential working paper at the FCC calls for a “big bang” auction, in which most spectrum licenses would be re-auctioned to the highest bidders and dedicated to whatever use the purchaser chooses (Kwerel and Williams 2002). Such an auction would largely complete the transition of spectrum to a property-rights system,

49 Access Directive, arts. 9–13. See generally Crawley 2004; Cave 2004.
50 Access Directive, arts. 12–13.
51 In this regard, the EU framework does not give the NRAs the power to order structural separation. See Marcus 2003.
a transition which has been occurring in fitful steps over the past fifteen years (Shelanski and Huber 1998). Even more importantly, fundamental spectrum reform would allow wireless technologies to take their place as true competitors to wireline systems. The past several years have demonstrated that some of the most likely prospects for competition in telecommunications markets come from intermodal entry, and wireless has been the most successful. The example of DBS competition with cable companies is only the most obvious. In the United States, DBS has experienced double-digit growth over the past several years, while cable company growth has significantly slowed (FCC 2004). DBS companies claim that they are winning customers away from cable. Whether this continues depends in part on the prospects for broadband internet service, and whether the DBS companies and the DSL companies can develop an integrated product that challenges the cable companies’ ability to provide both. But early indications are that such a product will be forthcoming.

Here, the European framework provides much less direction. Indeed, the Radio Spectrum Decision,52 although issued contemporaneously with the Framework Directive, is not expressly part of the new regulatory framework, leading two recent commentators to contend that “one cannot consider that there exists, as of the present time, any real common policy in the field of spectrum” (Nihoul and Rodford 2004, p. 720). In the EU, spectrum policy has long been a matter for the Member States, and certain aspects of spectrum policy – especially mass media policy (on which more below) – have important sovereignty dimensions. The Framework Directive recognises this, even as it suggests (gently) that Member States work to make spectrum policy more transparent, predictable, and efficient. In the United States, by contrast, spectrum policy has been an exclusively federal enterprise since the Federal Radio Act of 1927. Nevertheless, the state of U.S. and European spectrum policy reforms yields interesting contrasts in two areas that bear mentioning.

First, European policy has not ruled out spectrum allocation by comparative hearing – the so-called “beauty contest” in which subjective factors play an important role.53 The Framework Directive suggests that these procedures should be reformed to increase their transparency, but Member States are permitted to select licensees in this manner. To some extent, this seems inconsistent with the general tenor of the Authorisations Directive, which strives to eliminate subjective government choice from the process by which entities receive legal permission to offer communications services. The Directive itself seems to acknowledge this tension.54

In the United States, by contrast, licensing by beauty contest has been dead for some time. By statute, all spectrum licenses, other than licenses for television and radio broadcast, must be allocated by auction.55 Certain entities, such as pioneers

52 Decision of the Council and of the Parliament 676/2002 on a Regulatory Framework for Radio Spectrum Policy in the European Community, 2002 OJ L108/1.
53 See, e.g., Authorisations Directive paras. 23–24.
54 See Authorisations Directive at art. 5.
55 47 U.S.C. § 309(j).
and, occasionally, minority-owned enterprises, have received “credits” in the auctions, such that they need only pay a percentage of their winning bids (FCC 1997). But, given the success in raising money for the treasury, Congress has mandated auctions. The exception for television broadcast licenses, far from reflecting some deliberate attempt to maintain government control over mass media policy, reflects nothing more than the political power of the incumbent broadcasters. Indeed, allocation by beauty contest is inefficient, and there is (and should be) no prospect for its rebirth in a new U.S. telecommunications law.

Similarly, government definition of licensees’ uses of the spectrum should also give way. Here, the principal debate on allocation method is between the property-rights advocates, who would auction and privatise spectrum licenses, and the so-called “commons advocates”, who contend that the government can best promote access and technological innovation by setting aside significant parts of the spectrum in which users can operate on an unlicensed basis (so long as they use equipment meeting certain technical requirements) (Benjamin 2003). Each of these approaches has in its favour that it decreases the government role in deciding the uses to which spectrum may be put. In a fully-propertised model, the government role falls away, and licensees are permitted to provide any type of service that fits within their spectrum rights. In a spectrum commons, the government sets standards for the types of equipment that may be operated and it supervises the certification process, but government again does not set any limit on the types of services that may be offered using approved equipment. In both, government may be involved in resolving interference problems or enforcing license rights or equipment standards, either through an expert agency or through the courts, but government’s authority does not extend to determining the type of service, the number of service providers, or their identity.

Public choice economics, which has been well-received in the U.S., explains a large portion of the desire to decrease the government’s role in spectrum allocation. Beauty contests and other comparative processes at a minimum create uncertainties for market participants. They are much slower than market-based allocation procedures, and they can never be as transparent. At worst, they create the opportunity for agency capture, for incumbents to use regulatory processes to create barriers to entry. “Regulations have consistently produced predictable outcomes – those favouring the interests of powerful incumbents, primarily commercial broadcast television licensees.” (Hazlett 2004, p. 237) Both phenomena threaten the innovation that is so important to communications markets. Notably, it is widely agreed in the U.S. that the 1996 Act missed an important opportunity to decrease the incumbents’ control over spectrum. Outside of the broadcasting context, there is little sentiment for non-market allocation processes.

The concerns over comparative processes generally and over government selection of licensees in particular are also reflected in differing approaches to standard-setting. Reflecting European consensus that the setting of standards for second generation mobile telephones (GSM) was a big success in speeding deployment of those systems, the European framework explicitly contemplates a continued, significant government role in the setting of some telecommunications
standards.56 Historically, the U.S. has relied on private industry standard-setting processes, except in the realm of broadcast receivers.57 More recently, the FCC has adopted standards for digital television receivers which, while they are based on the FCC’s traditional authority over broadcast receivers, also go significantly further to embed digital rights management (DRM) technology – and which hint at the FCC’s assuming authority to prescribe DRM for all digital network equipment, including general purpose PCs.

If, as I have maintained, the network nature of many communications markets means that government must retain authority to order interconnection, government probably also needs to retain some standard-setting authority. But mandatory standard-setting, while it can further competition within a defined market, can also decrease competition for the market (Shelanski and Sidak 2001). More importantly, government standard-setting has the potential to become a barrier to entry. Interconnection rules should be designed only to prevent the fragmenting of important public networks, and the domain of government standard-setting should be similarly limited. Economic theory has not been able to identify as a general matter those situations in which a standards-setting battle helps or hurts consumers, and it is unlikely that government will do better. In the absence of a threat to the integrity of a network, government standard-setting should be sparing.

Media policy

The past several years have seen a substantial deregulatory movement in U.S. media policy. The traditional structural regulations, which were designed to further the goals of media diversity and local coverage, have been substantially decreased in scope. Thus, traditional horizontal and vertical ownership limits have been lifted in television and radio markets.58 The FCC has also acted to lift television/newspaper cross-ownership limits and to replace them with more general limits on “media concentration”. The courts have endorsed the idea in principle, although the most recent rules were vacated and remanded for further justification.59

The U.S. tries to use these structural ownership limits to affect media content indirectly, avoiding specific regulation of content that would be suspect under the Constitution’s first amendment (free speech and press). But even the permissibility of this structural regulation is under constitutional assault. Although the lower courts’ recent decisions have refused to question Supreme Court precedent,60 all of

56 Framework Directive, art. 17(1).
57 See 47 U.S.C. § 330.
58 For a useful summary, see Prometheus Radio Proj. v. FCC, 373 F.3d 372 (3d Cir. 2004).
59 Id. at 404–11.
60 See id. at 401–02; Fox Television Networks v. FCC, 280 F.3d at 1046 (“contrary to the implication of the networks’ argument, this court is not in a position to reject the scarcity rationale even if we agree that it no longer makes sense. The Supreme Court has heard the empirical case against that rationale and still declined to question its continuing validity”).
that precedent justifies regulation of media under a view that spectrum remains “scarce”.61 To the extent that such a view ever had merit, it is increasingly untenable, as the FCC itself has found. A competitive media market will provide diverse and local programming; as a result, the continuation of some structural rules is designed only to eliminate residual market distortions created by the lack of adequate competition in video programming markets.

In the context of media policy, however, the European framework diverges from its general commitment to regulating only upon a showing of significant market power. “In contrast to the telecommunications sector, the EU audio-visual sector is still to a significant extent regulated by unharmonised national legislation that covers a much wider scope than the limited aspects dealt with by Community Directives on television.” (Garzaniti 2003, § 2-001) Indeed, the new framework does not seem to embrace competition as the end (or means) of media policy. The Access Directive states, without further explanation, that “competition rules alone may not be sufficient to ensure cultural diversity and media pluralism in the area of digital television.”62 Member States are given authority, “without prejudice to measures that may be taken regarding undertakings with significant market power in accordance with Article 8”, to impose access requirements on digital radio and television broadcasting services.63 The Universal Service Directive seems to go even farther, holding that “Member States may impose reasonable ‘must carry’ obligations, for the transmission of specified radio and television broadcast channels and services ... on undertakings under their jurisdiction providing electronic communications networks used for the distribution of radio or television broadcasts to the public where a significant number of end-users of such networks use them as their principal means to receive radio and television broadcasts.”64 The threshold for must carry regulation – a “significant number of end-users” – is not synonymous with market power (Nihoul and Rodford 2004, § 7.196).

Media policy does seem to be the principal area in which competition policy and other social goals come most directly into conflict. In the U.S., the conflict is less, for the first amendment generally points in a deregulatory direction. Indeed, U.S. courts have used first amendment scrutiny to strike down a number of FCC structural regulations, on grounds that evidence had not been developed to justify the restrictions on free speech that those regulations entailed (Speta 2004a). As noted, the freedom that Congress and the FCC have to regulate television and radio will last only so long as the courts continue to accept the scarcity rationale, for it is only that rationale that denies to broadcasters their full rights under the first amendment.

61 FCC v. Nat’l Citizens Comm. for Broad., 436 U.S. 775, 780 (1978) (upholding original cross-ownership rules under limited first amendment scrutiny, due to scarcity of broadcasting outlets); FCC v. League of Women Voters, 468 U.S. 364, 376 (1984) (refusing to revisit scarcity issue); Turner Broadcasting v. FCC, 512 U.S. 622, 638 (1994) (noting continuation of scarcity rationale).
62 Access Directive, preamble, para. (10).
63 Id. art. 5(1)(b).
64 Universal Service Directive, art. 31(1).
The differing directions of U.S. and European media policy seem not to be driven entirely by constitutional imperatives, though these are important (especially where, as in some Member States, there is a constitutional requirement that the government affirmatively ensure pluralism and media diversity) (Kurth 2005). Rather, both the absence of government broadcasting and the relatively small amount of non-commercial broadcasting in the U.S. testify to American comfort with market processes in video markets. Media deregulation is controversial in the U.S., to be sure, but the controversy generally rages over whether there is adequate competition in distribution technologies to justify deregulation. Only a small and diminishing number of U.S. commentators demonstrate any concern over whether a competitive media market would provide sufficient “good” programming – meaning educational, news, or cultural programming (Minow and Cate 2003).

Universal service

Both U.S. and European law demonstrate a strong commitment to universal service. The 1996 Act in the U.S., while it turned to market processes in many regards, emphasised and expanded the U.S. commitment to universal service.65 The Universal Service Directive similarly affirms the importance of universal service and confirms the Member States’ ability to adopt universal service protections.66 The U.S. statute and the European Directive similarly state that universal service funding should not alter competition in the market between carriers. The U.S. statute exhorts that universal service funding should be “competitively neutral”;67 the Universal Service Directive says that “it is important to ensure ... that any financing is undertaken with minimum distortion to the market and to undertakings.”68 Currently, the U.S. scheme falls well short of this aspiration, as universal service taxes are paid only by “telecommunications carriers”, which means the internet providers do not contribute (Mueller 1997, pp. 655–57). By contrast, the Universal Service Directive calls for “spreading contributions as widely as possible”.69

Given the commitment to universal service in principle, the difficult questions arise in providing appropriate tools to implement the policy. As an initial matter, it is well understood that the least distorting mechanism for providing universal service would be through general revenues (Hausman and Shelanski 1999), but, reflecting similar political realities, neither the U.S. nor Europe makes that system mandatory.70 Thus, the U.S. system opts for a telecommunications-sector specific tax and funding mechanism.

65 See 47 U.S.C. § 254; see generally Mueller 1997.
66 Universal Service Directive, preamble, art. 1.
67 47 U.S.C. § 254(d).
68 Universal Service Directive, para. 18; see also id. para. 23.
69 Universal Service Directive, para. 23.
70 See Universal Service Directive, paras. 22, 23 (permitting general funds to subsidise universal service, but also permitting taxes on communications companies).
Here, the Universal Service Directive empowers regulators to a much more significant extent than does the U.S. system, and it allows the regulators a set of tools inconsistent with a limited, competitively neutral regime. The Directive, consistent with the central tenets of the new regulatory framework, does provide that regulators should primarily rely on market mechanisms to provide universal service.71 Nevertheless, the Directive authorises national regulatory authorities to use an extremely wide variety of tools to implement universal service goals, including tariff review,72 quality of service review,73 information obligations,74 cost of service monitoring,75 and even unbundling.76 U.S. regulation has moved decidedly away from each of these techniques, even in markets which are not fully competitive. In markets demonstrating higher levels of competition, the U.S. has dismantled tariffing regimes and largely eliminated consumer protections on quality of service and non-discrimination (Helein et al. 2002). Even in less-competitive markets, full tariff and cost regulation has been replaced by price-cap regulation, a technique which is not forbidden by the Directive, but one which is not required either. Indeed, U.S. law will likely continue to limit the domain of economic regulation.

And universal service is neither an economic regulation nor a response to market failure. Although universal service can be justified on the basis that it creates long-term economic benefits by reducing unemployment or enhancing education and information, universal service is not itself a policy designed to correct for an externality in the market.77 More importantly, universal service policies – as the Directive straightforwardly acknowledges – are designed to ensure each citizen a certain level of telecommunications service as a “basic good” and to provide it at prices lower than even a competitive market might establish.78

The wide range of regulatory tools granted by the Universal Service Directive to national regulatory authorities thus presents two difficulties from the U.S. perspective. First, tools such as tariffing and cost-of-service regulation raise the prospect of continuing the inefficient internal cross-subsidisation that impeded competition in some markets. Second, granting regulators a wide range of discretion without a limiting concept such as “significant market power” raises the

71 Id. art. 3(2) (“Member States ... shall seek to minimise market distortions, in particular the provision of services at prices or subject to other terms and conditions which depart from normal commercial conditions.”).
72 Id. art. 9.
73 Id. art. 11.
74 Id. art. 10.
75 Id. art. 12.
76 Id. art. 10(1) (“Member States shall ensure that designated undertakings ... establish terms and conditions in such a way that the subscriber is not obliged to pay for facilities or services which are not necessary or not required for the service requested.”).
77 Economists, of course, have established that a network provider cannot capture all of the gain from adding subscribers to the network – the definition of a “network externality”. But the scope of universal service policy is designed to go far beyond this rather small phenomenon.
78 Universal Service Directive, paras. 4–7.
possibility of entrenchment and capture – of continuing in place regulators that themselves may be barriers to entry of new services. To be sure, the new regulatory framework limits regulatory authority in its other Directives, but the breadth of regulatory powers under the Universal Service Directive seems to cut in the opposite direction.

Last, the EU system on universal service contemplates the authorisation of only a single universal service provider in any geographic location.79 But this is not competitively neutral, especially if the system is properly designed to offer either consumer or producer subsidies. All entrants ought to be eligible to compete for those subsidies, both to maintain neutrality and to ensure the efficient provision of universal service.
Conclusions

It is probably unfair, at this date, to draw any firm conclusions about the European regulatory framework. Europe is still at an early stage of implementing the new regulatory framework, and some Member States are still transposing its provisions into national legislation. More importantly, this implementation comes against the backdrop of only recently liberalised markets. Until quite recently by comparative standards, European telecommunications was dominated by state enterprises and rigid legal monopolies.80

Nevertheless, given the need for statutory and regulatory reform in the United States, where the law still follows service-specific categories, the European framework provides an important and unavoidable example. There is no doubt that its attempt to develop rules addressing all communications services is worthy of emulation. Moreover, the basic approach of the European framework – to make regulation contingent upon a finding of significant market power – is exactly the direction in which U.S. theory and practice have been heading. Indeed, it is no exaggeration to state that the FCC is already applying this approach where, given the constraints of legacy regulation, it can.

More importantly, the European framework identifies areas on which U.S. regulatory reform must focus. Principal among these is the definition of market power, or, more accurately, the definition of those kinds of market power that will justify the costs of intensive sector-specific regulation. It seems unlikely that the U.S. will adopt leveraging theory wholesale, when so many current decisions have been premised on its rejection. But the U.S. must also devise policies to address oligopolistic markets, especially those that seem likely to have persistent,

79 This is revealed by the consultation on VoIP. Commission Staff Working Document on The Treatment of Voice over Internet Protocol (VoIP) under the EU Regulatory Framework § 4.4 (June 14, 2004) (“As long as there is one operator in a specific geographic area with Universal Service obligations, there is no need for a National Regulatory Authority to designate any other operator to offer Universal Service.”).
80 The U.S. Bell System had a de facto monopoly over many services, but it is difficult to say that it ever enjoyed a full legal monopoly (Baumol and Merrill 1997).
concentrated structures. In this regard, media policy is especially problematic, for the harm of a concentrated media structure rests not only in the economics of purchase and sale (higher prices and smaller quantities) but also in the provision of information, analysis, and cultural goods that are essential to a functioning polity.

The most likely direction for U.S. policy – and one that seems advisable – is to continue to diminish the realm of economic regulation and to rely on a single regulatory tool to address whatever the continued effects are of concentration and of network markets’ inherent tendencies toward “tippiness”. That regulatory tool is interconnection rules. In horizontal markets, such as traditional telephone and newer data markets, interconnection is necessary to ensure against barriers to entry, and evolving competition seems best protected by legal rules requiring such interconnection. An interconnection rule, however, limits the regulators’ power to wholesale markets, which decreases (but does not eliminate) the possibility for distortions from regulation. In the Access Directive, the European scheme provides a useful starting point.

Last, universal service policy must evolve to more of a “tax and spend” structure, and the U.S. system is heading decidedly in this direction. The “tax” will remain on telecommunications entities, but the use of subsidies – and, ideally, of consumer-directed subsidies such as vouchers – will again limit the domain of regulation and its distortions.
References

Baker JB (1993) Two Sherman Act Section 1 Dilemmas: Parallel Pricing, the Oligopoly Problem, and Contemporary Economic Theory. Antitrust Bull. 38:143–219
Baker JB (2002) Mavericks, Mergers, and Exclusion: Proving Coordinated Competitive Effects under the Antitrust Laws. N.Y.U. L. Rev. 77:135–203
Baumol WJ, Merrill TW (1997) Deregulatory Takings, Breach of the Regulatory Contract, and the Telecommunications Act of 1996. N.Y.U. L. Rev. 72:1037–1067
Benjamin SM (2003) Spectrum Abundance and the Choice Between Private and Public Control. N.Y.U. L. Rev. 78:2007–2102
Besen S, Farrell J (1994) Choosing How To Compete: Strategies and Tactics in Standardization. J. Econ. Persp. 8:117–131
Cave M (2004) Remedies for Broadband Services. J. Network Indus. 5:23–50
Crawley RA (2004) The New Approach to Economic Regulation in the Electronic Communications Sector in Europe: The Application of Regulatory Remedies. J. Network Indus. 5:3–22
Directive 2002/21 of March 7, 2002, on a Common Regulatory Framework for Electronic Communications Networks and Services, O.J. 2002 L108/33 (“Framework Directive”)
Easterbrook FH (1984) The Limits of Antitrust. Texas L. Rev. 63:1–40
European Commission (1999) Toward a New Framework for Electronic Communications Infrastructure and Associated Services, COM(539)
FCC (1997) Report to Congress on Spectrum Auctions
FCC (2002) Spectrum Policy Task Force Report. http://hraunfoss.fcc.gov/edocs_public/attachment/DOC-228542A1.pdf
FCC (2004) Annual Assessment of the Status of Competition in the Market for the Delivery of Video Programming, Tenth Annual Report
Farrell J, Weiser PJ (2003) Modularity, Vertical Integration, and Open Access Policies: Towards a Convergence of Antitrust and Regulation in the Internet Age. Harv. J.L. & Tech. 17:85–134
Garzaniti L (2003) Telecommunications, Broadcasting and the Internet: EU Competition Law & Regulation. Thomson Sweet & Maxwell, London
Hausman J, Shelanski H (1999) Economic Welfare and Telecommunications Regulation: The E-Rate Policy for Universal Service Subsidies. Yale J. on Reg. 16:19–51
Hazlett TW (2004) All Broadcast Regulation Politics Are Local: A Response to Christopher Yoo’s Model of Broadcast Regulation. Emory L.J. 53:233–253
Helein CH, Marashlian JS, Haddad LW (2002) Detariffing and the Death of the Filed Tariff Doctrine: Deregulating in the “Self” Interest. Fed. Comm. L.J. 54:281–318
Huber PW (1997) Law and Disorder in Cyberspace: Abolish the FCC and Let Common Law Rule the Telecosm. Oxford, New York
Krattenmaker TG (1996) The Telecommunications Act of 1996. Conn. L. Rev. 29:123–174
Kurth M (2005) Marktdefinitionen, Netzzugang und Entgeltregulierung – Medienrechtliche Belange zwischen Wettbewerbs- und Kommunikationsrecht. In: Digitale Satellitenplattformen in den USA und Europa und ihre Regulierung
Kwerel E, Williams J (2002) A Proposal for a Rapid Transition to Market Allocation of Spectrum. Fed. Commun. Comm’n, OPP Working Paper No. 38. http://hraunfoss.fcc.gov/edocs_public/attachmatch/DOC-228552A1.pdf
Laffont JJ, Tirole J (2000) Competition in Telecommunications. MIT Press, Cambridge MA
LaRouche P (2002) A Closer Look at Some Assumptions Underlying EC Regulation of Electronic Communications. J. Network Indus. 3:129–149
Marcus JS (2003) The Potential Relevance to the United States of the European Union’s Newly Adopted Regulatory Framework for Telecommunications. In: Cranor LF, Wildman SS (eds) Rethinking Rights and Regulations. MIT Press, Cambridge MA
McChesney FS (2003) Talking ’Bout My Antitrust Generation: Competition for and in the Field of Competition Law. Emory L.J. 52:1401–1438
Minow NN, Cate FH (2003) Revisiting the Vast Wasteland. Fed. Comm. L.J. 55:407–433
Mueller M (1997) Telecommunications Access in an Era of E-Commerce: Towards a Third Generation Universal Service Policy. Fed. Comm. L.J. 49:655–673
Nihoul P, Rodford P (2004) EU Electronic Communications Law: Competition and Regulation in the European Telecommunications Market. Oxford, New York
Papadias L (2004) Some Thoughts on Collective Dominance From A Lawyer’s Perspective. In: Buiges PA, Rey P (eds) The Economics of Antitrust and Regulation in Telecommunications: Perspectives for the New European Framework. Edward Elgar, Cheltenham UK
Powell MK (2004) Preserving Internet Freedom: Guiding Principles for the Industry. http://hraunfoss.fcc.gov/edocs_public/attachmatch/DOC-243556A1.pdf
Price ME, Duffy J (1997) Technological Change and Doctrinal Persistence: Telecommunications Reform in Congress and the Courts. Colum. L. Rev. 97:976–1015
Robinson GO (1989) The Federal Communications Act: An Essay on its Origins and Regulatory Purpose. In: Paglin MD (ed) A Legislative History of the Communications Act of 1934. Oxford, New York
Shelanski HA (1997) The Bending Line between Conventional “Broadcast” and Wireless “Carriage”. Colum. L. Rev. 97:1048–1080
Shelanski HA, Huber PW (1998) Administrative Creation of Property Rights to Radio Spectrum. J.L. & Econ. 41:581–607
Shelanski HA, Sidak JG (2001) Antitrust Divestiture in Network Industries. U. Chi. L. Rev. 68:1–99
Speta JB (2000) Handicapping the Race for the Last Mile?: A Critique of Open Access Rules for Broadband Platforms. Yale J. on Reg. 17:39–91
Speta JB (2002) A Common Carrier Approach to Internet Interconnection. Fed. Comm. L.J. 54:225–279
Speta JB (2002a) Maintaining Competition in Information Platforms: Vertical Restrictions in Emerging Telecommunications Markets. J. Telecom. & High Tech. L. 1:185–216
Speta JB (2003) FCC Authority To Regulate the Internet: Creating It and Limiting It. Loy. U. Chi. L. J. 35:15–40
Speta JB (2004) Deregulating Telecommunications in Internet Time. Wash. & Lee L. Rev. 61:1063–1157
Speta JB (2004a) Vertical Regulation of Digital Television: Explaining Why the United States Has No Access Directive. In: Regulating Access to Digital Television (Eur. Audiovisual Obs.)
Speta JB (2005) Making Spectrum Reform Thinkable. J. Telecom. & High Tech. L. 4:159–191
United States Department of Justice (1992) Horizontal Merger Guidelines. http://www.ftc.gov/bc/docs/horizmer.htm
Weiser PJ (2003) Toward a Next Generation Regulatory Strategy. Loy. U. Chi. L. J. 35:41–85
Werbach K (2001) A Layered Approach to Internet Policy. J. Telecom. & High Tech. L. 1:25
Wood DP (1992) The Impossible Dream: Real International Antitrust. U. Chi. L. Forum 1992:277–313
Wood DP (2002) International Harmonization of Antitrust Law: The Tortoise or the Hare? Chi. J. Int’l L. 3:391–407
A Quadratic Method for Evaluating the New Hungarian Act on Electronic Communications with Respect to the Policy and Regulatory Objectives Gyula Sallai1 Budapest University of Technology and Economics, Hungary
Abstract Hungary transposed the new EU regulatory framework package on electronic communications in 2003. It was approved by the Hungarian Parliament in November 2003, entered into force on 1 January 2004 (except a few paragraphs). The new EU directives created an integrated regulatory system for the telecommunications, broadcasting and cable tv-distribution by introducing new terminology built on the collective term electronic communications but not including postal services. Thus, the former Act on Communications, more or less complied with the regulatory framework of 1998, was replaced by two new Acts: the Act on Electronic Communications (Eht) and the Act on the Postal Services. In the course of expressing opinion on the draft bill and the final text of Eht we have not only evaluated it article by article (this was done by a wide circle of commenting operators and state administration bodies) but we also examined the prospected fulfilment of electronic communications policy and regulatory objectives. We took the objectives in the general framework directive (Article 8) for basis and completed them for handling national particularities. In this way a wellbalanced set of 20 objectives arranged into four groups has been identified. A two-dimensional evaluation method has been elaborated (basic idea derives from the Gartner’s Magic Quadrant method used for the evaluation of companies on a defined market) that systematically analysis and scores the suitability and the feasibility of the legal solutions applied to the objectives by points ranging between 1 and 10. The evaluation is carried out by experts for each objective using the scoring criteria defined for both suitability and feasibility aspects. The pairs of scores for the objectives are represented on a 10 by 10 square. The square is divided into harmonic, pragmatic, theoretic and partial quadrants depending on the high or low scoring of the suitability and feasibility. In addition, 1
the domains of the five quality grades of the solutions are also determined. Various mean, deviation and correlation statistics are calculated, so that a quantitative evaluation can be performed and well-founded conclusions can be drawn. The procedure, referred to as the quadratic evaluation method, can be used generally for evaluating legal, policy or strategic documents where the clarity and completeness of the concept (suitability) and the ability to implement the concept (feasibility) are equally important. Here we apply it to the evaluation of the Eht with respect to the 20 policy objectives and the legal and regulatory tools provided for the national regulatory authority in Hungary (NHH). The paper presents the policy and regulatory objectives identified, the quadratic evaluation method used and the results of scoring for the public draft and the final version of the Eht.
Introduction

The two key objectives of the Act on Communications (Hkt), adopted in the summer of 2001 and having entered into force on 23 December 2001, were to realise the liberalisation of the communications market (telecommunications + postal services) and harmonisation with the Community regulations in force at that time. After some time it became clear that the provisions of the Hkt had failed to meet the hopes of creating competition and of making progress in eliminating the backwardness concerning new market entrants and the penetration of the Internet. Each of the studies analysing the deficiencies and internal contradictions of the Hkt called for the earliest possible amendment of the Act. However, the most important and most urgent reason for the revision of the Hkt was the new regulatory framework that entered into force on 25 July 2003 in the European Union's member states. Member states were to integrate the new Directives, adopted by the EU's Parliament in the first half of 2002, into their own legal systems by 24 July 2003. Based on the Treaty of Accession, Hungary had to incorporate the new Community legal material into its domestic legal system by the date of accession, 1 May 2004, at the latest.

The new Community legal material created an integrated regulatory system for the transfer and distribution of information (telecommunications, broadcasting, cable TV, etc.), introducing new terminology built on the collective term electronic communications, not including postal services. Thus, the Hkt was replaced by two new Acts: the Act on Electronic Communications (Eht) (Hungarian Parliament 2003) and the Act on Postal Services. The successive drafts forming the basis of the Eht were prepared after processing the opinions received. In the course of formulating our opinion we evaluated the drafts not only article by article (as was also done by a wide circle of commenting operators and state administration bodies) but also examined the fulfilment of the electronic communications policy and regulatory objectives. We took the objectives in the Framework Directive (Article 8) as a basis and supplemented them to take national particularities into account, so that a well-balanced set of 20 objectives, arranged into four groups, was formed. In order to
evaluate both the conceptual and the practical fulfilment of the objectives, a method called the quadratic (or quadrant) evaluation method was used, which scores the suitability and the feasibility of the applied legal solutions separately. We collected the inputs of independent experts and of specialists from operators and regulators in evaluating the Bill (the second draft) of the Eht (Ministry of Informatics and Communications and Ministry of Justice 2003) and the Act, so that statistics of the scores could be calculated. The paper outlines the Community's new regulatory package on electronic communications, identifies the policy and regulatory objectives, presents the quadratic evaluation method used, presents some expert opinions and analyses the statistical results.
The reform of telecommunications regulation in the European Union

The liberalisation of telecommunications in the European Union (EU) was launched in 1987 by the issue of the Green Paper on liberalisation (European Commission 1987). From then on, the sector's step-by-step liberalisation was carried on for around ten years, reaching the liberalisation of telephone services and infrastructure in January 1998. Striving for an integrated and flourishing telecommunications services and infrastructure market in this period, European telecommunications policy aimed at facilitating competition and eliminating unnecessary differences between the member states, and took liberalisation and harmonisation measures to these ends. As a consequence of these efforts, telecommunications started to show major development, reaching outstanding results especially in the digitisation of the network and the evolution of mobile services. From a legal point of view the sector became fully liberalised, with a certain degree of competition. However, the intensity of this competition varied greatly by country and service market, and the availability of the Internet was far below the level of the leading countries of the world. The regulatory regime, developed through step-by-step liberalisation and consisting of more than twenty directives, became mosaic-like and rather difficult to interpret, with implementations in the various countries containing significant differences; this made it increasingly difficult to create an integrated European market and pan-European services.

The convergence of telecommunications, information and media technologies became ever more evident in the nineties. This was backed by the booming development of digital technology, because a great variety of content can be handled digitally. Over a 30-year period the specific cost per bit decreased to at least one-millionth in the case of transmission, and to one twenty-thousandth to one fifty-thousandth in the case of storage and processing. The convergence of technologies opened unprecedented business opportunities through the vertical integration potential of the content production–transfer/distribution–consumption/use value chain on the one hand, and through the convergence of service–network–terminal solutions, separated in the past
according to content variations (voice, data, video), and through the possibility to transfer any content on any network, on the other hand. The exploitation of these synergetic opportunities made it necessary to eliminate regulatory obstacles. The Green Paper on convergence published in 1997 was aimed at facilitating the convergence of technologies, and launched a substantial change in the regulatory regime of telecommunications (European Commission 1997). The regulatory model in Fig. 1, finalised and adopted by 1999, drew a sharp line between the regulation of content and that of services and networks (European Commission 1999). Theoretically, the regulation of content – involving the regulation of audiovisual content, web services, and information society services such as e-commerce, e-government services, etc. – should be independent of the transfer method, and the regulation of transfer should be independent of technology, given the strong convergence of various services and networks. This latter is expressed by the introduction of the terminology of electronic communications services and networks. The concept of electronic communications networks, regardless of the type of information to be forwarded, comprises all types of networks that are suited to transferring signals by wire, radio, optical or other electromagnetic means (telecommunications and computer networks, broadcasting and distribution networks, even electricity networks). Electronic communications services are services that primarily provide for the transfer of signals on electronic communications networks – against a charge – e.g. telecommunications and broadcasting services, not, however, including content services and information society services and applications.

[Fig. 1 depicts three layers: applications and content services (information society services, e.g. e-commerce, audiovisual content); electronic communications services (e.g. telephone, fax, data transmission, e-mail); and electronic communications networks (terrestrial, mobile, satellite, cable TV, power line systems, radio and TV broadcast networks).]
Fig. 1. Community model of the regulation of infocommunications
The experiences drawn from the competition-stimulating regulation of telecommunications and the responses to the regulatory challenges of convergence led to the detailed elaboration of the regulation of electronic communications. The regulatory package on electronic communications (Ryan 2003), most of which was adopted in the first half of 2002 and entered into force in the EU member states on 25 July 2003, took the place of the former telecommunications regulation as
the regulation of the transfer of a wider infocommunications sector created by the convergence of telecommunications with information and/or media technologies.
Novelties of the electronic communications regulation

The novelties in the new regulatory framework for electronic networks and services (Ryan 2003) can be grouped into three topics, as follows:
• an approach to competition law,
• the principle of technological neutrality,
• closer coordination of the national regulatory authorities (NRAs).

Approach to competition law

Sector-specific regulation was deemed excessive under increasing competition in the market. The new regulation of the market can be characterised by an approximation to competition law. This means that sector-specific ex-ante regulation should be applied only in markets where competition is not efficient and the tools of competition law are not sufficient, or where it is required by general social objectives and obligations such as consumer protection or universal service obligations. The general principle is that competition is deemed inefficient in a certain market if an operator with significant market power (SMP) can be identified in that market. In terms of the determination of the SMP operators and the imposition of ex-ante obligations, the new market regulation is more flexible than the previous one in three respects, providing a better assessment of countries' market structures and of the degree of market distortion:
• The circle of relevant markets, where an SMP operator can be identified, changes fundamentally: as opposed to the former four relevant markets, a total of 18 markets – 7 retail and 11 wholesale – are defined; and it is also possible to define additional markets through coordination with the European Commission (EC).
• In a relevant market an operator will in future be deemed to have SMP if it can make market decisions (on its own or jointly with others) with some degree of independence from users and rivals (principle of dominance). This new rule cancels the former strict criterion of a 25% market share.
• The circle of obligations to be imposed on the SMP operators, the so-called remedies, will not be pre-determined in the future, as it is now, but will be determined on a case-by-case basis in accordance with the reasons for and level of the insufficiency of competition in the market. Remedies can be selected from a list, and it is also possible to apply obligations of other types (after coordination with the EC).
The regulation entrusts the NRAs with the management of these dimensions of flexibility. In this way, the scope of their authority and responsibility is significantly increased. This, in turn, makes a more efficient management of a rapidly changing market possible. Thus, the NRAs regularly analyse the intensity of competition in the relevant markets and modify the circle of SMP operators and the remedies imposed. As a consequence, sector-specific interventions will be reduced or abolished in the markets where competition grows stronger, and the regulation will approach competition regulation. At the same time, we should be aware that the implementation tasks of the NRAs have increased, and their role is becoming more intensive.

The principle of technological neutrality

The development of technology has gone beyond network-dependent rules; content can be transferred on just about any network as a result of convergence. Thus, an ultimately consistent, technology-neutral regulation of infrastructure and services is indispensable. This also inspires the evolution of convergence, the appearance of new solutions and competition among a variety of solutions. The application of the principle of technological neutrality caused deeply rooted modifications in many elements of the regulatory regime (e.g. it extended the issue of the interconnection of networks to any network, and expanded the interpretation of universal services and mandatory number portability to mobile operators). At the same time, the procedure for entering the market becomes drastically simpler.

Close coordination of regulatory authorities

The activities of the national regulatory authorities (NRAs) undergo fundamental changes in order to manage the rapid technological development and to coordinate European regulatory practice more efficiently. The following have been formulated:
• requirements for member states concerning the legal and operating conditions to be ensured for the national regulatory authorities (independence; technical, economic and legal competence; a list of scopes and responsibilities; empowerment to impose remedies, decide in disputes and collect information; transparency of decision-making procedures; the possibility to appeal against decisions, etc.);
• EU-level harmonisation tools, comprising recommendations, guidelines and a harmonised list of standards; coordination institutions (Communications Committee, European Regulators Group, etc.); and consolidation mechanisms, such as the obligation of the NRAs to consult on issues concerning the common market, and the right of the EC to have an NRA decision withdrawn on the determination of relevant markets, the identification of SMP operators, etc.
Electronic communications policy and regulatory objectives

It is helpful to summarise the electronic communications policy goals and regulatory principles, based on Article 8 of the Framework Directive (European Communities 2002), as a comprehensive EU system of objectives. Their implementation by the member states in their legislation empowers their regulatory authorities to employ suitable and proportionate regulatory tools. The referenced Framework Directive specifies 15 items that, taking into account the special conditions in Hungary (typical of the acceding countries), can be supplemented with an additional 5 items (marked by *) and grouped as follows (Sallai 2003):
A) Further development of the electronic communications infrastructure of the information society, including:
A1. *Promotion of the expansion of new services and technologies related to the information society and the deployment of convergence;
A2. *Elimination of backwardness in the availability of the Internet;
A3. Application of the principle of technological neutrality;
A4. Encouragement of efficient investment in infrastructure and promotion of innovation;
A5. Ensuring the integrity and security of public electronic networks.
B) Promotion of competition in the provision of electronic communications networks and services, including:
B1. Elimination of distortions and restrictions of competition, and prevention of their re-formation;
B2. *Differentiated treatment of market dominance;
B3. *Imposition of the usage of competition-stimulating techniques (e.g. number portability, carrier selection, local loop unbundling);
B4. Ensuring maximum benefit for users, including disabled users, in respect of product range, price and quality;
B5. Encouraging the efficient use of radio frequencies and numbering resources, and the realisation of effective spectrum and number management.
C) Contribution to the development of the European Community market of electronic communications, including:
C1. *Introduction of an EU-conform regulatory system;
C2. Ensuring the development of consistent regulatory practice, harmonised at EU level;
C3. Elimination of obstacles to the provision of electronic networks and services at the European level;
C4. Non-discriminatory procedures in the treatment of electronic communications operators;
C5. Encouraging the establishment and development of trans-European networks and pan-European services.
D) Protection of the interests of citizens, including:
D1. Access to universal services for everyone;
D2. Protection of the rights of consumers, in particular by ensuring a simple dispute resolution procedure, performed by an agency that is independent of the parties concerned;
D3. Contribution to assuring high-level protection of personal data and privacy;
D4. Promotion of the provision of clear information, particularly regarding the transparency of tariffs and general terms of contract;
D5. Addressing the requirements of special social groups, in particular disabled users.
In the following, a method is shown for assessing the suitability of the legal solutions applied to these 20 objectives in four target groups (information society, competition development, Community and consumer protection), as well as their feasibility, i.e. the probability of implementation.
The Quadratic Evaluation Method

The Magic Quadrant is a method developed by the Gartner information technology consulting company. It has been used for years for the comparison of IT firms in respect of a group of products, to determine the completeness of the firms' vision and their implementation capabilities along two dimensions. Based on the method, the firms are classified into four groups in terms of their higher or lower qualification, separately in the two dimensions, as leaders, visionaries, challengers and gap players. The method can be applied to other cases as well, in particular if the subjects of the assessment can be characterised in terms of both conceptual and practical aspects. These can include, for example, the comparison of various technologies in respect of their suitability to certain requirements and their maturity and spreading potential; or the assessment of a complex plan or concept, a firm's strategy, a policy paper or a legal document designed to fulfil a multi-element system of objectives, with respect to conceptual suitability (the clarity and completeness of elaboration) as well as feasibility (the conditions and tools available for practical implementation). Here we are concerned with the latter case: the quadrant method is adapted to legal documents (bills, acts, decrees) for the analysis of the conceptual suitability and practical feasibility of the solutions applied for the implementation of a professional policy target system (Sallai 2003). The evaluation is carried out on the individual objectives of the target system in two dimensions. On the one hand, it is shown how well the objective is reflected and manifested in the applied solution: how correct in principle and conceptually how well-prepared, complete and clear the solution is (horizontal X-axis: suitability). On the other hand, it is shown whether or not (or with what probability) the objective is feasible with the applied solution, whether it complies with the circumstances, and whether the necessary regulatory and/or institutional tools are available (vertical Y-axis: feasibility). In both dimensions integer scores can be given, ranging between 1 and
10, where higher scores indicate a better suitability level and a higher feasibility probability. The pairs of scores for the objectives are represented on a 10 by 10 square. Fig. 2 shows an example containing an A1 with scores (9; 10), an A2 with scores (7; 5) and a D5 positioned at (3; 6). The square is divided into four quadrants (2×2 fields) depending on the high or low scoring of the suitability and feasibility, as follows:
• Harmonic quadrant (H): where conceptually well-prepared, feasible objectives are positioned.
• Pragmatic quadrant (P): where conceptually flat, not so well-prepared objectives are positioned that are nevertheless feasible under the given conditions.
• Theoretic quadrant (T): where conceptually well-prepared objectives are positioned whose feasibility is doubtful and whose tool system is weak.
• Partial quadrant (Q): where quasi-solutions are positioned: conceptually flat, only partly prepared, weak and probably not feasible objectives.
This quadrantal grouping of the objectives, the classification of their solutions as Q, T, P or H, is one of the key achievements of the quadratic method; it provides a picture of the level of elaboration of the entire system of objectives and their expected feasibility.
Fig. 2. The Q, T, P and H quadrants
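A minimal sketch of this classification rule in Python may help make it concrete. It assumes, following the worked examples in Fig. 2, that scores of 6–10 count as "high" and 1–5 as "low"; the function name is illustrative, not taken from the paper.

```python
def quadrant(suitability: int, feasibility: int) -> str:
    """Classify a (suitability, feasibility) score pair into Q, T, P or H,
    assuming scores of 6-10 count as 'high' and 1-5 as 'low' (cf. Fig. 2)."""
    high_x = suitability >= 6
    high_y = feasibility >= 6
    if high_x and high_y:
        return "H"  # harmonic: conceptually well-prepared and feasible
    if high_x:
        return "T"  # theoretic: well-prepared, but feasibility is doubtful
    if high_y:
        return "P"  # pragmatic: conceptually flat, yet feasible
    return "Q"      # partial: weak concept, probably not feasible

# The worked examples from Fig. 2:
assert quadrant(9, 10) == "H"  # A1 at (9; 10)
assert quadrant(7, 5) == "T"   # A2 at (7; 5)
assert quadrant(3, 6) == "P"   # D5 at (3; 6)
```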
The 10 by 10 square is also divided into the domains of five quality grades of the solutions (Fig. 3), based on the dominance of the less favourable qualification. The solutions are classified as excellent (5), good (4), medium (3), weak (2) and unsatisfactory (1); the first three categories are considered acceptable. In this way a position assigned to an objective can be characterised comprehensively; e.g. the position of A2 (7; 5) describes a medium theoretic (T3) solution. Where there are several evaluators, the averaged evaluation of the individual objectives is calculated (mean values). The evaluation of the target groups A, B, C and D, and the summarised evaluation of the system of objectives, can be obtained by averaging the evaluations of
the objectives (central points). In general, a specific weight could be assigned to each objective. Because the system of objectives was considered well-balanced in the experts' opinions, for simplicity we apply equal weights in the statistics.
Fig. 3. The five quality domains
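The grade assignment can likewise be sketched in code. One plausible reading of Fig. 3 is that the grade is set by the less favourable of the two rounded scores, in bands of two (1–2 unsatisfactory, 3–4 weak, 5–6 medium, 7–8 good, 9–10 excellent); this reading reproduces the worked examples in the text, e.g. A2 at (7; 5) as T3. The helper names are illustrative.

```python
GRADES = {1: "unsatisfactory", 2: "weak", 3: "medium", 4: "good", 5: "excellent"}

def grade(x: float, y: float) -> int:
    """Quality grade 1-5 from the less favourable (dominant) rounded score."""
    worse = round(min(x, y))
    return (worse + 1) // 2  # maps 1-2 -> 1, 3-4 -> 2, ..., 9-10 -> 5

def label(x: float, y: float) -> str:
    """Combined position label such as 'T3' (quadrant letter plus grade)."""
    quad = "H" if (x >= 6 and y >= 6) else "T" if x >= 6 else "P" if y >= 6 else "Q"
    return f"{quad}{grade(x, y)}"

assert label(7, 5) == "T3"        # A2: a medium theoretic solution, as in the text
assert label(7.40, 6.90) == "H4"  # the IE central point below: good harmonic
assert label(1, 1) == "Q1"        # an unsatisfactory partial solution
```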
An orienting interpretation can be given for the individual scores on the basis of the meaning and qualifying scale of each dimension. The X-axis measures suitability, the perfection of the concept:
10: Correct, clearly formulated system of views covering every detail.
9: System of views covering every detail, but more refined or better structured wording is required.
8: Minor deficiencies in details; more refined or better structured wording is required.
7: Minor deficiencies in details; more exact wording is required.
6: Slightly superficial, but as a whole contains every important element; wording is acceptable.
5: Slightly superficial; as a whole contains every important element; correction is needed.
4: An important detail is missing, misinterpreted or incorrect.
3: Essential or several important details are missing, misinterpreted or incorrect.
2: Several essential details are missing or incorrect.
1: Difficult to understand, unclear; essential details are missing or incorrect.
The Y-axis measures feasibility, the probability of implementation:
10: Properly feasible; an appropriate system of tools is available (91–100%).
9: Properly feasible, but the application of the system of tools requires a somewhat clearer interpretation (81–90%).
8: Some corrections are needed; properly feasible with a system of tools better suited to the circumstances (71–80%).
7: The system of tools is to be improved; its application is largely feasible with a clearer, more detailed interpretation (61–70%).
6: Slightly superficial, but an all-encompassing system of tools is available; acceptable conditions for application (51–60%).
5: Slightly superficial; as a whole all the important tools are available; application conditions are to be clarified (41–50%).
4: An important tool is missing or does not comply with the circumstances (31–40%).
3: Essential or several important tools are missing or unsuitable, not complying with the circumstances (21–30%).
2: Essential tools are missing or unsuitable (11–20%).
1: No tools; implementation is not probable (0–10%).
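Since each feasibility score is tied to a probability band, a small lookup table (a sketch; the band bounds in percent are copied from the list above) lets a score be reported together with its implied probability of implementation.

```python
# Probability bands (in percent) implied by the feasibility scores above.
FEASIBILITY_BANDS = {10: (91, 100), 9: (81, 90), 8: (71, 80), 7: (61, 70),
                     6: (51, 60), 5: (41, 50), 4: (31, 40), 3: (21, 30),
                     2: (11, 20), 1: (0, 10)}

def implied_probability(score: int) -> str:
    """Report the implementation-probability band for a feasibility score."""
    low, high = FEASIBILITY_BANDS[score]
    return f"{low}-{high}%"

print(implied_probability(7))  # -> 61-70%
```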
Quadratic evaluation of the Bill on Electronic Communications

The second public draft version of the Act on Electronic Communications (Eht), the Bill (Ministry of Informatics and Communications and Ministry of Justice 2003), has been evaluated by 15 evaluators according to the quadratic method. The evaluators were specialists of operators and of the National Communications Authority (the Hungarian regulator, NHH), members of the National Council for Communications and Informatics (NHIT) and of the Scientific Association for Infocommunications (HTE), and the author as an independent expert (IE). Six evaluators gave some explanations of their scores (called expert opinions); the others only scored the fulfilment of the objectives.

Figs. 4–8 show some examples of the expert evaluation of the Bill with respect to the 4×5 policy and regulatory objectives, and their position in the suitability–feasibility coordinate system (Sallai 2003). Fig. 4 shows the first evaluation, prepared by the author (IE). Summing up the fulfilment of the objectives, that is, averaging the evaluation scores, the central point is X=7.40, Y=6.90, which stands for a good harmonic (H4) overall state; suitability is stronger than feasibility. In Fig. 4 the central point – in the position (7; 7) by continuous interpretation of the coordinates – is marked by a star. Out of the 20 objectives, 14 received harmonic solutions (3 objectives H5, 7 objectives H4, 4 objectives H3), 1 pragmatic (P2), 4 theoretic (2 objectives T3 and 2 objectives T2) and 1 partial (Q2). In the case of four objectives (A1, B2, C1 and D5) the correction of the solution is especially important. As shown in the figure, the X and Y scores are correlated: generally a higher X score corresponds to a higher Y score. The correlation coefficient is 37.3%.
Fig. 4. Independent expert (IE) evaluation, central point: X=7.40, Y=6.90
Fig. 5 shows the evaluation by the regulatory manager of the incumbent telecommunications operator, which reflects a higher degree of satisfaction with the Bill. The only exception is the objective of the promotion of infrastructure investments, which was classified as unsatisfactory (1; 1). The coordinates of the central point are 8.05 and 7.50; the correlation coefficient is high: 92.3%.
Fig. 5. Expert evaluation from an incumbent telecommunications operator, central point: X=8.05, Y=7.50
Fig. 6 shows the qualification by a mobile operator's expert. It is generally somewhat more moderate than the IE evaluation and considers the Bill more theoretic. The coordinates of the central point are 6.35 and 5.80; the correlation coefficient is high: 72.7%. Figs. 7 and 8 show the opinions of an EU regulatory specialist and a system regulatory specialist. In their evaluations suitability and feasibility appear in a more differentiated form; the correlation coefficients are 50.2% and 51.2%, respectively. The average suitability values are 6.65 and 7.25, and the average feasibility values are 6.10 and 5.50, which means that both expert evaluators are relatively pessimistic about the feasibility of the draft law.
Fig. 6. Expert evaluation from a mobile operator, central point: X=6.35, Y=5.80
Fig. 7. Evaluation by an EU regulatory specialist, central point: X=6.65, Y=6.10
Fig. 8. Evaluation by a system regulatory specialist, central point: X=7.25, Y=5.50
Statistics of the quadratic evaluations

Taking the evaluations for each objective, the mean values and standard deviations of the X and Y scores (the square root of the variance of the X scores and of the Y scores, respectively), as well as their quadratic deviation (the square root of the covariance of the X and Y score-pairs), have been calculated (Sallai 2003). Fig. 9 shows the mean values of the six expert evaluations (the evaluations with explanations) for the individual objectives, with the mean values rounded to integers. Fig. 10 shows the mean values of all 15 evaluations.
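A sketch of these per-objective calculations is given below. The sample scores are hypothetical, not the paper's raw data, and the quadratic deviation is computed, per the definition above, as the square root of the (absolute value of the) covariance of the score pairs.

```python
from math import sqrt
from statistics import mean, pstdev

def objective_stats(pairs):
    """Mean values, standard deviations, quadratic deviation and correlation
    for one objective's (X, Y) score pairs, one pair per evaluator."""
    xs = [x for x, _ in pairs]
    ys = [y for _, y in pairs]
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in pairs) / len(pairs)
    return {
        "mean": (mx, my),                    # the objective's mean position
        "std": (pstdev(xs), pstdev(ys)),     # standard deviations of X and Y
        "quadratic_dev": sqrt(abs(cov)),     # sqrt of |cov(X, Y)|
        "correlation": cov / (pstdev(xs) * pstdev(ys)),  # Pearson coefficient
    }

# Five hypothetical evaluators scoring one objective:
scores = [(7, 6), (8, 7), (6, 5), (9, 8), (7, 7)]
print(objective_stats(scores))
```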
Fig. 9. Mean values of the six expert opinions
Considering the expert opinions only, Fig. 9 shows 15 harmonic, 1 pragmatic, 2 theoretic and 2 partial solutions for the 20 objectives. Taking all evaluations, Fig. 10 shows 10 harmonic, 7 theoretic and 3 partial solutions. The best mean value was given to objective C4 (Non-discriminatory treatment of operators) with respect to suitability (X=7.53), and to objective C1 (EU conformity) with respect to feasibility (Y=6.47). The worst mean value was given to objective A4 (Promotion of infrastructure investments and innovation) with respect to both suitability and feasibility (X=4.40; Y=3.60). The total average of all evaluations, namely the total central point, is X=6.23, Y=5.52, which stands for both the average of the mean values of the 20 objectives and the mean of the central points of all evaluators' evaluations. This shows a medium harmonic average qualification. (Considering the expert opinions only, the
total central point is better: X=7.23; Y=6.54, which indicates a good harmonic average qualification).
Fig. 10. Mean values of the 15 evaluations
The figures show the correlation of the X and Y mean values. This correlation coefficient is 79.4% (Fig. 10), which is obviously higher than the mean of the correlation coefficients of the individual evaluators (65.1%). In comparison to the IE evaluation, the other quadratic evaluations typically show a higher correlation of X and Y scores. To represent the dispersion of the scores for the individual objectives, Figs. 11–13 sketch the score-pairs of the evaluators for the objectives A4, B3 and C1, respectively. (Experts' score-pairs are darker.) As regards A4, the standard deviation of the X scores is 1.93, that of the Y scores is 1.82, and the quadratic deviation of the XY score-pairs is 1.74. Regarding objective B3, the same deviation figures are 1.14 for the X scores, 1.45 for the Y scores and 0.94 for the joint deviation. Regarding objective C1, the same deviation figures are 1.53, 1.50 and 0.99. The three greatest deviations (more than 2.1) occur at the objectives A5, D3 and D4, for X scores, Y scores and XY score-pairs alike. The significant deviations are explained by the evaluators' different value scales and sensitivities, and by differences in the interpretation of the score definitions. The average standard deviation of the suitability (X) evaluations is 1.74 (considering the expert opinions only, it is 1.38). The average standard deviation of the feasibility (Y) evaluations is 1.81 (for the expert opinions: 1.58). The average quadratic deviation of the XY score-pairs is 1.51 (for the expert opinions: 1.02). In all cases the average deviation is higher when all the evaluators are taken into account. This shows greater uncer-
tainty in assessment. Similarly, the average deviation of the feasibility is also higher, due to the more difficult assessment.
Fig. 11. Scores for the objective A4 (Promotion of infrastructure investments and innovation). Mean: X=4.40, Y=3.60; Deviation: 1.93 for X, 1.82 for Y, 1.74 for XY
Fig. 12. Scores for the objective B3 (Imposition of the usage of competition-stimulating techniques). Mean: X=7.33, Y=6.13; Deviation: 1.14 for X, 1.45 for Y, 0.94 for XY
Fig. 13. Scores for the objective C1 (Introduction of EU conform regulatory package in electronic communications). Mean: X=6.33, Y=6.47; Deviation: 1.53 for X, 1.50 for Y, 0.99 for XY
Quadratic evaluation of the Act on Electronic Communications (Eht)

The quadratic evaluation was performed again by the author (IE) on the final version of the Act on Electronic Communications (Eht) passed by the Parliament (Hungarian Parliament 2003). The former version of the Eht, the Bill, was amended in several paragraphs during the coordination and parliamentary work. The amendments also affect the extent of the fulfilment of the objectives: they concerned the objectives A2, B3, C1 and C3 favourably, the objectives A5 and B2 neutrally, and the objectives A3, B1, B4, D1 and D3 unfavourably. Additionally, the scores of the IE were reconsidered for some objectives where significant deviations from the mean values had been detected (Sallai 2003). The scores for the Eht are shown in Fig. 14. Out of the 20 objectives, 14 obtained harmonic solutions (1 objective qualified as excellent, H5; 7 objectives as good, H4; and 6 objectives as medium, H3), 5 theoretic (3 objectives as T3 and 2 objectives as weak, T2) and 1 partial (Q2). Summing up the fulfilment of the objectives, the central point is X=7.20, Y=6.45, which stands for a medium harmonic (H3) overall state. The correlation coefficient between the X and Y scores is 45%. Compared with Fig. 4, the quadratic evaluation of the Eht is some-
what weaker (the difference of the central points is 0.2 along X and 0.45 along Y), while the correlation coefficient is slightly higher.
Fig. 14. Independent expert (IE) evaluation of the Act on Electronic Communications. Central point: X=7.20, Y=6.45
The results of the quadratic evaluation of the Eht highlight some issues that are particularly important in the formulation of the government and ministerial decrees, in the work of the authority (NHH), and in the preparation of the next legislative steps, as follows:
• The needs of special social groups and disabled users (Objective D5) are hardly addressed in the Eht;
• The promotion of infrastructure investments and innovation (Objective A4) is only indirectly formulated in the Eht;
• The implementation of the objectives related to competition development (Objectives B1, B2 and B4) needs the maximum exploitation and enlargement of the regulatory tools;
• Promoting the deployment of information society technologies (IST) and convergence (Objective A1) is emphasised only in principle, and the articles related to supporting the spread of the Internet (Objective A2) need further clarification;
• The introduction of EU-conform regulation in general and for universal services (Objectives C1 and D1) needs particular attention when making the executive decrees.
References

Dragos N (2003) Gartner Methodology. Gartner Symposium on Application Integration and Web Services, Rome, 16 June
European Commission (1987) Green Paper on the Development of the Common Market for Telecommunication Services and Equipment. COM 290 final, Brussels
European Commission (1997) Green Paper on the Convergence of the Telecommunications, Media and Information Technology Sectors, and the Implications for Regulation. Towards an Information Society Approach. COM 623, Brussels
European Commission (1999) Communication from the Commission: Towards a New Framework for Electronic Communications Infrastructure and Associated Services (Communications Review). COM 539, Brussels
European Communities (2002) Directive 2002/21/EC of the European Parliament and of the Council of 7 March 2002 on a common regulatory framework for electronic communications networks and services (Framework Directive). Official Journal L 108, 24 April, pp 33–50
Hungarian Parliament (2003) Act No. 100 on Electronic Communications. Budapest, 24 November
Ministry of Informatics and Communications (IHM), Ministry of Justice (IM) (2003) Bill of the Act on Electronic Communications. July. 004735/2003, Budapest
Ryan MH (ed) (2003) The EU Regulatory Framework for Electronic Communications and Related EU Legislation. Handbook. Arnold & Porter, London
Sallai G (2003) The Quadratic Evaluation of the Act on Electronic Communications. Report (in Hungarian), Budapest, p 43
The Status of Regulation and Competition in Poland in the Advent of the Accession to the EU1

Jerzy Kubasik2

Poznań University of Technology, Poland

1 Already published in: J. Kubasik, "Poland: Is Regulation in Place? The Status of Regulation and Competition in Poland in the Advent of the Accession to the EU". In: Communications & Strategies, no. 56, 4th quarter 2004, pp. 125–150. For more information, see: www.idate.org.
2 E-mail: [email protected]
Abstract

The subject of this work is an assessment of the performance of the legislature and the regulatory agencies, and of the development of the Polish telecommunications market in recent years. It focuses on the preparation of Polish telecommunications for operating within the enlarged European Union after May 1, 2004. The legal regulations in force in the years 2001–2004 and the results of their application are reviewed. The activity of the regulatory agency and the impact of its decisions on the market are also assessed. An attempt is made to identify the reasons for the alarmingly delayed development of the Polish market and to explain why its condition is much worse than in the other acceding states. Finally, the work discusses the legal solutions for implementing the EU's new regulatory framework for electronic communications, which were recently the subject of legislative debate in the Polish Parliament (adopted in July 2004), and the prospects of the Polish telecommunications market in the context of Poland's accession to the EU.
Introduction

The present outlook for the telecommunications market in Poland is far from optimistic, and its development must be considered slack. At present, one can hardly blame this situation on the legal provisions of the past or on the current practice of regulation. This practice is based on a peculiar philosophy of implementing competition beginning with the local market ("from the bottom"), and
burdening operators with dual expenses, namely capital expenditure and license fees. The market has remained depressed although the long-distance connections segment was liberalised on Jan. 1st, 2002, the highly profitable segment of fixed-to-mobile (F2M) calls was formally declared open on the same day, and the international calls segment was opened on Jan. 1st, 2003. Studies of the retail prices of services demonstrate that the competition mechanisms have not yet become active.

Throughout the 1990s, successive governments continually promised to encourage competition and improve regulation. A particularly spectacular instance of this was the liberalisation of the long-distance connections market. Its concept was developed between 1997 and 1999, undergoing fairly radical changes in the process, the last of which took place after the conclusion of the tender procedure for long-distance licenses. Apparently liberal, its provisions included the free choice of operator for subscribers (open call-by-call) and third-party billing of services. In spite of this promising legislation and official declarations, operators were not offered any support. This was because the liberalisation of the long-distance market occurred at the same time as the privatisation of Telekomunikacja Polska (TP). The calendar for deregulating the market developed at that time – a secret document providing for a destabilisation of the regulatory and legislative process, along with the other processes of privatisation – resulted in several years of what amounted to an administrative standstill. After the expiry of the term of office of the government that had prepared the launch of privatisation, the provisions of the calendar, which in a modified form were included in the privatisation agreement, could only be guessed at, and in practice encumbered the regulator's activity. Even now, the regulator, the legislature and certain representatives of the incumbent operator continue to refer to those secret provisions when attempting to justify their points of view.
Competition in Polish telecommunications

In the years 1990–2003, the number of fixed lines in Poland almost quadrupled to around 12,275,000. At the end of 2003, there were 32.1 main telephone lines per 100 inhabitants (Fig. 1). The fixed telephony growth rate was highest in 1999, during which 1,375,000 new lines were installed, over 1,150,000 of which were connected by TP.
Fig. 1. Fixed and mobile penetration in Poland, 1989–2004 Source: GUS – Polish Statistical Office, Warsaw.
TP is the largest of the incumbent telecommunications operators in the new member states of the EU. With 11,371,000 fixed telephony lines in Poland, its share of the local network market amounts to 90.1%.3 In spite of this substantial improvement during the last decade, the density of the fixed network in Poland is still only half of the EU average (Fig. 2). On a more optimistic note, Poland is among the very few new EU member states where the number of fixed telephones has actually increased.

3 TP data as of June 30th, 2004.

The fixed telephony market has been stagnant for several years. Fewer and fewer new lines are being connected, as independent operators have been hit by the global depression in the IT and telecommunications sectors, aggravated by local regulation issues, and because the TP group prefers to invest in mobile telephony. As a result of rapid growth in mobile telephony, demand for fixed telephony services is dropping, especially as increasingly attractively priced offers from mobile operators make cellular services an appealing substitute for fixed services. The current relatively low penetration rate of fixed telephony is not expected to change, as certain services are taken over by cellular telephony and other technologies. Nevertheless, fixed telephony is not in danger of extinction. It will remain an important medium of data transmission, especially as a popular means of internet access.
Fig. 2. Fixed penetration in the EU (%) Source: ITU, May 2004
In late 2003, 88 companies held licenses to operate fixed telephony networks in Poland. Of these companies, 27 were authorised to provide the services available through CSC.4 This number grew by another dozen in 2004 (URTiP 2003). However, since no effective regulation is in place, only a few operators holding CSC licenses have concluded interconnection agreements with the TP network, and even fewer have acted on such agreements. Accordingly, neither the number of operators authorised to engage in business activity in the telecommunications sector, nor the character of such authorisations, provides a reliable index of market liberalisation. The European Commission has made a disapproving assessment of the condition of the Polish telecommunications market, emphasising that in spite of its formal liberalisation, the prices of services remain among the highest in Europe due to TP's constant subversion of its competitors' activities (Streżyńska et al. 2003).

4 CSC – carrier selection code.

In the 1990s, the liberalisation of the local telecommunications market resulted in the emergence of over 50 alternative operators, who developed their local networks and had gained approximately 1,148,000 mainlines by the end of 2003. The largest of these are Telefonia Dialog and Netia (URTiP 2003). The liberalisation of Polish telecommunications continued in 2001, when TP's monopoly of the domestic long-distance (DLD) market was abolished. Initially, due to the aggressive activity of NOM and the market entry of the companies Netia 1 and Energis, alternative operators took over a share of some 27% of these services. However, this turned out to be a short-lived success, and liberalisation
was compromised by difficulties with the rebilling of NOM's services by TP, caused by the formulation of the law on VAT. According to the terms of the interconnection agreement concluded between NOM and TP in 2002, the latter ceased to lose its share of the long-distance market and actually began to regain it to a certain degree, unfortunately to the detriment of consumers (Kubasik 2002). At present, TP's share of the DLD market has fallen to 81%, and its major competitors are Tele2, Netia, Energis and NOM.
Fig. 3. Mobile penetration in the EU (%) Source: ITU, May 2004
The international telephony (ILD) market was formally liberalised at the beginning of 2003. By mid-2004, competitors had taken over 23.6% of the connections previously initiated in TP's network. The most significant players in this market, besides TP, are Tele2 and Energis. A TP subscriber must be connected to a digital exchange in order to fully benefit from the offers of the alternative operators. In fact, according to the timetable approved by the regulatory authority, almost a million subscribers will have to wait for the replacement of the analogue exchanges owned by TP until the end of 2005.5

5 According to the regulator's decision – 305,742 subscribers until the end of 2003, 397,012 until the end of 2004 and 231,543 until the end of 2005.

The fixed-to-mobile (F2M) market was liberalised in October 2003. Within three quarters of a year, the independent operators gained a 17.5% market share. TP is expected to continue losing share in this market.

The year 2002 also marked a breakthrough in the division of the market between cellular and fixed telephony. By the end of 2002, the number of fixed telephone lines had grown to 11,872,000, while the number of mobile users had increased to 13,890,000. In March 2004, the density of mobile telephony exceeded 50% (Fig. 1), and in the summer of that year, the number of mobile users totalled more than 20 million (Fig. 3). Three operators are currently active in this market, offering eight brands (3 post-paid and 5 pre-paid ones). All the operators' market shares are similar, ranging between 30.4% and 38.4% (Fig. 4). All three operators hold both GSM/DCS and UMTS licenses. According to a decision of the regulatory authority, the date for the launch of the provision of UMTS services has been postponed to early 2006, although an operator may inaugurate them earlier.
Fig. 4. Mobile market shares in Poland (1H2004): PTC 38.4%, Centertel 31.2%, Polkomtel 30.4%. Source: TP, July 2004
Internet access is also spreading fast in Poland, even if quantitative estimates differ considerably depending on their sources. According to the 4th IBM report,6 internet penetration amounts to 23%. Only 13% of Polish households have internet access (the European average being 45%), typically through a modem and an analogue telephone line. The development of this service is hampered by the facts that households are not wealthy and therefore own few computers, and that services are expensive. According to the ITU, there were just over 10 PCs per 100 inhabitants of Poland in 2003. An estimated half a million users, one third of whom are cable television subscribers, have broadband internet access through all existing platforms (URTiP 2003). Among telecommunications operators, the largest provider is TP, with its 270,000 ADSL and 70,000 SDI subscribers.7 Other companies offering DSL technology to their customers include Telefonia Dialog, Netia and the TeleNet group. Another possible source of readily available, always-on internet access is the rapid development of wireless technologies, in particular WLAN/WiFi.

Poland remains one of the countries with the most expensive telecommunications services. While price reductions are continuously improving Poland's position in the ranking summarised in Table 1, and the value of the baskets is dropping, the changes are not radical. The value of the home basket, expressed in PPP, reached between USD 864 and 970 in 2003, about 77% higher than the OECD average (URTiP 2003).
6 "4th Report on Monitoring of EU Candidate Countries (Telecommunication Services Sector)", December 16th, 2003.
7 SDI users have access to a data transmission method (the HiS technology by Ericsson) similar to ADSL but with a much lower transmission speed (max. 115 kbps). Accordingly, SDI should not be considered a category of broadband internet access.
Table 1. The prices of telecommunications services: Poland's rank (OECD baskets, USD/PPP; rank 1 = most expensive)

Basket                         Feb. 2002  Feb. 2003  May 2003  Aug. 2003  Nov. 2003
Business basket                     3          3          4         4          4
Residential basket:                 1          3          3         3          3
- fixed basket                     13          7          8         7          8
- domestic F2F calls basket         4          2          3         4          3
- domestic F2M calls basket         2          2          2         3          4
- Internet dial-up basket           2          1          3         2          2
- ILD calls basket                  3          5          5         5          5

Source: URTiP based on Teligen's T-Basket (URTiP 2003)
Regulation in practice

As of the beginning of 2001, the Polish telecommunications market has been regulated by the new, modern Telecommunications Law of 2000 (TL2000). The provisions of TL2000 include the definition of universal service, the specification of the rights and duties of operators with significant market power (SMP) and the establishment of a national regulatory authority (Kubasik 2002). The first three years of TL application do not warrant optimism. This section constitutes an attempt to establish whether regulatory activity with a view to liberalising the most substantial sector of the telecommunications market, i.e. fixed and mobile telephony, has achieved the aims specified in TL2000, which consist of:
• assuring universal access to telecommunications services throughout the entire territory of Poland,
• protecting the interests of telecommunications users,
• furthering fair and effective competition in the provision of telecommunications services.
The assessment covers the enacting of legal regulations applying to business activity in the telecommunications service market, the granting of licenses to engage in such activity, the handing down of decisions and rulings by the regulatory authority, the inspection of the telecommunications service market, and the availability of universal services provided by Poland's largest operator.
The Minister responsible for posts and telecommunications

The "central-level administration authority" dealing with the matters of telecommunications is "the minister competent for posts and telecommunications". As of Oct. 23rd, 2001, this has been the Minister of Infrastructure (MI). Previously, as of July 25th, 2001, the Minister of Economy (ME) was responsible for telecommunications, and prior to that, the Minister of Posts and Telecommunications (MPT). According to TL2000, "the minister competent for posts and telecommunications" was obliged to enact at least eight major decrees necessary to regulate the market for services provided by public fixed telecommunications networks. Out of these eight, only two were adopted at the same time as TL2000. The last decrees were enacted only in October 2003, almost three years after the date when the law came into effect. Four were amended in 2004, due to the adoption of the consolidated version of TL2003.

Due to the lack of a decree specifying the detailed requirements for the interconnection of telecommunications networks, it was impossible for the regulator to take efficient action with a view to a rapid and non-discriminatory connection of other operators' networks with TP's. Until the end of 2002, there was also no decree concerning leased line services, and the regulator was therefore unable to intervene adequately against TP's denial of its telecommunications lines to other operators. In spite of the delayed enactment of the executive decrees of TL2000, the MI chose not to involve the regulatory authority in the development of the draft texts of legal acts. Due to the parliamentary debate about the amendments to TL2000, which took place in the years 2002–2003, the MI altogether gave up or slowed down its development of executive decrees applicable to the existing text of the law, without, however, speeding up work on the executive regulations applicable to the amended provisions of TL2003. While the draft text of the amended law was submitted to the Sejm (the lower house of the Polish parliament) on July 30th, 2002, and the law itself was adopted on May 22nd, 2003 and published on June 30th, 2003, the MI failed to enact on time certain decrees crucial for the regulation of the telecommunications services market and required by the amended TL2003, including the decree specifying the detailed conditions of LLU (April 2004) and the decree specifying the manner of cost calculation by telecommunications operators (December 2003).

Certain regulations adopted in the years 2001–2004 evidently fail to promote the aim defined as "furthering fair and effective competition in the provision of telecommunications services". Thus, for example, in its decree of October 2002 on the detailed terms of the exercise of their rights by subscribers of the public telephone network, the MI established the policy that DLD, F2M and ILD calls should be put through by the carrier supporting the subscriber, unless the latter had selected another operator for the performance of this service. According to the initial drafts of this decree, there was to be no "default carrier" but instead a continuation of the policy that the subscriber should be obliged to select the operator by means of preselection or a CSC. Eventually, under pressure from TP and in spite of protests from alternative operators, the policy of a default carrier was established, enhancing TP's market power.
The MPT, and later the ME, did not take decisive action to stimulate an expansion of the telecommunications sector at the time when the application of the provisions of TL2000 resulted in a wide disparity in the public-law financial obligations of various companies engaging in the same activity. Thus, while before 2001 telecommunications licenses were granted for fees of hundreds of millions of euros, the cost of a license later fell to a mere EUR 2,500 for fixed telephony and EUR 5,000 for cellular telephony. Regardless of the difficult financial position of operators pursuing their activity under older licenses, their applications for postponing the payment dates of license fee instalments were examined over excessively protracted periods, in violation of statutory time limits. Out of the 31 decisions on postponing the instalment payment dates of license fees, in seven cases the time from the submission of the application to the handing down of the decision was between 63 and 631 days. In twelve cases, decisions were handed down after the due date of payment of the instalments, even though the applications had been submitted on time. All of these irregularities contributed considerably to impeding the investment activity of operators competing with TP (NIK 2004). Similar procrastination, detrimental to the liberalisation of the telecommunications service market, is noticeable in the MI’s administrative procedures related to decisions handed down under the provisions of the law on restructuring the license fees of public fixed telephone network operators. Out of the eleven applications for restructuring license fees through a conversion into incurred investment expenses, submitted by nine operators in late 2002, in six cases the MI handed down its decisions after procedures lasting over seven months, and it responded to the remaining applications only after another several months (NIK 2004). These irregularities may stem from the MI’s insufficient commitment to telecommunications issues; after all, the same ministry also manages road and railroad transportation, navigation, aviation and building engineering. It was only four months after the MI had become “competent for telecommunications” that this subject began to be discussed at meetings of the Ministry’s senior executives.

The regulatory authority (URT/URTiP)

The “competent regulatory authority for telecommunications activities” (a “central-level administration authority”) is the President of the URTiP (Urząd Regulacji Telekomunikacji i Poczty – the Office of Telecommunications and Post Regulation). As of Apr. 1st, 2002, this administrator took over the duties and prerogatives of the President of the URT (Urząd Regulacji Telekomunikacji – the Office of Telecommunications Regulation), the regulatory and inspection authority competent for telecommunications established on Jan. 1st, 2001 (Kubasik 2002). Due to the ineptitude of the URT’s administrative procedures, the President of the URT was unable to exercise against TP the regulatory instruments which, under TL2000, are admissible with respect to operators with significant or dominant power in a given telecommunications service market.
The telecommunications license which the President of the URT issued to TP on Feb. 28th, 2001, contained content-related and typographic errors. TP immediately applied for the document to be supplemented and rectified, whereby it became invalid. TP now had a formal legal excuse for challenging all regulatory decisions relating to the company in its capacity as a public operator. The most flagrant errors in the license were rectified only a year later, while content-related issues were settled as late as May 2002, under an agreement between the Presidents of TP and the URTiP. The procedures to determine the public operator’s market power as a provider of various telecommunications services were equally protracted. The President of the URT initiated the administrative proceedings on Apr. 11th, 2001, i.e. on the effective date of the MPT’s decree on the detailed criteria for determining an operator’s share in the market for a telecommunications service, and in September 2001 handed down a decision determining that TP had dominant power in the domestic market for the provision of universal telecommunications services. TP, however, appealed to the Anti-Monopoly Court, complaining that the decision violated the law by assuming erroneously (on the basis of an invalid telecommunications license) that TP was indeed a public operator (cf. supra). The appellate proceedings were discontinued in May 2002, after TP had withdrawn its complaint. The proceedings to determine TP’s power in other markets were initiated in September 2001. The decision pronouncing TP the dominant operator in the leased line market took effect only in May 2002, and the decision ascertaining that the company was a major operator of interconnection services provided in the domestic market was not effective until early 2003. Another instance of protracted administrative activity was the proceedings to determine the significant market power of the three Polish mobile telephone network operators, initiated on July 30th, 2002. Although the URTiP conducted three market studies during the proceedings, upon their completion it took the regulator 43 days to request operators to supplement the information in their reports. It was four months after the conclusion of the studies that draft texts of the URTiP President’s decisions were submitted for approval to the President of the UOKiK (Urząd Ochrony Konkurencji i Konsumentów – the Office for Competition and Consumer Protection). The regulator handed down its decisions ascertaining that the three companies were operators with significant market power on Dec. 31st, 2002, i.e. before the effective dates of the supporting decisions by the President of the UOKiK, thereby enabling the operators to appeal against the former decisions on formal legal grounds. After the appeals had been filed, the President of the URTiP did not exercise its right to revoke the controversial decisions, awaiting a court settlement of the case. This was tantamount to the regulator relinquishing its right to initiate new administrative proceedings in this matter based on the data and studies already available at the time.
In line with TL2000, the regulator must hand down decisions on the interconnection of operators’ telecommunications networks within 60 days of the submission of the application. Out of the 49 proceedings conducted in the years 2001–2003, only two were completed within the specified time limit. The average duration of the procedure was roughly 10 months, and the longest lasted 1,368 days. Many instances of such protraction are described in the report of the Supreme Control Chamber (NIK 2004). Most of the operators’ applications discussed in that document concerned specifying the terms of their cooperation with TP. Accordingly, the protraction in fact favoured TP’s market power and obstructed the liberalisation of the telecommunications market. The President of the URTiP applied discriminatory criteria when examining TP’s applications for suspending subscribers’ rights, particularly the right to select the operator putting through a call. In accordance with the provisions of the TL, only the criterion of the technical possibilities of the operator’s network was applied when examining applications for suspending the rights of subscribers connected to digital exchanges. In the case of subscribers connected to analogue exchanges, however, it was assumed, in blind acceptance of the operator’s argumentation, that such subscribers would be able to exercise these rights fully only when all the analogue exchanges had been superseded by digital ones, the date of which would depend on the operator’s financial condition. Furthermore, the examination proceedings overlooked the fact that, at the time, TP favoured investment projects aimed at gaining new subscribers and providing ISDN and broadband services over replacing obsolete equipment with modern devices. By handing down decisions based on such premises, the regulator accepted that the possibility of selecting the operator of ILD calls would remain unavailable to over 600,000 TP subscribers connected to 729 exchanges until at least the end of 2003, and to 231,000 subscribers of 372 exchanges until at least the end of 2004 (NIK 2004). The regulator’s examinations of operators’ applications for the assignment of numbers and the granting of telecommunications licenses were equally protracted. The regulator began to issue licenses to operate telecommunications networks only nine months after the effective date of the amended issuing procedures. The URT only approved the procedures for the submission of applications, the new application specimens and the lists of required appendices in April 2001, the new specimen of the telecommunications license for operators of cable television networks in September 2001, and the fee-charging procedure for issued telecommunications licenses in October 2001. Even as late as 2004, the procedure of issuing a telecommunications license could take seven months. Such lengthy proceedings in this matter (and in a dozen other cases during the same year) were due to the protracted procedure of handing down opinions by the Ministry of National Defense, the Ministry of Internal Affairs and the Chief of the Internal Security Agency. The main reasons for the delayed handing down of decisions on the assignment of numbers were the lack of proper examination procedures and the fact that applications were submitted along with notifications of telecommunications activity. It was only when a notification had been accepted, i.e. once the regulator had approved the initiation of telecommunications activity, that an application for the assignment of numbers could begin to be examined.
The obligation to assess the functioning of the telecommunications service market was not being discharged. The URT’s Office of Studies and Information was liquidated when the URTiP was established, apparently in a reorganisation aimed at minimising the cost of the latter authority’s operations. The duty of conducting studies of the telecommunications service market was taken over by the DRT (Departament Rynku Telekomunikacyjnego – the Department of the Telecommunications Market), which, like many other departments of the URTiP, is continually being reorganised. Its present Director is the fourth since the establishment of the Department in April 2002. Thus, studies and analyses have been commissioned from institutions selected by tender, and fees have been paid from the budget. Due to the lack of sufficient resources in the 2002–2003 budget, only very few studies were conducted during that time. Among the issues that were not studied were the reasons for the delayed launch of new operators’ telecommunications activity. In fact, the DRT did not even review the reports of activity submitted by telecommunications operators. Action to rectify the content of TP’s report for 2002 was taken only upon the intervention of inspectors of the Supreme Control Chamber (NIK 2004).

The incumbent operator (TP)

Availing itself of its dominant power in the telecommunications service market, TP has been making it difficult or downright impossible for other operators to enter this market. A study of the negotiations for the conclusion of nine agreements on the terms of network interconnection and the mutual settlement of accounts between TP and other operators revealed that in no case was the agreement concluded within 90 days, i.e. the time after which either party may apply to the regulator for a “decision on the connection of networks”. Fourteen months passed between the date when Netia 1 applied to TP for network interconnection and the actual interconnection. Other operators also had to wait to have their networks interconnected with TP’s: Telefonia Dialog waited 14 months, NOM 12 months, Energis 10 months, and Tele2 9½ months. Although telephone communication is now available throughout the country and the average time a subscriber waits for the initiation of telecommunications services has shortened significantly (to 2½ months in 2002; in 1991, the average waiting time for the installation of a new telephone line in Poland was over 13 years – Kubasik and Kelly 1992), there are still users who have been waiting for TP to install a telephone line for over ten years. In June 2003, almost 370,000 users, or 3.5% of TP’s subscribers, were waiting for the execution of an installation order or the transfer of a main line.
Optimistically, in 66% of cases the order was carried out within a year (the MI’s average quality index being 10 months of waiting), but roughly 8% of the orders had been pending for between 5 and 10 years, and 1% for over 10 years (NIK 2004). TP did not make other operators’ ILD calls available to all of its subscribers within the time prescribed by TL2000. This option was extended to the subscribers of individual exchanges on an ongoing basis, and after six months it was available to fewer than 10 million subscribers connected to digital exchanges. It must be remembered, however, that the MI only completed the issuing of its decrees specifying the technical characteristics required to initiate the adaptation of telephone exchanges in October 2002. In spite of the unavailability of such specifications, in July 2002 TP entered into negotiations with telephone exchange providers regarding the adaptation of exchanges for the selection of ILD operators. In July 2002, TP notified the MI that the delayed enactment of the TL executive decrees might postpone the liberalisation of the ILD market.
Diagnosis and remedies

The Polish Telecommunications Law assigns certain tasks in the area of furthering competition and protecting consumer interests to itself rather than to the President of the URTiP. Ideally, the regulator’s mission should be specified not only in the form of legal provisions stipulating its prerogatives, but also as a set of strategic aims. This would make it possible to assess the activity of the President of the URTiP in terms of the achievement of those aims, and not merely of formal observance of the law. Decisive action by the President of the URTiP and the practical exercise of this authority’s competence are as important as legal regulations. Some of the regulator’s actions (e.g., consultations or publications) must comply with the established norms set by the applicable guidelines, while others must serve the overall aim of efficiently and rapidly fulfilling the regulator’s proactive market function, with a view to furthering competition. While by law the President of the URTiP is an independent regulatory authority, in practice the extent of its independence is unsatisfactory. The degree of financial independence of the regulatory authority and its employees is particularly disappointing. In theory, this independence should ensure adequately qualified personnel and the services of third-party experts. Due to its formal subordination to “the minister competent for posts and telecommunications”, the President of the URTiP may be pressured into taking the government’s priorities into account. The fact that the present President of the URTiP is the third (by law independent and non-dismissible) regulator appointed during the 15 months of the current cabinet’s functioning may raise doubts about the regulator’s stability in a period of political transformation (Kubasik 2002). Difficulties identified by the European Commission and OECD experts are that the President of the URTiP has not been allowed to independently enact executive regulations (e.g., the National Numbering Plan) or formulate certain policies of its activity (e.g., the policies of accounting and of the cost systems, the scope of the reference offer and the extent of intervention in matters of interconnection).
This results from the Polish system of the sources of law, in which the establishment of an authority, the granting of a prerogative or the assignment of a duty must have clear legal grounds, specified by law. In practice, much of the activity of the President of the URTiP is of an economic nature, requiring the ability to make decisions in a flexible manner, adapting to protean market conditions, as well as proficiency in and familiarity with the processes that affect this organisation and the specific cases that it decides. The regulator’s current mode of operation and the present provisions of national law may turn out to be inadequate in the future, when new guidelines will have to be implemented and the law will have to be much more flexible. The Bulletin of the URTiP does not print decisions on network interconnection, which the regulator is required to publish by law. Thus, the regulator’s opinions on most content-related and procedural issues, and their legal explanations, are kept secret: each operator must discover them by trial and error, knowing only that they continuously change. Polish telecommunications regulation must increase consumer participation in the consultation process. Currently, this process is restricted to expressing the opinions of consumers’ organisations and business associations (chambers of commerce and industry). These organisations may comment on draft texts of the legislation, but not on regulatory solutions (e.g., in terms of market power, reference offers or cost models). The opinions need not be heeded. The law does not specify time limits for the submission of opinions, or the authority’s obligation to respond to them. A bad practice of the URTiP is that consultations are limited to arbitrarily selected informal consulting teams. Consultation procedures must be established and published, stipulating minimum durations. The consultants must be assured the right to obtain a response to their comments from the authority, or at least an explanation of why certain suggestions have been adopted and others rejected. The duties assigned by law to SMP operators and the execution of the regulator’s decisions must be enforced, and penalties must be imposed when necessary. Decisions must be executed in a prompt and determined manner, even if this is onerous. In reality, an obligated operator does not implement the arbitration rulings, or implements them very belatedly. Accordingly, the regulator’s arbitration activity, and its very existence, have only a limited effect on the market, or at least on cooperation between telecommunications companies. Each new entrant is very familiar with the dilemma of whether to conclude the agreement on the terms set by the incumbent, in order to launch its activity as soon as possible, or to request the regulator’s support, only to find out after many years that the free space in the market has already been taken up by other companies. Another obstacle to market entry consists in such apparently minor difficulties as the delayed issuing of decisions and certificates or protracted procedures for assigning frequency bands and numbers. These procedures must be fully governed by the policies of administrative proceedings, including the authority’s obligation to notify the applicant of any possible problems with settling the matter in question.
The President of the URTiP is becoming increasingly concerned with the market power of mobile operators. Unfortunately, while the Polish law on the protection of competition and consumer interests uses the notion of a “joint dominant position in a market”, the regulatory authority, when determining significant market power, is not obliged to apply the provisions of the anti-monopoly law, and therefore cannot prevent, for example, a possible oligopoly of mobile operators or other companies in the telecommunications market. The anti-monopoly authority, however, may do so, as well as determine the relevant market based on each operator’s power resulting from its control of access to end users. Another task with which the Polish regulatory authorities have been unable to cope is distinguishing between the short-term and long-term concepts of consumer interests, i.e. reconciling the protection of consumers with the protection of competition. An example of this is the fact that for many years TP kept the prices of dial-up internet access identical with those of local calls, while inter-operator settlements of accounts were unavailable. Consequently, the internet market collapsed: the ISPs did not collect revenue, and the cost of internet access was among the highest in the world. This system operated under the guise of a pro-consumer solution, which the competent agencies dared not challenge. Thus, the development of competition was hindered, and the launch of flat-rate offers was delayed in comparison with the EU member states. The current practice of regulation is dominated by a ruinous dispute between the UOKiK and the URTiP about their competence. The two offices are predominantly concerned with demonstrating that it is the other authority that should have responded to a given issue. Operators who claim their due have to cope with the passive attitude of both offices and are largely deprived of their protection, which in turn affects their business decisions and makes them more likely to waive their rights, formally guaranteed by law. Thus, another field in which Poland must implement the new guidelines is establishing sound relations between the two agencies of the regulatory administration (Streżyńska et al. 2003).
Towards a European regulation

The adoption of TL2000 in some ways constituted a revolution in the Polish telecommunications market. The updated system featured many solutions that had to be considered liberal, while at the same time imposing a number of specific legal obligations on the incumbent operator. Soon, however, it transpired that the new law was in many respects worse than the previous Telecommunications Law, particularly because of the vagueness of its provisions, which made it impossible for the regulator to perform its duties. The new law introduced many transitory periods and deprived the existing operators of their acquired rights. Its main shortcoming was its incompatibility with EU legislation, especially in matters of carrier selection, number portability, universal services, LLU, interconnection, the inequality of the obligations imposed on companies with qualified market power, etc.
Another imminent threat to the law was the privatisation of TP, whose conditions were secret and provided an excuse for legislative omissions. The lack of essential regulatory instruments and executive decrees made the law unenforceable. All of these factors combined to produce a situation in which the applicable law was not clear, predictable or stable, to the detriment of business enterprise. As early as 2001, the European Commission ascertained that the Polish Telecommunications Law did not comply with EU law and had to be amended. The obligation of adaptation was discharged by the amended version, TL2003, effective as of Oct. 1st, 2003, which, however, was still not entirely compliant with EU law. The European Commission’s principal adverse findings regarding the amended law are as follows (Streżyńska et al. 2003):
• In breach of Directive 97/13/EC, it continues to impose a general obligation to procure a license to engage in activity in public telephone networks;
• It fails to specify accurately whether the essential rights and duties of the cooperation of operators apply to all operators (as recommended by Directive 97/33/EC), and to which operators the policy of non-discrimination has been applied;
• It re-establishes an ill-conceived access deficit charge (ADC);
• It does not clearly define universal services and the extent of the duties of the operators who provide them;
• It does not authorise the regulator to inspect leased line offerings, and does not establish a specific procedure for designating the operators obliged to provide the minimum set of lines; the definition of the latter is much broader than the one recommended by the Directive;
• It allows for the application of benchmarks with respect to operators, although it is the administration that is to blame for the lack of regulation on cost systems, and therefore for the fact that a cost model cannot be developed by TP and audited by the regulator.
The EU adopted a new regulatory framework for electronic communications in March 2002 to respond to developments since it liberalised all telecommunications markets, including public voice telephony, on January 1st, 1998. All member states were required to implement the new framework by July 2003. Nevertheless, the new framework is only being implemented slowly. Although the date for its complete implementation had been specified as July 24th, 2003, only eight of the fifteen member states of the “old” EU managed to implement it by the end of 2003 (“European Electronic Communications Regulation and Markets 2003”, 9th Report on the Implementation of the EU Electronic Communications Regulatory Package, Commission of the European Communities, Brussels, COM(2003)715, 19 November 2003).
Along with the other acceding states, Poland undertook to ensure full compliance with the effective guidelines by the end of 2002, but this task was not accomplished. As mentioned earlier, TL2000 did not implement the EU’s previous regulatory framework in a completely correct or viable manner, and the law was consequently amended in 2003, still in the spirit of the EU guidelines of the 1990s. The amended TL2003 was not fully implemented because a complete set of executive decrees was not in place (the MI enacted the final decree on July 26th, 2004, i.e. after the Parliament had already adopted the new TL2004). Meanwhile, it became necessary to begin preparations for implementing the new guidelines, which should have been completed prior to accession to the Union on May 1st, 2004. The government submitted the draft text of the new law in December 2003. After a brief consultation and a tumultuous legislative process, the Parliament adopted the new Telecommunications Law (TL2004) on July 17th, 2004. The law will take effect on Sept. 3rd, 2004 (some provisions only as of Jan. 1st, 2005), i.e. four months after the scheduled deadline. Table 2 summarises the most significant new provisions of TL2004. From the point of view of Poland’s interests, the most significant features of the new regulatory framework are (Streżyńska and Kulisiewicz 2004):
• the convergence of technology, the resulting collective regulatory instruments, applicable jointly to telecommunications, mass media and IT, and the incorporation of mass-media-specific solutions in the new guidelines;
• the evolution of regulation into the area of anti-monopoly law as the market is liberalised;
• procedures for ensuring that the provision of universal services will not be of an anti-competitive nature, and that consumers will have access to the full range of competitive state-of-the-art services (this includes the obligations imposed on non-SMP operators);
• the breakdown and allocation of the tasks and competence of the regulatory authorities, combining the regulation of this branch of business activity with anti-monopoly regulation;
• the policies of cooperation with the European Commission and of its supervision of the national regulatory authorities.
Major threats to the process of implementing the new package of regulations in Poland are (Streżyńska and Kulisiewicz 2004):
• the lack of comprehensive regulation of the entire electronic communications sector (for political reasons),
• the shortage of proper administrative staff, and the administration’s political dependence and instability,
• the paucity of financial resources available to the regulator, which makes it impossible for the latter to perform its tasks in a thorough manner,
• a closed system of the sources of law, which prevents the regulator from responding flexibly,
• disputes between authorities, particularly between the KRRiT (Krajowa Rada Radiofonii i Telewizji – the National Council for Radio and Television) and the URTiP, regarding the extent of their competence in the entire sector of electronic communications,
• the unsatisfactory development of telecommunications infrastructure, resulting, for example, in the tendency to establish intricate and anti-competitive procedures for the provision of universal services,
• the technical condition of the network, which, according to the incumbent, makes it impossible to implement fully the package of regulations from 1990–2000, and even more difficult to implement the 2003 guidelines.
Table 2. Major changes to the Telecommunications Law in Poland in 2004

Issue | TL2000/2003 | TL2004
Regulation principle | Regulation ex-post | Regulation ex-ante
Relevant markets | 4 markets for which operators with SMP could be determined (fixed, mobile, leased lines, interconnect) | Up to 18 markets (11 wholesale, 7 retail)
Licensing system | Licences and notifications | Notifications only (no entry barrier)
Price regulation | Power to reject US and LL price lists (within 14 days) | Possible use of a price cap, cost-based pricing or benchmark regulation, if market failure on a relevant retail market is determined; 30 days for rejection of a proposed price list
Cost calculation | Accounting separation, etc. | No significant changes (adjustments to relevant markets only)
Network access | Based on interconnection only, SMP obligation | Interconnection and virtual operator, SMP obligation
Number portability | For fixed networks only, SMP obligation, additional charge possible | For all networks incl. mobile, additional charge possible
Administrative costs | None | Fee of max. 0.05% of revenues, all operators
Claims | 14 (30) days to deal with a claim | 30 days; if a claim is not dealt with, it is considered acknowledged
Preselection | For all telecommunications services, limited by technical viability only | No changes
Universal services | Obligation to connect and to render basic services; no reimbursement for USO, even if net costs incurred | No changes to the scope (obligation to provide ADSL for public schools etc., with costs fully reimbursed by the state budget); possible tender for USO (for a given geographic area); URTiP’s decision on imposing USO; possible net cost reimbursement (virtual US Fund – 1% of market revenues)
Regulation of the mobile market | Very little regulation currently | As for fixed SMP (cost orientation on interconnect, price control, etc.)
Blocking of stolen mobile handsets | No regulation | 1 day for blocking of stolen handsets
After the adoption of the new law, the MI will have to face the huge task of developing many complex executive decrees. As the regulator’s prerogatives and duties have been considerably extended, one may doubt whether it will manage to cope with these responsibilities with its present personnel and financial resources. Its first task will be to conduct extensive studies of 18 product markets in order to determine the operators’ market power and obligations. In most EU member states this task has taken many months of hard work by large teams. The mode and timetable of the implementation of the new TL2004 also give grounds for anxiety. In the present situation, where the “old” package of regulations has not been fully implemented, the regulator’s conversion to the aims of the new law may result in a downright collapse of regulatory activity and in chaos: the dates and manner of implementing the new regulations are inadequate both to the current condition of competition in this market and to a regulator who misses deadlines and is in arrears with the settlement of dozens of specific and general cases.
Conclusions

Besides the persistent consequences of the telecommunications policy of past periods and the troublesome privatisation of TP, another essential reason for the absence of viable competition in the Polish telecommunications market is the lack of efficient regulation. This is due to the regulator’s manifold negligence. Firstly, the Polish regulatory agency has not published a regulatory strategy. Its decisions are slow and arbitrary. As the agency does not reveal its position until the conclusion of an administrative procedure, the parties to the litigation cannot argue with it. Neither is there an effective procedure for enforcing decisions. Secondly, a regulatory agency should exercise its duties in an objective and transparent manner. The detailed scope of the duties and their division between the regulatory and the anti-monopoly agency must be clear and known to the public.
In practice, there is an ongoing conflict of competence between the URTiP and the UOKiK. Consultation and cooperation between the organisations responsible for the development of the telecommunications market (including the legislature) should be mandatory. In practice, the regulator’s opinions are usually ignored, while in certain more complex cases the regulator is burdened with the responsibility for the legislation. Thirdly, a regulatory agency should have all the necessary resources at its disposal, including the staff, know-how and financial means required to perform its duties. In practice, the regulator is yet another under-financed, ineffective and under-educated agency of the Polish administration. Contrary to common beliefs, the number of URTiP staff dealing with regulation is small, particularly in the most essential organisational units of the office, while the education and experience of such staff members, especially those employed only recently, are unsatisfactory. Due to the low incentive value of salaries, better-qualified staff members constantly leave the office. The regulator lacks the financial resources necessary to employ truly competent staff and educate them, or to seek experts’ advice in the most important cases. The administration of the telecommunications sector in Poland is unable even to compel the observance of existing regulatory legislation, and the implementation of the intricate new directives must be considered a task altogether beyond the limits of its abilities. And finally, there is the extremely significant matter of the regulator’s political independence. In fact, due to political upheavals, the personnel of this theoretically independent agency has been entirely replaced on three occasions, and at one point the agency did not operate at all for a period of six months. On the eve of Poland’s accession to the European Union, any discussion of regulatory reform must be based on a critical and frank appraisal of the regulatory procedures already in place and of their mutual relationships. A major cause of difficulties in the Polish telecommunications market is the legislative procedure. Much has already been written on the subject of the delayed development of the legal acts of the telecommunications law. The recent amendments to the “Telecommunications Act” raised much hope of better regulation, provided that the auxiliary executive regulations became effective prior to the adoption of an entirely new act compliant with the new EU regulations. Unfortunately, it has now turned out that such hope was vain. The key issue now is the method and manner of implementing the new EU directives. The systematising of the law must begin with studies of the current state of the organisation and development of the markets, of overall legislative policy in Poland, and of the existing legislation, concerning both matters specific to networks and electronic communication services and the more general provisions, including the mutual relationships between the various agencies and organisations. The essential policy of the European Commission is to make legislation a tool suited to the level of development of the markets. Accordingly, the scope of the directives is limited to specifying the essential targets, and member states are given total freedom to choose the means of regulation.
Another component of the status quo is the existing aims of state policies, regardless of whether they have been clearly defined or are actually being pursued. If they are still valid, ways of achieving these aims must be considered, bearing in mind their compatibility with the targets of the new directives and with other provisions of EU law.
Thus, the development of the guidelines of the Act, and subsequently of its draft text, may provide a good opportunity for reviewing and updating state policies. A complete implementation of EU law is the only viable means of achieving a quantum leap in the Polish telecommunications sector and improving the situation of its users in the segments of both basic and advanced services. Proper implementation is also a prerequisite for the availability of EU financial aid, for example to develop broadband infrastructure.
References

Kubasik J, Kelly T (1992) Telecommunications Investment and Tariff Policy in Poland. Proc. of OECD/CCEET Workshop “Improving Conditions for Investment and Growth in Telecommunications for Partners in Transition”, Prague, pp 213–272
Kubasik J (2002) Regulation without a Regulator: The Tariff Policy in Poland. ITS 14th Biennial Conference, Seoul
NIK (2004) Informacja o wynikach kontroli realizacji zadań przez administrację rządową w zakresie regulacji rynku usług telekomunikacyjnych. Najwyższa Izba Kontroli, Warszawa (in Polish)
Streżyńska A, Kulisiewicz T (2004) Komunikacja elektroniczna i społeczeństwo informacyjne. In: Biała Księga 2004. Polskie Forum Strategii Lizbońskiej, Gdańsk/Warszawa, pp 77–89 (in Polish)
Streżyńska A, Hagemajer J, Janiec M (2003) Perspektywy polskiego rynku telekomunikacyjnego (liberalizacja, regulacja, technologie). Instytut III Rzeczypospolitej i Centrum Studiów Regulacyjnych Instytutu Badań nad Gospodarką Rynkową, Gdańsk/Warszawa (in Polish)
URTiP (2003) Raport Roczny Prezesa Urzędu Regulacji Telekomunikacji i Poczty 2003. Warszawa (in Polish)
Regulatory Framework and Industry Clockspeed

Jarkko Vesa
Helsinki School of Economics, Finland
E-mail: [email protected]
Abstract

This chapter discusses the role of regulation in the evolution of the mobile services industry from an industry clockspeed perspective. Based on an analysis of the Finnish mobile services market, it is argued that regulatory actions, such as a handset subsidy ban, have not only pricing or competitive implications but also structural consequences, which may even determine which business models or integration strategies are available to mobile operators. In a fast-moving industry like mobile services, it is of the utmost importance that regulation does not slow down the natural speed of industry evolution. In a market where new technologies and business models emerge in waves (e.g. the 1st generation of analogue mobile telephony, the 2nd generation of GSM-based cellular technology, and the emerging 3rd generation of UMTS technology), policy makers and regulators must ensure that prevailing regulatory frameworks do not artificially force a market to hang on to market structures which may no longer be optimal under the new circumstances.
Introduction

The emerging mobile data services (a.k.a. mobile internet or non-voice mobile services) industry represents a business environment characterised by fast and unpredictable changes at all levels of the market: new network standards emerge, handsets become more sophisticated, mobile applications offer increasing functionality, and new kinds of content can be delivered through mobile networks as more bandwidth becomes available. In this paper the term “mobile data services” refers to three main categories of services offered by mobile operators: the first category consists of so-called conversational services (excluding traditional mobile voice services), the second category contains various types of content services (e.g. ordering a new ringtone by sending a short message to the service provider, or downloading Java games from a mobile portal), and the third category is called data access (i.e., the various kinds of data transfer methods that enable the use of the services described in the two previous categories) (Vesa 2005).
What makes the mobile data services industry particularly interesting from a research point of view is that it represents an intersection of three distinct industries and technologies, namely the mobile services industry, the media content industry, and the internet world. This kind of business context often leads to disruptive technical innovation, where the speed and impact of change are high (Palmberg and Martikainen 2003). One popular way of describing the rate of change within a given industry is the notion of industry clockspeed introduced by Fine (Fine 1996). This concept has later been used in the analysis of the IT industry (Fine 1998, 2000; Mendelson and Pillai 1999; Mendelson 2000), the mobile handset industry (Constance and Gower 2001), and the mobile services industry (Vesa 2004b, 2003). There is, however, one limitation in the existing research on industry clockspeed: the impact of regulatory frameworks on the clockspeed of an industry has received little attention. Based on the characteristics of the mobile data services industry discussed above, it would be natural to assume that the clockspeed of the industry, i.e. the speed of change within the industry, would be high. However, as the analysis presented in this paper demonstrates, the constraints imposed by the regulatory framework in a given market have significant implications for the speed and magnitude of change. Based on the analysis of the Finnish mobile data services market, it is argued that national regulatory authorities (NRAs) need to follow carefully the evolution of industries and services (as they naturally attempt to do), in order to ensure that the prevailing legislation does not slow down the natural evolution of the mobile services industry. Unfortunately, it seems that in Finland the regulatory framework, which has been optimised for a voice-centric mobile business paradigm, has been one of the main reasons for the current weakness of the Finnish mobile services industry. As a result of this development, Finland, which was once one of the leading markets in the world of mobile telephony, has become an uninteresting market for the global players in the mobile services industry, not least because of the regulatory framework, which has not allowed the kind of operator-driven business model that is increasingly popular in other parts of Europe. Furthermore, the regulatory framework has not allowed Finnish mobile operators to develop their business in ways that have turned out to be successful in Asia and in other parts of Europe. Evidence from the Finnish mobile market also indicates that NRAs should not base their decision making solely on the wishes and demands of the existing players, if their goal is to ensure healthy and innovative market development in the field of emerging mobile data services. The structure of the paper is the following: in chapter two, the theoretical background of industry evolution is discussed and propositions for the analysis of the Finnish market are developed. In chapter three, the methodology used in this paper is discussed briefly. Chapter four describes the current structure of the mobile services market in Finland.
In the following chapter, the evolution of the Finnish mobile services industry is analysed in the light of the propositions presented earlier
in the paper. In chapter six, the findings of this analysis are discussed and conclusions are drawn.
Theoretical background

The role of regulation in the mobile industry

There is a widespread consensus that Europe was very successful in joining forces to create a common GSM standard for digital mobile telephony in the late 1980s and early 1990s (Palmberg and Martikainen 2003; Steinbock 2003). As a result of this development, European mobile phone users have enjoyed more and better services than their counterparts in the US (Blanco 2003). However, this argument applies only to traditional voice services: when it comes to non-voice mobile services, Europe can hardly be described as a leader, especially if we look at the non-SMS part of the revenue. From a business and marketing perspective there is a major difference between traditional mobile voice services and the future mobile data services. Although modern digital mobile telephone networks are technically highly complex, the “product” (i.e., person A being able to call person B while on the move) is reasonably simple and highly standardised – a telephone call is a telephone call. However, the multimedia-driven mobile data services of the future are much more complex “products” (there is an ongoing debate amongst marketing researchers whether a distinction between products and services can – or even should – be made), because data-centric mobile services have several different elements that all have to work seamlessly together in order to offer a positive user experience. Mobile data services can be described as complex goods, which Mitchell and Singh have defined as “an applied system with components that have multiple interactions and constitute a nondecomposable whole” (Mitchell and Singh 1996). As traditional economic theories such as transaction cost economics argue, the level of standardisation of products and services has direct implications for the optimal way of producing them, i.e. whether to integrate vertically or to use the markets (Williamson 1975). Earlier analyses of the key factors behind the success of mobile data services in Japan (Vesa 2004b, 2003) have indicated that an integrated business model appears to be more successful as the industry moves from voice-centric to data-centric services. The discussion presented above leads us to the following proposition:
PROPOSITION 1. A regulatory environment that has been optimised for traditional mobile voice services is not necessarily optimal for the more complex mobile services of the future.
The transformation of the mobile industry

According to the theory of industry evolution, all industries are constantly changing – one way or another. These changes concern both the boundaries of industries and the industries themselves. The drivers and magnitude of these changes vary from one industry to another. Sometimes an industry may experience what Mitchell and Singh call an “environmental shock”, which can be described as “sudden and substantial changes in technology or market segmentation” (Mitchell and Singh 1996). In the literature, such changes are sometimes also called paradigm shifts or disruptive changes. Or, as Vincenzo Novari, CEO of the mobile operator “3” in Italy, put it in his presentation at the ITU Telecom World 2003 conference: “mobile phones are changing their DNA” (Novari 2003). By this statement he was referring to the shift from traditional voice services to a more data-centric business paradigm. Likewise, Nokia’s CEO Jorma Ollila has pointed out that one of the key drivers of the “New Mobile World” is the growth of mobile multimedia in the consumer segment (Ollila 2003). The observations discussed above suggest the following proposition:
PROPOSITION 2. The mobile industry is going through a major transformation within the confines of the industry; and the boundaries of the industry itself are currently being re-drawn.

The concept of industry clockspeed

As discussed above, an industry is constantly going through a process of change. One way of describing the rate of this change within a given industry is the notion of industry clockspeed introduced by Fine (Fine 1996). This concept has later been used in the analysis of the IT industry (Fine 1998, 2000; Mendelson and Pillai 1998, 1999; Mendelson 2000), the mobile handset industry (Constance and Gower 2001) and the mobile data services industry (Vesa 2004b, 2003). According to Fine (Fine 1996), there are several ways to measure industry clockspeed. Fine suggested sub-metrics such as process technology clockspeed (the capital equipment obsolescence rate), product technology clockspeed (rates of new product introduction or intervals between new product generations), and organisational clockspeed (rates of change in organisational structures). In addition to these internal metrics, the rate of change in the industry’s external environment (developments in technology, consumer preferences, and market conditions) differs from industry to industry (Fine 1998). It is argued here that several – if not all – of the factors described by Fine are clearly visible in various parts of the mobile industry, particularly in the handset and mobile data services businesses. This leads us to the following proposition:
PROPOSITION 3. The mobile data services industry can best be described as a high-clockspeed industry where technology, organisational structure and market conditions are all constantly changing.
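To make one of these sub-metrics concrete, the short sketch below shows how product technology clockspeed could be quantified as the average interval between successive product generation launches. It is a purely illustrative example, not part of Fine's framework, and the launch dates used are hypothetical.

    from datetime import date

    # Hypothetical launch dates of successive handset generations (illustrative only)
    launches = [date(1999, 3, 1), date(2000, 11, 1), date(2002, 4, 1), date(2003, 6, 1)]

    def product_clockspeed(dates):
        """Average interval, in days, between successive product launches.

        A shorter interval corresponds to a faster product technology
        clockspeed in Fine's sense (rate of new product introduction).
        """
        gaps = [(later - earlier).days for earlier, later in zip(dates, dates[1:])]
        return sum(gaps) / len(gaps)

    print(f"Average launch interval: {product_clockspeed(launches):.0f} days")

Analogous measures could in principle be constructed for process technology clockspeed (intervals between capital equipment replacements) and organisational clockspeed (intervals between reorganisations).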
Methodology

This research is based on a case study in which the analysis of the Finnish mobile data services market is used as an instrument towards a better understanding of the impact of the regulatory framework on the speed of industry evolution – or the industry clockspeed – of the mobile services business in general. The research is guided by propositions developed on the basis of a literature review. In this paper the unit of analysis is a given market – not individual companies or institutions.
Mobile services industry in Finland

Let us take a closer look at the Finnish mobile data services industry. Traditionally, the Finnish mobile telephony market has been regarded as highly developed, mainly due to the high penetration rate of mobile phones. Currently the mobile phone subscription penetration rate is over 100 per cent of the total population of five million people. Internationally, Finland is still positioned high in the rankings, but it is no longer described as the “mobile wonderland” it was in the past. The Finnish regulatory authorities are very proud of the fact that the tariffs for mobile phone calls are among the lowest in the world. According to the Finnish Ministry of Transport and Communications, in Europe mobile phone calls are cheaper than in Finland only in Denmark and Luxembourg (Ruohonen 2004). In addition to low mobile call tariffs, there is also another special characteristic that makes the Finnish mobile market particularly challenging for operators: during the first twelve months after the launch of mobile number portability (MNP) in July 2003, Finnish mobile phone users changed their operators almost one million times (Pirilä-Mänttäri 2004). According to the Finnish Ministry of Transport and Communications, this was the highest rate of ported mobile phone numbers among the EU countries in proportion to the total number of subscriptions (Ruohonen 2004). The eagerness of Finnish mobile phone subscribers to switch operators so frequently can be partly explained by the fact that the main criteria when selecting a mobile operator have been the pricing of mobile phone calls and short messages, the amount of free airtime, and the value of giveaways offered by mobile operators. Unlike in most European countries, Finnish operators are not allowed to bundle subscription and handset, which in many markets dampens consumers’ enthusiasm – or ability – to switch between mobile operators on such a regular basis (Ruohonen 2004). The situation in the Finnish mobile market has raised the question of how far competition can go before it starts to damage the whole industry: in April 2004 the CEO of DNA, the third largest mobile operator in Finland, warned that the ongoing price war on mobile call prices would drive several smaller operators out of business. Those operators that survived would be forced either to cut down their service offerings (as DNA decided to do in 2005) or to slow down their investments in the mobile networks of the future (Kauppalehti 2004). Recently smaller players such as ACN and Tele2
have left the Finnish market, and the leading mobile virtual network operator (MVNO) Saunalahti has merged with Elisa. Furthermore, in March 2006 TeliaSonera announced that it would stop selling subscriptions of its discount brand Tele Finland. As a result of the development described above, the number of mobile service operators, MVNOs and resellers in Finland has fallen by half during the past eighteen months. But why does competition in the Finnish mobile market focus so heavily on prices, instead of other dimensions of the traditional marketing mix? In order to answer this question we will analyse – as suggested by Fine (Fine 1996, 1998, 2000) – the characteristics of the Finnish mobile market along two dimensions: industry structure and product architecture. The industry structure in Finland is horizontal: competition takes place at the horizontal level, that is, operators compete against each other, handset manufacturers compete against each other, etc. (see Fig. 1).

Fig. 1. Mobile services market in Finland. [The figure depicts the horizontally layered Finnish market: handsets (Nokia, Siemens, Samsung, Motorola, Sony Ericsson) on open standards (GSM, GPRS, EDGE, UMTS); network operators (TeliaSonera, Elisa, Finnet, DNA) with their service operators and resellers (Sonera, Kolumbus, Tele Finland, Saunalahti, AINA, Fujitsu, Spinbox, SK Mobile, Cubio, Stockmann Dial, Hesburger); mobile portals on open standards (WAP) (Sonera SurfPort, Zed, MTV3, Helsingin Sanomat, Buumi.net); and applications and content on open standards (Java, XML), such as Java games, browsers, messaging, location-based services, movie trailers, weather, music, news and maps.]
The product architecture is modular, which allows subscribers to mix-and-match practically any subscription, handset, and service or content. The Finnish model is very different from the “dominant design” in Japan or in other parts of
Europe, where mobile operators orchestrate the whole business – including handset sales. The Finnish market has some special characteristics when compared with other national markets in Europe or internationally. The first peculiarity is related to the marketing of mobile phone subscriptions: the legislation prohibits mobile operators from bundling subscriptions and handsets. Or, more accurately, a consumer’s decision to subscribe to a specific operator’s mobile telephony service must not affect the pricing of the mobile phone he or she may be purchasing at the same time. Another characteristic of the Finnish mobile market is that Finland is one of the few countries in Europe not allowing the use of the so-called SIM lock, which prevents users from using the SIM card containing their subscription data in another handset. This situation changed in April 2006, however, as handset subsidies were allowed for 3G handsets sold as a bundle together with a 3G subscription. The revised Communications Market Act allows operators to lock the handset and subscription together for a maximum period of two years by using SIM locking. For the 2nd generation, the handset subsidy ban remained untouched. Although 3G bundling will bring the Finnish mobile market closer to the mainstream of mobile services markets, this analysis focuses on the evolution of the Finnish market during the past five years, when mobile data services started to emerge in Europe. In the following section we will discuss the role of legislation and regulation in the evolution of the mobile services market in Finland.
Interplay between regulation and clockspeed

As mentioned earlier in this paper, the structure of the Finnish mobile services market has been very different from that in the rest of the world. This paper argues that the reason for this industry structure has been the regulatory environment – and not an intentional strategic choice by the operators. But let us take a closer look at why the Finnish market ended up looking the way it has over the past fifteen years. During the era of 1st generation analogue mobile telephony, the Finnish national telecom operator TELE was the sole provider of NMT telephone network services. TELE controlled the whole value chain, i.e. the network, the handset business and services – which at the time meant, of course, only voice calls. At the time, TELE worked closely with Nokia to develop various dimensions of mobile telephony (as did Telia and Ericsson in Sweden). However, in the early 1990s the market structure changed totally as competition opened up along with 2nd generation digital GSM networks and deregulation. In a short period of time, the mobile services industry transformed from a vertically integrated and closed market into a more horizontal and open market (see Fig. 2).
Fig. 2. Transformation of the Finnish mobile voice services market. [The figure maps the market along two dimensions – industry structure (horizontal vs. vertical) and product architecture (integrated vs. modular) – and locates the analogue NMT era of the 1980s, the digital GSM era of the 1990s, and the 3G (UMTS) era of the 2000s along them, tracing the shift from a vertically integrated NMT market towards a horizontal, modular GSM market.]
Behind this transformation there was a technical component whose role often goes unnoticed – the SIM card. The introduction of the SIM card along with the GSM standard had a major impact on the shaping of the industry in Finland, starting with the retail strategies for mobile handsets. The regulatory framework created in the 1990s turned out to be very successful for all the players: consumers enjoyed lower tariffs; handset prices went down as a result of standardisation; handset and network vendors prospered; and the Finnish economy boomed as Nokia’s business grew at an amazing rate. However, the success of the mobile voice market has started to turn against itself. What has happened in Finland during the past few years is that competition in the mobile services market has been intensifying at an increasing rate. Although the number of mobile network operators has remained the same over the last few years (Swedish-Finnish operator TeliaSonera, Elisa, and the Finnet Group), the number of service operators increased dramatically in 2003–2004, when mobile number portability made the market more attractive for new entrants. At one point there were fifteen service operators, MVNOs and resellers in Finland (by April 2006, the number had gone down to seven) – a lot for a market of five million people with over 90 per cent penetration. As a result, competition between operators focused mainly on price. From an economic perspective, one could argue that there was “too much competition” in the traditional mobile voice services market. The product called “mobile phone call” turned out to be a highly standardised commodity with very few differentiating factors – especially as all three commercially available Finnish GSM networks are of high quality.
One could, however, raise the question whether the current model is also optimal for future mobile data services. It is argued here that a regulatory framework which was originally optimised for traditional voice services is not necessarily optimal for the new and more complex mobile services of the future. The most obvious evidence of this is the actual usage of mobile data services in Finland. According to Finnish mobile operators’ key performance indicators (KPIs), the share of non-voice mobile services in their average revenue per user (ARPU) at the end of 2005 was approximately 15–16%. However, if the highly popular SMS (which is mainly used for messaging) is excluded, the share of non-SMS mobile data services is estimated to be around 1–2%. On the basis of these figures it looks as if the current market structure has not attracted users to non-SMS mobile data services in any larger volumes. This view is also supported by various consumer surveys: Finnish mobile phone users do not use mobile data services, nor do they have strong intentions to do so in the near future (at the same time, well over 80% of Japanese mobile phone users also subscribe to mobile internet services). We will now take a closer look at the different ways in which the regulation of the Finnish mobile market has influenced the success of mobile data services. Based on earlier research on the success factors of mobile services in Japan and lately also in the UK (Vesa 2004b, 2003), it is argued that mobile data services are more successful in markets where mobile operators take a leading role as the “orchestrator” of mobile services development and delivery. This does not mean that operators should do everything by themselves, but they do need the capability of building extensive networks of companies in order to offer a true end-to-end mobile data service. These business networks are often called business clusters or ecosystems. This finding is in line with research on industry evolution, which argues that as market situations change, new technologies emerge, or uncertainty in the market increases, companies need to review their vertical integration strategies in order to be well positioned in the new era (Harrigan 1985; Vesa 2005). This is where the regulatory framework of the mobile industry comes into the picture: unlike in most parts of Europe, the mobile operators in Finland did not have the opportunity to redesign their business model towards a more integrated approach. It is argued here that one reason for this development can be found in the regulatory framework: the Finnish mobile operators have not been allowed to bundle subscriptions and handsets, which appears to be a key element in creating more user-friendly mobile services. As discussed earlier in this article, this situation will change for 3G handsets and subscriptions starting April 1, 2006. Another disadvantage of the current situation in Finland has been that the existing legislation has made the Finnish market less attractive for the major players in the mobile services industry. Companies like Vodafone, Orange, T-Mobile, or O2 would have had major difficulties in entering the Finnish market because the regulatory environment in Finland would not allow the kind of business model these companies are using in other countries. This has, of course, been beneficial for the existing mobile network operators in Finland, as the legislation has kept international competition away from their market. From the consumers’ or the Finnish government’s point of view this may, however, turn out to be a serious drawback, as there is a risk that Finland will fall behind the rest of Europe in the evolution of mobile data services markets. Unfortunately this risk has to a large extent materialised over the past three years. The third aspect of the interplay between the regulatory framework and the mobile services market in Finland is the question of clockspeed. Earlier in this paper it was argued that the mobile services industry is one of the high-clockspeed industries, where technology, organisational structure and market conditions are constantly changing. The point this paper is making is that in a high-clockspeed industry, falling behind other markets is even more dangerous than in industries where the pace of change is slower. It is argued here that the Finnish regulatory environment, originally optimised for voice services, has slowed down the clockspeed of the natural evolution of the Finnish mobile services industry.
Discussion and conclusion
The objective of this article was to demonstrate how the regulatory framework of a given market defines the feasible business models for mobile operators. This was done by developing three propositions based on research on the mobile services industry, on industry evolution and on the concept of clockspeed as an indication of the speed of change in an industry. By using the Finnish mobile services market as an example, the article showed how the regulatory environment in Finland has inhibited the local mobile operators from adopting certain operator-driven business models that are dominant in other parts of Europe. Furthermore, the brief analysis presented in this paper demonstrated the way in which the clockspeed of the mobile services industry in Finland has slowed down, and how the previously advanced mobile market in Finland is starting to fall behind in international comparisons. Although the analysis presented here is based on limited evidence, it raises the question of how to make sure that the regulatory framework is kept up to date during a profound transformation of a regulated industry. The result of this analysis shows that sometimes there is a trade-off between conflicting goals when it comes to regulating a market: while the Finnish authorities were highly successful in creating an optimal regulatory environment for traditional mobile voice services, the regulatory framework has failed to address the requirements of the current shift towards the more complex, data-centric world of future mobile services. Although the Finnish policy makers and regulators are now attempting to speed up the renewal of the handset base and the take-off of mobile data services by allowing 3G handset subsidies, Finland may already have missed this “wave of technical evolution”. As technical evolution transforms the very basics of the mobile industry, authorities need to adopt, early on, the right kind of mindset to question the basic assumptions behind the regulatory framework they impose. Inexpensive phone calls should not be the only criterion when making decisions about what operators are allowed to do. In its current form, the Finnish mobile market is a far cry from what it used to be. Against this background, it is somewhat paradoxical that the Prime Minister of Finland at some point expressed his concern regarding the operators’ ability to invest enough in future infrastructure. The national regulatory authority should have realised some time ago the potential damage the prevailing regulatory framework was causing; now it may already be too late from the Finnish mobile industry’s point of view, as the damage has been done. As Anni Vepsäläinen, the CEO of TeliaSonera Finland, has pointed out in her comment on the present status and future outlook of the Finnish telecommunications market, the investment bank Credit Suisse First Boston recommended in its report “Euro Telcos Regulation” (March 2004) avoiding investments in TeliaSonera due to the over-zealous regulatory authorities in the Swedish-Finnish telecom operator’s home markets (Vepsäläinen 2004). Needless to say, this can hardly be the objective of any national regulatory authority, although consumers certainly enjoy the situation (Kauppalehti 2004) – at least as long as the quality of networks remains at an acceptable level.
References
Blanco CL (2003) Presentation at the ITU Telecom World 2003 in Geneva
Constance SC, Gower JR (2001) A Value Chain Perspective on the Economic Drivers of Competition in the Wireless Telecommunications Industry. MBA paper, Alfred P. Sloan School of Management, MIT
Fine CH (1996) Industry Clockspeed and Competency Chain Design: An Introductory Essay. Proceedings of the 1996 Manufacturing and Service Operations Management Conference, Dartmouth College, Hanover, NH, June 24–25
Fine CH (1998) Clockspeed: Winning Industry Control in the Age of Temporary Advantage. Perseus Books, Reading, Massachusetts
Fine CH (2000) Clockspeed-Based Strategies for Supply Chain Design. Production and Operations Management 9 (3): 213–221
Harrigan KR (1985) Vertical integration and corporate strategy. Academy of Management Journal 28 (2): 397–425
Kauppalehti (2004) Price war will drive operators to bankruptcy, says DNA’s Tolonen. April 19 (in Finnish)
Mendelson H, Pillai RR (1998) Clockspeed and Informational Response: Evidence from the Information Technology Industry. Information Systems Research 9: 415–433
Mendelson H, Pillai RR (1999) Industry Clockspeed: Measurement and Operational Implications. Manufacturing & Service Operations Management 1 (1): 1–20
Mendelson H (2000) Organizational Architecture and Success in the Information Technology Industry. Management Science 46 (4): 513–529
Mitchell W, Singh K (1996) Survival of businesses using collaborative relationships to commercialize complex goods. Strategic Management Journal 17: 169–195
Niemi K (2004) Text-messaging is holding back the mobile world. Tekniikka & Talous, June 9 (in Finnish)
Novari V (2003) Presentation at the ITU Telecom World 2003 conference in Geneva
Ollila J (2003) Structuring for Growth. Presentation at Nokia Capital Market Day, New York, November 24
Palmberg C, Martikainen O (2003) Overcoming a technological discontinuity – The case of the Finnish telecom industry and the GSM. Discussion Paper No. 885, ETLA – The Research Institute of the Finnish Economy
Pirilä-Mänttäri A (2004) Mobile phone number has been ported almost one million times in one year. Helsingin Sanomat, July 2 (in Finnish)
Poropudas T (2005) FICORA blamed for the lameness of Finnish mobile services industry. Retrieved from www.mobilemonday.com, September 26
Ruohonen A (2004) Finnish are the most eager to switch subscriptions. Taloussanomat, July 2 (in Finnish)
Saarinen M (2004) Bundling of mobile phones and subscriptions may be allowed in the future. Taloussanomat, June 10 (in Finnish)
Steinbock D (2003) The Wireless Horizons: Strategy and Competition in the Worldwide Mobile Marketplace. Amacom, New York
Vepsäläinen A (2004) Are we securing the future of the information society? Taloussanomat, May 15 (in Finnish)
Vesa J (2005) Mobile Services in a Networked Economy. IRM Press, Hershey, Pennsylvania
Vesa J (2004a) Contradictory views of the bundling of handsets and mobile subscriptions. Tietoviikko, November 11 (in Finnish)
Vesa J (2004b) The Impact of Industry Structure and Product Architecture on the Success of Mobile Data Services. Austin Mobility Roundtable, March 11–12
Vesa J (2003) The Impact of Industry Structure, Product Architecture, and Ecosystems on the Success of Mobile Data Services: A Comparison between European and Japanese Markets. ITS 14th European Regional Conference, Helsinki, August 23–24
Williamson OE (1975) Markets and Hierarchies: Analysis and Antitrust Implications. Free Press, New York
Part 2: Technical Aspects and Standardisation
A Comparison of ENUM Field Trials
Dieter Elixmann1, Annette Hillebrand2, Ralf G. Schäfer3
WIK GmbH, Bad Honnef, Germany
Abstract
The paper focuses on current worldwide field trials regarding ENUM. ENUM is a transformation procedure that maps E.164 numbers onto internet domains and vice versa. The paper addresses the basic technical features of ENUM; gives an overview of the objectives, time schedules, stakeholders and financing schemes of the ENUM field trials; analyses DNS concepts and their requirements for the implementation of ENUM; tackles the ENUM service potential; addresses IT-security issues and ENUM-specific data protection problems; and highlights the requirements for the future utilisation of ENUM.
Introduction
The developments in the field of telecommunications and the internet have made a wide range of communications services available to private and business users. Today, these services are provided via circuit switched (e.g. the public switched telephone network) as well as packet switched networks (e.g. the internet). To enable “seamless” services, interoperability between these two different platforms has to be established at the technical level. However, this is only possible under certain circumstances: either the services have to work with the same address scheme or there has to be an unequivocal address “translation” between the networks (Stastny n.d.). For identification and addressing in the public switched telephone network the international numbering plan, the E.164 country code scheme, is used as a common standard (ITU-T 1997). In IP networks the Uniform Resource Identifiers (URI) constitute the standardised addressing and naming scheme (Berners-Lee et al. 1998). To enable interoperability of services a logical connection is required.
1 E-mail: [email protected]
2 E-mail: [email protected]
3 E-mail: [email protected]
A possible solution to this problem is the so-called ENUM.4 ENUM is defined in a technical specification of the Internet Engineering Task Force (IETF 2002). ENUM is essentially a transformation procedure that maps E.164 numbers onto internet domains and vice versa. The current ENUM field trials have given rise to a recent WIK study.5 The main content and findings of our empirical analysis are presented in this paper. The study focuses on the trials in Germany, France, the United Kingdom, Austria, Sweden and the US. Furthermore, experiences from Taiwan and South Korea are taken into account. The empirical investigations took place in a period spanning the year 2003 and the first two months of 2004. In this paper we have updated some crucial information. The analysis of the ENUM trials shows that in every country national peculiarities concerning internet administration exist. This paper concentrates on the identification and assessment of the major challenges connected with the practical use of ENUM and tries to extrapolate the trend line of ENUM diffusion in general. The paper is structured as follows. In section 2 basic features of ENUM are introduced, relating in particular to the technical foundations of ENUM and its functions. Section 3 is devoted to organisational features of the ENUM field trials. Section 4 focuses on the institutions involved in the implementation of ENUM and analyses their interaction. Section 5 discusses ENUM service potentials. Section 6 concentrates on IT-security issues and ENUM-specific data protection problems. Section 7, finally, highlights requirements for the future utilisation of ENUM. Section 8 contains a summary of the analysis and conclusions.
Basic technical features of ENUM
ENUM represents a possible approach for mapping E.164 telephone numbers uniquely to internet domain names. ENUM does not define rules for the interaction between different devices, i.e. it does not form a new communication protocol. Rather, ENUM represents an agreement on using existing communication protocols and standards like the E.164 numbering plan, the internet Domain Name System (DNS)6, Naming Authority Pointer Records (NAPTR) and Uniform Resource Identifiers (URI)7.
4 ENUM means tElephone NUmber Mapping.
5 Elixmann et al. (2004)
6 The internet Domain Name System forms a distributed, hierarchically structured service for translation between domain names and IP addresses. Cf. Stastny n.d.
7 Regarding URI also cf. RFC 3404.
Transformation of an E.164 telephone number onto a domain name
In a simplified view, the mapping procedure of an E.164 telephone number to an internet domain name consists of three steps (IETF 2002):
1. Initialisation: The procedure starts with the complete E.164 telephone number including the country code but without any other characters and symbols.
2. Transformation: The number is separated by a dot between each two digits. Additionally, the string is reversed.
3. Domain creation: The string resulting from Step 2 is completed with a certain top level domain at its end.

.arpa top level domain
Within the ENUM approach the top level domain .arpa and the sub-domain e164 together are often called the ENUM Tier 0. Actually, e164.arpa is the only domain which is intended for the international implementation of ENUM. This proposal is supported by the IETF as well as by the ITU (DENIC 2003). Sometimes this approach is called the “Golden Tree” (Stastny n.d.). In principle it is possible that multiple DNS zones are used simultaneously for ENUM, so it is not surprising that there are discussions focusing on the use of alternative domain names. Indeed, the domain e164.com is used for commercial purposes by the company NetNumber. It serves as an operable platform for the development and testing of ENUM based applications, including processes for administration and registration.8 Until now the international discussions about the number of domains and about their actual name(s) have not been concluded (McTaggart 2001; Cannon 2001). The European Telecommunications Standards Institute (ETSI) prefers an approach with a single domain, in particular with respect to ensuring consistency and robustness of the ENUM implementation (ETSI 2002).

8 Cf. http://www.netnumber.com

Numbering and the Domain Name System
The decision for a concrete domain name for the implementation of ENUM is directly linked with the question of administrational authority. In this context the ITU points out a fundamental difference between the E.164 numbering plan and the DNS (ITU 2002). The domain name system forms a technically coordinated approach, while the E.164 numbering plan stands for an administratively coordinated approach. The ENUM approach does not alter the accountabilities assigned within the E.164 numbering plan. Each ITU member state remains responsible for the respective part of the E.164 numbering plan (Stastny n.d.). Moreover, the states are the owners of the ENUM subdomain corresponding to the individual country code, e.g. 9.4.e164.arpa in the case of Germany. This sovereignty comprises the responsibility for the management of the subdomain, which can be delegated to an appropriate organisation. The individual implementation is a national task, i.e. its realisation may vary from country to country. Within Europe the technical specification “ENUM Administration in Europe” of ETSI (ETSI 2002) serves as a guideline for implementation.
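To make the three-step mapping procedure described above concrete, the following short Python sketch – our own illustration, not part of any trial or of the specification text – converts an E.164 number into its ENUM domain and back:

def e164_to_enum(number: str, apex: str = "e164.arpa") -> str:
    # Step 1 (initialisation): keep only the digits of the full E.164 number.
    digits = [c for c in number if c.isdigit()]
    # Step 2 (transformation): reverse the digits and separate them by dots.
    # Step 3 (domain creation): append the top level domain at the end.
    return ".".join(reversed(digits)) + "." + apex

def enum_to_e164(domain: str, apex: str = "e164.arpa") -> str:
    # Inverse mapping: strip the apex, reverse the labels, restore the "+".
    labels = domain[: -(len(apex) + 1)].split(".")
    return "+" + "".join(reversed(labels))

# DENIC's number, used as an example later in this paper:
assert e164_to_enum("+49 69 27235 0") == "0.5.3.2.7.2.9.6.9.4.e164.arpa"
assert enum_to_e164("0.5.3.2.7.2.9.6.9.4.e164.arpa") == "+4969272350"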
Organisational features of ENUM field trials

Objectives
Across the different field trials a multidimensional system of objectives can be identified. This means, in particular, that the involved parties do not pursue a single purpose. Rather, a set of different targets is relevant in the ENUM field trials. Overall there are three groups of objectives, aiming at ensuring an extensive provision of information for participating companies as well as for the public and the government:
• Testing of technological/functional aspects, e.g. systems, processes and interfaces.
• Analysis of economic aspects, e.g. demand, business models, financing, applications.
• Evaluation of political aspects, e.g. economic policy, competition policy, regulation.

Time schedule
The ENUM field trials are initiated either by the national regulatory authorities and the ministries for telecommunications/internet, or by the Network Information Centres (NIC) and comparable national ISP associations. It is worth noting that telecommunications carriers or service providers generally do not act as the initial drivers of national ENUM tests. The ENUM field trials are usually structured along four phases: investigation of interest (consultations); preparation and conceptual design (boards and workgroups); implementation and operation (testing); and transition to the actual launch period. After the trials are finished, the commercial phase is to start. However, at the beginning of April 2004 none of the ENUM field trials had been officially finished, i.e. the transition phase had not yet started anywhere. Rather, one can see a stepwise prolongation of the ENUM tests. In our opinion, this development reflects the uncertainty of the market participants. We estimate that an actual market implementation of ENUM will not occur before 2005/2006.
Stakeholder
From the viewpoint of competition policy all ENUM field trials aim at assuring neutrality and unrestricted competition. In particular, this implies that participation in the field trials is possible for all interested parties. This goal seems to have been achieved, as we have found no signs of discrimination against individual companies. Despite the open design of the field trials it is eye-catching that the number of participants in the ENUM tests is very often still at a low level. This is true of both the number of involved companies and organisations and the number of end users. Overall we see this as a sign of the still existing scepticism about the necessity, benefit, cost and security of ENUM. Table 1 visualises the intensity of the involvement of different stakeholders across the field trials investigated.

[Table] Table 1. Involvement of different stakeholders in the national ENUM trials. Columns: D, A, F, GB, S, USA. Rows: incumbent; competitive carriers; mobile network/service providers; telco manufacturers; NIC; ISP/ASP/registrars; IP manufacturers; national research networks; end customers (approx. 200 and approx. 100 in two of the trials); government bodies; regulatory authorities. The legend ranges from no involvement to complete involvement; several cells are n.a. Source: WIK analysis
It is obvious that the domestic incumbents participate in the national ENUM trials, while competitive fixed line carriers in general show only little activity. Besides, internet-related companies like ISPs, ASPs and software providers often play a significant role in the trials. In contrast, companies from the mobile communications sector are hardly interested in participating. We assume that these companies currently face other challenges (e.g. 3G) and that, in their view, they are not yet affected by ENUM. Further participants in the trials can be found in the scientific sector, e.g. national research and educational networks and universities. User associations like INTUG are so far observing the development rather than playing a formative role. Their main interest is directed towards privacy and security aspects. Actually, they see no further necessity for dealing with ENUM, because applications are not yet the focus of the field trials. Manufacturers are not heavily involved in the analysed ENUM trials, although the topic of ENUM plays an important role in their internal research and development activities (e.g. in the context of next generation networks). Their public engagement in ENUM is mainly concentrated on participating in workshops. One intermediate result of the ENUM field trials is that business customers will very likely be the early adopters of ENUM based applications. Some of the trials have therefore widened their scope to the deployment of ENUM in private branch exchanges. This step should make the trials more attractive for business customers. Interaction and communication between the involved parties turn out to be open and transparent. They reflect a sort of self-organisation which is well known in the internet community. Communication often takes place in the form of mailing lists or forums. These are organised mainly at an international level. Thus, supranational cooperation and exchange of experience is explicitly fostered.

Financing
The ENUM field trials mainly follow the principle of self-financing. This implies that each participant bears all the costs incurred within his organisation. This particularly applies to the NICs (in their role as Tier 1 registries) and the involved authentication agencies. This financing model sometimes leads to approaches for reducing financial risks. Indeed, in many cases one can observe limitations on the number of end users or simplifications of processes and applications. In some Asian countries (e.g. Korea, Taiwan) the governments are directly involved, including financial support of the ENUM trials.
DNS concepts and their requirements for the implementation of ENUM
Regarding the registration and administration of domain names there are three concepts playing a vital role in the DNS system and, in turn, also with respect to ENUM: registrant, registrar and registry (Hillebrand and Büllingen 2001).
• A registrant is the person or institution applying for registration of a domain name or, in the case of ENUM, of an E.164 number.
• A registrar has an intermediary function between registrant and registry. He processes the application and secures the storage of the necessary information in the DNS data files. Moreover, he is involved in operating the distributed data files by operating name servers for his customers.
• A registry is an institution which receives information on registrations within a particular TLD (e.g. .DE) from registrars and processes it into “zone files” in a central data file (register).

Tier-0, Tier-1 and Tier-2 level
Regarding the organisational frame of the field trials, our study does not aim to address in depth the discussion on the tier 0 level e164.arpa. Covering this topic would be a project of its own and it has therefore been left aside from the beginning due to its complexity. Usually, problems which could be brought about by the current choice of the TLD .ARPA for ENUM do not play any major role in the field trials. An exception is France, where this issue seems to be of higher importance for the regulator, because only in this country is there a respective official statement (ART 2001a; ART 2001b). A crucial point in the context of ENUM is who administers the ENUM registry. Here the principal responsibility rests with the Internet Architecture Board (IAB). The IAB has authorised RIPE NCC (Réseaux IP Européens Network Coordination Centre, located in the Netherlands) to delegate subdomains of the domain e164.arpa. Thus, RIPE NCC is in practice responsible for ENUM Tier 0. The supervision function is performed by the IAB together with the ITU. Concerning the ENUM tier 1 level (i.e. the national level) there is only a single player in each of the ENUM field trials. Usually, this is the respective national NIC. An exception is the UK, where (at least in the field trials) three companies are involved. On the tier 2 level one can think of two approaches: (1) registrar and DNS provider (name service provider) are separate institutions (“full” competition), or (2) registrar and DNS provider are institutionally intertwined (ENUM service provider). The latter alternative practically yields a reduction of complexity; however, it can simultaneously be viewed as a reduction of competition. In the field trials one can mainly observe model 1 (see Fig. 1).
[Figure] Fig. 1. Organisational approach of ENUM trials. Tier 0: RIPE NCC (e164.arpa). Tier 1: the registry, e.g. the NIC AFNIC (x.y.e164.arpa). Tier 2: registrars and DNS providers (or a separate DNS provider) operating the name servers. A validation entity and the ENUM registrant complete the picture.
Apart from the different tiers there is often a validation agency. This agency can be affiliated with the player on the tier 1 level; usually, however, it is independent. The validation agency is responsible for the control of ENUM applications, i.e. there is a control mechanism checking whether the customer/registrant is entitled de jure to register the telephone number as an ENUM address. In the vast majority of the trials the model of a single national ENUM database is established, i.e. different databases for different number areas, geographical preselection ranges or even individual numbers are not in use so far. The prefix region “1” is an exception to this rule. The USA and countries like Canada which share the E.164 country code prefix “1” are discussing whether, instead of only one registry for 1.e164.arpa, separate registries should be established for each country. Since the answer to this question will have extensive political and economic consequences for the ENUM implementation, the search for consent will still take some time.

Administration of the national ENUM subdomain
In this field a division of labour between NICs and national regulatory authorities (NRAs) is the rule. NICs have responsibility for the operational part, NRAs for the administrative tasks. NRAs usually have observing and supervising functions (control and steering) and they define the basic conditions and rules for the trial. The national ministries of economic affairs, however, play a more or less passive role within the ENUM trials, i.e. they observe the respective tests in their country, but they have delegated the responsibilities to the respective NRA.

Models for future organisation and charging
As soon as ENUM becomes a market reality, the organisation and allocation of the activities of registries and registrars will be especially important. In the ENUM trials, decisions on and definitions of the future role of today’s trial registries and registrars have deliberately been left open so far. Rather, plans for future responsibilities and the allocation of tasks are regarded as dependent on the precise arrangement of the ENUM business model at large. With respect to the latter there are currently still long planning horizons, and concrete timetables are not available. Also with respect to accounting models it is worth noting that there are virtually no definite plans for the actual market reality of ENUM. It is, however, likely that the billing for the entire validation process, i.e. comprising the cost shares of the institutions involved on tier 1 and tier 0, will take place on the tier 2 level, i.e. at the customer interface. Obviously, the requirements concerning accounting finally depend on the integration of the individual functions across the different suppliers involved.
ENUM service potential
An important implication of ENUM is that telephone numbers are no longer tied to a particular location; rather (by transforming them into IP addresses) they can be used worldwide, provided internet access is available (Hwang et al. 2001). ENUM can thus be viewed as an enabler for modern forms of unified messaging. Unified messaging in this context means the standardisation of the different communication data of, and access modes to, a particular person. In other words, voice message, SMS, MMS, fax and e-mail services are brought together so that they can be retrieved, processed, distributed and stored. The vision is that ENUM will acquire a central “business card function”, enabling its users to specify, depending on the actual situation, via which communication interfaces they want to be available. Apart from this, the inverse address search, i.e. the identification of persons on the basis of available communication data, is viewed as an ENUM application area in the field trials. Evaluating and assessing the discussions and the field trials so far, it is, however, fair to state that the core application of ENUM will be VoIP/internet telephony. This topic will be addressed subsequently. Yet it is also fair to state that no ENUM “killer application” is in sight up to now.

“User ENUM” and “infrastructure ENUM”
Being a transformation procedure between E.164 telephone numbers and domain names, ENUM itself does not provide communications services. Rather, it is a vehicle for the development and implementation of IP based services and applications which make use of an E.164 telephone number. Basically, one can distinguish the two alternatives “user ENUM” and “infrastructure ENUM” (ETSI 2002). Broadly speaking, “user ENUM” (also called “public ENUM”) means applying ENUM as a basis for different end user oriented communications services which can be reached via E.164 telephone numbers. In this case, ENUM can be viewed as a prerequisite and driver for new IP based services. Contrary to this, “infrastructure ENUM” (also called “operator ENUM” or “private ENUM”) focuses on ENUM in the frame of operating public or private networks. The latter case thus describes a situation where ENUM is used as an enabler for efficient routing in carrier networks.9
9 Shockey (2004) mentions e.g. that IBM is about to switch to VoIP by 2005 and that it is therefore aiming at standardising global VoIP number plans across all existing VPNs and intranets as well as across all specific vendor platforms. ENUM in this context enables a common administration and a common access plan, respectively. Further examples of private ENUM are cable operators in the USA or next generation network DSL VoIP providers in Japan. Central to ENUM in these cases is the optimisation of call termination (routing calls directly from one operator to another).
ENUM as an enabler for (end-to-end) VoIP/internet telephony10
The discussion about the future significance of ENUM is often concentrated on the issue of worldwide end-to-end VoIP/internet telephony. Two different lines of argumentation can be found. The first argument is that one should be able to use IP telephone terminal equipment like traditional telephones in order to give end users an incentive to adopt VoIP/internet telephony in practice. Broadly speaking, an IP telephone has two address levels: an E.164 address on the application level and an IP address on the transport level.11 Thus, the necessity arises that these two levels are uniquely mapped onto one another. ENUM sorts out exactly this problem. The second argument rests on the fact that already today there is a multitude of VoIP networks (e.g. carrier and corporate networks). In order to enable users of such a VoIP network to communicate with PSTN subscribers, a gateway is necessary which provides the link between the IP world and the PSTN world. This is visualised in the following figure.

10 The following analysis is based on Huston (2003) and Shockey (2004).
11 Another observation may prove once again the rationality of a system like ENUM. Traditional PSTN terminal devices usually have only a limited set of symbols on the keypad. Otherwise stated, typing an e-mail address like mailto:[email protected] or a respective SIP address like sip:[email protected] on the usual keypad is virtually not possible. This makes it obvious that it is much simpler to communicate to the outside world a telephone number, which can be typed on the regular keypad, in order to announce one’s different IP communications channels. The prerequisite would be that the necessary mapping of the telephone number to the respective addresses of the communications links is handled automatically and imperceptibly “within the system” for a calling party with a traditional terminal device.
[Figure] Fig. 2. The basic gateway model (source: based on [12]). The VoIP gateway links the PSTN and the IP network and holds a table mapping E.164 addresses to IP addresses (12345678 → 10.0.0.10, 12345679 → 10.0.0.11, 12345680 → 10.0.0.12). Call flow: 1. a call is placed to “12345678”; 2. the PSTN routes the call to 12345678 to the VoIP gateway; 3. the gateway maps the E.164 address “12345678” to IP address 10.0.0.10; 4. the gateway initiates a SIP session with 10.0.0.10.
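As a rough illustration of the mapping held at such a gateway, the following Python sketch – our own illustration, using the addresses from Fig. 2 – shows the lookup performed in step 3, after which the gateway would initiate the SIP session of step 4:

# E.164-to-IP mapping held at the VoIP gateway (addresses from Fig. 2).
GATEWAY_MAP = {
    "12345678": "10.0.0.10",
    "12345679": "10.0.0.11",
    "12345680": "10.0.0.12",
}

def route_incoming_call(e164: str) -> str:
    # Step 3 of Fig. 2: map the dialled E.164 number to the IP address of
    # the locally served device; step 4 would open a SIP session to it.
    ip = GATEWAY_MAP.get(e164)
    if ip is None:
        # The gateway only knows its own "locally" served devices; other
        # numbers would have to be routed back over the PSTN.
        raise KeyError(e164 + " is not served by this gateway")
    return ip

# route_incoming_call("12345678") returns "10.0.0.10".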
In this respect the gateway, as the interface between the PSTN and the IP network, is the most appropriate location at which the mapping of IP addresses and E.164 addresses is held. From this perspective each gateway covers a certain number of terminal devices and maps telephone numbers to the respective IP addresses for these devices. It deserves to be stated that each gateway knows only about its own “locally” served devices. Calls between different gateways (i.e. terminal devices hooked up to different gateways) need to be explicitly configured in each gateway to be put through. This can be realised via IP or some private connection; generally, however, it will still be the PSTN. The reason is that E.164 numbers can only be routed across the PSTN. Carrier-specific and company-internal VoIP numbering plans cannot be accessed from other VoIP network segments directly. Thus, Huston (2003) concludes that the “PSTN currently is the glue that allows the VoIP islands to interconnect with each other” (the PSTN as the “Inter-VoIP network”). This arrangement, however, is not necessarily cost efficient, i.e. there are strong incentives to avoid PSTN transit and termination charges for calls if they can be processed end-to-end, across company and carrier borders respectively, via IP. The two core issues regarding ENUM are therefore:
• How do network elements like gateways, SIP servers etc. find access to services on the internet if only an E.164 telephone number is available?
• How can end users define preferences for specific services and servers to respond to incoming communications requests?
The basic objective of ENUM is therefore to enable each IP device to find out whether an E.164 telephone address is reachable end-to-end via IP, what the preferred IP application is, and to establish technically which IP address, port address etc. should be used. Having said this, it becomes obvious that ENUM is one potential instrument to provide interoperability of services in telephone networks and on the internet.12 The unique mapping of a telephone number onto a domain via ENUM makes it possible to approach certain entries in the Domain Name System. This will become more concrete with an example (Blank and Dieterle 2004). The starting point is a call to the German telephone number 069 27235 0 of DENIC.
12 Further approaches to link circuit switched networks and IP based packet switched networks via a unique identifier for different communications services are Universal Personal Telecommunications (UPT), see e.g. ITU-T (1993), and Universal Communication Identifier (UCI), see e.g. ETSI (2001).
[Figure] Fig. 3. Basic concept of applying ENUM (source: DENIC). A call to DENIC’s number +49 (0)69 27235 0 triggers a DNS inquiry for 0.5.3.2.7.2.9.6.9.4.e164.arpa to the ENUM DNS; the DNS reply returns NAPTR resource records, and the communication is then established over the IP network, e.g. to sip:enum@denic.de.
1. The inquiry for the telephone number +49 69 27235 0 will be rewritten by the user’s terminal device, provided it supports ENUM, to 0.5.3.2.7.2.9.6.9.4.e164.arpa.
2. An inquiry for 0.5.3.2.7.2.9.6.9.4.e164.arpa will be made to the Domain Name System.
3. As a result the inquiry brings back NAPTR records13 to the application that initiated the inquiry. In other words, the results of the inquiry are Uniform Resource Identifiers (URIs) for IP based applications, in particular the respective (IP) addresses, which can be reached over the internet. One of these URIs will be selected as the initial address and determines the protocol for the further communication.14
In this system, the user can in particular specify a preferential order of the communications channels by which he or she wants to be reached. It would be possible, for example, to specify that for an incoming call an attempt should first be made to establish a VoIP call to the user’s own SIP server; if this is not possible, to establish a connection to the mobile handset; and if this is also not possible, to establish a connection to the traditional fixed-line telephone. In case all these alternatives fail, it would still be possible to specify that an e-mail with a voice message is sent.
13 A Naming Authority Pointer Resource Record (NAPTR RR) is an entry in the Domain Name System containing rules for the conversion of an inquiry.
14 Utilisation of the DNS requires that the information linked to an E.164 telephone number, in particular the NAPTR records, server links and data of contact persons, is stored in distributed data files.
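In practice, step 2 is an ordinary DNS query for NAPTR records, and the preferential order described above is encoded in the order and preference fields of those records. The following sketch – our own illustration, assuming the third-party dnspython library is available – retrieves and sorts the NAPTR records for an ENUM domain:

import dns.resolver  # dnspython, assumed to be installed

def enum_naptr_lookup(enum_domain: str):
    # Step 2: query the DNS for the NAPTR records of the ENUM domain.
    answers = dns.resolver.resolve(enum_domain, "NAPTR")
    # Sort by the order/preference fields, i.e. by the preferential order
    # of communications channels specified by the user.
    records = sorted(answers, key=lambda r: (r.order, r.preference))
    # Each record carries a service tag (e.g. "E2U+sip") and a regexp whose
    # substitution yields a URI such as sip:enum@denic.de (step 3).
    return [(r.service.decode(), r.regexp.decode()) for r in records]

# Usage: enum_naptr_lookup("0.5.3.2.7.2.9.6.9.4.e164.arpa")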
IT-security issues and ENUM specific data protection problems
The general challenge for the employment of ENUM in practice can be described as follows: it has to be ensured that only the authorised user of an E.164 telephone number has the right to use the corresponding ENUM subdomain, i.e. the domain in e164.arpa. This demand must be fulfilled at any time. Thus, two fundamental requirements arise:
• to determine the identity of the person authorised to use an E.164 call number. The issue in question is whether the customer actually is who he or she claims to be (avoidance of “disguise”);
• to examine whether the requested number belongs to the customer and is thus rightfully used as an ENUM domain. Otherwise stated, it shall be excluded that the number has been obtained by devious means (avoidance of “domain grabbing”).
In a future ENUM mass business the necessary checks could take place as outlined in Fig. 4.
[Figure] Fig. 4. Possible validation model (source: WIK analysis). The registrar maintains the contact with the customer and performs the data registration; it sends the data to the validation agency, which validates the data; the registrar then sends the validated data to the registry, which thus receives only validated data from the registrar.
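Read as a message flow, Fig. 4 can be summarised in a few lines of code; the sketch below is our own reading of the figure, with hypothetical objects standing in for the registrar’s partners:

def register_enum_domain(customer_data, validation_agency, registry):
    # Registrar role in Fig. 4: register the customer data, have it checked
    # by the validation agency, and forward only validated data.
    validated = validation_agency.validate(customer_data)
    if validated is None:
        # Validation guards against "disguise" and "domain grabbing".
        raise ValueError("registration refused: data could not be validated")
    # Registry role: it only ever receives validated data from the registrar.
    registry.store(validated)
    return validated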
An identification and validation system that fulfils these requirements therefore has to offer solutions for all cases in which the authorisation to use a particular telephone number changes. An example would be someone moving out of an apartment whose telephone number is kept by the next tenant. Moreover, because of prepaid cards in the cellular mobile market there is an abundance of mobile phone numbers whose authorised user is not necessarily known. In addition, quite frequently the period for which the use of a prepaid SIM card is authorised has actually expired. Experience gathered so far in the ENUM trials shows that plain and suitable procedures for identification and validation exist (e.g. via identity card or a copy of the phone bill). However, it is also true that these procedures are not yet sophisticated enough regarding the degree of automation and reliability. Challenges are obvious, e.g. in keeping the register up to date (modification, shutdown, deletion of phone numbers). It is foreseeable that only a fully automated system is going to meet the requirements of mass registration. Compiling the necessary standards is one of the main tasks within the field trials. Identification and validation are less complicated if the ENUM domains are based on newly assigned E.164 numbers. We therefore assume that the discussion about specific E.164 number plans for ENUM will continue in the future. Generally, in all trials security and data protection are recognised as essential factors for a broad acceptance of ENUM. The findings from the field trials highlight the fact that the standards of the DNS must also be regarded as a minimum requirement for ENUM. Nevertheless, in our opinion it remains to be seen whether these standards are sufficient. In particular, data protection problems have to be solved regarding Naming Authority Pointer Records (NAPTR) and WHOIS inquiries. The field trials usually aspire to meet the requirements of the prevailing legal data protection rules. Two aspects are particularly remarkable. On the one hand, these rules are already taken into account in the development phase, i.e. ex ante. On the other hand, the rules are strictly followed during the implementation process in order to prevent improper use as far as possible. The field trials embrace the basic claim that the user has sovereignty over his or her data and that the opt-in principle is implemented. At any time users should be able to know whether and how their data is used. From a technical perspective, however, abuse cannot be excluded, i.e. the opt-in principle can be viewed as a necessary but not a sufficient condition to prevent misuse. According to our findings the opt-out principle is not yet discussed anywhere. In view of WHOIS databases, publication of end users’ personal data can be ruled out by making it available only via the users’ technical partners (registry, DNS provider). In this case, only the information on where the data can be found is published and not the personal data itself. This solution is often termed a “thin registry”.
Requirements for the future utilisation of ENUM

Requirements based on competition policy and regulation
From a competition policy and regulatory perspective, regulatory action on the tier 1 level could be necessary, e.g. with regard to prices and quality of service, in order to prevent the abuse of a monopolistic position. The participation of the NICs in the ENUM trials, if extended to the time when ENUM becomes market reality, could potentially be a first mover advantage for them. Once ENUM has become market reality, a competitively neutral procedure, e.g. an invitation to bid or an auctioning off of licences, should be taken into consideration to determine the tier 1 provider. Concrete actions, however, have not taken place in any of the countries examined. In this context, one should take into account the formal organisation of the respective NIC (e.g. whether it is open regarding the participation of ISPs) before steps are initiated towards the assignment of a third party.

Challenges for a commercial operation
For the transition from test operations in the frame of field trials to real commercial use of ENUM, several challenges have to be met. So far, in most of the trials no final solutions have been found. In our opinion there is a need for action in the following areas:
• Top level domain: No official agreement about the ENUM TLD has been reached. The .arpa domain is so far only accepted for the trials.
• Registration and validation processes: The processes in the trials are not yet appropriate for mass services with millions of users. Today, there exist rather pragmatic, but only partly automated process solutions.
• Data protection and privacy: Apart from national legislation, multinational aspects are very important. It is obvious that the different parties involved in an ENUM solution a priori come from very different locations exhibiting very different legal rules and systems.
Our analysis has substantiated that one can expect an abundance of different parties to be involved should ENUM become market reality. These may compete with one another for the end user. However, we also assume that there will exist ENUM specific supply and output relations on upstream and downstream levels between enterprises. Thus, it is obvious that ENUM business models have to meet complex market requirements. On the tier 2 level and within the field of ASPs we got the impression that full competition will be the rule and that these areas can therefore be left to a large extent without regulatory interference. It may be rational to establish a certification regime providing an incentive to meet quality standards and safety requirements. This especially applies to authentication measures. In principle, however, the goal should be to maintain openness on the tier 2 level. In particular, attention should be paid to ensuring that no actual competition restrictions result from implicit preconditions (e.g. regarding the processing of orders). From the end user’s point of view, a free and independent choice or change of registrar, DNS provider and ASP should be made possible. The reason is that the advantages of the telecommunications incumbents and other enterprises with significant market power should not be perpetuated. Rather, on the contrary, everything should be done to promote competition on the tier 2 level.
Conclusions
Until now, the implementation of ENUM has taken place only in field trials. Usually these trials are initiated and performed on a national level. Sometimes, however, single companies arrange ENUM field trials. The objective of the present study is to illuminate the complex challenges of implementing an ENUM solution, to condense the experiences of the field trials and to derive implications regarding a final implementation of ENUM. The study focuses on the national field trials in Austria, Germany, France, Sweden, the UK, and the USA. Moreover, the available information on field trials in Taiwan and South Korea has been evaluated and assessed. The field trials address several issues relevant for the implementation of ENUM. First and foremost, the objectives of the field trials comprise the following aspects: (1) assessing the interest of telcos and ISPs in ENUM and gaining experience in running ENUM under test conditions; (2) assessing the pros and cons of different ENUM implementations, particularly with respect to the role of registries and registrars; (3) evaluating processes, interfaces and protocols governing the relationships between the involved parties; (4) testing ENUM and associated applications from a technical and user oriented perspective; (5) assessing the economics of ENUM, especially identifying profit and operational costs; (6) evaluating possible business models; (7) handling user data in the ENUM processes against the backdrop of security standards; (8) discussing issues involved with ENUM which are relevant for competition policy and regulation. Our analysis shows that in particular the following four topics are vital for a successful implementation of ENUM: securing the integrity of the E.164 numbering scheme (validation procedures); data security issues (subscriber data, NAPTR resource records, DoS attacks); privacy issues (use of the WHOIS database and of NAPTR resource records); and competitively neutral institutional arrangements between the players.
References
ART (2001a) Principles and Conditions for Implementation of an ENUM Protocol in France. Abstract of Contributions to the Public Consultation. July 6
ART (2001b) Internet Naming and Addressing. ART publishes the results of the public consultation on the principles and conditions for implementation of the ENUM protocol in France. Press release, Paris, July 16
Berners-Lee T, Fielding R, Masinter L (1998) RFC 2396, Uniform Resource Identifiers (URI): Generic Syntax. August
Blank P, Dieterle S (2004) ENUM – Domains bei der DENIC eG. DENIC, Frankfurt, March 10
Cannon R (2001) ENUM: “The collision of telephony and DNS policy”. Paper presented at the 29th Telecommunications Policy Research Conference, Alexandria, Virginia, USA, October 27–29
Center for Democracy & Technology (2003) ENUM: Mapping Telephone Numbers onto the Internet, Potential Benefits with Public Policy Risks
DENIC (2003) Report of the ITU Study Group 2 Meeting. http://www.denic.de/enum/studygroup bericht 20021203.html, June 5
Elixmann D, Hillebrand A, Schaefer RG, Wengler MO (2004) Zusammenwachsen von Telefonie und Internet – Marktentwicklungen und Herausforderungen der Implementierung von ENUM. WIK Discussion Paper No. 253, Bad Honnef, June
ETSI (2001) EG 201 940, Human Factors (HF), User identification solutions in converging networks. April
ETSI (2002) ENUM Administration in Europe. TS 102 051, V 1.1.1, July
Hillebrand A, Büllingen F (2001) Internet-Governance – Politiken und Folgen der institutionellen Neuordnung der Domainverwaltung durch ICANN. WIK Discussion Paper No. 218, April
Hofmann J (2002) Verfahren der Willensbildung und Selbstverwaltung im Internet – das Beispiel ICANN und die At-Large-Membership. Wissenschaftszentrum Berlin für Sozialforschung, WZB, FS II 02-109
Huston G (2002) ENUM – Mapping the E.164 number space into DNS. The Internet Protocol Journal 5 (2), June
Huston G (2003) Implications of ENUM. September, http://www.potaroo.net/papers/2002/enum.pdf, downloaded November 2003
Hwang J, Mueller M, Yonn G, Kim J (2001) Analyzing ENUM service and administration from the bottom-up: The addressing system for IP telephony and beyond. Paper presented at the 29th Telecommunications Policy Research Conference, Alexandria, Virginia, USA, October 27–29
IETF (2002) RFC 2916, E.164 number and DNS. September
ITU (2002) Global Implementation of ENUM: a tutorial paper. February
ITU-T (1993) Recommendation F.850, Principles of Universal Personal Telecommunication (UPT). March
ITU-T (1997) Recommendation E.164, The International Public Telecommunications Numbering Plan. May
Leib V (2002) ICANN und der Konflikt um die Internet-Ressourcen: Institutionenbildung im Problemfeld Internet-Governance zwischen multinationaler Staatsfähigkeit und globaler Selbstregulierung. PhD dissertation, University of Konstanz
McTaggart C (2001) E Pluribus ENUM: Unifying international telecommunications networks and governance. Paper presented at the 29th Telecommunications Policy Research Conference, Alexandria, Virginia, USA, October 27–29
McTaggart C (2003) The ENUM protocol, telecommunications numbering, and Internet governance. Prepared on behalf of ICANN, ccTLD, and the Legacy Root: Domain Name Lawmaking and Governance in the New Millennium. Benjamin N. Cardozo School of Law, Yeshiva University, New York, USA, March 17
Shaw R (2001) Issues facing the Internet Domain Name System. Paper presented at the Asia Pacific Telecommunications Regulation Forum, Phuket, Thailand, May 15–17
Shockey R (2004) ENUM. Paper presented at the International SIP, Paris, January
Stastny R (n.d.) Introduction to ENUM. Document Version 0.1
Appendix 1: Overview of ENUM Trials worldwide
The following Table 2 shows an overview of the approved requests for number delegation for ENUM test purposes.

Table 2. E.164 country codes for which TSB has received approvals for ENUM delegations to be performed by RIPE NCC (as of 30.07.2004)

E.164 Country Code   Country                 Delegee                                              Date of TSB Approval (dd/mm/yy)
246                  Diego Garcia            Government                                           12/08/02
247                  Ascension               Government                                           12/08/02
290                  Saint Helena            Government                                           12/08/02
31                   Netherlands             Ministry                                             23/05/02
33                   France                  DiGITIP (Government)                                 28/03/03
353                  Ireland d               Commission for Communications Regulation             25/05/04
358                  Finland                 Finnish Communications Regulatory Authority          26/02/03
36                   Hungary                 CHIP/ISzT                                            15/07/02
374                  Armenia                 Arminco Ltd                                          11/07/03
40                   Romania                 MinCom                                               10/12/02
41                   Switzerland             OFCOM                                                01/10/03
420                  Czech Republic          Ministry of Informatics                              24/06/03
421                  Slovak Republic         Ministry of Transport, Post, and Telecommunications  04/06/03
423                  Liechtenstein           SWITCH                                               21/10/03
43                   Austria                 Regulator                                            11/06/02
44                   UK                      DTI/Nominum                                          16/05/02
46                   Sweden                  NPTA                                                 10/12/02
48                   Poland                  NASK                                                 18/07/02
49                   Germany                 DENIC                                                16/05/02
55                   Brazil                  Brazilian Internet Registry                          19/07/02
65                   Singapore               IDA (Government)                                     04/06/03
86                   China b                 CNNIC                                                02/09/02
878 10 a             –                       VISIONng                                             16/05/02
971                  United Arab Emirates    Etisalat                                             13/01/03
882 34 c             –                       Global Networks Switzerland AG                       05/03/04

Source: RIPE NCC
a This is a Universal Personal Telephony (UPT) code.
b This is a temporary authorisation for ENUM global TLD trial and evaluation. This delegation will end on 30 June 2005. If the ITU Interim Procedure is discontinued before then, or if the Recommendation E.A-ENUM is approved before 30 June 2005, the delegation will be turned into an objection.
c This is a country code and associated identification code for Networks (shared country code).
e This delegation will end on 30 March 2005.
3G: Standardisation in a Techno-Economic Perspective
Anders Henten1, Dan Saugstrup2
Technical University of Denmark
Abstract
The main question in this paper is: which standard/technology will win the 3G mobile market? The most prominent contenders are WCDMA (also known as UMTS) and cdma2000. In addition, EDGE will also play a role, as will (presumably) the Chinese TD-SCDMA standard.3 Furthermore, other kinds of wireless solutions such as WLAN (WiFi and WiMAX) are spreading fast and may pose a threat to (or complement) the cellular mobile technologies. Such phenomena should not be overlooked. However, in this paper we will concentrate on 3G technologies and specifically on the two main contenders, WCDMA and cdma2000.
Introduction GSM is the worldwide dominating 2G standard with 5–6 times as many subscribers as cdmaOne. The question is whether the proponents of GSM technology (who are supporting the WCDMA solution) will be able to extend this dominant position into the 3G mobile markets. Until recently, cdma2000 (which is supported by the cdmaOne community) seemed to be doing better in the markets than WCDMA. In South Korea, cdma2000 technology has obtained an impressive number of customers, and in the Japanese market, the NTT DoCoMo 3G solution FOMA, based on WCDMA technology, has not had nearly the same success as the KDDI au-offer, based on cdma2000 technology. Does this indicate that cdma2000 in the future mobile markets will be a strong challenge to WCDMA or will WCDMA eventually dominate the markets? In addition to this main question, there are two groups of sub-questions being dealt with in the paper: 1 2 3
• Which kind of victory will it be? Will one technological solution be all-dominating, or is co-existence more likely?
• What are the most decisive factors in the battle between the different standards? What roles do technology path-dependence and strategic concerns play?

In order to approach answers to these questions, the paper will first briefly examine important institutional aspects of the history of 3G standards. This is followed by a technology-oriented description of the migration paths from 2G to 3G solutions. Finally, before concluding, the issues dealt with in the paper are discussed on the basis of a stakeholder analysis encompassing infrastructure and terminal manufacturers, network operators, policy makers and administrators, and end users.
History of standard generation

It is often the case with new technologies that technical committees, standardisation organisations, company R&D departments, etc. have been working on their specification and development long before they hit the headlines in the news. With respect to 3G technology, the development of specifications started even before the 2G solutions had reached the markets. Work has been performed in several organisations and parts of the world, in parallel and in cooperation. However, in the context of this paper we take our point of departure in the work performed in relation to the EU, ETSI and the ITU and examine how this intersects with work done and decisions taken in Japan, South Korea and, last but not least, the US.

Work in the EU context on 3G technology started in 1988 in relation to the first communication technology research programme, RACE I, with participation from European-based equipment manufacturers, telecommunication operators and universities. Work continued in the RACE II programme and the subsequent ACTS programme, and results from this work were submitted to the European standards institute ETSI as candidates for UMTS air interfaces and to the ITU as IMT-2000 submissions. In 1997 the proposals for air interfaces were grouped by ETSI into 5 different categories: WCDMA, WTDMA, TDMA/CDMA, OFDMA and ODMA (Toskala 2001). At that point in time, there was no definite decision as to which air interface technology would eventually be favoured by ETSI. It was still not a closed game; however, with the decision of ARIB in Japan later in 1997 to support WCDMA, it was decided in ETSI (in 1998) to select WCDMA as the preferred air interface for 3G. Ericsson and Nokia already favoured WCDMA, which was part of the background for the Japanese decision. Therefore, there were already strong indications that WCDMA would be given priority. And, to continue working together in a broader international context, in 1998 ETSI took part in the establishment of the
so-called 3GPP, with participation from Europe (ETSI), Japan (ARIB and TTC), South Korea (TTA) and the USA (T1P1), and later in 1999 CWTS from China (Toskala 2001, pp. 43–44).

The ITU had started working on 3G specifications already in 1986 – at that point in time the system was called the Future Public Land Mobile Telecommunication System (FPLMTS). However, the ITU has not had a decisive role in the processes of 3G standardisation. The most important organisations in this field are 3GPP and its counterpart 3GPP2 (organising the proponents of cdma2000 technology), the regional standardisation organisations behind these two conglomerates, and the different equipment manufacturers and telecommunications operators. The contributions of the ITU in the field have mostly focussed on a coordinating role in relation to the IMT-2000 project and the decisions taken in the context of the World Radio Conference (WRC) in 1992 on the allocation of spectrum frequencies for 3G solutions.

The initial vision of IMT-2000 was to develop a common worldwide 3G standard. However, because of the strong strategic and economic interests of the different players, this vision could not be realised, and presently the aim of the IMT-2000 project is to secure as much compatibility as possible between the different 3G standards. To that effect, the concept of a family of standards was introduced – the IMT-2000 family, see Fig. 1. Fig. 1 shows the different radio access technologies in the IMT-2000 family of standards as well as the possible flexible assignment of core networks, in principle enabling roaming. Compatibility is the main intention of the IMT-2000 project. In practice, however, there is presently no roaming between WCDMA and cdma2000 systems.
Fig. 1. The IMT-2000 family of standards (Source: Schiller 2003, p. 140)
3GPP

As more or less comparable standards were being developed in different regions around the globe, and with some players participating in all regions, it became evident that creating identical specifications in order to secure equipment compatibility, with work being done in parallel, would be very difficult. A single forum for WCDMA standardisation was therefore created: the 3rd Generation Partnership Project (3GPP). During the late nineties, ETSI, ARIB, TTA, TTC and T1P1 handed over their WCDMA standardisation work to 3GPP for further development of the Universal Terrestrial Radio Access (UTRA) standard. The 'old' standardisation organisations are presently participating as very active partners within 3GPP. In addition to the standardisation organisations, operators and manufacturers of telecommunication equipment are also participating in the 3GPP work. Currently, 3GPP is, furthermore, responsible for the ongoing development and standardisation of GSM, GPRS and EDGE technologies.

3GPP2

Similar to the WCDMA development situation, work carried out in the US TR45.5 and the South Korean TTA standardisation groups was merged into 3GPP2, which focused on the development of the cdma2000 Direct Sequence (DS) and Multi Carrier (MC) modes for the cdma2000 3G specification. And, as in the case of 3GPP, other organisations, manufacturers and operators have joined up. Following the creation of 3GPP and 3GPP2 and the handover of standardisation work from the national/regional standardisation organisations to the two 3G partnership projects, there has been a period of harmonisation and negotiation activities in order to bring the different cdma2000 and WCDMA solutions into line. Currently, 3GPP and 3GPP2 are the main driving forces in the standardisation processes, together with equipment manufacturers and operators to some extent.
Migration paths

Regarding the choice of technology in the mobile market, the front runners have changed somewhat over the last decade. Where Europe – in particular the Nordic countries – seemed to lead the way during the 1990s with the successful GSM system, Japan took over after the introduction of i-mode in 1999 and has led the way into the new millennium. Currently, however, the cdma2000 operators in Japan and South Korea have gone to the front.

Considering the 2G mobile communication market in terms of numbers of subscribers, GSM is by far leading the way with almost 1.3 billion subscribers, compared to 219.3 million CDMA and 90 million US TDMA subscribers (December 2004; http://www.gsmworld.com, Statistics 1Q-2005).
When looking at 3G mobile technologies and services, the picture is somewhat different. The world's first IMT-2000 network (cdma2000 1x) was commercially deployed in October 2000 in South Korea by SK Telecom, whereas the first WCDMA network was commercially launched one year later (FOMA in Japan). However, it should be noted that, even though the cdma2000 1x standard is defined as a member of the 3G IMT-2000 family, its data speed is just slightly higher than GPRS data rates, and the services which can be provided also look very much the same as in the case of GPRS (see, for instance, Northstream 2003). Rather than comparing WCDMA and cdma2000 1x, it is consequently more relevant to compare cdma2000 1x and GPRS. GPRS has generally been slow to take off. However, lately many subscribers have started taking up GPRS. In Denmark, for instance, the number of GPRS subscriptions and GPRS as a supplementary service to GSM subscriptions increased from 930,907 at the end of 2003 to 3,296,881 by July 30th, 2005 (Telestatistics n.d., p. 19). When examining mobile technology solutions with higher bit rates, WCDMA and cdma2000 1x EV (DO or DV), WCDMA has lately taken the lead.

Table 1. Number of 3G subscribers, July 2005

Technology | Subscribers | Networks
WCDMA | 31,700,000 | 68
cdma2000 1x | 148,800,000 | 91
cdma2000 1x EV (DO/DV) | 15,400,000 | 19

Source: http://www.3gtoday.com/index.html, http://www.gsacom.com
In the following sub-sections, the possible development paths for GSM, cdmaOne and US TDMA operators towards either WCDMA or cdma2000 solutions are described and analysed. The TD-SCDMA development path is not included, as China, so far, is the only country promoting this solution – but has, at the same time, postponed its launch of 3G services (Darling n.d.).

GSM operators

In theory, GSM operators could go both ways – WCDMA or cdma2000 – as the core network is more or less identical (based on SS7) and both migration paths require a new radio interface for the GSM networks. However, the preferred migration path for GSM/GPRS operators seems to be WCDMA. For GSM operators without 3G spectrum licenses, a WCDMA MVNO or EDGE solution is believed to be the most reasonable one. Operators without 3G spectrum can reuse their GSM-allocated spectrum when deploying EDGE on their GSM/GPRS network and thereby provide high data rate services in a very cost-effective manner.
The biggest question regarding EDGE is believed to be terminal availability.

A notable difference between GSM and cdmaOne is that with GSM the service network layer is largely standardised, meaning that SMS, MMS and other GSM services are launched as global solutions, whereas proprietary variants of cdma exist – which from a service perspective often leads to a fast service launch, but at the same time causes poor interoperability between operators. A second service differentiator, and maybe the most important one, is the support for roaming. Here, the ubiquity of GSM networks and the already established roaming agreements between GSM/GPRS network operators will provide WCDMA subscribers with almost global coverage, as the WCDMA operators are reusing the already established roaming agreements, with WCDMA data roaming as the long-term scenario. In addition to the roaming issue itself, another important aspect is the revenue generated by roaming agreements. Presently, the revenue streams generated by roaming contribute substantially to most mobile operators' revenues, and as people in general are travelling more and more, these revenue streams are believed to increase significantly in the future – other things being equal. Overall, WCDMA operators stand to benefit most from this development, as they have roaming agreements in place. Thirdly, market size is also believed to be a significant factor, as a greater market size will create greater manufacturing volumes and thereby, in theory, a lower manufacturing cost per unit, as the fixed costs are shared between more units. Based on the high number of GSM/GPRS operators, this may turn out to be a long-term advantage for the WCDMA markets.

An evolution from GSM to cdma2000 suggests two possible paths: the deployment of two parallel systems, or the deployment of cdma2000 access on top of the existing GSM network. The deployment of two parallel systems does not seem like a rational path, as the operators would have to operate two systems without roaming possibilities between the two. However, there are examples of this, e.g. China Unicom and Telstra, where the Chinese solution is believed to be based on industrial policy and political incentives, whereas the Telstra solution in Australia is based on the extremely low population density, making high coverage the primary consideration (Northstream 2003).

cdmaOne operators

As with GSM operators, cdmaOne operators can in theory choose either a cdma2000 or a WCDMA path. However, the cdmaOne to cdma2000 evolution path is the most obvious and is happening on a considerable scale, with, e.g., 91 cdma2000 1x networks accounting for over 148 million subscribers as of July 2005. The cdma2000 path for cdmaOne operators is straightforward and can be viewed as a step-by-step migration path towards 3G, including some network upgrades and network replacements along the way, e.g. from cdma2000 1x to cdma2000 1x EV-DO and cdma2000 1x EV-DV.
The cdmaOne migration to cdma2000 1x mainly consists of implementing and integrating an overlay packet-switched core network. This is done by adding a new channel card in the transceiver station (allowing for a doubling of the voice capacity), adding a packet data serving node in the core network and performing software upgrades in the different network nodes – comparable to the GSM/GPRS transition (see http://www.cdg.org). Furthermore, the fact that cdma2000 is based on the same carrier frequency as cdmaOne should provide a smoother and less complicated transition path, for the transition to cdma2000 1x as well as to 1x EV-DO. However, it should be noted that the 1x EV-DO solution uses a separate carrier frequency for data but will be able to hand over to a 1x carrier if both voice and data are needed. At the same time, cdmaOne operators choosing to deploy cdma2000 can fairly easily migrate their current service offerings to the cdma2000 platform, allowing operators to build on existing service and application offerings and, at the same time, provide a seamless introduction of new services and applications. Another and somewhat related issue concerns the availability of terminals. Currently, there is a significantly higher number of cdma2000 terminals compared to WCDMA terminals, which from a user perspective clearly gives cdma2000 the upper hand.

A cdmaOne or cdma2000 1x evolution to WCDMA would require a whole new network to be implemented on top of the existing cdma network and raises, furthermore, the question of first implementing a GSM/GPRS network before actually implementing the WCDMA solution. This path is believed to be difficult. However, overall operational or political considerations might pave the way for it, e.g. if an international mobile operator should want to deploy the same networks in all markets, instead of cdma2000 in some and WCDMA in other markets, in order to centralise and harmonise service development using the same technology platform. It is also foreseen in South Korea because of a political decision to deploy WCDMA networks (see Park and Chang 2004).

TDMA operators

For TDMA operators, the decision concerning the 3G technology path is somewhat different, as these operators cannot stay on their current TDMA path but have to choose between the WCDMA and the cdma2000 development pathways. Starting with the WCDMA solution, the TDMA operators will have to deploy a GSM/GPRS network as a parallel overlay on the existing TDMA network and then follow the GSM road to 3G, depending on spectrum availability (Northstream 2003). Factors supporting this development path are mainly related to roaming and service capabilities and, thereby, also terminal aspects. The roaming and service aspects are highly interrelated, as the extended GSM roaming coverage allows for extended service and
application capabilities no matter which network the user is located on. Regarding terminals, the GSM/GPRS/WCDMA terminal market is expected to become a global mass market over time, allowing customers to use their terminals and the services and applications they have signed up for almost anywhere – whereas the cdma2000 development path in the short run will provide more terminals.

Looking at the cdma2000 evolution path for TDMA operators, the picture is somewhat different. Also in this scenario, however, the core network needs to be upgraded and, furthermore, a cdma2000 1x radio access network needs to be deployed. One of the advantages in this scenario is the wide availability of cdma terminals, which allows for a gradual migration to cdma2000 while, at the same time, maintaining the old TDMA customer base. Secondly, the cdma2000 1x deployment path reuses the circuit-switched part of the TDMA network, requiring a smaller investment in network upgrades and replacements. Compared to a complete WCDMA implementation, a cdma2000 1x and eventually a DO/DV implementation should provide the TDMA operators with better infrastructure reusability and thereby a more gradual overall investment in networks and equipment.

In conclusion, the TDMA development path, be it WCDMA or cdma2000, is believed to be highly influenced by external factors: specific market requirements, regulatory and spectrum issues, operator ownership structure, and the actions/paths chosen by leading TDMA operators, etc.

In Fig. 2, the most rational migration paths for GSM, cdmaOne and TDMA operators towards 3G deployment are depicted, based on the analysis carried out above.
Fig. 2. Most rational migration paths towards 3G (GSM → GSM/GPRS → EDGE or WCDMA; TDMA → GSM/GPRS or cdma2000; cdmaOne → cdma2000 1x → 1x EV-DO / 1x EV-DV)
Discussion and analysis

As mentioned in the introduction, the main question dealt with in this paper is which 3G technology will win the markets. Related to this overall question are the issues concerning the kind of victory it will be and the factors influencing the outcome. In order to deal with the question of winning the markets, there are at least three different but interrelated sets of factors to be examined:
• Factors affecting the selection of technology solutions
• Factors affecting the deployment of the technology solutions
• Factors affecting the diffusion/take-up by customers

With respect to all three sets of factors, there are technology-based aspects, market and economic aspects, policy and regulatory aspects, and a range of broader social aspects to be considered. Furthermore, it should be remembered that aspects which presently take a technological shape may, at some point in time, have been based on policy decisions – as, for instance, the decision taken in the 1980s in Europe to deploy GSM networks, which has greatly influenced the technology basis on which new networks are to be established. Some of the aspects can, therefore, be seen as technology aspects as well as market or policy aspects. Finally, the actual stakeholders and decision makers should also be included in the analysis in order to 'bring life' to the factor analysis. The analysis will, therefore, start with the stakeholders.

Stakeholders

The first factors to be examined are those affecting the selection of technology solutions. 'Selection' is an ambiguous word, encompassing market selection as well as policy choices. And, as in most other technology areas, the processes of selection of standards/technologies in the 3G area are influenced by de facto as well as de jure elements. Accordingly, the stakeholders with influence on the selection of technology solutions in this area are market players – in this case equipment manufacturers and telecommunication operators – as well as policy decision makers and administrators. Other market players such as content providers and aggregators also have an interest in the development of 3G technologies and markets, but their interests are primarily related to the relationships between operators and content providers – where the different 3G options do not differ in essence – and they have no real influence on the selection between different 3G solutions. They may, on the other hand, have an influence on the deployment and take-up factors, as their products and services are important for the decisions of network operators to deploy new networks and the decisions of users to take up new communication systems and services.

Equipment production in the field of mobile communication roughly includes the production of network technology (core networks and radio interfaces) and the production of handsets. Divisions of labour among the different companies in the field are diverse and are found along the dimensions of networks vs. handsets, software vs. hardware, etc. In most cases, the large equipment manufacturers are involved in several market segments. However, some companies specialise in or have an emphasis on one of the different market segments. Ericsson, for instance, is 'heavy' on the networking side, whereas Nokia has more emphasis on handsets. Even though this could potentially lead to differences in interests in accordance with the differences in customer groups,
there is no indication that this has had any influence on the processes of technology selection. Basically, equipment suppliers have an interest in selling as much equipment as possible. In the handset market, the more often end users change their terminals, the better. In the network market, equipment suppliers will willingly provide the necessary equipment if mobile operators are prepared to invest in entirely new systems – if only they have the necessary patents and/or licenses. This is the important point: the interest of equipment producers in promoting a specific technology solution depends on the patent rights and licenses they have acquired in the area – and on the technology competences they have.

For network operators, the issue looks somewhat different. Not only must they convince end users that a shift to a new technology is desirable, they also depend on the already installed equipment base. Equipment producers also indirectly depend on this, as they have to sell their goods and services to the network operators. However, the prime basis of path-dependent behaviour in the mobile field is among the network operators. They must make sure that they can reuse as much of their existing infrastructure as possible and only invest in new systems if they can foresee a profitable market possibility.

Another important category of stakeholders in this area are the policy makers and administrators, seeking to represent the interests of their countries and of the companies located in their countries. The interventions of policy makers and administrators are multifaceted, but their direct influence has gradually decreased with the liberalisation of the telecommunication markets. However, technology selection, deployment and take-up are still influenced in many diverse ways, for instance via decisions on frequency allocations (e.g. the fact that the so-called IMT-2000 frequencies were already occupied in the US), licensing of operators (where the EU has favoured UMTS, although technology neutrality was emphasised after pressure from the US), direct support for specific standards (with China as the prime example, with its promotion of TD-SCDMA), or via influences on standardisation organisations (for instance in relation to the decision of ARIB in Japan to go for WCDMA in order not to be stranded once again with a purely national standard, as in the case of PDC). It is not always possible for policy makers and administrators to serve all national interests at the same time. There may be differences in interests between equipment producers and network operators, and there may be strategic industrial policy interests which run counter to the interests of operators, as, for instance, in the case of South Korea.

The last group of stakeholders are the end users. They do not have a direct noticeable influence on the technology selection processes – representatives of user groups have, for instance, very little representation even in official de jure standardisation organisations. But users have an influence on the deployment mode and speed of technologies via the take-up ratio and, therefore, indirectly on the development of standards.

To sum up, equipment manufacturers and policy makers/administrators are the most influential stakeholders with respect to technology standard selection; operators, manufacturers, policy makers/administrators and, to some extent, end users all have different ways of influencing technology deployment; finally, end users,
operators, handset producers and policy makers/administrators all have some influence on technology take-up. Put together, the stakeholders in their different roles all have an influence on the outcome of the battle between different technology solutions, but also on the development and shaping of the technologies themselves.

Factors affecting the selection of technology

In the first stages of the conceptualisation of the new high-speed mobile technology (3G), the intention was to develop one global standard – with the obvious advantages for users in such a scenario. This, however, predictably could not work because of the many economic and strategic interests in the field. The enormous worldwide success of GSM technology points to such a schism between the ideal of a common system and the forces of dissociation. GSM has illustrated the great advantages of having a common system with respect to roaming, costs of production and, therefore, end user prices, but has also shown that a system with a point of departure in one region of the world (in this case Europe) leads to the dominance of certain stakeholders over others. Nokia and Ericsson would not be likely to have had the same position in the world market had it not been for GSM.

Equipment manufacturers and policy makers and administrators all over the world have learned from this experience. In Europe, the lesson has been that a common standard (with a strong European influence) is the best way to go. In Japan, one of the lessons has been that there are problems in being stranded with a purely national standard (PDC). And in the US, the lesson has been that one should not let the Europeans dominate the game with a single standard. Admittedly, these are not entirely new lessons, and they are based on different traditions for standardisation in, e.g., the US and Europe. While there is a long tradition in the US for more inter-standard competition and for a greater degree of market-based de facto standardisation, in Europe, with the strength of the EU, there is not only the old European tradition in favour of de jure standardisation but also an intention to create a Europe-wide single standard in each technology area and to focus on intra-standard competition. These traditions and the lessons from the GSM development have certainly influenced the standardisation of 3G technology.

The mobile operators with existing 2G networks were not entirely enthusiastic about a new 3G system in the first part of the 1990s. At that point in time, 3G was conceptualised as a totally new system, and 2G operators were more concerned with building their 2G networks and – very importantly – beginning to make money on their investments. However, partly based on this reluctance from the mobile operators, 3G began, from the mid-1990s, to be seen not as a revolutionary new system but as an evolutionary development on top of existing 2G systems. The core networks would be the same as in the 2G systems, and only the radio interfaces (and the terminals) would have to be changed. Furthermore, the core networks of the different existing 2G systems (GSM, cdmaOne, etc.) were basically the same, based on the SS7 signalling system, and could be combined with different air interface technologies. The battle between different 3G standards
is thus mainly about new air interfaces and the possible migration pathways from existing interfaces to the new ones.

As explained in the section concerning the history of 3G from an institutional point of view, it was not decided in Europe until early 1998 which air interface would be preferred. However, the major GSM producers, among them Nokia and Ericsson, favoured WCDMA, and with NTT DoCoMo's and ARIB's decision in Japan in 1997 to go for WCDMA, ETSI in Europe also finally decided to go that way. NTT DoCoMo operated a 2G system based on the Japanese PDC standard, while other operators in Japan used cdmaOne technology. The initiative of NTT DoCoMo was partly based on the strategic decision to distance themselves from the other operators in the Japanese 3G environment. Although the Europeans quickly followed suit, there was not necessarily any great enthusiasm about the Japanese decision, as it precipitated the European decision before a well-founded agreement had been reached.

Seen from a technology migration perspective, an important issue is to choose a technological solution which, in an evolutionary manner, builds on the existing technology. GSM is a time division system, while cdmaOne is a code division system. And, even though wideband time division technologies were also considered in Europe in the first part of the 1990s and were part of the proposals examined in ETSI, the general technology road has been to opt for CDMA technology for 3G solutions – although combinations can also be seen, as in the case of the Chinese TD-SCDMA standard. An important reason for choosing CDMA technology is that code division technology is more flexible with regard to the assignment of free capacity on the networks.

An obvious question in this context is whether a migration from time division GSM-based technology to code division technology is more difficult than a migration from cdmaOne to cdma2000 – and, furthermore, whether this could be a technological reason for the swift deployment and take-up of the cdma2000 1x solution as compared to WCDMA. Again a word of caution is necessary, as it is probably more appropriate to compare cdma2000 1x with GPRS than with WCDMA. However, the basic question remains, i.e. whether the migration path to 3G is more demanding from a GSM point of departure than from cdmaOne. This is not the case. The core network is, as mentioned, the same, and the air interface will at any rate have to be changed in order to accommodate wideband data services. With GPRS, an overlay on the GSM networks has already been implemented. With WCDMA, a new overlay network has to be installed. There is, therefore, not just one natural migration path – there is no technological path-dependency at this level – and, in the case of the equipment manufacturers, the selection of a 3G solution is based mainly on other considerations. Their main considerations are related to patent rights and licenses, and two of the main players in this field are Ericsson and the US-based company Qualcomm. These two companies have also been the main contenders in the standardisation battles between WCDMA (Ericsson) and cdma2000 (Qualcomm). Furthermore, production costs and, consequently, end user prices will be affected once a technology path has been chosen. Moreover, the maintenance costs for the different kinds of technology solutions will be important. And in this area, WCDMA
seems to be far cheaper, which is an important reason why the market potential of WCDMA is greater than that of cdma2000.

Factors affecting the deployment of technology solutions

Once a general decision has been taken with respect to technology selection, including a migration path, the decisions of individual operators regarding choice of technology are in most cases fixed. Operators can, in principle, choose to have alternative air interfaces established on top of their existing networks. However, most operators will follow the general migration paths, as these will be the least costly. Some operators will 'cross the lines': there will be cdma operators establishing WCDMA interfaces and vice versa, but this will not be the general picture. As a rule, GSM operators are opting for WCDMA and cdmaOne operators for cdma2000. In this field there is a strong degree of path-dependence. The main reason for this path-dependence is the well-defined migration paths laid out by the specifications. These ensure a smoother transition to the new network solutions. Included in the migration paths is also backward compatibility with existing 2G networks – GSM for WCDMA operators and cdmaOne for cdma2000 operators. This means that users with WCDMA terminals can, in principle, roam not only on other WCDMA networks but also on GSM networks all around. The strength of the GSM technology will, therefore, be transferred to the new WCDMA networks. This also entails a strong degree of path-dependence.

It can be discussed whether there are network effects based on positive feedback mechanisms involved in the battle between the different 3G technologies. Positive network effects are at work if the utility for the users of a network increases with an increasing number of users – with the implication that users derive more utility from joining a larger network than a smaller one. The reason is that if two networks are not interconnected, users will potentially be able to initiate and receive more calls on the larger than on the smaller network. However, if the networks are interconnected, this kind of network effect will not arise – and this is the case with different telephony networks: calls are transferred from one network to the other based on interconnection between the networks. This, however, is not a 'sure thing' with all kinds of communications between users. Some data services may not work in communications between users on different networks, and some services, e.g. information services, may only be accessible from one network because of exclusive agreements between network operators and content providers. In both cases, there will be positive network effects – in the first case a direct positive network effect and in the second case an indirect positive network effect. This also applies, so to say, on an inter-operator level, in the sense that an operator using one technology will benefit from other network operators using the same technology, as their customers can benefit from the roaming possibilities and, therefore, will be inclined to subscribe to their service.
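The logic of such direct network effects can be written out in a stylised form. The following is only an illustration of the mechanism – the functional form and the parameters u_0 and b are our assumptions, not a model taken from the paper:

```latex
% Stylised direct network effect (illustrative assumption, not from the paper):
% u_0 = stand-alone utility of a subscription, b > 0 = value of each reachable user.
U(n) = u_0 + b\,(n - 1)
% The aggregate value of the network then grows roughly quadratically with its size:
V(n) = n\,U(n) = u_0\,n + b\,n\,(n - 1)
```

Under full interconnection, the relevant n is the union of all interconnected networks, so the term b(n - 1) no longer favours the larger operator; the effect survives only for services that are not interconnected, as discussed below.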
The extent to which these network effects will play a role in the 3G markets is not yet clear. In the 2G markets, there are, as mentioned, no network effects of this kind, as voice can be exchanged between the operators. The only issue is coverage, which to some extent is related to the number of subscribers and certainly to the roaming agreements that operators have. In the 3G field, however, this is different, or intensified, as there are issues relating to communicative data services, information services and roaming, which are the basis of some degree of network effects. Network effects are thus stronger on 3G networks than on 2G networks. However, it does not seem likely that these network effects are so strong that they will lead to a winner-takes-all situation. Even in countries where different standards are battling directly against each other, it is likely that there will be some kind of co-existence of the different standards. Furthermore, on a global scale, different standards in different countries will co-exist. It is only in some markets that there will be a direct battle between WCDMA and cdma2000 and where positive network effects in a national context will play a role. However, in the international context, the international roaming possibilities may play a role – if the choice of technology is not already made by the path-dependent evolution from, for instance, GSM.

In Japan and South Korea, there is or will be a competitive situation between WCDMA and cdma2000. In Europe, WCDMA solutions totally dominate the picture because of the path-dependent development from GSM and because there has been strong political pressure to opt for the WCDMA way. However, at the 'fringes' there are some cdma2000 systems being established. This applies to the Scandinavian countries, where Nordisk Mobiltelefoni has acquired licenses for operating cdma2000 systems in the 450 MHz band. It also applies to the Czech Republic, where Eurotel – which already operates a GSM network as well as an NMT-450 network – has launched a cdma2000 1x EV-DO network for data communications (EMC Market Data n.d.). In the US, there will also be a battle between different high-speed mobile networks, but in this case EDGE will play an important role, as there is a lack of vacant IMT-2000 frequencies in the US.

In Europe, it is to a large extent a strategic and politically influenced initiative which forms the basis for the deployment of WCDMA (UMTS) networks. This does not mean that it has been an irrational choice, but there is no question that UMTS has been heavily politically promoted in Europe. Even though the 3G licenses that have been awarded have, in principle, been technology neutral (to a large extent after pressure from the Americans), only UMTS solutions have received a license. Another example of political intervention is the South Korean decision to distribute two WCDMA licenses in spite of the development of cdma solutions in the South Korean market. Thus, policy decisions also play a role in the deployment of 3G solutions.

Finally, there are a variety of important economic aspects to take into consideration. One of them concerns the advantages of large-scale production and the building up of broadly expanding competences in the field. This may lead to an advantage for the most widespread system – which in all likelihood will be the WCDMA system. However, the production volume of cdma2000 equipment will also be so significant that this potential advantage will probably not be important.
Actually, Qualcomm has sold equipment very cheaply to KDDI in order to promote cdma2000 technology in Japan. Last but not least, it should be mentioned that the very high 3G license fees in some European countries have contributed to the setback of the mobile sector in Europe and have had a part in the slow start for 3G developments in Europe. This has probably contributed to holding back WCDMA developments. However, the question could be raised whether this has had as great an importance as the European mobile sector would have us believe. Maybe the slow start has as much to do with the lack of relevant services offered to the users on 3G systems.

Factors affecting take-up

With the main question of this paper in mind – who will win? – it is important to include the actual take-up in the analysis. A standard/technology can be perfect and it can be offered widely by suppliers, but this is of no great importance if it is not taken up by users. The question in this context, however, is whether there are any differences in this regard between the different 3G standards. The important factors affecting take-up in this field are the general ones, i.e. availability and quality of services and prices, which in this field more specifically include availability of handsets, backward compatibility with 2G networks, roaming possibilities, and other users with whom to communicate. With respect to telephony, there are no problems with the last-mentioned issue, as 3G voice services are interconnected with other networks offering telephony services. However, this is a crucial issue with respect to other communicative services, video telephony for instance, as the value of such services is a function of the number of other users (the network effect discussion).

The availability and quality of services is of prime importance and has not really been solved in the case of 3G networks. The remarkable success of the i-mode service in Japan has pointed to the centrality of easy access to a variety of relatively cheap services. However, the success of this service has probably also held back the development of the NTT DoCoMo 3G FOMA service. For users to shift from i-mode to FOMA there must be significant advantages, and these advantages are apparently not obvious to the great mass of users. In Europe, the development of mobile data services has not been nearly as impressive as in Japan (a comparative analysis of the development of mobile data services in Europe, Japan and South Korea can be found in Henten et al. 2004). Nevertheless, a similar issue regarding significant advantages of new services is on the agenda. The operator '3', which has been the most active operator in Europe with respect to launching 3G services, has obvious problems in convincing prospective customers that there are clear gains in switching to its 3G service. The most highly profiled new service which it is advertising is video telephony. The question could be raised whether users will really demand this service, but there is presently the additional problem that video telephony is only interesting if someone else also has a video phone.
There is a lack of a real killer application (or killer applications), and the present strategy of '3' has therefore become to offer low voice tariffs, often lower than those of most 2G operators. The other operators having acquired 3G licenses in Europe have been very slow to commence offering 3G services. The main reason is that they have difficulties in finding the applications and services that will kick-start the market and in developing the appropriate business models. Furthermore, GPRS has just started to develop in the European markets, and although GPRS could be seen as a road towards a 3G environment, it may also – as in the case of i-mode – hold back the development of 3G offerings.

WCDMA has been conceptualised as a multimedia service, encompassing many different service categories in addition to voice. In the case of cdma2000, the picture is more diversified in the sense that, for instance, cdma2000 1x EV-DO is specified for data services specifically. This means that operators can set up networks for specific usages, as in the case mentioned of Eurotel in the Czech Republic setting up an EV-DO network in parallel with its GSM and NMT networks. In the US, the vision of the future mobile environment is much more oriented towards data services – home office services – than in Europe. This does not mean that there is no mobile multimedia vision for the cdma2000 path. However, it remains to be seen whether the step-by-step approach of the cdma community or the multimedia strategy of the WCDMA community presents the most successful business opportunities.

The availability of a variety of high-quality handsets is also important for user take-up. In this regard, UMTS networks in Europe have not been well positioned. Very few UMTS handsets have been on the markets, and in the beginning they were rather clumsy. In contrast, for some time there has been a much bigger variety of handsets available for cdma2000 customers. In the longer term, a multitude of handsets for WCDMA networks will reach the markets, but the lack of attractive handsets has been holding back the market. Finally, backward compatibility, roaming possibilities and other users with whom to communicate, using new and enhanced data and voice services, are equally important for the different 3G offerings. The only area where there may be significant differences between the different technology solutions is the roaming issue, where WCDMA will have an advantage because of the relationship between GSM and WCDMA.
Conclusions

The main question of the paper is whether it is possible to point at one of the existing 3G technologies as the one that will dominate the markets in the coming years. An important background for this question is that for a period of time cdma2000 technology has been doing better in the global market than WCDMA technology. The answer in the paper, however, is that WCDMA will in all likelihood dominate the markets, but that there will be a co-existence of different solutions – also inside the countries where more than one solution is implemented.
In Japan and South Korea, WCDMA and cdma2000 systems will co-exist. In the US, different solutions will likewise co-exist, and EDGE will be one of them. In China, there will also be different standards applied in the market, with the special Chinese standard, TD-SCDMA, as one of them. We are, therefore, not witnessing an evolving winner-takes-all game; but WCDMA is the likely candidate to become the dominant standard.

The prime reason for the likely dominance of WCDMA is not that it is a better solution than the one provided by the cdma2000 family. The prime reason is that a migration path from GSM to WCDMA has been constructed and that this pathway leads to a path-dependent development for most GSM operators towards WCDMA. WCDMA can, therefore, build on the strength of the GSM system. There may, indeed, be other factors pointing in the direction of WCDMA – e.g. the differences in the costs of maintenance of WCDMA systems and cdma2000 systems respectively. However, the most important reason is that there has been a strong community of interests deciding to opt for WCDMA – primarily the European institutions ETSI and the European Union and the European-based mobile equipment manufacturers, plus the subsidiary DoCoMo of the Japanese incumbent NTT and the industry and standardisation organisation ARIB. Strategic interests, resulting in policy interventions, have thus been strong in this area. But once a decision was taken and the migration paths from 2G to 3G developed, technology path-dependence became important.
References

Bekkers R (2001) Mobile Telecommunications Standards. Artech House, London
Darling A (n.d.) China Crisis. http://www.telecoms.com
EMC Market Data (n.d.) Eurotel to launch CDMA-450 1xEV-DO. http://wcis.emc-database.com/
Gandal N, Salant D, Waverman L (2003) Standards in Wireless Telephone Networks. Telecommunications Policy 27: 325–332
Henten A, Olesen H, Saugstrup D, Tan S-E (2004) New Mobile Systems and Services in Europe, Japan and South Korea. info 6, 3: 197–207
Holma H, Toskala A (2001) WCDMA for UMTS. Wiley, Chichester
Hommen L (2003) The Universal Mobile Telecommunications System (UMTS): Third Generation. In: Edquist C (ed) The Internet and Mobile Telecommunications System of Innovation. Edward Elgar, Cheltenham, pp 129–161
Lembke J (2001) Harmonization and Globalization: UMTS and the Single Market. info 3, 1: 15–26
Min Lee K (2002) Modelling Regional Differences in the 3G Mobile Standardization Process: The Entrepreneur, the Committee, and the Investor. Communications & Strategies 47: 11–32
Northstream (2003) Operator Options for 3G Evolution. Northstream, Stockholm. http://www.northstream.se
Park H-Y, Chang S-G (2004) Mobile Network Evolution Toward IMT-2000 in Korea: A Techno-Economic Analysis. Telecommunications Policy 28: 177–196
Schiller J (2003) Mobile Communications. Addison-Wesley, Boston
Sehier P, Gabriagues J-M, Urie A (2001) Standardization of 3G Mobile Systems. Alcatel Telecommunications Review, 1st quarter, pp 11–18
Steinbock D (2002) What Happened to Europe's Wireless Advantage? info 4, 5: 4–11
Telestatistics (n.d.) 2nd half 2004. http://www.itst.dk
Toskala A (2001) Background and Standardization of WCDMA. In: Holma H, Toskala A (eds) WCDMA for UMTS. Wiley, Chichester, pp 39–40
Walke B, Seidenberg P, Althoff MP (2003) UMTS: The Fundamentals. Wiley, Chichester
List of most important abbreviations

3GPP  Third Generation Partnership Project
3GPP2  Third Generation Partnership Project 2
ACTS  Advanced Communications Technologies and Services
ARIB  Association of Radio Industries and Businesses
cdma2000  code division multiple access 2000
EDGE  Enhanced Data Rates for GSM Evolution
ETSI  European Telecommunications Standards Institute
GPRS  General Packet Radio Service
GSM  Global System for Mobile Communications
IMT-2000  International Mobile Telecommunications for the year 2000
ITU  International Telecommunication Union
PDC  Personal Digital Cellular
RACE  Research and Development in Advanced Communications Technologies for Europe
TD-SCDMA  Time Division Synchronous CDMA
TDMA  Time Division Multiple Access
UMTS  Universal Mobile Telecommunications System
WCDMA  Wideband Code Division Multiple Access
WLAN  Wireless Local Area Network
Architectural, Functional and Technical Foundations of Digital Rights Management Systems

Vural Ünlü, Thomas Hess
Institute for Information Systems and New Media at the Munich School of Management, Germany
Abstract

In this paper, the architectural, functional and technical foundations of DRMS are analysed, and some classes of application where these technologies may be effective in counteracting the threat of digital piracy are considered. However, it should be cautioned that this analysis may soon become outdated, since technologies and market demand are undergoing rapid development. Nevertheless, the following conclusions may be offered. DRMS have the potential to permit the implementation of comprehensive protection schemes by combining and integrating three core technologies: encryption, digital watermarking and rights expression languages, for preventive (i.e. access and usage control) and forensic (i.e. prosecution of copyright infringement) purposes. Additional billing functionalities facilitate the use of progressive, usage-based revenue models. License registration procedures also provide media companies with detailed customer information. At least from a technological perspective, the recording and evaluation of end consumer information permits economically appealing forms of price differentiation. From an architectural perspective, DRMS can be classified as supplier-side (back-end) or user-side (front-end) solutions. Front-end clients can be software-based, hardware-based or hybrid systems, depending upon the desired level of protection.
Introduction

Since digital data can be easily and cheaply copied, reproduced and disseminated without information loss in global communication networks, the economic basis of the end-customer-based media industry is threatened. This is especially relevant for the music segment, where declining sales are to a significant extent attributable to copyright infringements. According to the International Federation of the Phonographic Industry (IFPI), the international body representing the recording industry, worldwide turnover in this segment declined by about 10.9% in the first half of 2003, mainly due to end consumer and commercial piracy (International Federation of the Phonographic Industry 2003). The content industry is determined to address this critical situation by seeking technological methods of preventing the uncontrolled redistribution of content, so as to safeguard sustained sources of direct revenue. Digital Rights Management Systems (henceforward referred to as DRMS) are technical installations which have the goal of specifying, enforcing and managing digital rights over multimedia assets. In addition to simply deterring piracy, advances in digital technology, in both hardware and software, will allow content providers to exert much finer control over media products and to develop new business models for digital content (Ünlü 2005), e.g. metered-usage revenue models and improved forms of price differentiation.

The aim of this article is to develop a functional and technical reference model by which different forms of DRMS solutions can be classified in a consistent framework. For this reason, we will first describe three representative solutions in the DRMS marketplace and then the functional and technical scope of DRMS in general. This classification scheme can then be used to classify existing DRMS solutions. The framework should be helpful for media companies that seek to identify an appropriate DRMS implementation. This contribution builds on and extends an accepted paper in Wirtschaftsinformatik by Hess and Ünlü (2004) and the research results of a market analysis of DRMS solutions by Fränkl and Karpf (2004).
Overview of three DRMS solutions

The market for DRMS solutions has exhibited significant contraction after strong initial growth. Currently, a wide range of DRMS solutions is available on the market, probably reflecting the fact that no single technology or product can fulfil the diverse requirements of the media industry. A detailed market overview is provided, for instance, by Fränkl and Karpf (2004), who present around 40 DRMS in their market research paper. As a starting point for the construction of a reference model, we will present the following three exemplary DRMS solutions: (i) Digimarc's ImageBridge, (ii) Adobe's Content Server and (iii) Microsoft's Reader.
The chosen solutions have a significant market share and also fit into the DRMS classification scheme put forward by Rosenblatt et al. (2002), who distinguish the following three types of "heavyweight" DRM technologies:

• DRM component technologies represent widely installed DRMS based on one key component technology. Examples are Digimarc and Verance in watermarking technologies and Preview Systems in software packaging.
• Single-format DRM solutions are built around one important content file format by the companies that control these formats. The most prominent examples are Adobe's PDF and eBook formats and RealNetworks' RealMedia Secure for streaming content.
• A DRMS framework is a suite of programs that implements comprehensive DRMS technologies and is designed to integrate into the back-end digital media delivery process via software development kits and other interfaces. It can therefore be combined operationally with existing information structures, but is independent of specific DRMS component technologies. Leading providers of DRMS frameworks are the Digital Clearing Service offered by Reciprocal, Rights|System by InterTrust and the Digital Asset Server / Reader suite offered by Microsoft.

Therefore, the three solutions presented below cover the entire competitive DRMS landscape and serve well to formulate a functional and technological reference model.

Digimarc Corporation is considered a market leader in digital watermarking technology, which aims to track and identify unauthorised use of diverse media products, such as commercial and consumer photographs, movies, music and documents. A digital watermark is a software code, imperceptible to the end user, which is embedded inextricably into the content. Digimarc's technologies include a detection application which enables the recognition of these embedded codes and the identification of persons who distribute illicit content. The technology itself is not a remedy against piracy, but it is an important measure to trace leaks or to spot manipulated content, such as "screener" copies of feature films. Digimarc ImageBridge was introduced in 1996 and provides protection of digital images via digital watermarking. Detection applications are integrated into most image editing software, such as Adobe Photoshop, and can be downloaded free of charge, for example for Microsoft Internet Explorer. As a further functionality, the web-crawling service MarcSpider searches for watermarked images on the Internet and reports back to content owners where their images are used. This enables copyright holders to track image usage for proper billing or to pursue forensic enforcement actions, thus protecting content copyrights. Furthermore, ImageBridge includes a Software Development Kit with a fully documented set of APIs that can be integrated into other applications, promoting integration with existing workflows and automation of the watermarking process. It provides flexible and automated batch or on-the-fly watermark insertion and reading capabilities. (Fränkl and Karpf 2004; Rosenblatt et al. 2002)
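Digimarc's embedding and detection algorithms are proprietary and designed to survive compression, printing and cropping. Purely to illustrate the general principle of hiding an imperceptible identifier in pixel data, the following toy sketch embeds a payload in the least significant bits of an image array; all names and the payload format are our illustrative assumptions, and a production watermark is far more robust than this:

```python
import numpy as np

def embed_watermark(pixels: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide `payload` in the least significant bits of an 8-bit image array."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = pixels.flatten()  # flatten() returns a copy, so the original stays intact
    if bits.size > flat.size:
        raise ValueError("image too small for payload")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite the LSBs
    return flat.reshape(pixels.shape)

def extract_watermark(pixels: np.ndarray, n_bytes: int) -> bytes:
    """Read back the first n_bytes hidden by embed_watermark."""
    bits = pixels.flatten()[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

# Example: mark a random greyscale image with an owner identifier.
img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed_watermark(img, b"owner:4711")
assert extract_watermark(marked, 10) == b"owner:4711"
```

A service such as MarcSpider would, conceptually, run the extraction step over crawled images and report any recovered identifiers back to the rights holder.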
Adobe Systems is the market leader for desktop publishing and electronically distributed documents through the Portable Document Format (PDF) and its associated distiller/reader applications. PDF is designed as a read-only format and therefore provides some inherent rights restrictions. With the release of Content Server 3.0, an end-to-end DRMS solution that enables the secure distribution of PDF eBooks, Adobe is mainly targeting libraries for the protection of digital content, but also corporate clients. The Content Server 3.0 functionality makes it possible for libraries to loan and disseminate eBooks to patrons. Librarians and companies can also use this solution to offer digital subscriptions to PDF content to consumers, employees or other interested parties. The lending feature enables distributors to monitor the media consumption of the people who access content. Libraries are able to instantly offer existing eBooks and other digital information to patrons. The system supports automatic check-in and check-out procedures and can be integrated with existing catalogue systems, thus simplifying administration and saving process costs. Patrons use a simple web-based application to receive and check out eBooks, but do not need to be online to read them. Libraries can also define granular usage restrictions, so that eBooks expire after a certain time period or on a specified date. When the license period expires, the eBook is automatically disabled on the patron's client and returned to the library catalogue. Based on a simple browser-based interface, Adobe Content Server merges key functions such as encryption, packaging and distribution of eBooks over the web. The solution can be integrated with existing information systems, enabling rights owners and distributors to protect intellectual property and control digital rights. It ensures protection for copyright holders by allowing them to define rights and permissions, including whether content can be copied, printed, or lent to other readers. (Rosenblatt et al. 2002)

The Microsoft Reader is rendering software for textual content with the ".lit" extension and includes some DRMS features. The Microsoft Reader offers three levels of protection (Rosenblatt et al. 2002; NN n.d.):

• Sealed eBooks offer a basic level of security by means of encryption, which prevents the modification of the original textual and other content. They thus ensure the integrity of the content by guaranteeing that the received file has not been modified after creation.
• Inscribed eBooks are first sealed and then further encrypted by the Microsoft Digital Asset Server (DAS). An inscribed eBook always includes information about the purchaser on the first page, usually the purchaser's name. By inserting the name of the purchaser in the file, it is assumed that this person will be less inclined to distribute copies over the Internet, since this illegal activity can be traced back to him.
• Owner Exclusive eBooks, sometimes called "premium content", are first inscribed, and then an encrypted license is added by the DAS to allow only the legitimate purchaser to access them. This security level requires that the end user's copy of Microsoft Reader is "activated" to purchase and read Owner Exclusive eBooks, by linking the Microsoft Passport account with the specific copy of Microsoft Reader on the consumer's client.
hardware identification of the end user's computer, so that purchased eBooks cannot be opened on other computers, thus rendering any copying of the file worthless. In addition to this function, Owner Exclusive eBooks also include further use restrictions: both the copy function to the clipboard and the text-to-speech capability are deactivated, and the print button is eliminated. It is the Owner Exclusive eBooks that have “genuine” DRM protection. These are generally commercially published books, sold through online bookstores.
Based on the analysis of these and other products in the market, we can discern four main functions that DRMS can support: access control (e.g. when purchasing an Owner Exclusive eBook for Microsoft Reader), usage control (e.g. the use restrictions that can be defined in Owner Exclusive eBooks), billing (e.g. as provided by Adobe's Content Server for patrons) and prosecution of copyright infringements (e.g. through Digimarc’s MarcSpider web-crawling technology), based on the core technologies of encryption, digital watermarking and rights expression languages. In the next section we generalise from these individual products and induce a functional and technical reference model of DRMS.
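As a compact recap of this product analysis, the following sketch arranges the four functions and the examples just named in a simple Python data structure. The structure, the names and the one-to-one pairing of each function with a core technology are purely illustrative simplifications, not part of any vendor's API.

```python
# Illustrative summary of the four DRMS functions identified in the text,
# each paired with an example product and one associated core technology
# (a simplification; in practice the technologies interact).
DRMS_FUNCTIONS = {
    "access control": {
        "example": "purchasing an Owner Exclusive eBook (Microsoft Reader)",
        "core_technology": "encryption",
    },
    "usage control": {
        "example": "use restrictions in Owner Exclusive eBooks",
        "core_technology": "rights expression languages",
    },
    "billing": {
        "example": "Adobe Content Server loans and subscriptions",
        "core_technology": "rights expression languages",
    },
    "prosecution of infringements": {
        "example": "Digimarc MarcSpider web crawling",
        "core_technology": "digital watermarking",
    },
}

for function, details in DRMS_FUNCTIONS.items():
    print(f"{function}: {details['example']} [{details['core_technology']}]")
```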
Architecture of a DRMS

As stated in the introductory part, the main objective of DRMS is to promote the authorised consumption of digital works through the controlled distribution of content. Hence, DRMS must provide access control and usage control functions. While the former aims at restricting access to content (by controlling who can use it), the latter aims at controlling the usage mode (by determining how the content can be used). Both functions depend upon rights information, which can be defined with different levels of granularity, permitting the end user's front-end client to execute authorised operations on the content. If these preventive measures fail or are deliberately not implemented, forensic functions must be available to identify copyright infringements or infringers. In addition to fulfilling these security objectives, DRMS should provide new options for revenue models. DRMS can accomplish this with a billing function that records usage information on the end user side and reports this information to a royalty management system. This function can also permit price differentiation for groups or even individual users. Fig. 1 illustrates a simplified DRMS logical design.
Although DRMS vary widely depending upon their purpose and function, most existing DRM solutions share the same architectural principles. On the content supplier side, a back-end DRMS is necessary to encode media products and add meta-information before delivery to the end user. On the end user side, a front-end DRMS is required to enforce the security objectives (see Fig. 2). Another common architectural feature is that DRMS generally use different distribution chains for distributing encrypted content than for distributing usage rules and decryption keys.
[Fig. 1 depicts the logical data flow among the following components: content library, cleared content, access control, usage control, license data, billing, billing data and prosecution of infringements.]
Fig. 1. Data flow in DRMS (logical perspective) (Hess and Ünlü 2004)
[Fig. 2 depicts the physical data flow between the back-end DRMS and the front-end DRMS: a content server (content repository, encryption/DRM packager, metadata and product info) delivers the content package; a license server (encryption keys, rights, identities, encryption/DRM license generator) issues the license; at the client, the DRM controller receives the user's identity, the license and the financial transaction, and passes decrypted content to the rendering application.]
Fig. 2. Data flow in DRMS (physical perspective) (Rosenblatt et al. 2002)
From a more detailed perspective, the overall physical DRMS architecture consists of three major components: a back-end content server, a license server and a front-end client. The content server distributes the content electronically following secure packaging, and typically includes a content repository, i.e. a media asset database that stores the content together with associated metadata. A DRM packager is usually also required; it prepares the content for secure distribution (e.g. by encoding the content and inserting marked metadata) and triggers the generation of encryption keys to authenticate users and to decrypt content, before transmitting the information to the end user. The license server contains information that identifies the digital content, defines the rights grant associated with the content and establishes the usage terms for the exercise of rights, which are bound either to a user or to a device. At the front-end client, the DRM controller receives the user’s request to exercise rights over a specific media product, gathers information about the identity of the user, obtains a license from the license server, authenticates the application that exercises the rights, retrieves the encryption keys and decrypts the content for the appropriate rendering application. Front-end DRMS clients may be hardware-based, software-based or hybrid systems which combine both approaches (a sketch of the controller sequence follows this list):
• Hardware-based DRM solutions (e.g. DVD players, smartcards and conditional access systems) embed the technological protection in the hardware itself. This provides a high level of protection, since encryption and decryption take place in a closed (trusted) hardware environment, and it is very difficult to access the decrypted data flows that are necessary to produce pirate copies.
• Software-based DRMS are designed to ensure the secure delivery of content, primarily over the Internet. They also enforce usage terms on general-purpose computers (e.g. PCs or Macs) and in home-network environments. However, software-based DRMS are easy to circumvent by means of special debugger and disassembler software. Therefore, the successful employment of DRMS requires user client software that performs integrity checks, decrypts the content and enforces the usage rights associated with the digital content. In spite of the lower level of security, software solutions are often preferred, since it is much cheaper to generate, distribute and upgrade software than hardware.
• Hybrid DRMS solutions are exemplified by dongle-based software protection systems, where modules of the actual software are stored in a hardware key. Without the correct key, the software will not work. Existing hybrid solutions include USB, parallel port and serial port dongles, with features such as programmable memory, remote updating, lease control algorithms and counters (Fränkl and Karpf 2004).
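To make the controller sequence concrete, here is a minimal, self-contained Python sketch of the flow just described. All class and method names (LicenseServer, exercise_right and so on) are invented for illustration, and the XOR routine merely stands in for a real cipher; no actual DRM product exposes such an interface.

```python
from dataclasses import dataclass

@dataclass
class License:
    user_id: str
    content_id: str
    granted_rights: frozenset   # e.g. frozenset({"view", "print"})
    decryption_key: bytes

class LicenseServer:
    """Issues licenses binding rights and keys to a user (or device)."""
    def __init__(self, grants):
        self._grants = grants   # (user_id, content_id) -> License

    def request_license(self, user_id, content_id):
        return self._grants.get((user_id, content_id))

class DRMController:
    """Front-end controller: identity -> license -> decryption -> rendering."""
    def __init__(self, license_server):
        self._license_server = license_server

    def exercise_right(self, user_id, content_id, right, ciphertext):
        lic = self._license_server.request_license(user_id, content_id)
        if lic is None or right not in lic.granted_rights:
            raise PermissionError(f"'{right}' not licensed for {user_id}")
        # Decrypted content is handed to the rendering application.
        return self._decrypt(ciphertext, lic.decryption_key)

    @staticmethod
    def _decrypt(ciphertext, key):
        # Placeholder XOR "decryption"; a real client would use a cipher.
        return bytes(c ^ key[i % len(key)] for i, c in enumerate(ciphertext))

key = b"k3y"
server = LicenseServer({("alice", "song-1"):
                        License("alice", "song-1", frozenset({"play"}), key)})
controller = DRMController(server)
encrypted = bytes(c ^ key[i % len(key)] for i, c in enumerate(b"audio data"))
print(controller.exercise_right("alice", "song-1", "play", encrypted))
```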
Functions of DRMS

Access control

Two types of access control can be identified. The first aims to ensure that only entitled entities (persons or devices) receive access to original media products. The second type of access control is used to prevent persons or devices from gaining access to illegal content.
In the first type of control, access to legal content can be restricted to entitled entities and also restricted with regard to time and location. Typically, this type of access control involves three steps: (i) authentication, where the communicating parties prove that they really are who they claim to be; (ii) authorisation, where a policy database verifies whether an authenticated entity is entitled to access certain data or services; and (iii) access, which is granted or denied based on the previous authorisation check. The most critical step is the authentication procedure, which is associated with several technological approaches (Arnold et al. 2000; Schneier 2001): information systems can authenticate identities and validate access privileges based upon whether the user knows something (e.g. a password), whether the user is someone (demonstrated by biometrics), or whether the user possesses something (e.g. tokens).
• User ID/password combinations represent the simplest authentication technique and are subject to a variety of limitations. For instance, they can be forgotten by the authorised user and obtained by illegitimate users by means of theft or guesswork.
• Physical/digital tokens eliminate the need to remember passwords. Instead, the possession of a token attests the holder’s identity. However, as with passwords, tokens can be lost by their legitimate holders and may be obtained by unauthorised persons.
• Biometric methods include measurements of the user’s face, eye, finger, palm, hand geometry or other body features. Analyses of the user’s voice or handwritten signature are also possible. Combinations of these techniques can enhance security, but tend to reduce user convenience.
Furthermore, all authentication methods represent trade-offs among various levels of security and implementation costs. For instance, password-based systems are easy and relatively inexpensive to implement, but can be readily circumvented due to the ease of passing on the alphanumeric string. At the other extreme, biometric procedures can unambiguously identify a user by means of biological traits, but the implementation costs for media applications are prohibitively high. However, the most important trade-off is between acceptance errors (where illegitimate users are accepted) and rejection errors (where legitimate users are rejected). The stricter the authentication procedure, the more rejection errors and the fewer acceptance errors are to be expected. The three-step sequence is sketched below.
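The three-step sequence can be illustrated in a few lines of Python. The password store, policy table and function names below are invented for the example, and a real system would at minimum salt its password hashes.

```python
import hmac
from hashlib import sha256

# Toy credential and policy stores; purely illustrative.
PASSWORD_HASHES = {"alice": sha256(b"correct horse").hexdigest()}
POLICY = {("alice", "ebook-42"): True}

def authenticate(user, password):
    """Step (i): prove identity, here via a stored password hash."""
    expected = PASSWORD_HASHES.get(user)
    supplied = sha256(password).hexdigest()
    return expected is not None and hmac.compare_digest(expected, supplied)

def authorise(user, content_id):
    """Step (ii): consult the policy database for an entitlement."""
    return POLICY.get((user, content_id), False)

def access(user, password, content_id):
    """Step (iii): grant or deny, based on the two previous checks."""
    if authenticate(user, password) and authorise(user, content_id):
        return "access granted"
    return "access denied"

print(access("alice", b"correct horse", "ebook-42"))  # access granted
print(access("alice", b"wrong guess", "ebook-42"))    # access denied
```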
The second type of access control can be realised by means of access filters or blocking mechanisms, which prevent access to illegally copied content (Köhntopp et al. 1997). Established techniques include DNS redirection, URL blocking and proxy filtering. An example of a proposed media filtering system for the music industry is the Rights Protection System, in which blocking mechanisms were to be implemented by German Internet Service Providers (ISPs) with connections to international data networks, in order to prevent access to illegal foreign content. It failed due to the resistance of ISPs and technical implementation problems.

Usage control

The application of access control alone has been shown to be insufficient for preventing the unauthorised copying and distribution of content. Once a user obtains possession of the original, unprotected content, the content owner loses control of how the content can be used, in a potentially hostile user environment. For this reason, it is also necessary to monitor the use of media products in the private sphere of the end user. It is necessary to ascertain which operations the user is entitled to perform on a media product. In particular, unauthorised copying and circulation must be prevented. Therefore, permissible operations must be cleared in advance by a special, authorised front-end DRMS on the user side. A precondition for this is that the front-end DRMS is informed about the usage terms for a specific media product. This task is handled via rights management information, which is added as metadata to a media product by a back-end DRMS before delivery to the end customer. If the content is secured by means of device binding, the meta-information is associated primarily with the front-end client. In the case of person-bound usage, the meta-information is located primarily at the back-end DRMS. Hybrid utilisation concepts, e.g. iTunes, use a combined approach.
As illustrated in Fig. 3, rights models provide for the granting of three basic forms of licensing rights (Rosenblatt et al. 2002; Stefik 1996):
1. The right to render content (print, view and play)
2. The right to transport content (copy, move and loan)
3. The right to produce derivative works (extract, edit and embed)
For instance, the printing and viewing of a document on the screen (a positive rendering right) can be permitted, while the transfer to another consumer is prevented by the prohibition of local storage (a restriction of transport rights). The simplest usage control systems employ a straightforward copy protection mechanism, such as the Digital Audio Tape (DAT) or DVD standard. However, the purpose of DRMS is not to prevent copying absolutely, but rather to monitor and control copying. Together, the access control and usage control functions restore the excludability of consumption and prevent media products from becoming public goods.
[Fig. 3 depicts the three categories of content rights with their typical operations: the right to render content (print, view, play), the right to transport content (copy, move, loan) and the right to produce derivative works (extract, edit, embed).]
Fig. 3. Classification of content rights (Rosenblatt et al. 2002)
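The classification in Fig. 3 translates naturally into a small usage-control check. The sketch below is a minimal illustration with invented names; it does not reproduce the rights model of any actual product or rights expression language.

```python
# The three right categories and their operations, following Fig. 3.
RIGHT_CATEGORIES = {
    "render":     {"print", "view", "play"},
    "transport":  {"copy", "move", "loan"},
    "derivative": {"extract", "edit", "embed"},
}

def is_permitted(granted: set, operation: str) -> bool:
    """Usage control: allow an operation only if it is explicitly granted."""
    known = set().union(*RIGHT_CATEGORIES.values())
    return operation in known and operation in granted

# Example from the text: viewing and printing allowed, local copying not.
grant = {"view", "print"}
print(is_permitted(grant, "view"))   # True  (positive rendering right)
print(is_permitted(grant, "copy"))   # False (transport right withheld)
```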
Metered-usage billing

Through direct end user contacts, in addition to controlling access and monitoring the usage of content, DRMS can also assist in metering most uses of a digital work (e.g. pay-per-view, pay-per-click, etc.). This is related to a media industry approach that involves charging not for the possession, but merely for the use of information goods (Rosenblatt et al. 2002). In this scenario, the consumption of media products could be handled similarly to the use of gas and electricity, resulting in a win-win situation for all stakeholders. Consumers of media products would pay only for the exact amount of usage, and would have an incentive to acquire media products selectively and in small amounts. Furthermore, the unbundling of works into discrete, customised products offers greater choice and a reduction in content prices for consumers. Content providers hope to capture consumer rent through price differentiation. Privacy concerns aside, legislators evaluate this price differentiation potential as a welfare-enhancing mechanism.
From a technical perspective, usage-based billing requires a close interconnection between the system components on the supplier and the user side. In the ideal case, DRMS can log detailed content usage in real time and report this information over a back channel to the billing system of the supplier. In addition to the logging function and back channel capability, a further requirement is the integration of eCommerce and secure payment systems (Sadeghi and Schneider 2003). Openness with respect to existing and new business models is also desirable.
In the superdistribution concept described by Ryoichi Mori and Brad Cox, consumers become sales staff for the content provider and in this role can sell media products to interested persons in networks (e.g. within P2P architectures), generating license payments for the authors as well as sales commissions for themselves (Mori and Kawahara 1990; Cox 1996). Fig. 4 illustrates this multi-tiered distribution concept, using the example of book content. In the first step, the book buyer
licenses a digital book from the publisher with certain usage terms. The book buyer can then resell the book for the original selling price as many times as desired. The first book buyer and all subsequent book buyers may then sell the book again. The publisher receives a percentage of the book price for all transactions that occur in the superdistribution scheme.
[Fig. 4 diagrams the superdistribution flows among author, publisher and successive book buyers: royalties flow to the author; the price of the book flows to the publisher; on each resale the selling buyer keeps the price of the book minus X%, while X% of the book price flows back to the publisher. The attached rights specify rendering (display on output device, print), extent (direct, distributor, domestic, foreign, book club, or indefinite) and derivatives (translation, anthology, excerpt).]
Fig. 4. Superdistribution of book contents (Rosenblatt et al. 2002)
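The revenue split of the superdistribution scheme is easy to make concrete. The sketch below assumes, purely for illustration, a constant book price of 10 and a publisher share of X = 20%, and follows one plausible reading of Fig. 4 in which the first sale goes entirely to the publisher.

```python
# Superdistribution cash flows for a chain of resales (illustrative figures).
BOOK_PRICE = 10.0   # assumed selling price, constant along the chain
X = 0.20            # assumed publisher share of every resale

def settle_chain(n_resales: int):
    """Return (publisher_total, per_reseller_commission) for n resales."""
    # First sale goes to the publisher; each later sale is split between
    # the reselling buyer and the publisher.
    publisher_total = BOOK_PRICE + n_resales * X * BOOK_PRICE
    reseller_commission = (1 - X) * BOOK_PRICE
    return publisher_total, reseller_commission

publisher_total, commission = settle_chain(n_resales=3)
print(publisher_total)   # 16.0 -> 10 from the first sale plus 3 * 2.0
print(commission)        # 8.0 kept by each reselling buyer
```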
Superdistribution may appear to be a remote, futuristic concept, since it combines conflicting techniques that are used to enforce (DRMS) and violate (P2P) classical copyright legislation. However, a combination of the technological protection measures of DRMS, the distribution possibilities of P2P networks and clever incentive mechanisms may result in new, attractive business models (Gehrke and Anding 2002).

Prosecution of copyright infringements

Although usage restrictions certainly represent a curtailment of the usual usage possibilities, for certain media products and customer groups they nevertheless offer an effective means of protection. However, it will never be possible to implement complete protection. Even if the technological protection measures can remain a step ahead of the attack techniques and tools of the hacker community, the fundamental “problem of the analogue hole” will continue to exist. This refers to the possibility of digitising high-quality analogue copies and distributing at least one copy in media networks, with the resulting snowball effects (due to the “problem of the digital hole”). Thus, sooner or later the availability of unauthorised copies must be expected. Therefore, with the aid of DRMS, content providers must trace and prosecute unauthorised uses and users of works subject to copyright. Accordingly, the activity radius of content providers must extend not only to preventive but also to reactive measures. Although this function does not directly
prevent copyright infringements, it can contribute to reducing infringements by means of a deterrence effect. A precondition for the identification of unauthorised copies is the use of deliberately contrived marks, or the absence of marks as an indication of compromised media products. Again, various approaches can be distinguished here. Labelling and tattooing can be characterised as weak marking techniques. Labelling places copyright information in certain segments of the media product (usually in the header), while tattooing inserts a visible or audible copyright notice in the media product. These procedures have the disadvantage that they are either easy to circumvent, because the meta-information is not hidden, or that the quality of the media product, and with it the corresponding willingness to pay, is sharply reduced. In contrast, steganographic watermarks are “strong” marking algorithms, which allow the embedding of hidden metadata in media products. Such watermarking techniques are passive and require an additional active policing mechanism in order to detect and prosecute infringements. This mechanism may take the form of traditional policing methods (e.g. web-based services, such as those offered by BayTSP and Ranger Online, which do not require a priori watermarking) or sophisticated, watermark-based detection systems. In the latter case, illegally copied media products are identified via an automated Internet searching robot (see e.g. Digimarc’s MarcSpider image tracking), which can track down illegally distributed content or the “traitor” (via digital fingerprints), based on the typical bit patterns of a media product and on marks which are present or absent (Katzenbeisser and Petitcolas 2000).
In summary, it can be argued that in the new media context, the functional scope of media management must be expanded into the private sphere of the end consumer, so that the economic potential does not melt away in the process of digital distribution. In this scenario, content providers create technical property rights beyond the scope of the classical copyright regime (a legislative function), enforce these property rights by their own hand (an executive function) and prosecute infringements (a judiciary function), all through technological means. Content providers thus simultaneously become legislators, policemen and judges in the quest to re-privatise their product base. It is this abolition of the division of power that gives rise to an emotionally charged debate on the limits of the freedom of information and the need to restrict DRMS technologies by law.
Core technologies of DRMS

Access control and usage control are effective only if their circumvention is made as difficult as possible. Encryption algorithms support this effort. If such protection mechanisms fail or are deliberately disregarded, digital watermarking can be used. Both technologies, in addition to the billing function, which is not discussed further, assume extensive knowledge of access and usage rights, which can be described with the aid of rights expression languages. While encryption algorithms represent an established core technology of IT security, digital watermarking and
rights expression languages have been designed especially for media applications. For the sake of simplicity, these three technologies are outlined below in isolation from one another. However, it should be kept in mind that in practice these technologies interact extensively.

Encryption

To protect media products against illegal access, usage or manipulation, the transferred content must be encoded, so that the mere possession and redistribution of encrypted data become worthless to an unauthorised user. Content should at all times be prevented from being accessible in its original, consumable form, except when permitted by the DRMS. Cryptography is designed to make the recovery of plaintext from ciphertext (as well as other available information) computationally infeasible for unauthorised users.
Cryptographic technologies represent the most mature core DRMS technology. They aim to ensure the confidentiality, integrity and authenticity of data or their origins. Such technologies are generally categorised as symmetric, asymmetric or hybrid algorithms. A further differentiation is made between public and secret algorithms (Pfitzmann et al. 2002).
In the case of symmetric algorithms (also referred to as private-key cryptography), the transmitter and receiver use the same key to encrypt and decrypt the content. However, this requires a secure transfer of the key and implies a high cost for the generation and transmission of keys. The established standards include the Data Encryption Standard (DES), Triple DES, the Advanced Encryption Standard (AES) and the International Data Encryption Algorithm (IDEA) (Schneier 2001).
In the case of asymmetric algorithms (also referred to as public-key cryptography), encryption and decryption are performed with the different keys of a key pair (public and private) (Beutelspacher et al. 2001). The data are encrypted using a public key, which is made publicly available. For this reason, a key infrastructure is necessary in order to administer the public keys of the communicating parties. The private key is kept secret by the individual. Encrypted content can be decrypted only by using the corresponding private key. The main advantage over symmetric algorithms is that the security risk associated with key transmission is avoided and key distribution is simplified. However, systems using asymmetric algorithms involve greater computational complexity, which slows down the entire communication process and necessitates a public key management infrastructure. The most prominent examples of such systems are RSA, which takes advantage of the difficulty of factoring large integers, and ElGamal, which is based on the discrete logarithm problem. A concrete application of public-key cryptosystems is the digital signature: the originator of an information object can generate a signature by using a private key to encipher a compressed string derived from the object. The digital signature can provide recipients with proof of the authenticity of the object’s originator (Schneier 2001).
In the case of hybrid algorithms, the information is encrypted symmetrically and the symmetric key is subsequently encrypted asymmetrically (a minimal sketch of this construction follows).
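A minimal sketch of the hybrid construction, assuming the widely used Python cryptography package is installed: the content is encrypted with a symmetric key (Fernet, an AES-based scheme), and that key is in turn wrapped with the receiver's RSA public key. This illustrates the principle only and is not a hardened implementation.

```python
# Hybrid encryption: symmetric content encryption + asymmetric key wrap.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Receiver's asymmetric key pair (the public part would be distributed).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender: symmetric encryption of the content, asymmetric wrap of the key.
content_key = Fernet.generate_key()
ciphertext = Fernet(content_key).encrypt(b"protected media content")
wrapped_key = public_key.encrypt(content_key, OAEP)

# Receiver: unwrap the symmetric key, then decrypt the content.
recovered_key = private_key.decrypt(wrapped_key, OAEP)
plaintext = Fernet(recovered_key).decrypt(ciphertext)
assert plaintext == b"protected media content"
```

Only the short symmetric key passes through the costly asymmetric operation, which is why the hybrid scheme reduces both the computing costs and the key transmission risks discussed in the text.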
The computing costs as well as the security risks involved with key transmission are thereby reduced. One example is the Pretty Good Privacy (PGP) scheme developed by Phil Zimmermann, which combines the RSA public-key algorithm with the IDEA secret-key cipher.
No matter how secure the design of an encryption algorithm may be, the code can in principle be broken by searching the finite key space (a brute-force attack). Hence, with increasing computing power, falling computing costs, higher content value and longer content half-life, expanded key lengths and enhanced performance of the front-end DRMS become necessary.

Digital watermarking

Digital watermarking is a technique for embedding hidden data that uses a secret key to incorporate copyright, customer or integrity information into a media product or any digital object. This provides an indication of ownership of the object and can also include other information that conveys the terms of use. Like encryption, watermarking is an important technology for ensuring both data integrity and data origin authenticity. It is a highly multidisciplinary field that combines cryptography with image and signal processing, communication and coding theory, and visual perception theory (Dittmann 2000). Based on the areas of application, different types of digital watermarking can be categorised as follows.
Visible watermarks place a clearly recognisable copyright mark on a multimedia asset, which makes unauthorised use unattractive and leads to a (sometimes only marginal) quality loss. After the legitimate purchase of a media product, the visible watermarks are removed or replaced with invisible watermarks.
Robust watermarks embed rights-related information in the content and are invisibly and inseparably merged with the work. Such information is used to enforce access and usage rights, and it is necessary for billing purposes. The term “digital fingerprinting” is used to refer to robust watermarks that reveal the identity of the recipient of protected content (Dittmann 2000). This mechanism is intended to act as a deterrent to illegal content dissemination by enabling media companies to identify the original buyers of redistributed copies.
Fragile watermarks serve to demonstrate intactness and integrity, so that manipulations can be recognised. This makes it possible to check whether a media file has been manipulated by attacks. Fragile watermarks should be resistant to normal processing operations (such as compression, scaling, etc.), but should be damaged when unauthorised distortions of the content (e.g. image manipulations) are performed. Fragile watermarks can therefore be used in prosecuting cases of infringement (Dittmann 2000).
Watermarking techniques are prominently used in copyright protection schemes. They can identify copy-restricted content and can be used in connection with the copying and playback controls of rendering devices such as DVD players. For example, DVD players equipped with CSS check for watermarks in motion pictures on recordable DVDs and refuse to play back disks that do not include the necessary watermarks.
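To make the embedding idea tangible, the toy sketch below hides a bit string in the least significant bits of integer samples (e.g. 8-bit pixels). It is deliberately naive: without a key-driven choice of embedding positions it is neither secret nor robust, and the function names are invented for the example.

```python
# Toy least-significant-bit (LSB) watermark for integer samples.
def embed(samples, mark_bits):
    marked = samples.copy()
    for i, bit in enumerate(mark_bits):
        marked[i] = (marked[i] & ~1) | int(bit)  # overwrite the LSB
    return marked

def extract(samples, n_bits):
    return "".join(str(s & 1) for s in samples[:n_bits])

signal = [200, 13, 77, 145, 90, 52, 18, 240]
mark = "1011"
marked = embed(signal, mark)
assert extract(marked, len(mark)) == mark
print(marked)  # each sample differs from the original by at most 1
```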
The most important properties of digital watermarking techniques are quality (security), capacity, robustness, transparency and complexity. It is not possible to optimise all of these parameters simultaneously. For example, robustness is typically obtained at the expense of a considerable reduction in the amount of information that can be hidden in the media file. A variety of state-of-the-art attacks on watermarking systems, which deliberately attempt to impair the watermark without excessively distorting the associated data, have been identified. Four classes of attacks are frequently cited in the technical literature: removal attacks, geometric attacks, cryptographic attacks and protocol attacks. In addition, significant shortcomings have been discovered in various commercial products, such as PictureMarc, SysCoP and SureSign. These weaknesses reveal the limitations of the techniques and demonstrate that digital data hiding technology is still in its infancy (Dittmann 2000). However, it is predicted that watermarking systems will steadily improve through the iterative cycle of proposals, testing and enhancement. Moreover, strong demand for the technology should stimulate the development of more robust solutions.

Rights expression languages

Rights expression languages permit the specification of the scope of usage rights for protected digital content. They define a structure for expressing permissions in a form which can be read by machines (and humans) and can provide a “rights data dictionary” which precisely defines the meaning of the permissions and conditions expressed (Guth 2003). Depending upon the power of the rights expression language, usage rights can be described in a differentiated manner. For instance, the usage period, frequency of utilisation, quality (i.e. image and sound quality), permitted operations (print, view, copy, etc.) and further conditions or restrictions (geographic, linguistic or device-related) can be defined in a granular manner, thus permitting effective usage control (Rosenblatt et al. 2002). In an ideal form, rights expression languages should be able to model every conceivable rights dimension (in both existing and future forms) with regard to all exploitation categories and media forms (e.g. print, audio and motion pictures). Additionally, rights expression languages should express pricing information associated with the exercise of all of the specified rights. The possibility of individualised monitoring and billing of media consumption permits the development of digital, usage-based business models which were inconceivable with analogue media.
Rights expression languages can be either open, in the sense that industry participants are invited to collaborate on and further develop the language, or proprietary. An open, standardised language is a precondition for cross-platform usage. Two examples of established standards are the eXtensible rights Markup Language (XrML), which is promoted by the Organisation for the Advancement of Structured Information Standards (OASIS), and the Open Digital Rights Language (ODRL), developed by the Open Mobile Alliance (OMA). Both are based on the eXtensible Markup Language (XML) and support manifold formats and business-neutral
terms for specifying digital rights, such as unlimited usage, flat-fee sales, territory-restricted usage, pay-per-view, library loans, superdistribution and subscriptions (NN 2002).
Like encryption technologies, rights expression languages are used extensively in DRMS. They support access control through the incorporation of end customer information, granting access only to pre-authorised users. Nevertheless, the primary purpose of rights expression languages is the implementation of flexible usage control and usage-based billing through the provision of rights and pricing information. In order to realise the functional requirements outlined above, the core technologies must be employed in combination with one another, rather than in isolation (Ünlü and Hess 2005). For example, efficient usage control can be achieved only through the combination of all three core technologies.
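To give an impression of what a machine-readable rights expression looks like, the sketch below assembles an ODRL-flavoured permission in Python. The element names are simplified for illustration and do not reproduce the exact ODRL or XrML schemas.

```python
import xml.etree.ElementTree as ET

# Simplified, ODRL-flavoured rights expression: a pay-per-view display
# grant for one asset, limited to five viewings in Germany. Element and
# attribute names are illustrative, not the real ODRL/XrML vocabulary.
agreement = ET.Element("agreement")
ET.SubElement(agreement, "asset", {"id": "urn:content:ebook-42"})
permission = ET.SubElement(agreement, "permission")
display = ET.SubElement(permission, "display")
ET.SubElement(display, "constraint", {"name": "count", "max": "5"})
ET.SubElement(display, "constraint", {"name": "spatial", "value": "DE"})
ET.SubElement(display, "payment",
              {"model": "pay-per-view", "amount": "0.99", "currency": "EUR"})

print(ET.tostring(agreement, encoding="unicode"))
```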
References

Arnold M, Funk W, Busch C (2000) Technische Schutzmaßnahmen multimedialer Daten. In: Dittrich R (ed) Beiträge zum Urheberrecht. Österreichische Schriftenreihe zum gewerblichen Rechtsschutz, Manz, Wien
Beutelspacher A, Schwenk J, Wolfenstetter K-D (2001) Moderne Verfahren der Kryptographie. Vieweg, Braunschweig, Wiesbaden
Cox B (1996) Superdistribution: Objects as Property on the Electronic Frontier. Addison-Wesley, New York
Dittmann J (2000) Digitale Wasserzeichen. Springer, Berlin
Fränkl G, Karpf P (2004) Digital Rights Management Systeme – Einführung, Technologien, Recht, Ökonomie und Marktanalyse. pg Verlag, München
Gehrke N, Anding M (2002) Peer-To-Peer Business Model for the Music Industry. In: Monteiro JL et al. (eds) Towards the Knowledge Society, Proceedings of the 2nd IFIP Conference on eCommerce, eBusiness and eGovernment. Kluwer Academic Publishers, Lissabon, pp 243–257
Guth S (2003) Rights Expression Languages. In: Becker E et al. (eds) Digital Rights Management: Technological, Economic, Legal and Political Aspects. Springer Verlag, Heidelberg, pp 101–112
Hess T, Ünlü V (2004) Systeme für das Management digitaler Rechte. Wirtschaftsinformatik 46(4): 273–280
International Federation of the Phonographic Industry (2003) Global sales of recorded music down 10.9% in the first half of 2003. http://www.ifpi.org/site-content/press/20031001.html, 2003-10-01, downloaded 2003-11-01
Katzenbeisser S, Petitcolas FAP (2000) Information Hiding: Techniques for Steganography and Digital Watermarking. Artech House, Boston, London
Köhntopp M, Köhntopp K, Seeger M (1997) Sperrungen im Internet. Datenschutz und Datensicherheit 21(11): 626–631
Mori R, Kawahara M (1990) Superdistribution: The Concept and the Architecture. The Transactions of the IEICE 73(7): 1133–1146
NN (2002) XrML 2.0 Technical Overview. http://www.xrml.org/reference/XrMLTechnicalOverviewV1.pdf, 2002-03-08, downloaded 2003-11-01
NN (n.d.) Frequently asked questions about Microsoft Reader. http://www.microsoft.com/reader/info/support/faq/general.asp, downloaded 2003-10-03
Pfitzmann A, Federrath H, Kuhn M (2002) Anforderungen an die gesetzliche Regulierung zum Schutz digitaler Inhalte unter Berücksichtigung der Effektivität technischer Schutzmechanismen. Studie im Auftrag des dmmv e.V. und des VPRT e.V.
Rosenblatt B, Trippe B, Mooney S (2002) Digital Rights Management: Business and Technology. M&T Books, New York
Sadeghi A-R, Schneider M (2003) Electronic Payment Systems. In: Becker E et al. (eds) Digital Rights Management: Technological, Economic, Legal and Political Aspects. Springer Verlag, Heidelberg, pp 113–137
Schneier B (2001) Secrets and Lies – IT-Sicherheit in einer vernetzten Welt. Wiley-VCH Verlag, Weinheim
Stefik M (1996) Letting Loose the Light: Igniting Commerce in Electronic Publication. In: Stefik M (ed) Internet Dreams: Archetypes, Myths, and Metaphors. MIT Press, Cambridge, pp 219–255
Ünlü V (2005) Content Protection: Economic Analysis and Techno-legal Implementation. Herbert Utz Verlag, München
Ünlü V, Hess T (2005) The access-usage-control-matrix: A heuristic tool for implementing a selected level of technical content protection. In: Proceedings of the Seventh International IEEE Conference on E-Commerce Technology, pp 512–517
Part 3: Making the Market Fly: Critical Mass and Universal Service
Service Universalisation in Latin America: Network Evolution and Strategies

Arturo Robles Rovalo1,*, José Luis Gómez Barroso2,**, Claudio Feijóo González3,***
* Universidad Nacional Autónoma de México and Universidad Politécnica de Madrid, Spain
** Universidad Nacional de Educación a Distancia (UNED), Spain
*** Universidad Politécnica de Madrid, Spain
Abstract

During the last two decades Latin American countries liberalised and, in most cases, privatised their telecommunications industry. The reform aimed at expanding and maintaining public telephone networks while introducing real and sustainable competition. All in all, the analysis reveals that, although teledensity rates have increased, the results are worse than expected. At the same time, since the start of this decade, governments have become more aware of the fact that access of the whole population to telecommunication services is essential for the economic development of their countries and for the reduction of poverty. Thus, twenty years after the beginning of this process, an analysis of representative Latin American countries' policies and their performance in fixed telephone network evolution makes it possible to provide some guidelines for the design of policies oriented towards the achievement of not just the traditional approach to universal service, but a new service and access universalisation.
Introduction

Throughout the 1990s, privatisation and liberalisation of the telecommunication services industry had become almost a prerequisite to enter “the new race” (Antonelli 2003). Latin American countries, some of them as early as the 1980s, but especially during the 1990s, faced this challenge by trying to adjust the
1 E-mail: [email protected]
2 E-mail: [email protected]
3 E-mail: [email protected]
processes according to their political objectives and national realities. As in any other country which needs to develop its infrastructures, the reforms have tried to strike a balance between the introduction of effective and sustainable competition and compliance with, initially, telephone network deployment objectives. Thus, during the last fifteen years, every country in the region has prepared programmes and created incentives to expand the networks and increase service penetration rates. Although Latin America is sometimes considered a homogeneous region, the truth is that the strategies, and thus the results, have been quite different.
Since the start of this decade, with the advent of the Information Society, states are, if possible, even more aware of the fact that access of the whole population to telecommunication services is essential for the economic development of their countries and the reduction of poverty (World Bank 2002). In parallel, they are also coming to terms with the fact that universalisation programmes must be adapted to the requirements of the new socioeconomic paradigm. Therefore, this seems a good time to describe the different roads that Latin American countries have followed in their fixed network deployment strategies, as well as to assess their successes and draw the appropriate conclusions in view of future actions. The study is limited to four countries, selected because of their socioeconomic and geographic representativeness, as well as the dissimilarity of the mechanisms used. These countries are Brazil, Chile, Mexico and Peru.
This article is structured as follows: section 2 reviews the instruments used for telephone service generalisation; section 3 describes the practices carried out in the four countries; the results obtained for each country are presented in the following section, before ending with the conclusions.
Mechanisms for service universalisation

Generalised access to telecommunication services has been, regardless of the degree of success achieved, an objective of every government during the last century. This suggests that the advantages of a massive connection to telecommunication services have been understood regardless of the political forces in power. One of the main justifications protecting the existing monopolies was their role as providers of a public service. Despite this public service aspect, in most countries the commitment to extending the service was more implicit than explicit. Citizens did not enjoy an individual right to demand telephone service or, from the opposite perspective, telecommunications administrations were not legally bound to provide this service (OECD 1991). Thus, the development of both networks and services has essentially depended on the will of administrations, being subject to political changes and/or
administrative priorities, the sensitivity and interest of the governing class towards the industry, and the degree of general development of each country. Generally, however, network development rarely kept pace with demand, a fact that translated into extraordinarily long waiting lists (Goggin 1998).
In the United States, network deployment was quite regular: residential telephone penetration had exceeded 40% around 1945 and continued, from that moment on, with sustained growth until reaching an asymptote during the early seventies, when 90% of homes were connected (Sawhney 1994; Albery 1995). European countries had to wait until the seventies for service universalisation to really move forward. The investments made during this period should not be extrapolated to the previous decades; hence, the role of the monopoly in the extension of the service is incorrectly considered a necessary historical rule (Noam 1987). In Latin American countries, service penetration was still very low at the time of engaging in the opening-up of the markets. No country reached 10 lines per 100 inhabitants in 1990, while, additionally, that low number of lines was concentrated exclusively in capital and major cities. Waiting lists were enormous, and the time required for being connected to the network could extend to several years.
In the new liberalised environment, universal service appears as an attempt to reconcile the principles of public service with those of the market economy. There is no single comprehensive definition of universal service. There is, however, agreement on the basic core of the concept, which usually covers the national availability of a series of specific services for which non-discriminatory access and generalised economic affordability are guaranteed (ITU 1998). The approach to universal service is quite pragmatic. Despite the relative uniformity of the definition included in most telecommunications legislations, the practical interpretation of universal service differs from one country to another, and even varies within the same country when the context shifts (ITU 1994). This is not new: as we have said, before the modern meaning of universal service in telecommunications appeared, universality objectives had changed through time according to technological development, the level of infrastructure development and the perception of user requirements (Bardzki and Taylor 1998).
If the objectives can be disparate, even more so are the instruments and incentives conceived to achieve them, especially in those countries suffering from a major delay in network deployment. Commitments attached to license or concession awards and bilateral agreements with operators are some of the tools used for this purpose during the first stages of liberalisation4, generally marked by competition that is limited in the number of participants or restricted to certain services. Later on, they are replaced by universal service obligations or tenders to select the company which is to manage specific network extension plans.
The study of the results achieved by using these tools makes it possible to identify the most effective actions as well as their limitations. And this is precisely what
4 Several examples are described in detail in ITU (1994) or Stern (1995).
we will be doing in the following section with the four Latin American nations, selected after considering their differing socioeconomic characteristics and varied universalisation strategies. Mexico and Brazil are the two countries with the greatest weight in the area, both from an economic and a demographic point of view, and, as a consequence, they can mark the evolution of the technological markets. Peru is clearly representative of what has occurred in the Andean block. Last, Chile is an exceptionally interesting case, since it was the first Latin American country, and one of the first in the whole world, to start the liberalisation process.
The analysis carried out in the following section works at an aggregate level. Obviously, progress towards true service access universalisation requires not only an increase in penetration rates, but a progressive extension of the areas where the service is available as well. However, the historical series of service penetration, and quite specifically the telephone service connection rates, have been published as aggregates for a whole country. The number of lines per 100 inhabitants (or per household, at the most) is the datum that marks the degree to which universality progresses. This lack of detail does not allow distinguishing between business and residential lines, nor does it consider homes with second and subsequent connections; as a consequence, the figures can hide disparate social and regional situations (Hills 1993). An interesting future work would consist in specifying the degree to which network deployment has been constant or unbalanced in the less developed regions. The method could be refined by considering relative penetration rates and income levels in given regions and socioeconomic groups.
Evolution of the fixed telephone network in Latin America

Brazil

The approval in 1995 of Constitutional Amendment No. 8, which provided for, among other things, changes in the entrance of private and foreign investments, represents the starting point of one of the most effective telecommunication sector reforms in all of Latin America. The process continued in 1996 with the publication of the so-called Minimum Law (allowing competition in specific market segments without the need to wait for the passing of a law defining the role of the State), the launching of the PASTE5 programme and a first announcement of the sale of the state operator.
In 1997, the General Telecommunications Act was passed, providing for the creation of a regulatory entity, the National Telecommunications Agency (ANATEL,
5 PASTE is the acronym for Programa de Recuperação e Ampliação do Sistema de Telecomunicações (Telecommunication System Recovery and Extension Programme). With this programme, the government set itself the objective of achieving 33 million telephone lines by mid-2001.
Agência Nacional de Telecomunicações). The General Act was immediately followed by the approval of the Plano geral de metas para a universalização6 (General Universalisation Goals Plan). The effectiveness of these measures, measured in terms of network progress, can be clearly seen in Fig. 1. All in all, and similarly to what occurred in the remaining countries under study, the maximum historical growth (23.4% in 1999) appeared following the sale of the historical operator Telebrás7. In 2000, following the exclusivity period, the entry of “parallel” or mirror8 companies was a factor that critically contributed to keeping up the accelerated pace of network expansion.
Fig. 1. Teledensity and annual network growth evolution in Brazil. Source: prepared by the authors based on information from ANATEL (2004) and ITU (2003)
The validity of the PASTE programme and the struggle of the companies to achieve greater market shares made it possible to maintain the excellent network growth
7
8
The 1997 Act provided that the universalisation obligations should be planned through the achievement of successive goals. For this purpose, the Plano geral de metas para a universalização (General Universalisation Goals Plan) was created and passed, establishing objectives for the local telephony service. Additionally, it specifically met the requirements of the education and healthcare institutions as well as those of persons with special needs. The auction of Telebrás was carried out in 1998. For this process, the country was divided into regions: three for local services, one for long distance national services and eight for mobile services. “Parallel” companies (espelho companies), are operators competing in the same areas as the established local and long distance telephony concessionaries (previously privatised).
154
Arturo Robles Rovalo, José-Luis Gómez Barroso, Claudio Feijóo González
rates, to the point where the objectives were achieved earlier than expected. Thus, in 2001, 37.5 million lines were reached, exceeding in over four million the goal established five years earlier. This datum probably influenced ANATEL’s decision to allow new “parallel” companies to provide services in areas where regional mirror companies were not operating. Additionally, the FUST9 universalisation fund as well as general compliance with the agreements entered into by the concessionary companies contributed to the universalisation effort. In 2002, a notable drop in the network growth rate was registered, despite the approval of the Constitutional Amendment allowing individuals to own communications companies. This important slowing down of the telephone line deployment was most likely influenced by the International depression of investments in the industry. Today the generalised loss of confidence has resulted in a situation in which the incentives do not seem sufficiently strong when compared to the investments required for achieving them. At the beginning of 2004, following a modest recovery in the growth rate, the number of lines in Brazil was over 45 million, representing a teledensity of over 25 lines per 100 inhabitants (ANATEL 2004). In contrast, the mobile telephony penetration figure of 28 lines per 100 inhabitants hardly exceeded that of the fixed telephony. Chile Chile was a pioneer country, not only in Latin America, but in the whole world as well, in starting the telecommunication service industry reform and liberalisation processes. The participation of the private sector was allowed as early as 197810, one year after the creation of SUBTEL11. In 1982, the General Telecommunications Act was passed, establishing a complete separation of regulation from operational functions. Additionally, the utilities working in each segment (local and long distance) became companies, the prices were deregulated and interconnection became mandatory in order to allow the entry of new operators. However, network evolution maintained modest rates until 1990, the year in which they rocketed coinciding with the end of the privatisation process of the
9
In August 2000, the 9.998 Act was approved creating the Fundo de Universalização dos Serviços de Telecomunicações (Telecommunication Services Universalisation Fund) (FUST). From 2001, ANATEL determined that a part of its income was to be allocated to the FUST. 10 Supreme decree No. 423 of October 5, 1978. 11 The Decree No. 1762 created the Telecommunications Undersecretary, SUBTEL (Subsecretaría de Telecomunicaciones), dependent on the Ministry of Transportation and Telecommunications, with regulatory functionalities.
Service Universalisation in Latin America: Network Evolution and Strategies
155
state company 12. Development continued throughout the following four years, in part thanks to compliance with the concession obligations. The drop of almost fifteen points in 1994 was quickly counteracted due to the entry of four new local service operators and the measures taken by the sector authorities in 1994: reform of the General Act and creation of the Telecommunications Development Fund (FDT, Fondo de Desarrollo de Telecomunicaciones) targeted at promoting the increase in the coverage of the telephone service in rural and depressed areas. With this, in the subsequent four-year period (1994–1998) the two-digit growth rates were recovered until reaching a penetration of 20 lines per 100 inhabitants. CHILE Fixed Teledensity Evolution 35%
25 32,37%
30% 20 25%
20,19% 19,52%
15
20% 17,14%
16,51%
15% 12,27%
10
11,60%
annual growth
lines / 100 inhab
23,47%
10% 7,56% 5,55%
5
2,13% 2,32%
5% 2,75%
1,59%
0,71%
0,60%
0% -1,34% -1,81% -5%
0 1985
1986
1987
1988
1989
1990
1991
1992
1993
Main telephone lines per 100 inhabitants
1994
1995
1996
1997
1998
1999
2000
2001
2002
2003
Main telephone lines per 100 inhabitants Growth
Fig. 2. Teledensity and annual network growth evolution in Chile. Source: prepared by the authors based on information from SUBTEL (2003) and ITU (2003).
Since then, network deployment has suffered a sudden stop, with the exception of the 2000 recovery. This reduction can be partially explained by the splitting up of the FDT, which now shares funds with the “telecentros”13 programme, as well as by the change in the line-counting method used by SUBTEL in 1999. Other explanatory factors are the drop in Chilean economic growth and the reduction of world investments in the telecommunications industry
12 The sector reform included the privatisation of the two major telecommunication companies, then public, Compañía de Telecomunicaciones de Chile S.A. and Empresa Nacional de Telecomunicaciones S.A., a process which ended in 1989.
13 Act No. 19.724, passed in May 2001, created a new fund, called FDT II, for a ten-year period, with the purpose of promoting the usage of advanced telecommunications services in low-income urban and rural areas. Thus, to date, two programmes have been implemented: the Telefonía Rural (Rural Telephony, FDT I) and Telecentros Comunitarios (Community Telecentres, FDT II) programmes.
following the burst of the technological bubble and the overinvestment in the sector (Fischer and Serra 2002). The most recent data available show that the country has approximately 3.5 million lines, representing a telephone penetration of 21.7 fixed lines per 100 inhabitants. Mobile telephony, with 6.7 million subscribers, has a density that almost doubles that of fixed telephony (42.6 mobile lines per 100 inhabitants) (SUBTEL 2003). The remarkable superiority of the mobile sector in terms of growth rates can be attributed to the appearance of fixed-mobile substitution, that is, the stagnation in the penetration of fixed telephone lines counteracted by the increase in mobile lines, a tendency similar to the one shown by other countries with a high teledensity (Torres Medina 2003).

Mexico

Although the teledensity evolution in Mexico may seem positive and constant, the deployment of infrastructure is far from following a constant rhythm, as Fig. 3 clearly shows.
Fig. 3. Teledensity and annual network growth evolution in Mexico. Source: prepared by the authors based on information from COFETEL (2004) and ITU (2003)
In fact, the first important breakthrough did not occur until 1989, the year in which the state, following an internal restructuring process, sold a first part of Teléfonos de México (the sales process ended in mid-1992). The following year, marking the start of the new concession awarded to Telmex, the country reached a historical maximum in annual network growth, which exceeded 12%. Maintaining this 12% figure until 1994 was, as a matter of fact, one of the
conditions included in the concession agreement14. Although it is true that annual growth was always above 6% during the 1990–1994 period, the condition was only met during the first year. The general figures also hide the fact that the expansion in coverage was not geographically homogeneous: it focused on urban areas and hardly reached rural ones (González et al. 1998).
Network deployment slowed down during the following years. In 1996 it even showed negative values, despite the fact that, in June 1995, the old 1940 Communications Act had been replaced by the new Telecommunications Act, which represented the starting signal for competition. During the period 1997–2000, a gradual and sustained increase was observed in the number of telephone lines, reflecting, most of all, the entry of competition into the local service, but also the greater confidence generated in the sector by the creation of an autonomous regulatory organ, COFETEL (Comisión Federal de Telecomunicaciones, Federal Telecommunications Commission). All in all, the density in 2000 was only 12.5 lines per 100 inhabitants, a figure very far from the goal established by the government at the time of the first concession (20 lines per 100 inhabitants). Nevertheless, the extension of the services was favoured by the implementation of the rural telephony system, using wireless cellular and satellite systems to reach regions with low demographic densities (and, generally, high poverty rates).
At the end of 2000, the situation of the sector was assessed and the Sectorial Communications and Transports Programme 2001–2006 (Programa Sectorial de Comunicaciones y Transportes 2001–2006) was designed. Although performance during the past few years has been better than in neighbouring countries, the growth rate has not been sufficient to meet the goals established in the Programme. The estimate was to achieve 18.1 lines per 100 inhabitants by 2003, but only 15.8 lines were realised. The objective for 2004 was to reach 20.2; however, according to the preliminary figures provided by COFETEL, there will be slightly above 16 lines per 100 inhabitants. We must also note that the creation of a Social Coverage Fund in 2001 has not enhanced progress in universal service. The Fund is still not operational due to the lack of a clear definition of how interested companies can participate and of a criterion for the allocation of funds.
The approximately 16.5 million fixed telephone lines contrast with the 29.7 million mobile telephony users (representing 28.6 lines per 100 inhabitants; that is, mobile penetration virtually doubles that of fixed telephony) (COFETEL 2004).
14 The terms and conditions of the privatisation agreement, which included agreement on an expansion plan every four years, allowed Teléfonos de México (Telmex) to maintain its monopoly until 1996, although in exchange it was to meet specific network expansion objectives: a 12% annual increase in the number of lines (1990–1994), installation of at least one line in each town with over 500 inhabitants by 1996, an increase in the density of public telephones to two per 1,000 inhabitants by 1993, and a reduction of the maximum waiting time to six months.
Peru

The Peruvian telecommunications reform started with the 1991 Telecommunications Act, which allowed private investment in the sector and drove the first marked network progress. The starting point was a situation in which the state monopoly had not managed to expand the service, leaving penetration stagnant at around 2 lines per 100 inhabitants.
Fig. 4. Teledensity and annual network growth evolution in Peru. Source: prepared by the authors based on information from OSIPTEL (2004) and ITU (2003)
The creation of the regulatory agency OSIPTEL15 and the preparation of the sale of the monopoly fostered acceptable growth until 1994. In any case, as in the previous cases, the most important progress was achieved following the sale of the state operator, in 1995, with no less than a 41% increase over the previous year. Thanks to the agreements established in exchange for a period of exclusivity16, the deployment of lines was substantial during the following two-year period.

15 OSIPTEL is the acronym for Organismo Supervisor de la Inversión Privada en Telecomunicaciones (Supervisory Body for Private Investment in Telecommunications), created in 1993.
16 The new monopolistic operator was awarded a period of exclusivity until June 1999 for the provision of local as well as national and international long distance telephony services. The license bound the company to install 1,100,000 additional lines and at least one public telephone in each of the 1,486 towns with over 500 inhabitants during the following five years.
However, in 1998, the last year of the monopoly period, the growth rate suffered a true setback and reached negative values. Despite the introduction of competition in local service in 199917, the situation did not improve in the following years; 2001 even registered a fall of more than 4%. Paradoxically, Peru had equipped itself (at least in theory) with tools for moving towards access universalisation: on the one hand, the objective of establishing accesses at reasonable distances (5 km) in all rural towns was included in the design of the “Policies for opening the telecommunications market” (Políticas de apertura del mercado de telecomunicaciones); on the other hand, the regulatory agency requested the installation of public telephones capable of transmitting voice, fax and low-speed data in 5,000 rural localities without service. The entry of five new companies into the local telephony segment in 2001 contributed to the recovery of the sector, which returned to positive annual increments. This reversal of the trend has also been supported by the activation of the FITEL18 fund. The data, in any case, are not very encouraging: even with 10% annual growth, a penetration of 20 lines per 100 inhabitants would not be achieved until 2015. At the end of the first quarter of 2004, Peru had approximately 1.9 million fixed lines in service (6.9 lines per 100 inhabitants). With over 3 million subscribers, mobile telephony density is close to 11.5 lines per 100 inhabitants (OSIPTEL 2004).
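The 2015 figure follows from simple compound-growth arithmetic. As a back-of-the-envelope check (taking the 6.9 lines per 100 inhabitants of early 2004 as the starting point, consistent with the figures above):

\[
6.9 \times 1.1^{n} \geq 20 \quad\Longrightarrow\quad n \geq \frac{\ln(20/6.9)}{\ln 1.1} \approx 11.2,
\]

that is, slightly more than eleven years of sustained 10% annual growth from early 2004, which places the 20-line threshold some time in 2015 at the earliest.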
Comparative review of the universalisation programmes

The analysis of the previous section shows that Latin American governments are aware of the present and future importance of providing access to basic telecommunication services to the whole population and have endeavoured to design effective strategies for achieving this purpose. As in the rest of the world, the sector reform has rested on three basic pillars: privatisation of the monopolistic entity, progressive introduction of competition until full liberalisation, and regulation of the new scenario. However, the timing, conditions and determination with which these reforms have been adopted have conditioned the results obtained, including those regarding network deployment.
17 Supreme Decree 020-98 ended the Telefónica del Perú exclusivity period. Competition in the local service materialised one year later, in 1999, with a second concession to BellSouth.
18 The Telecommunications Act sets forth that general carrier and end public service operators shall allocate a percentage of their total annual turnover to a Telecommunications Investment Fund, which shall be used exclusively to finance telecommunication services in rural areas or in areas of special social interest.
The programmes designed to expand the fixed telephone network have been heterogeneous. The concept of universal service has provided legal cover for most of these initiatives. In the four countries considered in this article, the concept of universal service has been written into legislation and a specific programme exists to ensure compliance with it. Table 1 summarises the points on which the universal service strategies are based in each country. In all cases, financing is articulated through a specific fund, although the contributions to it come from different sources.

Table 1. Basic elements of the Universal Service Programmes

Mexico
• Universal service definition: “Access to the basic telephone service, in its modality of telephone booth or residential service, for anyone as soon as possible”
• Universal service obligations: Telmex Concession License (1996); Federal Law of 1995
• Universal service programmes: Communications and Transport Sectorial Programme 2001–2006
• Universal service fund: Social Coverage Fund (2001) (not operational; fund allocation criteria are not yet clear)

Chile
• Universal service definition: “Access to telecommunications services in the marginalised and isolated sectors of the population”
• Universal service obligations: no explicit universal service obligations
• Universal service programmes: FDT I (1994): Rural Telephone Services; FDT II (2001): Digital Telecentres
• Universal service fund: Telecommunication Development Fund (1994)

Brazil
• Universal service definition: “Providing access to the telecommunications service for everyone, regardless of their location and socioeconomic condition (...); usage of telecommunications in essential services of public interest”
• Universal service obligations: 1997 Act
• Universal service programmes: General Plan of Universalisation Goals (1998)
• Universal service fund: Telecommunication Service Universalisation Fund (2001)

Peru
• Universal service definition: “Access in the national territory to a set of essential telecommunication services, that is, those available for most of the users and provided by the public telecommunications operators”
• Universal service obligations: Supreme Decree 020-98
• Universal service programmes: Rural Development Information System (2001)
• Universal service fund: Telecommunication Investment Fund (1993)
Source: prepared by the authors based on information from COFETEL (2004), SUBTEL (2003), ANATEL (2004) and OSIPTEL (2004)
The network development programmes bring down to earth the sometimes overblown declarations contained in the universal service definitions; they also explain how certain open or vague clauses should be construed (“as soon as possible”, “isolated sectors of the population”). Obviously, the magnitude of the costs associated with achieving truly universal coverage leads to a prioritised satisfaction of demand, making it necessary to resort to criteria of social, political, technical and productive effectiveness. Tables 2 and 3 show the specific objectives stated in the Mexican and Brazilian plans.

Table 2. General goals of the Communications and Transport Sectorial Programme 2001–2006 in Mexico

Indicators \ Years                                      2001  2002  2003  2004  2005  2006
Number of telephone lines per 100 inhabitants             13    15    18    20    22    25
Percentage of households with telephone availability      39    40    43    48    51    52
Installation time for local access lines [days]           28    23    20    17    15    10
Repair time for local access lines [days]                  5     4     3     2     1     1

Source: Secretaría de Comunicaciones y Transportes de México (2004)

Table 3. Goals for individual and collective accesses of the Universalisation General Plan Goals in Brazil

Indicators \ Years                                                         2001  2002  2003  2004  2005
Size of the locality where individual access is demanded [inhabitants]   1,000     –   600     –   300
Maximum period for answering a request [weeks]                               4     3     2     1     –
Public telephones per 1,000 inhabitants                                      –     –     7     –     8
Size of the locality where one public telephone is demanded [inhabitants]  600     –   300     –   100

Source: ANATEL (2004)
Unfortunately, according to the available data, several aspects of these plans are not being met, whether due to excessive ambition in their approach or because of the generalised difficulties the information and communication technologies industry has faced both globally and in the region in the last few years. Another noticeable element of the universal service programmes is their recent connection, or even subordination, to other (more ambitious and general) plans dedicated to the progress of the Information Society. Universal service thus becomes one of the basic elements for the development of
the Information Society, but in no way the only one. As stated in the next section, this fact leads to a redefinition of the purposes and procedures for achieving universal service, which should be reapproached so as to consider the needs of the users from a comprehensive perspective.
Conclusions and future challenges set out by the Information Society

Although the results are worse than expected, it must be acknowledged that the liberalisation-privatisation-new regulation triangle has contributed to major network deployment in a short period. It must be underlined, however, that the greatest growth rates have appeared immediately after the privatisation of the former monopolistic operator, during stages generally marked by the maintenance of the monopoly or the introduction of limited competition. Complete liberalisation has not accelerated the growth trend; in some ways it even seems to have undermined the effectiveness of the universalisation programmes. The total opening of the market, at a time when these countries were still at a lagging stage of telephone penetration, should have been backed by a stricter imposition of universal service obligations and a more rigorous follow-up of compliance with them.

As a matter of fact, the obstacles to complying with the universal service objectives, given how the industry is currently configured in Latin America, are connected to the effects of the introduction of competition in the sector, the operators' search for short-term profitability in a complicated financial scenario (with the resulting impact on investment reductions), and the arrival of technologies providing substitute platforms for voice. Obviously, as we set out below, future universal service development programmes will have to take these effects into consideration when reformulating both the goals to be achieved and the means and technologies required to bring them to a successful conclusion.

In any case, the analysis shows that, in general, the increase in teledensity has been positive. Brazil and Chile have shown the best results, probably due to the establishment of methods with a certain degree of flexibility and based on the presence of incentives. The progress of Peru and Mexico has been hampered by non-compliance with the objectives imposed on the operators. The implementation of new programmes based on the current situation and aimed at halting the slowdown in penetration growth seems necessary. Programme modification would give the regional regulators the chance to move towards a certain harmonisation both in the contents of the universal service obligations and in the financing sources, while respecting the characteristics of each country. This would provide confidence and certainty to the operators and would contribute to the development of the regional electronic communications market. We must not forget, however, that an autonomous boost by the market must have priority in achieving service universalisation.
In the medium term, however, the authors consider that, without abandoning the universal service concept and thus its basic objectives, the procedures and technologies used should be thoroughly reviewed, while simultaneously considering their contribution to more ambitious objectives regarding the development of the Information Society.

On the one hand, in addition to measures contributing directly to network deployment, public intervention should explore indirect ways of development. More specifically, demand should be stimulated and aggregated so that some regions which are not currently profitable may exceed the business threshold operators require before investing and providing service. On the other hand, the conventional notion of universal service for individual access, or in other words the “one telephone per household” concept, is tied to the provision of a fixed access (even if that access is provided using wireless technology) and thus to a technologically dependent conception. A modification of the current universal service definition providing for the usage of new communication platforms, and specifically of mobile/wireless technologies, would be advisable. This would also help overcome the limitations of voice services and extend the range of contents.

Given that true incorporation into the Information Society requires the usage of advanced communications, the universal service definition must be decoupled from a portfolio of specific services, progressively moving towards the provision of sufficient connectivity to users. This approach, which is beginning to be considered by a growing number of administrations (although it has not yet affected the definition of universal service), regards the deployment of basic infrastructures as one of the barriers to be overcome in order to move towards the development of the Information Society. Such a perspective would not be restricted to basic telecommunication services, but would focus instead on the comprehensive requirements of users. Both existing and future technological elements (such as broadband or mobility) would have to be considered, as well as all the factors contributing to the usage and full exploitation of the telecommunication infrastructures (hardware and software equipment, applications and contents). Last, we should not forget that, in order to be really effective, these plans must necessarily be linked to the overall development policies of the country.
References

Albery B (1995) What level of dialtone constitutes ‘universal service’? Telecommunications Policy 19(5): 365–380
ANATEL (2004) Plano geral de metas para a universalização (PGMU). Available at: http://www.anatel.gov.br/index.asp?link=/Telefonia_Fixa/stfc/indicadores_pgmu/2004/brasil.pdf
Antonelli C (2003) The digital divide: understanding the economics of new information and communication technology in the global economy. Information Economics and Policy 15(2): 173–199
Bardzki B, Taylor J (1998) Universalizing universal service obligation: a European perspective. In: 26th Telecommunications Policy Research Conference. Alexandria, 3–5 October
COFETEL (2004) Serie mensual de líneas telefónicas en servicio (03/04). Available at: http://www.cft.gob.mx/html/5_est/Graf_telefonia/sermensual.html
Fischer R, Serra P (2002) Evaluación de la regulación de las telecomunicaciones en Chile. Perspectivas 6(1): 25–77
Goggin G (1998) Voice telephony and beyond. In: Langtry B (ed) All connected: Universal service in telecommunications. Melbourne University Press, Melbourne, pp 49–77
González AE, Gupta A, Deshpande S (1998) Telecommunications in Mexico. Telecommunications Policy 22(4/5): 341–357
Hills J (1993) Universal service: a social and technological construct. Communications & Stratégies 10: 61–83
ITU (2003) ITU Database Indicators 2003. International Telecommunication Union, Geneva
ITU (2001) Una Reglamentación Eficaz. Estudio de Caso: BRASIL 2001. Available at: http://www.itu.int/itudoc/gs/promo/bdt/cast_reg/79124.html
ITU (1998) World telecommunication development report 1998: Universal access. International Telecommunication Union, Geneva
ITU (1994) The changing role of government in an era of telecom deregulation. Report of the Second Regulatory Colloquium held at the ITU Headquarters 1–3 December 1993, Geneva. Available at: http://www.itu.int/itudoc/osg/colloq/chai_rep/2ndcol/coloq2e.html
Noam EM (1987) The public telecommunications network: a concept in transition. The Journal of Communication 37(1): 30–48
OECD (1991) Le service universel et la restructuration des tarifs dans les télécommunications. OECD, Paris
OSIPTEL (2004) Indicadores de telefonía fija (marzo). Available at: http://www.osiptel.gob.pe/Index.ASP?T=P&P=2636
Sawhney H (1994) Universal service: prosaic motives and great ideals. Journal of Broadcasting & Electronic Media 38(4): 375–395
Secretaría de Comunicaciones y Transportes de México (2004) Programa de trabajo 2004. Available at: http://www.sct.gob.mx/documental/index.html#
Stern PA (1995) Les télécommunications. Le Communicateur (Services publics en concurrence) 28: 29–47
SUBTEL (2003) Estadísticas del Sector de las Telecomunicaciones en Chile. Available at: http://www.subtel.cl/pls/portal30/docs/FOLDER/WSUBTEL_CONTENIDOS_SITIO/SUBTEL/ESTDEMERCADO/INFESTAD/INFESTAD2/INFOR_ESTADISTICO_8.PDF
Torres Medina MJ (2003) La evolución del sector de las telecomunicaciones en América Latina a raíz de su privatización. AHCIET. Available at: http://www.ahciet.net/especiales/evolucionsector1.doc
World Bank (2002) Servicios de telecomunicaciones e información para los pobres: Hacia una estrategia de acceso universal. Available at: http://wbln0018.worldbank.org/ict/resources.nsf/InfoResources/0539C87E642EDCE185256DA400504B44
Sustainability of Community Online Access Centres

Peter Farr1,*, Franco Papandrea2,**

* Peter Farr Consultants Australasia Pty Ltd., Australia
** University of Canberra and Australian Capital Territory, Australia
Abstract

Community Online Access Centres (or telecentres or E-community centres3) have been established in many countries to provide rural and remote communities with access to a wide range of information and communications technologies (ICTs), including computers and the Internet. The centres are typically established with at least initial government financial support. In many cases, the centres have been established by development agencies on a pilot basis, and often their sustainability cannot be assured after the initial project period. Indeed, after initial establishment, many centres have experienced difficulties in sustaining their operations without ongoing financial support from public or private sources. This chapter draws on Australian field research into the feasibility and sustainability of establishing broadband-enabled Community Online Access Centres in remote communities, and on practical experience with the highly successful Western Australian Telecentre network. The study conducted by the authors for the Australian Government examined the feasibility and ongoing sustainability of community-based telecentres that were being considered for establishment in remote indigenous communities (Farr 2003). The scenario for consideration was that the initial establishment and early operations of the telecentres would be funded with financial support from government sources. However, in the longer term, the telecentres would be expected to continue operating with minimal direct funding by governments. The primary objective of the study was to explore practical models likely to promote sustainability of the telecentres with minimal ongoing government funding.
1 E-mail: [email protected]
2 E-mail: [email protected]
3 Definition: Facilities providing public access to ICT-based services and associated applications for education, personal, social and economic development.
The study made use of the Triple Bottom Line approach to the assessment of projects or public policy proposals, which argues that public funding decisions should take into account the full range of relevant benefits and costs flowing from the proposal. These costs and benefits are generally aggregated into three groupings, namely economic, social and environmental, which cover the major impacts of any decision (Crellin 2004).

Three key dimensions are crucial to the successful establishment and ongoing sustainability of Community Online Access Centres, namely:
• financial resources;
• community empowerment and socio-economic impact; and
• efficient operations and support systems.
All three dimensions require careful assessment and analysis, and effective plans need to be developed and implemented for each of them if Community Online Access Centres are to have a reasonable chance of long-term sustainability. Experience with telecentres in various countries strongly suggests that those combined into a ‘cooperative network’ have much better prospects for ongoing sustainability than standalone operations. In addition, given the dependence of community telecentres on volunteer services to perform their functions and maintain their operations, sustainability beyond the initial phase is enhanced by access to support services provided by central or regional units especially established to provide ongoing operational and technical support to telecentre staff. This chapter discusses many relevant aspects of sustainability for Community Online Access Centres in rural and remote communities and concludes with a set of key findings and recommendations for sustainability.
Introduction

Community Online Access Centres (or telecentres or E-community centres) have been established in many countries to provide greater or better access to advanced communication services and technologies to regional and remote communities that are not otherwise well served by private sector suppliers of telecommunication services. The range of services provided by telecentres can be adapted to the needs of the host community and the community's capacity to support the associated operational costs. The size of a centre reflects the nature and type of services it supplies and can range from a very small centre providing access to the Internet and basic telecommunications to sophisticated operations catering for advanced communications and related services such as Telehealth and ISP hosting. Small basic centres can be established as an adjunct to existing community service facilities and may be operated on a voluntary or part-time basis. Larger centres require dedicated facilities as well as permanent specialised staff supplying a range of advanced services.
The development of telecentres has tended to rely heavily on public funding for both their initial establishment and ongoing operations. In most cases, there is a general expectation that public funding should be limited to a part of the initial investment and start-up costs, after which centres are expected to become commercially sustainable (Wellenius 2003). Sustainability, however, has proven to be elusive for many of the telecentres established in both developed and developing countries (Roman and Colle 2002). This seems to be a trait common to most of the models used for the establishment and funding of telecentres (Fuchs 1998; Jauernig 2003; Proenza 2001).

This chapter presents a discussion of the concept of sustainability for Community Online Access Centres (telecentres) in rural and remote communities, with some special applications to the establishment of such centres in indigenous communities. The focus is on the factors and processes needed to achieve sustainability. The chapter draws from research conducted as part of an Australian Government-funded study to examine the feasibility and ongoing sustainability of establishing community-based telecentres in rural and remote indigenous communities. One concept underlying the proposed centres was that, after initial government funding, the centres would become largely self-sufficient with minimal direct funding by the Government. The primary objective of the study was to explore models likely to promote sustainability of the telecentres with minimal ongoing government funding. The chapter also draws from practical experience with the highly successful Western Australian Telecentre network, which comprises 104 telecentres in rural and remote areas. This chapter develops a model of sustainability applicable to rural and remote Online Access Centres and discusses the importance of the key dimensions that are crucial to the successful establishment and ongoing sustainability of the centres.
Assessment of policies

A public policy intervention is typically intended to generate desirable outcomes that would not otherwise occur (i.e., to correct for market failure) and is justifiable if the costs of implementing the intervention are less than the total benefits (both economic and social) generated by it. Cost benefit analysis is an economic tool that is typically employed to assess the value of public policies and projects. In cost benefit analysis all benefits and costs are valued in monetary terms. However, many social benefits are not amenable to quantitative measurement, and there is a risk that they will not be given full or sufficient consideration in policy decisions. The lack of a reliable measure, and of consensus on how trade-offs between economic and social benefits are to be handled, means that decisions are often dependent on the value judgements of decision makers. One way of simplifying the consideration of benefits and costs is to allocate them to separate categories according to their nature. For example, economic costs and benefits that can be assigned a monetary value are grouped together in one
category, and social costs and benefits to which a monetary value cannot be ascribed are grouped into a separate category. A project producing more benefits than costs in both categories would be unambiguously justified. However, if the various categories of costs and benefits are not all positive, the trade-off problem is not avoided.

The Triple Bottom Line4 approach to the analysis or assessment of public policy projects is based on the principle of allocating benefits and costs to separate categories. It was developed primarily as an accounting tool to enable corporations to assess and report their performance against different measures reflecting their principal impacts on society. Typically, the costs and benefits are classified into three groups, namely economic, social and environmental – hence the triple bottom line. They are considered separately because no suitable technique exists to calculate a valuation for the social and environmental dimensions of a decision. Thus it is not feasible to provide a single aggregate value for a decision option. Rather, the three impacts (economic, social and environmental) are compared for each option and the desired option is selected based on these comparisons (Crellin 2004). From a public interest perspective, to be justified a project would need to generate a net positive benefit in each of the three measurement/reporting areas. While short-term trade-offs between positive and negative outcomes in the different areas might be possible, in the long run a project is deemed to be justified only if it can unequivocally produce a positive outcome in all three measures.

Based on their research with community telecentres, the authors suggest a similar approach to the analysis of the long-term viability of community access centres. For the purpose of their analysis, the authors identified financial resources; community empowerment and socio-economic impact; and efficient operations and support systems as being crucial to the successful establishment and ongoing sustainability of community telecentres. To be sustainable in the long term, a telecentre cannot afford to underperform in any of these three areas.
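As a minimal illustration of this screening rule (a sketch only, not part of the original study: the option names and scores below are hypothetical, and how each dimension is scored is left to the analyst), a project passes the triple-bottom-line test only if its net outcome is positive on every dimension:

# Minimal sketch of the Triple Bottom Line screening rule described above.
# Scores are hypothetical net outcomes (benefits minus costs) per dimension;
# no attempt is made to aggregate them into a single monetary value.

DIMENSIONS = ("economic", "social", "environmental")

def tbl_justified(net_outcomes: dict) -> bool:
    """A project is deemed justified only if its net outcome is
    positive in all three dimensions."""
    return all(net_outcomes[d] > 0 for d in DIMENSIONS)

# Hypothetical options for establishing a telecentre.
options = {
    "standalone centre": {"economic": -0.5, "social": 2.0, "environmental": 0.1},
    "co-located centre": {"economic": 0.3, "social": 2.5, "environmental": 0.1},
}

for name, scores in options.items():
    print(name, "->", "justified" if tbl_justified(scores) else "not justified")

Under these invented scores, only the co-located option clears the screen, mirroring the all-three-positive requirement stated above.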
Necessary conditions for a Community Online Access Centre to be ‘useful’

Fig. 1 depicts the foundations that are necessary to create a ‘useful’ Online Access Centre, and which must therefore get attention during the planning process and also once a centre has opened for business.
4 The term ‘Triple Bottom Line’ was coined by John Elkington in his book “Cannibals with Forks: the Triple Bottom Line of 21st Century Business” (New Society Publications, 1998).
[Fig. 1 shows four building blocks of a ‘useful’ communication centre: the ‘right’ set-up, staffing, training, funding, etc.; relevance to the community; supportive environment (involvement); and recognition of differences.]
Fig. 1. Necessary conditions for a Community Online Access Centre to be ‘useful’
Each of the building blocks on the left side has several dimensions. For example, ‘Recognition of Differences’ could embrace:
• cultural factors
• geographical and demographic factors
• organisational and group identity
• ICT ‘readiness’
• partners and their motives
• leadership, and
• inclusiveness.
'Relevance to the Community' could embrace portals with local information, which are often regarded as an important means of engaging the interest of both novices and experienced users. The most common use of the new technologies is often e-mail, and often the most important application is networking: a network of friends or people with common interests can be a remarkably rich source of idiosyncratic information (such as job opportunities in nearby communities, telework, markets, prices, production practices, etc.) (Proenza 2005).
Sustainability

Here, sustainability is defined as the ability to maintain ongoing operations of a Community Online Access Centre without recourse to long-term financial assistance from government after its initial set-up period. Initial set-up funding includes both capital funding to establish a centre and operational funding for a sufficient period to enable the centre to reach a viable level of service.
The length of time required to reach a viable level of service will depend on the type of access centre as well as the size and demographic characteristics of the community served by the centre (Reeve 1998). Because many of the costs associated with operating a telecentre are fixed (or not scaleable), smaller centres will generally face a cost disadvantage in setting up a centre and maintaining its ongoing operations. Consequently, centres in smaller communities will probably take longer to reach and maintain a viable operational level of service.

Dimensions of sustainability

The various factors that contribute to sustainability can be grouped into three key interdependent dimensions:
• Financial Resources;
• Community Empowerment and Socio-Economic Impact; and
• Efficient Operations and Support Systems.
The interrelationship between these three key dimensions is illustrated in Fig. 2.
Fig. 2. Sustainability model
Each of these three dimensions is critical to the viability of a centre. Also, because of the interrelationship between the dimensions, insufficient attention to any one of them is likely to have an adverse impact on the other two. Thus, while falling short in one dimension temporarily may not have an enduring detrimental impact on long-term sustainability, ongoing neglect is likely eventually to lead to the risk of failure as the other two interrelated dimensions begin to suffer.
Financial resources

Financial sustainability implies that an Online Access Centre is able to meet all its costs from the revenues it generates in the provision of services. A centre is financially sustainable if its revenue from all sources is at least equal to the operational costs (wages, rent, maintenance, supplies including telecommunications services, etc.) plus a contribution to the cost of the equipment (either for expansion of services or replacement of existing equipment). Ability to meet operational costs only will not be sufficient for sustainability, as equipment will eventually need to be replaced and a centre will need to be in a position to fund the replacement when necessary. Consequently, to achieve sustainability an Online Access Centre must be in a position to fully maintain itself as a going concern after an initial period of financial assistance from the government. Inability to achieve self-reliance would place a centre under a constant threat of failure.

Field research conducted in Australia by the authors in 2003 (Farr 2003) revealed that, in the immediate future, most remote indigenous communities are unlikely to be in a position to generate a level of demand for communications services, and a related level of revenue, that would be sufficient to cover the full cost of the delivery of services via a Community Online Access Centre. Even the smallest centre will need to be able to sustain a minimum scale of operation with at least one computer terminal, a telephone line, and other single units of essential facilities, irrespective of the level of usage generated by a community. Provision of telecommunication facilities to remote locations is also costly. This is a feature of the provision of telecommunications services to remote communities generally and is the primary reason for the adoption of universal service policies involving a community subsidy for the supply of essential telecommunications services. To a large extent, the provision of at least a basic set of communications services to rural and remote communities is driven by the social policy objective of ensuring that all members of Australian society can access services that are indispensable to their wellbeing. As such, society has a collective responsibility to provide the means for equitable access at a reasonable price to essential communications services.

Evidence from successful telecentre programs indicates that rural and remote communities generally are unable to fully fund the operations of telecentres and require some form of ongoing financial support. Typically, the financial support relates to the funding of infrastructure and other overhead costs, such as facilities rental, the manager's salary and central support services, that are largely unrelated to the level of demand for services. Services are provided at prices that are comparable to those charged for similar services in more central areas. Consequently, the earned income is likely to cover at least the variable cost of providing services as well as to make some contribution to overheads. Given this and much other evidence (Saga 2004), it would be unrealistic to expect that many rural or remote communities would be in a position to support viable Online Access Centres without some form of ongoing external funding. This raises two issues that need to be addressed by policy makers:
• What level of service is deemed to be essential/desirable?
• Who should provide the necessary external financial support to guarantee access to the essential/desirable services?

Currently in Australia, a range of government services (such as health, education, justice, social security) are exploring or developing strategies for the use of telecommunications services to enhance delivery of their services to rural and remote communities. These strategies are largely being developed independently of each other. Discussions that the authors had in 2003 with various government agencies indicate that several would seriously consider the possibility of using Online Access Centres to support their service delivery activities. Indeed, some already use community telecentres for this purpose. The authors' assessment is that there is considerable scope for coordinating those strategies in a way that would contribute substantially to the funding of the overhead costs of operating Online Access Centres.

In addition to government agencies, the services of Online Access Centres are also likely to be sought by commercially-based interests such as tourism activities, the marketing of locally produced products and the provision of other general commercial services. Many of the existing telecentres in regional and remote areas cater for the needs of commercial interests, be they local businesses, government agencies or tourists/travellers. The sale of services to such interests would also contribute to the ongoing viability of Online Access Centres. However, a proper balance has to be maintained between local needs and those of ‘outsiders’. For example, in some indigenous communities it would not be welcome for tourists or backpackers to take ‘control’ of PCs and Internet access.

Long-term sustainability can be enhanced by developing partnerships with organisations and agencies with a demand for the type of services provided by Online Access Centres. By acting as a central service point aggregating the demand of several users of electronically delivered services and other related services, a centre can benefit from economies of scale and scope that would not be available to the stand-alone operations that might be set up by the individual organisations needing services. By sharing some of these benefits with its government and commercial clients, a centre can be a cost-effective service provider. Earnings from such services will help underwrite the cost of operating the centre. To be successful in securing and keeping commercial clients, a centre will need to pay particular attention to these clients and their needs.

Assessment of the financial feasibility of Online Access Centres requires realistic estimates of costs and revenues. Revenues are estimated from the likely level of service demanded by local people and by partner organisations using the services of Online Access Centres; the derivation of such estimates should be guided by the experience of established telecentres. Wherever possible, cost estimates should be derived from actual data based on the experience of existing telecentres. The difference between revenues and costs indicates the level of subsidy required from external sources to keep Online Access Centres operational.
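As a purely illustrative sketch of this revenue/cost comparison (all figures below are invented annual amounts, not data from the study), the subsidy gap can be computed as follows:

# Illustrative sustainability gap for a hypothetical small telecentre
# (all amounts are invented annual figures; currency units are arbitrary).

operating_costs = {  # wages, rent, telecoms, maintenance and supplies
    "wages": 35_000, "rent": 6_000, "telecoms": 8_000, "other": 5_000,
}
equipment_replacement = 4_000  # annual contribution towards replacing equipment

revenues = {  # earned income from users and partner agencies
    "user fees": 12_000, "government services": 18_000, "commercial clients": 6_000,
}

total_costs = sum(operating_costs.values()) + equipment_replacement
total_revenue = sum(revenues.values())
subsidy_needed = max(0, total_costs - total_revenue)

print(f"costs {total_costs}, revenue {total_revenue}, external subsidy {subsidy_needed}")
# With these assumed figures the centre covers its variable costs but still
# needs an ongoing external contribution of 22,000 towards overheads.

The point of the sketch is the structure, not the numbers: earned income typically covers variable costs and part of the overheads, and the residual defines the ongoing external support that policy makers must allocate.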
Community empowerment and socio-economic impact

The primary function of Online Access Centres is to provide their local communities with equitable access to a wide range of modern IT and communications services consistent with their needs. Although previous experience in the development of Online Access Centres can assist the design and development of practical means to serve community needs, the nature, circumstances and capacities of communities are highly diverse, and consequently the services provided by Online Access Centres will need to be structured accordingly if a centre is to be successful. A specific example of how this was successfully addressed in a particular case is provided below.

Unfamiliarity with new technology and services may discourage use of, and demand for, the services offered by an Online Access Centre and may threaten the likelihood of success. Online Access Centres should strive to become a cultural and social community centre consistent with community needs, offering services in an environment and in a manner that community members are accustomed to and do not find unfamiliar or threatening. For many communities, particularly those that have had little exposure to new technologies, it is desirable to avoid developing an image solely as a centre for access to ICT-based services. It is important to the long-term survival of a centre to offer a range of services that have some synergy with ICT services, either because they are delivered over a telecommunications platform or in their information attributes (for example, library services, Internet banking, buying goods on-line, on-line training, etc). The establishment of an access centre should also be used as a vehicle for extending community skills and capacity building.

The establishment of an Online Access Centre in a community can have a major impact on the dynamics of the community's culture, social structures and communications patterns. Planners need to ensure that changes to established community practices that may result from the establishment of a centre, and the rate at which those changes take place, are acceptable to the community. If a community is threatened by the changes or by the rate at which they are introduced, its acceptance of the centre will be jeopardised. It is critical, therefore, that communities are fully engaged from the start in the development of initiatives for the establishment of an Online Access Centre and in its ongoing operations and management. Community engagement is indispensable to the ability of an Online Access Centre to provide services that are relevant to the community it serves. Considerable efforts may need to be devoted to ensuring the community fully backs the project, develops a sense of ownership of the Online Access Centre and takes responsibility for its operations.

A useful approach to securing community commitment and involvement is the establishment of a steering committee, comprising members of the community, to guide the development and planning of a centre. Once a centre is established, the steering committee could be converted to the centre's management committee. Membership of these committees should be determined by the capacity of the individuals to promote the objectives of an Online Access Centre as well as to
contribute effectively to its planning and management. Influential individuals with a positive vision of the benefits that an Online Access Centre can bring to a community could help mobilise others to share their vision and encourage community support for the centre. Identification and involvement of influential community leaders in all aspects of an Online Access Centre will facilitate its acceptance by the community. Influential leaders can become ‘champions’, and their support for the centre will encourage wider community involvement in the ownership and management of the Online Access Centre. The involvement of community leaders will also help focus the centre's strategy for delivery of services on meeting the community's real information and communication needs.

Unless an Online Access Centre has widespread support within a community, its ongoing sustainability will be at risk. It is imperative that an Online Access Centre is attuned to the needs of all the diverse groups in a community and that all people are provided with equitable access to the centre's services. Overall, the aim should be to create a centre that promotes and encourages equitable, collaborative and open participation that enhances community development. A community's demand for services is unlikely to remain constant, but is likely to shift as members of the community gain experience in the use of established services and become exposed to the benefits offered by new services. Operators of a centre need to be attuned to shifts in demand for services as well as pro-active in the development of programs likely to increase the familiarity of community members with existing services and in generating services likely to be of benefit to the community. Diverse community needs have to be considered and taken into account. Special programs to facilitate access to an Online Access Centre's services – e.g. by seniors and youth (see the case study below) – may be required to overcome customs or sensitivities that may otherwise inhibit use of the services provided by Online Access Centres.
The dEadly mOb Internet Café is a venture of the Gap Youth Centre in Alice Springs. Its services are designed to cater for the special needs of indigenous youth in the area. It provides a practical example of how success can be enhanced by focusing on users' needs. The Café is located on the Gap Youth Centre precinct and fosters strong links with other activities at the centre. The focus of dEadly mOb is one of community development and mentoring for young people in the Alice Springs region of the Northern Territory. This is reflected in the broad range of programs and activities undertaken by dEadly mOb. They take the view that the technology is a tool which provides young people with a huge range of opportunities to explore career options and to further develop their skills and interests. A key element of dEadly mOb's strategy is structured around the website5, which has a strong focus on helping young people find the right training to lead on to employment. Another service that is going really well is the user-friendly e-mail (dEadly Mail). dEadly mOb activities are being promoted in bush communities throughout Central Australia, resulting in regular visits to the Café when people visit Alice Springs. dEadly mOb is looking to support the Café through a mixture of sources, by offering a strategic mix of services and programs within the Café.
Anchor customers from the government sector

As already mentioned, advanced telecommunications can provide rural and remote communities with enhanced access to government services in a cost-effective manner, particularly in areas such as health, education, family and children's services, law and order, training, employment assistance, etc. For government agencies, the benefits include more effective delivery of services to the communities and opportunities for training and developing their staff at remote locations. As noted earlier, partnerships with organisations and agencies that demand the type of services provided by Online Access Centres allow a centre to aggregate demand, capture economies of scale and scope that are unavailable to stand-alone operations, and share some of these benefits with its government and commercial clients; earnings from such services help to underwrite the cost of operating the centre.

Accreditation will give outside bodies more confidence about utilising Online Access Centres. Training and educational content delivered through Online Access Centres should be formally accredited. A recognised quality management system should also be implemented across government-funded Online Access Centres.

5 See: www.deadlymob.org
Efficient operations and support systems

Individually and collectively, Online Access Centres must work hard to build an image of being useful, having equipment that is easy to use and reliable, having competent and helpful staff, and being able to pay their bills on time. Online Access Centres must strive to maximise efficiency in all aspects of their operations, including utilisation of infrastructures. The key elements of operational efficiency are:
• focus on customer needs;
• use of appropriate technology; and
• efficient organisation and management.

Focus on customer needs

Services should be designed to satisfy community needs and should be meaningful and valuable to members of the community. To promote and encourage the use of services, the centre should endeavour to develop a service mentality towards the community and other major stakeholders. The community and other stakeholders should be kept informed of existing and anticipated services and should be provided with opportunities to evaluate the benefits that are likely to accrue from their use. Appropriate demonstrations should be organised regularly and should be offered as needed by prospective users. It is crucial that Online Access Centres evolve as community centres and avoid creating a perception that their function is limited to the provision of access to technology.

The service mentality extends to ensuring the availability of services at times they are likely to be needed and in facilities that are mindful of the cultural and social customs of users. Access should be open and flexible to accommodate community needs. This can have major implications for opening hours (needs and demand by young people, for example, would require after business-hours access). The delivery of services should be provided in a manner that takes account of the community's capacity to access them. Community members should be encouraged to try and familiarise themselves with the equipment and experience access to services. Those with a capacity to muster the skills for personal access to services should be assisted in gaining the required skills. Others may require assistance from a skilled community member to gain access to the services. Both groups should be catered for by the centre. The less complex the equipment, the easier its use, and the larger the number of people who will be able to acquire the necessary skills to use it. As a service entity, an Online Access Centre cannot afford to lose its focus on customer needs, or on building and maintaining a close relationship with its customers.
The needs of customers can change rapidly, and management needs to be attuned to any changes and adapt accordingly if an Online Access Centre is to remain successful. By understanding and anticipating changes in customer demand as well as changes in services and delivery platforms, management can ensure that an Online Access Centre is ready for the challenges that may arise. To remain meaningful in the long run, an Online Access Centre will need to adopt appropriate technology and business methods and be on the lookout for improvements. This cannot be done efficiently without a strategic plan to guide the overall operations of the centre as well as its day-to-day activities.

Use of appropriate technology

Infrastructure and equipment should be chosen carefully to ensure they are appropriate for the range and level of services to be provided by the centre. The services themselves must reflect the needs of the community being served and the capacity of the community to utilise the services. These will change from community to community. For some small communities, the appropriate infrastructure may be minimal computing facilities in addition to telephone, fax and dial-up Internet. At the other end of the scale, infrastructure and equipment may be needed to cater for a variety of services ranging from telephony and fax facilities to broadband services such as video conferencing and tele-medicine. A holistic solution is required. In present-day terms, typical requirements include:
• ‘Smart’ procurement of carrier and Internet services.
• Demand aggregation via cable or wireless technologies at the local community level through the Online Access Centre, to embrace key local users of bandwidth (a rough sizing sketch is given at the end of this subsection).
• Demand aggregation across regions for connectivity to each Online Access Centre. A proportion of Online Access Centres will connect using terrestrial connections; others will use hybrid (1-way satellite, 1-way terrestrial) or 2-way satellite.
• Bandwidth-on-demand to cater for peak uses such as high-quality videoconferencing, audio and video streaming, etc.
• Interconnection of Online Access Centres via a secure IP-based Virtual Private Network (VPN) with sub-networks based on geographic/organisational/cultural principles.
• Concentrated access to the Internet through a minimum number of ‘pipes’.
• National/regional data centres and an ASP (application service provider) model.
• Central web hosting.
• Standard Operating Environments for Online Access Centres.
• Secure remote access for users/members who need this facility.
• Extranets for external partners (e.g. electronic procurement services for communities).
• Focus on the inevitable convergence of ICTs and electronic media.
• Central/regional network management, Service Desk and related administrative support.

Choice of equipment and software for use by community members should be guided by simplicity of use, sturdiness, ease of maintenance, etc. Having advanced equipment or software will be of little value if community members find its use daunting and are discouraged from using it or from learning how to use it. Whilst the great potential of videoconferencing remains largely untapped, equipment prices have come down enormously and there are plenty of examples to show that videoconferencing should be a key facility for Online Access Centres in rural and remote communities6. Videoconferencing systems should be designed and funded to deliver acceptable standards of audio and visual quality (e.g. a bandwidth of not less than 384 kbit/s and auto-focusing cameras for room-style videoconferencing).
Efficient organisation and management A professional approach to the management of an Online Access Centre is essential to the achievement of ongoing sustainability. Planning is a core function of good business management. In the context of Online Access Centres, there are two key aspects of planning – strategic planning and operational planning. A strategic plan provides a road map to achieving the goals of the centre and keeping it viable. A strategic plan usually sets out longer term goals and is augmented by a series of shorter operational plans that detail the activities to be undertaken to achieve the goals. A major benefit of a strategic plan is that it provides cohesion and coordination to shorter term activities and helps to keep them focused on the final goals. A strategic plan should be developed by management in consultation with the Online Access Centre’s management committee and should be endorsed by the Committee. Operational plans are more detailed. They set out a timetable for achieving specific objectives as well as the method and the resources required to achieve the desired targets. Such planning is useful for the identification of strengths and weaknesses in the operations of a centre and provides the benchmarks against which actual performance can be measured. With good planning, differences between planned and actual targets can be identified and remedial action taken when the two diverge significantly. A budget is an essential part of management and planning. It should be prepared at least annually and should be based on realistic estimates of the costs and revenues that will accrue to the centre during the period covered by the budget. A well-prepared budget will highlight any potential future difficulties a centre may face and will help management to take early remedial action to avert the difficulties. 6
A high proportion of Western Australian telecentres offer public access videoconferencing facilities that are available to anyone on a fee-for-service basis.
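To make the demand-aggregation and bandwidth-on-demand items in the list above concrete, a rough link-sizing sketch follows. The user mix, per-user peak rates and concurrency factors are illustrative assumptions only, not recommendations from the study:

# Rough sizing of a shared community link aggregating several local users
# around an Online Access Centre. All rates (kbit/s) and concurrency
# factors are illustrative assumptions.

users = [
    # (name, peak demand in kbit/s, fraction expected active at the busy hour)
    ("health clinic", 384, 0.5),   # e.g. occasional videoconferencing
    ("school", 512, 0.6),
    ("local government", 256, 0.4),
    ("access centre PCs", 768, 0.7),
]

# Aggregated busy-hour demand: peak demand weighted by concurrency.
busy_hour = sum(rate * active for _, rate, active in users)
headroom = 1.3  # margin for bursts / bandwidth-on-demand
link_size = busy_hour * headroom

print(f"busy-hour demand ~ {busy_hour:.0f} kbit/s; provision ~ {link_size:.0f} kbit/s")

The design point is that aggregation lets the community procure one larger shared link, generally cheaper per kbit/s, rather than several under-utilised individual connections.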
Planning and budgeting need to be augmented with operational policies and procedures. It is imperative that Online Access Centres establish/adapt an operational manual that clearly defines the duties and responsibilities of all staff and procedures for all key operational aspects. An efficient accounting and recording system is indispensable for accountability and an effective billing system is indispensable for the timely collection of sales revenue from customers. Pricing policies based on cost of provision of services should be established. The Manager and staff/trainees need to be able to manage the day-to-day operations and business of the Online Access Centre. The business and administration skills they will require between them include: record keeping, financial management and office administration; the development and implementation of a business plan; costing and marketing of the centre's services; dealing with occupational health and safety issues; management of community participants; engaging with potential and actual clients and service provider agencies, and the ability to operate Agency services (such as post office, banking, taxation office agency, etc); and developing needs assessment skills and evaluation methods (community and client services). Training and networking with colleagues are additional important issues that need to be tied in with staffing. Note that telecentre Managers are often going to require training in management and business operations, along with training in equipment and software applications and maintenance. Competent technical expertise is likely to be difficult to obtain in many rural and remote communities and can pose major problems to the capacity of centres to maintain continuity of services when problems are experienced. Staff should have the technical capacity to operate the equipment in the centre effectively. While they do not need to be technical experts, they should at least have a basic capacity for troubleshooting problems as they occur and resolve those that do not require extensive expertise. Unless the expertise is readily available locally, access to a remote ‘helpdesk’ service by staff that can guide them to the solution of minor problems could be essential to maintaining continuity in service delivery. Such a service could be provided by a central support unit that may be set up in support of a group of Online Access Centres or by a commercial provider. Online Access Centres could depend on volunteers to assist with some of the centre’s activities and the delivery of services to clients. Volunteers are a great asset, but their management can be somewhat more difficult to implement than the management of regular employees. The centre manager needs to develop a good strategy to attract and retain the services of volunteers and a standard agreement that sets out the functions and responsibilities of those who agree to work as volunteers. Once an Online Access Centre is operational the training of users from the local community must become the priority as many members of a rural or remote community will be faced with unfamiliar technology and services and cannot be expected to use those services without some appropriate training and encouragement. It is likely that the establishment of a telecentre will generate considerable interest and curiosity among community members so it is imperative that priority be
given to activities that will tap into that interest and curiosity in a way that provides a positive experience. Demonstrations should be realistic and meaningful to the community rather than a display of the attributes of the technology. Catering for people with disabilities is an issue that can affect how the Online Access Centre should be arranged, how staff are trained and how the facilities can be accessed. For people who are vision impaired, special enhancing equipment can be acquired for PCs, and speech-to-text and text-to-speech software can be procured to help them and people whose writing or reading skills are limited. Automated conversion from text to audio in an appropriate language can be very advantageous for people who are unable to write or read. As for other key client groups, it is important that staff be aware of, and trained in, how best to assist those with special needs.
Other operational and efficiency considerations

Co-location

The aim of co-location is to locate the Online Access Centre in a building with other community services so that it is not a standalone facility. This has the advantage of creating a 'community hub' or meeting place – one building where the community goes for a range of services. This has the potential to greatly increase awareness of the Online Access Centre and will lead to more people visiting it, using it and wanting to become involved. It also means that the Online Access Centre can share operating costs with the other service or services, reducing the amount it would have to pay as a stand-alone facility (see the case study below). Co-location requires careful selection and evaluation of the site if it is to be of benefit to the long-term operation of the centre and the community it serves. Access to the site should not be restricted at times of likely substantial demand for the services offered by the centre. For example, co-location in a building with restricted public access times or conditions (such as a school or Council office) would probably not be in the best interests of the Online Access Centre, but co-location with a public library could be a good strategy.
In Australia, the Kimberley Development Commission, in partnership with the Halls Creek Shire Council initiated the development of a Community Resource Centre, co-locating government and non-government organisations, in a new facility in the centre of town. Through co-location, agencies involved in regional economic or community development are able to provide improved and more efficient service delivery to regional communities. The centre incorporates a new Halls Creek Visitor Centre that also provides an up to date visitor service as well as café facilities and Automatic Teller Machine services. The Halls Creek Community Resource Centre has provided a strong focal point for the community and the wide range of new and upgraded services that it provides (Farr 2003).
Demand aggregation at the community level

An Online Access Centre gives a community more critical mass, thus building a stronger business case and perhaps securing more attractive pricing for telecommunications/Internet services. Customer aggregation is recommended to achieve economies via the extension of services to a range of users (e.g. health clinic, school, training provider, local government office) surrounding a Community Online Access Centre. This would be achieved by the deployment of an appropriate mix of terrestrial broadband cable or wireless technology.

Supportive structures

The viability of Online Access Centres will be considerably strengthened by supportive structures. Individual Online Access Centres operating independently as stand-alone centres will face many challenges that can be daunting to management. In his examination of the lessons for long-term sustainability, Brett Sabien, Manager, Western Australia Telecentre Support Branch7, has stated that a telecentre network requires "a central support team to guide and support the ongoing development of the network and a consistent flow of funding in the initial years, otherwise the communities' commitment to embrace the concept would most likely be short lived" (Sabien 2003). The Western Australia Telecentre Support Branch has a staff of 9 persons to support 103 telecentres (as at August 2005), with more planned. International research conducted by Professor Heather E. Hudson, Director, Telecommunications Management and Policy Program, University of San Francisco, also stresses the importance of having the appropriate level of support for the network to operate effectively and develop (Farr 2003).
7 See http://www.telecentres.wa.gov.au/network/tsb.asp; and http://www.oict.nsw.gov.au/content/3.4.CTCs.asp
There are many financial and organisational benefits that can accrue from being a member of a large group. For example:
• Members of a group can share experiences and develop solutions to common problems. They can also share resources such as the development of operating manuals, training guidelines and standardised contracts for commercial clients, and they can standardise management planning and control tools.
• A grouping (network) of centres would also be able to centrally organise and provide many of the support services that are needed for the efficient operation of Online Access Centres, including a central helpdesk service, group purchasing and group marketing.
• Similarly, the use of standardised equipment by group members would enable the establishment of a centralised equipment maintenance and repair service. The centralised service could hold a small pool of replacement units of key equipment that could be provided 'on loan' to a centre while its faulty equipment is being serviced or repaired, helping the centre maintain continuity of service in the meantime.

A supportive structure would involve the centre being part of a larger group, which may include related local, regional, state and/or national organisations.

Organised procurement of supplies and services

The not-for-profit status of many telecentres can bring valuable benefits, including enabling the organisation to access some funding sources. The viability of Online Access Centres will be considerably strengthened by procuring supplies and services on a best value-for-money basis. In this regard, there are many financial and administrative benefits that can accrue from strategic sourcing (e.g. demand aggregation) and e-procurement. For example:
• The group can act as a single buyer negotiating bulk purchase arrangements on behalf of its members. For example, a group purchasing equipment, computer software or telecommunications services on behalf of the members will be able to exert buying power to negotiate better terms than an individual centre would be able to obtain.
• Discounts applicable to not-for-profit organisations should be exploited.
• Governments often have panel contracts in place for numerous supplies and services, including telecommunications, IT equipment and computer software. Online Access Centres could qualify to procure goods and services from these panel contract arrangements.

Experience with telecentres strongly suggests that those that combine into a 'cooperative network' have much better prospects for ongoing sustainability than stand-alone operations.
Conclusions

The establishment of Online Access Centres in rural and remote communities reflects the pursuit of the social policy objective of providing all members of a society with access, at reasonable and affordable prices, to communications services that are indispensable to their wellbeing. Evidence shows that a local Online Access Centre can reduce feelings of isolation and of 'falling on the wrong side of the digital divide'. It can lead to the development of many new skills along with long-term employment opportunities, economic development and a greater ability to cope with change. However, there are three interrelated features of sustainability, all of which require close and ongoing attention to help ensure the long-term viability of Online Access Centres:
• An Online Access Centre must have sufficient financial resources to meet all its capital costs (including set-up and replacement) and its operational costs.
• Communities must be fully empowered to make their own decisions on the establishment and operation of Online Access Centres.
• Viability cannot be sustained without efficient management of all the operational activities.

Because of the substantial fixed and overhead costs of supplying ICT services in rural and remote communities, access to services at prices comparable to those charged in regional areas is unlikely to generate sufficient revenue to fully support the operation of Online Access Centres without some form of external financial support. Such support would be consistent with policies on universal access to communications services. The external financial support should be based on the estimated shortfall between the cost of operating an Online Access Centre and the revenue likely to be generated by the sale of services. The supply of non-essential services should not be subsidised.

The development of partnerships with organisations and agencies with a demand for services supplied by Online Access Centres will enhance their ongoing sustainability. Communities should be directly involved in the ownership, planning and operation of Online Access Centres. Direct involvement is indispensable to the ability of Online Access Centres to provide services that are relevant to, and meet the needs of, the communities they serve.

Online Access Centres must strive to maximise efficiency in all aspects of their operations. Inefficiencies add unnecessary cost and increase the risk of an Online Access Centre becoming unsustainable. The key points to stress in regard to training are that training development, funding and implementation need to be part of the Online Access Centre establishment phase; that resources will be required for ongoing training until a significant depth of skills and community interest is achieved; and that training needs to be flexible, timely and appropriate for the different training target groups.
Government-funded ICT equipment and services, provided in the past to Australian rural and remote communities and organisations under various programs, are now at imminent risk due to sustainability problems, or because the original program did not deal with the totality of the challenge – e.g. the importance of 'smart procurement', documentation, training and support, and developing meaningful partnerships. In these programs, the hardware and software provided were generally funded along with recurrent expenditure for a limited period. Community TeleServices Australia has concluded that if the Commonwealth, State and Territory government agencies across Australia were to work more closely together – in collaboration with recipients – in the implementation of ICT programs and projects in rural and remote communities, beneficial outcomes would be much more probable. Such cooperation and coordination may need to become a matter for government funding policies. Greater inter-departmental cooperation by government agencies might not only lead to more financial sustainability in rural/remote ICT programs, but might also yield significantly higher cost-benefits from the ongoing funding currently required – but often not budgeted for – to maintain ICT systems and training in rural/remote areas (Geiselhart 2004).
Acknowledgments

The contents of this chapter include ideas developed in the course of conducting a study and producing a report for the Australian Government Department of Communications, Information Technology and the Arts (Farr 2003). The authors acknowledge the significant contributions to the ideas in this chapter from all members of the consulting team that conducted the above project, and in particular Laurence Wilson (formerly of the Centre for Appropriate Technology, Alice Springs), Dr Helen Molnar (MC Media & Associates, Melbourne) and Prof. Heather Hudson (Director, Telecommunications Management and Policy Program, University of San Francisco, California, USA).
References

Crellin IR (2004) The Triple Bottom Line: The Rationale for External Support of Online Access Centres. http://www.teleservices.net.au/papers/Crellin_Triple_Bottom_Line.htm
Farr P (2003) Connecting Our Communities – Sustainable Networking Strategies for Australian Remote Indigenous Communities. Report by Peter Farr Consultants Australasia Pty Ltd for the Department of Communications, Information Technology and the Arts (Australia). http://www.dcita.gov.au/__data/assets/pdf_file/15444/Connecting_Our_Communities.pdf
Fuchs R (ed) (1998) Little engines that did – Case Histories from the Global Telecentre Movement. Prepared for IDRC Study/Acacia Initiative. http://web.idrc.ca/ev.php?ID=10630_201&ID2=DO_TOPIC
Geiselhart K (ed) (2004) The Electronic Canary: Sustainability Solutions for Australian Teleservice Centres. Report by Community TeleServices Australia, Inc, commissioned by the Networking the Nation Board, Department of Communications, Information Technology and the Arts (Australia). http://www.teleservices.net.au/CTSA_Viability_Report%20Final.pdf
Jauernig C (2003) Review of Telecenter Sustainability Criteria for the Establishment of Sustainable Rural Business Resource Centres for SMEs in Developing Countries. Background Paper prepared for the Small and Medium Enterprise Branch, United Nations Industrial Development Organization, Vienna
Proenza FJ (2001) Telecenter Sustainability: Myths and Opportunities. The Journal of Development Communication, vol 12, no 2. Special Issue on Telecenters and ICT for Development: Critical Perspectives and Visions for the Future. http://ip.cals.cornell.edu/commdev/jdc-1.cfm
Proenza FJ (2005) Telecenters for socioeconomic and rural development: Investment Opportunities and Design recommendations in Latin America and the Caribbean. http://www.iadb.org/sds/itdev/telecenters/presentation.pdf. Presented at the Asian Development Bank Institute's Managing Sustainable E-Community Centers Workshop, India, May 2005. http://adbi.adb.org/event/2005/05/04/793.managing.ecommunity.centers/
Reeve I (1998) Telecentre Case Study – Australia, part 3.2. In: Fuchs R (ed) Little engines that did – Case Histories from the Global Telecentre Movement. Prepared for IDRC Study, Acacia Initiative. http://web.idrc.ca/ev.php?URL_ID=10638&URL_DO=DO_TOPIC&PHPSESSID=99afaf865c63715ad0afba6db4963af3
Roman R, Colle R (2002) Themes and Issues in Telecentre Sustainability. Development Informatics Working Paper Series, Institute for Development Policy and Management, University of Manchester, Paper No. 10. http://www.man.ac.uk/idpm
Sabien B (2003) Financial Sustainability in Telecentres. International Telecommunications Society Asia-Australasian Regional Conference, Perth, June 22–24
Saga K (2004) Key Issues for the Successful Implementation of Telecentres – Success Factors and Misconceptions. PTC'04 Conference, Honolulu, Hawaii, January, Session W.2.3. http://www.ptc.org/PTC2004/index.html
Wellenius B (2003) Sustainable telecentres: a guide for government policy. The World Bank Group, Private Sector and Infrastructure Network, Note 251, January. http://www.eldis.org/static/DOC13200.htm
The SMS Bandwagon in Norway: What Made the Market?1

Kjetil Andersson2,*, Øystein Foros3,**, Frode Steen4,***

* Telenor R&D, Norway
** Norwegian School of Economics and Business Administration, Norway
*** Norwegian School of Economics and Business Administration, Norway
Abstract

Short Message Service (SMS) has been an overwhelming success in Europe, substantially larger than in the United States. In relative terms, Norway represents one of the largest SMS markets in the world. The aim of this paper is to examine the relationship between economic theories of bandwagon effects and the Norwegian mobile providers' management of the SMS market. We narrow the focus to the problem of getting the SMS bandwagon rolling. We emphasise two features crucial to the SMS success. The first is low prices on text messaging relative to mobile phone call charges for low-end tariffs. This seems to have been particularly important in the price-sensitive youth market. The second key feature is the high degree of interlinking with respect to functionality and pricing. Both these features differ between Europe and the United States, and we argue that this might explain the difference in market development. The development in the SMS market suggests that it is important that the regulator does not interfere in the early stage. In the SMS market, in the absence of regulation, ex ante superfluous functionality ended up ex post as major successful services. This suggests that the regulator should be very careful when designing regulation regimes in bandwagon markets, to avoid reduced innovation.
1 We thank Telenor Mobil for generously providing data on mobile originated SMS. Kjetil Andersson thanks Telenor R&D for financial support; Øystein Foros and Frode Steen are grateful for financial support from the Norwegian Research Council through the research programs KIM 1303 and ISIP 154834, respectively.
2 E-mail: [email protected]
3 E-mail: [email protected]
4 E-mail: [email protected]
Introduction

If you type 7777 44 2 555 555 0 9 33 0 4 666 0 666 88 8 0 333 666 777 0 2 0 3 777 444 66 55 1111 on your Nokia mobile phone, you could be sending a message asking your friend "Shall we go out for a drink?". Even if the user interface does not seem to be that good, the Short Message Service (SMS) has been an overwhelming success in Europe, and in particular in Norway, where customers on average sent 53 person-to-person SMS per month in 2002. The person-to-person services have been followed by a successful deployment of information services distributed by SMS (e.g. downloading of logos and ringtones, SMS voting, interactive TV, quizzes and games, jokes, betting, pay-per-view web content and so on). Moreover, additional communication services such as chatting are now widely used. While the average mobile subscriber in Europe sent 30 messages a month in 2002, text messaging has not been a comparable success in the United States, where the average mobile subscriber sends just seven messages a month (Economist 2003a, 2003b).

The aim of the paper is to examine the relationship between economic theories of bandwagon effects and the Norwegian mobile providers' management of the SMS market. We narrow the focus to the problem of getting the SMS bandwagon rolling. What are the key factors influencing the success of SMS in general, and in Norway in particular? What has been the role of the regulation authorities? What have the operators and the regulator learned from the SMS phenomenon before the launch of new services such as MMS5 and 3rd generation mobile systems (3G)?

We emphasise two features that we believe have been crucial to the SMS success. The first is low prices on text messaging relative to mobile phone call charges for low-end tariffs. Expensive mobile voice may force price-sensitive users to use text messaging as a substitute. The fact that talk is cheap in the United States can also explain the low usage of text messaging there compared to Europe. The second key feature is the high degree of interlinking with respect to functionality and pricing. In contrast to mobile phone calls, text message pricing does in general not depend on which provider the receiver subscribes to. SMS interlinking quality has been high in most European countries, again contrary to the US market, where this feature is quite new.

The development in the SMS market suggests that it is important that the regulator does not interfere. Ex post constraints on revenue will reduce incentives to innovate. In the SMS market, in the absence of regulation, ex ante superfluous functionality ended up ex post as major successful services. This suggests that the regulator should be very careful when designing regulation regimes in bandwagon markets.

The article is organised as follows: In the next section we present a simple bandwagon model and briefly discuss key issues in managing the start-up problem of a bandwagon service. In the third section we give the short story of SMS as well as an overview of the Norwegian market structure. In the fourth section we
5 Multimedia Messaging Services.
analyse the key underlying factors of the SMS bandwagon in Norway. Finally, we summarise some key lessons from the SMS story.
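As an aside, the keypress sequence quoted at the start of this chapter follows the standard multitap text-entry scheme of GSM handsets, where repeated presses of a key cycle through its letters. The sketch below decodes it; the digit-to-letter layout is the standard keypad mapping, while the punctuation cycle assigned to key 1 is an assumption (it varied between handset models), chosen here so that four presses yield '?'.

```python
# A minimal multitap decoder. The punctuation cycle on key 1 is an
# assumption; the digit-to-letter layout is the standard GSM keypad.
MULTITAP = {
    "1": ".,'?", "2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
    "6": "MNO", "7": "PQRS", "8": "TUV", "9": "WXYZ", "0": " ",
}

def decode(key_presses: str) -> str:
    """Decode space-separated runs of identical key presses."""
    out = []
    for run in key_presses.split():
        letters = MULTITAP[run[0]]          # letter cycle for this key
        out.append(letters[(len(run) - 1) % len(letters)])
    return "".join(out)

print(decode("7777 44 2 555 555 0 9 33 0 4 666 0 666 88 8 0 "
             "333 666 777 0 2 0 3 777 444 66 55 1111"))
# Prints: SHALL WE GO OUT FOR A DRINK?
```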
Bandwagon theory

Bandwagon effects (or network effects) take place when the benefits to any individual consumer of a product or a system increase with the number of other users. Telephony, e-mail and SMS are prominent examples of products exhibiting strong network effects. Let us illustrate the implications of network effects by the following example based on Shapiro and Varian (1998b). Suppose that there are 1000 people in the market for a given service, and let v be the reservation price of person v, where v=1,…,1000. At price p the number of users that value the service at a price higher than p is 1000-p. In a traditional market we will then have a downward-sloping demand curve as shown in the left-hand side panel of Fig. 1. In a competitive market with constant marginal costs equal to c there will be a unique equilibrium with p=c and quantity equal to n̂.

Let us now consider bandwagon (network) services like text messages, where the benefit to each user increases with the number of other users. Assume that the benefit for person v of the service is vn, where n is the number of users. By combining p=vn and n=1000-v the demand curve can be written as p=n(1000-n). The right-hand side panel of Fig. 1 illustrates this demand curve graphically. We see that it has a shape fundamentally different from the traditional demand curve, since the first part of it is upward sloping.
Fig. 1. Demand in a traditional market (left-hand side panel) and in a market with bandwagon effects (right-hand side panel), where p is price, c is marginal cost and n is the number of users
The first few consumers that connect to the network have a low willingness to pay simply because they have few people to communicate with. However, the willingness to pay increases as more consumers are connected to the network.
190
Kjetil Andersson, Øystein Foros, Frode Steen
This is what gives rise to the upward-sloping part of the demand curve in the figure. Nonetheless, the figure shows that the marginal willingness to pay decreases once a sufficiently large number of consumers is connected to the network. The reason is that those who value the service highest, i.e. have the highest v, are already connected to the network.

The bandwagon market depicted in the right-hand side panel of Fig. 1 has three possible equilibria: two stable ones and one unstable. If no one connects to the network (n=0) the willingness to pay is equal to zero, p=0. This will typically be the result if the potential users do not expect the system to take off. On the other hand, if a large number of consumers enter the system we may end up at n=n** and p=c. Consequently, we have two stable equilibria: n=0 and n=n**. The equilibrium in between, denoted n*, is unstable. The significance of n* is that it marks a point of critical mass in the sense that once the market size is barely above n*, the willingness to pay is higher than the price and new customers enter until we have reached n**6. These n** consumers (except the last one) experience a strict welfare gain from consuming the service. The equilibrium at n=0 is thus inferior from a welfare point of view. Each consumer that enters the network imposes a positive externality on the others, since she increases the value of the system. The problem is that no single consumer has any incentive to enter the network unless she expects others to do the same.

What determines whether the network will reach a critical mass, i.e. a point to the right of n*, where the system grows and becomes a success? Penetration pricing may be one tool. Low prices when a new bandwagon service is launched obviously make it easier to induce people to connect to the network. A second tool, and probably at least as important for the providers who try to manage the bandwagon, is the strategy with respect to the degree of interlinking. By interlinking we mean whether the customers enjoy bandwagon effects with respect to all other customers (high degree of interlinking), or only with respect to the customers subscribing to the same provider (no interlinking). Hereafter we follow Rohlfs (2001) and let the term interlinking cover both direct network effects (demand-side economies of scale) and complementary bandwagon effects (demand-side economies of scope)7.
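To make the critical-mass logic concrete, the sketch below solves the example numerically: the two crossings of the demand curve p = n(1000 - n) with marginal cost c are the unstable critical mass n* and the stable high-adoption equilibrium n**. The marginal-cost figure used in the call is an illustrative assumption, not a number from the text.

```python
# A minimal numerical sketch of the Shapiro-Varian bandwagon example:
# 1000 potential users, benefit v*n, demand curve p = n*(1000 - n).
import math

def bandwagon_equilibria(c: float, pop: int = 1000):
    """Return (n*, n**): the unstable and stable roots of n*(pop - n) = c."""
    disc = pop ** 2 - 4 * c          # discriminant of the quadratic
    if disc < 0:
        raise ValueError("marginal cost too high: the service never takes off")
    root = math.sqrt(disc)
    return (pop - root) / 2, (pop + root) / 2

n_star, n_2star = bandwagon_equilibria(c=40_000)   # c chosen for illustration
print(f"critical mass n* = {n_star:.0f}, stable equilibrium n** = {n_2star:.0f}")
# Prints: critical mass n* = 42, stable equilibrium n** = 958
```

Any market size below n* unravels back towards zero, while any point above it snowballs towards n**; this is the sense in which penetration pricing or free introductory offers can tip the market.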
The short story of SMS

The Short Message Service (SMS) enables users to send and receive messages from their mobile phones. It was the first mobile data service to become a mass-market success in Europe8.

6 This definition of critical mass is not always utilised in the literature. For instance, Economides and Himmelberg (1994) define critical mass as the smallest network size that can be sustained in equilibrium. This corresponds to the market size at the maximum point of the demand curve in the right-hand side panel of Fig. 1.
7 Analyses of telecommunications usually use the term interconnection to refer to direct network effects, while, with respect to complementary bandwagon effects, interlinking relates to the degree of compatibility.
SMS is a non-proprietary standard that was developed in the early 1990s by the cross-industry forum, the GSM Association, and the SMS standard was part of the GSM standard9. The initial application was to send voice mail notifications from the network operator to its subscribers. The initial purpose also explains the limited functionality and capacity of SMS: an SMS message can only contain up to 160 characters10. Even if the initial purpose of the SMS standard was to send messages to the subscribers, the standard also allowed for messages to be sent from a mobile handset. Hence, the SMS standard allowed for interactive services. The initially "superfluous" ability to send messages from mobile handsets has formed the basis for the killer application, person-to-person SMS. However, the mass usage of person-to-person SMS only occurred in the late 1990s. From the mid-1990s, the GSM phones launched on the market incorporated text-editing software, and mobile providers also began to put two-way capability into their networks. It is interesting to note that in the process of developing the GSM standard, the providers actually had incentives to include "superfluous" abilities enabling future service innovations such as mobile originated SMS.

SMS may be divided into two categories: 1) Person-to-person SMS (P2P), or mobile originated SMS, which enables mobile users to send short text messages from their mobile handsets to other mobile phone users. In the production of P2P SMS only the mobile operators and the users are involved. 2) Information SMS, which enables the mobile users to buy different types of information and content services using SMS. Examples are downloading ringtones and logos, alerts (e.g. goal alerts), quizzes and games, SMS voting (who should leave "Big Brother" tonight?), and paying for movies, web content, parking, and so forth. Information SMS is typically provided by a separate content provider who buys SMS distribution and billing as inputs from the mobile operators. Hence, Information SMS involves more players than P2P SMS. Information SMS is commonly described as Premium SMS, but we find the term premium somewhat misleading given that P2P SMS is by far the more popular and revenue-generating category.

The Norwegian market

Since 1993 there have been two facility-based mobile operators in Norway, NetCom and Telenor, each of them operating a GSM network. At the end of 1999 the first virtual operator (Sense Communications), an operator who buys network access from the mobile network operator and provides mobile services to end users, appeared in Telenor's network.
8 In Japan DoCoMo's I-mode service has been a big success since the introduction in 1999.
9 Global System for Mobile (GSM).
10 To overcome these problems the handset producers have included new features to improve the user interface with respect to typing messages, such as the option to store pre-defined message templates, dictionaries and predictive text, and special keyboards.
Fig. 2. Number of mobile originated SMS, in millions, in Telenor’s network. Source: Telenor.
Over the next few years several virtual operators (VOs) connected to the networks of both Telenor and NetCom11. In 2002 the market shares of Telenor, NetCom and the VOs were 59.7%, 28.7% and 11.7%, respectively (Source: PT (The Norwegian Post and Telecommunication Authority) 2002). In addition to the mobile operators, the other major type of player in the mobile market is the provider of Information SMS. The providers of Information SMS may be divided into large content providers (Cellus, Mobilnett, Inpoc, Popit, Maxsms), who have their own interlinking agreements, usually called Content Provider Agreements (CPA), with NetCom and Telenor, and smaller content providers who buy interlinking from SMS aggregators (e.g. Carrot and Teletopia). The SMS aggregators then have interlinking agreements (CPA agreements) with NetCom and Telenor.

Fig. 2 shows the development of mobile originated SMS in the network of the largest mobile operator, Telenor. The development of SMS usage in NetCom's network resembles that of Telenor. Clearly, some time during 1998 the market size reached a point that triggered explosive growth. Underneath the take-off of P2P SMS depicted above is a rapid growth in mobile subscribers. Both Telenor and NetCom reported record-breaking increases in the sale of mobile subscriptions during 1997–1999. Of the new subscribers, more than 50% chose the pre-paid subscription type that was first introduced in the second (NetCom) and third (Telenor) quarters of 1997. Interestingly, at the beginning, the pre-paid customers could not send SMS – this feature was first introduced in the fourth quarter of 1998.
11 VOs in Telenor's network in 2003 included, among others, Tele 2, Song and Chess, while You, PGOne, and Sense are major virtual operators in NetCom's network. Sense switched to NetCom after buying the NetCom VO Site Communication in 2002.
However, the growth in mobile subscribers cannot explain the Norwegian SMS success. The market penetration and usage of mobile calls are high in Norway, but not significantly higher than in the other Scandinavian countries when we consider the penetration of GSM subscriptions, see Fig. 3. In contrast, the usage of P2P SMS in Norway is much higher than in the other Scandinavian countries, as illustrated in Fig. 4.
Fig. 3. Penetration rates of GSM subscriptions. Sources: Ministry of Transport and Communications Finland, The National Post and Telecom Agency (Sweden), The National IT and Telecom Agency (Denmark), Norwegian Post and Telecommunication Authority
Fig. 4. Monthly SMS per GSM subscriber. Sources: Ministry of Transport and Communications Finland, The National Post and Telecom Agency (Sweden), The National IT and Telecom Agency (Denmark), Norwegian Post and Telecommunication Authority
Basic features underlying the Norwegian SMS bandwagon

We will now discuss some key features of the Norwegian SMS success, with a particular emphasis on the features that probably made the start-up problem easier to overcome: the simple and cheap charging, the high degree of interlinking and, finally, the hands-off role of the regulatory authorities.

Pricing: Simple and cheap in contrast to mobile calls

Prior to 2000 the regular price of a P2P SMS was 1.50 NOK for both NetCom and Telenor subscribers12. In a short period following the introduction of SMS to the pre-paid subscribers at the end of 1998, Telenor offered SMS for free to these customers. At the same time the price per minute for calls during daytime (Monday to Friday 07–18) for the pre-paid customers was 7 NOK. The same per-minute price also applied to the post-paid low-end tariffs. In retrospect, one may wonder whether this seemingly extremely effective penetration pricing (see Fig. 2) was a case of good luck or a deliberate marketing strategy to trigger the bandwagon effect. Nevertheless, the combination of cheap or free SMS and high per-minute prices on mobile phone calls is probably a key factor in the SMS success.

The latter feature, high prices on mobile phone calls, was particularly important to the new customer groups that entered the mobile markets in the late 1990s. As stated above, a large part of these customers bought prepaid cards or other low-end tariffs with a low fixed fee and a high price per minute for phone calls. Teenagers, for instance, quickly grasped that they could communicate much more cheaply by SMS than by making phone calls.

The mobile providers typically bundle SMS with their GSM subscription, both for post-paid and pre-paid tariffs, such that all GSM mobile users have the ability to use SMS. Hence, the customers can send SMS only from their GSM provider, and rivals (VOs) cannot offer SMS services without offering GSM subscriptions too.

In contrast to what we have seen for mobile voice telephony, SMS was offered to end-users with an extremely simple pricing model. The price per minute of mobile phone calls differed a lot between on-net and off-net traffic and between high-end and low-end tariffs. In the daytime the price per minute has been up to 10 times higher for prepaid cards than for the high-end tariffs. The unit price of SMS, however, was for the most part independent of the type of mobile subscription in the take-off period prior to 2000.

The motivation for the price differences in mobile phone calls is obviously versioning, i.e. the providers want to prevent customers with a high willingness to pay (typically business customers) from switching to low-end tariffs such as prepaid cards13.
12 In the first quarter of 2000 both operators lowered SMS prices, and also started to differentiate SMS prices between subscription types.
One problem for the providers with this versioning strategy was that the low-end subscribers made few calls – they mostly received calls. However, when we narrow the focus to SMS, the pricing strategy for mobile phone calls most probably stimulated the use of SMS by the new customers on low-end tariffs such as prepaid cards. The main tool to attract these new customer groups, such as teenagers, was handset subsidising and the introduction of tariffs with a low, or zero, monthly fee. But in order to prevent the highly profitable business customers from switching to these new tariffs, the operators were forced to include very high per-minute prices on phone calls in these tariffs. Hence, even if the entry fee with prepaid cards was low, the price per "amusement" (phone call) was high.

In this context, SMS was an optimal tool for the providers. SMS had been available to business customers for several years, but they did not send messages to any significant degree. Hence, low SMS charges would not be a temptation for the business customers, and the providers did not need to fear revenue cannibalisation when using low SMS prices as bait in their prepaid and other low-end offerings. It may be a paradox that the awkward interface of SMS prevented the use of the service by the first generation of mobile users, but the fact that the first generation of mobile users (the business customers) did not value the SMS opportunity made SMS particularly suitable for the next generation of mobile users. There was no cannibalisation problem, and furthermore, teenagers probably had lower learning costs than the first generation of mobile users.

The simple and cheap pricing of SMS before the bandwagon take-off seems to have been one important reason for the huge success. Even if the operators did not have a clear strategy for the SMS introduction, they quickly started marketing campaigns after observing that new customer groups such as teenagers used the previously "sleeping" functionality. In 1999, the operators in the Norwegian market tried to attract new prepaid customers with introduction offers whereby new customers were given a specific number of messages for free or at reduced prices for a given period. Northstream (2002) argues that these marketing campaigns were an important reason for SMS being used more in Norway than in Sweden.

If the text messaging take-off was caused by the fact that price-sensitive teenagers found it less expensive to text rather than talk, there is no value added for the consumers except the cost reduction. In fact, American analysts use this as an explanation of the difference between Europe and America: the reason for the low American usage of text messaging is that talk is cheap (Economist 2003a, 2003b). However, even if this effect was an important explanation of the SMS take-off, text messaging is now used in many situations where phone calls are not a substitute. For instance, P2P SMS is used when the sender or the receiver cannot talk. Moreover, phone calls are not a substitute for the majority of the services offered in the Information SMS market. Finally, the youth image and the growth of a specialised language to overcome the interface limitations gave SMS a cult status (Ovum 2002).
13 See Shapiro and Varian (1998a) for an informal discussion of versioning.
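A back-of-the-envelope comparison makes the relative-price argument above tangible. The tariff figures are those cited in the text (1.50 NOK per SMS, 7 NOK per daytime minute on low-end tariffs); the number of messages and the call length assumed for an equivalent conversation are our illustrative assumptions.

```python
# Relative cost of the same exchange by SMS versus a daytime call on a
# low-end tariff. Usage figures are illustrative assumptions.
SMS_PRICE_NOK = 1.50          # regular P2P SMS price prior to 2000
VOICE_NOK_PER_MIN = 7.00      # daytime per-minute price, pre-paid tariffs

def exchange_cost(messages: int, call_minutes: float):
    """Cost of a conversation carried by SMS versus by a phone call."""
    return messages * SMS_PRICE_NOK, call_minutes * VOICE_NOK_PER_MIN

sms, voice = exchange_cost(messages=4, call_minutes=3)
print(f"4 messages: {sms:.2f} NOK vs a 3-minute call: {voice:.2f} NOK")
# Prints: 4 messages: 6.00 NOK vs a 3-minute call: 21.00 NOK
```

On these assumptions the text exchange costs less than a third of the call, which is exactly the margin that made SMS attractive to price-sensitive, low-end subscribers.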
Interlinking

The first feature to note with respect to interlinking in the SMS market is the existence of a non-proprietary industry standard for how a message is sent from one mobile phone to another. The alternative would have been several proprietary standards with providers competing for dominance. The feature of a common industry standard has obviously been an important one for the success: the consumers and the non-strategic market players, such as small providers of Information SMS, need not fear that they are choosing the wrong standard.

The common technical standard formed the basis for interlinking in the SMS market. However, in order to have a high degree of interlinking from the customer's perspective, bilateral agreements between the providers need to be implemented. As to direct network effects, the degree of interlinking in the P2P SMS market depends on whether the suppliers have interconnection agreements ensuring that people can send messages regardless of which operator the recipient subscribes to. With respect to P2P SMS, a complete degree of national interlinking has been agreed on in most European markets, and in Norway NetCom and Telenor have had P2P interconnection agreements since the fourth quarter of 1996. The number of SMSs in Telenor's network increased by about 30% immediately after this agreement (see Fig. 2). Interconnection agreements on P2P SMS are easy to implement as long as the providers have the incentives to do so. The high degree of interlinking is important for the rapidly growing use of SMS, since each SMS user enjoys the bandwagon benefits with respect to both NetCom's and Telenor's subscribers. A high degree of interlinking in this respect makes it easier to reach a critical mass.

The degree of interlinking perceived by the end-users is enhanced by the fact that the end-user charges do not depend on whether the message is terminated off-net or on-net. When P2P SMS is considered as a substitute for mobile phone calls, this feature is particularly important. While the providers set the same price for off-net and on-net P2P SMS, mobile voice was, at the time of the SMS introduction, charged a higher price when the call was terminated off-net than when it was terminated on-net. Hence, while the customers had to check whether the receiver was connected to the same provider or not in order to know the price per minute of a call, this was not necessary for SMS.

Another dimension of P2P SMS interlinking is the interface between mobile handsets and the networks. Since 1995 all mobile handset manufacturers have integrated the SMS standard, such that all GSM phones are capable of sending and receiving messages. Hence, when the users first learnt about SMS, they were able to start using it themselves right away. This was an advantage for SMS compared to other mobile services such as WAP and MMS, where consumers may want to wait and see in order to avoid spending money on a service that no one uses. Both WAP and MMS, at the time of their introduction to the market, required the majority of customers to buy a new mobile handset.

In contrast to the high degree of P2P SMS interlinking in Europe, the degree of interlinking has been low in the United States, where the feature of sending messages between different networks was not implemented until the middle of 2002.
Furthermore, not all handsets sold in the United States support two-way texting, and the feature of sending SMS is not included in standard subscriptions but must be bought as an additional service (Economist 2003a). In our opinion, a low degree of interlinking is probably a more important explanation of the low American usage of SMS than cheap phone calls, as discussed above.

Interlinking with respect to Information SMS is more complicated than P2P interlinking. The mobile network operators need to agree on how to allocate the numbers. It is important for the content provider to have the same number from all the mobile operators to facilitate marketing to the whole set of users. One of the most important Information SMS services has been TV-related text messaging, where viewers vote and send comments. For such services it is important that the providers offer common shortcodes (four-digit numbers) for all subscribers. NetCom and Telenor offered common shortcodes from 2000, while in the majority of other European countries common shortcodes were not offered before 2002. Common shortcodes have probably been the most important factor in the take-off of TV-related SMS (Economist 2002).

A key feature of the Norwegian market is that NetCom and Telenor in general agreed on a high degree of interlinking for Information SMS. In April 2000 the two mobile network operators launched what was to a large extent a common Information SMS concept, Content Provider Access (CPA), with a very similar wholesale pricing and technical interface towards SMS content providers and SMS aggregators. Hence, the degree of cooperation on interlinking of Information SMS has been high in Norway compared to other European markets. An interlinking concept for Information SMS in Sweden was launched more than a year later than in Norway (Northstream 2002).

The cooperation between NetCom and Telenor also increased the product range available to the content providers. This was primarily due to the principle of "Reversed Billing" that was applied from the launch in Norway in 2000. "Reversed Billing" enables the operators to charge the customer for messages sent from the content provider to the customer, in contrast to the Calling Party Pays (CPP) principle typically used for phone calls. The "Reversed Billing" principle gives the content providers the possibility to offer subscription services such as goal alerts, whereby the content provider sends the subscriber a message when his favourite team scores and the subscriber is charged for every such message he receives. Without "Reversed Billing", such subscription services cannot be launched (the customer would need to send a message "to ask" whether his team has scored). In the Swedish market Reversed Billing was not launched until late 2002.

Another difference between the Norwegian and the Swedish market is the high degree of transparency with respect to the mobile operators' wholesale offers to content providers and SMS aggregators in Norway. Several analysts, e.g. Northstream (2002), have seen the high degree of interlinking and transparency in the interplay between mobile operators and content providers as a key feature behind the high usage of Information SMS in Norway. We agree that the high degree of interlinking achieved through the cooperation between NetCom and Telenor probably implied that there was a larger pie to be shared by the mobile operators, the content providers and the customers. However, each operator has a monopoly with
198
Kjetil Andersson, Øystein Foros, Frode Steen
respect to giving the content provider access to its subscribers. In order to gain access to Telenor's customers, a content provider needs an agreement with Telenor; likewise, a content provider needs an agreement with NetCom to reach NetCom's customers. Hence, an agreement with NetCom is not a substitute for an agreement with Telenor, and consequently NetCom and Telenor do not compete for the SMS content providers and aggregators who want to offer Information SMS. In the absence of such competition, the content providers and aggregators may fear that the mobile operators will capture the lion's share of the total pie. This may limit the content providers' incentives to enter the market. Hence, it is unclear whether the cooperation between NetCom and Telenor on wholesale pricing and interface conditions has reduced the start-up problem or not.

As mentioned above, almost all mobile handsets since 1995 have integrated the SMS standard needed to offer P2P SMS. In contrast, the foundation for the Information SMS market was laid by Nokia's proprietary standard, Smart Messaging, introduced in 1997 (Ovum 2002). The main feature in this context is that Smart Messaging opened the way for downloading logos and ringtones. This was the start of the Information SMS market, since Nokia allowed third parties to start offering logos and ringtones to customers with Nokia handsets. Because Smart Messaging was a proprietary technology, only Nokia phones could access these services from the content providers. Later the operators implemented gateways (ensured interlinking) such that handsets other than Nokia could access Smart Messaging content. However, the initial low degree of interlinking with other handsets had given Nokia an advantage, and in that period Nokia did significantly better than its main rival Ericsson. Proprietary standards by the handset producers implied that almost all ringtones offered by content providers were available only to customers with Nokia handsets. In the Scandinavian countries Nokia became an almost dominant supplier of handsets, in particular with respect to prepaid customers. Obviously, it is hard to figure out to what extent proprietary standards for ringtones were decisive for this development.

Regulation

When we discuss the role of the Norwegian sector-specific regulation authority, PT, we need to make a distinction between its ex ante hands-off approach in the infancy of the market and its more active role after SMS became a success. The operators considered SMS a data service and therefore, in their opinion, SMS should not be regulated through the Open Network Provision obligations. In the infancy of the market there was no attempt by PT to use any form of remedy towards NetCom or Telenor with respect to P2P SMS. Moreover, neither PT nor the competition authorities have intervened in the strong cooperation between NetCom and Telenor on pricing and interlinking in the wholesale market for Information SMS. As discussed above, the coordination of the structure of wholesale prices to content providers may have a negative effect on the content providers and the end-user prices, although analysts such as Northstream (2002) argue that the cooperation on quality and wholesale pricing, as well as the transparency
in wholesale offerings, have benefited the content providers. However, it is well known that such transparency may be a tool for practising tacit collusion. As Northstream (2002) states, PT has encouraged the cooperation initiatives. It is interesting to note that several analysts, including PT, consider the cooperation on wholesale pricing – a practice that probably should have been banned under Norwegian competition law – as one of the key features behind the Information SMS take-off in Norway.

The hands-off approach used in the infancy of the market changed after the bandwagon started. PT has obligated Telenor to offer P2P SMS as a wholesale service to the independent firm Teletopia. Hence, Telenor is forced to unbundle the P2P SMS service from the mobile subscription; until now customers have had the opportunity to send P2P SMS only from their mobile telephony provider. Telenor and NetCom have been critical of this and argue that the wholesale unbundling and wholesale price regulation of P2P SMS will have negative effects on their investment and innovation incentives. To what extent this ex post approach influenced the market players in the early period of SMS depends on whether the players expected this type of intervention to happen. On the one hand, if the intervention was expected, the providers would have internalised its effects. On the other hand, if they did not expect ex post intervention, the regulation has not had any influence on the introduction of SMS. However, in the latter case, we would expect the providers to internalise the effect of ex post regulation of services that become a success when they consider investing in new services such as MMS and 3rd generation mobile systems (UMTS). The potential negative effects of ex post regulation of successful services are comprehensively discussed by Hausman (1997, 2002).
Lessons from the SMS bandwagon

The success of SMS is often described as unexpected due to the awkward user interface: you need to punch 55 digits in order to ask your friend out for a drink. Besides this, the introduction of SMS could serve as a textbook example of how to get the bandwagon rolling. We have emphasised two features in particular. The first is low prices on text messaging relative to the mobile phone call charges for low-end tariffs. Expensive mobile voice may force price-sensitive users to use text messaging as a substitute; the fact that talk is cheap has also been used as an explanation for the low usage of text messaging in the United States. The second key feature is the high degree of interlinking with respect both to functionality and to pricing. In contrast to mobile phone calls, text message pricing does not depend on which provider the recipient subscribes to. P2P SMS interlinking quality has been high in most European markets, while P2P SMS interlinking has only recently been partly achieved in the United States. The most significant difference between Norway and the other Scandinavian countries is in Information SMS interlinking, where the Norwegian operators cooperated to achieve complete interlinking through common shortcodes, transparency, and
almost identical wholesale pricing long before the other European markets. Several other countries have now adopted the Norwegian business model for Information SMS.

It is a question whether this was a case of good luck or a conscious strategy. One striking feature is the lack of attention to text messaging in its infancy. In 1999 the providers' attention was on WAP rather than SMS in all European markets, including Norway. The fact that providers did not expect much money to be at stake could have created an environment in which it was easier to agree on a high degree of interlinking in several dimensions. Moreover, it might also be easier for the providers to share revenue with non-strategic players, such as small content providers, when they expected the total pie to be limited. WAP became a flop partly because the providers were reluctant to share revenue with the non-strategic content providers.

The role of sector-specific regulation will probably be important with respect to the providers' incentives to include flexibility and functionality that may be considered superfluous when an initial standard is designed. It will usually be costly to include more options and flexibility, and whether firms wish to spend money and effort on ex ante uncertain abilities depends on the extent to which they can capture the gains from a success ex post. The story of SMS and WAP shows that it is hard to pick killer applications ex ante: ex post, WAP has been a flop and the operators have lost a lot of money, while SMS has been a formidable success. Recently we have seen indications of a more active role for the regulator with respect to the SMS market. If this implies that the regulation regime constrains the gains from an ex post success without compensating if services turn out to be unsuccessful, it will probably give providers disincentives to invest in solving start-up problems and to incur the cost of flexibility through ex ante superfluous abilities.

A tempting general lesson is that in the early development of a new market it is important that the regulator does not interfere. Only when the market is mature, and the critical level for the bandwagon effect is reached, can revenue and competition regulation possibly increase welfare. In a dynamic setting this might, however, be problematic, since expectations of ex post constraints on revenue will reduce incentives to innovate. In the SMS market we saw that, in the absence of regulation, ex ante superfluous functionality ended up as a major successful service. This suggests that the regulator should be very careful when designing regulation regimes in bandwagon markets.
References

Economides N, Himmelberg C (1995) Critical Mass and Network Evolution in Telecommunications. In: Brock G (ed) Toward a Competitive Telecommunications Industry: Selected Papers from the 1994 Telecommunications Policy Research Conference
Economist (2002) Texting the television. 19th October
Economist (2003a) No text please, we are American. 5th April
Economist (2003b) Not just talk? 11th October
Hausman JA (1997) Valuing the Effects of Regulation on New Services in Telecommunications. Brookings Papers on Economic Activity, Microeconomics, pp 1–38
Hausman JA (2002) Mobile Telephone. In: Cave M, Majumdar S, Vogelsang I (eds) Handbook of Telecommunications Economics, vol 1, North-Holland
Northstream (2002) Den norska SMS-marknaden (The Norwegian SMS Market)
Ovum (2002) MMS and SMS: Multimedia Strategies for Mobile Messaging
PT, Norwegian Post and Telecommunication Authority (2002) Telestatistikk
Rohlfs J (2001) Bandwagon Effects in High-Technology Industries. The MIT Press
Shapiro C, Varian H (1998a) Information Rules: A Strategic Guide to the Network Economy. Harvard Business School Press, Boston, Massachusetts
Shapiro C, Varian H (1998b) Network Effects. Manuscript
How to Achieve the Goal of Broadband for All

Morten Falch1, Dan Saugstrup2, Markus Schneider
Technical University of Denmark
Abstract

The purpose of this paper is twofold. First, the paper aims to explain national experiences with broadband penetration through an analysis of drivers and barriers. Second, against this background the paper assesses the role of government intervention in achieving universal access to broadband services. The paper focuses on the identification and analysis of drivers of, and barriers to, the development of broadband for all. The purpose is to provide input for an assessment of various policy measures aimed at stimulating growth in the penetration of broadband services. The analysis builds on experiences with broadband development in four countries, which vary from market leaders (Canada, Denmark and South Korea) to average broadband penetration (Germany). The paper is based on ongoing work in WP3 of the BREAD project (Broadband in Europe for all: a multi disciplinary approach), which collects information on ongoing regional and national initiatives in Europe and around the world.
Introduction

The penetration of broadband connections has increased dramatically on a global scale during the past few years. However, the growth has been very unevenly distributed among countries. South Korea is by far the global leader in the penetration of broadband, with Canada and some Scandinavian countries in a second tier. At the other end, countries like Ireland, Luxembourg and Greece are lagging considerably behind the other OECD countries. What causes these huge national differences in the penetration of broadband? In order to answer this question, the first part of this paper outlines a number of factors considered decisive for the penetration of broadband. The second part uses these factors to explain why development has differed in a number of countries.
In this paper we will categorise the decisive factors according to three different dimensions. The first dimension distinguishes between factors affecting supply and factors affecting demand. These factors are of course interrelated: demand depends on how and on what conditions broadband services are supplied, and high-quality services offered at low cost generate more demand than poor services offered at high cost. On the other hand, a certain level of demand is necessary to stimulate the investments enabling the supply of broadband services.

The second line of division goes between content and infrastructure. Broadband networks are essential for any economy and information society: inter alia, broadband networks connect buyers with sellers, callers with receivers, and public authorities and institutions with companies and citizens. Just like railroads and highways, broadband networks are the technological means which enable people and machines to meet and interact virtually with other people and machines. However, high penetration of broadband networks is never an end in itself; it serves the overall purpose of enabling communication and two-way interaction between all groups within a society. The stage of development of broadband services is most often (also in this paper) measured by the number of connections, not by the content delivered via the broadband infrastructure. Development of content and infrastructure may stimulate each other, but for our purpose it is important to distinguish between the factors stimulating content development and those stimulating infrastructure development.

Technological, economic as well as political/cultural factors affect both supply and demand conditions for both content and infrastructure development. Technological aspects include the development of new transmission technologies and of new types of services that can be transmitted via a broadband infrastructure. Economic factors include market conditions such as the overall market size and the level of competition. Cultural/political factors include regulation and other types of policy intervention as well as differences in lifestyles. All these aspects are addressed in our third line of division, which distinguishes between technical, economic and political/cultural factors. This yields the typology depicted in Table 1.

Table 1. Typology of factors affecting penetration of broadband services

                    Supply              Demand
  Content           Technology          Technology
                    Economy             Economy
                    Culture/policy      Culture/policy
  Infrastructure    Technology          Technology
                    Economy             Economy
                    Culture/policy      Culture/policy
Not all of these 12 different categories are equally important. This paper will address most of the categories, but the primary focus will be on infrastructure supply.
Using the typology presented above, we have identified a number of parameters which we think are the most relevant in explaining national differences in the diffusion of broadband, and which can be used as a starting point for the identification of interesting policy measures.

Table 2. List of parameters affecting penetration of broadband services

Content – Supply:
- Supply of new services
- Development of new business models
- Pricing
- Number of Internet hosts
- Number of digital broadcasting channels incl. Web TV
- Legal issues (e.g. copyrights)
- Lack of harmonisation (legal)
- Standardisation

Content – Demand:
- Income level and income distribution
- Penetration of IT (e.g. PCs, Internet, mobile phones) in households and businesses
- Lifestyle
- Attitudes towards new technologies
- Penetration of broadband connections

Infrastructure – Supply:
- Existing telecom and cable networks (availability/penetration/capacity)
- Existing wireless infrastructures
- Demography of users
- Cost of capital and financial strength of operators
- Level of competition
- Ownership of competing infrastructures
- Initiatives by local communities to invest in broadband
- Market price for broadband services

Infrastructure – Demand:
- Income level and income distribution
- Penetration of IT (e.g. PCs, Internet, mobile phones) in households and businesses
- Lifestyle
- Attitudes towards new technologies
- Driving applications (what people are actually using)
These parameters are further detailed below; the section on policy intervention and regulation discusses the different types of policy parameters.

Supply of content

The content layer relates to the provision of online information services and applications which are to be transmitted over broadband networks to the receiver. Broadband enables the distribution of a host of new services that either were non-existent or were only available off-line. Convergence is an important aspect of this.
Broadband infrastructure offers a transmission platform that, due to its high capacity, can be used for the delivery of services originating from a wide range of industries. Current examples are online music platforms, Video on Demand (VoD), Voice over IP (VoIP) and web-based software applications. Important future applications are expected to be video-conferencing, broadcast multicasting and increasingly interactive content. An important driver here is the ability to develop new converging services combining features from services that used to be distributed through separate delivery channels. There are, however, a number of economic and political challenges related to this. For instance, VoIP is threatening to cannibalise the revenues of established telecom operators (incumbents and new entrants alike), and the availability of (audio-)visual content is considerably affected by intellectual property protection in the form of law and technology.

The development of new business models and pricing schemes is an important driver for both the generation of and the demand for content. One of the drivers behind the success of the Internet has been a charging mechanism where the end-user pays a distance-independent price and where most content is free. On the other hand, this model has clear limitations with regard to content generation, as users seem reluctant to accept payment for certain types of content.

Supply of content is difficult to quantify and compare. In theory the supply of content is the same everywhere, as one will be able to access the same content once the infrastructure is in place. But different languages and preferences for locally produced content imply that users may experience national differences. Relevant measures of the supply of content that can be used for international comparisons include the number of digital broadcasting channels available, the number of Internet hosts etc. But these numbers will never tell the full story and can only be used as indicators.

Supply of infrastructure

The infrastructure aspects relate to the broadband network itself. The development of supply depends on existing network facilities as well as on the level of investments. Network operators are mostly commercially oriented companies that assess their investment opportunities according to the return on investment. The viability of investments in broadband depends on the level of total costs compared to expected revenues.

The point of departure for creating an infrastructure which can offer broadband for all differs greatly between countries. First of all, there are differences in demography and geography. For instance, densely populated areas will usually be cheaper to supply with broadband than rural areas. Secondly, the quality and capacity of existing telecom networks vary from country to country. Even though most countries within the OECD area provide more or less national coverage for basic communication services, the point of departure for an upgrade towards higher bandwidth is very different. The differences in the extent of cable TV networks are even more visible.
While countries like the Netherlands offer almost universal access to cable TV networks, cable TV plays only a very limited role in some other countries (e.g. Germany). In addition to telecom and cable TV networks, other types of infrastructures provided by municipalities or power companies may be used as a basis for the provision of broadband services. Local community associations may also be a driving force in the provision of broadband, e.g. through existing cable networks or by use of WLAN.

The most successful access technologies for broadband have so far been ADSL and cable modems. Cable modems generally offer higher bandwidth and are cheaper than ADSL services. But cable networks are not as widespread as telecom networks, so cable modems are only offered to a certain segment of users, mainly in urban areas. Wireless connections such as 3G and FWA have so far had a limited penetration. 3G services currently enjoy some success in Japan and South Korea and are expected to take off in other countries as well; however, the bandwidth offered is limited and can hardly be a full substitute for a wired broadband connection. FWA is more expensive and is mainly used by business users, but it may be an attractive solution in rural areas not served by ADSL or cable. Finally, WLAN technologies may be used for providing broadband access in public spaces or in neighbourhoods. The extent to which these technologies can be applied using existing network structures affects the total cost of the investments needed to supply broadband services.

In addition to the demography of customers and the reuse of existing network infrastructures, costs also depend on financial factors such as the level of interest to be paid, which in turn depends on the financial strength of the operators and on the possibilities for using soft-funding mechanisms such as government subsidies.

The total cost of the investment is, however, only one of a number of factors driving the development of broadband facilities. Development also depends on the revenues that these facilities can be expected to generate. Expected revenues depend on the number of customers and the prices that can be charged for the delivery of broadband services. The factors driving total demand will be discussed in more detail in the section on demand for content and infrastructure. But in addition to this, the market structure – first of all the level of competition – plays an independent role in driving supply.

The level of competition in the telecom market is important for the supply of broadband, as competitive pressure pushes companies to invest despite lower returns on investment. After all, up to a certain limit, it is better to have a lower return than none at the expense of competitors. Competition includes competition among different suppliers of the same service and competition between different types of infrastructures. Competition among suppliers of ADSL services is highly dependent on the regulation of local loop unbundling (ULL), as it is often only the incumbent operator that has its own infrastructure for local access. Empirical research seems to indicate that a strict regulatory practice on interconnection, with the aim of promoting real competition, favours the penetration of new services.
Among the EU countries, a positive correlation between the level of competition and investments in telecom facilities can be documented (Henten et al. 2004). In Japan, new entrants have been offered access to existing telecom facilities such as ducts and dark fibre on very favourable terms, and this is often mentioned as one of the key factors behind the very rapid development in the penetration of broadband.

Facility-based competition, and in particular competition between networks building on different technology platforms, has turned out to be one of the most important factors driving the supply of broadband services. In particular, competition between cable networks and xDSL provided via copper-based telecom access networks has been important. The emergence of a converged digital communication market is expected to have a profound impact on competitive market mechanisms. Today, telecom operators are beginning to offer VoD, and cable networks VoIP, in addition to broadband access services. This technological and market convergence creates new infrastructure competition and provides more choice for customers (http://www.theregister.co.uk/2004/07/23/digital_homes/). For instance, IPTV services have been launched in France (MaLigne tv) and Spain (Imagenio). Thus, Internet TV can become a competitor to traditional broadcasting media. In the near future, however, pricing and the availability of content will hinder a fast uptake.

Competition among infrastructures depends on the availability of alternative infrastructures and on their ownership. If the same incumbent controls both the telecom and the cable network, this will limit competition between these two types of infrastructures.

Another source of competition in the market for delivering broadband access services comes from local communities, where customers demanding broadband access set up their own local networks. Some of the largest of these networks have been established in co-operation with municipalities, as has been the case in e.g. Germany and Denmark. Unlike network investments made by telecom or cable operators, these initiatives are customer driven. Pooling the demand of a group of customers can be a very effective tool to decrease prices by benefiting from economies of scale: rather than approaching the network operator individually, communities considerably strengthen the bargaining power of the customers. This is, however, not the only implication: such initiatives also stimulate demand and thereby the supply of broadband services.

Investments can also be stimulated through a reduction of investment risks. In the case of broadband infrastructure and services, the Internet allows for a higher degree of interactivity, which enables the investigation and measurement of demand for broadband infrastructure and services before major investments are made. In the UK, BT has set up a website where people living in rural areas can express their interest in broadband services; the goal is to measure the demand for broadband infrastructure in order to reduce the investment risk.
Similarly, in the case of broadband content (music and games), the Internet is being used to lower the investment risk by measuring demand before the investments in music and games are made (e.g. Vodes.Net). Hence, the Internet allows the investment risk for both content and infrastructure to be lowered considerably.

Apart from the potential return on investment, the degree of legal investment protection is another decisive factor for the network operator's incentives to invest in broadband network facilities. As noted above, competition seems to stimulate network expansion, but very tight telecom regulation may have the opposite effect, at least on the incumbent operator. If network operators are required by law or regulation to interconnect and to give third parties access to their networks, this will affect their investment decisions. The higher the degree of investment protection, the higher the incentives for network operators to invest in broadband. This has, however, to be weighed against the negative impact it may have on competition. As long as one operator (the incumbent) dominates the market for network facilities in the access network with a market share of around 90% or more (as in many countries), it seems that a policy in favour of more competition is necessary to stimulate investments. It should be pointed out, however, that increasing industry-wide commoditisation of network elements and modularisation of infrastructure allow easy third-party interoperability. In other words, the more industry standards emerge, the more third-party investments can complement the investments made by the network operator itself. This is of particular relevance in relation to the high level of debt of many network operators.

Supply may also be stimulated by government or EU initiatives. In an age where there is almost blind faith in market forces, any kind of state intervention has to have compelling reasons. Nevertheless, some initiatives stimulating the supply of broadband services have been taken, in particular in support of supply in disadvantaged areas. The EU telecom regulation on universal services includes only basic telecom services. However, some countries, e.g. Sweden, have implemented programmes for funding the provision of broadband services in rural areas. In addition, the EU Commission has provided funding for the creation of a high-speed research and education network through TERENA.

Indicators for supply of broadband infrastructure

When comparing indicators for the supply of broadband in various countries, it is necessary to be aware of differences in the definition of what broadband really is. The OECD defines broadband as a connection with at least 256 kbps downstream and at least 64 kbps upstream (OECD 2003b); the FCC counts high-speed lines faster than 200 kbps in at least one direction (FCC 2002); and some EU publications use 144 kbps in one direction as the limit (EU Communication Committee 2003). These differences must be considered when statistics from various sources are compared. As technologies are changing rapidly, it is difficult to base a definition of broadband on exact kbps numbers. Perhaps a broadband connection should, for the time being, be defined as an always-on connection capable of streaming music in good quality without interruptions. It may, however, be necessary to change this definition if the majority of future broadband services demand a higher capacity than such a line can deliver.
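To make the definitional differences concrete, the following minimal Python sketch (our own illustration, not taken from any of the cited sources) classifies one and the same line under the three thresholds quoted above:

    def is_broadband(down_kbps: float, up_kbps: float, definition: str) -> bool:
        """Return True if a line counts as broadband under the given definition."""
        if definition == "OECD":  # at least 256 kbps down and 64 kbps up (OECD 2003b)
            return down_kbps >= 256 and up_kbps >= 64
        if definition == "FCC":   # faster than 200 kbps in at least one direction (FCC 2002)
            return down_kbps > 200 or up_kbps > 200
        if definition == "EU":    # 144 kbps in one direction (EU Communication Committee 2003)
            return down_kbps >= 144 or up_kbps >= 144
        raise ValueError(f"unknown definition: {definition}")

    # A 192/64 kbps line qualifies under the EU threshold only:
    for d in ("OECD", "FCC", "EU"):
        print(d, is_broadband(192, 64, d))

The same connection is thus counted as broadband in one statistic and excluded from another, which is exactly why cross-country comparisons must state the definition used.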
Broadband availability refers to the number of end users who are within reach of broadband-capable access points to the core broadband network. In the case of DSL, the availability figure refers to the number of upgraded local exchanges. One can refer to this figure as "theoretical broadband availability" or "supply-side availability". This figure is easy to gather (the number of upgraded switches), but it gives an incomplete picture, as it does not tell anything about the capacity within the network. Lack of capacity may imply that only a limited number of users are able to connect, or that users in reality will only have a bandwidth available which cannot be termed broadband. This "practical availability" is also very important, especially in relation to end users and their broadband experiences. In addition, an upgrade of the local exchanges may not be sufficient for reaching all customers. xDSL is distance dependent: the further away users live from the local exchange, the slower the connection. Therefore broadband may not be available to the more distant customers before additional investments in the access network are implemented.

Demand for content and infrastructure services

As the demand for broadband infrastructure is driven by the demand for content, the drivers for services and infrastructure are highly interrelated, although the indicators for the level of demand are different. The demand for content depends on the economy, socio-cultural factors and the penetration of broadband services (which again depends on both supply and demand). The demand for broadband connections depends on the same economic and socio-cultural factors and on the availability of relevant content.

Economic factors

The most important economic factor is income compared to the price of a broadband connection. Price is first of all a factor related to supply conditions. Income is certainly an important factor for the penetration of broadband services: all countries ranking high in terms of broadband penetration are high-income countries. However, income is not the only decisive variable. South Korea has a much higher penetration than countries with a similar or even higher GDP per capita, and Germany has a fairly low penetration although its GDP per capita is among the highest. Another economic factor of importance is the distribution of income, as the number of households that can afford broadband depends on the distribution of income as well as on its level.

It must, however, not be forgotten that broadband is demanded by businesses as well as by households. Although households dominate the demand in the most advanced countries, the use of broadband in businesses is important in the less advanced economies.
In addition, business applications may also stimulate demand from the households. The structure of the economy therefore matters for overall demand: an economy dominated by informational activities must be expected to generate more demand for broadband services than an economy based on agricultural production.

Socio-cultural factors

The most obvious socio-cultural factor can be termed the e-readiness of the society. E-readiness is related to historical factors as well as to competences disseminated through the educational system. A number of international reports have tried to measure e-readiness (Dutta and Jain 2002). This is, however, mainly done by use of statistical indicators, which cannot really be termed socio-cultural and which are included in the list of parameters. One possible indicator is the penetration of PCs. Although this factor also depends on income, it reflects the ability and willingness to make use of ICT applications such as broadband services.

Another socio-cultural factor is time spent at home, which is often mentioned as an explanatory factor with regard to mobile broadband services. More time is spent on commuting and away from home in Asia than in Europe. Combined with a low penetration of PCs, this creates a socio-cultural environment where mobile solutions often are more convenient than fixed-line solutions. Socio-cultural factors also seem to have been one of the factors behind the high penetration in South Korea, as Koreans spend a larger share of their income on broadband communication than users in any other country.
Policy intervention and regulation

Public policy may be an important factor in stimulating both demand for and supply of broadband services. Most governments are well aware of the importance of the development of broadband communication for economic growth, employment and social welfare in general. They recognise that any information society fundamentally requires a sophisticated and reliable broadband infrastructure as the transmission link enabling citizens to access an ever growing repertoire of e-services and e-goods. Most countries have therefore defined policies for the promotion of the information society. These include, among others, industrial policy, education, e-government and telecom regulation. In Europe, the EU has launched the eEurope programme, which is part of the Lisbon strategy aiming at improving growth and employment in Europe (European Commission 2002).

Government intervention can aim to stimulate one or more of the four categories outlined in Table 1 (supply or demand side, content or infrastructure). The remedies can be categorised as facilitation, regulation or direct intervention.

Policy initiatives vary from country to country. Some countries have focused on the manufacturing of ICT equipment, while others have put more emphasis on the application of ICT technologies. They also differ in the prioritisation and intensity of governmental support.
Some countries provide financial support for projects stimulating the use or production of ICT, while others focus on the creation of a competitive environment, e.g. through liberalisation of the telecom sector. The remedies therefore range from direct subsidies, access (price) regulation and tax incentives to less far-reaching facilitation measures such as increasing transparency in the market place. While it is – at present – generally accepted that market forces alone do not provide optimal results in the communications sector, too much governmental intervention is also counterproductive. The effectiveness of these policies depends on the context, and this paper analyses how policy measures have contributed to the penetration of broadband in the case countries.

Facilitation

Facilitation measures are the mildest form of market interference. They lack the formal features of regulation (decisions and orders) and do not have legal consequences for third parties. In most cases, regulators or other governmental bodies merely act as market observers without any regulatory powers. The objective of facilitation is to ensure a good environment in the market for broadband services without direct intervention in the market.

One particular type of this kind of industrial policy is 'guide posting', where a public institution takes the lead in creating a common vision of future developments. Many national plans for the information society, e.g. the e-Japan plan, play this role. Common visions may include common standards. The national telecom authority or another governmental body may also play an important role in the development of common, non-mandated standards. Although much of the standardisation work is done at the international level, there are still important areas, for instance digital signatures and EDIFACT messages, where the development of national standards is necessary.

Regulatory bodies may also play an active role in increasing market transparency, inter alia by providing information on prices, on the availability of products and on consumer rights. In relation to broadband access, NRAs can offer price comparisons, product descriptions, geographical availability and the like. In many cases operators are required by law to provide this information anyway. As long as sensitive information is not included, increased market transparency can be tremendously effective in improving the competitive situation and thereby stimulating investments in broadband facilities (a minimal sketch of such a look-up service is given below).

Another form of facilitation relates to the settlement of disputes between companies. The competence to make a binding settlement depends on the willingness of the companies to abide by the decisions taken by the authority. This is different from traditional regulation, where companies are subject to the decisions regardless of their consent.

The public sector can also facilitate development by upgrading competences and the readiness to take up new technologies, for instance through the commissioning of extensive training programmes. Training and education are important both for demand and supply, as both users and producers of broadband services can benefit from them.
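As an illustration of the kind of transparency service described above, the following Python sketch implements a toy postal-code look-up of broadband offers. All operator names, postal codes and prices are invented for illustration and do not reproduce any actual NRA service:

    # Hypothetical registry mapping postal codes to broadband offers reported
    # by operators; all entries below are invented.
    OFFERS_BY_POSTCODE = {
        "2800": [("Operator A", "ADSL 512/128 kbps", 30.0),
                 ("Operator B", "Cable 2048/256 kbps", 35.0)],
        "8000": [("Operator A", "ADSL 1024/256 kbps", 32.0)],
    }

    def offers_for(postcode: str):
        """Return the offers registered for a postal code, cheapest first."""
        return sorted(OFFERS_BY_POSTCODE.get(postcode, []), key=lambda offer: offer[2])

    for operator, product, price in offers_for("2800"):
        print(f"{operator}: {product} at {price} EUR/month")

The point of such a service is purely informational: it binds nobody, yet it lowers search costs for consumers and sharpens price competition among operators.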
The lack of formality and the inability to bind third parties distinguish facilitation from regulation. For instance, when an NRA decides to provide an information service which allows users to search for broadband access services by typing in their postal code, the NRA does not adopt a decision or an order. Facilitation measures serve a very important function: they provide market information at no cost to the individual. Indeed, it is very cost-effective if an NRA uses the information provided to it by communication companies to increase market transparency – one of the core assumptions of perfect competition.

Regulation

Regulation is a more direct form of market interference. Formally, regulation is characterised by its legal form and, materially, by its legal consequences. Generally, though nomenclature differs from country to country, a distinction can be made between "decisions", binding only the parties to the current proceedings, and "rules", which generally bind third parties, provided the subject fulfils the conditions set out in the rule. Regulation includes infrastructure as well as content regulation.

In almost every country, regulation has been redesigned in order to promote competition in the telecom sector. Regulatory measures include obligations on interconnection and local loop unbundling as well as other measures supporting new entrants in their competition with the incumbent operators. Although competition is often seen as the most important objective of regulation, consumer protection, universal service and innovation are also addressed. Regulation of content deals with a number of new problems created by digitalisation and the convergence of services. These include IPR, consumer protection with regard to e-commerce and e-payment, etc.

An important aspect of regulation is the way it is implemented by the NRA. Rules and decisions must be transparent, fair and free from political intervention in order to attract new market players. Therefore the competence and independence of the regulator are important parameters.

Direct intervention

Direct market intervention is the strongest degree of governmental market interference. Essentially, market intervention is about actively providing the services, as opposed to merely providing market information or regulating the market. Direct intervention will often take the form of public funding of either infrastructure or content production.

This strongest type of market interference conflicts in many respects with a liberal approach, where development is mainly left to the free market. It is argued that private companies are in a better position to provide the services, and that economies of scale are "lost" if the state provides its own infrastructure.
Market intervention may also negatively affect private incentives to invest in broadband access points. However, the use of public sector activities to stimulate either demand or supply does not conflict with this approach. Many government plans, including eEurope and e-Japan, stress that it is important that the private sector takes the lead. The reality is, however, that there are several examples of public funding of infrastructure projects providing broadband. As mentioned earlier, some governments have provided funding for the provision of broadband services in certain disadvantaged areas (for instance in Sweden and Canada) or for specific purposes (e.g. research networks in Canada and in Europe). Support is also provided on the demand side, for instance through tax exemptions for broadband users (as in Denmark) or funding of broadband access for certain types of organisations; the latter is done in the US, where grants are provided to, for instance, community-oriented institutions. The public sector can also stimulate demand by providing broadband connections to its own institutions such as schools, hospitals, ministries etc.

Support for content development is most often given through the development of content supporting public sector activities. E-government initiatives can be seen as a sort of direct intervention in content production, as the public sector produces its own content, which can stimulate demand for both content and infrastructure. A related kind of support is the financing of educational content, as is done for instance in South Korea.
Denmark

Supply factors

Broadband services are available to 97% of the population. Unlike the EU regulatory framework, the Danish universal service obligation includes ISDN and 2 Mb lines. It has been discussed whether to include broadband services in the universal service obligation imposed on the incumbent operator, but when it turned out that broadband was already available to the vast majority of the population, this was considered an unnecessary regulatory intervention.

Broadband via DSL is offered by a number of operators. However, the incumbent operator TDC has a market share of 79%, and on top of this the other operators depend to a large extent on the TDC access network to reach their customers, either through raw copper or bit-stream access agreements. When DSL services were introduced in Denmark, the three operators TDC, Cybercity and Tiscali had equal market shares. But since 2000, TDC has gained the majority of the new (mainly residential) customers. The competitors have accused TDC of unfair competition, as TDC has demanded large fees on top of the interconnection charge for access to its network. But this has not yet led to any intervention from the IT and Telecom Agency or the Competition Board.

Competition in cable networks is also limited, with two major operators, TDC and Telia.
It is clearly Telia that has taken the lead in the introduction of cable services in Denmark; TDC was very reluctant to enter this market, as it preferred to offer ISDN and DSL services to its customers.

Prices for broadband have remained rather high in Denmark compared with the bandwidth offered. This has stimulated the creation of a number of alternative providers, such as neighbourhood organisations that have set up their own networks based either on existing cable infrastructures or on WLAN. Power companies are also active in this area. They are rolling out optical fibre to a large number of households and will thereby be able to offer an alternative to the access network of TDC.

Although the Danish telecom market is considered to be one of the most liberal and competitive markets, competition in the market for broadband services still needs to be developed. This is probably one of the reasons for the rather slow take-up of high-bandwidth DSL services. TDC has, through efficient marketing, been able to achieve a high penetration of DSL services, but it lacks the incentive to increase the bandwidth offered: as long as the competitive pressure is limited, offering higher bandwidth adds to the costs but does not create more customers.

Demand factors

The Danish market has, like the markets in the other Nordic countries, a high penetration of most kinds of telecom services. This has been explained by the fact that the major share of the population is able to afford these services. In addition, Danish consumers and enterprises are among the fastest to take up new technology. Denmark has one of the highest penetration rates of fixed phone lines, although the penetration rate has fallen from 72 lines per 100 inhabitants in 2001 to about 67 in 2003 (National IT and Telekom Agency, telecom statistics, various years: http://www.itst.dk). This is partly due to substitution by mobile phones, as the penetration rate of mobile phones increased from 74 to 89 within the same period. The Nordic countries were among the first to introduce mobile telephones and used to have the highest penetration. However, their lead has decreased in the past couple of years, and a number of countries both inside and outside Europe have reached the same high level of penetration.

Also with regard to broadband and Internet services, Denmark has been among the leading countries in Europe, and it still maintains one of the highest penetration rates in Internet access as well as in the penetration of ADSL and cable modems. The aggregate penetration of broadband services was 14.1 connections per 100 inhabitants in 2003. Two thirds of the connections were based on ADSL technology, about 25% on cable modems and about 5% on fibre to the home (National IT and Telekom Agency n.d.). It should, however, be noted that more than half of the connections supply a bandwidth below 144 kbps downstream, which is below some definitions of broadband, and certainly far below what is offered in e.g. South Korea.
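Converting these shares into per-inhabitant figures (our own arithmetic on the rounded numbers just quoted) makes the platform mix easier to compare across countries:

    # 2003 Danish figures quoted above: 14.1 broadband connections per 100
    # inhabitants, split by access platform (shares are approximate).
    total_per_100 = 14.1
    shares = {"ADSL": 2 / 3, "Cable modem": 0.25, "Fibre to the home": 0.05}
    for platform, share in shares.items():
        print(f"{platform}: ~{total_per_100 * share:.1f} per 100 inhabitants")
    # ADSL ~9.4, cable modem ~3.5, FTTH ~0.7 connections per 100 inhabitants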
Several comparisons position Denmark as an advanced market for ICT products, in particular in the mobile area. According to a benchmarking analysis measuring 26 parameters within infrastructure, applications and market structure, Denmark is the most advanced ICT market next to Hong Kong and ahead of the other Scandinavian and European countries (ITU 2002). In a similar benchmark on 'e-readiness' made by INSEAD, Denmark ranks as number eight, after both Sweden and the UK (Dutta and Jain 2002).

Seen from the demand side, the Danish point of departure for being among the leading countries in terms of broadband penetration is rather good. The Danish market is usually very quick to take up new technologies, and a high and evenly distributed income implies that a high penetration can be obtained for most consumer services. In addition to this, Denmark already has a high penetration of PCs and Internet connections. The Danish government has also been very active in developing e-government services, which has contributed to the demand.

Policy initiatives

Denmark has since the mid-1990s followed a deliberate policy of stimulating competition in the telecom market, and the Danish market is considered to be one of the most competitive within the EU, with fierce competition in particular in the market for mobile services. Denmark was also among the first countries to demand unbundling of the local loop, and in contrast to most other EU countries, unbundling is also demanded for optical networks.

Denmark has introduced a special taxation scheme which enables employers to offer PCs as well as broadband connections to their employees as a tax-free benefit. Considering the high levels of income tax in Denmark, this implies that the tax reduction in reality pays more than 50% of the costs (with a marginal tax rate of, say, 60%, a connection paid for through a gross-salary sacrifice costs the employee only 40% of its price). This scheme has become very popular, and many companies provide this opportunity to all of their employees as part of their salary.
Germany

Supply factors

Broadband penetration rates in Germany are in the middle range in Europe. A crucial factor affecting the broadband penetration rate, and pricing in particular, is the degree of infrastructure competition. The dominant position of Deutsche Telekom and the lack of alternative broadband infrastructure have had a negative impact on penetration rates. Germany relies heavily on one technology (xDSL) provided by the incumbent telecom operator, and alternative broadband infrastructures are virtually non-existent. In Germany, the cable networks – which in most countries are the only serious infrastructure competitors – require substantial investments before broadband services can be offered.
Moreover, the current ownership structure makes it questionable whether long-term investments will be made in the near future. Kabel Deutschland (KDG) recently purchased three regional cable networks and now owns most of the cable networks in Germany; the German competition authorities are currently investigating the acquisition on the grounds that it creates or strengthens a dominant position. It has been reported that KDG wants to invest 500 million Euro in its cable networks. However, this sum is considered to be only a fraction of the investment needed. Indeed, given the ownership structure of KDG – three financial institutions (ApaX Partners, Providence Equity Partners and Goldman Sachs Partners) – one might question the owners' incentives to make long-term investments such as upgrading cable networks on a national basis.

Only four of Deutsche Telekom's competitors are active on a national basis, and even their services are not available throughout the whole country. RegTP reports that in 2003 the market share of the incumbent's competitors was 11% (up from 8% in 2002); in some very limited geographical markets, the market shares of competitors reach up to 40% (Langsam zur Datenautobahn, 26.07.2004: http://www.rundschau-online.de/servlet/OriginalContentServer?pagename=ksta/page&atype=ksArtikel&aid=1086537571613).

Broadband prices are, according to recent OECD figures, quite expensive in terms of cost per MB compared to South Korea and Canada. In addition, the highest transmission speed is 2.3 Mbps, and most broadband products have download limits. PrimaCom, for instance, offers free broadband Internet access but charges 0.1 Euro per MB (OECD 2003a); at that rate, a single 700 MB download costs 70 Euro.

Another important factor affecting competition and broadband penetration rates is the convergence of services. In this respect, Kabel BW – soon to be a subsidiary of KDG – has started offering VoIP, while Deutsche Telekom offers a video-on-demand service called Vision (to watch a movie on Vision, one needs a T-DSL account and the Microsoft Media Player, including its digital rights management system, installed). One can expect that this competition will considerably affect the prices of services and, indirectly, broadband penetration rates. Prices for the VoIP services offered by cable companies and for the video on demand offered by Deutsche Telekom are expected to continue to fall. Lower prices for services and applications, coupled with increased interoperability between different networks, could very well positively influence end users' incentives to sign up for broadband services.

Demand factors

According to the OECD, broadband penetration was 5.6% (around 4.6 million lines) in December 2003, and 96% of all broadband lines are xDSL.
Alternative infrastructures are virtually unavailable: per 1,000 inhabitants there is only one subscriber to cable broadband and one to other infrastructures such as satellite, FTTH and PLC (http://www.oecd.org/document/31/0,2340,en_2649_34225_32248351_1_1_1_1,00.html). The German NRA (RegTP) reports 450,000 subscribers for satellite and 8,000 for powerline (Annual report 2003 of the Federal Network Agency: http://www.bundesnetzagentur.de/media/archive/215.pdf, pp 21–22).

Policy initiatives

The D21 initiative is Germany's largest public-private partnership (http://www.initiatived21.de). It is an economic initiative with almost 300 members from all spheres of business, politics and society, whose objective is to foster the change to an information society in Germany. There is close co-operation between business, politics and societal organisations in almost 50 projects. Five ministries are involved, and there are four subject areas, each with a task force: e-government/IT security, e-health, growth/competitiveness, and education/qualification/equality of chances. In these four areas, around 50 projects are currently being implemented; concrete examples are IT training for teachers, Internet access for schools, IT ambassadors and the Girls' Day.

Today, public policies do not primarily target the supply of infrastructure but rather aim at stimulating demand, and the "human factor" in particular. Many of the D21 initiatives specifically target end users, raising their awareness of new technologies and showing their advantages. So far, more than 120,000 teachers have been trained in IT, 100,000 girls attended the Girls' Day in 2003, and there are more than 1,700 honorary, unsalaried IT ambassadors whom schools can contact to introduce IT to pupils.
South Korea

Korea has made major strides in information and communication technologies over the past decades. From being a country with almost no ICT access 30-40 years ago, Korea has become one of the leading countries regarding ICT access.

Supply factors

South Korea is today the leading country with regard to the development of broadband. It has by far the highest penetration, and the average bandwidth offered (4 Mb) is also much higher than in Europe. The main reasons for the South Korean success within ICT, and particularly in broadband and mobile communication, are believed to be rooted in the level of competition. A high level of competition has resulted in very low prices and high access speeds.
Prices on broadband services in South Korea are among the lowest in the world, and the available broadband speeds are also in a category of their own, ranging from baseline speeds of 1-2 Mbps to premium connection speeds of 8-20 Mbps – all at very affordable costs (25-40 USD per month) (OECD 2003a). A favourable geography and demography have lowered the infrastructure costs, which are reported to be as low as 14% of total costs. This has both enabled low prices for end users and favoured facility-based competition. Even more important in this respect is the fact that the block wiring in apartment complexes is owned by the landlords. This enables new entrants to bypass the incumbent without investing in their own local loop facilities.

In addition, a major part of all mobile subscribers use their mobile phones for access to the Internet, i.e. wireless Internet. Hotspot and WLAN services are also widespread in South Korea: NeSpot (a KT company) offers WLAN services in all major cities at very reasonable tariffs (approximately €30 per month). In May 2004, NeSpot had deployed more than 11,000 access points and expected to deploy around 25,000 access points by the end of 2004 (presented at the Broadband World Forum, Seoul, May 2004, by Mr. Myung-Sung Lee, SK Telecom).

Demand factors

The growth of Internet usage in South Korea has been remarkable, but most of all South Korea is known for its remarkable uptake of broadband Internet access. The number of Internet subscribers increased from approximately 140,000 in 1994 to around 30 million by the end of 2003, corresponding to a 65% penetration rate; broadband penetration alone accounts for almost 25%. Looking purely at broadband, 62% of connections are DSL, 36% cable modem and 2% other platforms (http://www.oecd.org/dataoecd/58/17/32143101.pdf).

Internet cafés, the so-called PC bangs, have stimulated the demand for broadband services, in particular the demand for online games. Here, people can try out and learn to use broadband services without paying a monthly subscription fee. The PC bangs also provide a critical mass for content providers. Prices for broadband services measured per Mb are among the lowest in the world, but Koreans also seem willing to pay a larger share of their income to have high bandwidth. A vast amount of services is available to users: not only entertainment services like online games (seen as a killer application), but also a wide range of educational services. Research, online gaming and e-mail are by far the three most used activities on the Internet; with regard to VoIP, only 5% indicated that they use the Internet for Internet telephony (2004 Statistical Report on Korea's Internet, http://www.nac.or.kr).
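The compound annual growth rate implied by these subscriber figures can be computed directly (our own arithmetic on the rounded numbers quoted above):

    # Roughly 140,000 Internet subscribers in 1994 versus about 30 million
    # at the end of 2003 implies the following average annual growth rate.
    subscribers_1994 = 140_000
    subscribers_2003 = 30_000_000
    years = 2003 - 1994
    cagr = (subscribers_2003 / subscribers_1994) ** (1 / years) - 1
    print(f"Average annual growth 1994-2003: {cagr:.0%}")  # roughly 82% per year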
Policy initiatives

The government has played an active role in the stimulation of broadband. The Korean government early on prepared a comprehensive plan for the future Korean information structure, which aimed at deploying high-speed and high-capacity networks, mainly through market competition and private sector investments, but also with governmental incentives. The government started to build a nation-wide backbone for broadband services already in the 1980s. This has been followed by a number of infrastructure projects serving the public sector and universities, and by provision of the basic infrastructure for connecting the private sector (e.g. the PC bangs) as well as private homes. Support is also given to the provision of broadband in less favoured regions.

Government policy has also stimulated investments in the local loop and facility-based competition through the cyber building certificate system introduced in 1997. Through this system, buildings are ranked according to their capacity to handle high-speed Internet and, since 2001, the Ministry of Construction and Transportation has demanded that such information and communication networks be installed in all new large apartment complexes.

The Korean government's original plan was to provide broadband networks to all households in the form of FTTH by 2015. In 2001 the plan was revised, now aiming at providing broadband Internet services to 13.5 million subscribers in 2005, with government investments amounting to $1.5 billion. In the densely populated areas, LAN Ethernet, VDSL, ADSL and CATV will be used to provide transmission speeds of 10-100 Mbps (20 Mbps on average). The government has also interfered indirectly in price setting by stating that prices should be no more than $30 per month in order to be affordable (2 Mb connections cost $25 per month).

In 2002 the Korean government further promoted and facilitated Internet telephony by establishing a VoIP Regulation Improvement Task Force, whose main purpose was to resolve problems related to Internet telephony and to help the service become firmly established. As one of the results, the Korean government decided to create a new legal clause designating VoIP as a common carrier service as early as September 2004, allowing VoIP service providers to register as either a common carrier or a special category telecommunication operator, provided that the conditions for call quality and user protection are in line with the general conditions (NAC IT e-Newsletter Vol. 4 No. 3, 31 May 2004).

Demand has been stimulated through extensive educational programmes in IT involving all tiers of society, including housewives and residents in local communities. In total, 14 million out of a population of 50 million have received training over three years. The government has also been instrumental in developing content, as it supplies high-quality educational content on the web.
Canada

With a penetration of broadband services of 14.8% in December 2003, Canada is among the leading countries with regard to access to broadband. In 2003, for the first time, there were more high-speed Internet households (28%) than households with dial-up subscriptions (24%) (2003 annual report of the Canadian NRA: http://www.crtc.gc.ca).

Supply factors

Cable has been an important factor in the supply of broadband in the Canadian market; about half of all broadband connections are based on cable. In 1998 half of the Internet connections were provided by new entrants, but as ever more access lines are upgraded to broadband, their market share has been falling constantly. Only cable companies have been able to compete with the incumbent telecom operators in this market.

Demographics is a key factor affecting the supply of broadband services. The investments needed to connect rural areas are considerably higher than those needed to connect urban areas. Moreover, demand in rural areas may be expected to be lower than in urban areas, as people there have less experience with computer applications. Universal provision of broadband services at affordable prices is therefore much more difficult in sparsely populated countries than in densely populated ones.

Canada is the second biggest country in the world, with a size of almost ten million sq km and around 31 million inhabitants (or three inhabitants per sq km). The fact that Canada has achieved the second highest broadband penetration rate would suggest that Canada could serve as an example that it is possible to ensure broadband for all, also in large countries with sparse populations. However, 13 million people – around 40% of the total population – live in an area of 33,000 sq km. This is important to keep in mind, since the costs of connecting these 33,000 sq km are very low compared to the costs of connecting the remaining roughly 99.7% of the country. Today only 28% of all Canadian communities have broadband access (http://broadband.gc.ca), but these 28% cover 80% of the total population. In other words, Canada faces a difficult challenge if it seeks to maintain its leading position in terms of broadband penetration in the years ahead: the costs of connecting the remaining 72% of the communities (equivalent to 20% of the population), living in unconnected areas which make up about 99.7% of the land mass, are much higher than the investments made for the most profitable 28% of the communities, which cover only about 0.3% of the total area.
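A quick back-of-the-envelope check (our own arithmetic on the rounded figures quoted above) makes this cost asymmetry explicit:

    # Rounded figures quoted above for Canada.
    total_area_km2 = 10_000_000
    total_pop = 31_000_000
    core_area_km2 = 33_000          # area holding the densest settlements
    core_pop = 13_000_000

    print(f"Core share of population: {core_pop / total_pop:.0%}")            # ~42%
    print(f"Core share of land area:  {core_area_km2 / total_area_km2:.2%}")  # ~0.33%
    print(f"Density in core: {core_pop / core_area_km2:.0f} per sq km")       # ~394
    rest_density = (total_pop - core_pop) / (total_area_km2 - core_area_km2)
    print(f"Density elsewhere: {rest_density:.1f} per sq km")                 # ~1.8

Roughly 40% of the population thus lives on about a third of a percent of the land, at a density more than 200 times that of the rest of the country.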
Demand factors

Provision of triple-play services, and in particular provision of VoIP and video on demand, is expected to be decisive for the demand for broadband. Canadian telecom and cable operators offer "triple-play services" (Internet, telecommunication and broadcasting services) (http://www.cedmagazine.com/ced/2001/0901/09e.htm, 20 September 2004). In addition, (global) Internet-based companies such as Skype and Apple's iTunes are new entrants and offer their services in competition with the incumbents.

In Canada, VoIP is offered by a variety of companies. In addition to the incumbent Bell Canada, Sprint, Vonage and Primus offer local phone services using (partly) IP technology. This increases competition in the local phone market, which has traditionally been dominated by the incumbents Bell Canada and Telus (http://www.nytimes.com, Canada's Phone Giants Face Internet Threat, 5 May 2004). Vonage and Primus rely on VoIP only (http://www.canoe.ca, Tough call for dialers, 12 May 2004).

The price difference between VoIP and traditional PSTN communication services is one of the decisive supply factors shaping the demand for VoIP as well as the demand for broadband connections offering this service (CRTC 2003). If the pricing of VoIP is sufficiently low to offset minor losses in quality and usage convenience, VoIP might be the application triggering people to sign up for broadband access. Primus offers its service for around 16 CAN$ per month and charges 140 CAN$ for hardware. Vonage has a monthly fee of 20 CAN$ and a one-time installation fee of 40 CAN$, but gives the hardware away for free. In addition to these charges, the cost of a broadband Internet connection must be added. In comparison, Bell Canada charges 25 CAN$ in line rental and Sprint around 30 CAN$. Value-added services (e.g. voicemail, caller ID) are provided by all operators (ibid.). A rough first-year comparison of these offers is sketched below.

Exact subscriber numbers for VoIP are difficult to gather; recent figures speak of 15,000 paying subscribers in Canada (http://www.canada.com, New tech threat to telcos, 18 September 2004), and in 2004 only 23% of all Canadians were aware of VoIP (http://andyabramson.blogs.com, 67% of Canadians don't know about VoIP, 11 June 2004). The national regulator (CRTC) has taken the preliminary view that Internet phone companies should be regulated in the same way as traditional telecom operators, because both types of operators offer similar functionality: in the CRTC's preliminary view, technological neutrality requires VoIP operators to be subject to the existing regulatory framework. The public consultation process is still ongoing, so the position of the CRTC is not final (http://www.crtc.gc.ca/PartVII/eng/2004/8663/c12_200402892.htm).
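The first-year cost of the four offers mentioned above can be compared directly (our own arithmetic, using only the prices quoted and ignoring usage charges and the separate cost of the required broadband line):

    # Monthly fees and one-off charges quoted above, in CAN$.
    offers = {
        "Primus (VoIP)": (16, 140),      # hardware charge
        "Vonage (VoIP)": (20, 40),       # installation fee, free hardware
        "Bell Canada (PSTN)": (25, 0),   # line rental only
        "Sprint (PSTN)": (30, 0),
    }
    for name, (monthly, one_off) in offers.items():
        print(f"{name}: {one_off + 12 * monthly} CAN$ in year one")
    # Primus 332, Vonage 280, Bell Canada 300, Sprint 360 CAN$

Note that for the VoIP offers the cost of the broadband connection itself comes on top, so the headline price advantage is smaller than it first appears.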
Another application expected to drive demand for broadband is Video on Demand (VoD) and the online distribution of entertainment content (e.g. iTunes). In this respect, the Internet allows for the distribution of digital entertainment content over broadband infrastructure. Costs have been considerable but are falling steadily (http://www.point-topic.com, Video on demand, 10 May 2004). The incumbent telecom carrier Bell Canada is the largest operator of digital TV (ExpressVu), but it does not offer the service over its broadband network (http://www.hollywoodreporter.com, Bell Canada: A winning example of industry convergence, 5 September 2004). Telus applied for a VoD license in 2003 but has not yet introduced the service. Currently, VoD over ADSL is offered by only a few companies in Canada. For instance, SaskTel in the province of Saskatchewan launched VoD in October 2003 and was reported to have 14,000 subscribers in May 2004. Moreover, Aliant in Eastern Canada is offering its TV on PC service (http://www.point-topic.com, Video on demand, 10 May 2004). In June 2004, the incumbent Canadian telecom carrier Bell applied for a license to deliver cable-style television over its DSL lines. While the CRTC has allowed the phone companies SaskTel and Manitoba Telecom Services to offer cable-style television services, it noted that the dominant position of Bell in the phone segment requires special attention (http://www.friends.ca, Cable, Telco lines blur, by Mathew Ingram, 9 June 2004). No firm decision has been made yet.

Policy initiatives

Universal coverage of broadband services is a major challenge for a country the size of Canada. A number of policy initiatives have been taken to extend the coverage of broadband services to less populated areas. Satellite technology and the bundling of demand are currently co-ordinated and fostered in order to lower the costs of broadband connection in rural areas. Generally, one can distinguish between infrastructure support and demand aggregation measures. Canada is very active in implementing – at federal, provincial, territorial and municipal level – both types of programmes, all of which fall within the categories of infrastructure support and demand aggregation (CRTC 2003).

Alberta SuperNet will link 4,700 government offices, schools and health facilities in 422 communities across the province; this corresponds to connecting around 80% of the population. There are two areas – the Base area and the Extended area – which are to be connected seamlessly. The Base area consists of 27 larger communities, and Bell West is investing 102 million $ in the roll-out. Once this roll-out is completed, Bell West will own the Base area network. The smaller 395 communities are in the Extended area. The government will invest up to 193 million $ in the project, and the Government of Alberta will own this part of the infrastructure. AxiaSuperNet Ltd. manages the whole network for a period of 10 years, and the contract can be renewed after this period. In the Extended area, ISPs will also be able to connect to Alberta SuperNet. Alberta SuperNet connects schools, but not individuals, and in the Base area, bandwidth leasing for ISPs will not be possible.
24 http://www.point-topic.com (Video on demand – 10. May 2004).
25 http://www.hollywoodreporter.com (Bell Canada: A winning example of industry convergence – 5. September 2004).
26 http://www.point-topic.com (Video on demand – 10. May 2004).
27 http://www.friends.ca (Cable, Telco lines blur by Mathew Ingram – 9. June 2004).
Another provincial program is being implemented in the province of Saskatchewan. Saskatchewan's CommunityNet can be classified as an infrastructure support model. The government has committed over 70.9 million $ to the construction of a broadband network, which will link 1500 educational institutions, health care facilities and other public institutions in 366 communities.28 Two public sector organisations owned by the province, SaskTel and the Saskatchewan Communications Network, provide infrastructure and services. SaskTel is able to expand its network to businesses and residential areas in smaller communities with links to CommunityNet. In 2003, 74% of the population of Saskatchewan was reached; the goal is to reach 95% of the population. Another provincial program – Villages branchés du Québec – falls in the category of demand aggregation. The programme aims to use the existing RISQ network in connection with – previously unconnected – local and regional facilities. The main idea is to aggregate demand and use public institutions as demand initiators, allowing small businesses and residential users to "add their own demand". The budget for the project is 75 million $. The funding program resembles the BRAND program but specifically targets educational and municipal institutions. Other similar programs are being implemented in Manitoba, Ontario, British Columbia, New Brunswick, Prince Edward Island, Nova Scotia, and Newfoundland and Labrador.
Conclusion

This paper aims to analyse the drivers of and inhibitors to the penetration of broadband services at the national level. This question is important as broadband infrastructure is the very foundation of any information society. Broadband is not only about access to music and movies, but about access to information in general. Broadband infrastructure and services are more than just economic factors and have a far wider impact on society. Hand in hand with this political dimension of broadband, the degree of market intervention by governments is of utmost importance. This paper attempts to identify the most important factors affecting broadband penetration and thereby to create a framework for the identification of policy measures that can stimulate growth in this area. We have made a distinction between factors affecting content and infrastructure, and between factors affecting demand and supply. Some factors cannot be influenced by governments (macro-economic environment, demographics), others can only be influenced in the long run (e.g. educating people to use broadband services), and some can easily be influenced (e.g. the setting of interconnection and access regulations by the NRA). Apart from the question of which factors can be influenced, the main issue is how to influence those factors positively and whether this should be done centrally, locally or left to the market.
28 More info can be found on http://www.communitynet.ca
The countries analysed in this paper are all among the countries leading the development of broadband services, although Germany lags somewhat behind the others. All are high-income countries, and income seems to be an important factor in explaining national differences. However, the level of income is not decisive: Germany has a higher per capita income than South Korea, but a substantially lower penetration of broadband. The role of geography is difficult to assess. Canada is the only country analysed in this paper with a widely dispersed population. But a large share of the population is concentrated in a few high-density areas, and broadband is not available in most of the remote areas. However, if Canada is to remain among the leading countries with regard to broadband penetration, it will become necessary to cover these areas as well. Competition seems to be an important parameter. Competition can take place at two different levels: between different types of infrastructures, and between different operators using the same or the same types of infrastructure. At present, competition between infrastructures takes place mainly between cable modem and DSL services. Here it seems to be important whether the incumbent telecom operator controls the cable infrastructure: the development in both Germany and Denmark indicates that cross-ownership of infrastructures has a negative impact on the development. Competition between companies using the same or the same type of infrastructure is more developed in Korea and Canada than in Denmark. However, this does not seem to have had a severe impact on penetration, although the bandwidth used in Denmark is lower than in the other two countries. Initially, market and government action focused on the supply side, laying out infrastructure in the belief that people would use it. Government intervention with the aim of stimulating the supply of broadband services has played an important role in the success of South Korea. It is however clear that, with increasing availability of broadband services to households and yet still a fairly low penetration, achieving the goal of broadband for all demands more than putting the infrastructure in place. Therefore there is now a shift towards more focus on the demand side. Denmark refuses to provide any support to infrastructure development, and many of the suggestions in Germany's D21 Initiative focus on the end user and how to increase their incentives to use broadband services. In this respect education and schools are at the forefront, and focus must be on how to bring broadband to the younger generation. If any firm conclusion can be drawn from this paper, it must be that public policy does matter. Although technical and economic parameters such as income level play a role in the development of broadband services, successful implementation of broadband also depends on the kind of policy measures taken. These measures may include stimulation of both demand and supply of both content and infrastructure.
References

CRTC (2003) Broadcasting Policy Monitoring Report 2003. Canadian Radio-television and Telecommunication Commission. http://www.crtc.gc.ca
Dutta S, Jain A (2002) The Networked Readiness of Nations. INSEAD. http://www.weforum.org/pdf/Gcr/GITR_2003_2004/Framework_Chapter.pdf
EU Communications Committee (2003) Broadband Access in the EU. http://www.si.dk/image.asp?page=image&objno=148800153
European Commission (2002) eEurope – An information society for all, COM 263 final. http://europa.eu.int/eur-lex/lex/LexUriServ/site/en/com/2002/com2002_0263en01.pdf
FCC (2002) Inquiry Concerning the Deployment of Advanced Telecommunications Capability to All Americans in a Reasonable and Timely Fashion, and Possible Steps to Accelerate Such Deployment Pursuant to Section 706 of the Telecommunications Act of 1996. Third Report. http://www.fcc.gov/broadband/706.html
Henten A et al. (2004) New Trends in Telecommunication Innovation. Published in conference proceedings, EuroCPR, March 2004, Barcelona
ITU (2002) Internet for a Mobile Generation. Geneva. http://www.itu.int/osg/spu/publications/mobileinternet
OECD (2003a) Benchmarking Broadband Prices in the OECD. DSTI/ICCP/TISP 8/Final. http://www.oecd.org/dataoecd/58/17/32143101.pdf
OECD (2003b) Broadband driving growth: Policy responses. DSTI/ICCP(2003)13/Final. http://www.olis.oecd.org/olis/2003doc.nsf/0/0a6e962d4162da9cc1256dba00559fea/$FILE/JT00151116.pdf
Estimating the Demand for Voice over IP Services: A Contingent Valuation Approach1

Paul Rappoport2,*, Lester D. Taylor3,**, James Alleman4,***

* Temple University, USA
** University of Arizona, USA
*** University of Colorado, USA
Abstract

Voice-over-IP (VoIP) services are receiving increasing attention as an alternative to traditional switched-access telephone services. This paper focuses on the underlying determinants of demand in order to assess the potential for VoIP. The authors utilise a model of demand based on a consumer's willingness to pay and provide estimates of the elasticity of demand for VoIP. The intent of the paper is to add to the discussion of VoIP and to stimulate analysis.
Introduction

The focus in this paper is on what can be seen as the logical next chapter for consumers in the ongoing "convergence" of the computer and telecommunications industries, namely, the residential market for Voice over Internet Protocol (VoIP) services. Although at present this market is minimal, a large number of companies (Vonage, AT&T, and Qwest, for example) and Wall Street analysts foresee it as a potentially large opportunity, and are investing accordingly.5 The purpose of the present exercise is to take a sober look at the market for VoIP by using data on willingness-to-pay from a representative survey of U.S. households to provide estimates of the underlying price elasticity of demand for 'best-effort' VoIP
1 This paper is adapted and updated from an earlier version published in Telektronikk. The authors thank Dale Kulp, president of Marketing Systems Group, for access to the CENTRIS omnibus survey.
2 E-mail: [email protected]
3 E-mail: [email protected]
4 E-mail: [email protected]
5 For a comprehensive look at VoIP providers see http://VoipWatch.com
services, as well as initial estimates of the size of the 'best-effort' VoIP market.6 This paper does not address a closely related issue, namely digital telephone services offered by cable companies. VoIP is a common term that refers to the different protocols that are used to transport real-time voice and the necessary signalling by means of internet protocol (IP). Simply put, VoIP allows the user to place a call over IP networks. "Best-effort" VoIP is the provisioning of voice services using broadband access (cable modem, DSL, or wireless broadband). It is referred to as 'best-effort' because service quality and performance cannot be guaranteed by the provider. The traditional voice telephony system meshes a series of hubs together using high-capacity links. When a call is placed, the network attempts to open a fixed circuit between the two endpoints. If the call can be completed, a circuit that stretches the entire length of the network between the two endpoints is then dedicated to that particular call, and cannot be used by another until the originating call is terminated. The basic architectural difference between traditional telephony and IP telephony is that an IP network such as the internet is inserted between the telephony end-points, typically central offices. IP networks are packet-switched, as opposed to circuit-switched traditional telephony. Unlike circuit-switched networks, packet networks do not set up a fixed circuit before the call begins. Instead, the individual voice packets are sent through the IP network to the destination. Each packet may traverse an entirely different path through the network; however, the conversation is reassembled in the correct order before being passed on to the VoIP application. The "glue" that ties together the PSTN (Public Switched Telephone Network) with the IP network is known as an IP gateway. IP gateways perform many of the traditional telephone functions, such as terminating (answering) a call and determining where the call is to be directed, and perform various administrative services such as user verification and billing before passing the call on to a receiving IP gateway. The receiving IP gateway, which may also be interconnected with the PSTN, dials the destination and completes the call.7 Pricing a new service is mostly a trial and error process, and the pricing of best-effort VoIP service has been no exception. Judging from the number of recent press releases, financial analyses, and articles written on VoIP, estimation of market size, consumer interest, and willingness-to-pay for VoIP services is still a "hot" subject. A recent Research and Markets report forecasts 2006 as the "takeoff" year for VoIP.8 That report predicts that there will be 6.7 million residential VoIP customers by the end of 2005. Not all analysis has been bullish. A Goldman Sachs telecom services report notes that, as the VoIP threat evolves, it should not be viewed as catastrophic by the incumbent local exchange carriers (ILECs)
6 'Best-effort' refers to VoIP plans that provide voice services over the internet. This offering requires potential customers to have or be willing to have a broadband connection. 'Primary line' quality VoIP is provided by a service provider who owns or controls the infrastructure between the MTA (telephone-enabled DOCSIS modem) and the gateway.
7 See for example, http://www.cse.ohio-state.edu/~jain/cis788-99/ftp/voip_products/
8 http://www.researchandmarkets.com/reports/c17272
(Goldman Sachs Telecom Services 2004). Business 2.0 published a story, "Beware the VoIP Hype", in its December 9, 2003 issue describing a mismatch between the expectations of investors and the realities of the market. The author of that story noted that "… the big winners are likely to be the established companies that are already profitable and can afford to spend money on research and development and marketing. Most companies are not making money off the technology."9 Before turning to technical details, it is useful to note just what it is that VoIP represents. Unlike some services that have emerged out of the electronic revolution, VoIP does not involve a new good per se, but rather a new way of providing an existing good at possibly lower cost and in a possibly more convenient manner.10 The good in question, of course, is real-time voice communication at a distance. The word possibly is to be emphasised, for voice communication is a mature good in a mature market, with characteristics that for all practical purposes are now those of a commodity. The ultimate potential market for VoIP, accordingly, is simply the size of the current voice market plus normal growth. Hence the evolution of VoIP is largely going to depend upon the efficiency, vis-à-vis traditional telephony, with which VoIP vendors can provision this market. To our knowledge, the present effort, which builds upon a previous study of the demand for broadband access using models of willingness-to-pay (Rappoport et al. 2003a), is the first to focus on the modelling of the demand for VoIP services. The analysis in this paper makes use of data from an omnibus survey conducted in March and April, 2004, by the Marketing Systems Group of Ft. Washington, PA,11 in which respondents were asked questions concerning their willingnesses-to-pay (WTP) for VoIP services. In the study of broadband access just referred to, price elasticities for broadband access were developed using extensions of a generally overlooked procedure suggested by Cramer (1969). The same analyses have been used in this study. Among other things, price elasticities for VoIP are obtained that range from an order of -0.50 for a fixed price of $10 to -3.00 for a fixed price of $70. In addition to the range of elasticities just mentioned, the principal findings of the paper are:
• Market drivers include the distribution of total telephone bills (local and long distance), the distribution of WTP, and the availability of broadband access to the internet.
• The market size for best-effort VoIP is small. For example, at a price of $30, the estimated consumer market size is 2.7 million households.
9 Business 2.0, http://www.business2.com/b2/subscribers/articles/0,17863,534155-2,00.html
10 Cellular telephone provides an apt contrast with VoIP, for, while cellular, too, represents an alternative way of providing real-time voice communication, it also allows for such to take place at times not available to traditional fixed-line telephony, and hence in this sense is a genuinely new good.
11 www.m-s-g.com
• Households with access to the internet, especially with broadband access, have a higher willingness-to-pay for VoIP services.
The structure of the paper is as follows. The next section begins with a short descriptive presentation of factors that underlie the demand for VoIP services. Section III provides the underlying theoretical framework that guides the analysis, while Section IV discusses the data used in this analysis. Section V presents price elasticities for best-effort VoIP services derived from kernel-smoothed cumulative distributions of willingness-to-pay. Market-size simulations are presented in Section VI. Conclusions are given in Section VII.
Descriptive analysis

The analysis of the demand for VoIP services can be viewed as the confluence of three forces or factors: the distribution of total telephone bills; the probability that a household has or is interested in getting broadband access to the internet; and the household's willingness to pay for VoIP service. Each of these forces helps to define the potential market size. Fig. 1 displays the distribution of telephone bills (local and long-distance). The relevant assumption here is that a household's interest in VoIP – and hence willingness-to-pay – depends on the household's total telecommunication expenditures. Thus, presumably, households with large telephone bills will be more interested in VoIP than households with smaller telephone bills. The fall-off in telephone expenditures after $50 shown in this figure suggests that the potential size of the VoIP market may be limited by the number of households that have combined local and long distance monthly telephone bills greater than $50. Fig. 2 shows the distribution of willingness-to-pay for VoIP for households that already have broadband access to the Internet. Since these are the households that would seem to have the most potential for migrating to VoIP, the prospective size of the VoIP market suggested by the numbers in this distribution would appear to be modest. At a "price" of $40 per month, for example, the indicated size of the market (as measured by the number of households with WTP greater than $40) is seen to be about 2 million households, while at $10 a month (which would almost certainly not be remunerative), the number is only 7 million. Fig. 3 examines the relationship between the distribution of income (left scale) and the broadband penetration rate (right scale). Since broadband access is presumed to be a requirement for best-effort VoIP, the strong positive relationship that is indicated to hold between broadband penetration and income makes it clear that the distribution of income (especially the upper tail) is an important determinant of the potential VoIP market.12 Fig. 3 also suggests that any growth in broadband will have to come from lower- to middle-income households.
12 The relationship between WTP and income will be examined in the section "Calculation of price elasticities".
[Fig. 1. Distribution of total telephone bill – bar chart; x-axis: local and long distance bill (dollars), y-axis: percent of households]

[Fig. 2. Distribution of willingness-to-pay for VoIP – bar chart; x-axis: willingness-to-pay (dollars), y-axis: households (million)]

[Fig. 3. Demand for broadband as a function of income – x-axis: income strata; left y-axis: percent (income distribution), right y-axis: percent (broadband penetration)]
Theoretical considerations

We begin with the usual access/usage framework for determining the demand for access to a network, whereby the demand for access is determined by the size of the consumer surplus from usage of the network in relation to the price of access. Accordingly, let q denote usage, and let q(p,y) denote the demand for usage, conditional on a price of usage, p, and other variables (income, education, etc.), y. The consumer surplus (CS) from usage will then be given by
$$ CS = \int_{p}^{\infty} q(z, y)\, dz \,. \qquad (1) $$
Next, let π denote the price of access. Access will then be demanded if

$$ CS \ge \pi \,, \qquad (2) $$

or equivalently (in logarithms) if

$$ \ln CS \ge \ln \pi \,. \qquad (3) $$
Alleman (1976, 1977) and Perl (1983) were among the first to apply this framework empirically. Perl did so by assuming a demand function of the form $q(p, y) = A e^{-\alpha p} y^{\beta} e^{u}$, so that the consumer surplus is:13
13 Since its introduction by Perl in 1978 in an earlier version of his 1983 paper, this function has been used extensively in the analysis of telecommunications access demand (see, e.g., Kridel 1988; Taylor and Kridel 1990). The great attraction of this demand function is its nonlinearity in income and its ability to handle both zero and non-zero usage prices.
$$ CS = \int_{p}^{\infty} A e^{-\alpha z}\, y^{\beta} e^{u}\, dz \,, \qquad (4) $$
where y denotes income (or other variables) and u is a random error term with distribution g(u). Consumer’s surplus, CS, will then be given by
$$ CS = \frac{A e^{-\alpha p}\, y^{\beta} e^{u}}{\alpha} \,. \qquad (5) $$
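The step from (4) to (5) is a direct integration over z; spelled out for completeness:

$$ \int_{p}^{\infty} A e^{-\alpha z}\, y^{\beta} e^{u}\, dz = A y^{\beta} e^{u} \left[ -\frac{e^{-\alpha z}}{\alpha} \right]_{p}^{\infty} = \frac{A e^{-\alpha p}\, y^{\beta} e^{u}}{\alpha} \,. $$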
With net benefits from usage and the price of access expressed in logarithms, the condition for demanding access to the telephone network accordingly becomes:

$$ P(\ln CS \ge \ln \pi) = P(a - \alpha p + \beta \ln y + u \ge \ln \pi) = P(u \ge \ln \pi - a + \alpha p - \beta \ln y) \,, \qquad (6) $$
where a = ln(A/α). The final step is to specify a probability law for consumer surplus, which, in view of the last line in equation (6), can be reduced to the specification of the distribution of u in the demand function for usage. An assumption that u is distributed normally leads to a standard probit model, while an assumption that u is logistic leads to a logit model. Empirical studies exemplifying both approaches abound in the literature.14 The standard procedure for estimating access demand can thus be seen in terms of obtaining information on the consumer surplus from usage by estimating a demand function, and then integrating beneath this demand function. In the present context, however, our procedure is essentially the reverse, for what we have by way of information are statements on the part of respondents in a survey as to the most that they would be willing to pay for a particular type of VoIP service. This maximum accordingly represents (at least in principle) the highest price at which the respondent would purchase that type of service. Thus, for any particular price of VoIP, VoIP will be demanded for WTPs that are at this value or greater, while VoIP will not be demanded for WTPs that are less than this value. Hence, implicit in the distribution of WTPs is an aggregate demand function (or more specifically, penetration function) for VoIP service. In particular, this function will be given by:

$$ D(\pi) = \text{proportion of WTPs} \ge \pi = P(\mathrm{WTP} \ge \pi) = 1 - \mathrm{CDF}(\pi) \,, \qquad (7) $$
where CDF(π) denotes the cumulative distribution function of the WTPs. Once CDFs of WTPs are constructed, price elasticities can be obtained (without intervention of the demand function) via the formula (or empirical approximations thereof):
$$ \mathrm{Elasticity}(\pi) = \frac{d \ln \mathrm{CDF}(\pi)}{d \ln \pi} \,. \qquad (8) $$

14 Empirical studies employing the probit framework include Perl (1983) and Taylor and Kridel (1990), while studies using the logit framework include Bodnar et al. (1988) and Train et al. (1987). Most empirical studies of telecommunications access demand that employ a consumer-surplus framework focus on local usage, and accordingly ignore the net benefits arising from toll usage. Hausman et al. (1993) and Erikson et al. (1998) represent exceptions.
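As a rough numerical illustration of the penetration function (7), combining figures quoted in the descriptive analysis and later in the paper (about 31 million U.S. households with broadband access, of which roughly 2 million report a WTP of $40 or more):

$$ D(\$40) = P(\mathrm{WTP} \ge \$40) \approx \frac{2 \text{ million}}{31 \text{ million}} \approx 0.065 \,, $$

i.e. a penetration of roughly 6.5 per cent of broadband households at a $40 price point.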
Data employed in the analysis

As noted, information on willingness-to-pay for VoIP service was collected from an omnibus national survey of about 8000 households in April and May, 2004, by the Marketing Systems Group (MSG) of Philadelphia. The omnibus survey, Centris15, is an ongoing random telephone survey of U. S. households. Each of the participants in the surveys utilised here was asked one (but not both) of the following two questions regarding their willingness-to-pay.
• What is the most you would be willing to pay on a monthly basis for a service that provides unlimited local and long distance calling using your computer?
• What is the most you would be willing to pay on a monthly basis for a service that provides unlimited local and long distance calling using your computer with internet connection at a cost of $20 per month?16
The first question was asked of those households that currently have broadband access, while the second version was asked of those households that did not have broadband access.
Calculation of price elasticities

We now turn to the calculation of price elasticities in line with expression (8) above. The most straightforward way of doing this would be to define the elasticities as simple arc elasticities between selected adjacent points on the empirical CDFs. Unfortunately, however, because the survey-elicited WTPs tend to bunch at intervals that are multiples of 5 dollars, the values that emerge from this procedure are highly unstable, and accordingly of little practical use. To avoid this problem, elasticities are calculated using a kernel-based non-parametric procedure in which the "pileups" at intervals of 5 dollars are "smoothed out." Since kernel estimation may be seen as somewhat novel in this context, some background and motivation may be useful. The goal in kernel estimation is to develop a continuous approximation to an empirical frequency distribution that, among other things, can be used to assign density, in a statistically valid manner, in any small neighbourhood of an observed frequency point. Since there is little
15 www.Centris.com
16 $20 was selected since dial-up prices were approximately $20.
reason to think that, in a large population, "pileups" of WTPs at amounts divisible by $5 reflect anything other than the convenience of nice round numbers, there is also little reason to think that the "true" density at WTPs of $51 or $49 ought to be much different from the density at $50. The intuitive way of dealing with this contingency (i.e., "pileups" at particular discrete points) is to tabulate frequencies within intervals, and then to calculate "density" as frequency within an interval divided by the length of the interval (i.e., as averages within intervals). However, in doing this, the "density" within any particular interval is calculated using only the observations within that interval, which is to say that if an interval in question (say) is from $40 to $45, then a WTP of $46 (which is as "close" to $45 as is $44) will not be given weight in calculating the density for that interval. What kernel density estimation does is to allow every observation to have weight in the calculation of the density for every interval, but a weight that varies inversely with the "distance" that the observations lie from the centre of the interval in question. Let g(x) represent the density function that is to be constructed for a random variable x (in our case, WTP) that varies from x1 to xn. For VoIP WTP, for example, the range x1 to xn would be 0 to $700.17 Next, divide this range (called the 'support' in kernel estimation terminology) into k sub-intervals. The function g(x) is then constructed as:
$$ g(x_i) = \sum_{j=1}^{N} \frac{1}{Nh}\, K\!\left( \frac{x_i - x_j}{h} \right), \qquad i = 1, \ldots, k \,. \qquad (9) $$
In this expression, K denotes the kernel-weighting function, h represents a smoothing parameter, and N denotes the number of observations. For the case at hand, the density function in expression (9) has been constructed for each interval using the unit normal density function as the kernel weighting function and a 'support' of k = 1000 intervals.18 From the kernel density functions, VoIP price elasticities can be estimated using numerical analogues to expression (8).19 The resulting calculations, undertaken
17 This is for the households with broadband access. For those households without broadband access [i.e., for households responding to Question (b)], the range is from 0 to $220.
18 Silverman's rule-of-thumb, $h = (0.9)\,\min[\text{std. dev.},\ \text{interquartile range}/1.34]\, N^{-1/5}$, has been used for the smoothing parameter h. Two standard references for kernel density estimation are Silverman (1986) and Wand and Jones (1995). Ker and Goodwin (2000) provide an interesting practical application to the estimation of crop insurance rates.
19 The kernel-based elasticities are calculated as "arc" elasticities using points (at intervals of ±$5 around the value for which the elasticity is being calculated) on the kernel CDFs via the formula
$$ \mathrm{Elasticity}(x) = \frac{\Delta \mathrm{CDF}(x)/\mathrm{CDF}(x)}{\Delta \mathrm{WTP}(x)/\mathrm{WTP}(x)} \,. $$
Thus, for $70, for example, the elasticity is calculated for x (on the kernel CDFs) nearest to $75 and $65.
at WTPs of $70, $60, $50, $40, $30, $20, and $10 per month, are presented in Table 1.20 The estimated elasticities are seen to range from about -3.0 for WTPs of $60–70 to about -0.6 for WTPs of $10. Interestingly, the values in column 1 (for households that already have broadband access) for the most part mirror those in column 2 (which refer to households that do not). Since this appears to be the first effort to obtain estimates of price elasticities for VoIP, comparison of the numbers in Table 1 with existing estimates is obviously not possible. Nevertheless, it is of interest to note that the values that have been obtained are similar to existing econometric estimates for the demand for broadband access to the internet.21 More will be said about this below.

Table 1. VoIP elasticities based on WTP kernel-smoothed CDFs

WTP ($)    With Broadband    Without Broadband
70         -2.8616           -2.9556
60         -3.0217           -2.4730
50         -2.7794           -3.0093
40         -1.7626           -1.5630
30         -1.0753           -1.0527
20         -0.7298           -0.7564
10         -0.5454           -0.6025
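For readers who want to reproduce the mechanics, the following is a minimal, illustrative Python sketch of the procedure described above: Gaussian-kernel density estimation with Silverman's rule-of-thumb bandwidth, a smoothed CDF, and ±$5 arc elasticities. The synthetic WTP data (and the lognormal parameters used to generate them) are assumptions for illustration only; the Centris microdata are not public, so the printed numbers will not reproduce Table 1.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for survey WTP responses, bunched at multiples of $5
# as the paper reports (assumed lognormal shape, for illustration only).
wtp = np.clip(np.round(rng.lognormal(3.2, 0.9, size=2000) / 5) * 5, 0, 700)

# Silverman's rule-of-thumb bandwidth: h = 0.9 * min(std, IQR/1.34) * N^(-1/5)
n = wtp.size
iqr = np.subtract(*np.percentile(wtp, [75, 25]))
h = 0.9 * min(wtp.std(ddof=1), iqr / 1.34) * n ** (-0.2)

# Gaussian-kernel density on a 1000-point support, as in expression (9)
grid = np.linspace(0.0, 700.0, 1000)
u = (grid[:, None] - wtp[None, :]) / h
dens = np.exp(-0.5 * u**2).sum(axis=1) / (n * h * np.sqrt(2.0 * np.pi))

# Kernel-smoothed CDF via trapezoidal accumulation
cdf = np.concatenate(([0.0], np.cumsum(0.5 * (dens[1:] + dens[:-1]) * np.diff(grid))))
cdf /= cdf[-1]

def arc_elasticity(price, delta=5.0):
    # Arc elasticity around price +/- delta, computed here on the penetration
    # function D = 1 - CDF so that the sign matches Table 1 (footnote 19
    # states the formula in terms of the CDF itself).
    d = 1.0 - np.interp([price - delta, price, price + delta], grid, cdf)
    return ((d[2] - d[0]) / d[1]) / (2.0 * delta / price)

for p in (70, 60, 50, 40, 30, 20, 10):
    print(f"WTP ${p}: elasticity {arc_elasticity(p):.2f}")
```

The ±$5 window in arc_elasticity mirrors the construction in footnote 19; a larger window would smooth the elasticity estimates further at the cost of resolution near the chosen price points.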
The potential market for VoIP

As noted in the introduction, VoIP is not a new good per se, but rather provides a new way of supplying an existing good. Its success, consequently, is going to depend upon whether VoIP vendors can supply acceptable-quality voice telephony at costs that are lower than those of the traditional carriers. VoIP is not a "killer app" whose "explosion on the scene" will fuel a whole new industry. The voice telephony market is an old market, and the growth of VoIP is for the most part going to have to be at the expense of existing vendors.22 The purpose of this section is to
20 Since Question (b) postulates an access cost of $20, the WTPs for households without broadband access are assumed to be net of this $20.
21 See, e.g., Rappoport et al. (1998), Rappoport et al. (1999), Rappoport et al. (2002a), Rappoport et al. (2002b), Rappoport et al. (2003a) and Rappoport et al. (2003b).
22 While the discussion here is couched in terms of new companies versus old, the argument is really with regard to technologies. If VoIP should in fact turn out to be superior in terms of quality and cost in relation to traditional circuit-switched technology, then existing telecommunication companies will have to adjust accordingly, which they almost certainly will do, rather than go the way of the dodo bird. The end result might be that traditional telcos simply transform themselves into full-scale internet service providers.
take a sober look at what VoIP vendors might accordingly reasonably expect as an initial potential market. We should not expect a household to demand VoIP unless doing so leads to a reduction in the cost of voice communication. Not unreasonably, therefore, the potential market for VoIP can be viewed as consisting of those households for which both the telephone bill and the WTP for VoIP are greater than the price of the service. Potential markets employing these criteria, using information from the Centris survey, have accordingly been constructed for three different VoIP prices, namely 50, 40, and 30 dollars. The resulting numbers of households estimated to be candidates for demanding VoIP are presented in Table 2. These numbers are small by any assessment, and stand in marked contrast with various estimates that have been promulgated by industry analysts. There are approximately 31 million households with broadband access in the U.S., and to some, this is the size of the potential market for VoIP. However, if one simply looks at willingness-to-pay for VoIP that is greater than zero, the potential market drops to approximately 7 million households. A closer look at the willingness-to-pay function suggests that at a price of $30, the market size is less than 3 million households. There is room for some growth, especially if the number of households with broadband services grows. Nonetheless, the total size of the 'best-effort' VoIP market is likely to remain at best modest.23

Table 2. Potential market for VoIP

Price ($)    Households
50           810,000
40           2,170,000
30           2,700,000
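A minimal sketch of this screening rule, under the assumption of a household-level dataset with hypothetical column names (the Centris microdata are not public, so the values below are illustrative only), might look as follows; the counts in Table 2 come from applying exactly this kind of double criterion to the weighted survey records.

```python
import pandas as pd

# Hypothetical survey extract: column names and values are illustrative only.
hh = pd.DataFrame({
    "phone_bill": [25, 55, 80, 35, 60],   # monthly local + long distance bill ($)
    "wtp_voip":   [10, 45, 35, 40, 20],   # stated willingness-to-pay for VoIP ($)
    "weight":     [1.2e6] * 5,            # survey expansion weight (households)
})

def market_size(price):
    # A household counts as a VoIP candidate only if both its current
    # telephone bill and its stated WTP exceed the VoIP price.
    candidates = hh[(hh["phone_bill"] > price) & (hh["wtp_voip"] > price)]
    return candidates["weight"].sum()

for p in (50, 40, 30):
    print(f"price ${p}: {market_size(p):,.0f} candidate households")
```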
Nor is it likely that we will see VoIP prices fall below $20. The typical cost model for VoIP is based on 1200 minutes of use. Assuming the cost per minute for transport and connection is $0.01, the base cost of providing VoIP is $12. Factoring in the costs for equipment, marketing, back-office functions, technical support and customer acquisition, costs soon approach $18 per month.
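Spelling out the arithmetic (the roughly $6 of overhead is implied by the difference between the two figures quoted above):

$$ 1200 \text{ minutes} \times \$0.01/\text{minute} = \$12 \,, \qquad \$12 + \approx\$6 \text{ (overhead)} \approx \$18 \text{ per month.} $$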
Conclusions

This paper has analysed the consumer demand for best-effort VoIP service using information on willingness-to-pay for VoIP that was collected in early March and April, 2004, in an omnibus survey of some 8000 households. A theoretical
23 It is for these reasons that a number of large cable providers have opted to downplay best-effort VoIP in their telephony strategies and focus on providing basic telephony services over their IP-based networks. Initial estimates for cable providers of the market size for IP-based telephone services are 20% of the current RBOCs' market.
framework has been utilised that identifies willingness-to-pay with consumer surplus from usage, which allows both for willingness-to-pay to be modelled as a function of income, education, and other socio-demographic factors, and for the construction of a market demand function. The results of the exercise suggest that the demand for VoIP service is elastic (i.e., has an elasticity greater than 1 in absolute value) over the range of prices currently charged by VoIP service providers. The distribution of total telephone bills, the probability that a household has broadband access and the household's willingness to pay for VoIP service are used to simulate the potential market size for various prices of VoIP service. In all simulations, the potential market size is estimated to be small. Since the elasticities of the exercise are constructed from information elicited directly from households, and thus entail the use of contingent-valuation (CV) data, the seriousness (in light of the longstanding controversy surrounding the use of such data) with which our elasticities are to be taken might be open to question.24 However, in our view, the values that we have obtained are indeed plausible and warrant serious consideration. Added credence for our results, it seems to us, is provided by the fact that, with VoIP service, we are dealing with a product (voice telephony) with which respondents are familiar and which they already demand, unlike in circumstances (such as the valuation of a unique natural resource or the absence of a horrific accident) in which no generally meaningful market-based valuation can be devised. It is interesting to note that our estimated elasticities suggest that at a price around $30 demand shifts from inelastic to elastic. Vonage, the largest of the best-effort VoIP providers, has continually reduced its price, which now stands at $24.99.25

24 The critical literature on contingent valuation methods is large. See the NOAA Panel Report (1993), Smith (1993), Portnoy (1994), Hanneman (1994), Diamond and Hausman (1994), and McFadden (1994). On the other hand, particularly successful uses of CV data would seem to include Hammitt (1986) and Kridel (1988).
25 See http://www.citi.columbia.edu/voip_agenda.htm. See also http://www.vonage.com.
References

Alleman J (1976) The Demand for Local Telephone Service. US Department of Commerce, Office of Telecommunications, OT Report, pp 76–24
Alleman J (1977) The Pricing of Local Telephone Service. US Department of Commerce, Office of Telecommunications, OT Special Report, 77-14, pp i–iv, 1–183
Andersson K, Myrvold O (2002) Residential Demand for 'Multipurpose Broadband Access': Evidence from a Norwegian VDSL Trial. Telektronikk 2, 96:20–25
Andersson K, Fjell K, Foros O (2003) Are TV-viewers and surfers different breeds? Broadband demand and asymmetric cross-price effects. Paper presented at the Norwegian Annual Conference in Economics, Bergen, Telenor R&D, 1331 Fornebu, Norway
Bodnar J, Dilworth P, Iacono S (1988) Cross-Section Analysis of Residential Telephone Subscription in Canada. Information Economics and Policy 4, 3:311–331
Cramer JS (1969) Empirical Econometrics. Elsevier Publishing Co., New York
Diamond PA, Hausman JA (1994) Contingent Valuation: Is Some Number Better Than No Number? Journal of Economic Perspectives 4, 8, fall:45–64
Erikson RC, Kaserman DL, Mayo JW (1998) Targeted and Untargeted Subsidy Schemes: Evidence from Post-Divestiture Efforts to Promote Universal Service. Journal of Law and Economics, 41, October:477–502
Goldman Sachs Telecom Services (2004) Wireline/Broadband Competitive Analysis, April 16
Hammitt JK (1986) Estimating Consumer Willingness to Pay to Reduce Food Borne Risk. Report R-3447-EPA, The RAND Corporation
Hannemann WM (1994) Valuing the Environment through Contingent Valuation. Journal of Economic Perspectives 4, 8, fall:19–44
Hausman JA, Sidak JG, Singer HJ (2001) Cable Modems and DSL: Broadband internet Access for Residential Customers. American Economic Review Papers and Proceedings 2, 91, May:302–307
Hausman JA, Tardiff TJ, Bellinfonte A (1993) The Effects of the Breakup of AT&T on Telephone Penetration in the United States. American Economic Review Papers 2, 83, May:178–184
Ker AP, Goodwin BK (2000) Nonparametric Estimation of Crop Insurance and Rates Revisited. American Journal of Agricultural Economics, 83, May:463–478
Kridel DJ (1988) A Consumer Surplus Approach to Predicting Extended Area Service (EAS) Development and Stimulation Rates. Information Economics and Policy 4, 3:379–390
Kridel DJ, Rappoport PN, Taylor LD (2001) An Econometric Model of the Demand for Access to the internet by Cable Modem. In: Loomis DG, Taylor LD (eds) Forecasting the internet: Understanding the Explosive Growth of Data Communications. Kluwer Academic Publishers
National Oceanographic and Atmospheric Administration (NOAA) (1993) Federal Register, 4601, 58, January 15
Maddala GS (1969) Limited-Dependent and Qualitative Variables in Econometrics. Cambridge University Press
McFadden D (1994) Contingent Valuation and Social Choice. American Journal of Agricultural Economics, 76, November:695–707
Perl LJ (1978) Economic and Demographic Determinants for Basic Telephone Service. National Economic Research Associates, White Plains, New York, March 28
Perl LJ (1983) Residential Demand for Telephone Service 1983. Prepared for the Central Service Organization of the Bell Operating Companies, Inc., National Economic Research Associates, White Plains, New York, December
Portnoy PR (1994) The Contingent Valuation Debate: Why Economists Should Care. Journal of Economic Perspectives 4, 8, fall:3–18
Rappoport PN, Taylor LD, Kridel DJ, Serad W (1998) The Demand for internet and On-Line Access. In: Bohlin E, Levin SL (eds) Telecommunications Transformation: Technology, Strategy and Policy, IOS Press
Rappoport PN, Taylor LD, Kridel DJ (1999) An Econometric Study of The Demand for Access to The internet. In: Loomis DG, Taylor LD (eds) The Future of The Telecommunications Industry: Forecasting and Demand Analysis, Kluwer Academic Publishers, Dordrecht
Rappoport PN, Taylor LD, Kridel DJ (2002a) The Demand for High-Speed Access to the internet. In: Loomis DG, Taylor LD (eds) Forecasting The internet: Understanding the Explosive Growth of Data Communications, Kluwer Academic Publishers, Dordrecht
Rappoport PN, Taylor LD, Kridel DJ (2002b) The Demand for Broadband: Access, Content, and The Value of Time. In: Crandall RW, Alleman JH (eds) Broadband: Should We Regulate High-Speed internet Access. AEI-Brookings Joint Center for Regulatory Studies, Washington, D.C.
Rappoport PN, Kridel DJ, Taylor LD, Duffy-Deno K, Alleman J (2003a) Forecasting The Demand for internet Services. In: Madden G (ed) The International Handbook of Telecommunications Economics: Volume II, Edward Elgar Publishing Co., London
Rappoport PN, Taylor LD, Kridel DJ (2003b) Willingness-to-Pay and the Demand for Broadband Access. In: Shampine A (ed) Down to the Wire: Studies in the Diffusion and Regulation of Telecommunications Technologies, Nova Science Publishers
Rappoport PN, Kridel DJ, Taylor LD, Alleman J (2004) The Demand for Voice over IP: An Econometric Analysis Using Survey Data on Willingness-to-Pay. Telektronikk 4.04, December, pp 70–83
Silverman BW (1986) Density Estimation for Statistics and Data Analysis. Monographs on Statistics and Applied Probability 26, Chapman and Hall, London
Smith VK (1993) Non-Market Valuation of Natural Resources: An Interpretive Appraisal. Land Economics 1, 69, February:1–26
Taylor LD (1994) Telecommunications Demand in Theory and Practice. Kluwer Academic Publishers, Dordrecht
Taylor LD, Kridel DJ (1990) Residential Demand for Access to the Telephone Network. In: de Fontenay A, Shugard MH, Sibley DS (eds) Telecommunications Demand Modeling, North Holland Publishing Co., Amsterdam
Train KE, McFadden DL, Ben-Akiva M (1987) The Demand for Local Telephone Service: A Fully Discrete Model of Residential Calling Patterns and Service Choices. The Rand Journal of Economics 1, 18, Spring:109–123
Varian HR (2002) Demand for Bandwidth: Evidence from the INDEX Project. In: Crandall RW, Alleman JH (eds) Broadband: Should We Regulate High-Speed internet Access. AEI-Brookings Joint Center for Regulatory Studies, Washington, D.C.
Wand MP, Jones MC (1995) Kernel Smoothing. Monographs on Statistics and Applied Probability 60, Chapman and Hall, London
Part 4: Integrating Citizens and Consumers in the Information Economy Master Plan
The Transformation of Media – Economic and Social Implications

Benedikt von Walter1, Oliver Quiring2

Ludwig Maximilians University Munich, Germany
Abstract

When trying to connect markets and societies, one focal point of concern is the media sector. Media markets and society are interdependent: market behaviour in media influences society, and society influences market behaviour. Information technology is a major driver of transformation for both media society and markets, as well as for their mutual interdependence. Traditionally, research focuses on only one side, whereas an interdisciplinary view is neglected in most cases. On the following pages, we discuss the potential for such interdisciplinary research concerning the media sector and try to show its importance, especially with regard to the influence of information technology. First, we briefly introduce basic principles and recent evolutions of the media sector. In a second step, we present the views of business administration and communication science on the sector and its recent developments. Third, we identify a certain complementarity between the two disciplines, discuss conceptual consequences and derive potential research perspectives resulting from these complementary views. As a main result, we find process-related views and functional analyses inside both disciplines, but with different objectives. Major research concerns cover economic efficiency in the case of business administration and social fit in the case of communication science. Accordingly, functional views in these disciplines centre on economic functions in one discipline and on social functions in the other. We argue that both types of functions are interdependent, as social functions require economic functions to be fulfilled and vice versa. This and further identified complementarities open the field for interdisciplinary research on the media sector and its transformation by information technology.
1 E-mail: [email protected]
2 E-mail: [email protected]
Introduction

The media traditionally have a lot of different functions. For example, they provide societies with information and help to integrate them (public functions) (Pürer and Raabe 1994; Vlasic 2004), while the production, distribution and reception of media content are conducted in media markets (economic functions) (Schumann and Hess 2002). Nevertheless, within the last decades a strong tendency towards an increasingly "pure" economisation of the media sector could be observed, which was accompanied by media concentration on the one hand and deregulation of markets on the other. The growing impact of new digital technologies (e.g. the Internet) has been another major cause of transformation during the last decade, and it still is. Both tendencies interact, as digitalisation offers new opportunities for economisation while in turn economisation is a powerful driver of technological progress. This leads practitioners as well as the scientific community to quite fundamental questions about the future of the media sector. While economisation has been treated widely (Altmeppen and Karmasin 2003a; Siegert 2002), though controversially, the impact of digitalisation on the social and economic functions of media companies has not been subject to comparably intense discussion so far. This article looks at the media sector from two perspectives. On the one hand, a business administration perspective represents the economic aspects and functions of the media sector. On the other hand, a communication science perspective allows for a "social" look at the drivers and their implications for media functions. While an integration of both perspectives into one generic point of view does not seem possible at the moment, an interdisciplinary discourse with a strong focus on a special topic, as recently postulated by media economists (Altmeppen and Karmasin 2003b)3, is one promising objective. A second major aim derives from the first: if a complete approximation of the two disciplines is not possible, we want to identify complementary elements in the different approaches. Firstly, we present an overview of relevant evolutions using the German media sector as an example. A brief historic summary and an analysis of its consequences for the content provided and asked for are followed by a more detailed description of economic and technological drivers of transformation. Section three encompasses the two disciplines' basic elements under investigation with the aim of identifying complementary approaches. In the following chapter we develop first ideas for complementary approaches and research strategies. The last chapter closes the article with a short discussion of the results and plans for further research.
3 Altmeppen and Karmasin (2003b) write: "Progress in science should be more significant, the less scientific disciplines are involved with their claims and the more cooperative and problem-oriented work dominates. Not leading disciplines but leading terms, which center on issues of the research task, should dominate …" (translation by the authors).
The media sector – recent evolutions

Before going further into the details of recent evolutions, it seems appropriate to briefly describe the terms "media" and "media sector" in order to find a working definition for an interdisciplinary discourse. While business administration usually defines "media" as products and channels between senders and receivers that help to transmit different kinds of content (e.g. CDs, television sets, cable, personal computers), communication science has a slightly different view. Here, the term "media" is divided into so-called "first-order" and "second-order" media. First-order media are technical systems with different kinds of functions and potentials that serve the diffusion of content (Kubicek 1997). This part of the definition is quite similar to the business administration perspective. Second-order media are defined as socio-cultural institutions (such as television stations, public radio stations, newspapers and news agencies) that produce communication in the process of the dissemination of information with the help of first-order media (Kubicek 1997).
[Fig. 1. Same phenomena, different terminology – operational definitions of the terms "media" and "media sector". The diagram maps the media sector onto three paired terms: business administration "media" = communication science "first-order media"; business administration "media companies" = communication science "second-order media"; business administration "consumers" = communication science "recipients"]
Consequently, when using the term "media" we employ it in a broader sense. In this paper, media are both first-order and second-order media (communication science), i.e. both media and media companies (business administration). The term "media sector" covers this broad understanding of media and additionally includes media recipients (communication science) or media consumers (business administration).
Media transformation – the German case

The development of the German media sector during the second half of the 20th century is marked by the expansion of media companies and content supply as well as by changing journalistic practices and patterns of media use. Technological, political, legal and economic factors played a crucial role in this transformation. When the period of press control by the Allied Forces ended in 1949, the number of publishers issuing daily newspapers grew from 150 to about 600 in the mid-1950s (Wilke 2002). However, it soon became apparent that the newspaper market was overcrowded with new players without a sound economic basis, and a process of economic concentration set in that favoured publishers who could rely on technology and know-how established before World War II. Although the number of publishers decreased to about 350 in 2001, the number of local editions has remained almost stable (about 1500) since then. In the 1960s, public radio stations started to develop and to expand their program schedules. Moreover, as a result of political controversies and legal action taken by the German Supreme Court (Bundesverfassungsgericht) in 1961, first attempts to start a private television station (Deutschland Fernsehen GmbH) were aborted and a second public television station (ZDF, Zweites Deutsches Fernsehen) was established (Mathes and Donsbach 2002). Although private broadcasting was ipso jure allowed from 1961, the German Supreme Court de facto ruled out private television due to a shortage of technical frequencies. In 1981 the German Supreme Court finally decided that it was time to establish private broadcasting in Germany. The deregulation of broadcasting and new innovative technologies have promoted the development of broadcasting since the mid-1980s and led to stronger competition in the electronic media market (Schulz et al. 2004). Before the 1980s, German television viewers had only three to five public channels to choose from. Today, more than 90 per cent of the German population are able to receive up to 20 domestic and about 30 foreign channels. The number of radio channels grew from 32 in 1980 to more than 300 in 2003 (Schulz et al. 2004). Some sectors of the print media market seem to follow a similar trend. For example, within the last 20 years the number of popular magazines increased from 271 to 847 (Schulz et al. 2004). With the advent of the Internet at the beginning of the 1990s, the number of companies supplying the German public with information and entertainment reached a new dimension. Today, German Internet users are able to get their information as well as their entertainment from innumerable websites hosted almost all around the world. Moreover, new multimedia technologies offer the possibility to rearrange and combine media content in almost any conceivable way (for an overview see, e.g., Berghaus 1997; Brosius 1997; Vorderer 2000). The expansion of media products and the increasing economic pressure on traditional as well as online editorial staff (see, e.g., Mast 1999; Quandt 2004) led to changes in traditional journalism. First of all, the composition of media supply changed (Kepplinger 2002; Krüger 1998; Marcinkowski et al. 2001). Newspapers lost their specific party-political profile. Today, the most successful daily newspaper is the tabloid Bild-Zeitung (Schulz et al. 2004). Radio programs often present a heavy load of music and only small information modules, a lot of them filled
with small talk and human-interest stories. Private television channels also show a clear preference for entertainment and 'infotainment' over 'pure' information (Brants and Neijens 1998; Krüger 1998; Maier 2002). With the emergence of online journalism, journalistic practices like selecting from the supply of news agencies and rearranging already existing content became increasingly important (Quandt 2004). It is arguable how far the transformation of the media sector was driven by technological innovations and economic necessities. However, undoubtedly, the great majority of German media users welcomed many of the developments mentioned above. Within the last decade, there has been a strong tendency to spend more and more time with electronic media, and the percentage of the public having access to the Internet is growing from day to day (Eimeren et al. 2003; Gerhards and Klingler 2003; Ridder and Engel 2001). Nowadays entertaining content enjoys a good reputation (see, e.g., Früh 2003; Knobloch 2002; Vorderer and Weber 2003). Moreover, the theoretical possibilities for users to exert influence on media content and products have increased. Broadband cable with a feedback loop and interactive features allows media users an easy response and even offers them the opportunity to produce and distribute their own content (so-called user-generated content). Although there is an exciting range of new possibilities created by new and innovative technologies, not all of them are equally successful, as could be seen in the case of Leo Kirch, the German media mogul who failed to establish digitised pay TV. In the beginning (until the second third of the 1990s), new digital technologies were welcomed with great euphoria even by scholars (see, e.g., Berghaus 1994). However, very soon critical scholars questioned whether the new features would be appreciated under every circumstance and in any situation (Brosius 1997; Schönbach 1997; Vorderer 1995). In the end, it will be argued, the acceptance and the monetary potential of new technologies not only depend on market drivers like economisation and digitalisation.4 It is also necessary to carry out sound studies on user action, user motives and user needs as well as on the economic and social functions of mass media in order to establish new media trends. Before we go into further detail about the implications of this argument, we want to take a closer look at the two main market drivers of the so-called "digital revolution".

Technological drivers of change

Talking about recent technological drivers of the transformation of the media sector, digitalisation plays the predominant role. Two main and interrelated keywords are associated with digitalisation: first, advances in computer technology make computers faster and able to store ever bigger amounts of data; second, the interconnection of these computers via the Internet and its services becomes faster as well. The media sector is especially affected, as not only the production
As has been shown above, economisation and the advent of new technologies always played a crucial role in the process of media development.
248
Benedikt von Walter, Oliver Quiring
processes but also the products themselves can be digitised (Hess and Schumann 1999). A brief description of major technological developments along the way of media products from production via bundling and distribution to reception is given here as a rough overview (for more detailed analyses see Albarran 1996; Hess and Schumann 1999; Kiefer 2003; Wersig 2000).

Concerning media production, the Internet facilitates the digital provision of content by authors. The term "multimedia" describes the growing possibilities to integrate static formats (pictures, text) with dynamic formats (e.g. motion pictures, sound) into a richer media product. Due to these developments, the traditional classification of media into press, TV and film industries (Albarran 1996) becomes increasingly irrelevant (Siegert 2003). So-called multimedia databases can store huge amounts of different content types in one integrated system. In order for content to be found quickly, it is assigned semantic metadata according to standards like XML (eXtensible Markup Language) (Rawolle 2002). In contrast to classical monolithic media products, digitalisation additionally helps to split content into small modules that can be recombined into a greater variety of products. All these processes can be operated by Content Management Systems (CMS) (Schumann and Hess 1999), which provide standardised access to and organisation of content. Reproduction technologies, which were the major technology inside media companies in the past, lose importance because computer technology allows recipients to reproduce files cheaply by downloading them and writing them to CD.

In media bundling, Content Management Systems can be deployed to recombine content modules easily into individualised products according to consumers' preferences. In order to assess these preferences, so-called collaborative or content-based filtering allows media companies to monitor the searching, selecting and buying behaviour of customers when they are online (a small illustrative sketch of such metadata-driven selection follows at the end of this overview). Recipients are more and more willing and able to combine their products themselves, e.g. to select certain songs on the Internet instead of buying a whole album. Consequently, future technologies in this field might focus more on the classification and presentation of content.

With respect to content distribution via the Internet, TV companies are engaged in broadcasting movies and publishing houses offer on-demand services (Sjurts 2002). At the same time, so-called peer-to-peer file sharing systems (Fattah 2002; Oram 2001) revolutionise the distribution process, as content is provided and distributed in a decentralised way and generally without reimbursement for media companies or artists. Therefore, for content distribution one major technological objective will be to regain control over these systems.

Finally, content reception changes considerably due to digitalisation. The most recent developments point to increasingly ubiquitous mobile access to content (the "anytime, anyplace" paradigm) and an interrelated trend towards a stronger integration of media consumption into everyday life (e.g. the "networked home" or "wearables", i.e. digital devices integrated into clothes), which together contribute to a fundamentally different perception of content.
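The interplay of semantic metadata and preference-based selection just described can be made concrete with a small sketch. The following Python fragment is a minimal illustration under assumptions of our own: the module catalogue, the tag vocabulary and the additive scoring rule are invented for this example and do not represent any actual CMS or filtering product.

# Minimal sketch of metadata-tagged content modules and a naive
# content-based filter. Illustrative only: the data and the scoring
# rule are invented for this example, not a real CMS API.

from dataclasses import dataclass

@dataclass
class ContentModule:
    title: str
    media_type: str        # e.g. "text", "audio", "video"
    tags: frozenset        # semantic metadata, as XML attributes might carry it

CATALOGUE = [
    ContentModule("Election analysis", "text", frozenset({"politics", "news"})),
    ContentModule("Chart hits show", "audio", frozenset({"music", "entertainment"})),
    ContentModule("Football highlights", "video", frozenset({"sports", "entertainment"})),
]

def score(module: ContentModule, preferences: dict) -> float:
    """Sum the user's interest weights over the module's tags."""
    return sum(preferences.get(tag, 0.0) for tag in module.tags)

def recommend(catalogue, preferences, top_n=2):
    """Rank modules by preference score: selection, not access, is the bottleneck."""
    return sorted(catalogue, key=lambda m: score(m, preferences), reverse=True)[:top_n]

# Preference weights as they might be inferred from monitored searching,
# selecting and buying behaviour (the values are made up).
prefs = {"entertainment": 0.8, "sports": 0.5, "politics": 0.1}

for module in recommend(CATALOGUE, prefs):
    print(module.title, round(score(module, prefs), 2))

A real filter would of course learn the preference weights from observed behaviour rather than take them as given; the point of the sketch is only that, once modules carry machine-readable metadata, individualised bundling reduces to a ranking problem.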
Especially the Internet allows recipients to choose from more alternative offers, to exchange media
products among each other via peer-to-peer file sharing systems and to produce their own content at low cost ("user-generated content"), developments which significantly enhance their bargaining power. Due to all these developments, which facilitate the production, bundling and distribution of huge amounts of content, content reception is no longer a problem of access to content but of selecting relevant content. Although technologies like XML can help to find content according to its context, there is still a lot to be done, and future technologies will have to focus even more on the selection of relevant content according to consumers' preferences.

Economic drivers of change

Just like any other commercial company, media companies intend to cut costs or to increase revenues. In the following, we discuss major developments and challenges in the media sector concerning both costs and revenues.

Concerning costs, digitalisation has had a cost-cutting effect in nearly all fields of content production, bundling and distribution. The major technologies leading to this effect have been mentioned above. Although at first glance these developments might seem to promise certain new opportunities, this does not generally hold true, for one simple reason: transaction costs (Williamson 1999) are now significantly lower for both producers and recipients. While first-copy costs might be similar in a digitised media economy (first-copy costs encompass the costs incurred for the first unit of a media product and are assumed to be very high for media products compared to nearly negligible reproduction costs), transaction costs decrease significantly in the reproduction, bundling and distribution of content. These cost reductions allow new competitors, including recipients (!), to enter media markets much more easily owing to reduced entry barriers. In a process of convergence, players from different branches like information technology and telecommunications enter the media markets (Latzer 1997). Furthermore, the reception of content gets cheaper as consumers can access information products more easily and in many cases at virtually no cost. In sum, radical cost savings are being realised, but they cannot easily be converted into a source of income for media companies.
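The cost structure just described can be stated as a simple worked expression (the notation is ours, chosen for illustration). If F denotes the first-copy cost and c the marginal cost of reproduction, the average cost of supplying n copies is

\[ AC(n) = \frac{F + c\,n}{n} = \frac{F}{n} + c , \]

which falls towards c as circulation grows. With digital reproduction c is close to zero, so almost the entire cost burden lies in the first copy, and any competitor or file sharing network that free-rides on an existing first copy can undercut the original supplier almost arbitrarily.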
In order to generate revenues, media companies typically act on different markets: they sell content (information and entertainment products) to recipients, and they sell recipients' attention and data to other companies which place advertising in the media or employ user data for other purposes (Albarran 1996; Hass 2002a). Accordingly, revenues can be classified as direct (paid by the user) and indirect revenues (Zerdick et al. 2001). Direct revenues can be dependent on usage (transaction-based fees) or independent of usage (subscription or broadcast fees). Indirect revenues are contributed either by other companies (in return for advertising or data mining) or by the state (subsidies); a toy sketch of this classification is given at the end of this section.

Large parts of the media sector face significant slumps in direct revenues due to the emergence of new competitors offering content at lower prices and, even more, due to the availability of free content on the Internet (IFPI 2003), for example in file sharing systems. Especially the evolution of free content reduces the willingness to pay for media products in general and creates an even stronger incentive for media companies to concentrate on the economisation of the media production process. One way to achieve this objective is to lower the quality of media products. However, this might in turn lead to less acceptance by recipients and set in motion a vicious circle of lower quality, lower user acceptance of media products and subsequently lower revenues from advertising. This market phenomenon is also referred to as the "Akerlof process" (Akerlof 1970; Sjurts 2004).

More promising is the quest for new business models for media companies (Hass 2002a; Hess and Schumann 1999). Recent approaches to enhancing direct revenues cover the individualisation of media products and pricing (price discrimination) (Hess 2002), the proactive integration of users as content producers (user-generated content), concepts of content re-utilisation (Hess and Schulze 2004) and online content syndication (Anding and Hess 2002). Furthermore, technologies like video-on-demand, pay TV and for-pay Internet services reduce the non-excludability of media products and thereby allow companies to generate new sources of revenue. (Non-excludability describes the fact that once a good is provided, it is exceedingly costly to exclude non-paying customers from using it. Together with non-rivalry, the fact that consumption or use of a good by one consumer does not diminish its usefulness to another, it defines a public good.) In general, a trend towards usage-based pricing (instead of ownership-based pricing) can be observed, owing to the new possibilities to supervise the use of digital content. Despite this development it is still not certain whether transaction-based revenues or "flat-fee" models will be more successful on the Internet (Hass 2002b). Furthermore, future business models will have to address a phenomenon called "information overload" (Picot et al. 2003): the ever increasing amount of information offered on the Internet, made possible by cheap production and distribution technologies, is confronted with the limited perceptual capacities of recipients. As a consequence, media companies will generate future direct revenues not so much through the mere provision of content as through the selection and structuring of adequate content according to consumers' preferences.

Apart from offering adequate new business models to consumers, consumer behaviour has to be curbed when it comes to violations of intellectual property rights, as in the case of media exchanged in file sharing systems. Adaptations of legislation to these recent technology-induced developments can be observed. For example, the German analogue to Anglo-American copyright law (Urheberrechtsgesetz, UrhG) is being modified to restrict the right to private copying. Thus, legislative action helps to secure direct revenues for media products.

Indirect revenues generated by advertising have been a safe source of income for media companies in several branches of the media sector. Nevertheless, the future of advertising is unclear due to digital technologies. Firstly, although expenditures on advertising are still growing (Picard 2002), only the most successful companies generate substantial revenues from ads placed on their websites, while most other websites do not attract sufficient user traffic. Secondly,
it is precisely the rich information supply on the Internet that drives down the circulation of offline media, which in turn reduces advertising revenue in traditional media. Thirdly, TV advertising, traditionally a solid source of revenue for free TV companies, is threatened by the emergence of digital video recorders, which allow viewers to skip ads during TV films. However, individualisation allows media companies to reach their recipients much more directly (Hess 2002), and hence advertisers can target customers more efficiently (Sjurts 2002). Concerning indirect revenues from the state (subsidies), a trend towards less public support for media production and distribution can be observed. This inevitably leads to a stronger focus on sales and especially on advertising revenues (Sjurts 2002) and contributes to a more complex situation for media companies.

In conclusion, digitalisation and the diffusion of the Internet are currently contributing not (as might be hypothesised) to higher revenues through an evident reduction of transaction costs (Buhse 2004; Sjurts 2002) but to a highly competitive situation concerning both costs and revenues. The mere provision of content will not be enough to generate significant revenues, as the availability of free content degrades content into an inferior (Hass 2002a) or complementary service (Walter and Hess 2004). This development may point not to the abolition of media companies but to the need for their transformation, incorporating a change in the functions they fulfil in the market for information goods.
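To round off this section, the revenue taxonomy introduced above (direct versus indirect, usage-dependent versus usage-independent) can be summarised in a toy sketch. The following Python fragment is purely illustrative; the class, the example streams and their labels are our own and do not represent an actual accounting schema.

# Toy classification of media revenue streams along the two dimensions
# discussed above. Illustrative only; names and examples are invented.

from dataclasses import dataclass

@dataclass
class RevenueStream:
    name: str
    paid_by_user: bool       # direct (user pays) vs. indirect (third party pays)
    usage_dependent: bool    # transaction-based vs. subscription/broadcast fee

def classify(stream: RevenueStream) -> str:
    if stream.paid_by_user:
        return "direct, " + ("transaction-based" if stream.usage_dependent
                             else "subscription or broadcast fee")
    return "indirect (advertising, data mining or subsidies)"

streams = [
    RevenueStream("video-on-demand rental", paid_by_user=True, usage_dependent=True),
    RevenueStream("pay-TV subscription", paid_by_user=True, usage_dependent=False),
    RevenueStream("banner advertising", paid_by_user=False, usage_dependent=False),
    RevenueStream("state subsidy", paid_by_user=False, usage_dependent=False),
]

for s in streams:
    print(f"{s.name}: {classify(s)}")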
Interdisciplinary discourse

So far, we have introduced digitalisation and economisation as the two main drivers of transformation, and have illustrated their strong interrelations and the unsolved problems in the context of economic and journalistic objectives. Recently, many disciplines have increasingly theorised the transformation of the media sector. A predominant contribution comes from the scientific community of media economics (Heinrich 1999a; Picard 1989; Schumann and Hess 2002; Siegert 2003; Sjurts 2001). Although media economics mainly focuses on the intersection of its "leading disciplines", business administration and communication science, the content of this intersection is still not clearly defined (Altmeppen and Karmasin 2003b). Our discourse begins with a deliberately selective presentation of very basic approaches from both disciplines. The discussion is explicitly reduced to spotlights on one object of investigation within each discipline in order to identify complementarities rather than differences. For each discipline, the two drivers are mapped onto a simple construct in order to derive a set of functions that may be affected by these drivers. We then try to integrate the functions derived from one discipline into the other discipline.
The view of business administration

The focus of business administration lies on the process of production, bundling and distribution of media products, which is analysed with the aim of designing it more efficiently. Media markets are observed in order to identify new sources of revenue or new competition inside as well as outside the media sector. These markets face the special case of double competition: while economic competition is addressed by business administration (discussed in the following), the (decreasing) journalistic competition, or pluralism, is a favourite issue of communication science (Altmeppen and Karmasin 2003b).

A typical term in business administration is "added value", together with the corresponding descriptive concept of the "value chain" (Porter 1985). Value added is calculated as market performance as a whole (turnover, changes in inventory) minus intermediate inputs (purchased material, external services, interest) (Picot 1991); a worked expression is given below Fig. 2. The value chain structures companies according to different activities. These can be classified as primary activities (such as logistics, marketing and distribution) and secondary activities (for example, human resource management); in the following discussion of the media value chain, the emphasis lies on primary activities. Each activity can be subdivided into more detailed value chains. The underlying economic principle is the value / cost ratio: costs occur in each part of the value chain, but the economic calculus is based on the expectation that the accumulated economic value added to the product not only outweighs these costs but also generates a surplus, referred to as the margin. The concept of value chains can generally be applied to all institutions engaged in the creation of value, and thus also to the media value chain. The generic media value chain (see Fig. 2) describes the path media products take from creation via bundling to distribution and reception, while purposely abstracting from the special cases of different media channels (e.g. broadcasting, film, music, print). Additionally, in accordance with Porter's concept (Porter 1985), the margin is explicitly included although it is not really a step of the value chain.
Fig. 2. Model of a media value chain: production, bundling, distribution and reception, plus the margin
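The two monetary quantities used in this section can be written out compactly (the symbols are chosen here for illustration and are not the authors'). With turnover T, changes in inventory \Delta I and intermediate inputs M (purchased material, external services, interest), value added in the sense of Picot (1991) is

\[ \text{value added} = (T + \Delta I) - M . \]

Analogously, the margin that drives the chain is the price p paid by the consumer minus the costs c_i accumulated at the individual steps:

\[ \text{margin} = p - \sum_{i} c_i , \qquad i \in \{\text{production, bundling, distribution}\} . \]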
In a first step, the production of media products requires creative work by an author, singer or journalist, supplemented by technical tools like a computer and its applications. In a second step, the work generated has to be bundled, which encompasses the directed search, selection and aggregation of single media products, often with the aim of creating a combined end product (e.g. a newspaper) or of aggregating the content according to certain guidelines (e.g. a novel). Contrary to the first step,
mostly companies (e.g. publishers, labels, TV stations) and not individuals are the main actors in this part of the media value chain. This is attributed to the high investments in the technologies necessary for these processes, which only pay off if used several times. Concerning the production and bundling of content, different media formats have taken advantage of digital technologies to very different degrees. While the film, broadcasting and music industries have been using digital technologies for quite a long time, more recently the printing industry, too, has been taking advantage of digitally enhanced production technologies (Albarran 1997). With respect to production, the Internet can help to accelerate the production process both in a narrow sense (e.g. journalists gathering information for an article) and in a broad sense (e.g. journalistic articles collected for a newspaper edition from distributed journalists all over the world). Once inside the company, the technologies mentioned above facilitate the management of media products.

Distribution is the third step of the value chain and comprises all processes necessary to bridge time and space on the way of the product from its production and bundling to its reception. In the physical world this requires logistic coordination and contracts on different trade levels, until the end-customer completes the process of distribution by buying the media product in a shop. Digitalisation changes this process. Predominantly, the Internet can reduce transaction costs to a degree that varies with the media format. Broadcasting products have always been transmitted and received via electronic technologies, so the generic process of distribution does not change; nevertheless, digital technologies contribute significantly to the technological means of transport, and broadcast companies actually face competition from Internet technologies, as increasing bandwidths allow the digitised transmission not only of radio but also of television signals. The Internet has substantial implications for the music and printing industries, because the physical / analogue distribution of content, traditionally a major source of direct revenue for companies in these sectors, is no longer necessary. Provided Internet access exists, the transport of content from the publisher to retailers via physical means becomes as obsolete for the supplier as walking to a shop does for recipients. Due to developments like ever increasing bandwidths and improved compression standards, distribution is steadily becoming more efficient. One result of the possibilities for cheap distribution of media products is a huge supply of free as well as for-pay content on the Internet.

Consuming the media product is the last step of the chain, a process called reception in the context of media products. Digitalisation has several impacts on this stage. As mentioned above, recipients are not only offered new products but are also able to consume media products in very different ways (e.g. via mobile end devices). All in all, their bargaining power increases significantly and puts media companies under pressure. This pressure shows up in the value added, i.e. the margin that drives the whole value chain process: the difference between the price paid for the media product by the consumer and all costs ("intermediate inputs") incurred at the other steps of the value chain.
While it is of central concern for each market player to optimise the value / cost ratio, the situation is particularly difficult if the traded goods' value is hard to assess before
purchase. This is clearly the case in media markets due to the phenomenon referred to as the information paradox (Picot et al. 2003), which says that the quality of information can only be assessed after its consumption, while the product loses its value once consumed. This makes the production and distribution of media products particularly risky. If, additionally, the same or a similar product is available for free on the Internet, users easily switch to the free offer because, as described above, there is no risk if the quality is bad, since no price has to be paid. Business administration research is primarily concerned with increasing the effectiveness of those steps of the media value chain where media companies are involved. More recently, customers have become an explicit part of media business models (Wirtz 2001) and the importance of the audience for the management of media companies (Albarran 1997) has been stressed. Nevertheless, the integration of the two neighbouring institutions can still be described as marginal compared with the in-depth analysis of the processes inside media companies. In the following, it will be shown why a more detailed analysis of recipients could be important, by introducing an additional business administration concept, the theory of intermediation. According to intermediation theory, an intermediary is employed if the additional transaction costs incurred are lower than those of direct contact between supply and demand (Bailey 1998; Malone et al. 1987; Sarkar et al. 1998).
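This intermediation criterion can be condensed into a single inequality (notation ours): writing T_D for the transaction costs of direct exchange between supplier and recipient, and T_I for the total transaction costs of trading via the intermediary, the intermediary is employed if and only if

\[ T_I < T_D . \]

Disintermediation, in these terms, is the Internet pushing T_D below T_I for more and more media transactions; re-intermediation means using unique assets such as brands or market expertise to restore T_I < T_D.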
Applying this concept to the business administration view presented here, the companies engaged in the reproduction, bundling and distribution of media products can be interpreted as agents intermediating between suppliers and clients (e.g. journalists and readers, singers and music fans, actors and spectators). Recent developments point to a technology-induced disintermediation (Chircu and Kauffman 1999), which means that certain intermediating institutions are excluded from the market because of more efficient alternatives. The Internet is a driver of exactly this kind of development. Due to the reduction in transaction costs for providing and receiving media products on the Internet, media companies find themselves increasingly disintermediated. Artists can not only produce their works more efficiently by themselves thanks to digital production technologies but can also offer their work to recipients without intermediating publishers. Additionally, file sharing systems allow the direct exchange of media files among recipients and can therefore be interpreted as representatives of a new class of (electronic) intermediaries (Hess and Walter 2006). It can be argued that traditional intermediaries have to participate in electronic commerce and have to use unique assets, such as market expertise or well-known brands, to re-introduce themselves into the process, thus effecting 're-intermediation' (Chircu and Kauffman 1999). We propose that, in general, a stronger focus on the relevant functions of media companies in markets could help traditional market players to re-intermediate. In order to assess these "relevant functions", an analysis of both traditional intermediary functions and the functions that will be demanded in the near future seems helpful. According to Picot et al. (2003), intermediaries provide four general functions: they provide market participants (and not only customers) with information, they have to gain trust, they organise the combination and distribution of goods according to customers' wishes and, finally, they render additional services in special cases. For the case of
media companies, these functions cover the selection of media products according to customers' needs, a guarantee of quality, the pre-financing of artists and the handling of the surrounding commercial processes (Schumann and Hess 2002). More in-depth knowledge of suppliers (artists, authors, etc.) and recipients (music fans, readers, etc.) seems necessary to define future intermediary functions for media companies. Although market research addresses the issue of consumer wishes, the approach of communication science to the media sector can deliver fruitful additional insights on these topics.

The view of communication science

In comparison to business administration, communication science does not primarily take a monetary perspective when investigating the media sector. It also analyses the social and organisational antecedents, factors and implications of media transformation. This approach offers the opportunity to see media transformation from a complementary perspective and may help to explain why certain technological and economic innovations lead to increasing revenues for media companies while others fail or turn out to be of minor importance. To give our argumentation a clear line of reasoning, we choose a formula that tries to identify the main elements of the communication process: the Lasswell Formula (Lasswell 1948) (see Fig. 3; nota bene, the Lasswell Formula is not a model of communication, as indicated by the missing arrows between its boxes). Although its depiction of the communication process is kept quite simple, the formula offers the advantage of showing some striking similarities to the media value chain and therefore allows a systematic comparison.
Fig. 3. Lasswell Formula and elements of a communication process
Lasswell Formula: Who / Says what / In which channel / To whom / With what effect
Elements of the communication process: Communicator / Message / Media / Recipient / Effect
The simple formula "who says what in which channel to whom with what effect" indicates that there has to be a communicator, a message, a medium and a recipient to establish effective communication, and that communication causes effects. Each of these elements represents a unique field of research in communication science. Although the formula appears to show a strictly individualistic
perspective, communication scientists have carried out studies on the micro-, meso- and macro-levels of communication. We do not intend to go into detail about the whole research tradition in each field. Instead, we want to show the main effects of the two market drivers (economisation and digitalisation) on each element of the communication process.

Economisation causes changes in the working routines and goals of communicators (who). Editorial staffs increasingly adjust their actions to the goals of profit maximisation and cost reduction, possibly at the cost of journalistic objectives such as informing the public or criticism and control. In this process, editorial departments tend to lose autonomy; journalists shift their main tasks from selection and investigation to entertainment and the mere sale of content, and orientate themselves towards the preferences of their consumers (see Meier and Jarren 2002). In short, some authors fear that economisation may lead to a decrease in content quality (says what) and an increase in the quantity of channels as media companies concentrate their forces and operate on a global basis (in which channel; for an overview see Blumler 2002; Meier and Jarren 2002). But it is still an open question how recipients (to whom) handle the massive expansion of media supply and the change in its composition. On the one hand, they seem to welcome the increase in entertainment supply and make regular use of it. On the other hand, they are forced to select the information that suits them best from a still growing body of supply, a fact that increases their transaction costs. The effects of economisation are still discussed controversially. While some authors fear that mass communication will increasingly lose its ability to fulfil social and political functions (for a pessimistic view see e.g. Blumler 2002; Schulz et al. 2004), it is also possible that the economisation of the media sector leads to a better understanding of the needs of the audience (see, e.g. Huber 1986; Mast 1999).

Digitalisation and the emergence of the Internet and its services have also had a massive impact on the elements of the communication process. On the one hand, they enabled media companies to extend economisation because the production, storage and distribution of content became cheaper. On the other hand, Internet users are used to free content, a fact that still leads to an extensive search for opportunities to profit from web presences. With the advent of digitalisation, journalistic practices changed again (who). Today it is easier for journalists in all kinds of media to retrieve content from all over the world. News agencies deliver digitalised information that is easier for newspaper journalists to reproduce. Single radio and television reporters are able to produce and transmit whole reports alone, a task that formerly required a group of journalists. Online editors spend much of their time on copy-and-paste processes and selection practices instead of investigation (Quandt 2004). Moreover, the organisational as well as editorial structures of online editorial offices differ from traditional ones in some respects: online editorial staffs sometimes have no specialised desks, they face low financial and personnel resources, and their topics are oriented towards other media publications (Donges and Jarren 2002). Moreover, the boundaries between communicators and recipients (to whom) become increasingly blurred on the Internet (Blumler 2002).
This development directly raises the question of the influence of digitalisation on the content offered (says what). On the one hand, some
authors see a convergence of media content due to self-referentiality (i.e. different media increasingly tend to refer to each other) in the media system (Blöbaum 1999; Heinrich 1999b; Kohring 1999). This would suggest that recipients do not really get more variety, but more of the same. On the other hand, enhanced feedback loops on the Internet allow users to create their own content. This leads to a massive dissemination and decreased transparency of all kinds of content without simultaneous quality controls (for an extensive discussion of the quality of online communication see Beck et al. 2003). Digitalisation has also multiplied the channels that are available for the distribution of media products. Additionally, it offers the possibility of a technical convergence of formerly separate devices (e.g. telephone, PC, television set and radio). Although technological convergence is theoretically possible, it is still an open question which forms of convergence users would accept (for a critical discussion see Höflich 2002). Little is known yet about how exactly the new technologies are used by recipients in different situations. While some behave truly interactively, intentionally look for information and make use of the new opportunities to participate in the creation of content, others simply use the Internet for entertainment and pastime (Ferguson and Perse 2000). Finally, the social effects of digitalisation are still unclear. On the one hand, the Internet offers better chances to participate in the creation and exchange of mass information. On the other hand, developments like the individualisation of content could lead to a fragmentation of the public into tiny segments that no longer share a common information basis, a thesis developed during the emergence of private broadcasting (see Blumler 2002; McQuail 2002).

To get a clearer impression of the implications of the economisation and digitalisation of the media sector, we take a closer look at the functions of the media system as well as at the individual needs of media users. Paragraph three of the German "Landespressegesetze" (the German states' press laws) mentions three such functions: the mass media should provide society with information, they should contribute to the formation of opinions, and they should criticise and control those in power. Moreover, Ronneberger (2002) mentions four additional social functions of the mass media. First of all, mass communication should contribute to the socialisation of media users: in a long-term perspective, it should communicate common norms and values as well as commonly accepted behaviour. From a short-term perspective, it also has to contribute to social orientation, which means that the media sector should provide its users with information that helps them to orientate themselves in an increasingly complex society. Furthermore, mass media should produce and distribute material that is appropriate for the recreation of the public. Finally, it is the duty of mass media to support political education. According to Pürer and Raabe (1994), the media system should also help to integrate diverse people into society, and it fulfils an economic function, i.e. it should accelerate the turnover of economic products through advertising. Taken together, these functions reveal more or less four dimensions: the first could be described as the social (social orientation through information,
socialisation, integration), the second as the political (formation of opinion, criticism and control, political education), the third as the economic (turnover of goods) and the fourth as the spare-time dimension (recreation). Although the effects of economisation and digitalisation on the economic and spare-time dimensions of media functions have been discussed from the viewpoint of communication science, the situation in the political and social dimensions is identified as much more critical. As already mentioned above, even the information sector tends towards 'tabloidisation' and the specialisation of content, a fact that might affect the political as well as the social dimension of media functions. Moreover, the digitalisation and possible individualisation of media products might lead to ever more fragmented and isolated parts of the public that do not share common values, beliefs and behaviour.

It is foremost the political dimension that is controversially debated in communication science (for an overview see e.g. Donsbach and Jandura 2004; Haas and Jarren 2002; Mathes and Donsbach 2002; Schulz et al. 2004). On the one hand there is a (dominant) pessimistic view. The following topics have recently been discussed: Is the media sector still able to provide a sound basis for the individual formation of opinion? Did the concentration on entertainment create an apolitical public? Concerning elections, do people increasingly base their vote on the personal features of top candidates instead of on political issues? On the other hand, some authors hope that digitalisation and the Internet platform will contribute to a more politically active public that has more space to express its own opinion in an unfiltered way and will therefore contribute to the democratic process (for an in-depth discussion see e.g. Hoecker 2002). We cannot yet give a final answer to these questions, although some aspects of the trends in the media business might prove to be dysfunctional. As far as the social dimension is concerned, there are also some indications of dysfunctionality. If media use dominates spare-time activities, this may lead to a decline in "social capital" (by reducing the integrative power of norms, social networks and trust; see Putnam 1995), although the consequences of media use very much depend upon the content consumed (Norris 1996). If the individualisation of media content proceeds as a result of digitalisation, media content will switch from "mass media" to "specialised communication" (Maisel 2002), a process that might in the long run weaken the integrative power of mass media and the mediation of common values and commonly accepted behaviour.

A more optimistic view can be derived from an individual perspective. The functions of mass communication find their expression in the needs of the individual: according to McQuail (1983), the individual has socially and psychologically originating needs that are partly satisfied by mass media. Although some patterns of media use might prove dysfunctional (e.g. media use for escapism), and not all users share the same needs at the same time, most of the individual needs lead to expectations that contribute to a certain quality of media content. McQuail (1983) mentions four basic needs: a need for information (e.g. to get daily orientation, to gain knowledge), for personal identity (e.g. the search for common values, identification with other people), for integration and social interaction (e.g.
talk about media content to find contact) and for entertainment (e.g. relaxation, pastime). We argue that although more and more time is spent on
entertainment programs (i.e. the need for entertainment is high), the individual user cannot totally abandon a core of commonly shared information. Highly individualised and specialised information might be useful as a complement to (and not as a substitute for) a core of commonly shared information in certain situations (job, hobby, or finding the way from a train station to a meeting). However, even in these situations, where interpersonal communication is essential, people have to find a way of mutual understanding. Thus, the individual media user would, in the long run, find himself in a situation where his demand for appropriate information to satisfy his daily needs is no longer met. In this context, it is an open question where he might find this content. Digitalisation offers more than one possibility in the long run: new media companies might be established, or the traditional media companies will have to anticipate the changing needs of their users. From a short-term perspective, however, it can be expected that media users stick to their habits, a fact that might decelerate disintermediation inside the media sector. A large part of the public still subscribes to a daily newspaper, a traditional habit that is declining but that will not be abandoned immediately by all members of society just because there is an alternative opportunity to get information. Nor is the Internet the only source of free content in the media sector: Germans can choose from a large number of free television channels and radio stations. Moreover, journalistic work cannot simply be substituted. Journalists not only separate important news from news of minor importance, they also structure the content and provide interpretation. At the moment, the top-scoring websites in the information business are those of traditional media companies.

Complementarities

In this section the interdisciplinary discourse is continued by trying to identify complementarities between the two approaches in order to derive conceptual potential as well as future research perspectives. The comparison of the two concepts shows some astonishing parallels as well as some complementary views on the same phenomena. The value chain and the Lasswell Formula both allow an iterative, process-related view of the media sector. In detail, media (media products or first-order media, respectively) are, so to speak, escorted on their way from production through different intermediary steps until they reach the recipient (see Fig. 4). While the value chain is a construct specific to economic issues, the Lasswell Formula also takes social, psychological, political and technological factors into consideration. With respect to the focus of research in the two concepts, we see that business administration centres very much on economic effectiveness and outcome across all steps of the value chain. The value chain concept, together with the analysis of intermediaries, suggests that business administration focuses on all those processes between the delivery of media products by the artist and their reception by consumers. Additionally, the efficient identification of both artists and recipients is covered by talent scouts and market researchers.
Fig. 4. Two process-related views on media
Media value chain: Production / Bundling / Distribution / Reception / Margin
Lasswell Formula: Who / Says what / In which channel / To whom / With what effect
In communication science, all elements of the communication process, including the "who" and the "to whom", are treated not as functions within companies but as elements of communication in society. From this view, the process of production and reception is influenced by factors inherent in the individual, but also by colleagues, friends, society or organisational restrictions in terms of time, space and money during the process of production. Concerning "says what", business administration primarily measures importance according to units sold, whereas communication science is primarily interested in content (and content quality). With respect to the "channel", both disciplines discuss the institutions by which media products reach the recipient. As mentioned above, business administration is adept in the economic part of this sub-process, optimising objective functions under certain restrictions (money, space, time, competitors) (Frey 1990). The view on recipients ("to whom") is, again, quite complementary: business administration is traditionally interested in recipients in their role as consumers, whereas communication science is interested in the circumstances under which media are received. Last but not least, the "margin" mainly represents a revenue / cost ratio and might therefore be interpreted as the market-oriented "economic effect" of the media value chain process; this is why a comparison with Lasswell's "with what effect" makes sense. Communication science, by contrast, mainly discusses the (political, social, psychological, etc.) effects of communication. Comparing the two views of the media value chain and the Lasswell Formula, we may conclude that, from each discipline's standpoint, the other discipline's major concern constitutes a complementary view. The media value chain and the Lasswell Formula both represent process-related views on the media sector.
With respect to research concerns, we identify an economic orientation on the one side, while a social orientation is the major issue on the other. Whereas these overall concerns might be discussed controversially, secondary objectives like quality, trust or the enforcement of (intellectual) property rights and the corresponding functions are common to both disciplines. The major research concerns are "economic efficiency" for business administration and "social fit" for communication science, while each discipline regards the other's objective as complementary. Communication science employs economic objectives as a means to achieve journalistic objectives, while business administration tends to employ journalistic objectives as a means to economic objectives. Consequently, the functional views on mass media presented above are complementary as well. On closer inspection, the business administration view is centred on economic market functions that have to be fulfilled in order to generate marketable value added. The communication science approach focuses on the public functions of mass media and the needs and preferences of recipients. Both disciplines cover descriptive perspectives ("functions are fulfilled") and normative aspects ("functions have to be fulfilled"). A comparison on the descriptive level shows that economic functions are prerequisites for social functions in a market-oriented media sector. The normative views on markets in one case ("how to optimise companies in markets") and on society as a whole in the other ("how to optimise media in societies") might be controversial in some cases. Taken together, they contribute to a more holistic view of the media sector.

A second clear parallel can be seen concerning generally expected effects. Differences lie in the interpretation of these effects according to each discipline's objectives. For example, business administration discusses the possibilities of individualisation in a quite optimistic way, in which challenges are often simply treated as potential for new concepts. Communication science, on the other hand, approaches the topic in a much more critical way, as scholars fear a future lack of interpersonal connection via public topics. In a similar way, the tendency towards free content on the Internet is identified as a major consequence of digitalisation in both disciplines. In contrast to the issue of individualisation, communication science interprets this phenomenon, from a political and social perspective, in a differentiated and sometimes slightly positive way, while business administration treats these developments primarily as a threat, since traditional business models might be endangered.

A third topic of importance in both disciplines is advertising. Business administration positively defines advertising as another source of revenue and mainly discusses it critically when direct revenues and indirect advertising revenues are competing
objectives (e.g. users are only willing to pay for a newspaper if it primarily delivers content, not ads). Again, some critical communication scholars take a more differentiated view and also identify risks for journalistic quality and objectivity when ads are a major source of revenue. Both disciplines agree on the importance of the major issues for current and future media but sometimes take a different view of the opportunities and threats resulting from these drivers. This is mainly due to the different research objectives of each view. Comparing the treatment of issues like individualisation, free content or advertising, we see a further difference between the two disciplines. Business administration is centred very much on the near future: the earlier individualisation is exploited in the most efficient way, the better; the earlier the opportunities of free content for business models outweigh its threats, the better. Communication science, on the other hand, deploys a long-term perspective: individualisation is predicted to have negative effects on society in the future, and accordingly the long-term effect of free content on society is seen as crucial. With respect to the time horizon, communication science has a comparatively long-term view of the past and the present, while business administration takes a more short-term perspective on the present and the future in order to deliver pragmatic solutions efficiently. In conclusion, the similarities between the two disciplines are important for identifying links for an interdisciplinary treatment of the topic, while some of the complementarities might contribute to an enhanced view of the media sector within each discipline.
Consequences

The following section concentrates on two objectives. In a first step, we want to derive some conceptual potential from our analysis of complementarities; in a second step, some concrete research perspectives will be discussed briefly.

Conceptual potential

As we have seen, the two disciplines sometimes seem, at first sight, to have different views on the media sector. On closer inspection, however, the differences turn out to provide useful complementary views on the same phenomena.
For business administration, this is especially interesting in view of the fact that media companies face an increasing threat of disintermediation, partly induced by the enhanced bargaining power of both artists and recipients on the Internet. Additionally, a more profound analysis of the critical success factors of media products, their context of usage and their effects could deserve more attention in a situation in which media companies face increasing competition from free content on the Internet. For communication science, a complementary perspective is valuable because the discipline does not possess a specialised tool-kit for the efficient production, selection, bundling and distribution of content. This topic was not a focal point of concern for communication science in times when public institutions provided sufficient rules and economic backing for the fulfilment of important public functions of media. However, economisation, independently of its causes (deregulation or digital technologies), has recently been urging media institutions to pay more attention to economic principles in order to be able to fulfil these functions in the future. In these times, the know-how of business administration can help.

Comparing the two disciplines' views on functions, we saw that economic functions serve as prerequisites for social functions. This means that only as long as economic functions (the efficient processing of the media value chain) are fulfilled can a media product serve its social, political and recreational functions. Nevertheless, the functional interdependence is bi-directional: if media products do not serve the needs of their recipients, whether these want a common basis for communication or a profound discussion of political topics, recipients may not be willing to pay for content, a threat that media companies have to take more seriously in times of free content "at your fingertips". Again, a closer look at recipients and the social implications of media use might lead to a better understanding of the functions demanded by recipients and in turn allow media companies to make their products fit these demands better. In addition, with better knowledge about users, together with the enhanced possibilities for user-generated content due to Internet technologies, users may contribute to a more efficient production of content. We argue accordingly for the case of (primary) producers (e.g. artists, journalists): knowing what they want and under which circumstances they contribute high-quality input is increasingly important in a new situation in which they alternatively have the opportunity to offer their content at very low cost via the Internet. Nevertheless, in contrast to customers, these primary producers are normally much more dependent on the company, as they earn money from these institutions, whereas personal Internet websites mostly do not generate significant revenue streams.

In conclusion, it is especially important for media institutions to pay attention to the functions demanded by their customers, which in the long term creates trust and results in customer loyalty. This is especially important in the case of media products, as they are experience goods whose quality cannot be inspected before consumption. The long-term orientation of empirical research in communication science additionally allows the generation of trust in recipients to be grasped, a phenomenon that rather short-term market research, for example, is not able to measure but which can help media companies to improve their product planning.
The same is true for other phenomena that can only be monitored in the long run, like the diffusion of media
products not only in society as a whole or in certain groups of lead users but in an individual's daily life. While business administration is already quick to examine the possibilities of new technological innovations and to discuss their potential for future business models, it could additionally undertake more in-depth analyses of users' demands for a certain technology.

Research perspectives

We saw that communication science takes quite a broad view of media, whereas business administration is quite specialised in economic issues. Surprisingly, despite the complementarities identified in conceptual issues, no such major complementarities can be identified with respect to methodology. Both disciplines employ deductive models and theory building, empirical tests of these models and theories, and a continuous refinement of theories based on empirical results. Even several specific empirical methods (observation methods, interviews, socio-scientific experiments, etc.) are used in both disciplines. Consequently, it is not new research methods but the inclusion of both specific views in joint research designs that seems desirable for future investigations. Topics that might be addressed by joint research are, for example:

A) How are the processes of production, bundling and distribution influenced by the individual characteristics of professionals and their social interaction within these processes?
B) What effects does the marketability of media products have on their journalistic quality, and is there an effective way of achieving both objectives?
C) What are the social and personal characteristics and needs of the paying customer?

Ad A) Business administration has a long tradition in theories of the production, bundling and distribution of goods and services. It should therefore contribute this expertise, which includes management and organisation theories, concepts for cost calculation and marketing strategies, as well as knowledge of fiscal or other legislative issues. Communication science knows much more about the social nature of the people who produce media products and about their working routines, facts that may help with the organisation of effective production. For both disciplines, empirical investigations are of major importance. Therefore, we explicitly want to mention some possibilities. Large-scale surveys are a powerful instrument for showing the distribution of well-known features. But the situation in online business is a new one, and relatively little is known so far about the changes implied by economisation and digitalisation. We suggest that it might be an appropriate strategy to explore the new situation and find out about the changes in the features of content production before testing for the distribution of these features using large-scale survey techniques. Empirical instruments that can be used for this task are, for example, in-depth interviews with
producers of content, group discussions or observation studies (for an introduction see, e.g. Flick 2002). Only a few studies have made use of these techniques so far (for an observation study of online journalists see, e.g. Quandt 2004). In-depth and long-term empirical analysis of journalists, authors and other artists can make the creative process of content creation more understandable and thereby more efficient.

Ad B) Business administration is well aware that a high technical quality standard of products has to be guaranteed in order to achieve customer satisfaction. Nevertheless, for media products the strong economic position of media companies in past decades has led to a situation where content quality has been optimised for production efficiency. Maybe this is one reason why customer loyalty is diminishing in times of alternative offers on the Internet. While a focus on more efficient production technologies is even more important in times of increasing competition, in the course of this development the quality of the content gains new importance. Concerning the monitoring of content quality, two alternative (and complementary rather than mutually exclusive) strategies are imaginable. On the one hand, content analysis (see, e.g. Früh 2004) helps to monitor the quality of already existing media content. Communication science has developed a range of different indicators of journalistic quality that can be applied in such studies (see, e.g. Beck et al. 2003; Hagen 1985; Schatz and Schulz 1992). Already existing quality standards can also be helpful in the design of new content. On the other hand, the consumer's impression of product quality has to be examined in more detail in order to be able to offer products he will pay for even if free content is available on the Internet. In-depth interviews, focus group discussions as well as standardised surveys might help to find out about the standards of quality that consumers expect.

Ad C) The adoption and acceptance of new products or services can be positively influenced by technological opportunities, the resulting business models and adequate marketing strategies. Nevertheless, the consumer could be seen, in a more general view, as a person who has a variety of needs to be satisfied by media products and who shows typical and recurring media usage patterns which strongly depend on the social environment and its dynamics (a striking example supporting this argument is the great success of SMS, a rather simple technology). As new technologies are usually developed on the basis of already existing answers to the needs of their users in the past (e.g. e-mail is a faster and more comfortable version of the traditional letter, online newspapers are a more up-to-date version of traditional print newspapers, etc.), it should be possible to find out about the features that made these established products successful and to try to optimise these features with the help of new technologies. Multi-method designs combining the advantages of interviews (large-scale and in-depth), electronic tracking of media use patterns, media diaries and usability testing provide powerful means of accompanying the technical development process.

We are well aware of the fact that the ideas described above are neither entirely new nor exhaustive. Commercial market research, for example, employs all of these methods and carries out studies on all of the topics mentioned above. However,
there are two major problems that make it difficult to draw coherent conclusions from market research studies about the general implications of media transformation. Firstly, market studies are usually carried out to accompany one specific innovation and the process of its implementation. This is understandable, as commercial institutes are paid to serve one specific customer. But these studies are usually not able to give an insight into the constant and complicated interactions of several innovations hitting the market at the same time. The second problem is much more trivial: most of the data gathered by commercial market research are not available for academic investigation, for several reasons: sometimes institutes do not want to publish their data because they have spent a lot of money on the development of new research methods; sometimes their customers want the results to be kept secret. In conclusion, a further task for academic research might be to find a form of cooperation with commercial market research that allows a large number of studies to be collected in order to compare their results and draw conclusions on a more general basis.
Discussion

One could wonder at the fact that the views of two disciplines on media need to be brought together before an interdisciplinary discussion can even begin. Nevertheless, taking a closer look, we identify significant differences in terms of definitions as well as complementarities in approaches and focuses. Our introductory discussion of terms like "media" or "media sector" should be seen as only one spotlight on those differences, which seem to make a discussion difficult. But this example also serves to show that, although both approaches to media have had and will continue to have their independent right to exist, there are interesting links. As we could show, first-order media in a communication science view have strong relations to what business administration calls media, while second-order media correspond to what business administration calls media companies. Corresponding parallels could be identified with respect to the drivers of transformation, because both disciplines identify them as equally important. The main potential for complementary concepts and research designs lies in the focus on resulting effects and their interpretation, a fact that can be derived from a main concern for "economic efficiency" on markets in one case and the "social fit" of communication in the other. We do believe that in times of an ever increasing economisation of the media sector, in combination with a stronger bargaining power of both producers and recipients due to new technologies, interdisciplinary research designs seem inevitable: normative objectives mentioned in communication science cannot be fulfilled if the associated media system does not work economically. On the other hand, business models will be much more successful if in-depth data on both producers and recipients are better understood. Together, the disciplines can contribute to a well-functioning media sector, an interest both disciplines definitely pursue.
References

Akerlof GA (1970) The Market for "Lemons": Quality Uncertainty and the Market Mechanism. Quarterly Journal of Economics 84:488–500
Albarran AB (1996) Media economics: understanding markets, industries and concepts. Iowa State University Press, Ames, Iowa
Albarran AB (1997) Management of electronic media. Wadsworth Publishing, Belmont, CA
Altmeppen KD, Karmasin M (eds) (2003a) Medien und Ökonomie. Westdeutscher Verlag, Wiesbaden
Altmeppen KD, Karmasin M (2003b) Medienökonomie als transdisziplinäres Lehr- und Forschungsprogramm. In: Altmeppen KD and Karmasin M (eds) Medien und Ökonomie Bd. 1/1. Westdeutscher Verlag, Wiesbaden, pp 19–51
Anding M, Hess T (2002) Online Content Syndication – A critical Analysis from the Perspective of Transaction Cost Theory. In: Proceedings of the Xth European Conference on Information Systems, Danzig, pp 551–563
Bailey JP (1998) Intermediation and Electronic Markets: Aggregation and Pricing in Internet Commerce. Ph.D. thesis, MIT
Beck K, Schweiger W, Wirth W (2003) Gute Seiten – schlechte Seiten. Qualität der Onlinekommunikation. Verlag Reinhard Fischer, München
Berghaus M (1994) Multimedia-Zukunft. Herausforderung für die Medien- und Kommunikationswissenschaft. Rundfunk und Fernsehen 42:404–412
Berghaus M (1997) Was macht Multimedia mit Menschen, machen Menschen mit Multimedia? Sieben Thesen und ein Fazit. In: Ludes P and Werner A (eds) Multimedia-Kommunikation. Theorien, Trends und Praxis. Westdeutscher Verlag, Opladen, pp 73–85
Blöbaum B (1999) Selbstreferentialität und Journalismus. Eine Skizze. Anmerkungen und Ergänzungen zum Panel Selbstreferentialität. In: Latzer M, Maier-Rabler U and Siegert G (eds) Die Zukunft der Kommunikation. Phänomene und Trends in der Informationsgesellschaft. Studien Verlag, Innsbruck, Wien, pp 181–188
Blumler JG (2002) Wandel des Mediensystems und sozialer Wandel. In: Haas H and Jarren O (eds) Mediensysteme im Wandel. Struktur, Organisation und Funktion der Massenmedien. Braumüller, Wien, pp 170–188
Brants K, Neijens P (1998) The Infotainment of Politics. Political Communication 15:149–164
Brosius H-B (1997) Multimedia und digitales Fernsehen: Ist eine Neuausrichtung kommunikationswissenschaftlicher Forschung notwendig? Publizistik 42:37–45
Buhse W (2004) Wettbewerbsstrategien im Umfeld von Darknet und Digital Rights Management – Szenarien und Erlösmodelle für Onlinemusik. Deutscher Universitäts-Verlag, Wiesbaden
Chircu AM, Kauffman RJ (1999) Analyzing Firm-Level Strategy for Internet-Focused Reintermediation. In: Proceedings of the 32nd Hawaii International Conference on System Sciences, Maui, Hawaii
Donges P, Jarren O (2002) Redaktionelle Strukturen und publizistische Qualität. Ergebnisse einer Fallstudie zum Entstehungsprozeß landespolitischer Berichterstattung im Rundfunk. In: Haas H and Jarren O (eds) Mediensysteme im Wandel. Struktur, Organisation und Funktion der Massenmedien. Braumüller, Wien, pp 77–88
Donsbach W, Jandura O (2004) Chancen und Gefahren der Mediendemokratie. UVK, Konstanz
Eimeren B van, Gerhard H, Frees B (2003) Internetverbreitung in Deutschland: Unerwartet hoher Zuwachs. Media Perspektiven 8:338–358
Fattah HM (2002) P2P: How Peer-to-Peer Technology Is Revolutionizing the Way We Do Business. Dearborn Trade Publishing, Chicago
Ferguson DA, Perse EM (2000) The World Wide Web as a Functional Alternative to Television. Journal of Broadcasting & Electronic Media 44:155–174
Flick U (2002) Qualitative Sozialforschung. rororo, Reinbek bei Hamburg
Frey BS (1990) Ökonomie ist Sozialwissenschaft – Die Anwendung der Ökonomie auf neue Gebiete. Vahlen, München
Früh W (2003) Theorien, theoretische Modelle und Rahmentheorien. Eine Einleitung. In: Früh W and Stiehler HJ (eds) Theorie der Unterhaltung. Ein interdisziplinärer Diskurs. Herbert von Halem Verlag, Köln, pp 9–26
Früh W (2004) Inhaltsanalyse: Theorie und Praxis. UVK, Stuttgart
Gerhards M, Klingler W (2003) Mediennutzung in der Zukunft. Media Perspektiven 3:115–421
Haas H, Jarren O (eds) (2002) Mediensysteme im Wandel. Struktur, Organisation und Funktion der Massenmedien. Braumüller, Wien
Hagen LM (1985) Die Informationsqualität von Nachrichten. Westdeutscher Verlag, Wiesbaden
Hass B (2002a) Geschäftsmodelle von Medienunternehmen: Ökonomische Grundlagen und Veränderungen durch neue Informations- und Kommunikationssysteme. Gabler, Wiesbaden
Hass B (2002b) Desintegration und Reintegration im Mediensektor: Wie sich Geschäftsmodelle durch Digitalisierung verändern. In: Zerdick A, Picot A, Silverstone R and Schrape K (eds) E-Merging Media: Kommunikation und Medienwirtschaft der Zukunft. Springer, Berlin, pp 33–57
Heinrich J (1999a) Medienökonomie, Band 2: Hörfunk und Fernsehen. Westdeutscher Verlag
Heinrich J (1999b) Konsequenzen der Konvergenz für das Fach Medienökonomie. In: Latzer M, Maier-Rabler U and Siegert G (eds) Die Zukunft der Kommunikation. Phänomene und Trends in der Informationsgesellschaft. Studien Verlag, Innsbruck, Wien, pp 77–86
Hess T (2002) Medienunternehmen im Spannungsfeld von Mehrfachverwertung und Individualisierung – eine Analyse für statische Inhalte. In: Zerdick A, Picot A, Silverstone R and Schrape K (eds) E-Merging Media: Kommunikation und Medienwirtschaft der Zukunft. Springer, Berlin, pp 59–78
Hess T, Schulze B (2004) Mehrfachnutzung von Inhalten in der Medienindustrie. Grundfragen, Varianten und Herausforderungen. In: Altmeppen KD and Karmasin M (eds) Medien und Ökonomie. Westdeutscher Verlag, Wiesbaden, pp 41–62
Hess T, Schumann M (1999) Medienunternehmen im digitalen Zeitalter – eine erste Bestandsaufnahme. In: Schumann M and Hess T (eds) Medienunternehmen im digitalen Zeitalter. Gabler, Wiesbaden, pp 1–18
Hess T, Walter B von (2006) Toward Content Intermediation: Shedding New Light on the Media Sector. The International Journal on Media Management 8: in print
Hoecker B (2002) Mehr Demokratie via Internet? Aus Politik und Zeitgeschichte. Beilage zur Wochenzeitung Das Parlament B39/40:37–45
Höflich JR (2002) Der Computer als "interaktives Massenmedium". Zum Beitrag des Uses and Gratifications Approach bei der Untersuchung computer-vermittelter Kommunikation. In: Haas H and Jarren O (eds) Mediensysteme im Wandel. Struktur, Organisation und Funktion der Massenmedien. Braumüller, Wien, pp 129–146
Huber R (1986) Redaktionelles Marketing für den Lokalteil. Die Zeitungsredaktion als Bezugspunkt journalistischer Themenplanung und -recherche. Minerva, München
IFPI (2003) Jahreswirtschaftsbericht 2002. http://www.ifpi.de/jb/2003/54-60.pdf
Kepplinger HM (2002) Mediatization of politics: Theory and data. Journal of Communication 52:972–986
Kiefer ML (2003) Medienökonomie und Medientechnik. In: Altmeppen KD and Karmasin M (eds) Medien und Ökonomie Bd. 1/1. Westdeutscher Verlag, Wiesbaden, pp 181–208
Knobloch S (2002) Unterhaltungsslalom bei der WWW-Nutzung: Ein Feldexperiment. Publizistik 47:309–318
Kohring M (1999) Selbstgespräche. Der Begriff Selbstreferentialität und das Fallbeispiel Journalismus. Anmerkungen und Ergänzungen zum Panel Selbstreferentialität. In: Latzer M, Maier-Rabler U and Siegert G (eds) Die Zukunft der Kommunikation. Phänomene und Trends in der Informationsgesellschaft. Studien Verlag, Innsbruck, Wien, pp 189–198
Krüger UM (1998) Zum Stand der Konvergenzforschung im dualen Rundfunksystem. In: Klingler W, Roters G and Zöllner O (eds) Fernsehforschung in Deutschland. Themen, Akteure, Methoden. Nomos, Baden-Baden, pp 151–184
Kubicek H (1997) Das Internet auf dem Weg zum Massenmedium? Ein Versuch, Lehren aus der Geschichte alter und neuer Medien zu ziehen. In: Werle R and Lang C (eds) Modell Internet? Entwicklungsperspektiven neuer Kommunikationsnetze. Campus, Frankfurt a. M./New York, pp 213–239
Lasswell HD (1948) The Structure and Function of Communication in Society. In: Bryson L (ed) The Communication of Ideas. Harper, New York, pp 37–51
Latzer M (1997) Mediamatik – die Konvergenz von Telekommunikation, Computer und Rundfunk. Westdeutscher Verlag, Opladen
Maier M (2002) Zur Konvergenz des Fernsehens in Deutschland. UVK, Konstanz
Maisel R (2002) Wandel des Mediensystems und sozialer Wandel. In: Haas H and Jarren O (eds) Mediensysteme im Wandel. Struktur, Organisation und Funktion der Massenmedien. Braumüller, Wien, pp 160–169
Malone TW, Yates J, Benjamin RI (1987) Electronic Markets and Electronic Hierarchies. Communications of the ACM 30:484–497
Marcinkowski F, Greger V, Hüning W (2001) Stabilität und Wandel der Semantik des Politischen: Theoretische Zugänge und empirische Befunde. In: Marcinkowski F (ed) Die Politik der Massenmedien. Heribert Schatz zum 65. Geburtstag. Halem, Köln, pp 12–114
Mast C (1999) Wirtschaftsjournalismus. Grundlagen und neue Konzepte für die Presse. Westdeutscher Verlag, Opladen
Mathes R, Donsbach W (2002) Rundfunk. In: Noelle-Neumann E (ed) Fischer Lexikon Publizistik Massenkommunikation. Fischer Taschenbuch Verlag, Frankfurt a. M., pp 546–596
McQuail D (1983) Mass Communication Theory. An Introduction. Sage, London
McQuail D (2002) McQuail's Reader in Mass Communication Theory. Sage, London
Meier WA, Jarren O (2002) Ökonomisierung und Kommerzialisierung von Medien und Mediensystem. Bemerkungen zu einer (notwendigen) Debatte. In: Haas H and Jarren O (eds) Mediensysteme im Wandel. Struktur, Organisation und Funktion der Massenmedien. Braumüller, Wien, pp 201–216
Norris P (1996) Does Television Erode Social Capital? A Reply to Putnam. Political Science and Politics 29:474–480
Oram A (2001) Peer-to-Peer: Harnessing the Power of a Disruptive Technology. O'Reilly & Associates, Beijing et al.
Picard RG (1989) Media economics: concepts and issues. Sage Publications, Newbury Park
Picard RG (2002) The economics and financing of media companies. Fordham University Press, New York
Picot A (1991) Ein neuer Ansatz zur Gestaltung der Leistungstiefe. Zeitschrift für betriebswirtschaftliche Forschung 43:336–357
Picot A, Reichwald R, Wigand RT (2003) Die grenzenlose Unternehmung – Information, Organisation und Management. Gabler, Wiesbaden
Porter ME (1985) Competitive Advantage. New York
Pürer H, Raabe J (1994) Medien in Deutschland. Band 1: Presse. UVK, Konstanz
Putnam RD (1995) Bowling Alone. America's Declining Social Capital. Journal of Democracy 6:65–78
Quandt T (2004) Journalisten im Netz. Verlag für Sozialwissenschaften, Wiesbaden
Rawolle J (2002) XML als Basistechnologie für das Content Management integrierter Medienprodukte. Wiesbaden
Ridder C-M, Engel B (2001) Massenkommunikation 2000: Images und Funktionen der Massenmedien im Vergleich. Ergebnisse der 8. Welle der ARD/ZDF-Langzeitstudie zur Mediennutzung und -bewertung. Media Perspektiven 3:102–125
Ronneberger F (2002) Funktionen des Systems Massenkommunikation. In: Haas H and Jarren O (eds) Mediensysteme im Wandel. Struktur, Organisation und Funktion der Massenmedien. Braumüller, Wien, pp 61–68
Sarkar MB, Butler B, Steinfield C (1998) Cybermediaries in Electronic Marketspace: Toward Theory Building. Journal of Business Research 41:215–221
Schatz H, Schulz W (1992) Qualität von Fernsehprogrammen. Kriterien und Methoden zur Beurteilung von Programmqualität im dualen Fernsehsystem. Media Perspektiven 11:690–712
Schönbach K (1997) Das hyperaktive Publikum – Essay über eine Illusion. Publizistik 42:279–286
Schulz W, Zeh R, Quiring O (2005) Voters in the changing Media Environment. European Journal of Communication 20:55–85
Schumann M, Hess T (1999) Content-Management für Online-Informationsangebote. In: Schumann M and Hess T (eds) Medienunternehmen im digitalen Zeitalter. Gabler, Wiesbaden, pp 69–87
Schumann M, Hess T (2002) Grundfragen der Medienwirtschaft. Berlin
Siegert G (2002) Medienökonomie in der Kommunikationswirtschaft. Bedeutung, Grundfragen und Entwicklungsgeschichte. Lit., Münster
Siegert G (2003) Medienökonomie. In: Bentele G, Brosius H-B and Jarren O (eds) Öffentliche Kommunikation – Handbuch Kommunikations- und Medienwissenschaft. Westdeutscher Verlag, Wiesbaden, pp 228–244
Sjurts I (2001) Einfalt trotz Vielfalt in den Medienmärkten: Eine ökonomische Analyse. Universität Flensburg, Flensburg
Sjurts I (2002) Strategien in der Medienbranche – Grundlagen und Fallbeispiele. Gabler, Wiesbaden
Sjurts I (2004) Der Markt wird's schon richten!? – Medienprodukte, Medienunternehmen und die Effizienz des Marktprozesses. In: Altmeppen KD and Karmasin M (eds) Medien und Ökonomie. Westdeutscher Verlag, Wiesbaden, pp 159–181
Vlasic A (2004) Die Integrationsfunktion der Massenmedien. Verlag für Sozialwissenschaften, Wiesbaden
Vorderer P (1995) Will das Publikum neue Medien(angebote)? Medienpsychologische Thesen über die Motivation zur Nutzung neuer Medien. Rundfunk und Fernsehen 43:494–505
Vorderer P (2000) Interactive Media and Beyond. In: Zillmann D and Vorderer P (eds) Media Entertainment. The Psychology of its Appeal. Lawrence Erlbaum Associates, Mahwah, NJ, London, pp 21–36
Vorderer P, Weber R (2003) Unterhaltung als kommunikationswissenschaftliches Problem: Ansätze einer konnektionistischen Modellierung. In: Früh W and Stiehler H-J (eds) Theorie der Unterhaltung. Ein interdisziplinärer Diskurs. Halem Verlag, Köln, pp 136–159
Walter B von, Hess T (2004) A property rights view on the impact of file sharing on music business models – why iTunes is a remedy and MusicNet is not. In: Proceedings of the 10th Americas Conference on Information Systems (AMCIS), New York
Wersig G (2000) Informations- und Kommunikationstechnologien – Eine Einführung in Geschichte, Grundlagen und Zusammenhänge. UVK Medien, Konstanz
Wilke J (2002) Presse. In: Noelle-Neumann E (ed) Fischer Lexikon Publizistik Massenkommunikation. Fischer Taschenbuch Verlag, Frankfurt a. M., pp 422–459
Williamson OE (1999) The economics of transaction costs. Elgar, Cheltenham
Wirtz BW (2001) Medien- und Internetmanagement. Wiesbaden
Zerdick A et al. (2001) Die Internet-Ökonomie: Strategien für die digitale Wirtschaft. Springer, Berlin et al.
Pluralism in Digital Broadcasting: Myths, Realities and the Boundaries of EU Action1
Monica Ariño
Abstract This chapter analyses media pluralism and diversity in European digital broadcasting through a study of access in communications markets. It is argued that the response of the European authorities to media pluralism challenges has been the implementation of detailed access regulation, coupled with reliance on competition law to keep markets open. The study reveals an interesting interaction between competition law and regulation, whereby competition law decisions have effectively regulated markets, and have triggered and shaped subsequent regulatory developments.
Pluralism in a digital paradigm: myths and realities

Beyond technological change

The year 2003 marked the first period in the history of modern communication in which more digital than analogue communication devices were sold, including palmtops, digital cameras, DVD players, PCs, mobile phones, and digital radio and TV sets. Furthermore, every day the degree of connectivity between these different devices becomes greater, allowing for a number of combined uses and other possibilities. What is remarkable, however, is not digitalisation itself, nor the technological merits associated with increased capacity, better quality, eradication of redundancies, reduction of transmission costs or interactivity. Beyond technological improvements, digitalisation has triggered significant changes in the very foundations of our communication, entertainment and information environments.
1 This chapter is a revised version of a paper presented at the ITS Biennial Conference in Berlin in 2004, and partly draws on PhD work carried out at the European University Institute (Florence) between the years 2000 and 2004. I thank Professor Massimo Motta, fellow researcher Alexandre DeStreel and participants at the ITS Conference for comments on previous drafts. Any errors remain my own.
Voice and choice

One of the most salient features of the debate surrounding the digitalisation of television is the increased number of frequencies and, subsequently, of available channels. Satellite technology has promised to deliver as many as 1,000 channels simultaneously, while coaxial cable – using compression – has demonstrated the capacity to deliver over 2,000 channels, in combination with telephone and internet access services. Do more channels mean more choice? Clearly, there is potential for greater variety in content, as increased capacity creates incentives to target previously marginalised audiences. The expansion of channels would therefore offer a solution to the issue of minority neglect. Yet such a conclusion is far from straightforward. Under advertising funding models the interests of consumers are still satisfied only insofar as they coincide with the interests of the advertisers (Cooper 2003). Subscription TV models allegedly solve the problem of the disconnection between advertisers' valuation of viewers and viewers' valuation of programmes, because they introduce a certain measure of preference intensity. Under price competition, the duplication and imitation prevalent in advertising-supported models might be reduced. Therefore, variety is indeed enhanced in pay-TV models to a certain extent. However, this again needs some cautious qualification. Firstly, new channels often provide not new content but the same content, repeated for an entire day. Greater channel capacity has not resulted in a corresponding increase in programme genres, and "outlet diversity (…) has been a poor replacement for content diversity" (Einstein 2004, p. 218). Secondly, when it comes to pluralism concerns, the crucial factor is the actual (not potential) impact of the media on citizens, which is measured by audience reach rather than by the number of channels. In fact, viewing patterns do not seem to have changed radically in recent years, and certainly not enough to suggest that viewers are now completely emancipated from media manipulation (Gibbons 2000).

Controlled or in control?

Digitalisation has supposedly 'freed' the viewer from the tyranny of the television schedule. Audiences will increasingly be able (and probably will want) to organise and re-order content as they please. In this context, a great deal of attention has been given to interactivity as a way to revolutionise traditionally passive communications on a point-to-multipoint basis.2 Interactivity and the personalisation of content arguably seem to reduce the need for regulation designed to ensure plurality and objectivity of content. In other words, the higher the level of active viewer choice, the lower the level of regulation. Certainly, it would seem disproportionate to apply with full force rules that were
2 Examples of enhanced control include video-on-demand (VOD), pay-per-view (PPV) and personal video recorders (PVR).
conceived under the assumption of involuntary exposure to the new interactive environment. However, it remains unclear to what extent multi-channel offerings and interactivity imply real viewer emancipation. Interactivity and personalisation have not deprived television of its 'mass media' character, and hence of its impact on public opinion. In effect, today's interactive television is not very far from conventional television with regard to how it is organised and used. Television is used to connect to life, but also to disconnect from it, and most consumers still want to be entertained. They do not interact with their television set the same way they do with their PC or their mobile phone. Sure enough, television is digital, but most consumers can still be described as 'analogue'. Furthermore, even in those cases where there is active participation, interaction or choice initiated by the viewer, one should be wary about the real degree of control. The capabilities of personal video recorders (PVR), for example, are not limited to allowing users to set preferences so that the set-top-box (STB) automatically records their favourite shows, movies or news. The box also 'suggests' shows, movies and news. In effect, much of the potential consumer control is ceded to network operators, who can transform or use the STB intelligence to their benefit. For example, broadcasters can observe customers' behaviour and learn about their habits and preferences, and then provide information and advertisements accordingly. All in all, the question of 'who will control whom' is, for the moment, unanswered.

Media pluralism and market structures

Additional concerns arise from the structure of media markets. Adaptation to the new environment has translated into building large companies, both vertically and horizontally (McPhail 2002). Programme-makers have combined with packagers and broadcasters, or with distribution means, or with all of these. More recently, a wave of consolidation mergers can be observed across Europe. Media conglomeration trends have been enhanced by reductions in legal ownership restrictions in various countries.3 Clearly, the control of both the communications infrastructure and the gateways to access information, as well as much of the content, gives media companies significant power. Beyond competition concerns, what is the impact of market concentration on pluralism and diversity? Some claim that the number of media competing for audiences does not necessarily determine the diversity of viewpoints that are publicly aired (Picard 1998). One could argue that a monopolist or two big media companies that provide markedly different programming can serve minorities better than a multiplicity of small media, constrained by advertising and audience rates, that provide fairly similar content.4
3 For example the UK, Spain and Italy.
4 Sometimes big media conglomerates sponsor vastly different news outlets. News Corporation, for example, controls simultaneously the Sun (a popular British tabloid) and the conservative and elite London Times.
Even if debates about the relationship between ownership concentration and performance from the perspective of content diversity have been long and controversial, there is a general perception that increased market concentration precludes diversity (Bagdikian 2004; Compaine and Gomery 2000). The fact that big media conglomerates target different audiences and provide a variety of content does not remove concerns regarding pluralism. These are not limited to the existence of different genres or formats; more importantly, they relate to the power that one single entity might enjoy over citizens. Big media conglomerates effectively decide on the boundaries of public policy debates and often influence legislative outcomes. The relaxation of cross-media ownership rules is but one example. From a pluralism perspective it is important that no entity has such power. A few should not be entitled to decide the extent of content diversity to which all are exposed. The value of small broadcasters and independent media outlets lies in their very existence. At any point in time, if there is a socially relevant and controversial issue (the war in Iraq, homosexual marriages or the EU referenda on the Constitutional Treaty), these small media can be used as platforms from which to speak, agree, disagree and advance social debate. The more there are, the higher the chances that discourses will be diversified.

Conclusion

Current technological optimism ignores the fact that an increase in the quantity of channels and in interactivity does not by itself guarantee free and wider consumer choice (Gibbons 2000; Näränen 2002). Firstly, more channels do not inevitably mean more diversity of content, but possibly more of the same. Secondly, changes in viewing patterns will take time and have so far been limited. Thirdly, even if we do have more communication channels, threats to pluralism will remain as long as control over these channels continues to be globally concentrated in a handful of companies. Therefore, and despite the increase in channels, programmes, information, interaction and seemingly greater control over our media consumption, media pluralism is as much of a concern in digital as it was in analogue broadcasting. Few contest the case for intervention, and there is wide consensus about the need to adapt to the new environment the traditional media regulations designed to ensure a pluralistic supply of programmes. However, standard media ownership rules are proving increasingly inappropriate in many media markets. A great deal of attention has been paid to other regulatory tools such as access obligations, standardisation and interoperability, structural regulations, content regulation, self-regulation and general competition law. In the following sections some of these mechanisms and their impact on, and contribution to, the safeguard of media pluralism in the EU will be explored.
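As a brief editorial aside to the concentration argument above: competition analysis commonly summarises market concentration with the Herfindahl-Hirschman Index (HHI), the sum of squared market shares. The minimal sketch below applies it to audience shares; all figures are invented for illustration and are not drawn from any case or dataset cited in this chapter.

```python
# Herfindahl-Hirschman Index on hypothetical audience shares (in per cent).
# The shares below are invented for illustration only.

def hhi(shares_percent):
    """HHI = sum of squared shares; 10,000 means monopoly, values near 0
    indicate a highly fragmented market."""
    return sum(s * s for s in shares_percent)

pre_merger = [30, 25, 20, 15, 10]   # five broadcasters
post_merger = [55, 20, 15, 10]      # the two largest have combined

print(f"HHI before the merger: {hhi(pre_merger):.0f}")   # 2250
print(f"HHI after the merger:  {hhi(post_merger):.0f}")  # 3750
# Concentration jumps sharply even though the channels on offer, and hence
# the apparent 'choice', could remain exactly the same -- echoing the point
# that channel counts are a poor proxy for pluralism.
```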
How does the EU face media pluralism challenges?

Pluralism through access

EU attempts to harmonise national pluralism approaches have ostensibly failed, as exemplified by the frustrated directive on media ownership in the mid-nineties (Harcourt 1998; Doyle 1997). Unable to regulate pluralism directly, European authorities have concentrated on the safeguard of access, both through regulation and through competition law enforcement by the European Commission. The idea of access has a wide array of connotations in media communications markets. It can denote participation and representation in the media, the universal availability of content, ownership and use, as well as openness and interoperability. For the purposes of this chapter I follow van Cuilenburg and McQuail's definition of access as "the possibility for individuals, groups of individuals, organisations and institutions to share society's communications resources; that is, to participate in the market of distribution services (communications infrastructure and transport), and in the market of content and communication services, both as senders and receivers" (van Cuilenburg and McQuail 2003, p. 204). From a competition viewpoint, access is a central and often controversial matter. In order to compete, broadcasting companies need access to resources, technology, information, production and distribution systems, as well as to consumers. In oligopolistic scenarios where several large companies are present in all aspects of the communications process (i.e., creative, production and distribution functions), ensuring fair and non-discriminatory access for third parties becomes key. However, access issues in broadcasting are not only of an economic nature. They are connected with the socio-political dimensions of broadcasting, namely effective citizenship, the right of freedom of expression and the protection of media pluralism and cultural diversity. Without access, the right to foster opinions, as well as to impart, distribute and receive information without government interference, would be meaningless. Access is therefore not only a condition for effective competition, but also a premise for the effective exercise of the right of freedom of expression, and it is thus vital to ensuring a democratic communications regime. For this reason, an analysis of how access is treated by EU regulatory authorities can contribute to our understanding of EU approaches to media pluralism. The main questions are: what principles inspire the set of conditions under which access to the media and the medium, in a wide sense, is granted? To what extent does pluralism influence access analysis in media markets?

A framework for analysis: access to content, networks and platforms

In order to carry out an analysis of the type described above it is useful to group media access issues that are equal or similar in scope, so that norms and decisions dealing with them can in turn be identified and compared. Access in broadcasting is always requested with one ultimate goal in mind, and this is to communicate, in
the broad sense of entertaining, informing, expressing and exchanging opinions, ideas or creations. This is why access needs to be considered at the sender, conveyer and receiver ends. If there are any barriers between these ends (typically bottlenecks), these will need to be considered too. Table 1 illustrates the taxonomy of access issues that has been adopted for the purposes of the analysis. These are divided into three groups: access to content issues, access to platform issues and access to networks issues. Access to content issues arise at the sender end, access to distribution networks issues arise at the conveyer end, and issues of access to technology are transversal to all levels, but particularly relevant at the receiver end (the platform), where proprietary solutions have often been adopted. This three-tier distinction is not only useful for methodological purposes, but also indicative of one of the key structural problems of digital broadcasting markets, which is precisely their rigid vertically integrated model.

Table 1. A framework for analysis
                   Content                         Platform                           Networks
Access to          Premium films and               Associated facilities:             Cable, satellite,
                   sports rights                   CAS, EPG, API                      terrestrial, DSL, mobile
Issues             Duration/scope of               Bottlenecks, standardisation,      Must carry, licensing,
                   exclusivity                     interoperability                   essential facilities
Tools in EU law    Competition law;                Competition law;                   Competition law;
                   TVWF Directive                  Access Directive/                  Access Directive/
                                                   Framework Directive                Universal Service Directive
The remainder of this chapter provides an overview of how EU authorities have so far dealt with access issues at all three levels. Because of space constraints, a number of norms and decisions have been selected to illustrate the dynamics between regulation, competition, access and pluralism in the field of digital broadcasting, but the review is not comprehensive.

Access to content

Communications is about content. Telegraph and postal services, telephony, press, radio and television are all about the exchange or dissemination of content. After all, broadcasting is nothing more than a content proposition (content is 'king'). Content is a highly differentiated product, and certain forms of programming exhibit little or no economic substitutability, granting content owners a significant amount of market power within their own niche. Securing the rights to certain content, and in particular 'must-haves' such as premium movies and live sports
events, has become a commercial imperative for broadcasters (especially for pay-TV operators) as well as for mobile and IPTV service providers. These contents are scarce, but this scarcity has been artificially created through a system of long-term exclusive contracts with the content right holders.

Competition response: Limits to scope and length of exclusivity

The conclusion of exclusive contracts is a legitimate way to guarantee the value of a programme, both for the organiser of the transmission and for the owner of the rights, and it is not necessarily anticompetitive.5 However, exclusive dealing might restrict competition if it has the effect of reducing output and limiting price competition. There are also concerns in cases where the exclusivity results in significant rights not being exploited by any broadcaster, or on other platforms such as third-generation mobile phones or internet platforms. The European Commission has closely scrutinised agreements in the market for sports broadcasting rights and, to a lesser extent, in the market for premium films. Broadly, in order to balance the competitive impact of the exclusivity, competition authorities have looked at duration, quantity and market power (both upstream and downstream).

(i) Eurovision saga

The issue of access to sports rights was at the core of the long and complicated history of the Commission's assessment of the Eurovision system, which details rules governing the acquisition of broadcasting sports rights by the European Broadcasting Union (EBU). The story is well known. In 1989 the EBU applied for negative clearance or for an exemption for the Eurovision system. Exemptions to the Eurovision agreements were granted by the Commission in 19936 and 20007, but both were subsequently annulled by the Court of First Instance (CFI) in 19968 and 20029 respectively. The EBU system, among other things, gave rise to restrictions on competition for the acquisition of broadcasting rights as regards third parties. As a result, non-members would be deprived of access to those rights. In 1993 the Commission accepted certain restrictions but insisted that existing arrangements for giving non-members access to programmes be strengthened. The CFI overruled the Commission's exemption on the basis that the Commission had failed to examine whether
5 Case 262/81 Coditel SA, Compagnie Générale pour la Diffusion de la Télévision, and Others v. Ciné-Vog Films SA and Others (1982) ECR 3381.
6 Commission decision EBU/Eurovision System, OJ 1993 L 179/23.
7 Commission decision Eurovision, OJ 2000 L 151/18.
8 Joined Cases T-528/93, T-542/93, T-543/93 and T-546/93 Métropole Télévision SA and RTI v. Commission ('European Broadcasting Union') (1996) ECR II-649.
9 Joined Cases T-185/00, T-299/00 and T-300/00 M6 v Commission, Gestevisión Telecinco v Commission and SIC v Commission (2002) ECR II-03805.
the EBU membership criteria were objective and sufficiently determinate to be applied uniformly and in a non-discriminatory way to potential members.10 In 2000, the Commission again exempted the redrafted EBU rules. An important competition concern was the fact that EBU members had by then entered the thematic channels segment, and there was a risk that the system of joint acquisition of rights would unfairly place other pay-TV competitors at a disadvantage. The Court considered the system of third-party access ineffective because the exclusion of free-to-air commercial broadcasters was still likely. Even deferred coverage (which is of no real interest in economic terms for general television channels) was subject to editing and embargo time limitations. On this basis, the Court struck down the Commission's exemption.

(ii) UEFA

This has been the leading case on collective agreements investigated by the Commission so far. The Commission challenged the trading practices for Champions League media rights.11 The Commission found that the collective sale of all of the rights in one package to a single broadcaster, on an exclusive basis for up to four years, had distorted competition both vertically and horizontally. The Commission required UEFA to sell the rights in accordance with fair, open and non-discriminatory tendering procedures and to break up (unbundle) the rights into different packages to allow more market participants access to the bids for those rights.12 The Commission also required that football clubs not be restricted from selling to free-to-air operators if there is no reasonable offer from any pay-TV broadcaster.

(iii) English Premier League

In the FA Premier League (FAPL) case, the Commission has gone one step further. The FAPL sells the rights jointly, and on an exclusive basis, on behalf of the clubs to television companies in Britain and Ireland. In practice, this has meant that only twenty-five per cent of the matches are broadcast live. Only the major media groups (BSkyB in this case) can afford the acquisition and exploitation of such a bundle of rights. After the Commission's intervention, the FAPL agreed to open the tendering procedures to at least two broadcasters and to break the rights into different packages.13
10 Against the decision of the Court see Meltz (1999).
11 European Commission, Case no. IV/37.398 – UEFA Champions League.
12 European Commission Press Release IP/03/1105 of 24 July 2003, Brussels.
13 Commission Notice concerning Cases 38.173 and 38.453 on joint selling of the media rights of the FA Premier League on an exclusive basis, (2004) OJ C 115/3.
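As a stylised illustration of the unbundling remedies just described (the broadcaster names and bid figures below are invented and do not reproduce the UEFA or FAPL proceedings): when all rights are tendered as a single bundle, the deepest-pocketed bidder takes everything; tendered package by package, the very same bids spread the rights across several broadcasters.

```python
# Stylised rights tender: hypothetical bids (in millions) from three
# hypothetical broadcasters for three hypothetical packages of match rights.
bids = {
    "BigPayTV":   {"Gold": 300, "Silver": 120, "Internet": 40},
    "FreeToAir":  {"Gold": 180, "Silver": 130, "Internet": 10},
    "NewEntrant": {"Gold":  50, "Silver":  60, "Internet": 55},
}

# Bundled sale: one winner takes all the rights.
bundle_winner = max(bids, key=lambda b: sum(bids[b].values()))
print("Bundled sale: everything goes to", bundle_winner)  # BigPayTV

# Unbundled sale: each package is tendered separately to its highest bidder.
for package in ("Gold", "Silver", "Internet"):
    winner = max(bids, key=lambda b: bids[b][package])
    print(f"{package} package goes to {winner}")
# Gold -> BigPayTV, Silver -> FreeToAir, Internet -> NewEntrant:
# three winners instead of one, without any bidder changing its valuation.
```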
(iv) ARD

In the area of access to film rights the Commission has ruled that exclusive content rights do not infringe competition rules unless the terms of the exclusivity are absolute or run for an excessive period of time. Most prominently, in its landmark ARD decision14, the Commission granted an exemption to the German public broadcaster only after the duration of its rights to an extensive library of feature films from MGM and United Artists had been significantly reduced.15 Since then, the Commission has repeatedly enforced reductions in the scope and duration of content rights.

(v) Vivendi/Canal+/Seagram

ARD has so far been the only occasion on which the Commission formally examined film licensing agreements under Article 81. Limitations on the scope and duration of rights to premium films have mostly been imposed as conditions in merger cases. For instance, in the Vivendi/Seagram/Canal Plus merger the parties (who would have had the world's second largest film library and the second largest library of TV programming in the EEA) undertook not to offer more than 50% of Universal's film production to Canal+, thereby reducing concerns about foreclosure of the pay-TV market by Canal+.16

(vi) NewsCorp/Telepiú

The conditions imposed on the NewsCorp/Telepiú merger reflect once more the Commission's efforts to reduce the duration of exclusive contracts for premium content. The merger effectively gave the new entity (Sky Italia) a quasi-monopoly in the pay-TV market in Italy, thereby substantially increasing its bargaining power vis-à-vis content providers. To alleviate these concerns the Commission required that Sky not extend its exclusivity to exploitation on other platforms (such as terrestrial or cable), so that potential rivals would be free to acquire the rights for themselves. It also limited the duration of the exclusivity to three and two years for films and football rights respectively. Finally, it required that premium channels and pay-per-view football services be made available to those operators who so requested. This would facilitate competition from small-scale operators on competing platforms (for instance cable), which would then have access to premium content without the need to purchase the rights directly from the right holder.
14 Commission decision Film purchases by German television stations, OJ 1989 L 284/36.
15 Originally granted for a period ranging from 15 to 16 years, the exclusivity of the rights was reduced by means of the introduction of several 'windows', ranging from 2 to 6 years, during which the exclusivity of ARD was lifted, thereby allowing the films to be licensed to third parties.
16 Commission decision Vivendi/Canal+/Seagram (2000) OJ C 311/3.
Regulatory response: Major events legislation

Competition law has been, overall, a powerful and effective tool to limit the length and scope of exclusivity rights for certain content. However, it has proved insufficient to ensure access to content in cases where concerns went beyond the adequate functioning of markets. Most prominently, in the 1993 Eurovision case one concern mutually shared by the Commission and national governments was that the climbing prices for sports broadcasting rights could result in public service broadcasters losing bids to the benefit of commercial and pay-TV broadcasters. Neither of the latter two would ensure total coverage of the population. The first review of the TVWF Directive offered a good opportunity to address the issue via regulation. In 1997, the Directive was modified and Article 3(a) was introduced. It allows MS to draw up a list of events that they consider of 'major importance to society' and that should be available for free live or deferred transmission.17 The Directive is under review at the time of writing, but it is expected that this provision will be maintained in the new version. Major events legislation is an attempt to control the acquisition of market power through limitations on access to key content. There are also cultural arguments that relate to the social function attributed to sports (Weatherill 2003) and its importance as a source of national integration and identity. Finally, there is an underlying intention in Article 3(a) to protect national broadcasting companies that might otherwise be prevented from broadcasting certain events.

Access to distribution networks

'If content is king, distribution remains the key to the kingdom'.18 Clearly, the carrying of signals and information is a fundamental stage of the overall communications process. Access to a network not only represents a physical connection to the network; it also refers to the opportunity to benefit from the services generated by network usage. Access to networks is particularly problematic when the owner of the network competes at the retail level, in the provision of the service, with the access requester. As access is requested in order to earn profits from the access provider's clients, the access provider will generally try to impede access, or at least to influence the conditions under which access is granted so as to favour its own interests.

The response of competition law: Essential facilities doctrine

Under competition law, access to infrastructure networks has mostly been dealt with under Article 82 of the EC Treaty, when it could be proved that the network
17 Article 3(a) TVWF Directive.
18 Report from the High Level Group on Audiovisual Policy chaired by Commissioner Marcelino Oreja, 1998, chapter 1.
owner, by refusing access, had abused its dominant position. A refusal to deal with an existing or potential customer has also been considered an abuse. Access to network capacity and network services has been a central issue in conventional telecommunications cases. The Commission has opened several proceedings under Article 82 to review the conditions under which dominant operators (generally the incumbents) have granted access to their networks19, and has overtly used remedies of a regulatory nature (i.e., open access to networks, local loop unbundling and legal separation) in merger cases.20 In broadcasting, by contrast, the matter has been rather secondary, and the biggest access controversies have developed around platform technology (see next section). There were a few merger agreements in which the Commission was concerned with the maintenance of available network capacity and banned the operations partly for that reason. This was the case in NSD, a merger that featured the largest cable television operators in Norway (Norsk Telecom) and Denmark (TeleDanmark) and the largest provider of satellite television programmes in the Nordic region (Kinnevik).21 The Commission was particularly worried that the joint venture would give the new entity a large majority of the transponder capacity, and it did not accept the undertakings proposed by the parties, which were mostly behavioural and thus difficult to enforce.22 Also rejected were the undertakings proposed by the parties in MSG23, where Deutsche Telekom undertook to open up its networks for further digital transmission to avoid any shortage of channels24, again on the grounds that these were general declarations of intent, but hardly enforceable. In Premiere25, Deutsche Telekom committed to keeping two digital channels open for use by a potential third-party programme supplier and to expanding cable capacity.26 The Commission accepted that in theory such a condition would make it possible for competing programming to be made available on the platform, but considered one year to be insufficient time to set up an alternative platform. The Commission also opposed joint control by the Swedish and Norwegian Governments of a new company to hold the shares of Telia and Telenor27, because each incumbent was also active in the retail distribution of TV services and related markets in its respective country.
19 Against Deutsche Telekom (Press Release IP/02/348 of 1 March 2002), Wanadoo (Press Release IP/01/1899 of 21 December 2001) and KPN (Press Release IP/02/483 of 27 March 2002).
20 The Commission imposed access to the fixed and mobile network services in Telia/Sonera (Commission decision Telia/Sonera [2002] OJ C 202/19), and for the first time to wireless networks in Vodafone/Mannesmann (Commission decision Vodafone/AirTouch [1999] OJ C 295/2).
21 Commission decision Nordic Satellite Distribution (1995) OJ L 053/20.
22 NSD at par. 159.
23 Commission decision MSG/Media Service (1994) OJ L 364/1.
24 MSG at par. 94.
25 Commission decision Bertelsmann/Kirch/Premiere (1999) OJ L 053/1 jointly with Commission decision Deutsche Telekom/BetaResearch (1999) OJ L 53/31.
26 Premiere at par. 146.
27 Commission decision Telia/Telenor (2001) OJ L 040/1.
In all these cases, the vertically integrated structure of networks and content was decisive for the negative outcome. Had the network operator been active at the wholesale level only, the Commission would probably have been more lenient. This is confirmed by the fact that it has allowed certain operations under strict conditions of divestiture28, structural separation or commitments not to apply for further digital capacity.29 As most of the broadcasting-specific cases were decided negatively (NSD) or abandoned (Telia/Telenor), it is not easy to identify what criteria would be taken into consideration by the Commission when enforcing access to networks through competition law in broadcasting markets. Yet, largely building upon the Commission's application of Article 82 and the essential facilities doctrine, as well as on European Court of Justice rulings in the media sector like Magill30 and Bronner31, one can conclude that, contrary to regulatory interventions in the communications sector, the only public interest considered by competition authorities when imposing on the owner of an essential facility the obligation to deal or to give access is that of effective and free competition. In other words, access is not granted automatically simply because there is a need for it, and the fact that the access-seeking party offers different content or represents a minority group is not one of the criteria considered.

Regulatory response: 'must carry' rules

The problem has been partially addressed through regulation via the implementation of 'must carry' rules, which give a sort of privileged access to communications infrastructure. These are rules that seek to ensure that certain radio and television broadcast channels and services (generally, but not only, public service channels) are made universally available to users. The reasons traditionally invoked to justify 'must carry' rules are the universal accessibility of radio and television programmes and the need to guarantee a pluralistic offer to the public. At the European level, 'must carry' is regulated by Article 31 of the Universal Service Directive32, which allows the imposition of 'must carry' obligations on network operators when 'a significant number of end-users of such networks use
28 I.e., in BiB/Open, condition no. 4 required BT, who was a potential competitor of BSkyB, to divest its interests in cable television networks in the United Kingdom.
29 For instance, in BSkyB/KirchPay-TV access to the network was secondary, but the Commission still requested that Kirch not apply for further digital cable capacity in Germany for a period of time, so as to allow competition to develop.
30 Joined cases C-241/91P and C-242/91P, Radio Telefis Eireann v. Commission ("Magill") (1995) ECR I-743.
31 Case T-504/93, Ladbroke (1997) ECR II-923 and Case C-7/97 Oscar Bronner GmbH & Co. KG v Mediaprint Zeitungs- und Zeitschriftenverlag GmbH & Co KG (Bronner) (1998) ECR I-7791.
32 Directive 2002/22/EC of the European Parliament and of the Council of 7 March 2002 on universal service and users' rights relating to electronic communications networks and services (Universal Service Directive), OJ (2002) L 108/51.
them as their principal means to receive radio and television broadcasts'.33 Recital 44 makes clear that, today, this includes cable, satellite and terrestrial broadcasting networks. The extension of 'must carry' rules to all delivery networks that are effectively used for the transmission of broadcasting content to a substantial part of the population should be celebrated. Not only is it in accordance with the principle of technological neutrality endorsed by the regulatory framework, but it also has a positive impact on pluralism and diversity. Today, a real choice between different platforms does not exist in all cases, and it could certainly be the case that when an operator is seeking carriage agreements for its channels the key determinant is whether carriage will be successfully negotiated on satellite or terrestrial, cable being nothing more than an afterthought. Overall, the regime of 'must carry' rules is promising and will allow MS to tackle pluralism issues at the national level. It constitutes a good example of how regulation can go beyond the limits faced by a competition authority, which seems unlikely to require a must-carry type of obligation as a condition of a merger.

Access to platforms

Access to platforms refers to access to the technology (software) that makes the platform work. In a software-based digital environment where proprietary solutions proliferate, technology and access to it have acquired extraordinary importance. The dynamic nature, flexibility and complexity of digital software make it much harder to standardise than hardware, and even interoperability solutions are difficult to design. In digital broadcasting the most fervent access debates have taken place over the decoder or set-top-box (STB) technology (or over the so-called associated facilities34), where the principal bottlenecks arise. This has proved the most challenging and difficult area of intervention in the digital world, partly because a multiplicity of contradictory interests of various players (content producers, consumer electronics manufacturers, platform operators, broadcasters and even telecommunications operators) meet in the decoder.

Competition law response: Beyond access regulation

The design of the STB and access to it was a central issue in the discussions surrounding the merger attempts between Deutsche Telekom, the Kirch Gruppe and Bertelsmann in the German pay-TV market. The agreements were blocked twice.35
33 Article 31(1) Universal Service Directive.
34 Mainly, Conditional Access Systems (CAS), Application Programme Interfaces (APIs) and Electronic Programme Guides (EPGs).
35 Commission decision MSG/Media Service (1994) OJ L 364/1; Commission decision Bertelsmann/Kirch/Premiere (1999) OJ L 053/1 jointly with Commission decision Deutsche Telekom/BetaResearch (1999) OJ L 53/31.
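The cases below turn repeatedly on conditional access and on 'simulcrypt' arrangements, under which a programme is scrambled once while each participating conditional access (CA) system delivers the descrambling key, the so-called control word, to its own subscribers. The toy model below is a didactic sketch only: it uses a hash-based XOR cipher in place of the real DVB scrambling and signalling, and every name in it is invented. Its point is structural: every participating CA system must be handed the same control word, which is why simulcrypt presupposes agreements between operators.

```python
# Toy model of a simulcrypt-style arrangement: scramble once, then wrap the
# same control word (CW) separately for each CA system. Didactic only; real
# DVB uses DVB-CSA scrambling and standardised ECM/EMM messages.
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Derive a deterministic n-byte keystream from a key (toy, not secure)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor(data: bytes, key: bytes) -> bytes:
    """XOR data with a key-derived keystream; applying it twice decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

control_word = b"cw-for-this-crypto-period"
programme = b"premium football match"
scrambled = xor(programme, control_word)          # one scrambling pass

# The CW is wrapped under each CA system's own key, so set-top boxes from
# different CA vendors can all descramble the very same signal.
cas_keys = {"CAS-A": b"operator-1-key", "CAS-B": b"operator-2-key"}
ecms = {name: xor(control_word, key) for name, key in cas_keys.items()}

# A receiver holding only CAS-B's key still recovers the programme:
cw = xor(ecms["CAS-B"], cas_keys["CAS-B"])
assert xor(scrambled, cw) == programme
print("CAS-B receiver descrambled:", xor(scrambled, cw).decode())
```

The commercial sticking point is exactly this key-sharing step: as the chapter notes, the licensing agreements it presupposes were rarely concluded in practice.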
The Commission feared that, because of the vertically integrated structure of the companies, access would be limited for potential rival suppliers. Most prominently, in the Premiere case the merging parties offered extensive access commitments and the introduction of a compulsory conditional access licence, together with the disclosure of the application programme interface (API) information. The Commission, although it recognised that these conditions did 'go some way to ensuring that third parties are not subject to discrimination where licensing is concerned', ultimately decided to reject them on the grounds that the parties would still control the development of the decoder technology.36 The Commission ignored oligopolistic tendencies and appeared reluctant to "compromise the prospect of competition, however remote, for the possibility of a rapid launch of new digital pay TV services" (Veljanovski 1999, p. 61). These earlier restrictions are in striking contrast with recent green lights to monopolies that consolidate positions in already developed markets. Thus, in NewsCorp/Telepiú the Commission happily accepted the parties' commitment to grant open access to the API on the basis of a cost-oriented and non-discriminatory formula, and to procure that NDS (a conditional access provider controlled by NewsCorp) would license its technology to third parties on a fair basis. The Commission nonetheless failed to achieve interoperability and contented itself with the parties' loose promise to negotiate simulcrypt agreements. This is surprising, particularly when the historical difficulties in reaching and implementing such agreements are considered, but it is consistent with previous case law. For instance, the BiB joint venture (renamed Open) was approved only after BSkyB (one of the parent companies) had agreed to grant third parties access to BiB services on a non-discriminatory basis and to develop and operate simulcrypt arrangements.37 Somewhat exceptional in this respect is the approach taken by the Commission in the KirchPay-TV case.38 In a rather unorthodox use of competition law for standardisation purposes, it required the compulsory implementation of an open standardised API (in this case MHP), thereby effectively regulating interoperability through competition law. Yet it failed to mandate a common interface and simply required the parties to enter into simulcrypt agreements with other access providers. However, no simulcrypt agreement was ever concluded, because licensing proved too burdensome.39 These conditions echo some of the undertakings rejected in MSG and Premiere. The reason why the Commission took a different view in this case might be related to the fact that the merger between Kirch and BSkyB was not vertical but horizontal and pan-European in scope. The prominent role that European competition authorities have had in defining platform regimes through the imposition of remedies (be it the request of simul-
Premiere at par. 139. Commission decision British Interactive Broadcasting/Open (1999) OJ 312/1. See in particular paragraphs 173 and 180. 38 Commission decision BSkyB/Kirch Pay TV (2000) OJ C 110/45. 39 This has been a common problem all throughout Europe and it has been argued that only a neutral and independent licensor could guarantee that licensing conditions are fair and non-discriminatory. (Rosenthal, 2003). 37
Pluralism in Digital Broadcasting: Myths, Realities and the Boundaries of EU Action
287
crypt agreements like in Newscorp/Telepiú or the more interventionist enforcement of technical standards in BSKyB/KirchPay-TV) is partly explained as an attempt by the Commission to compensate for the ineffectiveness, on the regulatory sphere, of the 1995 Advanced TV Standards Directive that set minimal regulations and did not mandate any particular standards for conditional access, leaving other facilities such as APIs and EPGs unregulated.40 Regulatory response: A market approach The broadcasting sector has traditionally being heavily regulated in terms of standards. In the early 90s, after a big EU policy failure generally known as the ‘MAC debacle’, there was a turning point in the EU policy for technical standardisation in broadcasting. The guiding principle would no longer be to steer the market but rather to promote a voluntary and industry-led approach to standardisation. Standards should be a direct consequence of the needs of the markets. This has largely been the principle that, since then, has inspired the regulation of platform components and in particular the Advanced TV Standards Directive, which simply prescribed the use of agreed digital transmission technologies, leaving coordination in detail to market players. Yet, the majority of conditional access systems being used or planned are still proprietary and mutually incompatible by definition and hardly any simulcrypt agreements have been reached. Furthermore, the Directive did not deal with APIs or EPGs, elements that, as mentioned above, have so far been regulated at the EU level mostly through competition law. All in all, the success of most of the Directive provisions has been relative. Most of these provisions, particularly the specific behavioural regime for conditional access systems, have been imported into the Access Directive.41 Importantly, the Directive exceptionally requires all operators of CA (and not only those with significant market power) to grant access to their systems under fair, reasonable and non-discriminatory conditions and to license their intellectual property rights to manufacturers on the same basis.42 The fact that the obligation is not restricted to SMP operators, while contrary to the spirit of the new framework, it reflects the Commission’s consciousness about the specificities of broadcasting and indirectly contributes to the protection of pluralism. However, its effectiveness is still limited. Firstly, even if the Access Directive also contemplates extending this obligation to other associated facilities like APIs or EPGs43, the separation between infrastructure and content implies that presentational issues, key for consumer awareness of information diversity, are not covered. Accordingly, there is no obligation of ‘due prominence’ or visibility on 40
Directive 95/47/EC of the European Parliament and of the Council on the use of standards for the transmission of television signals, (1995) OJ L 281/51. 41 Directive 2002/19/EC of the European Parliament and of the Council of 7 March 2002 on access to, and interconnection of, electronic communications networks and services (Access Directive), (2002) OJ L 108/7. 42 Article 5 of the Access Directive. 43 Annex I, Part II of the Access Directive.
288
Monica Ariño
navigation tools for public service broadcasters.44 Secondly it is unclear which access regime (the specific CA regime or the general SMP regime under Article 813) should be followed in the case that MS decide to extend access obligations to EPGs and APIs (Helberger 2002, 2004). Finally, the Directive certainly does not allow to tackle other bottleneck facilities or commercial practices that are related to the service rather than the transport level (typically programme bundling) and that are likely to have a direct impact in the content composition of the final information output (Gibbons 2004). The regulatory framework deals for the first time with the issue of interoperability. In line with the principle of technological neutrality, no interoperability standard has been mandated. Article 18 of the Framework Directive merely imposes on Member States the obligation to encourage interoperability of interactive digital television services (meaning portability across platforms and full functionality) as well as the use of an open API. The rationale behind a special regime for broadcasting-related facilities in the Access Directive and the attention given to interoperability in the Framework Directive is the following: the general access regime would ensure access to network capacity, but it would not guarantee access to the end-user. This is dependent upon access to the set-top-box. Recital 31 of the Framework Directive states that the ultimate objective of the interoperability regime is to promote ‘the free flow of information, media pluralism and cultural diversity’.45
Assessment A convergence of disciplines? Undeniably, ensuring access to transmission rights, to premium content, to platforms and to distribution networks has been a central focus of attention for both regulation and competition law enforcement in the broadcasting sector. As illustrated in Table 2, an interesting interplay between competition law and sector specific regulation can be observed. At the level of networks, where detailed and overall successful regulations such as ‘must carry’ rules existed, competition law has been rather limited to competition related issues, largely ignoring pluralism concerns. By contrast, at the content and platform levels, where regulation was either non-existent (except for Article 3(a) of the TVWF Directive) or not sufficient (as exemplified by the failure of the Advanced TV Standards Directive to achieve interoperable scenarios) competition law has often gone beyond its scope, effectively regulating markets.
44
However, MS are free to implement that kind of requirements. This is for instance the case in the UK. 45 See Recital 31 of the Framework Directive.
Table 2. EU intervention in the broadcasting sector

Level    | Regulation                                    | Competition                                    | Interplay
---------|-----------------------------------------------|------------------------------------------------|-----------
Content  | Major events: Article 3(a) TVWF               | Access conditions (EBU)                        | Trigger
         |                                               | Exclusivity restrictions (ARD, UEFA, NewsCorp) | Substitute
Platform | CAS regime: Access Directive                  | Simulcrypt agreements (NewsCorp, Kirch Pay-TV) | Substitute
         | Interoperability: Framework Directive         | Access obligations (BiB)                       | Substitute
         | Standardisation: Framework Directive          | Standardisation (BSkyB/Kirch Pay-TV)           | Trigger
Network  | Must carry rules: Universal Service Directive | Essential facilities; open access, unbundling  | Complement
Thus, competition law has been used both as a substitute for regulation and as a trigger of regulatory developments. For example, through BDB,46 the Commission effectively influenced regulatory developments in the UK pay-TV market by establishing the principle that a national regulatory authority should not grant a licence to a dominant operator if by doing so dominance could be extended or strengthened (Temple Lang 1998). The MSG and NSD decisions inspired the treatment of issues of standard setting and access to bottleneck facilities in the Advanced TV Standards Directive. It is also interesting that, a few weeks before the BiB decision, the Commission adopted the Access Notice,47 which extended ONP principles of open and efficient access to, and use of, public telecommunications networks to access issues in the digital communications sectors generally (Galperin and Bar 2002). In this case the Commission went far beyond any action that would have been allowed under the Advanced TV Standards Directive. Finally, the principles in the Access Directive also mirror many of the conditions imposed on Kirch and BSkyB a few years earlier. Levy shows that, in competition cases, the conditions often go “further than anything prescribed by the national or EU regulations in force” (Levy 1999, p. 97).
46 BDB (On digital), Commission Notice, (1997) OJ C 291/11.
47 Commission Notice of 31 March 1998 on the application of competition rules to access agreements in the telecommunications sector, (1998) OJ C 265/2 (Access Notice).
The NewsCorp/Telepiú case is yet another revealing example of how regulatory frameworks are forced through the use of competition law. As Caffarra and Coscelli explain, the dilemma for the competition authority was “whether to accept a further “regulated” consolidation through mergers, or to prohibit these mergers and then allow further “unregulated” consolidation as financial losses prompt market exit” (Caffarra and Coscelli 2003, p. 626). Notably, the Commission for the first time appointed a National Regulatory Authority (NRA) to be responsible for the implementation of the remedial conditions (Garzaniti and Liberatore 2004). Thus, NRAs might increasingly be responsible both for the implementation of remedies under the Commission’s merger decisions and for the remedies imposed according to the regulatory framework for electronic communications. Both sets of remedies are likely to overlap, especially as far as access regimes are concerned. Competition law concepts and methodologies have also been imported into the sphere of regulatory practice. The new regulatory framework for electronic communications and services is paradigmatic in this respect. Indeed, the regulation and competition disciplines are partially converging and developing a ‘common language’. The interplay is dynamic, interesting and in many ways positive. In this convergence, however, competition law dominates. This invasion of the regulatory sphere by competition law is almost unavoidable, especially if EU competences in the field of broadcasting remain as weak as they are now (which, in the light of the conflicting debates surrounding the upcoming revision of the Television Without Frontiers Directive, is more than likely). Such primacy of competition law, both as an instrument and as an inspiration for regulatory models, is likely to continue and even intensify in the future. In this context, there is a risk that certain ‘non-competition’ interests will be disregarded unless they are taken on board by competition law authorities.

Limits of competition law: balancing non-economic media-specific considerations

The analysis above shows that competition law has been the major tool in the hands of the European authorities, who have certainly used it extensively in the audiovisual field. The Commission seems aware of the specificities of media markets, as the Eurovision saga clearly demonstrates. In cases such as Premiere the influence of media pluralism concerns on the final decision seems undeniable. However, the Commission never expressly referred to any kind of public interest, and such concerns certainly did not appear as a determining factor in the analysis. Overall, and despite initial attempts to integrate non-economic media-specific considerations into competition law assessment, in the great majority of cases (i.e., UEFA, ARD, NewsCorp/Telepiú) these have been only marginally influential. ‘Pluralism’, ‘plurality’, ‘public service’ and ‘diversity’ are words that are hardly ever mentioned in most of the Commission’s decisions, when not totally absent. Operations have been prohibited when they risked harming ‘consumers’, but a reduction in the levels of pluralism and diversity was not taken as evidence of harm.
Still, one cannot be certain that such considerations are ignored. Indeed, not all considerations are written down in the final text of a decision. For example, while the formal UEFA decision is confined to Article 81(1) and did not go into the examination of wider ‘cultural factors’ that could be considered under Article 81(3), Mr Monti has been quoted as observing that the decision “reflects the Commission’s respect of the specific characteristics of sport and of its cultural and social function”.48 In fact, it is rather hard to believe that any competition authority, national or European, can carry out a merger analysis in the media sector without being aware of pluralism-related issues. One could argue that it is up to the national regulator to impose further measures directed at the maintenance of media pluralism in the markets where it might be threatened. The Merger Regulation explicitly contemplates this possibility in Article 21(4).49 This would be a valid argument were it not for the fact that Article 21(4) has hardly ever been used and that, in any case, its application is national in scope. Another concern with current competition approaches to media mergers is that, as exemplified in the Italian case, there is an increasing tendency to impose behavioural conditions. One does not need to be a visionary to predict such conflicts: shortly after the merger was cleared, disputes with cable operators over premium content started in Italy.

Limits of EU regulation: Incomplete approach to access

The recognised insufficiency of competition law has always been one of the principal justifications for regulatory intervention, which is mainly national in scope. There were attempts to come up with a European-wide approach to pluralism challenges in 1996 and 1997, but the Commission’s draft proposals for a directive on media ownership were fiercely opposed by industry and by Member States, particularly on account of the Commission’s disputed regulatory powers. As a result, there is no binding European law on media concentration, and the Commission has no specific instruments with which to combat the threat to pluralism posed by the development of dominant sources of opinion. Therefore, it should come as no surprise that competition and access rules have been used indirectly as a tool to address an existing regulatory need. The access regulation in place has nevertheless been only partially effective. At the network level, ‘must carry’ rules are potentially the most efficient tool, but their effectiveness has so far been limited in practice because they applied only to cable. The recent extension to other technologies is a positive way to ensure general access to public service offerings in all distribution networks. At the content level, regulation has addressed access to major events (primarily sports), something that has little or nothing to do with pluralism, but has failed to deal with other content-related bottlenecks that have a major impact on the delivery of a pluralistic offer. Finally, at the platform level, the regulatory trust in commercial negotiation has been disappointing, and markets have ostensibly failed to reach standardisation or interoperability solutions. The current situation creates particular pluralism threats because it is precisely at this level that the most serious bottlenecks arise.

48 IP/01/583, 20 April 2001.
49 Article 21(4) (ex Article 21(3)) of the Merger Regulation (Council Regulation (EC) No 139/2004 of 20 January 2004 on the control of concentrations between undertakings, OJ L 24/1) allows Member States to take measures to protect legitimate interests, among which the “plurality of the media”.
Conclusion

The response of the European authorities to media pluralism challenges in digital broadcasting markets has been the implementation of detailed access regulation and reliance on competition law to keep markets open. Access obligations have contributed to the maintenance of competition and, indirectly, to the protection of pluralism. From the analysis above it emerges that pluralism goals have so far been best achieved in two ways: (i) through access regulation at the level of networks, and (ii) through competition law intervention at the level of platforms. However, in the competition law sphere pluralism was the consequence rather than the goal, and there remains a perturbing lack of transparency in the Commission’s practice. Similarly, in the regulatory sphere the impact of access regulation has been oriented more towards the achievement of economic policy objectives than towards potential contributions to information policy.

It is suggested that the boundaries of EU intervention in the media sector should be re-examined. Firstly, the development of a media-specific competition law that acknowledges the peculiarities of this sector could be explored. This is not inconsistent with the competition provisions of the Treaty of Rome, which provides scope for the consideration of other policy objectives within the application of EC competition rules (Gerber 1998; Monti 2002), and it has been sanctioned by the case law of the Court of First Instance and the European Court of Justice.50 Approaches to market definition, market power (as opinion power) and efficiencies might change under a pluralism lens.51 Secondly, and in addition to rethinking competition law approaches to media markets, complementary regulatory measures will be required, because there will always be limits to the degree of subjectivity and discretion of a competition authority. This calls for a reconsideration of the scope of EU intervention in broadcasting markets. It is argued that media pluralism is not only a prerogative of the Member States but also, within its fields of competence, a responsibility of the EU. The explicit reference to “freedom and pluralism of the media” in the EU Charter of Fundamental Rights,52 incorporated in the Constitution of the European Union, reinforces this responsibility if the Constitution is approved. In the context of an ever more integrated Union, the issue of pluralism has transcended national borders and will need to be jointly addressed by all Member States. Whilst pluralism issues remain an exclusively national competence, national regulators have often been confronted with spillovers of cross-border broadcasting and have difficulties in maintaining pluralism in their domestic markets, especially when they deal with international media conglomerates whose national affiliations are weak (Feintuck 1997). For the above reasons, the idea of a media-responsive competition law complemented with solid European-based regulatory strategies (including soft law and co-regulatory measures) is worth considering, all the more in the light of the changes that have accompanied the digital revolution of the last decade.

50 Case 26/76 Metro SB-Großmärkte GmbH & Co. KG v Commission (1977) ECR 1875 (Metro I); Case 42/84 Remia and others v Commission (1985) ECR 2545.
51 These questions have been developed by the author in another article (Ariño 2004).
52 See Article 11(2) of the Charter of Fundamental Rights of the European Union.
References

Ariño M (2004) Competition Law and Pluralism in European Digital Broadcasting: Addressing the Gaps. Communications & Strategies 54: 97–128
Bagdikian BH (2004) The Media Monopoly (seventh edition). Beacon Press, Boston
Caffarra C, Coscelli A (2003) Merger to Monopoly: NewsCorp/Telepiú. European Competition Law Review 24(11): 625–627
Compaine BM, Gomery D (2000) Who Owns the Media? Competition and Concentration in the Mass Media Industry (third edition). Lawrence Erlbaum Associates, New Jersey
Cooper M (2003) Media Ownership and Democracy in the Digital Information Age: Promoting Diversity with First Amendment Principles and Market Structure Analysis. Center for Internet and Society, Stanford Law School
Doyle G (1997) From ‘Pluralism’ to ‘Ownership’: Europe’s emergent policy on Media Concentrations navigates the doldrums. Journal of Information, Law and Technology 3
Einstein M (2004) Economics, ownership and the FCC. Lawrence Erlbaum Associates, New Jersey
Feintuck M (1997) Regulating the Media Revolution: In Search of the Public Interest. Journal of Information, Law and Technology 3
Galperin H, Bar F (2002) The Regulation of Interactive TV in the US and the European Union. Federal Communications Law Journal 55(1): 61–84
Garzaniti L, Liberatore F (2004) Recent Developments in the European Commission’s Practice in the Communications Sector: Part 3. European Competition Law Review 25(5): 286–298
Gerber DJ (1998) Law and Competition in Twentieth Century Europe: Protecting Prometheus. Clarendon Press, Oxford
Gibbons T (2000) Pluralism, Guidance and the New Media. In: Marsden C (ed) Regulating the Global Information Society. Routledge, London, pp 304–315
Gibbons T (2004) Control over Technical Bottlenecks: A Case for Media Ownership Law? In: Regulating Access to Digital Television. IRIS Special, European Audiovisual Observatory, Strasbourg, pp 59–67
Harcourt A (1998) EU media ownership regulation: conflict over the definition of alternatives. Journal of Common Market Studies 36(3): 369–389
Helberger N (2002) Access to technical bottleneck facilities: The new European approach. Communications & Strategies 46, 2nd quarter
Helberger N (2004) Technical Bottlenecks in the Hands of Vertically Integrated Dominant Players: a Problem or the Driver Behind the Knowledge-based Economy? In: Regulating Access to Digital Television. IRIS Special, European Audiovisual Observatory, Strasbourg, pp 23–38
Levy DA (1999) Europe’s Digital Revolution: Broadcasting Regulation, the EU and the Nation State. Routledge, London
McPhail T (2002) Global Communication. Allyn and Bacon, Boston
Meltz M (1999) Hand It Over: Eurovision, Exclusive EU Sports Broadcasting Rights, and the Article 85(3) Exemption. Boston College International and Comparative Law Review 23 (Winter): 105–120
Monti G (2002) Article 81 EC and Public Policy. Common Market Law Review 39(5): 1057–1099
Näränen P (2002) European Digital Television: Future Regulatory Dilemmas. Javnost – The Public 9(4): 19–34
Picard RG (1998) Media Concentration, Economics and Regulation. In: Graber D, McQuail D, Norris P (eds) The Politics of News, The News of Politics. Congressional Quarterly Press, Washington, DC, pp 193–217
Rosenthal M (2003) Open Access from the EU Perspective. International Journal of Communications Law and Policy 7, Winter
Temple Lang J (1998) Media, Multimedia and European Community Antitrust Law. Fordham International Law Journal 21(4): 1296–1381
Van Cuilenburg J, McQuail D (2003) Media Policy Paradigm Shifts: Towards a New Communications Policy Paradigm. European Journal of Communication 18(2): 181–207
Veljanovski C (1999) Competitive regulation of digital pay TV. In: Grayston J (ed) European Economics & Law. Palladian Law, Bembridge, pp 53–85
Weatherill S (2003) Fair Play Please: Recent Developments in the Application of EC Law to Sport. Common Market Law Review 40(1): 51–93
New Perspectives on Mobile Service Development

Jan Edelmann1,*, Jouni Koivuniemi2,*, Fredrik Hacklin3,**, Richard Stevens4,***

* Lappeenranta University of Technology, Finland
** Swiss Federal Institute of Technology (ETH), Zurich, Switzerland
*** Gruppo Formula Spa, Italy
Abstract

The future development of mobile applications and services has to take user needs into account more carefully than before. Investments in mobile telephony that are made from the point of view of technology disregard the fact that users are not interested in technologies; they are interested in possibilities to fulfil their needs. The main problem for the ICT industry in mobility is that its business rests on a technology-oriented view, an inadequate understanding of users’ needs, and an isolated, non-interoperable application and service perspective. The main aim of this article is to set out the existing problems in the development of mobile applications and services explicitly, and to illustrate them with examples on the route to our vision of taking user needs better into account on the basis of users’ roles and tasks. We conjecture that a comprehensive view of user role-specific needs would beget a new arena in mobile business: the Integrated Mobile Applications and Services industry. The other aspect is that user needs are only one part of the triple play of business, users, and technology. Intense co-operation between the players in the industry is required to jointly create the standards and platforms that enable Integrated Mobile Applications and Services. Technological development requires platforms which enable the bundling of services and applications as easily as Lego bricks can be attached to different platforms.
1 E-mail: [email protected]
2 E-mail: [email protected]
3 E-mail: [email protected]
4 E-mail: [email protected]
Introduction

Many of the business models practised in the mobile industry have become, or will soon become, obsolete. At the same time, it seems that companies’ interest in integrating users and their needs more tightly into development processes in digital business has increased. However, we claim that the industry is still confronted with a fragmented, biased and defective view of mobile users’ needs, and with the absence of effective user integration mechanisms. We suggest that business models which explicitly address mobility, strong user-centricity, open collaborative networks and a sound technological basis will be the most successful and sustainable in the long run. A sound triple play between the technology, business, and user perspectives in future mobile applications and services in Europe is needed.

The current mobile business is based on different kinds of services and applications: phone calls, text and picture messages, icons and ring tones, location-based services, and mobile device environments, to mention just some examples. In recent years the industry has noticeably concentrated on so-called rich media (including videos, audio, and pictures), which it has anticipated to be the next money-spinner. From the user’s point of view it might not make any sense to concentrate only on a specific communication method or medium, because the real user needs probably lie somewhere else. The overall need of the user may not be to look at a picture of the communication partner; the need is, for instance, to check whether the children really are at home or somewhere else. Seeing someone’s picture can be part of a full service solution. In other words, present mobile services and applications very often function separately and distinctly apart from one another; they often fulfil only one part of the overall need, whereas a combination of them could fulfil the need entirely. A good example: your device knows where you are, but it cannot connect that information to the fact that you should be somewhere else according to your calendar.

In our vision, mobile technology enables the integration and bundling of services and applications (S&A) in a way that was not previously feasible. Applications are, for instance, able to combine information from the user’s calendar (time and desired destination), current location (GPS), and public transport timetables. The vision leads to more user-oriented business models and to changes in the current value network, in which mobile services and applications have been seen very differently. The sustainability of mobile business models in the future is strongly based on companies’ ability to combine the user’s point of view with the right set of technological building blocks, in close collaboration with a renewing and re-orchestrating value network. In the opposite scenario there is a danger that the use, or the speed of adoption, of mobile services and applications will decline because of high prices, the increasing number of variants, and the difficulty, complexity and incompatibility of usage. Workable interconnections and interoperability between different kinds of systems, operating systems, and services are not advancing, and users are not offered appropriate applications and access while “being mobile” and “roaming” in social
communities. A classic example is WAP, whose first releases were too difficult to use, too unreliable in their technological basis, and too expensive to use (in both money and time). WAP was expected to be a killer application, but it turned out to be mere hype. This does not mean, however, that WAP and its successors cannot be part of future applications and services; it is simply a negative scenario that should be avoided.

The UMTS Forum (2001) listed 62 different broad service concepts and applications and expected that most service concepts would benefit mobile service providers through increased traffic. It is noteworthy that the Forum approached the future business with separate services and applications, without taking into account the possibility of integration as a new business model. By contrast, the Nordic ICT industry, e.g., Nokia, TeliaSonera, and TietoEnator (in the Nets Seminar, 2003), addressed integration at different technological levels, expecting convergence across technological layers. However, they forgot their customers. To reach the vision, there have to be common interfaces in the different technological layers (e.g., a web services architecture) to link applications together. The industry needs platforms and standards which enable an easy configuration and installation of the functions and components of services and applications (see e.g. Budde 2004). The whole value system (the industry) needs to collaborate in order to build a platform that allows the development of bundles of mobile applications and services, called Integrated Mobile Applications and Services (IMAS).

To summarise, we claim that existing applications and services are mainly targeted at fulfilling particular needs in particular areas or environments, without a comprehensive understanding of the real user needs. Mobile applications and services are designed from the technological perspective, not the user’s perspective. From the user’s point of view this has meant applications and services that are often difficult and arduous to use (a multitude of applications for the same purpose, different terminals and devices with limited features), and an isolated set of different network and access technologies with severe problems of interoperability and openness. Understanding the user, and the new unexpected uses of technology, is a major challenge for companies in the future mobile business. The question arises: based on the different roles of people, how are we able to model their needs and to form future applications and services that fulfil them as comprehensive constructions?
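One way to picture the envisaged common interfaces is as a thin contract layer behind which calendar, positioning and timetable components can be swapped. The sketch below is purely illustrative: it is written in Python, and all interface and service names (CalendarService, LocationService, TimetableService, departure_alert) are invented here rather than drawn from any platform discussed in this chapter.

```python
# Minimal sketch of service composition behind common interfaces. All
# names here (CalendarService, LocationService, TimetableService,
# departure_alert) are hypothetical illustrations, not a prescribed API.
from abc import ABC, abstractmethod
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class Appointment:
    start: datetime
    place: str


class CalendarService(ABC):
    @abstractmethod
    def next_appointment(self) -> Appointment: ...


class LocationService(ABC):
    @abstractmethod
    def current_place(self) -> str: ...


class TimetableService(ABC):
    @abstractmethod
    def travel_time(self, origin: str, destination: str) -> timedelta: ...


def departure_alert(cal: CalendarService, loc: LocationService,
                    tt: TimetableService) -> str:
    """Bundle three separate services into one user-oriented answer."""
    appt = cal.next_appointment()
    origin = loc.current_place()
    journey = tt.travel_time(origin, appt.place)
    leave_at = appt.start - journey
    return f"Leave {origin} at {leave_at:%H:%M} to reach {appt.place} on time."
```

The point of the sketch is not the code itself but that each building block sits behind a stable, replaceable interface, which is precisely the property a bundling platform for Integrated Mobile Applications and Services would have to standardise.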
The user needs based on roles

Existing business divides customers basically into two categories: business users and consumers. In reality there are no such groups; they merely express some of our roles in society. People are either working or off work, but an overlap of environments is possible as well, e.g., when working at home. Nowadays, mobile services, devices, and applications are targeted at these groups for reasons of price differentiation. Very often, business users are offered mobile devices by their employer, and depending on the firm’s policies the user is
allowed or not to use those same devices privately. It is not uncommon for people to carry several mobile phones because of the devices’ lack of versatility. There are also services and applications that are separate and overlapping even though the same person uses them for the same purposes (e.g., multiple calendars on different devices): the same or interlinked information has to be entered or modified several times in different applications and services. From the business point of view, the industry is happy to sell things over and over again; from the users’ point of view, what is needed is a mixture of applications and services (full service solutions) capable of maintaining the users’ access to required services and information as they roam between different communities and environments. As the number of services and applications increases, the overlap escalates, and the competitive business position can be improved by the full-service way of thinking.

The fulfilment of users’ needs, not the development of technologies, should be the dominant criterion, even though technological development is important for fulfilling user needs. Users act as members of different kinds of environments (Fig. 1), where they carry out tasks and activities in different roles (e.g., a craftsman at work, a mother at home and a scout master at leisure). Users’ needs, which can be either manifest or latent, are derived from users’ roles and role-specific tasks. A comprehensive understanding of user needs is crucial in order to develop full service solutions. To foster the emergence of easy-to-use Integrated Mobile Applications and Services, it is essential to define users’ needs. A thorough analysis of user environments, communities, contexts and roles is the backbone for understanding users’ present and future needs.
Fig. 1. Different user environments and overlapping need areas (modified from Edelmann and Koivuniemi 2004a)
The user can have one identity but many roles during the day in different environments. To fulfil the needs, service suppliers have to understand how a person could easily use mobile services and applications for everyday tasks in different roles. The concept of the multi-channel communication model may illuminate the
problem. In two different roles, people have different needs based on the external requirements. They might need different types of communication channels, even at the same time, and specific information in specific situations and moments, in different forms. An understanding of different user needs is achieved by looking at the user’s identity as an integration of different roles, role-specific needs, and the tasks attached to those roles. Mobile users want to roam effortlessly and seamlessly between communities, devices, networks, applications, and services. The identity of the user does not change when the user switches from one role and one environment to another, even if different identifications or aliases may be used. The different roles that an individual has in everyday life can be defined through a set of tasks derived from the person’s voluntary and obligatory roles: roles are a result of the person’s own choices or of what society has imposed on them. Based on these roles, a role map can be created in which each role has role-specific tasks. A combination of these roles and tasks is an ontological approach to an individual’s optional and compulsory needs. Fig. 2 shows the relationship between different roles, presenting a possible role combination as an example.
Fig. 2. Example of role combinations
The needs of users are defined in terms of the context, the environment, the community, and the role of the user. The needs might be: convenience, efficiency, value for money, image, social contact, knowledge, entertainment, and safety.
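As a purely hypothetical illustration of the role map just described (the concrete roles, tasks and need annotations below are invented for the example, reusing the need categories listed above), such a map can be represented as a simple mapping from roles to role-specific tasks:

```python
# Hypothetical role map: each role carries role-specific tasks, and each
# task is annotated with the needs it implies (using the need categories
# listed above). The concrete entries are invented for illustration.
role_map = {
    "craftsman at work": {
        "order materials on site": ["efficiency", "convenience"],
        "report hours to the employer": ["efficiency"],
    },
    "mother at home": {
        "check whether the children are at home": ["safety", "social contact"],
        "plan the family calendar": ["convenience"],
    },
    "scout master at leisure": {
        "coordinate the weekend trip": ["social contact", "knowledge"],
    },
}


def needs_of(person_roles):
    """Aggregate the need areas across all of one person's roles."""
    return sorted({need
                   for role in person_roles
                   for needs in role_map[role].values()
                   for need in needs})


print(needs_of(["craftsman at work", "mother at home"]))
# -> ['convenience', 'efficiency', 'safety', 'social contact']
```

Aggregating needs across a person's roles, as in needs_of above, is exactly where the overlapping need areas of the next paragraph come from: the same need (e.g. convenience) recurs in several environments.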
When one considers the reality of the mobile user, who most probably participates in different communities within different environments, the needs overlap between the different environments. In each environment (Fig. 3) the user has different roles, performs different tasks, and behaves differently. Some examples clearly illustrate the reality of users’ overlapping needs. For instance, situations in which employees are able to perform their work from a home office vividly highlight the overlapping user needs between home and work. When one also takes into account the fact that a mobile user or worker is actively participating in other communities and environments, the situation becomes more complex, since the mobile user has multiple overlapping need areas (areas 1–4 in Fig. 3) derived from the three core environments. To understand user needs more broadly, the sources and factors of needs have to be understood thoroughly. It is not sufficient simply to ask people what they want, because the real needs are not spontaneous; they are profound and tied to causalities that are not directly observable. This calls for multidisciplinary research on user needs concerning Integrated Mobile Applications and Services, in which, e.g., Empathic Design (Leonard-Barton and Rayport 1997) or the Kano model can be used for understanding the user environment and problems.
Fig. 3. Mobile user environments and overlapping user need areas
Fragmented technology base

In theory, today’s engineers are perfecting the full range of network, software and end-user device technologies to ensure that mobile users of
computing devices can access appropriate information wherever they may be located, over the best possible connection, at the most convenient price, and with the lowest possible power consumption. Ideally, users want their applications and data to remain persistent as they roam from one network to the next, according to the Always Best Connected (ABC) paradigm. The existing wireless building blocks rely on the essential physical networks and protocols, security paradigms, devices and operating systems, the identification basis and persistence, location sensing, user applications and services, and fee management. There are too many competing components and technologies, and far too little reliance on open source or on common physical or application layers, as each component strives to gain or protect an advantage in the highly competitive and dynamic wireless market. Interoperability of these components is therefore far from commonplace. Many of these elements were developed on the basis of incompatible technologies, or with comparable technologies but different protocols, to provide commercial alternatives for competitors.

Of course, some of the divergence is logical where different networks satisfy different intended uses. Factors such as coverage, bandwidth, cost, application requirements or protocol standards make it more convenient to use particular broadband (mobile and fixed) networks in a given situation. Competing standards have, however, created seemingly insurmountable boundaries for applications trying to stay connected and retain sessions and data. A marked example is the mobile telephony networks, which force wireless application and service developers to make the difficult choice between the numerous market standards, usually on the basis of where the application or service is to be sold. Depending on coverage, developers must often ensure compliance with multifarious mobile network standards and technologies. As a consequence, the decisions to be made are highly complex. The situation is of course much worse where hardware and software are concerned.

Beyond obvious market pressures, one problem is the clear lack of common standards and the abundance of “official” standardisation bodies. There is a large number of organisations, which is usual for dynamic new markets but unacceptable if the technology is subject to Metcalfe’s law.5 The most important standardisation efforts in the field of mobile systems have been made by the ITU-T, which has published a large number of mobile network recommendations. Another influential standards organisation in the field of mobile systems is the European Telecommunications Standards Institute (ETSI). Other important standardisation organisations are the Japanese RCR and TTC, the North American ANSI, EIA and TIA, and obviously the vigorous Institute of Electrical and Electronics Engineers (IEEE). Work, however, is under way to bring this havoc under control.

What is needed? Essentially, the industry needs to agree (and to some extent is starting to agree) on a standard platform to allow 2.5–3G systems to interoperate seamlessly with, e.g., WLAN to create true seamless roaming.
“Metcalfe's Law is expressed in two general ways: 1) The number of possible crossconnections in a network grows as the square of the number of computers in the network increases. 2) The community value of a network grows as the square of the number of its users increase.” http://searchnetworking.techtarget.com
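To make the arithmetic behind footnote 5 concrete: a network of $n$ nodes allows

$$\binom{n}{2} = \frac{n(n-1)}{2} \approx \frac{n^2}{2}$$

possible cross-connections, so the value ascribed to the network grows roughly quadratically in $n$, while the number of nodes (and typically the cost of the network) grows only linearly.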
To make this happen, we need to continue to work on the technical building blocks, the wireless business model, and seamless roaming in mix-and-match networks. The lack of complete technological development in several areas seems to add to the fragmentation. Security concerns have created a wide variety of vendor- or hardware-specific solutions with different authentication and certification technologies, due to the fact that some of the de facto standards lack certificate-based authentication. Other building blocks, like Mobile IP, which keeps the same IP address for applications, must continue to be developed to ensure that they are completely transparent to the applications running on them. Additional technologies like SIP (Session Initiation Protocol) must also be perfected to make certain that sessions stay alive and that session quality is adjusted as bandwidth decreases when moving from WLAN to 2.5–3G networks.

Current wireless business practice also adds to the fragmentation where pay-as-you-go billing is carried out. Customers would like to have just one bill as they move across networks. This requires a lot of groundwork from all the actors to set up roaming-type agreements such as are already established in the mobile telephony field. Although WLAN operators are already starting to cross over from WLAN into mobile telephony, business models and roaming agreements are a definite pitfall. Additional technical and legal questions may also arise when switching networks, as different tariffs and legal constraints may apply in the various networks. Mobile integration is, however, happening, and fragmentation will disappear as the Mobile Internet becomes real and users accept new applications and services, with “seamless roaming” accompanied by advances in single sign-on, uninterrupted connectivity, always-best-connected schemes, home operator billing, automatic network selection, user profiles, and value-added services and applications.
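How a terminal might weigh coverage, price and power under the “always best connected” paradigm can be sketched as a simple scoring rule. The following toy example uses invented weights and network attributes; it illustrates the selection idea only and does not describe any deployed scheme:

```python
# Toy "always best connected" selector: score each reachable network by
# bandwidth, price and power drain. All weights and figures are
# illustrative assumptions, not measured values.
from dataclasses import dataclass


@dataclass
class Network:
    name: str
    bandwidth_mbps: float  # usable downstream bandwidth
    price_per_mb: float    # tariff charged on this network
    power_mw: float        # radio power drain while attached


def score(n: Network, w_bw: float = 1.0, w_price: float = 50.0,
          w_power: float = 0.01) -> float:
    # Higher bandwidth is good; price and power consumption count against.
    return (w_bw * n.bandwidth_mbps
            - w_price * n.price_per_mb
            - w_power * n.power_mw)


def best_connected(networks):
    return max(networks, key=score)


candidates = [
    Network("WLAN hotspot", bandwidth_mbps=11.0, price_per_mb=0.01, power_mw=300),
    Network("2.5G (GPRS)", bandwidth_mbps=0.05, price_per_mb=0.20, power_mw=150),
    Network("3G (UMTS)", bandwidth_mbps=0.384, price_per_mb=0.10, power_mw=250),
]
print(best_connected(candidates).name)  # -> WLAN hotspot, with these weights
```

With these arbitrary weights the WLAN hotspot wins; shifting the weights, e.g. towards power consumption, changes the choice, which is exactly the policy question that automatic network selection and user profiles have to settle.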
Fragmented business models based on strategic uncertainties

The main reason for the strategic uncertainty in the information and communication technology (ICT) sector today is rooted in the heterogeneous development of technological innovations alongside the converging character of the entire industry. Convergence in the ICT industry has been a visible trend since the late 1990s (Sherif 1998), and the incentive for this industrial movement has been the increasing number of technological intersections between information technology (IT) solutions and telecommunication systems (Backholm and Hacklin 2002). Today, many of these rapidly evolved paradigms are taken for granted (e.g., connecting to the Internet over a circuit-switched telephone line ten years ago versus placing voice calls over packet-switched links today); in fact, both converged technologies are nowadays mostly referred to as one single industry segment: ICT. However, this convergence movement is by far not completed yet, as can particularly be observed in the area
of mobile services (Backholm and Hacklin 2002; Camponovo and Pigneur 2003; Nikolaou et al. 2002). To date, ICT convergence has mostly been perceived and discussed in technological terms, for instance in terms of changes in the protocol stack, standardisation issues and application opportunities in an all-IP based domain (Barnes 2002; Bos and Leroy 2001; Chiang et al. 2002; European Commission 1997; Moyer and Umar 2001; Patel and Dennett 2000; Sherif 1998). The current research challenge, however, is to investigate the effect of convergence on existing and new mobile business models, and to identify the implications for the evolving mobile business services market structure. The relevant questions are, e.g., how to create added value as a mobile network operator in an all-IP based service domain, or what software platform to build new application services on while still guaranteeing strategic sustainability. As no holistic view of the converged mobile service market has yet been created, current players tend to derive their options from a short-term perspective instead of building sustainable strategies for operating in a converged market (Hacklin and Marxt 2003; Hoch et al. 2001). This leads to high fragmentation in the overall business model context, resulting in multiple islands of value creation instead of an orientation towards a more long-term, network-oriented value creation scenario. This deconstruction of established value chains forces single players to open up their operations towards collaboration in business model development, where a clear lack can currently be observed, since firms’ strategic thinking is too often focused on their own core businesses. Considering the value creation potential enabled by a converged industry, one can clearly observe that the leverage of co-operation and collaboration across the user, business and technological domains is by far not fully utilised yet.

The combination of the networked firm activity modelling approach with a classic value-generation based view of firm activity (Amit and Zott 2001; Porter 2001; Porter 1985), i.e., the value network construct, can serve to illustrate and explain value generation and flow within a more complex business transaction framework. A relatively small set of publications in the area of generic value creation in networks exists (Christensen and Rosenbloom 1995; Hinterhuber and Hinterhuber 2002; Kothandaraman and Wilson 2001; Zahn and Foschiani 2002), but recent contributions in particular cover this phenomenon within the ICT sector (Dietl and Royer 2003; Li and Whalley 2002; Lindgren et al. 2002; Talluri et al. 1999; Willcocks and Plant 2003; Zerdick et al. 2000), where the networked approach can serve to estimate the emerging competitive environment in a more appropriate way.

There is a need for innovation research directed towards a profound understanding of the effects of convergence on mobile business models and the implications for management in the ICT sector. This includes theories and models as a basis for the definition of tools, approaches and guidelines at a strategic level, as well as operational implementation patterns for entrepreneurial planning and decision support tools. The outcome of such research should provide methodologies for estimating possible paradigm shifts in the industry, and implications and generic migration strategies for the providers of current mobile solutions in a converging technological
environment. In particular, theories and models serving to explain complex value generation relationships and dependencies in the networked economy, and the implications of phenomena such as convergence within this framework, can provide essential support for this managerial challenge.
The elements of future mobile business

The ongoing business development in the European ICT industry has shown the urgent need to understand and anticipate the individual user in different environments, communities, contexts and roles. The increasing complexity related to both technological and market uncertainties creates an extraordinary need for interdisciplinary research and related joint problem-solving, knowledge creation and integration in order to understand user needs comprehensively, together with the business opportunities related to them. Two significant and discriminating factors determine success and failure in future development: 1) understanding end-user needs, and 2) good internal and external communication between all value network actors (Chroneer and Laurell-Stenlund 2001).

Integration and convergence in mobile services and applications will have an impact on the current mobile value network and make new business models possible. This new kind of environment will force today’s mobile industry to shift to a new technological paradigm. A new kind of mobile service and application base deconstructs the traditionally homogeneous role of, e.g., mobile network operators into different segments, and forces a different value network development than in the traditional mobile telecommunication business. There is no guarantee that old mobile operator business models will survive in their current form. It is necessary to develop different scenarios for the dissolution of traditional operators, service providers, sales companies and others in order to analyse the newly emerging value network. The new type of competitive situation requires the adoption of a strategic options approach (Edelmann et al. 2004b) in which collaborative actions are utilised to attain future growth options. The lack of genuine collaboration in the development of mobile business today hinders positive future development in the mobile industry; companies’ thinking is still too often focused on their everyday business.

Fig. 4 illustrates the idea of Integrated Mobile Service and Application development in the future. When the user needs are known, we are able to take them into the development process, where the fulfilment of the user needs is constrained by the mobile ecosystem’s technological and economic possibilities. The approach in which companies seek killer applications has been argued to be wrong; it is just too technology-oriented. Companies should instead seek killer user experiences (Advani and Choudhury 2001). What needs to be done in order to foster the presented IMAS vision is to aggregate users’ needs from separate areas and to form the underlying business and technological mechanisms and systems that turn those needs into easy-to-use full service solutions for users. A triple play between business, user, and
technology is important to guarantee users’ novel mobile experiences, i.e., killer experiences.
Fig. 4. User needs and technological capabilities
The aforementioned requires a sound theoretical and methodological foundation which takes into account, from a holistic perspective, the mechanisms and aspects needed. The areas to be addressed include: a) user integration techniques and mechanisms, b) application and service integration techniques and mechanisms, c) frameworks, models, and the definition and management of innovation, d) the elaboration of user-centric application and service design and development methodologies, and e) mobile ecosystem and value network mechanisms.
Conclusions

The market for mobile services and applications is rather young. As elaborated above, there are many different kinds of services and applications, but they are usually developed for one particular purpose. There are also many other silos that hinder and slow down the development of mobility. For instance, some special mobile services, such as conference services, are expensive to use simply because they are targeted at business users, which keeps the total number of users low. The inability of services and applications to function together also decreases the demand for, and use of, mobile services. All this has a negative influence on technological development. Changing the concept of the customer or user can remove the barriers between the silos. The development of the whole industry requires fostering co-operation between the market players as well as with the users.

There is no consensus among market players on which business models will be sustainable in the mobile business of the future. However, in Finland some lead companies, such as Nokia, have revealed their vision of future mobile applications and services. They support our vision, according to which the current way of operating
changes, and there will be platforms which enable the bundling of mobile applications and services based on users’ role-specific needs. A comprehensive understanding of user needs requires new kinds of user integration mechanisms that enable the direct participation of users in the development processes of mobile applications and services. Bearing in mind the nature of Integrated Mobile Applications and Services, we claim that in the future the understanding of users’ needs will more generally be an interest of groups of companies rather than a concern of single companies. As the emergence of Integrated Mobile Applications and Services requires a comprehensive understanding of users’ overall needs, those needs can only be fulfilled by converging strategies and a value system comprising several companies. This will lead to fundamental changes in the application and service development processes in terms of collaboration and openness. Re-segmentation of the customer base and re-orchestration of companies’ service offerings may be necessary in order to fulfil users’ overall role-specific needs. The role of technology will remain crucial, but the emphasis will be on the actors’ ability to restructure the technology base and to foster, or quickly adapt to, new kinds of uses of technologies. There is a need for innovation research towards a profound understanding of which areas of Integrated Mobile Applications and Services are the most valuable and beneficial. A rich landscape of IMAS is upon us as soon as the underlying mechanisms in terms of ‘novel user experiences’, converged business models, and bundling technologies have been researched and developed.

Acknowledgements: The paper is based on the outstanding work of the European WISY Society. We want to thank all contributors equally.
References

Advani R, Choudhury K (2001) Making the most of B2C wireless. California Management Review 12(2): 39–49
Amit R, Zott C (2001) Value creation in e-business. Strategic Management Journal 22(6/7): 493–520
Backholm A, Hacklin F (2002) Estimating the 3G convergence effect on the future role of application-layer mobile middleware solutions. In: Proceedings of the 2002 International Conference on Third Generation Wireless and Beyond. World Wireless Congress, San Francisco
Barnes SJ (2002) Under the skin: short-range embedded wireless technology. International Journal of Information Management 22: 165–179
Bos L, Leroy S (2001) Toward an all-IP-based UMTS system architecture. IEEE Network 15(1): 36–45
Budde P (2004) The time for 3G is arriving, but no cheering from the operators. Where is the business case? http://www.gii.co.jp/press/pa19736_en.shtml
Camponovo G, Pigneur Y (2003) Analyzing the m-business landscape. Annals of Telecommunications 58(1/2)
Chiang RCN, Young M, Baker N (2002) Transport of mobile application part signalling over Internet Protocol. IEEE Communications Magazine 40(5): 124–128
Christensen CM, Rosenbloom RS (1995) Explaining the attacker's advantage: technological paradigms, organizational dynamics, and the value network. Research Policy 24: 233–257
Chroneer D, Laurell-Stenlund K (2001) Organizational changes in product development in various process industries. In: Proceedings of the Portland International Conference on Management of Engineering and Technology, PICMET '01
Dietl H, Royer S (2003) Indirekte Netzwerkeffekte und Wertschöpfungsorganisation. Zeitschrift für Betriebswirtschaft 73(4): 407–429
Edelmann J, Koivuniemi J (2004a) Future development of mobile services and applications examined through the real options approach. Telektronikk 2: 48–57
Edelmann J, Kyläheiko K, Laaksonen P, Sandström J (2004b) Facing the future: competitive situation in telecommunications in terms of real options. In: Hosni YA, Khalil T (eds) Internet Economy: Opportunities and Challenges for Development and Developing Regions of the World. Elsevier Science, Amsterdam, pp 69–82
European Commission (1997) Green paper on the convergence of the telecommunications, media and information technology sectors, and the implications for regulation: towards an information society approach. Report, COM(1997)623
Hacklin F, Marxt C (2003) Assessing R&D management strategies for wireless applications in a converging environment. In: Proceedings of the R&D Management Conference 2003, RADMA, July, Manchester, England
Hinterhuber A, Hinterhuber HH (2002) Die Orchestrierung virtueller Wertschöpfungsketten. In: Albach H, Kaluza B, Kersten W (eds) Wertschöpfungsmanagement als Kernkompetenz. Gabler Verlag, Wiesbaden, pp 278–301
Hoch DJ, Seaberg J, Selchert M, Sunder R (2001) The future of e-business services: opportunities beyond boom and bust. High tech practice: Report of McKinsey & Company
Kothandaraman P, Wilson DT (2001) The future of competition: value-creating networks. Industrial Marketing Management 30: 379–389
Leonard-Barton D, Rayport JF (1997) Spark innovation through empathic design. Harvard Business Review 75(6): 102–113
Li F, Whalley J (2002) Deconstruction of the telecommunications industry: from value chains to value networks. Telecommunications Policy 26(9/10): 451–472
Lindgren M, Jedbratt J, Svensson E (2002) Beyond Mobile: People, Communications and Marketing in a Mobilized World. Palgrave, New York
Moyer S, Umar A (2001) The impact of network convergence on telecommunications software. IEEE Communications Magazine 39(1): 78–84
Nikolaou NA, Vaxevanakis KG, Maniatis SI, Venieris IS, Zervos NA (2002) Wireless convergence architecture: a case study using GSM and wireless LAN. Mobile Networks and Applications 7(4): 259–267
Patel G, Dennett S (2000) The 3GPP and 3GPP2 movements towards an all-IP mobile network. IEEE Personal Communications 7: 62–66
Porter ME (2001) Strategy and the Internet. Harvard Business Review 3: 63–78
Porter ME (1985) Competitive Advantage: Creating and Sustaining Superior Performance. The Free Press, New York
Sherif MH (1998) Convergence: a new perspective for standards. IEEE Communications Magazine 36(1): 110–111
308
Jan Edelmann, Jouni Koivuniemi, Fredrik Hacklin, Richard Stevens
Talluri S, Baker RC, Sarkis J (1999) A framework for designing efficient value chain networks. International Journal of Production Economics 62: 133–144 UMTS Forum (2001) The UMTS third generation market - phase II: Structuring the service revenue opportunities. Report No. 13 Willcocks LP, Plant R (2003) How corporations e-source: from business technology projects to value networks. Information Systems Frontiers 5(2): 175–193 Zahn E, Foschiani S (2002) Wertgenerierung in Netzwerken. In: Albach H, Kaluza B, Kestern W (eds) Wertschöpfungsmanagement als Kernkompetenz. Gabler Verlag, Wiesbaden Zerdick A, Picot A, Schrape K, Artopé A, Goldhammer K, Lange UT, Vierkant E, LópezEscobar E, Silverstone R (2000) E-conomics - Strategies for the digital marketplace. Heidelberg: Springer, Berlin European Communication Council Report, pp 171–182
“I-Mode” in Japan: How to Explain Its Development
Arnd Weber, Bernd Wingert
Forschungszentrum Karlsruhe, ITAS (Institute for Technology Assessment and Systems Analysis), Germany
E-mail: [email protected], [email protected]
Abstract
This paper presents first results of a project about the development of i-mode and tries to draw conclusions on what a German or European reader may learn from the i-mode case. The literature about i-mode and the many success factors it discusses, including cultural factors, is reviewed. One of the aims of the project is to explore whether i-mode's success was co-determined by cultural factors. As numerous interviews with experts and actors showed, i-mode appears to be influenced at least in part by such factors (e.g. quality orientation). However, the fierce competition at all levels (including radio infrastructure) is the most prominent economic factor. The project has been supported by the German Federal Ministry of Education and Research.
Introduction
This paper presents first results of a research project conducted by the Institute for Technology Assessment and Systems Analysis, Research Centre Karlsruhe, Germany. The project has been supported by the German Federal Ministry of Education and Research under the programme “Innovation and Technology Analysis” (ITA); its title is “Cultural Factors in Technical Development: ‘i-mode’ in Japan and Germany” (Kulturelle Faktoren in der technischen Entwicklung: ‘i-mode’ in Japan und Deutschland), BMBF project no. 16I1514. The aim of the project is to understand the Japanese i-mode success, especially the possible impact of any cultural factors. This will serve as the basis for “lessons learnt” applicable to the further development of mobile communication in Germany and Europe.
The paper provides, in Section 2, a short description of the gap between Europe and Japan. Section 3 reviews the state of research on the development of the success factors. Section 4 identifies a number of open research issues. Subsequently, we discuss these issues and present findings from our interviews (Section 5). We end with Section 6, addressing how Europe may catch up. The paper is not meant as a systematic reconstruction of the complete i-mode development process but as a first presentation of some new insights (see Box 1 for a short description of i-mode, and Box 2 for a chronology of key events).
We wish to express our sincere thanks to the experts we interviewed in Europe and in Japan (see the list at the end). Some of the interviews took place during the “Mobile Intelligence Tour” to Japan, organised by Daniel Scuka (Wireless Watch Japan) and Jan Hess (Mobile Economy), from April 12 to 16, 2004 (Hess 2004). We also wish to thank those who commented on earlier drafts, in particular Sandra Baron, Kerstin Cuhls, Keiichi Enoki, Jeffrey Funk, Michael Haas, Jan Hess, Sven Lindmark, Michael Rader, Ulrich Riehm, and Ray Tsuchiyama. Furthermore, we want to thank Asae Yokoi, who helped us decipher Japanese writing and thinking, as well as Andrea Hoffmann and Aki Sugaya, who participated in the interviews conducted in Japanese.
Europe falling behind Japan
Europe is behind Japan in mobile data communications in several respects.
Regarding 2.5G
The mobile internet is frequently used: In Japan, mobile data services took off quickly, starting with the launch of i-mode in 1999. In 2004, there were about 68 million mobile internet subscribers out of a total of about 88 million mobile phone subscribers (i-mode 40 million, EZweb 15 million, Vodafone Live 13 million). Mobile mail services are cheap, at about 1-4 Yen per e-mail, or about 1-4 €-cents, as opposed to a typical 19 €-cents, e.g., for an SMS in Germany. This increases use. Furthermore, as e-mails may contain URLs that can be clicked on (see Fig. 1), use of the mobile internet increases too. Therefore, revenues from packet transmission also go up.
Official and unofficial websites flourish: The content available from the operators’ portals, the so-called official content, is either sold for a monthly fee of, e.g., 300 Yen, or offered free of charge. Content providers obtain about 90% of the fee; the rest is charged by the operator as a commission for the bill collection service. Thus, official content providers earn a large share of the revenues (Vodafone Europe, by contrast, charged about 50% in 2004). This has led to several thousand content providers on the official portals. In addition, unofficial websites have emerged; their number has been estimated to have reached 80,000. Subscribers use these official and unofficial sites for downloading ringtones and games, for shopping and banking, and for obtaining many other types of information.
Fig. 1. i-mode internet e-mail on a W-CDMA phone, with a URL highlighted that can be clicked to make a connection. NTT DoCoMo FOMA handset with 2.2” QVGA screen, 2004 (photo: Arnd Weber)
Cameras were integrated already in 2000: Another key point of progress is that Japanese operators integrated cameras into their phones as early as 2000 (see Fig. 2, from J-Phone). In 2004, integrated cameras with a resolution of 1-2 megapixels were common, optical zoom lenses were being introduced, and resolutions of 3 and 4 megapixels were being prepared. Japanese manufacturers such as Sharp have started to export such high-end phones to Europe.
Vodafone copied many characteristics of the NTT DoCoMo i-mode business model: “We moved from the European model quite considerably to a model closer to DoCoMo’s”, as a representative of Vodafone put it [interview 19]. A key element is that the operator defines the phone’s user interface, cf. Vodafone Live. Nokia rejected such demands, and now that more and more operators aim at defining the user interface, this has become an issue for Nokia.
Box 1: Key characteristics of i-mode
i-mode is a mobile data service launched by NTT DoCoMo in 1999. Key characteristics are charges of 0.3 Yen, about 0.3 €-cent, per packet of 128 bytes (fees are lower in 3G). Access to the service is simply by pressing the special “i” button. Essential services are i-mode mail, the official websites (portal), as well as unofficial websites on the internet.
i-mode mail is compatible with internet mail. It can contain URLs, which means that users can access official and unofficial content with a single click. URLs can, of course, also be entered using the keypad. Furthermore, search engines make access to sites easy. As i-mode uses internet standards, everybody with access to a server and some knowledge of HTML can set up a site by programming in cHTML/iHTML code (basically by using no frames and only small pictures).
Using i-mode is very convenient, as one can easily exchange e-mails and URLs between PCs and mobile phones. One can even access intranets. From the beginning in 1999, phones could be left “always on”, as only packets were charged, not time. This made it easy for users to continue viewing data whenever convenient.
The i-mode service can be accessed using what is basically (in terms of user interface, size and weight) a normal mobile phone. The “clamshell” design of many i-mode phones makes it easy to read relatively long mails or articles, as it leaves relatively large space for a screen. Screen resolution has been improved over time. There is no need to configure the service.
Images in Box 1: access to the English menu from the Japanese i-mode menu, and a folding NEC i-mode phone (both of 2000).
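The packet-based charging described in Box 1 can be made concrete with a little arithmetic. The sketch below is only an illustration, not from the source: the payload sizes are hypothetical, and we assume billing per started 128-byte packet.

```python
import math

# i-mode charging as described in Box 1: 0.3 Yen per 128-byte packet.
# Assumptions (ours, not from the source): billing per *started*
# packet, and the example payload sizes below.

YEN_PER_PACKET = 0.3
PACKET_BYTES = 128

def charge_yen(payload_bytes: int) -> float:
    """Charge for one transfer, billed per started packet."""
    packets = math.ceil(payload_bytes / PACKET_BYTES)
    return packets * YEN_PER_PACKET

for label, size in [("short mail, 500 bytes", 500),
                    ("longer mail, 2 KB", 2_000),
                    ("small cHTML page, 10 KB", 10_000)]:
    print(f"{label}: {charge_yen(size):.1f} Yen")
```

On these assumptions a plain-text mail costs roughly 1-5 Yen, which is consistent with the 1-4 Yen per e-mail quoted above, and far below the typical 19 €-cents for an SMS in Germany.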
Fig. 2. The first camera phone from J-Phone (Sharp 2000)
Regarding 3G
Japan is leading in 3G: W-CDMA phones have been running since 2001, about 2.5 years earlier than, e.g., in Germany. During this time, they have been improved significantly (see Fig. 1). As of May 2004, there are more than 4 million subscribers using W-CDMA, essentially NTT DoCoMo customers. In addition, another ITU 3G standard, cdma2000 1X, is being deployed even more successfully by KDDI, with about 14 million subscribers (Telecommunications Carriers Association n.d.).
Mobile music sales comparable to iTunes: Japanese record companies have been selling high-quality music (realtones) to mobile phones since 2002 (“Chaku-uta”). In 2004, the company Label Mobile has been selling about 8 million of these shortened songs per month, used as ring tones, for about 100 Yen each (80 €-cents); compare this to the 7 million full songs Apple is selling each month in the US, to a far larger population (Spiegel Online 2004).
World record ARPU: Japanese operators have world record ARPUs (average revenue per user) of about €60, and higher for 3G. Part of the ARPU originates from the monthly fee, which is relatively high (to pay for the expensive handsets). Data ARPUs are in the range of €15, as subscribers use internet services a lot, e.g., for downloading games and music. Customers of NTT DoCoMo’s FOMA service (Freedom of Mobile Multimedia Access, based on W-CDMA) even pay a data ARPU of about €25.
TV integrated since 2003: In Japan, Vodafone is selling handsets with an (analogue) TV tuner; the company is thus already gaining valuable experience ahead of the introduction of digital TV. Qualcomm, the supplier of competitor KDDI, is considering the integration of a different technology for video transmission, i.e. transmitting videos when spare capacity is available in the network [interview 10]. Users can then watch clips whenever they wish to. Of course, this is not like TV, but it makes viewing easy in spare moments. NTT DoCoMo is planning for a third alternative, using handsets with digital TV but with a hard disk drive to store videos. This would also solve the problem that one may wish to view a broadcast programme, but maybe not at the time it is being distributed.
Regarding flatrates
Wireless data flatrates: Japan has flatrates for wireless access to the internet. Full access, e.g. for laptop computers, is provided on the underused old (narrowband) PHS network for 5,000 Yen, i.e. about €40. Restricted access, only for use with mobile phones, has been provided by KDDI, using Qualcomm CDMA technology and pricing the flatrate at 4,200 Yen. NTT DoCoMo had to follow in 2004.
Regarding beyond 3G
Wireless broadband being introduced: Japan appears to be already moving beyond the basic 3G standards. Since 2003, KDDI has been using cdma2000 1X EV-DO, with speeds of up to 2.4 Mbps. NTT DoCoMo is planning the release of HSDPA in 2005, with initial peak data rates of 3.6 Mbps (maximum 14 Mbps). Vodafone has announced a trial of FLASH-OFDM with up to 1.5 Mbps. eAccess has been awarded a trial license for TD-SCDMA (MC). Such infrastructures have also been called “mobile DSL”.
Potential for wireless VoIP: High-speed wireless data links, e.g. FLASH-OFDM and TD-SCDMA (MC), may have the potential for wireless voice over IP. In Japan, supported by its fixed broadband infrastructure, there are already 4 million wired voice over IP users. The emerging wireless broadband infrastructure is threatening the current profits made from wireless voice, but from the consumer point of view this is another area in which Japan might show superiority.
In sum, it turns out that there are many areas in which Japan is more advanced than other countries. Besides learning about this advanced position of Japan, there is another issue we wanted to learn about: whether there are any cultural factors influencing development. As the key example of leadership is that Japan developed the mobile data market well beyond messaging, the first widely used such service, i-mode, became the case for our research.
State of research regarding i-mode
A large number of analyses of i-mode, scientific as well as more journalistic ones, have been conducted. We wish to provide a short overview of their main findings. With hindsight gained from our interviews, we find it wise to include certain references which at first glance do not appear to be ‘scientific’, but which point to relevant factors. The kind of factors to be considered is therefore a mixed bag, stemming both from “impressionistic research” or stories and from more systematic approaches like those of Funk (Funk 2000, 2001, 2002), Fransman (Fransman 2000), Bohlin and Lindmark (Bohlin et al. 2003; Lindmark 2002; Lindmark and Bohlin 2003), and Haas (Haas 2003; Haas and Waldenberger 2003). Looking back on “old research”, it appears that the early article by Ray Tsuchiyama (2000) was very well informed. Early analyses also included the 2001 diploma thesis by Devine and Holmqvist which, again with hindsight, appears to be well done (Devine and Holmqvist 2001).
In the list to follow, we roughly differentiate between “key success factors” (having to do mostly with business models and marketing strategies) and “other determining factors”, including market and regulatory conditions as well as cultural factors. We do not aim at an explicit innovation model which systematically distinguishes between actors and their environment, between levels of analysis (individual, organisation, etc.) or kinds of interdependencies (supporting conditions, competitors, structures, etc.). The aim of this paper is in the first instance to describe this background of critical factors discussed in the literature and to report the new insights we gained in the interviews. More systematic modelling is of course needed.
Box 2: Key events
1988: NTT launches Hicap analogue system
1989: DDI launches TACS analogue system
1992: NTT Mobile Communications founded
1993: NTT Mobile Communications launches the digital PDC system
1997: DDI starts cdmaOne
1997: NTT Mobile Communications launches the DoPa data service
1997: Kei-Ichi Enoki, Mari Matsunaga, Takeshi Natsuno start work on i-mode
1997: J-Phone starts SkyWalker messaging service
1998: DDI and IDO announce to adopt cdma2000
1999: NTT Mobile Communications launches i-mode (February)
2000: NTT Mobile Communications changes its name to NTT DoCoMo
2000: DDI and IDO form KDDI
2000: J-Phone launches camera handsets
2001: NTT DoCoMo launches W-CDMA
2002: KPN/E-Plus start i-mode service in Germany
2002: KDDI launches cdma2000 1X
2002: Chaku-uta music service launched
2004: eAccess obtains trial license for TD-SCDMA (MC)
2004: Vodafone announces trial of FLASH-OFDM
Numerous studies and articles pointed out so-called success factors, as these are of crucial importance if one wishes to repeat the success elsewhere. We list the success factors and reference those who “discovered” them and first made them available to English-speaking readers (thus indicating that many factors may have been mentioned in the Japanese business press earlier). Often quoted success factors in the i-mode development are:
• Low fees for data and content (Funk 2000; Kunii and Baker 2000; Stiehler and Wichmann 2000; Baker and Megler 2001; Devine and Holmqvist 2001).
• Provision of content by companies other than the operator, and openness for unofficial sites (Shapiro 2000; Stiehler and Wichmann 2000; Devine and Holmqvist 2001; Baker and Megler 2001; Funk 2001). It was also explicitly stated that DoCoMo thus stimulated competition between content providers (Haas 2003).
• Provision of services for the general population, in particular youth and women (e.g., entertainment), not only for business users (Funk 2000, 2001; Shapiro 2000; Baker and Megler 2001).
• Synchronisation of the whole value chain, with ease of use and appropriate handsets (e.g., the i-mode button) (Funk 2000, 2001; Shapiro 2000; Coulmas 2003; Oertel et al. 2001).
• A supportive business model, e.g. with high revenue shares for official content providers (Devine and Holmqvist 2001).
• The service not being marketed as “internet” (Devine and Holmqvist 2001; Baker and Megler 2001).
In the literature there are at least two success factors of debatable status, because the success cannot be attributed to them with certainty:
• cHTML: the competing EZweb used WAP but also became successful (Funk 2000, 2001), though cHTML, essentially a subset of HTML, made it easier for content providers to create content.
• Packet-switching has been pointed out as a success factor (Stiehler and Wichmann 2000). Though this was initially useful to keep prices low, as opposed to charging per minute, competitors introduced similarly cheap pricing structures without having a packet-based infrastructure (Funk 2000).
Our analysis shows that by 2001 all key success factors had been mentioned. For somebody interested in copying the business model this might have been enough. As we want to learn why DoCoMo was able to take “good decisions”, we have to consider other determining factors, including cultural ones:
1. Competition in radio infrastructure: Kunii and Baker (2000) mentioned that there was competition between two Japanese digital mobile radio systems, PDC and PHS, in the early 1990s, threatening the high prices in DoCoMo’s PDC system (cf. Lindmark 2002). This threat of decreasing revenue contributed to making DoCoMo a less attractive company to work with, as Kunii and Baker wrote. We would not say that the authors created the view that this competition caused i-mode. However, we believe that this was an important factor, as we will discuss later.
2. Economic pressure from the US: In 1994, the US enforced more competition, leading to a surge in cellular services, as pointed out by Tsuchiyama (Tsuchiyama 2000; Ratliff 2000).
The general boom in the mobile industry has been fostered to a large extent by US pressures, and thus i-mode, too, has been triggered that way. Lindmark spelt out that Motorola’s analogue TACS mobile radio system was pushed by the US (Lindmark 2002).
3. NTT DoCoMo’s problems with congestion in the not yet paid-back PDC network, and the availability of better cdmaOne services: Tsuchiyama pointed out that the increase in competition led to problems with the quality of service, which DoCoMo did not want to solve by even more investment in the same PDC technology. Again, competition in infrastructure made the situation more difficult for DoCoMo. In an interview, Daniel Scuka pointed out that Tsuchiyama’s was a relevant early analysis [interview 17]. Indirectly, in 2001, Baker and Megler based their results on Tsuchiyama, by referring to Scuka (Baker and Megler 2001). Tsuchiyama’s analysis seems to have been forgotten since, maybe because most analyses focussed on discussing the significance of the success factors (see Tee (2003) for an exception). After having conducted interviews with some key players as well as with academics, we see that Tsuchiyama’s analysis was justified. Ratliff’s contribution of 2000 (Ratliff 2000) is similar, but Tsuchiyama is more explicit and detailed on this issue.
Other factors mentioned in the literature are:
1. Company culture: NTT DoCoMo had creative managers (Ratliff 2000), such as President Ohboshi, described in some detail by Beck and Wade (2002). NTT DoCoMo put marketing decisions first (Ratliff 2000).
2. Managerial freedom because of the way deregulation was handled: Lindmark and Bohlin found that the company culture was influenced by giving DoCoMo strong independence from the NTT holding, contributing to an “experimental attitude”. Thus the Japanese government was able to enforce significant deregulation without splitting NTT into small parts, as was done with AT&T (Bohlin et al. 2003).
3. Convenience stores: Shapiro had already mentioned in 2000 (Shapiro 2000), in notes about a lecture given by Mari Matsunaga, one of the key creators, that i-mode mirrored the concept of Japanese convenience stores, in which customers find the most important everyday products whenever needed. This point has been presented in more detail in Matsunaga’s book (Matsunaga 2002).
4. Domination of the operator over equipment manufacturers (Devine and Holmqvist 2001; Baker and Megler 2001; Fransman 2000; Haas and Waldenberger 2003).
5. Quick changes on the market (Devine and Holmqvist 2001): DoCoMo was not willing to wait for the WAP Forum to finalise its specification, one reason being competitor J-Phone’s success with its messaging service (Funk 2001).
6. Unfair behaviour of DoCoMo: When DoCoMo and its competitors shared PDC, DoCoMo discussed changes to the system with its manufacturers, but not with its competitors (Kano 2000; Funk 2002).
7. No long-run contracts: Funk mentions in a footnote that there are no long-run contracts in Japan, but does not discuss the significance of this lack for the development of i-mode; we will discuss it later (Funk 2002).
8. On-giri: Scuka (2003) writes that the success of i-mode is at least partly based on the premise that the service provider is obliged to meet customer needs (on-giri).
9. Long-run service growth orientation: “The monopolistic temptation of short-run profit maximisation needed to be tempered by a more long-run service growth orientation, as the DoCoMo payment scheme conveyed.” (Lindmark and Bohlin 2003) Lindmark and Bohlin were referring to the point that profits will rise in the long run, when the service is taken up, and cannot be maximised in the short run by charging high fees from content providers. In any case, they state that it is an orientation, an attitude of individuals or an element of a company culture.
10. Appropriation of pager technology by youth: High school students had appropriated pager technology for sending messages to each other, which drove mobile messaging culture (Kohiyama 2003). Though Kohiyama does not say himself that i-mode was caused by this, the link has been made by Senior Vice President Kei-Ichi Enoki, who clearly saw youth as drivers, according to Matsunaga (Matsunaga 2002).
Sometimes analysts wrote that competition in Japan is very fierce, e.g. Scuka (2003) and Bohlin et al. (2003). We think this can be well explained by the factors mentioned above. Sometimes “luck” has also been mentioned as a key factor, e.g. by Beck and Wade (2002). We believe this has not been a major factor, as our interview results seem to indicate a dominance of readily nameable reasons. It has been said that the Japanese innovation process can be called “Kaizen”, as it led to a continuous stream of innovations (i-mode, cameras, music, etc.) (Scuka 2003). However, while Kaizen is about a continuous flow of small steps, innovations such as i-mode and cameras have rather been significant steps in a different direction.
We omit the discussion of factors influencing the further diffusion of i-mode here, as this is not of relevance for explaining how the actors identified the success factors in the first place. Therefore we do not discuss topics such as interest in small electronic gadgets, the influence of commuting, gaming, etc. The classic distinction between invention and diffusion may be blurred, though, if one assumes that the contents which led to a positive feedback process (ringing tones, screen savers, horoscopes) were “found” in the early innovation phase, as Funk interprets the process (Funk 2002) [interview 3]. Aspects of the diffusion process will be discussed in our final report; in this paper we concentrate on the genesis of i-mode.
Fig. 3. Vodafone branding on the phone hardware (Sharp, with 2-megapixel camera, 2003)
Open research questions
Regarding some of the causes above that were effective during the development of i-mode, it would be beneficial to check their validity and to learn why they came into existence in the first place. Fransman appears to agree that these are questions which still need to be looked into in Europe: “Why did non-Japanese mobile network operators ... not confront the same questions at the same time as DoCoMo’s Mr. Ohboshi (the former president) and why did they not come to the same conclusions?” (Fransman 2000, p. 253) Therefore it makes sense to examine the Japanese situation in some detail, to gain a greater understanding of the players’ decisions, arguments and orientations. We identified the following issues to be analysed:
• Why are there no long-run contracts? This is of obvious interest, as a consumer can easily change operators without such contracts. Short-run contracts must have the effect of increasing competition.
• Why do operators not aim at keeping the level of prices high; why is competition as fierce as reported?
• Is competition in radio infrastructure a key cause of these phenomena? We raise this question simply in order to re-check whether this competition can really be regarded as important, as indicated by past research. Addressing the question carefully is also particularly important as there is a widespread view in Europe according to which radio technologies need to be standardised ahead of any mobile business to be built on top of them.
• Why did DoCoMo put marketing decisions first? As DoCoMo originates from a former monopoly, with many technical people leading the company, why did it put marketing, low prices, etc., first?
• Why do Japanese operators control the whole value chain and specify handsets? Writing specifications is very expensive, and specifying handsets per operator might increase costs through lower batch sizes. On the other hand, Vodafone has been moving in that direction with its “Vodafone live!” service. The question why DoCoMo found it necessary to define services end-to-end in the first place is therefore particularly important.
• Is there any cultural influence, such as “on-giri” or a “long-run service growth orientation”, behind the success factors?
The approach to address these questions was essentially not only to rely on literature and documents, but to conduct interviews both with experts and key actors (mostly in Japan) and to listen to their arguments, views and reasons, and even to “stories” which show the acting persons in concrete situations. Similar approaches have been used earlier to understand the development of technologies; see, e.g., Noble (1984) on numerical machine tools, or a paper of our own on public key cryptography (Weber 2002). The findings are based on interviews with more than 40 experts.
Findings
Why no long-run contracts?
Long-run contracts of, e.g., two years’ duration make it impossible for consumers to switch to a competitor. In Japan, there are no such contracts. If a competitor comes out with a new service, or if somebody is unhappy with his or her provider, consumers can change. For instance, NTT DoCoMo charges 3,000 to 4,600 Yen (about €25-40) for cancellation of a contract with a discount scheme during the first year, which is much less than an average monthly bill. During the second year, it only charges between 1,000 and 1,600 Yen (NTT DoCoMo 2004). This possibility to change is a permanent threat to operators. Operators have to assure customer loyalty after the contract has been made, not before, as is frequently the case in Europe.
We asked experts why operators do not have long-run contracts. We learned the answer from Shigehiko Naoe. It was DDI, a predecessor of KDDI, that abolished this type of contract about 10 years ago. “They were the first to offer short-term contracts,” as Naoe put it [interview 13]. At that time, DDI had a TACS system, and NTT Mobile had Hicap. DoCoMo then had to follow. Since then, customer loyalty is not assured through contracts, but must be assured through other means, e.g. attractive new services.
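To put these cancellation charges in perspective, the small sketch below relates them to the monthly ARPU of about €60 quoted earlier. The conversion rate of roughly 120 Yen per euro is our assumption, used only for illustration.

```python
# Switching costs vs. monthly revenue per user, using figures from the
# text. Assumption (ours): about 120 Yen per euro to convert the ARPU.

ARPU_EUR = 60                    # typical Japanese monthly ARPU
YEN_PER_EUR = 120                # assumed conversion rate
arpu_yen = ARPU_EUR * YEN_PER_EUR

cancellation_fees_yen = {
    "first year": (3_000, 4_600),
    "second year": (1_000, 1_600),
}

for period, (low, high) in cancellation_fees_yen.items():
    print(f"{period}: exit fee is {low / arpu_yen:.0%} to "
          f"{high / arpu_yen:.0%} of one month's ARPU")
```

On these assumptions, even a first-year cancellation costs well below one average monthly bill, which illustrates how weak the lock-in is compared with a typical European long-run contract.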
Why fierce competition?
One reason for the smaller competitors to behave as they do is that it is very difficult to win customers from big DoCoMo. Therefore, competitors have looked for new attractive schemes. As a response to the lack of long-run contracts, they developed discount schemes to assure customer loyalty: the longer a customer stays with an operator, the larger the price discount. These schemes were first introduced by J-Phone (now Vodafone), as Matsunaga wrote: “J-Phone was … the first to introduce a discount service, where the longer the subscription, the larger the discount.” (Matsunaga 2002, p. 203) Takeshi Natsuno later pointed out that a key reason for looking for new services were fierce price wars (Natsuno 2003, p. 109).
Another reason for competition becoming very severe is that DoCoMo reportedly was sometimes felt to behave unfairly. DoCoMo controlled significant parts of the innovation process. Insofar as competitors used the same infrastructure, they had a disadvantage, as they neither learned all details of the technology nor learned about innovations very early. As they could not control the innovations that well, they had to compete on prices.
Yet another reason for the fierceness of competition was that DoCoMo was not always such a wealthy company. Initially it was a relatively small subsidiary. Later, it experienced the crisis described by Tsuchiyama: “Mr. Ohboshi says ‘NTT would never fail, but DoCoMo could have collapsed. We had to survive on our own. We struggled.’” (Financial Times 2001) Also for the creation of i-mode, the unstable situation was a key factor. Natsuno wrote that i-mode was “born with a sense of crisis” (Natsuno 2003, p. 4). Prices for i-mode were set low as the management at DoCoMo believed that prices would decrease because of competition anyway, and as DoCoMo wanted to gain a large share of the market. As a representative of Vodafone [interview 7] put it: “DoCoMo tries to race ahead, setting the pace.”
Is competition in radio infrastructure relevant?
As mentioned above, Motorola demanded access to the market. This argument has been put forward in articles by Tsuchiyama and Ratliff. According to a study by the American Chamber of Commerce in Japan, quoted by Tsuchiyama, the 1994 Cellular Telephone Agreement “abolished restrictions and delays in establishing the technical networks necessary to meet growing demand for cellular service ... Ultimately this brought about new competition, reduced prices and a tenfold increase in cellular service in two years.” (Tsuchiyama 2000) Similarly, Ratliff wrote (Ratliff 2000, p. 14): “In 1994, partially as a result of pressure from the U.S. government at the behest of Motorola Corporation, the Ministry of Posts and Telecommunications liberalised the cell-phone market. Individuals could now own cell phones, and DoCoMo faced formidable competition from a number of competitors with strong corporate backing.” But how did Motorola achieve concrete sales? We addressed this in our interviews and learned the following from Naoe.
The background for this was the big success of Toyota cars on the US market, which contributed significantly to the trade deficit. The Japanese government could in theory have rejected Motorola’s demand, but this could have meant retaliation against Toyota on the US market. As Toyota controlled the operator IDO, it was straightforward for Toyota, through IDO, to buy Motorola’s TACS equipment. This contributed to the toughness of competition observed by other researchers. As Naoe put it: “The US government requested to develop the TACS system in Tokyo and Nagoya areas by IDO and to buy US equipment from Motorola. Toyota did not want to fight the US government.” [interview 13]
Fig. 4. cdma2000 1x EV-DO phone with high quality music from au/KDDI (Kyocera, 2004)
Thus, it can be said that the quality of Japanese cars was a key factor in making competition in radio infrastructures tougher. This ultimately reduced prices and contributed to the sense of crisis at DoCoMo. The competition between the analogue TACS and Hicap systems was not the only one. Later, competition between the digital PDC and PHS systems was also important, as Lindmark wrote: “Digital cellular operators responded to challenges from PHS by lowering prices and introducing new terminals.” (Lindmark 2002, p. 254) Again we see that competition in infrastructure was a key element in reducing prices, which in turn meant that operators’ profits were threatened, contributing to the sense of crisis inside DoCoMo mentioned above.
Also, with the PDC system, competitors felt that DoCoMo behaved unfairly by controlling changes. Therefore they became interested in alternative infrastructures. As Hideo Okinaka of KDDI put it: “We found DoCoMo used their technological advantage as the virtual inventor of the PDC standard in differentiating themselves from competitors, therefore we decided to use cdmaOne.” [interview 16]
In an interview with the authors, Natsuno stated that originally GSM was good, but in competition the other systems became better. His view is that competition in infrastructure is essential. He criticised the Europeans: “You guys avoided competition.” [interview 14] Enoki put it similarly in our interview: “The business is slowing down if one agrees on standards first.” [interview 2] While many European observers may still be proud of GSM and its economies of scale in production, the above considerations are pretty much in line with economic textbooks. As another interviewed expert, Thorsten Wichmann, put it: “You should quote Hayek” [interview 20], referring to Hayek’s famous lecture “Competition as a Discovery Procedure” (Hayek 1968).
Why marketing first?
The background for the development of i-mode has now become clear: there were severe price wars, and there was a sense of crisis. Ohboshi felt that a new market segment was needed to avoid the erosion of profits by price wars. Natsuno (Natsuno 2003, p. 7): “The fixed-line telephone business in the United States provides a prime example of what happens when heated competition goes after a limited pool of subscribers: … discount schemes cut into revenues. The result is a war of attrition that erodes the strength of all participants … Ohboshi offered an alternative … (to) create a new market, data communications.”
At the first attempt, DoCoMo did not find the optimum. It had tried to get a data service going, DoPa, a service oriented towards business users. The experts we interviewed debated whether or not it was a failure. In any case, DoCoMo was looking for a more successful data service. We learned that the key was to ask specialists from marketing and the media. Ohboshi had given Enoki the freedom to define the service with outsiders if needed. This has already been documented in the literature. But who had the seemingly good ideas first? In order to find this out, we interviewed Masafumi Hashimoto, who had given advice to Enoki at an early stage. Hashimoto is an entrepreneur, owning companies such as the Suncolor printing company. “Enoki told me that NTT wants to develop content for mobile phones. He asked me, as I am from marketing: What do you think? I said: Content should better be offered by independent companies.” [interview 5]
We asked Hashimoto why he thought content should be offered by outsiders, as one could imagine a mobile operator becoming a company similar to NHK (Japan Broadcasting Corporation) or the BBC and buying content for re-selling. Hashimoto responded: “I see it from a marketing point of view. Content must be close to the user. BBC does not broadcast what is not in the news. The variety of information is essential.” Thus it was Hashimoto who first described, from a marketing point of view, that content should be offered by as broad a variety of providers as possible to make sure the service is attractive for as many customers as possible. Hashimoto described the core of what was later called the “content ecosystem”, which Natsuno made possible by suggesting the use of internet standards. Hashimoto is very experienced and successful in marketing.
As is known from the literature, Hashimoto recommended to Enoki that Matsunaga should be responsible for the development of the data service. The literature also reveals that Matsunaga developed the pricing structure, such as pricing a single data service like a paper journal, i.e. at 300 Yen per month, about €3, though less in purchasing power. We asked her how she arrived at this proposal. She explained to us that her work for Recruit, where she was editor of a classified ads magazine, had been very important. “Every week I had to learn what the users want, to understand ordinary people. The match between readers and employers was necessary.” [interview 11]
Regarding pricing, she “started from the question: What does the user want? DoCoMo said: 300 Yen is not possible. The service must cost about 1,500 Yen per user, minimum 1,000 Yen in order to recoup investment.” [interview 11] However, with such an approach, Matsunaga reasoned, one would get only about 3 million users, i.e. a maximum revenue of 3 billion Yen per month (about €25 million). “I thought it might be possible to have 10 million users, each paying 300 Yen. The maximum turnover would then be higher, as one might have much more than 10 million users. At 1,000 Yen it would not be possible to get much beyond (the 3 million users).” [interview 11] So, starting from what users might be willing to pay, she arrived at a bold proposal which, in the end, contributed to the financial success of DoCoMo. “When I left in March 2000, board members told me that they were grateful for having decided to price the service at 300 Yen.” [interview 11]
This makes clear that key concepts of i-mode, openness for content as diverse as possible (as opposed to a walled garden) and low prices, came from these marketing specialists. Another key element was to create a business model which allows content providers to earn sufficient revenue for always providing new content. This was Natsuno’s business model, which meant giving 91% of the revenues to content providers. Yet another key element of success was to achieve acceptance for the new approach by the largely technically oriented management of DoCoMo.
As Matsunaga wrote in a single sentence, Hashimoto had apparently “figured out that since NTT was a male-dominated organisation, any capable man joining them would be thwarted by the existing modus operandi.” (Matsunaga 2002, p. 67) We asked Hashimoto how he would interpret this point: “I recommended Matsunaga-san because of her experience with Recruit. NTT is a very bureaucratic organisation. It was important for a woman to come in. For a woman it is easier to co-operate without frictions. Among men, there is a lot of competitive feeling, jealousy. For a woman it is easier to make her way, with men there are clashes. Against a man, a competitive attitude will be developed quickly. Others will think: Why is he in this position? I have the same education. But not every woman could take such a position. I recommended Matsunaga-san also because she was in the Tax Committee and had contacts to the Ministry of Posts and Telecommunications. The power and competence of Matsunaga-san were important, as well as her strong relations to the government.”
We also asked Matsunaga whether it was essential that a woman had led the i-mode content development team. She said that she felt that many women are more natural, more social beings (seikatsu). To our surprise, she also told us that in the many interviews she had already given, the role of gender had never been addressed. Therefore we conclude that, as a matter of fact, the changing role of women in society, the fact that there are influential women, was a factor contributing to the success of i-mode. We cannot exclude, however, that men might have achieved the same. On a different level, we conclude that DoCoMo identified its concepts out of the need to define a new data service and, possibly, to avoid future crisis and failure.
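To make the revenue reasoning above concrete, the following sketch recomputes Matsunaga’s comparison from the figures quoted in the interview. The rate of roughly 120 Yen per euro is our assumption, inferred from the €25 million given in the text.

```python
# Matsunaga's pricing comparison, recomputed from the figures in the
# text. Assumption (ours): about 120 Yen per euro, chosen to match the
# EUR 25 million quoted for 3 billion Yen.

YEN_PER_EUR = 120

def monthly_revenue_yen(users: int, fee_yen: int) -> int:
    """Maximum monthly content revenue."""
    return users * fee_yen

# DoCoMo's internal calculation: a high fee, but a small market.
high_fee = monthly_revenue_yen(users=3_000_000, fee_yen=1_000)

# Matsunaga's proposal: a low fee opens a mass market of 10m+ users.
low_fee = monthly_revenue_yen(users=10_000_000, fee_yen=300)

print(f"1,000 Yen x 3m users : {high_fee / 1e9:.1f} bn Yen "
      f"(about EUR {high_fee / YEN_PER_EUR / 1e6:.0f}m) per month")
print(f"  300 Yen x 10m users: {low_fee / 1e9:.1f} bn Yen per month")
```

Both scenarios yield the same 3 billion Yen per month, which is exactly Matsunaga’s point: at 1,000 Yen the market is capped at roughly 3 million users, while at 300 Yen revenue can keep growing well beyond 10 million subscribers.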
Why control of value chain?
It is now pretty obvious that an operator who wants to make money with a data service must make sure that the user experience is consistent and that the service is easy to use. As Natsuno told us, this is perfectly normal: “If you are a car manufacturer in Europe, BMW, Mercedes, you will think like this. The user interface is very important. In all high competition industries all companies are thinking the same.” [interview 14] NTT DoCoMo has built up the capability to control the user experience itself. The fact that European operators such as Vodafone mirror this control of the user interface seems to indicate that it is normal in a competitive environment.
Any cultural factors involved?
Several of the experts we talked to did not like explaining the i-mode success by “cultural factors”. This may be due to a peculiarity of the concept of culture itself, insofar as attributing an impact to “culture” implies relying on attributes and circumstances which are not amenable to decision and action. In this sense you cannot “make” culture; you have to work for, say, a corporate culture, long, hard and on a common ground. However, this is not the place to discuss conceptual fallacies of culture. At least we found some factors which have to be considered as co-influences on the i-mode success.
Youth culture: As pointed out initially, the Japanese youth had a clear influence on the development of data services. Young people, in particular girls in major cities, had appropriated pager technology to communicate with each other. The “display pager” introduced in 1990 enabled, as Kohiyama wrote, “the caller's number to be displayed. It was quickly adopted as an essential means of communication among high school students, who relayed messages in pager code. This youth-driven creation of a new communication culture was a rare occurrence.” (Kohiyama 2003) For sending a message to a pager, it was possible to enter the message on the keypad of a fixed public phone and to view it on the pager’s display, including some special characters such as a heart. Messaging via mobile phones was used by young people a lot when introduced in 1997, more than one year ahead of the introduction of i-mode. “J-Phone was overwhelmingly favourite among younger subscribers largely because its SkyWalker service allowed users to exchange written messages.” (Matsunaga 2002, p. 203)
One could say that this kind of youth culture influenced the way the mobile operators thought. When Enoki was thinking of how to develop data services, he did what he called “family marketing” [interview 2], i.e. he observed his son and daughter, who had the same habits of sending data to their friends. So in a sense this facet of youth culture, a communication pattern of sending messages to each other via mobile devices, was a cultural factor influencing the development of i-mode.
Both in the interview and in a recent article, Yoshiaki Hashimoto commented on different aspects of a change in the mentality of Japanese youth as a prerequisite to the extraordinary spread of cellular phones among young people.
He points to a “sense of continuous connection to a psychological circuit” (Hashimoto 2002), which may subjectively be felt both as being in a communication circle with friends and as being controlled (for instance, by worried parents). This “sense of being connected”, as Tadashi Matsumoto phrased it [interview 9], may not differ from that of young people in other countries, but may have a special meaning in Japan. It may be assumed that the youth in other industrialised countries have essentially the same wishes to communicate, perhaps with differences in degree. Historically, however, youngsters using pagers and phones for communicating with each other were a prerequisite to i-mode.
Business culture: Haas and Waldenberger pointed out that there is a well-established business culture in Japan in which exclusive relationships are maintained between a company and its suppliers (Haas and Waldenberger 2003) [interview 4]. But this exclusivity is disappearing (DoCoMo, for instance, needs to buy camera phones from Sharp, a major supplier to Vodafone) and no longer seems to be a key element. However, as DoCoMo indeed sometimes co-operates intensively with selected manufacturers, one can call this a business culture. As the authors point out (Haas 2003), exclusive relationships do not apply to the whole innovation process, but only to the early phases. They may not be applicable to a situation of global competition.
Quality orientations in consumer and supplier relations: One expert, who initially disagreed with the hypothesis of cultural influence, said he would feel “ashamed” [interview 8] if the phone’s user interface and the manuals were not consistent, as sometimes experienced in Europe. He continued: “Somebody needs to look after all these bricks which build the whole end-to-end service. That is a matter of mentality.” [interview 8] Others expressed that “consumers are very sensitive about details and quality. They care e.g. about a gap of 1mm between the door and body of a car.” [interview 15] To a visitor, it is obvious that the handsets are very carefully designed and very densely packed. Gray Holland, a designer with Frog Design, a company which had designed early Apple computers, stated: “In Japanese design, every little part, every little line, every little button is well thought-out” (Wired 2001). In other words, it appears to us that there is a high interest in quality on the vendor’s as well as on the consumer’s side.
Tsuchiyama pointed out a special Japanese attitude towards serving the customer, and a quiet and faithful state of mind when thinking of one’s supplier, which is expressed in the concept of “anshinkan”. This means paying “a very high level of attention and details ... to serving the customer” [interview 18]. This attitude developed after World War II. There is a fine connotation, as “anshinkan” is also used to describe the “tranquil and enlightened Buddha”, but there is a difference: “the peace of mind of the customer, unlike the Buddha, has been secured through guaranteed delivery of goods and services, not deliverance from them” (Tsuchiyama 1999). As Asae Yokoi explained to us, the kanji for “shin” reads in Japanese as “kokoro”, meaning heart, the centre, and in Sino-Japanese as “anshin”, signifying the state of enlightenment in Buddhism.
Also, Scuka wrote about on-giri (Scuka 2003), denoting a concept of mutual obligations, like payment and delivery (Dietrich 1997; Kodansha Encyclopedia of Japan 2002). In 1968, Yoshino (Yoshino 1972) traced this pattern of social obligations back to the Tokugawa period, also pointing out the difference between “on” and “giri”, the first applying to hierarchical social relations, the latter to non-hierarchical ones. Modern survey research would have to show, though, whether and how these concepts still work. Coulmas, a well-known expert on Japan in Germany, argues that the duty (giri) to reciprocate a present or a favour is still alive as a means of shaping social relations, even in modern Japan (Coulmas 2003). For the time being we would like to conclude that the pressures of competition fell on very fertile ground: the inherited Japanese culture of serving the customer and consumers’ interest in high-quality goods.
Competition as a cultural element: We noticed that there is a significant tradition of competition in Japan. Respondents pointed out that competition, e.g. in mathematical skills, has a long tradition. Tough competition in the educational system is well known. It was also expressed that it is no coincidence that in several fields there is not one national champion but several competitors: there are NEC, Hitachi, Fujitsu, etc., as well as Toyota, Honda, and Nissan [interview 14]. The above quotations also seem to indicate that competition is felt to be a normal state. We are also inclined to think that Japan as a nation is quite consciously aiming to be economically leading. When US warships came in 1853 to “open up” Japan, the Japanese leaders seem to have understood very early that they either faced the risk of sharing the fate of the Indians or the Black people, or had to win economically (cf. Morishima 1982). This aspect deserves further investigation; perhaps there were cultural reasons explaining why Japan aimed at competition with “Western” nations. It could then be argued that competition has a cultural background. The generally high income levels could then also be regarded as culturally influenced. Without these incomes, no mass market for luxury devices would have emerged. Enoki referred to the Japanese concept of “consumer society” as being a key element for developing new markets.
Conclusions
Low diffusion of i-mode in Europe: One could think that the low diffusion of i-mode outside Japan shows that it is difficult to export these success factors. Indeed it is: operating thousands of official and unofficial websites to make the service attractive for many people requires the participation of the most significant players, such as major media companies and banks, but also corporate users, etc.
A large dominant operator can get this going much more easily than a smaller one. It appears that major operators in Europe prefer to stick to the SMS/MMS model of charging, with little revenue from the mobile internet. SMS transmission provides high income with comparatively little investment in handsets. In terms of profitability, this may work. Raising ARPUs to Japanese levels by getting many websites going might seem a more attractive option for the operators, but would European users be willing and able to pay such high fees? Besides, one can notice that i-mode is diffusing outside Japan, if slowly, through DoCoMo’s partners, for example in France; there are already more than 3 million i-mode users outside Japan (in Germany, the Netherlands, Belgium, France, Spain, Italy, Greece, and Taiwan).
Copying parts of the business model: DoCoMo’s i-mode has already influenced the European telecommunication markets significantly, not only regarding its partners’ services. The introduction of “Vodafone live” showed that controlling the user interface makes a lot of sense (see Fig. 3). As already quoted, a representative of Vodafone put it [interview 19]: “We moved from the European model quite considerably to a model closer to DoCoMo’s.” Other European operators are trying to follow suit. For example, Deutsche Telekom is now offering services similar to those on the official i-mode menu in its “t-zones” service, and increasingly requires handset manufacturers, e.g. Sharp, to customise phones. Thus, essential elements of the Japanese mobile communication model are being taken over in Europe. Other elements have not been copied widely, e.g. the high percentage of revenues for the content providers.
We learned that there are discussions in Japan about allowing customers to change handsets, e.g. by changing the chipcard in 3G models. One expert discussed that with smart cards (SIM/UIM), users could in principle use handsets not provided by their network operator. Hitoshi Mitomo said [interview 12]: “There should be discussion on the separation of the two functions.” This could achieve economies of scale in the production of handsets, he pointed out. However, operator-specific services might then no longer work properly. We see a trend towards all operators providing the same services. But as competition has until recently led to the provision of new services, it might be too early to tell whether handsets have already reached a mature state.
Catching up? As Scuka put it (Scuka 2003), Japan is in a process of continuous improvement of world-leading mobile services: mobile Kaizen. Catching up would mean trying to become better quickly, a huge challenge given the differences in development sketched initially. How could one get there?
Higher quality of products and services: European operators have moved a long way in terms of control of the user experience, from the early days of WAP-based services to the (partial) i-mode clones such as Vodafone live and t-zones. Yet leadership in terms of quality of services and user interfaces is still in Japan.
More competition in Europe: The authors have the impression that Japan’s lead in several areas of mobile business, as illustrated initially, is also caused by competition at all levels. A major cost factor in mobile communications is the radio infrastructure.
In Japan, KDDI had invested in cdma2000, an infrastructure competing with W-CDMA, and started selling data services which require significant bandwidth, such as music. To quote one expert: “KDDI went up like a rocket when DoCoMo had problems with W-CDMA” (see Fig. 4, a recent cdma2000 handset). This reinforces our impression that competition on that level is important.
Standards make a lot of sense where the exchange of data is concerned, e.g. with SMS, internet mail, HTML, or MP3. An often heard argument against competition in infrastructure is that the multitude of national standards, in particular proprietary ones, reduced the chances of Japanese manufacturers to export equipment. It is of course essential that economies of scale in production can be achieved, for two reasons: low costs and exportability. Also, for Europe, it is highly desirable that one can make calls from abroad. So it would not be attractive if a multitude of radio technologies were used in the European countries. It might be feasible to have a limited number, on the condition that these are supported by handsets. A future option which could be economically attractive would be a spectrum policy which allows for Europe-wide operation of alternative radio technologies. Of course, this is a difficult topic in view of the expensive auctions for UMTS bands in many European countries. Therefore, creating an awareness of the Japanese situation could be a first step.
References
Baker G, Megler V (2001) The Semi-Walled Garden: Japan’s “i-mode Phenomenon”. www.redbooks.ibm.com
Beck J, Wade M (2002) DoCoMo – Japan's Wireless Tsunami: How One Mobile Telecom Created a New Market and Became a Global Force. New York
Bohlin E, Björkdahl J, Dunnewijk T, Hmimda N, Hulten S, Lindmark S, Tang P (2003) Prospects for Third Generation Mobile Systems. ESTO Project Report. Seville, http://www.jrc.es/
Coulmas F (2003) Die Kultur Japans. Tradition und Moderne. München
Devine A, Holmqvist S (2001) Mobile Internet Content Providers and their Business Models – What can Sweden learn from the Japanese experience? Stockholm, http://www.japaninc.com/online/sc/master_thesis_as1.pdf
Dietrich M (1997) Identifikation und Transferierbarkeit des japanischen Managementstils: eine empirische Untersuchung am Beispiel japanischer Produktionsunternehmen der Automobil- und Unterhaltungselektronikindustrie in Japan und Europa. St. Gallen (Ph.D.)
Financial Times (2001) NTT DoCoMo: Scepticism confounded. December 13, 2001. http://specials.ft.com/wmr2001/FT3FIMNQ6VC.html
Fransman M (2000) Telecoms in the Internet Age: From Boom to Bust to? Oxford
Funk J (2000) The Mobile Internet Market: Lessons from Japan’s I-mode System. Paper, http://e-conomy.berkeley.edu/conferences/9-2000/EC-conference2000_papers/Funk.pdf
Funk J (2001) The Mobile Internet: How Japan Dialed up and the West Disconnected. Pembroke
Funk J (2002) Competition Between and Within Standards: The Case of Mobile Phones. London
Haas M (2003) Developing Mass Markets for Mobile Internet Services. Paper presented at MoMuC
Haas M, Waldenberger F (2003) The role of dominant players in network innovations: A new look at success and failure of the mobile internet. Paper, Japan Centre of the Ludwig-Maximilians-University, Munich
Hashimoto Y (2002) The spread of cellular phones and their influence on young people in Japan. In: ICICS (Institute of Socio-Information and Communication Studies, University of Tokyo): Review of Media, Information and Society 2002 (7), pp 97–110
Hayek F (1968) Der Wettbewerb als Entdeckungsverfahren. Kieler Vorträge NF 56. Kiel
Hess JM (2004) Seeing is believing! – Reporting from our Mobile Intelligence Tour to Tokyo. May 2004. http://www.mobiliser.org/article?id=76
Kano S (2000) Technical innovations, standardization and regional comparison – a case study in mobile communications. In: Telecommunications Policy 24, 305–321
Kodansha Encyclopedia of Japan. Access 2002. http://www.ency-japan.com/
Kohiyama K (2003) A Decade in the Development of Mobile Communications in Japan. http://www.ojr.org/japan/wireless/1059673699_2.php
Kunii I, Baker S (2000) Amazing DoCoMo. In: BusinessWeek, Asian Edition, January 17
Lindmark S (2002) Evolution of Techno-Economic Systems – An Investigation of the History of Mobile Communications. Göteborg
Lindmark S, Bohlin E (2003) Japan’s Mobile Internet Success Story – Facts, myths, lessons and implications. Paper submitted for the ITS 14th European Regional Conference. Helsinki, Finland, August 23–24
Matsunaga M (2002) The Birth of i-mode. An analogue account of the Mobile Internet. Singapore
Morishima M (1982) Why has Japan “Succeeded”? – Western Technology and the Japanese Ethos. New York
Natsuno T (2003) i-mode Strategy. Chichester
Noble D (1984) Forces of Production. A Social History of Industrial Automation. New York
NTT DoCoMo (2004) Mobile Phone Catalog. Vol. 1
Oertel B, Steinmüller K, Beyer L (2001) Entwicklung und zukünftige Bedeutung mobiler Multimediadienste. Berlin (IZT WerkstattBericht Nr. 49), Dezember 2001. http://www.izt.de/mmd/
Ratliff J (2000) DoCoMo as National Champion: I-Mode, W-CDMA and NTT’s Role as Japan’s Pilot Organization in Global Telecommunications. Santa Clara. http://www.tprc.org/abstracts00/docomopap.pdf
Schoder D, Madeja N (2004) Explaining the Success of NTT DoCoMo's i-mode – the Concept of Value Scope Management. In: Shaw MJ (ed) Electronic Commerce and the Digital Economy. Armonk, NY
Scuka D (2001) How the US Helped Spark Japan's Wireless Net Revolution. In: Wireless Watch Japan. May 11, Tokyo. http://www.japaninc.net/newsletters/?list=ww&issue=7
Scuka D (2003) Mobile Kaizen and Why Japan Still Matters. 03/09/03. http://www.mobiliser.org/article?id=62
Shapiro E (2000) Mari Matsunaga: Reinventing the Wireless Web: The Story of DoCoMo's i-mode. November 14. http://www.japansociety.org/corpnotes/111400.htm
Spiegel Online (2004) iTunes in Deutschland. Apple startet mit Songpreis von 99 Cent. June 15. http://www.spiegel.de/netzwelt/netzkultur/0,1518,304223,00.html
“I-Mode” in Japan: How to Explain Its Development
331
Stiehler A, Wichmann T (2000) Mobile Internet in Japan – lessons for Europe. In: ePSO-N 2&5. epso.jrc.es Tee R (2003) Contextualising the Mobile Internet. Masters Thesis, Univ. of Amsterdam. http://ecdc.info/publications/reports/2003_05_rt_mobile.pdf Telecommunications Carriers Association (n.d.) Number of subscribers. http://www.tca. or.jp/index-e.html Tsuchiyama R (1999) Costumer Service Goes High Tech: ‘Anshinkan’, ERP & SupplyChain Management. Business Inside Japan, May 1, 1999 Tsuchiyama R (2000) Deconstructing Phone Culture. How Japan Became A Leader in Mobile Internet. The Journal, July 2000 Vesa J (2003) The Impact of Industry Structure, Product Architecture, and Ecosystems on the Success of Mobile Data Services: A Comparison Between European and Japanese Markets. July 3, 2003. Presentation at ITS 14th European Regional Conference. Helsinki, Finland. August 23–24, 2003 Weber A (2002) Enabling Crypto. How Radical Innovations Occur. In: Communications of the ACM. Volume 45, Issue 4 (April 2002), 103– 107 Wired. (2001) Ichiban. 10 reasons why the sun still rises in the East. Sept. 2001. http://www.wired.com/wired/archive/9.09/topten.html Yoshino M (1972) Japan's Managerial System: Tradition and Innovation. Cambridge
332
Arnd Weber, Bernd Wingert
Interviews [1] Bohlin, Erik. Associate Professor, Chalmers University. March 2, 2004 [2] Enoki, Kei-Ichi. Executive Vice President, Managing Director, NTT DoCoMo. April 23, 2004 [3] Funk, Jeffrey. Professor, Hitotsubashi University, Institute of Innovation Research. April 22, 2004 [4] Haas, Michael. Ph.D. student, University of Munich. February, 9, 2004 (Presentation given at ITAS) [5] Hashimoto, Masafumi. President, Suncolor. April 18, 2004 [6] Hashimoto, Yoshiaki. Professor, University of Tokyo, Institute of Socio-Information and Communication Studies. April 28, 2004 [7] Imamura, Mica. General Manager, Vodafone. April 12, 2004 (during the Mobile Intelligence Tour) [8] Kanda, Yusuke. (former) President DoCoMo Europe. January 7, 2004 [9] Matsumoto, Tadashi. Professor, Oulu University, Centre for Wireless Communications. February 27, 2004 [10] Matsumoto, Ted. President, Qualcomm Japan. April 15, 2004 (during the Mobile Intelligence Tour) [11] Matsunaga, Mari. Director, Bandai. April 20, 2004 [12] Mitomo, Hitoshi. Professor, Waseda University, Global Information and Telecommunication Institute. April 19, 2004 [13] Naoe, Shigehiko. Professor, Chuo University, Office of the Faculty of Policy Studies. April 23, 2004 [14] Natsuno, Takeshi: Managing Director, NTT DoCoMo. November 4, 2002 [15] Okamoto, Tatsuaki. Professor, NTT Corporation. April 25, 2004 [16] Okinaka, Hideo. Vice President & General Manager, KDDI. April 12, 2004 (during the Mobile Intelligence Tour, Tokyo) [17] Scuka, Daniel. Wireless Watch Japan. March 10, 2004 [18] Tsuchiyama, Ray. Director of Business Development (Asia-Pacific), AOL Mobile/Tegic Communications. April 22, 2004 [19] Vodafone (NN). Statement made by a representative of Vodafone during a meeting of the International Computer Association. Tokyo, April 15, 2004 [20] Wichmann, Thorsten. Managing Director, Berlecon. January 28, 2004
Demand for Internet Access and Use in Spain1

Leonel Cerno2,*, Teodosio Pérez Amaral3,**
* Universidad Europea de Madrid, Spain
** Universidad Complutense de Madrid, Spain
Abstract

The aim of this paper is to analyse a new phenomenon: internet demand in Spain. To do so, we use a new high-quality data set and advanced econometric techniques for estimating internet demand functions, incorporating the socio-demographic characteristics of individuals. We begin with a graphical analysis of the data, searching for relationships between the different characteristics. Then we specify and estimate two econometric models, one for broadband access at home and another for internet use intensity. We find that 25.2% of the Spanish population accesses the internet at home, but less than half of them use broadband connections. This demand is positively related to income and other technological attributes and negatively related to socio-demographic attributes such as habitat and age. Our results are compatible with the previous literature for other countries, although there is an important difference: broadband internet connections are still considered a luxury good in Spain.
Introduction

Many socio-economic studies of both a theoretical and an empirical nature are currently being developed in relation to the phenomenon of internet service use and high-speed internet access (called "broadband") in Spain and other countries. If we analyse this phenomenon from a historical perspective in terms of the adoption of a new product, the internet as such is nothing extraordinary. As happens with
1 We thank Gary Madden, Russell Cooper and other anonymous referees for their comments and suggestions at the ITS 15th Biennial Conference in Berlin (Germany) and at the ITS Conference on Regional Economic Development in Galicia (Spain), respectively. We also acknowledge financial support from the CICYT (Spanish Ministry for Education and Research) through Research Project SEJ2004-06948.
2 E-mail: [email protected]
3 E-mail: [email protected]
some new products, internet demand gradually grew until, in a very short time, it became an indispensable product. Historically, the concept of connecting and using systems in a shared network that allowed two computers to communicate began to be developed early in the 1960s, and it was only in 1969 that the decision was made to implement an experimental network that would make it possible to exchange information between different computers. Since it is claimed that the internet phenomenon has changed the habits of families in developed countries, we must ask which factors lead a family to acquire broadband internet access.

The case of Spain

In Spain, there are some early descriptive studies carried out on the basis of independent surveys concerning the adoption of information and communication technologies in the country. The National Institute of Statistics (INE) began to compile this information in 2001. The data in this paper were taken from the 2003 INE Survey on Equipment and Use of Information Technologies in the Home (TIC-H). There is great variability in access to and use of the internet. The overall figures contained in Fig. 1 show that, while access to the internet is increasingly common in Spanish homes (25.2% of all homes), the distribution of this access is not homogeneous.

Source: National Institute of Statistics, INE, survey on February 2003, and own elaboration. Fig. 1. Internet access in Spain (households): bar chart of access percentages by region, from 14.3% to 32.7%.
This access is more widespread in Catalonia (32.7%), the Basque Country (32.2%), Madrid (31.7%) and Melilla (31.7%). This contrasts with Estremadura (14.3%), Castilla-La Mancha (14.7%) and Galicia (16.9%). The main political debate in this area is how to help consumers access the internet and, in particular, adopt broadband technology4. The debate begins by defining an internet connection and broadband technology5. For purposes of simplification, we will treat broadband and high speed as synonymous.
Rappoport et al. (2003) clearly establish similarities and differences between conventional telephone service and the adoption of some kind of internet connection within the demand for telecommunications. What these two types of services have in common is that telecommunications are not consumed in isolation: there is a whole network of productive units involved. Besides the interdependence and externalities that this situation entails, access and use are different concepts. The first and foremost difference concerns the measurement of output: in conventional telephony output is measured in minutes, whereas for the internet it is measured by the speed of file transmission. This leads to the second major difference between these two types of telecommunication services: for some forms of internet access (such as the cable modem), the speed is affected by the number of individual access lines transmitting at any given time, whereas this does not happen with conventional DSL lines.
This study estimates models of internet access and use and compares narrowband and broadband connections on the basis of the consumer characteristics contained in a survey carried out by the National Institute of Statistics (INE). These preferences will be considered from two standpoints. On the one hand, we consider the individual, conditioned by education, experience and income, which in turn is conditioned by family size. On the other hand, we study the demand for the internet service based on the family group demanding it, considering household equipment as an indicator of one part of household income.

4 The concept of Universal Service Obligation could also include the internet service with broadband technology. This involves a different line of research than the one proposed in this article.
5 Authors such as Owen (2002) believe that it is a true mystery that the debate on the economics of broadband has reached the boiling point without yet having agreed on a definition of the term "broadband", when there is a complete, accepted list of the services that its adoption includes.

Early studies

A pioneering econometric study of the adoption of broadband internet service is that of Madden et al. (1996), who examined a database of 5,000 survey responses collected in Australian homes. These authors were the first to discover that demographic characteristics are one of the main influences on the individual
decision to use broadband internet services. For example, they demonstrate, among other things, that people who have not finished secondary school show less interest in using broadband internet service; that people who live in homes with at least one member born in Europe or Asia are more interested; and that age also influences interest, depending on whether the individual is younger or older than 65 years of age. Cassel (1999) uses a survey carried out in 1997 with 30,000 US consumers. Goolsbee (2000) also examines the demand for broadband internet access, with data from 100,000 answers to a survey carried out in 1999 in various U.S. cities. Duffy-Deno (2001) studies a sample of 11,458 US households. Rappoport et al. (2002) estimate broadband internet demand using a database with demographic information on homes in 10 US cities for 2001. Other recent studies that make special reference to the willingness to pay of internet users include the one carried out by Varian (2002), which uses data from a University of California, Berkeley project called the "Internet Demand Experiment" and estimates how much people would be willing to pay for different connection speed levels. Various reports have been written in addition to these studies, such as the one released by the U.S. Department of Commerce (2002), which also considers demographic influences on the adoption of broadband internet. This study uses the same model as the one proposed by Madden et al. (1996), but for surveys carried out in the United States and considering actual patterns of adoption. Another report is the one by Jackson et al. (2003) in the Telecommunications Research Group of the University of Colorado, which addresses issues such as how to estimate the preferences and profiles of consumers who adopt broadband internet service. Finally, we should mention the report issued by the OECD (2001), which analyses the adoption of broadband internet connections in 30 countries.
The data

Our objective is to provide empirical evidence for the debate on internet access and use, focusing primarily on the form of connection and the user profile. In this study, we use information provided by the Annual Survey TIC-H 2003, carried out by the INE throughout Spain with two register designs: a family register and another for 10 to 14 year old users. The adoption of a type of internet connection, the nature of that connection and the use made of the service are endogenous in the model proposed in this paper.

Internet access from the home

For the first analysis, there is a sample of 18,949 answers to the question: "Do you have some kind of internet access in the home?". The descriptive graphics are provided below.
Source: National Institute of Statistics, INE, survey on February 2003, and own elaboration. Fig. 2. Spanish households with internet access, by connection form: narrowband 74.6%, broadband 34.3%, other 2.0%, no response 1.8%.
Source: National Institute of Statistics, INE, survey on February 2003, and own elaboration. Fig. 3. Internet access from the home taking into account the habitat and connection form: number of connections (narrowband telephone; broadband, DSL and wireless; other forms) by municipality size, from less than 10,000 to more than 100,000 inhabitants.
Figs. 2 and 3 refer to access to the internet service from the home, taking into account the habitat where it is located. We see, for example, that of all homes with internet access (25.2%), the majority chooses the narrowband form of connection
of a conventional telephone line (74.6%), while 34.3% opt for the broadband form of connection and 2.0% for other forms6. The sum of these percentages does not equal 100 because some families choose more than one form of connection. The gross data also show that the sample is categorised in accordance with the size of the respondent's home; see Fig. 4 below.

Source: National Institute of Statistics, INE, survey on February 2003, and own elaboration. Fig. 4. Internet access from the home taking into account the home size and connection form: number of connections (narrowband telephone vs. broadband, DSL and wireless) for households with 1, 2, 3, 4 and 5 or more members.

6 For example, WAP.
We can see here that the telephone dial-up connection via modem persists in all the categories considered, with four-member homes prevailing over the rest. However, we should note that there are considerable differences between the percentages of access via one channel or the other depending on the size of the home. For example, in homes with four members the telephone dial-up connection is 40.2% greater than broadband, whereas in homes with one and two members this difference is much smaller (4.3% and 15.6%, respectively). This could be explained by the fact that homes with one and two members present two well differentiated typologies: one type of home inhabited by middle-aged or young working people who prefer to access the internet directly from their workplace instead of from their homes, and another type of home where retired people live. As the latter may have little interest in accessing the internet from their own homes, the average rate of internet access in general and via a broadband
connection in particular in these one- or two-member homes is lower than for homes with more members.

Internet use

Fig. 5 contains a descriptive analysis of internet use that relaxes the restriction that access be only from the home. Note also that here the question is aimed at the individual and not at the home, as it was in the case of access. We can see that, of a total of 12,130,100 people who have used the internet service in the last 3 months, the majority access it from the home via one of the types of connection commented on previously. In other words, the majority of the population that uses the internet (59.7%) does so from the home, followed by those who connect to the internet from the workplace (41.3%), from other places (29.3%) and from centres of study (20.4%)7.

7 The percentages do not add up to 100 because we are considering multiple response tables.
Source: National Institute of Statistics, INE, survey on February 2003, and own elaboration. Fig. 5. Internet use in the last 3 months by age and place of use
The place of use illustrates the segmentation of the Spanish population in terms of internet use. If we observe the graphic, we see that the main place of internet use at practically all ages is the home, but we also see that younger people (15 to 24 years of age) use the internet a lot in their centres of study, while mature people use it a lot from the
workplace. Another thing that this graph reveals is that the number of people who connect to the internet decreases as age increases, although in relative terms the percentage of people who access the internet from the home increases (although not observable in the graphic because of the scale used, 72% of the respondents who are 75 years of age or older access the internet from the home, compared with respondents in the 15 to 24 years of age interval, of whom 56.6% access from the home but 46.2% access the internet from the centre of study).

As regards the level of qualification, we see from Fig. 6 that the internet is used to a greater extent by people who have finished upper-level studies (33.9%); they are followed closely by people who have completed the second stage of secondary education (30.4%), and then by the 19.4% of people who have completed the first stage of secondary education and the 10.6% of people with upper-level occupational training. We also see that people with upper-level studies connect frequently from their workplace, while this proportion differs in the other categories under consideration. As is to be expected, people who have completed a level of training other than those mentioned above (except for primary school) and people without any kind of training together represent a mere 0.5% of the respondents and are barely perceivable in the graphic. It is obvious that the level of education is very much associated with internet access and use.

Source: National Institute of Statistics, INE, survey on February 2003, and own elaboration. Fig. 6. Internet use in the last 3 months by study level and place of use: number of individuals using the internet at home, at the workplace, at the centre of study and in other places, for each level of education (without training, primary school, first and second stage of secondary education, upper-level professional education, university training).
Considering next the professional status of the respondents, Fig. 7 below shows, as expected, that the people who use the internet most are employed workers, who connect to the internet almost interchangeably from the home and from the work-
place (63.83% in all); they are followed by students (21.20%), who connect mainly from the centre of studies, followed closely by access from the home. Unemployed workers account for 6.07%, and the remaining 9% is divided among people who are not part of the working population (housekeepers, pensioners and others).

Source: National Institute of Statistics, INE, survey on February 2003, and own elaboration. Fig. 7. Internet use in the last 3 months by work status and place of use: number of individuals using the internet at home, at the workplace, at the centre of study and in other places, by work status (employed workers, unemployed workers, students, housekeepers, pensioners, other).
In general, more than half of the people who use the internet live in big cities (with more than 100,000 inhabitants) and provincial capitals. As for the size of the respondent's home, those who use the internet the most come from three-member homes, followed by those belonging to homes with four or more members. After this basic description of the data, we want to discover and measure the dependencies between the variables. We begin by using a demand model that is designed to explain the relationships between the variables of interest.
Modelling household access and usage

A basic feature in modelling telecommunications demands is the distinction between access and usage. It is obvious that the usage of a given service by an individual is only possible if he/she has access to the service. Usage is conditional on access.
At the same time, an individual will choose to join a network only if he/she plans to make some use of it. Access is conditional on use. This observation is central in the pioneering model of Artle and Averous (1973) and still constitutes a cornerstone for modelling telecommunications demands (see Taylor 1994).

Utility functions

The theoretical evidence on internet demand suggests that the internet is used to save money and time. Following the framework of Taylor (1994)8, there are two types of agents:
$G_0$: subset of agents without access to the net
$G_1$: subset of agents with access to the net

The utility function of individual $i$ is expressed as

$$U_i = U_i(x_i, \delta_i q_i) \tag{1}$$

where $x_i$ is the vector of the other goods consumed by the $i$th agent, and the dichotomous variables determine the access status of the agent, i.e.:

$$q_i = \begin{cases} q & \forall i \in G_1 \\ 0 & \forall i \in G_0 \end{cases} \qquad\qquad \delta_i = \begin{cases} 1 & \text{if the agent has access (use)} \\ 0 & \text{otherwise} \end{cases}$$

The problem of maximising utility would then be expressed with individual utility functions for each type of agent as follows:

$$U^1 = U^1(x^1, q) \;\;\text{if } \delta = 1; \qquad U^0 = U^0(x^0) \;\;\text{if } \delta = 0 \tag{2}$$
Econometric approach

The literature suggests that internet users differ from other users of telecommunications services in terms of the type of attributes that are relevant. This is supported by Rappoport et al. (2002) when they outline the differences between telephone demand and internet demand.
8 Taylor summarises the contributions to the theory of telephone demand from the mid-1970s. Access/no-access plays an important role in the analysis.
In accordance with this, Jackson et al. (2003) use a maximisation model of the work-leisure choice, assuming that the agents want to have income, leisure time and also online activity. The theory indicates that maximisation of the utility of an agent having access to the internet in the home is conditioned by the consumption of other goods and the allocation of time and money. A linear approximation of a conditional utility function would be

$$U_i^* = x_i^T \beta + \varepsilon_i$$

where $U_i^*$ is the latent utility that agent $i$ experiences on accessing and/or using the internet, $\beta$ is the vector of parameters to be estimated and represents the vector of the marginal utilities of each of the regressors found in the vector $x_i^T$, and $\varepsilon_i$ is an error term.
The parameters of the individual latent utility function (the marginal utilities) are estimated on the basis of the information from the responses to the questions posed. In this paper, we use this framework to analyse the characteristics of internet users in Spain and to look for relationships between socio-demographic characteristics (such as habitat, sex, age and studies completed) and the decision of agents to access the internet from the home or to use the net in places other than the home, e.g. a centre of study, the workplace, etc.
N1 : Number of agents with access to internet from home. N 2 : Number of agents with no access from home but with use in other places. N 3 : Number of agents without home access and no use. N = N1 + N 2 + N 3 We cannot measure directly the utility provided by internet because that information is not available. We take this into account when we specify the model. We show the decisions of access and use of internet in Fig. 8:
Fig. 8. Relationship between access and use of internet
Fig. 8 suggests that the demand for a given type of connection to the internet occurs only if there is a previous choice to access from the home. Starting from there, we determine a demand for broadband keeping in mind the following attributes:

Access to internet = f [economic, technological, socio-demographic attributes]
Use of internet = g [economic, technological, socio-demographic attributes | access]

where each attribute is quantified as shown below. The model used is based on the maximisation of the utility of an agent that considers accessing the internet in a certain scenario. The use of the internet is conditional on having access, as expressed in the second formula above.

Specification

Because of the type of data available, we use a binary probit model with selection bias. Thus, for the first model, specified for the $i$th individual, we fit two equations and have both endogenous variables as binary dummies. The endogenous variable of the broadband demand equation has the following structure:
$$y_{1i}^* = x_{1i}\beta_1 + u_{1i}, \qquad y_{2i}^* = x_{2i}\beta_2 + u_{2i} \tag{3}$$

$$y_{1i} = 1 \;\text{if } y_{1i}^* > 0, \quad y_{1i} = 0 \;\text{otherwise}; \qquad y_{2i} = y_{2i}^* \;\text{if } y_{1i}^* > 0, \quad y_{2i} = 0 \;\text{if } y_{1i}^* \le 0$$
The second equation refers to the type of internet connection at home ($y_{2i} = 1$ if it is a broadband connection), but it only makes sense when $y_{1i} = 1$, i.e. when the individual has internet at home. This means that the variable $y_{1i}$ non-randomly selects the sample for estimating a demand model that measures the effects on the type of internet connection at home. The regressors are explained in the following table:
Table 1. Regressors

Variable name    Definition
Economic
income           Family income index
Technological
pc               =1 if there is a PC at home; =0 otherwise
laptop           =1 if a laptop is owned; =0 otherwise
mobile           =1 if a household member has a mobile phone; =0 otherwise
frequser         Internet use (quarterly number of times = 70, 14, 3, 1, 0)
usagecomp        Computer use (number of times per quarter = 70, 14, 3, 1, 0)
broadband        =1 if there is a broadband internet connection at home; =0 otherwise
Social and demographic
bestudying       =1 if the respondent is studying; =0 otherwise
studylevel       Degree achieved in studies (measured by years of study)
housemembers     Number of residents in the household
habitat          Population size
male             =1 if the respondent is male; =0 otherwise
age              Age of the respondent
agesq            Square of the age of the respondent
We approximate the demand for broadband access as:

$$y_{2i} = \beta_0 + \beta_1\,income_i + \beta_2\,pc_i + \beta_3\,laptop_i + \beta_4\,mobile_i + \beta_5\,frequser_i + \beta_6\,bestudying_i + \beta_7\,housemembers_i + \beta_8\,habitat_i + \beta_9\,male_i + \beta_{10}\,age_i + \beta_{11}\,agesq_i + IMR(x)_i + \varepsilon_i, \qquad i = 1, \ldots, 18{,}948 \tag{4}$$
where

$$IMR(x) = \frac{\phi(z^T\gamma)}{1 - \Phi(z^T\gamma)}$$

is the inverse of the Mills ratio; it is added as a regressor to correct the sample selection bias. The endogenous variable $y_{2i}$ is a binary variable and represents the output of the indicator function for an individual who has broadband internet at home (that is, equal to 1 if the individual has broadband at home and 0 otherwise).
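The two-step logic just described can be made concrete with a small sketch. The following is a minimal illustration, not the authors' code: the data are simulated, the variable names are hypothetical, and the inverse Mills ratio is computed exactly as defined in the text.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 5000

# Simulated stand-ins for the survey variables (the real data would be
# the INE TIC-H microdata, which are not reproduced here).
income = rng.uniform(0, 1, n)
age = rng.integers(16, 80, n)
pc = rng.integers(0, 2, n)

# Selection equation: does the household have internet access at home?
z = np.column_stack([np.ones(n), income, age])
gamma = np.array([-1.0, 3.0, -0.01])                   # arbitrary "true" values
y1 = (z @ gamma + rng.normal(size=n) > 0).astype(int)

# Step 1: probit on the selection equation, then the inverse Mills ratio.
sel = sm.Probit(y1, z).fit(disp=0)
xb = z @ sel.params
imr = norm.pdf(xb) / (1.0 - norm.cdf(xb))

# Outcome: broadband, observed only for households with home access.
y2 = ((1.5 * income + 0.3 * pc - 0.8 + rng.normal(size=n)) > 0).astype(int)
mask = y1 == 1
x_out = np.column_stack([np.ones(n), income, pc, imr])[mask]

# Step 2: probit for broadband on the selected subsample, IMR included.
out = sm.Probit(y2[mask], x_out).fit(disp=0)
print(out.params)   # the last coefficient corresponds to the IMR term
```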
Estimation and discussion

The endogenous variable in the outcome equation is 'having or not having broadband access at the home'. As noted earlier, the marginal effect on $y_{2i}$ is composed of the effect of the selection equation and the outcome equation. In other words, every predictor in the model may appear not only as an exogenous variable in the outcome equation, but also as a component of $IMR(x)$. One consequence of this is that the effect of $n$ units of change in the vector of the exogenous variables is not simply $n$ times the effect of one unit of change in this vector. This means that the change in the endogenous variable depends not only on the magnitude of the change, but also on the base from which the change takes place (Sigelman and Zeng 1999).

Subscription to broadband access at home: model results

Table 2 below shows the estimation results of the first fitted model, specified in equation (4). The estimation is a binary probit model with a selection-bias correction. We observe first that currently studying, the number of members in the home and being male are insignificant. The signs of the coefficients of the significant variables are as expected: income, having a computer at home and frequent use of the internet are positively related to the probability of having broadband service at home, whereas the probability decreases with increasing age and with habitat size. Taking into account only the significant variables, we highlight the following points:
• A higher income implies a higher inclination to demand a broadband connection at home. This explains the positive sign of the coefficient of the variable income.
• age has a specific behaviour. In this model we include age twice: first in levels and second as its square. This means that, ceteris paribus, the marginal effect on the probability of acquiring broadband at home is not constant but a linear function of age: $\beta_{10} + 2\beta_{11}\,age$. As we can see in the model, age has a negative coefficient and agesq has a positive sign. This means that the effect of age on the probability of acquiring broadband at home is negative at early ages, but it rises until it becomes positive at later ages (from about 50 years).
• The technological attributes play a positive role in the probability of demanding domestic broadband. If people want access to the internet at home, it is necessary to have a PC or laptop. The variable mobile also has a positive effect on the probability of demanding broadband, as does using the internet frequently (frequser).
• The size of the habitat has a negative coefficient, which means a negative influence on the probability of broadband at home. This could be explained by the fact that in big cities there are more possibilities of access than in small cities or towns, for example through cyber-cafés, municipal access points, booths, etc.
Table 2. Demand for broadband at home: estimation results of the probit model

Regressors      $\hat{\beta}$    |z|
Constant        0.666            8.53
income          3.240            598.24
pc              0.144            4.60
laptop          0.055            2.23
mobile          0.052            1.81**
frequser        0.002            9.10
bestudying      -0.002           0.10*
housemembers    0.002            0.35*
habitat         -0.023           7.60
male            0.009            0.65*
age             -0.004           1.67**
agesq           0.00004          1.70**
IMR(x)          -0.212           12.85

Sample: 18,940 individuals, 4,470 of which have broadband at home. Log likelihood = -9229.138 (Prob > χ² = 0.00). (*) Insignificant. (**) Significant at the 90% confidence level.
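To see what these estimates imply, recall that the probit model gives Pr(broadband = 1) = Φ(x'β̂). A purely mechanical illustration with the coefficients of Table 2 follows; the regressor values are hypothetical, and the IMR term and several regressors are omitted, so the number produced is not a realistic prediction for any survey respondent.

```python
from scipy.stats import norm

# Linear index x'beta using selected Table 2 coefficients and
# hypothetical regressor values (income index 0.40, a PC at home,
# habitat category 4, age 35); other regressors and IMR(x) ignored.
xb = (0.666                 # constant
      + 3.240 * 0.40        # income
      + 0.144 * 1           # pc
      - 0.023 * 4           # habitat
      - 0.004 * 35          # age
      + 0.00004 * 35**2)    # agesq

print(norm.cdf(xb))         # implied probability of broadband at home
```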
Internet use model and results

In the following, a model is specified for the number of different access modes used: home, workplace, centre of study and other places such as hotels, cyber-cafés, etc. We specify a multinomial logit model in which we consider the use of the internet and its determinants from the home, the worksite, the place of study and other places (hotels, cyber-cafés, airports, booths, etc.)9. The explanatory variables that we use, although not exactly the same ones as before, follow the same philosophy of considering economic, technological and socio-demographic attributes. The dependent variable is a polytomous variable referring to the use of the internet in the four mentioned places. That is,
9 See Sigelman and Zeng (1999); compare with Rappoport et al. (2003).
$$USE_j = \begin{cases} 1 & \text{if the internet is used in one of the places considered} \\ 2 & \text{if it is used in two places} \\ 3 & \text{if it is used in three places} \\ 4 & \text{if it is used in all four places considered} \end{cases}$$

The multinomial logit will be equivalent to:

$$\ln \frac{P_{ij}}{1 - P_{ij}} = x_i^T \beta_j, \qquad j = 1, 2, 3, 4 \tag{5}$$
where $P_{i1}$, $P_{i2}$, $P_{i3}$ and $P_{i4}$ are the probabilities of internet use in one, two, three and four of the places considered, $x_i^T$ is a vector of regressors of individual $i$, and $\beta_j$ is a vector of parameters. As we saw above in the first graphical analysis, the most frequent place of use is the home, followed by the workplace and the place of study. Thus we have a multinomial logit with an endogenous variable with four categories: equal to one if the internet is used in only one of the four places (which may be at home), two if used in two places (which may also be at work or school), and three or four if used in almost all or all of the places considered, respectively. Furthermore, these places of use are not mutually exclusive. The results can be seen in Table 3. In this second estimation we see that the coefficients are differentiated, in general increasing in absolute value from left to right. This means that the use of the internet is directly proportional to the economic, technological and socio-demographic attributes, in accordance with the results found in the literature. $\hat{\beta}_3$, the coefficient of habitat, is insignificant except for USE = 3, while the meaning of the rest of the variables is clear. The variables that separate the profile of an intense user from that of a light user are age, income and both technological attributes: broadband at home and computer use. For those that use the internet from several places (e.g. USE = 3 and USE = 4), the profiles are better explained by income, having broadband at home (measured through broadband) and age. Notice that for heavy users the profiles are better explained by income; we think this makes sense because, if the user can access the internet for free from some places, such as the workplace or the centre of studies, the user would purchase a broadband connection at home only if she/he had a high income. These results confirm those of the previous model and are supported by the fact that in Spain a broadband connection at home is still considered a luxury good; see the following section.
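As a rough sketch of how such a model can be estimated (again with simulated data and hypothetical variable names, not the INE survey itself), a four-category multinomial logit can be fitted as follows:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000

income = rng.uniform(0, 1, n)
broadband = rng.integers(0, 2, n)
age = rng.integers(16, 80, n)
X = sm.add_constant(np.column_stack([income, broadband, age]))

# Simulated count of places of use (1..4), increasing in income and
# in having broadband at home, loosely mimicking the patterns of Table 3.
score = 2.0 * income + 1.0 * broadband + rng.normal(size=n)
use = np.digitize(score, [1.0, 2.0, 3.0]) + 1   # values in {1, 2, 3, 4}

# MNLogit estimates one coefficient vector per non-base category,
# analogous to the beta_j of equation (5).
res = sm.MNLogit(use, X).fit(disp=0)
print(res.summary())
```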
Table 3. Demand for internet use: results of the estimations

Regressors      USE = 1         USE = 2         USE = 3         USE = 4
Constant        -3.80 (20.09)   -6.57 (26.54)   -9.45 (18.90)   -14.29 (10.46)
income          11.84 (16.10)   21.23 (21.88)   24.98 (13.30)   38.87 (7.86)
broadband       1.44 (15.01)    1.74 (16.75)    2.04 (13.06)    2.09 (6.19)
usagecomp       1.06 (37.08)    0.75 (19.69)    0.44 (5.01)     0.41 (1.59)*
bestudying      0.83 (10.55)    1.51 (17.20)    2.29 (6.19)     2.88 (7.81)
studylevel      -2.51 (6.19)    -5.46 (10.54)   -6.07 (6.08)    -12.65 (4.85)
housemembers    -0.20 (7.96)    -0.20 (6.57)    -0.26 (4.53)    -0.28 (2.00)
habitat         0.15* (1.19)    0.02* (1.37)    0.06** (2.29)   0.04** (0.59)
male            0.41 (7.38)     0.58 (8.59)     0.88 (6.88)     0.90 (2.81)
age             -0.05 (21.68)   -0.06 (20.83)   -0.08 (12.83)   -0.09 (5.13)

Total observations: 18,940. Log likelihood = -9115.67 (Prob > χ² = 0.00). Pseudo R² = 0.4218. t-statistics in parentheses. (*) Insignificant. (**) Significant at the 95% confidence level.
Marginal effects and elasticities

Now we focus on the analysis of the marginal effects and elasticities of the two models, shown in Tables 4 to 6. Income is the variable that has the highest influence on the probability of adoption, with a marginal effect of 0.2407. This confirms the first results of the estimation. The technological attributes, as is obvious, also increase the probability of acquiring a broadband connection at home: the effects corresponding to having a PC, a laptop or a mobile phone and to using the internet frequently are 0.0369, 0.0233, 0.0112 and 0.0990, respectively. With regard to habitat and age, both are related to a decrease in the probability of demanding broadband from home, of -0.0041 and -0.0011, respectively.
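The marginal effects reported below follow the standard probit formula, dP/dx_k = φ(x̄'β̂)·β̂_k evaluated at the sample means. A sketch of the computation follows; it uses only a subset of the Table 2 coefficients and Table 4 means, so it will not reproduce the published figures exactly.

```python
from scipy.stats import norm

# Selected coefficients (Table 2) and sample means (Table 4).
beta  = {"income": 3.240, "pc": 0.144, "habitat": -0.023, "age": -0.004}
means = {"income": 0.3825, "pc": 0.2214, "habitat": 4.2954, "age": 49.8688}
const = 0.666

# Linear index at the means (remaining regressors omitted for brevity).
xb = const + sum(beta[k] * means[k] for k in beta)
dens = norm.pdf(xb)

for k, b in beta.items():
    print(f"dP/d{k} at the means: {dens * b:.4f}")
```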
Table 4. Marginal effects of the first model (internet access in home)

$\bar{y} = E[y_i \mid \Pr(z_i > 0)] = 0.02377$

Variable        dy/dx        Mean
income          0.2407       0.3825
pc              0.0369       0.2214
laptop          0.0233       0.0290
mobile          0.0112       0.0149
usagin          0.0990       0.5095
bestudying      0.0024*      0.1201
housemembers    0.0010*      2.8789
habitat         -0.0041      4.2954
male            0.0050*      0.4357
age             -0.0011      49.8688
agesq           0.00001      2878.04

(*) Insignificant

Table 5. Marginal effects of the internet use model
$\bar{y} = \Pr(use = j), \quad j = 1, 2, 3, 4$

Variable        j = 1       j = 2       j = 3       j = 4         Mean
(ȳ)             0.0704      0.0159      0.0010      0.00003
income          0.7491      0.3193      0.0237      0.0013*       0.3845
broadband       0.1457      0.0486      0.0043      0.0002        0.0811
usagecomp       0.0686      0.0105      0.0004      0.00001       0.5773
bestudying      0.0652      0.0400      0.0059      0.0004        0.1204
studylevel      -0.1574     -0.0828     -0.0058     -0.0004*      0.5525
housemembers    -0.0128     -0.0029     -0.0002     -8.92e-06     2.8805
habitat         0.0010*     0.0003*     0.00006     1.27e-06      4.2962
male            0.0267      0.0091      0.0009      0.00003       0.4356
age             -0.0031     -0.0009     -0.00008    -2.86e-06*    49.845

(*) Insignificant
In this next table, we see that in the first row there are the probabilities of internet use in one place (0.0704), in two places (0.0159), in three places (0.0010) and in four places (0.00003). The signs of the estimates are the same in every category
of the endogenous variable, and in general the effects decrease as the number of places of use increases. For example, the marginal effect of "income" on the probability of use decreases as the number of places increases (0.7491 for one place, 0.3193 for two and 0.0237 for three places). The variable "studylevel" has a negative sign in all categories of the endogenous variable. This can be explained by its correlation with age (i.e., the higher the study level, the higher the age). This does not occur when we consider currently studying (measured by the coefficients of the variable bestudying). The elasticities with respect to the variables "income" and "age" are shown in the following table:

Table 6. Elasticities with respect to income and age in the demand for access and use of internet

            Access      Use 1       Use 2       Use 3       Use 4
income      1.1454      0.6986      0.2978      0.0221      Insignificant
age         -0.6789     -0.3748     -0.1088     -0.0097     Insignificant
We see first that the elasticity with respect to "income" is positive and the elasticity with respect to "age" is negative. Furthermore, the demand for access at home is elastic with respect to income, whereas the demand for use is inelastic: a percentage increase in "income" raises use less than proportionally in all the categories of the endogenous variable, and this effect decreases as the number of places of use increases, in line with the results obtained in the estimation and in the marginal effects. Another result is the effect of "age": this effect diminishes with the use of the internet, and the decrease becomes smaller as the number of places of use increases.
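For reference, an elasticity such as those in Table 6 can be derived from a probit as η_k = β̂_k · x̄_k · φ(x̄'β̂)/Φ(x̄'β̂), i.e. the marginal effect at the means rescaled by x̄_k/P. A sketch with illustrative numbers follows; the value of the linear index is hypothetical, so the output is not meant to match Table 6.

```python
from scipy.stats import norm

def probit_elasticity(beta_k: float, mean_k: float, xb: float) -> float:
    """Elasticity of P(y=1) w.r.t. regressor k, evaluated at the means."""
    return beta_k * mean_k * norm.pdf(xb) / norm.cdf(xb)

# income coefficient and mean from Tables 2 and 4; xb is hypothetical.
print(probit_elasticity(beta_k=3.240, mean_k=0.3825, xb=1.64))
```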
Conclusions and future research

Our intention in this paper is to analyse the demand for the internet from home in Spain. We use the framework provided in the literature on the residential consumer of internet access. We estimate the demand for the use of this service at home and also consider internet use in other places besides the home. In the first step, we describe which characteristics may affect the access and use of the internet, with a comparative analysis between the user typologies and the type of connection. First, we find that 25.2% of the Spanish population accesses the internet, but less than half of them use broadband connections (35.5% of internet connections). These percentages vary considerably by region. Further-
more, we observe relationships between the social and demographic characteristics and internet use similar to those described in the literature for other countries. Then we specify and estimate a demand for broadband access from home through the two-step probit model with correction of the selectivity bias (Heckman 1979), and confirm that demand for the internet service is positively related to income and the technological attributes and negatively related to socio-demographic attributes such as habitat and age. These results are consistent with the literature for other countries. Then we estimate a model of demand for the use of the internet taking into account the number of places of internet use (four in all). We see that the effects of the three attributes are in general directly proportional to internet use. The quantification of the income and age effects to define user profiles, the marginal effects and the calculated elasticities demonstrate that broadband access at home is still considered a luxury good, but the use of this service can almost be considered a necessity. In future research, it would be useful to have data on personal income and on service prices in order to estimate a broadband demand function for different regions of Spain. Also, it would be useful to have more comprehensive information on individuals and to work with panels of individuals over several periods, in order to evaluate the dynamics affecting the demand for the internet service in Spain.
References

Artle R, Averous C (1973) The Telephone System as a Public Good: Static and Dynamic Aspects. Bell Journal of Economics and Management Science 4(1): 89–100
Cassel C (1999) Demand for and Use of Additional Lines by Residential Customers. In: Loomis, Taylor (eds) The Future of the Telecommunications Industry: Forecasting and Demand Analysis. Kluwer Academic Publishers, Boston
Davidson R, MacKinnon J (1993) Econometric Theory and Methods. Oxford University Press
Duffy-Deno KT (2001) Demand for Additional Telephone Lines: An Empirical Note. Information Economics and Policy 13: 301–309
Friedman M (1957) A Theory of the Consumption Function. Princeton University Press for the National Bureau of Economic Research
Goodman A, Kawai M (1982) Permanent Income, Hedonic Price and Demand for Housing: New Evidence. Journal of Urban Economics 12: 214–237
Goolsbee A (2000) The Value of Broadband and the Deadweight Loss of Taxing New Technology. Mimeo, University of Chicago
Heckman J (1979) Sample Selection Bias as a Specification Error. Econometrica 47(1)
Jackson M, Lookabaugh T, Savage S, Sicker D, Waldman D (2003) Broadband Demand Study: Final Report. Telecommunications Research Group, University of Colorado
Madden G, Coble-Neal G (2004) Australian Residential Telecommunications Consumption and Substitution Patterns. Preliminary draft, 15th International Telecommunications Society Meeting, Berlin, Germany, September 4–7
Madden G, Savage S, Simpson M (1996) Information Inequality and Broadband Network Access: An Analysis of Australian Household Survey Data. Industrial and Corporate Change, Oxford University Press, pp 1049–1056
McFadden D (1974) Conditional Logit Analysis of Qualitative Choice Behavior. In: Zarembka P (ed) Frontiers in Econometrics. Academic Press, pp 1117–1156
McFadden D (1984) Econometric Analysis of Qualitative Response Models. In: Griliches Z, Intriligator M (eds) Handbook of Econometrics. North Holland, Amsterdam, pp 1376–1425
Meng C, Schmidt P (1985) On the Cost of Partial Observability in the Bivariate Probit Model. International Economic Review 26(1)
OECD, Organisation for Economic Co-operation and Development (2001) The Development of Broadband in OECD Countries. October 29
Owen B (2002) Broadband Mysteries. In: Crandall RW, Alleman JH (eds) Broadband: Should We Regulate High-Speed Internet Access? AEI-Brookings Joint Center for Regulatory Studies
Pérez Amaral T, Alvarez F, Moreno B (1995) Business Telephone Traffic Demand in Spain 1980–1991: An Econometric Approach. Information Economics and Policy 7: 115–134
Rappoport P, Taylor L, Kridel D (2002) The Demand for Broadband: Access, Content, and the Value of Time. In: Crandall RW, Alleman JH (eds) Broadband: Should We Regulate High-Speed Internet Access? AEI-Brookings Joint Center for Regulatory Studies, Washington, D.C.
Sigelman L, Zeng L (1999) Analyzing Censored and Sample-Selected Data with Tobit and Heckit Models. Political Analysis 8(2), The George Washington University Working Papers, December 16
Taylor LD (1994) Telecommunications Demand in Theory and Practice. Kluwer Academic Publishers
Taylor LD (2000) Towards a Framework for Analyzing Internet Demand. Manuscript, University of Arizona
U.S. Department of Commerce, National Telecommunications & Information Administration (2002) A Nation Online: How Americans Are Expanding Their Use of the Internet
Varian H (2002) The Demand for Bandwidth: Evidence from the INDEX Project. Mimeo, University of California, Berkeley
Part 5: Integration of Markets
European Integration and Telecommunication Productivity Convergence

Elisa Battistoni1, Domenico Campisi, Paolo Mancuso
University of Rome "Tor Vergata", Italy
Abstract On May 1st, 2004 ten countries entered the European Union (EU), thus realising the greatest enlargement since its establishment. The aim of this paper is to determine whether in the period 1995–2002, the telecommunication industries of these newly entered countries have been involved in a catching-up process towards the levels reached by the countries already in the EU. To this aim, we have first studied the production function of the telecommunications industries of the EU countries under two different approaches: the stochastic frontier and the data envelopment analysis approach. Then, starting from the estimates of technical efficiencies obtained with the two methods, we have analysed and tested the presence of both σ-convergence and β-convergence processes.
Introduction

One of the EU goals is to become "the most competitive and dynamic knowledge-based economy in the world capable of sustainable economic growth" (CORDIS n.d.). It is generally accepted that ICT investments represent a fundamental indicator of innovation in knowledge-based economies: in particular, the telecommunication industry (i.e. carrier services) represents a major part of ICT expenditures, with a share of roughly 39% of the Western European ICT market (EITO 2002). During these last years the EU has recognised the importance of ICT and has, therefore, directly guided and supported the communication revolution, "setting the pace for opening markets, maintaining equal opportunities for all participants, creating a dynamic regulatory structure, defending consumer interests and even setting technical standards" (European Communities 2005).
1 E-mail: [email protected]
On May 1st, 2004, eight countries of Central and Eastern Europe (Czech Republic, Estonia, Hungary, Latvia, Lithuania, Poland, Slovakia and Slovenia), together with Cyprus and Malta, entered the EU, thus realising the greatest enlargement since its establishment. Bulgaria and Romania should enter the EU within a few years, and Turkey is a candidate to join as well. From an economic point of view, the immediate consequence of the enlargement will be a bigger and more integrated market: this will generate an incentive for economic growth for old as well as for new members of the EU. On the other hand, it is important to understand whether the new countries show a catching-up pattern towards the old members of the EU. In this paper we analyse the productivity of the telecommunication industry of the new EU countries: our goal is to determine whether a convergence process towards the same industries of the old countries exists or not, and the nature of this process. The first step towards this goal is to study the production function of the telecommunication industry; then, starting from the estimated production frontier, we analyse the convergence process. Even if the main interest is to understand whether there is a convergence pattern among all the EU countries, the analysis is also carried out within the subsets of the new countries and of the old members. Therefore, in this paper we consider three sets of countries: the first made up of the new members of the EU (henceforth NM); the second made up of the countries that already belonged to the EU before May 1st, 2004 (henceforth OM); and the third the set of all the European Union countries since May 1st, 2004 (henceforth ALL). We adopt two different methodologies both in the determination of the production function and in the analysis of the convergence processes. The production function is estimated using the Stochastic Frontier (SF) and the Data Envelopment Analysis (DEA) approaches. The convergence process is analysed using both parametric and non-parametric techniques. In this way we are able to compare the results coming from the application of different techniques to the same case-study. The paper is organised as follows. In section 1 the data and variables on which the analysis is based are described. Section 2 addresses the problem of the estimation of the production function, adopting first the SF approach (section 2.1) and then the DEA methodology (section 2.2). In section 3 the convergence problem is analysed with reference both to β-convergence and to σ-convergence: in particular, starting from the values of the efficiencies obtained by the SF approach, β-convergence is studied by using non-parametric techniques (the Wilcoxon test and Kendall's W). On the other hand, when the analysis of the production frontier is conducted with the DEA technique, we use the resulting values of the Malmquist indexes and apply parametric techniques to determine the presence of β-convergence. Comments and conclusions on the study are drawn in section 4.
Data and variables

As previously said, in this paper we consider three sets of countries: the NM, the OM and ALL. Table 1 shows the countries belonging to the different panels. In conducting our analysis we use annual data from the ITU World Telecommunication Indicators 2003 (ITU 2003) for all three sets of countries. The period of observation starts in 1995 – the year of the previous enlargement before 2004 – and stops in 2002, the last year of observation in the ITU database. Data concern input and output measures of the telecommunication industry: in particular, we consider the staff and the annual telecommunication investments as input measures, whereas the output measure is represented by the total telecommunication service revenue. According to the ITU database, "staff" represents the "full-time staff employed by telecommunication network operators in a country for the provision of public telecommunication services. Part-time staff are generally expressed in terms of full-time staff equivalents" (ITU 2003). We use this indicator as representative of the input "labour" in the production function.

Table 1. Countries belonging to the three panels of the study

Name of the panel   Countries in the panel
NM                  Cyprus, Estonia, Latvia, Lithuania, Malta, Poland, Czech Republic, Slovakia, Slovenia, Hungary
OM                  Austria, Belgium, Denmark, Finland, France, Germany, Greece, Ireland, Italy, Luxembourg, the Netherlands, Portugal, Spain, Sweden, Great Britain
ALL                 The sum of OM and NM
On the other hand, the input "capital" is represented by the indicator "annual telecommunication investment": this indicator refers to the "expenditure associated with acquiring the ownership of telecommunication equipment infrastructure (including supporting land and buildings and intellectual and non-tangible property such as computer software). These include expenditure on initial installations and on additions to existing installations" (ITU 2003). The measure of output could have been represented by indicators such as the total number of national calls or the number of subscribers; however, the ITU database shows many gaps in the data for these indicators. We have therefore chosen to represent the output measure by the indicator "total telecommunications service revenue", which, according to the description of the ITU database (ITU 2003), refers to "earnings from the direct provision of facilities for providing telecommunication services to the public (i.e., not including revenues of resellers) and includes revenues from fixed telephone, mobile communications, text (telex, telegraph and facsimile), leased circuits and data communications services". The approach of representing the output by revenue measures rather than by physical quantity measures has typically been used for the telecommunication industry: indeed, this approach "is generally better in those industries where a large number of services are provided, or where there are significant quality differences among the services provided" (Christensen et al. 2003), as is the case for the telecommunications industry. In order to guarantee a high degree of homogeneity among the data coming from different countries in different periods, we have expressed all monetary measures – i.e. the annual telecommunication investment and the total telecommunications service revenue – in deflated national currencies and under the Purchasing Power Parity (PPP) condition (a sketch of this step is given below): thus, we are able to purge the data of the inflation rate and of the differences in the purchasing power of the different currencies. Data on the consumer price index are also from the ITU database, while PPP indexes are from the Penn World Table 6.1 (Heston et al. 2002). It is worth noting that the expression of the monetary measures under the PPP condition gives us a "volume" of total telecommunications service revenues and of annual telecommunication investment. However, for ease of reading, from now on we will indicate as "revenues" the volume of "total telecommunications service revenue", as "investments" the volume of "annual telecommunication investment" and as "staff" the input "total full-time telecommunication staff". Input and output measures are summarised in Table 2a and Table 2b. Table 2a clearly shows that the average number of employees has been decreasing over the whole period of our study. In the meantime, the investments have shown a positive trend: therefore, it is possible to state that the countries in NM have been moving toward a capital-intensive structure. Even if the countries in OM (Table 2b) have shown an increasing trend both in the staff and in the investments, the ratio of investments to staff shows an upward trend over the period of the study: this indicates that the countries in this panel have also been moving toward a capital-intensive structure.
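The homogenisation step just described can be sketched as follows; the column names and numbers are illustrative, not taken from the ITU or Penn World Table files:

```python
import pandas as pd

# Illustrative nominal data in national currency (millions), with a CPI
# (base year 1995 = 100) and a PPP conversion factor per country-year.
df = pd.DataFrame({
    "country": ["Spain", "Spain", "Poland", "Poland"],
    "year": [1995, 2002, 1995, 2002],
    "revenue_nominal": [9000.0, 14000.0, 1300.0, 4600.0],
    "cpi": [100.0, 121.0, 100.0, 168.0],
    "ppp": [0.78, 0.80, 1.9, 1.8],
})

# Deflate to constant 1995 prices, then convert to PPP "volumes".
df["revenue_real"] = df["revenue_nominal"] / (df["cpi"] / 100.0)
df["revenue_ppp"] = df["revenue_real"] / df["ppp"]
print(df[["country", "year", "revenue_ppp"]])
```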
Table 2a. Representation of input and output measures for the NM (Source: ITU World Telecommunication Indicators 2003)

Revenues (10^6)    1995    1996    1997    1998    1999    2000    2001    2002
Mean               1299    1484    1915    2407    2789    3560    4035    4619
Std. dev.          1588    1746    2480    3462    4105    5062    5949    7055
Min                131     136     134     151     172     184     189     194
Max                4875    5313    7598    10974   13064   15768   19031   22969

Staff (10^3)       1995    1996    1997    1998    1999    2000    2001    2002
Mean               16.87   16.40   16.34   15.89   15.34   15.01   14.62   14.28
Std. dev.          21.92   21.86   21.72   21.66   21.45   20.59   19.91   19.28
Min                1.83    1.83    1.75    1.68    1.91    2.05    1.97    1.76
Max                73.27   73.70   72.88   73.02   71.89   69.01   66.25   63.60

Investments (10^6) 1995    1996    1997    1998    1999    2000    2001    2002
Mean               515     595     721     653     639     720     733     760
Std. dev.          633     776     999     780     869     962     928     912
Min                17      18      10      16      19      26      32      39
Max                1707    2082    2656    2079    2718    2597    2481    2370
Table 2b. Representation of input and output measures for the OM (Source: ITU World Telecommunication Indicators 2003)

Revenues (10^6)    1995    1996    1997    1998    1999    2000    2001    2002
Mean               9639    10237   11641   12966   14767   16543   18424   20681
Std. dev.          10839   11249   12561   13866   15356   17030   19954   23816
Min                236     259     284     314     315     308     373     453
Max                33661   35561   37227   39835   43954   48649   60834   77996

Staff (10^3)       1995    1996    1997    1998    1999    2000    2001    2002
Mean               60.74   59.51   60.91   62.67   63.87   65.34   65.68   66.25
Std. dev.          68.97   67.10   69.43   70.61   73.01   74.82   75.17   75.67
Min                0.80    0.82    0.83    0.87    0.88    0.89    1.49    2.49
Max                229.70  222.00  219.20  220.00  221.40  240.70  241.80  242.91

Investments (10^6) 1995    1996    1997    1998    1999    2000    2001    2002
Mean               2553    2693    3092    3011    3565    4405    3869    3540
Std. dev.          2718    3215    3620    3419    3945    4724    4055    3598
Min                66      137     79      101     82      67      84      106
Max                7921    9961    12203   11630   13702   15903   13756   11898
Finally, the output has been constantly increasing both for the set of NM and for the panel OM all over the period of the study.
Estimating the production function

Starting from the data described in the previous section, we estimate the production function by means of two different methods: SF and DEA. The choice of adopting two different approaches to the same study can be explained by considering that both methods have advantages and disadvantages. On the one hand, the SF approach is a parametric technique and therefore has the advantage of providing statistics on the “quality” of the model that represents the production function: using an SF approach allows us to hypothesise different functional forms for the stochastic frontier and to choose among them on the basis of the values assumed by some statistical measure. On the other hand, the DEA approach is a non-parametric technique and therefore has the disadvantage of not providing statistical information about the goodness of the model. Nevertheless, DEA provides much more information than SF, such as the values of the Malmquist indexes and the decomposition of total factor productivity change into its components. Our choice should therefore be understood as an attempt to collect as much information as possible on the particular instance of the problem analysed and to compare the results coming from different techniques applied to the same case-study. The analysis has been conducted using the two software packages developed by Coelli: FRONTIER 4.1 for the SF approach and DEAP 2.1 for the DEA approach. In this section, we will show the results coming from the two methods and compare them.

Analysis using the SF approach

Following the user's guide to FRONTIER 4.1 (Coelli 1996a), we have chosen to adopt the model 1 specification for the estimation of the production function. This model allows estimating the stochastic frontier production function for unbalanced panel data, i.e. for panel data which need not be complete. The model assumes that the DMU² effects are distributed as truncated normal random variables and that they can vary systematically with time. The model specification is the following:

Y_it = x_it β + (V_it − U_it),    i = 1,…,N; t = 1,…,T    (1)
where:
• i is the observed DMU in the panel
• t is the observed time period
² DMU stands for “decision making unit”, which is a more general and appropriate definition than “firm” for our study. In this paper the DMUs are the countries in each analysed panel.
• N is the number of DMUs in the panel
• T is the number of time periods
• Yit is the production of the ith DMU in the tth time period
• xit is a (k,1) vector of input quantities of the ith DMU in the tth time period
• β is a vector of unknown parameters
• Vit are random variables which are assumed to be independently and identically distributed according to the N(0, σV²) distribution and to be independent of the (next defined) Uit

The Uit variables assume the following specification:

U_it = U_i exp(−η(t − T))    (2)
where:
• Ui are non-negative random variables which are assumed to account for technical inefficiencies in production and are assumed to be independently and identically distributed as truncations at zero of the N(µ, σU²) distribution
• η is a parameter to be estimated

The software provides estimates for the β parameters and for η and µ on the basis of the maximum likelihood estimation method. Moreover, the software provides values and t-statistics for two other parameters, σ² and γ, which are defined in Eq. 3. Given the specification of γ, this parameter can assume values only in [0,1]: in particular, we will find γ → 0 when σV² + σU² >> σU², whereas γ will be near 1 when σV² + σU² → σU². In the latter case, σV² will be negligible with respect to σU² and, therefore, the error terms will have little impact on the model.

σ² = σV² + σU²,    γ = σU² / (σV² + σU²)    (3)
To our aims, therefore, the higher the value of γ, the better the model represents the production function. Starting from the described model, the analysis has been conducted separately for the three sets of countries. For each set of countries, we hypothesise a translog production function, defined as follows:

ln(R) = β0 + β1 ln(S) + β2 ln(I) + (1/2) β3 [ln(S)]² + (1/2) β4 [ln(I)]² + β5 ln(S) ln(I) + ε    (4)

where:
• R represents the vector of observations of the output “revenues”
• S represents the vector of observations of the input “staff”
• I represents the vector of observations of the input “investments”
• βi are the coefficients to be estimated
• ε represents the error term
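Before turning to the estimates, the data-generating process assumed by model 1 on the translog form can be made concrete with a small simulation sketch. This is only an illustration: the β values are loosely borrowed from the NM column of Table 3 below, while the sample sizes, input distributions and variance parameters are invented, and no maximum likelihood estimation (FRONTIER 4.1's job) is performed:

import numpy as np

rng = np.random.default_rng(0)
N, T = 10, 8                                        # 10 DMUs, 8 periods
beta = np.array([0.45, 0.47, 0.13, 0.0, -0.11, 0.20])   # illustrative values
eta, sigma_v, sigma_u = -0.043, 0.15, 0.40

lnS = rng.normal(2.7, 1.0, size=(N, T))             # log "staff"
lnI = rng.normal(6.2, 1.0, size=(N, T))             # log "investments"
X = np.stack([np.ones((N, T)), lnS, lnI,            # translog terms of Eq. (4)
              0.5 * lnS**2, 0.5 * lnI**2, lnS * lnI])

U_i = np.abs(rng.normal(0.0, sigma_u, size=N))      # truncation at zero (mu = 0)
t = np.arange(1, T + 1)
U = U_i[:, None] * np.exp(-eta * (t - T))           # time-varying decay, Eq. (2)
V = rng.normal(0.0, sigma_v, size=(N, T))           # symmetric noise of Eq. (1)
lnR = np.einsum('kit,k->it', X, beta) + V - U       # frontier minus inefficiency

print(f"gamma = {sigma_u**2 / (sigma_v**2 + sigma_u**2):.3f}")   # Eq. (3)

With η negative, as estimated for the NM, U_it grows over time, so the simulated efficiencies decline year by year, which is the pattern reported for the SF efficiencies later in Table 6.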
For each panel, the results of the analysis of the translog production function are shown in Table 3.

Table 3. Results of the analysis by the SF approach

NM          Coefficient          Standard error       t-ratio
β0 (a)       0.45338114E+00      0.96881186E-01       0.46797645E+01
β1 (a)       0.47184994E+00      0.16317347E+00       0.28917074E+01
β2           0.13014621E+00      0.82722577E-01       0.15732852E+01
β3           0.20315563E-02      0.16825555E+00       0.12074231E-01
β4          -0.10904278E+00      0.71545326E-01      -0.15241077E+01
β5 (b)       0.19934268E+00      0.94710126E-01       0.21047663E+01
σ2 (c)       0.24564373E+01      0.12810436E+01       0.19175282E+01
γ (a)        0.99220579E+00      0.46180258E-02       0.21485497E+03
µ            is restricted to be zero
η (a)       -0.43446533E-01      0.82014979E-02      -0.52973900E+01

OM
β0 (a)       0.69352042E+00      0.10921801E+00       0.63498725E+01
β1 (a)       0.43182609E+00      0.98956412E-01       0.43638011E+01
β2           0.95641261E-01      0.62997885E-01       0.15181662E+01
β3           0.11279625E+00      0.93599965E-01       0.12050886E+01
β4           0.41708669E-01      0.10701377E+00       0.38975047E+00
β5          -0.60968841E-01      0.95747757E-01      -0.63676521E+00
σ2 (a)       0.19219482E+00      0.20832458E-01       0.92257390E+01
γ (a)        0.93336217E+00      0.19861304E-01       0.46994002E+02
µ (a)        0.84708293E+00      0.13060359E+00       0.64859085E+01
η           -0.54504423E-02      0.61461601E-02      -0.88680449E+00

ALL
β0 (a)       0.76707450E+00      0.11802651E+00       0.64991715E+01
β1 (a)       0.44463601E+00      0.99190082E-01       0.44826660E+01
β2 (b)       0.14781094E+00      0.68378384E-01       0.21616618E+01
β3           0.12159778E+00      0.86162995E-01       0.14112530E+01
β4           0.23405544E-01      0.52510427E-01       0.44573137E+00
β5          -0.78214337E-02      0.63966412E-01      -0.12227407E+00
σ2 (a)       0.37472402E+00      0.32110709E-01       0.11669752E+02
γ (a)        0.94118174E+00      0.12547243E-01       0.75011042E+02
µ (a)        0.11877431E+01      0.22094599E+00       0.53757168E+01
η            0.21378810E-02      0.42129584E-02       0.50745362E+00

(a) Statistical significance at 1% level. (b) Statistical significance at 5% level. (c) Statistical significance at 10% level.
As can be noted, the values of γ are all very close to unity and highly significant. The resulting equations for the stochastic frontier production functions are summed up in Table 4.

Table 4. Equations of the stochastic frontier production functions for the three panels

Countries   Stochastic frontier production function
NM          ln(R) = 0.45 + 0.47 ln(S) + 0.13 ln(I) + 0.20 ln(S) ln(I) + ε
OM          ln(R) = 0.69 + 0.43 ln(S) + 0.10 ln(I) + ε
ALL         ln(R) = 0.77 + 0.44 ln(S) + 0.15 ln(I) + ε
Analysis using the DEA approach

The same analysis described in the previous section has been carried out using the DEA approach, a very commonly used non-parametric technique. In adopting this approach it is not necessary to impose any functional form on the production frontier, nor any distributional form on the error terms (Carrington et al. 2002). DEA measures and compares a set of DMUs that perform the same task starting from different quantities of input and reaching different quantities of output: in this approach, the performance of each DMU is a weighted output-to-input ratio and depends on the comparison of its input/output combination with the input/output combinations of all the other DMUs in the panel. The performance of a specific DMU can be measured from different points of view: more precisely, it is possible to define an input-oriented performance measure or an output-oriented one. In the first case, the question to be answered is “by how much can input quantities be reduced without changing the output produced?”; in the second it is “by how much can output quantities be increased without changing the input utilised?”. In each case, DEA creates a non-parametric envelopment frontier over all of the input/output observations of the DMUs in the panel and quantifies the efficiency of each DMU by measuring its distance from that frontier. Adopting the input orientation instead of the output orientation does not change the efficiency ranking of the DMUs: in other words, if DMU “A” is more efficient than DMU “B” under an input-oriented approach, the same is true under an output orientation. Moving from an output to an input orientation only changes the values of the efficiencies when not operating under the assumption of constant returns to scale (CRS). It must be noted that the CRS assumption “is only appropriate when all DMUs are operating at an optimal scale” (Coelli 1996b), which is not always true. In our study we have chosen to adopt an output orientation under the more general hypothesis of variable returns to scale (VRS): the DEA then identifies a convex hull of intersecting planes such that all observations lie below or on the frontier itself.
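The output-oriented VRS measure just described reduces to a small linear programme for each DMU: maximise the output expansion factor φ subject to the envelopment constraints, with technical efficiency given by 1/φ. The sketch below is a minimal illustration with made-up data (it is not the chapter's DEAP 2.1 computation, and the numbers are arbitrary):

import numpy as np
from scipy.optimize import linprog

def dea_vrs_output(X, Y, o):
    # Output-oriented VRS DEA for DMU o. X: (n, m) inputs, Y: (n, s) outputs.
    # Decision variables: z = [phi, lambda_1, ..., lambda_n].
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[-1.0, np.zeros(n)]              # linprog minimises, so use -phi
    A_in = np.c_[np.zeros((m, 1)), X.T]       # sum_j lambda_j x_ji <= x_oi
    A_out = np.c_[Y[o][:, None], -Y.T]        # phi*y_or <= sum_j lambda_j y_jr
    A_ub = np.r_[A_in, A_out]
    b_ub = np.r_[X[o], np.zeros(s)]
    A_eq = np.r_[0.0, np.ones(n)][None, :]    # VRS: sum_j lambda_j = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]                           # phi >= 1; efficiency = 1/phi

# Toy panel: inputs = (staff, investments), output = revenues
X = np.array([[16.9, 515.], [12.0, 400.], [20.0, 700.], [8.0, 250.], [15.0, 600.]])
Y = np.array([[1299.], [1100.], [1500.], [900.], [1200.]])
print([round(1.0 / dea_vrs_output(X, Y, o), 3) for o in range(len(X))])

Under the VRS hypothesis every DMU is compared only with convex combinations of the observed peers, which is what makes the measure appropriate when not all DMUs operate at an optimal scale.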
In our study we have used the software DEAP 2.1, developed by Coelli (Coelli 1996b): in particular, we have adopted a Malmquist DEA analysis³ in order to measure the Total Factor Productivity change (TFPc) and to decompose it into its two components: technical change (Tc) and technical efficiency change (TEc). Total factor productivity is the ratio of total output to total input, and it is widely recognised as a comprehensive measure of productive efficiency (Christensen et al. 2003). For this kind of analysis, the software DEAP 2.1 provides the following outputs:

• the distances from the frontier, needed for the subsequent determination of the Malmquist indexes. In particular, the software determines four kinds of distances for each DMU in each year:
  1. the distance from the previous period DEA frontier, under the CRS assumption
  2. the distance from the current period DEA frontier, under the CRS assumption
  3. the distance from the next period DEA frontier, under the CRS assumption
  4. the distance from the current period frontier, under the VRS assumption
• starting from the distances from the frontier, the software provides five Malmquist indexes – for each year and for each DMU – which are the following:
  1. technical efficiency change (referred to a CRS technology)
  2. technological change
  3. pure technical efficiency change (referred to a VRS technology)
  4. scale efficiency change
  5. total factor productivity change

It must be noted that all the Malmquist indexes of a specific time period refer to the previous year: therefore, the Malmquist indexes resulting from DEAP 2.1 start from the second period of the analysis. The results obtained from the DEA analysis are shown in Table 5. Year by year and for the three panels, Table 5 shows the average values of TFPc and of its two components, TEc and Tc. A value of TFPc greater than unity indicates that an improvement in productivity has occurred from one year to the next, whereas a value of TFPc less than unity denotes a decrease in productivity. The components of TFPc, i.e. TEc and Tc, can move in opposite directions. The data shown in Table 5 are represented graphically in Fig. 1. Fig. 1a shows that the TFPc index for the NM was greater than that for the OM from 1996 up to 2001. Since the TFPc represents the variation in output not explained by a variation in input (Gouyette and Perelman 1997), in these periods the NM experienced a better productivity change than the OM. Moreover, for the NM the TFPc was greater than unity for the whole period of the study, whereas the TFPc of the panel OM fell below unity in 2000.
³ The Malmquist DEA analysis provides the same results independently of the particular choice between the CRS and VRS assumptions (Coelli 1996b). Therefore, our choice of adopting a VRS hypothesis has its importance only from a theoretical point of view.
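As background for the indexes listed above (stated here for convenience, not quoted from the chapter), the output-oriented Malmquist TFP index between periods t and t+1 in the usual Färe et al. decomposition, which the four distance functions above feed into, can be written in LaTeX as:

M_{t,t+1}
  = \left[
      \frac{D^{t}(x^{t+1},y^{t+1})}{D^{t}(x^{t},y^{t})}\,
      \frac{D^{t+1}(x^{t+1},y^{t+1})}{D^{t+1}(x^{t},y^{t})}
    \right]^{1/2}
  = \underbrace{\frac{D^{t+1}(x^{t+1},y^{t+1})}{D^{t}(x^{t},y^{t})}}_{\mathrm{TEc}}
    \times
    \underbrace{\left[
      \frac{D^{t}(x^{t+1},y^{t+1})}{D^{t+1}(x^{t+1},y^{t+1})}\,
      \frac{D^{t}(x^{t},y^{t})}{D^{t+1}(x^{t},y^{t})}
    \right]^{1/2}}_{\mathrm{Tc}}

where D^s(x, y) denotes the distance function measured against the period-s frontier; values of M greater than unity indicate a TFP improvement, and TEc × Tc reproduces the decomposition reported in Table 5.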
Table 5. TFPc, TEc and Tc: average values for each set of countries

          1996    1997    1998    1999    2000    2001    2002
NM
  TFPc    1.007   1.001   1.006   1.009   1.004   1.004   1.005
  TEc     1.008   0.975   1.022   1.008   1.004   1.003   1.008
  Tc      0.999   1.027   0.984   1.002   1.000   1.001   0.997
OM
  TFPc    1.003   1.001   1.003   1.000   0.997   1.005   1.005
  TEc     1.010   0.983   1.006   0.999   0.996   0.989   0.993
  Tc      0.993   1.018   0.997   1.001   1.001   1.016   1.013
ALL
  TFPc    1.004   1.001   1.003   1.004   0.999   1.004   1.006
  TEc     1.007   0.967   1.026   1.004   0.998   1.015   0.991
  Tc      0.997   1.035   0.977   1.000   1.001   0.989   1.015
When the TFPc is divided into its two components (Figs. 1b and 1c), it is possible to note that the TEc for the NM has always been greater than unity with the exception of the value for 1997: this means that the various countries within this panel were moving toward the efficient frontier in 1996 and in the period 1998–2002. The TEc of the OM assumes a value less than unity in 1997 and from 1999 onward: in these years, therefore, the OM were moving away from the efficient frontier. Finally, the Tc for the NM was greater than that of the OM in 1996, 1997 and 1999; moreover, in 1997, 1999, 2000 and 2001 it was also greater than unity. In these years, therefore, a catching-up process of the NM to the OM is likely to have taken place. Linking the information on the three indexes for the NM: in the period 1996–1997 Tc increased, but this was offset by a parallel reduction in TEc, the net result being a reduction in TFPc for this panel. Similarly, the growth in TFPc of the NM in the period 1997–1999 can be explained by a rapid decrease in Tc, which gave the individual countries within this panel the possibility to move toward the production frontier and, therefore, to improve their TEc. Analogous considerations can be made for the panels OM and ALL.
Fig. 1a. Comparison of TFPc
Fig. 1b. Comparison of TEc
Fig. 1c. Comparison of Tc

Fig. 1. Comparison of TFPc, TEc and Tc among the three sets of countries (panels show NM, OM and ALL, 1996–2002)
Studying the convergence process

Starting from the values of efficiencies resulting from the analysis of the production functions under the two approaches (SF and DEA), we have tested for the presence of a convergence process among the countries in the different panels and studied its nature.
Indeed, to compare the results of applying SF and DEA to the same case-study we cannot limit our attention to the values of technical efficiencies: we must also verify whether the consequences for convergence processes are analogous. This means that, starting from the values of technical efficiencies obtained by SF and DEA, we expect to reach the same conclusions about the existence of a convergence process. Moreover, just as for the study of the production function, here too we have used different methodologies: in particular, non-parametric techniques are applied to the efficiency values obtained by the SF approach, and parametric techniques to the results obtained by the DEA approach. In this way, we are able to compare the results deriving from the application of parametric and non-parametric techniques. Each kind of technique has both advantages and disadvantages: on the one hand, parametric techniques have the advantage of providing statistical measures of the goodness and reliability of the results; on the other hand, non-parametric techniques have the advantage of relying on fewer assumptions about the population from which the sample data are collected. In this section, we will show the results coming from parametric and non-parametric techniques applied to the study of convergence processes, and we will compare them.

Analysis based on the results of the SF approach

The software FRONTIER 4.1 provides predictions of individual DMU technical efficiencies from the estimated stochastic production frontiers, defined as

Eff_i = E(Y_i* | U_i, x_i) / E(Y_i* | U_i = 0, x_i)    (5)
where Yi* is the production of the ith DMU. Obviously, the value of Effi falls in the range [0,1]. Table 6 shows the average values of the technical efficiencies provided by FRONTIER 4.1 for the three sets of countries in the different time periods. Our aim has been to determine whether a convergence process between the telecommunication industries of the NM and of the OM exists, and what its characteristics are: we have therefore studied both the β-convergence and the σ-convergence processes. Looking at the results in Table 6, it is possible to see that the values of the standard deviations have been constantly increasing over the whole time period both for the NM and for the OM: therefore, there has not been any σ-convergence process within these two panels. On the contrary, the values of the standard deviations have been decreasing for the panel ALL, thus highlighting the presence of σ-convergence within this set of countries: it is therefore possible to state that the new members of the EU σ-converged to the old members in the period of our
study. In particular, the σ-convergence rate (i.e. the annual rate of decrease in the standard deviations) of the panel ALL has been 0.002%.

Table 6. Technical efficiencies for the three panels (SF)

                1995     1996     1997     1998     1999     2000     2001     2002
NM
  Mean          0.4758   0.4643   0.4529   0.4416   0.4304   0.4193   0.4084   0.3977
  Std. dev.     0.3063   0.3107   0.3148   0.3187   0.3224   0.3259   0.3291   0.3321
  Annual average convergence rate: -1.16%
OM
  Mean          0.4350   0.4335   0.4319   0.4303   0.4288   0.4272   0.4257   0.4241
  Std. dev.     0.2664   0.2668   0.2671   0.2675   0.2678   0.2682   0.2685   0.2689
  Annual average convergence rate: -0.13%
ALL
  Mean          0.3360   0.3366   0.3371   0.3377   0.3383   0.3389   0.3395   0.3401
  Std. dev. (10^-6)  25352.3  25352.1  25351.8  25351.5  25351.1  25350.6  25350.1  25349.6
  Annual average convergence rate: 0.002%
On the contrary, since the standard deviations (σ) of the distributions of technical efficiencies of the panels NM and OM have been constantly increasing over time, the gap among the performances of the various countries within these panels has been growing year after year. However, this does not necessarily mean that there has not been any kind of convergence among the technical efficiencies of these countries: indeed, it is still possible to find a β-convergence process. To study the presence of β-convergence processes, we have decided to use non-parametric techniques: as already mentioned, their main advantage is that they rely on fewer assumptions about the population from which the sample data are collected than parametric techniques do. In particular, following Koski and Majumdar (Koski and Majumdar 2000), we have adopted the Wilcoxon matched-pairs signed rank test and Kendall's W to determine whether there is β-convergence or not. The formulation of the Wilcoxon matched-pairs signed rank test is as follows:

z = (T − µ_T) / σ_T    (6)

where

T   = min{T+, T−}
µ_T = n(n + 1) / 4
σ_T = √( n(n + 1)(2n + 1) / 24 )    (7)
In this formulation, T+ represents the sum of positive ranks, whereas T− represents the sum of negative ranks. The null hypothesis of the test is that the countries have the same distribution of rankings in the two considered years, i.e. that there is no β-convergence. For large samples (n>15) the null hypothesis can be rejected if z < −z(α/2) or z > z(α/2); for small samples, on the contrary, the null hypothesis is rejected if the value of T is lower than a critical value (Tcritical). The Tcritical values referring to the number of DMUs in our small samples (NM and OM) and to the two-tailed test are shown in Table 7, whereas the results of the Wilcoxon tests – compared to the year 1995 – for the three panels are shown in Table 8.

Table 7. Tcritical values for the Wilcoxon two-tailed test applied to small samples

Set of countries    n     α      Tcritical
NM                  10    0.01   3
OM                  15    0.01   16

Table 8. Results of the Wilcoxon matched-pairs signed rank test

Year    T (NM)    T (OM)    Wilcoxon z (ALL)
1996    0         0         -4.3724
1997    0         0         -4.3724
1998    0         0         -4.3724
1999    0         0         -4.3724
2000    0         0         -4.3724
2001    0         0         -4.3724
2002    0         0         -4.3724
It is worth noting that the sets of the NM and of the OM fall into the “small samples” case (n=10 for NM and n=15 for OM): in these cases, therefore, we have compared T to Tcritical to evaluate whether the null hypothesis should be rejected; in the case of the panel ALL (n=25), on the contrary, the decision has been taken based on the value of z. In all cases the null hypothesis is rejected at the 1% level (α = 0.01): therefore, it is possible to state that the panels NM, OM and ALL β-converged over the period 1995–2002. The same analysis of β-convergence has been undertaken with Kendall's W test, which has the following formulation:
W = 12 · [ Σ_{t=1}^{m} Σ_{i=1}^{n} (R_it − R̄_i)² ] / [ m²(n³ − n) ]    (8)
where
• m is the number of rankings
• n is the number of countries
• R_it is the ranking of country i in period t
• R̄_i is the mean ranking of country i over the period of the study
The value of W varies in [0,1] and represents the degree of mobility in the distribution of rankings over the period of the analysis: in particular, the greater the mobility in the distribution (presence of β-convergence), the smaller the value of Kendall's W. The values obtained for Kendall's W test for the three panels are shown in Table 9. In all cases Kendall's W assumes the value of zero: therefore, this test too allows us to state that there has been a β-convergence process within each set of countries over the period of the study.

Table 9. Results of the Kendall's W test

                    Kendall's W value
New members         0%
Old members         0%
All EU countries    0%
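Both tests are straightforward to reproduce. The sketch below uses fabricated efficiency scores in which every country improves, mimicking the one-signed rank differences behind T = 0; the Wilcoxon statistic comes from scipy, and Kendall's W is implemented directly from the chapter's Eq. (8):

import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(42)
eff_1995 = rng.uniform(0.1, 0.9, size=25)     # made-up efficiencies, 25 countries
eff_2002 = eff_1995 * 1.02                    # every country improves slightly

# Wilcoxon matched-pairs signed rank test (Eqs. 6-7): scipy reports
# T = min(T+, T-) and the two-sided p-value
T_stat, p = wilcoxon(eff_1995, eff_2002)
print(f"T = {T_stat:.1f}, p = {p:.4g}")       # one-signed differences -> T = 0

def kendall_w(ranks):
    # ranks: (m, n) array of the rankings of n countries in m periods, Eq. (8)
    m, n = ranks.shape
    dev = ranks - ranks.mean(axis=0)          # R_it minus country i's mean ranking
    return 12.0 * (dev**2).sum() / (m**2 * (n**3 - n))

ranks = np.vstack([eff_1995, eff_2002]).argsort(axis=1).argsort(axis=1) + 1.0
print(f"Kendall's W = {kendall_w(ranks):.3f}")   # reproduces the W = 0 of Table 9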
Summing up, from the study of the β- and σ-convergence processes we have obtained the results shown in Table 10.

Table 10. Results for the study of convergence

                    Convergence process
New members         β-convergence
Old members         β-convergence
All EU countries    σ-convergence, β-convergence
For the sets of NM and OM we can only find a β-convergence process: this means that, even if the standard deviations of these distributions have not decreased over the period of the analysis, within the two panels the originally poorly performing countries have improved their performance at such a rate that they have overtaken the formerly well performing ones. For the panel ALL, on the contrary, we find both a β-convergence and a σ-convergence process: within this panel, therefore, not only have the originally poorly performing countries caught up with the originally well performing ones (σ-convergence), but their performance has also improved at such a rate that they have overtaken them (β-convergence) (Koski and Majumdar 2000). In all cases, our analysis has underlined the presence of a catching-up process both within the panels NM and OM and among all the EU countries.
Analysis based on the results of the DEA approach

The software DEAP 2.1 also provides estimates of the technical efficiencies of the DMUs in the different panels: as previously said, these estimates refer both to the CRS and to the VRS assumption. In our analysis we have chosen the VRS assumption, which represents the more general situation. The results obtained by DEAP 2.1 are shown in Table 11, whereas Table 12 compares the results obtained with the two methodologies for each set of countries.

Table 11. Technical efficiencies under the VRS assumption for the three panels (DEA)

                1995     1996     1997     1998     1999     2000     2001     2002
NM
  Mean          0.9900   0.9938   0.9897   0.9886   0.9879   0.9882   0.9865   0.9917
  Std. dev.     0.0129   0.0077   0.0159   0.0162   0.0169   0.0188   0.0156   0.0126
  Annual average convergence rate: -6.86%
OM
  Mean          0.9923   0.9961   0.9892   0.9899   0.9891   0.9909   0.9902   0.9873
  Std. dev.     0.0096   0.0068   0.0109   0.0111   0.0097   0.0102   0.0120   0.0172
  Annual average convergence rate: -12.32%
ALL
  Mean          0.9850   0.9877   0.9823   0.9825   0.9807   0.9850   0.9867   0.9844
  Std. dev.     0.0151   0.0123   0.0164   0.0156   0.0150   0.0153   0.0143   0.0163
  Annual average convergence rate: -2.25%
The results in Table 11 show that the technical efficiencies resulting from the DEA approach have neither constantly increased nor constantly decreased over the period of the study for any of the three panels: the trend of technical efficiencies over time has changed sign more than once. This is the first notable difference from the results obtained using the SF approach (see Table 12): in that case there was a downward trend in the values of technical efficiencies over time for the panels NM and OM and an upward trend for the panel ALL. Moreover, as Table 12 shows, the mean values of technical efficiencies for the three panels are consistently higher under the DEA approach than under the SF approach: parametric techniques generally produce lower efficiency scores “because the estimated frontier binds the data less tightly than DEA” (Carrington et al. 2002). This generally also implies that fewer efficient DMUs are found when using parametric techniques. The last difference is that the results derived from the DEA analysis (Table 11) show no presence of σ-convergence: the standard deviations of technical efficiencies for the three sets of countries have increased at average annual rates of 6.9%, 12.3% and 2.3% respectively for the panels NM, OM and ALL. On the contrary, in the analysis by the SF approach we had noticed the presence of a σ-convergence process for the panel ALL.
Table 12. Comparison of results from SF and from DEA

            NM                  OM                  ALL
Year        SF       DEA        SF       DEA        SF       DEA
1995        0.4759   0.9900     0.4350   0.9923     0.3360   0.9850
1996        0.4643   0.9938     0.4335   0.9961     0.3366   0.9877
1997        0.4529   0.9897     0.4319   0.9892     0.3372   0.9823
1998        0.4416   0.9886     0.4303   0.9899     0.3377   0.9825
1999        0.4304   0.9879     0.4288   0.9891     0.3383   0.9807
2000        0.4193   0.9882     0.4272   0.9909     0.3389   0.9850
2001        0.4084   0.9865     0.4257   0.9903     0.3395   0.9867
2002        0.3977   0.9917     0.4241   0.9873     0.3401   0.9844
Mean        0.4363   0.9896     0.4296   0.9906     0.3380   0.9843
Std. dev.   0.0274   0.0023     0.0038   0.0026     0.0014   0.0023
However, even in this case it is still possible that the countries belonging to the three panels β-converged over the period of our study. In order to study the β-convergence process, this time we adopt the Malmquist indexes resulting from the software DEAP 2.1. First of all, we determine the coefficient of correlation between the initial level of technical efficiency and the subsequent value assumed by the TFPc: indeed, a negative correlation between these two indexes attests that backward countries assimilate technology spill-overs into higher growth rates, thus implying that a catching-up process has taken place (Gouyette and Perelman 1997). Then, following Bernard and Jones (1996), we also test for the presence of a β-convergence process with the following least-squares regression:

y_{i,t} = α + β Y_{i,1} + ε_{i,t}    (9)

where:
• y_{i,t} is the geometric mean of the Malmquist indexes over all time periods for country i
• Y_{i,1} is the initial level of technical efficiency (period 1) for country i
• α and β are parameters to be estimated
• ε_{i,t} is the error term for country i

A negative sign for the coefficient β implies the presence of a β-convergence process among countries. The results for the coefficient of correlation are shown in Table 13, those of the least-squares regression in Table 14. Table 13 shows that all the coefficients of correlation have negative values: this means that a β-convergence process has occurred both within the NM and the OM and among the whole of the EU countries (ALL).
Table 13. Values of the coefficient of correlation for the three sets of countries

       Coefficient of correlation
NM     -0.564
OM     -0.408
ALL    -0.529

These results are confirmed by the least-squares regression method: indeed, Table 14 shows that all the β coefficients have negative signs. Moreover, Table 14 highlights that the new members of the EU show a β-convergence pattern towards the old members.

Table 14. Results from the least-squares regression

NM          Coefficient    Std. error    t-value
  α (a)      1.183          0.092         12.85226
  β (b)     -0.179          0.093         -1.93032
  df = 8, R² = 0.318
OM
  α (a)      1.220          0.135          9.018
  β         -0.220          0.136         -1.613
  df = 13, R² = 0.167
ALL
  α (a)      1.171          0.056         20.795
  β (a)     -0.171          0.057         -2.989
  df = 23, R² = 0.280

(a) Statistical significance at 1% level. (b) Statistical significance at 10% level.
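Equation (9) itself is a one-line least-squares fit. A minimal sketch of the computation behind Table 14, with made-up Malmquist indexes and initial efficiencies standing in for the DEAP 2.1 output, could look as follows:

import numpy as np

rng = np.random.default_rng(7)
n_countries = 10
eff0 = rng.uniform(0.95, 1.0, size=n_countries)        # initial efficiency Y_{i,1}
malm = rng.normal(1.003, 0.01, size=(n_countries, 7))  # Malmquist indexes 1996-2002

y = np.exp(np.log(malm).mean(axis=1))     # geometric mean of Malmquist indexes
Z = np.c_[np.ones(n_countries), eff0]     # regressors: constant and Y_{i,1}
(alpha, beta), *_ = np.linalg.lstsq(Z, y, rcond=None)
print(f"alpha = {alpha:.3f}, beta = {beta:.3f}")   # beta < 0 suggests convergence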
Conclusions

In this study we have divided all the countries that at present belong to the EU into three panels: the set of countries that have just entered the EU (“New Members”, NM), the set of countries that already belonged to the EU before May 1st, 2004 (“Old Members”, OM), and the whole of the EU countries (panel ALL). For each panel we have analysed the production function in order to determine whether there has been some convergence process among the telecommunications industries of the different countries. In developing our analysis we have adopted two methods: on the one hand, a stochastic frontier approach (SF), which has the advantage of providing statistical indications on the goodness of the estimation; on the other, a non-parametric methodology, DEA, which has the advantage of requiring neither functional nor distributional specifications for the data and the production frontier. Both techniques have been applied to the whole of the EU countries (EU25) and to the two subsets of NM and OM. The analysis using the SF approach resulted in the model specification of the production function and in the estimation of individual technical efficiencies for each country in each year of the study. With the DEA approach, in addition to the estimation of technical efficiencies, we have obtained the TFP change index for each country and its decomposition into technical efficiency change (TEc) and technical change (Tc). The comparison of the results coming from the two methods shows that technical efficiency measures are consistently higher when using the DEA approach: this can be explained by considering that the production frontier determined by DEA binds the data more tightly than the SF approach does. Starting from the results obtained from the estimation of the production functions, we have analysed the convergence process, both within each subset of countries (i.e. the NM and the OM) and between the NM and the OM. Here too the two methodologies have provided different results, but similar paths. Indeed, under both the SF and the DEA approach we have found no presence of σ-convergence for the panels NM and OM, whereas we have highlighted the presence of β-convergence processes within each of these two panels: this means that in each panel, even if the standard deviation of the distribution of technical efficiencies has been growing with time, the countries that were previously lagging behind have been improving their performance faster than the initially more efficient ones and have overtaken them. The same holds for the panel ALL under the DEA approach. For this panel, moreover, we have also highlighted the presence of a σ-convergence process under the SF approach, underlining that not only have the originally poorly performing countries caught up with the originally well performing ones (σ-convergence), but their performance has also improved at such a rate that they have overtaken them (β-convergence). Therefore, both the countries in the NM and those in the OM have been characterised by convergence processes: in other words, within each of the two panels the countries with low initial levels of productivity have shown a higher growth rate of TFP. In addition, we can also state that there has been a catching-up process between the NM and the OM of the EU. Moreover, having applied both parametric and non-parametric techniques to the study of the β-convergence process, the paper also allows us to compare the results obtained from these two kinds of techniques: from this point of view, too, we have obtained similar outcomes about the presence of a β-convergence process. The results of our analysis therefore lead to similar conclusions both for the estimation of technical efficiency levels (although with different magnitudes) and for the presence of catching-up processes, independently of the specific methodology utilised.
References

Bernard AB, Jones CI (1996) Comparing apples to oranges: productivity convergence and measurement across industries and countries. The American Economic Review 86
Carrington R, Coelli T, Groom E (2002) International benchmarking for monopoly price regulation: the case of Australian gas distribution. Journal of Regulatory Economics 21: 191–216
Christensen LR, Schoech PE, Meitzen ME (2003) Telecommunications productivity. In: Madden G (ed) Traditional Telecommunications Networks. Cheltenham, UK
Coelli T (1996a) A guide to FRONTIER version 4.1: a computer program for stochastic frontier production and cost function estimation. CEPA Working Paper 96/07. http://www.uq.edu.au/economics/cepa/frontier.htm, last access March 14th 2005
Coelli T (1996b) A guide to DEAP version 2.1: a data envelopment analysis (computer) program. CEPA Working Paper 96/08. http://www.uq.edu.au/economics/cepa/deap.htm, last access March 14th 2005
CORDIS (n.d.) European Innovation Scoreboard 2003. http://trendchart.cordis.lu/scoreboard2003/index.html, last access March 14th 2005
EITO (2002) European Information Technology Observatory 2002, 10th edn. http://www.eito.com/download/EITO_2002-ICT_market_EITO.pdf, last access March 14th 2005
European Communities (2005) Overviews of the European Union activities: Information Society. http://europa.eu.int/pol/infso/overview_en.htm, last updated June 2005, last access March 14th 2005
Gouyette C, Perelman S (1997) Productivity convergence in OECD service industries. Structural Change and Economic Dynamics 8: 279–295
Heston A, Summers R, Aten B (2002) Penn World Table 6.1. Center for International Comparisons at the University of Pennsylvania
ITU (2003) World Telecommunication Indicators. Win*STARS Version 4.2, Copyright 2003
Koski HA, Majumdar SK (2000) Convergence in telecommunications infrastructure development in OECD countries. Information Economics and Policy 12: 111–131
Investment by Telecommunications Operators and Economic Growth – A Fenno-Scandinavian Perspective

Tom Björkroth¹
Turku School of Economics and Business Administration, Institute for Competition Policy Studies, Finland
Abstract

This paper examines the effect of investments by telecommunications operators on the growth rate of real GDP, by using time-series data for Finland, Sweden and Norway between the years 1970 and 2001. This study makes use of the production function model provided by Ram (1986). The explanatory variables are chosen in line with stylised facts, and in order to isolate the effect of investment on telecommunications infrastructure. The estimation is based on pooled time-series estimation with corrections for column dependency. The results do not, thus far, provide any evidence that the investment expenditures of telecommunications operators have significantly altered the pace of economic growth. This result is in some contrast to previous studies using data from Central and Eastern Europe and from OECD countries. Moreover, comparing these results with a similar estimation of Finnish data only, the robust relationship between the share of private investment in output and economic growth is blurred. In exchange, the results support the importance of public capital formation to economic growth. The main conclusion of the paper is that in relatively developed economies, the effect of telecommunications on growth may arise from investments in user segments, rather than from investment expenditure on the supply side.
Introduction

During the past 15 years, many economists have focused on the role of infrastructure in enhancing the performance of an economy. A majority have reported a significant positive relationship between spending on infrastructure and GDP growth. Much of this debate and research originated as a response to the contribution of
¹ E-mail: [email protected]
Aschauer (1989) who argued that declining rates of public capital investment were among the major forces behind the decline in US productivity during the 1970s. Munnell (1990) shared Aschauer’s view on this point and, with a few exceptions, the studies focusing on the importance of public capital formation to economic growth have been unanimous, as far as the positive elasticities of public capital investment are concerned. More recently, e.g. Mamatzakis (1997) explores this relationship with reference to the Greek economy.2 As far as exceptions are concerned, Tatom (1991) argued that, after correcting for the most important methodological shortcomings of the earlier analyses, public capital has no significant effect on private sector productivity and thus does not accelerate economic growth. The study of Björkroth and Kjellman (2000) shows similar results for Finnish data. However, in that study, testing for precedence revealed that the direction of causation runs from public capital stock to productivity.3 An alternative approach to the one above, and more relevant to this study, relies on the fact that specific investments, such as those in infrastructure (or in some specific parts of it), machinery and equipment, are more strongly connected with productivity and economic growth than other forms of investment.4 This view finds strong empirical support in Canning (1999) and in Madden and Savage (1998, p.174). To quote: “In particular, investment in telecommunications infrastructure has the potential to improve national productivity and economic growth.”
In short, the gains from investments in telecommunications infrastructure reported in literature can be summarised as in Table 1.
² In fact, the focus of Mamatzakis is set on a broad definition of infrastructure, instead of measuring the effect of public capital as a whole (pp. 4–5).
³ There are a number of studies that have reported similar results. In their recent Australian study, Otto and Voss (1998) do not find any evidence of excessive returns from public investments. Studies focusing on infrastructure also show results along these lines. Crihfield and Panggabean (1995) reported a very modest impact of public infrastructure on factor markets and even smaller effects on growth in per capita income. Holz-Eakin and Schwartz (1995) offer little support for the possibilities of infrastructure spending to boost productivity, and Ford and Poret (1991) also report results from the United States offering no evidence that infrastructure and productivity are related. However, they also report the presence of some transnational evidence that proves the opposite. They do not find their regression results robust enough to support a policy recommendation of accelerating infrastructure investment.
⁴ See DeLong and Summers (1991).
Table 1. Gains from investments in telecommunications infrastructure

Gains from investments in telecommunications infrastructure in terms of:

Efficiency
1. Improvement in marketing information
2. Accelerated diffusion of information and knowledge

Cost reduction
3. Reductions in costs of transaction and transportation

Demand issues
4. Increasing the demand for goods and services in the production of investment goods
5. Satisfying the existing excess demand, which is larger than for other types of capital
6. New business opportunities in both production and services sectors

Sources: 1 to 3: Madden and Savage (1998); 4: Röller and Waverman (2001); 5: Canning (1999); 6: Björkroth (2003).
Empirical investigations of how infrastructure investments affect economic growth are mostly based on cross-sectional analysis and on the use of broad aggregates of capital. Due to the lack of appropriate data, very few contributions provide results from time-series analysis. However, with a limited data set of 20 to 30 observations for each country, Ram (1986) makes a significant contribution in this context, and finds that results from time-series analysis are in line with the results obtained from panel data. Employing time-series enables us to use comparatively soft data and, by doing this for a number of countries, we may avoid some of the loss of generality that comes from a case study of a single economy. In short, making use of pooled time-series in this study allows us to enjoy the merits of both time-series and transnational considerations. The aim of this paper is to formulate and estimate a model which enables us to determine the effect of telecommunications networks on economic performance, based on pooled time-series data. The first research question is therefore: “Can we, with reference to time-series data, formulate a general production function model for the Fenno-Scandinavian economies, and isolate the effects of investments in telecommunications networks?” With reference to previous studies, a natural follow-up research question is: “Do results for Fenno-Scandinavia's relatively high-income economies differ from those of previous studies?” If this is the case, we should be able to formulate some tentative conclusions regarding the dynamics and the relative importance of telecommunications investment to economic growth. This paper is organised as follows: the following section deals with results from previous studies. In Section 3, we formulate a model for impact assessment. Section 4 provides a detailed description of the data and variables, and Section 5 presents the estimation results and model diagnostics. Section 6 provides some concluding remarks.
Previous related studies

In addition to Madden and Savage (1998), the positive effects of telecommunications infrastructure on economic growth or on productivity are documented by Crandall (1997), Nadiri and Nandi (1998), Canning (1999), Röller and Waverman (2001), Cieslick and Kaniewska (2002) and Björkroth (2003). Madden and Savage (1998) focus on the impact of telecommunications investment on economic growth in the Central and Eastern European (CEE) countries. They employ ‘Barro regressions’ and find evidence that these investments stimulate growth in these relatively low- or middle-income countries. However, despite some positive estimates, Crandall (1997) argues that the effect of new telecommunications infrastructure on economic growth is not strong enough to prove the conclusions of large externalities right. Nadiri and Nandi (1998) make use of a division of infrastructure capital into private and public. Their results suggest that there is a strong relationship between the growth of communications infrastructure and the growth of output, both at the industry level and at the aggregate level. Canning (1999) finds that investments in telephones are substantially more productive than investment on average, which implies large externalities. Röller and Waverman (2001) focus on a number of Western European countries, and their simultaneous-equation approach provides evidence that investment in telecommunications infrastructure generates economic growth. Björkroth (2003) employs time series for Finland from 1960 to 1998, and finds that the marginal product of telecommunications investment exceeds that of other private investment. Cieslick and Kaniewska (2002) focus on Polish regions and deliver the same message as Madden and Savage (1998), namely that economic growth depends upon investment in telecommunications infrastructure. A common feature of nearly all the studies mentioned above is that they use panel or cross-sectional data. Prior to many of these studies, however, Ram (1986) compared his results from panel data with those he obtained from time-series data. He argues that time-series, if available, yield a good overall view of the effects of the various components of economic growth.

Direction of causation

Many theories about economic growth rely on the fact that the direction of causation runs from investment to economic growth. Munnell (1992) summarised many of the points raised concerning the reliance on this one-way causation, and found it to be a legitimate hypothesis. She noted, however, that further capital investments, both private and public, go hand-in-hand with economic activity, and she argues that this mutual influence may exist without harming the coefficient of capital inputs.
Table 2. Summary of previous studies on the effect of telecommunications infrastructure

Madden and Savage (1998)
  Data: 27 CEE countries, annual data 1990–95
  Dependent variable(s): 1) Growth of real GDP 2) Growth of sectoral GDP
  Explanatory variable: a) Telecom investment / GDP b) Growth of mainlines (proxy for growth of telecom investment)
  Effect*: MP = 12.96–15.06; elasticity = 0.918

Crandall (1997)
  Data: 48 U.S. states 1989–1994
  Dependent variable(s): 1) Non-farm employment 2) Financial / real estate employment 3) Growth of gross output
  Explanatory variable: a) Share of fibre in lines b) Lines per population c) Share of ISDN in lines
  Effect*: a) positive on 2) b) not significant c) not significant

Nadiri and Nandi (1998)
  Data: 35 two-digit U.S. industries 1950–1991
  Dependent variable(s): 1) Production costs 2) Demand for inputs 3) Marginal benefit of highway capital
  Explanatory variable: Net capital stock of communications industry
  Effect*: Negative on 1); complementary to private capital; increases productivity

Canning (1999)
  Data: Annual trans-national 1960–1990
  Dependent variable(s): log(GDP/worker)
  Explanatory variable: log(Mainlines/worker)
  Effect*: Elasticity 0.17–0.47

Röller and Waverman (2001)
  Data: 21 OECD countries 1970–1990
  Dependent variable(s): log(GDP)
  Explanatory variable: a) Penetration rate b) Prices of telecom services c) Investment in telecom infrastructure
  Effect*: a) elasticity 0.154

Cieslick and Kaniewska (2002)
  Data: 49 Polish regions 1989–1998
  Dependent variable(s): log(output/worker)
  Explanatory variable: log(Density of telecom network)
  Effect*: Elasticity 0.14–0.257

Björkroth (2003)
  Data: Time-series, Finland 1960–1998
  Dependent variable(s): Growth of real GDP
  Explanatory variable: Share of investment of telecom operators in GDP
  Effect*: MP = 5.22 (10% level)

* MP denotes the marginal product of the relevant explanatory variable.
In their study of 40 U.S. metropolitan areas, Eberts and Fogarty (1987) witnessed the existence of causation running in two directions. Public capital investments affected private investments mainly in cities, which experienced most of their growth in the 1950s. In southern cities that grew faster after the 1950s, the causation ran from private to public capital investment. In his seminal contribution, Aschauer (1989) tried to solve the problem of causality by using lagged infrastructure investment as an instrument for contemporaneous investments in his regressions. He also split infrastructure investments into those judged ex ante to be important and those not important. Blomström, Lipsey and Zejan (1996) provide evidence to support two-way causation of private investment. This could suggest that the ‘acceleration principle’ might hold for telecommunications investments as well. Regarding telecommunications infrastructure, Madden and Savage (1998), Cieslick and Kaniewska (2002) and Björkroth (2003) have tested for direction of causation. Madden and Savage (1998) found that regional GDP preceded investment in telecommunications infrastructure, while there was no evidence for reverse causation. However, when they applied the number of mainlines and growth in the number of mainlines as proxies for telecommunications investments, there was strong two-way causation between these variables and the growth of regional and sectoral GDP respectively. Cieslick and Kaniewska (2002) report a strong causation running from teledensity to regional output. Results in Björkroth (2003) support the weak two-way causation between the share of telecommunications investments in GDP and the growth of real GDP. At a 10% level of significance, the causation from output to investment of Finnish telecommunications operators is ‘stronger’ than for the reverse effect.
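The precedence tests referred to above are Granger-type causality tests: a variable is said to precede another if its lags help predict it. A minimal sketch of such a test, with simulated series standing in for the actual data (so the output has no empirical meaning), might be:

import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(3)
tel_share = rng.normal(0.010, 0.002, size=32)     # TEL/Y, hypothetical 1970-2001
gdp_growth = 0.02 + 2.0 * np.roll(tel_share, 1) + rng.normal(0, 0.005, size=32)

# Test whether TEL/Y (second column) Granger-causes GDP growth (first column)
data = np.column_stack([gdp_growth, tel_share])[1:]   # drop the wrapped first obs
results = grangercausalitytests(data, maxlag=2)

Swapping the columns tests the reverse direction, which is how two-way causation of the kind reported by Madden and Savage (1998) or Björkroth (2003) is checked.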
Model of telecommunications investment and economic growth

In this section, we shall follow and modify the model of Ram (1986). The total output (Y) of a two-sector economy is taken to be the sum of the outputs of the private sector (Yp) and of government (Yg). The output of each sector is determined by the sectoral production functions, whose inputs are labour (L) and capital (K). In addition, private sector output enjoys some externalities from public sector output. By this we mean the use of public expenditure to increase human capital through spending on education and to ensure property rights. Moreover, including Yg as a separate argument in the production function of the private sector follows from the idea that public inputs are not a substitute for private inputs (Barro 1990). We may also make use of Kormendi's (1983) approach, which relies on the fact that the private sector rationally accounts for the effect of government fiscal policy, instead of the ‘standard’ approach, which uses a wealth concept that includes the stock of government debt.⁵ Kormendi also pointed out the possibility of government and private consumption being substitutes. If we want to motivate further the choice of government consumption as an explanatory variable, relying on the fact that output is determined by aggregate demand, the line of reasoning may go from increased government expenditure via increased private and total consumption to increasing private sector output. Government expenditure on R&D may also have an indirect effect on output growth. Public investments increase total investment expenditure and directly affect output, but public investment may also spur on private investment, and these externalities may increase the marginal product of private capital. Modifying the model in Ram (1986), the production functions can be written as:

Y_p = f_p(K_p, L_p, Y_g, HK_p)    (1)

Y_g = f_g(K_g, K_T, L_g, HK_g)    (2)
The total use of inputs is the sum of private and government sector inputs (L=Lp+Lg and K=Kp+Kg). In (1) and (2) we separate the stock of human capital as HK=HKp+HKg, but this separation is omitted in the empirical section, as this input is nearly impossible to measure separately for each sector. Furthermore we have separated telecommunications infrastructure (network) capital (KT) from other private capital, or Kp = Kp-T+ KT .6 Both sectors enjoy the use of telecommunications infrastructure as input in production; thus its impact on private sector output is two-dimensional. The choice of KT is motivated by the direct and indirect effects of telecommunications infrastructure on the production process. The direct effect arises from the aggregate demand and the indirect effect from its enhancement of the productivity of existing resources. Moreover, regarding these spillovers, I am tempted to quote Crandall (1997, p. 171): “To estimate the spillover effects of such investments, one would ideally estimate a production function whose arguments are labor, private non-telecommunications capital, telecommunications infrastructure capital and other public infrastructure. Given different levels of telecommunications investment across states, a pooled time- series cross sectional analysis based on state data would be ideal.”
Using (1) and (2), the change in total output can be simplified to the following total differential:
5
6
“Such a formulation implicitly assumes that the private sector is too myopic to account for any effects of government debt on future taxes, and ignores the benefits of government spending” (Kormendi 1983). In contrast to Ram (1986), we do not focus on the differences in the marginal productivity of labour and capital between private and public sectors.
386
Tom Björkroth
dY = ¦ i
∂Y p ∂K i
dK i + ¦ j
∂Y g ∂K j
dK j +
∂Y p ∂Y ∂Y dL + dY g + dHK ∂L ∂Y g ∂HK
(3)
where i = T, p−T and j = T, g. If we replace the differential with the discrete equivalent Δ, after manipulation of the production functions we end up with Eq. (4) for aggregate economic growth. Note that we let the total labour force (TLF) be a proxy for the stock of human capital:

ΔY/Y = α1 (I/Y) + α2 (TEL/Y) + α3 (ΔL/L) + α4 (PUB/Y) + α5 (ΔYg/Yg)(Yp/Y) + α6 (ΔTLF/TLF)    (4)
Equation (4) includes the following variables:

Y    = Real GDP
I    = Private investment expenditure (equal to ΔKp)
TEL  = Investments in telecommunications infrastructure (equal to ΔKT)
L    = Hours worked in the economy
PUB  = Public investments (equal to ΔKg)
Yg   = Public sector output
Yp   = Private sector output
TLF  = Total labour force
The coefficients in equation (4) are conveniently formulated as either marginal products of inputs or as elasticities as follows:
α1 = Marginal product of private capital, not including telecommunications networks
α2 = Marginal product of telecommunications infrastructure capital
α3 = Elasticity of total output with respect to hours worked
α4 = Marginal product of public capital
α5 = Elasticity of private sector output with respect to public output
α6 = Elasticity of total output with respect to increases in TLF
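As a sketch of how Eq. (4) is taken to the data, the following lines recover α1–α6 from a simulated pooled panel by plain pooled OLS; the actual estimation in Section 5 uses corrections that this illustration omits, and all numbers here are invented:

import numpy as np

rng = np.random.default_rng(1)
n = 3 * 32                                   # 3 countries, 32 annual observations
# Columns follow Eq. (4): I/Y, TEL/Y, dL/L, PUB/Y, (dYg/Yg)(Yp/Y), dTLF/TLF
X = rng.normal(0.0, 0.02, size=(n, 6)) + [0.20, 0.01, 0.0, 0.03, 0.0, 0.0]
true_alpha = np.array([0.30, 5.0, 0.70, 0.40, 0.50, 0.20])
g = X @ true_alpha + rng.normal(0.0, 0.01, size=n)   # dY/Y, growth of real GDP

Z = np.c_[np.ones(n), X]                     # add a constant
coef, *_ = np.linalg.lstsq(Z, g, rcond=None)
print(np.round(coef[1:], 2))                 # estimates of alpha_1 ... alpha_6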
The ratio of investment to GDP forms a basis for our model. In their study of robustness of factors explaining growth, Levine and Renelt (1992) found the correlation between share of investment in GDP and growth very resistant to alterations in the set of explanatory variables. In fact, no other variable displayed such robustness in their transnational analysis. They also found that including I/Y as an explanatory variable made the effects of variables representing exports and foreign trade appear fragile, which was due to the positive correlation between these variables. In addition, they found the correlation between several variables describing
government size and growth to be susceptible to alterations in the set of explanatory variables. Regarding the different categories of investment, both private and public investments are essential in the determination of output. We have separated these two in our model, being aware that, in a mixed economy, the concept of ‘public’ is somewhat flawed, since capital expenditure in public enterprises, for example, is classified as private investment, a point which is also noted by Barro and Sala-I-Martin (1995, p. 441). Thus the effect of ‘true’ public capital formation may be underestimated. The level of aggregation may also leave some important effects unobserved, since there is evidence in studies such as Easterly and Rebelo (1993) that, for example, different categories of public investments have different effects on economic growth.⁷ The use of a breakdown of investment into public and private components is supposed to be less problematic when using national data compared to using international samples. In the empirical section below, we have solved the problem of classification into ‘private’ and ‘public’ by relying on the classification made by the statistical authorities of the countries involved. The importance of human capital and its relation to physical capital, as determinants of output growth, is presented in Barro and Sala-I-Martin (1995, pp. 171–211). In this study, we rely on the approach of Röller and Waverman (2001) and use TLF as a proxy for the stock of human capital. Despite its convenience, this variable may be unable to take into account the improved skills and improved educational status of the labour force. Similarly to school enrolment statistics (Barro 1991), the total labour force may still serve well as a crude proxy for the stock of human capital. Moreover, Levine and Renelt (1992) show that, in the regressions, the school enrolment rate is also susceptible to the choice of informational sets.
Data and variables

The data used in this study consist of time series running from 1970 to 2001, obtained mainly from Statistics Finland, Statistiska Centralbyrån (Sweden) and Statistisk Sentralbyrå (Norway). The OECD Quarterly National Accounts and the ITU Yearbook of Statistics have also proved valuable sources of data. Fig. 1 below shows the development of real GDP as an index in the three economies; GDP in 1970 is indexed to 100 for each country in order to reveal the different paces of growth.
7 See also Aschauer (1989) and Ford and Poret (1991).
Fig. 1. Development of real GDP in Finland, Sweden and Norway, 1970–1999 (index: GDP in 1970 = 100)
It is evident that Norway experienced the fastest growth during the period of study, closely followed by Finland until the onset of the recession in 1991. Average GDP growth in Sweden was quite moderate during the 1970s and 1980s, and even slower in the 1990s. The average annual growth rates for each country in each decade, and over the entire sample period, can be found in Table 3 below.

Table 3. Average annual growth rates of GDP (%)

Country     1970s   1980s   1990s   1970–2001
Finland      3.4     3.7     1.6       2.9
Sweden       2.0     2.2     1.9       2.0
Norway       4.7     2.7     3.4       3.4
The relative change in hours actually worked fluctuated quite widely during the sample period and may be the main driver of the economic fluctuations; see Fig. 2. What is clear from this figure is the onset of decline in the late 1980s and early 1990s. Hours worked seem to decline first in Norway, then in Finland and lastly in Sweden. A rather similar pattern can be observed in the late 1990s. The severe recession in Finland and Sweden in the early 1990s is clearly visible.
Fig. 2. Relative change in total hours worked (dL/L) by country, 1971–2001
Size of government
In our model, we use two variables associated with the size of the public sector: first, public investment expenditure and, second, the (growth of) public sector output. The latter enters the variable measuring the externality effect of the size of the public sector. For these variables, the use of time-series data has the advantage of revealing developments over time, instead of providing mere 'snapshots' from specific years; we thereby lessen the risk of employing unrepresentative observations as independent or dependent variables. For example, Kiander and Lönnqvist (2002) note that, for Finland and Sweden, government consumption expenditure is generally not correlated with growth, but that if one includes the 1990s in the analysis, the correlation becomes negative. In this case, the question of causation is crucial for correct inference. Of course, public investment and public sector output do not by themselves fully reveal the role of the public sector in output determination; tax rates and public sector debt or expenditure, with their effects on education and growth, are serious alternatives as explanatory variables.

Total labour force – a proxy for human capital
Röller and Waverman (2001) use total labour force as a proxy for the stock of human capital. It is questionable whether this variable is able to capture the results
of the improved educational status of the labour force or the skills acquired by the active labour force. It should be beyond doubt that the stock of human capital improved in all three countries during the sample period. If the growth of human capital is a concave function of time, and if the stock of human capital deteriorates during severe recessions, the change in the total labour force may serve as a good proxy; the growth rates of TLF in Table 4 are consistent with this. That is to say, the 1970s are associated with faster growth in human capital than the 1980s, and the slump in the early 1990s could then partly be held responsible for a deterioration of the stock of human capital in Sweden, which appears to have offset the growth in Finland and Norway during that decade.

Table 4. Average annual growth rates of total labour force (%)

Country     1970s   1980s   1990s   1970–2001
Finland      1.32    0.58    0.02      0.62
Sweden       1.21    0.59   -0.43      0.46
Norway       0.69    0.70    0.34      0.56
However, the TLF series is not necessarily in accordance with the data on 'average years of schooling of the total population' used in Moen (2001). This relates to the discussion of human capital accumulation in Aghion and Howitt (1999): education is only one element of it, 'learning by doing' (LBD) being the other, and economic fluctuations, together with rising unemployment, should clearly affect the LBD component.
Estimation and results

In the estimation stage, we use the data described above. The data set consists of annual observations from 1971 to 2001 for all three countries, so that the total number of observations per variable is 93. The (pooled) variables have the descriptive statistics reported in Table 5; in the columns for the minimum and maximum values, the country and year are indicated for each variable.
Table 5. Descriptive statistics *)

Variable      N     Mean      St. dev.   Min (Country/Year)    Max (Country/Year)
∆y/yt−1       93    0.0280    0.0233     −0.0710 (Fin/1991)    0.0760 (Fin/1972)
It/Yt−1       93    0.2001    0.0447      0.1164 (Swe/1994)    0.3087 (Nor/1987)
TELt/Yt−1     93    0.0064    0.0019      0.0023 (Nor/1993)    0.0120 (Swe/1985)
∆L/L          93   −0.00016   0.0180     −0.0703 (Fin/1991)    0.0405 (Swe/1999)
PUBt/Yt−1     93    0.0324    0.0073      0.0200 (Swe/1987)    0.0492 (Nor/1972)
Extg          93    0.0212    0.0216     −0.0510 (Fin/1993)    0.1190 (Fin/1976)
∆TLF/TLF      93    0.0053    0.0104     −0.0275 (Swe/1993)    0.0519 (Fin/1977)

*) Fin = Finland, Swe = Sweden, Nor = Norway
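The pooled summary statistics of Table 5 (and the correlation matrix of Table 6 below) can be generated in a few lines. A minimal sketch with pandas, assuming the three national series sit in per-country files with hypothetical names:

    import pandas as pd

    # Stack the three national series (1971-2001, 31 observations each)
    # into one pooled sample of 93 observations per variable; the CSV
    # file names and layout are hypothetical stand-ins.
    frames = {c: pd.read_csv(f"{c.lower()}.csv", index_col="year")
              for c in ("FIN", "SWE", "NOR")}
    pooled = pd.concat(frames, names=["country", "year"])

    # Count, mean and standard deviation per variable, as in Table 5 ...
    print(pooled.agg(["count", "mean", "std"]).T)
    # ... plus the (country, year) location of each variable's extremes.
    print(pooled.idxmin())
    print(pooled.idxmax())
    # The correlation matrix of Table 6 follows directly:
    print(pooled.corr().round(3))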
Some of the explanatory variables are also likely to be highly correlated, and this should be taken into account if the correlation results from some functional relationship. For example, the ratio of private investment to GDP shows a high correlation with both PUB/Y (0.677) and Extg (0.573), and the latter two variables also correlate (0.554). A high correlation that may result from a functional dependency is that between the growth of hours worked and the growth of the total labour force (0.406). This may be a consequence of the aim of stabilising unemployment figures and, indirectly, the ratio between hours of labour input and the size of the total labour force. The correlation matrix is shown in Table 6 below.

Table 6. Correlation matrix

Variable     ∆y/y    I/Y     TEL/Y   ∆L/L    PUB/Y   Extg    ∆TLF/TLF
∆y/y         1.000
I/Y          0.181   1.000
TEL/Y        0.187   0.323   1.000
∆L/L         0.600  −0.077   0.095   1.000
PUB/Y        0.297   0.677   0.226  −0.071   1.000
Extg         0.227   0.573   0.245  −0.067   0.554   1.000
∆TLF/TLF     0.303   0.356   0.356   0.406   0.088   0.126   1.000
High correlations between independent variables need to be dealt with in order to avoid the problems arising from multicollinearity. One way to deal with this 'column dependency' is Gram-Schmidt orthogonalisation. Estimating the relationship between dL/L and dTLF/TLF by ordinary least squares gives the following estimates (t-statistics −2.02 for the constant and 4.23 for the slope):

(Lt − Lt−1)/Lt−1 = −0.004 + 0.700 · (TLFt − TLFt−1)/TLFt−1 + γ̂t        (5)

Rearranging gives:

γ̂t = (Lt − Lt−1)/Lt−1 + 0.004 − 0.700 · (TLFt − TLFt−1)/TLFt−1        (6)

γ̂t then captures the effect of hours actually worked once the growth in TLF is controlled for. Adding γ̂t and dTLF/TLF to the equation to be estimated then yields more appropriate estimates for these variables. Below, we estimate equation (4) first without and then with Gram-Schmidt orthogonalisation as described in equations (5) and (6).

In a pooled time-series model like this, fixed effects with a dummy-variable approach seem natural. However, when testing with equation 16.2.25 in Judge et al. (1982, p. 484) whether the coefficients of the country dummies were equal to zero, the F-test (F = 0.1569) showed that the null hypothesis of equal intercepts could not be rejected. We also re-parameterised the model by omitting one of the country dummies (Norway); the F-test (F = −0.0830) again suggested that the null hypothesis of equal intercepts could not be rejected. We therefore continued the analysis without country-specific dummies. Estimating the model with a joint intercept gave the results reported in Table 7.

To avoid inferences based on spurious relationships, the variables have to be co-integrated or stationary. The co-integration tests showed that I/Y and PUB/Y had to be differenced in order to achieve stationarity. We did this separately for each cross-section and then pooled the series again, after which we performed an OLS estimation using the stationary variables, still omitting the country dummies.
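A minimal sketch of the orthogonalisation in equations (5) and (6): regress ∆L/L on ∆TLF/TLF and keep the residual γ̂t as the regressor in place of ∆L/L. The series below are simulated stand-ins with roughly the moments reported in Table 5, not the actual data.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    dtlf = rng.normal(0.005, 0.010, 93)                       # stand-in for dTLF/TLF
    dll = -0.004 + 0.700 * dtlf + rng.normal(0.0, 0.015, 93)  # stand-in for dL/L

    aux = sm.OLS(dll, sm.add_constant(dtlf)).fit()            # equation (5)
    gamma_hat = aux.resid                                     # equation (6)

    # By construction gamma_hat is orthogonal to dTLF/TLF, so both can
    # enter the growth regression without the collinearity problem.
    print(round(np.corrcoef(gamma_hat, dtlf)[0, 1], 6))       # ~ 0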
Table 7. The results of the estimations

Variable        (1) Estimate    s.e.       (2) Estimate    s.e.
Constant          −0.005        0.010        −0.006        0.010
It/Yt−1           −0.049        0.064        −0.049        0.064
TELt/Yt−1          0.683        0.931         0.683        0.931
∆L/L               0.775***     0.139         –            –
γ̂t                 –            –             0.775***     0.139
PUBt/Yt−1          1.014**      0.325         1.014**      0.325
Extg               0.136        0.134         0.136        0.134
∆TLF/TLF           0.105        0.224         0.647**      0.222
F(k−1, N−k)       46.83                      46.83
R² adj.            0.454                      0.454
DW [rho]           1.59 [0.196]               1.59 [0.202]
Jarque–Bera        0.504                      0.504

a) s.e. = standard error of the estimate. b) Specifications (1) and (2) were estimated using a heteroskedasticity-consistent covariance matrix. c) */**/*** indicate significant parameter estimates at the 10% / 5% / 1% levels respectively.
These results are shown as specification (3) in the table below. This model seemed to suffer from autocorrelation in the errors, which may bias the estimates. In specification (4) we therefore estimated the same model using the pooling method of Kmenta (1986). This pooling technique applies a set of assumptions to the error covariance matrix. First, it allows the error variance to vary across cross-sections, that is to say, it allows for heteroskedasticity. Second, it assumes that errors between cross-sections are uncorrelated, thus implying cross-sectional independence. Finally, the model allows the error term to be an autoregressive function
for each cross-section. This ultimately yields a cross-sectionally heteroskedastic and timewise autoregressive model.8 The results of this estimation are presented as specification (4). The results in specifications (3) and (4) do not give particularly strong support to the hypothesis that the investments of telecommunications operators are a strong force behind economic growth in these relatively high-income countries. Results from these specifications also suggest that the positive effect of public investment found in specifications (1) and (2) may be a result of spurious correlation, while labour input and human capital have the expected positive parameter estimates. The externality effect of government size is positive in all specifications, which is in contrast with the results in Björkroth (2003), where only Finland was considered. Bringing Sweden and Norway into the analysis also blurred the arguably strong relationship between the share of private investment in GDP and economic growth.
8 The estimation is performed in four major steps: 1. A standard OLS estimation is run and residuals are obtained for each cross-section. 2. An autoregressive model for the residuals is estimated for each cross-section. 3. The AR(1) parameters from step 2 are used to transform the observations (Kmenta 1986), and OLS is applied to the transformed model. 4. Generalised least squares estimates are obtained using the diagonal error covariance matrix; if cross-sectional correlation is assumed, the full error covariance matrix is employed.
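A minimal numpy sketch of the four-step procedure in footnote 8, assuming y, X and a vector of country labels for the pooled sample; the function name is hypothetical and this is an illustrative rendering of the estimator, not the exact code behind specification (4).

    import numpy as np

    def kmenta_fgls(y, X, groups):
        # Step 1: pooled OLS; collect residuals for each cross-section.
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        e = y - X @ beta
        ys, Xs, gs = [], [], []
        for g in np.unique(groups):
            m = groups == g
            ei, yi, Xi = e[m], y[m], X[m]
            # Step 2: AR(1) parameter of this cross-section's residuals.
            rho = (ei[:-1] @ ei[1:]) / (ei[:-1] @ ei[:-1])
            # Step 3: quasi-difference the observations (Cochrane-Orcutt).
            ys.append(yi[1:] - rho * yi[:-1])
            Xs.append(Xi[1:] - rho * Xi[:-1])
            gs.append(np.full(len(yi) - 1, g))
        yt, Xt, gt = np.concatenate(ys), np.vstack(Xs), np.concatenate(gs)
        # Step 4: OLS on the transformed data, then GLS with a diagonal
        # error covariance built from each cross-section's residual variance.
        u = yt - Xt @ np.linalg.lstsq(Xt, yt, rcond=None)[0]
        w = np.ones_like(yt)
        for g in np.unique(gt):
            m = gt == g
            w[m] = 1.0 / u[m].var()
        sw = np.sqrt(w)
        return np.linalg.lstsq(Xt * sw[:, None], yt * sw, rcond=None)[0]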
Table 8. The results of the estimations

Variable        (3) Estimate    s.e.       (4) Kmenta Estimate    s.e.
Constant          −0.014**      0.006        −0.016*              0.007
It/Yt−1            0.109        0.098         0.049               0.076
TELt/Yt−1          0.917        0.990         0.918               1.107
γ̂t                 0.741***     0.154         0.626***            0.126
PUBt/Yt−1         −0.390        0.566        −0.135               0.452
Extg               0.269**      0.124         0.247**             0.101
∆L/L               –            –             –                   –
∆TLF/TLF           0.516**      0.221         0.461**             0.203
F(k−1, N−k)       40.58                       –
R² adj.            0.407                      –
Buse R²            –                          0.297
DW [rho]           1.57 [0.191]               1.70 [0.11]
Jarque–Bera        0.701                      –

a) s.e. = standard error of the estimate. b) Specification (3) was estimated using a heteroskedasticity-consistent covariance matrix. c) */**/*** indicate significant parameter estimates at the 10% / 5% / 1% levels respectively.
Conclusions

This study aimed at formulating and estimating a model with which to determine the effect of telecommunications networks on economic performance in Fenno-Scandinavia (Finland, Sweden and Norway).
The estimations were based on pooled time-series data consisting of annual observations on the relevant variables between 1970 and 2001. The answer to our first research question – whether we could formulate a general production function model for the Fenno-Scandinavian economies and, using time-series data, isolate the effects of investments in telecommunications networks – is twofold. We were able to formulate such a model, but whether we have managed to isolate the effect in question is open to discussion. Our results indicate that the marginal product of operators' investment in infrastructure capital does not differ significantly from zero; thus, we have managed to isolate it only if this is the true effect. This result, however, by no means indicates that investments in telecommunications as a whole are characterised by low productivity. In this study, we have focused only on telecommunications infrastructure, which is provided by the supply side of these services. Much private investment now includes spending on items such as telecommunications equipment, and this effect is not isolated here but is left as a future avenue of research. Our follow-up research question ("Do results for Fenno-Scandinavia's relatively high-income economies differ from those of previous studies?") is answered in the affirmative. Regarding the dynamics and the relative importance of telecommunications investment to economic growth, a pessimistic interpretation of our results is that the effects of investment in telecommunications infrastructure on growth are relatively small in high-income economies in comparison with lower-income economies.
References

Andreasson K, Helin V (eds) (1999) Suomen Vuosisata-Trendit. Central Statistical Office of Finland, Helsinki
Aschauer DA (1989) Is Public Expenditure Productive? Journal of Monetary Economics 23: 177–200
Barro RJ (1990) Government Spending in a Simple Model of Endogenous Growth. Journal of Political Economy 98, part 2
Barro RJ (1991) Economic Growth in a Cross Section of Countries. Quarterly Journal of Economics 106: 407–443
Barro RJ, Sala-I-Martin X (1995) Economic Growth. McGraw-Hill, New York
Berndt ER, Hansson B (1992) Measuring the Contribution of Public Infrastructure Capital in Sweden. Scandinavian Journal of Economics 94 suppl: 151–168
Björkroth T (2003) Engine or Wheels of our Prosperity? Infrastructure and Economic Growth and Effects of Liberalisation of the Finnish Telecommunications Market. PhD thesis, Åbo Akademi Department of Economics and Statistics
Björkroth T, Kjellman A (2000) Public Capital and Private Sector Productivity – A Finnish Perspective. Finnish Economic Papers 1/2000: 28–44
Blomström M, Lipsey RE, Zejan M (1996) Is Fixed Investment the Key to Economic Growth? Quarterly Journal of Economics 111: 269–276
Canning D (1999) Telecommunications and Aggregate Output. CAER II Discussion Paper 56, December, Harvard Institute for International Development
Central Statistical Office of Finland (1981) Revised National Accounts for 1960–1978. Statistical Surveys 66, Helsinki
Cieslick A, Kaniewska M (2002) Telecommunications Infrastructure and Regional Economic Development: The Case of Poland. Paper presented at the 13th regional conference of the International Telecommunications Society, Madrid
Costa J da Silva, Ellson RW, Martin RC (1987) Public Capital, Regional Output, and Development. Journal of Regional Science 27: 419–437
Crandall RW (1997) Are Telecommunications Facilities 'Infrastructure'? If They Are, So What? Regional Science and Urban Economics 27: 161–179
Crihfield JB, Panggabbean MPH (1995) Is Public Infrastructure Productive? A Metropolitan Perspective Using New Capital Stock Estimates. Regional Science and Urban Economics 25: 607–630
DeLong JB, Summers LH (1991) Equipment Investment and Economic Growth. Quarterly Journal of Economics, May: 445–502
Draper NR, Smith H (1998) Applied Regression Analysis (3rd edn). John Wiley & Sons, New York
Duffy-Deno KT, Eberts RW (1989) Public Infrastructure and Regional Economic Development: A Simultaneous Approach. Working Paper No. 8909, Federal Reserve Bank of Cleveland
Easterly W, Rebelo S (1993) Fiscal Policy and Economic Growth: An Empirical Investigation. Journal of Monetary Economics 32: 417–458
Eberts RW (1986) Estimating the Contribution of Urban Public Infrastructure to Regional Economic Growth. Working Paper No. 8610, Federal Reserve Bank of Cleveland, December
Eberts RW (1990) Public Infrastructure and Regional Economic Development. Economic Review, Federal Reserve Bank of Cleveland 26: 15–27
Eberts RW, Fogarty MS (1987) Estimating the Relationship between Local Public and Private Investment. Working Paper No. 8703, Federal Reserve Bank of Cleveland
Eisner R (1991) Infrastructure and Regional Economic Performance. New England Economic Review, Federal Reserve Bank of Boston, September/October: 47–58
Ford R, Poret P (1991) Infrastructure and Private-Sector Productivity. OECD Economic Studies 17: 63–89
Harvey AC (1981) The Econometric Analysis of Time Series. Philip Allan, London
Holtz-Eakin D, Schwartz AE (1995) Infrastructure in a Structural Model of Economic Growth. Regional Science and Urban Economics 25: 131–151
Hsiao C (1981) Autoregressive Modelling and Money Income Causality Detection. Journal of Monetary Economics 7: 85–106
Judge GG, Hill RC, Griffiths WE, Lütkepohl H, Lee TC (1982) Introduction to the Theory and Practice of Econometrics. John Wiley & Sons, New York
Kiander J, Lönnqvist H (2002) Hyvinvointivaltio, sosiaalipolitiikka ja taloudellinen kasvu. Sosiaali- ja terveysministeriön julkaisuja 2002:20, Helsinki
Kmenta J (1986) Elements of Econometrics (2nd edn). Macmillan
Kormendi RC (1983) Government Debt, Government Spending and Private Sector Behavior. American Economic Review 73: 994–1010
Kormendi R, Meguire P (1985) Macroeconomic Determinants of Growth: Cross-Country Evidence. Journal of Monetary Economics 16: 141–163
Levine R, Renelt D (1992) A Sensitivity Analysis of Cross-Country Growth Regressions. American Economic Review 82: 942–963
Madden G, Savage SJ (1998) CEE Telecommunications Investment and Economic Growth. Information Economics and Policy 10: 173–195
Mamatzakis EC (1997) The Role of Public Sector Infrastructure on Private Sector Productivity in a Long Run Perspective. Mimeo, Queen Mary and Westfield College
Mankiw NG, Romer D, Weil DN (1992) A Contribution to the Empirics of Economic Growth. Quarterly Journal of Economics, May: 407–427
Mera K (1973) Regional Production Functions and Social Overhead Capital: An Analysis of the Japanese Case. Regional and Urban Economics 3: 157–185
Moen OC (2001) Nordic Economic Growth in Light of New Theory: Overoptimism about R&D and Human Capital. Statistics Norway Research Department Documents 2001/10
Moisala UE, Rahko K, Turpeinen O (1977) Puhelin ja puhelinlaitokset Suomessa 1877–1977. Edited by Jutikkala E. Puhelinlaitosten Liitto r.y., Helsinki
Munnell AH (1990a) Why Has Productivity Declined? Productivity and Public Investment. New England Economic Review, Federal Reserve Bank of Boston, January/February: 3–22
Munnell AH (1990b) How Does Public Infrastructure Affect Regional Economic Performance? New England Economic Review, Federal Reserve Bank of Boston, September/October: 11–32
Munnell AH (1992) Infrastructure Investment and Economic Growth. Journal of Economic Perspectives 6: 189–198
Nadiri MI, Nandi B (1992) Communications Infrastructure and Economic Growth in the Context of the U.S. Economy. Mimeo, New York University and NBER, AT&T Laboratories
Otto GD, Voss GM (1998) Is Public Capital Provision Efficient? Journal of Monetary Economics 42: 47–66
Pindyck RS, Rubinfeld DL (1991) Econometric Models and Economic Forecasts (3rd edn). McGraw-Hill
Ram R (1986) Government Size and Economic Growth: A New Framework and Some Evidence from Cross-Section and Time-Series Data. American Economic Review 76: 191–203
Röller LH, Waverman L (2001) Telecommunications Infrastructure and Economic Development: A Simultaneous Approach. American Economic Review 91: 909–923
Tatom JA (1991) Public Capital and Private Sector Performance. Federal Reserve Bank of St. Louis Review, May/June: 3–15
Appendix 1: Plots of correlations between variables
[Scatter plots of the pooled observations for FIN, SWE and NOR: dy/y against TEL/Y; dy/y against PUB/Y (public investments); dy/y against Extg; dy/y against dL/L; dL/L against dTLF/TLF; and dy/y against dTLF/TLF.]
Appendix 2: On Gram-Schmidt orthogonalisation

The results in specification (1) need to be interpreted with caution because of our choice of a proxy for human capital, which correlates with the growth in hours actually worked. The correlation coefficient of 0.406 between the growth of the total labour force and the growth in hours actually worked may bias the coefficients concerned. This problem of high correlation (collinearity) can be overcome by relying on the principle of Gram-Schmidt orthogonalisation. Assume that the high correlation between the two independent variables can be described by the following relationship:

∆L/L = α + β(∆TLF/TLF) + γt        (A.1)

If we want to use both of these variables in the regression, we have to control for the effect of ∆TLF/TLF on ∆L/L. This can be done by using the residual,

γt = ∆L/L − f(∆TLF/TLF)        (A.2)

as a more appropriate independent variable in place of ∆L/L. In Draper and Smith (1998, p. 383) a relation like (A.1) is called column dependence.9 They denote the general 'column transformation' as

ZiT = Zi − Z(Z'Z)−1Z'Zi        (A.3)

where:
ZiT = the residual vector of Zi after Zi has been regressed on the columns of Z;
Z   = the matrix of column vectors already transformed (in our case, with two variables, Z refers to the vector ∆TLF/TLF);
Zi  = the next column vector of X, the matrix in the regression problem, to be transformed (in our case, ∆L/L).

In terms of Eq. (A.1), ZiT corresponds to γt, (Z'Z)−1Z'Zi refers to the least squares estimate of β, and Z represents ∆TLF/TLF.
9 See especially Draper and Smith (1998), chapter 16, 'Ill Conditioning in Regression Data', pp. 369–386.
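A minimal numpy rendering of the column transformation (A.3); the two columns here are simulated stand-ins for ∆TLF/TLF (Z) and ∆L/L (Zi), not the actual series.

    import numpy as np

    rng = np.random.default_rng(1)
    Z = rng.normal(size=(93, 1))                  # already-transformed column(s)
    Zi = 0.7 * Z[:, 0] + rng.normal(size=93)      # next column to transform

    # ZiT = Zi - Z (Z'Z)^{-1} Z' Zi, i.e. the residual of Zi after
    # projecting it on the columns of Z.
    ZiT = Zi - Z @ np.linalg.solve(Z.T @ Z, Z.T @ Zi)
    print(float(Z[:, 0] @ ZiT))                   # ~ 0: orthogonal by construction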
European Union Mobile Telecommunications in the Context of Enlargement

Jason Whalley1, Peter Curwen2
University of Strathclyde, Scotland, UK

Abstract

This chapter examines the implications for mobile telecommunication companies of the 2004 expansion of the European Union. Licence ownership, market structure and concentration are analysed, and the strategic options for those operators with a presence in the accession countries are discussed.

Introduction

On 1 May 2004, the European Union (EU) witnessed its single largest expansion when ten countries joined. The accession of these ten countries – Cyprus (South), Czech Republic, Estonia, Hungary, Latvia, Lithuania, Malta, Poland, Slovakia and Slovenia – has irrevocably changed the EU.3 In terms of sheer geographic size, for example, the surface area of the EU increased by more than 700,000 sq km, allowing Berlin to claim that it is no longer located on the edge of Europe but at its heart, while another 74 million people were added to its population.

Prior to the discussions on accession, there was a clear, albeit diminishing, divide between Eastern and Western Europe. While many industrial companies from the West had crossed into Eastern Europe, they had by no stretch of the imagination taken it over, despite Eastern Europe's reputation for shoddy products. This was certainly the case where telecommunications was concerned, an industry traditionally treated as a 'national champion' and hence one where governments were somewhat ambivalent about stake-building by foreigners. On the one hand, they were less than keen on ceding control over the industry to foreigners, while on the other their incumbent operators badly needed new investment which was not available domestically. Those countries negotiating for accession were also well aware that they would be bound by the rules of the EU, which would restrict their ability to keep out 'foreign' companies that originated elsewhere in the EU (although some existing EU member states were quite practised at doing so). Equally, companies that had stayed out of Eastern Europe on principle would now see their way clear to cross into post-accession member states.

The purpose of this paper is to explore two particular issues in the context of the mobile telecommunications market: firstly, whether, as a consequence of EU expansion, the accession countries will indeed be the recipients of inward investment from those companies that have, until now, largely ignored them; and secondly, whether those companies already active in the accession countries will seek to reinforce their existing positions. To this end, the paper is structured as follows. In the main section that follows, the ownership of mobile communication licences across the enlarged EU will be described. In addition, the geographical footprint of operators will be established, with a distinction being made between those operators that have invested in the accession countries and those that have not. In the second main section the focus shifts onto the accession countries, with the analysis being driven by the issues outlined above. Conclusions will be drawn in the final section.

1 E-mail: [email protected]
2 E-mail: [email protected]
3 For a discussion of the challenges of EU enlargement see, for example, Cottrell (2003) or Rachman (2001).
Mobile communication licence ownership across the enlarged EU

At the heart of the analysis of the implications of EU expansion for the strategies of mobile communications companies is Table 1. This table depicts second-generation (2G) – known in Europe as the Global System for Mobile (GSM) – and third-generation (3G) – known as the Universal Mobile Telecommunications System (UMTS) – mobile licence ownership across the twenty-five countries that are now member states of the EU. In essence, 2G is a digital technology whose main purpose is to carry voice telephony while also accommodating low-speed data transfer, as exemplified by the short message service (SMS), while 3G is capable of the much higher data-transfer speeds suitable for large data files and still and video photography. UMTS requires the licensing of new spectrum, but there is also an intermediate technology, known as the General Packet Radio Service (GPRS), which can operate at higher speeds than GSM while using the same spectrum.

Table 1 builds on Whalley and Curwen (2003) in four ways. Firstly, the table differentiates between the two bandwidths used for what is generically called GSM, namely GSM 900 and GSM 1800 (PCNs). Secondly, the table identifies when each mobile service was launched. Thirdly, the table details the number of subscribers that each company had at the end of December 2003. Finally, the table also identifies the fixed-wire incumbent operator (the PTO) for each country.
Table 1. European Union mobile licence ownership, 31 December 2003

[For each of the twenty-five member states, the table lists the fixed-wire incumbent (PTO) and, for GSM (900 MHz band), PCNs (1800 MHz band) and UMTS in turn, each licensed operator together with the date its service was first launched and its number of subscribers at the end of December 2003.]

1 The entries consist of the name of the operator, the date when its service was first launched and the number of subscribers at the end of December 2003. Where an operator provides both GSM (900 MHz band) and PCNs (1800 MHz band), the subscriber data are generally provided for both services together in the GSM column. Subscriber data often differ depending upon the source, but such differences are not statistically significant in the context of EU countries. There is, however, some controversy over the counting of 'inactive' customers. For example, Vodafone and Orange in the UK delete customers who have been inactive (making, say, no outgoing calls and receiving fewer than four incoming calls per month) for three months, whereas some operators only do so after a year of inactivity and some not at all.
2 The term 'launch' in the context of UMTS can mean many things, but normally refers to the launch of a service for corporate customers via data cards inserted in laptops. A consumer service via handsets usually follows months later. The subscriber numbers are for fully launched services, and are unlikely to be wholly accurate.
Source: Details obtained from regulators' websites, company websites and media and Internet websites including http://news.ft.com, www.cellular-news.com, www.baskerville.telecoms.com, www.gsm.org, www.mobileisgood.com and www.totaltele.com.
Extending Whalley and Curwen (2003) is advantageous in several ways. By detailing the number of subscribers that a company has in each country, the table begins to differentiate between a simple presence in a country, where the company is not a significant player, and a presence where the company is actually (one of) the largest in the market in terms of the number of subscribers. By combining the service launch date and the number of subscribers, the table also provides an impression of how fast the market is growing and how successful the mobile operator has been in gaining subscribers. Since the table identifies the fixed-wire incumbent for each country as well as the mobile operators, it is also possible to investigate whether or not the incumbent owns the largest mobile operator in each country. Such common ownership is important, as it will contribute, to a greater or lesser extent, to the competitiveness and openness of the mobile market.

Using Table 1 as our starting point, it is possible to make a series of preliminary observations about the mobile market of the enlarged EU in general, and the mobile markets of the accession countries in particular, as of 31 December 2003. Firstly, 2G mobile communication licences had been issued in all member states. Although eighty licences had been issued, the number of licences issued in each member state varied between two and five. Cyprus (South), Luxembourg, Malta and Slovakia had issued only two 2G licences, while the Netherlands uniquely had issued five. The most common number of 2G licences to be issued was three; thirteen member states had issued three licences, compared to seven that had issued four.

In contrast, not all EU member states had issued 3G licences. At the time of writing (in this case 30 June 2004), twenty-two4 of the twenty-five EU member states had issued 3G licences. Perhaps surprisingly, more 3G licences than 2G licences had been put on offer, but because not all of the licences on offer had actually been awarded, the number of operational 3G licences was fewer than the number of operational 2G licences. There were eighty operational 2G licences, but of the eighty-seven5 3G licences on offer only seventy-eight had been taken up.6 This was somewhat surprising, as many governments had indicated their desire to use the 3G licensing process as a way to increase the number of companies, and therefore the amount of competition, in the market.

Only a minority of member states witnessed the launch of 3G services prior to the end of 2003, as shown in Table 1. By that point, only eleven mobile operators across the EU had begun to offer 3G services, and only seven were willing or able to announce the number of subscribers (see Table 1, footnote 2), although by June 2004 the number of (so-called) launches involved thirty-one operators in fourteen member states.

4 The situation is a little cloudy. Hungary and Lithuania had definitely not awarded 3G licences, while Cyprus (South) had guaranteed a licence to the second 2G licensee, Investcom, provided it launched within 10 years from 1 May 2004.
5 Including Cyprus (South) but excluding Malta.
6 In practice, one of the licences was withheld for reasons of non-payment in Slovakia, one was revoked in Portugal and one was returned to the regulator in Germany. To make matters even muddier, one licence was sold on in Sweden but the sale fell foul of the regulator, and one licence in Germany is in abeyance and will probably be revoked.
More than half of the 646,400 3G subscribers on 31 December 2003 were to be found in Italy, with the UK accounting for another 215,000. In both countries, the operator concerned traded under the same brand, namely '3', although ownership of the brand was not identical in every country where it had launched. It was notable that '3', or perhaps more accurately its main owner, Hutchison Whampoa, accounted for five of the eleven launches during 2003, a phenomenon clearly related to its lack of 2G licences.
Market structure

Across the enlarged EU, the incumbent fixed-line operator owned the largest mobile operator in nineteen member states: Austria, Belgium, Cyprus (South), the Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Italy, Latvia, Luxembourg, the Netherlands, Portugal, Slovenia, Spain and Sweden. Phrased more informatively: in just two of the fifteen 'old' member states of the EU did the incumbent operator not own the largest mobile operator. The exceptions were Ireland and the UK. In the case of Ireland, Eircom, the incumbent fixed operator, divested its mobile subsidiary, Eircell, in May 2001, and Vodafone subsequently acquired Eircell for €4.5bn in December 2001. BT also divested its mobile arm, mmO2. In November 2001, BT spun off mmO2 in order to ease the financial problems it was facing in the aftermath of acquiring 3G licences and buying out its partners in its British, Irish, Dutch and German mobile businesses. Moreover, mmO2 was not the largest mobile operator in the UK and had not been for many years; this accolade had alternated between Orange, a subsidiary of France Télécom, and Vodafone. As of December 2003, all four GSM network operators had at least 13 million subscribers, and only 897,000 subscribers separated the largest company, Vodafone, from the smallest, mmO2. As Table 1 shows, no other member state had anything like such equality between so many operators.

This left four member states, all accession countries, where the incumbent fixed operator did not own the largest mobile operator. In all of these – Lithuania, Malta, Poland and Slovakia – the largest mobile operator was partially owned by foreign investors. In Lithuania, the incumbent fixed operator did not own a stake in any mobile operator, while in the other three countries the incumbent owned a stake in the second-largest mobile operator.

Related to the above is the observation that those mobile operators with multiple licences across the EU were usually the second- or third-largest operators in the market. By combining the subscriber information contained in Table 1 with Whalley and Curwen (2003), which identified multiple licence ownership across Europe, it is possible to determine the market position of operators in EU member states. With two exceptions – Tele2 and mmO2 – each company identified below was the largest operator in its home market. Tele2 was the second-largest operator in Sweden after TeliaSonera, while mmO2 was the smallest of the four second-generation network operators in the UK. Tele2 was also exceptional
in another way: it was the only company identified in Table 2 that was not the largest operator in any of the markets where it operated.

If we focus on those mobile operators that were the largest operators in a foreign country, then a common trait was that the markets where they were the largest were comparatively small. For example, TeliaSonera was the largest mobile operator in the three Baltic States, but these were among the smallest of all the EU markets. Orange (in Luxembourg) and Vodafone were also the largest operators in relatively small markets. Vodafone (along with TDC until March 2004) had a (minority) stake in the largest operator in the modestly sized Belgian market and wholly-owned operations in Ireland and Malta. Deutsche Telekom was the largest mobile operator in Hungary and Poland, but whereas the Hungarian market was relatively modest in size, with 7.2 million subscribers, the Polish market, with more than 17 million subscribers, was the sixth-largest in the EU. In this respect, therefore, Deutsche Telekom is unique among the mobile operators identified in Table 2, and is all the more remarkable when the new-entrant status of PTC, Deutsche Telekom's Polish business, is taken into account.

Table 2. Market position, 31 December 2003

[For each of the twenty-five member states, the table records the market position (1st to 5th, by 2G subscribers) held, where present, by each of Deutsche Telekom, Orange, mmO2, Telenor1, Tele22, TeliaSonera, TDC and Vodafone3.]

1 Telenor is the largest mobile operator in its home market of Norway, a non-EU member state.
2 Present through MVNO arrangements in Austria, Denmark, Finland and the Netherlands.
3 Present through Network Partnership agreements in Austria, Cyprus (South), Denmark, Estonia, Finland, Lithuania, Luxembourg and Slovenia.
4 Stake sold in March 2004.
Source: Calculated by authors from data in Table 1.
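The market positions in Table 2 follow mechanically from ranking each country's operators by subscribers. A minimal sketch, seeded with the Swedish figures quoted in this chapter (any other market from Table 1 can be added in the same way):

    from collections import defaultdict

    # (country, operator) -> 2G subscribers at end-2003; illustrative subset.
    subscribers = {
        ("Sweden", "TeliaSonera"): 3_838_000,
        ("Sweden", "Tele2"): 3_310_000,
        ("Sweden", "Vodafone"): 1_422_000,
    }

    by_country = defaultdict(list)
    for (country, operator), n in subscribers.items():
        by_country[country].append((n, operator))

    # Rank operators within each country by subscriber count.
    for country, ops in by_country.items():
        for rank, (n, operator) in enumerate(sorted(ops, reverse=True), 1):
            print(f"{country}: {operator} ranks {rank} with {n:,} subscribers")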
Market concentration

Drawing on the subscriber information contained in Table 1, it is possible to calculate the percentage of the mobile market controlled by the largest one or two mobile operators as of 31 December 2003. As can be observed from Table 3 below, the largest mobile operator normally controlled at least 40 percent of the market, and often far more; in Spain, for example, Telefónica accounted for 52.8 percent of all mobile subscribers. However, in three countries – the Netherlands, Poland and the UK – the largest mobile operator controlled less than 40 percent of all subscribers. In the Netherlands and Poland the largest mobile operator still controlled at least 35 percent of the market, but in the UK the market share of the largest operator was just 25.8 percent.

Table 3. Market concentration, 31 December 2003

Country            Total 2G subscribers (000s)   Largest operator (% of 2G market)   Largest two operators (% of 2G market)
Austria                 7,150                        44.2                                72.8
Belgium                 8,068                        52.0                                84.5
Cyprus (South)            552                       100                                  --
Czech Republic          9,625                        43.8                                83.9
Denmark                 4,683                        52.7                                76.5
Estonia                   927                        53.0                                81.9
Finland                 4,562                        53.2                                83.3
France                 41,600                        48.8                                84.1
Germany                64,778                        40.6                                78.7
Greece                 10,290                        41.3                                73.1
Hungary                 7,287                        48.5                                84.5
Ireland                 3,426                        54.6                                94.8
Italy                  55,074                        47.3                                82.6
Latvia                  1,053                        50.7                               100
Lithuania               2,156                        48.8                                76.5
Luxembourg                534                        63.5                               100
Malta                     292                        55.5                               100
The Netherlands        13,534                        38.5                                63.6
Poland                 17,300                        35.9                                68.8
Portugal               10,595                        45.6                                77.7
Slovakia                3,605                        57.3                               100
Slovenia                1,686                        76.2                                97.6
Spain                  37,799                        52.8                                74.4
Sweden                  8,570                        44.8                                83.4
UK                     53,967                        25.8                                51.0

Source: Calculated by authors from data in Table 1.
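The figures in Table 3 are simple concentration ratios. A minimal sketch, seeded with the UK and Swedish end-2003 subscriber figures given in this chapter, reproduces the corresponding rows of the table:

    # Per-country lists of 2G subscriber counts, largest first or not;
    # the UK and Swedish figures are those reported for end-2003.
    markets = {
        "UK": [13_947_000, 13_600_000, 13_370_000, 13_050_000],
        "Sweden": [3_838_000, 3_310_000, 1_422_000],
    }

    for country, subs in markets.items():
        subs = sorted(subs, reverse=True)
        total = sum(subs)
        cr1 = 100 * subs[0] / total            # share of the largest operator
        cr2 = 100 * (subs[0] + subs[1]) / total  # share of the largest two
        print(f"{country}: CR1 = {cr1:.1f}%, CR2 = {cr2:.1f}%")
    # UK: CR1 = 25.8%, CR2 = 51.0%; Sweden: CR1 = 44.8%, CR2 = 83.4%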
If the calculation is extended to include the second-largest mobile operator in each market, then in most member states the mobile market was, to all intents and purposes, a duopoly. The two largest mobile operators normally controlled over 70 percent of the market between them, the only exceptions being the three countries identified above. However, with 68.8 percent of the Polish market controlled by the two largest mobile operators, it could be argued that the two exceptions that proved the rule were the Netherlands and the UK.

Where three or more mobile operators had been licensed, a considerable gap often existed between the number of subscribers controlled by the second-largest operator and the number controlled by the third-largest. In seven member states, the subscriber base of the third-largest mobile operator was approximately half the size of that of the second-largest. For example, in Sweden, Tele2 was the second-largest mobile operator with 3,310,000 subscribers at the end of December 2003, while Vodafone was the third-largest operator with just 1,422,000 subscribers; in other words, Vodafone had just 43 percent of the number of subscribers that Tele2 did. The other member states where the third-largest operator was approximately half the size of the second-largest were the Czech Republic, Denmark, France, Hungary, Italy and the Netherlands.

In addition to the aforementioned seven countries, it is possible to identify another three member states where the third-largest mobile operator had considerably fewer than half of the subscribers of the second-largest. In Germany, E-Plus had around one-third of the subscribers of the second-largest operator, Vodafone. In Ireland and Slovenia, the size difference was even larger: in Slovenia the third-largest mobile operator had just 11 percent of the subscribers of the second-largest, while in Ireland the figure was 13 percent.

It may also be noted that in the majority of EU member states the most recent mobile operator to launch its service was also the one with the fewest subscribers. Although this was true for fifteen EU member states, it is surprising that only four of these could be found among the accession countries; in other words, the date when a mobile operator launched its services mattered more in the EU15 than in the accession countries. For five member states – Austria, Finland, Latvia, Slovenia and the UK – the gap between the smallest and the next-largest operator was less than one million subscribers, while for the remaining ten countries the gap was greater, sometimes considerably greater, than one million. For example, in Italy there was a gap of almost ten million subscribers between Wind, the last of the three 2G operators to launch its service, and Vodafone, the second-largest operator in the market. Gaps of more than two million subscribers could also be found in France (8.1 million subscribers), Germany (2.6 million), the Czech Republic (2.3 million) and Greece (2.1 million). In Belgium, Hungary, Ireland, Portugal and Spain the gap was between one and two million subscribers.
Mobile communication markets in accession countries

The first issue to address at this point is the extent to which mobile operators were EU-centric in respect of their geographical footprints, distinguishing between operators with a heavy presence in the pre-accession EU and those with a presence in the accession countries. Table 4 is drawn up so as to include those operators with licences in at least two accession countries. This is a modest enough total, reflecting the fact that only one operator, Vodafone, was present in more than four of the ten. Even here, however, there is a need to distinguish carefully between operators with licences and companies operating under other arrangements. For example, it is possible for an operator to act as a mobile virtual network operator (MVNO) by leasing spare capacity on an incumbent's network. Technically, the definition of an MVNO requires an operator to own its own switches and sell under its own brand, although there are also less rigorous ways to operate, such as an enhanced service provider or simply a reseller of another operator's branded service. The primary advocate of the MVNO approach was Tele2 although, as Table 4 shows, it preferred direct investment in networks in the accession countries while operating as an MVNO in more established markets. For its part, Vodafone preferred to negotiate Partner Agreements, involving no direct stake, whereby the network in question was usually re-branded with the original operator's name hyphenated to that of Vodafone. By this means Vodafone enjoyed brand recognition without needing to lay out huge sums of money, and was able to introduce its Vodafone live! portal with associated roaming benefits, while the network owner enjoyed improved subscriber numbers and reduced churn because the Vodafone name was more attractive than its own.

In practice, Vodafone owned stakes in only three accession countries, so the operator with the greatest presence was, in fact, the Deutsche Telekom subsidiary T-Mobile. This is unsurprising, since the geographical position of Germany clearly lends itself to investment in countries close to its borders, many of which are accession countries (with possibly more to come). This is an important point because it is immediately noticeable that three of the big five EU incumbent mobile operators – mmO2, Telecom Italia Mobile (TIM) and Telefónica Móviles – do not appear in Table 4. For the latter in particular, this is ultimately a question of history, culture and language. Telefónica Móviles (and/or occasionally its parent) operated at the time in ten overseas countries, of which nine were to be found in Latin America; the only exception was Morocco, its immediate southern neighbour. In other words, apart from some toying with 3G licences that had so far resulted in nothing other than fairly substantial write-offs, the company had zero direct interest in the EU, let alone the accession countries. This strategy, it must be said, had served it well up to that point in time. TIM, for its part, also had over five million proportionate subscribers7 in Latin America, albeit in only three countries, the same number in which it operated elsewhere in the world. In practice, its presence in a single accession country, the Czech Republic, merely represented a tiny stake in the operator controlled by T-Mobile, and was the least significant of its six overseas holdings. As for mmO2 (both before and after its divestment from what is now the BT Group), it had spent a period of retrenchment involving the shedding of minority interests, such that it remained operational in only Germany, the Netherlands and the UK (plus the Isle of Man). Even so, it has to be said that it was never really interested in the accession countries, preferring to get involved in South-East Asia and North America.

7 Total number of subscribers multiplied by ownership stake.

Table 4. Operators present in at least two accession countries

Operator            Accession countries   EU countries in total
Vodafone                   7                    22
Orange                     2                    10
Tele2                      3                     9
TeliaSonera                3                     9
Deutsche Telekom           4                     8
TDC                        2                     5

1 Some entries are via Partner Agreement not involving direct investment.
2 One network in abeyance.
3 Some entries are 3G only.
4 Tele2 trades as an MVNO in several markets.
5 France Télécom is about to reclaim 100 percent ownership of Orange and hence no distinction is made concerning ultimate ownership.
6 Orange has accepted a takeover bid from TeliaSonera due for completion in October 2004.
7 The Deutsche Telekom stakes are mostly held via its mobile subsidiary T-Mobile.
Source: Calculated by authors from data in Table 1.
It is also useful, for the purposes of clarification, to examine briefly the operations of mobile companies in what used to be termed Eastern Europe, since only some of its constituent countries had become accession countries. As of 31 December 2003, four EU incumbents had a significant presence involving investment in Eastern Europe, namely Telenor, OTE, T-Mobile and TeliaSonera. OTE, interestingly, had stakes in Albania, Armenia, Bulgaria, Macedonia, Romania and Serbia, so it had not profited, to put it mildly, from accession. TeliaSonera's accession stakes were in practice entirely in the Baltic countries, so its stakes further east, in Azerbaijan, Georgia, Kazakhstan, Moldova and Russia, had also all missed the accession boat. For its part, Telenor had eleven overseas interests but, interestingly, it was not focused upon the Nordic/Baltic area, being present in only Denmark and Norway, whereas it had stakes in Albania, Greece, Montenegro, Russia and the Ukraine, in respect of which it had also missed out on accession. T-Mobile accordingly stood out because it had stakes in four accession countries, of which three (the Czech Republic, Hungary and Poland) each generated more than two million proportionate subscribers as of 31 December 2003. In addition, it owned stakes in Croatia, Macedonia and Russia, so of its total of eleven overseas holdings the majority were EU-based (Austria, the Czech Republic, Hungary, the Netherlands, Poland, Slovakia and the UK) as a result of accession, even if the USA comfortably generated the third-largest number of proportionate subscribers after Germany and the UK.

What the above suggests is that there is a useful distinction to be made between the Baltic and Eastern European aspects of accession – Cyprus (South) and Malta are of little significance because of their size and lack of potential for the entry of major operators. Taking the three Baltic accession countries as a whole, the six operators listed in Table 4 generated nine entries, although that figure was somewhat distorted by the Partnership Agreements. In contrast, the five broadly Eastern European countries generated ten entries. This was not a significant difference, so it is worth asking whether it resulted from the companies sampled. To answer this, we can return to the data in Whalley and Curwen (2003), which encompassed thirteen major European operators; these reveal that increasing the sample size makes almost no difference when compared to Table 4 above. Of the ten accession countries, only two are affected at all by the inclusion of the additional seven operators, namely Hungary, where Telenor had a substantial stake, and the Czech Republic, where TIM had a very small stake.

It is also possible to establish whether any significance can be attributed to the fact that two Nordic countries – Iceland and Norway – were not members of the EU. In practice, Iceland was not significant, since the only EU operator there was Vodafone via a Partnership Agreement, but in Norway we find (predictably) both Telenor and TeliaSonera (trading as NetCom GSM) as incumbents, with Tele2 as an MVNO (although it had returned its 3G licence).

In summary, the situation was as follows at the time of accession: Vodafone had invested in accession countries in the former Eastern Europe (Group A) but had been keen to extend its footprint to the Baltic accession countries (Group B) without investing heavily. T-Mobile had invested heavily in Group A but was wholly uninterested in Group B. Orange was less involved in Group B
but equally indifferent to Group A. TDC was slightly interested in both, while TeliaSonera and Tele2 were both heavily invested in Group B and wholly uninterested in Group A. Curiously, Telenor (not in Table 4) was the only Nordic operator acting in a wholly non-Nordic manner where accession was concerned.
Expansion and consolidation

Given the aforementioned differences in the countries in which the mobile operators identified in Table 4 had chosen to invest prior to accession, an inevitable question to ask is whether the accession of new member states will encourage any changes in their strategic priorities. In the first sub-section the focus is on T-Mobile, Orange and Vodafone, whose scope for further expansion is interwoven, whilst the second sub-section concentrates on those other mobile companies with a presence in the accession countries.

T-Mobile, Orange & Vodafone

If we begin with T-Mobile, then the strategic importance of the Eastern European countries to the company is clear for all to see. Indeed, the CEO Kai-Uwe Ricke, basking in predictions of massive cash inflows during 2004, stated in May 2004 that ‘Taking into account the EU’s enlargement towards the east, we are placing a special focus on this region’. It is possible to calculate the importance of this region to T-Mobile as at 31 December 2003, when it had in total 68.7 million proportionate subscribers. Of these, 26.3 million were in Germany and 43.9 million in total in the pre-accession EU. Accession transferred a further 7.7 million to that total, yielding 51.6 million in total in the post-accession EU. The rest were largely accounted for by the USA (12.8 million) and Russia (3.4 million), with Croatia and Macedonia adding 0.8 million between them. T-Mobile had a choice between moving into new countries and expanding its stakes in existing ones. In both cases, much depended upon existing shareholders and their willingness to sell. Faced with a cash offer above the market price, many shareholders might have been expected to succumb, but Deutsche Telekom’s own shareholders were unlikely to sanction using up cash reserves to support a move into the likes of Moldova. Moreover, T-Mobile was not willing to fight for the 2G licence issued in Bulgaria in May 2004 – the stake in BTC which came with the licence was won by a private equity company in preference to Turk Telecom. Hence, the probability was that T-Mobile would prefer to increase its existing stakes, as listed in Table 5. In some cases, the purchase of additional equity would consolidate its existing control over the operator, while in other cases the purchase could allow T-Mobile to take control of the operator for the first time. T-Mobile was particularly keen to acquire the 51 per cent of PTC it did not own in Poland, if only to keep one step ahead of Vodafone in a country with a modest penetration ratio. It allegedly upped its offer to €1.3 billion in June 2004, having had a slightly
lower offer rejected in September 2003. Thus, its existing stakes provided T-Mobile with ample incentives and opportunities to continue its Eastern European-focused investment strategy. However, one intriguing prospect lay in the Czech Republic where, despite its majority stake in an incumbent, T-Mobile was alleged to be interested in acquiring EuroTel Praha via a bid for parent Ceský Telecom. Presumably, if it did so it would be forced to dispose of its existing network, which was almost the same size, but this would get around the problem of trying to obtain full ownership of T-Mobile CZ.

Table 5. T-Mobile, 31 December 2003

| Total subscribers | Stake % | Proportionate subscribers | Country | Other main stakeholders |
|---|---|---|---|---|
| 2,034,000 | 100 | 2,034,000 | Austria | – |
| 465,000 | 12.3 | 57,000 | Belarus | Mezhdugorodnaya Svyaz – 51%⁴ |
| 140,000 | 25.0 | 49,000 | Bosnia | Hrvatski Telekomunikacije⁵ |
| 1,250,000 | 51.0 | 637,000 | Croatia | State-owned HRT – 49% |
| 3,860,000 | 56.6 | 2,185,000 | Czech Rep. | Ceske Radiokom – 39%⁶ |
| 26,300,000 | 100 | 26,300,000 | Germany | – |
| 3,536,000 | 59.5 | 2,104,000 | Hungary | Matáv⁷ |
| 470,000 | 27.2¹ | 128,000 | Macedonia | Matáv⁸ |
| 2,000,000 | 100 | 2,000,000 | Netherlands | – |
| 6,200,000 | 49.0 | 3,038,000 | Poland | Elektrim/Vivendi – 51% |
| 13,370,000 | 25.1 | 3,356,000 | Russia | Sistema – 50.1% |
| 1,550,000 | 25.6 | 397,000 | Slovakia | ⁹ |
| 13,600,000 | 100 | 13,600,000 | UK | – |
| 12,800,000 | 100 | 12,800,000 | USA | – |
| 87,575,000 | | 68,685,000 | Total²,³ | |

¹ Raised to 28.1 per cent during 2004. ² Deutsche Telekom subsidiary Detecon obtained a licence in Lebanon in April 2004. ³ It sold its stake in Malaysia’s Celcom and in Globe Telecom of the Philippines during 2003. ⁴ Deutsche Telekom owns 25.1 per cent of Russia’s MTS which owns 49 per cent of LLC MTS. ⁵ Deutsche Telekom owns 51 per cent of Croatia’s HRT which owns 49 per cent of Eronet. ⁶ Deutsche Bank 71.9 per cent. ⁷ Deutsche Telekom holds its stake indirectly via Matáv which wholly owns the licensee. ⁸ Deutsche Telekom holds its stake indirectly via a Matáv-led consortium which owns 51 per cent of MakTel. ⁹ Deutsche Telekom owns 51 per cent of Slovenské Telekomunikácie which owns 51 per cent of EuroTel. The rest is shared by Verizon Communications and AT&T Wireless.
Source: Compiled by authors from a variety of websites (see Table 1 above).
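As a minimal illustration of the arithmetic behind the ‘proportionate subscribers’ column, the following Python sketch weights venture subscribers by the equity stake held; the row values are taken from Table 5 and the variable names are ours, not the authors’.

```python
# A minimal sketch of the proportionate-subscriber arithmetic behind Table 5:
# venture subscribers weighted by the equity stake held.
t_mobile_stakes = {
    "Poland": (6_200_000, 49.0),   # (total subscribers, stake %)
    "Russia": (13_370_000, 25.1),
}
for country, (total, stake) in t_mobile_stakes.items():
    proportionate = total * stake / 100
    # Table 5 rounds to the nearest thousand: Poland 3,038,000; Russia 3,356,000
    print(country, round(proportionate))
```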
Will any of the other mobile operators identified in Table 4 follow T-Mobile and respond to accession by increasing their geographical coverage? As shown in Table 4, Vodafone was already present in all but three of the ten accession countries, so the scope for it to invest in more of these markets was actually quite limited. Of the three markets where Vodafone was not active, the most significant was the Czech Republic. Of the three 2G operators in the Czech Republic, two –
Ceský Mobil and EuroTel Praha – were potential acquisitions. The third operator, T-Mobile CZ, was majority owned by Deutsche Telekom, and thus unavailable unless, as noted above, T-Mobile was forced to sell it. In principle, EuroTel Praha could also be dismissed as an acquisition target for Vodafone since it was possibly being targeted by T-Mobile and, in any event, was a subsidiary of the incumbent PTO, which was most unlikely to want to be split off from its mobile operations. However, Vodafone had recently indicated that it was willing, and had the financial resources, to acquire the entire operation, and it did have prior experience of hiving off and selling a fixed-wire network dating back to its takeover of Mannesmann. Ceský Mobil was owned by Telesystem International Wireless, which was possibly prepared to sell its 96.4 per cent stake if the price was sufficiently attractive. Having said this, it remained to be seen whether Vodafone’s shareholders would be prepared to countenance the comparatively expensive acquisition of the smallest of the Czech Republic’s three 2G operators. Given the existing investment, and the 16 per cent market share of Ceský Mobil, a more attractive course of action could have been to enter into a Network Partnership agreement. The other two markets, Latvia and Slovakia, were comparatively small. As a consequence, it was more likely that Vodafone would enter these markets through the use of Network Partnership agreements rather than equity investment. However, this assumed that the existing 2G operators would enter into such an agreement. So far as Latvia was concerned, this was highly unlikely given who owned the two existing operators – Baltkom was owned by Tele2 while LMT was jointly owned by the Latvian state (51 per cent) and TeliaSonera (49 per cent). It was inconceivable that either Tele2 or TeliaSonera would sign a Network Partnership agreement with Vodafone as this would expand the regional coverage of their main competitor in the Baltic States. The situation in Slovakia was a little more complicated, not least because Vodafone’s ability to enter this market was also dependent on the strategic priorities and intentions of Orange. Orange had only a limited exposure to the mobile markets of the ten accession countries, with a presence across the EU that was increasingly skewed in favour of Western Europe. Fig. 1 below, where black signifies a country where Orange was either the majority or outright owner and grey where it was a minority owner, vividly illustrates this situation. Orange had made two accession country mobile investments: in Poland, where it – or strictly its parent – was a majority shareholder in PKT Centertel, and Slovakia, where it owned 63.9 per cent of Orange Slovensko, the largest operator. These two investments were, however, somewhat detached from the other investments that Orange had made. Their relative peripherality was further reinforced when subscriber numbers are taken into account; Poland and Slovakia accounted for less than 10 per cent of the wider European subscriber base of Orange. Orange had also invested in Romania, a country that hoped to be among the next wave of accession countries, though when these subscribers were also included the three countries accounted for roughly 17 per cent of the European subscriber base. In contrast, France and the UK, which were the two largest mobile markets of Orange, accounted for over 75 per cent of its European subscriber base. However,
given that Orange was committed to consolidation based upon countries where it held majority stakes and hence control, preferably in conjunction with a top two ranking, and that parent France Télécom’s short-term need for cash had abated somewhat, a wholesale withdrawal from the former Eastern Europe no longer seemed likely. Nevertheless, it is of interest to ask who might buy these stakes if they did become available.
Fig. 1. The geographical coverage of Orange, June 2004
Vodafone would potentially be interested in the 63.9 per cent of Orange Slovensko owned by Orange, not least because this would complement its existing array of mobile businesses and could possibly act, at a later date, as a springboard for a move into the other Balkan states, but it is unlikely to happen. Not only are Vodafone’s shareholders unlikely to be willing to support an acquisition that adds a comparatively small number of subscribers in a market where growth expectations are limited – Slovakia’s population is only 5.4 million and 3.6 million already have a mobile phone – but there are attractive investments with more potential elsewhere. One such is Romania where Vodafone is already a minority shareholder in Connex – see Table 6 – and whose population is four times that of Slovakia. Telesystem International Wireless (TIW), holder of a 63.6 per cent
stake, has (in July 2004) agreed to buy all or part of the 14.4 per cent stake held by Deraso Holdings. Anything it declines will be offered to Vodafone. The potential of the market may be tempting for Vodafone, but even if TIW is willing to sell, it is going to demand a premium price. Outside of such an acquisition, Vodafone is unlikely to expand into new markets other than through Network Partnership agreements.

Table 6. Vodafone, 31 December 2003

| Total subscribers | Stake % | Proportionate subscribers | Country | Other main stakeholders |
|---|---|---|---|---|
| 473,000 | 99.8 | 472,000 | Albania | – |
| 4,201,000 | 25.0 | 1,050,000 | Belgium | Belgacom – 75% |
| 14,700,000 | 43.9 | 6,453,000 | France | Vivendi Universal – 55.8% |
| 24,668,000 | 100 | 24,668,000 | Germany | – |
| 3,264,000 | 98.2¹ | 3,205,000 | Greece | – |
| 1,331,000 | 87.9 | 1,170,000 | Hungary | Antenna Hungária – 12.1% |
| 1,871,000 | 100 | 1,871,000 | Ireland | – |
| 19,411,000 | 76.8 | 15,852,000 | Italy | Verizon Comms – 12.2% |
| 162,000 | 100 | 162,000 | Malta | – |
| 3,405,000 | 99.9 | 3,400,000 | Netherlands | – |
| 5,300,000 | 19.6 | 1,039,000 | Poland | TDC, KGHM, PKN – all 19.6% |
| 3,332,000 | 100 | 3,332,000 | Portugal | – |
| 3,457,000 | 20.1 | 695,000 | Romania | TIW – 63.5% |
| 9,685,000 | 100 | 9,685,000 | Spain | – |
| 1,432,000 | 99.1² | 1,409,000 | Sweden | – |
| 3,840,000 | 25.0 | 960,000 | Switzerland | Swisscom – 75% |
| 13,947,000 | 100 | 13,947,000 | UK | – |
| 326,280,000³ | | 130,891,000 | Total⁴ | |

¹ Currently 99.4 per cent. ² Currently 100 per cent. ³ Vodafone sold its 7.0 per cent stake in (as yet un-launched) Spanish 3G operator Xfera in May 2004. ⁴ Including other worldwide holdings.
Source: Compiled by authors from a variety of websites (see Table 1 above).
Other mobile companies with a presence in accession countries

What of the other companies identified in Table 4? For different reasons, neither TDC nor TeliaSonera were likely to expand their geographical footprint as a result of EU expansion. TDC had only two remaining investments in accession countries, in Bité in Lithuania and Polkomtel in Poland. During 2003, TDC sold its holdings in the Czech Republic and the Ukraine, so it did not appear to see the former Eastern Europe as other than providing opportunities for financial investments. In any event, any additional investments by TDC in accession countries could be ruled out until the uncertainty over its own future was resolved. In mid-2004, SBC Communications Inc. had sold 32.1 per cent of the 41.6 per cent of
TDC that it owned⁸, but to financial institutions rather than to another telco. Pending the completion of this sale, TDC stated that it would not enter into any negotiations regarding ‘potential partnerships or strategic transactions at group level’. Moreover, these would only resume once the new board had been able to conduct a strategic review of the company.

⁸ The remaining shares, approximately 8.4 per cent of the company, will be purchased by TDC itself at a later date.

Table 7. TDC, 31 December 2003

| Total subscribers | Stake % | Proportionate subscribers | Country | Other main stakeholders |
|---|---|---|---|---|
| 1,430,000 | 15.0 | 215,000 | Austria | E.ON – 50.1% |
| 4,201,000 | 11.6¹ | 487,000 | Belgium | – |
| 2,470,000 | 100 | 2,470,000 | Denmark | – |
| 507,000 | 100 | 507,000 | Lithuania | – |
| 5,400,000 | 19.6 | 1,058,000 | Poland | Vodafone, KGHM – 19.6% each |
| 1,234,000 | 100² | 1,234,000 | Switzerland | – |
| 15,242,000 | | 5,971,000 | Total³ | |

¹ Sold in March 2004. ² TDC Schweiz is a separate company. The others trade as TDC Mobile. ³ TDC sold its stakes in the Czech Republic and the Ukraine during 2003.
Source: Compiled by authors from a variety of websites (see Table 1 above).
The two accession investments that TDC still retained could be sold to free resources for use elsewhere, although they did provide a significant proportion of its subscribers. Interestingly, Vodafone was already associated with both of these companies since it had a Network Partnership agreement with Bité in Lithuania and was a fellow shareholder in Polkomtel in Poland. Thus, one possible scenario would see TDC exit Lithuania and Poland through the sale of its stakes in Bité and Polkomtel to Vodafone. However, whether Vodafone would make such a purchase was dependent on its ability to convince its shareholders. The case for acquiring additional shares in Polkomtel was more compelling than that for acquiring Bité, primarily because Poland was a much bigger market with more growth potential than Lithuania, and whoever acquired Bité would anyway be likely to want to continue its Network Partnership agreement with Vodafone. Nevertheless, the stakes of Vodafone and TDC added together would still only amount to 39.2 per cent of Polkomtel, so Vodafone would presumably want to buy out sufficient other stakeholders at the same time to ensure majority ownership. As noted above, Deutsche Telekom, for one, expected Vodafone to strike in the reasonably near future, and it had to be Vodafone’s likeliest next move within the accession countries. With the exception of the three Baltic States, TeliaSonera had no other mobile investments in accession countries. This should not be taken as suggesting that TeliaSonera had only a limited international presence outside of its two home markets – Table 8 clearly demonstrates that this was not the case – but rather that its mobile investments were in a broad array of countries including some that may be
among the next batch of accession countries. Telia and Sonera, prior to their merger, did take advantage of the 3G licensing process to enter Germany, Italy and Spain, three of the largest Western European markets. However, there followed a period of post-merger repentance involving the writing off of the investments in all three markets⁹.

⁹ Sonera wrote down the value of its investments in Group 3G and Ipse 2000 to zero at a total cost of SEK39.2bn in the second quarter of 2002 (TeliaSonera, 2003, p.53). This was followed in December 2002 by a SEK660m write-down on the value of its stake in Xfera, its Spanish 3G investment.

Table 8. TeliaSonera, 31 December 2003

| Total subscribers | Stake % | Proportionate subscribers | Country | Other main stakeholders |
|---|---|---|---|---|
| 912,000 | 37.5¹ | 342,000 | Azerbaijan | Turkcell |
| 525,000 | 100 | 525,000 | Denmark⁷ | – |
| 492,000 | 49.0 | 246,000 | Estonia | State – 27% |
| 2,428,000 | 100 | 2,428,000 | Finland | – |
| 307,000 | 62.0 | 190,000 | Georgia | Turkcell |
| 1,016,000 | 11.0² | 112,000 | Hong Kong | – |
| 990,000 | 38.0 | 376,000 | Kazakhstan | Turkcell; Kazakhtelecom – 49% |
| 534,000 | 60.0 | 320,000 | Latvia | State – 40% |
| 1,052,000 | 90.0³ | 947,000 | Lithuania | – |
| 176,000 | 74.0 | 130,000 | Moldova | Turkcell |
| 190,000 | 26.0⁴ | 49,000 | Namibia | – |
| 1,198,000 | 100 | 1,198,000 | Norway | – |
| 6,175,000 | 43.8⁵ | 2,705,000 | Russia | Telekominvest; LV Finance |
| 3,838,000 | 100 | 3,838,000 | Sweden | – |
| 19,000,000 | 37.3 | 7,090,000 | Turkey | Çukurova Group – 42% |
| 468,000 | 30.0 | 140,000 | Uganda | MTN – 52% |
| 39,745,000 | | 20,809,000 | Total⁶ | |

¹ The stakes in Azerbaijan, Georgia, Moldova and Kazakhstan are held via Fintur Holdings, held jointly by TeliaSonera (58.55 per cent) and Turkcell (41.45 per cent). However, TeliaSonera claims the majority of the subscribers. ² Since sold. ³ Raised to 100 per cent in August 2004. ⁴ Sold in May 2004. ⁵ As repeated on TeliaSonera’s website but elsewhere consistently stated to be 35.8 per cent although a Russian expert says it is formally 34.1 per cent! ⁶ During 2003, TeliaSonera sold 36 per cent of Bharti Mobile of India. It obtained a share of a licence in Iran during 2004. In Spain, it owns 34.2 per cent jointly with ACS and 2.2 per cent independently in the as yet un-launched 3G operator Xfera. In Italy, it holds a 3G licence but has written off its investment. ⁷ TeliaSonera has agreed to buy Orange Denmark, with completion due in October 2004.
Source: Compiled by authors from a variety of websites (see Table 1 above).
Interestingly, TeliaSonera had begun to refer to itself as ‘the Nordic and Baltic telecommunications leader’, but although this might simply have been an appropriate description of its market position in these two regions, it did also raise the
possibility that it would further reduce its international footprint. Without a ‘local’ partner to offset the risk inherent in investing in the next wave of accession countries, it became possible that TeliaSonera would sell more of its overseas investments, leaving it predominantly as a Nordic and Baltic operator. The April 2004 offer by TeliaSonera to take outright control of Eesti Telekom, although unsuccessful, together with the purchase of the outstanding 10 per cent of Lithuania’s Omnitel in August 2004 and the subsequent sales of stakes in Hong Kong and Namibia, reinforced the feeling that its strategic priorities lay in the Baltic and Nordic states and not elsewhere. Nevertheless, TeliaSonera was set to be debt-free by the end of 2004, and had a substantial war chest for acquisitions, so a contraction of its international footprint was not a foregone conclusion. It is worth noting that the Finnish government appeared to have agreed to the effective takeover of Sonera by Telia on the understanding that TeliaSonera would pursue a strategy of growth. Ultimately, because TeliaSonera stated in June 2004 that its ambition was to take majority control of its foreign investments, and given the size of the proportionate subscribers involved, its strategy was dependent primarily upon its relationship with its main partners. For example, the relationship between TeliaSonera and Turkcell’s largest shareholder, Çukurova, had at times been fraught – Çukurova’s stake was confiscated by the government in 2003 as collateral against debts and was about to be returned in stages commencing in July 2004 – and the situation in Russia was permanently unsettled. Such problems are usually addressed either via a takeover or a withdrawal. It is significant that, in late June 2004, the Finnish deputy CEO of TeliaSonera, with responsibility for pursuing the purchase of majority stakes in Turkcell and Megafon, was dismissed by the Swedish CEO (George, 2004). At the very least, this indicated that TeliaSonera would not ‘overpay’ to take control, but to remain a permanent minority investor hardly seemed an attractive proposition, as TeliaSonera was prepared to acknowledge. The final company mentioned in Table 4 with a presence in the accession countries is Tele2. Although Tele2 operated in ten EU member states, it had made just three investments in the accession countries, namely Tele2 Eesti in Estonia, Tele2 Mobile in Latvia and UAB Tele2 in Lithuania. In other words, Tele2 had invested in the Baltic States, which geographically complemented its presence in the nearby Nordic States. Such a concentration of investment is clearly evident from Fig. 2 below. Fig. 2 also draws attention to a second characteristic of Tele2’s investment strategy; that is, its tendency to use MVNO arrangements to enter new markets. Of the nine mobile investments that Tele2 had made, almost half were as a MVNO. Those countries where Tele2 owned a network are shaded black on Fig. 2, while those where it had entered into a MVNO arrangement are shaded grey. Of the five networks owned by Tele2, only one, in Luxembourg, could be found outside of Sweden and the Baltic States. Thus, the geographical preference in terms of ownership was marked, as was the preference for control – only in Sweden, where it had an 87.3 per cent stake, did Tele2 not own the entire company.
Fig. 2. The geographical coverage of Tele2, June 2004
There was also a temporal element to Tele2’s strategy. Since 2000, the primary way through which Tele2 has entered new markets has been by setting up as a MVNO. Of the six EU member states that Tele2 had expanded into since 2000, only one, Luxembourg, involved 2G network ownership, although given its small size, one of the main reasons for creating a MVNO – cost – was not an issue there. It may also be noted that Tele2 had acquired a 3G licence in Finland, although it had yet to launch the network, a factor necessitating the use of another operator’s 2G network. Moreover, all of the mobile markets that Tele2 had entered since 2000 as a MVNO were EU15 member states and not accession countries. However, Tele2’s strategy was a little more eclectic than it may appear to be on the basis of the above, since it had recently acquired a regional 2G licence in Switzerland and had an ongoing operation in Russia, although the number of mobile subscribers there was modest. When the geographical focus of Tele2 and its use of MVNOs are combined, we can conclude that while it might choose to expand into new mobile markets in the future, these markets are more likely to be found among the EU15 member states than among the accession countries. As a consequence, Tele2 is unlikely to play
anything other than a minor role in the mobile markets of accession countries outside of the Baltic States.
Conclusions

The above discussion has focused on the ownership of mobile communication licences in the enlarged EU. In the course of this a distinction has been made between the original 15 member states and the ten accession countries that joined in May 2004. Drawing such a distinction allowed those mobile communication companies with a presence in the accession countries to be differentiated from those without one. The first conclusion that can be drawn is that the largest multiple owners of mobile communication licences identified by Whalley and Curwen (2003) have, with the exception of Vodafone, only a limited presence in the mobile communication markets of the ten accession countries. Both Tele2 and TeliaSonera have focused on the Baltic States, while Deutsche Telekom has concentrated its attention on those Eastern European markets that either border, or are close to, its home market. This is not particularly surprising since liberalisation offered so many opportunities to expand into the other member states of the pre-accession EU, and the costs of licence acquisition plus network roll-out were extremely burdensome. Hence, stake-building in the Baltic region or elsewhere in the former Eastern Europe was likely to be influenced by political as much as by economic considerations. Once the date was pencilled in for accession there was the possibility of renewed strategic interest in the accession markets, but it came at a bad time since most operators were struggling with the fall-out from the collapse that began in 2002. Few accordingly had the wherewithal, let alone the will, to make expansionary moves. The obvious candidate to do so was Vodafone, given its resources and its strategy based upon its international footprint, while an alternative contender such as Orange was forced to retrench to the point that it became, to all intents and purposes, a Western European-focused mobile operator with a presence in an increasingly scattered set of markets. While the need to raise capital for its parent company has abated, Orange, like TeliaSonera, is no longer interested in playing bit parts and wants to be a serious player or to exit. Exit is nevertheless easier said than done because of the shortage of buyers, and even the likes of Vodafone would be hard pressed to pay the kind of premium that Orange (or other potential sellers) would demand in the present investment climate. This suggests that operator footprints are unlikely to change, even among those operators wishing, for whatever reason, to exit accession markets. Insofar as stake-building is concerned, it does appear to be far more likely that operators will seek to consolidate their positions in existing markets through purchasing additional equity in companies where they already own a stake, but since these are in short supply and would unquestionably require a considerable control
premium to be paid, we can reasonably conclude that very few of the accession countries will witness ownership changes over the course of the next year or two. In this respect it is significant that, although Vodafone is present in seven accession countries, it does not own a network in all seven markets. Indeed, Vodafone owns a network in just two markets – Malta and Poland – and is present in the other five through the use of Network Partnership agreements. Those markets where Vodafone has used Network Partnership agreements are characterised by their small size. When this observation is combined with the propensity of Tele2 to use MVNO arrangements to enter markets, a final conclusion is that multiple licence owners are using a wider variety of entry modes than was previously the case. Tele2 has made MVNO arrangements in small and large markets alike, whilst Vodafone has opted to establish Network Partnership agreements in preference to the purchase of a network in small markets.
References

Cottrell R (2003) A survey of European enlargement. When east meets west. The Economist, 22 November
Curwen P (2002) The Future of Mobile Communications: Awaiting the Third Generation. Palgrave, Basingstoke
George N (2004) Shadows over an unhappy Nordic marriage. http://news.ft.com, 29 June
Rachman G (2001) A survey of European enlargement. Europe’s magnetic attraction. The Economist, 19 May
TDC (2004) SBC intends to sell TDC shares. http://tdc.com/about/press/releases
Tele2 (2003) Annual Report 2003. Stockholm, Sweden
TeliaSonera (2003) Annual Report 2002. Stockholm, Sweden
Whalley J, Curwen P (2003) Licence acquisition strategy in the European mobile communications industry. Info, 5(6): 45–57
Fourier-based Study of the Oscillatory Behaviour of the Telecommunications Industry

Federico Kuhlmann¹, Maria Elena Algorri², Christian N. Holschneider Flores
Instituto Tecnológico Autónomo de México (ITAM), Mexico
Abstract

Even though business cycles are not new as a concept, they seem to be reasonably new in the telecommunications industry. In the past (up to very recent years), the behaviour of this particular industry sector had shown steady growth, as measured by almost any kind of indicator: network and infrastructure deployments, equipment manufacturing, network usage, numbers of subscribers and users, stock prices, etc. However, recent developments have altered this long-standing trend and industry behaviour has shown clear contraction symptoms, which, hopefully, will soon evolve into a new and long growth period. Worldwide telecom investments have grown steadily (as expected), but year-to-year incremental investments, i.e., the difference of investments (expressed in present value) in years (k) and (k-1), display an oscillatory behaviour. This fact suggests the use of the Discrete Fourier Transform (DFT) method to quantitatively characterise this oscillatory behaviour. This paper analyses these oscillations using the DFT to construct and imagine possible scenarios for the near-term future, based on recent historical data.
Introduction

Noam (2002) analyses the long-term lessons of the recent upturn and downturn in the telecommunications industry. One of the theories which he uses in an attempt to explain industry behaviour is the so-called “Austrian” theory, which has its focus on the creation of a large overcapacity. The Austrian approach seems to describe the telecom industry reasonably well: the various network operators over-optimistically projected long distance market growth. Everybody concentrated on building capacity, apparently to overwhelm competitors and gain
¹ E-mail: [email protected]
² E-mail: [email protected]
size. As mentioned in Noam (2002), capital expenditures had significant average annual growth rates from 1996–2001. Simultaneously, the cost of bandwidth fell by about 54% annually. Overcapacity was assisted by the lumpiness of telecom investments, such as oceanic cables, and their irreversible nature. It was further assisted by the tendency of Wall Street analysts to value a firm’s progress by measures of its physical infrastructure (such as number of cell-sites or fibre-miles), rather than by other more realistic indicators (like traffic or usage). Noam concludes that cyclicality will be an inherent part of the telecom sector in the future. To deal with these oscillatory dynamics, the most effective responses by companies and investors are to seek consolidation and cooperation. Hence, an oligopoly is likely to be the near term equilibrium market structure. This implies that governments, if seeking stabilisation, will need to reassess their basic policy approaches, which have long been focused on the enabling of competition. And this, in turn, could mean that the structure of the network industry of the future could look a lot more like the old telecom industry and less like the new internet.
Investment indicators

Even though there are some years in which the general growth trend in worldwide telecommunications investments, expressed in present value US dollars, appears to be interrupted, Fig. 1 and Table 1 display a behaviour characterised by a reasonably steady growth trend of the normalised total investment values over the period 1975–2002.
Fig. 1. Normalised total telecom investments
Table 1. Present value total telecom investments

| Year | Present value in USD | Year | Present value in USD |
|---|---|---|---|
| 1975 | 118,922,190,024 | 1989 | 145,934,740,673 |
| 1976 | 119,066,445,059 | 1990 | 151,377,929,497 |
| 1977 | 124,914,051,783 | 1991 | 161,249,455,813 |
| 1978 | 139,074,997,338 | 1992 | 166,704,424,456 |
| 1979 | 141,062,269,846 | 1993 | 167,709,265,726 |
| 1980 | 131,121,950,625 | 1994 | 167,515,542,114 |
| 1981 | 124,878,564,311 | 1995 | 190,091,460,636 |
| 1982 | 114,185,815,282 | 1996 | 199,959,343,524 |
| 1983 | 105,081,045,146 | 1997 | 198,305,235,047 |
| 1984 | 96,766,875,587 | 1998 | 196,085,940,420 |
| 1985 | 104,389,021,942 | 1999 | 207,105,127,315 |
| 1986 | 119,553,543,032 | 2000 | 228,608,361,134 |
| 1987 | 132,831,251,255 | 2001 | 206,943,833,359 |
| 1988 | 140,960,108,970 | 2002 | 122,213,707,831 |

Worldwide telecom investments (from ITU World Telecommunication Indicators [2003]).
However, when analysing the year-to-year incremental values of these investments, an oscillatory behaviour can be observed (see Fig. 2 and Table 2; data were taken from ITU World Telecommunication Indicators [2003]³). This suggests the use of the Discrete Fourier Transform (DFT) method to quantitatively characterise the dynamics of this indicator (Cabello Murguia 1997).
³ Data processing was performed using the software package Matlab 7.0. An initial condition of 0 was assumed.
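As a minimal illustration of this data preparation, the following Python/NumPy sketch (the authors used Matlab; the variable names here are ours) derives the first few Table 2 increments from the Table 1 present values:

```python
import numpy as np

# A sketch of the year-to-year increments of Table 2, derived from the first
# three present values of Table 1. The prepend reproduces the initial
# condition of 0 mentioned in the footnote above.
present_value = np.array([118_922_190_024, 119_066_445_059, 124_914_051_783])

increment = np.diff(present_value, prepend=present_value[0])
print(increment)   # [0, 144255035, 5847606724] - the Table 2 values to within
                   # one dollar, the residue of rounding in the source data
```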
Fig. 2. Normalised incremental telecom investments
Table 2. Incremental telecom investments (Inv. in year K − Inv. in year K−1, in $)

| Year | Yearly incremental investment | Year | Yearly incremental investment |
|---|---|---|---|
| 1975 | 0 | 1989 | 4,974,631,703 |
| 1976 | 144,255,034 | 1990 | 5,443,188,824 |
| 1977 | 5,847,606,725 | 1991 | 9,871,526,316 |
| 1978 | 14,160,945,555 | 1992 | 5,454,968,644 |
| 1979 | 1,987,272,508 | 1993 | 1,004,841,269 |
| 1980 | −9,940,319,221 | 1994 | −193,723,612 |
| 1981 | −6,243,386,315 | 1995 | 22,575,918,523 |
| 1982 | −10,692,749,029 | 1996 | 9,867,882,887 |
| 1983 | −9,104,770,136 | 1997 | −1,654,108,477 |
| 1984 | −8,314,169,559 | 1998 | −2,219,294,626 |
| 1985 | 7,622,146,355 | 1999 | 11,019,186,895 |
| 1986 | 15,164,521,090 | 2000 | 21,503,233,819 |
| 1987 | 13,277,708,222 | 2001 | −21,664,527,775 |
| 1988 | 8,128,857,715 | 2002 | −84,730,125,528 |

Negative values indicate a year-to-year decline.
Discrete Fourier Transform (DFT)

The DFT method can be used to characterise phenomena that display oscillations (Oppenheim et al. 1999). Fourier Transforms are used to represent the frequency components in a periodic sequence. However, with the correct interpretation, the method can be applied to finite duration sequences; the resulting Fourier representation for finite duration sequences is known as the DFT, and it can be used to represent a finite duration sequence of length N by another periodic sequence of period N. Consider the N samples of the sequence $x(n)$ and assume that $x(n) = 0$ except on the interval $0 \le n \le N-1$. The corresponding periodic sequence of period N, in which $x(n)$ is one period, will be denoted by $\tilde{x}(n)$ and is described by

$$\tilde{x}(n) = \sum_{r=-\infty}^{\infty} x(n + rN). \tag{1}$$

Since $x(n)$ is assumed to be of finite length N, there is no overlap between the sequences $x(n + rN)$ for different values of $r$. Eq. (1) can therefore be written as

$$\tilde{x}(n) = x(n \bmod N), \tag{2}$$

where $\bmod N$ stands for modulo N. The finite duration sequence $x(n)$ is obtained from $\tilde{x}(n)$ by extracting a single period, i.e.

$$x(n) = \begin{cases} \tilde{x}(n), & 0 \le n \le N-1 \\ 0, & \text{otherwise.} \end{cases} \tag{3}$$

For notational convenience, it is useful to define the rectangular sequence $R_N(n)$ by

$$R_N(n) = \begin{cases} 1, & 0 \le n \le N-1 \\ 0, & \text{otherwise.} \end{cases} \tag{4}$$

Thus, the above equation can be written as

$$x(n) = \tilde{x}(n)\, R_N(n). \tag{5}$$

With the convention $W_N = e^{-j(2\pi/N)}$, a data sequence $\tilde{x}(n)$ and its corresponding DFT $\tilde{X}(k)$ are related by

$$\tilde{X}(k) = \sum_{n=0}^{N-1} \tilde{x}(n)\, W_N^{kn} \tag{6}$$

$$\tilde{x}(n) = \frac{1}{N} \sum_{k=0}^{N-1} \tilde{X}(k)\, W_N^{-kn}. \tag{7}$$

Due to the fact that the sums in Eqs. (6) and (7) involve only the interval $0 \le n \le N-1$, we can relate the sequence $x(n)$ and its DFT $X(k)$ by

$$X(k) = \begin{cases} \sum_{n=0}^{N-1} x(n)\, W_N^{kn}, & 0 \le k \le N-1 \\ 0, & \text{otherwise} \end{cases} \tag{8}$$

$$x(n) = \begin{cases} \frac{1}{N} \sum_{k=0}^{N-1} X(k)\, W_N^{-kn}, & 0 \le n \le N-1 \\ 0, & \text{otherwise.} \end{cases} \tag{9}$$

The pair of transforms given by Eqs. (8) and (9) is known as a DFT pair. They are referred to as the analysis and the synthesis transformation, respectively. The DFT is useful for the identification of possible periodicities in finite length data series, as well as for the measurement of the relative magnitudes of any periodic components. Using Fourier analysis, any time series (regardless of its length and of whether it is periodic or not) can be expressed as a linear combination of sines and cosines (complex exponentials). The DFT is the tool to “transform” finite sequences into a series of complex exponentials. The DFT is expressed as a spectrum of magnitudes and phases of the sines and cosines that compose the sequence. We can read a DFT as a sequence of frequency components ranging from $f_{min} = 0$ to $f_{max}$, where $f_{max}$ is the inverse of twice the sequence time resolution. The squared magnitude of every frequency component in the DFT indicates the energy of the time series which is contained in that particular frequency (i.e., the contribution of a complex exponential of that frequency to the original time sequence). Thus, a reasonably periodic time sequence will have most of its energy concentrated in a few frequencies, whereas an aperiodic series will have its energy more evenly distributed over many frequencies. In the DFT all the frequency components are represented twice, since negative and positive frequencies appear symmetrically in the transformation. It is therefore sufficient to analyse half of the DFT components. Since we analyse only half of the DFT components (corresponding to the positive frequency power spectrum), we can associate the first half of the DFT samples in the positive energy spectrum to the low frequencies, and the second half to the high frequencies.
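For readers who prefer to see the pair in operation, the following sketch (in Python/NumPy rather than the Matlab used by the authors; the eight-sample cosine is an invented test sequence) verifies Eqs. (8) and (9) and the symmetry of the energy spectrum described above:

```python
import numpy as np

# A sketch of the DFT pair of Eqs. (8) and (9) and of reading the energy
# spectrum; the test sequence is illustrative only.
N = 8
n = np.arange(N)
x = np.cos(2 * np.pi * n / 4)        # exactly two cycles inside the window

X = np.fft.fft(x)                    # analysis transform, Eq. (8)
x_back = np.fft.ifft(X).real         # synthesis transform, Eq. (9)
assert np.allclose(x, x_back)        # the pair is exactly invertible

energy = np.abs(X) ** 2              # squared magnitude per component
# Positive and negative frequencies appear symmetrically (components k and
# N-k carry equal energy), so analysing half of the DFT suffices.
assert np.allclose(energy[1:], energy[1:][::-1])
print(energy[: N // 2 + 1])          # all energy sits at k = 2: a periodic
                                     # series concentrates in few frequencies
```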
Application

The following procedure describes an application of the DFT in constructing scenarios for the future of telecom investments, once the oscillatory behaviour of the present value telecom investment data has been identified; a code sketch of these steps is given after the footnote below:

1. The yearly variation of the indicator (telecom investments) is calculated.
2. The DFT is applied to the results of step 1 multiplied by a Hamming⁴ window to avoid spurious effects (the DFT assumes that at least one complete cycle of the oscillatory behaviour is contained inside the window of analysis).
3. The high frequency components with less energy are eliminated.
4. The inverse Fourier transform of the remaining frequency components is calculated, the Hamming window is eliminated and the resulting sequence is interpreted as one or more cycles of an oscillatory time series.
5. Assuming that the macro-trends will not suffer major changes, the future values of the indicator can now be calculated by replicating the sequence in step 4.

The input time series, representing normalised worldwide present value incremental investments in telecommunications for the period 1975–2002, is presented in Fig. 2. It can be seen that the values for the years 1980–1984, as well as those for 1994, 1997 and 1998, were slightly negative, thus contributing the negative portion of a cycle. The positive contributions occurred during the years 1985–1993. The new negative portion apparently started in 2001 and continued in 2002. The “macro” trends, which could likely appear again in the future, are represented by the highest energy spectral components of the DFT, while the lower energy spectral components originate the generally slight deviations from this general trend. By observing the magnitude of the DFT of the input time series (see Fig. 3), it can be seen that most of the lower energy components are in the higher frequency band (samples 10 to 20). The origins of these low energy components are possibly minor short-term (i.e., fast) perturbations, which very likely will not reappear in the future.

⁴ Hamming[n] = 0.54 − 0.46 cos(2πn/N), 0 ≤ n ≤ N − 1; see Oppenheim et al. (1999) for details.
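The following sketch condenses the five steps into Python/NumPy. It is illustrative only: the rounded Table 1 values are used in billions of dollars, the energy-ranking rule in step 3 and the 30% cut stand in for the authors’ inspection of the spectrum, and all variable names are ours; the original processing was done in Matlab 7.0.

```python
import numpy as np

# A sketch of the five-step scenario procedure on the Table 1 data.
present_value = np.array([
    118.92, 119.07, 124.91, 139.07, 141.06, 131.12, 124.88, 114.19,
    105.08, 96.77, 104.39, 119.55, 132.83, 140.96, 145.93, 151.38,
    161.25, 166.70, 167.71, 167.52, 190.09, 199.96, 198.31, 196.09,
    207.11, 228.61, 206.94, 122.21])           # 1975-2002, bn USD (rounded)

# Step 1: yearly variation, with the initial condition of 0.
increment = np.diff(present_value, prepend=present_value[0])

# Step 2: Hamming window, then DFT.
N = len(increment)
window = np.hamming(N)
spectrum = np.fft.fft(increment * window)

# Step 3: zero the weakest components until ~30% of the energy is removed.
energy = np.abs(spectrum) ** 2
weakest_first = np.argsort(energy)
removable = weakest_first[np.cumsum(energy[weakest_first]) <= 0.30 * energy.sum()]
spectrum[removable] = 0.0

# Step 4: inverse transform, then undo the window (it is never zero).
recovered = np.fft.ifft(spectrum).real / window

# Step 5: replicate the recovered cycle to sketch 2003-2020, as in Fig. 8.
future = np.tile(recovered, 2)[N:N + 18]
print(np.round(future, 2))
```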
Fig. 3. DFT magnitude of the normalised incremental telecom investments of Fig. 2
In order to imagine a possible future behaviour of the input signal (incremental year-to-year investments), based only on macro trends, these low energy spectral components can be eliminated, and the Inverse Discrete Fourier Transform (IDFT) can be calculated from the remaining DFT components. We progressively eliminated the lowest energy spectral components until the total energy of the original signal was reduced by 30%. Up to this value the distortion introduced to the original data was not very significant⁵. Figs. 4 through 7 show progressively energy-decimated DFTs of the normalised incremental telecom investments shown in Fig. 2. Fig. 4a shows the magnitude of the complete DFT as shown in Fig. 3, and Fig. 4b shows the corresponding time sequence recovered using the IDFT, which matches the data of Fig. 2⁶. Fig. 5a shows the magnitude of the DFT after eliminating 10% of the signal energy (the lowest energy components), which corresponds to high frequency components. Figs. 6a and 7a show progressively decimated DFTs corresponding to 80% and 70% of the original signal energy. In Figs. 5b, 6b and 7b the time sequences recovered from the decimated DFTs can be observed. Encouragingly, since only high frequencies were eliminated, the form of the recovered time sequences shows little variation in all the cases. Note that the loss of high frequencies eliminates the high negative isolated peaks of the time sequences in Figs. 2 and 4, but leaves the slower varying signal envelope unaltered.
⁵ This distortion was not measured quantitatively, and the 30% energy reduction was based on a subjective evaluation.
⁶ For better comparison with Figs. 5–7, the y-axis scaling has been changed (and therefore the last two samples are out of range).
Fig. 4. a) Magnitude of the complete DFT of Fig. 2. b) The time sequence recovered from a) using the IDFT. Note there has been a scale change from Fig. 2
Fig. 5. a) DFT magnitude of Fig. 4a with 10% energy in the high frequencies eliminated. b) The time sequence recovered from a). Note there has been a scale change from Fig. 4
Fig. 6. a) DFT magnitude of Fig. 4a with 20% energy in the high frequencies eliminated. b) The time sequence recovered from a). Note the scale change with respect to Fig. 4
Fig. 7. a) DFT magnitude of Fig. 4a with 30% energy in the high frequencies eliminated. b) The time sequence recovered from a). Note the scale change with respect to Fig. 4
Using the results of Fig. 7b (the DFT retaining 70% of the original energy) and the time series recovered from it via the IDFT, we can try to “imagine” the near term future by replicating the time sequence of Fig. 7b (corresponding to the years 1975 through 2002) to represent the incremental investment behaviour for the years 2003 through 2020. Fig. 8 displays the data for the years 2003 through 2020. Corresponding present values are given in Table 3. It can be seen that, based solely on the trends contained in recent historical data, a relatively short recovery period could be expected (the next 2 years), which, in turn, could trigger a new contraction period of approximately 5 years.
Fig. 8. Possible near term trends for incremental investments based on Fig. 7
Table 3. Relation between the sequence recovered using the IDFT from the 70% energy DFT and the possible near term trends for incremental investments

| Year | Recovered sequence from 70% energy DFT | Renormalised incremental investment in $ | Investment value in $ |
|---|---|---|---|
| 2003 | −0.015 | −1,270,951,883 | 120,942,755,948 |
| 2004 | 0.0176 | 1,491,250,209 | 122,434,006,157 |
| 2005 | 0.0134 | 1,135,383,682 | 123,569,389,839 |
| 2006 | −0.0052 | −440,596,653 | 123,128,793,187 |
| 2007 | −0.0139 | −1,177,748,745 | 121,951,044,442 |
| 2008 | −0.0226 | −1,914,900,837 | 120,036,143,605 |
| 2009 | −0.0423 | −3,584,084,310 | 116,452,059,295 |
| 2010 | −0.0556 | −4,710,994,979 | 111,741,064,316 |
| 2011 | −0.031 | −2,626,633,891 | 109,114,430,424 |
| 2012 | 0.0389 | 3,296,001,883 | 112,410,432,307 |
| 2013 | 0.1095 | 9,277,948,745 | 121,688,381,053 |
| 2014 | 0.1171 | 9,921,897,699 | 131,610,278,752 |
| 2015 | 0.0646 | 5,473,566,109 | 137,083,844,861 |
| 2016 | 0.0312 | 2,643,579,916 | 139,727,424,778 |
| 2017 | 0.0569 | 4,821,144,143 | 144,548,568,920 |
| 2018 | 0.0774 | 6,558,111,716 | 151,106,680,636 |
| 2019 | 0.0374 | 3,168,906,695 | 154,275,587,331 |
| 2020 | −0.0031 | −262,663,389 | 154,012,923,942 |

Negative values indicate a year-to-year decline.
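A short sketch shows how the columns of Table 3 chain together. The normalisation scale is not stated in the text and is inferred here from the 2003 row (numerically it appears to equal the magnitude of the largest, 2002, increment); all names are illustrative.

```python
import numpy as np

# A sketch of how Table 3 chains into present values: each recovered sample
# is rescaled and added to the previous year's total.
recovered = np.array([-0.015, 0.0176, 0.0134])     # first rows of Table 3
scale = 84_730_125_528                             # inferred: |2002 increment|
renormalised = recovered * scale
value = 122_213_707_831 + np.cumsum(renormalised)  # chain from the 2002 total
print(value.astype(np.int64))  # 2003-2005 column of Table 3, within rounding
```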
Conclusion

As pointed out in Noam (2002), there are several possible explanations for the oscillatory nature of the telecom industry, but the Austrian viewpoint, apparently applicable to this industry, explains it as follows: the last few years have witnessed accelerated growth in capacity deployment, partially originated by new entrants competing for market share and over-optimistically projecting their required capacities, which, in turn, left large network portions “dark” and/or underutilised as a consequence. The recent down cycle (already showing signs of recovery) will probably not be completely reversed until a larger portion of the installed capacity is used, which will very likely happen when there is significant growth on the demand side (for example, with the massive offering of wideband and multimedia services using existing networks).
This DFT-based method does not pretend to predict industry behaviour exactly for the next few years. It does, however, allow the construction of scenarios which could eventually be used to design and implement actions that allow a faster recovery from down-periods and extend the duration of growth periods. It must be stressed that these results do not attempt to “predict” the future, and they should merely be used to imagine “macro” trend-based possibilities for the immediate future. The objective of these results is to reach a better understanding of the industry, as well as to find possible origins of certain phenomena, so that actions and policies can be designed to reduce the effects and the durations of possible contraction periods. As stated in Noam (2002), it is impossible to predict, prevent or encourage recurrence if the causes of certain phenomena are unknown or not clearly understood.
References

Cabello Murguia R (1997) Future of Telephony in Mexico. BS thesis, Engineering School, National Autonomous University of Mexico (UNAM)
ITU World Telecommunication Indicators (2003) Telecommunication Data and Statistics Unit, Telecommunication Development Bureau. International Telecommunication Union, Switzerland
Noam EM (2002) How telecom is becoming a cyclical industry and what to do about it. Columbia University
Oppenheim AV, Schafer RW, Buck JR (1999) Discrete-Time Signal Processing. Prentice Hall, New Jersey
The CAPEX-to-SALES TRAP

Matthias Pohler¹, Jens Grübling
Dresden University of Technology, Germany
Abstract

The traditional advantages of the former European (telecommunications) incumbents included negligible risks, low volatility and a steady growth path with reasonable profitability. Through the deregulation and liberalisation of the telecommunications market, the parameters influencing risk and volatility have changed fundamentally. This paper analyses the changes in the market after the period of major governmental regulation, from January 1999 to June 2003, and addresses the significant investment shift of major European integrated and cellular operators which results in the CAPEX-to-SALES TRAP – or, in other words, encumbrance and profitability crises. Using market-based performance figures as well as volatility and risk measures, we show the success (or failure) of the investment strategies of the European telecommunications industry.
Introduction

The telecommunications service market is one of the fastest growing markets in Europe.² The total turnover of the sector increased from 174 Bill US$ in 1997 to 242 Bill US$ in 2001, which is equivalent to a compound average growth rate of 8.6%. Deregulation and liberalisation are characteristic of the above-mentioned timeframe. This raises the question of how existing operators (incumbents) and new entrants (alternative carriers) operate in those growth markets which are influenced by several drastic exogenous and endogenous shocks. The change from the monopolistic market structure towards competition opened novel business opportunities throughout Europe and beyond. For example, within Europe, the number of wireless customers increased from 44 Mill in 1997 to 265 Mill in 2002. However, as those positive market dynamics are basically open to
¹ E-mail: [email protected]
² The figures are related to the European “core markets”: France, Germany, Austria, Italy, Netherlands, Spain, Switzerland and United Kingdom. Source: International Telecommunications Union (ITU), 2003.
everybody, risk and volatility are likely to evolve. Risk is driven, for example, by increased cost-intensive investments in “centralised” next generation wireless infrastructures (e.g. UMTS) or spectrum acquisitions with unpredictable time to (mass) market. Volatility is driven, for example, by the aggregated oversupply resulting from single firms’ investment strategies. Telecommunications is a cyclical industry. The recent upturn of the market resulted in a downturn which led to consolidation and cooperation which, in turn, may be followed by an oligopolistic market structure (Noam 2002). The following analysis attempts to incorporate this cyclicality so as to focus on a fundamental (overriding) strategy of an operator – the CAPEX-to-SALES ratio. This is how much a firm plans to invest from each Euro of sales – or how many Euros of sales a firm needs for one Euro of investment. As the “production model” within telecommunications is based on a longer time frame – from CAPEX via OPEX to Sales – it cannot be expected that CAPEX and Sales move synchronously. For example, network rollout and customer acquisition are time-consuming. But in a medium to long-term perspective, growth rates of CAPEX and Sales should, first, be similar and, second, be stable in the long run (between a healthy 10 and 15%). This paper is structured as follows: In chapter 2 we discuss the main exogenous and endogenous industrial drivers (or shocks) which changed the structure of the telecommunications market. We apply Porter’s five forces in order to show the competitive environment within this former monopolistic market. Chapter 3 focuses on the cyclicality of the industry. We use the CAPEX-to-Sales ratio as well as debt and EBITDA figures in order to understand the reasons for and effects of cyclicality. In chapter 4 we use market performance figures in order to assess the impact of the investment strategies of the analysed European operators. We conclude with a short summary. We restrict our analysis to 11 major European operators. We use their main source of revenue as the determinant for the differentiation into two categories: integrated operators and cellular operators. Later on we divide the integrated operators into high debt operators and low debt operators by using a long-term average Net Debt-to-EBITDA ratio in order to understand the risk assessments and evaluation of the capital market. As performance indicators we use the share prices and dividends of the operators. As benchmarks we use the Morgan Stanley European Telecommunication Service Index as well as the Morgan Stanley Europe Index.
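As a minimal illustration of the two ratios that organise the analysis, the following Python sketch uses invented operator figures, not data from the paper’s sample; the function names are ours.

```python
# A sketch of the two ratios used below; the figures are invented.
def capex_to_sales(capex: float, sales: float) -> float:
    """Fraction of each Euro of sales that is reinvested as CAPEX."""
    return capex / sales

def net_debt_to_ebitda(net_debt: float, ebitda: float) -> float:
    """Leverage measure; the paper splits integrated operators into
    high-debt and low-debt groups by its long-term average."""
    return net_debt / ebitda

print(capex_to_sales(3.0e9, 20.0e9))       # 0.15: 15 cents per Euro of sales
print(net_debt_to_ebitda(35.0e9, 8.0e9))   # 4.375: a heavily indebted operator
```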
Change and structure analysis of the telecommunications market

Particularly political and legal factors have resulted in a radical change in numerous countries, essentially affecting two areas of the telecommunications industry (Dengler 2000). On the one hand, entrepreneurial activity was scaled up in private companies through the formal privatisation of former state-owned operators and through new market entrants. On the other hand, the market was deregulated by the reduction
of state specifications and new laws and ordinances.³ The change of the telecommunications industry within the last 15 years can be summarised by three waves. Fig. 1 describes the essential reasons and factors which triggered the market change.⁴

Fig. 1. Phases of major changes in Telecommunications. Wave 1 (start/mid 90s): deregulation; privatization/new entrants; opening of markets. Wave 2 (late 90s): technology change; licensing; liquidity of capital markets; investments; growth; internationalization. Wave 3 (2001–04): increase of debts; profitability crisis; consolidation of markets; restructuring of companies.
At the beginning of the nineties the first wave was caused by exogenous changes to the framework conditions in the respective countries. These exogenous shocks appeared at different times and with different intensities. A two- or multi-stage process of deregulation was carried out in most countries. In this way the incumbents could prepare for the radical market changes.⁵ The more diverse second wave was mainly caused by exogenous changes and peaked at the end of the nineties. Key factors like technology development, the beginning of an active licensing policy as well as the progress of liberalisation had a substantial influence on telcos’ strategic decisions regarding investments, innovations and expansions. The capital market and its protagonists represent a further essential source of strategic decisions, technology developments, expansions and innovations (Fransman 2001). Especially technology-intensive industries like telecom require a huge amount of capital for their growth (Picot 2003). Until the downturn near the end of 2000 the stock market rewarded all growth initiatives intensively and very quickly. Instead of profitability, the focus was put on fibre miles and cell-sites.

³ Dengler 2000 does not describe these political and legal changes as “Deregulation”. This process can rather be described as “new regulation” of telecommunications.
⁴ Gerpott 1998 identifies four fundamental changes in the telecommunications market: 1) Developments in network technologies, 2) Growing customer demand, 3) Deregulation of telecommunication markets and 4) Privatisation process of PTOs; Gerpott 1998: p. 17 et sqq.
⁵ However, over time the combination of deregulation, technology changes and liquidity of capital markets resulted in a loss of the incumbents’ conventional advantages.
Correspondingly, many telcos neglected their incentives to increase profitability. Finally, this resulted in oversupply and high risks for all market participants. New developments regarding infrastructure or handsets have a sustained influence on telecommunications. Such technological innovations often cause a high level of uncertainty and ambiguity (Dengler 2000). Emerging new technologies had two substantial effects on the market. On the one hand, a substitution effect between the technologies occurred. For example, the presence of wireless technologies for the local loop or other disruptive technologies (WiFi, UWB, WiMax) led to fixed-to-mobile substitution. The second effect concerns basic productivity increases. In the fixed-line business, emerging technologies like optical networks led to an oversupply of network capacity (voice as well as data) and thus to lower prices for services. Parallel to this development, incumbents in particular tried to cover all possible value-chain segments. Instead of successful expansion, those strategies induced massive price pressure and lower revenues and led to financial distress. The described radical changes led to the opening of markets and affected the telecommunications carriers through the following main business drivers in the middle of the last decade (Dengler 2000):

• Worldwide privatisation and the availability of capital increased the incentive for incumbents to acquire equity and invest in new geographical markets.
• The deregulation process made it possible to build up own resources in target markets.
• Small operators also gained the possibility to acquire new network resources and enlarge their service and product range.
• Equipment companies and resellers gained the chance to expand their activities forward and backward, respectively, through acquisitions and alliances.

The third wave started at the beginning of 2001 and was mainly triggered by endogenous changes. The exact beginning of this period is controversial. On the one hand, it could be seen in the downward trend of the capital market and would therefore already have started in the first quarter of 2000. Another clue to the launch of this wave can be seen in the return of 3G licences at the end of 2002, for example in the case of “Quam” in Germany. The change in investment behaviour may be a further indication of the beginning of the third wave. The downturn of the capital market had a negative impact on telecommunications capital spending. In 2002 and 2003 high capital expenditure announcements led to a more negative reaction of the capital market. Thus, carriers cut their capital budgets dramatically. However, during the downturn of the capital market the financial needs of telecoms were almost unchanged due to the acquisitions of expensive 3G licences and network rollout (Hoffmann 2003). Thus, the lack of equity capital led to a heavy debt load, particularly for incumbents like Deutsche Telekom, KPN or France Telecom. In the following periods the business risks of incumbents increased and made it more difficult to accumulate capital (Plaut 2003). The three-phase model explained at the beginning of this section has far-reaching consequences for the application of Porter’s five forces model in telecommunications. The factors within the different phases are subject to a
complexity and dynamics, which lead to a cumulative impact and result in a high level of rivalry and competition (Noam 2002). Fig. 2 shows the different factors from the technological, political, legal, socio-cultural as well as economic environment which influenced the level of competition and the relations within the telecommunications industry.
If Porter's concept is applied to the German market, the far-reaching strategic implications for the incumbent Deutsche Telekom become evident. Such changes were mainly caused by the amendment of the telecommunications law in July 1996 (Gerpott 1998). Lower market entry barriers led to an increase in the number of telecom service providers, which ultimately reduced market transparency and calculability. Furthermore, alternative suppliers like energy supply companies had established their market positions even before the change in the legal situation. New competitors like Viag Interkom or Mannesmann had extensive resources in the form of networks or licences. For instance, Mannesmann could use the existing telecommunication infrastructure of the German railroad carrier Deutsche Bahn (Dengler 2000).

[Figure: Porter's five forces in telecommunications, embedded in the political/legal, technological, economic and socio-cultural environment. Barriers to entry: sunk costs, high capital requirements and fixed costs, economies of scale, regulation, product diversification, first mover advantages, vertical integration, switching costs. New entrants: energy service companies, foreign telecoms, industrial companies, city carriers. Bargaining power of suppliers (switching systems, transmission systems, handset manufacturers, software producers), its intensity driven by high switching costs, domination of a few large suppliers, high order volumes and no risk of forward integration. Intensity of rivalry: market growth, licence policy, standardisation, concentration ratio, exit barriers, intensity of innovations. Threat of substitutes (television/publishing, internet, mobile telecommunication), with intensity of substitution driven by the relative prices and quality of substitutes, switching costs and the substitution behaviour of customers. Bargaining power of customers (private customers, business services, carrier's carrier), its intensity driven by a high number of customers facing a small number of operators, large customer volumes, high price sensitivity and low switching costs.]
Source: According to Stritzl (2002), p. 86. Fig. 2. Porter's five forces in the telecommunications industry
Apart from the entrance of new suppliers, the change in customer demand behaviour represents a further significant aspect. At the beginning of the first wave, increasing growth rates in the fixed as well as the mobile business segments could be observed, however with different intensities and saturation levels of consumer demand. Such consumer behaviour led to a higher level of uncertainty regarding the substitution and integration of fixed and mobile services.
Rapid technological innovations in related industries like hardware, software or semiconductors had a major impact on change and industry structure. As a result, customers benefited from lower prices and the presence of substitution products. On the other hand, the number of equipment suppliers increased due to new start-ups. This point is critical for the investment behaviour of telecommunications carriers and might lead to a “CapEx Trap”. First, technological innovations induce shorter product life cycles. Consequently, the investment risk increases, because the time available for amortisation decreases. On the other hand, all market participants who invested in a network externality environment have a strong incentive to invest in emerging telecom infrastructure early enough to achieve a substantial user base (Chacko and Mitchell 1998). Hence, the rather static five forces concept must be seen from a dynamic perspective in order to derive consequences for telecoms’ investment behaviour.
In sum, the telecommunications industry structure has changed considerably and is undergoing a radical transformation. Managers make strategic decisions in an increasingly turbulent market environment. First, the number of variables (e.g. technology, competition, capital market, network effects) included in the decision process is rising. Secondly, the interdependencies between and the cumulativeness of these variables increase, which leads to a more complex telecoms environment. Finally, the extent and the frequency of the changes of the market factors increase the dynamics of the telecommunications industry (Burmann 2001). This trend is strengthened by the deconstruction of established telecommunications value chains. The result is a complex network of companies from different industries like telecommunications, media, and entertainment, each of which strives to gain advantages in an emerging industry (Li and Whalley 2002). All in all, the telecommunications sector, which is influenced by a large number of dynamic factors, has become more cyclical than linear (Noam 2002). Given these premises, it is a challenge to keep the balance between (investment) growth and profitability. Under these conditions, telecommunications carriers face the challenge of avoiding the “CAPEX-to-Sales Trap”. In the next sections, the effects of this transformation process on investment behaviour, profitability, financial distress and firm risk will be outlined.
Investment strategies and their results for European telecommunications carriers

The following section primarily aims at describing the investment behaviour of European telecommunications carriers. Subsequently, balance sheet data will be presented so as to discuss the relationship between capital expenditures and sales and its consequences for telecoms’ profitability.
In a capital-intensive business like telecommunications, carriers must permanently invest significant amounts of money to maintain their growth and meet customer needs. Investments in infrastructure or intangible assets (e.g. licences) are the main sources of future growth. However, for telecoms it is becoming more difficult to build growth opportunities in balance with profitability. The factors which determine the investment decision process are often of a dynamic rather than static nature. Furthermore, these factors (e.g. capital market, technology) develop with different temporal stability. Geographical factors like population density or firm-specific resources (e.g. networks, know-how) can be seen as factors with long-term stability. In contrast, a firm-specific culture regarding growth and new technologies is an aspect with medium-term stability. The volatility of capital markets or regulation policies can be seen as short-term factors (Büllingen and Stamm 2001). In sum, these instabilities increase the investment risk of telecommunication carriers.
Fig. 3 shows capital intensity ratios – measured by capital investments in % of sales – for European integrated carriers and for carriers focused on mobile telecommunications6. Integrated as well as specialised carriers show a cyclical investment behaviour (Noam 2002). In 1998 and 1999, telecoms routinely spent between 23 and 27 percent of their sales on replacing and upgrading their tangible and intangible assets. Liberalisation, in combination with the emergence of new telecommunications technologies (e.g. UMTS), resulted in high expectations regarding future revenues and earnings. These dramatic changes in market conditions triggered an enormous increase of capital spending in 2000. Mobile operators like Vodafone, Orange or Telefonica Moviles in particular invested more than 120 percent of their revenues in new network equipment and in the purchase of pan-European licences for 3G services. During the restructuring period between 2001 and 2002, the carriers cut their spending to a healthy level.
Such cyclicality can be explained by the factors price of services, demand and network capacity (Büllingen and Stamm 2001). Combinations of exogenous and endogenous factors like emerging technologies, strong competition, a rising capital market or scarce resources may lead to an investment hype. First, rising demand for new data services triggers a shortage of capacity. This situation induces infrastructure competition, because
6 The group of integrated carriers includes Deutsche Telekom, France Telecom, KPN, Telecom Italia, Telefonica, British Telecom and Telekom Austria. The group of mobile carriers includes Vodafone, Orange, Telecom Italia Mobile and Telefonica Moviles. Year-end accounts of Vodafone and British Telecom close at the end of the first quarter, but are attributed entirely to the previous year.
incumbents and new competitors have a strong incentive to invest in new technologies. Such an effect is accelerated by the presence of network effects and first mover advantages in some telecommunication services. In consequence, the prices for telecommunications services decrease (Büllingen and Stamm 2001). In contrast to this investment trend, many telecoms faced the problem that capital expenditures grew faster than revenues (see appendix).
[Figure: two bar charts of CapEx as % of revenue, 1998-2002. “Capital expenditure / sales – integrated carriers”: 23,4 % (1998), 27,4 % (1999), 58,0 % (2000), 22,6 % (2001), 14,4 % (2002). “Capital expenditures / sales – mobile telecommunications”: 23,3 %, 26,1 %, 121,1 %, 26,7 %, 16,9 %.]
Fig. 3. Capital spending in relation to sales 1998-2002
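The ratios behind Fig. 3 follow directly from the appendix data. The following minimal sketch, which is our own illustration rather than the authors' calculation and which uses the rounded appendix figures and our own variable names, recomputes the integrated carriers' capital intensity as the unweighted average of the carrier-level CapEx-to-sales ratios:

```python
# Sketch: capital intensity (CapEx-to-sales) as plotted in Fig. 3.
# Inputs: CapEx including 3G licences and revenues of the seven integrated
# carriers (DTAG, BT, FT, KPN, TIT, TEF, TA), 1998-2002, in Bill. EUR.

years = [1998, 1999, 2000, 2001, 2002]
capex = [  # one row per carrier, one column per year
    [4.8, 6.0, 23.5, 10.9, 7.6], [7.5, 15.4, 33.6, 5.6, 3.5],
    [4.7, 5.0, 14.3, 9.0, 7.6],  [1.9, 2.5, 11.3, 3.2, 1.1],
    [6.2, 5.5, 16.5, 8.2, 5.2],  [4.5, 7.5, 17.5, 8.3, 4.3],
    [0.9, 1.0, 0.9, 0.8, 0.7],
]
sales = [
    [35.1, 35.5, 40.9, 48.3, 53.7], [25.9, 31.1, 42.1, 35.0, 28.7],
    [24.6, 27.2, 33.7, 43.0, 46.6], [7.9, 9.1, 13.5, 12.9, 12.8],
    [25.1, 27.1, 28.9, 30.8, 30.4], [17.5, 23.0, 28.5, 31.1, 28.4],
    [3.4, 3.7, 3.8, 3.9, 3.9],
]

for t, year in enumerate(years):
    ratios = [c[t] / s[t] for c, s in zip(capex, sales)]
    # The chapter reports the unweighted average of the carrier ratios
    # (23.4 % in 1998, 58.0 % in 2000); the rounded inputs used here
    # reproduce those values approximately.
    print(f"{year}: average CapEx/Sales = {sum(ratios) / len(ratios):.1%}")
```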
The high expectations for revenue growth from new data services (e.g. broadband, 3G) were not realised. Mergers and acquisitions as part of excessive growth strategies, together with investments in additional capacity caused by infrastructure competition, led to a dramatic drop in prices for telecommunications services, particularly in the voice traffic segment. Such capacity oversupply cannot be reduced in the short term (Noam 2002). Firstly, the majority of investments are “sunk investments” with a high degree of irreversibility. Secondly, telecommunications services are not storable; thus, the market participants have to invest steadily to meet customers’ expectations if demand increases. Furthermore, many carriers have built up bandwidth reserves for the growing demand for future data services like video-on-demand (Büllingen and Stamm 2001).
However, following the development described above, the high level of capital expenditure in 2000 led to many business failures. In reaction to a weaker capital market, incumbents like Deutsche Telekom, KPN or France Telecom in particular were not able to raise fresh equity capital. At the peak of the boom in 2000, companies which acquired 3G licences funded their excessive growth strategies more and more with loans which were easily obtained from banks. Fig. 4 shows the dramatic increase of leverage in the European telecommunications sector.

[Figure: two bar charts of net debt (1998-2002), in Bill. €. Integrated carriers: 87,4 (1998), 102,8 (1999), 228,1 (2000), 216,4 (2001), 200,2 (2002). Mobile telecommunications: 7,4, 20,3, 19,8, 31,5, 36,0.]
Fig. 4. Rise of debt in the European telecommunications market
Furthermore, the optimistic capital expenditure programmes led to a slowdown of asset productivity. Fig. 5 summarises the aggregate EBITDA minus CapEx for Europe’s leading integrated and mobile operators. In 2000 in particular, the balance between capital expenditure and profitability was lost: integrated carriers as well as mobile operators invested more capital than they generated from operating activities. The imbalance was magnified by the 3G spectrum auctions in Europe and the excessive expansion of capacity induced by technological developments and infrastructure competition. The combination of financial distress and changes in customer demand triggered a restructuring of the European telecom sector. Thus, between 2001 and 2002 the
profitability situation improved (when looking at EBITDA) due to cost savings, the sale of assets and the cutting of investment projects.

[Figure: two bar charts of EBITDA minus CapEx (1998-2002), in Bill. €. Integrated carriers: 28,9 (1998), 16,0 (1999), -59,0 (2000), 22,2 (2001), 40,5 (2002). Mobile telecommunications: 1,4, 0,4, -39,7, 0,5, 15,6.]
Fig. 5. Accounting profitability of European telecom operators
Market-based performance assessment of investment strategies

Overall market developments

Fig. 6 shows the upturn and downturn of the European telecommunications market from the capital market perspective. The timeframe from January 1999 to June 2003 clearly reflects the business expectations which were closely connected with the investments in UMTS.
Until the first auction of UMTS licences in the first quarter of 2000, telecommunications seemed to be an endlessly profitable business model. The whole sector benefited from those tremendous expectations; consequently, the MSCI European Telecommunications Services Index rose by 250 % during the
pre-UMTS auction phase. Within the 1.5-year auction phase, the market value of the European carriers decreased by 70 % and fell below the value it had at the beginning of 1999. The figures appear in an even more unfavourable light given that new disruptive technologies (IP protocol, wireless technologies, etc.) together with value added services are supposed to increase value within companies and across the value steps, and thus to drive revenues and profitability. The post-UMTS phase, with a loss of 15 %, can be rated as moderate. As high losses from the current low level are rather unlikely, a new investment cycle with new technologies (UWB, WiMax) can begin.

[Figure: line chart of the MSCI Telco Services Europe and MSCI Europe indexes, Q1 1999 – Q2 2003, with UMTS auction start and end marked and 3G licence proceeds annotated (Germany 50,40 Bill. € / 5 licences; UK 38,23 Bill. € / 5; Italy 11,74 Bill. € / 5; France 9,90 Bill. € / 2). Phases: pre-UMTS +256 %, UMTS auction phase -73,8 %, post-UMTS -15 %.]
Source: Bloomberg, own calculations. Fig. 6. Performance of the European telecommunications market 01/1999 – 06/2003
Assessment of the total volatility and firm-specific variances

In this section we measure the total volatility of telecommunication companies and apply the method of risk decomposition (Dubacher and Zimmermann 1989). We divide the total volatility of each firm into four segments (see Fig. 7). The first three segments measure the dependency of each stock on world influences (World, W), region-specific influences (Europe, EU) and telecommunication-industry-specific influences (Industry, Ind.). As the focus lies on the exposure of the firm-specific risks of an investment, we subtract the coefficient of determination (R2) of all three variables from one. R2 shows the explanatory power of the variables used in the regression.7 A firm-specific component value of 20 % would mean that 20 % of the volatility of an investment (its risk) is determined by internal
7 For a further explanation of the coefficient of determination see Gujarati 2003, p. 81.
(firm-specific) factors. Investors will penalise operators which – in comparison to their competitors – apply unconvincing strategies with an extraordinary risk premium (unsystematic risk). From a strategic management point of view, those companies are of major interest which offer large-scale, firm-inherent drivers of their share price development. The overall aim would be the identification of the most influential drivers of the company’s value.
The total return variance σ2 is decomposed as follows:

  World Component (MSCI World):                   R2(W)
  Country Component (MSCI EU):                    R2(W;EU) – R2(W)
  Industry Component (MSCI Telecom Services EU):  R2(W;EU;Ind) – R2(W;EU)
  Firm Specific Component:                        1 – R2(W;EU;Ind)

Source: According to Dubacher/Zimmermann 1989, p. 77. Fig. 7. Method of risk decomposition of European operators
The calculation of the four risk components is conducted by the following regressions:

  Ri = αi + βi Rm(W) + ε(m(W))i                                  (1)
  Ri = αi + βi Rm(W) + γi Rm(R) + ε(m(W;R))i                     (2)
  Ri = αi + βi Rm(W) + γi Rm(R) + δi Rm(Ind) + ε(m(W;R;Ind))i    (3)

with
  Rm(W):           return of the world index (MSCI World)
  Rm(R):           return of the region index (MSCI Europe)
  Rm(Ind):         return of the industry index (MSCI Telecommunications Europe)
  ε(m(W))i:        return of equity (share i) error term, independent of world market influences
  ε(m(W;R))i:      return of equity (share i) error term, independent of world market and region market influences
  ε(m(W;R;Ind))i:  return of equity (share i) error term, independent of world market, region market and industry-specific influences
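A small numerical sketch of this decomposition is given below. It is our own illustration, not the authors' code, and it uses synthetic weekly return series in place of the MSCI indexes; the helper r_squared and all series names are assumptions:

```python
import numpy as np

def r_squared(y, X):
    """R^2 of an OLS regression of y on the columns of X (with intercept)."""
    X = np.column_stack([np.ones(len(y))] + list(X))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(0)
n = 104                                    # at least 104 weekly observations
r_world = rng.normal(0.0, 0.02, n)                    # stand-in for MSCI World
r_eu    = 0.8 * r_world + rng.normal(0.0, 0.01, n)    # stand-in for MSCI Europe
r_ind   = 0.6 * r_eu + rng.normal(0.0, 0.02, n)       # stand-in for MSCI Telecom EU
r_firm  = 0.5 * r_world + 0.3 * r_eu + 0.7 * r_ind + rng.normal(0.0, 0.03, n)

r2_w   = r_squared(r_firm, [r_world])                 # equation (1)
r2_wr  = r_squared(r_firm, [r_world, r_eu])           # equation (2)
r2_wri = r_squared(r_firm, [r_world, r_eu, r_ind])    # equation (3)

print("world component:        ", round(r2_w, 3))
print("country component:      ", round(r2_wr - r2_w, 3))
print("industry component:     ", round(r2_wri - r2_wr, 3))
print("firm specific component:", round(1 - r2_wri, 3))
```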
The analysis is based on 13 operators which are grouped according to their main revenue source into integrated and cellular operators. To apply the model of risk decomposition, we used weekly returns over periods of 2, 2.5 and 4.5 years. The time frames are selected so as to measure different values resulting from possible strategic changes over time. However, each time frame is designed in such a way that each sample encompasses at least 104 observations (see Fig. 8). We controlled for normal distribution and significance.

European Integrated Operators   Code      Market Cap*
Deutsche Telekom AG             DTE GR     55,8
Telefonica S.A.                 TEF SM     49,7
France Telecom SA               FTE GR     47,5
Telecom Italia SPA              TIT IM     36,8
BT Group PLC                    BT/A LN    17,7
Swisscom AG                     SCMN VX    16,4
Koninklijke KPN NV              KPN NA     15,4
Telekom Austria AG              TKA AV      4,9

European Wireless Operators     Code      Market Cap*
Vodafone Group PLC              VOD LN    115,3
Orange SA                       OGE FP     37,2
Telefonica Moviles S.A.         TEM SM     30,3
TIM Spa                         TIM IM     17,9
MMO2 PLC                        OOM LN      4,9

* 30/06/03 in Bill. Euro

[Figure: timeline showing, for each firm, rolling estimation windows of at least 104 weeks of data points between Q1 1999 and Q2 2003.]
Fig. 8. Sample and timeframe of analysed operators
The coefficients of determination and volatilities in Fig. 9 show that the risk structure in telecommunications is mainly determined by firm-inherent strategies. This applies especially to integrated operators, which surpass the average company-specific volatilities of cellular operators. The focus on core activities seems to reduce risk – even in turbulent times, as the values of Vodafone and TIM SPA reveal. Large integrated companies such as Deutsche Telekom, which did not separate their business units in the past, display extremely high volatilities which can be ascribed to company-internal reasons. The rise of the coefficient of determination (R2) attests to the critical attitude of the capital market in times of general downturn (cf. Deutsche Telekom AG as well as KPN NV).
[Table: total and firm-specific volatilities and corrected R2 values (in percent) of the integrated operators (Deutsche Telekom, Telefonica, France Telecom, Telecom Italia, BT Group, Swisscom, Koninklijke KPN, Telekom Austria) and of the wireless operators (Vodafone, Orange, Telefonica Moviles, TIM, MMO2), estimated for 1999-06/2003 and for the sub-periods 1999-2000 and 2001-06/2003 (01/2001-06/2003 for firms listed later).]
Fig. 9. Firm-specific volatility in telecommunications
Market performance and risk

In the following section we return to the cyclicality of the CAPEX-to-Sales ratio, i.e. the “CAPEX-to-Sales Trap”. We combine the investment strategies of both types of operators (integrated and cellular) with performance measures. We apply beta estimations using the Morgan Stanley European Index (MXEU) as well as market returns. Market returns are calculated by building a market-value weighted index from each firm’s share price development as well as dividends and other benefits. Dividends and other benefits are reinvested at a risk-free interest rate in order to be comparable with the Morgan Stanley index. We calculated three indexes of our own:
• High debt integrated: Deutsche Telekom, France Telecom, KPN
• Low debt integrated: Telecom Italia, Telefonica, British Telecom, Telekom Austria
• Cellular: Vodafone, Orange, Telecom Italia Mobile, Telefonica Moviles
The upper part of Fig. 10 and Fig. 11 illustrates the facts developed so far. Ratios between 20 % and 30 % could be justified by investments towards the transition to broadband (fixed-line business) (Telenor 2003). Higher ratios are penalised by the financial markets, as beta and market return indicate. Leverage plays an
important role when risk is assessed by the market, indicating that the high debt strategies resulted from realised growth in fibre miles and cell sites as well as from internationalisation. However, revenue growth materialised, but not at the speed of investment growth (see appendix).
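The following sketch illustrates, under our own simplifying assumptions (synthetic weekly data, illustrative betas and market capitalisations), how such a market-value weighted group index and its beta against a benchmark like the MXEU can be computed; the function names are ours, not the authors':

```python
import numpy as np

def weighted_group_return(returns, market_caps):
    """Market-value weighted return of a group of operators per period.

    returns:     array (periods, firms), total returns incl. dividends
    market_caps: array (periods, firms), beginning-of-period capitalisations
    """
    weights = market_caps / market_caps.sum(axis=1, keepdims=True)
    return (weights * returns).sum(axis=1)

def beta(group_returns, benchmark_returns):
    """CAPM beta: covariance with the benchmark over benchmark variance."""
    cov = np.cov(group_returns, benchmark_returns)
    return cov[0, 1] / cov[1, 1]

# Illustrative data only, not the chapter's series.
rng = np.random.default_rng(1)
bench = rng.normal(0.0, 0.03, 260)                         # weekly benchmark returns
firm_r = bench[:, None] * np.array([1.6, 1.4, 1.2]) + rng.normal(0, 0.02, (260, 3))
caps = np.abs(rng.normal(50.0, 10.0, (260, 3)))            # in Bill. EUR

group = weighted_group_return(firm_r, caps)
print("estimated beta:", round(beta(group, bench), 2))     # near the cap-weighted average
```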
[Figure: three panels for integrated operators, 1998 – 06/2003, comparing the high debt and low debt groups: CapEx as % of sales (2000 peaks of 66,4 % and 55,7 %), beta (rising to about 2,2), and yearly market return (high debt carriers: -66,4 % in 2000, -53,7 % in 2001, -49,8 % in 2002).]
Fig. 10. Sector performance of integrated operators according to their investment strategies
If we accept efficient capital market theory, we must conclude that after the aggregated investment “bubble” the expectations about each firm’s sum of potential future discounted cash flows declined. This is true for the integrated as well as the cellular operators between 2000 and 2002. For example, the market return of the high debt integrated operators decreased in 2000 by 66,4 % (2001: by 53,7 %; 2002: by 49,8 %). The capital market penalised the firms (or their shareholders) at the moment the CAPEX-to-SALES Trap occurred.
For the cellular operators, the CAPEX-to-SALES ratio of 120 % in 2000 is far beyond any reasonable measure – even in view of sharply increasing revenues. Admittedly, market entry and the establishment of networks entail investments
prior to revenues. Nevertheless, the whole mobile industry, including existing players, fell into this trap.
[Figure: three panels for cellular operators, 1998 – 06/2003: CapEx as % of sales (23,3 %, 26,1 %, 121,1 %, 26,7 %, 16,9 % for 1998-2002), beta (between about 1,0 and 1,85), and yearly market return (between +45,8 % and -37,9 %).]
Fig. 11. Sector performance of cellular operators according to their investment strategies
To reduce investments, there are also well-known business opportunities in the “virtual operator space” (Pohler and Recker 2003). Furthermore, cooperation may successfully reduce the amount of investment and risk. Examples can be seen in the cooperation of Vodafone and Swisscom in Switzerland as well as of Microsoft Networks and Telefonica in the German market.
When using the Morgan Stanley European Telecommunication Service Index to show the risk structures of the three groups of operators, we obtain the following picture (see Fig. 12). Low debt operators are able to reduce risk even in turbulent times (the year 2000). Cellular operators are assessed within the telecommunications market as risk-neutral. High debt operators, however, face high risk, which makes them inflexible with respect to future business opportunities.
[Figure: beta of the three operator groups against the sector index, 1999 – 06/2003: high debt integrated operators mostly above 1 (up to 1,26), low debt integrated operators mostly below 1 (down to 0,79), cellular operators close to 1.]
Fig. 12. Risk development within telecommunications
In the next section we adopt the view of an investor intending to invest in telecommunication companies. We use an excess-return to excess-volatility matrix in order to show whether one of the three types of operators can outperform the market. As a proxy for the market development we use the Morgan Stanley European Telecommunication Service Index. The analysis reveals that the low debt integrated carriers as well as the cellular carriers performed better than the market (except for 1999). Cellular operators paid for the excess return with a slightly higher risk (σ), whereas the low debt operators are rather close to the most favoured upper left corner – which means higher returns with lower risk than the market. High debt integrated carriers “bought” their excess return in 1999 and in the first half of 2003 at the expense of risk.
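A minimal sketch of the matrix coordinates, assuming synthetic weekly returns and our own function name, looks as follows:

```python
import numpy as np

def excess_coordinates(group_returns, market_returns):
    """Return (excess volatility, excess return) for one year of weekly data."""
    r_i = np.prod(1 + group_returns) - 1          # cumulative group return
    r_m = np.prod(1 + market_returns) - 1         # cumulative market return
    sigma_i = group_returns.std(ddof=1)
    sigma_m = market_returns.std(ddof=1)
    return sigma_i - sigma_m, r_i - r_m

# Illustrative weekly returns for one year (52 observations), not real data.
rng = np.random.default_rng(2)
market = rng.normal(-0.002, 0.04, 52)
low_debt = 0.9 * market + rng.normal(0.001, 0.01, 52)

dx, dy = excess_coordinates(low_debt, market)
# The upper left corner of the matrix means positive excess return at lower risk.
print(f"excess volatility: {dx:+.4f}, excess return: {dy:+.4f}")
```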
[Figure: scatter plots of excess return (Ri – Rm) against excess volatility (σi – σm) relative to the sector index for the cellular, low debt integrated and high debt integrated groups, for 1999, 2000, 2001, 2002 and 02/03; both axes range from -30 % to +30 %.]
Source: Bloomberg, own calculations. Fig. 13. Return-volatility matrix of European telecommunications
Conclusion

At the beginning of this paper (Chapter 2), we discussed in some detail the drivers (e.g. deregulation, technology developments, liquidity of markets, etc.) of the major changes in the telecommunications market. As those drivers – which mostly have been external shocks occurring promptly after one another – have been subject to permanent complexity and dynamics, they had a cumulative impact. The result is a telecommunications market which went through an up- and downturn in a cyclical manner. The present market is characterised by oversupply, a high level of rivalry and competition, and low returns on investment. The outcome is consolidation and a move towards an oligopoly – at least in some segments like next generation wireless telecommunication networks.
One reason for the poor performance of the European operators can be seen in the CAPEX-to-SALES TRAP, which had a negative effect on the profit and loss accounts as well as on the capital market’s assessment of each firm’s ability to generate future (discounted) cash flows. The question arises whether the market failure was largely unpredictable or whether top management should have anticipated the market development more accurately at an earlier stage. It appears that less aggressive investment strategies like those of Swisscom or Telekom Austria provided the stockholders with a more reasonable rate of return.
Given the cyclicality of the telecommunications market, several additional aspects may need to be incorporated so as to absorb the next cycle amplitude. 1) Standardisation is important, but the time to market needed to generate revenues should be reduced. In the case of UMTS this has taken more than 10 years so far. There are basically no assessment practices on how to rate users’ needs over a 10-year timeframe in terms of bandwidth and required services. 2) Investments in decentralised mobile communications technologies such as WiMax and UWB might be cheaper and less complex. Regional carriers could exploit their local market knowledge while using the core network of large carriers. However, hardly any carrier will be able to finance a centralised, full-coverage next generation mobile communication system (4G+). 3) Resource trading concepts, virtual domains and partnerships could reduce investment and risk. Firms should focus on their specific advantages and reduce their activities to a certain step of the value chain. The fragmentation of the value chain (deconstructed scenario), as seen in the computer industry before, might also be a solution.
References

Büllingen F, Stamm P (2001) Entwicklungstrends im Telekommunikationssektor bis 2010. Studie im Auftrag des Bundesministeriums für Wirtschaft und Technologie, Wissenschaftliches Institut für Kommunikationsdienste, Bad Honnef. http://www.bmwi.de/Redaktion/Inhalte/Downloads/br-entwicklungstrends-im-telekommunikationssektor,property=pdf.pdf
Burmann C (2001) Strategische Flexibilität und Strategiewechsel in turbulenten Märkten. Die Betriebswirtschaft 2, 61: 169–188
Chacko M, Mitchell W (1998) Growth Incentives to Invest in a Network Externality Environment. Industrial and Corporate Change 4, 7
Dengler J (2000) Strategie integrierter Telekommunikationsanbieter. 1. Aufl., DUV, Wiesbaden
Dubacher R, Zimmermann H (1989) Finanzmarkt und Portfolio Management. No 1, vol 3, pp 66–85
FitchRatings (2003) European Telecoms Sector Report. http://www.fitchratings.es/Informes/European%20Telecoms%20Sector%20Report%20Autumn%202003.pdf
Fransman M (2001) Analysing the evolution of industry: The relevance of the telecommunications industry. Economics of Innovations and New Technology, vol 10, pp 109–140
Gerpott T (1998) Wettbewerbsstrategien im Telekommunikationsmarkt. 3. Aufl., Schäffer-Poeschel, Stuttgart
Gujarati DN (2003) Basic Econometrics. 4th edition, McGraw-Hill, Boston
Hoffmann K (2003) Merger & Akquisition: Ein nachhaltiger Weg für die globale Expansion im Telekommunikationsmarkt? In: Picot A, Doeblin S (eds) Telekommunikation und Kapitalmarkt. 1. Aufl., Gabler, Wiesbaden
Li F, Whalley J (2002) Deconstruction of the telecommunications industry: from value chains to value networks. Telecommunications Policy, vol 26, pp 451–472
Noam EM (2002) How Telecom Is Becoming A Cyclical Industry, And What To Do About It. http://www.citi.columbia.edu/elinoam/articles/cyclicality.htm
Picot A (2003) Telekommunikation und Kapitalmarkt – Eine Einführung. In: Picot A, Doeblin S (eds) Telekommunikation und Kapitalmarkt. 1. Aufl., Gabler, Wiesbaden
Plaut T (2003) Beeinträchtigt der Verschuldungsgrad großer Telekom-Unternehmen die Fähigkeit für Innovationen? In: Picot A, Doeblin S (eds) Telekommunikation und Kapitalmarkt. 1. Aufl., Gabler, Wiesbaden
Pohler M, Recker S (2003) Virtual Domains in Multi-Layered Structures to Foster Competition and Service Diversity. In: Cunningham P, Cunningham M, Fatelnig P (eds) Building the Knowledge Economy. Issues, Applications, Case Studies. Part 2, IOS Press, Oxford
Stritzl P (2002) Der deutsche TV-Kabelmarkt. Spiele ums Netz, Dynamik und Strategien. Schäffer-Poeschel, Stuttgart
Telenor (2003) Fixed Line – Capital Markets Day
Appendix

European telecoms – financial summary 1998-2002

Integrated carriers: DTAG (Deutsche Telekom), BT (British Telecom), FT (France Telecom), KPN, TIT (Telecom Italia), TEF (Telefonica), TA (Telekom Austria). Mobile operators: TIM, TEFM (Telefonica Moviles), OGE (Orange), VOD (Vodafone). The Σ columns give the group sum for absolute figures and the group average for ratios and growth rates.

Revenues (in Bill. €)
      DTAG   BT     FT     KPN    TIT    TEF    TA    Σ Int.   TIM    TEFM   OGE    VOD    Σ Mob.
1998  35,1   25,9   24,6    7,9   25,1   17,5   3,4   139,5     6,1    3,1    4,9    4,8    18,9
1999  35,5   31,1   27,2    9,1   27,1   23,0   3,7   156,7     7,5    5,0    7,6   11,2    31,2
2000  40,9   42,1   33,7   13,5   28,9   28,5   3,8   191,5     9,4    6,4   12,1   21,3    49,2
2001  48,3   35,0   43,0   12,9   30,8   31,1   3,9   204,9    10,3    8,4   15,1   32,4    66,2
2002  53,7   28,7   46,6   12,8   30,4   28,4   3,9   204,5    10,9    9,1   17,1   43,1    80,2

CapEx including 3G licences (in Bill. €)
1998   4,8    7,5    4,7    1,9    6,2    4,5   0,9    30,5     1,1    0,6    1,7    1,1     4,4
1999   6,0   15,4    5,0    2,5    5,5    7,5   1,0    42,9     1,0    1,4    2,5    3,3     8,2
2000  23,5   33,6   14,3   11,3   16,5   17,5   0,9   117,7     6,3   13,6   10,7   24,7    55,2
2001  10,9    5,6    9,0    3,2    8,2    8,3   0,8    45,9     4,1    2,1    3,3    6,3    15,9
2002   7,6    3,5    7,6    1,1    5,2    4,3   0,7    30,0     2,1    1,0    3,3    7,6    14,1

CapEx excluding 3G licences (in Bill. €)
1998   2,9    4,6    4,7    1,9    6,2    4,5   0,9    25,7     1,1    0,6    1,7    1,1     4,4
1999   3,5    5,2    5,0    2,5    4,4    6,3   1,0    27,9     1,0    1,4    2,5    3,0     8,0
2000   7,6    7,1    7,2    3,8   13,0    7,6   0,9    47,3     3,6    1,5    3,3    5,8    14,2
2001   9,5    5,5    8,1    2,9    8,2    6,4   0,8    41,4     3,8    1,7    3,3    5,9    14,7
2002   6,8    3,5    7,4    1,1    2,5    4,3   0,7    26,2     2,1    0,9    3,3    7,5    13,8

EBITDA (in Bill. €)
1998  18,2    7,6    8,4    3,0   11,2    9,3   1,8    59,3     2,1    1,3    0,8    1,6     5,8
1999  16,0    6,7    9,6    3,0   11,2   10,9   1,5    58,9     2,7    1,7    0,9    3,4     8,7
2000  14,2    6,4   10,4    3,4   11,3   11,9   1,1    58,7     4,2    2,3    1,8    7,3    15,5
2001  15,9    9,6   11,9    3,4   12,9   12,8   1,5    68,1     4,6    3,3    3,3    5,1    16,4
2002  16,1    8,9   14,6    4,4   13,3   11,7   1,5    70,4     4,9    3,7    5,1   15,9    29,7

EBITDA minus CapEx (in Bill. €)
1998  13,4    0,1    3,8    1,0    5,0    4,8   0,9    28,9     1,0    0,7   -0,9    0,5     1,4
1999  10,0   -8,6    4,6    0,5    5,7    3,4   0,5    16,0     1,6    0,3   -1,6    0,1     0,4
2000  -9,3  -27,2   -3,9   -7,9   -5,2   -5,6   0,1   -59,0    -2,0  -11,4   -8,9  -17,4   -39,7
2001   5,0    4,1    3,0    0,2    4,8    4,5   0,7    22,2     0,5    1,2    0,0   -1,2     0,5
2002   8,5    5,4    7,0    3,2    8,0    7,4   0,9    40,5     2,8    2,7    1,9    8,3    15,6

Net debt (in Bill. €)
1998  35,4    1,4   13,1    5,6    9,8   18,9   3,3    87,4     0,3    1,7    3,3    2,1     7,4
1999  39,4   12,4   14,6    4,6    8,1   21,1   2,6   102,8     0,7    4,6    5,5    9,4    20,3
2000  56,1   39,7   61,0   21,9   17,0   29,0   3,4   228,1     0,7    4,5    5,0    9,5    19,8
2001  63,5   19,5   63,4   14,9   22,3   29,6   3,3   216,4     1,1    7,2    6,2   17,0    31,5
2002  61,7   13,6   68,0   12,4   17,8   23,5   3,2   200,2     1,8    8,7    5,9   19,7    36,0

Net debt / EBITDA
1998   2,0    0,2    1,6    1,9    0,9    2,0   1,8     1,5     0,1    1,3    4,3    1,3     1,3
1999   2,5    1,8    1,5    1,5    0,7    1,9   1,7     1,7     0,3    2,8    6,0    2,8     2,3
2000   4,0    6,2    5,8    6,4    1,5    2,4   3,2     3,9     0,2    2,0    2,9    1,3     1,3
2001   4,0    2,0    5,3    4,4    1,7    2,3   2,2     3,2     0,2    2,2    1,9    3,3     1,9
2002   3,8    1,5    4,7    2,8    1,3    2,0   2,1     2,8     0,4    2,3    1,1    1,2     1,2

EBITDA margin (in %)
1998  51,6   29,2   34,1   37,6   44,5   53,0  53,1    43,3    34,6   41,4   15,7   34,0    31,4
1999  45,1   21,7   35,2   32,8   41,3   47,4  39,7    37,6    35,9   33,3   12,1   30,6    28,0
2000  34,7   15,2   31,0   25,2   39,1   41,8  27,9    30,7    45,1   35,3   14,6   34,1    32,3
2001  32,9   27,5   27,7   26,3   42,0   41,2  38,2    33,7    44,9   39,6   21,8   15,8    30,5
2002  30,0   31,0   31,3   34,3   43,6   41,3  38,8    35,7    44,8   40,9   30,1   36,9    38,2

CapEx (incl. licences) / sales (in %)
1998  13,6   28,9   18,9   24,5   24,6   25,8  27,1    23,4    17,6   18,0   34,1   23,4    23,3
1999  16,8   49,5   18,4   27,6   20,1   32,8  26,4    27,4    13,8   28,0   33,2   29,4    26,1
2000  57,5   79,9   42,5   83,4   57,0   61,5  24,1    58,0    66,7  213,6   88,3  115,9   121,1
2001  22,5   15,9   20,8   24,6   26,5   26,9  21,0    22,6    40,4   25,2   21,7   19,6    26,7
2002  14,2   12,1   16,2    8,9   17,1   15,2  16,9    14,4    19,4   11,4   19,2   17,7    16,9

CapEx (excl. licences) / sales (in %)
1998   8,1   17,9   18,9   24,5   24,6   25,8  27,1    21,0    17,6   18,0   34,1   22,8    23,1
1999   9,8   16,8   18,4   27,6   16,4   27,3  26,4    20,4    13,8   28,0   33,2   27,0    25,5
2000  18,5   16,8   21,5   28,5   45,1   26,8  24,1    25,9    37,7   22,8   27,6   27,4    28,9
2001  19,6   15,9   18,8   22,9   26,5   20,5  21,0    20,8    37,2   20,1   21,7   18,1    24,3
2002  12,6   12,1   16,0    8,9    8,1   15,2  16,9    12,8    19,4   10,1   19,2   17,4    16,5

Revenue growth (in %)
1999   0,9   20,2   10,5   15,0    8,2   31,4  10,0    12,3    21,3   62,2   55,0  134,3    68,2
2000  15,4   35,4   23,7   48,0    6,7   24,1   2,1    22,2    26,4   27,1   59,0   90,6    50,8
2001  18,0  -16,9   27,8   -4,8    6,6    9,0   1,4     7,0     8,8   31,9   25,1   52,3    29,5
2002  11,1  -18,1    8,4   -0,6   -1,4   -8,5   1,3    -0,2     6,0    8,7   13,2   33,0    15,2

CapEx growth (in %)
1999   21,9   12,6    7,3   29,5  -27,8   39,0    6,9    40,7    -4,8  152,6   51,1  177,6    86,4
2000  116,7   35,4   44,9   52,4  193,4   22,1   -6,7   174,6   245,1    3,8   32,0   92,9   570,7
2001   25,6  -21,6   11,7  -23,3  -37,4  -16,7  -11,5   -61,0     7,4   16,1   -1,5    1,0   -71,2
2002  -28,9  -37,4   -8,0  -61,4  -69,7  -32,3  -18,4   -34,6   -44,7  -45,6    0,1   27,6   -11,3

EBITDA growth (in %)
1999  -11,8  -10,7   13,9    0,1    0,3   17,5  -17,6    -0,8    25,8   30,3   19,7  110,9    49,8
2000  -11,3   -5,1    9,0   13,8    0,9    9,5  -28,4    -0,3    58,8   34,8   92,1  112,4    78,8
2001   12,1   50,3   14,3   -0,6   14,7    7,4   38,9    15,9     8,4   48,1   86,3  -29,5     5,3
2002    1,3   -7,7   22,1   29,5    2,4   -8,4    2,7     3,5     5,8   12,1   56,5  210,5    81,4

Growth 1998-2002 (in %)
Revenue growth 98-02   11,4    5,2   17,6   14,4    5,0   14,0    3,7    10,3    15,6   32,5   38,1   77,5    31,8
EBITDA growth 98-02    -2,4    6,7   14,8   10,7    4,6    6,5   -1,1     4,6    24,7   31,3   63,6  101,1    53,8
CapEx growth 98-02     33,8   -2,7   14,0   -0,7   14,6    3,0   -7,4    29,9    25,1   19,8   27,2   22,6   143,6

Source: Annual reports, FitchRatings (2003)
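The derived rows of the summary follow mechanically from the raw series. A minimal sketch of our own, using Deutsche Telekom's figures from the table above, is:

```python
# Derived appendix ratios for one carrier (DTAG); all values in Bill. EUR.
revenue  = {1998: 35.1, 1999: 35.5, 2000: 40.9, 2001: 48.3, 2002: 53.7}
ebitda   = {1998: 18.2, 1999: 16.0, 2000: 14.2, 2001: 15.9, 2002: 16.1}
capex    = {1998: 4.8, 1999: 6.0, 2000: 23.5, 2001: 10.9, 2002: 7.6}  # incl. licences
net_debt = {1998: 35.4, 1999: 39.4, 2000: 56.1, 2001: 63.5, 2002: 61.7}

for y in revenue:
    print(y,
          f"CapEx/Sales={capex[y] / revenue[y]:.1%}",
          f"EBITDA margin={ebitda[y] / revenue[y]:.1%}",
          f"NetDebt/EBITDA={net_debt[y] / ebitda[y]:.1f}")
# e.g. 2000: CapEx/Sales=57.5%, EBITDA margin=34.7%, NetDebt/EBITDA=4.0
```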
Modelling Regulatory Distortions with Real Options: An Extension1

James Alleman2,*, Paul Rappoport3,**
* University of Colorado, USA
** Temple University, USA

1 This paper expands previous work by the authors – Alleman and Rappoport (2002) – and draws on an earlier result by the authors and Hirofumi Suto (Alleman et al. 2004). The authors would like to thank Larry Darby, Alain deFontenay, Gary Madden, Eli Noam, Michael Noll, Scott Savage, and Chris Schlegel for useful comments and discussion of the ideas developed in this paper. Of course, the usual disclaimer applies.
2 E-mail: [email protected]
3 E-mail: [email protected]

Abstract

The introduction of uncertainty can make a significant difference in the valuation of a project. In a regulatory environment, this manifests itself, inter alia, in situations where regulatory constraints can affect the valuation of a firm's investment, which, in turn, has an adverse impact on consumers' welfare. In particular, the inability of a regulated firm to exercise any or all of the delay, abandon, start/stop, and time-to-build options has an economic and social cost. With this view in mind, we specify a model using real options methodology in which regulatory delay constraints impact the firm's cash flow and its investment valuation. The use of real options analysis can address issues of regulation that have not been adequately quantified. We show that regulatory constraints on cash flow have an impact on investment valuations. Specifically, a model is developed to estimate the cost of regulation by constraining the delay option. We show that the cash flow constraints and the inability to delay have significant costs. Since some costs are not recognised in a static view of the world, this failure of regulators to recognise the operation and implications of non-flexibility (which can be modelled by real options methods) will lead to a reduction in company valuations, which in turn will lead to a reduction in economic welfare.
The impact of regulation changes the magnitude of uncertainty as measured by the variance used in the real options model. For example, price caps, access charges, or other regulatory devices dampen the possibility of a high return, thus reducing the variance of returns. We model the regulatory constraint by constraining
the variance, σ2, in the option model. As intuition would suggest, as the constraint becomes tighter, the probability that the deferred option will pay off is diminished.
Introduction4

The real options pricing approach utilises financial option principles to value real assets under uncertainty. In this paper, we evaluate regulatory actions ex post in order to determine the impact of regulatory constraints on investment decision-making. The genesis of this research began when one of the authors evaluated telecommunications cost models whose foundation was based on the application of traditional discounted cash flow analysis – exactly the method that real options methodology has shown can give terribly wrong results (Alleman 2002a). For example, if regulation does not account for management's flexibility to respond to the constraints that regulation imposes on the firm, we show that the firm may make management and financial decisions which are inefficient from society's perspective.
Regulation may lead to constraints on prices or on profits. Regulation also surfaces in the context of the obligation to serve. Under the current practice in most countries, whenever a customer demands service, the incumbent carriers are obligated to provide the service. It is part of the common carrier obligation. Under the obligation-to-serve requirement, if the customer proves unprofitable, the carrier must nevertheless serve this customer. This is in contrast to discretionary services offered by the telephone companies, such as digital subscriber line (DSL) for providing broadband service. If the telephone company chooses to limit the availability of broadband service, it is utilising its delay option. If the company faces a common carrier obligation to provide broadband services and to maintain payphones, the requirement to provide broadband services eliminates the company's option to delay. Concerns with the "digital divide" have prompted proposed legislation in the United States to mandate the increased deployment of broadband services. If passed, this action would limit the companies' ability to delay and may forbid them to exit.5
In the dynamic world, demand, technology, factor prices and many other parameters of interest to a company are subject to uncertainty. The principal uncertainty is the demand for goods, which, in turn, impacts cash flow, investment valuations, profits, and economic depreciation, among other economic variables.
4 This section is adapted from Alleman and Rappoport (2002).
5 Regulation forecloses other options in the industry. If the telephone company wants to cease providing payphone service, we have an example of the abandonment option. If the company faces an obligation to provide payphone services and to maintain them, then the inability to exit the payphone business eliminates the firm's ability to exercise its abandonment option. Since the rapid increase in mobile telephone usage, payphone service has experienced a serious decline in its revenues. When the phone companies attempt to exit the service, the regulators forbid it. This is true in the United States and Japan. The authors have not investigated it in other countries.
We construct an investment model incorporating the real options method. With this model, which addresses the underlying state of uncertainty, we show parametrically the cost of regulatory constraints. The model demonstrates that regulation has a cost, by showing that regulation can restrict the flexibility of the firm. This reduced flexibility is due to the imposition of price constraints, or to the additional costs associated with interfering with the firm's use of the delay, abandonment, or shutdown/restart options. One of the clearest examples of the telecommunications regulators' failures to apply dynamic analysis is the use of cost models and a type of long-run incremental cost methodology to determine prices and obligation-to-serve subsidies (Alleman 1999; Alleman and Rappoport forthcoming).
Previous research

The literature underpinning this research is divided into three areas: first, regulation's impact on investment – either rate-of-return or incentive regulation – usually in a static context, though occasionally with dynamic models of investment behaviour; second, generic real options analysis; and third, real options applied to telecommunications. The first two areas have been adequately reviewed elsewhere and are briefly discussed below. The application of real options methods to telecommunications is discussed in greater detail.

Regulatory research

For a review of telecommunications regulation prior to the late eighties, see Kahn (1988); a review of the current state of the art in telecommunications is found in Laffont and Tirole (2000). The static and dynamic aspects of investment under various forms of regulation and optimal (Ramsey) pricing may be found in Biglaiser and Riordan (2000). Most of this literature assumes static models, of which Averch-Johnson (1962) is the most well known. These models show that rate-of-return regulation does not provide incentives for the firm to minimise costs or to make capital investments. If the firm's growth is handled at all, it is through exponential models with time as the explanatory variable. Economic depreciation is treated exogenously. The dynamic models are deterministic, complete-information growth models.

Real options research

The literature on real options research from the financial perspective is reviewed and integrated in Trigeorgis (1996), and from the economist's perspective covered extensively in Dixit and Pindyck (1994) or, for a briefer account, in their 1995 article (Dixit and Pindyck 1995). Economists usually only look at the delay option.
The finance literature is fuller in its coverage of the various aspects of all of the options available to a firm; for example, Hull (2003) has an extensive coverage of options, as does Luenberger (1998). For comprehensive guides to real options, see Copeland and Antikarov (2001) or Damodaran (2001).

Real options applied to telecommunications

A limited but growing literature exists on the application of real options to telecommunications. Ergas and Small (2000) have applied the real options methodology to examine the sunk cost of assets and the regulator's impact on the distribution of returns. They attempt to establish linkages between regulation, the value of the delay option and economic depreciation. Small (1998) studied investment under uncertain future demand and costs with the real options method. More recently, d'Halluin et al. (2004a) have applied real options methodology to an ex post analysis of capacity in long distance data service. The same authors also applied the methodology to wireless service issues (2004b). Pak and Keppo (2004) have applied the approach to network optimisation; and Kulatilaka and Lin (2004) apply the methodology to strategic investment in technology standards.
Several papers have addressed access pricing issues. Hausman (1999) has applied the real options methodology to examine the sunk cost of assets and the delay option in the context of unbundled network elements (UNEs). Similarly, Hausman and Meyers (2002) estimate the magnitude of the errors caused by the failure of regulators to account for sunk costs in the railroad industry, which can be applied to telecommunications. Hori and Mizuno (2004) have applied real options to access charges in the telecommunications industry. Lozano and Rodríguez (forthcoming) used a lattice approach (for its intuitive appeal) to show that access pricing should be higher than under the traditional net present value approach. Clark and Easaw (2004) address access pricing in a competitive market. They show, as have others, that when uncertainty is considered, the price should be higher than under certainty: entrants should pay a premium to enter the market in order to reward the incumbent for bearing the risk of uncertain revenues. Pindyck (2004) shows that failure to account for sunk costs leads to distortions in investment incentives and in the pricing of unbundled network elements, a variant of access pricing. Similarly, Pindyck (2005a) shows that sharing of infrastructure at rates determined by regulators subsidises entrants and discourages investment when sunk costs are not properly considered in the determination of the prices; he suggests how these prices can be adjusted to account for sunk costs. In a later presentation, Pindyck (2005b) demonstrates how sunk costs serve as an entry barrier and shows their effect on market structure. Alleman and Rappoport (forthcoming) show how sunk cost should be treated as an opportunity cost which, if unrecognised, can lead to inappropriate regulatory policy. This sunk cost can be viewed as a dynamic case of the efficient component pricing rule.
Real options

What are real options?

A financial option is the right, but not the obligation, to buy (a call) or sell (a put) a stock at a given price within a certain period of time. If the option is not exercised, the only loss is the price of the option, but the upside potential is large. The asymmetry of the option – the protection from downside risk combined with the possibility of a large upside gain – is what gives the option value.
The idea is similar with real options analysis. The manager identifies options within a project and their exercise prices. If the future turns out to be good, the option is exercised; if the outlook turns out to be bad, the option is not exercised. If the option is not exercised, the only loss is the price of the option. Real options analysis provides a means of capturing the flexibility of management to address uncertainties as they are resolved. The flexibility that management has includes options to defer, abandon, shutdown/restart, expand, contract, and switch use (see Table 1). This methodology forces the firm to modify its simple view of valuation to one that more closely matches the manner in which the firm operates. The use of real options lets the firm modify its actions after the state-of-nature has revealed itself. For example, if demand fails to meet expectations, the firm may choose to delay investment rather than proceed along its original business case. The deferral option is the one that is generally illustrated and is treated as analogous to a call option. But real options analysis can also be applied to the evaluation of other management alternatives, for example shutdown and restart, time-to-build, or extending the life of a project or enterprise.

Table 1. Description of options

Option              Description
Defer               To wait to determine if a "good" state-of-nature obtains
Abandon             To obtain salvage value or opportunity cost of the asset
Shutdown & restart  To wait for a "good" state-of-nature and re-enter
Time-to-build       To delay or default on a project – a compound option
Contract            To reduce operations if the state-of-nature is worse than expected
Switch              To use alternative technologies depending on input prices
Expand              To expand if the state-of-nature is better than expected
Growth              To take advantage of future, interrelated opportunities
Analysis

Assumptions and model

To explain the application of real options to model regulatory distortions, the following stylised assumptions are made.
Cash flows shift each period based on a probability: the cash flow is high (a good result) with probability q or low (a bad result) with probability (1–q). The model is for two periods, and the intertemporal cross-elasticities of demand are assumed to be zero. These simplifying assumptions are enough to capture the effects of time and uncertainty; they lead to an easy understanding of the methodology and serve as a foundation for more complex analysis. We explore only one facet of this simple, but not unrealistic, assumption – that cash flows are uncertain. We will explore the role of management's flexibility in dealing with two uncertainties when management is constrained in its behaviour by regulation: first by the obligation to serve and then by a regulatory constraint on prices. We contrast this with management's unconstrained actions. (This analysis is applicable only when the firm has freedom from other regulatory constraints, such as quality-of-service constraints. While this will not change the nature of the results, it may well change the magnitude of the options.)
Under the traditional engineering-economics methodology, the value of the investment would be evaluated as the expected value of the discounted present value of the profit function. This requires, inter alia, the determination of the "correct" discount rate. To account for uncertainty, the rate is adjusted for risk, generally using the capital asset pricing model (CAPM).6 In the regulatory context, this would be equivalent to the determination of the rate-of-return for the firm. In the rate-base, rate-of-return regulation context, a "historical" year is chosen and the rate-of-return determined. Prospective costs and revenues are assumed to be estimated from the past, with the historical year representing the mean of that past. Before competition entered the telecommunications industry, discounted present value techniques were a useful analytical tool. The industry had stable, predictable revenues and costs, and hence cash flow; but more recently, the industry has become volatile (Noam 2002; Alleman 2002b).
Our analysis differs from this present value approach in that it treats the investment and cash flow prospectively, as a model in which an investment has two possible outcomes: a good result or a bad result. A simple binomial real option model can analyse the investment.7 Viewed in this fashion, the question is: what is the investment worth with and without management flexibility? In addition to the cash flow constraint, we explore the delay option in detail. Earlier we noted other conditions in which regulation can have an impact on valuations: the abandonment, shutdown/restart and time-to-build options.
6 See any good financial text for a description of this methodology, for example Bodie and Merton (2000).
7 The simplicity of the model should not mislead the reader. The two-period model can be expanded into an n-period model. Cox et al. (1979) show how to solve these models and how, in the limit, the results converge to the Black-Scholes option pricing result.
Regulatory distortion

The economist is concerned with social welfare. The nominal purpose of regulation is to optimise social welfare and ensure that monopoly rents are eliminated from the firm's prices. This requires knowledge of economic costs and benefits. Generally, benefits are measured by the consumers' surplus; economic costs are estimated by the firm's historical accounting costs. Both can be difficult to measure, but what we argue here is that not recognising some costs, i.e. not knowing what to measure, means that social welfare is distorted and decreased.
The interaction of regulation with valuation bears on welfare in several dimensions. First, costs unrecognised by the regulatory community mean that the prices set by it will not be correct. Second, if the financial community recognises that the regulator is not accounting for all the costs of the enterprise, then it will be more expensive to raise debt and equity capital, which, in turn, will increase costs in a vicious cycle, raising the cost to consumers.
An example of a major cost that has not been adequately identified or quantified is the obligation to serve. Under the current practice in most countries, whenever a customer demands service, the incumbent carriers are obligated to provide the service. It is part of the common carrier obligation. Some have interpreted this obligation to serve as a mandate to provide "universal" broadband service. Under universal broadband the firms would not be able to assess the market and determine the best time and place to enter. Instead, they would be on a specific time and geographic schedule. The firms would have lost the option to delay. If the customer proves unprofitable, the carrier still must retain this customer. Thus, they also lose their right or option to abandon the service.8
In the broadband situation, the incumbent carriers are precluded from exercising the option to delay. A related option is the ability to shut down and restart operations. This, too, is precluded under the regulatory franchise. Finally, the time-to-build option, which includes the ability to default in the middle of a project, would not be available in the current regulatory context. The lack of these options has not been considered in the various cost models that have been utilised by the regulatory community for a variety of policy purposes. Clearly, the lack of these options imposes a cost on the firm and on society.9
In a previous paper (Alleman and Rappoport 2002), we used the deployment of DSL to illustrate the delay option, and then the learning option. We indicated how both may be quantified and suggested the parameters which are relevant for these options. In this paper we draw on previous work to evaluate the delay option (Alleman et al. 2004).
How can options be valued? Black and Scholes examined a method nearly 30 years ago for pricing financial options, and it has been much refined since then. See,
9
The argument that the expansion of the network provides an external benefit – an externality – beyond the value of an additional subscriber may be an offset to this cost, but the externality argument is not compelling in the United States or any area that has significant penetration of telephone service, see Crandall and Waverman (1999). This is not to imply that these public policies should be abandoned, but in order to weigh the policy alternatives, their costs must be understood.
See, for example, Nembhard et al. (2000) for various methods and techniques to solve these real options problems.

The financial options technique can also be applied to physical or real assets. To understand the intuition of the method, consider the stock-option comparison. Three things influence the price of a stock option: the spread between the current price and the exercise price, the length of time for which the option may be exercised, and the volatility of the stock in question. The current price of the asset is known; the exercise price at which the stock can be purchased in the future is set, as is the period within which the option can be exercised. The more the exercise price exceeds the current price, the lower the option's price, because only a large change in the market will push the stock's value above the exercise price and pay off for the owner, and big shifts are less likely than small ones. The date at which the option expires is also a factor: the longer the option lasts, the greater the chance that the stock price will rise above the exercise price ("in-the-money") and the owner will make a profit, so the price is higher. Finally, the volatility of the stock price over time influences the option price: the greater the volatility, the higher the price of the option, because it is more likely that the price will move above the exercise price and the owner will be in-the-money. Black and Scholes combined these factors to solve the problem of pricing the option.

One important attribute of real options, as opposed to traditional discounted cash-flow analysis, is the treatment of uncertainty. In discounted cash-flow analysis, increased risk is handled by increasing the discount rate: the more risk, the higher the return the company has to earn as a reward for investing. This has the effect of decreasing the value of the cash flow in later periods; thus, uncertainty reduces the value. But in a real-options approach, the value would be increased, because managers have the flexibility to delay or expand the project – the greater the uncertainty, the greater the value.

Delay distortion, obligation to serve

Consider the following two-period model in which an investment will have two possible outcomes: a "good" result, V+, or a "bad" result, V–, with probability q and (1 – q), respectively. Under traditional practices, this would be evaluated by the expected value of the discounted cash flow of the two outcomes. Current investment analysis suggests it can be valued with option pricing methods such as Black-Scholes-Merton, Cox-Ross-Rubinstein, or other techniques.10 Many methods exist for solving this problem. The intuition is that delay has a value, since it allows the firm to have the state of nature revealed.11 The relevance in the regulatory context is that the regulated firm does not have the delay option available to it – it must supply the basic services as required by its franchise.12 What is the cost of this inflexibility? It is the value of the option to delay!

10 Dixit and Pindyck (1994) develop an example using traditional methods but account for the ability to delay the decision.
11 See Alleman et al. (2004) for a detailed example.

Fig. 1. Two period binomial outcomes: the investment I0 yields V+ with probability q or V– with probability (1 – q)
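To make the two-period comparison concrete, the following minimal sketch (our code, not the authors'; all parameter values are illustrative assumptions) contrasts committing the investment today with waiting one period, observing the state of nature, and investing only if it is worthwhile:

```python
# Toy version of the Fig. 1 two-period model (illustrative numbers).

def npv_invest_now(I0, V_plus, V_minus, q, r):
    """Traditional expected discounted cash flow, committed today."""
    return (q * V_plus + (1 - q) * V_minus) / (1 + r) - I0

def value_of_waiting(I0, V_plus, V_minus, q, r):
    """Invest next period only in states where the payoff beats the cost."""
    payoff_good = max(V_plus - I0 * (1 + r), 0.0)
    payoff_bad = max(V_minus - I0 * (1 + r), 0.0)
    return (q * payoff_good + (1 - q) * payoff_bad) / (1 + r)

I0, V_plus, V_minus, q, r = 100.0, 140.0, 70.0, 0.5, 0.06
print(npv_invest_now(I0, V_plus, V_minus, q, r))    # ~ -0.94: reject under NPV
print(value_of_waiting(I0, V_plus, V_minus, q, r))  # ~ 16.0: delay has value
```

An obligation to serve forces the first branch; the gap between the two numbers is the cost of the lost delay option.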
Delay distortion, obligation to serve with regulated cash flow

The above analysis only captures the obligation to serve. Additional constraints are generally imposed by regulation. For example, there is often a constraint on earnings or on prices imposed using rate-of-return constraints on capital. Price caps represent a more flexible mode of regulation. In the first case, rate-of-return, a revenue ceiling is imposed on the firm based on operating costs and the rate-of-return on the un-depreciated investment. Because of the distortions noted previously, regulators have turned to the second form of control, price caps (also called incentive regulation). Here the firm is allowed to change prices by no more than a general price index.13 We model this by constraining the "good" cash flow. The intuition is that the regulator sets the rate-of-return on the total earnings (cash flow) of the investment according to the risk-adjusted cost of capital. Using the above example, but capping the good outcome's cash flow so that the discounted cash flow equals zero, we can emulate this process. We have not empirically estimated the value of these options for regulated companies, but as seen below, it can be significant, depending on the regulatory constraint. In general, the value of the flexibility will increase as the uncertainty about the future increases. The value of flexibility also increases when management has the ability to respond quickly to new information and circumstances (Ahnani and Bellalah 2000). For example, for telephone companies considering the faster deployment of DSL, one of the relevant parameters would be the scope of the existing infrastructure. Given the projected demand, how many customers are situated along an existing trunk route? To put it another way, does meeting expected demand require the installation of new trunks (in addition to the loops)? Answers to these questions are critical to the valuation of the options facing the company.

12 The exception of discretionary services such as DSL has already been noted.
13 Usually a productivity factor is included in the calculation, which limits the allowed price increase. See Laffont and Tirole (2000) for a discussion of incentive regulation in the telecommunications industry.
Fig. 2. Unconstrained and constrained cash flow: binomial go/stop trees with branch probabilities q1 and 1 – q1; in the constrained tree the "go" branch is capped by the cash flow constraint
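The cash-flow cap of Fig. 2 can be emulated in the same toy setting. In the sketch below (our construction, with assumed numbers), regulation truncates the "good" cash flow at the level where the invest-now NPV is zero, so the firm can earn at most a normal return, and the delay option loses value accordingly:

```python
# Toy version of the Fig. 2 constraint (illustrative numbers).
q, r, I0, V_minus = 0.5, 0.06, 100.0, 70.0
V_plus = 160.0                                   # assumed unconstrained good outcome
V_cap = (I0 * (1 + r) - (1 - q) * V_minus) / q   # cap chosen so NPV(invest now) = 0

def wait_value(v_good):
    """Delay-option value when the good-state cash flow is v_good."""
    return q * max(v_good - I0 * (1 + r), 0.0) / (1 + r)

print(wait_value(V_plus))              # ~ 25.5: unconstrained delay option
print(wait_value(min(V_plus, V_cap)))  # ~ 17.0: the cap shrinks the option value
```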
A delay criterion14

With this background in mind, we now summarise an earlier result (Alleman et al. 2004, hereafter ASR) in order to apply the methodology to model regulatory distortions due to the inability to delay investment projects. Recall that a financial option is the right, but not the obligation, to buy (a call) or sell (a put) a stock at a given price within a specific period of time. As noted, several approaches are possible to determine the theoretical value of an option, based on different assumptions concerning the market, the dynamics of stock price behaviour, and individual preferences. Option pricing theory is based on the no-arbitrage principle, which is applied to the underlying stock's distributional dynamics. The simplest of these theories is based on the multiplicative binomial model of stock price fluctuations discussed earlier, which is often used for modelling stock behaviour (also see ASR). To model the delay option, we utilise the continuous additive model developed by ASR, which is summarised below. First we provide an example of the delay option; then we recap the option pricing formula which we use to derive the decision criterion statistic. We will use this to show the impact of the regulatory constraint on the investment decision.

14 This section summarises Alleman et al. (2004).

Example: Value of the option to defer

One real option alternative is the deferral option, which is based on the concept of the call option, as shown in Fig. 3 (Luehrman 1998a). Consider a project which is not currently profitable. A deferral option gives one the option to defer starting this project for one year to determine if the price increases enough to make the investment worthwhile. This right can be interpreted as a call option; the numerical example illustrates its value. Table 2 displays the project's parameters. The present value of future cash flow, V, is assumed to be normally distributed with mean $100 million (= S) and standard deviation $30 million (= σ). The risk-free rate is 6% (R = 1.06); the exercise price one year later is $110 million, so its present value is $103.8 million (= $110/1.06).

Table 2. A project defer option

Present value of operating future cash flow    S    $100 million
Investment in equipment                        K    $103.8 million
Length of time the decision may be deferred    T    1 year
Risk-free rate                                 rf   1.06
Riskiness                                      σ    $30 million
Conventional NPV is given by S − K = 100 − 103.8 = −3.8 million. This project would have been rejected under the NPV criterion. However, applying the call option formula in the pricing equation (see below), the value of deferring the project one year is calculated as the defer ROV (real option value):

$$ROV = C = \int_K^{+\infty} (V - K) f(V)\,dV \qquad (1)$$

$$= \int_K^{+\infty} (V - K)\,\frac{1}{\sigma\sqrt{2\pi}}\,\exp\!\left(-\frac{1}{2}\,\frac{(V - S)^2}{\sigma^2}\right)dV = \int_{103.8}^{+\infty} (V - 103.8)\,\frac{1}{30\sqrt{2\pi}}\,\exp\!\left(-\frac{1}{2}\,\frac{(V - 100)^2}{30^2}\right)dV \qquad (2)$$
The flexibility to defer this project is valued at $10.2 million. Adding NPV and ROV gives a positive value of $6.4 million (= −3.8 + 10.2). This is called expanded NPV, or ExNPV; it represents the value of this project including future flexibility (Trigeorgis 1996). Consequently, the optimum decision is to "defer," i.e. to "wait and watch" the market! Indeed, one would follow this procedure so long as ExNPV is greater than zero or, equivalently, ROV > |NPV|. ASR generalised and formalised this procedure.
Fig. 3. Expanded Net Present Value (ExNPV): NPV = −$3.8 million, ROV (value to defer) = $10.2 million, ExNPV = $6.4 million
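The defer value can be checked by evaluating equation (2) directly; a sketch (our code, not the authors'), using only the Table 2 inputs and a simple trapezoidal rule:

```python
# Numerical check of equation (2): ROV for V ~ N(S, sigma^2) with strike K.
from math import exp, pi, sqrt

S, K, sigma = 100.0, 103.8, 30.0   # Table 2 values ($ millions)

def integrand(v):
    return (v - K) * exp(-0.5 * ((v - S) / sigma) ** 2) / (sigma * sqrt(2 * pi))

# trapezoidal rule on [K, S + 10*sigma], a wide truncation of [K, +infinity)
a, b, n = K, S + 10 * sigma, 100_000
h = (b - a) / n
rov = (0.5 * (integrand(a) + integrand(b)) + sum(integrand(a + i * h) for i in range(1, n))) * h
print(round(rov, 1))               # 10.2 -> ExNPV = -3.8 + 10.2 = 6.4
```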
The Continuous Additive Model

In the ASR model the net present value of the project is normalised by the standard deviation, i.e.

$$D = \frac{|m' - I|}{\sigma'}$$

where m′ is the discounted cash flow, I is the initial investment, and σ′ is the standard deviation. In the standard case, where

$$d = \frac{m' - I}{\sigma'}$$

is less than zero, the investment is rejected. With the real options methodology, one examines whether the option value of delay "overcomes" the negative present value. The ASR model solves for D at the point where the delay real option value is equal to the normalised NPV, which is 0.276. This simple decision criterion determines, if NPV < 0, whether to "wait-and-watch" to see if the state of nature improves or to forsake the investment. Thus, in the case of NPV < 0, the decision criterion is:
• ROV > |NPV| → wait and watch the opportunity carefully;
• ROV < |NPV| → do not invest.
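The break-even value 0.276 can be reproduced numerically. The sketch below (our code) solves L_N(D) = D by bisection, where L_N is the unit normal linear loss integral that is recapped in equations (7)–(9) later in this section:

```python
# Solve L_N(D) = D for the break-even point D*, with
# L_N(D) = phi(D) - D * (1 - Phi(D)) in closed form.
from math import erf, exp, pi, sqrt

def phi(x):   # standard normal density
    return exp(-0.5 * x * x) / sqrt(2 * pi)

def Phi(x):   # standard normal CDF
    return 0.5 * (1 + erf(x / sqrt(2)))

def loss(D):
    return phi(D) - D * (1 - Phi(D))

lo, hi = 0.0, 1.0
for _ in range(60):               # bisection on f(D) = L_N(D) - D
    mid = 0.5 * (lo + hi)
    if loss(mid) > mid:           # still above the diagonal: root lies right
        lo = mid
    else:
        hi = mid
print(round(0.5 * (lo + hi), 3))  # 0.276
```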
ASR derived this relationship as follows. Decisions under conditions of uncertainty should be made on the basis of the current state of information available to decision makers. If the expectation of the NPV were negative for the investment, the conventional approach would be to reject the investment. However, if one has the ability to delay this investment decision and wait for additional information, the option to invest later has value. This implies that the investment should not be undertaken at the present time, but it leaves open the possibility of investing in the future.15
Decision criterion statistic

For the purpose of analysing the relationship between NPV and the option value associated with the single investment, ASR assumed that the random variable of interest is the present value of future cash flow, V, which is assumed to be normally distributed, V ~ N(m′, σ′). The investment cost I is assumed to be a constant. In the conventional method, NPV is expressed as:

$$NPV = E[V - I] = E[V] - I = m' - I \qquad (3)$$
Following Herath and Park (2001), a loss function is introduced. When no investment takes place, the cash flow is obviously equal to 0. But consider the situation in which V > I, where the opportunity loss is recognised as V − I. The loss function is therefore:

$$L(V) = \begin{cases} 0 & \text{if } V < I \\ V - I & \text{if } V > I \end{cases} \qquad (4)$$
The expected opportunity loss can be calculated as:

$$E[L(V)] = \int_{-\infty}^{+\infty} L(V) f(V)\,dV = \int_I^{+\infty} (V - I) f(V)\,dV \qquad (5)$$
This function is the payoff of a call option, using the pricing formula of a call option above (Alleman et al. 2004).

15 See Alleman et al. (2004) for derivation of this statistic.
Assuming an investment can be deferred until new information is obtained, this value is the same as the defer option for the investment. Moreover, the value is also equal to the expected value of perfect information (EVPI) for this investment opportunity (Herath and Park 2001).

$$ROV = \int_I^{+\infty} (V - I) f(V)\,dV \qquad (6)$$
Fig. 4. Opportunity loss function and ROV (NPV < 0)
When the terminal distribution of V is normal, the real option value can be calculated using the unit normal linear loss integral:

$$L_N(D) = \int_D^{+\infty} (V - D) f_N(V)\,dV \qquad (7)$$

where f_N(V) is the standard normal density function, so that

$$ROV = \sigma' L_N(D) \qquad (8)$$
where

$$D = \frac{|m' - I|}{\sigma'} \qquad (9)$$
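Applied to the Table 2 example, equations (7)–(9) recover the defer value found earlier. A short sketch (our code), using the closed form of the loss integral, L_N(D) = φ(D) − D(1 − Φ(D)):

```python
# ROV = sigma' * L_N(D) for the Table 2 numbers.
from math import erf, exp, pi, sqrt

def phi(x):   # standard normal density
    return exp(-0.5 * x * x) / sqrt(2 * pi)

def Phi(x):   # standard normal CDF
    return 0.5 * (1 + erf(x / sqrt(2)))

def L_N(D):   # unit normal linear loss integral, closed form
    return phi(D) - D * (1 - Phi(D))

m_prime, I, sigma_prime = 100.0, 103.8, 30.0
D = abs(m_prime - I) / sigma_prime        # ~ 0.127
print(round(sigma_prime * L_N(D), 1))     # 10.2, matching the defer example
```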
When a manager makes an investment decision, her optimal decision is not to invest if NPV = m′ − I < 0. She may then compare the NPV and the defer option value. If she finds that the option value is larger than the absolute value of NPV (= |m′ − I|), she has the option to defer and watch for positive changes in the investment opportunity. If the option value is too small to compensate for the negative NPV, she will abandon this investment proposal. ASR solved the normalised equation ROV = |NPV| and found D* = 0.276. Moreover, the probability that the payoff of this defer option is positive when d = −D* at time 0 can be calculated as follows:
$$\begin{aligned} P[V - I > 0] &= P\!\left[\frac{V - I}{\sigma'} > 0\right] = P\!\left[\frac{V - m' + m' - I}{\sigma'} > 0\right] \\ &= P\!\left[\frac{V - m'}{\sigma'} > -\frac{m' - I}{\sigma'}\right] = P[N > -d] = P[N > D^*] = 0.39 \end{aligned} \qquad (10)$$

where N is a standard normal random variable. Because V ~ N(m′, σ′), (V − m′)/σ′ ~ N(0, 1). Therefore, the probability that the payoff of the defer option is positive is 39%. Is this too high a probability for the option simply to be abandoned? Yes, it is! The criterion d < −D* means "do not invest now," but it does not mean "abandon the defer option." The defer option itself has value even though the expectation of NPV is deeply negative. If holding the option does not require any cost, it does not have to be thrown away; just wait and watch what happens in the next period. Below we use the above results to model the regulatory constraint.
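The 39% figure follows directly from the standard normal CDF; a one-line check (our code):

```python
# P[N > D*] with D* = 0.276.
from math import erf, sqrt

Phi = lambda x: 0.5 * (1 + erf(x / sqrt(2)))   # standard normal CDF
print(round(1 - Phi(0.276), 2))                # 0.39
```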
Regulatory distortions modelled

The impact of regulation can be simulated by constraining the variance. Price caps, below-"cost" access charges, and other regulatory devices dampen the possibility of a high-return outcome, and this affects the variance of returns. Thus, the strength of the regulatory constraint can be modelled by constraining σ′. We do so by defining α as the regulatory constraint on σ′, namely ασ′, which affects the decision statistic

$$d = \frac{m' - I}{\sigma'}$$
α lies between zero and one (0 < α < 1), where 1 represents no constraint on the regulated firm and zero represents the most severe constraint on the firm. We then examine the probability that the real option value is greater than or equal to the absolute value of the net present value as α varies between zero and one, i.e., a function showing how the probability of the delay option paying off is reduced as α falls. As intuition would suggest, as the constraint becomes tighter, the probability that the deferred option will pay off is diminished. This is shown in Fig. 5. As α approaches zero, the probability of the defer option being "in-the-money" is reduced significantly. This implies that the value of the investment will be reduced. Indeed, recall that the real option value can make the difference between waiting and watching to invest and abandoning the investment altogether. Investments with negative discounted present value may be undertaken at a later date if the real option value is sufficient to overcome the negative NPV. As the regulatory constraint becomes tighter, this is less likely to occur.

Fig. 5. Probability that the delay option is greater than NPV as the regulatory constraint increases: the probability falls from about 0.39 at α = 1 toward zero as α declines to 0.1
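On our reading of the constraint (an assumption about the authors' exact computation, though it reproduces the shape of Fig. 5), replacing σ′ by ασ′ rescales the decision statistic to d/α, so with d = −D* the payoff probability becomes 1 − Φ(D*/α):

```python
# Payoff probability as the regulatory constraint alpha tightens.
from math import erf, sqrt

def Phi(x):   # standard normal CDF
    return 0.5 * (1 + erf(x / sqrt(2)))

D_star = 0.276
for alpha in (1.0, 0.8, 0.6, 0.4, 0.2, 0.1):
    print(f"alpha = {alpha:3.1f}: P = {1 - Phi(D_star / alpha):.3f}")
# falls from ~0.39 at alpha = 1.0 to ~0.003 at alpha = 0.1, as in Fig. 5
```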
Summary

We have shown that the introduction of uncertainty can make a significant difference in the valuation of a project. This manifests itself, inter alia, in the manner in which regulatory constraints can affect the value of the investment. In particular, the inability to exercise the delay, abandon, start/stop, and time-to-build options has an economic and social cost. Moreover, we show that regulatory constraints on cash flow have an impact on investment valuations. Regulators and policymakers cannot afford to ignore the implications and methods developed by real options analysis. Effective policy dealing with costs cannot be made without a fundamental understanding of the implications of real options theory. The real options approach is a powerful tool with which to address the effect of uncertainty on regulatory policy. Real options methodology offers the possibility of integrating major analytical methods into a coherent framework that more closely approximates the dynamics of the firm's behaviour without heroic assumptions regarding the dynamics of the environment. Applying real option valuation methodology shows that the new decision index d – the uncertainty-adjusted NPV – and D* = 0.276 – the break-even point of NPV and ROV (real option value) – give a clear basis for decision-making under uncertainty. When making decisions, managers have to observe only three parameters: the expectation of future cash flow, its uncertainty, and the amount of investment needed to acquire the project. The examples using the new criterion show its usefulness.
Future research

We plan to expand our work to empirically estimate the magnitude of the delay and other options. Another area we feel would be fruitful to explore is the issue of economic depreciation within these models. Clearly, (economic) depreciation is determined, inter alia, by the price the asset commands in the market (Hotelling 1925; Salinger 1998). In contrast to others, we allow uncertain demand to set the optimal capacity path. Rather than assuming a user cost-of-capital, we can assume economic depreciation is determined endogenously by the demand function.
References

Ahnani M, Bellalah M (2000) Real Options With Information Costs. Working Paper, University of Paris-Dauphine and Université de Cergy, January, pp 1–22
Alleman J (2002a) A New View of Telecommunications Economics. Telecommunications Policy 26: 87–92, January
Alleman J (2002b) Irrational Exuberance? The New Telecommunications Industry and Financial Markets: From Utility to Volatility Conference. 30 April
Alleman J (2002c) Real Options, Real Opportunities. Optimize 2, vol 1: 57–66, January
Alleman J (1999) The Poverty of Cost Models, the Wealth of Real Options. In: Alleman J, Noam E (eds) The New Investment Theory of Real Options and its Implications for the Cost Models in Telecommunications. Kluwer Academic Publishers, pp 159–179
Alleman J, Noam E (eds) (1999) The New Investment Theory of Real Options and its Implications for the Cost Models in Telecommunications. Kluwer Academic Publishers
Alleman J, Rappoport P (2002) Modelling Regulatory Distortions with Real Options. The Engineering Economist 4, vol 47: 390–417, December
Alleman J, Suto H, Rappoport P (2004) An Investment Criterion Incorporating Real Options. Proceedings of the Eighth Annual International Conference on Real Options: Theory Meets Practice. Montréal, Canada, 17–19 June
Alleman J, Rappoport P (forthcoming) Optimal Pricing with Sunk Cost and Uncertainty. Paper presented to the International Telecommunications Society Africa-Asia-Australian Regional Conference. Perth, Australia, 28–30 August
Amram M, Kulatilaka N (1999) Real Options. HBS Press
Averch H, Johnson LL (1962) Behavior of the Firm under a Regulatory Constraint. American Economic Review 5, vol 52: 1052–1069
Biglaiser G, Riordan M (2000) Dynamics of Price Regulation. RAND Journal of Economics, vol 31, issue 4 (winter): 744–767
Black F, Scholes M (1973) The Pricing of Options and Corporate Liabilities. Journal of Political Economy, vol 81: 637–659
Bodie Z, Merton RC (2000) Finance. Prentice-Hall, Inc., Upper Saddle River, NJ
Clark E, Easaw JZ (2004) Optimal Network Access Pricing for Natural Monopolies when Costs are Sunk and Revenues are Uncertain. Proceedings of the Eighth Annual International Conference on Real Options: Theory Meets Practice. Montréal, Canada, 17–19 June
Copeland T, Antikarov V (2001) Real Options: A Practitioner’s Guide. Texere LLC
Copeland T, Koller T, Murrin J (2000) Valuation: Measuring and Managing the Value of Companies. McKinsey & Company Inc.
Cox JC, Ross SA, Rubinstein M (1979) Option Pricing: A Simplified Approach. Journal of Financial Economics 3, vol 7: 229–264
Crandall R, Waverman L (eds) (2000) Who Pays for Universal Service: When Subsidies Become Transparent. Brookings Institution Press, Washington, D.C.
d’Halluin Y, Forsyth PA, Vetzal KR (2004a) Wireless Network Capacity Investment. Proceedings of the Real Options Seminar. University of Waterloo, Waterloo, Ontario, Canada, 28 May
d’Halluin Y, Forsyth PA, Vetzal KR (2004b) Managing Capacity for Telecommunications Networks. Proceedings of the Real Options Seminar. University of Waterloo, Waterloo, Ontario, Canada, 28 May
Damodaran A (2001) Dark Side of Valuation. Prentice-Hall
Dixit AK, Pindyck RS (1995) The Options Approach to Capital Investments. Harvard Business Review 3, vol 73: 105–115, May–June
Dixit AK, Pindyck RS (1994) Investment under Uncertainty. Princeton University Press, Princeton, NJ
Dobbs IM (2004) Intertemporal Price Cap Regulation under Uncertainty. Economic Journal, vol 114, issue 495: 421–440
Ergas H, Small J (2000) Real Options and Economic Depreciation. NECG Pty Ltd and CRNEC, University of Auckland
Hausman J (2002) Competition and Regulation for Internet-related Services: Results of Asymmetric Regulation. In: Crandall R, Alleman J (eds) Broadband Communications: Overcoming the Barriers
Hausman J (1999) The Effect of Sunk Costs in Telecommunications. In: Alleman J, Noam E (eds) The New Investment Theory of Real Options and its Implications for the Cost Models in Telecommunications. Kluwer Academic Publishers
Hausman J (1997) Valuing the Effect of Regulation on New Services in Telecommunications. Brookings Papers on Economic Activity, Microeconomics, 1–54
Hausman J, Myers S (2002) Regulating the US Railroads: The Effects of Sunk Costs and Asymmetric Risk. Journal of Regulatory Economics 22, vol 3: 287–310, Kluwer Academic Publishers
Herath HSB, Park CS (1999) Economic Analysis of R&D Projects: An Option Approach. The Engineering Economist 1, vol 44: 1–32
Herath HSB, Park CS (2001) Real Options Valuation and Its Relationship to Bayesian Decision Making Methods. The Engineering Economist 1, vol 46: 1–32
Hori K, Mizuno K (2004) Network Investment and Competition with Access to Bypass. Proceedings of the Eighth Annual International Conference on Real Options: Theory Meets Practice. Montréal, Canada, 17–19 June
Hotelling H (1925) A General Mathematical Theory of Depreciation. Journal of the American Statistical Association, vol 20: 340–353, September
Hull JC (2003) Options, Futures and other Derivatives. Fifth edition, Prentice-Hall, Upper Saddle River, NJ
Kahn AE (1988) The Economics of Regulation: Principles and Institutions. Volumes I and II, MIT Press, Cambridge, MA and Wiley, New York
Kulatilaka N, Lin L (2004) Strategic Investment in Technological Standards. Proceedings of the Eighth Annual International Conference on Real Options: Theory Meets Practice. Montréal, Canada, 17–19 June
Laffont J-J, Tirole J (2000) Competition in Telecommunications. MIT Press, Cambridge, MA
Lozano G, Rodríguez JM (forthcoming) Access Pricing: A Simplified Real Options Approach. June 2005 (draft). Paper presented to the International Telecommunications Society Africa-Asia-Australian Regional Conference. Perth, Australia, 28–30 August
Luenberger DG (1998) Investment Science. Oxford University Press
Luehrman T (1998a) Investment Opportunities as Real Options. Harvard Business Review: 51–67, July–August
Luehrman T (1998b) Strategy as a Portfolio of Real Options. Harvard Business Review: 89–99, September–October
Mauboussin M (1999) Get Real. Credit Suisse Equity Research, June
Nembhard HB, Shi L, Park CS (2000) Real Option Models For Managing Manufacturing System Changes in the New Economy. The Engineering Economist 3, vol 45: 232–258
Newbery D (1997) Privatization and liberalization of network utilities. Presidential Address, European Economic Review 41: 357–383
Noam E (2002) How Telecom Is Becoming A Cyclical Industry, And What To Do About It. The New Telecommunications Industry and Financial Markets: From Utility to Volatility Conference. 30 April
Ofcom (2005) Ofcom’s approach to risk in the assessment of the cost of capital. (Document amended on 02/02/05), 26 January
Pak D, Keppo J (2004) A Real Options Approach to Network Optimization. Proceedings of the Eighth Annual International Conference on Real Options: Theory Meets Practice. Montréal, Canada, 17–19 June
Park CS, Herath HSB (2000) Exploiting Uncertainty – Investment Opportunities as Real Options: A New Way of Thinking in Engineering Economics. The Engineering Economist 1, vol 45: 1–36
Pindyck RS (2004) Mandatory Unbundling and Irreversible Investment in Telecom Networks. NBER Working Paper No. 10287
Pindyck RS (2005a) Pricing Capital under Mandatory Unbundling and Facilities Sharing. NBER Working Paper No. 11225
Pindyck RS (2005b) Real Options in Antitrust. Presentation to the Real Options Conference. Paris, 24 June
Salinger M (1998) Regulating Prices to Equal Forward-Looking Costs: Cost-Based Prices or Price-Based Costs. Journal of Regulatory Economics 2, vol 14: 149–164, September
Small JP (1998) Real Options and the Pricing of Network Access. CRNEC Working Paper, available at http://www.crnec.auckland.ac.nz/research/wp.html
Smith JE, Nau RF (1995) Valuing Risky Projects: Option Pricing Theory and Decision Analysis. Management Science 5, vol 41: 795–816, May
Trigeorgis L (1996) Real Options: Managerial Flexibility and Strategy in Resource Allocation. The MIT Press
Willig R (1979) The Theory of Network Access Pricing. In: Trebing HM (ed) Issues in Public Utility Regulation. Michigan State University Public Utility Papers