Knowledge and Information Technology Management: Human and Social Perspectives

Angappa Gunasekaran and Omar Khalil
University of Massachusetts, USA

Syed Mahbubur Rahman
Minnesota State University, Mankato, USA
IDEA GROUP PUBLISHING Hershey • London • Melbourne • Singapore • Beijing
Acquisition Editor: Mehdi Khosrowpour
Managing Editor: Jan Travers
Development Editor: Michele Rossi
Copy Editor: Maria Boyer
Typesetter: LeAnn Whitcomb
Cover Design: Integrated Book Technology
Printed at: Integrated Book Technology
Published in the United States of America by
Idea Group Publishing (an imprint of Idea Group, Inc.)
701 E. Chocolate Avenue, Suite 200
Hershey PA 17033
Tel: 717-533-8845
Fax: 717-533-8661
E-mail: [email protected]
Web site: http://www.idea-group.com

and in the United Kingdom by
Idea Group Publishing (an imprint of Idea Group, Inc.)
3 Henrietta Street, Covent Garden
London WC2E 8LU
Tel: 44 20 7240 0856
Fax: 44 20 7379 3313
Web site: http://www.eurospan.co.uk

Copyright © 2003 by Idea Group Publishing (an imprint of Idea Group, Inc.). All rights reserved. No part of this book may be reproduced in any form or by any means, electronic or mechanical, including photocopying, without written permission from the publisher.

Library of Congress Cataloging-in-Publication Data

Knowledge and information technology management : human and social perspectives / [editors], Angappa Gunasekaran, Omar Khalil, Mahbubur Rahman Syed.
p. cm.
Includes bibliographical references and index.
ISBN 1-59140-032-5 (cloth) -- ISBN 1-59140-072-4 (ebook)
1. Knowledge management. 2. Information technology--Management. 3. Human capital. I. Gunasekaran, A. II. Khalil, Omar. III. Rahman, Syed Mahbubur, 1952-
HD30.2 .K6347 2002
658.4'038--dc21
2002068792
British Cataloguing in Publication Data A Cataloguing in Publication record for this book is available from the British Library.
Knowledge and Information Technology Management: Human and Social Perspectives

Table of Contents

Preface .......................................................................................... i

Section I: Introduction to Knowledge and Information Technology Management

Chapter I ........................................................................................ 1
Developments in Managing Innovation, Knowledge and E-Business
Benn Lawson, The University of Melbourne, Australia
Danny Samson, The University of Melbourne, Australia

Chapter II ...................................................................................... 14
Sources of Knowledge Acquisition by U.S. Managers: An Empirical Analysis
Jaideep Motwani, Grand Valley State University, USA
Pradeep Gopalakrishna, Pace University, USA
Ram Subramanian, Grand Valley State University, USA

Section II: Integration of Business and Knowledge/Information Technology Management

Chapter III ..................................................................................... 30
Information Systems and Business Strategy: A Concurrent Planning Model
Antonio Torres-Perez, University of Valencia, Spain
Isidre March-Chorda, University of Valencia, Spain

Chapter IV ..................................................................................... 51
Integrated QFD and Knowledge Management System for the Development of Common Product Platform
Walter W. C. Chung, The Hong Kong Polytechnic University, Hong Kong
Colin K. S. Tam, The Hong Kong Polytechnic University, Hong Kong
Michael F. S. Chan, The Hong Kong Polytechnic University, Hong Kong

Chapter V ...................................................................................... 72
An Expanded Model of the Effects of Organizational Culture Upon the Acceptance of Knowledge Management
Nancy C. Shaw, George Mason University, USA
Francis D. Tuggle, American University, USA

Chapter VI ..................................................................................... 89
Information-Based Integration for Complex Systems
E. Szczerbicki, The University of Newcastle, NSW, Australia

Chapter VII ................................................................................... 104
An Experimental Analysis of the Effectiveness and Efficiency of Teams with Partial Problem Domain Knowledge
Dinesh A. Mirchandani, Grand Valley State University, USA
Jaideep Motwani, Grand Valley State University, USA

Chapter VIII .................................................................................. 115
Collaboration in the Large: Using Videoconferencing to Facilitate Large-Group Interaction
Diane H. Sonnenwald, University of North Carolina at Chapel Hill, USA
Paul Solomon, University of North Carolina at Chapel Hill, USA
Noriko Hara, Indiana University, USA
Reto Bolliger, National Science Foundation STC-ERSP, USA
Thomas H. Cox, University of North Carolina at Chapel Hill, USA

Section III: Knowledge and Information Technology Management in Virtual Enterprises

Chapter IX .................................................................................... 138
A Dynamic Perspective on Knowledge Creation in Virtual Teams—In a Search for New Insights
Violina Ratcheva, The University of Nottingham, UK

Chapter X ..................................................................................... 153
The Impact of Trust in Virtual Enterprises
T. T. Wong, The Hong Kong Polytechnic University, Hong Kong
Henry C. W. Lau, The Hong Kong Polytechnic University, Hong Kong

Chapter XI .................................................................................... 169
Market of Resources as an Environment for Agile/Virtual Enterprise Dynamic Integration and for Business Alignment
Maria Manuela Cunha, Instituto Politécnico do Cávado e do Ave, Portugal
Goran D. Putnik, Universidade do Minho, Portugal
A. Gunasekaran, University of Massachusetts, USA

Section IV: Knowledge Management in E-Commerce Environment

Chapter XII ................................................................................... 192
Managing Business-Consumer Interactions in the E-World
Sushil K. Sharma, Ball State University, USA
Jatinder N. D. Gupta, The University of Alabama in Huntsville, USA

Chapter XIII .................................................................................. 214
Electronic Money and Payment Systems
Santosh K. Misra, Cleveland State University, USA
Jayavel Sounderpandian, University of Wisconsin–Parkside, USA

Chapter XIV .................................................................................. 251
A Managerial Perspective on E-Commerce: Adoption, Diffusion and Cultural Issues
Thuong T. Le, The University of Toledo, USA
S. Subba Rao, The University of Toledo, USA
Dothang Truong, The University of Toledo, USA

Section V: Human and Social Aspects of Knowledge and Information Technology Management

Chapter XV ................................................................................... 268
Human and Social Perspectives in Information Technology: An Examination of Fraud on the Internet
C. Richard Baker, University of Massachusetts, USA

Chapter XVI .................................................................................. 283
The Role of Trust in Information Technology Management
István Mezgár, Computer and Automation Research Institute, Hungary
Zoltán Kincses, Eötvös Loránd University of Sciences, Hungary

Chapter XVII ................................................................................. 305
Inexperienced Software Team and Global Software Team
Kim Man Lui, The Hong Kong Polytechnic University, Hong Kong
Keith C. C. Chan, The Hong Kong Polytechnic University, Hong Kong

Chapter XVIII ................................................................................ 324
The Knowledge Edge: Knowledge Management and Social Learning in Military Settings
Leoni Warne, Defence Science and Technology Organisation, Australia
Katerina Agostino, Defence Science and Technology Organisation, Australia
Irena Ali, Defence Science and Technology Organisation, Australia
Celina Pascoe, University of Canberra, Australia
Derek Bopping, Defence Science and Technology Organisation, Australia

About the Authors ......................................................................... 354

Index ............................................................................................ 365
Preface

Success in an increasingly competitive market depends on the quality of knowledge which organisations apply to their major business processes. For example, a supply chain depends on knowledge of diverse areas, including raw materials, planning, manufacturing, and distribution. Likewise, product development requires knowledge of consumer requirements, new science, new technology, and marketing. Knowledge is broadly defined as credible information that is of potential value to an organisation. Knowledge management (KM) is a function of generation and dissemination of information, developing a shared understanding of information, filtering shared understandings into degrees of potential value, and storing valuable knowledge within the confines of an accessible organisational mechanism.

Since 1990, organisations have increasingly focused on learning and knowledge creation. This indicates that an organisation should utilize its intellectual capacity and improve knowledge flows among its members to achieve a competitive advantage. The influence of global competitiveness and developments in information technology has led to the recognition that knowledge and the capacity to develop knowledge are the resources that have tremendous impact on achieving a sustainable competitive advantage. The learning culture is the integrated system of ideologies, values, and beliefs that provides behavioral norms for knowledge management in the organisation. The learning climate is the way that organisations operationalize knowledge management cultures. Thus, the learning climate is the behavioral manifestation of the learning culture. Essential to knowledge development is an understanding of the elements of an organisation's culture and climate that facilitate the development and maintenance of knowledge management initiatives. Critical to the flow of information and knowledge is the distinction between knowing 'how' (tacit knowledge) and knowing 'about' (explicit knowledge) in knowledge transferability.

Knowledge building involves generation and dissemination of information, followed by shared interpretation of processed information into "knowledge." Knowledge building depends not only on information processing but also on shared interpretation of information and filtering of knowledge into degrees of importance. Knowledge development includes mechanisms for evaluating the quality and usefulness of processed information, developing a shared understanding of the information, and filtering knowledge to be kept in accessible organisational memory. Knowledge management initiatives are undertaken for the purpose of achieving better organisational efficiency and effectiveness, with the goal of being able to
achieve superior performance. In this respect, spending resources on developing knowledge without a plan to use the knowledge to achieve success is of limited value to organisations. Knowledge management has performance implications at various levels of an organisation: the individual process level (such as supply chain cycle time, product development initiatives, and globalization efforts), the functional level (such as the performance of various organisational domestic and international units), and the overall performance of the organisation (such as ROI and sales growth). Knowledge management helps an organisation to gain insight and understanding of its own experience. Specific knowledge management activities help focus an organisation on knowledge acquisition, problem solving, dynamic learning, strategic planning, and decision making. Computers and communications systems are good at capturing, transforming, and distributing highly structured knowledge that changes rapidly. The KM process involves four major steps: knowledge creation/acquisition, knowledge storage/organisation, knowledge distribution, and application. However, the socio-cultural factors and information technology resources available influence the way the knowledge management process is used for improving organisational competitiveness.

The few books covering aspects of organisational knowledge and information technology management do so mainly from the viewpoint of technology. We did not come across any book that includes an in-depth focus on the human and social dimensions of knowledge and information technology management. The objective of this book is to discuss the human and social aspects of knowledge and information technology management. The book emphasizes the human and social aspects of IT-based knowledge creation and sharing, and of management systems and techniques. The book contains 18 chapters from professionals, researchers, and the business community that discuss many of the issues highlighted above in knowledge and information technology management. The chapters have been grouped into five interrelated sections:
• Introduction to knowledge and information technology management
• Integration of business and knowledge/information technology management
• Knowledge and information technology management in virtual enterprises
• Knowledge management in e-commerce environment
• Human and social aspects of knowledge and information technology management
The chapters in Section I discuss the importance of managing innovation, knowledge, and e-business, and possible sources of knowledge acquisition. Chapter 1 explains how leading innovators leverage e-business tools to harness knowledge residing in all areas of their value chain, including suppliers and customers. Also, a
case study of Cisco Systems, Inc. is used to illustrate this new operating model. Chapter 2 in this section presents a study that specifically examines the relationship among perceived accessibility, perceived task complexity, and the information-gathering behaviour of U.S. managers. One of the major conclusions resulting from this study is that the accessibility of an information source, and not the perceived complexity of the task at hand, influences the choice of source used.

Section II focuses on the integration issues between business and knowledge/information technology management. Chapter 3 examines the importance attached to information systems for the formulation of corporate strategy. It discusses the integration between information and decision-making processes. Chapter 4 presents an integrated QFD and knowledge management system for the development of a common product platform. Chapter 5 reviews the literature on knowledge management and organisational culture, and identifies 13 factors that are poised to affect the adoption of technological change, KM in particular. It integrates these factors into a two-layer model of the effects of organisational culture upon the knowledge workers in the organisation, and uses four case studies to test the efficacy of the model. Chapter 6 shows how the information resource can be used in integration issues by introducing the problem of information-based integration, proposing a solution, and illustrating the solution with an example. Chapter 7 deals with an experimental analysis of the effectiveness and efficiency of teams with partial problem domain knowledge. Teams that had partial problem domain knowledge did not perform better than teams that had no specific problem domain knowledge. Chapter 8 describes the role of videoconferencing in facilitating large-group collaboration. Both social and technical infrastructures are required to enable and empower collaboration.

Section III discusses issues related to knowledge management in the virtual enterprise. Chapter 9 contributes to the ongoing debate on the nature of knowledge creation and sharing in a distributed organisational environment, and to the current understanding of the sources of creative potential of virtual teams. The author adopts the view that an in-depth understanding of new knowledge creation depends on considering knowledge as socially constructed, or more simply stated, as embedded in the organising practices of human activities. Chapter 10 proposes a decision support system based on neural network and data mining technologies, and uses a case example to illustrate the feasibility of incorporating inter-firm trust in real industrial situations. Chapter 11 presents the role of the Market of Resources as an enabler of the process of dynamic Agile/Virtual Enterprise Integration. The authors also specify the Market of Resources' structure, creation, and operational framework.

Section IV is dedicated to knowledge management in e-commerce environments. Chapter 12 shows how the emerging knowledge management concepts can be used to create an appropriate framework for managing business-consumer relationships for understanding and retaining customers. Chapter 13 discusses the basic
requirements of electronic money as a medium of exchange in e-commerce transactions. The authors review the basic requirements of a transaction—atomicity, anonymity, durability, and non-repudiability—and discuss a payment system that is needed in order to satisfy these requirements. Chapter 14 describes and discusses the issues of adoption and diffusion of e-commerce from a macro perspective, as well as the impact of cultural issues on e-commerce.

Section V addresses human and social perspectives of knowledge and information technology management. Chapter 15 of this section focuses on one aspect of this question, namely the existence and extent of fraud perpetrated through the Internet. The chapter further discusses whether fraud using the Internet constitutes a new category of fraud or just a classic form of fraud committed through other means. Chapter 16 presents an overview of the possible attacks against information systems, and introduces tools and technologies that are appropriate to increase the level of trust of the users. Chapter 17 addresses some interesting issues on inexperienced global software teams and virtual software teams. It presents a new managerial, technical, and social approach to developing an effective global virtual software team. The last chapter, Chapter 18, sheds light on important experiences with knowledge management and social learning in military settings.

The book will be useful for professionals, researchers, and practitioners from the business community seeking an understanding of the current and emerging human resources and social issues in knowledge and IT management in the global knowledge economy. It will enable such business professionals to approach the increasingly adoptable and autonomous technologies with increased confidence. The book will also provide information technology researchers and professionals with a better understanding of how to apply information technologies to knowledge management in global e-marketing. It should also be useful to students as a text for courses in the areas of knowledge, information and IT management, as well as in human resource and social development in e-commerce environments. Readers should be able to understand the challenges arising from the new technologies, the growing demand for the development of human resources, the social implications, and performance measurement in such an environment.

Angappa Gunasekaran, PhD
University of Massachusetts, Dartmouth, Massachusetts, USA

Omar Khalil, PhD
University of Massachusetts, Dartmouth, Massachusetts, USA

Syed Mahbubur Rahman, PhD
Minnesota State University, Mankato, Minnesota, USA
Acknowledgments

The editors would like to acknowledge the help of all involved in the collation and review process of the book, without whose support the project could not have been successfully completed. A further special note of thanks goes also to all the staff at Idea Group Publishing, whose contributions throughout the whole process from inception of the initial idea to final publication have been invaluable. In particular, thanks go to Michele Rossi, who continuously prodded us via e-mail to keep the project on schedule, and to Mehdi Khosrowpour, whose enthusiasm motivated us to initially accept his invitation to take on this project.

We wish to thank all of the authors for their insights and excellent contributions to this book. Most of the authors of chapters included in this edited book also served as referees for articles written by other authors. Thanks go to all those who provided constructive and comprehensive reviews.

In closing, we are grateful to our parents who by their unconditional love have steered us to this point, and to our wives, sons and daughters who have steadfastly supported us throughout this project. Special thanks go to Ms. Latha Gunasekaran for her diligent work in assisting the review process and the compilation of all the chapters, and for helping to maintain all the documents related to this book over a period of two years.

Angappa Gunasekaran, PhD
University of Massachusetts, Dartmouth, Massachusetts, USA

Omar Khalil, PhD
University of Massachusetts, Dartmouth, Massachusetts, USA

Syed Mahbubur Rahman, PhD
Minnesota State University, Mankato, Minnesota, USA
Section I
Introduction to Knowledge and Information Technology Management
Chapter I
Developments in Managing Innovation, Knowledge and E-Business

Benn Lawson and Danny Samson
The University of Melbourne, Australia
ABSTRACT

Successful innovation is seen as a top priority within many organisations (Porter, Stern, & Council on Competitiveness, 1999). Innovation is the mechanism by which organisations produce the new products, processes and systems required for adapting to changing markets, technologies and modes of competition (D'Aveni, 1994; Dougherty & Hardy, 1996; Utterback, 1994). This process requires the application of knowledge in some new or novel way. However, the explosion in knowledge and increasingly specialized technologies and markets has meant that a single firm alone is unlikely to possess all the relevant knowledge required to innovate. Consequently, organisations have been searching for new ways of overcoming this difficulty. One such mechanism is harnessing information technology to facilitate new organisational structures suited to managing innovation and knowledge into the 21st century. Information technology has expanded the opportunities (and challenges) of undertaking innovation. Twenty-four-hour product development processes are now a reality as multi-nationals race new products to the market. This chapter shows how leading innovators leverage e-business tools to harness knowledge residing in all areas of their value chain, including suppliers and customers. A case study of Cisco Systems, Inc. is used to illustrate this new operating model.
BACKGROUND

Broadly viewed, innovation and particularly technological progress have been key drivers in the development of human civilization and the increasing standard of living and productivity witnessed through the 20th century. Entire new industries and fields of knowledge, like biotechnology, communications and information technologies, have developed and subsequently revolutionized everyday life.

Innovation as a concept has vast scope and depth. This chapter focuses particularly on the management of innovation at the organisational level. Within an organisation, innovation may range from the highest levels of capital investment producing radically new products through to the most basic changes in product design, manufacture, supply or packaging. An innovation might even involve the use of marketing to modify the perceptions of an organisation's customers. Some innovations create new industries and destroy others (Utterback, 1994). Innovation is not just about the commercialization of new technologies, and is thus broader than a research and development function. It includes the myriad of small and large changes constantly made to products, processes, administration, systems and markets.

In an environment where control over quality, cost and flexibility is becoming commonplace, innovation is increasingly recognised as an important strategic dimension. In fact, innovation may ultimately become the only sustainable advantage available to an organisation. This is due to innovation being fundamentally a dynamic concept with advantage coming from innovating faster and more effectively than competitors (Collins & Smith, 1999). This target is becoming more difficult as organisations learn and improve their innovation processes. Organisations today require an increasing commitment to innovation simply to stay in the same position, much less improve (Porter et al., 1999).

In the search to increase the effectiveness of their innovation processes, leading innovators are looking for ways to harness the knowledge residing in their organisation and externally within their network of suppliers, customers and other institutions. Businesses no longer compete based on the assets they possess, but rather on their ability to harness and diffuse knowledge throughout their operations. Drucker (1993) observed that productivity improvements in the 21st century arise from the application of knowledge to knowledge. This is in contrast to the industrial economy of the 20th century where productivity improvements were achieved through the application of knowledge to natural resources, labor and machines. Knowledge has thus driven increases in productivity and competitiveness. At a macro-level, innovation is the means by which knowledge is incorporated into economic activity.

Knowledge lies at the heart of the innovation process. It relies heavily on the creation, utilization and diffusion of knowledge within companies and new product development processes (Cohen & Levinthal, 1990; Fiol, 1996; Teece, Pisano & Shuen, 1997). Knowledge is viewed as an important means of linking and leveraging the different areas of the business (Cohen & Levinthal, 1990). Nonaka and Takeuchi (1995) summarize the link between knowledge and innovation stating that "successful companies can create knowledge, disseminate it through the
organisation and embody it in new technology and products. These activities define a knowledge-creating company, whose sole business is continuous innovation."

The ability of the firm to absorb knowledge from internal and external sources and incorporate it into products and services is termed its absorptive capacity (Cohen & Levinthal, 1990). This concept recognises that knowledge about resources, capabilities, markets, technologies and opportunities is generated not only from internal sources, like research and development, but also from external sources such as customers and suppliers. Successful innovators with high absorptive capacity are able to incorporate knowledge effectively into their products and services. Hargadon and Sutton (2000) describe the process of innovation within a firm possessing this high absorptive capacity. Their study examined the knowledge-brokering cycle used by IDEO, a leading product design firm, to capture good ideas, keep them alive, imagine new uses for old ideas and put promising concepts to the test. Innovation in this case occurs through information and knowledge sharing, supported by sourcing ideas externally or internally. Other examples of companies that have learned to transfer knowledge and technology from one product platform to another include Motorola, which built on its portable pager business to develop cellular phones, and Corning, which used its expertise in glass technology to become a world leader in optic fibre production.

The positive relationship between the possession of knowledge and innovation has been well documented. However, many organisations, particularly those with decentralized structures, require a means of facilitating the transfer and diffusion of this knowledge to enable innovation. This is achieved by harnessing information technologies. Information technology helps knowledge become visible at all levels of the organisation. Boynton (1993) calls information technologies 'systems of scope' in the sense that they help in the sharing of global knowledge in the firm. The application of information technologies provides immense scope for innovation to discard old processes, diffuse local innovations globally, remove constraints to innovation and create entirely new innovative practices and models (Metz, 1999). Indeed, if we imagine the business as a human body, e-business is the central nervous system linking all the limbs and muscles together and transferring information instantaneously throughout the body. The use of information technologies in such a manner is often cited as one of the key drivers of increased productivity worldwide. These systems include knowledge management systems (KMS), supply chain management (SCM) and customer relationship management (CRM).

Much research has focused on the positive benefits of information technologies for the identification, codification, diffusion and application of knowledge residing throughout an organisation. However, relatively little attention and research has focused on the relationship between information technology, knowledge management and innovation. The remainder of the chapter looks at this relationship through an examination of the evolution in organisational structures for innovation. A new innovation-oriented structure termed "network innovation" is used as an example of the effective application of information technology, facilitating knowledge and enabling superior innovation outcomes.
The most innovative companies have recognised this relationship. They view information technology not merely as a means of achieving short-term cost reductions and efficiencies, but rather, as a longer-term mechanism for gathering, processing and diffusing knowledge throughout the organisation and its external partners. The application of this knowledge increases the potential for successful innovation. Innovation is innately a process of human thought and creativity enhanced by information technologies aimed at enabling people within the company and network to communicate effectively and improve innovation outcomes.
EVOLVING FORMS OF ORGANISING FOR INNOVATION

Tremendous growth in knowledge and technology has underpinned the many innovations that revolutionized society. The techniques pursued for managing innovation have also had to evolve concomitantly with this growth in knowledge and technology. These stages of evolution in the nature of innovation are discussed below.
The Lone Innovator

In the late 19th and early 20th centuries, innovation was driven predominantly by immensely talented, creative and visionary individuals. Inventors such as Thomas Edison, Alexander Bell and Henry Ford formed their own laboratories and factories to commercialize their product ideas. At no other time in history could a single individual possess not only the knowledge, but also the ability to create (and destroy) entire industries. The foundation of many of the great companies of today, like General Electric and AT&T, was built during this period. However, aside from the original product idea, innovation did not feature heavily in most companies' strategies or operations. These companies were typically vertically integrated and heavily involved in all aspects of their value chain. The ability to produce for the mass market at low cost was paramount, and the management of knowledge was not systematically considered.
The Organisation as the Innovator

As the 20th century progressed, the breadth and depth of knowledge and resources required to innovate increased considerably. No longer could a single individual maintain competence in increasingly specialized markets and technologies. The focus of innovation had to shift from individual pursuits toward collective effort housed within an organisational structure. In many cases only large organisations have the financial resources and ability to coordinate the myriad of skills required to bring an innovation successfully to the market. The geographic reach and scale enabled by global corporations have been critical to the success of many innovations.
Highly innovative companies view innovation as a mechanism for creating new knowledge. Innovation is not seen as a user of scarce resources with uncertain paybacks. These companies recognise that the business units producing profits today may not be the best opportunities for business tomorrow. Disney, Sony, 3M, Hewlett-Packard, ABB and Intel are seen as prime examples of such organisations. These companies possess a broad competence base, enabling them to produce new products and services more quickly and at lower cost than their competitors. They incorporate explicit processes and strategies for systematic, continuous innovation. Successful innovators have been able to redefine industries and product portfolios time and time again, consistently outperforming their competitors.

Knowledge management enabled by information technologies has come to the fore in this operating model. Innovative companies have generally adopted a personalization strategy for managing knowledge (Hansen, Nohria & Tierney, 1999). They invest in information technology for the purpose of facilitating conversations and the exchange of tacit knowledge. To make these relationships work, they invest in building networks of people. Knowledge is shared face-to-face, but also over the telephone, email and videoconferencing. Although other strategies like codification may work for other purposes, innovation requires original thought and iterative problem solving. Companies following an innovation strategy must have technical knowledge transferred to product development teams. These knowledge management strategies are distinct from the operational information systems required to support manufacturing and value chain operations.

The vast majority of research into managing innovation and knowledge has been focused on this operational level. This view treats firms as autonomous entities searching for competitive advantage from external sources or their own internal resources and capabilities (Gulati, Nohria & Zaheer, 2000). Research and development, internal investments and organisational systems have all supported internal innovations. More recently, the importance of developing effective exchange relationships with units and individuals beyond the boundaries of the firm has been investigated. This has been in response to the recognition that firms are increasingly competing as networked innovators rather than as single organisations.
THE NETWORK AS THE INNOVATOR

In today's world of rapid change, uncertainty and hyper-competition, a new breed of innovative company is beginning to emerge. The continued and unrelenting trend of knowledge creation has meant that in many industries it is no longer possible for a single organisation to possess all of the required information, knowledge and skills to innovate effectively. As a consequence, this new breed of organisation is extending its sphere of innovation to incorporate its network of suppliers, customers and institutions. These network innovators have split their activities internally and in effect outsourced part of their innovation processes. Each node of the network becomes
a source of unique knowledge and skills, managed via electronic systems. This structure allows the organisation to 'capability shop,' leveraging external sources of knowledge and expertise into its new product development and innovation processes. Quinn (2000) describes the process of outsourcing innovation and the options available to firms in innovating along the various stages of their value chain. This model of innovation capability is consistent with Rothwell's (1992) conceptualization of a fifth-generation innovation process reflecting much more intensive use of information technologies to create and reinforce internal and external linkages.

This new and dispersed organisational form presents many new challenges and opportunities for innovation. Many of the techniques for managing innovation in contemporary organisations cannot be easily applied, particularly where each company may bring different histories, norms and understandings. A mechanism is required to bring these elements together, and this is provided by the application of information technologies. Networked innovators use information technology as the glue to bind the various nodes of the network together. Close relationships and alliances with customers and suppliers are linked and managed via e-business tools, like Customer Relationship Management (CRM) and Supply Chain Management (SCM). This has resulted in reduced product development cycle times, lower costs and stronger products that better meet market demands.
Management of the Network

At the level of competitive advantage, this approach to innovation networking represents a core capability of the firm. Networks constructed by organisations are idiosyncratic and path dependent (Gulati & Gargiulo, 1999), making them difficult for competitors to imitate or substitute. This is particularly so for innovation networks. Similarly, the resources accessed within the networks are idiosyncratic, arising from the combination of unique nodes of the network available to the firm. Thus, competitive advantage can be more broadly viewed as arising from a firm's networks and the resources they can access.

Additionally, the management of these networks is a new capability that these innovators must understand. The success of the network itself is hugely dependent on the implicit relationships and personal contacts among top managers, alliance managers and project champions. The need to successfully manage these relationships should be continually communicated to all parties to prevent politics, minor difficulties and relationship problems from escalating. Effective network management facilitates the process of transferring information and knowledge between parties through personal contact and information systems. The use of information technologies improves the company's capabilities and ability to innovate by interacting with best-in-world knowledge sources throughout its network. These organisations leverage their internal innovation capabilities many times over through effective IT, and personal and motivational links with outside sources (Quinn, Zien & Baruch, 1997). Information systems capture data and ideas
from all sources in the network, giving the company an innovative edge beyond its own capabilities. Some suggest these benefits are in the order of 10 to 100 times. These companies use information technologies not just to draw on knowledge, but also to improve it and apply it to new technological breakthroughs, pushing the firm into new markets and the creation of new industries. Utilizing the linkages with external parties via information technology, and the enhanced monitoring offered by new systems, organisations like Cisco and Dell can decrease innovation cycle times and costs, and decrease investments and risks by equal amounts, enhancing the value of their innovations to customers and shareholders.

Ultimately, just as firms no longer have the need to be vertically integrated in a production sense (i.e., outsourcing parts of their supply and production), so it is for innovation itself. The great potential for at least partially outsourcing innovation lies in combining the depth of knowledge and systems that suppliers and other partners can bring, with the deep understanding of customer needs and market insights residing within the organisation. Innovation outcomes, and the business performance that results, will provide the proof of the value of innovation networks.
Harnessing Suppliers' Capabilities

Outsourcing innovation allows selected suppliers to innovate freely within the organisation's product and process development models. Even large manufacturers across many industries have used their network of suppliers and complementary associations to innovate more successfully. Examples include aerospace (Boeing/Airbus), transportation (Ford/Chrysler), communications (AT&T) and energy (Mobil). These companies have substantially outsourced product development to their suppliers, retaining control only over their core capabilities. Moreover, Internet software companies routinely release beta versions of their products, encouraging thousands of external parties to use their own resources to error-check, problem solve and develop new applications which add further value to their product. Examples include Netscape with its Navigator browser, Apple with OS X and Microsoft with Windows. This interactive development process allows the company to improve development times, reduce market risk, incorporate customer insights, achieve customer commitment and thereby enhance the chances of a successful innovation.
Understanding Customer Needs

There is clearly a difference between those firms that achieve only a minimal level of customer satisfaction measurement and those that strive to be at the leading edge of driving the creation of new requirements. Sony and 3M are well known as companies that do not wait and follow, but rather work with lead customers to 'stay ahead of the curve.' Indeed, each of these companies has been known to proactively create the next curve, well ahead of customers consciously knowing their requirements. Almost without exception, successful innovators bring the 'voice of the customer' pervasively into their innovation and product design processes. They
work with lead customers to create and specify new requirements, then match product/service designs and production capabilities to these.
Enabling Social Networks: Human Resource Aspects

The application of information technology has built up the social capital critical to innovation by linking employees globally in a local manner. For example, Boeing broke through in its use of the state-of-the-art digital design computer system called CATIA for the development of its new 777 commercial jet family. In addition to the many technical benefits, which dramatically reduced new product development cycle time and errors, the system also linked Boeing engineers worldwide for the first time. CATIA contained built-in flags and alerts enabling engineers to link worldwide and form a personal innovation network within the company where none had previously existed. These engineers were now able to exchange thoughts and ideas at a personal level through information technology. Suppliers were also brought into this network.

In such environments, people work collaboratively with their colleagues both within and outside their firms. Firms such as ABB, renowned particularly in Europe for their excellent leadership, encourage their staff to take risks, such that mistakes are acceptable so long as their 'batting average is high' in decision making. Such firms encourage creativity in their staff and find ways to keep their technical staff challenged and satisfied in a technical stream of work. Such companies adjust their human resource strategies of reward, recognition, promotion and performance measurement to build in the collaborative culture and creativity aspects of innovation that are required to achieve effective inter-firm as well as intra-firm innovation.

One of the significant success factors is achieving the right balance between technical excellence and business focus and excellence. Further, it is not enough to achieve these two things separately; in truly great innovator companies, technical prowess and business focus are closely integrated. While systems and formal processes can help with the achievement of this integration, the importance of human resource policy settings cannot be overstated. The attitudes and behaviors of people, and the incentive systems within which they work, are key to the achievement of successful innovation that includes the commercial aspects as well as the technical success of new products and processes.
Risks of Networking and Outsourcing Innovation

Quinn et al. (1997) report on three practices that most prevent the benefits from innovation being realized within such networks. These are the insistence on the use of standardized practices, unwillingness to move from hourly charge rates to value pricing and, finally, attempting to manage the 'how' and not the 'what' of innovation. Firms are also concerned about the risks involved in outsourcing their development operations to suppliers. If not properly managed, this may mean they lose the advantages of outsourcing innovation in terms of integrated systems, higher quality, lower cost, improved flexibility and minimal front-end investment (Quinn et al., 1997). However, many firms also don't take into account the risks and costs they
regularly incur by not incorporating external parties, like lost innovation, delays, poor design and inability to access ‘world’s best’ knowledge. Similarly, the commitment of management time, effort and investments can be underestimated.
CASE STUDY – CISCO SYSTEMS, INC.

Cisco Systems is illustrative of a new breed of innovative organisation that has e-business at the heart of its operating model. Cisco uses its substantial investment in innovation capability and information technology infrastructure as its primary engine of wealth creation. The company is not focused on building up physical assets whose use is limited. Instead, it uses knowledge, one of the rare commodities whose value grows the more it is applied, to innovate. This has enabled Cisco to become one of the most successful companies in history, with its stock rising on average 80% annually to become a world leader in network routers.

The very rapid growth rate experienced by Cisco over the past decade would stress most organisations beyond breaking point. The implementation of information technology and e-business tools has enabled Cisco to cope with the ramp-up of operating a large business while remaining flexible in the dynamic world of network technologies. The company has evolved to become what we term an 'innovation engine,' outsourcing approximately 85% of its manufacturing and focusing its attention on managing the strategic technologies and uncertainties of its business. In fact, Cisco competes as a flexible network of companies with central direction, using information technology to link the different parts. This is illustrated in Figure 1.

Cisco Systems links its manufacturing sub-contractors to an Enterprise Resource Planning (ERP) system providing real-time production and forecast sales data. Its subcontracted manufacturers possess the same information as Cisco, enabling the company to manage production far more effectively, and freeing up Cisco management to concentrate on key strategic issues and uncertainties. It is estimated that only 20% of Cisco products pass through a Cisco warehouse.

[Figure 1: Conceptualisation of the Cisco Systems Innovation Network. Nodes linked to Cisco Systems, Inc. (internal innovation; management of their innovation network) include acquisitions (technology and intellectual property), joint ventures/alliances, manufacturing sub-contractors, resellers, customers and the Configurator.]
Cisco also uses its suppliers to promote the development of innovative new components and hardware and software innovations. For example, Cisco has formed the Cisco Hosting Applications Initiative (CHAI), made up of around 30 vendors and service providers, with the aim of developing new router technology and optimizing performance on current products.

In addition, Cisco manages most of its customer relationships via the Internet. The product delivered to the customer is also managed through a program called the Cisco Configurator, which translates customer needs into hardware specifications. The Cisco website also links customers and their problems in a discussion group format, in effect outsourcing much of the customer sales support function. If major customer problems aren't resolved within an hour, Cisco's engineering managers receive an email and follow up with personal phone calls to the customer. The company also maintains contact with leading customers through its network of resellers, which is carefully managed. This greater depth of knowledge and the numerous inputs of customer problems, issues and thoughts through the Configurator and website have enabled Cisco to become more innovative rather than simply remaining internally focused and R&D oriented. Cisco concentrates on developing a deep understanding of customers and customer support needs. A shared information system enables efficient and enhanced relationships with upstream suppliers of components and products. The common language enforced through information systems improves human communication, and enables knowledge from all areas of the network to be captured and shared with a speed and accuracy that paper-based knowledge systems cannot match.

Like many high-technology companies operating in markets with rapidly advancing technologies, Cisco Systems, Inc. cannot do everything itself, and the company is well aware of this. The company grows by buying the products, technologies and intellectual property it can't or doesn't want to develop. Research and development efforts (around 13% of revenues) are largely focused on integrating these acquired technologies. In fact, Cisco has acquired more than 51 companies in the past 6.5 years, 21 of them in the last 12 months. Cisco has also entered into strategic alliances with many other companies, including EDS, IBM, Hewlett-Packard, Microsoft and Motorola, to access additional market and technological knowledge. Notably, Cisco has invested where it can add value for its customers, namely in access to new technologies. Cisco generally does not favor one technology over another, but instead listens to customers, offers multiple technology alternatives and provides customers with the flexibility to choose. In the highly uncertain environment of network routers, Cisco has avoided committing to expensive infrastructure and inventory, preferring to outsource many of these functions to suppliers who possess core capabilities in these areas.

Much of the meteoric rise of Cisco Systems can be attributed to its use of information technology to manage knowledge. The company has avoided the trap many companies fall into when implementing information technology: ignoring the people perspective. Cisco recognises when personal contact needs to overrule the use of information technology. This is particularly so when dealing with new
acquisitions and innovation. By doing so, the company retains a flexibility and growth rate rare in the industry. The structure of Cisco and its network of suppliers enables the company to support innovations across multiple technologies and customer demands that Cisco alone could not achieve. A framework illustrating the relationship between knowledge, e-business and innovation within Cisco Systems is shown in Figure 2. Cisco Systems integrates its innovation network by harnessing e-business tools and techniques, including Supply Chain Management and Customer Relationship Management, to enable successful knowledge transfer throughout its network of customers, suppliers and other partners. This has led to superior innovation outcomes and excellent business performance. These tools and techniques, however, do not represent Cisco’s strategy. Rather, they enable Cisco to operate a business model structure that provides competitive advantage in their industry.
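The Configurator described above is, in essence, a rules-based mapping from stated customer needs to a hardware specification. The short Python sketch below is purely illustrative: the product names, options and selection rules are invented for exposition and do not describe Cisco's actual system; it is included only to make the idea of such a configurator concrete.

# Purely illustrative sketch of a product "configurator" of the kind described
# above: it maps stated customer requirements to a hardware specification.
# All product names, options and rules here are invented for exposition.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Requirements:
    ports_needed: int            # number of network ports the customer needs
    wan_uplink: bool = False     # whether the site needs a WAN uplink module
    redundant_power: bool = False


@dataclass
class Specification:
    chassis: str
    modules: List[str] = field(default_factory=list)


def configure(req: Requirements) -> Specification:
    """Translate customer requirements into a hardware specification."""
    # Pick the smallest (hypothetical) chassis that satisfies the port count.
    chassis = "edge-router-8" if req.ports_needed <= 8 else "edge-router-24"
    spec = Specification(chassis=chassis)

    # Add optional modules implied by the stated needs.
    if req.wan_uplink:
        spec.modules.append("wan-uplink-module")
    if req.redundant_power:
        spec.modules.append("redundant-psu")
    return spec


if __name__ == "__main__":
    order = Requirements(ports_needed=16, wan_uplink=True)
    print(configure(order))
    # Specification(chassis='edge-router-24', modules=['wan-uplink-module'])

Captured in this structured form, every customer order becomes data that can be fed back into product development, which is the knowledge-capture point the chapter is making about the Configurator and website.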
FUTURE DIRECTIONS FOR RESEARCH
Over the past century, high-performing innovators have evolved their operating structures to facilitate creativity and innovation. Much of this evolution in structure has attempted to identify, manage and apply the world's growing knowledge base. This trend shows no signs of slowing. As the leading innovators illustrate, there are immense benefits from outsourcing innovation to suppliers, customers and other institutions. However, just as networks increase the potential for innovation, so too do the challenges, and these companies are learning new skills to manage this process. Innovation for competitive advantage is inherently a dynamic game. We can expect to see more expansive and tighter linkages for innovation in all aspects of the value chain. As the next decade unfolds, managers and governments will pay more attention to how best to manage innovation within, and particularly across, firms and industries.
Additional research is required to investigate how leading innovators manage their network structures for maximum benefit. This research should extend beyond the financial and non-financial improvements in firm performance achieved through information technology to the human and socialization aspects. The importance of face-to-face contact in managing the network and unlocking the knowledge residing within it has already been shown. The literature shows positive relationships between knowledge and innovation, and between information technology and knowledge; future research should focus on the interactions among these variables. While the opportunities have developed first in rapidly growing industries such as information technology, telecommunications and biotechnology, research is needed to test the value and validity of innovation networks in segments that are not growing so quickly.
CONCLUSIONS
The challenges of innovating in today's environment are immense. The knowledge required to innovate has become increasingly specialized in most markets and technologies. Organisations have had to evolve their structures and processes to best use this distributed knowledge and achieve superior innovation outcomes. The modern breed of innovative company pursues a network approach to innovation, linking the nodes together via information technology. Although they rely heavily on information technology for knowledge management and for control over operational processes, these companies recognise that innovation is a human process requiring individual and group thought and interaction. Consequently, information technology is used as a means of enabling social interactions and the free exchange of ideas and knowledge, rather than solely as a tool for cost reduction and efficiency.
REFERENCES
Boynton, A. C. (1993). Achieving dynamic stability through information technology. California Management Review, 35(2), 58-77.
Cohen, W. M. and Levinthal, D. A. (1990). Absorptive capacity: A new perspective on learning and innovation. Administrative Science Quarterly, 35(1), 128-152.
Collins, J. and Smith, D. (1999). Innovation metrics: A framework to accelerate growth. Prism, First Quarter, 33-47.
D'Aveni, R. A. (1994). Hypercompetition: Managing the Dynamics of Strategic Maneuvering. New York: The Free Press.
Dougherty, D. and Hardy, C. (1996). Sustained production innovation in large, mature organisations: Overcoming innovation-to-organisation problems. Academy of Management Journal, 39(5), 1120-1153.
Drucker, P. F. (1993). The New Society: The Anatomy of Industrial Order. New Brunswick, NJ: Transaction Publishers.
Fiol, C. M. (1996). Squeezing harder doesn't always work: Continuing the search for consistency in innovation research. Academy of Management Review, 21(4), 1012-1021.
Gulati, R. and Gargiulo, M. (1999). Where do interorganisational networks come from? American Journal of Sociology, 104(5), 1439-1493.
Gulati, R., Nohria, N. and Zaheer, A. (2000). Strategic networks. Strategic Management Journal, 21, 203-215.
Hansen, M. T., Nohria, N. and Tierney, T. (1999). What's your strategy for managing knowledge? Harvard Business Review, March-April, 106-116.
Hargadon, A. and Sutton, R. I. (2000). Building an innovation factory. Harvard Business Review, 157-166.
Metz, P. (1999). Innovation in a wired world. PRISM: The Journal of Arthur D. Little, First Quarter.
Nonaka, I. and Takeuchi, H. (1995). The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation. New York: Oxford University Press.
Porter, M. E., Stern, S. and Council on Competitiveness. (1999). The New Challenge to America's Prosperity: Findings from the Innovation Index. Washington, DC: Council on Competitiveness.
Quinn, J. B. (2000). Outsourcing innovation: The new engine of growth. Sloan Management Review, Summer, 13-28.
Quinn, J. B., Zien, K. A. and Baruch, J. J. (1997). Innovation Explosion: Using Intellect and Software to Revolutionise Growth Strategies. New York: The Free Press.
Rothwell, R. (1992). Successful industrial innovation: Critical factors for the 1990s. R&D Management, 22(3), 221-240.
Teece, D. J., Pisano, G. and Shuen, A. (1997). Dynamic capabilities and strategic management. Strategic Management Journal, 18(7), 509-533.
Utterback, J. M. (1994). Mastering the Dynamics of Innovation: How Companies Can Seize Opportunities in the Face of Technological Change. Boston, MA: Harvard Business School Press.
Chapter II
Sources of Knowledge Acquisition by U.S. Managers: An Empirical Analysis

Jaideep Motwani, Grand Valley State University, USA
Pradeep Gopalakrishna, Pace University, USA
Ram Subramanian, Grand Valley State University, USA
ABSTRACT
Questionnaire surveys of 156 U.S. managers are used to study knowledge acquisition behaviors. The study specifically examined the relationship among perceived accessibility, perceived task complexity, and the information-gathering behavior of U.S. managers. One of the major conclusions resulting from this study is that the accessibility of an information source, and not the perceived complexity of the task at hand, influences the choice of source used. Other study results are discussed and implications are offered for practicing managers. In addition, a knowledge management framework based on perspectives of the various management disciplines is also presented.
INTRODUCTION
Knowledge management (KM) has become the latest strategy in increasing organisational competitiveness. It is the most innovative, creative, and important
management concept to come along in the last 25 years. It does not imply downsizing, restructuring, or reorganising; rather, KM reflects a point made by Lew Platt, former CEO of Hewlett-Packard (HP): "If HP knew what HP knows, we would be three times as profitable" (Coates, 2001). Researchers are calling it the only solution for competitive advantage in the new century (Evans, 1997; Hedlund, 1994; Hibbard, 1997; Martinez, 1998; Trussler, 1998). According to Robert H. Buckman, CEO of Buckman Labs, the purpose of the KM and sharing system at his corporation is to "facilitate communication across all of the organisation's boundaries, so that the entire company works together to help everyone to be the best they can be" (Buckman, 1998, p. 11).
Many forward-thinking companies are realizing the value in systematically capturing, analyzing, archiving, and distributing knowledge. From Motorola's Six Sigma program to the integrated KM systems of today, firms have derived substantial value from effectively managing their knowledge assets. A recent survey by Ernst & Young's Center for Business Innovation and Business Intelligence reports that 94% of the respondents admit they could better use the knowledge in their companies through more effective management, 40% have KM systems up and running or in development, and 25% have plans to develop KM strategies in the next year (Hibbard, 1997, p. 2; Evans, 1997, p. 2). A survey by the Delphi Consulting Group in Boston reports even stronger results, with 70% of the companies it surveyed saying they plan to make their first investments in KM in the next one to three years (Hibbard & Carrillo, 1998). According to a recent study by Ipsos-Reid and Microsoft Canada Co., a majority of Canadian business leaders indicate that KM practices have created value by improving organisational effectiveness, delivering customer value, and improving product innovation and delivery (Anonymous, 2001). The study reveals that 65% of Canadian companies practicing KM believe it has given their organisation a competitive advantage.
The attractiveness of the term KM appears to have been prompted by three major forces, according to the Knowledge Management Resource Center (www.kmresource.com): the increasing dominance of knowledge as a basis for organisational effectiveness; the failure of financial models to represent the dynamics of knowledge; and the failure of information technology by itself to achieve substantial benefits for organisations.
Knowledge can be characterized in many ways. Popular taxonomies distinguish between tacit and explicit knowledge, general and situated context-specific knowledge, and individual and collective knowledge (Spender, 1996). Knowledge sources may lie within or outside the firm. Internal knowledge may reside within people's heads; embedded in behaviors, procedures, software, and equipment; recorded in various documents; or stored in databases and online repositories. Common sources of external knowledge include publications, universities, government agencies, consultants, and knowledge brokers, among others (Zack, 1999).
There are two prominent themes dominating the field of KM: knowledge creation and knowledge use. The latter provided the initial spur for the field and still dominates academic and practical discourse. Interest in knowledge creation,
however, is increasing noticeably. According to von Krogh (1998), "Knowledge creation is the key source of innovation in any company."
How organisations obtain relevant information is crucial to the development of an empirical theory of organisations. Specifically, this chapter focuses on the process and tools whereby information can be captured, communicated, and analyzed to create useful knowledge. In this study, we look at how top managers of U.S. companies acquire knowledge about the external business environment. We identify the variables that influence an individual manager's decision to use a particular information source for acquiring external information. The literature is equivocal on whether the accessibility of information or the complexity of the task at hand is the key determinant of the source used (Culnan, 1983). This has particular importance to the field of KM because information collection is the first step in the process, and the source of information may very well impact the quality of information collected and hence the knowledge created by the organisation.
The chapter is organised into the following parts: the next section describes the KM literature with particular reference to the process of collecting information. The subsequent section focuses on the study methodology and results. The final section discusses the study's findings.
THEORETICAL FRAMEWORK ON KM
The existing literature on KM, especially the knowledge creation literature, can be classified based on the perspectives of the various management disciplines (strategic management, organisational behavior, production management, and information management). Below is an overview of how these disciplines view KM.
Strategic Management Perspective
Winter (1993) argues that organisational knowledge and competence are forms of strategic asset that, appropriately deployed, enhance the firm's long-run adaptation in the face of competitive and other environmental contingencies. The strategic management perspective of the knowledge creation literature centers on the resource-based theory of the firm (Wong, 2000). The resource-based strategy paradigm emphasizes distinctive, firm-specific, and thus hard-to-copy assets, skills, and knowledge. These are referred to generically as core competencies or distinctive capabilities that confer competitive advantage on the firm that possesses them. Resource-based thinking about the firm (Pitt and Clarke, 1999) equates capability with the firm's exploitation of its tangible and intangible value-generating assets and resources. Proponents of resource-based theory suggest that knowledge-based advantages are difficult to imitate when the reasons for superior performance cannot be identified or controlled (Dierickx and Cool, 1989; Gulati et al., 2000; Lippman and Rumelt, 1982). Advocates of the theory maintain that resources that are well protected from imitation can be a durable source of advantage, and several authors
have discussed numerous mechanisms that increase the cost of replication (Barney, 1991; Ghemawat, 1986; Mahoney and Pandian, 1992) and classes of resources that are inherently tough to copy (Barney and Hansen, 1994; Castanias and Helfat, 1991). Causal ambiguities (Teece, 1998; Winter, 1987), concepts of knowledge base and intellectual capital (Grant, 1996; Tsoukas, 1996; Stewart, 1997; Teece, 1998; Sullivan, 1999), and the occurrence of knowledge creation in strategic alliances (Contractor and Lorange, 1998; Kogut, 1988; Phan and Peridis, 2000) have also been a focus of studies of knowledge resources in the strategic management field.
Organisational Behavior Perspective
The field of organisational behavior views knowledge creation from the perspective of organisational learning (March, 1991; Nelson and Winter, 1982; Nonaka, 1994; Spender, 1996). Organisational learning is about how organisations can gain a better action repertoire in increasingly complex and dynamic environments by expanding their knowledge base (De Geus, 1988; Fiol and Lyles, 1985; Nonaka and Takeuchi, 1995). In these environments it is not the knowledge itself, but the learning capabilities, that determine effectiveness (Grant, 1996). Although many authors demonstrate the importance of organisational learning, the learning needs concept has surprisingly not been explicitly defined. Four approaches to learning needs are recognized here: (1) knowledge gap analysis for identifying strategic knowledge needs (Helleloid and Simonin, 1994); (2) classification of problems to select operationally required knowledge and skills (Tampoe, 1994); (3) coping with organisational tremors and jolts by anticipation, response, and adjustment of behavioral repertoires (Meyer, 1982); and (4) decisional uncertainty (contingency) measurement (Duncan and Weiss, 1979).
While organisational knowledge can take several forms, knowledge is generally referred to as either explicit or tacit (Buckman, 1998; Hedlund, 1994; Hibbard, 1997). Explicit knowledge is that which is already documented: located in files, manuals, databases, etc. Tacit knowledge, called by some "the greatest knowledge base in any company," is that which is tucked away in employees' heads (Buckman, 1998, p. 12). By accessing, sharing, and implementing both explicit and tacit knowledge, organisations can influence behavior and achieve improved performance both individually and organisationally, and "the more effective organisations are at learning, the more likely they will be at being innovative" (Argyris, 1992).
Production Management Perspective
Knowledge acquired by an organisation over long periods of time is a valuable asset of the organisation concerned. In the world of manufacturing, product design knowledge is vital for manufacturers in maintaining competitive advantage and ensuring the commercial success of the enterprise. Leveraging the design knowledge associated with their products is especially critical for small and medium-sized enterprises (SMEs), which often operate under difficult conditions.
Various research scholars interested in the process of technological innovation have also initiated research pertaining to the process of knowledge creation in the production management field. According to Wong (2000), “the process of knowledge creation is intimately linked to the process for its use and transformation into products and services through the concept of innovation.” Innovation research demonstrates the need for firms to have complementary assets or other receptive technical capacity in order to translate new technology into commercial success (Thorburn, 2000). These assets are both formal and informal, or tacit, and need to be embedded in an organisation if it is to build its core competencies (Lei et al., 1997). Also, the success of formal technology licensing can be increased when tacit knowledge is transferred at the same time (Wong, 2000).
Information Systems Perspective
Advances in information technology have propelled much of the excitement around KM. Information technology has provided new tools to better perform the activity of building knowledge capital. Two important areas in particular have contributed to the birth of modern KM systems: communication (or network) technologies and relational databases (Sarvary, 1999). When these tools are employed, people start thinking explicitly about the underlying business processes. Where does information originate? What parts of the process can be or should be automated? Is the process as it stands today worth automating, or should a new process be built? Essentially, information technology has a critical role in raising this consciousness because its use requires the firm to re-evaluate the entire KM process and its role within the firm. The combination of information technology and co-aligned organisational processes can significantly enhance learning and competitive advantage. Knowing how to create, select, interpret, and integrate information into a usable body of knowledge is the focus of this discipline (Borghoff and Pareschi, 1998; Liebowitz, 1999; Liebowitz and Wilcox, 1997; Slater and Narver, 1997). In addition, the conversion of tacit to codified or explicit knowledge assists in knowledge transfer and sharing, thereby helping to make the firm more innovative and productive (Davenport et al., 1998; Mansfield, 1985; Teece, 2000).
According to Teece (2000), there are three broad objectives advanced by information systems scholars pertaining to KM: (1) the creation of knowledge repositories (data warehouses) for external information, particularly competitive intelligence; internal information, such as internal research reports; and informal internal knowledge, like discussion databases; (2) the delivery of improved knowledge access, and hence reuse, through the development of user-friendly analytical tools; and (3) the enhancement of the organisation's knowledge environment, including the willingness of individuals to freely share their knowledge and experiences.
Data warehousing can be defined as a process that extracts data captured by multiple business applications and organises it in a way that is meaningful to
the business, supporting the need to inform decision makers. Two types of data warehousing software are used extensively in support of KM initiatives (Sarvary, 1999):
• software that supports the transfer of operational data to the warehouse (i.e., data extraction, cleansing, transformation, loading, and administration);
• warehouse management software (e.g., software that supports ongoing data management through the use of multi-user database server software).
In conclusion, the extant literature on KM reiterates the importance of organisational variables that impact the collection of information that is subsequently transformed into knowledge. Figure 1 provides a knowledge management framework based on the perspectives of the four management disciplines discussed above. In the first part of the framework, knowledge is obtained and created from data/information drawn from the four disciplines. Practitioners and researchers then use this knowledge to build and/or test theory. Lastly, the theories developed and/or tested result in specific knowledge uses/applications in each of the four disciplines. This process keeps evolving, leading to better models and applications.
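The first type of warehousing software listed above, the extract/cleanse/load layer, can be illustrated with a minimal sketch. This is not drawn from the chapter: the file name, table name and cleansing rules are hypothetical, and Python's standard csv and sqlite3 modules stand in for whatever extraction and database server products a firm would actually use.

```python
import csv
import sqlite3

# Hypothetical example: extract order records exported by an operational
# application, cleanse/transform them, and load them into a warehouse table.

def extract(path):
    """Read raw operational data from a CSV export."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def cleanse(rows):
    """Drop incomplete rows and normalise fields before loading."""
    clean = []
    for row in rows:
        if not row.get("order_id") or not row.get("amount"):
            continue                      # discard incomplete records
        clean.append({
            "order_id": row["order_id"].strip(),
            "region": row.get("region", "UNKNOWN").upper(),
            "amount": float(row["amount"]),
        })
    return clean

def load(rows, db_path="warehouse.db"):
    """Load cleansed rows into a simple warehouse fact table."""
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS fact_orders
                   (order_id TEXT PRIMARY KEY, region TEXT, amount REAL)""")
    con.executemany(
        "INSERT OR REPLACE INTO fact_orders VALUES (:order_id, :region, :amount)",
        rows,
    )
    con.commit()
    con.close()

if __name__ == "__main__":
    load(cleanse(extract("orders_export.csv")))   # hypothetical export file
```

The second type of software listed, ongoing warehouse management, would sit on top of such a store (multi-user access, indexing, archiving) and is not shown here.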
Figure 1: Knowledge management framework based on the perspectives of the various management disciplines. Knowledge creation draws on the strategic management, organisation behavior, production management, and information management disciplines; this leads to theory building and theory testing, which in turn lead to knowledge use/applications in the same four disciplines, with feedback to knowledge creation.

METHODOLOGY

Sample
Data was collected by means of a questionnaire, which is described in detail in the next sub-section. The questionnaires were administered to managers who were enrolled in the part-time MBA program of a university in the Midwest. Multiple sections of employed students formed the population. One hundred and fifty-six (156) completed questionnaires formed the data set. The authors tested for response bias by comparing the results of early respondents with those of late respondents (who are akin to non-respondents, because they responded only after repeated reminders) using chi-square tests of independence (Armstrong and Overton, 1977). The comparisons were made using a few demographic variables. No significant differences were found.
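The response-bias check described above can be sketched as follows. The chapter does not report the underlying frequency counts, so the contingency table below is invented purely for illustration; only the procedure, a chi-square test of independence between early and late respondents on a demographic variable, follows the approach cited from Armstrong and Overton (1977).

```python
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows are respondent waves, columns are
# organisation type (service vs. manufacturing). The actual counts are not
# reported in the chapter; these numbers are illustrative only.
observed = [
    [45, 40],   # early respondents: service, manufacturing
    [38, 33],   # late respondents:  service, manufacturing
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.3f}, df = {dof}, p = {p_value:.3f}")
# A non-significant p-value (e.g., p > .05) is read as evidence that late
# respondents do not differ from early ones, i.e., no obvious response bias.
```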
Survey Measures Used in the Study
The questionnaire, titled "Managerial Information Acquisition Behavior Survey," was divided into five primary sections. Section 1 focuses on the "source of information" used by managers to acquire industry information. Nine separate items were used to capture this construct, and five-point Likert scales were used to measure the frequency of use of the various "sources of information": a score of 1 indicated the source was never used, while 5 indicated the source was used once a week. Section 2 of the survey queries respondents about the degree of accessibility of various sources of information. Section 3 addresses the uncertainty faced by managers with reference to multiple publics, such as customers, suppliers, and competitors, in addition to several environmental factors. In the next section of the survey, managers are asked to report on their "information sharing habits" with peers and significant others in the workplace. Finally, demographic questions focus on the following: length of employment, level of highest education attained, age of manager, line versus staff responsibilities, and type of organisation (service or manufacturing) that the respondent works for. Service firms formed 53.2 percent of the sample, in comparison to 46.8 percent for manufacturing firms. The questions on the survey instrument were drawn from Culnan's (1983) survey, updated to reflect recent changes in sources of information.
The data set was analyzed using t-tests, correlation analysis, and regression analysis. We used t-tests to test for differences between the two groups, manufacturing and service, since our interest was in comparing the means of two distinct populations. To test for association between "information source use" and "accessibility," we used the common statistical measure of association, viz., the coefficient of correlation. Finally, to test the linear relationship between the independent and dependent variables, we used regression analysis.
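A minimal sketch of the three analyses named above. The data frame, file name and column names are hypothetical placeholders for the survey data set; the calls shown (an independent-samples t-test, a Pearson correlation, and an ordinary least-squares regression with accessibility and task complexity as predictors) mirror the analysis strategy described, not the authors' actual code.

```python
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# Hypothetical data frame: one row per respondent, with frequency-of-use,
# accessibility and task-complexity scores plus an industry flag.
df = pd.read_csv("survey_responses.csv")   # placeholder file name

# 1. t-test: compare mean use of a source between manufacturing and service.
mfg = df.loc[df["industry"] == "manufacturing", "use_peers"]
svc = df.loc[df["industry"] == "service", "use_peers"]
t_stat, p_val = stats.ttest_ind(mfg, svc, equal_var=False)
print(f"Peers: t = {t_stat:.2f}, p = {p_val:.3f}")

# 2. Correlation: association between source use and perceived accessibility.
r, p_r = stats.pearsonr(df["use_peers"], df["access_peers"])
print(f"use vs. accessibility: r = {r:.3f}, p = {p_r:.3f}")

# 3. Regression: source use on accessibility and task complexity.
model = smf.ols("use_peers ~ access_peers + task_complexity", data=df).fit()
print(model.summary())   # standardized betas would require scaling first
```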
RESULTS
Table 1 presents the mean frequencies and corresponding ranks of the "use of information sources" by managers in the manufacturing and service industries, together with t-test results comparing the two industries. In the manufacturing industry, superiors were ranked as the most important source of information, followed by personal subscriptions and then peers. In the service industry, managers ranked "peers" first, followed by "internal documents" and then "superiors." The t-test results in the same table reveal differences between managers' use of information sources in the manufacturing and service sectors: among the nine information sources, only the use of "peers" revealed a somewhat weak difference between the two industries (p<.10).

Table 1: Mean frequencies of use for nine information sources (a)

Source                    Manufacturing              Service
                          Mean   Rank   S.D.         Mean   Rank   S.D.      t-value
Personal Subscriptions    3.89     2    1.25         3.88     4    1.19        .070
Company Library           2.75     7    1.26         2.81     7    1.49       -.245
Databases                 3.16     6    1.51         3.31     6    1.22       -.672
Superiors                 4.01     1    1.03         4.06     3    1.14       -.268
Subordinates              3.84     5    1.19         3.73     5    1.22        .536
Peers                     3.88     3    1.05         4.66     1     .90      -1.79*
Internal Documents        3.85     4    1.08         4.23     2    5.33       -.598
Consultants               2.21     9    0.90         2.01     9     .89       1.350
Other Outsiders           2.55     8    1.04         2.63     8    1.08       -.504

(a) Scale: 1=never, 2=1-2 times a year, 3=4-5 times a year, 4=once a month, 5=once a week. *p≤.10

Table 2 presents the correlations between "information source use" and "accessibility," together with the correlations between information source use and complexity. As can be seen in Table 2, the correlations between "source used" and "accessibility to information" in the service sector range from very weak (.225) to moderately strong (.639), and all of the associations are statistically significant at the p<.001 level. In the manufacturing industry, correlations ranging from a low of .241 (very weak association) to a high of .747 (strong association) were detected; these correlations are also statistically significant at p<.001. However, in the case of the correlations between source of information used and complexity, none of the nine items exhibits a statistically significant correlation; the correlations range from "none" to "very weak" at best. This is true for both industries. Table 3 shows two separate sets of regression analyses, run with "accessibility" and "complexity" as independent variables and the nine items under sources of information use as dependent variables. Table 3a presents the relationships in the service sector and Table 3b pertains to the manufacturing industry. A total of nine regression equations are shown in each table.
Table 2: Correlations between information source use and accessibility and complexity

                              Accessibility                  Complexity
Source                        Service    Manufacturer        Service    Manufacturer
Personal Subscriptions         .305**       .747**             .049       -.088
Internal Impersonal
  Library                      .626**       .694**             .039        .154
  Databases                    .509**       .597**             .018       -.019
Internal Personal
  Superiors                    .639**       .241*             -.130       -.027
  Subordinates                 .431**       .331**              .047        .154
  Peers                        .428**       .506**              .184        .208
Internal Documents             .225**       .534**              .036        .089
External
  Consultants                  .540**       .470**              .092       -.043
  Other Outsiders              .597**       .345**              .069        .199

**p<.001
Table 3a: Regression results to describe source use based on accessibility and task complexity

                              Service (d.f.=2,79)
Dependent Var.                Accessibility(a)   Complexity(a)   Adj. R2        F
Personal Subscriptions           .305***            -.048          .082      8.112**
Internal Impersonal
  Library                        .626***            -.001          .385     51.636***
  Databases                      .509***            -.002          .259     28.33***
Internal Personal
  Superiors                      .639***            -.049          .40      55.1***
  Subordinates                   .431***             .01           .176     18.25***
  Peers                          .428***             .083          .173     18.163***
Internal Documents               .225**             -.018          .039      4.34**
External
  Consultants                    .540***             .022          .282     32.861***
  Other Outsiders                .597***            -.07           .348     42.68***

(a) Entries are standardized regression coefficients. ***p≤.001  **p≤.05
Table 3a reveals that F is significant (at the .001 to .05 level) for all nine regression equations and that all the computed betas of accessibility are statistically significant. The adjusted R2 ranges from .039 to .40. In Table 3b, the adjusted R2 and the significant F-values for the nine regression equations signify that the null hypothesis is rejected. The adjusted R2 ranges from .044 to .589. Again, all the computed betas of accessibility are significant at the p<.001 level. With the exception of personal subscriptions, the betas of task complexity are not significant for the other eight "sources of information use."
Table 3b: Regression results to describe source use based on accessibility and task complexity

                              Manufacturing (d.f.=2,71)
Dependent Var.                Accessibility(a)   Complexity(a)   Adj. R2        F
Personal Subscriptions           .779***            -.209***       .589     52.663***
Internal Impersonal
  Library                        .694***             .027          .474     65.991***
  Databases                      .603***            -.095          .355     40.083***
Internal Personal
  Superiors                      .241***             .024          .044      4.26**
  Subordinates                   .331***             .109          .097      8.745
  Peers                          .506                .167          .246     24.48***
Internal Documents               .534***             .032          .275     28.296***
External
  Consultants                    .470***            -.091          .210     20.182***
  Other Outsiders                .345***             .124          .106      9.46***

(a) Entries are standardized regression coefficients. ***p≤.001  **p≤.05
Tables 4 and 5 contain the t-test results for use of information sources by line versus staff employees in manufacturing and service organisations. In the case of manufacturing organisations (see Table 4), among the group of nine items only one, databases as a source of information, was found to be statistically significant across the "line" versus "staff" groups. In Table 5, among the nine items only "library" as a source of information was found to be somewhat statistically different (p<.10) between line and staff employees in service organisations.
DISCUSSION
This study examined the relationship among perceived accessibility, perceived task complexity, and the information-gathering behavior of managers in manufacturing and service organisations. Since information gathering is an integral part of the knowledge-creation process, the findings of the study contribute to the field of knowledge management. A main contribution of this study is the finding that it is the accessibility of an information source, and not the perceived complexity of the task at hand, that influences the choice of source used. This is consistent with early research in the field, but contradicts Culnan's (1983) assertion. A possible reason for this finding is that accessibility is paramount: if particular information is inaccessible or difficult to access, then regardless of the complexity of the task at hand, it is unlikely to be used.
Table 4: Mean differences between line and staff employees for use of information sources in manufacturing organisations

Source                     t-value     sig. (2-tailed)
Personal Subscriptions     -1.718          .090*
Internal Impersonal
  Library                   -.742          .461
  Databases                -2.205          .031**
Internal Personal
  Superiors                  .317          .752
  Subordinates              -.770          .444
  Peers                     -.954          .343
Internal Documents         -1.182          .241
External
  Consultants              -1.1025         .309
  Other Outsiders           -.947          .347

**p≤.05, *p≤.10
Table 5: Mean differences between line and staff employees for use of information sources in service organisations

Source                     t-value     sig. (2-tailed)
Personal Subscriptions      -.165          .869
Internal Impersonal
  Library                  -1.965          .053*
  Databases                  .086          .931
Internal Personal
  Superiors                -1.543          .135
  Subordinates              -.017          .987
  Peers                     -.502          .617
Internal Documents           .906          .377
External
  Consultants               -.595          .554
  Other Outsiders           1.526          .131

**p≤.05, *p≤.10
This underscores the limited role of task complexity and brings the accessibility of an information source into sharp focus. This has important implications for both users and providers of information.
Personal sources appear to be more popular than impersonal sources. This is brought out in Table 1, where "superiors" ranked first for manufacturing industry managers and "peers" ranked first for service industry managers. Again, it is likely that because of the ease of access of these personal sources, they are preferred over impersonal sources such as publications and databases. Managers tend to want to talk to their superiors or peers to collect information simply because it is easier to do so than to seek published information. This is surprising given that, in this day and age, electronic databases are produced with ease of use in mind. Apparently, there is a mental block that makes users less inclined to use these sources and more inclined to seek personal information sources.
"Staff" employees in organisations perform a "boundary-spanning" role. They perform a gatekeeping function by acquiring information from outside the organisation and disseminating it to others in the organisation. This is in contrast to "line" employees, who are typically more insulated from the external environment. A priori, it would appear that staff employees would tend to use outside sources of information more than line employees. This is only partially supported by the results of this study. Of all the information sources examined in the study, only "databases" appear to be used more by staff than by line employees. This is consistent with extant theory, because databases typically emanate from outside an organisation. However, there were no significant differences between line and staff employees on other external information sources such as the library and publications. It is likely that organisations no longer want to insulate line employees from the outside world. By forcing line employees to interact with the external environment, organisations may become more competitive by exhibiting a higher degree of market orientation.
The current study extends prior work on information use by managers. Replications and extensions would allow us to better understand the factors that influence the choice of various sources of information. This is important because information collection puts the organisation on the path to knowledge management.
FUTURE RESEARCH DIRECTIONS
While this study adds to the growing body of literature on knowledge management, subsequent research should contribute to a more complete understanding of the entire process of knowledge acquisition and use. For example, it is possible that certain factors moderate the knowledge acquisition process, such as organisational resources, industry type, and competitive intensity. The impact of these factors has to be empirically established. Similarly, certain factors may mediate the knowledge acquisition process; these may include the size of the organisation, its age, and the technological intensity of the industry. Practical implications of these mediating factors would help organisations develop a plan for knowledge management.
REFERENCES
Anonymous. (2001). Knowledge management's early adopters. Computing Canada, 27(8), 15.
Argyris, C. (1992). On Organisational Learning. Cambridge, MA: Blackwell.
Armstrong, J. S. and Overton, T. S. (1977). Estimating non-response bias in mail surveys. Journal of Marketing Research, 14(3), 396-402.
Barney, J. (1991). Firm resources and sustained competitive advantage. Journal of Management, 17, 99-120.
Barney, J. and Hansen, M. (1994). Trustworthiness as a source of competitive advantage. Strategic Management Journal, 15 (Winter Special Issue), 175-190.
Borghoff, U. M. and Pareschi, R. (Eds.). (1998). Information Technology for Knowledge Management. New York: Springer-Verlag.
Buckman, R. H. (1998). Knowledge sharing at Buckman Labs. Journal of Business Strategy, (19), 1-15.
Castanias, R. P. and Helfat, C. (1991). Managerial resources and rents. Journal of Management, 17, 155-171.
Coates, J. (2001). Knowledge management is a person-to-person enterprise. Research Technology Management, 44(3), 9-13.
Contractor, F. and Lorange, P. (1998). Cooperative Strategies in International Business. Lexington, MA: Lexington Books.
Culnan, M. J. (1983). Environmental scanning: The effects of task complexity and source accessibility on information gathering behavior. Decision Sciences, 194-206.
Davenport, T., DeLong, D. and Beers, M. (1998). Successful knowledge management projects. Sloan Management Review, Winter.
De Geus, A. P. (1988). Planning as learning. Harvard Business Review, 66(2), 70-74.
Dierickx, I. and Cool, K. (1989). Asset stock accumulation and sustainability of competitive advantage. Management Science, 35, 1504-1514.
Duncan, R. and Weiss, A. (1979). Organisational learning: Implications for organisational design. Research in Organisational Behaviour, (1), 75-123.
Evans, B. (1997). Knowledge management–Fuel for innovation. InformationWeek Online, October 20. http://www.informationweek.com.
Fiol, C. M. and Lyles, M. A. (1985). Organisational learning. Academy of Management Review, (10), 803-813.
Ghemawat, P. (1986). Sustainable advantage. Harvard Business Review, 64(5), 53-57.
Grant, R. M. (1996). Toward a knowledge-based theory of the firm. Strategic Management Journal, 17, 109-122.
Gulati, R., Nohria, N. and Zaheer, A. (2000). Strategic networks. Strategic Management Journal, 21, 203-215.
Hedlund, G. (1994). A model of knowledge management and the N-form corporation. Strategic Management Journal, (15), 73-90.
Helleloid, D. and Simonin, B. (1994). Organisational learning and a firm's core competence. In Hamel, G. and Heene, A. (Eds.), Competence-Based Competition, 213-240. Chichester: John Wiley & Sons.
Hibbard, J. (1997). Knowledge management: Knowing what we know. InformationWeek Online, October 20. http://www.informationweek.com.
Hibbard, J. and Carrillo, K. (1998). Knowledge revolution. InformationWeek Online, January 5. http://www.informationweek.com.
Kogut, B. (1988). Joint ventures: Theoretical and empirical perspectives. Strategic Management Journal, (9), 319-332.
Lei, D., Slocum, J. and Pitts, R. (1997). Building cooperative advantage: Managing strategic alliances to promote organisational learning. Journal of World Business, 32(3), 203-224.
Liebowitz, J. (Ed.). (1999). Knowledge Management Handbook. Boca Raton, FL: CRC Press.
Liebowitz, J. and Wilcox, L. C. (Eds.). (1997). Knowledge Management and its Integrative Elements. Boca Raton, FL: CRC Press.
Lippman, S. and Rumelt, R. (1982). Uncertain imitability: An analysis of interfirm differences in efficiency under competition. Bell Journal of Economics, 13, 418-438.
Mahoney, J. and Pandian, J. (1992). The resource-based view within the conversation of strategic management. Strategic Management Journal, 13, 363-380.
Mansfield, E. (1985). How rapidly does new industrial technology leak out? The Journal of Industrial Economics, 34(2), 217-223.
March, J. G. (1991). Exploration and exploitation in organisational learning. Organisation Science, 2(1), 71-87.
Martinez, M. N. (1998). The collective power of employee knowledge. HR Magazine, 43(February), 88-94.
Meyer, A. D. (1982). Adapting to environmental jolts. Administrative Science Quarterly, 27, 515-537.
Nelson, R. and Winter, S. (1982). An Evolutionary Theory of Economic Change. Cambridge, MA: Harvard University Press.
Nonaka, I. (1994). A dynamic theory of organisational knowledge creation. Organisation Science, 5(1), 14-37.
Nonaka, I. and Takeuchi, H. (1995). The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation. New York: Oxford University Press.
Pan, S. and Scarbrough, H. (1999). Knowledge management in practice: An exploratory case study. Technology Analysis & Strategic Management, 11(3), 359-374.
Phan, P. and Peridis, T. (2000). Knowledge creation in strategic alliances: Another look at organisational learning. Asia Pacific Journal of Management, 17, 201-222.
Pitt, M. and Clarke, K. (1999). Competing on competence: A knowledge perspective on the management of strategic innovation. Technology Analysis & Strategic Management, 11(3), 301-316.
Sarvary, M. (1999). Knowledge management and competition in the consulting industry. California Management Review, 41(2), 95-107.
Slater, S. and Narver, J. (1997). Information search style and business performance in dynamic and stable environments: An exploratory study. Report No. 97-104. Cambridge, MA: Marketing Science Institute.
Spender, J. (1996). Making knowledge the basis of a dynamic theory of the firm. Strategic Management Journal, 17 (Winter Special Issue), 45-62.
Stewart, T. (1997). Intellectual Capital: The New Wealth of Organisations. New York: Doubleday.
Sullivan, P. H. (1999). Profiting from intellectual capital. Journal of Knowledge Management, 3(2), 132-142.
Tampoe, M. (1994). Exploiting the core competences of your organisation. Long Range Planning, 27(4), 66-77.
Teece, D. J. (1998). Capturing value from knowledge assets: The new economy, markets for know-how, and intangible assets. California Management Review, 40(3), 55-79.
Teece, D. J. (2000). Strategies for managing knowledge assets: The role of firm structure and industrial context. Long Range Planning, 33, 35-54.
Thorburn, L. (2000). Knowledge management, research spinoffs and commercialization of R&D in Australia. Asia Pacific Journal of Management, 17, 257-275.
Trussler, S. (1998). The rules of the game. Journal of Business Strategy, January/February, (19), 16-19.
Tsoukas, H. (1996). The firm as a distributed knowledge system: A constructionist approach. Strategic Management Journal, 17 (Winter Special Issue), 11-25.
von Krogh, G. (1998). Care in knowledge creation. California Management Review, 40(3), 133-153.
Winter, S. (1987). Knowledge and competence as strategic assets. In Teece, D. J. (Ed.), The Competitive Challenge: Strategies for Industrial Innovation and Renewal, 159-184. Cambridge, MA: Blackwell.
Winter, D. (1993). Hometown team says U.S. has closed quality gap with Japanese. Ward's Auto World, (29), 37-39.
Wong, P.-K. (2000). Knowledge creation management: Issues and challenges. Asia Pacific Journal of Management, 17, 193-200.
Zack, M. (1999). Developing a knowledge strategy. California Management Review, 41(3), 125-146.
Section II
Integration of Business and Knowledge/Information Technology Management
Chapter III
Information Systems and Business Strategy: A Concurrent Planning Model

Antonio Torres-Perez and Isidre March-Chorda
University of Valencia, Spain
ABSTRACT
The main purpose of this chapter is to examine in depth the importance attached to information systems for the proper formulation of corporate strategy. After introducing the basis of the traditional strategic planning model, which suggests the subordination of information systems (IS) to the business strategy, we propose a new model that views the IS as a strategic instrument suited to the strategy formulation stage, rather than an operational tool for the strategy control phase. Success in applying this model, which we call the "Concurrent Business/IS Strategic Planning Model," will depend closely on the consistency, coherence and soundness of the IS, both internal and external. In order to assess the level of application of this model, empirical case-study fieldwork was undertaken to provide evidence on the degree of alignment between business practice and the proposed theoretical model. The last section contains the main results arising from the empirical analysis.
INTRODUCTION
The main purpose of this chapter is to examine the importance attached to information systems for the proper formulation of corporate strategy. Before starting with the core content of the chapter, we should clearly state the position and value of information within organisations, by analysing the role assigned to information resources by different organisational theories.
Information has become a key resource for most organisations, along with human resources and technological assets. Proper use of these resources underlies the competitiveness prospects of most organisations. Having stated the key character of information as an input to be considered in a company's decision-making process, we should remember that not all information is equally important, nor should it all be treated in the same way. Several typologies of information exist, such as the one proposed by Cornellà (1994):
1. According to the capacity for synthesis and the types of decisions to be made on its basis: Operational information / Tactical information / Strategic information.
2. According to the place where the information is generated: External information / Internal information.
3. According to the degree of specificity of the information: Guidance information / End-user information.
4. According to the degree of immediacy that its treatment requires: Active information / Inactive information.
INFORMATION SYSTEMS
Information systems (IS) are usually extremely complex, which makes it difficult to provide an exact definition of them. As a starting point we use the one proposed by Andreu, Ricart and Valor (1996: 13): "An integrated group of processes, primarily of a formal nature, developed in a user-computer environment, which operate on a quantity of structured data about an organisation, whose function is to gather, process and distribute selectively all the information needed for the management and the proper functioning of the organisation." Some nuances can be added to this initial definition. The IS refers basically to formalised processes; however, it should not neglect informal processes, which are of growing concern and presence in current organisations. The IS makes use of computers, although conceptually an IS can still exist without computers; nowadays, however, the use of computers is almost compulsory to ensure a minimum of productivity, profitability and scope of the information gathered.
Having defined what an information system means, we now turn our interest to the main purpose of these systems, i.e., to integrate information in support of the decision-making process. From the vast array of applications of IS in the management of companies, three basic information systems can be identified: the MIS (Management Information System), the DSS (Decision Support System) and the EIS (Executive Information System). The MIS is not a new concept. It was defined in the '60s by Ackoff (1967) as a system that furnishes managers across the organisation with detailed and summarised information from company databases on operations and performance. The MIS encompasses both databases and a series of routines for data treatment. The system is structured around a set of previously determined rules for decision making.
The MIS gathers mainly operational information and reveals its usefulness by providing solutions to problems of an operational nature; its main limitation is that it can be used only when the problem to be solved has been previously programmed, which implies the existence of a rule for problem solving. Also, its level of aggregation is low (Antill and Clare, 1991). DSSs, on the other hand, go a step further in the level of aggregation. They are fed by tactical information and are orientated to satisfying information demands from middle managers and staff. The main divergence from MIS lies in the capacity to deal not only with previously structured decisions but also with poorly structured or completely unstructured ones (Keen and Morton, 1978; Sprague, 1980). Finally, EIS are defined as information systems with a broad level of aggregation, which incorporate information of a strategic nature, both from the firm and the environment. They provide a broad volume of information and a high level of flexibility, enabling them to be used as a tool for analysis and action by top managers (Rockart and Treacy, 1982).

Figure 1: Typologies of information (author compilation): types of decisions (operational, tactical, strategic) mapped to information systems (operational management software for areas such as financing, internal/external logistics, commercial, purchases, human resources and production; DSS; EIS).

Figure 2: IS as an integrated group of processes (adapted from Andreu, Ricart and Valor, 1996): the IS spanning the value chain, with company infrastructure, human resources management, technology development and purchases supporting input logistics, production, output logistics, commercializing and sales, and services, bounded by margin.
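A minimal sketch of the kind of "previously determined rule" an MIS embodies, as described above for operational problems. The reorder-point rule, field names and figures below are hypothetical; the point is simply that the decision rule must be programmed in advance for the system to be usable.

```python
# Hypothetical MIS-style routine: a pre-programmed rule applied to
# operational data. The rule (reorder when stock falls below a fixed
# reorder point) must be defined before the system can answer the
# question -- the limitation noted for MIS in the text.

from dataclasses import dataclass

@dataclass
class StockRecord:
    item: str
    on_hand: int
    reorder_point: int
    reorder_qty: int

def reorder_report(records):
    """Return purchase suggestions produced by the fixed decision rule."""
    return [
        (r.item, r.reorder_qty)
        for r in records
        if r.on_hand < r.reorder_point     # the previously determined rule
    ]

inventory = [
    StockRecord("router-blade", on_hand=12, reorder_point=20, reorder_qty=50),
    StockRecord("power-supply", on_hand=35, reorder_point=15, reorder_qty=40),
]
print(reorder_report(inventory))    # -> [('router-blade', 50)]
```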
THE INFORMATION SYSTEMS IN THE ORGANISATION
The IS can be placed within the set of subsystems that form any organisation. The IS must be consistent with the other subsystems comprising the organisation, with the mission of fostering their coordination. The value chain proposed by Porter (1985) is a valuable tool for placing the IS in this frame of subsystems. The IS belongs to the infrastructure of the company, with the responsibility of obtaining and providing information from and to any area of the organisation. The tendency to decentralize the typical IS functions may lead to the emergence of functional IS.
Another approach to placing the IS within the organisation is the one provided by Hax and Majluf (1984). From a systemic perspective, this model integrates the information/communication system with the rest of the systems comprising the organisation. For these authors, the IS is just one element forming the infrastructure of the company, in close interdependence with the other systems, as represented by Figure 3. The interdependence between the different systems revealed by this model goes further, to include the consistency between the systems and the mission and objectives of the organisation. This correspondence within and across systems has several implications, the basic one being the need for balance among all the systems, such that any change in one of them will tend to unbalance the rest. Here, the need to ensure the fit or balance between the information system and all the others arises as a basic requirement of this approach. Hence, any modification to the IS should take into account its probable impact on the other systems of the organisation.

Figure 3: IS as one element forming the infrastructure of the company (Hax and Majluf, 1984): the information/communication system interacting with the planning system, the control system, the organisational structure, and the compensation and incentives system at the strategic and operational levels.
INFORMATION SYSTEMS AND CORPORATE STRATEGY
From the previous section we can derive that all functions related to the IS (design, planning, use, and so on) also have a close relationship with the competitive strategy of the firm. On the one hand, information systems are tools to support strategy, both in the formulation and in the implementation and control phases. On the other hand, ISs are able, on their own, to generate competitive advantages for the company. Creating competitive advantage implies facing the challenges of the competitive environment. The strategic process starts with the phase where the mission is internalized by all the members of the organisation, which we call vision. Business strategies determine the way of achieving the general objectives attached to the organisation's mission. Tactics act in the short run and focus on the measurable objectives into which the general objectives can be divided. Finally, business plans identify the resources to be allocated by the firm to fulfill those objectives, taking the form of budgets and operational plans. These four essential factors condition the use of IS. In addition, five elements identify the IS as a supportive tool:
• Competitive options: the IS represents a model for the assessment of competitive options.
• Role and relationships: the role played by CEOs who consider the information system as a competitive resource, and the importance of top managers in the design of the IS.
• Definition and re-definition: how the IS can change or clarify value for customers, by changing the business, products or services, or the organisation's processes.
• Telecommunications: as an information vehicle for linkage with customers, suppliers and partners.
• Success factors: the competitive position obtained by those firms that have used their information systems efficiently.
INFORMATION SYSTEMS AND STRATEGY
Given the structural configuration of the economy, the challenges of the present day require the use of strategic planning to communicate the company's mission properly to the whole organisation, as well as to fulfill that mission through the formulation of the best possible strategies and tactics. Technological change and globalisation are reshaping the environment at an accelerated rate. In this framework, a competitive model explaining the factors which underlie the positioning of firms is needed. Figure 4 shows a simple model with the four most essential factors for attaining a competitive position. Undoubtedly, the market, the use of technology, regulatory legislation, and the working environment and values shared by employees play a primary role in indicating how the business and the organisational processes should be developed.

Figure 4: Essential factors for attaining a competitive position (Callon, 1996): market, technology, regulations and employees shaping the organisation and its business processes.

Companies attempt to position themselves in terms of these four factors in a way that leads them to an advantageous position. That may be possible if the firm finds the best way to compete by reducing the life cycles of its products, enhancing its organisational flexibility and improving its responsiveness to customers. One of the keys to achieving all this is to transmit the right message to the right person at the most suitable moment, as well as to furnish the resources needed to support the strategies and tactics mentioned in the previous section.
The basic task of strategic planning is to ensure the correct understanding of the vision, the strategies and the tactics of the firm. Strategic planning incorporates a systematic exploration of the opportunities and threats to be encountered in the competitive environment, in order subsequently to make the best decisions. Figure 5 shows the traditional approach to business planning. In this model, the planning task of the information system must be carried out in parallel with the rest and duly integrated within the whole process. This model suggests a top-down view, which many organisations are starting to fight against; in reality, ISs are often developed following a bottom-up approach.

Figure 5: Traditional approach to business planning (author compilation): vision leading to strategy and then tactics, with the IS in its traditional supporting role.

It is easy to see the benefits that can be derived from efficient integration between strategic planning and the information system. Figure 6 shows how IS planning should be integrated within strategic planning. The process starts with the preparation of a Strategic Business Plan. The IS Plan is based on the core elements of the Business Plan. Properly designing the architecture of the information system is of critical importance in a context of rapid development of computing and telecommunications supply. In this first stage, the future paths of development of the information systems and the telecommunications networks should be projected. The next step in the design of the IS rests on planning at the tactical level, including elements such as timing, costs, skills requirements, distribution of responsibilities and so on. Finally, the IS planning process ends with the implementation of the plan, taking the form of different projects.

Figure 6: Planning goal (Callon, 1986): corporate strategic planning yields the corporate strategy; information strategic planning yields the information strategy; architectural planning defines the architecture; tactical planning sets short-run objectives; and implementation planning produces the project plans.

Adjustment or alignment between the use of the IS and the business strategies can be observed on different dimensions. Two basic factors can strongly condition the success expectations of any IS from the outset: the first is the perception held by top managers (CEOs, Chief Executive Officers) of the mission and role to be attributed to the information systems; the second is the skills and knowledge held by the top manager in charge of the information system (the CIO, Chief Information Officer).
Perception is the key word in this search for the fit between IS and strategy. In the present world of business, ISs tend to suffer from a certain lack of credibility, in part due to the shortage of skills among CIOs, to the meagre resources assigned to this area, to the lack of convincing leadership exerted by these managers, or perhaps to a lack of commitment and skills on the part of CEOs. Some empirical evidence reveals the persistence of problems of communication and coordination between the managers responsible for IS and the top managers or CEOs, although the reasons for this lack of mutual understanding are not clear. If the fault lies on the side of the top business managers, the reasons probably include:
• Top managers tend to react in a defensive way to problems as they arise, neglecting the long-term perspective.
• Lack of motivation to become involved with IS strategies, due either to feeling uncomfortable in this field or to knowing nothing about it.
• It seems difficult to determine distinctively the role to be played by the IS in business strategy.
• Tendency to underestimate the value of the IS as a competitive weapon.
• Sometimes the introduction of IS is perceived as a mechanism for inciting changes in the company's mission or in the business processes, which might cause a sort of rejection or negative perception by the final users, who are unwilling to incorporate these changes.
• The cost of introducing IS is usually relatively high and easily measurable, whereas the benefits derived from its use are not so clear in quantitative terms beforehand.
• Fears about the impact the IS might have on the tasks fulfilled by an important group of future users.
On the side of the CIOs, the factors hampering the balance between IS and strategy might include:
• Often the CIOs fail to understand the challenges and goals of the business, so they are not in a position to decide the role to be played by the information systems.
• In organisations with a small or non-existent computing system, the effort needed to implement an IS would require an investment in time and resources the company cannot afford.
• Some ISs fail due to the CIOs' tendency to underestimate the skills and competencies the organisation needs to acquire for the proper implementation of these systems.
• Some CIOs prefer to keep their distance from the managers responsible for drawing up the strategies of the company and, therefore, do not get sufficiently informed about business goals and strategies.
• The tendency to judge the investment in IS as an opportunity cost leads to assessing these systems by their measurable efficiency rather than by their support for the firm's strategic actions.
• The high level of dispersion usually attached to information technologies, resulting from the lack of definition of the hardware architecture supporting the information system, causes new problems that are difficult to solve.
Models of Strategic Planning with Information Systems

The traditional strategic planning model suggests the subordination of the IS to business strategy. The final architecture and configuration of these systems will be determined by the strategic lines drawn in a previous stage. As shown in Figure 7, the business strategy guides and leads the IS strategy. In this situation, ISs become supportive tools for the implementation and control of business strategy but, due to their subordination, fall short as instruments for the formulation of strategies. In this model, information technologies are treated as an endogenous variable. The question is posed in the following way: given the strategic guidelines, which is the most suitable technology to provide support to the strategy? This approach seems valid and acceptable at the conceptual level, but reveals some weaknesses when determining the objectives to be attained by the IS.

Figure 7: Business strategic planning and IS (author compilation): the Business Strategy dictates the IS Strategy, which determines the information technologies in the firm, which in turn support the Business Strategy

As suggested by Callon (1996), this traditional approach could be enriched by introducing a new dimension that combines the corporate vision with the technological environment in order to uncover new sources of business opportunities. This model seeks to determine how information technologies can create new strategic business capacities and how the technology can incite changes in the business. The model represented by Figure 8 goes beyond the previous one by considering technology as an exogenous variable, with the capacity to stimulate strategic capacities and skills. In this model, the IS plays a proactive and broader role, becoming the technological core for corporate businesses and a key ingredient in the strategic planning process.

Figure 8: Business strategic planning and IS (author compilation): the vision and the technological environment generate opportunities; the Business Strategy dictates the IS Strategy, which determines the information technologies in the firm, which in turn support the Business Strategy

As one condition required to make this model work properly, the CIOs must be determined to dedicate part of their time to strategic planning activities, to the detriment of daily IS activities. Accordingly, the cost of maintaining this department will rise, and broader skills will be needed in the CIOs to enable them to participate in the process of strategy formulation.

The broadest and most comprehensive model we propose in this section takes as a starting point the one developed by Parker, Trainor and Benson (1989), completed with contributions by other authors such as Hax and Majluf (1991), Andreu, Ricart and Valor (1996, 1997), Bloch and Pigneur (1998) and Orellana and Torres (1997), among others.

Figure 9: A concurrent business/IS strategic planning model (I) (author compilation), linking the business field (strategic plan, organization and business processes) with the information technologies field (IT opportunities, IS architecture and organization) through impact, opportunities and alignment
This model endows information systems with a proactive role throughout the whole process of strategic planning, rather than seeing them as a supportive instrument for the strategy implementation and control phases. Under this model, the organisation's strategic guidelines duly incorporate all the IS components that might contribute to obtaining competitive advantages. The model distinctively identifies the relationships between business planning and information technology planning. The proper combination of the strategic plan, business processes and structure, together with information systems, brings new opportunities for information technologies to become a key competitive factor, as shown in Figure 9.

To better interpret Figure 9, let us clarify some concepts, especially those referring to information technologies:

Strategic Planning: Organisational vision and guiding force in the process of decision making, rooted in the competitive environment in which the company operates. It reflects the role and tasks to be performed by managers, and the product portfolio and markets on which the company wishes to compete.

Business Processes and Organisational Structure: Directly obtained from the Strategic Plan, these refer to issues of entrepreneurial culture, systems for measuring productivity, human capital empowerment, adjusting the organisational structure to the business strategies, etc.

Architecture and Structure of Information Systems: This is the component to be aligned with the Business Processes in such a way as to guarantee the expected synergies between them. Several dimensions must be monitored:
• Information systems existing in the firm, and the extent to which these systems are critical for the daily functioning of the firm.
• Access to and availability of information for all the workers who request it.
• Procedures to help the final user of the IS.
• Integration in networks of both producers and receivers of information. This means extending the IS, with expanded functions, to the industry value chain, to reach customers and suppliers.
• IS management: the information systems must be judged as one business within the company's business, having the rest of the company's functional subsystems as customers.
• Information technology planning: includes the IS requirements in terms of hardware and software needed to properly support the business strategies, from the formulation to the control phases.
• User training plan: to ensure the organisation's members take advantage of the benefits associated with using the IS.

Opportunities in information technology (IT): their extent and scope will be determined by:
• The level of dependence on information technology for success in the market.
• The intensity of competitors' use of IS as a mechanism for differentiating their products and services.
• Information technology culture: managers' awareness of the role played by IT in producing competitive advantage.
• Customers' expectations of the value added to the products and services they purchase by the supplier's use of information technologies. How much extra are they prepared to pay for the addition of that value?
• Detection of opportunity windows opened up by the supply of products or services (either new or existing) based on new information systems.
• Joint work with information technology suppliers, either to develop new computing solutions or to take better advantage of the potential of existing equipment.

In conclusion, this model views information systems as strategic instruments suited to the strategy formulation stage, rather than as operational tools for the strategy control phase. On the other hand, this is an open model which, in contrast to the traditional approach, recognises the capacity of information technologies to stimulate opportunities for the firm. Success in the application of this model will depend heavily on the consistency, coherence and soundness of the IS, both internal and external. As reflected in Figure 10, external consistency refers to the necessary alignment between the IS and the structure of the sector, as well as the fit with the internal structure of the company and with the competitive strategy chosen to position it within the sector. Internal consistency has already been explained in the first section and refers to the necessary balance between the information system and the other systems in the firm, given the strong interdependence among systems and the role the IS plays in integrating the systems forming the company. To end this second section, we provide in Figure 11 a summary of the interrelationships existing in our proposed concurrent Business/IS Strategic Planning Model.
Figure 10: The external consistency (Andreu, Ricart and Valor, 1996): the IT/IS role must fit the structure of the sector, the organizational structure and the competitive strategy

Figure 11: Theoretical model (author compilation), combining the business field (vision, managers' role, strategic plan by product and market, innovation, quality, flexibility, reaction capacity, productivity, entrepreneurial culture, empowering, structure-strategy fit, and the organisation and business processes) with the information technologies field (dependence on IT in the sector, IT use by referents, IT culture, customers' expectations, opportunity windows, cooperation with suppliers, IT opportunities, the critical role of the IS, access to and availability of information, supportive procedures, measuring systems, objectives, integration in networks, training plans for users, IS functional extension, the IT plan, IS management, and the architecture and organisation of the IS) through impact, opportunities and alignment

EMPIRICAL ANALYSIS

Scope and Objectives

Our purpose in this section is to present some empirical evidence on the degree of alignment between entrepreneurial practice and the theoretical model proposed in Figure 11. For this purpose we have explored the relationships between IS and the strategic planning tasks developed by a sample of Spanish firms. The method for obtaining the information consisted of personal interviews with the CEOs and the CIOs of four companies, on a case-study basis. These four companies are located in the Spanish region of Valencia and belong to different sectors: furniture, metal-mechanics, plastics and services. Two of them exhibit a strongly innovative character, having received national awards in recognition of their innovative capacity. All of them fall into the category of medium-sized firms, with turnovers ranging from a minimum of $5 million to a maximum of $25 million, over 50 employees and a presence in international markets. One of the key selection requirements was to have recently implanted information systems, so that the managers would be able to make a comparative assessment of the situations before and after the investment in IS processes.
Companies' Background

Table 1: Key data

Firm A: Sector: Furniture. Activity: Home furniture. Activity started in: 1976. Competitive position: Average. Turnover (2000): 7,813 thousand euros. Employees: 63. Main markets: Spain, EU. No. of product lines: 12.
Firm B: Sector: Metal-mechanics. Activity: Lighting. Activity started in: 1970. Competitive position: Leader. Turnover (2000): 27,046 thousand euros. Employees: 180. Main markets: Spain, EU, Asia. No. of product lines: 6.
Firm C: Sector: Services. Activity: Civil engineering projects. Activity started in: 1984. Competitive position: Average. Turnover (2000): 4,886 thousand euros. Employees: 53. Main markets: Spain, EU, Near East. No. of product lines: 2.
Firm D: Sector: Plastics. Activity: Plastic components for the automobile industry. Activity started in: 1963. Competitive position: Average. Turnover (2000): 12,621 thousand euros. Employees: 117. Main markets: Spain, France. No. of product lines: 4.
To better illustrate the key traits of the companies under analysis, Table 1 displays some key data on their profiles. These firms show marked heterogeneity, a medium size in the Spanish industrial context and a strong orientation towards foreign markets.

Table 2 reveals some features characterising the CEOs and CIOs of these four companies. An outstanding trait is the medium level of experience accrued by the CEOs in information systems management, basically limited to that of an average user. Concerning the CIOs, their qualification is generally technical rather than managerial. This orientation probably leads them to underestimate the strategic role the IS can play in their firms.

As shown in Table 3, the usual information technologies are notably present in the four companies. At the hardware level, the proportion of computers per employee is satisfactory. The situation concerning software is less positive: the high rate of use of office software contrasts with its low presence in the operations and production systems. Penetration of the Internet in the firms under study is significant and email is widely employed.
Methodology of Analysis

The methodology employed to obtain the data consisted of a series of personal interviews with the CEOs and CIOs of these four firms, using a structured questionnaire.

Table 2: Features characterising the CEOs and CIOs

Firm A: CEO: owner, age 45, Economics, 16 years in the post, experience in information technology high, experience in information systems medium. CIO: age 31, Computer Eng., 3 years in the post, experience in information technology high, experience in information systems medium.
Firm B: CEO: owner, age 65, Economics, 22 years in the post, experience in information technology medium, experience in information systems medium. CIO: age 52, Engineering, 12 years in the post, experience in information technology high, experience in information systems medium.
Firm C: CEO: not owner, age 38, Engineering, 6 years in the post, experience in information technology high, experience in information systems high. CIO: age 33, Computer Eng., 2 years in the post, experience in information technology high, experience in information systems high.
Firm D: CEO: not owner, age 47, Engineering, 12 years in the post, experience in information technology high, experience in information systems medium. CIO: age 45, Engineering, 10 years in the post, experience in information technology high, experience in information systems high.
Table 3: Information technologies installed in the four companies

                                    Firm A   Firm B   Firm C   Firm D
Number of central computers         2        4        3        2
Number of workstations              15       45       34       51
Computers per worker                0.24     0.25     0.64     0.44
Departments with computers:
  Top management                    yes      yes      yes      yes
  Administration / Financing        yes      yes      yes      yes
  Supplies                          yes      yes      yes      yes
  Production                        yes      yes      yes      yes
  Operations                        no       yes      yes      yes
  Human Resources                   no       yes      yes      yes
  Commercial / Marketing            yes      yes      -        yes
Frequently used software:
  Office software                   yes      yes      yes      yes
  Accounting / Financing            yes      yes      yes      yes
  Logistics and Operations          no       yes      yes      yes
  Production control                no       no       no       yes
  Calculations                      no       no       yes      no
  Design                            no       yes      yes      yes
  Human Resources Management        no       no       yes      no
Employees using the Internet        no       yes      yes      yes
Use of email:
  Common ISP accounts               no       no       no       no
  Internal corporate                no       no       no       no
  Internal and external             yes      yes      yes      yes
The first interview with the CEO attempted to ascertain some basic characteristics of the firm, regarding processes, organisational structure and the competitive environment in which it operates. The second interview, again with the CEO, was structured in the following parts:

Part I: Description of the firm's strategic process before implanting the current IS:
• Members participating in the process
• Methods employed to undertake the strategic analysis
• Process of strategy formulation
• Systems of strategy implementation
• Control mechanisms
• Role played by the previous information systems, if any, in the strategic process

Part II: Analysis of the information systems of the company:
• Situation of the company before the IS was implanted
• Analysis of the IS effectively implanted

Part III: Analysis of the current strategic activity:
• Description of the strategic process after implanting the IS
• Method for strategic planning of the information systems
The theoretical model proposed was then shown to the CEO, in order to establish similarities and divergences with the IS effectively installed in his firm. Finally, the third interview took place with the manager responsible for the information systems in the firm (the CIO), with the purpose of contrasting his opinions with those of the CEO, as follows:

Part I: Description of the role played by the CIO and his department in the firm
Part II: Description of the information systems of the company
Empirical Results

All the information gathered from these three interviews in each company was aggregated to come up with a set of empirical results. Given the small size of the sample (only four companies), the purpose of this empirical analysis was never to draw a comprehensive and precise picture of the information systems operating in Spanish medium-sized firms, but a much more modest one. This pilot empirical study tries to approximate the role ISs seem to play in a few companies and, especially, to test the degree of alignment or fit between these systems and the proposed theoretical model of concurrency between business planning and IS.

This section contains the main results arising from the empirical analysis, divided into three parts. Part I refers to the strategic planning processes followed by the companies before the introduction of the current IS. Part II gives the most common traits of the IS recently introduced. Part III highlights the changes brought about by the IS in the strategic planning process. This final part attempts to describe the degree of fit between the proposed model and the model effectively used by the companies.
Part I: Before the New IS

The four companies under analysis display an average level of application of information technologies, representative of Spanish medium-sized firms. In particular, they now have their own Computing Services Department, devoted to the technical maintenance of the computing systems installed in the company. Despite the firms being well endowed, with one computer per person, computers are merely used as office tools, without any interconnectivity role. Until then, all the companies had maintained a sort of functional IS covering fields whose information was perceived as strategic for the company, particularly accounting and commercialization. Usually conceived as control instruments, the ISs were subordinated to the different functional subsystems of the firm and did not have an influence on the existing strategic activity procedures. This subordinate role is consistent with the position held by the IS in the organisational chart, usually integrated within the General Administration Department. The CIOs' tasks were usually supervised by the head of this department, the one in touch with the top managers. In this framework, the role the CIOs could play in strategic activity was practically non-existent. Even strategies directly concerning information systems were usually formulated by the general managers, once the strategic business plan had been defined.
However, this situation was not considered optimal by the CEOs. They already recognised the need to obtain high-quality, aggregated information in time for the proper decision-making process. For all the CEOs, this need was the main reason justifying the introduction of new multifunctional ISs. In conclusion, the situation portrayed by the four firms before the new IS was implanted corresponds to a traditional approach to business planning/IS, in which IS planning is completely subordinated to strategic business planning, the IS playing the role of a simple operational control instrument applicable solely in the phase of strategy implementation.
Part II: Characteristics of the New IS Recently Introduced

The interviews with the CEOs confirmed that the ultimate justification for the introduction of a new IS was the growing concern regarding information as a supportive basis for the strategy planning process. Problems of dispersion and incompatibility among different kinds of information had so far hindered their aggregation to further support strategic activity in the firm. The main reasons given for introducing a new and updated IS were:
• The need to homogenize the information produced by all departments
• The need to reduce the costs associated with the development of IS at the department level and the costs of integrating different functional ISs
• The need to reduce the costs of coordination among departments and with customers and suppliers
• To provide tools dynamic enough to support the processes of strategic decision making
• To facilitate organisational learning
All these reasons given by the CEOs are in line with the most up-to-date guidelines for the strategic management of information systems. However, they may be no more than declarations of intent. What really matters is the actual use the managers make of the IS effectively introduced.

The four companies have introduced ISs developed by outside specialist firms, following the classical introduction procedure of adapting the new IS software to the business processes existing in the firms. The new systems have never been introduced with the purpose of promoting the re-engineering of processes leading to a better fit between the internal capacities of the system and the structure of the firm. Therefore, these four companies, and probably the vast majority of firms installing new IS, by focusing on the existing business processes, fail to profit from the IS's capacity to stimulate authentic re-engineering processes, one of the most outstanding and powerful areas in which a new IS can have a significant impact. None of the ISs installed covers the whole range of areas of the firms, but just a few of them, particularly:
• Financing
• Commercial
• Internal and external logistics
• Purchases
• Production (MRP II), in just one company
All the CEOs and CIOs interviewed agree that the new IS should be introduced gradually, first covering basic or strategic areas and progressively extending to new modules as these consolidate. This broadly shared view reflects a sort of conservative attitude among the managers, as they do not seem to believe in the theoretical benefits associated with a complete introduction of the IS on a larger scale.

The new ISs have always been supported by external consulting firms. In the introduction process, the CIOs have usually played a liaison role between the consulting companies and the CEOs, who were firmly involved in this process. In the CEOs' view, the work of the external consulting firms has not been entirely satisfactory, as the process of IS installation has usually taken longer and incurred higher costs than expected. Furthermore, once implanted, these systems did not completely fit the specific characteristics of the host firms. The CEOs believe the origin of this mismatch or lack of fit between the IS and the firm lies in the consultancy company's difficulties in obtaining a broad and precise knowledge of specific key issues concerning the objectives, structure and business processes of the client firm. In our view, the reasons for this mismatch go further. On the one hand, it is part of the CEO's responsibility to make sure the consultants have the necessary information about the strategy and processes of the firm, a responsibility that should not be neglected. On the other hand, the lack of experience and qualifications of IS professionals in Spain, due to the short time they have been performing these tasks, is also a factor. As the introduction of new IS becomes more generalised, and competition among IS software firms increases, we believe these obstacles and problems of internal consistency will tend to disappear.
Part III: After the Introduction of the New IS

After a few years in operation, the overall assessment of the new IS is fairly positive, according to the CEOs. However, there is room for improvement on several issues. First, the fit between the capacities inherent in the IS and the information needs stated by the firm is far from optimal. The CEOs argue that it is difficult to make the right choice of one specific IS among the wide variety available in the market, since the actual capacity and properties of each system are unknown to the future user. They also demand greater flexibility in ISs in order to make them more adaptable to the specific characteristics of the firm. In the CEOs' view, the ISs available in the market tend to be too wide-ranging and general, and lack the versatility to adapt to particular requirements and situations.
As already stated, the four companies under analysis, and probably the vast majority of medium-sized firms in Spain, do not usually follow a strategic planning process like the one presented earlier, when deciding to install new IS and undertake new lines of action orientated by the IS. The selection process is rather precipitate and fails to incorporate a sound strategic analysis. A proper selection process would start with a deep analysis to identify the needs motivating the purchase of the system, as well as the procedures to adjust it to the specific strategic process followed by the firm, and the reengineering processes needed prior to installation and full operation. On the benefit side, CEOs recognise that the quality, level of aggregation and the timely availability of information, the main operational goals of the new IS, have improved significantly. On the other hand, however, the broader goals, associated with the capacity to intervene in the process of strategy formulation, remain unachieved with the new IS. All the CEOs interviewed agree with the principles of the theoretical model of concurrency between strategy and IS presented in this chapter, and recognise its positive implications. They even declare their intention of gradually progressing in the direction indicated by this model.
CONCLUSIONS

This section reviews the main conclusions drawn from the study. First, it is worth recalling the key value that different business administration theories ascribe to information, a critical input in the strategic process with the capacity, under certain conditions, to generate competitive advantages. Accordingly, the systems responsible for gathering, storing, processing and disseminating information to the potential users within the organisation are also destined to play a key role in most business organisations. These systems are known as information systems.

The existing literature places the planning of IS and information technology as one basic task in the management of firms, with the capacity to create competitive benefits, as has traditionally occurred in some large firms. However, the capacity to generate significant benefits from the IS seems lower in smaller firms. Practice shows that introducing a more advanced IS is a difficult and risky task. Some problems usually emerge beforehand, including a wrong definition of the technologies involved in the systems, mistakes in the selection of information, or an inappropriate position of the IS within the organisational structure of the firm. After the introduction of the IS, problems arise from the different perceptions held by the three basic actors involved in the success of the IS: CEOs, CIOs and final users.

Several models or approaches try to explain the linkages between business strategic planning and the IS. The generation of sustainable competitive advantages will depend at least partly on the selection of the most suitable model. Two approaches are the most widespread. The traditional model proposes the subordination
of information systems to business strategy, the IS becoming an operational tool that supports the process of strategic decision making. As stated in the empirical analysis section, this model involves little risk for the company, but also a low strategic profitability. Most CEOs consider this model to be a starting point for properly integrating the IS within the organisational structure. The second model, called the Concurrent Business/IS Strategic Planning Model, defends the development in parallel of the information system and the business strategy, so that both fields are mutually reinforcing. This is a rather complex model which tries to match two fields. The first is the business field, which includes the strategic plan, dependent on the vision of the firm and on the role played by key issues such as innovation, quality, flexibility and market segment; it also entails the organisation and business processes concerning, among others, productivity, entrepreneurial culture and the structure-strategy fit. The second field refers to information technologies, and captures the opportunities opened up by IT in the present competitive environment, as well as the architecture and organisation of the information systems devoted to spotting those opportunities and turning them into competitive advantages.

From the empirical study undertaken in four Spanish medium-sized firms, we can derive several conclusions. Most top managers tend to consider the different models of strategic business planning/IS as sequential stages in the introduction of IS in the company, which need to be covered one after another in order to ensure the correct fit and avoid negative impacts on the structure and the human resources of the firm. The empirical evidence found in the firms reveals some deficiencies, including the coexistence of several insufficiently interrelated ISs, communication problems, a mismatch between the IS and the organisational structure, and disconnection between business strategic planning and IS planning. In this framework, the empirical study shows that the theoretical model proposed is far from being implanted in the firms, even though the CEOs already recognise its broad capacity to stimulate competitive advantages through the use of IS.

To better test these initial results, the pilot empirical study presented here should be repeated in a few years, to examine the evolution of the recently implanted IS and to check whether the path proposed by the theoretical model is being followed or whether, on the other hand, the IS is behaving according to the traditional model. In the four companies under analysis, the new ISs have been running for too short a time to allow definitive conclusions to be drawn. A second assessment in a few years' time would render sounder and more definitive results on the real possibilities and benefits derived from IS in supporting the processes of strategy formulation.
REFERENCES

Andreu, R., Ricart, J. R. and Valor, J. (1996). Estrategia y Sistemas de Información. Madrid: McGraw-Hill.
Andreu, R., Ricart, J. R. and Valor, J. (1997). La Organización en la Era de la Información. Madrid: McGraw-Hill.
Antill, L. and Clare, C. (1991). Office Information Systems. London: Blackwell.
Bloch, M. and Pigneur, Y. (1998). The extended enterprise: A descriptive framework, some enabling technologies and case studies in the Lotus Notes environment. Journal of Strategic Information Systems.
Callon, J. (1996). Competitive Advantage Through Information Technology. New York: McGraw-Hill.
Cash, J., McFarlan, F. W. and McKenney, J. (1990). Gestión de los Sistemas de Información de la Empresa. Los Problemas que Afronta la Alta Dirección. Madrid: Alianza Editorial.
Cornellá, A. (1994). Los Recursos de la Información. Madrid: McGraw-Hill.
Cuervo, A. et al. (1996). Introducción a la Administración de Empresas. Madrid: Civitas.
Emery, J. C. (1990). Sistemas de Información Para la Dirección: El Recurso Estratégico Crítico. Madrid: Díaz de Santos.
Hax, A. and Majluf, N. S. (1991). The Strategy Concept and Process. New York: Prentice-Hall.
Keen, P. and Scott-Morton, M. (1978). Decision Support Systems: An Organisational Perspective. Reading, MA: Addison-Wesley.
Mélèse, J. (1979). Approches Systémiques des Organisations. Paris: Hommes et Techniques.
Orellana, W. and Torres, A. (1997). La información en la alta dirección. Una actualización de los EIS a las nuevas oportunidades de las tecnologías de la información. Paper for the XI AEDEM Congress, Lleida.
Parker, M., Trainor, C. and Benson, R. (1989). Information Strategy and Economics. Englewood Cliffs, NJ: Prentice-Hall.
Porter, M. (1985). Competitive Advantage: Creating and Sustaining Superior Performance. New York: The Free Press.
Rockart, J. F. and Treacy, M. E. (1982). The CEO goes on-line. Harvard Business Review, January-February.
Sprague, R. H. (1980). A framework for the development of decision support systems. IS Planning for Decision Making. Van Nostrand Reinhold.
Chapter IV
Integrated QFD and Knowledge Management System for the Development of Common Product Platform

Walter W. C. Chung, Colin K. S. Tam and Michael F. S. Chan
The Hong Kong Polytechnic University, Hong Kong
ABSTRACT

This paper provides a framework to implement quality function deployment (QFD) with knowledge management (KM) in the form of an integrated quality and knowledge management system (IQKS). This inter-organizational information system enables the sharing of information among customers, manufacturers and suppliers in the new product development process. It links various strategically independent and autonomous business entities together in a common product platform (CPP). Up to now, little has been published academically on sharing vital information that involves giving away a firm's bargaining position regarding customers, product specifications and process requirements. The common product platform enables an innovative organization to persuade like-minded parties to come together and open up useful and relevant information to all parties interested in creating a new level of competitive advantage in assessing the dynamics of market realities. They share knowledge and learn to support new organizational capabilities to leverage information technology that incorporates market knowledge, design knowledge, process knowledge and production knowledge.
INTRODUCTION

In the new economy, customers, manufacturers and suppliers dispersed across various geographical locations use information technology to talk to each other to coordinate engineering design and the flow of materials through different manufacturing operations. Hence, a product can be made through various processes across organizations, at different geographical locations and utilizing different logistics for delivery. This enables the entire operational process to be linked up across organizations for customization by the participating firms. A new level of competitive advantage may be achieved when the operational processes of different firms can be adapted to the dynamic market environment. Such an advantage can be crucial to the development of new products.

A product is manufactured from subassemblies and parts that can be put together. Each subassembly is represented by a bill of materials (BOM). These can be deployed in a common product platform (an inter-organizational information system) by partnering firms linked together with an information technology network. Coordination, marketing, design, engineering and production control can then be more effective.

This chapter is divided into four sections. The first section addresses the view on contract manufacturing and suggests how a manufacturing organization can leverage IT to compete in the information age. In the second section, relevant literature is reviewed in the area of the extended enterprise; it shows that knowledge has become an important consideration in developing an extended enterprise architecture. In the third section, a conceptual framework of extended enterprise architecture is derived from observations of contract manufacturing; it reflects the necessary components and the linkage of quality function deployment (QFD) and a knowledge management system (KMS) to form a framework for developing an integrated inter-organizational information system. Finally, a case study is used to illustrate how the conceptual framework can be operationalized in the form of a common product platform making use of bill-of-materials (BOM) modules.
LITERATURE REVIEW

Contract Manufacturing

Contract manufacturing is more than a firm sub-contracting out its manufacturing process; it is a strategy for positioning the firm to provide a service of value in the future. A firm operating with a contract manufacturing strategy is adopting a management paradigm that has far-reaching implications in terms of ownership, management succession planning and resource deployment. Contract manufacturing is a phenomenon occurring in various manufacturing industries. It extensively aligns partners for collaboration in serving the customers. The firm in focus will only keep those processes that it is strong in and outsource the rest of the processes to suppliers or competitors. Without a good understanding of its implications for the nature of the business, and without investments in appropriate technology, knowledge and research, manufacturing firms are likely to miss business opportunities
in the ventures that create wealth and create customers for growth and profits. Currently, views on contract manufacturing are diversified. Here we summarize a few points that are purported to be the key enablers of contract manufacturing. They are:
1. A changed view of international production strategy, from "stand-alone" operations to "deep integration" of operations with partners in a supply chain.
2. The increasing separation between the supply side and demand side of business: e.g., supply-side consolidation for economies of scale; demand-side fragmentation enabled by e-commerce.
3. Knowledge separation enabled by the digital factory concept. Manufacturing knowledge can be separated from design knowledge, or marketing knowledge can be separated from design knowledge.
4. The transition from a "command and control" style of management to a "one-stop shopping" self-adaptive supply chain style of management to enable customization of value.
The Change of International Production Strategy

A "stand-alone" operation, kept for control, leverages a level of technology investment that encourages the moving of production facilities to a low-wage manufacturing environment for competitiveness. (Most Hong Kong manufacturing companies relocated their production bases to Mainland China after China announced its open door policy in 1978.) A "deep integration" operation, by contrast, is about developing an integrated production system that supports dissecting the different steps of a production process and having different parts undertaken by different countries according to their relative cost and logistical advantages (UNCTAD, 1993, 1999).
Increasing Separation Between Supply and Demand Side

Advances in information technology and the networked economy, the development of electronic commerce and the proliferation of B2B online exchanges further separate the supply and demand sides of many industries. On the demand side, there is a rise in customer expectations. Customer relationships, or learning relationships, are being developed to retain future business value and growth (Peppers & Rogers, 1997). On the supply side, the pressure to achieve economies of scale and scope leads to changes in the traditional industry structure. Many companies are now consolidating to realize economies of scale and scope and to recoup their investment in business infrastructure (Hagel & Singer, 1999).
The Observation of Knowledge Separation

In many industries, design knowledge is being separated from manufacturing knowledge. This knowledge was once difficult to articulate but can now be made explicit, with substantial organizational implications. Manufacturing organizations can outsource even more to a network of suppliers (Magretta, 1998a).
Central to this development are "connected assets" (Blitz, 1999). Today, contract manufacturing thrives on this trend, with the development of sub-system templates and the setting up of a common product platform that can be readily integrated at the customer's request (Meyer & Lehnerd, 1997). The ability to develop and quantify these assets is the key to future business development.
The Self-Adaptive Supply Chain Style of Management

The ability of a team of organizations to work as an extended enterprise through dynamic partnerships is the key to future success. The existence of a partnership ties the stakeholders of the organizations together in developing commitments and sharing a risk-taking culture. Corporate competencies will have to be integrated to address changing market opportunities. The cost of maintaining a set of vertically integrated competencies is prohibitive for many businesses (Agility Forum, 1997). The ability to interface with others within a value net is the key for future success. Table 1 illustrates some of the contract manufacturing activities recently found in various manufacturing organizations and industries.

The reported contract manufacturing scenarios reveal that manufacturers organize themselves to leverage IT to share information and compete more effectively. The reported cases have led manufacturing management into a new era of competition. The new competition is not based on economies of scale; much depends instead on how quickly enterprises are aligned together to come up with new business processes tailored to the market requirement. This resembles a business model of mix and match in a real-time exchange market. Thus, contract manufacturing here is defined as a partnership among manufacturing organizations, customers and suppliers working together as an extended enterprise using IT links. Such an alignment model requires the support of an extended enterprise architecture.

Table 1: A summary of recent contract manufacturing activities

2001. Ericsson and Flextronics (Gartner, 2001). Outsourcing: the manufacturing of mobile phones is outsourced to Flextronics by transferring plants and employees from Ericsson.
2000. Motorola and Flextronics (Thurm, 2000). Outsourcing: Motorola hires Flextronics to manufacture consumer electronics. Alliance: cross-investment in stake-holdings.
2000. Arima and Compaq, notebook contract manufacturer in Taiwan (Baum, 1999). Partnering: a notebook research center assists the manufacturing of notebook computers.
1999. Nortel Networks (Royal, 1999). Outsourcing: outsourcing the manufacturing of telecommunication equipment.
1999. Sara Lee (Royal, 1999). Outsourcing: selling factories and outsourcing the manufacturing of food products; concentrating on servicing the brand name.
Modeling the Extended Enterprise

An extended enterprise can be described as a partnership between enterprises whose goal is to achieve competitive advantages by forming formal linkages (contracts) and maintaining cooperation distributed throughout the network. In the extended enterprise, the collaborating firms are encouraged to focus on their core competencies (Szegheo & Andersen, 1999). With the partnering elements becoming more important, extended enterprise modeling seeks a holistic view to avoid the optimization of one partner at the expense of the others. Activities are used as the basic elements for modeling an extended enterprise. The challenge is that the activities are dispersed across the network, crossing organizational boundaries, making information collection and modeling very difficult. IT linkages will have to be deployed more systematically to collect the necessary information to enable such a model to become a reality.
CONCEPTUAL FRAMEWORK

A knowledge-based view is proposed as the basis for the formulation of the extended enterprise architecture, hence the term "knowledge-based extended enterprise architecture." The idea is to encourage partnering in the extended enterprise to form the basis of a knowledge-sharing practice. Four core components of the extended enterprise architecture have been identified: market knowledge, design knowledge, process knowledge and production knowledge.
Market Knowledge

Market knowledge, such as customer lists, market research, and data on service or demand patterns, is considered an important element for the existence of an extended enterprise. Market knowledge serves as the primary input to the extended enterprise and is translated into specific customer needs, perceptions and preferences. It is a conduit that enables the extended enterprise to interact with the market in order to reconfigure itself for the market. Sometimes, discovering hidden value propositions in the market can help the extended enterprise move away from manufacturing a "me-too" product.
Design Knowledge

Design knowledge is associated with research and development efforts. It embraces the technological content of the product, essential to a business, and is supplied by knowledge workers within the participating firms of the extended enterprise. Design knowledge is very important in supporting the product requirements derived from market knowledge. In many circumstances, combining
market and design knowledge uncovers hidden value propositions for new customers. Early involvement with customers to design a new product can be beneficial to reduce design lead-time. Thus, participating extended enterprise organizations that contribute or invest in activities to make instrumental goods early in new product development can improve overall extended enterprise performance.
Process Knowledge

Process knowledge is a derivative of manufacturing knowledge. It is concerned with workflow management, capacity management, logistics arrangements and scheduling arrangements. The knowledge workers contributing this kind of knowledge may be sourcing managers, supply chain managers and schedulers. They ensure that the order fulfillment process is in place for delivery purposes. The knowledge in this area is generated and acquired through extensive work and know-how in infrastructure management.
Production Knowledge

Production knowledge is also a derivative of manufacturing knowledge. It is associated with the production of physical goods. Investment in manufacturing technologies and management helps develop such knowledge in a manufacturing organization. Smart use of manufacturing technologies and management of knowledge work gives insights into the development of instrumental goods that can be deployed for improving realization activities. Performance is usually associated with volume, quality and cost. When material costs and overhead costs represent a large proportion of overall product costs, production knowledge is essential in trying to keep the cost structure under control. Production knowledge can combine with other types of knowledge in the extended enterprise architecture to improve performance. For example, early supplier involvement enables the sharing of knowledge by both manufacturers and suppliers, and they can benefit by shortening the time to market.
Common Product Platform – Embedding Knowledge in Product

A common product platform is proposed as a vehicle to establish a knowledge-sharing routine within the extended enterprise. It provides participants with a reference model of a product that can be modified by the different partners of the extended enterprise architecture. Figure 1 shows the concept of a common product platform supporting the use of different knowledge to customize a product. As a knowledge-sharing routine, the common product platform allows knowledge from different areas to interact and create new knowledge. For example, design knowledge can interact with manufacturing knowledge in a way that allows the product to be designed more effectively for production.
Figure 1: The concept of a common product platform (CPP): market knowledge (customer requirements, market trends), design knowledge (product design, product performance), process knowledge (delivery, logistics, order fulfillment, scheduling) and production knowledge (materials cost, capacity planning, etc.) all feed into the common product platform
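To make the idea of embedding knowledge in the product more concrete, the following minimal Python sketch models a common product platform as a BOM tree whose nodes carry annotations drawn from the four knowledge areas. The class names, fields and sample data are illustrative assumptions of ours, not a description of MFC's actual system or of any implementation given in the chapter.

```python
from dataclasses import dataclass, field
from typing import List

# The four knowledge areas of the extended enterprise architecture.
KNOWLEDGE_AREAS = ("market", "design", "process", "production")

@dataclass
class KnowledgeItem:
    area: str          # one of KNOWLEDGE_AREAS
    author: str        # partner or knowledge worker who contributed it
    content: str       # e.g., a customer requirement or a process constraint

@dataclass
class BOMNode:
    part_number: str
    description: str
    children: List["BOMNode"] = field(default_factory=list)
    knowledge: List[KnowledgeItem] = field(default_factory=list)

    def annotate(self, area: str, author: str, content: str) -> None:
        """Attach a piece of knowledge to this subassembly."""
        if area not in KNOWLEDGE_AREAS:
            raise ValueError(f"unknown knowledge area: {area}")
        self.knowledge.append(KnowledgeItem(area, author, content))

    def collect(self, area: str) -> List[KnowledgeItem]:
        """Gather all knowledge of one area across the whole platform."""
        items = [k for k in self.knowledge if k.area == area]
        for child in self.children:
            items.extend(child.collect(area))
        return items

# A miniature platform: an electric motor with one subassembly.
motor = BOMNode("EM-100", "Electric motor platform")
rotor = BOMNode("EM-100-R", "Rotor subassembly")
motor.children.append(rotor)

motor.annotate("market", "sales", "Customer expects high print speed at medium volume")
rotor.annotate("production", "plant engineer", "Existing winding line limits rotor diameter to 40 mm")

for item in motor.collect("production"):
    print(item.author, "->", item.content)
```

In this reading, a partner could query the platform for all production constraints before committing to a design change, which is one simple way the platform lets knowledge from different areas interact.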
Develop a System to Leverage the "Voice of Customers" with Knowledge Management

Based on the common product platform, a system integrating quality function deployment (QFD) with a knowledge management system (KMS) is to be developed. QFD is a tool for enterprise modeling that allows internal as well as external staff to communicate and contribute knowledge in a systematic manner. It is a useful modeling approach for discovering the interrelationships among customer demands, engineering requirements and manufacturing processes. Central to QFD is the house of quality, which defines the WHAT and the HOW as part of the relationship. The WHATs and HOWs are captured by a knowledge management system (KMS). Conceptually, the captured interrelationships can be shared and reused. The knowledge management component facilitates sharing and reuse by providing functions such as capture, filtering, storage, distribution and application. Figure 2 illustrates the concept of the Integrated QFD and KMS (IQKS).

To operationalize the common product platform, the BOM can be built into the different phases of quality function deployment to support the phased operation of the QFD process. The four phases of QFD are linked together for information and through the sharing of embedded knowledge. The KMS provides the knowledge repository to share information regarding the captured QFD interrelationships. The migration from HOW to WHAT, and the translation of information from one phase to another in QFD, represents the transformation of knowledge. The information collected and processed serves as a common language for the exchange of product and process information. As a result, guiding the collection of useful information (transferred information) gradually builds up the knowledge management system. The knowledge layer is an open-type pool, which allows users, through the user interface, to collect and retrieve knowledge for specific applications. The layer captures the relationships within the QFD matrix. The accumulation of this captured knowledge is stored in the knowledge layer embedded within the KMS.
Figure 2: The common product platform architecture (IQKS) supported by QFD and KMS. The four QFD phases (Phase I: product planning, relating customer wants from customer surveys to technical characteristics; Phase II: design deployment, relating technical characteristics to part characteristics; Phase III: process planning, relating part characteristics to process parameters; Phase IV: process/quality control planning), each defined by WHAT, HOW, HOW MUCH and relationship matrices, are linked by transferred and transformed knowledge through a QFD+KMS integration engine to the information layer and knowledge layer of the knowledge management system, which users access through QFD and KMS interfaces
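As a rough illustration of how the WHAT/HOW relationships of one QFD phase might be captured into such a knowledge layer and cascaded into the next phase, consider the sketch below. It is only our schematic reading of the architecture in Figure 2; the class names, the 1-3-9 relationship weights and the keyword retrieval are illustrative assumptions rather than the actual IQKS implementation.

```python
from typing import Dict, List, Tuple

class QFDPhase:
    """One house-of-quality matrix: WHATs (rows) against HOWs (columns)."""

    def __init__(self, name: str, whats: List[str], hows: List[str]):
        self.name = name
        self.whats = whats
        self.hows = hows
        # (what, how) -> relationship strength, e.g. 1 = weak, 3 = medium, 9 = strong
        self.relationships: Dict[Tuple[str, str], int] = {}

    def relate(self, what: str, how: str, strength: int) -> None:
        self.relationships[(what, how)] = strength

    def cascade(self, next_name: str, next_hows: List[str]) -> "QFDPhase":
        """The HOWs of this phase become the WHATs of the next phase."""
        return QFDPhase(next_name, whats=self.hows, hows=next_hows)

class KnowledgeLayer:
    """Open pool that accumulates captured relationships for later reuse."""

    def __init__(self):
        self.records: List[dict] = []

    def capture(self, phase: QFDPhase) -> None:
        for (what, how), strength in phase.relationships.items():
            self.records.append(
                {"phase": phase.name, "what": what, "how": how, "strength": strength}
            )

    def retrieve(self, keyword: str) -> List[dict]:
        return [r for r in self.records
                if keyword in r["what"] or keyword in r["how"]]

# Phase I: product planning (customer wants -> technical characteristics).
phase1 = QFDPhase("Product Planning",
                  whats=["quiet operation"],
                  hows=["rotor balance tolerance"])
phase1.relate("quiet operation", "rotor balance tolerance", 9)

# Phase II: design deployment (technical characteristics -> part characteristics).
phase2 = phase1.cascade("Design Deployment", next_hows=["shaft runout"])
phase2.relate("rotor balance tolerance", "shaft runout", 3)

kms = KnowledgeLayer()
kms.capture(phase1)
kms.capture(phase2)
print(kms.retrieve("rotor"))
```

The cascade step mirrors the migration from HOW to WHAT described above: the HOWs of one phase become the WHATs of the next, and every captured relationship remains available in the knowledge layer for reuse.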
CASE STUDY

Background

The motor fabrication company (MFC) is one of the world's largest component manufacturers. It manufactures a range of electric motor products to cater for industries such as automotive, home appliances, personal goods and consumer electronics. Its existing customers are associated with well-known brands whose products are distributed worldwide. MFC engages with its customers extensively in contract manufacturing initiatives such as outsourcing, partnering and alliances. MFC provides customer support to serve the needs of its customers in new product development. The customers are getting enhanced services from MFC rather than the product itself; these services include co-design, R&D, engineering, logistics, etc. MFC customers do not need to be concerned with supply chain management, procurement and shipment logistics of the motor products, because all these functions are taken care of by MFC.

In many instances, there is a range of similar products. For example, MFC manufactures the moving components for the consumer printer market. MFC assesses each product within the market based on each customer's product mapping. MFC gains competitive advantage through providing better services to customers.

Figure 3: Customer-product mapping in the consumer printer market: the level of service (for example, print quality and speed) is plotted against volume, with customers A, B and C occupying different positions

By establishing the position of the customer in the product map (Figure 3), the
company interacts with customers and gives tailored services of attractive value. Interacting with customers at MFC is a very complex task. It involves contributions from numerous parties, from sales, research and development, costing, engineering and procurement to the external suppliers. Figure 4 shows the complexity involved.

Figure 4: Organising to provide a service of value to the customers: a cross-functional team links the customer and sales (product configuration, product requirements), the costing group (product cost estimation), project engineering (product development), the testing and R&D group (product testing and technical R&D), tooling engineering (tools and components development, fixture development), scheduling (schedule arrangement), process engineering (process design, product and process specification), suppliers/warehouse (inventory, material supply), manufacturing development and production (production planning, direct production)

Interacting with the customers and providing support by answering their queries involves the participation of executives in the organization engaged in knowledge-intensive work. The interaction pattern typically runs from receiving the "customer voice" to the actual commitment to production of electric motors. It is not limited to communication with customers but extends to a range of parties, including suppliers and logistics service providers. Interactions with the customers at this stage are complicated, and the information flow has a significant impact on most manufacturing organizations. Success in the venture very much depends on leveraging the use of knowledge for a speedy response. It is critical to embed information into the product. By offering services of value in the value chain, each customer concern may be addressed differently. Products that do not appear to be of high value to one customer may appear to be highly valued by another. Furthermore, a previous customer may provide data that is misleading for MFC in handling other customers. Since the number of parts and associated fixtures proliferates, huge coordination efforts are required. The erosion of value-adding activities creates difficulties for MFC in integrating its performance with the value chain. These can be seen as the limiting factors for higher performance from the perspective of customers. There is an urgent need to streamline the information flow for managing the interactions with customers.
A Knowledge Management Process Model

An integrated quality and knowledge management system (IQKS) is conceived at MFC to link all the knowledge silos with the bills-of-materials (BOM) structure of electric motors. The knowledge silos are represented by the pools of knowledge accumulated as a result of specialization in sales and marketing, product design, process planning and production. Each knowledge silo corresponds to a phase of quality function deployment (QFD). When arranged in this way, the knowledge silos enable knowledge input into the WHAT and HOW of the QFD tool. The BOM information model combines the data in a structure for sharing. The BOM information is embedded into a knowledge management system to support the four core activities of value creation: 1) information and knowledge acquisition; 2) knowledge filtering; 3) knowledge storage; and 4) knowledge distribution (see Figure 5). The BOM information models are run with the support of databases, intranet, extranet and Internet. The purpose is to physically enable the communication of more accurate customer requirements, including target price, development cost and product attributes.
Figure 5: A learning model to link up knowledge silos. The model links the sales (QFD phase I), design BOM (QFD phase II), process BOM (QFD phase III) and production BOM (QFD phase IV) silos through a cycle of information and knowledge acquisition, knowledge filtering, knowledge storage, knowledge distribution and knowledge application, through which new knowledge is built.
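As a rough illustration of the four core activities only, the following sketch models a knowledge item, keyed to a project and QFD phase, moving through acquisition, filtering, storage and distribution. All names and the trivial filtering rule are hypothetical assumptions made for illustration; this is not MFC's actual implementation.

from dataclasses import dataclass, field

# Hypothetical illustration of the four core value-creation activities
# (acquisition, filtering, storage, distribution) built around BOM-linked items.

@dataclass
class KnowledgeItem:
    project_no: str   # ties the item to a BOM / QFD project
    qfd_phase: int    # 1 = sales, 2 = design, 3 = process, 4 = production
    content: str
    source: str       # e.g. "internal:design" or "external:customer"

@dataclass
class KnowledgeSilo:
    name: str
    items: list = field(default_factory=list)

def acquire(raw_inputs):
    """Activity 1: collect information and knowledge from internal and external sources."""
    return [KnowledgeItem(**raw) for raw in raw_inputs]

def filter_items(items, min_length=20):
    """Activity 2: keep only entries judged useful (a deliberately trivial rule here)."""
    return [item for item in items if len(item.content) >= min_length]

def store(items, silos):
    """Activity 3: file each item into the silo for its QFD phase."""
    for item in items:
        silos[item.qfd_phase].items.append(item)

def distribute(silos, qfd_phase, project_no):
    """Activity 4: make stored knowledge retrievable by phase and project number."""
    return [item for item in silos[qfd_phase].items if item.project_no == project_no]

if __name__ == "__main__":
    silos = {phase: KnowledgeSilo(name) for phase, name in
             {1: "sales", 2: "design BOM", 3: "process BOM", 4: "production BOM"}.items()}
    raw = [{"project_no": "P-100", "qfd_phase": 2, "source": "internal:design",
            "content": "Rotor lamination thickness chosen to meet the customer noise target."}]
    store(filter_items(acquire(raw)), silos)
    print(distribute(silos, qfd_phase=2, project_no="P-100"))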
In Figure 2, the architecture of the IQKS system consists of the information layer and the knowledge layer. The information layer has BOM databases for internal and external users. The internal BOM databases collect the product design phase information, which is captured from each QFD phase migration under a FORMAL practice. The external BOM databases differ from the internal ones in that they are used to collect any INFORMAL information, which at the moment of collection may not be very useful to anyone but its owner but might make a difference in the long term. The users can access the information using simple searching and browsing criteria. The knowledge layer is a combination of information, data and graphical display. The accumulated captured knowledge is stored under the knowledge layer. This knowledge layer is an open-type pool, which allows user interface involvement to collect and retrieve knowledge for specific applications. A pre-defined search criterion is created to capture queries. The system is linked with internal departments, customers and suppliers. Users are granted different access rights depending on their status. The information layer allows users with a knowledge perspective to add to the BOM databases (under the QFD structure) for knowledge capture and retrieval. The QFD structure enables the transfer of information on the nature of, and the underlying links between, WHAT and HOW. At MFC, the implementation of an integrated QFD knowledge system (IQKS) represents an achievement in leveraging knowledge-based manufacturing technologies to bring the needed information into the early stage of internal business operations. The implementation involves the following steps:
a. To link up each QFD phase.
b. To link up every individual or department and show the detailed job progress, under a common information technology platform.
c. To capture the employees’ routine deliverables and output to fulfill knowledge acquisition.
d. To share useful information and knowledge for corporate growth.
e. To provide freedom of searching under a web-based infrastructure.
The IQKS defines the tasks that a knowledge worker needs to perform at each phase of the QFD. These tasks are, as mentioned previously, product planning (house of quality), design deployment (parts deployment), manufacturing planning (process planning) and production planning (production operations planning) in relation to the customer’s wants (requirements). The WHAT and HOW govern the formalization of knowledge by introducing a common platform to synchronize the use of information. These represent the knowledge that can be captured and disseminated as the means of future production. The means of production also serves as the manufacturing input to internal business operations in order to make the operation more responsive to the external environment. The knowledge management infrastructure is to be overlaid on the existing corporate environment. It utilizes the IT network to link up the different personnel, including sales, design engineers, process engineers and production engineers (see Figure 6).
Figure 6: Knowledge management infrastructure model
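The split described above between a FORMAL internal information layer and an INFORMAL external one, topped by a knowledge layer that grants different access rights by user status, can be pictured roughly as follows. The database names, roles and access rules here are invented purely for illustration and are not the rights actually configured at MFC.

# Illustrative sketch of the information layer / knowledge layer split with
# simple role-based access; database names, roles and rules are assumptions.

INTERNAL_DBS = {"sales_bom", "design_bom", "process_bom", "production_bom"}   # FORMAL capture
EXTERNAL_DBS = {"customer_requirements", "supplier_notes"}                    # INFORMAL capture

ACCESS = {
    "engineer": INTERNAL_DBS | EXTERNAL_DBS,
    "sales": {"sales_bom", "customer_requirements"},
    "customer": {"customer_requirements"},          # Internet portion only
    "supplier": {"supplier_notes", "process_bom"},
}

def can_read(role, database):
    return database in ACCESS.get(role, set())

def search(records, role, **criteria):
    """Knowledge-layer search: apply simple browsing criteria, then filter by access rights."""
    hits = []
    for record in records:
        if not can_read(role, record["database"]):
            continue
        if all(record.get(key) == value for key, value in criteria.items()):
            hits.append(record)
    return hits

if __name__ == "__main__":
    records = [
        {"database": "design_bom", "project_no": "P-100", "note": "stator winding specification"},
        {"database": "supplier_notes", "project_no": "P-100", "note": "copper wire lead time"},
    ]
    print(search(records, role="sales", project_no="P-100"))     # filtered out by rights: []
    print(search(records, role="engineer", project_no="P-100"))  # both records returned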
A Knowledge Management Prototype This section describes the user interface of the Web-based knowledge center. The internal portion (intranet) is rolled out first to structure the group use of the knowledge framework, thus allowing the QFD phase information (Figure 2) to be shared and distributed on a daily basis. The next step is to establish the web-based knowledge center via the Internet portion. The core of the prototype was produced under the Lotus Domino Notes environment, as shown in Figure 7. While Lotus Domino Notes enables data to be captured under an Internet framework, the user interface of the IQKS system includes certain database classifications with respect to their usage under the QFD phases. Figure 7 shows the basic database icons, which form the basic components of the IQKS system and are defined below. Different manufacturers (given variations in product nature) could have more, fewer or different icons, but the databases must sit under the common BOM platform, monitored and controlled by a centralized system (IQKS: CAT-D).
Figure 7: The IQKS system interface on Lotus Notes
CAT-D CAT-D is the component architecture technology database. It serves as the global common-BOM platform to link the communications of each QFD phase and is a tool for global transparency of information. Through the input from various persons and departments, information can be shared and distributed as knowledge through this layer. CAT-D is divided into internal and external portions, supported by the intranet (organization users) and the Internet (customers) respectively.
E-mail Policy The E-mail Policy states and clarifies company policy, as well as security warnings, to all users, internal and external to MFC.
Bulletin Board This is essential for cross-organization communication (across geographical and business unit boundaries), as well as for capturing and sharing external information. The board is pre-defined by an expert team, which stipulates what kinds of information are allowed to be placed in the system. Since employees and customers must already “think” (filter) about what should be put on the board, the stored information is filtered and of value.
Discussion Database The Discussion Database acts as an Information Record Buffer to temporarily collect and store communication information such as meeting minutes. Users may write down discussion about anything, but IQKS classifies this as vast unstructured information and stores it for general discussion only. The contents are cleaned out every two months to free up server space. Any information considered valuable for permanent storage is transferred into the Bulletin Board for corporate distribution.
Standard Product Standard Product (motor) holds the records of the company’s standard products. It is linked with the Internet for customer selection or for placing an order. Considered to be the primary form of electronic business over the web, it facilitates the transactions of “commodity” motor products of MFC. It also acts as a product catalog. The “hit rate” is recorded for sales and marketing, in support of customer relationship management, to analyze interest and demand from the real-time market.
Technical Support Technical Support directs question-and-answer services from internal and external customers. This database is linked with the standard product databases, because a customer who has selected a preferred product and would like further customization will want direct confirmation of the technical possibilities. It is part of introducing engineering services to the front line of customer service.
Engineering Database Library Under the CAT-D framework, Engineering Database Library captures and stores the engineering information such as product testing record and characteristics. It facilitates the empowerment of internal staff by sharing critical engineering knowledge.
Raw Material Specification This is a library of raw material specifications and is shared among staff at MFC. The main users of this library include the supply chain department, purchasing department, engineering department, R&D, and design and product engineers. Each of them uses the information from a very different perspective.
DC Sample Specification This stores product sample records, BOM, specifications and quality test results. Engineers routinely input their design specifications and the BOM information. Product-testing workers and testing engineers input the quality test records. The information is combined in such a way that any new knowledge arising from the know-how captured in routine work can be articulated through pre-defined information transformation criteria.
Fixture Library The Fixture Library facilitates process planning through hyperlinks to the engineering database that contains process details such as fixture design and settings. The engineering database is dynamically linked with other databases to derive the information stored under the fixture library.
Production Specification Production Specification facilitates production and workflow design by recording the production line layout, workflow design, existing line utilization rate and capacity.
Customer Situation Analysis Customer Situation Analysis facilitates quality control. It captures records of product complaint history and the subsequent quality improvement analysis, along with the failure reports. From a learning perspective, group sharing of failure know-how from quality audits is preferred to traditional handling by a single person.
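Taken together, the databases above amount to a catalogue of typed stores plus a few housekeeping rules, such as the two-month purge of the Discussion Database and the promotion of valuable threads to the Bulletin Board. The sketch below is a hypothetical rendering of that catalogue and rule, not the actual Lotus Notes design; all field names are invented.

from datetime import datetime, timedelta

# Hypothetical catalogue of the prototype's databases and the Discussion
# Database housekeeping rule described above; names and fields are illustrative.

DATABASES = {
    "CAT-D":                        "global common-BOM platform linking the QFD phases",
    "E-mail Policy":                "company policy and security warnings",
    "Bulletin Board":               "pre-filtered cross-organization announcements",
    "Discussion Database":          "temporary buffer for meeting minutes and ad hoc notes",
    "Standard Product":             "catalogue of commodity motors, order entry, hit rate",
    "Technical Support":            "Q&A linked to standard products",
    "Engineering Database Library": "test records and product characteristics",
    "Raw Material Specification":   "shared material specifications",
    "DC Sample Specification":      "sample records, BOM, quality test results",
    "Fixture Library":              "fixture design and setting details",
    "Production Specification":     "line layout, workflow, utilization, capacity",
    "Customer Situation Analysis":  "complaint history and failure analysis",
}

RETENTION = timedelta(days=60)   # Discussion Database is cleaned roughly every two months

def housekeep(discussion_entries, now=None):
    """Purge stale discussion entries; promote ones flagged as valuable to the Bulletin Board."""
    now = now or datetime.now()
    keep, bulletin = [], []
    for entry in discussion_entries:
        if entry.get("valuable"):
            bulletin.append(entry)              # permanent storage for corporate distribution
        elif now - entry["created"] <= RETENTION:
            keep.append(entry)                  # still within the two-month window
    return keep, bulletin

if __name__ == "__main__":
    entries = [
        {"created": datetime(2002, 1, 5), "text": "kick-off minutes", "valuable": False},
        {"created": datetime(2002, 3, 1), "text": "root cause of stator failure", "valuable": True},
    ]
    print(housekeep(entries, now=datetime(2002, 3, 10)))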
The IQKS Phases Sales staff, agents or customer service can use the IQKS system to input customer requirements into the CAT-D system (through the Internet portion). The “hit rate” for specific items and products reflects market trends and demands and fulfills QFD phase I, where information is retained for future searching and reference. Senior management (sales & marketing, business managers) derives appropriate actions from it.
Figure 8 shows internal staff accessing customers’ historical inquiries through the internal Lotus Notes system. The information regarding customer requirements is stored as a list of requirements (LOR), which defines the customer’s specifications of the motor product. The LOR combines information from the database with various documentation, including drawings and associated graphical presentations of performance. The Lotus Notes system supports these documents with its document management system. In the product design phase (QFD phase II), the LOR (or sales BOM) from phase I is linked to the product design BOM using the same project number. Basically, the LOR is translated into the product design BOM by a group of experts and stored under a database accessible in the Lotus Notes environment (see Figure 9). This BOM is able to match and merge with the product information; Figure 10 exemplifies the result. Searching and selection are supported by product model, component part name or part number; referring to these keys allows the associated information to be extracted. This directly shows whether the product design is capable of fulfilling the customer requirements. With the Lotus Notes document-sharing environment, file attachments and graphical presentation are technically feasible. The following scenario demonstrates what key deliverables can be provided by showing what was known and what was unknown on a daily basis.
Figure 8: IQKS phase I
Figure 9: The translation of LOR to design BOM, QFD phase II
1. A standardized terminology of products and product attributes (process/production, planning quality) can be mapped out, particularly for products of a similar nature (see Figure 10). This serves as a common basis of knowledge sharing as part of the common product platform. Design lead time can be reduced because inexperienced staff can pick up the product design by using the appropriate information. In short, the common basis of knowledge enables self-learning to occur among inexperienced staff.
2. What is known can be captured explicitly under a structured format. Design processes on product characteristics, causes and solutions can also be captured.
Figure 10: Integrated LOR with design BOM to facilitate learning
Figure 11: Process BOM
The process BOM, equivalent to QFD phase III, can be accessed under the Lotus Notes environment (see Figure 11). The information is stored in a database dynamically linked with the design BOM information. The project numbers, product models and components all serve as the common searching criteria. Documents regarding fixtures, machine settings, maintenance records and application records are shared within the document management system of Lotus Notes. The last phase, QFD phase IV, gives the production BOM that is linked to the IQKS architecture. It deliberately combines quality control and product specification information as production and quality control information. The commonly used searching criteria are also supported by Lotus Notes. Figure 12 illustrates the production BOM interface under the Lotus Notes environment. Traditional practice may only share pieces of information, and it is up to the staff to put them together in a meaningful way. Performance information can be presented in a graphical format important to the user.
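The linkage between phases rests on one simple idea: the LOR (sales BOM), design BOM, process BOM and production BOM all carry the same project number, and any of them can also be searched by product model, part name or part number. A minimal sketch of that join is given below; the field names and sample records are invented for illustration.

# Minimal illustration of joining the four BOM views on a shared project number
# and searching across phases by part number (field names and data are invented).

SALES_BOM = [  # QFD phase I: list of requirements (LOR)
    {"project_no": "P-100", "model": "DC-36V", "requirement": "low noise, 36 V supply"},
]
DESIGN_BOM = [  # QFD phase II
    {"project_no": "P-100", "part_no": "ST-01", "part_name": "stator", "spec": "0.5 mm lamination"},
]
PROCESS_BOM = [  # QFD phase III
    {"project_no": "P-100", "part_no": "ST-01", "fixture": "FX-7", "machine_setting": "press 12 t"},
]
PRODUCTION_BOM = [  # QFD phase IV
    {"project_no": "P-100", "part_no": "ST-01", "line": "L2", "quality_check": "insulation test"},
]

def trace(project_no):
    """Pull every phase's records carrying the same project number."""
    phases = {"sales (LOR)": SALES_BOM, "design": DESIGN_BOM,
              "process": PROCESS_BOM, "production": PRODUCTION_BOM}
    return {phase: [row for row in rows if row["project_no"] == project_no]
            for phase, rows in phases.items()}

def find_part(part_no):
    """Cross-phase search by part number (product model or part name work the same way)."""
    return [row for rows in (DESIGN_BOM, PROCESS_BOM, PRODUCTION_BOM)
            for row in rows if row.get("part_no") == part_no]

if __name__ == "__main__":
    for phase, rows in trace("P-100").items():
        print(phase, rows)
    print(find_part("ST-01"))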
Figure 12: Production BOM and QFD phase IV
DISCUSSION The prototype of the common product platform is currently being expanded to cover suppliers and customers in order to enable a true collaborative mode in the customization of business processes. However, since separate organizations differ in many ways (for example, in part specifications), differences in manufacturing environment can very much compromise the progress of the common product platform. If the participating organizations do not quickly realize the potential benefits of the platform, the implementation team can lose control of the project, with a consequence of budget overruns and a cooling of interest among sponsors from senior management.
The central accomplishment of this common product platform is the idea of facilitating extended enterprise responsiveness, which in turn represents a large portion of competitive advantage for the participating organizations. Extended enterprise responsiveness focuses on change and dynamic business reconfiguration at the process level. Each participating organization must manage its performance for the benefit of the entire value network through the exchange of information. Since each participating organization must manage its own performance, it will perform the activities that are most likely to be in its best interest. For example, suppliers that aim to utilize low-cost labor for manufacturing will most likely incline towards economies of scale, thereby producing product in large quantities. Low cost is their order-winning criterion, and they can squeeze a reasonable profit margin from the low-cost commodity parts. When complex products require the support of these commodity parts, other participating organizations within the extended enterprise can join in the collaboration to deliver complex products while retaining product performance such as low cost or time-to-market. The only challenge is that they must be aligned beforehand to allow such a business configuration to be engaged. Since manufacturing organizations must manage their own manufacturing environments for performance, more understanding is needed of how collaboration can be materialized under different conditions. More importantly, these conditions govern the coordination of economic activities dispersed across enterprises. Therefore the attempt to make individual enterprise systems compatible with each other is a key concern for extended enterprise operations. New understanding should focus on how market knowledge, design knowledge, process knowledge and production knowledge impact different manufacturing environments: make to stock, make to order and engineer to order. A lack of protocols or standardization is the major contributing factor that inhibits individual enterprise systems from being more compatible with each other.
SUMMARY AND CONCLUSION The challenge of managing information for product development must be addressed by many manufacturing organizations entering the new economy. The creative nature of work, or the “not invented here” syndrome, inherent in generating new products or new services discourages knowledge workers from reusing information and lures them into solutions that involve redesigning the entire part, which ultimately affects the efficiency of partnering with suppliers during early supplier involvement. The excessive time spent and cost involved in searching for the right information and assuring its relevance are unnecessary in the first place, because parts can be reused with slight modification. This chapter operationalizes the concept of a common product platform for contract manufacturers to manage product development. It makes use of databases to facilitate the reduction of a wide variety of parts in order to simplify
communication and to provide speedy customer responses. The platform merges the concepts of QFD and knowledge management to form an integrated IQKS system. The core is a knowledge-based system aimed at facilitating communication with customers and empowering knowledge workers. The bill-of-materials becomes the backbone of the system in managing knowledge for use and reuse.
ACKNOWLEDGMENT The authors acknowledge that this is the outcome of research supported by the Central Research Grant of The Hong Kong Polytechnic University under project number G-V581.
Chapter V
An Expanded Model of the Effects of Organizational Culture upon the Acceptance of Knowledge Management Nancy C. Shaw George Mason University, USA Francis D. Tuggle American University, USA
ABSTRACT Knowledge management (KM) entails several benefits for those organizations that practice it successfully. However, the success of KM is predicated on the organization possessing a suitable corporate culture. We present a model of a generic worker’s daily pattern of behaviors and examine whether or not this behavioral pattern is conducive to the ready implementation of a KM system. Next, we expand upon that model to exhibit the role of corporate culture, especially as manifested in 13 different variables. The expanded model is corroborated by using it to explain why two organizations successfully adopted KM (The World Bank, and RWD Technologies, Inc.) and why two organizations were unsuccessful in their attempts to adopt KM (The Peace Corps, and a Private University). We conclude by offering suggestions for fruitful lines of future research.
INTRODUCTION The economies of the industrialized nations have entered into a new age, variously called the “Information Age,” the “Third Wave,” or the “Electronic Economy.” These labels refer to a transition that has taken place in the past few decades. Although there are still a few economies that primarily rely on traditional manufacturing, most economies rely on automated manufacturing and information-dependent service industries. These new economies are known as knowledge-based economies and are heavily reliant on information technology (OECD, 1996). While the means of production in the Industrial Age was industrial capital (plants, equipment, machinery), the means of production in today’s economy is knowledge capital. To succeed in the new world order, companies must adopt a new conceptual framework where knowledge is treated as a core corporate asset, and not as an expense (Housel and Bell, 2001). Knowledge management (KM) is a compelling new information technology that can help organizations leverage their knowledge capital for increased competitive advantage (Davenport and Prusak, 1998). Some benefits of KM include ongoing access to the intellectual assets of the firm (even after key employees leave or retire), the ability to leverage the expertise resident in all facets of the organization, use of tacit knowledge accumulated over decades of experience, etc. However, for a firm to enjoy the advantages of KM, a unique mindset and culture must be adopted by the organization in question. Indeed, having the right organizational culture in place is seen in the literature of KM as being the sine qua non of the field (Davenport, 1997; Mizumori, 1998; Rifkin, 1996). Unless the organization’s culture is sufficiently supportive of the activity and interaction patterns called for by KM, it will suffer the same fate as has befallen many organizations that attempted to implement Total Quality Management or Business Process Reengineering—initial acclaim followed quickly by abandonment. Unless KM systems are embedded within the larger context of organizational work practices, they are unlikely to be used (Nissen et al., 2000). A key issue for KM practitioners is to ensure that their organization creates and maintains an environment in which collective knowing and knowledge-based processes emerge and evolve (Brown and Duguid, 1991). A critical question for organizations that are thinking of attempting to extract the value implicit in KM is to what degree are they ready to have KM successfully adopted by the people in the organization? In this chapter, we help organizations answer that question. We do so by first reviewing the literature on KM and organizational culture—this results in the derivation of 13 different factors that are posited to affect the adoption of technological change, KM in particular. We then integrate these factors into a two-layer model of the effects of organizational culture upon the knowledge workers in the organization. We then examine a set of four case studies of firms that wrestled with introducing KM—two did so successfully while two have yet to experience success—in order to test the efficacy of the model. Finally, we show how organizations can use this two-layer model to determine whether or not they are ready for the introduction of KM
(or any other high technology computer system) and if not, exactly what changes to their culture and system of organizational practices need to be made beforehand.
LITERATURE REVIEW AND GENERATION OF THE MODEL The definitive work on corporate culture is by Schein (1992). Schein’s approach to identifying culture is anthropological—his interests are in developing a system to describe culture fully and accurately, and at those objectives, he has succeeded. Methodologically, Schein’s work implies that a mere typology will not suffice to capture the full richness of organizations. Organizational culture manifests its effects upon the organization through its mediated effects upon each individual in the organization. Therefore, before one can appreciate which cultural factors are salient to the question of the adoption of KM, one must first understand the behavior of a typical individual in an organization, and then one must understand how culture conveys its effects upon that individual. Generally defined as the shared beliefs, values, and assumptions that guide sensemaking and action in organizations (Ott, 1989), organizational culture encompasses both individual and group-level phenomena (Louis, 1985). This leads us to a two-layer model of individual behavior—the first layer is the worker, at one instant in time, in isolation (economists refer to this as a case of static equilibrium). The second layer takes that first layer model as a baseline and adds in cultural effects. Given that organizational culture is ultimately manifested in, and maintained by, the efforts and actions of individuals, in order to fully understand an organization’s culture we must examine the individual-layer manifestations (Harris, 1994). We wish to account for why some individuals readily adopt new computer technologies, such as KM, and why others actively resist such technologies.
First Layer: A Model of the Individual Relating to the Acceptance of KM Following Homans (1950), a worker in a steady-state routine of producing work will maintain that state if and only if three factors are balanced. Specifically, an individual’s daily work life consists of Activities, Interactions, and Sentiments. We shall analyze these three factors relative to the introduction of KM. First, the individual engages in a set of daily work Activities. The individual must be capable of executing these Activities. The Activities could be performed at a desk, in a classroom, on a factory floor, at a client site, or anywhere the company deems fit. Second, the individual engages in a set of Interactions with coworkers. These Interactions may be to exchange task-related information, to perform part of a project task, or simply to engage in social interchange with coworkers. Interactions may occur face-to-face, via telephone or voice mail, via email, via videoconferencing, or in any other modality in which two or more people exchange information.
Third, the individual holds a set of Sentiments. The Sentiments (attitudes) will cover virtually every dimension of the individual’s work life—in particular, the Activities that are performed, and the Interactions that are conducted (as well as the people with whom this individual interacts). An important element of this first-layer model is that these three facets of an individual’s work life must be in a mutually supportive relationship in order for work homeostasis to be present and to be maintained. That is, for the person to be at equilibrium, the person’s Sentiments must value the Activities the person performs and must value the quality of the Interactions the person carries out (including the people that are interacted with). Similarly, the Interactions must assist the person in carrying out the Activities, and the set of Interactions must convey positive Sentiments for the person. Finally, the person must “enjoy” carrying out the set of Activities, must be competent at carrying them out, and must engage in some requisite set of Interactions in order to fully discharge all the duties and responsibilities of the position (see Figure 1). With that first layer in place, consider what happens when an organization attempts to introduce a new KM system (KMS) into this individual’s work life. The new KMS will likely cause changes to the person’s Activities (the KMS will have to be used), Interactions (knowledge will have to be shared; knowledge will have to be requested from others), and Sentiments (the individual must feel that the effort of learning the new KMS is worthwhile; the individual must feel that using the new KMS is beneficial). Whether or not the new KMS is accepted or resisted is a function of the particular set of Activities, Interactions, and Sentiments held by the target person. Without knowing what the particular pattern of that triad of variables is, it is impossible to predict in advance whether or not the new KMS will be accepted or rejected by the target person.
Figure 1: One-layer model of an individual at work in equilibrium
Second Layer: A Model of the Effect of Culture on the Individual The second layer of our model has to do with the effects of the organization’s culture upon the way in which the focal worker’s individual behavior is altered. That is, once the focal worker is in a state of equilibrium, the worker’s daily behavior falls into a set routine of activities, patterns of interaction with immediate coworkers, and attitude clusters about activities and interaction patterns. An individual’s motivation to adopt a new system such as KM can be mediated by the culture of their organization. An extensive review of organizational culture literature generated an extended list of factors that may be germane to the adoption of KM. For illustrative purposes, we selected 13 that (looking ahead to the four case studies) are particularly salient. In a very real sense, these 13 factors may be considered to be 13 separate propositions. These will be examined for validity in our four case studies of KM adoption or lack thereof (see Table 1). Now consider the impacts of these 13 cultural factors on the individual (see Figure 2). This model “drills down” on the model presented by De Long and Fahey (2000, p. 116) wherein they merely stipulate that the dimensions of culture affect human behavior, which in turn affects the propensity to adopt or reject a KMS. Our model identifies the salient part of work-related human behavior, and postulates how separate parts of an organization’s culture affect distinct dimensions of an individual’s work-related behaviors. Table 1: Organizational factors that may be germane to the adoption of KM
Is information sharing encouraged? (Davenport and Prusak, 1998)
Is there widespread trust in the organization? (Shaw, 1997)
Do the reward and recognition systems promote initiative and innovation? (Peters and Waterman, 1982; Nissen et al., 2000)
Are the workers optimistic and curious? (Pfeffer, 1994)
Is the culture a strong one? (Deal and Kennedy, 1982; Brand, 1998; Sieloff, 1999; Pan, 1998)
Is the culture a positive one? (Goffee and Jones, 1998)
Is the culture an adaptive one? (Kotter and Heskett, 1992; Caldwell and O’Reilly, 1995; Nemeth and Staw, 1989)
Does the firm tolerate well-intentioned errors? (Tuggle and Shaw, 2000)
Is the reuse of material encouraged? (Tuggle and Shaw, 2000)
Is teamwork encouraged? (Tuggle and Shaw, 2000)
Is the firm without the Not Invented Here syndrome? (Tuggle and Shaw, 2000)
Do the firm’s organizational practices, processes and control systems support KM? (Tuggle and Shaw, 2000)
Is the firm technologically advanced? (Tuggle and Shaw, 2000)
Figure 2: Two-layer model of an individual at work (showing influence of cultural variables with dotted lines)
CASE STUDIES Over the past several years, the authors have closely examined several organizations that were in the process of adopting KM techniques and technologies. While several of these organizations were successful in their endeavors, there were always a few that never quite made it. In an attempt to try to understand why some organizations were successful and some were not, the authors began to examine a variety of cultural aspects of these organizations in order to determine if there were any specific cultural factors that helped or hindered the acceptance of KM—thus, four case studies are examined.
The World Bank The World Bank’s mission is to fight poverty, improve living standards, and improve the quality of life through sustainable growth and investment in people (www.worldbank.org). Owned by 182 member countries, the bank loaned more than $15 billion in 2000. According to Stephen Denning, program director of KM for the
bank during 1996-2000, the bank underwent a huge cultural transformation from “being a bank” to “being a knowledge-sharing organization” (Barth, 2001). KM at the World Bank is aimed at sharing know-how and experience in order to increase individual effectiveness, transfer information and knowledge to the organizational level, ultimately making it easily accessible so that all individuals are informed to take effective actions. While the preliminary focus of the system was on improving the effectiveness of World Bank staff through providing just-in-time, just-enough knowledge to task teams on call, their goal was to meet the needs of both internal and external users by the end of the year 2000. The KMS is aimed at changing the way the Bank operates internally and transforming the organization’s relationships with external clients, partners, and stakeholders, and will be one of their key strategic thrusts for the 21st century. The creation of the KMS, organized around a matrix structure that combines geographical regions and thematic groups, enabled the bank to identify “communities of practice” which facilitated the building of the knowledge base. Ongoing support and resource commitments from upper management have resulted in a KMS that has been used as a model for many other organizations that are interested in developing a KMS of their own. The main sources of content of the knowledge management system are (a) World Bank operations, (b) answers to questions arising from help desk inquiries, (c) content from external knowledge partners, and (d) feedback and contributions from external users.
Acceptance of Technological Innovation at the World Bank The daily activities of the average employee at the World Bank include numerous opportunities for knowledge sharing (see below). Furthermore, most work, when not performed in meetings or over the telephone, is done electronically. There has been a pronounced shift away from paper towards electronic documents, graphics, spreadsheets, and presentations. Given the skill level of World Bank employees, transitioning to such changes is easy. Almost nothing gets done at the World Bank by an individual acting alone. Teams work on projects, teams that are multicultural, multiethnic, multigender, and diverse along almost every conceivable dimension. Meetings are commonplace; they occur in Washington, they occur in the field, and they (increasingly) occur in cyberspace. There is high esprit de corps at the World Bank. Virtually all employees believe, deep in their hearts, that their work, individually and collectively, is saving lives, improving the quality of life for the world’s disadvantaged, and making the world a better place for humanity. The nobility of their mission sometimes makes them behave in ways that outsiders might describe as coldly rational, but that modus operandi would be heatedly defended and justified by employees. Partly as a consequence of the strong identification with the mission of the institution, employees are constantly seeking out new ways to improve delivery of services to clients and
to optimize internal operations. When the then-new CEO, James Wolfensohn, proposed knowledge management as a way to improve the World Bank’s operation, the idea was met with little resistance and a great deal of empathy. The culture at the World Bank facilitated the successful adoption of KM. The attributes of the prototypical worker (well-educated, technologically adept, and highly motivated to realize the bank’s objectives) combined with the cultural conditions found at the bank (technological sophistication, continued training and emphasis on skill development, teamwork, empowerment, and egalitarianism) created an environment that made it easy for the workers to accept the new technology. The culture allowed the employees to adapt easily to changes to their activities, meetings, and attitudes that were a result of the implementation of the KMS.
RWD Technologies, Inc. RWD was founded in January 1988 on the observation that the nature of our society was becoming increasingly technological, resulting in a market opportunity to help organizations operate their high-technology systems and equipment. RWD is headquartered in Columbia, Maryland. There are 11 domestic regional offices and two international regional offices. RWD has clients in 28 countries, clustered in 30 different industries, all in the private sector. RWD is organized as a large matrix structure. On the axis of approaches, it has a set of TCOEs—Technical Centers Of Excellence which represent the high technologies that RWD has expertise in, such as web-based technologies, enterprise information systems, and lean manufacturing. On the axis of customers, there are a set of ICOEs—Industry Centers Of Excellence which represent the customer clusters that RWD has depth of expertise in assisting. In conducting its projects for its clients, RWD utilizes a small number of proprietary, corporate methodologies in its business. These corporate methodologies exist on the company’s intranet—employees across the globe can refer to these corporate methodologies to refresh their memories of the exact details of which step is performed when and how. All projects are performed by teams; a corporate directory allows team leaders to identify who has the right set of skills for a project, who has requisite industry experience, and who has sufficient time availability for the project. One of the important practices at RWD is the notion of continuous improvement. This manifests itself in some cases as a lessons learned database (residing on the corporate computer) and also by questioning how technologies could be improved to deliver better service to the clients. Before any new project is undertaken, it is required that the team leader access the lessons learned database to research knowledge on the client firm and industry, projects of this type, and the technology to be used on the project. At the conclusion of a project, it is required that the team leader enter new lessons learned into the database. The corporate intranet is used for both formal and informal communication. Project information, group communication, and even time sheets are housed on the intranet.
Acceptance of Technological Innovation at RWD Individuals are recruited for and selected by RWD on the basis of two main considerations: their ability to work in team settings and their technological sophistication. Training is often performed online, and routine organizational matters (the policy manual, the organizational chart, the time billing system, etc.) also require considerable computer proficiency. Software upgrades occur frequently, and they are also carried out online. Software innovation is a valued activity. The annual performance review system explicitly makes note of several of these dimensions. The performance appraisal form is a 20-question evaluation system; three of the questions address the individual’s ability to work effectively in a team setting. Two of the questions explicitly address the individual’s ability to keep abreast of technological changes germane to the person’s client projects. One question on the form explicitly addresses whether or not the person is contributing to the KM lessons learned database as well as using that database on a regular basis. The conclusion is that RWD is well positioned to try new technological innovations as well as to develop technological innovations of its own to offer to its clients. RWD has a KM system in place that it expects its employees to use and to extend. Their culture is summed up in their tagline: “We bring people and technology together.”
The Peace Corps Since its establishment in 1961, the Peace Corps and its volunteers have been dedicated to providing opportunities for individuals who want to build a better life for disadvantaged nations. Part of their recent strategic plan included the design and implementation of a KMS that permits secure collection, processing, and redistribution of information throughout the organization (Peace Corps, 1998). The Peace Corps’ mission of aiding people in foreign countries and promoting understanding between Americans and citizens of these countries could be greatly enhanced through increased sharing of knowledge. As a support division within the Peace Corps, the Center for Field Assistance and Applied Research (referred to hereafter as the Center) aids overseas field staff in completing projects in developing countries. A KMS could potentially provide information to all levels of the Peace Corps, thus aiding in upper-level decision making and strategic planning. Part of the Peace Corps’ strategic vision for the year 2000 was to “seek new levels of effectiveness and efficiency.” The Peace Corps required a more effective process for capturing, analyzing, and sharing knowledge resources. In the past, the organization relied heavily on the verbal conveyance of information, an approach that provides limited opportunities to record lessons learned and define solutions to frequently encountered problems. This is particularly problematic due to the Peace Corps’ policy of a maximum of five years of employment. In accordance with its strategic goals, the Peace Corps began exploring the potential utilization of a knowledge management system in 1998. The primary goal
of the Center is to support the Peace Corps’ activities through information management and collection. Information flows through the Center in an informal manner. Generally, employees contact each other directly with requests for information. Some employees act as knowledge brokers, directing individuals who have inquiries to the correct person or location. The Project Assistants and other employees who have been working at the Peace Corps for two or three years are perceived as having a wealth of experience and knowledge about the location of needed information, and they significantly contribute to the dissemination of knowledge. The country desk units of the Peace Corps act as clearinghouses by obtaining all relevant information from representatives in the field and then deciding where it should go. In addition to a bottom-up flow of information, ad-hoc requests are made that originate from the Center. Peace Corps employees have not been provided with a full appreciation of the process and benefits of KM. This factor is largely due to a lack of sufficient human resources dedicated to this critical area. There are a few knowledge brokers, employees that have relatively extended service with the Corps and have acquired broad-based expertise, who facilitate knowledge sharing. Other experts also exist throughout the organization, but the coordination of their efforts has not been formalized to a sufficient degree.
Acceptance of Technological Innovation at the Peace Corps Two very different types of individuals are recruited for work at the Peace Corps. One type is the individual recruited to work at an office, a staff individual, if you will. The second type is the individual recruited to work overseas in direct contact with a “client.” The skills, motivations, and career aspirations of these two different types of individuals set up two distinct subcultures within the Peace Corps that practically preordain problems. The Peace Corps (as of this writing) has not been successful in implementing KM, in spite of that being one important objective for the organization. The KM effort at the Peace Corps was hindered by a variety of different factors. In particular, they failed to adequately capture and retain the institutional knowledge of departing employees. There was no substantial commitment (either financial or administrative) from upper management. While there was a great desire to develop an electronic KM system, the informal knowledge-sharing networks that had developed over the years at the Peace Corps continued to be the primary method of knowledge transfer. In addition, the lack of an enterprisewide network that would enable the implementation of compatible systems made access to needed data difficult at best. The employees already had a strong, informal network in place that they were very comfortable with. That, coupled with the lack of technology that would facilitate electronic knowledge sharing, created an environment where KM could not succeed. While they did experience some successful knowledge-sharing initiatives, the KM effort has been largely unsuccessful.
Private University The university is a private mid-size university on the East Coast. Established over 100 years ago, it has more than 10,000 students, equally divided between graduate students and undergraduate students, and over 600 full-time faculty members. The university is organized as a college of liberal arts and science and several professional schools within the university. This case study will center upon one of the professional schools, the school of management. The primary goal of the university for the next century is to build a distinctive, global university, marked by the highest levels of academic excellence and creativity. The management school is expected to add to the global distinctive competence of the university. One part of the management school’s efforts in this area is to use computer technology to achieve this end. Describing itself as at the “forefront of business and technology,” the management school has a mission to include technology and innovation in all facets of student and faculty life. The school recently completed major renovations on a multi-story office-classroom building. Included in the renovations were “high-tech” classrooms, intended to be the showcase of the school and used for both executive education as well as normal classroom usage. The classrooms contain Internet ports for each student, as well as a standard AV presentation system for lectures. Numerous break-out rooms for either faculty or student use were worked into the design. Other than the previously mentioned minor technological innovations, there was no additional hardware or software initiative included in the design. The school uses the same platform as the rest of the university (Lotus Notes E-mail) for communication and knowledge sharing. While there has been some interest in using specific technologies for teaching (e.g., Lotus Notes Classroom, online educational courseware, standard Web pages), there is no other knowledge-sharing technology in use by the school. Events and items of interest are sent by standard e-mail group-list technology to the faculty, staff, and students. While the university has an extensive investment in Lotus Notes technology, other than for individual course use, groups within the school (department or research) do not use the Lotus Notes group-ware capability. Knowledge is shared in the traditional manner: during department meetings, over lunch, or while drinking coffee.
Acceptance of Technological Innovation at the Private University Throughout the university, there is the prevalent opinion that computer technology is an enabler of the processes of producing high quality instruction and distinctive research. In particular, computer technology is not a dimension on which this
university chooses to compete in the academic marketplace—computer technology is a support service, not an end in itself. This school and university present a curious blend of technological sophistication and technological innocence. It is clear that the university and the management school both exhibit fragmented organizational cultures. Specifically, there is a high-tech subculture among certain subgroups, and there is a low-tech subculture among other subgroups. These subgroups coexist peacefully on campus because of a tacit agreement not to argue over the proper role of technology in education. There are pockets in the management school that effectively utilize a KM system, and there are pockets in the school that resist the use of any form of computer technology. Consider: one group of technologically literate faculty exchange documents and threaded comments upon works in progress by use of the university’s Lotus Notes groupware system. Consider: several committees in the school schedule meetings by leaving voice mail messages for one another. Consider: at least one department chair refuses to use the desktop computer supplied by the university—documents are hand-typed; this person does not send or receive email or otherwise avail themselves of the potential of a networked personal computer. Thus, even in a professional school whose mission includes the promotion of global business through computer technology, there are segments of the school (sometimes important segments) that do not utilize the power of modern computation. That the university tolerates such unevenness is a testimony to the vagaries of tenure coupled with a posture of “satisficing” within its budgetary constraints.
APPLICATION OF THE MODEL These four case studies are broadly consistent with our two-layer model. Let us examine the four case studies in light of the model sketched in Figure 2. First, note that for the two successes in the adoption of KMS (the World Bank and RWD Technologies), all of the 13 posited variables are in the direction supportive of KM adoption success. This corroboration is encouraging. Next, focus attention on the two case studies in which KM failed to be successfully adopted. The difficulties the Peace Corps has had in implementing KM are traceable to many of the cultural factors we have identified in the model. Consider first the factors behind Activities: their personnel are not attuned to high technology, they are tolerant of well-intentioned errors, there is no reward system for sharing and reusing knowledge, and they do possess an adaptive culture. Thus, two of these four factors militate against successful KM adoption. Next, consider the factors behind Interactions: trust is present, teamwork is not stressed, information is readily shared, and there is a weak culture present. Again, of the four factors, exactly two of these factors support KM adoption and two counteract it. Finally, consider the factors behind Sentiments: there is not an NIH syndrome, reuse of material is deemed important, they are a governmental bureaucracy with hierarchy, centralized
decision making and control, and centralized budgetary authority; people are optimistic and intellectually curious, and the culture is a positive one. One out of these five factors works to frustrate the adoption of KM. In summary, five of the 13 factors are opposed to the adoption of KM. That seems to be sufficient in this case to have blocked the adoption of KM. However, it is close—recent inquiries at the Peace Corps reveal that they are still endeavoring to adopt and disseminate a KM system; there has been some limited progress. As noted later, the findings from the case of the Peace Corps raise questions which we are unprepared to answer about the relative strength of the effects of the 13 factors—it is almost certainly the case that not all carry the same weight. Finally, consider the case of the private university. First, for its Activities, it lacks a consistently strong techno-skill base, it is intolerant of errors, it has no reward and recognition system that promotes the use of KM, and it has a rigid culture. All four of these factors work against the adoption of KM. Next, for its Interactions, it is not a trusting organization, it does not operate through teams, there is no history of information sharing, and it possesses a weak culture because of its many subcultures. Again, all four of these factors work against KM. Finally, for its Sentiments, there is not an NIH syndrome, it does not discourage the reuse of materials, it does not follow a set of organizational practices or control systems that are consistent with the use of KM, people in the organization are optimistic and intellectually curious, and the organization has a negative culture—people see it as a means to their own ambitions. Three of these five factors are consistent with the use of KM. Thus, overall, the private university reports that 10 of the 13 factors are not supportive of KM being in place. It should therefore come as no surprise that KM is not even on the planning horizon of this institution.
LESSONS LEARNED Three lessons can be inferred from this work: (1) our two-layer model appears to be descriptively accurate; (2) understanding an organization’s culture appears to give valuable insights regarding the disposition of its people to accept or reject technological innovations; and (3) the first two lessons lead to a prescriptive use of the model. Specifically, organizations can use the two-layer model to assess their readiness to introduce or adopt a KM system. Specific cultural factors that require adjustment can be identified during the assessment process prior to the introduction of any new technology. (There are, of course, other cultural factors that may be pertinent. For example, Christensen and Shu, 1999, identify polite/rude, formal/informal, and comforting/impersonal among others.) Firms should carry out the following steps in order to understand whether or not their culture is apt for the successful adoption of KM. First, they need to understand the common working environment of individuals in the firm—that is, they need to understand the typical individual’s Activity, Interaction, and Sentiment patterns. What actions does that typical person carry out over the course of a normal day?
What tools does that person use to get work done? Whom does that individual interact with? Who initiates those interactions? Are those interactions work-related or social interactions, or do they contain elements of each? How much interaction occurs over the course of a day? How many different individuals and groups does that person interact with? What sentiments does this person hold about the job, the work environment, his/her career, the boss, coworkers, managers in general, the organization, and the industry it is in? Second, they need to assess whether or not that pattern is conducive to the successful adoption of KM. Does that typical individual engage in Activities and Interactions and hold a set of Sentiments such that migrating to a KM system is a small step or a big leap? After identifying a “gap” between exhibited and desired patterns, a change program can be designed and implemented. Third, the firm then needs to identify where it stands on each of the 13 organizational cultural factors outlined above that are germane to the firm. Once identified, these cultural factors should be assessed in a similar fashion to the individual factors and, where one or more gaps exist, additional programs of change should be designed and undertaken.
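One way to make the third step concrete is a simple tally of the 13 cultural factors, grouped by the behavioral dimension they act on (following the grouping used in the case analyses above, e.g., the Peace Corps with 5 of 13 factors opposed and the private university with 10 of 13). The sketch below assumes, purely for illustration, that each factor can be scored yes/no and that all factors carry equal weight; as noted in the next section, the relative weights and interactions of the factors remain open questions.

# Illustrative readiness tally over the 13 cultural factors, grouped by the
# dimension of individual behavior they influence (Activities, Interactions,
# Sentiments). Equal weighting is an assumption made only for illustration.

FACTORS = {
    "Activities": ["technologically advanced", "tolerates well-intentioned errors",
                   "rewards promote initiative/innovation", "adaptive culture"],
    "Interactions": ["widespread trust", "teamwork encouraged",
                     "information sharing encouraged", "strong culture"],
    "Sentiments": ["no Not-Invented-Here syndrome", "reuse of material encouraged",
                   "practices/processes/controls support KM",
                   "workers optimistic and curious", "positive culture"],
}

def readiness(profile):
    """Print, per dimension, how many factors support KM adoption (True) vs. oppose it."""
    total_supportive = 0
    for dimension, factors in FACTORS.items():
        supportive = sum(1 for factor in factors if profile.get(factor, False))
        total_supportive += supportive
        print(f"{dimension}: {supportive} of {len(factors)} factors supportive")
    print(f"Overall: {total_supportive} of 13 factors supportive of KM adoption")

if __name__ == "__main__":
    # Invented example profile, loosely in the spirit of the Peace Corps discussion above.
    example = {factor: True for group in FACTORS.values() for factor in group}
    for blocked in ("technologically advanced", "rewards promote initiative/innovation",
                    "teamwork encouraged", "strong culture",
                    "practices/processes/controls support KM"):
        example[blocked] = False
    readiness(example)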
DIRECTION OF FUTURE RESEARCH Several avenues of future research suggest themselves. For one, our two-layer model is in need of more rigorous and systematic testing. For example, it may be found that not all of our 13 cultural factors are needed to determine the success of KM adoption. Are some of these 13 factors more important than others in determining whether or not KM gets adopted? On the face of it, it would seem unlikely that all 13 factors would carry the same weight in the decision. Are these 13 factors independent of one another or is their presence or absence correlated? One could suspect, for example, that teamwork and information sharing would be correlated with one another. Are there interaction effects between some of the factors? That is, if one factor is there in the presence of a second factor, is the effect of that first factor magnified? Additionally, a program of action research seems called for. This would entail organizational diagnosis, assessment, and change (as warranted) in order to move an organization from a position of being not likely to succeed at KM adoption to the point at which KM was successfully adopted after a change regimen was put in place. Finally, a good part of our two-layer model, with some slight generalization, carries over to an organization’s adoption of any new high-technology system, whether it is the migration to an enterprise resource planning system, a new database system, a new customer relationship management system, or any other substantially new and different IT system. As such, organizations need methods of determining what the barriers to successful implementation are and of designing programs to remove or minimize such barriers. Our two-layer model offers one such approach.
CONCLUSION
This chapter presents a two-layer model that has been used to explain the effects of organizational culture on the adoption or rejection of KMS. As such, this model represents a significant advance over previous models of the effects of culture upon such decisions, e.g., De Long and Fahey (2000) and Shaw and Tuggle (2001). The model was developed through field studies and an extensive literature review on organizational culture and KM. In particular, the model as now formulated stipulates 13 propositions about cultural factors posited to affect an organization's decision to embrace or reject technological change, KM in particular. An examination of four case studies of organizations undergoing KM implementation projects was conducted to test our model and these 13 cultural factors. We consider the positive results of the case studies to corroborate our model and indicate that it is worthy of more careful examination and testing. For example, we are now more confident that our 13 factors separately operate through the model given in Figure 2 to impact the adoption/rejection decision. However, we do not yet know (1) if some of these factors are relatively more important than others (e.g., if two factors appear simultaneously but in different directions, do the effects of one swamp the effects of the other?) or (2) if there are any interaction effects among the 13 factors (e.g., if two factors appear simultaneously, do they work together to strengthen the collective impact that they have, more so than just "adding" their independent effects?). We have focused upon the decision to adopt or reject KM in this chapter. Is this model equally valid for other software systems such as ERP, or are different factors or a different model structure called for? We found that an organization's culture can impede or enhance the acceptance of KM technology and processes. To the extent that the adoption of KM practices and technology creates changes to an individual's daily Activities, Interactions with coworkers, and Sentiments, and to the extent that the organization's culture allows for or encourages these changes, the acceptance or rejection of the KMS may be predictable. Given the success of our two-layer model at explaining the decision behaviors of four quite disparate organizations, there is a basis for believing that the effects of organizational culture upon the decision to adopt or reject computer technology, KM in particular, may be predictable.
REFERENCES
Barth, S. (2001). The knowledge bank. Knowledge Management, 4(6), 24-26.
Brand, A. (1998). Knowledge management and innovation at 3M. Journal of Knowledge Management, 2(1), 17-24.
Brown, J. and Duguid, P. (1991). Organizational learning and communities of practice: Toward a unified view of working, learning and innovation. Organization Science, 2(1), 40-57.
Caldwell, D. and O'Reilly, C. (1995). Promoting team-based innovation in organizations: The role of normative influence. Fifty-Fourth Annual Meeting of the Academy of Management.
Christensen, C. and Shu, K. (1999). What is an organization's culture? Harvard Business School Case #9-399-104, Boston, MA, May 20.
Davenport, T. H. (1997). Known evils, common pitfalls of knowledge management. CIO Magazine, June 15.
Davenport, T. H. and Prusak, L. (1998). Working Knowledge: How Organizations Manage What They Know. Boston, MA: Harvard Business School Press.
Deal, T. E. and Kennedy, A. A. (1982). Corporate Cultures: The Rites and Rituals of Corporate Life. Reading, MA: Addison-Wesley.
De Long, D. W. and Fahey, L. (2000). Diagnosing cultural barriers to knowledge management. Academy of Management Executive, 14(4), 113-127.
Goffee, R. and Jones, G. (1998). The Character of a Corporation: How Your Company's Culture Can Make or Break Your Business. New York: Harper Business.
Harris, S. (1994). Organization culture and individual sensemaking: A schema-based perspective. Organization Science, 5(3), 309-321.
Homans, G. (1950). The Human Group. New York: Harcourt, Brace and Company.
Housel, T. and Bell, A. (2001). Measuring and Managing Knowledge. Boston, MA: McGraw-Hill Irwin.
Kotter, J. P. and Heskett, J. L. (1992). Corporate Culture and Performance. New York: The Free Press.
Louis, M. R. (1985). Introduction: Perspectives on organizational culture. In Frost, P. J., Moore, L. F., Louis, M. R., Lundberg, C. C. and Martin, J. (Eds.), Organizational Culture, 27-30. Beverly Hills, CA: Sage.
Mizumori, R. K. (1998). Knowledge management—5 Ws & 1 H. KM Magazine, November.
Nemeth, C. and Staw, B. (1989). The tradeoff of social control and innovation in groups and organizations. Advances in Experimental Social Psychology, 22, 175-210.
Nissen, M., Kamel, M. and Sengupta, K. (2000). Integrated analysis and design of knowledge systems and processes. Information Resources Management Journal, 13(1), 24-43.
OECD Report on Knowledge Based Economies. (1996).
Ott, J. S. (1989). The Organizational Cultural Perspective. Chicago, IL: The Dorsey Press.
Pan, S. (1998). A socio-technical view of knowledge sharing at Buckman Laboratories. Journal of Knowledge Management, 2(1), 55.
Peace Corps. (1998). The Peace Corps 2000: A Strategic Vision for a New Century. Center for Field Assistance and Applied Research.
Peters, T. and Waterman, R. H. (1982). In Search of Excellence: Lessons from America's Best-Run Companies. New York: Harper and Row.
Pfeffer, J. (1994). The Human Equation: Building Profits by Putting People First. Boston, MA: Harvard Business School Press.
Rifkin, G. (1996). Buckman Labs: Nothing but net. Fast Company, June/July.
Sieloff, C. (1999). If only HP knew what HP knows: The roots of knowledge management at Hewlett-Packard. Journal of Knowledge Management, 3(1), 47-53.
Schein, E. (1992). Organizational Culture and Leadership (second edition). San Francisco, CA: Jossey-Bass.
Shaw, R. B. (1997). Trust in the Balance: Building Successful Organizations on Results, Integrity, and Concern. San Francisco, CA: Jossey-Bass.
Shaw, N. C. and Tuggle, F. D. (2001). A model of the effects of organizational culture upon the acceptance of a knowledge management system. Working Paper.
Tuggle, F. D. and Shaw, N. C. (2000). The effect of organizational culture on the implementation of knowledge management. Proceedings of Florida Artificial Intelligence Research Symposium (FLAIRS 2000), 166-169. Orlando, FL.
World Bank. (2001). http://www.worldbank.org.
Chapter VI
Information-Based Integration for Complex Systems
E. Szczerbicki
The University of Newcastle, NSW, Australia
ABSTRACT
Engineering, operations research, and management science use scientific and engineering processes to design, plan, and schedule increasingly more complex industrial systems in order to enhance performance. One can argue that the systems have grown in complexity over the years mainly due to an increased drive for resource optimization combined with a greater degree of uncertainty in the system's environment. Information is seen as one of the main resources that managers try to use in an optimal way. In this chapter we show how this resource can be used in integration issues. We introduce the problem of information-based integration, propose a solution, and illustrate the proposed solution with an example.
INTRODUCTION
Managing complex systems requires a greater understanding and knowledge about the role of information in systems operation. Today, a growing complexity of information flow is a characteristic of enterprises; it concerns the products to be manufactured, the services to be offered, and the processes and company structures involved. Complex systems also operate in changing environments surrounded by numerous uncertainties and disturbances. Difficulties arise from unexpected tasks and events and from a multitude of possible failures and other interactions during the attempt to control various activities in dynamic environments. Therefore, management of
information is one of the most important aspects to be considered in intelligent systems, which are expected to solve unforeseen problems, even on the basis of incomplete and imprecise information. This chapter starts with some discussion of the importance of information in operation management as well as new challenges in information modelling, visualisation, and communication in information society. Then we turn our attention to one of the challenges, i.e., information-based integration, and discuss it in the context of a system’s design process. We illustrate our approach with an explanatory example, and then conclude the chapter.
INFORMATION MANAGEMENT IN THE NEW MILLENNIUM: SOME CHALLENGES
Information is becoming an increasingly important resource in all kinds of business, industrial, and service operations. Changes, uncertainty, imprecision, and complexity have become the most important factors affecting the behavior of modern markets. Functioning in such markets requires an increasing amount of information to be processed in substantially shorter periods of time. Therefore the time span left for decision making is dramatically decreased. Some major problems associated with the above facts can be traced down to the level of operational management of a company. The efficiency of management at that level depends mainly on the amount of time needed to react to changes in both the internal and external environments in which a given company is functioning. The operational management level is increasingly often a decisive factor in a company's survival and expansion. We have to concentrate on this level of management as much as on the tactical and strategic levels that are usually the main focus of companies today. The operational level at which tasks are processed is no longer a stable one. A number of parameters associated with these tasks can change and will be changing more and more often as the dynamics of the external environment increase.
Demand for Information
Functioning in uncertain and imprecise conditions requires predictions of future states of the environment in which systems operate. It requires increasingly efficient and intelligent decision support tools that are able to cope with unexpected changes. Application of such tools usually means a significantly greater demand for information, and therefore a larger amount of information will have to be processed at all management levels. Information that is needed often originates at different, geographically distributed sources and is available in different forms and different encodings. Thus, new tools will also be needed to cope with this emerging problem of information diversity. The challenge of the new millennium will be to retrieve and transform huge amounts of different forms of information into the knowledge needed to support our decision-making processes.
Information Theory
Managing companies in the new century, the century of the information society, will necessitate the use of new means of communication with the external environment. It will also require much greater adaptability: the companies of the new millennium will have to be transformed into intelligent, learning organizations able to cope with the globalization of information resources. This globalization means that the main problem will not be access to information but the ability to mine it and then to transform it into a useful operational and strategic resource. The increasing frequency of changes in the state of the environment in which a company is operating creates a new important challenge associated with time. Time becomes a decisive factor in information retrieval and decision-making processes. Managing complex industrial systems (manufacturing, processing, distribution, servicing, mining, etc.) that function in uncertain, information-rich environments requires greater understanding and knowledge about the role of information in systems operation. To gain this understanding, a theory will be needed that could be used to model and evaluate information flow in different situations. In fact, our needs for the new century go well beyond the above: we also require a theory that considers important practical issues of information, i.e., delays, incompleteness, imprecision, and loss in value. The current practice is to deal with such issues mostly reactively, only when problems are detected. This situation is hardly desirable and may be a major drawback for complex systems that rely more and more on the timeliness and quality of information for their operation. A theory, in this respect, would greatly enhance the understanding of the various factors that influence the quality of information, to the benefit of better decisions in adequate time. Development of such a theory is one of the major challenges in the information society of the new millennium. The value of information that flows within a given subsystem is different for different information structures and different environments (Szczerbicki, 1993, 2000). It can be considerably affected by two major attributes of information: incompleteness and delay. The highest value will be possessed by a full information structure (including all relevant information possible). On the other hand, gathering information in a dynamic environment causes its delay. Both delay and incompleteness can be represented by losses in the value of the information structure. Currently, there is no theoretical foundation for such a representation, but managers of the new millennium will certainly need it. The delay of information combined with the dynamics in the environment can cause substantial losses in the value of information as a useful resource in decision support. We have to turn huge amounts of information into knowledge needed for our knowledge-based systems very fast. Quick perception of information becomes an important issue. Another challenge emerges here: visualization of information.
Visualization of Information
We no longer have time to study pages of reports and columns of data. We have to visualize information quickly and effectively. New tools are needed to support the
ways we communicate information to the decision maker. Visualization and presentation of information is becoming one of the most important areas of research in Cybernetics and Artificial Intelligence. Visualization can help make sense of the flood of data. When applied with some insight into visual perception, and with attention to the nature of the data and how the data are to be used, visualization can become a very powerful tool in future intelligent information systems. Current and future research trends in this area include such important topics as (Rogowitz and Treinish, 1998; Lefkowitz and Herman, 1992; Tufte, 1994, 1998):
• color and information;
• complexity and clarity of human perception;
• use of multi-media, Internet, and WWW screens;
• animation for scientific visualizations;
• design of efficient computer interfaces.
The use of color in presenting data is becoming an increasingly important research topic. The key issue here is the colormap, which may be defined as a mapping from a data value to a color. In most colormaps red is mapped to the highest data value, blue to the lowest, and the other data values are interpolated along the full extent of the rainbow spectrum. An example would be a temperature profile mapped over a land mass on a weather map. But there are some unsolved problems related to color data representation. Color is a perceptual phenomenon. What is commonly called color (hue) is only one of three parameters. Another is the brightness of the signal (intensity). The third is the admixture of white (saturation). To add to the complexity of the problem, these parameters' relationship to what is perceived is nonlinear. The color perception issue in data mapping is one of the challenges in information management for the new century.
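The rainbow mapping described above can be sketched in a few lines of code. In the illustrative Python sketch below, the particular color stops and the clamping of out-of-range values are assumptions made for the example, not a standard colormap definition.

# A rough sketch of a "rainbow" colormap: blue at the lowest data value,
# red at the highest, with intermediate values interpolated linearly.
RAINBOW_STOPS = [
    (0.00, (0.0, 0.0, 1.0)),   # blue
    (0.25, (0.0, 1.0, 1.0)),   # cyan
    (0.50, (0.0, 1.0, 0.0)),   # green
    (0.75, (1.0, 1.0, 0.0)),   # yellow
    (1.00, (1.0, 0.0, 0.0)),   # red
]

def colormap(value, vmin, vmax):
    """Map a data value in [vmin, vmax] to an (r, g, b) triple."""
    t = 0.0 if vmax == vmin else (value - vmin) / (vmax - vmin)
    t = min(max(t, 0.0), 1.0)   # clamp out-of-range data
    for (t0, c0), (t1, c1) in zip(RAINBOW_STOPS, RAINBOW_STOPS[1:]):
        if t <= t1:
            f = 0.0 if t1 == t0 else (t - t0) / (t1 - t0)
            return tuple(a + f * (b - a) for a, b in zip(c0, c1))
    return RAINBOW_STOPS[-1][1]

# Example: a 28-degree reading on a 10-40 degree temperature scale.
print(colormap(28.0, 10.0, 40.0))   # roughly (0.4, 1.0, 0.0), a yellowish green

The nonlinear relationship between such numeric triples and what a viewer actually perceives is exactly the difficulty raised above; the sketch only makes the data-to-color mapping explicit.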
Information-Based Integration
From the informational resource perspective, systems are often viewed as sets of components (agents, subsystems) characterized by informational closure and supporting separate functions. Many organizations operate in this highly compartmentalized manner. It appears that the general direction of systems operation in the future, however, is toward linking together function-specific agents into a fully integrated system. An integrated system is a system that consists of agents that efficiently contribute to the task, functional behavior, and performance of a given system as a whole. It is believed that such an integration can be achieved through the flow of information. The proposed solution for this challenge is presented next.
INTEGRATION THROUGH THE FLOW OF INFORMATION
Systems become increasingly complex. Their decomposition into smaller units is the usual way to overcome the problem of complexity. This has historically led to
the development of atomized structures consisting of a limited number of autonomous subsystems that decide about their own information input and output requirements, i.e., that can be characterized by what is called an information closure. Autonomous subsystems can still be interrelated and embedded in larger systems, as autonomy and independence are not equivalent concepts. These ideas have recently been gaining very strong interest in both academia and industry, and the atomized approach to information flow modeling and evaluation is an idea whose time has certainly come (Gunasekaran and Sarhadi, 1997; O'Grady, 1999). The integration process is of great importance to the theory and practice of a number of fields, for example Computer Integrated Manufacturing (CIM); however, it is an area which is not well understood (Tharumarajah, 1998). Integration is concerned with a systematic generation of alternatives in order to create new systems or improve existing ones. It usually includes the following issues:
• selection of appropriate agents (subsystems),
• interconnection of the agents through the flow of information.
The approach proposed in this section employs a system-theoretic hierarchical integration which provides a practical tool for reasoning with agents and recognizes the importance of interaction in knowledge-based systems and intelligent systems (Pacholski and Wejman, 1995). It is based on the working hypothesis that it is possible to develop hierarchically structured integrated systems by reasoning with autonomous agents (subsystems). As a result, the architecture and information flow at the integration stage take into consideration the following assumptions:
• hierarchical representation of agents with various levels of detail is suited for the modeling of integrated systems,
• any hierarchical model of an integrated system can be characterized by its agents and the connections that were applied during the matching process.
Traditionally, system analysts have solved the integration problem in an ad hoc manner. In this section, a formal integration approach is presented which is suitable for computer implementation.
Related Work
Integration of agents (subsystems) represented by models has gained considerable attention. Model development and synthesis (that resembles agent development and integration) is frequently based on the general systems theory, and it uses hierarchical structures and a number of model base concepts (Wyzalek, 1999; Rolstadas and Andersen, 2000; Quinn, 1992; Hassan et al., 1994; Askin and Standridge, 1993). In Askin and Standridge (1993) hierarchical manufacturing systems developed from functional components and integrated by information that is passed between components are discussed. In Raczkowsky and Reithofer (1998) the future development of a hierarchical communication model for coordination of a set of agents performing several functions is addressed. A conceptual modeling approach to represent the complexities in CIM systems, including such issues as information acquisition, storage, dissemination, and the time and costs associated with such informational activities, is proposed in Tharumarajah (1998). The problem
of coordination of multiagent manufacturing systems developed to fulfill their functional requirements, advocating a decentralized approach in which each agent has relative autonomy over its own actions, is discussed in Pacholski (1998). In O'Grady (1999) the application of the modular paradigm to the integration of systems in which planning, grouping, and scheduling are the central functional areas is described. The role of the flow of information in the process of integration is discussed in Prakken (2000). In Kamrani and Sferro (1999) the integration of manufacturing agents (information islands and automation islands) using knowledge-based technology is proposed for the factory of the future.
The Principles of Agent Integration
The following are the basic concepts associated with the integration used for creating hierarchically structured systems at different levels of abstraction:
• agents,
• connections,
• ports,
• subsystems.
An autonomous agent is a group of people, machines, robots, and/or guided vehicles that decides about its own informational requirements (autonomy) and that represents function(s) that is/are part of the system's functional requirements (Szczerbicki, 1993). Connections and ports link, through the flow of information, the agents and subsystems. A subsystem is a set of connected agents. A description of an agent includes the following: (i) functions, (ii) informational input, (iii) informational output, and (iv) information structure. In the function section of an agent description, the characteristic functions are listed. The characteristic functions are those that have been generated during the functional decomposition stage and then chosen to be represented by the agent during the representation stage. Informational input and output refer to the variables describing the state of the external environment of an agent. For the input, only those variables are included about which information is necessary to perform the functions represented by the agent. For the output, only those variables are included that are affected by the functions specified for the agent. The internal information structure models the information flow inside an agent. It is defined during the agent representation process. Agents are matched using informational input and output. For example, in the domain of manufacturing systems, information may represent material availability, tool availability, machine availability, number of parts produced, number of products assembled, and the like. After the matching has been accomplished, the informational input variable of a given agent represents the value of the informational output variable of the agent to which it has been connected. In Figure 1, the input variable X of agent A2 is matched with the output variable Y of agent A1. The syntax of an agent connection is given as:
Agent.Output_variable ——> Agent.Input_variable
or, using the symbols in Figure 1:
A1.Y ——> A2.X
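A minimal data-structure sketch of these concepts is given below. The class names, field names, and the sample functions and variables are illustrative assumptions for this chapter's discussion, not part of its formalism.

from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    functions: list   # characteristic functions chosen at the representation stage
    inputs: list      # informational input variables needed to perform the functions
    outputs: list     # informational output variables affected by the functions

@dataclass
class Connection:
    src_agent: str
    out_var: str
    dst_agent: str
    in_var: str
    def __str__(self):
        # Renders the connection in the syntax used above, e.g. "A1.Y ——> A2.X".
        return f"{self.src_agent}.{self.out_var} ——> {self.dst_agent}.{self.in_var}"

# Hypothetical agents: A1 reports tool availability (Y); A2 needs that value as X.
a1 = Agent("A1", ["report tool availability"], inputs=["Z"], outputs=["Y", "P"])
a2 = Agent("A2", ["schedule parts"], inputs=["X", "Z"], outputs=["Q"])
print(Connection("A1", "Y", "A2", "X"))   # prints: A1.Y ——> A2.X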
Any structure developed during the integration process may be enclosed into an autonomous subsystem using ports. The informational input and output ports provide an interface to the subsystem environment. This interface allows one to develop hierarchical structures. There has been little use of multiple hierarchies of abstraction in the process of systems integration in the literature published, although the idea appears to have considerable merits. To illustrate the notion of ports, in Figure 2 agents A1 and A2 have been matched using the A1.Y ——> A2.X connection.
Figure 1: Agent connection (the output variable Y of agent A1 is matched with the input variable X of agent A2)
Figure 2: Subsystem A1.A2 as a closure of agents A1 and A2 (Z is the input port and P is the output port of the subsystem)
The informational input port of the subsystem A1.A2 represents the value of an input variable Z of agent A2: A1.A2.Z = A2.Z. The informational output port of the subsystem A1.A2 represents the value of an output variable P of agent A1: A1.A2.P = A1.P. The frame of the subsystem A1.A2 is as follows:
(Subsystem (A1.A2)
  (Agents (A1, A2))
  (Output_port (A1.A2.P = A1.P))
  (Input_port (A1.A2.Z = A2.Z))
  (Agent_connection (A1.Y ——> A2.X)))
Subsystem A1.A2 can be later matched with another subsystem or an agent. Different levels of abstraction at the integration stage correspond to various levels of abstraction at the related functional decomposition. For a complex process, the system analyst may begin at the highest level of abstraction with a general view of the process and go into more details by "opening" subsystems (such as A1.A2 in Figure 2) that are elements of the hierarchical structure of the overall system.
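Continuing the illustrative sketch above, the frame of a subsystem can be held as a small record whose ports map the subsystem's external variables onto the enclosed agents' variables. Again, the class and field names below are assumptions made for illustration.

from dataclasses import dataclass

@dataclass
class Subsystem:
    name: str
    agents: list          # enclosed agents or nested subsystems
    connections: list     # internal matchings, e.g. "A1.Y ——> A2.X"
    input_ports: dict     # subsystem input -> enclosed agent input
    output_ports: dict    # subsystem output -> enclosed agent output

# The closure of A1 and A2 from Figure 2, written out as data.
a1_a2 = Subsystem(
    name="A1.A2",
    agents=["A1", "A2"],
    connections=["A1.Y ——> A2.X"],
    input_ports={"A1.A2.Z": "A2.Z"},
    output_ports={"A1.A2.P": "A1.P"},
)
# a1_a2 can later be matched with another subsystem or agent in the same way,
# which is what allows hierarchies of abstraction to be built up.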
Generation of Agent Connections
The syntax for connection of agents has been introduced above. It is easy to notice that without any additional mechanism for generation of connections, the bulk of the resulting matchings would not possess either a physical or logical meaning, and could lead to systems that might not be valid, for example, systems without a boundary. In addition, in the absence of constraints guiding the generation of connections, the number of all possible matchings for a complex system (with a large initial number of agents) would become difficult to manage. Generation of connections between agents and/or subsystems is guided by production rules. They may represent qualitative knowledge and possess the following important characteristics (Bender, 1996):
• Modularity: Each rule defines a small, relatively independent piece of knowledge.
• Incrementability: A new piece of knowledge can be added without modification of the existing production rules.
• Modifiability: The existing production rules can be changed relatively independently of other rules.
• Transparency: Production rules facilitate explanation of the system performance.
The rules refer to all possible types of connections, i.e.:
• agent ——> agent,
• subsystem ——> subsystem,
• agent ——> subsystem,
• subsystem ——> agent.
Production rules are structured to ensure that:
• an overall system is integrated in such a way that all agents are included (the set of agents at the initial state of integration represents all functions specified at the functional decomposition stage);
• all boundary informational inputs and outputs are included in the integrated system and they represent the only way that the system communicates with its external environment;
• only physically and logically feasible matchings of agents can be explored (i.e., only feasible variants of the integrated system).
The agents and subsystems in the production rules supporting the integration are referred to as elements. An informational input boundary element is one that accepts information from an environment of the system, and an informational output boundary element is one that provides informational output to the environment. Two production rules that indicate whether the integration solution is feasible are presented next.
Rule 34
IF there is only one element left
THEN do not generate connections
Rule 35
IF a single element that is left includes boundary inputs and outputs only
THEN it is an overall system
Production rule 36 sets a starting point for generation of connections and thus the beginning of the process of overall system building. It requires that the integration process begins with an input boundary element.
Rule 36
IF there is more than one element
THEN select a connection for an input boundary element
Production rule 37 is used for inclusion of all elements in the integrated solution.
Rule 37
IF there are elements other than the boundary elements
THEN do not specify any connections that involve boundary elements only
Production rules 38 and 39 make sure that agents that have been defined as boundary elements at the representation stage also create the boundary of the overall system.
Rule 38
IF an element is an input boundary element
THEN it cannot accept an input from any other element
Rule 39
IF an element is an output boundary element
THEN it cannot provide an input to any other element
Production rules 40 and 41 generate the actual connections. Rule 40 sets the matching priority for agents with exactly the same informational input and output. Since this is not always the case, production rule 41 specifies the priority for generating the connections between elements with only partially identical input and output variables.
Rule 40
IF two elements have identical output and input variables AND there are no production rules that prevent connecting them
THEN specify the connections for these elements
Rule 41
IF there are no elements with identical input and output variables AND there are elements with partially identical input and output variables AND there are no production rules that prevent connecting them
THEN specify the connection for these elements beginning with the closest match
The notion of the closest match can be explained with the following example. Assume that X1, X2, and X3 are the informational output variables of agent A1, and that there are no other elements with identical input variables. There are, however, elements A2 and A3 with input variables X1, X2 and variables X1, X4, and X5, respectively. The match between A1 and A2 (two identical variables) is closer than the match between A1 and A3 (only one variable is identical). Figure 3 presents the way in which elements with only partially identical output and input variables are connected. Production rule 42 develops a multilevel hierarchy of the inner structure of the overall integrated system.
Rule 42
IF a connection for an input boundary element has been specified
THEN continue with selecting connections for elements that have not been listed in the specifications
Figure 3: Connecting agents with similar input and output variables (A1 and A2 are connected on their shared variables X1 and X2 and enclosed into subsystem A1.A2)
Production rule 43 is applied when all inner (non-boundary) elements have already been included in existing subsystems but the integration process has not been completed.
Rule 43
IF there are boundary elements only
THEN specify connections between them
All the above production rules have been structured independently of the application domain and cannot be modified by a system analyst. The analyst may, however, add domain-based production rules. They may reflect, for example, safety requirements, marketing requirements, or other constraints imposed by the analyst.
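Production rules of this kind map naturally onto small predicate functions that are consulted whenever a candidate connection is generated. The sketch below encodes only Rules 34, 38, and 39; the function and parameter names are assumptions made for illustration.

def rule_34_stop(elements):
    """Rule 34: IF there is only one element left THEN do not generate connections."""
    return len(elements) <= 1

def rule_38_allows(src, dst, input_boundary):
    """Rule 38: an input boundary element cannot accept an input from any other element."""
    return dst not in input_boundary

def rule_39_allows(src, dst, output_boundary):
    """Rule 39: an output boundary element cannot provide an input to any other element."""
    return src not in output_boundary

# A candidate connection src ——> dst survives only if no rule rejects it, e.g.:
# keep = rule_38_allows(src, dst, input_boundary) and rule_39_allows(src, dst, output_boundary)

Domain-based rules added by the analyst would simply be further predicates of the same shape.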
The Integration Algorithm
Integration of agents and subsystems into an overall system begins at the lowest level (level 1) of abstraction represented by the initial number of agents generated at the agent representation stage. It is performed according to the algorithm presented next.
Step 1. Open a set AGENT_BASE consisting of all agents defined at the representation stage. Set level = 1.
Step 2. Apply production rules to generate connections between elements in AGENT_BASE.
Step 3. If no connections are generated, stop; otherwise, match elements in AGENT_BASE into pairs using the existing connections.
Step 4. Define informational input and output variables for subsystems generated by the matching process.
Step 5. Remove from AGENT_BASE all elements that have taken part in the matching process.
Step 6. Add to AGENT_BASE all subsystems generated by the matching process.
Step 7. Set level = level + 1 and go to Step 2.
According to the above algorithm, in order to enter the next level of integration it is enough to generate only one connection in Step 2. Elements of the agent base that are not matched at level i are considered for matching at level i+1. The algorithm terminates at the level at which it is not possible to match agents and/or subsystems into pairs (no connections are generated). Two cases are possible. First, matching is no longer possible because all agents and subsystems have been exhausted and an overall system has been built. Second, although there are subsystems and/or agents available, it is not possible to match them. In this case, the integration path explored has not led to the overall system and none of the subsystems at the highest level includes all functions specified in the functional space for the designed system.
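Expressed as code, the algorithm is a simple loop over levels of abstraction. In the sketch below, generate_connections, match_into_pairs, and derive_ports stand in for the production-rule machinery and the port-definition step; they are assumed callables, not part of the chapter's specification.

def integrate(agent_base, generate_connections, match_into_pairs, derive_ports):
    """Hierarchical integration loop following Steps 1-7 of the algorithm above."""
    level = 1                                            # Step 1
    while True:
        connections = generate_connections(agent_base)   # Step 2: apply production rules
        if not connections:
            return agent_base, level                     # Step 3: stop if nothing can be matched
        matched, subsystems = match_into_pairs(agent_base, connections)
        for s in subsystems:
            derive_ports(s)                              # Step 4: define subsystem input/output ports
        agent_base = [e for e in agent_base if e not in matched]   # Step 5: remove matched elements
        agent_base = agent_base + subsystems             # Step 6: add the new subsystems
        level += 1                                       # Step 7: next level of abstraction

# A run that ends with a single element containing only boundary inputs and outputs
# corresponds to a valid overall system (Rule 35); otherwise the explored integration
# path has failed to produce one.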
Illustrative Example
As a hypothetical example, let us assume the initial agent base presented in Figure 4, in which agent A1 is an input boundary element.
Figure 4: The initial state of an agent base (eight agents A1-A8 with informational input and output variables X1-X4)
Agents A6 and A8 are output boundary elements (note the dotted backgrounds for input and output boundary elements in the figure). The integration algorithm is used to build an overall system.
Step 1. AGENT_BASE = {A1, A2, A3, A4, A5, A6, A7, A8} (as shown in Figure 4), level = 1.
Step 2. A1 ——> A4 (rules 36, 37, 38, and 40). Since the agent base includes more than one element, production rule 36 is fired, which means that the search for a connection begins with agent A1 (A1 is an input boundary element). Rule 37 excludes connections with agents A6 and A8. Rule 38 directs the search towards agents that may accept, as their input, the output of agent A1 (it excludes connections such as, for example, A5——>A1). Finally, rule 40 selects agent A4 as the best match for agent A1, i.e., connection A1——>A4 is specified.
A2——>A6 (rules 42, 39, and 40). Since a connection for the only input boundary element at this level has been specified, production rule 42 requires that the search for connections continues with elements that have not been provided in the specification list. Agent A2 is such an element. Rule 39 excludes connections A6——>A2 and A8——>A2 (A6 and A8 are output boundary elements). Rule 44 excludes connection A2——>A8. Rule 40 specifies agent A6 as the match with the input variables that are identical to the output variables of agent A2, so that the connection A2——>A6 is generated.
A5——>A3 (rules 42 and 40).
The next element that has not been specified in the connections is agent A3 (production rule 42). Rule 40 points towards agent A5 as the best match for A3 (the output of A5 is identical with the input of agent A3).
A7——>A8 (rules 42 and 40). Only two elements have been left without connections specified. Rules 42 and 40 lead to generation of the connection A7——>A8.
Step 3. Connections specified in Step 2 allow the following subsystems to be developed: S1 = A1.A4, S2 = A2.A6, S3 = A5.A3, S4 = A7.A8.
Step 4. Input and output variables for subsystems generated in Step 3 are defined.
Step 5. AGENT_BASE = {S1, S2, S3, S4}.
Step 6. Level = 2.
Step 2. S1——>S3 (rules 36 and 40). At level 2 the production rules are applied to the new state of the agent base, in which connections are generated for the four subsystems S1, S2, S3, and S4.
Step 3. S5 = S1.S3.
Step 4. Input and output variables are defined for S5.
Step 5. AGENT_BASE = {S2, S4, S5}.
Step 6. Level = 3.
Step 2. S5 ——> S4 (rules 36, 41, and 43).
Step 3. S6 = S5.S4.
Step 4. Input and output variables are defined for S6.
Step 5. AGENT_BASE = {S2, S6}.
Step 6. Level = 4.
Step 2. S6 ——> S2 (rules 36 and 43).
Step 3. S7 = S6.S2.
Step 4. Input and output variables are defined for S7.
Step 5. AGENT_BASE = {S7}.
Step 6. Level = 5.
Step 2. No connections are generated (rule 34).
Step 3. Stop.
Note that the overall system S7 is a valid integrated solution, as all its input and output variables represent connections with the system environment (rule 35). The five-level hierarchical tree of system S7 is presented in Figure 5. In the example discussed, the integration path explored satisfies the constraints expressed with the production rules and, at the same time, leads to a valid overall system.
CONCLUSION
Information flow integration is one of the major activities of the design process of an integrated system. The outcome of the integration process is the overall system integrated through the flow of information. In this chapter, a rule-based approach was proposed for the integration problem formulated as follows.
Figure 5: Five-level hierarchical tree of the overall integrated system S7 (S7 = S6.S2; S6 = S5.S4; S5 = S1.S3; S1 = A1.A4, S2 = A2.A6, S3 = A5.A3, S4 = A7.A8)
Given the informational inputs and outputs of agents (subsystems), find the overall system being designed that meets the desired functions and is integrated through the flow of information.
Elements of an agent base are integrated using an algorithm into an overall system that has a hierarchical structure. General production rules supporting the generation of connections for agents and subsystems were developed. The general production rules relate to the underlying systems theory. They are structured independently of the system's domain and cannot be modified by a system analyst. Production rules ensure that only feasible variants of the designed system are explored. The algorithm and production rules were applied to build an overall system in a hypothetical example with eight agents at the initial stage of integration.
REFERENCES
Askin, R. G. and Standridge, C. R. (1993). Modelling and Analysis of Manufacturing Systems. New York: John Wiley & Sons.
Bender, E. A. (1996). Mathematical Methods in Artificial Intelligence. Cocoa Beach, FL: IEEE Press.
Gunasekaran, A. and Sarhadi, M. (1997). Planning and management issues in enterprise integration. Concurrent Engineering: Research and Application, 5, 98-100.
Hassan, A. A., Hershey, J., Schroeder, J., Sohie, G. R. L. and Yarlagadda, R. K. R. (1994). System Design. San Diego, CA: Academic Press.
Kamrani, A. L. and Sferro, P. R. (1999). Direct Engineering: Toward Intelligent Manufacturing. New York: Kluwer Academic Publishers.
Lefkowitz, H. and Herman, G. T. (1992). Color scales for image data. IEEE Computer Graphics and Applications.
O'Grady, P. (1999). The Age of Modularity. Adams and Steele Publishers.
Pacholski, L. (1998). Fuzzy logic application in ergonomic renewal of multiagent manufacturing systems. Cybernetics and Systems: An International Journal, 29, 715-728.
Pacholski, L. and Wejman, M. (1995). Soft Modelling of the Ergonomicity of the Multiagent Manufacturing Systems. New York: Taylor and Francis.
Prakken, B. (2000). Information, Organization and Information Systems Design. New York: Kluwer Academic Publishers.
Quinn, J. B. (1992). Intelligent Enterprise. New York: The Free Press.
Raczkowsky, J. and Reithofer, W. (1998). Design of consistent enterprise models. Cybernetics and Systems: An International Journal, 29, 525-552.
Rogowitz, B. E. and Treinish, L. (1998). Data visualization: The end of the rainbow. IEEE Spectrum.
Rolstadas, A. and Andersen, B. (2000). Enterprise Modelling. New York: Kluwer Academic Publishers.
Szczerbicki, E., Kubiak, B. F. and Korowicki, A. (2000). Re-engineering and information management issues in concurrent systems analysis for performance enhancement. Systems Analysis, Modelling, Simulation, 38, 141-155.
Szczerbicki, E. (1993). Acquisition of knowledge for autonomous cooperating agents. IEEE Transactions on Systems, Man, and Cybernetics, 23, 1302-1315.
Tharumarajah, A. (1998). A self-organising model for scheduling distributed autonomous manufacturing agents. Cybernetics and Systems: An International Journal, 29, 461-480.
Tufte, E. (1994). Envisioning Information. Cheshire, CT: Graphics Press.
Tufte, E. (1998). Visual Explanations. Cheshire, CT: Graphics Press.
Wyzalek, J. (1999). Systems Integration Success. Auerbach.
Chapter VII
An Experimental Analysis of the Effectiveness and Efficiency of Teams with Partial Problem Domain Knowledge
Dinesh A. Mirchandani and Jaideep Motwani
Grand Valley State University, USA
ABSTRACT
Knowledge Management Systems are becoming increasingly important to both practitioners and researchers. One area of application of such systems is the formation of organizational teams with appropriate knowledge content to solve complex and novel problems. A common predicament, however, is that teams are often formed with only partial problem domain knowledge. This study examines whether teams that have partial problem domain knowledge are more effective and efficient than teams that do not have specific problem domain knowledge. It finds that partial problem domain knowledge may in fact be worse than no problem domain knowledge. Several implications for researchers and practitioners are derived from this result.
INTRODUCTION
Information systems that focus on creating, gathering, organizing and disseminating an organization's knowledge are known as Knowledge Management Systems
(KMS) (Markus, 2001; Alavi and Leidner, 1999). The importance of KMS is underscored by the fact that several organizations are now taking steps to manage their knowledge explicitly by appointing chief knowledge officers (CKOs) to do so (Holsapple and Joshi, 2000). CKOs perform a variety of roles, including serving as the chief designer of the knowledge architecture, the head technologist for knowledge technologies, and the primary procurement officer for external knowledge content (Grover and Davenport, 2001). Clearly, there is a growing recognition that knowledge has become an important basis for competitive advantage between firms (Guay, 2001; Nidumolu et al., 2001). Knowledge has even been suggested as the most strategically significant resource for an organization (Pfeffer and Sutton, 2000; Grant, 1996). Typically, however, knowledge in organizations is gained through experience and interactions with both processes and individuals (Schulz, 2001; Mulholland and Zdrahal, 2001). The recognition of this nature of knowledge is evident in the emphasis placed in many organizations on "learning by doing," where newcomers to the organization are expected to gain much of their knowledge and skills through a hands-on approach, even though it may be relearning something that someone else in the organization already knows (Nidumolu et al., 2001). Knowledge management, on the other hand, aims at knowledge reuse within organizations and the development of organizational memory systems to aid this reuse (Markus, 2001).
REVIEW OF LITERATURE
KMS can take the form of knowledge repositories or of knowledge maps. Knowledge repositories are databases of documents written by knowledgeable individuals, whereas knowledge maps are searchable indexes of expertise held by individual employees of an organization (Davenport and Prusak, 1998). Using the former, an organization can gain immediate access to knowledge relevant to its needs, and with the latter it can best utilize the individual strengths of its employees. According to Gray (2000), by making knowledge searches more effective, KMS can improve the variety of knowledge present on problem-solving teams. Teams are collectives that have the characteristic of shared interdependent work (Lovelace, Shapiro and Weingart, 2001; Dyer, 1977). They are a useful mechanism for pooling and using the diverse knowledge and skills of employees (Drucker, 1994). Improved team knowledge diversity can lead to more accurate and complete analysis of complex problems, thereby improving the effectiveness of the solutions teams generate. This ability of KMS is important because organizations are increasingly using teams to solve complex problems. In fact, according to Gordon (1992), 82 percent of companies with 100 or more employees use team structures. Also, 68 percent of Fortune 1,000 companies use self-managing teams (Lawler, Mohrman and Ledford, 1995). KMS that can help create teams with appropriate knowledge content to solve problems can be invaluable to an organization. In fact, in companies such as Boeing,
team members are networked together according to roles, tasks and project deliverables. They also have access to an electronic library of best practices that can be shared and updated on a project-by-project basis thereby improving the company’s performance and reducing costs (Guay, 2001). This chapter thus focuses on examining the performance of knowledge-based problem-solving teams. The formation of such teams is an important application of KMS that can improve organizational performance and create competitive advantage for it (Gray, 2000). Gray’s proposition is also supported by information processing theory (Galbraith, 1973) because to be effective, teams require sufficiently diverse knowledge to properly assess and understand the problems they face. Inadequate problem assessment can lead to poor decisions (Tushman and Nadler, 1978). However a challenge that often arises for managers is that the full scope of a problem is rarely evident when the problem is first encountered. A manager facing such a problem must assemble a team without fully understanding the problem domain. Thus the manager may not identify all the required knowledge bases when putting together the team, and the team may not have the knowledge variety to understand and solve the problem. For most organizations in such situations, a related issue is that knowledge searches require time and effort to identify potential knowledge holders and contact them to assess the relevance of their knowledge. Managers may not be willing to conduct a thorough enough search for the right expertise (Davenport and Prusak, 1998). This increases the probability that decisions will be made using incomplete or deficient knowledge. Inadequate team knowledge variety at inception can however be corrected by searching for and incorporating the missing knowledge into the structure by adding new team members (Gray, 2000). Indeed, most organizations use emergent teams whose structure evolves in response to a problem. But given the fact that team knowledge is often partial or incomplete, it is particularly relevant to examine the effectiveness and efficiency of teams in solving problems about which the team has only partial expertise. Contrasting this performance to the performance of teams that have no expertise in the problem domain can help indicate the usefulness of partial problem knowledge to teams. We undertake this examination to gain a better understanding of team composition and team knowledge content.
METHODOLOGY
A large state organization (> 2,000 employees) was solicited to participate in the study. Details of this organization are withheld to respect its desire for privacy. Individual participants were drawn from the organization's functional areas such as accounting, finance, operations, information systems, and marketing. The functional area from which the participant was drawn was used as that participant's area of expertise or knowledge domain, with their consent. This simplification replicates the behavior of a satisficing manager (Davenport and Prusak, 1998), and eliminates the conduct of a thorough knowledge search. Another reason for using functional areas
is that many organizations primarily rely on functional area affiliation in determining team composition, e.g., joint application development (JAD) efforts for information systems (Shelly, Cashman and Rosenblatt, 2000). Organizations consider such cross-functional teams as effective and productive ways to work that can provide multiple perspectives and experiences to decision makers (Lovelace, Shapiro and Weingart, 2001; Nidumolu, 2001). To ensure reliability of results, only those participants who had at least some level of college education in their functional area were included in the sample. Sixty teams, each comprising two members, were thus initially formed by the researchers. These 60 teams were then divided into two groups, each comprising 30 teams. In both groups, each team was presented with a problem to solve related to a particular functional area. The first group (of 30 teams) was presented with a problem related to the functional area of marketing. Of the 30 teams in this group, 15 teams had at least one team member who considered himself knowledgeable about marketing. The remaining 15 teams had both members drawn from functional areas other than marketing. The second group (of 30 teams) was presented with a problem related to the functional area of finance. Of the 30 teams in this group, 14 teams had at least one team member who considered himself knowledgeable about finance. The remaining 16 teams had both members drawn from functional areas other than finance. The purpose of using two groups, each presented with a different problem domain, was to ensure validity of the results and to eliminate the possibility of biases resulting from inappropriate problem selection. To measure the performance of the teams in solving the problems, the researchers used two parameters: (1) the correctness of the solution generated by the team, which was used as a measure of problem-solving effectiveness (recorded as a binary variable), and (2) the time taken by the group to solve the problem, which was used as a measure of problem-solving efficiency. Two hypotheses were examined:
H1. Teams that have at least some knowledge of the problem domain will be more effective than teams with general business knowledge but not specific knowledge of the problem domain.
H2. Teams that have at least some knowledge of the problem domain will be more efficient than teams with general business knowledge but not specific knowledge of the problem domain.
The alternative hypotheses examined were:
H1a. There is no difference in problem-solving effectiveness between teams that have at least some knowledge of the problem domain and teams that have general business knowledge but not specific knowledge of the problem domain.
H2a. There is no difference in problem-solving efficiency between teams that have at least some knowledge of the problem domain and teams that have general business knowledge but not specific knowledge of the problem domain.
The rationale for hypotheses H1 and H2 is derived from the works of Simon (1945) and Mintzberg et al., (1976). Teams that have some problem domain knowledge are likely to understand the problem better and come up with superior solutions.
DATA ANALYSIS
Two-way contingency table analyses were conducted to evaluate whether teams with partial problem domain knowledge were more effective than teams with no problem domain knowledge (Hypothesis H1). Significant values of Pearson χ2 and Cramer's V would indicate such a difference in problem-solving effectiveness. As shown in the tables below, partial problem domain knowledge was found to be unrelated to problem-solving effectiveness. Thus we reject H1 and support its alternative hypothesis H1a.
For the first group, Pearson χ2 (1, N = 30) = 1.677, p = .195, Cramer's V = .236 (see Tables 1, 2, and 3 below). This indicates that there is no difference in the effectiveness of teams that have partial knowledge of marketing and no knowledge of marketing in solving a problem derived from the functional area of marketing. For the second group, Pearson χ2 (1, N = 30) = .010, p = .919, Cramer's V = .018 (see Tables 4, 5, and 6 below). This indicates that there is no difference in the effectiveness of teams that have partial knowledge of finance and no knowledge of finance in solving a problem derived from the functional area of finance.

Table 1: Marketing problem analysis–Crosstabulation

                                                       Incorrect analysis   Correct analysis   Total
Team has no knowledge of marketing     Count                   2                  13             15
                                       Expected count          3.5                11.5           15.0
Team has some knowledge of marketing   Count                   5                  10             15
                                       Expected count          3.5                11.5           15.0
Total                                  Count                   7                  23             30
                                       Expected count          7.0                23.0           30.0

Table 2: Chi-square tests (marketing problem)

                               Value    df   Asymp. Sig. (2-sided)   Exact Sig. (2-sided)   Exact Sig. (1-sided)
Pearson Chi-Square             1.677    1    .195
Continuity Correction           .745    1    .388
Likelihood Ratio               1.721    1    .190
Fisher's Exact Test                                                  .390                   .195
Linear-by-Linear Association   1.621    1    .203
N of Valid Cases                  30
Table 3: Symmetric measures (marketing problem)

                    Value    Approx. Sig.
Phi                 -.236    .195
Cramer's V           .236    .195
N of Valid Cases       30

Table 4: Finance problem analysis–Crosstabulation

                                                     Incorrect analysis   Correct analysis   Total
Team has no knowledge of finance     Count                   6                  10             16
                                     Expected count          5.9                10.1           16.0
Team has some knowledge of finance   Count                   5                   9             14
                                     Expected count          5.1                 8.9           14.0
Total                                Count                  11                  19             30
                                     Expected count         11.0                19.0           30.0

Table 5: Chi-square tests (finance problem)

                               Value    df   Asymp. Sig. (2-sided)   Exact Sig. (2-sided)   Exact Sig. (1-sided)
Pearson Chi-Square              .010    1    .919
Continuity Correction           .000    1    1.000
Likelihood Ratio                .010    1    .919
Fisher's Exact Test                                                  1.000                  .610
Linear-by-Linear Association    .010    1    .921
N of Valid Cases                  30

Table 6: Symmetric measures (finance problem)

                    Value    Approx. Sig.
Phi                  .018    .919
Cramer's V           .018    .919
N of Valid Cases       30
We used Mann-Whitney U tests to evaluate the hypothesis (H2) that teams with partial problem domain knowledge are more efficient than teams with no problem domain knowledge. Significant values of z would indicate that there is indeed a difference in problem-solving efficiency.
Table 7: Mann-Whitney test–Time taken to analyze the marketing problem

                                       N    Mean rank   Sum of ranks
Team has no knowledge of marketing     15     17.07        256.00
Team has some knowledge of marketing   15     13.93        209.00
Total                                  30

Table 8: Test statistics–Time taken to analyze the marketing problem (grouping variable: knowledge of marketing)

Mann-Whitney U                    89.000
Wilcoxon W                       209.000
Z                                  -.987
Asymp. Sig. (2-tailed)              .323
Exact Sig. [2*(1-tailed Sig.)]      .345 (not corrected for ties)

Table 9: Mann-Whitney test–Time taken to analyze the finance problem

                                       N    Mean rank   Sum of ranks
Team has no knowledge of finance       16     18.41        294.50
Team has some knowledge of finance     14     12.18        170.50
Total                                  30

Table 10: Test statistics–Time taken to analyze the finance problem (grouping variable: knowledge of finance)

Mann-Whitney U                    65.500
Wilcoxon W                       170.500
Z                                 -1.951
Asymp. Sig. (2-tailed)              .051
Exact Sig. [2*(1-tailed Sig.)]      .052 (not corrected for ties)
However, the results of the tests were insignificant for Group 1 (z = -.987, p = .323) (Tables 7 and 8) and for Group 2 (z = -1.951, p = .051) (Tables 9 and 10). Thus we reject H2 and support its alternative hypothesis H2a.
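The effectiveness comparisons can be re-run directly from the crosstab counts in Tables 1 and 4. The SciPy sketch below reproduces the Pearson chi-square values reported above; the Mann-Whitney call is shown on invented solution times, since the raw timings behind Tables 7-10 are not published.

from scipy.stats import chi2_contingency, mannwhitneyu

# Incorrect/correct analyses by team type (counts taken from Tables 1 and 4).
marketing = [[2, 13],   # no marketing knowledge: 2 incorrect, 13 correct
             [5, 10]]   # some marketing knowledge: 5 incorrect, 10 correct
finance = [[6, 10],     # no finance knowledge
           [5, 9]]      # some finance knowledge

for label, table in (("marketing", marketing), ("finance", finance)):
    chi2, p, dof, _ = chi2_contingency(table, correction=False)  # Pearson chi-square
    print(f"{label}: chi2({dof}) = {chi2:.3f}, p = {p:.3f}")
# Prints chi2(1) = 1.677, p = 0.195 for marketing and chi2(1) = 0.010, p = 0.919
# for finance, matching the values reported above.

# Efficiency: Mann-Whitney U on completion times (minutes). These times are made
# up purely to show the call; they are not the study's data.
no_knowledge = [42, 55, 61, 48, 50, 67, 39, 58]
some_knowledge = [44, 52, 40, 47, 63, 49, 41, 38]
u, p = mannwhitneyu(no_knowledge, some_knowledge, alternative="two-sided")
print(f"U = {u}, p = {p:.3f}")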
DISCUSSION OF RESULTS
The results of the analysis are somewhat surprising because it is reasonable to expect that teams with partial problem domain knowledge will perform better than teams with general business knowledge but not specific problem domain knowledge (Simon, 1945). The insignificant differences in problem-solving effectiveness and problem-solving efficiency indicate that partial problem domain knowledge is no better than general business knowledge in tackling business-related problems. Further, in 10 out of 29 cases, teams with partial problem domain knowledge incorrectly analyzed the given problem, an error rate of 34% (in contrast to the 26% error rate of teams with no specific problem domain knowledge), which clearly is a cause of concern. This finding places a greater burden on managers responsible for forming teams to ensure that the knowledge content on a team matches the problem domain closely. Putting together teams with only partial problem domain knowledge because it saves on time and effort, rather than doing a thorough knowledge search, may in fact turn out to be more costly to organizations in the long run, as the above analysis indicates. This clearly justifies the investment in a KMS for an organization. The results, however, should also be interpreted with some caution, as in experimental settings it is difficult to replicate organizational settings and informal working relationships. For instance, in this experiment, the team members were brought together solely for the purpose of a single problem-solving episode. Perhaps a continued relationship (as is common in organizational settings) would have improved the team synergy. Also the Hawthorne effect may have negatively affected team performance.
IMPLICATIONS FOR RESEARCH AND PRACTICE

KMS are increasingly important to organizations, and researchers are only now beginning to develop theory to support the field as its applications grow (Markus, 2001). Teams are fundamental to any organizational structure, and the application of KMS to develop knowledge-based teams is beginning to come to maturity. This chapter highlights the importance of effective knowledge searches during team composition and thus the value of KMS. It brings to the fore the potential costs associated with partial problem domain knowledge, thereby cautioning against satisficing human behavior in the formation of problem-solving teams. It thus adds to the growing theory in the field.

This research used teams with only two members, one of whom determined whether the team had problem domain knowledge based on his/her functional area affiliation. Typically, however, organizational team sizes average about nine members
(Offermann and Spiros, 2001). This simplification made it possible to conduct this experiment. However, replicating the study with larger teams would be useful because issues of intra-team communication and group dynamics would become more apparent and might affect the results.

Again, for the purpose of simplification, this study used academic cases drawn from the functional areas of marketing and finance to represent the problems that the teams were assembled to solve. Organizational problems, however, are more complex and multidimensional. They are often novel, unstructured and lacking in clear precedent (Simon, 1977). Thus teams must first understand the problem, analyze their knowledge needs in relation to the problem, and adapt themselves by adding to or removing from the team's knowledge content in order to solve the problem (Gray, 2000). To experimentally model such organizational problems is difficult because, by definition, such problems lack precedent; an alternative to experimental analysis may therefore be for future research to longitudinally shadow teams through actual problem-solving episodes. Perhaps this would also reveal the development of informal working relationships and comfort levels among team members, which may be an important component of eventual success.

Finally, it was surprising to find that teams with partial problem domain knowledge had a higher error rate in problem analysis than teams without specific problem domain knowledge. In retrospect, one possible reason for this outcome may be poor design implementation. According to Gray (2000), superior analysis may well be wasted if the design is poorly implemented. On the other hand, perhaps this was an influence of the Hawthorne effect, where team members affiliated with the functional area from which the problem originated made or contributed to unforced errors because they felt their performance on the problem was being scrutinized more closely than the other team members'. An alternative explanation is that perhaps they overanalyzed the problem, feeling they had to compensate for the other team members' lack of knowledge of the problem. In either case, the result is unexpected and perhaps indicates that there should be some kind of balance on teams, i.e., teams should be composed of pairs or dyads of members with knowledge about a particular problem area rather than a single member. This issue merits further investigation.

The study has clear implications for practitioners. Teams with partial problem domain knowledge proved neither more effective nor more efficient than teams with no problem domain knowledge. In forming a team, it is worthwhile to undergo a costly or even time-consuming search for adequate knowledge content, and to adapt the team's knowledge as appropriate when tackling a problem. This study clearly underscores the importance of maintaining flexibility in team evolution, i.e., the use of emergent teams. Managers need to be wary of taking the easy path of leaving teams static, perhaps in order not to ruffle the feelings of some team members, which would be detrimental to the good of the organization (Offermann and Spiros, 2001). As this study shows, knowing a little about a problem domain is perhaps worse than knowing nothing about the problem domain. Hence, managers who believe that a team with partial knowledge content will do well and
that they need not worry about team composition after initially forming a team may be erroneous in their judgment. Ideally, team members themselves should be aware of their team's incomplete knowledge for tackling a problem and should bring this to the attention of the manager; in practice, this rarely happens. Teams thus try to solve problems the best they can rather than the best their organization can. The burden therefore falls on the manager to open channels of communication with the team and to ensure it has the capability to solve the problem. Managers, however, tend to shift this burden to the teams, who may be reluctant to admit their lack of knowledge to the manager. In either case, better communication would benefit the organization.
CONCLUSION

This research examined two hypotheses, both of which expected teams that had partial problem domain knowledge to perform better than teams that had no specific problem domain knowledge. Both hypotheses were rejected and their alternative hypotheses supported instead. The results thus challenge the old adage that "a little knowledge is better than no knowledge." In fact, the study finds that on problem-solving teams, partial problem domain knowledge can be worse than no problem domain knowledge. Several implications for research and practice can be derived from this finding and are discussed in the chapter.
REFERENCES

Alavi, M. and Leidner, D. (1999). Knowledge management systems: Issues, challenges, and benefits. Communications of the AIS, 1.
Davenport, T. and Prusak, L. (1998). Working Knowledge: How Organizations Manage What They Know. Boston, MA: Harvard Business School Press.
Drucker, P. (1994). The age of social transformation. The Atlantic Monthly, November, 53-80.
Dyer, W. (1977). Team Building: Issues and Alternatives. Reading, MA: Addison-Wesley.
Galbraith, J. (1973). Designing Complex Organizations. Reading, MA: Addison-Wesley.
Gordon, J. (1992). Work teams: How far have they come? Training, 29, 59-65.
Grant, R. (1996). Prospering in dynamically competitive environments: Organizational capability as knowledge integration. Organization Science, 7(4), 375-387.
Gray, P. (2000). The effects of knowledge management systems on emergent teams: Towards a research model. Journal of Strategic Information Systems, 9, 175-191.
Grover, V. and Davenport, T. (2002). General perspectives on knowledge management: Fostering a research agenda. Journal of Management Information Systems, 18(1), 5-21.
Guay, B. (2001). Knowledge management is a team sport. Computing Canada, 27(15), 23.
Holsapple, C. and Joshi, K. (2000). An investigation of factors that influence the management of knowledge in organizations. Journal of Strategic Information Systems, 9, 235-261.
Lawler III, E., Mohrman, S. and Ledford Jr., G. (1995). Creating High Performance Organizations: Practices and Results of Employee Involvement and TQM in Fortune 1000 Companies. San Francisco, CA: Jossey-Bass.
Lovelace, K., Shapiro, D. and Weingart, L. (2001). Maximizing cross-functional new product teams' innovativeness and constraint adherence: A conflict communications perspective. Academy of Management Journal, 44(4), 779-793.
Markus, M. (2001). Toward a theory of knowledge reuse: Types of knowledge reuse situations and factors in reuse success. Journal of Management Information Systems, 18(1), 57-93.
Mintzberg, H., Raisinghani, D. and Theoret, A. (1977). The structure of unstructured decision processes. Administrative Science Quarterly, 21, 246-275.
Mulholland, P. and Zdrahal, Z. (2001). A methodological approach to supporting organizational learning. International Journal of Human-Computer Studies, 55(3), 337-367.
Nidumolu, S., Subramani, S. and Aldrich, A. (2001). Situated learning and the situated knowledge web: Exploring the ground beneath knowledge management. Journal of Management Information Systems, 18(1), 115-150.
Offermann, L. and Spiros, R. (2001). The science and practice of team development: Improving the link. Academy of Management Journal, 44(2), 376-392.
Pfeffer, J. and Sutton, R. (2000). The Knowing-Doing Gap: How Smart Companies Change Knowledge into Action. Boston, MA: Harvard Business School Press.
Schulz, M. (2001). The uncertain relevance of newness: Organizational learning and knowledge flows. Academy of Management Journal, 44(4), 661-681.
Shelly, G., Cashman, T. and Rosenblatt, H. (2000). Systems Analysis and Design (fourth edition). Cambridge, MA: Course Technology.
Simon, H. (1945). Administrative Behavior. New York: The Free Press.
Simon, H. (1977). The New Science of Management Decision. Englewood Cliffs, NJ: Prentice-Hall.
Tushman, M. and Nadler, D. (1978). Information processing as an integrating concept in organizational design. In Nadler, D. and Tushman, M. (Eds.), Managerial Behavior (fourth edition), 157-190. New York: Columbia University Press.
Chapter VIII
Collaboration in the Large: Using Videoconferencing to Facilitate Large-Group Interaction

Diane H. Sonnenwald, Paul Solomon, Noriko Hara, Reto Bolliger and Thomas H. Cox
University of North Carolina at Chapel Hill, USA
ABSTRACT

This chapter discusses the social, organizational and technical challenges and solutions that emerged when facilitating collaboration through videoconferencing for a large, geographically dispersed research and development (R&D) organization. Collaboration is an integral component of many R&D organizations. Awareness of activities and potential contributions of others is fundamental to initiating and maintaining collaboration, yet this awareness is often difficult to sustain, especially when the organization is geographically dispersed. To address these challenges, we applied an action research approach, working with members of a large, geographically distributed R&D center to implement videoconferencing to facilitate collaboration and large-group interaction within the center. We found that social, organizational and technical infrastructures needed to be adapted to compensate for limitations in videoconferencing technology. New social and organizational infrastructure included: explicit facilitation of videoconference meetings; the adaptation of visual aids; and new participant etiquette practices. New technical infrastructure included: upgrades to videoconference equipment; the use of separate networks for broadcasting camera views, presentation slides and audio; and implementation of new
technical operations practices to support dynamic interaction among participants at each location. Lessons learned from this case study may help others plan and implement videoconferencing to support interaction and collaboration among large groups.
INTRODUCTION

Collaboration is a strategic component of many research and development (R&D) efforts because challenges resulting from the need to solve complex problems may often be best addressed by collaboration among experts who apply complementary knowledge from different disciplines, or from specializations within disciplines. Indeed, national agencies, such as the U.S. National Science Foundation (NSF), have established grant programs, such as the Science and Technology Center program and the Industry-University Research Center program, that provide funding to large multi-disciplinary and multi-institutional R&D groups to address complex problems. Typically, centers funded by these national agencies have 50 to 100 or more participating faculty, undergraduate and graduate students, postdoctoral fellows and industry members. These groups are often geographically distributed, and not all members may have worked together or even interacted with each other previously. Therefore, it is often challenging to establish and maintain collaboration. Awareness of the activities and potential contributions of others is fundamental to initiating and maintaining collaboration, yet this awareness is difficult to sustain.

To address these challenges, we have been studying structures and processes within the NSF Science and Technology Center for Environmentally Responsible Solvents and Processes (NSF STC-ERSP) using an action research approach (Stringer, 1999; Whyte, 1997). Our approach investigates social, organizational and technical aspects of large-group collaboration, and iteratively recommends and evaluates mechanisms to facilitate collaboration among group members. Thus, our action research approach is evolutionary in nature, building on existing social and technical infrastructures, and continually exploring new ways to facilitate collaboration over time. This chapter describes these efforts in connection with one collaboration awareness mechanism, large-group videoconferences. Our efforts have focused on both the social and the technical infrastructures that are required to enable and empower collaboration. We conducted 25 interviews with center members and observed approximately 50 videoconference meetings over 12 months. Through these interviews and observations, we identified "best practices" for collaboration in the large.

Problems of transition from co-located to multi-site meetings using videoconferencing will occur in most organizations, and the benefits of broader participation may only be realized when time and resources are invested to notice what does not work, or what is not happening, and to explore and evaluate alternatives. This requires investigating and exploring ways that the social infrastructure
of the organization and the technical infrastructure at the participating locations can better facilitate large-group collaboration.

At the NSF STC-ERSP, our investigation yielded new social and organizational best practices, including: facilitation before, during and after videoconference meetings; the adoption of visual aids to match videoconference technology constraints; and the adaptation of participant, or audience, etiquette. It also yielded new technical practices, including: upgrading of videoconference equipment; using separate networks for broadcasting camera views, presentation slides and, occasionally, voice; and implementing new technical practices to support dynamic interaction among participants at each location (e.g., active camera operation; improved sound quality). These new practices have enhanced the effectiveness of videoconferencing, leading to its adoption within the center and enabling frequent, needs-based meetings across distances.
BACKGROUND

Previous studies investigating videoconferencing vary in terms of their focus, setting and technical system. We developed a matrix to highlight some ways of viewing videoconferencing (see Table 1) in order to raise issues that we needed to consider in our study. In the table, we categorized studies by the nature of the research setting (pairs, small groups of three to seven people and large groups that include more than eight people) and the focus of these studies, i.e., the impact videoconferencing has on interpersonal interaction, work outcomes and processes, and participant, or user, satisfaction.

Table 1: Previous studies on socio-technical aspects of videoconferencing, by research focus; the research settings range from pairs, through small groups (3-7 people), to large groups (more than 8 people)

Interpersonal interaction: Masoodian et al. (1995); Heath & Luff (1991); Sellen (1992); Rice (1993); Isaacs et al. (1995); Ruhleder & Jordan (2001); Barefoot & Strickland (1982); O'Conail, Whittaker & Wilber (1993)
Participant satisfaction: Nodder et al. (1999); Tang & Isaacs (1993); Kies et al. (1996); Patrick (1999)
Work outcomes & process: Nardi et al. (1993); Olson, Olson & Meader (1995); Finholt et al. (1998); Mark, Grudin & Poltrock (1999); Gowan & Downs (1994); Ruhleder, Jordan & Elmes (1996)

In sum, lessons learned from these studies include: audio is crucial (Tang & Isaacs, 1993; Whittaker, 1995); video adds some value, especially when it is used as data (Nardi, Schwarz, Kuchinsky, Leichner, Whittaker & Sclabassi, 1993); video sometimes does not affect task performance but increases participant satisfaction (Kies, Williges & Rosson, 1996; Olson, Olson & Meader, 1995; Tang & Isaacs, 1993); video use reduces certain kinds of interactions compared to face-to-face interaction, partially because of a lack of turn-taking cues (Cadiz, Balachandran, Sanocki, Gupta,
Grudin & Jancke, 2000; Ruhleder & Jordan, 2000; Isaacs, Morris, Rodriguez & Tang, 1995; Sellen, 1992); and the adoption of videoconferencing includes both social and technical aspects (Gowan & Downs, 1994; Finholt, Rocco, Bree, Jain & Herbsleb, 1998; Ruhleder, Jordan & Elmes, 1996; Patrick, 1999). In the following, we highlight some of the studies that are particularly relevant to our study.
Interpersonal Interaction

Various studies have examined how video influences interpersonal interaction. Barefoot and Strickland (1982) note that there have been three positions regarding the impacts of media on interaction. One position is that media may facilitate interaction because they enable interaction that otherwise may not occur. A second position is that media may impede interaction because media eliminate, or destroy, some of the cues available in face-to-face interaction. A third position is that media may have no influence on interpersonal interactions. Heath and Luff (1991) also suggest that the form of communication access that works best depends on the nature of the tasks and the type of sociality that are desired. In studying the impact of videoconferencing on interpersonal interactions, all of these positions have found some support.

Masoodian, Apperley and Frederickson (1995) found no statistical difference in speech duration, number of utterances and turn taking, and duration of mutual silence between pairs working face-to-face, with audio only, or with video and audio. The pairs worked on a problem-solving task that had a correct answer. Sellen (1992) found similar results with respect to speech duration and turn taking. However, Sellen reports that there was more simultaneous speech in the face-to-face condition and that study participants found it more difficult to take control of the conversation in the video condition. Ruhleder and Jordan (2001) report similar findings, and conclude that delays inherent in today's videoconferencing technology cause these problems, especially when the delay is apparent only to participants at one location.

Barefoot and Strickland (1982) further suggest that video often impedes expressions of conflict and disagreement during discussions. When comparing face-to-face interaction with video (television) mediated interaction, Barefoot and Strickland found that 'conflict' was more prevalent in the face-to-face groups and that, as a result, face-to-face groups produced better-integrated solutions to the change-of-work-procedure problem they were addressing. Similarly, Rice (1993) found that participants in an R&D organization ranked (desktop) video fourth, after face-to-face, telephone and meetings, in appropriateness for "exchanging information, negotiating or bargaining, getting to know someone, asking questions, staying in touch, exchanging time-sensitive information, generating ideas, resolving disagreements, making decisions and exchanging confidential information" (p. 458). Video also ranked low in appropriateness for exchanging confidential information; however, (desktop) video ranked third in appropriateness for staying in touch.

Another related study by Isaacs et al. (1995) compared the delivery of presentations via (desktop) videoconferencing and face-to-face lecture. They found
that speakers tended to prefer giving lectures in face-to-face mode because they felt more comfortable and closer to the audiences, whereas audiences tended to prefer receiving lectures through desktop videoconferencing because of its convenience. In terms of interpersonal interaction, their study found that presentations in face-to-face settings seemed to allow richer interactions than presentations through desktop videoconferencing. Audience members were inclined to ask questions one after another, and speakers tended to stimulate more audience involvement, when lectures were given in person. In a similar vein, O'Conail, Whittaker and Wilber (1993) examined the nature of spoken communication in order to identify reasons for unsuccessful videoconferences. One of their findings is that audiences were likely to interrupt less often in videoconferencing systems than in face-to-face meetings, thus reducing the interaction between the speaker and the audience.

These findings suggest that videoconferencing may work fairly well in situations where people are separated across physical distances and a face-to-face meeting is not possible, or where visual information needs to be shared and acted on. These findings further suggest that there is something about physical distance, maintained by the video medium, that inhibits discussion; thus videoconferencing, as it is presently constituted, may not be appropriate for brainstorming and conflict resolution.
Participant Satisfaction

Based on a tradition of usability engineering, several studies have investigated participant, or user, satisfaction with specific aspects of videoconferencing technology. Nodder, Williams and Dubrow (1999) describe how they conducted iterative usability evaluations on a videoconferencing (and shared application) software application to increase participants' satisfaction with the application. Tang and Isaacs (1993) confirm that high-quality audio is crucial for supporting remote collaboration among small groups. Kies, Williges and Rosson (1996) report that low video frame rates did not affect task performance in distance learning situations but did negatively affect participant satisfaction.

Patrick (1999) also makes recommendations for session organizers to improve videoconferencing sessions, such as providing appropriate visual information by considering video bandwidth for a particular session; paying attention to lighting, camera placement and camera movement; providing high-quality audio; and evaluating in advance whether tasks are appropriate for videoconferencing. Moreover, Patrick's recommendations for software developers include developing tools to distinguish between non-interactive and interactive uses and to support informal communication, user feedback on running a videoconferencing session, and conference organizing features such as polls.

In addition to investigating specific aspects of videoconferencing technology, Tang and Isaacs (1993) surveyed participants' attitudes about (room-based) videoconferencing systems. Participants reported that the advantages of using room-based videoconferencing included the availability of visual contact with their collaborators, and time and travel savings. The disadvantages included
difficulty in scheduling a room for videoconferencing, poor audio quality and poor visual materials. The participants were also asked to suggest new capabilities that would make the current videoconferencing systems more satisfying. The suggestions included a shared drawing surface, a larger screen and the ability to access multiple sites.

Tang and Isaacs (1993) also developed and evaluated a prototype desktop videoconferencing system to support remote collaboration. Their findings indicate that desktop videoconferencing did not affect the amount of communication, and was considered by its users to be an adequate replacement for face-to-face and room-based videoconferencing. They conclude that, despite previous research that found no significant effects of adding video, participants preferred to use video because it helped collaborators understand each other better, as a richer set of cues was available.

Finholt, Rocco, Bree, Jain and Herbsleb (1998) report on a three-month field trial of desktop videoconferencing in a 125-person software development organization. Study participants reported low use of the technology but moderate satisfaction. In addition, they reported that the system was slow and that the organizational technical infrastructure did not at first adequately support the technology. However, participants also reported novel uses of the system, including using desktop videoconferencing as one might use a room-based system to connect multiple participants in one location to multiple participants in another location.

Videoconferencing is also successfully used at Boeing (Mark, Grudin & Poltrock, 1999). There was wide participation in meetings held via videoconferencing, saving participants time and stress related to travel. Meetings that had a formal structure or a facilitator who knew both how to fix technical problems and ways to engage remote participants were most satisfying. Similarly, Gowan and Downs (1994), Ruhleder, Jordan and Elmes (1996) and Patrick (1999) found that group members in an organization found it difficult to schedule, set up and use videoconferencing technology; learning to use the technology is a social, group learning process.

From these studies, we find that participants are often satisfied or moderately satisfied with videoconferencing technology. Participants tend to use the technology in limited, but sometimes novel, ways, finding its most appropriate use for their context. Both a technical and a social infrastructure can facilitate the adoption and use of videoconferencing.
Work Outcomes and Processes

Few studies have focused on the impact that videoconferencing has on work outcomes and processes. Until recently, video was shown to have no effect on the quality of work unless the work involved negotiation (Short, Williams & Christie, 1976). However, Nardi, Schwarz, Kuchinsky, Leichner, Whittaker and Sclabassi (1993) show that video, which shows data that otherwise could not be viewed by team members, does increase the quality of work outcomes. Olson, Olson and Meader (1995) reported that people accomplished assigned tasks through video as
well as face-to-face and slightly better than audio-only in terms of the quality of the output. However, they found that video is less effective for supporting some work processes: the groups using video spent more time setting up initial stages to clarify each other's points compared to the face-to-face groups.

In conclusion, Gale (1992) suggests that videoconferencing research has been focusing too much on formal communication, while ignoring social factors, such as the difficulty of access to videoconferencing equipment and "a lack of understanding of the way in which people work" (p. 520). Tang and Isaacs (1993) stress the importance of conducting research in work settings. As Kling (1996) notes, "people and organizations adopt constellations of technologies and configure them to fit ongoing social patterns" (p. 19). In the case of videoconferencing, this suggests the need to incorporate videoconferences in the ongoing social systems of organizations, and to investigate their impact on interpersonal interactions, participant satisfaction and/or work outcomes and processes. One way to begin doing this is by employing a socio-technical approach (Eason, 1988) to actively involve participants in the planning and conduct of such videoconferences, so that the sessions meet the specific needs of the participants.

This chapter presents one such case study that incorporates a socio-technical action research approach to evolve large-group videoconferencing practices to facilitate collaboration in a geographically distributed R&D organization, the STC-ERSP. We first discuss the social and organizational infrastructure that has evolved to increase the effectiveness of videoconferences for participants, and second discuss the technical infrastructure that has evolved to provide innovative videoconferencing capabilities.
SOCIAL AND ORGANIZATIONAL INFRASTRUCTURE

Social and Organizational Setting

The STC-ERSP consists of four geographically dispersed universities: the North Carolina Agricultural and Technical University, North Carolina State University, the University of North Carolina at Chapel Hill and the University of Texas at Austin. At each university, there are approximately 10 to 37 undergraduate and graduate students and postdoctoral fellows, and six to 10 faculty who are members of the center, for a total of 110 members. These members do not work full time for the center, as students are enrolled in degree programs and must take courses, etc., and most faculty teach as well as conduct research outside the auspices of the center.

At the time we began this work, the center was organized into four physical science research teams. Each team consisted of six to nine faculty members, and three of the four teams had faculty members from each university. Each team also had six to 29 student and postdoctoral fellow members. Many students and postdoctoral fellows were asked to be members of two teams, and each team had student members from each university.
As in other centers and organizations, there was limited interaction among center members before the center was established. For example, data reported in a sociometric survey completed by members (60% response rate) indicated that only 22.9% of center members had interacted with other center members prior to the establishment of the center. Thus, the center is a large, geographically distributed group whose members are not full-time participants and who may previously have had little or no interaction with each other. In this respect the center is typical of the emerging genre of federally funded, university-based research centers.
Types of Videoconferences

Three types of meetings in the STC-ERSP are held using videoconferencing: center-wide meetings, group meetings and faculty (principal investigator) meetings. Center-wide meetings are held infrequently (e.g., once every six to eight months); these meetings include all members at all universities and have been used to share information among all center members (see Figure 1). For example, a center orientation meeting was held that introduced the center's mission, organizational structure and center-wide activities several months after the center was established. At these large meetings, as with most large meetings, interaction among members is somewhat limited due to the number of participants and time limitations.

Figure 1: A center-wide videoconferencing meeting

Group meetings are held weekly; all center members are invited to attend these meetings. However, students and postdoctoral fellows are strongly encouraged to attend these meetings when the presentations are given by members of their team. Each meeting typically lasts 1.5 to 2 hours, and includes 20 to 30 participants. During this time, members (primarily students and postdoctoral fellows to date) present and discuss their work. Students are required to present their work once or twice per year at these meetings. In addition, these meetings have been used to present outreach activities and opportunities and to illustrate the use of videoconference-related technologies. Each presentation during these meetings typically lasts 20 to 45 minutes with integrated discussion. Thus, these meetings are a vehicle for bringing people
together to share, learn, raise problems, offer solutions, and perhaps achieve other, as yet undetermined, outcomes. As faculty and student members reported:

"I always learn something. Even if everything in [the other] group meeting isn't interesting to me, I can … read a manuscript and still listen to things that seem separate from what I am interested in and I will pick up something that I didn't know."

"By attending these conferences and listening to explanations from other people, I [began to] understand research much more clearly."

Faculty, or principal investigator (PI), meetings occur on an as-needed basis, typically once every four to six months. These meetings are typically used to plan upcoming projects and activities and are organized by the center directors or by faculty. Initially these meetings were held using audio conferencing only, but faculty members are beginning to hold these meetings using videoconferencing.
Facilitation of Videoconferences

Irrespective of the type of meeting being held via videoconferencing, each meeting has a facilitator or moderator. For the group meetings, a student from each project group is assigned the role of facilitator. This responsibility rotates among the students approximately every six months. While the center directors, in consultation with faculty and students, determine policy for the student presentations, student facilitators schedule the presentations as well as perform the following responsibilities:
(a) Compose an e-mail message to all center members announcing the upcoming meeting topics. Abstracts for the presentations are included in this message when available. Other center-wide announcements and norms regarding the videoconferencing may be included in this message.
(b) At the beginning of the meeting, welcome everyone, verify that audio and video communications are working from the audience's perspective, and ask if there are any general announcements.
(c) If there are any technical problems at any time, inform the videoconference technical staff and relay the status of the technical problems to all locations. Often, the technical staff is located in an adjacent control/operations room, and the facilitator may use a dedicated headset to talk with the staff.
(d) Introduce each presenter; manage the question/answer period as needed.
(e) Provide a 10- to 15-minute break between presentations. The break allows participants who cannot stay for a subsequent presentation to leave with minimal interruption, and it also allows informal discussion of completed presentations.
(f) Close the meeting, thanking participants.
(g) After the meeting, publish the highlights of the meeting. These are one to two paragraphs in length and are sent to all center members via e-mail and published in a secure discussion forum area of the center's web site.
It can be tempting for facilitators (and presenters) to forget that there are people at other locations who want to participate in the discussion. The participants at remote locations may need to be reassured that they are part of the meeting and encouraged to speak. Speakers have, consequently, been requested to stop periodically and ask if there are questions.

Initial ideas regarding these responsibilities emerged from observations of videoconferences and discussions with center members and technical staff at each location by the authors, who are members of the center's social science research team. As Gowan and Downs (1994) recommend, facilitation of a videoconferencing meeting leads to an efficient meeting. Further, a meeting was held between the social science research team staff and the student facilitators and technical staff to discuss and refine these practices. Thus students and staff participated in their formulation.

The e-mail announcements and summary messages facilitate interaction in several ways. Because some topics cross project team boundaries, these announcements make it possible for anyone who is interested in a topic to know when to attend. They also allow center members to get a bird's-eye view of research progress within the center, increasing their awareness of center activities. As one participant reported:

"The beauty of the videoconferences is the way they send the titles out in advance and then you can go to different [group meetings] and see what you want to see. That helps so much. If you don't know what the titles are going to be then you might… only go to [your] own [group meeting]. So if I'm a simulator and I see somebody's giving a talk in one of the other [group team meetings regarding] something I'm interested in, I just go [to that videoconference]."

Furthermore, the facilitator role provides students with an opportunity to practice leadership and meeting facilitation skills, skills that are sought by prospective employers. It also fosters interaction between the student facilitator and presenters. While this interaction is relatively minor in nature, since in this context students are not co-located and have never previously interacted with one another, these types of formal interaction mechanisms are a first step towards more meaningful and sustained interaction, as they promote awareness of expertise and provide a foundation for future collaborative relationships.
Adaptation of Visual Aids

Visual aids, such as slides, are important as they can aid in the retention of the material being presented as well as help participants understand what the presenter is saying when, for instance, the audio is a bit garbled. However, these aids often need to be adapted for use in videoconference settings due to constraints imposed by the technology. The use of TV monitors in videoconferences, for instance, instead of the large screens commonly used for the display of overhead slides or a PowerPoint presentation in conference or classroom settings, makes a difference: text and graphics that are very readable on a large projection screen may be difficult to read on a monitor, where the monitor is some distance from those trying to read the screen.
Guidelines for Microsoft PowerPoint presentations/transparencies typically suggest a minimum of 20-point font size for headlines and 16-point for other text (Ross & Dewdney, 1998). While this works well in most presentation situations, it is too small for the TV monitor situation. We consequently advised presenters to go bigger. We found that text in 28-point font size was readable from the back of our videoconference rooms. We further advised using keywords or short phrases rather than sentences on slides. That is, presenters are asked to avoid including everything that they wish to say on the display. We also found that all UPPER CASE TEXT was harder to read than lower case (with capitalization as appropriate) when displayed on a screen or monitor.

Size was an issue for graphics too. Many presenters included multiple charts, graphs, etc. on a slide. This can be an effective way of placing related views of data together to show the 'shape' of what happened in an experiment comparatively. It is not an effective way of communicating details. By moving from overview to details, that is, to a larger full-screen view of a single graph, the audience can better see the details (e.g., units of measure). This effect could also be achieved by zooming in on the details of a graph or by creating follow-up screens that enlarge the details.

The traditional black text on a white background of many presentations is not as effective a color scheme as a dark background with text in a light color. A dark blue background with a yellow header and white text is a color scheme that provides better visual clarity, especially on a TV monitor, than black and white. Red text tends to look blurry on a TV monitor. A template for slides designed with these guidelines in mind was developed and distributed to center members through e-mail and included on the center's web site.

Before electronic whiteboards were installed, we found that paper copies of slides worked better than transparencies when the overhead camera was used to project the slides, because they minimized the reflection from the lights. The use of the overhead camera also allowed the presenter to zoom in to details of a paper slide, something that is not possible with an electronic presentation.
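Purely as an illustration (and not the template the center actually distributed), the sketch below shows how a slide following these guidelines, a dark blue background, a large yellow header and white body text of at least 28 points, could be generated programmatically with the python-pptx library; the specific color values, box positions and file name are assumptions.

```python
# Illustrative sketch: build a slide that follows the guidelines above
# (dark background, light text, 28-point minimum body font, keywords rather
# than sentences). Uses python-pptx; this is not the center's original template.
from pptx import Presentation
from pptx.util import Inches, Pt
from pptx.dml.color import RGBColor
from pptx.enum.shapes import MSO_SHAPE

DARK_BLUE = RGBColor(0x00, 0x20, 0x60)   # assumed background color
YELLOW = RGBColor(0xFF, 0xD7, 0x00)      # assumed header color
WHITE = RGBColor(0xFF, 0xFF, 0xFF)       # body text color

prs = Presentation()
slide = prs.slides.add_slide(prs.slide_layouts[6])   # blank layout

# Paint a full-slide rectangle to serve as the dark-blue background.
bg = slide.shapes.add_shape(MSO_SHAPE.RECTANGLE, 0, 0, prs.slide_width, prs.slide_height)
bg.fill.solid()
bg.fill.fore_color.rgb = DARK_BLUE
bg.line.fill.background()   # no border around the background rectangle

def add_text(left, top, width, height, text, size, color, bold=False):
    """Add a text box containing a single run with the given size and color."""
    box = slide.shapes.add_textbox(Inches(left), Inches(top), Inches(width), Inches(height))
    run = box.text_frame.paragraphs[0].add_run()
    run.text = text
    run.font.size = Pt(size)
    run.font.color.rgb = color
    run.font.bold = bold

add_text(0.5, 0.4, 9.0, 1.0, "Header in yellow, 36 pt", 36, YELLOW, bold=True)
add_text(0.7, 1.8, 8.6, 4.5, "Keywords and short phrases, 28 pt or larger", 28, WHITE)

prs.save("videoconference_template.pptx")
```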
Participant Etiquette Practices

Because videoconferences differ from face-to-face meetings, a set of participant videoconference etiquette practices was developed. We expect these practices to evolve further over time.

One practice focuses on self-identification. During videoconferences, it is not always possible for the presenter and other audience members to see who is asking a question, because anyone can ask a question and camera operators cannot always switch camera focus and video displays fast enough to show who is asking the question. Knowing who posed a question sometimes provides clues regarding the best response and provides the presenter the opportunity to later follow up with the questioner at their discretion. Thus, we developed the common practice of questioners first saying their name and location, i.e., "This is Reto from UNC at Chapel Hill." Initially others in the audience, including the facilitator, would prompt participants if they forgot to say their name and location. Now this practice is widely used without prompting.
There is also a need to explicitly communicate problems to videoconference technical staff. If a participant (usually the facilitator) reports a technical problem, they give their location and state what the problem is and where it is coming from, e.g., "This is Chapel Hill and we have no sound from Texas." This is the type of information our technical staff told us they need to investigate and solve problems.

Another practice focuses on microphone awareness. In most videoconference rooms, the microphones are always on; almost all sounds in one location can be heard in other locations. This includes whispers or side comments, munching on chips, sneezes and page turning. In response to this constraint, participants cover the microphone closest to them when sneezing, etc., and limit their page turning and other activities not directly related to the meeting.

In the frenzy of preparing a presentation, a presenter may fail to realize that the presentation is an opportunity to advance their research. Thus, the main purpose of the presentation for many presenters may at times be to get it over with as quickly as possible. Yet when this happens an opportunity is lost, as the presentation is also an occasion for the presenter to get help while helping others learn. One possibility is to encourage those present to consider problems encountered by the researcher by saying: "Here is something that I've been struggling with. Do you have any suggestions?" Similarly, it may be encouraging to those who aren't initiated in the mysteries of a particular experimental method or instrumentation to stop and say: "Would anyone like me to discuss why we are using this experimental approach?" or, to help those in the audience who don't want to interrupt the flow of the presentation, to say: "Are there any questions?" It is helpful to give participants some time to respond to these openings, as it often takes a bit of time to formulate responses. Similarly, participants at remote locations need to have an opportunity to offer feedback, to let the presenters know when they are lost, cannot see important details on the screen, or would like a more detailed explanation.

In some sense, these practices are simple and intuitive, making them relatively easy to implement. However, they were not self-evident at the beginning. As a center, we had no common experiences with videoconferencing, and we first applied our standard, face-to-face meeting practices in videoconference situations. Frequently this was not effective because the constraints of videoconferencing differ from those of face-to-face meetings. For example, Heath and Luff (1991) found that a gesture is comparatively not effective over video. We needed to experience and learn about these constraints to find ways to modify our practices to better cope with them. This sort of reflection in practice (Schön, 1983) is fundamental to organizational learning (Cohen & Sproul, 1996).
Evolution and Dissemination of Practices

Initially, videoconferencing was met with reluctance from some center members and technical staff because it required people to do familiar things differently and the social and technical infrastructure was in a beginning stage of development. As one member reported: "Early on I thought [the videoconference] was a complete waste of time."
An important thing to realize is that the problems of transition from one-site to multi-site video presentation can be overcome and the benefits of broader participation realized. However, returning to the reflection-in-practice idea, participants need to invest some time to notice what does not work or what is not happening, and to use what is not working to suggest alternatives. Videoconference participants were, consequently, encouraged to reflect and offer constructive feedback.

The videoconference meetings are a particular kind of communicative event (Saville-Troike, 1989). Among the center's communication structures, they are a vehicle for bringing together people with a broad common interest in one of the thrust areas to share, learn, raise problems, offer solutions, and perhaps achieve other as yet undetermined outcomes. If what is happening is not what the administrators, presenters or other participants wish to happen, it is within their power to raise that as an issue and seek solutions. For example, when discussing ways to utilize videoconferences in the future, a student commented: "Maybe I can discuss my [research] problems [during] the videoconferences."

Changes to group practices need to be discussed with all participants. We have done this in various ways, including presentations and discussions at meetings, publication of group practices on the center's web site, the inclusion of "tips" in announcements of meetings, and training sessions to illustrate and teach the use of videoconferencing technology. Center management also took a lead role in facilitating the adoption of these practices by consistently using them in meetings and encouraging others to do so.

Overall, there is a need to avoid letting videoconference technology get in the way of what needs to happen for the purposes of both the participants and the center in general. A well-organized and managed meeting can be effective despite the technology; however, technology cannot make a poorly managed meeting better (Schwartzman, 1989). The videoconferences can be whatever the participants wish them to be, but only with reflection and constructive action.
TECHNICAL INFRASTRUCTURE

Technical Setting

Each university participating in the center has videoconferencing facilities that were established primarily to support distance education programs. Each facility is maintained and operated by a combination of full-time staff and part-time (student) staff, and there is variation in technical capabilities between the facilities. The staff is trained to support distance education courses that primarily use a lecture-based format and are broadcast to locations within the university's state.

Three of the four universities, located in the same state, participate in a statewide educational videoconference communications network. The network is centrally controlled/operated, and uses proprietary analog technology to provide video and audio links among universities (and community colleges and high schools)
in the state. As a result, most videoconference technical staff at the universities in this state primarily interact with the centralized staff.

We decided to take advantage of existing university videoconferencing facilities, and to work with the videoconference technical staff to purchase additional videoconferencing equipment and establish new operational practices to enhance the technical quality of videoconferences. In this way, we leveraged our funding dollars and provided some benefits to everyone who uses the videoconference facilities at the participating universities. Patience and persistence were sometimes required in working through administrative procedures that were originally established to support distance education courses broadcast from a single university location. For example, at several of the participating universities, courses are given priority in scheduling the use of large videoconference rooms, and the course schedules are often planned three to five years in advance. A workaround involved establishing one-credit and no-credit courses for the weekly group meetings and scheduling as many of these weekly meetings in advance as possible. Of course, each university has its own scheduling process, and coordinating scheduling across four universities is not necessarily a trivial matter.

An alternative approach would be to establish and maintain a separate, independent videoconference facility at each university. This approach would provide more control over the design and use of each videoconference facility. However, establishing and maintaining an independent facility will typically cost more in terms of equipment purchases and ongoing operating expenses. In addition, unless there are sufficient funds to staff technical support personnel at each location, quality, customized and advanced videoconference capabilities that currently require more than turning on a switch to operate could not be supported.
Videoconference Room Layout

Figure 2 illustrates a physical layout typical of many of our videoconference rooms. This layout was developed in collaboration with university videoconference technical staff and has some commonality with the videoconference layout developed at Argonne National Labs (Childers, Disz, Olson, Papka, Stevens & Udeshi, 2000). To provide a maximum view of participants, two large screens are used. At one location these screens are 120" (along the diagonal) and are wall mounted. To reduce noise, the LCD display projectors for these screens are ceiling-mounted. Typically, one screen has a quad-split screen display that shows three of the remote locations. The other screen is a large display of another location; each location is periodically displayed. However, when the presenter is at a remote location, typically more time is devoted to showing the presenter and the presented materials.

In addition to these large-screen displays, a large touch-sensitive electronic whiteboard is used to display the presenter's slides. The presenter, and anyone in the audience at any location, can write on their local electronic whiteboard and the result is transmitted to all locations (see Figure 3). This allows participants to highlight aspects of their slides, create notes in real time, and save these notes for later reference.
Figure 2: Example of a videoconference room physical layout (not to scale). The layout includes wall-mounted 120" screens with ceiling-mounted LCD projectors, a 72" electronic whiteboard driven by a PC connected to the Internet, a desk and control monitor for the presenter, loudspeakers placed around the room, and desks, chairs and microphones for the participants.

Figure 3: A student using an electronic whiteboard during a videoconference
Our locations use a SmartBoard from Smart Technologies and an LCD projector connected to a personal computer (PC) to provide this capability. Alternative technologies include rear projection systems that eliminate projector shadows and plasma displays that operate more quietly than projectors. We currently do not use rear projection systems due to cost and space constraints, and we do not use plasma displays due to current size limitations of that technology.

Two to four speakers strategically placed around the room broadcast sound. Each presenter uses a wireless microphone, and microphones to capture comments from the audience are typically installed on every other desk. The microphones on
the desks are always on, and sometimes unintended whispers and sounds from paper shuffling are broadcast.
Telecommunications Infrastructure

Several network communications technologies are used to support the videoconferences (see Figure 4).

Figure 4: Current telecommunications network configuration (the three in-state locations are linked by the in-state analog network for video and audio and by the Internet for the NetMeeting presentation session; an ISDN/H.320 connection carries video and audio to and from the out-of-state location, with a cellular phone available as an audio back-up)

As previously mentioned, a centralized statewide videoconference network is used among the three North Carolina locations. This network uses proprietary analog microwave technology. The University of Texas at Austin uses ISDN videoconference communications technology. This ISDN signal is transmitted to UNC at Chapel Hill and is broadcast to the other two North Carolina universities. The audio signal is sent together with the video signal over these networks. This does not always work well; audio quality can be poor, and audio can be lost completely. As research has indicated (e.g., Olson, Olson & Meader, 1995; Patrick, 1999; Tang & Isaacs, 1993), audio quality is typically more important than video quality, so poor-quality or no audio is not conducive to effective meetings. We have been working with videoconference staff to resolve this problem. An ISDN conference phone has been purchased in an effort to upgrade audio quality, and cellular/digital phones with speakers have also been purchased to provide auxiliary audio capabilities when needed.

Most presentations during meetings use PowerPoint slides running on a PC connected to an electronic whiteboard and the Internet. This allows a Microsoft NetMeeting session to be established among the PCs at all locations. PowerPoint (and other applications as needed) are executed within this NetMeeting session. Access is controlled by IP addresses, i.e., only computers with the pre-specified IP
addresses can participate in the NetMeeting session. Previously the PowerPoint display was first processed through a scan converter and then broadcast over the video network described above. However, the (NTSC) video picture resolution is only 525 lines, or 500 x 400 pixels, and this low resolution is problematic in large rooms. We achieve a higher resolution using NetMeeting over the Internet. Transmission delays due to Internet traffic variability have not as yet been a problem because we are only broadcasting slides that do not change frequently.
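The IP-based admission control described above is a feature of the conferencing setup itself; purely to illustrate the idea, the sketch below shows the kind of allow-list screening a small relay script could perform. It is not NetMeeting code, and the addresses and port number are hypothetical placeholders.

```python
# Conceptual sketch of allow-list screening by IP address, analogous to admitting
# only the pre-specified machines to the shared-slides session. Not NetMeeting
# code; the addresses and port below are made-up placeholders.
import socket

ALLOWED_IPS = {"152.2.0.10", "152.1.0.20", "128.83.0.30", "199.72.0.40"}  # one PC per site (hypothetical)

def serve(port: int = 1503) -> None:
    """Accept TCP connections and drop any peer whose address is not on the allow-list."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", port))
        srv.listen()
        while True:
            conn, (peer_ip, _) = srv.accept()
            if peer_ip not in ALLOWED_IPS:
                conn.close()          # reject machines from unknown locations
                continue
            with conn:
                conn.sendall(b"welcome to the shared-slides session\n")

if __name__ == "__main__":
    serve()
```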
Technical Operations

As mentioned previously, we collaborate with each university's videoconference technical staff. From the onset, we asked them to work with each other and with us to do whatever was necessary to make the videoconference meetings successful. This requires "buy-in" from technical staff at every location. A common pitfall to avoid is the attitude: "You're not from my department; I'm just doing you a favor letting you use my videoconference room." Specifically, we asked the technical staff, in some instances for the first time, to manage multiple types of audio and video signals, provide and maintain high-quality audio and video among all locations throughout the entire meeting, dynamically operate cameras, and add or upgrade technology in their videoconference rooms.
Multiple Types of Signals

In most distance education courses, the outgoing broadcast is typically a view of the instructor and their teaching materials, and the one incoming broadcast is typically a panoramic view of the remote classroom. Thus, technical staff typically only need to manage one incoming video and audio signal, and the camera operation is primarily a "point and focus" task with occasional monitoring. Our needs required that they manage multiple incoming video and audio signals, so that each location could see and hear all of the remote locations.

Furthermore, in our setting most distance education courses are in-state courses that utilize the centralized network. A single protocol and standard operating procedures are used throughout the network. Our center videoconferences required the addition of a new network connection with a different telecommunications protocol. This required new equipment and introduced more complex operating procedures. For example, some equipment had to be re-positioned so that an operator could effectively reach the new combination of switches in the time allotted when managing a videoconference.
High-Quality N-Way Audio and Video “High quality” in our setting is defined by low latency, clear n-way audio among all locations, and “reasonable” n-way video among all locations. Both audio and video should persist throughout the duration of the videoconference. As other studies have illustrated (e.g., Olson, Olson & Meader, 1995; Patrick, 1999; Tang & Isaacs, 1993), audio is more important than video for effective
interaction during most meetings. Individuals can, for the most part, compensate for lack of video if audio is available; however, video cannot make up for the lack of audio. Furthermore, we require high-quality audio throughout the duration of the meeting because participants at any time from any location may wish to ask a question or make a comment. Audio quality has been problematic. As one member reported: "We had a lot of problems with the sound…if that were a little smoother, it would be nicer." To address this, technical staff now do a sound check with no one in the room 10 minutes before each videoconference. This check helps to identify and resolve any problems. We also have a cellular speaker phone available for use if the audio network problems cannot be quickly resolved.
Dynamic Camera Operation To facilitate interaction among participants irrespective of their location, we would like all meeting participants to be able to see whoever is talking as much as possible. For example, if Sue is presenting at one location and Bill asks a question at that same location, the outgoing video should show Sue when she is speaking and switch to Bill when he is speaking. This requires constant active camera operation (or sound-activated camera control) throughout the videoconference. This was not a standard operating procedure when we began videoconferencing. It is generally common practice for technical staff to set up a camera with a wide shot of the audience, do a microphone check and then leave the scene completely. This has disastrous effects on spontaneous, interactive discussions. Interestingly, the etiquette practice of speakers identifying themselves and their location helps technical staff to provide this capability. Those short prefaces alert staff to the need to change the camera view and give them a few extra seconds to accomplish the task.
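The sound-activated camera control mentioned above can be thought of as a simple selection rule: whichever microphone is currently loudest determines the outgoing shot, with a short hold time so that normal pauses do not cause rapid flipping. Where such control is available it is implemented by the room system itself; the Python sketch below is only an illustration of the rule, and the threshold values and preset names are hypothetical.

# Illustrative sound-activated camera selection (hypothetical values).
# Switch to the camera preset covering the loudest microphone, but hold the
# current shot briefly and require a clear loudness margin before switching.
HOLD_SECONDS = 3.0          # minimum time to keep a shot once selected
LEVEL_MARGIN_DB = 6.0       # a new speaker must be clearly louder to force a switch

def select_camera(mic_levels_db, current_preset, seconds_since_switch):
    """mic_levels_db maps a camera preset name to its microphone level in dB."""
    loudest = max(mic_levels_db, key=mic_levels_db.get)
    if current_preset is None:
        return loudest
    if seconds_since_switch < HOLD_SECONDS:
        return current_preset
    if mic_levels_db[loudest] - mic_levels_db[current_preset] >= LEVEL_MARGIN_DB:
        return loudest
    return current_preset

if __name__ == "__main__":
    levels = {"podium": -28.0, "audience-left": -20.0, "audience-right": -35.0}
    # A clearly louder audience microphone takes over after the hold time expires.
    print(select_camera(levels, "podium", seconds_since_switch=5.0))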
Equipment Modifications As discussed previously, each university had videoconference facilities before the center was established. We have worked and continue to work with the technical staff that manage and operate these studios to upgrade and provide new equipment that can facilitate our videoconferences and be used in other videoconferences that take place in these studios, creating a win-win situation. These upgrades and new equipment purchases have ranged in scope from upgrading PCs to support current versions of NetMeeting and PowerPoint to buying and installing SmartBoards and LCD projectors. Several universities have also "matched" these purchases, providing additional components needed such as 120" screens.
CONCLUSION Facilitating collaboration among a large geographically dispersed group whose members may not have met previously and whose membership changes is a complex challenge. The NSF STC-ERSP approached this challenge by investigating and
implementing both social and organizational practices and technology, with an initial focus on large-group, interactive videoconferencing. Our work has been evolutionary and collaborative in nature. Social and organizational practices or infrastructure, such as the role of a facilitator during a videoconference, use of visual aids, and participant etiquette, have evolved with insights from the literature and reflection on our experiences. Providing effective, interactive videoconferences among multiple sites has also required the implementation of different technologies and, perhaps more importantly, the evolution of new technical operation practices, including active camera operation and high-quality n-way video and audio. Future efforts include investigating strategies to help make the weekly group videoconference meetings less formal. Students have reported they feel that their talks at these meetings must be well rehearsed and thought out, which is not necessarily a bad thing, though this situation becomes problematic when presenters avoid pointing out difficulties and their own questions due to their emphasis on a polished presentation. Others have reported that they feel uncomfortable asking tough questions because they do not want to embarrass the presenters, when the asking of such questions might help presenters overcome difficulties in their research or become aware of relevant matters that they were not aware of or had not considered. Additional exposure and use of the technology may help reduce these perceptions of formality; however, this alone may be insufficient. One strategy is to have key faculty (i.e., recognized experts) present work in progress and have colleagues add their constructive comments. This modeling may show by example that informal discussions are both appropriate and helpful in this venue. Another strategy includes having time allocated during the weekly videoconference meetings for individuals and groups of individuals to discuss topics. For example, faculty and students interested in a particular type of instrumentation could use this time to share recent experiences and ask for advice. These types of informal information exchange require trust among participants, and furthermore that trust must in large part be created and maintained using technology not previously used (Jarvenpaa & Leidner, 1999; Iivonen & Huotari, 2000). Future technical efforts include streaming the meetings over the Internet to allow interested individuals at corporations and national labs to participate in some videoconferences from their desktop. To achieve this, several challenges exist. For example, security practices must be implemented to restrict viewing to designated individuals, and full-screen video viewing on PCs is required for slides and other details to be easily seen. In addition, telephone calls from each individual at a remote corporation or lab would have to be patched into, or merged with, the videoconference audio to enable those individuals to interact during meetings. We have also received requests to extend the videoconference capabilities to include additional locations, such as funding agencies, corporate sponsors, national labs and universities whose scientists collaborate with center members. We envision that technical and social challenges will continue to emerge throughout this expansion effort. For example, expectations regarding participant etiquette may need to be shared with first-time participants who, in turn, may suggest new practices.
In summary, many challenges emerge when facilitating collaboration among a large, geographically dispersed group. Reflecting on and learning from our experiences and sharing that learning is one way to advance our understanding of these complex challenges. These new practices have enhanced the effectiveness of videoconferencing, leading to its adoption within the center and enabling frequent and needs-based meetings across distances.
ACKNOWLEDGMENTS We wish to thank Joe DeSimone (director), Ruben Carbonnell (co-director) and Ev Baucom (executive director) of the NSF STC for Environmentally Responsible Solvents and Processes for their continuing support. We also wish to thank the university videoconference technical staff for their efforts running the videoconferences; the center members and directors for their support and willingness to try new things; and Denis Gray for comments on this paper. This material is based upon work supported by the STC Program of the National Science Foundation under Agreement No. CHE-9876674.
REFERENCES Arapis, C. (1999). Archiving telemeetings. Proceedings of ACM 1999 Conference on Information and Knowledge Management, 545-552, November. New York: ACM Press. Barefoot, J. C. and Strickland, L. H. (1982). Conflict and dominance in television-mediated interactions. Human Relations, 35(7), 559-566. Cadiz, J. J., Balachandran, A., Sanocki, E., Gupta, A., Grudin, J. and Jancke, G. (2000). Distance learning through distributed collaborative video viewing. Proceedings of CSCW, 135-144, December. New York: ACM Press. Childers, L., Disz, R., Olson, R., Papka, M., Stevens, R. and Udeshi, T. (2000). Access grid: Immersive group-to-group collaborative visualization. Proceedings of the Fourth International Immersive Projection Technology Workshop. Cohen, M. D. and Sproull, L. S. (Eds.). (1996). Organizational Learning. Thousand Oaks, CA: Sage Publications. Eason, K. (1988). Information Technology and Organizational Change. London: Taylor & Francis. Finholt, T. A., Rocco, E., Bree, D., Jain, N. and Herbsleb, J. D. (1998). NotMeeting: A field trial of NetMeeting in a geographically distributed organization. SIGGROUP Bulletin, 20(1), 66-69. Gale, S. (1992). Desktop videoconferencing: Technical advances and evaluation issues. Computer Communications, 15(8), 517-525. Gowan, J. A. and Downs, J. M. (1994). Videoconferencing human-machine interface: A field study. Information & Management, 27, 341-356.
Heath, C. and Luff, P. (1991). Disembodied conduct: Communication through video in a multi-media office environment. Proceedings of CHI'91, 99-103. New York: ACM Press. Herring, S. (1999). Interactional coherence in CMC. Journal of Computer Mediated Communication, 4(4), June. Available on the World Wide Web at: http://www.ascusc.org/jcmc/vol4/issue4/index.html. Iivonen, M. and Huotari, M.-L. (2000). The impact of trust on the practice of knowledge management. Proceedings of the ASIS&T Annual Meeting, 421-429. Medford, NJ: Information Today. Isaacs, E. A., Morris, T., Rodriguez, T. K. and Tang, J. C. (1995). A comparison of face-to-face and distributed presentations. Proceedings of CHI '95, 354-361. New York: ACM Press. Jarvenpaa, S. L. and Leidner, D. E. (1999). Communication and trust in global virtual teams. Organization Science, 10(6), 791-815. Kies, J. K., Williges, R. C. and Rosson, M. B. (1996). Controlled laboratory experimentation and field study evaluation of videoconferencing for distance learning applications. Technical Report HCIL-96-02. Available on the World Wide Web at: http://hci.ise.vt.edu/lab/htr/HCIL-06-02/HCIL-96-02.html. Kling, R. (1996). Social controversies about computerization. In Kling, R. (Ed.), Computerization and Controversy (second edition), 16-21. San Diego, CA: Academic Press. Kling, R. (2000). Learning about information technologies and social change: The contribution of social informatics. The Information Society, 16(3). http://www.slis.indiana.edu/TIS/articles/Kling16(3).pdf. Accessed March 30, 2002. Masoodian, M., Apperley, M. and Frederickson, L. (1995). Video support for shared work-space interaction: An empirical study. Interacting with Computers, 7(3), 237-253. Mark, G., Grudin, J. and Poltrock, S. (1999). Meeting at the desktop: An empirical study of virtually collocated teams. Proceedings of ECSCW'99, 159-178. Lyngby, DK: Technical University of Denmark. Nardi, B. A., Schwarz, H., Kuchinsky, A., Leichner, R., Whittaker, S. and Sclabassi, R. (1993). Turning away from talking heads: The use of video-as-data in neurosurgery. Proceedings of INTERCHI'93, 327-334. NY: ACM Press. Nodder, C., Williams, G. and Dubrow, D. (1999). Evaluating the usability of an evolving collaborative product. Proceedings of GROUP'99, 150-159. NY: ACM Press. O'Conaill, B., Whittaker, S. and Wilber, S. (1993). Conversations over videoconferences: An evaluation of the spoken aspects of video-mediated communication. Human-Computer Interaction, 8, 389-428. Ochsman, R. B. and Chapanis, A. (1974). The effects of 10 communication modes on the behavior of teams during co-operative problem-solving. International Journal of Man-Machine Studies, 6, 579-619. Olson, J. S., Olson, G. M. and Meader, D. K. (1995). What mix of video and audio is useful for small groups doing remote real-time design work? Proceedings of CHI'95, 362-368. NY: ACM Press.
Patrick, A. S. (1999). The human factors of MBone videoconferences: Recommendations for improving sessions and software. Journal of Computer Mediated Communication, 4(3). http://www.ascusc.org/jcmc/vol4/issue3/patrick.html. Accessed March 30, 2002. Rice, R. E. (1993). Media appropriateness: Using social presence theory to compare traditional and new organizational media. Human Communication Research, 19(4), 451-484. Ross, C. S. and Dewdney, P. (1998). Communicating Professionally: A How-to-Do-It Manual for Library Applications. New York: Neal-Schuman. Ruhleder, K. and Jordan, B. (2001). Co-constructing non-mutual realities: Delay-generated trouble in distributed interaction. Computer Supported Cooperative Work, 10, 113-138. Ruhleder, K., Jordan, B. and Elmes, M. (1996). Wiring the "new organization": Integrating collaborative technologies and team-based work. Annual Meeting of the Academy of Management. Available on the World Wide Web at: http://alexia.lis.uiuc.edu/~ruhleder/publications/96.academy.html. Saville-Troike, M. (1989). The Ethnography of Communication. New York: Basil Blackwell. Schön, D. A. (1983). The Reflective Practitioner. New York: Basic Books. Schwartzman, H. B. (1989). The Meeting. New York: Plenum Press. Sellen, A. (1992). Speech patterns in video-mediated conversations. Proceedings of CHI'92, 49-59. New York: ACM Press. Short, J., Williams, E. and Christie, B. (1976). The Social Psychology of Telecommunications. London: John Wiley & Sons. Stringer, E. T. (1999). Action Research (second edition). Thousand Oaks, CA: Sage Publications. Tang, J. and Isaacs, E. (1993). Why do users like video? Studies of multimedia-supported collaboration. Computer Supported Cooperative Work, 1, 163-193. Whittaker, S. (1995). Rethinking video as a technology for interpersonal communications: Theory and design implications. International Journal of Human-Computer Studies, 42(5), 501-529. Whyte, W. F. (1997). Creative Problem Solving in the Field. Walnut Creek, CA: AltaMira Press.
Section III Knowledge and Information Technology Management in Virtual Enterprises
Chapter IX
A Dynamic Perspective on Knowledge Creation in Virtual Teams—In a Search for New Insights Violina Ratcheva The University of Nottingham, UK
ABSTRACT Virtual teams have been defined as teams of self-managed knowledge workers, linked by information technology to share skills, costs and access to each other's markets. The key purpose of such teams is "new knowledge creation," an in-depth understanding of which can only be developed in the context of the complex interaction processes involved. The focus of this study, therefore, is the dynamics of organising social activities in which knowledge is embedded. The chapter aims to contribute to the debate about the unique nature of the knowledge creation processes in virtual partnerships by offering an integrated view of knowledge management and of the inter-organisational interaction and communication patterns in virtual teams, a combination seen as powerful for the future of knowledge management practices. The chapter presents an initial conceptual framework of knowledge creation in virtual partnerships, which builds on recent research studies and theoretical developments in virtual team dynamics, knowledge networking and biological phenomenology.
INTRODUCTION It is increasingly argued that work organisation is undergoing rapid transformation similar in magnitude to the rise of the bureaucratic form in the late nineteenth
century (Miles & Snow, 1986; Powell, 1991; Drucker, 1988). It is suggested that the 'matrix' organisation based on project teams, which emerged a few decades ago and replaced the more traditional bureaucratic forms, is being superseded by organic and virtual organisations. These new forms are based on dynamic networks, where slimmed-down organisations buy in services and facilities by 'subcontracting' to external agencies. Organic or dynamic networks consist of loosely connected 'webs of agents and brokers' across industries, with a central core staff setting the strategic direction and providing the operational support necessary to sustain the network. With a range of facilities bought in, the boundaries of the organisation become highly fluid and dynamic. The firm "is really a system of firms–an open-ended system of ideas and activities, rather than an entity with a clear structure and definable boundary" (Morgan, 1986, p. 79). Since the boundaries of such networked enterprises are difficult to determine, we may speak of blurred boundaries which are constructed socially by the network members. By taking this perspective, the focus shifts from products and firms as units of analysis to people, organisations and interaction processes that bind together in ongoing relationships (Webster, 1992). Similarly, Reich (1991, p. 81) depicts a firm as "a façade, behind which teems an array of decentralised groups and subgroups continuously contracting with similar diffuse working units all over the world." Using new technologies to work better, faster and cheaper, many businesses are finding that virtual teams can bridge these boundaries and provide a considerable competitive advantage. The formation of such inter-organisational teams allows organisations to improve efficiency and productivity by strengthening their knowledge base. However, the key to obtaining long-term competitive advantage is not to be found in the administration of existing knowledge, but in the ability to constantly generate new knowledge, which can be applied in novel combinations of products and services (Seufert et al., 1999). Virtual teams have been defined as teams of self-managed knowledge workers, linked by information technologies to share skills, costs and access to each other's markets. Previous studies confirmed that such teams represent novel patterns of interactions as they incorporate diverse expertise without permanent arrangements. The key purpose of such teams, therefore, is new 'knowledge creation,' a comprehensive understanding of which can be developed in the framework of the networked relationships and interaction and communication patterns. The formation and development of such teams, therefore, cannot simply be considered in terms of processing information, making decisions and solving problems, as they are based increasingly on new knowledge creation. This chapter aims to contribute to the debate about the nature of knowledge creation and sharing in a distributed organisational environment and to improve current understandings about the sources of the creative potential of such teams. It has adopted the view that an in-depth understanding of new knowledge creation depends on considering knowledge as socially constructed, or, more simply stated, as embedded in the organising practices of human activities (Kogut & Zander, 1992). An underlying belief, therefore, in the development of the proposed conceptual
framework is that new theoretical insights should go beyond ‘knowledge’ as an output and consider also the human-to-human interaction processes which contribute to the development of new understandings. The chapter also emphasises the need for developing fresh new insights by using diverse and inter-disciplinary theoretical approaches, supporting the view as Knight (1921) phrased it: “We live in a world full of contradiction and paradox, a fact of which perhaps the most fundamental illustration is this: that the existence of a paradox of knowledge depends on the future being different from the past, while the possibility of the solution of the problem depends on the future being like the past.” The framework presented in this chapter builds on previous work on interaction processes in virtual teams (Ratcheva & Vyakarnam, 2000) combined with recent developments on knowledge networking and biological phenomenology. The theoretical propositions, which are developed, indicate that establishing and cultivating competence-based teams involve complex social processes. Such teams are not simply an evolutionary form of collocated teams and represent novel patterns of interactions and social exchange. Therefore, the issues around socialising in virtual teams are distinctively different from “human collaboration in work in primitive and developed societies which has always depended for its perpetuation upon the evolution of a non-logical social code regulating the relations between persons and their attitude to one another” (Mayo, 1997, pp.21). This will require new roles to be adopted by managers and new understandings to be developed by knowledge workers about the challenges of working in distributed organisational environments.
THE VIRTUES OF VIRTUAL WORKING An underlying belief of this chapter is that effective knowledge creation depends on the specific enabling context. Adopting the Japanese idea of ‘ba’ (Nonaka & Konno, 1998), the enabling context is considered as a shared virtual space that fosters emerging relationships. Therefore, in order to be able to conceptualise knowledge creation processes in virtual teams, the distinctive features of virtual organisations, virtual organising principles and virtual teams will be first defined.
Defining Virtual Organisations The literature on virtual organisations is characterised by heterogeneous definitions and concepts. Despite the large number of publications, there is still a substantial uncertainty in dealing with ‘virtuality’ as neither a common understanding nor consistent concepts of virtual organisation exist. There is also very little empirical research to show how ‘virtuality’ can provide strategic advantage to organisations. This is mainly because the traditional organisational characteristics like business boundaries and predefined management structures are missing. It is, therefore, no longer a case of one organisational configuration being replaced by another but of
transfiguration of the very concept of organisation itself in which the notion of real organisation becomes less relevant. Most of the definitions follow the common approach of defining the term ‘virtuality’ analogous to virtual memory of modern computer systems (Mowshowitz, 1997). Therefore, ‘virtuality’ denotes an “as-if-reality” (Davidow & Malone, 1992) which implies absence of human components, as well as non-human elements such as buildings and offices. People and their organisations, therefore, are ‘becoming disembodied’ which according to Barnatt (1995), is one of the most important defining characteristics of virtual organisations. A number of studies try to capture the essence of virtual organising principles. They have described mainly an organising logic that is especially relevant when a collection of geographically distributed, functionally and/or culturally diverse entities are linked by electronic forms of communication and rely on lateral, dynamic relationships for coordination. The virtual organisation has been often described as one which is replete with external ties, managed via teams that are assembled and disassembled according to needs (Grenier & Metes, 1995; Lipnack & Stamps, 1997) and consisting of employees who are physically dispersed from one another. Similarly, Byrne (1993, pp .99) defined a virtual corporation as a “temporary network of independent companies–suppliers, customers, even erstwhile rivals–linked by information technology to share skills, costs and access to one another’s markets,” creating a best-of-everything organisation (Miles & Snow, 1995). The virtual organisation features many distinct characteristics compared to other forms of network organisations and cooperative models. The virtual corporation is a temporary network that is neither set up for an agreed period of time nor is an open-ended cooperation, i.e., joint ventures. Once a specific market opportunity is allocated, the partners quickly unite and pool their resources according to customers’ needs. The partnership lasts as long as the market opportunity is beneficial for the cooperation partners (Byrne, 1993).
The Nature of Virtual Teams Despite the lack of a robust definition of what virtuality means in an organisational context and of when a virtual organisation is really virtual, there is a consensus that different degrees of virtuality exist (Hoffman et al., 1995; Gray & Igbaria, 1996) and that, within this, different organisational structures can be formed. Focal building blocks of such structures are the distributed cross-functional expert teams collaborating globally. The specific characteristics of virtual teams, therefore, are best identified in the boundary-crossing nature of the teams' communications, interactions and forming of relationships across space, time and organisations, enabled by information technologies (Kristof et al., 1995; Townsend et al., 1996; Grenier & Metes, 1995). Davidow and Malone (1992, p. 6) describe the formation of such teams as "something like atoms temporarily joining together to form molecules, then breaking up to form a whole new set of bonds." Teamwork in a virtual organisation is essential to tap into the best talent to create the highest quality and fastest response to customer needs. A number of benefits are
associated with virtual teams such as responsiveness, lower costs, and improved resource utilisation necessary to meet ever-changing task requirements in highly turbulent and dynamic global business environments (Steward, 1994; Mowshowitz, 1997; Snow et al., 1996). Virtual teams are beginning to be seen in a variety of disciplines. Currently, scientists and researchers are linking together electronically with distant research sites to tackle scientific and human behaviour dilemmas. Organisations are also utilising virtual structures to conduct research and development projects developed by teams of experts from all over the globe, connected electronically (O'Hara-Devereaux & Johansen, 1994). Virtual teams have also been described as "superior" and "high performance teams" (Kinlaw, 1991, p. 13) which are composed of individual members with varying types of expertise. Kristof et al. (1995) also defined a virtual team as a self-managed knowledge work team, with distributed expertise, that forms and disbands to address a specific organisational goal. Because the team is self-managed, the members are not governed by an authority that controls the fate of the team or its members. Members are often part of multiple teams and report to different individuals in their 'home' organisations (Kristof et al., 1995). They are autonomous and have a broad range of authority and responsibility for their goals, means and deliverables. In such self-managed teams, trust is the means of social control and coordination (Iacono & Weisband, 1997).
KNOWLEDGE CREATION IN AN ORGANISATIONAL ENVIRONMENT The emergence of new organisational forms and working practices contributed to the recent reconceptualisation of the organisational knowledge creation processes (Nonaka & Takeuchi, 1995; Grant, 1996). Two main perspectives regarding the management of knowledge have emerged: an internal perspective regarding the knowledge-related issues inside organisational boundaries and an external perspective focusing on knowledge in an inter-organisational environment. The internal perspective on knowledge management has initially focused on understanding ‘knowledge-intensive’ firms, characterised by a high proportion of highly qualified staff (Starbuck, 1992; Alvesson, 1993; Nonaka & Takeuchi, 1995). Further, these issues were considered from a perspective standpoint according to which, knowledge is a kind of economic asset or strategic resource, core competence and a source of innovations (Spender, 1996). The rise of knowledge management has also been linked to the rapid advances in information and communication technologies which inevitably has led to a technology bias of many studies on organisational knowledge (Orlikowski, 1996). In contrast, the external inter-organisational perspective on knowledge management recognises as increasingly important the relationships between the enterprises and the economic environment or the stakeholders. Such analyses predomi-
nantly concentrate on inter-organisational knowledge-transfer and knowledge-use processes across organisational boundaries. Some of the recent research studies adopting that perspective integrate the inter-organisational and social networking approaches with the knowledge management perspective and provide a holistic view of knowledge work processes and their importance for fostering continuous innovations (Seufert et al., 1999; Augier & Vendelo, 1999). These emerging approaches have been defined as a 'networking community view of knowledge management' (Swan et al., 1999) and a 'knowledge networking framework' (Seufert et al., 1999). A common characteristic is their emphasis on knowledge as constructed through active networking among individuals, groups, organisations and communities. By following the above perspective on studying knowledge management practices, the author adopts the view that a holistic understanding of the knowledge creation processes in a virtual working environment requires an integrated viewpoint of the 'networked knowledge' in the context of the unique nature of the social communication and interaction processes taking place in virtual partnerships.
UNDERSTANDING KNOWLEDGE CREATION IN VIRTUAL PARTNERSHIPS From the idea-generation phase of a new product or service around which a new team of experts is formed to the launch phase, the creation of new knowledge can be viewed as a central theme of the virtual partnership formation. Such teams are formed, therefore, because members have collective knowledge which is not held by any of the individual members. However, this collective knowledge is not present by definition when the team is assembled; it is only subsequently developed. According to Nonaka's (1994) 'spiral' model of knowledge creation, organisational knowledge is created through a continued dialogue between tacit and explicit knowledge. While explicit knowledge is easy to communicate and express as it resides in symbols, technical documentation, etc., the tacit aspect can only be described as a personal non-verbal form of knowledge embedded in routines and cultures (Polanyi, 1966). Badaracco (1991) also refers to the tacit knowledge in individuals and social groups as 'embedded' knowledge. Nonaka (1994) points out in his model that the knowledge creation process depends on developing interactive relationships between the ontological and epistemological dimensions of knowledge. While the epistemological dimension refers to 'knowledge' as 'justified true beliefs' which reside in people, the justification can only be achieved through social interactions between individuals, which Nonaka refers to as the ontological dimension. A step further in these analyses is that to bring personal knowledge into a social context within which it can be amplified, it is necessary to have a 'field,' defined as 'ba,' that provides a place in which individual perspectives are articulated and higher-level concepts are developed (Nonaka & Konno, 1998). 'Ba,' therefore, can be thought of as a shared physical, virtual or mental space or shared space of
relationships which provides a contextual platform for advancing individual and collective knowledge. Nonaka and Konno (1998) also distinguish between originating, interacting, cyber and exercising 'ba,' to which I will refer later in the proposed conceptual framework. The inter-personal relationships in a dynamic business environment are also likely to change, transform and readjust over a short period of time and lead to changed patterns of interactions and behaviours. A useful theoretical foundation for understanding such processes and the way they affect the creation of collective knowledge is provided by Salomon's (1993) concept of distributed cognition. According to this concept, reciprocal relationships exist between individuals' cognition and distributed cognition. They interact with one another in a spiral-like fashion, whereby the individuals' inputs, through their collaborative activities, affect the nature of the joint, distributed system, which in turn affects the individual cognitions. The subsequent participation is altered, resulting in subsequent altered joint performance and products. The team efforts in virtual partnerships can be viewed as transferring knowledge from its 'embedded' form and 'embodying' it into novel products and services. Therefore, the potential for developing new knowledge is embedded in the team members' personal beliefs, experiences and know-how, which can be brought out, articulated and justified only through active communications and interactions among team members throughout the existence of the virtual partnership. Despite the temporary nature of the virtual partnerships, the interaction patterns are likely to change as a result of the interplay between individual and collective cognition. Understanding and managing these changes is important as they can affect performance and final output. However, the context in which team interactions take place is unique because it is not simply a fixed set of surrounding conditions but a wider dynamic process of which the individual cognition is only a part, and this requires developing a deeper understanding.
Richness of Interactions in Virtual Teams Research works that reflect on the richness of social and human aspects of virtual teams’ interactions have just started to emerge. Recent studies concentrate mainly on the media richness of communications and the degree to which multimedia technologies can provide rich channels of communication in order to facilitate task coordination among globally dispersed team members. Early research projects investigating the impact of alternative means of communication on teams’ members were guided by the information richness theory (Daft et al., 1987) which implicitly assumes that communication media inherently possess characteristics which affect how strongly the social context cues are conveyed (Sprout & Kiesler, 1991; Walther, 1996, 1997). According to Lea and Spears (1992), computer-mediated communication is perceived as impersonal and lacking in normative reinforcement as a result of which the exchange of socioemotional content is reduced. Previous research has also established relationships between the development of relational links among team members and the effectiveness of information exchange which improve the interaction experiences of virtual teams (Warkentin et
al., 1997). Developing relational links according to Warkentin et al. (1997) involves performing activities related to the members' support and group well-being functions by establishing the positions of members, defining task roles of group members and establishing norms of group interactions. Similarly, McGrath (1991) offers the TIP theory (time, interaction and performance), according to which the development of relational bonds in groups involves not only support among team members in performing production functions but also an active involvement in the group well-being and members' personal support. The willingness of team members consciously and actively to perform their duties critically depends on developing trustworthy relationships. In an environment without formal control and coordination mechanisms, trust has been described as a 'heartbeat' which can prevent geographical and organisational distances of team members from turning into unmanageable psychological barriers (Jarvenpaa & Stamps, 1997; Kristof, 1995). The literature acknowledges the existence of impersonal or institutional forms of trust in virtual teams in addition to interpersonal forms. According to Luhmann (1979), impersonal trust is based on the appearance of 'everything in proper order,' rather than on an emotional bond, knowledge or past history of interactions. Meyerson et al. (1996) developed the concept of 'swift' trust to explain how temporary teams can enjoy high levels of trust, even though members do not share any past affiliation and cannot necessarily expect to have any further associations. The concept of 'swift' trust maintains that 'unless one trusts quickly, one may never trust at all.' Because there is not sufficient time to develop trust through interpersonal means, team members import expectations of trust based on their local organisational environment, industry practices or role-based stereotypes. Positive expectations of trust motivate members to take a proactive part in the team, which can result in strengthening the trustworthy relationships among team members. The research to date regarding interactions in a virtual environment predominantly concentrates on isolated factors with regard to the cultural, technical, communication and logistical issues emerging as barriers for partnering in a distributed environment. However, a comprehensive understanding of the knowledge creation processes requires a holistic view of the interactions rather than a fragmented perspective. The development of such understandings, therefore, requires an interdisciplinary perspective, which incorporates 'sociopsychological' and 'technoeconomic' forces with an impact on teams' formation and development. Recent empirical results indicate that business interactions in a virtual environment follow a specific logic which can be described as non-linear, 'cyclical self-energising processes' (Ratcheva & Vyakarnam, 2000). Such relationship patterns indicate strong interrelated links between the development of inter-personal/inter-organisational relationships and the teams' actions towards the common goal. The empirical results of this study, although limited in scope, indicate that the social and interpersonal elements of interactions developed and intensified at a later stage of virtual partnership development. One of the main conclusions of the study was that virtual teams' interactions involve dynamic processes, the consideration of which
should take into account not only the individual interactions but also the organisational context in which they are embedded.
Towards a Conceptual Framework of Knowledge Creative Interaction Processes in Virtual Partnerships A starting point in developing the framework is that the interaction patterns and processes in virtual teams should be an integrated part of any knowledge creation model. However, the review of related works earlier in the chapter indicated that the theoretical constructs of virtual organisations, virtual teams and generally virtual organising principles are still missing, although, their future importance is well recognised. At the same time we cannot simply apply the conventional logic and regard virtual partnerships as ‘open’ input-process-output systems which in terms of knowledge will provide far too simplistic explanation such as capturing information from the external environment and processing it into new knowledge. The fact that conceptual developments are lagging behind in comparison to the dynamics of the technological and global economic changes led to the need to adopt theoretical constructs from other fields. Further analyses were developed by referring to recent research in biological phenomenology and neurophysiology, and especially the development of the autopoiesis theory also known as theory of ‘selfproduction’ (Maturana & Varela, 1987; von Krogh & Roos, 1995). The proposed framework refers only to the main principles of the autopoiesis theory which are applied in the context of virtual teams. Autopoiesis theory explains the nature of living entities which undergo a continual process of internal self-production. Because autopoieses theory is a general systems theory, it can be applied on other than biological phenomena. Luhmann introduced at a social level the distinction between normatively closed and cognitively open systems (Luhmann, 1986). An autopoietic social system within this distinction is simultaneously closed (normatively) and opened (cognitively). Some of the recent studies provided a new original interpretation of the normative closure and cognitive openness, describing them as two interactive knowledge flows (Maula, 2000). The cognitive openness represents a knowledge link to the external environment, which maintains organisational learning. Normative closure is secured by the development of organisation-specific rules of interactions and communication, and norms of behaviour which maintain the utilisation of accumulated knowledge. The normative closure also means that the accumulated knowledge through external interactions affects the way an organisation operates, and in return the way it operates affects the creation and acquisition of new knowledge. Therefore, the knowledge creative processes in such organisations can be considered as purposeful actions coordinating the interactions between the external and internal environment which lead to generation of new understandings and knowledge. Adopting the main principles of the autopoiesis theory, we consider virtual teams as autopoietic entities. Figure 1 presents a conceptual framework of the knowledge creative interactive processes according to which there are three
interrelated levels.

Figure 1: Knowledge creative interaction processes. [The figure sets the framework against a changing external environment (customer requirements, technological advancement, competitive offerings) and shows three interrelated levels across the team life span: knowledge flows through originating, dialoging, cyber and exercising 'ba'; team formation, negotiation of behaviour patterns and actions towards the common goal, leading to a project outcome that achieves business and personal goals; and patterns of relationship development from initial attraction, calculative trust (expected outcomes) and 'swift' trust (ability to deliver) to emerging interpersonal bonding.]

Levels 1 and 2 present the knowledge flows throughout the formation and development of the partnership. As the development of interpersonal and trustworthy relationships follows specific patterns, a third level is included which presents the process of formation of inter-personal relationships throughout the life span of the partnership and the way they affect the work-related interactions. However, the three levels are considered in interaction rather than separately because new knowledge is created only through achieving successful synergy between them. The process usually starts as a group of experts self-organise themselves as a team to exploit a spotted market opportunity or to apply a technological advancement. Three interrelated stages of relationship development are considered (Level 2). Because of the temporary nature of the project, team members usually import into the partnership their perceptions and understandings about each other's potential to contribute. Our previous study (Ratcheva & Vyakarnam, 2000) established that the factors causing the initial attraction among team members are based on recognition of complementary expertise, sound professionalism, previous joint working experience and potential access to other business networks. Relationship-building at that stage, therefore, is based on the potential to act and is highly depersonalised. As indicated at Level 3, relationships at this stage are calculative in nature and initial trust is based on expectations. This is followed by negotiating the boundaries of team behaviour patterns, which proved to be an influential factor in each team's integrity and follow-up performance. Once the working rules are established, team interactions are
characterised by cyclical inputs of actions, deeper communication and sharing of ideas, and new initiatives. This cycle is close to what Nonaka and Konno (1998) refer to as 'originating ba,' when the knowledge-creation process begins. They also established that at that stage the actual physical activities and face-to-face experiences are the key to sharing of tacit knowledge. At the second cycle of partnership development (Level 2), the team as a whole starts to develop its own behaviour patterns, which proved to be an influential factor in achieving team integrity and follow-up performance (Ratcheva & Vyakarnam, 2000). The established norms of behaviour and team roles are specific and unique for each team and depend on the goals to be achieved. Nonaka and Konno (1998) refer to this stage as 'dialoging ba,' which is more consciously constructed. As virtual teams do not have structures of authority, the particular roles in the team adopted by each member are identified in a process of dialogue, sharing mental models, reflection and analysis. According to Nonaka and Konno (1998), to construct 'dialoging ba' and trigger conversations, it is important to select people with the right mix of specific knowledge and capabilities. The expertise required in the team should also be redefined as a result of actively interacting with the external environment in terms of changed customer requirements, monitoring new competitive offerings, new technological advancements, etc. Formal mechanisms should also be established for continuous monitoring of market changes. It is expected that the external changes will lead to a redefinition of roles and responsibilities in the team, bringing in complementary external expertise. This will cause further changes in the team's patterns of interactions and knowledge base. Such patterns are also consistent with the concept of distributed cognition (Salomon, 1993). Developing a team with the appropriate mix of expertise results in speeding up the progress of the project, which increases members' confidence in the ability of the team to deliver and as a result stimulates accelerated interpersonal relationships. Once the working rules are established, team interactions are directed toward the project's final goal and are characterised by cyclical inputs of actions, deeper communication and sharing of ideas, and new initiatives. It is likely that at that stage team members work from distant locations, and the communications and interactions are related to task performance and project assembly. This cycle of interactions is a variation of what Nonaka and Konno (1998) define as 'cyber ba,' or a place of monologue. Similarly, 'cyber ba' is associated with the generation and systematisation of explicit knowledge supported by information and network technologies, followed by a final justification of the product concept. A successful project outcome incorporates achieving personal and business goals. Therefore, the end of the project and the dissolving of the partnership is not an end of knowledge creation at individual and team levels. Similarly to 'exercising ba' (Nonaka & Konno, 1998), the explicit knowledge materialised in the project outcome is converted into new tacit knowledge through a process of reflection and learning and brought into new projects and partnerships.
CONCLUSIONS As new media and communication technologies have led to a significant change in the ways we interact and work together, it is important not to constrain this phenomenon to its novel information processing side but to consider virtualisation as a social process. These distributed ways of working and business partnering have a significant impact on social interactions and relationship development in a business context and have led to a reconceptualisation of traditional understandings about organisational norms, roles, identity and culture. The author adopts the view that the creation of new knowledge is socially embedded in interaction and communication practices. Therefore, new knowledge-creation processes in virtual partnerships reside in the connections of experts, and the interaction and communication patterns and rules established among team members determine how knowledge is accumulated. The chapter presents an initial conceptual framework of the dynamic knowledge-creation processes in virtual teams which builds on some of the latest theoretical and conceptual developments in the areas of knowledge management and virtual organisations. It is also suggested that adopting theoretical constructs from other fields can be a fruitful future direction for conducting research in emerging areas, as that can lead to new and deeper insights into arising issues. A next step of this study is to test the proposed framework by developing a number of in-depth case studies on virtual partnerships. The proposed framework also indicates that establishing and cultivating competence networks involve complex social processes. These will require managers to adopt new roles and knowledge workers to develop new understandings of the challenges of working in distributed organisational environments.
REFERENCES Alvesson, M. (1993). Organisations as rhetoric: Knowledge intensive firms and the struggle with ambiguity. Journal of Management Studies, 30(6), 997-1015. Augier, M. and Vendelo, M. T. (1999). Networks, cognition and management of tacit knowledge. Journal of Knowledge Management, 3(4), 252-261. Badaracco, J. (1991). The Knowledge Link. Boston, MA: Harvard Business School Press. Barnatt, C. and Starkey, K. (1994). The emergence of flexible networks in the UK TV industry. British Journal of Management, 5(4). Byrne, J. (1993) The virtual corporation. Business Week, February 3, 98-103. Daft, R. L., Lengel, R. H. and Trevino, L. K. (1987). Message equivocality, media selection and manager performance: Implications for information systems. MIS Quarterly, 11(3), 355-368. Davidow, W. H. and Malone, W. S. (1992). The Virtual Corporation. New York: Edward Burlinghame Books/Harper Business, Harper Collins Publishers.
Drucker, P. F. (1988). The coming of the new organisation. Harvard Business Review, January/February, 34-51. Grant, R. (1996). Toward a knowledge-based theory of the firm. Strategic Management Journal, 17, (Winter Special Issue), 109-122. Gray, P. and Igbaria, M. (1996). The virtual society. ORMS Today, December, 44-48. Grenier, R. and Metes, G. (1995). Going Virtual: Moving Your Organisation into the 21st Century. Upper Saddle River, NJ: Prentice Hall. Hoffman, D. L., Novak, T. P. and Chatterjee, P. (1995). Commercial scenarios for the web: Opportunities and challenges. Journal of Computer-Mediated Communication, 1(3), Available on the World Wide Web at: http:// www.ascusc.org/jcmc/vol1/issue3/hoffman.html. Iacono, C. S. and Weisband, S. (1997). Developing trust in virtual teams. Proceedings of the Hawaii International Conference on Systems Sciences, Hawaii. Jarvenpaa, S. L. and Ives, B. (1994). The global network organisation of the future: Information management opportunities and challenges. Journal of Management Organisation Systems, 10(4), 25-57. Kiesler, S., Siegel, J. and McGuire, T. (1991). Social aspects of computer-mediated communication, In Dunlop, C. and Kling, R. (Eds.), Computerisation and Controversy: Value Conflicts and Social Choices, 330-349. Boston, MA: Harcourt Brace. Kinlaw, D. (1991). Developing Superior Work Teams. Lexington, MA: Lexington Books. Knight, F. (1921). Risk, Uncertainty and Profit. Boston, MA: Houghton-Mifflin. Kogut, B. and Zander, U. (1992). Knowledge of the firm, combinative capabilities, and the replication of technology, Organization Science, 3(3), 383-397. Kristof, A. L., Brown, K, G., Sims Jr., H. P. and Smith, K. A. (1995). The virtual team: A case study and inductive model. In Beyerlein, M. M., Johnson, D. A. and Beyerlein, S. T. (Eds.), Advances in Interdisciplinary Studies of Work Teams: Knowledge Work in Teams. Greenwich, CT: JAI Press. Lea, M. and Spears, R. (1992). Paralanguage and social perception in computermediated communication. Journal of Organisational Computing, 2(3), 321-341. Lipnack, J. and Stamps, J. (1997). Virtual Teams: Reaching Across Space, Time and Organisations with Technology. New York: John Wiley & Sons. Luhmann, N. (1986). The autopoiesis of social systems In Geyer, F. and Van der Zouwen, J. (Eds.), Sociocybernetic Paradoxes, 172-192. Beverly Hills, CA: Sage Publications. Maturana, H. R. and Varela, F. J. (1987). The Tree of Knowledge. Boston, MA: New Science Library. Maula, M. (2000). The senses and memory of a firm-implications of autopoiesis theory for knowledge management. Journal of Knowledge Management, 4(2), 157-161.
Mayo, E. cited in B. Norton (1997). Quick guides to the gurus. Professional Manager, November, 21. McGrath, J. E. (1991). Time, interaction, and performance (TIP): A theory of groups. Small Group Research, 22(2), 147-174. Meyerson, D., Weick, K. E. and Kramer, R. M. (1994). Swift trust and temporary groups. In Kramer, R. M. and Tayler, T. R. (Eds.), Trust in Organisations: Frontiers of Theory and Research, 166-195. Thousand Oaks, CA: Sage Publications. Miles, R. and Snow, C. (1995). The new network firm: A spherical structure built on a human investment philosophy. Organisational Dynamics, 23, 9-32. Morgan, G. (1986). Images of Organisation. London: Sage Publications. Mowshowitz, A. (1997). Virtual organisation. Communications of the ACM, 40(9), 30-37. Nonaka, I. and Konno, N. (1998). The concept of “ba”: Building a foundation for knowledge creation. California Management Review, 40(3), 40-54. Nonaka, I. and Takeuchi, I. (1995). The Knowledge Creating Company. How Japanese Companies Create the Dynamics of Innovation. New York and Oxford: Oxford University Press. Nonaka, I. (1994). A dynamic theory of organisational knowledge creation. Organisation Science, 5(1), 14-37. O’Hara-Devereaux, M. and Johansen, R. (1994). Global Work: Bridging Distance, Culture and Time. San Francisco, CA: Jossey-Bass. Orlikowski, W. (1996). Evolving with notes: Organizational change around groupware technology. In Ciborra, C. (Ed.), Groupware and Teamwork: Invisible Aid or Technical Hindrance. Chichester, UK: John Wiley & Sons. Polanyi, M. (1966). The Tacit Dimension. New York: Anchor Day Books. Powell, W. W. (1991). Expanding the scope of new institutionalism. In Powell, W. W. and Dimaggio, P. J. (Eds.), The New Institution in Organizational Analysis. Chicago and London: University of Chicago Press. Ratcheva, V. and Vyakarnam, S. (2000). A holistic approach to virtual entrepreneurial team formation. The International Journal of Entrepreneurship and Innovation, October, 173-182. Reich, R. B. (1991). The Work of Nations: Preparing Ourselves for the 21st Century Capitalism. New York: Knopf. Salomon, G. (1993). No distribution without individuals’ cognition: A dynamic interactional view. In Salomon, G. (Ed.), Distributed Cognitions: Psychological and Educational Considerations, 111-138, Cambridge, UK: Cambridge University Press. Seufert, A., von Krogh, G. and Bach, A. (1999). Towards knowledge networking. Journal of Knowledge Management, 3(3), 180-190. Snow, C. C., Snell, S. A. and Davison, S. C. (1996). Use transnational teams to globalise your company. Organisational Dynamics, 24(4), 50-67. Spender, J. C. (1996). Making knowledge the basis of a dynamic theory of the firm. Strategic Management Journal, 17, (Winter Special Issue), 45-62.
Sproull, L. and Kiesler, S. (1991). Making connections: Computers can enhance employee commitment–at a cost. Employment Relations Today, 18, 53-71. Starbuck, W. (1992). Learning by knowledge-intensive firms. Journal of Management Studies, 29(6), 713-740. Steward, T. A. (1994). Managing in a wired company. Fortune, 130(1), 44-56. Swan, J., Newell, S., Scarbrough, H. and Hislop, D. (1999). Knowledge management and innovation: Networks and networking. Journal of Knowledge Management, 3(4), 262-275. Townsend, A., DeMarie, S. and Hendricson, A. (1996). Are you ready for virtual teams? HR Magazine, September, 123-126. von Krogh, G. and Roos, J. (1995). Organisational Epistemology. London: St. Martin’s Press, Macmillan Press. Walther, J. B. (1997). Group and interpersonal effects in international computermediated collaboration. Human Communication Research, 23(3), 342-369. Walther, J. B. (1996). Computer-mediated communication: Impersonal, interpersonal, and hyperpersonal interaction. Communication Research, 23(1), 1-43. Warkentin, M. E., Sayeed, L. and Hightower, R, (1997). Virtual teams versus faceto-face teams: An exploratory study of a web-based conference system. Decision Science, 28, 975-996. Webster, F. E. (1992). The changing role of marketing in corporation. Journal of Marketing, 56, 1-17.
Chapter X
The Impact of Trust in Virtual Enterprises
T. T. Wong and Henry C. W. Lau
The Hong Kong Polytechnic University, Hong Kong
ABSTRACT
The nature of work is changing to adapt to the global market. Many enterprises will concentrate on core activities and outsource other services to those with specialist expertise. Outsourcing is one way in which the pool of available knowledge can be enlarged and enhanced. Virtual enterprises are likely to rely on such knowledge to meet customers’ demands on a custom-built or small-batch production basis. Although information technology plays an important role in linking the core company with its partner companies, it remains subservient to the humans that form the virtual enterprise. For effective knowledge management, it is clear that the electronic handshake needs to be based on trust between partner companies as well as on the correct protocol. However, current practice shows that trust between top management teams is rarely considered in the selection of partner companies. A review of the relevant literature indicates that neither scholars nor practitioners agree on a single model of inter-firm trust that applies to all partner evaluation contexts. Hence a decision support system based on neural network and data mining technologies is proposed. A case example is used to illustrate the feasibility of incorporating inter-firm trust in real industrial situations.
INTRODUCTION
Globalisation is an issue currently affecting many organisations and is one that has profound implications for the nature of work. In order to compete in an international setting, companies are increasingly turning to trans-national teams.
These are seen as an effective and flexible means of bringing both skills and expertise to bear on specific problems. This form of organisation is called the virtual enterprise, which comprises a network of alliances, temporarily linked together for competitive advantage, that share common value chains and business processes supported by distributed information technology (Davidow & Malone, 1992; Business Week, 1993). A virtual enterprise aims to combine one or more best practice core competencies from one organisation with different best practice core competencies from additional organisation(s), through networking and forming alliances, to produce a very high-level product or service which would be difficult to compete against. Virtual enterprises can provide growth quickly at a fraction of the cost of tackling the market alone. In the past it was more cost-effective to own all aspects of the value chain–vertical integration was the business model of choice. In today’s global market, focus is critical. Owning the value chain may actually put an organisation at a competitive disadvantage due to the lack of flexibility and the financial commitment true vertical integration represents. Selecting the right partners and nurturing these relationships can help a company focus on what creates the most value for customers and concentrate on its core activities. Virtual enterprises also offer versatility. They create new, viable market options and allow companies to deal more effectively with the uncertainties and complexities of today’s highly competitive global market.
Following Jarvenpaa et al. (1998), we define a global virtual team to be a temporary, culturally diverse, geographically dispersed, electronically communicating work group. The notion of temporary in the definition describes teams where members may have never worked together before and who may not expect to work together again as a group (Lipnack & Stamps, 1997). The characterization of virtual teams as global implies culturally diverse and globally spanning members that can think and act in concert with the diversity of the global environment (DeSanctis & Poole, 1997). Finally, it is a heavy reliance on computer-mediated communication technology that allows members separated by time and space to engage in collaborative work.
The reasons that virtual enterprises are becoming so prevalent nowadays include low overhead, flexibility, minimum investment, and high productivity. By owning few resources and focusing on the organisation’s expertise, the company can keep high levels of productivity while allowing its partners to do the same. Both the partners in a virtual enterprise and the individuals who work for a virtual enterprise are afforded greater flexibility. The partners can focus on core competencies, while individual workers may have the ability to telecommute from their homes. In a virtual enterprise, companies are linked by the free flow of information. There is no hierarchy, no central office, and no vertical integration: just the skills and resources needed to do the job. Each participating company contributes what it is best at. It can be seen that since no single company will have all the skills necessary to compete in the Global Electronic Market, these arrangements will become the norm. One of the keys to the success of the virtual enterprise is the use of information technology (IT) to facilitate these alliances.
Creating a virtual enterprise takes more than just information technology. A recent study on issues of information technology and management concluded that there is no evidence that IT provides options with long-term sustainable competitive advantage. The real benefits of IT derive from the constructive combination of IT with organisation culture, supporting the trend towards new, more flexible forms of organisation (Gamble, 1992). Information technology’s power lies not in how it changes the organisation, but in the potential it provides for allowing people to change themselves. Creating these changes, however, presents a whole new set of human issues. Among the biggest of these challenges is the issue of trust between core and partner organisations in the VE.
KNOWLEDGE MANAGEMENT AND TRUST
In the new economy, knowledge is increasingly seen as central to the success of organisations and as an asset that needs to be managed. Since the 1980s, many organisations have taken various steps to reduce staffing levels, and as people leave, they take with them a valuable stock of corporate knowledge of a particular domain. Domain knowledge can be relatively easy to replace, but the knowledge of how a company operates is built up over a long time and can be irreplaceable, at least in the short term. In addition, many organisations now have to cope with the increasing globalization of business that forces collaboration and knowledge sharing across both time and space. There is now an urgent need for knowledge management in such organisations.
The ability of an enterprise to manage knowledge as an asset (and provide a good return on investment) is seen as the key to survival in a global business environment in which the efficiencies of mass production of commodity goods have been successfully exported to emerging economies. The core issue of knowledge management is to place knowledge under management remit to get value from it–to realize intellectual capital. That intellectual capital can be regarded as a major determinant of the difference between a company’s book price and the total value of its physical assets. For a successful company, this difference can be considerable, representing the difference between the way the company is seen by financial experts and by the market. For example, there is a great difference between the book price and share value of recently launched biotechnology companies, whose market value is clearly based on their knowledge assets rather than on traditional capital.
However, while the world of business is experienced in managing physical and financial capital, virtual enterprises have difficulty in finding solutions to simple practical questions concerning knowledge management, such as: “We are involved in an exciting project with five other companies. How can we all tell whether all these partners would collaborate?” “Market needs change often these days and we are always bringing new partners into projects. How can we select the right partners?”
The main issue is that partner organisations, the powerhouse of the virtual enterprise, are quite intractable from a knowledge management point of view. By their very nature such partners create a great deal of new knowledge, which as such is of high value to the virtual enterprise (VE). However, the knowledge of how and why they created what they had created is not clear, since it involves the interactions among a group of different people. Since virtual partners do not have frequent face-to-face interaction, the core company has to have total faith that the partners will do the job they are assigned, and do it right. This leaves core companies with the daunting task of selecting partners who are not only able to work on their own, but can also function within a team structure–self-motivated partners. When it works, the organisation’s processes flow nicely (Musthaler, 1995). However, when one partner starts slacking, the ramifications can be dire.
A frequent occurrence is when a partner enters into the virtual enterprise network with certain expectations, but those expectations are modified by an unexpected technology breakthrough. Suppose we have a partner who was brought into the VE to develop and supply aluminum casing for a certain brand of notebook computer, and it is discovered late in the design and process development that a newly developed plastic composite material offers similar strength at a much lower weight than the aluminum casing material. If the partner cannot manufacture the composite casing, it makes sense for the virtual enterprise to switch partners. Let us further suppose that the original partner has invested a considerable amount of capital in developing the opportunity for the virtual enterprise, in addition to carrying out research on the design of the aluminum notebook casing. In the traditional business system, each partner sub-optimizes for its selfish goals, which in this case would produce a sub-optimal product, a notebook computer with an aluminum casing. A better business system will adjust nimbly to the customers’ need, and at the same time reward the aluminum casing partner for essentially putting itself out of the virtual enterprise network. In such a case, that partner has to trust that the virtual enterprise will deal with it fairly whether it is fully in the virtual enterprise or out. And the virtual enterprise has to trust that each partner will strive to optimize the enterprise even when doing so reduces or eliminates its own role.
In the following section, recent literature on interpersonal and organisational trust is explored in order to throw some light on its impact on knowledge management in virtual enterprises.
RELATED RESEARCH ON TRUST
Trust has long been of interest to a variety of researchers. Mayer et al. (1995) define trust as “the willingness of a party to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party.” Trust among organisational members is critical for virtual enterprises. Without trust, commitment to the goals of the organisation can waver, as members
perceive the alliance as weak or disintegrating, fractured by misunderstanding or mistrust (Handy, 1995). Trust is particularly important in a VE that requires constant and close attention to shared commitments to safety and reliability, as well as a shared willingness to learn and adapt (Davidow & Malone, 1992). It has been suggested that trust permits a VE to focus on its mission, unfettered by doubts about other members’ roles, responsibilities and resources, and that with trust, synergistic efforts in interorganisational missions are possible (Grabowski & Roberts, 1998).
Developing trust in a VE is a complex task. It requires fairly constant, small-group activities among members, because it is difficult to trust people you do not know well, who you have not observed in action over time, and who are not committed to the same goals (Handy, 1995). Trust plays an important synthesis role because with trust, VEs with fluid organisational structures can leverage the ability and willingness to learn (Coyle & Schnarr, 1995), thereby enhancing performance and attention to reliability over time. VEs with high levels of trust among their members can effectively utilize interactions and communication processes at their interfaces so that members can learn together, and can develop shared mental models of reliability and a shared culture of safety. Finally, high levels of trust also contribute to strengthening linkages among member organisations. Trust among organisational members is an important prerequisite to changing those linkages to alliances, thus mitigating risks, as organisations are reluctant to adopt alliance-like organisational structures that make them vulnerable to the uncertainties of the global environment, and to impacts from other organisations, without some assurances of shared vulnerability (Handy, 1995; McAllister, 1995).
Aerospace conglomerates jointly developing mission- and safety-critical systems–comprising defense contractors, universities, government departments, and other private organisations (Augustine, 1997)–are good examples of the need for such trust in a VE. In these organisations, members have different backgrounds, experiences, goals, objectives, and understandings, but belong to the conglomerate to pursue shared development of mission- and safety-critical aerospace systems. For effective knowledge management to take place, however, members needed to trust in the other members’ talents, capacities, willingness to work, and interests in the alliance (Augustine, 1997; SmartBridge, 1997). Without such trust, members could duplicate other members’ efforts, could subvert the mission and goals of the conglomerate by providing private information about conglomerate members to the members’ competitors, and could introduce inefficiencies and costs pathological to the VE goals. Trust on the SmartBridge project, however, had a temporal quality (Grabowski & Roberts, 1998). Initially, when VE members were excited about the opportunities joint software development and integration posed, much proprietary product and planning information, as well as member-confidential development and integration strategies, were exchanged. Over time, however, as the VE matured and the strength of the linkages between some members faded, the initial trust between some members faded. Some members, although partners on paper, were excluded from
planning and integration discussions late in the project, and less proprietary information overall was exchanged as the project matured. Thus, as this VE matured, trust among some members of the VE waned, suggesting that management of trust in a VE requires at least as much effort and interest as management of the organisation and its linkages.
Meyerson, Weick, and Kramer (1996) developed the concept of swift trust for temporary teams whose existence, like that of partners in a VE, is formed around a common task with a finite life span. Such teams consist of members with diverse skills, with a limited history of working together, and with little prospect of working together again in the future. The tight deadlines under which these teams work leave little time for relationship building. Because the time pressure hinders the ability of team members to develop expectations of others based on first-hand information, members import expectations of trust from other settings with which they are familiar. Analogous to the Social Identification/Deindividuation (Lea & Spears, 1992) and hyperpersonal (Walther, 1997) models, individuals in temporary groups make initial use of category-driven information processing to form stereotypical impressions of others. After the team has begun to interact, trust is maintained by a “highly active, proactive, enthusiastic, generative style of action” (Meyerson et al., 1996). High levels of action have also been shown to be associated with high-performing teams (Iacono & Weisband, 1997). Action strengthens trust in a self-fulfilling fashion: action maintains members’ confidence that the team is able to manage the uncertainty, risk, and points of vulnerability, yet the conveyance of action has as a requisite the communication of individual activities. In summary, whereas traditional conceptualizations of trust are based strongly on interpersonal relationships, swift trust de-emphasizes the interpersonal dimensions and is based initially on broad categorical social structures and later on action. Since members initially import trust rather than develop trust, trust might attain its zenith at the project inception (Meyerson et al., 1996).
Another way of studying partners in the virtual enterprise is to look at how they are held together by tasks. Task structures centre around the establishment of norms which are based on reciprocal expectations held by the partners. Since partnership in a virtual enterprise is more complex than games assume, rules cannot be entirely specific, but cooperation involves the creation of norms of behavior which form part of partnership roles. The uncertainty attached to partner action can therefore be controlled through task execution, and in this way individual partners maintain a structure which also specifies status. But in a virtual enterprise, tasks of partners will need to be re-negotiated more quickly, and individual partners need the ability to be creative about ways of completing tasks based on the principles of trust (Giddens, 1993). The individual partner in a virtual enterprise now has to go beyond the basic task cooperation and legal arrangements typical of traditional patterns of hierarchical work organisation towards a search for greater trust and autonomy. Trust allows partners to take risks with themselves by being able to make mutual disclosures to others and to develop new task execution styles. This has particular validity for
institutions in the form of self-directed work groups and teams when they are trying to manage rapid change. People who work together in groups are used to acting out roles, but within a structure of authority approved by the organisation they belong to, whether this is a business organisation or a scientific community. Norms of behavior can only work if people know what to expect, but expectations also require some prior commitment which is based on belief. When rapid change occurs, existing role structures can be destroyed without being replaced immediately with legitimate alternative structures. However, people will resist getting rid of norms that have meaning for them (Lippitt & Lippitt, 1978), particularly where they think these are already well accepted and approved. Norms will therefore be preferred which are derived from trust in legitimate authority which is acceptable. When norms are destroyed, and with them roles, we get the emergence of anomie and the organisational fabric can be damaged. People will try to re-establish roles in order to reduce their insecurity, and prefer to believe in someone or something which is reliable and conforms to what they expect. But this decision must go beyond reason, because they also have to make an emotional commitment to act in an expected manner. The individual has to trust before commitment can work, but under anomic conditions or where coercion is used, trust will be weak and commitment will only be notional. This therefore has implications for organisations where change is badly handled or where methodologies ignore aspects of trust. Changing from one situation to another will take time in order for individuals to learn at an emotional level how to come to terms with loss and growth (Marris, 1974). People who have already invested time and energy in learning how to manage a situation may find change unwelcome when having to learn new roles.
The creation of roles can be seen in virtual enterprise formation and is believed to go through the stages of forming, storming, norming, performing and adjourning (Handy, 1995). Trust is vital for commitment during the norming phase. The storming phase is anomic in the sense that people are challenging each other for authority over what is true, correct, and what they want. Order is reached when people accept norms as legitimate according to some criteria, and then they can perform. There should be a consensus over norms, but there is a possibility of a minority coercing the others, and then legitimacy is weak. Sufficient time must be allowed for people to work through the process using open communications, otherwise they cannot be committed, particularly if there is a requirement for a major change in norms, as one would expect when both planned and unplanned changes occur frequently.
It is also important to consider how cooperation can be affected by virtual partnership and whether competition, and with it conflict, will result. Cooperation is a special form of behavior which is concerned with reciprocal actions which are to each other’s mutual benefit. Selfish behavior can in a strategic sense lead to cooperation under certain conditions where both parties are mutually dependent on each other for an uncertain period ahead (Axelrod, 1990). At an individual level, in one-off encounters, people can afford to be selfish (using game-like strategies) because they may not meet each other again. However, where they are likely to
encounter the same people again on a repetitive basis, the need for cooperation increases because the risk of future retaliation may rise. If trust does exist between people, there can still be dissent, but this will not involve conflict which is at each other’s expense. In the case of selfish behavior, trust in the other person is not required and behavior may become competitive as soon as people realise that they are no longer dependent on each other. Alternatively, role relationships may exist which are exploitative because of unequal power and low levels of trust. Changes in norms may reflect different values, and this may result from a different culture affecting the situation, as in a takeover or by a change in the composition of the virtual enterprise. Values are important because they are concerned with prioritizing what we want and will condition our expectancies. If we take for granted that our values are correct, then we will expect them to be fulfilled, and this will increase the chance of conflict or coercion. If the core company thinks that partners’ expectations are not important, then negotiating norms will be more difficult. The risks of failure increase if methodological approaches are also primarily technically based, as opposed to seeking the active input of human beings and considering the human processes of communication involved.
In summary, while trust has long been a major issue in the organisational literature, there is little agreement on a single model of trust that applies to all partner evaluation contexts. One can only see from the literature that although trust is pivotal in reducing the high levels of uncertainty endemic to the global and technologically based environment, inter-firm trust has not been explicitly considered in the evaluation of partner companies in a VE. The authors therefore wish to incorporate inter-firm trust as a major criterion, and a partner evaluation decision support system, based on the recent work of the authors, is suggested below (Lau & Wong, 2001; Lau et al., 2001).
NEURAL DATA MINING SYSTEM (NDMS)
In larger organisations, many different types of users with varied needs utilize the same massive data warehouse to retrieve the right information for the right purpose. While a data warehouse is referred to as a very large repository of historical data pertaining to an organisation, data mining is more concerned with the collection, management and distribution of organised data in an effective way. The nature of a data warehouse includes integrated data, detailed and summarized data, historical data, and metadata. Integrated data enable the data miner to easily and quickly look across vistas of data. Detailed data is important when the data miner wishes to examine data in its most detailed form, while historical data is essential because important information nuggets are hidden in this type of data.
On-line analytical processing (OLAP) is an example of an architectural extension of the data warehouse. OLAP refers to the technique of performing complex analysis over the information stored in a data warehouse. Moreover, there is currently no universally accepted conceptual model for OLAP. Merwe and Solms (1998) address this issue by proposing a model of a data cube and algebra to support OLAP operations on this
cube. The model they present is simple and intuitive, and the algebra provides a means to concisely express complex OLAP queries. Once a data warehouse is set up, the attention is usually switched to the area of data mining, which aims to extract new and meaningful information. In other words, a pool of ‘useful information’ that has been stored in a company data warehouse becomes ‘intelligent information,’ thereby allowing decision makers to learn as much as they can from their valuable data assets. In this respect, a neural network can be deployed to enhance the intelligence level of the OLAP application. A neural network searches for hidden relationships, patterns, correlations, and interdependencies in large databases that traditional information-gathering methods (such as report creation and user querying) may have overlooked.
The responsibility of the neural network is to provide the desired change of parameters based on what the network has been trained on. Intrinsically, a sufficient amount of sample data is a key factor in obtaining accurate feedback from the trained network. Since a neural network is meant to learn relationships between data sets simply by having sample data presented to its input and output layers (Herrmann, 1995), training the network with input and output layers mapped to relevant, realistic values, in order to develop the correlation between these two groups of data, does not, in principle, contradict the basic principle of neural networks. With a trained network available, recommended actions can be obtained in order to rectify hidden problems, should they occur at a later stage. Therefore, in the training process of the neural network, the nodes of the input layer of the neural network represent the data from the OLAP, and those of the output layer represent the predictions and extrapolations. It should be noted that the output information from the OLAP could be used to refine the OLAP data cube so as to continually update the database over time.
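To make the data-cube idea more concrete, the following minimal Python sketch (using the pandas library; the fact table, its column names and its values are invented purely for illustration) aggregates a flat table of historical records into a small cube of measures summarised over a few dimensions, the kind of pre-computed structure an OLAP server would maintain:

import pandas as pd

# A toy fact table of historical records; the column names and values are
# invented purely for illustration.
facts = pd.DataFrame({
    "company":  ["A", "A", "B", "B", "A", "B"],
    "location": ["HK", "HK", "CN", "CN", "HK", "CN"],
    "year":     [1999, 2000, 1999, 2000, 2000, 2000],
    "sales":    [120.0, 150.0, 90.0, 60.0, 200.0, 80.0],
    "profit":   [12.0, 18.0, 7.0, 5.0, 25.0, 6.0],
})

# Build a simple "cube": the measures (sales, profit) summarised over the
# dimensions (company, location, year). Each distinct combination of
# dimension values becomes one cell of the cube.
cube = facts.pivot_table(index=["company", "location", "year"],
                         values=["sales", "profit"],
                         aggfunc="sum")
print(cube)

An OLAP server pre-computes and stores such aggregates at several levels of each dimension, so that analytical queries become simple lookups rather than repeated scans of the detailed data; cells of such a cube can also serve as the observations presented to the input layer of the neural network, in line with the training arrangement described above.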
Characteristics of NDMS
The data interchange within the NDMS encompasses three modules, namely the OLAP module, the Data Conversion (DC) module, and the Neural Network (NN) module (Figure 1). The data repository, which aims to support efficient data interchange among the three modules, is essential for the coordination and updating of information from various sources.
[Figure 1: Characteristics of NDMS. The OLAP module, the Data Converter module and the Neural Network module exchange data through a shared data repository.]
As for the OLAP module, it consists of descriptive data (dimensions) and quantitative values (measures), both of which generate the OLAP data cube by building up two elements, namely the fact table and the dimensions (Erik, George & Dick, 1999). In the fact table, the required data and the user-defined methods for analysis are specified clearly. In the descriptive data of OLAP, the different dimension levels are defined for further computational use on different views of the OLAP data cube. Typical dimensions include location, company, and time, whereas typical measures include price, sales, and profit. With a multidimensional view of data, the OLAP module provides the foundation for analytical processing through flexible access to information. In particular, this distinct feature can be used to compute a complex
query and analyze data in reports, thereby allowing data to be viewed in different dimensions in an easier and more efficient way.
To illustrate the benefits of OLAP compared with the traditional approach of data management using SQL, an example is shown here to benchmark the underlying methodologies of the associated operations. In the SQL approach, when a user needs to retrieve information across multiple tables, the user must clearly define the necessary tables for finding the specific information. For instance, when a user needs to know how much in sales was taken in a certain year and country when the Internet was used as the promotion medium, the tables and their relationships must be clearly defined. In normal practice, a query written in Structured Query Language (SQL), as shown below, can be used to retrieve the information from the tables.

“Select sum(a.store_sales) from sales_fact_1999 a, promotion b, region c, store d where a.store_id = d.store_id and c.region_id = d.region_id and b.promotion_id = a.promotion_id and c.sales_country = ‘China’ and b.media_type = ‘Internet’”

When using the OLAP module, the table used for the query and the data used to perform the calculation are defined separately. The user then builds up a complex calculation on individual members to meet the specific requirements. Because the calculations and analyses have been pre-computed in the OLAP server, only a simple Multi-Dimensional Expressions (MDX) statement needs to be constructed to retrieve the identical result, as shown in the following.

“Select [Measures].[Store Sales] on columns, [Store].[Sales_country] on rows from sales where ([Promotion].[Media_Type].[Internet], [Region].[Sales_country].[China])”

The above expression shows that MDX is simpler and clearer than the SQL statement. When the user requirement changes, only a small part of the OLAP data cube needs to be altered in order to fulfill the new requirement; in this respect, only a minimal change to the MDX is necessary. Compared with the traditional approach, the SQL statement needs to be
rewritten to meet the new requirement. In general, MDX is suitable for creating decision support functionality, and a typical example is provided as follows.

“IF ([Measures].[Unit_Sales] > 1000, [Measures].[Store_Sales] * 0.8, [Measures].[Store_Sales])”

Before the OLAP implementation, the calculated member is constructed from the measures. In a case example that aims to find a suitable business partner for a particular task, the method is depicted in Figure 2.
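To make the contrast with the SQL route concrete, a minimal Python sketch is given below (pandas stands in here for the OLAP server, and the small fact table is invented for illustration). Once the aggregate has been pre-computed over the Sales_country and Media_Type dimensions, the MDX-style question above reduces to a single indexed lookup, and a changed requirement simply means a different key rather than a rewritten query:

import pandas as pd

# An invented fact table; in the NDMS this aggregation would live in the
# OLAP server rather than in application code.
facts = pd.DataFrame({
    "sales_country": ["China", "China", "USA", "China"],
    "media_type":    ["Internet", "TV", "Internet", "Internet"],
    "store_sales":   [1000.0, 400.0, 700.0, 250.0],
})

# Pre-computed aggregate over the two dimensions (done once, like a cube).
store_sales = facts.groupby(["sales_country", "media_type"])["store_sales"].sum()

# The MDX-style question: Store Sales for China promoted through the Internet.
print(store_sales.loc[("China", "Internet")])   # 1250.0

# A changed requirement changes only the key, not the structure of the query.
print(store_sales.loc[("USA", "Internet")])     # 700.0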
Difference Between Import and Export Partner Evaluation Criteria
The evaluation of virtual partners generally involves distinct types of decisions. For instance, in the evaluation of import partners, one focuses on a partner to handle a company’s product in an export market. Hence access to markets and market knowledge are likely to be of key importance, together with reliability and control over the conditions of sale. On the other hand, evaluation of export partners involves the identification of a partner for the importation and local marketing of a product. In this case product-related variables such as product quality, price, production capabilities and service support are generally considered to be of key importance. In both cases trust between the top management teams of the core and partner companies is of fundamental importance.
In Figure 2, the scores of trust level, price level and quality level for each project are used. The average score of a job is then calculated from the trust level, price level and quality level scores of that job, for the previous jobs that have been done by a certain partner company. Finally, the overall average of the partner company is determined by accumulating the pre-specified weightings of the latest jobs. The overall average can then be assessed by the CEO of the core company.
[Figure 2: Partner scoring method. The job score is the average of the trust, price level and quality level scores, and a rating calculation aggregates the job scores for each partner company.]
Since the OLAP technology is a user-friendly and software-independent tool, it can be embedded in most client/server development tools and web development tools. In this case example, after the customer submits the requirement to the NDMS server, an MDX query can be executed in order to retrieve the available service providers based on the core competence specified.
With the OLAP module as a front-end component, the Neural Network (NN) module is employed as the back-end part of the NDMS, which concentrates on providing essential information such as alerts to abnormal scenarios based on the pattern of historical data. However, since the output data from the OLAP data cube may not be usable directly by the NN module due to possible data
incompatibility, it is essential that a data translation mechanism is incorporated to act as a “bridge” linking the two modules together into an integrated unit. The DC module is meant to play this important role. In brief, the DC module concentrates on achieving efficient data transfer between the OLAP module and the NN module, which requires a specific data format for mapping the input nodes to guarantee proper operation.
With the formatted data available via the DC module, the focus is now turned to the NN module, which aims to set up a suitable network topology in order to identify any correlations within the data pool. The NN module is meant to project possible outcomes–good or bad–based on the available pattern of data, thereby alerting users to detected abnormal behavior in terms of company performance and other hidden business issues. This provides important advice to support critical decision-making in the VE. In the following case example, parameters such as trust between management teams, product quality and product cost are abstracted from recent company performance records. With the assistance of an expert team, many companies’ past behavior based on the selected parameters can be classified and ranked.
In general, for the setup of a neural network, a number of inputs are required to enable the network to take into account the multiple factors that may influence the performance assessment of a particular company. In this research, the neural network consists of 15 input nodes (five sets of the last five records, covering quality, cost and delivery) and five output nodes (various suggested actions to be taken), as shown in Figure 5. To produce a reliable “trained” neural network, statistical data have to be mapped to the input and output nodes of the neural network. In this respect, it is recommended that at least 100 sets of data are used to train the network in order to develop a reliable module for the NDMS. With a fair number of data sets available, the next step is to train the neural network. This means that parameters such as the network construction and the training data files need to be defined. Users can then apply analysis tools to gain insight into how the network is to be trained, as well as the appropriate topology of the network for the specific purpose. It is also likely that users will need to fine-tune the training parameters, such as the iteration number, learning rate, momentum, etc. (Qnet, 2000), so that the specified values match the training characteristics of the model. After the completion of the training process, the neural network can be used as a knowledge repository to monitor companies’ performances and provide decision support to users, who then consider the necessary actions to be taken.
After the training process, the trained neural network can be recalled in order to process new inputs through the network. To describe the recall operation of the NN module in more detail, the five latest track records of a company are mapped to the input nodes for analysis. Output data from the NN module will predict the company’s performance based on the configuration of the trained network. A case example in the next section will elaborate how this works.
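Before turning to that case example, the following minimal Python sketch illustrates the shape of such a network. It is a sketch only: the synthetic training data, the use of the scikit-learn library and the specific settings below are assumptions made for illustration, not the Qnet configuration used in this study. The fragment trains a 15-input, five-output feed-forward network on expert-labelled historical records and then recalls it on the latest records of a single company (the kind of record shown in the case example that follows).

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic training set: 100 companies, each described by its last five
# records of three factors (e.g., trust, quality, cost) -> 15 input values,
# plus five expert-assigned suggestions in [0, 1] (0 = no, 0.5 = unsure, 1 = yes).
X_train = rng.uniform(1.0, 7.0, size=(100, 15))
Y_train = rng.choice([0.0, 0.5, 1.0], size=(100, 5))

# A small feed-forward network; the iteration number, learning rate and
# momentum are the kind of parameters the text says users would fine-tune.
net = MLPRegressor(hidden_layer_sizes=(10,), solver="sgd",
                   learning_rate_init=0.01, momentum=0.9,
                   max_iter=2000, random_state=0)
net.fit(X_train, Y_train)

# Recall: map a company's five latest (trust, quality, cost) records to the
# 15 input nodes and read the five suggested-action outputs.
latest_records = np.array([[6.6, 3.5, 6.5,
                            5.4, 4.7, 5.5,
                            4.8, 5.0, 5.1,
                            4.4, 5.6, 4.1,
                            3.0, 4.0, 4.0]])
print(np.clip(net.predict(latest_records), 0.0, 1.0).round(2))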
CASE EXAMPLE
To validate the feasibility of the NDMS, a prototype system has been developed based on the framework of the NDMS as proposed in this chapter. Following the NDMS infrastructure defined in the previous section, the OLAP module has generated a pool of useful data and, accordingly, the NN module has created a reliably trained neural network. Next, the five latest track records of a company have been gathered and are listed below. In this case three factors (quality, cost and trust between management) are being considered; a performance score point (PSP) ranging from 1 (lowest) to 7 (highest) is used to assess the partner company, as shown below.

Company A            Trust between      Product quality    Product cost
                     top management     PSP                PSP
Latest record        6.6                3.5                6.5
2nd latest record    5.4                4.7                5.5
3rd latest record    4.8                5.0                5.1
4th latest record    4.4                5.6                4.1
5th latest record    3.0                4.0                4.0
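These records can also be reduced to the job scores and overall average of the scoring method shown in Figure 2. A minimal Python sketch of that step is given below; the recency weights are hypothetical, since the chapter states only that pre-specified weightings of the latest jobs are accumulated, and product cost here stands in for the price level of Figure 2.

# Company A's last five records as (trust, quality, cost) performance score points.
records = [(6.6, 3.5, 6.5),
           (5.4, 4.7, 5.5),
           (4.8, 5.0, 5.1),
           (4.4, 5.6, 4.1),
           (3.0, 4.0, 4.0)]

# Job score: the plain average of the three PSPs for each job.
job_scores = [sum(r) / len(r) for r in records]

# Overall average: accumulate pre-specified weights, the most recent job
# weighted highest (these weights are illustrative assumptions only).
weights = [0.35, 0.25, 0.20, 0.12, 0.08]
overall = sum(w * s for w, s in zip(weights, job_scores))

print([round(s, 2) for s in job_scores], round(overall, 2))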
After such information has been input, the NN module gives an assessment report back to the user, thus supporting the user in taking action if deemed necessary. In the following table, a “0” output from an NN node indicates a negative suggestion for the associated statement, a “1” indicates a positive suggestion, and “0.5” indicates that there is not enough data to justify a firm suggestion.

Company A                                                    Output from NN module
Potentially competent                                        0.5
Dependability of company                                     1
Price quoted compatible with current market situation        1
Service quality is compromised to meet the quoted price      1
Further assessment of company performance is required        1

Based on the NN output results shown in the table, it can be seen that although Company A is trustworthy, it seems to have a problem in meeting the agreed quality level, and it is suggested that further assessment of the company’s performance is needed. Based on the suggestion of this assessment report, Company A was approached in order to find out the reason behind the continual downgrade of its performance in terms of product quality. After an organised investigation of the issue, it was found that several senior staff of the product quality assurance group had left the company to start their own business. Because of this unexpected change, the company suffered an unprecedented “brain-drain,” resulting in a sudden decline in the quality level of certain mainstream products.
Because of the situation, it was suggested that Company A adopt some best practices related to quality assurance. In this case, the Total Quality Management (TQM) practice has been adopted and some necessary tools have also been acquired in order to implement the practice in the company. At this stage, it is still difficult to tell whether the company can significantly reverse the downturn in performance in terms of product quality. However, because of the signal generated by the NDMS, the problem of a business partner has been revealed and a prompt decision could be made with a supporting assessment report, thus avoiding the loss of a trusted business partner, which could in turn weaken the overall performance of the VE. This case example indicates that the introduction of the neural network module alongside the OLAP module is able to significantly upgrade the decision support functionality of the VE. However, the results obtained so far are by no means perfect, although they demonstrate that the suggested NDMS is viable.
LIMITATIONS AND RECOMMENDATIONS FOR FURTHER RESEARCH
This study mainly focused on the practical need to consider trust between management teams in the evaluation of partner companies in a VE. Owing to the circumstances pertaining to the schedule of the study, the perceptions of a limited number of managers responsible for partner selection were used to verify the importance of inter-firm trust. It would be desirable to extend the analysis to all the dimensions pertaining to trust and to explore the effect of trust on the performance of partner companies. Additionally, qualitative studies on mapping the cognitive processes of the CEOs responsible for the evaluation of partner companies in the VE would also be helpful. Here conjoint measurements might be used. Coupled with the decision support tool proposed, such a methodology would help to understand how managers actually make such decisions and use experience in partner evaluation.
CONCLUSION
In order to achieve effective knowledge management in a VE, the core company must be clear about its business aims and objectives. It must assemble a set of partner companies that can deliver to those objectives. It must support them in doing so and trust them to do so. However, there is no single model of trust that can be applied in all practical situations. Hence great pressure is placed on CEOs as the decision makers for the selection of satisfactory partner companies. In this chapter, an intelligent decision support system for partner evaluation is introduced. It demonstrates the benefits of using a combination of technologies to form an integrated system which capitalizes on the merits, and at the same time offsets the pitfalls, of the technologies involved. A special feature is that the trust between
management teams of companies could be incorporated as one of the evaluation criteria. The NDMS has proved to be feasible in predicting the problems of companies, as shown in the case example described in the chapter. As suggested, further study on (i) all the dimensions relating to trust between companies within a VE and (ii) the effect of trust on the overall performance of partner companies would be needed.
ACKNOWLEDGMENT
The authors wish to thank the Department of Mechanical Engineering and the Department of Manufacturing Engineering of the Hong Kong Polytechnic University for the support of this research project.
REFERENCES
Augustine, N. R. (1997). Reshaping an industry: Lockheed Martin’s survival story. Harvard Business Review, 75, 83-94.
Axelrod, R. (1990). The Evolution of Co-operation. Penguin Books.
Business Week. (1993). The virtual corporation. February 8, 98-103.
Coyle, J. and Schnarr, N. (1995). The soft-side challenge of the “virtual corporation.” Human Resource Planning, 18, 41-42.
Davidow, W. H. and Malone, M. S. (1992). The Virtual Corporation. New York: Edward Burlingame Books/HarperBusiness, Harper Collins Publishers.
DeSanctis, G. and Poole, M. S. (1997). Transitions in teamwork in new organisational forms. Advances in Group Processes, 14, 157-176. Greenwich, CT: JAI Press Inc.
Erik, T., George, S. and Dick, C. (1999). Microsoft OLAP Solutions. New York: John Wiley & Sons.
Gamble, P. R. (1992). The virtual corporation: An IT challenge. Logistics Information Management, 5(4), 34-37.
Giddens, A. (1993). The nature of modernity. In Cassell, P. (Ed.), The Giddens Reader, 284-316. Stanford University Press.
Handy, C. (1995). Trust and the virtual organisation. Harvard Business Review, 73(3), 40-50.
Herrmann, C. S. (1995). A hybrid fuzzy-neural expert system for diagnosis. Proceedings of the International Joint Conference on Artificial Intelligence, 494-500.
Iacono, C. S. and Weisband, S. (1997). Developing trust in virtual teams. Proceedings of the Hawaii International Conference on Systems Sciences, Hawaii (CD-ROM).
Jarvenpaa, S. L. and Leidner, D. E. (1998). Communication and trust in global virtual teams. Journal of Computer-Mediated Communication, 3(4).
Lau, H. and Wong, T. T. (2001). Partner selection and information infrastructure of a virtual enterprise network. Computer Integrated Manufacturing, 14(2), 186-195.
Lau, H., Chin, K. S., Pun, K. F. and Ning, A. (2000). Decision-supporting functionality in a virtual enterprise network. Expert Systems with Applications, 19(4), 261-270.
Lea, M. and Spears, R. (1992). Paralanguage and social perception in computer-mediated communication. Journal of Organisational Computing, 2(3/4), 321-341.
Lipnack, J. and Stamps, J. (1997). Virtual Teams: Reaching Across Space, Time, and Organisations with Technology. New York: John Wiley & Sons.
Lippitt, G. and Lippitt, R. (1978). The Consulting Process in Action, 42-43. University Associates Inc.
Marris, P. (1974). Loss and Change. Institute for Community Studies. Routledge and Kegan Paul.
Mayer, R. C., Davis, J. H. and Schoorman, F. D. (1995). An integrative model of organisational trust. Academy of Management Review, 20, 709-734.
Meyerson, D., Weick, K. E. and Kramer, R. M. (1996). Swift trust and temporary groups. In Kramer, R. M. and Tyler, T. R. (Eds.), Trust in Organisations: Frontiers of Theory and Research, 166-195. Thousand Oaks, CA: Sage Publications.
Musthaler, L. (1995). Effective teamwork virtually guaranteed. Network World, October 16, SS10-SS11.
Qnet. (2000). http://qnetv2k.com/.
SmartBridge. (1997). http://www.hokie.bs1.prc.com/maritech/dsc94-44.htm.
Walther, J. B. (1997). Group and interpersonal effects in international computer-mediated collaboration. Human Communication Research, 23(3), 342-369.
Chapter XI
Market of Resources as an Environment for Agile/Virtual Enterprise Dynamic Integration and for Business Alignment
Maria Manuela Cunha, Instituto Politécnico do Cávado e do Ave, Portugal
Goran D. Putnik, Universidade do Minho, Portugal
A. Gunasekaran, University of Massachusetts, USA
ABSTRACT
We are witnessing a shift from traditional “self-centred closed-enterprises” to “global open-enterprises,” corresponding to the recent Agile/Virtual Enterprise (A/V E) model. This new organisational model, in which market information concerns information about the resources to integrate in an A/V E, although reinforced by the ability to use more globally distributed resources and by the lower transaction costs provided by information and communication technologies, calls for a wider support environment, able to assure better quality and better response in less time. This corresponds to the concept of the Market of Resources, proposed by the authors as an environment for A/V E dynamic integration and for business alignment. The chapter describes the main functionalities of the Market of Resources, with a special focus on the specification of its creation, operation and maintenance.
INTRODUCTION
Since the beginning of the nineties, we have been witnessing the development of new enterprise concepts that, supported by the advances in information and communication technologies, help enterprises to remain competitive and to respond to a more demanding and more global market. Several factors determine the competitiveness of the enterprise; the most important requirements for competitiveness are adaptability to environmental change and, as a consequence, fast or dynamic reconfigurability. The paradigms satisfying those requisites are the Agile and the Virtual Enterprise ones, which, in the context of the present work, will be designated as the Agile/Virtual Enterprise (A/V E) model, corresponding to the Virtual Enterprise model offering the characteristics of the Agile Enterprise.
The requirements of adaptability and reconfigurability imply the ability of (1) flexible and almost instantaneous access to the optimal resources to integrate in the enterprise; (2) design, negotiation, business management and manufacturing management functions independent of the physical barrier of space; and (3) minimisation of the reconfiguration or integration time.
According to several definitions (Davidow & Malone, 1992; Byrne, 1993; Preiss, Goldman, & Nagel, 1996; Camarinha-Matos & Afsarmanesh, 1999; Browne & Zhang, 1999; Putnik, 2000; Cunha, Putnik, & Ávila, 2000), virtual enterprises are defined as “agile” enterprises, i.e., as enterprises with integration and reconfiguration capability in useful time, integrated from independent enterprises (resource providers), in order to respond to a market opportunity. After the conclusion of that opportunity, the enterprise either reconfigures itself or is dissolved and another virtual enterprise is integrated, due to new market opportunities. Even during the operation phase of the virtual enterprise, the configuration can change, as the need for readjustment or reconfiguration in the face of unexpected situations can arise at any time, raising the importance of the integration dynamics.
The resource is the entity that can contribute or add value, providing either a product (component, assembly) or an operation, and it can be primitive or complex (a meaningful combination of primitive resources). A resource “is (a view of) an enterprise object which is used to realise, or to support the execution of, one or more processes and it is the subject of control” (Putnik, 2000).
A concept of the Market of Resources, as the institutionalised environment assuring the accomplishment of the competitiveness requirements for Agile/Virtual Enterprise dynamic integration, is proposed. The Market of Resources, defined as a concept in the authors’ previous work, consists of a “virtual” market as the institution offering an electronically delivered intermediation service between the set of resources registered in the Market (candidate resources for A/V E integration), organisations looking for resources to integrate in an A/V E, and Brokers. Offer and demand are usually matched under several different circumstances, from unregulated search to oriented search, and from simple intermediation mechanisms to the market mechanism, all of them with the possibility of being either manually performed or automated. The Market of Resources is an intermediation service
with different degrees of automation, mediating the offer and demand of resources to dynamically integrate A/V Es. The service is supported by (1) a knowledge base of resources and of the results of the integration of resources in previous A/V Es, (2) a normalised representation of information, (3) intelligent agent brokers and (4) regulation, i.e., management of the negotiation and integration processes. It is able to offer (1) knowledge for the A/V E selection of resources and their integration, (2) specific functions of A/V E operation management, and (3) contracts and formalising procedures to assure the accomplishment of commitments, responsibility, trust and deontological aspects, envisaging the production of the projected product. The environment supports not only the integration process but, what is most important when fast and proficient reaction to change is a key element, it also effectively supports dynamic integration, which is the main reason for conceiving the Market of Resources as an institution.
In this chapter we intend (1) to present the role of the Market of Resources, defending its advantages as an enabler of the process of dynamic A/V E integration, and (2) to specify its structure, its main functions and the management environment that is built around the concept, providing a framework for the creation and operation of the Market of Resources.
The chapter is organised as follows: this first section introduced the main concepts and definitions, and the following section briefly refers to the state of the art concerning the most relevant organisational paradigms as answers to the present requirements for competitive enterprises. The third section addresses the environment of the Market of Resources in the support of the A/V E dynamic design and integration processes. The main body of the chapter consists of the fourth section, corresponding to the specification of the creation and operation of the Market of Resources with the support of the IDEF0 design tool, and the presentation of the main procedures supporting the Market creation and maintenance. Finally, the chapter includes a brief discussion on the limitations of the model, future trends and the conclusions.
STATE OF THE ART
Supported by a literature review, this section highlights the emerging organisational paradigms and the main requirements for enterprise competitiveness, to stress the importance of enterprise dynamic integration, as well as to present some tools and environments for enterprise integration.
Requirements for Competitive Enterprises and the New Organizational Paradigms Since the mid-eighties organisational models have suffered radical transformation, towards new concepts such as Agile Enterprise and Agile Manufacturing, Virtual Enterprise and Virtual Manufacturing, Extended Enterprise, Lean Manufacturing, Holonic Manufacturing and Intelligent Manufacturing, etc.
It is proposed in the recent business literature (Miles & Snow, 1984, 1986; Davidow & Malone, 1992; Bradley, Hausman, & Nolan, 1993; Byrne, 1993; Kidd, 1994, 1995; Handy, 1995; Browne & Zhang, 1999) that organisational network structures offer the basic principle for remaining successful in a highly complex environment where competitiveness requirements consist of high flexibility and quick response, with high-quality standards, but constrained by environmental concerns. The goal of the enterprise is to fulfil the customer requirements, traditionally using the limited set of resources existing inside its walls. We are witnessing a shift from “self-centred closed-enterprises” to “global open-enterprises” (Browne & Zhang, 1999), corresponding to the recent approaches of the Extended Enterprise and the Virtual Enterprise. At the same time, it is expected that flexibility, responsiveness and efficiency will continuously evolve and closely align enterprise systems to changing business needs in order to achieve competitive performance (Vernadat, 1999)–this is called Agility.
The Agility concept was coined in 1991 (Iacocca, 1991) and has been defined in terms of outcomes by several researchers (such as Iacocca, 1991; Nagel, 1993; Dove, 1994; Goldman, Nagel, & Preiss, 1995). However, Kidd (1994, 1995) advances operational aspects of agility. Some of the aspects proposed by Kidd (1995) that we consider to be the most relevant include: (1) quick response to market opportunities; (2) adaptability or capability to change direction; (3) virtual corporations; and (4) reconfigurability of corporate resources to answer unexpected market opportunities. A very complete definition of agility is suggested by Yusuf, Sarhadi, & Gunasekaran (1999): “Agility is the successful exploration of competitive bases (speed, flexibility, innovation, proactivity, quality and profitability) through the integration of reconfigurable resources and best practices in a knowledge-rich environment to provide customer-driven products and services in a fast-changing market environment.”
In the BM_Virtual Enterprise Architecture Reference Model (Putnik, 2000), the author presents fast adaptability and fast reconfigurability as characteristics of the competitive enterprise, considering that the concepts of the Agile Enterprise and the Virtual Enterprise are the new organisational paradigms that incorporate those characteristics. Other models satisfying the requisite of fast reconfigurability are the concepts of Virtual Factory and Agile Manufacturing (Goldman, Nagel, & Preiss, 1995; NIIIP, 1996; Putnik, 1997; Gunasekaran, 1999).
Enterprise Dynamic Integration
The traditional organisational model uses its own resources existing within the organisation, a relatively limited selection domain, which cannot, in general, provide the desired competitive performance. To solve the problem of the lack of resources that could bring the enterprise a competitive advantage, the enterprise searches for cooperation with other enterprises, integrating an A/V E. Inter-enterprise integration is the essential condition to make this cooperation effective.
Webster defines integration as “a combination of separate and diverse elements or units into a more complete or harmonious whole.” Enterprise Integration means the establishment of effective and efficient interactions between the elements of an organization, and the concept of A/V E Dynamic Integration, introduced here, means that the integrated elements must be permanently aligned to business, passing through as many instances of combinations of resources as necessary to accomplish the objectives of the A/V E. According to Putnik (2000), one of the most important requirements for the virtual enterprise is the capability of efficient access to heterogeneous candidate resources, efficient negotiation between them and their efficient integration in the Virtual Enterprise. The same author and Vernadat (1996), for the purpose of defining a Virtual Enterprise Architecture Reference Model, state that integration is primarily the task of improving interactions among the system’s components using computer-based technologies, with the goals of ensuring portability, information sharing and interoperability.
We can identify several phases in the life cycle of a Virtual Enterprise (see, for example, Faisst, 1997; Camarinha-Matos & Afsarmanesh, 1999), namely: (1) search and selection of partners, (2) operation and, finally, (3) dissolution. Integration is the support for the enterprise operation phase. As we are thinking of open systems, integration means assuring the several dimensions of integrability proposed in Petrie (1992), for instance, the language dimension, connectivity, reconfigurability and the resources integration domain dimension. In our work, when referring to the integration phase, we will implicitly include the upstream phase of search and selection of partners, and consider integration to be a dynamic process.
A review of the related literature reveals that although considerable research has been undertaken on the subject of Virtual Enterprise integration, management and coordination, insufficient attention has been devoted to the problem of creating the environment where those processes take place, i.e., the environment to enable an efficient and effective dynamic integration, offering strategies to dynamically align the virtual enterprise with business. The concept of A/V E we are addressing is broader, more embracing and more dynamic than the concepts of the Virtual Enterprise or the Extended Enterprise found in the research and literature review undertaken, as these do not require the dynamic integration we defend for agility, and for which we propose the implementation of a Market of Resources.
Business Strategic Alignment
The driving force of business is to satisfy customers who are ever more demanding and ever more global, with products increasingly customised to their individual needs. The meaning of Business Alignment we are addressing consists of the actions to be undertaken to gain synergy between business, i.e., a market opportunity, and the provision of the required product, with the required specifications, at the required time, with the lowest cost and with the best possible return (Cunha, Putnik, & Gunasekaran, 2001). The Market of Resources environment supports the implementation of alignment strategies between business (the market opportunity) and the integration of resources in an A/V E that answers that market opportunity. Strategic alignment between business and A/V E integration involves a mix of dependencies between market requirements, product requirements and resources requirements.3 The selection of resources and their integration in an A/V E follow three dimensions of alignment: (1) market alignment–aligning the A/V E project (system of resources and process plan) with the market requirements (captured by the client); (2) product and operations alignment–aligning the product with its specification (the operations provided by the selected resources must lead to the desired product); and (3) resources alignment–aligning resources with the market requirements (including economic, managerial and organisational aspects). As a consequence of the performance of the resources in the A/V E operation, or of other factors, resources may need to be substituted and the A/V E project may be subject to adaptations, corrections or deliberate change, and quick response is a permanent challenge. This permanent and continuous alignment requires the ability of dynamic integration.
Tools/Environments for Integration
The market is the mechanism that allows buyers and sellers to exchange things; its main characteristic is to link buyers and sellers to define prices and quantities. An Electronic Market is the virtual marketplace where business participants can meet each other and usually cooperate in order to achieve a common business goal. With the exponential growth of the Internet, Electronic Markets are gaining increased importance. The establishment of an Electronic Market does not rely just on the basic information and communication infrastructure. This infrastructure is absolutely necessary as a support mechanism, but the added value comes from the higher level functions–in our case, support for the A/V E selection and integration processes. From the literature review undertaken, it was possible to conclude that the model of Electronic Markets offering market functionalities such as searching for goods or participants, filtering information or supporting negotiation is widely proposed, using either Brokerage (Bichler, 1998; Eversheim et al., 1998; Kanet et al., 1999; Cunha, Putnik, & Ávila, 2000; Putnik, 2000; Sihn, Palm, & Wiednmann, 2000; Manfred & de Moor, 2001) and/or Intelligent Agents technology (Tsvetovatyy, Gini, Mobaster, & Wieckowski, 1997; Camarinha-Matos & Afsarmanesh, 1998; Viamonte & Ramos, 2000). Virtual Enterprise Brokerage4 is defined by Eversheim et al. (1998) as the exploitation of business opportunities through the creation of Virtual Enterprises (VE). The core processes of VE brokerage (Eversheim et al., 1998) are related to the organisation and deployment of the competencies of a set of potential partners (organised under a VE Cluster), in order to integrate the selected partners into a VE. Besides partner search and the formation of the partnership, the Broker is also given the responsibility of configuring the adequate infrastructure–physical, information, legal and socio-cultural–for the successful operation and dissolution of virtual enterprises. In Putnik (2000), the broker is an essential and distinguishing component and feature of the A/V E, providing rapidness and, especially, virtuality to the enterprise. In most of the reviewed work, part of the electronic brokerage process is performed by intelligent agents, i.e., client brokering and server brokering, as well as negotiation, can be implemented with multi-agent technology. Intelligent agents consist of software that can travel over networks, activate and control remote programs, and return to the source with information. In the model for negotiation in the formation of virtual organisations proposed by Oliveira and Rocha (2000), when a specific consumer need is identified, a new agent is created that will formulate an announcement for goal satisfaction in the electronic marketplace, receive and evaluate bids from potential suppliers of the product or service, and negotiate in order to integrate the partnership. Electronic brokerage is a broader activity than the agent technology-based solution, as it can offer more than search and negotiation processes, as already mentioned.
THE MARKET OF RESOURCES ENVIRONMENT
Snow, Miles, and Coleman (1992) distinguish three different categories of interorganisational networks: the internal, the stable and the dynamic networks. The virtual enterprise is a dynamic network and, as such, management is an essential function, meaning coordination between partners to maximise the benefits of integration. Virtual enterprises, as partnerships, need coordination mechanisms, such as rules, procedures and leadership; the organised environment for these functions is assured by the Market of Resources. In this section we refer to the Market of Resources as an enabling environment for A/V E dynamic integration and for business alignment. In the following subsections, we highlight some of the main benefits offered by the Market of Resources, present its structure and explain briefly the processes of selection of resources and A/V E integration, with the objective of understanding and justifying the necessity of the creation and operation of the Market, the central objective of this work.
Benefits of the Market of Resources
The Market of Resources is the institutionalised organisational framework in which Participants (resources providers) make their resources available as potential servers/partners for A/V E integration. We designate as Client the entity looking for a product, components or operations (resources) to create/integrate an A/V E. The Client is the one that wants to answer a market opportunity by capturing the corresponding market requirements, and asks the Market of Resources for optimal A/V E design, selection and integration, translating the market requirements into resources requirements, process requirements and product/operation requirements.
The cost associated with the integration of a virtual enterprise surpasses the sum of the costs of making contacts, overcoming distance, etc.; it also includes the opportunity cost–the cost of losing an opportunity because of taking a few more hours or days to locate resources (especially for low-level processes) or to reconfigure the virtual enterprise. Speed is a fundamental characteristic to be considered, as one instantaneous physical structure (or one instance) of a virtual enterprise may last only a few days or even hours, so it is necessary to act almost in real time. Our contribution integrates this concern. The selection problem is by nature a very complex problem (NP class) and, if performed manually, high performance cannot be assured. The search for resources in the universal/global domain to integrate a Virtual Enterprise, even using agent technologies, is extremely time-consuming, and the lack of standardisation and uniformity in the description of the resources cannot assure an efficient selection in useful time. We propose that all the entities interacting in a selection process be described in a normalised format to allow automatic selection and decision making (the application of automated brokerage mechanisms, later designated as search algorithms). The second way to assure efficient selection is to limit the search domain to a subset of the universal domain (a focused domain). Also, the uncertainty concerning the behaviour of the resources to be integrated increases the risks associated with the ability to answer for the production of an ordered product (the motive that led to the integration of the A/V E) and therefore must be taken into consideration. The Market of Resources also offers procedures to manage the performance of every integrated resource; it can support negotiation, contracts and commitments; and it can act as the interface between the entities in interaction or in negotiation. An essential aspect is the evaluation of the results of previous situations, i.e., the behaviour of the enterprises in previous integrations, and the use of this historical information in the search processes.
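To make the idea of a normalised resource description and of automated selection over a focused domain more concrete, the following sketch shows one possible shape for such a record and a selection routine that uses it. It is a minimal illustration in Python under our own assumptions: the field names, the scoring rule and the use of past performance as a risk penalty are hypothetical and are not the actual Resources Representation Language or search algorithm of the Market of Resources.

```python
from dataclasses import dataclass, field

@dataclass
class ResourceDescription:
    """Normalised description of a resource offered by a Market participant.
    Field names are illustrative, not the actual Resources Representation Language."""
    provider_id: str
    operation: str            # e.g., "milling", "assembly", "injection moulding"
    capacity_per_week: float  # units the provider can deliver per week
    unit_cost: float
    lead_time_days: int
    focused_domains: set = field(default_factory=set)
    past_performance: float = 1.0   # 0..1, built from Operation Results history

def select_candidates(focused_market, operation, max_cost, max_lead_time):
    """Rank the resources of one focused domain against a client's constraints.
    Because every entry uses the same normalised fields, the comparison can be
    fully automated; historical performance penalises risky providers."""
    candidates = [r for r in focused_market
                  if r.operation == operation
                  and r.unit_cost <= max_cost
                  and r.lead_time_days <= max_lead_time]
    # Lower cost and lead time are better; past performance scales the score.
    return sorted(candidates,
                  key=lambda r: (r.unit_cost + r.lead_time_days) / r.past_performance)
```

Because every participant is described with the same fields, the comparison needs no human interpretation, which is what makes automated brokerage over a large domain feasible in useful time.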
Structure of the Market of Resources
The overall functioning of the Market of Resources is represented by an IDEF0 diagram5 (Figure 1), consisting of the creation and management of the Market environment (Process A.1.) to support the Design and Integration of the A/V E (Process A.2.) that, under the coordination of the environment, produces a product in answer to a market opportunity (Process A.3.). The model proposed herein conforms to the BM_Virtual Enterprise Architecture Reference Model proposed by Putnik (2000).
Process A.1.–Market of Resources Creation and Operation
This process corresponds to the creation and operation (management/maintenance) of the proposed environment, from the technological aspects–such as the creation of databases, the development of software tools and the implementation of communication systems–to the definition and permanent adaptation and updating of the managerial aspects–such as regulation and rules, criteria for selection, management and brokerage procedures, commitments, evaluation, etc.–including the evaluation of the performance of the Market itself in order to improve the Market of Resources organisation.
Figure 1: IDEF0 representation of the processes of Creation of a Market of Resources and A/V E design, integration and operation (adapted from Cunha, Putnik, & Ávila, 2000)
Process A.2.–A/V E Design/Integration
This process consists of two activities–Resources Selection and A/V E Integration–performed on a given subset of the Market of Resources, resulting from the decomposition of the Market participants into meaningful combinations of resources, designated Focused Markets (Cunha, Putnik, & Ávila, 2000), to increase the efficiency of the selection process and to reduce search time. This process of decomposition takes place off-line. Resources Selection involves the design of the A/V E that matches the requirements to produce the desired product and the search for the best combination of resources that will be integrated in the A/V E. The redesign of an A/V E, implying the substitution or integration of new resources, is also considered in this process, as is the dissolution of the A/V E.
Process A.3.–A/V E Operation
The service controls the operation of the integrated A/V E, tracking the performance of each resource and restructuring the A/V E design whenever necessary (dynamic adjustment) to make the achievement of the results possible. The operation results are of interest for keeping the historical information concerning the performance of the resources up to date, to be taken into consideration in future selection processes, and for adjusting the management procedures. The main objective of the present work is to detail Process A.1., which will be done later. The meaning of the input and output flows of these processes is systematised in Table 1.
Table 1: Flows in the global representation of the Market of Resources processes

Process A.1–Market of Resources Creation
The Market of Resources requires (input flows):
• Resources: information concerning the enterprises that subscribe to the Market to provide products or operations. This includes: (1) enterprise generic information and (2) characterisation of the resources able to provide products or operations, conditions for provision, specification, availability, restrictions and constraints.
• Selection Results: to allow the adaptation of the criteria for resources selection and of the Service/Process Patterns and Client Search Patterns.
• Integration Results: to allow the adaptation of the criteria for resources selection and to adjust the procedures for integration.
• Operation Results: to update the historical information concerning the participation of an enterprise in an A/V E and to allow the actualisation of the Market of Resources Management; this flow is also used to determine A/V E dissolution, a task dealt with by Process A.2.
The Market of Resources provides (output flows):
• Market of Resources Management: rules and procedures to regulate the functioning of the environment, methodologies to evaluate performance, brokerage and all the support documents; these management procedures are permanently adjusted to allow a better response, based on the Selection Results and Operation Results.
• Market of Resources: database of resources, clients, products, operation results, performance and historical information.
• Service/Process Patterns: patterns of the concrete services/processes that can be asked of the Market–this information is permanently adjusted from the Selection Results.
• Client Search Patterns: patterns of the possible constraints of the services that can be asked, such as quality level, negotiation constraints, available time for search and cost–this information is permanently adjusted from the Selection Results.

Process A.2–A/V E Design/Integration
The Market of Resources requires (input flows):
• Market of Resources (output flow of A.1.).
• Service/Process Patterns (output flow of A.1.).
• Client Search Patterns (output flow of A.1.).
• Requirements for Resources Selection.
• Operation Failure: in case of failure of the A/V E, it is necessary to substitute the responsible resources, which implies a new A/V E project and selection/integration.
• Operation Results: to determine A/V E dissolution.
The Market of Resources provides (output flows):
• Selection Results.
• Integration Results.
• Selection Failure: when it is not possible to find resources matching the requirements and the negotiation parameters.
• Integration Failure: when the selected resources are unable to interoperate.
• Integrated A/V E: resources selected and integrated under an A/V E.

Process A.3–A/V E Operation
The Market of Resources requires (input flows):
• Integrated A/V E (output flow of A.2.).
• Raw Materials Specification.
• Product Requirements.
• Process Plan.
The Market of Resources provides (output flows):
• Operation Results.
• Operation Failure.
• Products.
Agile/Virtual Enterprise Design and Integration in the Market of Resources
When the Client entity requires the service, it must specify the conditions and characteristics of the A/V E that will answer the market opportunity, i.e., the objective of the A/V E. This specification consists of the technical and operational requirements to produce the desired product and the managerial requirements, and corresponds to the input flows of Process A.2.: Requirements for Resources Selection and Client Search Constraints/Negotiation Parameters. Process A.2. is represented in Figure 2. To keep the dynamics of the Virtual Enterprise model, the search for the best combination of resources to integrate should be performed almost in real time. As the search problem is an NP-class problem, whose search effort grows exponentially with the size of the domain, we have proposed the decomposition of the Market of Resources (the global set/domain of resources) into subsets of meaningful combinations, designated Focused Markets of Resources (Cunha, Putnik, & Ávila, 2000), where a search algorithm will look for the best combination. This corresponds to a focused domain, proposed for each search and reasonably dimensioned to allow a good match within a limited time. In this way, the search in the Market of Resources takes place in two phases: the first occurs off-line (Process A.2.1.) and consists of separating the Market of Resources into Focused Markets, according to previously identified and determined Patterns of Client Search Constraints and Service/Process Patterns (input flows of Process A.2.1.). The second phase takes place on-line (Process A.2.2.) and consists of selecting the resources that verify the search constraints required by the Client, in order to propose the set of resources to be integrated into the A/V E (Cunha & Putnik, 2001). The correct capture of those patterns, to be used by Process A.2.1., is essential to the efficiency of the Resources Selection (specifically the Focused Market identification). The set of patterns must be permanently calibrated as a function of the results of the Selection and Integration processes, to assure an optimal focused domain identification. The control of the patterns is done in Process A.1.
Figure 2: IDEF0 representation of Process A.2.–A/V E design and integration (adapted from Cunha, Putnik, & Ávila, 2000)
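The two-phase organisation of the search (off-line decomposition into Focused Markets, then on-line selection within one focused domain) can be sketched as follows. This is a minimal Python illustration under our own assumptions: patterns are treated as simple named predicates over the normalised resource records sketched earlier, and the on-line step is an exhaustive search bounded by a time budget. It is not the actual algorithm/software tool of the Market of Resources.

```python
import itertools
import time

def build_focused_markets(market, patterns):
    """Off-line phase (Process A.2.1.): partition the global Market of Resources
    into Focused Markets, one per service/process pattern.  `patterns` maps a
    pattern name to a predicate over the normalised resource records; both the
    names and the routing rule are illustrative assumptions."""
    return {name: [r for r in market if keep(r)] for name, keep in patterns.items()}

def online_selection(focused_market, required_operations, acceptable, time_budget_s=1.0):
    """On-line phase (Process A.2.2.): look for the cheapest combination of
    resources covering all required operations, within a bounded time so that
    the A/V E can be configured or reconfigured almost in real time."""
    deadline = time.monotonic() + time_budget_s
    pools = [[r for r in focused_market if r.operation == op] for op in required_operations]
    best, best_cost = None, float("inf")
    for combo in itertools.product(*pools):        # exhaustive only over the small focused domain
        if time.monotonic() > deadline:
            break                                   # return the best answer found so far
        cost = sum(r.unit_cost for r in combo)
        if cost < best_cost and acceptable(combo):  # client constraints / negotiation parameters
            best, best_cost = combo, cost
    return best
```

The point of the decomposition is that the expensive combinatorial step runs only over the small focused domain, which is what keeps the on-line response time bounded.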
The process of Resources Selection (A.2.2.) corresponds to the fulfilment of the three already mentioned components of strategic business alignment for A/V E selection and integration: (1) market alignment, (2) product and operations alignment, and (3) resources alignment. The strategic alignment is performed by the Algorithm for Search over the Focused Market and the Algorithm for Optimal Search. Alignment is a continuous activity, as even the A/V E operation is controlled by the A/V E Management function. The dynamics requirement is a permanent characteristic of the A/V E model, as a consequence of continuous alignment: resources may need to be substituted, the A/V E project may be subject to adaptations, corrections or deliberate change, and quick response is a permanent challenge. As we have considered the readjustment or re-design of an A/V E (the substitution of resources or the rearrangement of integrated resources) to be an operation undertaken by Process A.2.2., we also consider the dissolution of the A/V E, a special case of re-design, but we are not addressing this issue here. The A/V E Integration (Process A.2.3.) consists of establishing procedures, normalising processes, assuring interoperability, defining responsibilities and assuring commitments through the formalisation of contracts between the participants.
CREATION AND OPERATION OF THE MARKET OF RESOURCES
The Market of Resources has two components: (a) the organisational or managerial one, integrating the criteria for resources selection and the procedures to manage, control and evaluate the environment, and (b) the infrastructural or informational one (databases). In this section we describe how: 1) the two components of the Market of Resources are created (for the first time): Process A.1.1.–Market of Resources Definition; 2) the organisational component is operated and kept up to date: Processes A.1.2.–Maintenance of Management Procedures and A.1.3.–Maintenance of Search Patterns; and 3) the Market of Resources information (database) is managed: Process A.1.4.–Market of Resources Operation. The four proposed processes (A.1.1., A.1.2., A.1.3. and A.1.4.) are represented in the IDEF0 diagram of Figure 3 and are described in the following subsections.
Process A.1.1. – Market of Resources Definition This process (Figure 4) corresponds to the creation of the Market of Resources environment–the organisational component and the support infrastructures–for the first time. Subsequently, the components of the Market can be updated and operated, through processes A.1.2. to A.1.4. As we have already mentioned, the scope of the service comprises, besides the selection and integration of resources, the management of the A/V E design, selection and integration of resources, and the evaluation of A/V E operation.
Figure 3: IDEF0 representation of Process A.1.—Market of Resources creation and operation
Figure 4: IDEF0 representation of Process A.1.1.—Market of Resources definition
From the set of initial specifications, the environment is created. This corresponds, briefly, to the creation of the Market of Resources information structure (Process A.1.1.1.–Creation of Database), the definition of the search patterns to be used in the selection process (A.1.1.2.–Definition of Search Patterns), the definition of the management procedures that control all the operation of the Market, the processes of selection and integration of the A/V E and the A/V E Operation (A.1.1.3.–Definition of Regulation), and the implementation of the brokerage function (A.1.1.4.–Implementation of Brokerage). After the creation, the Market is ready to be operated and to perform its projected activities of selection, integration and management.
Process A.1.2.–Maintenance of Management Procedures
All the operation of the Market of Resources is constrained by a control, designated Market of Resources Management, defined for the first time in Process A.1.1. and used as a control in every process. The Market of Resources Management represents all the procedures and rules that govern the Market,6 and is maintained in order to provide maximum efficiency in the processes of selection of resources, integration of the A/V E, control of A/V E operation and management of the operation of the Market itself; this maintenance is accomplished by Process A.1.2., as represented in Figure 5. The output flow Market of Resources Management is a control flow in all the processes, except in Process A.1.1. Even the application of changes in the Market of Resources Management to the ongoing A/V E operation and to the resources registered in the Market (Process A.1.2.5.) is constrained by this control. Periodically, or after each activity of the Market, the results of the operations of resources selection, integration or A/V E operation control are evaluated (A.1.2.1.) in order to determine the need to introduce changes in any of the management procedures. The adjustment of the management procedures is an iterative process in which the impact of the necessary adjustments on the present environment (resources subscribed, ongoing activities, etc.) is evaluated and measured, until an equilibrium is found between the adjustment of the management rules and its effect on the environment (Processes A.1.2.2. and A.1.2.3.). Once the adjustments to the management procedures that will improve the Market's operational performance with minimal disturbance to the ongoing activities have been identified, the changes are effectively implemented (A.1.2.4.), originating a new Market of Resources Management control output flow from this process. Finally, the new management procedures are applied to the ongoing activities and entities registered in the Market (A.1.2.5.).
Figure 5: IDEF0 representation of Process A.1.2.—Maintenance of management procedures
Process A.1.3.–Maintenance of Search Patterns
As mentioned earlier, the set of patterns used in the selection of resources must be permanently calibrated as a function of the results of the Selection and Integration processes, in order to increase the efficiency of the Selection process. As can be seen in Figure 6, when the necessity of updating the search patterns is detected (by Process A.1.3.1.), which can happen as a consequence of the evaluation of the performance of the selection and integration processes, an iterative process (A.1.3.2.) is triggered; this process simulates the best combination of patterns to maximise the efficiency of the selection, namely the identification of the focused domains where the Resources Selection process will find the optimal combination of resources to be integrated. After the conclusion of this process (A.1.3.2.), the new combination is made applicable to future searches (A.1.3.3.).
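One way to picture the simulation step of Process A.1.3.2. is to replay recent selection requests against candidate sets of patterns and keep the set that yields focused domains which are small yet still contain a feasible answer. The Python sketch below is only an assumption about how such a calibration could be scored; the request fields, the routing of a request to a pattern by name and the scoring rule are hypothetical, and this is not the Market's actual calibration procedure.

```python
def selection_efficiency(patterns, recent_requests, market):
    """Score one candidate set of search patterns (name -> predicate over the
    normalised resource records) by replaying recent selection requests: the
    set is good when the focused domain a request is routed to is small yet
    still contains a resource offering the requested operation."""
    focused = {name: [r for r in market if keep(r)] for name, keep in patterns.items()}
    score = 0.0
    for request in recent_requests:              # e.g. {"pattern": "machining", "operation": "milling"}
        domain = focused.get(request["pattern"], [])
        if any(r.operation == request["operation"] for r in domain):
            score += 1.0 / (1 + len(domain))     # feasible and small -> high score
    return score

def recalibrate_patterns(candidate_pattern_sets, recent_requests, market):
    """Processes A.1.3.2. and A.1.3.3. in miniature: keep the candidate set with
    the best simulated efficiency and apply it to future searches."""
    return max(candidate_pattern_sets,
               key=lambda p: selection_efficiency(p, recent_requests, market))
```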
Figure 6: IDEF0 representation of Process A.1.3.—Maintenance of search patterns
Process A.1.4.–Market of Resources Operation
This process corresponds to the maintenance of the database of resources and consists of three processes:
1. Subscription to the service by enterprises willing to make their resources available for integration–Process A.1.4.1.
2. Actualisation of information related to the subscribed resources, either requested by the resources providers themselves or through the registration of the results of an A/V E operation, updating the historical information on resources performance–Process A.1.4.2.
3. Removal of enterprises/resources from the Market of Resources database, either requested by the enterprises themselves or as expulsion following failure in the accomplishment of obligations or non-observance of commitments–Process A.1.4.3.
Figure 7: IDEF0 representation of Process A.1.4.—Market of Resources operation
In Figures 8, 9 and 10, we detail the processes A.1.4.1., A.1.4.2. and A.1.4.3., respectively.
Process A.1.4.1.–Subscription
The first step of the subscription consists of data entry and verification, in order to analyse the interest of both parties in negotiating the conditions of subcontracting the service (Process A.1.4.1.1.). If agreement is reached between the parties (A.1.4.1.2.), the negotiated conditions are formalised under a contract (A.1.4.1.3.) and the specification of the resources to be provided, and the corresponding conditions, are translated (A.1.4.1.4.) using a specifically conceived Resources Representation Language (mechanism M1) that normalises the resources description in order to allow the use of automatic search algorithms in Process A.2. The resources’ normalised description is associated with the Focused Markets of Resources, or focused domains (A.1.4.1.5.), where the search is performed, as previously mentioned, and all the information is appended to the Market of Resources database (A.1.4.1.6.).
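A minimal sketch of this registration flow, assuming the normalised resource record introduced earlier, is shown below. It covers only the steps that follow negotiation and contract formalisation: translating the provider's specification into the normalised description, associating it with the focused domains whose patterns it satisfies, and appending it to the database. The dictionary keys and the representation of focused-domain patterns as predicates are illustrative assumptions, not the actual Resources Representation Language.

```python
def subscribe_resources(provider_id, offered_specs, market_db, domain_patterns):
    """Sketch of Process A.1.4.1. after negotiation and contract formalisation
    (A.1.4.1.1.-A.1.4.1.3., not shown): each offered resource is translated into
    the normalised representation (A.1.4.1.4.), associated with the focused
    domains whose patterns it satisfies (A.1.4.1.5.) and appended to the Market
    of Resources database (A.1.4.1.6.).  All names are illustrative."""
    registered = []
    for spec in offered_specs:
        record = ResourceDescription(
            provider_id=provider_id,
            operation=spec["operation"],
            capacity_per_week=spec["capacity_per_week"],
            unit_cost=spec["unit_cost"],
            lead_time_days=spec["lead_time_days"],
        )
        for name, keep in domain_patterns.items():   # association to focused domains
            if keep(record):
                record.focused_domains.add(name)
        market_db.append(record)                     # append to the Market of Resources database
        registered.append(record)
    return registered
```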
Figure 8: IDEF0 representation of Process A.1.4.1.—Subscription
Process A.1.4.2.–Actualisation
Two types of information actualisation are identified: the first is requested by the resources provider and concerns the actualisation of information about the characteristics of the resources, the conditions for providing them and the details for negotiation; the second is automatically triggered by the service and concerns the update of the historical record with the results of the participation of a resource in an A/V E. In the first case, the process starts with the data entry (identification of the resources and of the characteristics and conditions to be altered) and the verification of the involvement of the resources in ongoing A/V E, or of the existence of any commitment (Process A.1.4.2.1.). If so, it is necessary to verify whether the intended changes will affect those commitments (A.1.4.2.2.) and, if this is true, the possibility of A/V E redesign without disruption will have to be analysed. If it is not possible to introduce the changes without prejudice to the assumed commitments, they will not be accepted until those commitments are fulfilled. When it is possible to accept the required actualisation, an amendment to the contract is agreed (A.1.4.2.3.), the new information is translated (A.1.4.2.4.) and associated with the focused domains (A.1.4.2.5.), and the database is updated (A.1.4.2.6.). In the second case, the update of the database with the operation results triggers only Process A.1.4.2.6.
Figure 9: IDEF0 representation of Process A.1.4.2.—Actualisation
Figure 10: IDEF0 representation of Process A.1.4.3.—Unsubscription or expulsion
Process A.1.4.3.–Unsubscription or Expulsion
This process aims at removing a resources provider from the database and results from one of two situations: a request from the resources provider itself, corresponding to the unsubscription operation, or bad performance or failure in the accomplishment of obligations, corresponding to expulsion. The first case starts with the data entry (identification of the resources to be unsubscribed) and the verification of the involvement of the resources in ongoing A/V E, or of the existence of any commitment (A.1.4.3.1.). If so, it is necessary to study the possibility of releasing the resources from the operations in which they are involved, even if it is necessary to redesign the A/V E under operation (A.1.4.3.2.). If the unsubscription is allowed, the indemnities to be paid for the damage caused by the substitution and the redesign of the A/V E are calculated (A.1.4.3.3.) and the contracts with the Market of Resources and with the operating A/V E are rescinded (A.1.4.3.4.); finally, the records of the database are updated with the removal information (A.1.4.3.5.). In the second case, the expulsion is a consequence of the input flow Operation Results, and starts with the study of the possibilities of releasing the resources from the operating A/V E and the respective redesign (A.1.4.3.2.), continuing as in the previous case with Processes A.1.4.3.3., A.1.4.3.4. and A.1.4.3.5.
FUTURE TRENDS
The Agile and Virtual Enterprise model is of increasing relevance in the organisational panorama, due to its intrinsic agility, dynamic adaptability and efficiency. We think that the proposed environment is capable of assuring the requirements of efficient A/V E integration and business alignment, but its implementation is very complex. To date, we have not been able to find any development leading to the dynamic integration of Agile/Virtual Enterprises or to an environment such as the one we are proposing. A more detailed specification of the model, the construction of a demonstrator for such a service and the development of the management procedures will be the next objectives of our research activities, in order to demonstrate the model and validate its potential. The demonstrator will integrate contributions from other researchers who share the model and follow the reference model BM-VEARM (Putnik, 2000).
CONCLUSIONS
Adaptability and fast reconfigurability are the requirements that prevail in the current context of high competitiveness, and the concept of the A/V E is the one that can assure those requirements. But, by themselves, the Agile/Virtual Enterprise models are not the solution, as the A/V E must be dynamically aligned with business, and a delay can mean losing a business opportunity, as virtual enterprises tend to last for shorter and shorter periods. Selection must happen very fast and integration should be almost automatic. At the same time, the coordination and evaluation function of the A/V E operation is essential, to detect failures and to trigger the A/V E redesign in case of any disruption. In this work we have proposed an environment able to answer the requirements of competitiveness through support for dynamic Agile/Virtual Enterprise integration, assuring speed, efficiency and effectiveness. We have focused mainly on the creation and operation of the structure supporting the service provided by the Market of Resources.
ENDNOTES
1 IDEF stands for ICAM (Integrated Computer-Aided Manufacturing) DEFinition. IDEF is a top-down modelling method used to describe systems.
2 Although we state that there is a difference between those two concepts–“Agile” and “Virtual”–in the context of the present work we will not address it, as our main concern is the intrinsic flexibility provided by both models, and we will consider the A/V E model as corresponding to the Virtual Enterprise model offering the characteristics of the Agile Enterprise.
3 Process requirements must also be considered; we opted to include them in the resources requirements.
4 Probably one of the first attempts to address the concept of the electronic broker was made by Miles and Snow (1984), as support for the implementation of the model of the Dynamic Network Organisation (the former version of the Virtual Enterprise concept), but at that time it was constrained by the limitations of information and communication technology.
5 An IDEF0 diagram illustrates the structural relations between the processes and the entities present in the system. The processes (represented as boxes) transform the inputs into outputs (respectively, the left and the right arrows of a process), using the mechanisms for the transformation (the bottom arrows of a process) and constrained by control information or conditions under which the transformation occurs (the top arrows).
6 It is important to remark that the Market of Resources Management, besides being an output flow, is a control, and as such should be described using the same specification methodology as the processes; but due to the limitations of the specification methodology, only processes can be described, not control (nor mechanism) flows. The methodology also does not allow the transformation of an output flow into an activity box, as we would need in order to define the control Market of Resources Management.
REFERENCES Bichler, M. (1998). An electronic broker to business-to-business electronic commerce on the Internet. International Journal of Cooperative Information Systems, 7(4). Bradley, S. P., Hausman, J. A. and Nolan, R. L. (1993). Global Competition and Technology, Globalisation Technology and Competition: The Fusion of Computers and Telecommunications in the 1990s, 3-31. Boston, MA: Harvard Business School Press. Browne, J. and Zhang, J. (1999). Extended and virtual enterprises: Similarities and differences. International Journal of Agile Management Systems, 1(1), 30-36. Byrne, J. A. (1993). The virtual corporation: The company of the future will be the ultimate in adaptability. Business Week, February 8, 98-103. Camarinha-Matos, L. M. and Afsarmanesh, H. (1998). Flexible coordination in virtual enterprises. Proceedings of the 5th International Workshop on Intelligent Manufacturing Systems, IMS’98, 43-48. Gramado, Brasil. Camarinha-Matos, L. M. and Afsarmanesh, H. (1999). The Virtual Enterprise Concept, Infrastructures for Virtual Enterprises. London: Kluwer Academic Publishers. Cunha, M. M. and Putnik, G. D. (2001). Agile/virtual enterprise integration based on a Market of Resources. Proceedings of the Business Information Technology Management Conference–BITWorld2001. Cairo: American University in Cairo.
Cunha, M. M., Putnik, G. D. and Ávila, P. (2000). Towards focused Markets of Resources for agile/virtual enterprise integration. In Camarinha-Matos, L. M. and Afsarmanesh, H. (Eds.), Proceedings of the 4th IEEE/IFIP International Conference on Information Technology for Balanced Automation Systems in Manufacturing and Transportation. Berlin: Kluwer Academic Publishers. Davidow, W. H. and Malone, M. S. (1992). The Virtual Corporation–Structuring and Revitalising the Corporation for the 21st Century. New York: HarperCollins Publishers. Dignum, F. (2000). Agents, Markets, Institutions and Protocols, European Perspectives on Agent Mediated Electronic Commerce. Springer-Verlag. Dove, R. (1994). The meaning of life and the meaning of agile. Production Magazine, November. Eversheim, W., Bauernhansl, T., Bremer, T., Molina, A., Achuth, S. and Walz, M. (1998). Configuration of virtual enterprises based on a framework for global virtual business. In Sieber, P. and Griese, J. (Eds.), Proceedings of the VoNet Workshop. Simowa Verlag Bern. Faisst, W. (1997). Information technology as an enabler of virtual enterprises: A lifecycle-oriented description. Proceedings of the European Conference on Virtual Enterprises and Networked Solutions. Paderborn, Germany. Goldman, S., Nagel, R. and Preiss, K. (1995). Agile Competitors and Virtual Organizations: Strategies for Enriching the Customer. New York: van Nostrand Reinhold. Gunasekaran, A. (1999). Agile manufacturing: A framework for research and development. International Journal of Production Economics, 62, 87-105. Handy, C. (1995). Trust and virtual organization. Harvard Business Review, 73(3), 40-50. Iacocca Institute. (1991). 21st Century Manufacturing Enterprise Strategy. An Industry-Led View, 1(2). Bethlehem, PA: Iacocca Institute. Kidd, P. T. (1994). Agile Manufacturing: Forging New Frontiers. Reading, MA: Addison-Wesley. Kidd, P. T. (1995). Agile Corporations: Business Enterprises in the 21st Century–An Executive Guide. Cheshire Henbury. Manfred, A. J. and de Moor, A. (2001). Concept integration precedes enterprise integration. Proceedings of the 34th Hawaii International Conference of Systems Sciences (HICSS-34). Island of Maui, Hawaii. Miles, R. E. and Snow, C. C. (1984). Fit, failure and the hall of fame. California Management Review, 26, 10-28. Miles, R. E. and Snow, C. C. (1986, ). Organizations: New concepts for new forms. California Management Review, 28, 62-73. Nagel, R. (1993). Understanding Agile Competition, A Quick Look at How to Make Your Company Agile. Bethlehem, PA: Iacocca Institute, Lehigh University.
NIIIP. (1996). The NIIIP Reference Architecture. Available on the World Wide Web at: http://www.niiip.org. Oliveira, E. and Rocha, A. P. (2000). Agents advanced features for negotiation in electronic commerce and virtual organisations formation process. European Perspectives on Agent Mediated Electronic Commerce. Springer-Verlag. Petrie, C. (Ed.). (1992). Enterprise Integration Modeling. The MIT Press. Preiss, K., Goldman, S. and Nagel, R. (1996). Cooperate to Compete: Building Agile Business Relationships. New York: van Nostrand Reinhold. Putnik, G. (2000). BM_virtual enterprise architecture reference model. In Gunasekaran, A. (Ed.), Agile Manufacturing: 21st Century Manufacturing Strategy. Elsevier Science. Putnik, G. D. (1997). Towards OPIM system. In Younis, M. A. and Eid, S. (Eds.), Proceedings of the 22nd International Conference on Computers and Industrial Engineering, 675-678. Cairo. Sihn, W., Palm, D. and Wiednmann, H. (2000). Virtual marketplace for SME cooperation. In P. et al. (Ed.), Integrated Technology Systems: Academic and Industry Collaboration in Engineering Design and Automation for the New Millenium: Proceedings, 4th International Conference on Engineering Design and Automation. Orlando, Florida. Snow, C. C., Miles, R. E. and Coleman, H. J. (1992). Managing the 21st century organizations. Organizational Dynamics, (Winter), 5-20. Tsvetovatyy, M., Gini, M., Mobaster, B. and Wieckowski, Z. (1997). MAGMA: An agent-based virtual market for electronic commerce. Journal of Applied Artificial Intelligence, 11(Special Issue on Intelligent Agents). Vernadat, F. (1996). Enterprise Modeling and Integration. Chapman & Hall. Vernadat, F. B. (1999). Research agenda for agile manufacturing. International Journal of Agile Management Systems, 1(1), 37-40. Viamonte, M. J. and Ramos, C. (2000). A Model for an Electronic Marketplace, European Perspectives on Agent Mediated Electronic Commerce. SpringerVerlag. Yusuf, Y. Y., Sarhadi, M. and Gunasekaran, A. (1999). Agile manufacturing: The drivers, concepts and attributes. International Journal of Production Economics, 62, 33-43.
Section IV Knowledge Management in E-Commerce Environment
Chapter XII
Managing Business-Consumer Interactions in the E-World Sushil K. Sharma Ball State University, USA Jatinder N.D. Gupta The University of Alabama in Huntsville, USA
ABSTRACT
As we move into the 21st century, the need for rapid access to relevant knowledge has never been greater. The business world is becoming increasingly competitive. Even though there is an increasing demand for innovative products and services, enterprises face a daunting task in understanding customers and finding ways to attract and retain them. The Internet and e-commerce have changed the way people interact with businesses. The recent developments in e-commerce and knowledge management are creating new organisational forms in the 21st century. These technologies have also increased the expectations of customers. Traditional principles of customer relations do not always transfer well to the online world. This chapter discusses the use of knowledge management concepts to create an appropriate framework for managing business-consumer relationships in order to understand and retain customers.
INTRODUCTION
Organisations of the 21st century are characterized by globalisation, rapid technological change and the importance of organisational knowledge in order to gain and sustain competitive advantage. The exponential growth of e-commerce and
related technologies during the past decade has shifted traditional economies to knowledge-based economies. The new knowledge-based economy depends entirely upon information technology, knowledge sharing, intellectual capital and knowledge management. The environment of e-commerce and knowledge management is changing the business-consumer relationship paradigm. In the electronic world of a knowledge-based economy, competitive advantage will lie with those organisations that have strong social cohesion with their customers, a clear understanding of their expectations and a capacity to deliver fast. While the nascent form of e-business has shown much promise, many unresolved issues, most importantly the handling of interactions with customers, still persist. The question that remains to be answered is how e-business can best be used to secure the patronage of a customer and how this ‘virtual’ relationship can be sustained. Organisations are moving to new electronic business models both to cut costs and to improve relationship management with customers, suppliers and partners. If an organisation knows the patterns of customer demand, it can reduce inventory requirements and unused manufacturing or service capacities. Traditionally, firms have focused customer knowledge management efforts on supporting enterprise customer sales and marketing processes, such as direct mail campaigns, catalogs, and telephone solicitations. Customer knowledge provides guidance and direction to these processes by improving the enterprise’s understanding of the factors that influence customer decision-making, leading to more effective marketing and sales strategies. E-commerce is not only changing trading processes and/or refashioning the internal business processes of enterprises, but also adding many new channels to the existing approaches for reaching end users. Being “customer-centric” and having knowledge about customers is becoming critical to the success of an enterprise. Today, customers interact with businesses and purchase items on a 24-hour, seven-day-a-week basis. If a consumer attempting to make a purchase on-line is not handled well, he can easily become a lost sale, or at the very least an irritated customer. This dissatisfaction can grow throughout the sales cycle, and often includes problems with product delivery, the handling of complaints, and most importantly the handling of returns and exchanges. Many companies have found that e-commerce makes it easier for customers to switch their loyalty from one company to another, because competitors are just one click away. One bad business-consumer interaction is enough for a company to lose customers due to mismanagement of the relationship with the customer. Therefore, it becomes extremely important to know the customers and their expectations, and accordingly to build suitable strategies into companies’ web sites for effective e-world interactions. This chapter shows that the emerging knowledge management concepts can be used to create an appropriate framework for managing business-consumer relationships in order to understand and retain customers. The rest of the chapter is organised as follows. We first describe the significance of e-commerce-led knowledge management in the 21st-century organisation. Various approaches for understanding and retaining customers are detailed in the next two sections. This leads us to a discussion of the issues and challenges of business-consumer interactions and to a suggested framework for managing business-consumer relationships. Finally, we conclude the chapter with a summary of our suggestions and some guidelines for future research.
KNOWLEDGE MANAGEMENT IN THE 21ST CENTURY ORGANISATION
Globalisation, industry consolidation, increasing customer demands and ubiquitous technology drive today’s dynamic business environments. The turn of the new millennium has seen the maturation of a new business paradigm: global, virtual and flexible. To compete in today’s environment, organisations have to develop the ability to intelligently use the knowledge already inherent within them and the new intellectual capital created daily (Lee, 2000). While organisations may have sophisticated technological solutions, many of these are disparate systems and lack an effective integration with people and processes. As we move into the 21st century, the need for rapid access to relevant knowledge has never been greater. The business world is becoming increasingly competitive, and the demand for innovative products and services is growing (Duffy, 2001). The recently published report Knowledge Management Software Market Forecast and Analysis, 2000/2004 estimated that the total knowledge management (KM) software market would reach $5.4 billion by 2004 (Mcdonough, 2000).
KM–A Tool for Customer Relationship Management
During the 1970s and 1980s, data collection was a large part of companies’ practices for accumulating client information. Companies used relational databases along with other application software packages to record customer or product-related data. Companies used many disparate operational systems for data collection and compilation. Organisations were flooded with data and information at this stage although, due to the lack of a single integrated source of data, they experienced difficulty in providing valuable information for analysis. At this stage, organisations were focused on using information technology (IT) mainly for recording data to answer the question: “What happened?” The systems of this phase were termed Report Oriented Systems (Saporito, 2001). Organisations successfully implemented systems for the “What happened?” phase but soon realized that the data recorded in reporting systems did not help them much with useful introspective analysis. So companies started looking for systems which could help them analyze “Why did it happen?” This required drilling down beneath the numbers on a report to slice-and-dice data at a detailed level (Saporito, 2001). To meet this requirement, in the late 1990s, companies began to use data warehousing to consolidate information from disparate operational systems into one source of reliable and accessible information. Data warehousing is a generic term for a system for storing, retrieving and managing large amounts of any type of data. Many organisations had implemented systems which were mature enough to support “Why did it happen?” analysis, but as global competition grew fiercer, organisations started looking for technologies which could help them to examine “What will happen?” As many organisations used state-of-the-art technologies and implemented systems to understand the “what” and “why” of their business dynamics, they felt a need to go one step further and understand “What will happen?” in order to manage strategy. Organisations are learning that data placed inside warehouse systems, coupled with data mining techniques, can help predict future trends and behaviors, allowing companies to make proactive, knowledge-driven decisions. Many business leaders started demanding technologies which could help them to answer “What is happening?” and “What do I want to happen?” With this emphasis coming from business executives, it became imperative that the new technologies be fully integrated with people and processes to answer “What is happening?” and “What do I want to happen?” Data warehouses are updated continuously on a real-time, online basis in order to support day-to-day activities and answer “What is happening?” The systems must be fully automated to provide self-service to users when asking “What do I want to happen?” KM tools and technologies are the systems that integrate various legacy systems, databases, ERP systems and data warehouses to help organisations answer all of the questions: “What happened? Why did it happen? What will happen? What is happening? What do I want to happen?” Integrating these with advanced decision support and online real-time systems would facilitate better customer interactions and would encourage customer loyalty. KM solutions are often integrated in an emerging environment known as enterprise information portals (EIPs). The rate of change and improvements in technology enablers have made knowledge management more feasible today than at any other time.
What is KM? Although many definitions of knowledge management have been posited, a particularly useful one has been described by the Gartner Group: “Knowledge management is a discipline that promotes an integrated approach to identifying, managing and sharing all of an enterprise’s information needs. These information assets may include databases, documents, policies and procedures as well as previously unarticulated expertise and experience resident in individual workers” (Lee, 2000). Knowledge management requires the application of a triad of people, process, and technology. Organisations collect data and information about customers, products, suppliers and transactions through their transactional operational systems. This data and information is stored in many structured (databases, ERP systems, etc.) and unstructured (document and content management, groupware, email and other forms of interpersonal communication) formats. Knowledge management transforms this data and information into knowledge. Knowledge management is an intelligent process by which raw data is gathered and transformed into information elements (Onge, 2001). In KM, information residing in its databases, file servers,
Web pages, e-mails, ERP (enterprise resource planning) and CRM (customer relationship management) systems, from all structured and unstructured data sources, is integrated into a single EIP, which can be accessed through a usually personalized, Web-based interface.
Knowledge Management Architecture
Knowledge management architecture can be divided into five layers, as shown in Figure 1. The bottom layer can be termed the communication layer, which is mainly a hardware layer. The second layer is known as the enterprise data source layer (also called the transactional or operational systems layer). The third layer is the knowledge repository, where the data warehouse and data marts extract information from the enterprise data source layer. The fourth layer consists of middleware that creates an easy-to-use interface to the knowledge repository. The fifth and last layer is the enterprise information portals (EIPs) layer with its Web-based user interface. These layers are briefly explained in the next section.
Figure 1: Knowledge management architecture
• End User Application – Enterprise Information Portal (EIP) (Web-based access systems)
• Middleware – Knowledge maps and knowledge meta-models (XML and retrieval algorithms)
• Knowledge Repository – Data warehouse and groupware (document management and collaborative technologies)
• Enterprise Data Source – Databases and ERP (document management and e-mail)
• Communication Systems – Intranet, extranet and Internet (technologies such as the Internet, wireless, 3G and GPRS)
Communication Systems Layer
The communication systems layer, representing all communication systems involved, is the basic foundation layer for knowledge management. A variety of communication systems may be used, such as a local area network (LAN) or intranet, an extranet, and the Web or Internet. Organisations need to have their communication systems in place before they decide to implement the knowledge management function (Wachter and Gupta, 1997).
Enterprise Data Source Layer
At the lowest level in the organisation, there are transactional or operational systems in the form of databases and ERP systems that hold raw data on customer orders, receipts, inventory, procurement processes, supplier performance status and a host of other data streams. Organisations may have increasingly sophisticated and voluminous databases and ERP systems, yet users may still be starved for information.
Knowledge Repository or Data Warehouse Layer
In this layer, information from the enterprise data source layer is extracted and summarized. At this level, data is organised into a data warehouse. Data warehousing is the process of bringing data together from a wide range of sources into a single, reliable repository. It requires the ability to extract data from other operational applications (inside the enterprise and beyond) into data sources or streams. The data is then cleansed, de-duplicated and enriched (Secker, 2001). This layer provides organisations with high-quality analytical market information and helps to extract useful information on customer loyalty, sales, contacts, satisfaction, profitability and segmentation from the data warehouse. Companies can then use data mining and other tools to predict future demands and find the means to deal with them. Many companies have been very successful in using data warehousing and data mining for various activities, such as improved selling and pricing, cross-selling, reduced expenditures and enhanced customer service (Secker, 2001).
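As a minimal illustration of the extract, cleanse and de-duplicate steps described above, the following Python sketch merges customer records pulled from two hypothetical operational sources into a single repository keyed on e-mail address. The source names, field names and matching rule are illustrative assumptions, not a prescribed design.

    # Minimal sketch of an extract-cleanse-de-duplicate step feeding a data warehouse.
    # The source systems, field names and matching rule (e-mail address) are
    # illustrative assumptions, not a prescribed design.

    billing_records = [
        {"name": "A. Smith ", "email": "A.SMITH@EXAMPLE.COM", "total_spend": 120.0},
        {"name": "B. Jones",  "email": "b.jones@example.com", "total_spend": 75.5},
    ]
    web_orders = [
        {"name": "Alice Smith", "email": "a.smith@example.com", "total_spend": 40.0},
    ]

    def cleanse(record):
        """Trim whitespace and normalise the key used for de-duplication."""
        return {
            "name": record["name"].strip(),
            "email": record["email"].strip().lower(),
            "total_spend": float(record["total_spend"]),
        }

    warehouse = {}
    for record in billing_records + web_orders:           # extract from both sources
        row = cleanse(record)                             # cleanse
        key = row["email"]
        if key in warehouse:                              # de-duplicate and enrich
            warehouse[key]["total_spend"] += row["total_spend"]
        else:
            warehouse[key] = row

    for customer in warehouse.values():
        print(customer)

The de-duplicated rows would then feed the summarization and analysis steps that the warehouse layer supports.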
Middleware Layer The middle layer, termed middleware, integrates the applications of the knowledge repository and enterprise information portals. This middleware layer supports intelligent message routing, business rules that control information flow, security, and system management and administration.
EIPs and Web Interface Layer Enterprise information portals (EIPs) are evolving as a single source of knowledge-based systems (Silver, 2000). These EIPs integrate access to knowledge and applications. They provide a single point of entry to all the disparate sources of knowledge and information both within and outside an organisation, through the
Internet or a company intranet (Ruppel and Harrington, 2001). This layer represents the user interface to the applications and the knowledge repository. Since the Web is used as the interface medium, the layer relies on Web-based interactive tools to access knowledge from knowledge management systems. It hides all the internal complexities of the KM architecture and responds to users' requests through easy-to-use features.
UNDERSTANDING CONSUMERS IN THE E-WORLD
It is becoming increasingly important for an organisation to establish long-term business relationships with its customers through understanding the target market. Without understanding their customers, organisations cannot develop innovative products and services. As of now, only a select group of customers has access to the Internet and can take part in online shopping. In the near future, however, the population of Internet customers will increase as the Internet revolution spreads to the masses. Customers will soon be just one click away from their merchants, able to move freely between competitors and easily shop for the best value available in the market. Understanding customers is the most critical part of the customer-centric e-world. To meet customers' demands, satisfy consumers and retain them, businesses need to do things differently for marketing, sales, maintenance and follow-up services in the e-world. The online world is very different from the 'real' world, and at times ideas and models that work well in brick-and-mortar stores do not transfer well to an e-business. Web and Internet technology help to gather customer data for understanding customer behavior. Various methods, such as cookies, are used extensively to profile customers. Today, customers buy goods from both brick-and-mortar stores and online storefronts. Retailers match data from multiple channels to construct complete marketing profiles of individual customers (Briody, 2000). A growing roster of software tools from new and established business-intelligence software developers attempts to pull e-commerce data from various sources and combine it for analysis.
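The following minimal Python sketch illustrates the kind of multi-channel matching just described: purchase records from a physical store and an online storefront are combined into one marketing profile per customer. The customer IDs, channels and product categories are hypothetical and serve only to make the idea concrete.

    # Hedged sketch: combining purchase records from a physical store and an online
    # storefront into one marketing profile per customer. IDs, channels and
    # categories are hypothetical.

    from collections import defaultdict

    store_purchases  = [("C001", "groceries"), ("C002", "electronics")]
    online_purchases = [("C001", "books"), ("C001", "groceries")]

    profiles = defaultdict(lambda: {"channels": set(), "categories": defaultdict(int)})

    for channel, purchases in (("store", store_purchases), ("online", online_purchases)):
        for customer_id, category in purchases:
            profiles[customer_id]["channels"].add(channel)
            profiles[customer_id]["categories"][category] += 1

    for customer_id, profile in profiles.items():
        favourite = max(profile["categories"], key=profile["categories"].get)
        print(customer_id, sorted(profile["channels"]), "favourite category:", favourite)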
Acquiring and Attracting Consumers
Once an organisation understands consumers and their shopping habits, it is ready to find ways and means to acquire and attract them. As mentioned before, since the characteristics of online customers, and the tools available to reach and attract them, differ from those of traditional customers, the marketing strategies required of electronic retailers need to differ from traditional marketing. Electronic retailing requires firms to play a more active role in seeking customers, since customers would get lost in an Internet ocean unless they can find appropriate banners or links to reach the firm. There are plenty of search engines that help customers and companies reach their destination through simple search features. Firms need to register their URLs with all possible search engines so that customers can reach the firms' Web sites through listed
links. Banners and cross-links to partners' sites may also be needed to make sure that the firm is visible on the Net. A recommended technique for successfully attracting or obtaining online customers is to provide a free service. Online users are accustomed to receiving services and information for free. This type of draw can be very effective in exposing a customer to new products. Furthermore, a free service such as news updates, free e-mail or stock quotes can keep the customer coming back and purchasing an online product simply out of convenience. The key, however, is to gain exposure to ensure that customers are aware of a company's product line. Some companies provide free products on a sample basis before purchase. Some companies offer free gift certificates to make this initial contact and later offer many value-added services for the consumers' convenience. These are examples of providing added value to the customer as an incentive to purchase a given product, and they are critical in obtaining new customers. Organisations must also communicate with users on the consumers' wavelength and should avoid technical language in online interactions. At times, organisations make their Web sites less user-friendly and expect consumers to be technically skilled. This discourages consumers from visiting the Web site again. On the contrary, the design of a Web site and its user interface should be such that it is not only easy to use but also provides many value-added services during the interaction. For example, secure servers were often described using terms such as SSL and DES. These terms have little meaning to laypersons and only serve to confuse them. Firms must communicate at the consumers' level of understanding because there is no salesperson by their side to help them. Acquiring customers is the most difficult task, as it involves the effective design of the online dialogue interface along with innovative financial incentive offerings. The most fundamental key to acquiring customers is to design sites that are easy to use and have an effective user interface for communication with customers. Online customers are heavily influenced by their first experience; if that first experience is enjoyable and pleasant, there is a strong possibility of the customer visiting the online store or site again. Therefore, it becomes imperative that firms design their sites or stores to give pleasant shopping experiences to potentially new customers. A well-designed, user-friendly store can acquire new loyal customers at relatively low cost. Promotions such as gift certificates and discounts can also be used to acquire new customers. Once a customer has a successful experience online, s/he is likely to be more loyal, and it becomes much more difficult for a competitor to attract that customer. To attract customers who are already loyal to other firms, companies have to spend extra money in the form of deep discounts, giveaways, extensive advertising, marketing of a new image or even a change in product offerings. All of these options carry significant costs and emphasize the importance of acquiring customers from their first online shopping experience, while they are still new to the medium. In addition to offering incentives, having a bug-free process and communicating effectively, online companies need to keep innovating new methods to stay close to their customers.
Interacting with Consumers
Today, customers have more choices and flexibility for shopping and bargaining than ever before. The broadening range of customer choices stems from increased competition to deliver the most advanced and valuable services. Customers can interact with businesses from home, from the office and even while traveling, using a variety of sophisticated technological tools (Nelsen and Fraley, 2000). The businesses that can couple innovative products and services with proactive, personalized customer service will win higher customer loyalty, lower customer churn and, as a result, more stable profits. To attract customers, supermarkets spend a lot on ergonomics, displaying their products in an appealing and organised manner in the grocery store. Online shoppers never visit the store physically and thus are not impressed by ergonomics or the conveniences offered during the shopping experience. This gives online stores or supermarkets an opportunity to reduce display costs, and some stores pass these savings on to customers by reducing prices. The online world is clearly positioned to offer benefits in terms of convenience, savings, innovative products and services, information and variety. All of these aspects can be capitalized upon to better appeal to customers and retain their attention. E-retailers offer comparative shopping features and many other value-added services to keep customers better informed and educated about products and services. Consumers also receive the additional benefit of personalized goods and services. With the growing emphasis on customisation, customers see a unique opportunity to obtain products and services tailored to their satisfaction, together with better related information, instructions and product reviews than in the past.
Empowered Consumers
Fierce global competition has challenged businesses but at the same time has empowered customers. Customers now have more choices and therefore greater power than ever before. Organisations are forced to become "customer centric" and to understand their customers better before offering products and services. New electronic business models allow companies to know better what customers expect from businesses (Nelsen and Fraley, 2000). Businesses have adopted a number of new ways to attract customers to the e-world. Some online stores have been delivering ordered items with free (or very inexpensive) delivery, so customers get the convenience of home delivery at little or no extra cost. On the surface this may seem like an excellent venture, as customers would surely be delighted to have items delivered to their door at no additional delivery cost, but from a business point of view this model is not sustainable in the long run.
Simplified Design of User Interface
Another challenge is that although online businesses offer value-added services along with detailed product information, they may lose business
because customers are unfamiliar with the technology. Therefore, online businesses have to design systems for novice users that are easy to use and completely fault free. The purchasing process has to be simple and clearly designed, and should ideally be integrated into a single-click function. In case of technical difficulties or system failures, customers generally switch to competitors. Even if the online store is not responsible for these failures, it may still lose valuable customers. Businesses need a simplified user interface design and a technically robust system that avoids any failure in the transactional process. This may force online stores to bear the additional burden of hiring technical people to keep their systems fault free.
Use New Methods to Reach Customers
Traditionally, businesses interacted with their customers through mass media in one-to-many relationships. Today's businesses have different forms of interaction with their customers. The e-world provides an opportunity for companies to use new methods such as personalized Web pages, chat rooms, e-mail, automated response systems, helpdesks and call centers to reach customers. The traditional mass-market approach is therefore transforming into segmentation, niche marketing and one-to-one customer marketing. Direct marketing on a one-to-one basis certainly makes organisations more customer-centric and may maximize both efficiency and effectiveness simultaneously (Foreman, 2000). Internet-related technologies help companies build direct relationships with each customer. Using Web technologies, companies can learn about their customers by watching how they use the Web site. Companies can use this data, along with statistical analysis, inductive learning and neural network modeling, to classify customer segments and then target advertisements and customer services effectively. One-to-one marketing is a type of relationship marketing, and it involves much more than just sales and marketing because a firm is able to change its products and services based on the needs of individual customers. The e-world enables companies to better understand their customers' needs and buying habits, which in turn enables them to improve and frequently customize their marketing efforts. In the e-world, it is very important for companies to use the Web's new tools to collect customer data and interact with customers. The challenge for a company is to tie that Web data into its existing information to create a unified view of each customer, fueling better business decisions.
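As a toy illustration of how Web-usage data might be turned into customer segments for targeted advertising and service, the following Python sketch applies simple, hand-written rules. The thresholds, segment names and usage figures are illustrative assumptions; a production system would use the statistical, inductive learning or neural network techniques mentioned above.

    # Rule-based stand-in for customer segmentation from Web-usage data.
    # Thresholds, segment names and figures are illustrative assumptions only.

    visits = {
        # customer_id: (sessions_last_month, pages_per_session, purchases_last_month)
        "C001": (12, 8.0, 3),
        "C002": (2, 1.5, 0),
        "C003": (6, 4.0, 1),
    }

    def segment(sessions, pages_per_session, purchases):
        if purchases >= 2 and sessions >= 8:
            return "loyal buyer"
        if sessions >= 5:
            return "engaged browser"
        return "occasional visitor"

    for customer_id, stats in visits.items():
        print(customer_id, "->", segment(*stats))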
Self-Service Technologies (SSTs) Technology is changing consumer needs and expectations of the e-world. It is increasingly evident that technological innovations and advances will continue to be a critical component of customer-firm interactions. These technology-based interactions are expected to become a key criterion for long-term business success. Effective management of these channels may increase profitability and success in the increasingly competitive marketplace. Web and other technologies offer an opportunity for technology-facilitated transactions. Technology-facilitated transactions allow customers to interact
with systems directly to get products and services in a customized fashion. This concept is known as self-service technologies (SSTs). In such a technology-driven marketplace, therefore, customers are increasingly given the option, or are being asked, to provide services for themselves through the various options of SSTs (Meuter et al., 2000). Self-service technologies are technological interfaces that enable customers to produce a service independent of direct service employee involvement. The types of technology interfaces include telephone-based technologies and various interactive voice response systems, direct online connections and Internet-based interfaces, interactive freestanding kiosks, and video or compact disc (CD) technologies. Examples of self-service technologies include banking through automated teller machines (ATMs), pay-at-the-pump terminals, electronic self-ordering of products and services, automated hotel checkout, banking by telephone, and services over the Internet such as package tracking with Federal Express and online brokerage services (Meuter et al., 2000). Customers expect good service from SSTs. Bitner suggests that many SSTs are poorly designed: even when SSTs work, customers are often frustrated by technologies that are difficult to use or understand, and poor design forces customers to return to the conventional personal service option (Bitner, 2001). Companies must make sure SSTs work as dependably as promised and that the design is user-friendly. Examples of outstanding SST transactions are Charles Schwab's online trading service, Amazon.com and the SABRE Group's Travelocity, an Internet-based travel ticketing service (Meuter et al., 2000). Southwest Airlines' online ticketing services set a standard for simplicity and reliability, and customers have rewarded the airline accordingly; it boasts the highest percentage of online ticket sales of any airline (Bitner, 2001). Amazon.com, with its highly personalized yet efficient services, is a notable SST success story. GE Medical Systems provides video and satellite-television-based "just-in-time training" on its equipment for hospital and clinic customers, which enables customers to train themselves at their convenience (Meuter et al., 2000).
Online Channel Interactions and Web Services Technologies
New technologies such as online channels are attracting customers to interact through text chat on Web sites. Web service technologies provide a means of integrating applications via the Internet. By using XML messaging to exchange data, Web services allow companies to link applications and do e-business regardless of the computing platforms and programming languages involved. IBM has proposed WSFL (Web Services Flow Language), a standard for building complex Web service interactions that meet specific goals. Microsoft, IBM, Hewlett-Packard and Sun have all made significant strides in the Web services market by working to develop standards for interoperability and building support for Web services into their offerings (Borck, 2001). Online channel interactions enable customers to interact with an agent via instant text messaging, as if they were in an Internet chat room. Companies are
using call-me buttons and collaborative browsing for this purpose. The call-me button simply passes a message to the contact center to call the user back within a given time window–much better, from the user's perspective, than waiting in a queue on the end of a phone. Collaborative browsing, where a call center agent walks a user through a Web site and pushes the pages out to the user, is a potentially more significant development. HelpMagic offers this service with its system, which places text chat, call-back or Voice over IP (VoIP) buttons on customers' Web sites, letting companies route customers to their own call center agents. Because it is a managed service, the costs are significantly reduced compared with Web-enabling a company's own call-center operation, and it is quick to implement. New media agency Cimex has developed an Online Customer Support System (OCSS), which bolts on to an existing Web site to move online communications beyond the typical FAQ section (Murphy, 2001). Byzantium, a UK tech company, has developed a system called HyPhone, which combines collaborative browsing with text chat and voice. When a user clicks on the HyPhone button, they can communicate with a call centre agent via text chat or, if their PC supports it, VoIP, which enables the agent and the user to talk, via the PC, over the telephone line being used to access the Web site (Murphy, 2001).
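To make the call-me mechanism concrete, the following Python sketch builds (and parses) the kind of XML message a call-me button might pass to a contact-centre queue, using the standard xml.etree library. The element names and the time-window convention are hypothetical; a real Web-service interface would follow an agreed schema of the sort the vendors above are working to standardize.

    # Hedged sketch: the kind of XML message a "call-me" button might pass to a
    # contact-centre queue. Element names and the time-window convention are
    # illustrative assumptions, not a published schema.

    import xml.etree.ElementTree as ET

    def build_callback_request(customer_phone, page_url, window_minutes):
        request = ET.Element("CallbackRequest")
        ET.SubElement(request, "Phone").text = customer_phone
        ET.SubElement(request, "PageViewed").text = page_url
        ET.SubElement(request, "CallWithinMinutes").text = str(window_minutes)
        return ET.tostring(request, encoding="unicode")

    message = build_callback_request("+44 20 7946 0000", "https://example.com/products/42", 15)
    print(message)

    # A receiving contact-centre application could parse it back the same way:
    parsed = ET.fromstring(message)
    print("Call", parsed.findtext("Phone"), "within", parsed.findtext("CallWithinMinutes"), "minutes")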
Consumers' Interactions at Market Spaces
Traditional marketplace interaction is being replaced by market space transactions. The market space is defined as "a virtual realm where products and services exist as digital information and can be delivered through information-based channels" (Meuter et al., 2000). Traditionally, marketplaces have had three main functions: matching buyers and sellers; facilitating transactions through logistics and the transfer of payments; and providing a regulatory infrastructure to protect both buyers and sellers in trading. The emergence of Internet-based marketplaces, termed market spaces, has started to change the process of trading, and the foundation of customer-company interactions has significantly changed in this new environment. Self-service technologies are a classic example of market space transactions in which no interpersonal contact is required between buyer and seller. These technology-driven market spaces may greatly influence the way consumers view the shopping process for technology-driven products and services. Companies are exploiting Internet technology to gain market share, and these Internet-based market spaces are helping to reduce the cost of searching for product information, enabling customers to get cheaper and better products.
A CHALLENGE TO RETAIN CUSTOMERS
In the previous section we discussed the factors that help a firm initially understand and attract a potential customer. This section deals with how to make sure that the customer keeps coming back to a firm's online business. The key to motivating customers to come back to a firm's products and services is to offer
innovative value-added services on a continual basis and create market-leading brand equity. Amazon.com is a good example of how a company has innovated many value-added services and created brand equity. Of course, these value-added services need to be supplemented with good-quality products.
Technically Sound Web Site
Firms must create a strong and technically sound Web site before they start offering online products and services. Sites should be well tested for security and data consistency. A single bug or error during a consumer's interaction can permanently turn both new and old buyers away. The Web site should be tested extensively before it is opened to the public. The design should be robust, the user interface should be friendly, and users should be able to find answers and help for any query. Customers should feel confident that their interactions and data are fully secure and handled in an error-free environment. In addition to providing an error-free environment, a commercial Web site should guide consumers through the purchasing process as if sales representatives were assisting them. The purchasing process should not only have easy-to-use features but should also track the steps consumers take in order to help them purchase. Details such as shipping prices and shipping times presented during the purchasing process help consumers know exactly when the transaction will be completed. It is important to make sure that any information a user might need is easily accessible at the appropriate stage of the purchasing process.
Fast Response Time
Response time is becoming a major factor in what makes the online shopping experience distinctive. With the emergence of the Internet and new high-speed technologies, consumers expect fast responses, whether for information or for delivery of items. Whether it is the time taken to load Web pages, respond to questions, fix problems or deliver the product, consumers expect quick answers. The quicker the response systems, the better the chances of delivering products and services to the satisfaction of consumers. Companies that fail to keep up response times give away one of the key advantages of electronic commerce. Even the smallest delay in service significantly increases the odds of a potential customer spending their money elsewhere.
Customisation
Customisation is very important for attracting and retaining customers in the online world. Today one can configure many products and services as one likes. Customisation helps consumers obtain personalized products and creates satisfaction and loyalty. It also provides consumers with a high degree of control during the purchasing process. Consumers feel as if they are creating these products and services for themselves and get a strong feeling of partnership with
businesses. Firms that allow customers to choose delivery and payment methods, and that provide features for tracking transactions during the purchasing process, claim better customer retention. Web portals are particularly good at achieving high levels of customisation, and e-businesses can draw on this experience.
Customer Relationship Management (CRM)
The recent emphasis on relationship marketing–that is, attracting, developing and retaining customers–indicates how challenging this will be for firms in the 21st century, where every customer is just one click away from the competitors. Companies must address all customer needs, from attracting customers through to delivering items, since each stage may influence customers' feelings about their relationship with the firm. There are many technological options available for enhancing customer relationships. These include the activities of the back office (e.g., billing, shipping), not just front-office options that directly contact the customer (Kohli and Gupta, 1993). Building relationships requires that companies view customers as partners, not as mere consumers or targets. A thoughtful understanding of customer needs, coupled with actions that implement a philosophy of need gratification, can produce the type of relationships that lead to customer retention and profitability. Kana iCARE (intelligent Customer Acquisition and Retention for the Enterprise) is one such eCRM suite, intended to enable Global 2000 organisations to develop effective interactions with customers. The Kana CRM suite includes components such as Kana ResponseIQ, an e-mail management system featuring automated e-mail, Web and instant messaging request management; Kana IQ, a self-service and assisted-service system for contact center agents featuring a knowledge base; and Kana iCare Analytics, which measures customer service, marketing and commerce operations across various touch points (Krill, 2001). Chordiant Software Inc. offers a customer relationship management application and Dialog Interaction Server, a Web application that can run on a thin client. It uses information stored in legacy databases and other CRM applications to guide customers through complex online purchases based on the information that they provide (Maselli, 2001).
ISSUES AND CHALLENGES OF BUSINESS-CONSUMER INTERACTIONS
Online interactions with consumers are entirely different from traditional retailing experiences. The foremost issue that differentiates online interactions is the lack of social interaction between salespeople and the customer, or even between customers. Shopping is often a very social exercise, and frequently the intent is not to purchase anything but simply to interact with people and products. In a traditional brick-
and-mortar store, friendly and helpful staff can keep customers coming back. It is very difficult to simulate or replace the social interaction environment of the physical world in an online store. Attracting online customers without providing them with the social aspect of shopping, and the convenience of shopping malls, becomes a very difficult task. In this section, we discuss various areas of concern in this regard.
Applying New Laws for E-Business
As business-consumer interactions change in the e-world, new regulatory mechanisms and laws are needed to handle online transactions. The e-world uses the Internet as a medium for global business and thus creates the need to examine the existing laws and legal frameworks available for international trade. Problems can arise wherever transactions occur, including dishonesty on the part of the retailer or the customer and differences in expectations.
Privacy and Security Concerns
One of the main concerns of online shopping relates to the privacy of consumers and the security of the data involved in transactions. It is well known that when consumers visit online stores, the stores exploit cookies to collect information about the consumers and their buying habits, for example by viewing the consumers' wish lists and the pages they visited. Cookies are short pieces of data used by Web servers to help identify Web users; a cookie is usually a string of random-looking letters that a Web server places on a computer's hard drive. Such files are planted on the consumer's computer by the Web sites that are visited, and they help track the consumer's movements while surfing a Web site. As long as the data collected through cookies is not passed on to anyone else, consumers tend not to mind, but there have been cases where companies passed consumer data to marketers and, as a consequence, consumers started receiving junk mail. This raises the concern of privacy. Generally, these fears of security and privacy are unwarranted. In an online environment, creating a relationship of trust is quite difficult, and mostly it comes through personal experience rather than through guidelines. One of the added privacy problems in the online world is the mass collection of individuals' information to create personal profiles. DoubleClick.com has recently come under significant scrutiny as it has allegedly been collecting and storing information from individuals who visit sites carrying its ads. This data can then be combined through a process called data fusion (taking information about an individual from a wide variety of sources and combining it by comparing names, e-mail addresses, etc.) and used to create a detailed profile of an individual's shopping habits. Although benefits exist for the end user in the form of customized service, most people are not comfortable with their personal details being freely available on the Internet. The use of cookies further simplifies the collection process and amplifies these fears. Online businesses should ensure that control of consumers' data resides in the hands of consumers themselves. Online privacy is a
huge concern not because consumers are paranoid but because most of them fear that they do not know how information about them will be used. Perhaps the simplest way of alleviating customers' concerns is to display statements assuring them of the privacy of any personal information. Such a statement is more likely to be believed on the Web site of a reputable, well-established company. At a minimum, having the statement indicates that the company is aware of the issue and, one hopes, will abide by it.
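To make the cookie mechanism discussed above concrete, the following minimal Python sketch shows a server issuing a random-looking identifier on a first visit and reading it back on later visits, using only the standard library. The cookie name and identifier format are illustrative assumptions.

    # Minimal sketch of the cookie mechanism: the server issues a random-looking
    # identifier on the first visit and recognises it on later visits. The cookie
    # name and identifier format are illustrative assumptions.

    import secrets
    from http.cookies import SimpleCookie

    def set_visitor_cookie():
        cookie = SimpleCookie()
        cookie["visitor_id"] = secrets.token_hex(8)   # random-looking string stored on the PC
        return cookie.output(header="Set-Cookie:")

    def read_visitor_cookie(cookie_header_value):
        cookie = SimpleCookie()
        cookie.load(cookie_header_value)
        return cookie["visitor_id"].value if "visitor_id" in cookie else None

    header = set_visitor_cookie()
    print(header)                                     # sent by the server on the first visit
    value = header.split(": ", 1)[1]                  # what the browser stores and sends back
    print("recognised visitor:", read_visitor_cookie(value))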
Customer Intimacy
Developing intimacy with consumers is one of the daunting tasks for online businesses, since they do not get an opportunity for personal contact with consumers. Because of global business opportunities, online businesses may have a large customer base and at times may find it difficult to interact with consumers on a one-to-one basis to create customer intimacy. However, business success demands that each customer be recognized as important and valuable. One method of fostering an intimate relationship with the customer is to track their activities and past purchases. If a customer often purchases a certain type of product, then when a new similar product becomes available it can be displayed as the customer visits the Web page. Amazon.com performs this task very well and can generate numerous additional sales by recommending similar products. Customers get the benefit of seeing what is relevant to them, while the company increases its perceived value to the customer and also increases sales. This can be accomplished through the use of cookies, with or without the user's knowledge, or through a login process.
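A minimal sketch of this style of recommendation, assuming a small set of hypothetical orders, is shown below in Python: items frequently bought together with a customer's usual purchases are surfaced on the next visit. Real recommendation engines are far more sophisticated; this only illustrates the idea.

    # Toy co-purchase recommender. Order data and product names are hypothetical.

    from collections import Counter
    from itertools import combinations

    past_orders = [
        {"espresso machine", "coffee beans"},
        {"coffee beans", "milk frother"},
        {"espresso machine", "milk frother", "coffee beans"},
    ]

    co_purchases = Counter()
    for order in past_orders:
        for a, b in combinations(sorted(order), 2):
            co_purchases[(a, b)] += 1
            co_purchases[(b, a)] += 1

    def recommend(product, top_n=2):
        scored = [(count, other) for (item, other), count in co_purchases.items() if item == product]
        return [other for count, other in sorted(scored, reverse=True)[:top_n]]

    print(recommend("coffee beans"))   # items most often bought with coffee beans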
Handling Returns and Complaints
Another concern that consumers have about online products and services is the manner in which returns and complaints are handled. Personal contact is critical in handling the wide variety of possible return and complaint situations. Companies that back their online service with a physical store therefore have far less to worry about in this respect. But companies that have no (or only limited) physical store outlets to provide sales support find it difficult to attract customers with their ability to handle returns and complaints. The weaknesses of e-businesses become clear when customers wanting to return unwanted or damaged items experience significant problems in getting these back to sellers and settling their disputes. This clearly demonstrates the importance of having robust systems in place for handling complaints and returns. One of the difficulties is the inability to fix responsibility, since many players are involved in a single transaction. Internet service providers (ISPs) are generally responsible for providing Internet connectivity. Online stores handle the content of the products and services that consumers order. Third-party agencies may be involved in the delivery of items (supply and logistics). Returns are difficult to process with online stores as the issue of
delivery methods and costs becomes ambiguous (who should pay?). It is difficult to assess which of the players made the customer unhappy. At times, the problem may lie at the customer's end, yet the blame may be passed on to the online store. For example, a customer may have a problem caused by the Internet connection but believe that the problem lies with the company's Web site. It is also difficult to discuss a defect or damage in a product when both parties cannot look at it simultaneously. This puts a large strain on customer-business relationships and stresses the importance of providing a quality product and Web site, supported by an effective mechanism for handling returns and complaints.
MANAGING THE BUSINESS-CUSTOMER RELATIONSHIPS
The challenge for firms is to gratify and perhaps delight customers, while avoiding the perception that they do not respect customer needs. Based on our discussion of the issues involved in business-customer interactions, Figure 2 depicts a framework for managing business-customer relationships. This proposed framework is based on the KM architecture described previously and suggests that KM software solutions should be embedded with the various features shown in Figure 2. The framework will enable an organisation to manage its business-customer interactions more effectively. Each aspect of the proposed framework is briefly described below.
Figure 2: Managing business-customer relationships
• Customer
• Business
• Customer Knowledge Base
• Effective Design of User Interface, Personalized Products and Services, Knowledge Management Tools
• Open Communication, Interactive Feedback Mechanism, Fair Play, Trust
• Customer Understanding
• Consumer Technology Availability
Design Effective User Interface
Firms can develop and reinforce respect for a customer's security, esteem and fairness needs through proper design of their user interface. Site developers must understand the importance of customer interaction in the delivery and servicing of items sold online.
Create Flexible and Personalized Products and Services
A mix of technologies can be used to enhance a firm's ability to create and deliver personalized products and services. As mentioned previously, a firm can use many emergent forms of self-service devices, such as informational kiosks and voice mail systems, Internet-based interactive technologies and online services (like Prodigy or America Online), to allow customers to act on their own behalf. In addition, companies collect demographic, historical, behavioral and even psychological data about their customers, which enables them to create personalized products and services. However, as detailed information about individual consumers becomes widely available for personalization, care must be taken to address consumers' privacy concerns.
Provide Interactive Customer Feedback Mechanisms
Companies need to create an online feature on their Web sites through which they can ask customers how they feel about the company's products or services in relation to the three basic needs of security, intimacy and service. Companies can organise focus groups consisting of a small number of customers to discuss need-based issues and can then gather feedback from a large number of customers through various online surveying methods. By eliciting customer feedback in this way, it is possible to monitor efforts to improve the gratification of customer needs and to use the results as a basis for action. Companies should provide online chat forums where customers can share feedback openly with the company as well as with other customers. Companies should get to know the customers who have experienced a problem or expressed dissatisfaction, and should hold regular online meetings to discuss customer complaints and solutions to ward off future dissatisfaction. Cisco Systems encourages customers to help each other by posting problems and solutions on a Web-based users group. A centralized database of such feedback information may help firms to innovate as well as to better understand their customers.
Create Customer Knowledge Base Traditionally, most companies use three or more separate systems to attract customers, sell products and services, and service faults. This way, one function would not know the information fed back by the customer to other functions (Nelsen and Fraley, 2000). Disconnected systems cannot answer a host of important questions such as: when there are serious faults or performance degradations, which services and customers are affected? Which customers are most important? How quickly can the company identify and contact those customers when there is a
problem? How have individual customer’s services performed over time, and how does this compare to the service last year? Answering these questions requires embedding customer awareness and the services they receive directly into the service creation and management processes, enabling fault and performance management systems to create a direct association among the network, services and customers. For this to occur, information about customers, services and other elements must reside in a common knowledge base accessed by the service management applications. The same knowledge base becomes the repository for all information, the services and the customers using those services. A customer-centric service management system can become the primary basis for service differentiation, improving customer satisfaction and loyalty. This will pay big dividends in the increasingly competitive markets.
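The following sketch, which assumes a deliberately simplified schema and uses SQLite purely for convenience, shows how such a common knowledge base can trace a network fault straight to the affected services and customers, ordered by customer importance. The table and column names are hypothetical.

    # Hedged sketch of a shared service-management knowledge base: a fault on a
    # network element is traced to affected services and customers. Schema and
    # data are illustrative assumptions.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE services  (service_id TEXT, network_element TEXT);
    CREATE TABLE customers (customer_id TEXT, service_id TEXT, importance INTEGER);
    INSERT INTO services  VALUES ('dsl-01', 'router-7'), ('voip-02', 'router-7'), ('dsl-03', 'router-9');
    INSERT INTO customers VALUES ('C001', 'dsl-01', 3), ('C002', 'voip-02', 1), ('C003', 'dsl-03', 2);
    """)

    faulty_element = "router-7"
    affected = db.execute("""
        SELECT c.customer_id, c.importance, s.service_id
        FROM customers AS c JOIN services AS s ON c.service_id = s.service_id
        WHERE s.network_element = ?
        ORDER BY c.importance DESC
    """, (faulty_element,)).fetchall()

    for customer_id, importance, service_id in affected:
        print(f"Contact {customer_id} (importance {importance}) about {service_id}")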
Use KM Tools to Understand Customers
Apart from determining consumers' preferences and choices through direct customer feedback, it is desirable to understand customer behavior by using KM tools. KM tools enable an organisation to better understand the business processes at work by searching automatically through huge amounts of data, looking for patterns of events and presenting these to the business in an easy-to-understand graphical form (Rawlings, 1999). KM tools help to solve business problems by analyzing the data to identify patterns and relationships that can explain and predict behavior. Organisations need these new solutions if they are to remain competitive. With the massive increase in data being collected and the demands of a new breed of applications such as customer relationship management, demand planning and predictive forecasting, it becomes imperative for organisations to use a KM framework as part of their online framework for managing better business-consumer relationships.
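To illustrate the predictive side of such tooling in the simplest possible terms, the sketch below forecasts next month's demand as a moving average of recent, made-up sales figures. Real demand-planning and predictive-forecasting tools use far richer models; this is only a stand-in for the idea.

    # Toy demand forecast: a simple moving average over hypothetical monthly sales.

    monthly_sales = [120, 135, 128, 150, 160, 155]   # hypothetical units sold per month

    def moving_average_forecast(history, window=3):
        """Forecast the next period as the mean of the last `window` observations."""
        recent = history[-window:]
        return sum(recent) / len(recent)

    forecast = moving_average_forecast(monthly_sales)
    print(f"Forecast for next month: {forecast:.1f} units")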
Build a Trusting Relationship
Firms of the 21st century require open communication to build trust. Online media make it possible to stay closer to customers. Online chat, e-mail and educational material periodically posted on the site can support open communication for building trust and relationships. Customers should also be invited to assess business performance by navigating to semi-confidential financial reports. Sharing strategic plans and information with customers will help create and promote trust in the relationship.
Create a Sense of Fair Play Trust has long been the cornerstone of any successful business relationship. Although e-commerce has become ubiquitous among both consumer retailers and business-to-business (B2B) merchants, customers’ acceptance of it is based on trust in the transaction and business practices of the other party. The difference between trust online and in all other contexts is that it is more difficult to assess potential harm and goodwill of others during online interactions.
Customers expect companies to treat them fairly. Consumers become angry and mistrustful when they perceive otherwise. Fair play or justice issues become salient when considering the degree of reciprocity existing between a business and a customer. Companies must keep promises and commitments as they are made. Promises should also be supported through appropriate regulatory protections to handle any disputes. Companies shouldn’t lie to customers. They should treat consumers courteously and respectfully while delivering products online. One bad online interface can result in the loss of several valuable customers. Safeguarding or enhancing self-esteem is the key to creating customer delight. Many companies treat the customer as an important individual, not just as a member of a certain class of consumers. When companies start viewing customers as unique with particular problems and personal histories, the customers are delighted. Remembering the names of repeat customers is invaluable, whereas stereotyping negates a customer’s sense of individuality.
CONCLUSIONS
The online world is vastly different from the 'real' world, and many of the traditional principles of customer relations do not transfer well to it. The Internet has changed the way people interact with businesses while simultaneously increasing customer expectations. In the 21st century, the role of the customer is going to be considerably more important and complex than simply that of a passive service recipient. Organisations with a greater level of knowledge about their customers will have a competitive advantage over others; this advantage may place them in a position of proactive strength rather than reactive weakness. Many researchers expect the majority of businesses on the Internet to lose market share if they do not deal with their customers effectively. This chapter shows how consumers' interactions in the online world will be fundamentally different from those in the traditional marketplace and how businesses should adapt accordingly. Our society has to adapt to this new way of shopping and doing business because, for better or worse, the new medium will change business-consumer interactions drastically. Recognizing this paradigm shift to a knowledge-management-centered economy, organisations need a proper framework to guide their transition to this new phase of knowledge management. The mere application of technology, without the previously mentioned people and process components, will result in the failure of a knowledge management initiative. To avoid islands of knowledge, therefore, future research should examine mechanisms for integrating disparate systems, ranging from combinations of legacy and incremental system implementations to enterprise-wide portals. It would also be worthwhile for future research to determine how the added knowledge management functionality of Web sites or EIPs could create more loyal customers and longer relationships. New channels such as the Web, business-to-business marketplaces, wireless, interactive voice response-based systems and new global e-commerce competitors are
increasing the complexity of managing customer relationships. Web-based commerce is changing the dynamics of customer interaction and the kinds of assistance and support customers expect. One example is the waves of e-mail that companies now receive via their Web sites. Organisations need to reengineer business processes for the integration and redesign of customer data. Business processes should be triggered by IT-enabled customer interactions and should support self-service offerings. When connected via a shared CRM solution, manufacturers, distributors, resellers and retailers can more easily leverage end-customer data to collaborate in product development, sales, marketing and service initiatives–and move functions to the channel partners best equipped to perform them. Companies could consider the following steps when managing business-consumer interactions in the e-world.
• Use Web-based database, Web messaging and data mining technology to create an accurate data repository of customers. A vital element of managing customers in electronic business is to have an accurate data repository. The data repository can be spread across disparate sources, including legacy hosts and relational databases, and should be integrated so that it can be accessed from the Web, call centers, the telephone or any other medium.
• Self-service offerings should be supported by providing simple FAQ (Frequently Asked Questions) lists and interactive chat sessions. Intelligent search engines should be employed to help customers with their inquiries. Since the Web exposes a business to a huge population, it would be inadvisable to place a "call-me" button on every page of a Web site. Integrated systems should also support computer-telephony integration to enable sophisticated processing of customer requests for callbacks to available phone lines. A company's Web portal should also encourage real-time chat to create virtual communities and chat forums. Real-time text chat can be very effective in facilitating Web-based customer service–in many cases, a targeted and instantaneous response to a simple question will satisfy a customer.
• Web messaging should further enhance e-mail interactions, as it will help customers post inquiries publicly or privately and get answers from the Web site or pick up responses via e-mail, phone call or pager.
While the benefits of using the Web, Internet call centers, Web messaging and other technologies are compelling, implementation is challenging. Disparate data repositories are difficult to aggregate, and it is never easy to coordinate separate organisations and business units–that is, customer service, marketing, information technology and operations. Choosing a customer relationship strategy can be quite challenging in a time of great upheaval and requires further research. Studies could identify effective change management strategies so that detailed guidelines can be developed to help practicing managers be more effective when managing business-consumer interactions in the e-world.
REFERENCES
Bitner, M. J. (2001). Self-service technologies: What do customers expect? Marketing Management, 10(1), 10-11.
Borck, J. R. (2001). Solving the Web services puzzle. InfoWorld, 23(38), 44, 52.
Briody, D. (2000). Retailers reach for multi-channel customers. InfoWorld, 22, 36.
Foreman, S. (2000). Marketing: Measuring performance and understanding customer privacy. Manager Update, 11(4), 8-18.
Hammond, C. (2001). The intelligent enterprise. InfoWorld, 23(6), 45-46.
Hanley, S. and Dawson, C. (2000). A framework for delivering value with knowledge management: The AMS knowledge centers. Information Strategy, 16(4), 27-36.
Kohli, R. and Gupta, J. N. D. (1993). Strategic application of organisational data through customer relationship databases. Journal of Systems Management, 44(10), 22-25, 39-41.
Krill, P. (2001). CRM marriage bears fruit. InfoWorld, 23(40), 26.
Lee, S. J. (2000). Knowledge management: The intellectual revolution. IIE Solutions, 32(10), 34-37.
Maselli, J. (2001). Chordiant's CRM app acts as online guide for shoppers. InformationWeek, (845), 51.
McDonough, B. (2000). Knowledge management software market forecast and analysis, 2000-2004. IDC Report, August.
Meuter, M. L., Ostrom, A. L., Roundtree, R. I. and Bitner, M. J. (2000). Self-service technologies: Understanding customer satisfaction with technology-based service encounters. Journal of Marketing, 64(3), 50-64.
Murphy, D. (2001). How to exploit all forms of contact. Marketing, 31-32.
Nelsen, D. and Fraley, A. (2000). Customer king of telecom jungle. Telecommunications, 34(9), 61-62.
Onge, A. S. (2001). Knowledge management and warehousing. Modern Materials Handling, 56(3), 33.
Rawlings, I. (1999). Using data mining and warehousing for knowledge discovery. Computer Technology Review, 19(9), 20-22.
Saporito, P. L. (2001). The data to make decisions. Best's Review, 101(12), 130.
Schneider, B. and Bowen, D. E. (1999). Understanding customer delight and outrage. Sloan Management Review, 41(1), 35-45.
Secker, M. (2001). Do you understand your customer? Telecommunications, 35(3), 108-110.
Stratigos, A. (2001). Knowledge management meets future information users. Online, 25(1), 65-67.
Wachter, R. M. and Gupta, J. N. D. (1997). The establishment and management of corporate intranets. International Journal of Information Management, 17(6), 393-404.
Chapter XIII
Electronic Money and Payment Systems
Santosh K. Misra, Cleveland State University, USA
Jayavel Sounderpandian, University of Wisconsin-Parkside, USA
ABSTRACT
This chapter describes the demands on any acceptable type of money or payment system and examines how well the existing electronic money and payment systems satisfy those demands. Certain weaknesses in security and performance still remain in these systems, and they need to be overcome before the systems can be completely accepted. It is also not clear what kind of government regulations may be brought to bear on these systems. Even with these weaknesses and uncertainties, a variety of systems are thriving, and their details are given in this chapter.
"Electronic money is likely to spread only gradually and play a much smaller role in our economy than private currency did historically. Nonetheless, the earlier period affords certain insights into the way markets behaved when government rules were much less pervasive. Those insights, I submit, should be considered very carefully as we endeavor to understand and engage the new private currency markets of the 21st century." – Alan Greenspan, chairman of the Federal Reserve Board, in an address given at the U.S. Treasury Conference on Electronic Money & Banking: The Role of Government, Washington, DC, September 19, 1996. Also published in The Future of Money in the Information Age (Cato Institute, 1997).
INTRODUCTION
Some consider electronic money "the killer application for electronic networks" that is "going to hit you where it really matters–in your wallet. It's not only going to revolutionize the Net, it will change the global economy." While the jury is still out on such proclamations, interest in electronic money and payment systems has grown steadily. Even though electronic money has not yet become ubiquitous, there is enough emerging evidence that its use is growing, and sometime in the distant future electronic money may replace money as we know it today. The term electronic money is used in a variety of contexts. Some consider electronic money a substitute for cash, some associate it with systems used to carry out retail transactions, and others think of it as a prepaid electronic device that can record a monetary value for use by consumers. Electronic money, in a sense, is nothing more than a collection of bits recorded in an electronic storage device. These bits represent a monetary value that a consumer may have purchased at some point in time. The consumer may use these electronic bits to make a purchase, and the stored value in the device would be appropriately reduced. The use of electronic money itself is an exchange of bits between two storage devices, where the 'volume' of bits stored in one device is reduced by the amount of the transaction and the volume in the other device is increased by the same amount. A number of electronic money systems are currently in operation. For example, Mondex (www.mondex.com) is a type of smart card that can be used to store money as well as carry out transactions. Other systems in use include Visa Cash (www.visa.com) and Proton (www.protonworld.com). Many of these smart cards are reloadable, i.e., they can be used as purses; money can be loaded into or taken out of these electronic purses. Typically an electronic reader, similar to those in ATMs, is used with smart cards and electronic purses to complete a transaction. Creators of these smart cards believe that the cards will replace traditional purses someday.
Figure 1: A smart card
Electronic money, as just defined, differs from the traditional payment systems that many of us are familiar with: credit and debit cards. Access to an electronic
communication network is required for using a credit or a debit card. A transaction using these conventional cards has many steps:
• the credit or debit card is inserted into a card reader;
• the card reader reads the card information and initiates an electronic authorization request to the card issuer's financial institution;
• the reader transmits the amount of the transaction;
• the financial institution sends an authorization for the amount of the purchase; and
• the card reader completes the transaction by printing a receipt and presenting it for the customer's signature (or PIN).
The key to this type of transaction is the availability of an electronic communication network through which an authorization for the transaction amount can be secured. Electronic money avoids this authorization request and confirmation cycle. What are some of the advantages of electronic money and payment systems? From the consumer's point of view, it reduces the risks associated with carrying cash. An individual may also be able to save time at checkout counters; there would be no need to count money for payment or wait for change. Electronic money is more secure than cash: stolen or lost electronic money cards can be replaced without loss of money. Electronic money systems may also provide better value to a merchant. For example, a merchant does not bear the risk of keeping cash on his premises, nor does he need an armed escort to take his daily revenue to a bank. The merchant does not have to worry about employees miscalculating change, since computer programs do the job. Checkout lines would also move faster–there is no need to count money to make change or print a receipt. Finally, we would save a few trees by reducing the printing of paper money. It is doubtful that physical currency will fall into disuse in the foreseeable future. However, growing familiarity with electronic payment systems and smart card technology, together with the reduction in the production costs of smart cards, is perhaps going to improve the prospects for the replacement of physical currency.
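The following minimal Python sketch contrasts this with the stored-value model described above: paying from an electronic purse simply moves value from one device to another, with no online authorization cycle. The purse class, amounts and method names are illustrative assumptions, not a description of any particular card system.

    # Minimal sketch of a stored-value (electronic purse) payment: value moves
    # between two devices without an online authorization round-trip. The class
    # and amounts are illustrative assumptions.

    class ElectronicPurse:
        def __init__(self, owner, balance_cents):
            self.owner = owner
            self.balance_cents = balance_cents

        def pay(self, merchant_purse, amount_cents):
            if amount_cents > self.balance_cents:
                raise ValueError("insufficient stored value")
            self.balance_cents -= amount_cents            # value leaves the card...
            merchant_purse.balance_cents += amount_cents  # ...and appears at the merchant

    customer = ElectronicPurse("customer", 5_000)   # $50.00 loaded onto the card
    merchant = ElectronicPurse("merchant", 0)
    customer.pay(merchant, 375)                     # a $3.75 purchase, no network round-trip
    print(customer.balance_cents, merchant.balance_cents)   # 4625 375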
CONCEPT OF MONEY

What is money? It is those pieces of paper or metal that a check-out clerk at a grocery store accepts to give you, say, a gallon of milk. Why? Just like you, the grocery clerk recognizes the piece of paper or the metal as money. Is the grocery clerk likely to accept any piece of paper as money? The answer is, obviously, no. What makes one piece of paper equal to money and many other pieces not so?

Traditionally, we have used tokens or objects as money. Such tokens must possess four major characteristics:
• Monetary tokens must be recognized as a medium of exchange. Everyone must be willing to exchange these tokens for goods and services.
• Monetary tokens must represent a standard value. It should be possible to attach a specific value to each token used as money. For example, the value
associated with a dollar bill is obviously different from the value associated with a one hundred dollar bill.
• Tokens must have a stored value. If we keep these tokens for a long period of time, they must not degrade or become useless. Students of economics may not readily agree with this statement since a dollar today is worth more than a dollar tomorrow. If we consider that a dollar tomorrow is still a dollar even if it buys less, the stored value concept becomes clear.
• Tokens must be durable. Paper or coin money may be destructible, but they are durable.

There is nothing in the above requirements that says money has to be a greenback or a piece of metal. Almost any object can be given the status of money as long as it satisfies the above requirements. Historically, many different objects have been used as money. Many primitive societies used a myriad of artifacts as money. In the United States, Native Americans used a variety of items, including buckskin and tobacco, as money. In times of war or hyperinflation, gold and other precious metals have been used as money. There is no reason why electronic bits cannot be used as money. It is only a question of satisfying the above requirements.

Modern economies use a surrogate form of money that does not rely on specific tokens; it is sometimes called notational money. In a notational system, record books are adjusted to complete a transaction. For example, you may issue a check to a merchant and receive a gallon of milk. As a consequence of the check, your account balance is reduced by the amount of the check and the receiver's balance is increased by a corresponding amount. Even though no tokens are transferred in a notational exchange, it is still anchored on the same characteristics as token money. Notational money is not destructible, since destruction merely results in an incomplete transaction.

A form of electronic notational money is already in wide use. Banks in the United States and in most countries of the world handle high-value payments electronically. Many smaller transactions are also handled electronically in most countries. For example, many organisations in the United States pay their employees electronically. At the end of a pay period, the paying organisation sends an electronic order to its bank directing the bank to credit the accounts of its employees with their salary amounts. The bank responds by debiting the account of the payer by the total of all payment amounts and crediting the accounts of all the employees by their salary amounts. The electronic form of transactions has been the norm for high-value payments for more than 30 years. All international settlements are also done electronically. When the United States settles its current account balance with, say, Australia, the settlement is done electronically through the Bank for International Settlements in Basel, Switzerland.
ELECTRONIC MONEY

Electronic money is a set of electronic bits recorded on some device, such as a stored value card (e.g., Mondex) or an electronic wallet (discussed later).
These bits are nothing other than a notation recorded electronically indicating an amount of money. As of now, electronic money does not represent any new form of money but only a new form of representing money. What do we mean by a new form of money? Consider the history of the banking system in the United States. Throughout the 19th century, most of the money in this country was in the form of notes issued by private banks. This chaotic situation continued in one form or another until the National Banking Act was adopted in 1863. Until that time, bank notes of one bank looked different from those of another. There was no standard-looking 'dollar' bill. In this period of "wildcat banking" (Greenspan, 1996), anyone could create a bank and issue useless bank notes. Electronic money does not fit into this pattern of money issue. As of now, we do not have any "eMoney" that is purely electronic. What we have is Dollars, or Marks, or Yen stored in a digital form. The value of this digital money is firmly pegged to the underlying monetary system. Electronic money in US Dollars is firmly related to the token money of the US Dollar; electronic money in Deutsche Marks is firmly related to the token money of the Deutsche Mark, and so on.

Does electronic money then satisfy the four required characteristics of money? Clearly, it can be a medium of exchange. We are, in reality, exchanging Dollars or Marks or Yen and not any new form of eMoney. The mode of exchange is, of course, different compared to the exchange of token money. Using electronic money, we would not be handing over a few tokens in exchange for our purchases, but adjusting the value of money stored in our stored value card or electronic wallet.

Electronic money, unlike tokens, is potentially weak in durability. As is well known, electronic bits are susceptible to relatively easy destruction. For example, a magnetic stored value card may lose all its bits if subjected to electromagnetic radiation or even extreme cold or heat. Devices used to carry electronic money are also potentially non-durable. A device carrying electronic money may be physically damaged, resulting in the destruction of the contained information. Durability of money may also be viewed in terms of its role in a monetary system. If token money is destroyed, it has the effect of withdrawing some value from the total value of token money in circulation. For example, if your paper dollar is destroyed, this dollar is physically removed from the volume of dollar bills in circulation. Electronic money, on the other hand, can be destroyed without leaving a trace. You, the owner of the electronic money, may become poorer, but it would have no effect on the volume of money in circulation.

Can electronic money become a substitute for token money? It is still too early to say if it will. The use of electronic money has grown slowly over the last decade, and it is quite possible that such usage will continue to grow. There are many technological issues associated with a system that can support the use of electronic money. We discuss a number of popular systems in a later section of this chapter.
MONEY AND TRANSACTION

Perhaps the most important use of money is to complete a transaction. The transaction may be for something as simple as buying groceries, a bit more involved such as being paid for services rendered, or quite complicated such as financing a large project. If an electronic money system is to be built and universally used, such a system must be able to support the requirements of a transaction. We discuss the common characteristics of a transaction next.
• Atomicity: Atomicity is defined as the ability to pay for products or services one receives to the exact amount. For example, if a product costs 42 cents, one should be able to pay exactly 42 cents rather than 40 or 50 cents. If monetary payments are restricted, for whatever reason, to multiples of 10 cents, then it would be impossible to pay 42 cents. At the same time, if all the prices of products are also multiples of 10 cents, then there will not be a problem. Atomicity requires that the unit of money payable be small enough to match all prices.
• Anonymity: A transaction may or may not be anonymous. For example, you may buy groceries by paying cash–notes and coins. The seller of groceries need not know or record your identity. The seller may, for bookkeeping and tax purposes, keep a record of items sold and the total value received. A non-anonymous transaction involves a complete record of the transaction, even if the payment is made in cash. For example, you may buy a prescription medicine by paying cash. This transaction is not anonymous since the pharmacist is, by law, required to record your identity along with other details of the sale. When a notational payment is used, the transaction is always non-anonymous. As an example, consider any transaction using a check or a credit card. Payment records will contain the identity of the customer.
• Durability: A transaction is durable when both parties agree and record that the transaction is complete. In general, the completion of payment and the delivery of contracted goods and services mark the completion of the transaction. Fulfillment of warranty obligations may extend this contractual period. If the payment fails to materialize or the delivery is incomplete, the transaction would be deemed incomplete.
• Non-repudiability: A transaction is non-repudiable if it cannot be denied by one of the parties. Consider the following scenario. A customer placed an order to buy 1,000 shares of Company X at 9:30 a.m. at the prevailing price of $20. At 12:30 p.m., the share price dropped to $15. The customer should not be able to deny the earlier transaction and take advantage of the lower price. Alternatively, let us say the share price went up to $25. Then the broker should not be able to deny that he sold shares earlier at $20. A system carrying out a transaction needs to establish safeguards to ensure non-repudiability.
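One practical detail behind the atomicity requirement is how amounts are represented in software. A common implementation convention, offered here only as an illustrative sketch and not as part of any particular payment standard, is to keep monetary values in integer minor units (cents) or in a decimal type, so that an amount such as 42 cents can be charged exactly:

from decimal import Decimal

# Binary floating point cannot represent many decimal prices exactly, so summing
# prices as floats drifts away from the exact cent amount the customer owes.
print(0.1 + 0.2)                                            # 0.30000000000000004
print(10 + 20 + 12)                                         # 42 exact integer cents
print(Decimal("0.10") + Decimal("0.20") + Decimal("0.12"))  # Decimal('0.42')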
MAKING ELECTRONIC PAYMENTS

We now focus our attention on systems that make electronic payments possible. Let us start by examining conventional payment systems. Our first conventional payment system is payment by cash. In this system, the purchaser would normally hand over cash for goods or services received. Transactions of this type are common; examples include buying gasoline at a pump, a magazine at a stand, or a can of soda from a dispenser. Such a transaction is completely anonymous and memory-less. The transaction is completed as soon as payment is made and the goods or service is received. An electronic equivalent of this type of transaction may be envisaged as follows:
• a customer walks up to a soda machine;
• the customer inserts a stored value card in a receptacle of the soda machine and selects the desired soda type;
• the machine debits the value of the selected soda from the stored value card and credits the internal register of the machine by an equivalent amount;
• the machine dispenses the can.

Though the above scenario is simple, a number of systemic issues can arise. For example, how do we know for sure that credits recorded in the vending machine's register are legitimate? How does the soda machine verify that the record of money in the stored value card is accurate and not the result of some wildcat scheme? As mundane as these issues may appear, they represent formidable challenges to the growth of such a practice.

A second form of conventional system is based on the use of a bankcard: credit or debit cards. As is generally known, we can pay for a large variety of items using our bankcards. The transaction involving a bankcard is rather involved. In general, when a bankcard is used, the merchant checks with the card-issuing bank for the available credit for that card. If sufficient credit is available, the purchase is authorized. The cardholder then reimburses the bank at the end of his billing period. What are some of the known factors that govern such a transaction? In a face-to-face transaction, the identities of both parties are known to each other. The transaction is non-repudiable and authentic. Non-repudiability and authenticity may suffer when we engage in a transaction using a telephone. For example, if we call an 800 number to order a shirt from Lands End, we are trustful of the 1-800 number and believe that the party on the other end of the phone is in fact an order taker from Lands End. The other party, Lands End in this case, also believes that you are the legitimate owner of the bankcard and are authorized to carry out the transaction.

An electronic equivalent of the bankcard scenario is not all that different. We type the card number into a computer screen. The card information is transmitted to the merchant, say Lands End, from whom we want to buy a shirt using the company's electronic business system. Just like the case of telephone ordering, we believe that our credit card information is being safely transmitted. We do not have any face-to-face contact with the merchant. What makes this type of transaction
different from a telephone transaction is the complete lack of control of the network, the Internet, through which the card information is transmitted. We would like to make such credit card use a secure process. What are the characteristics of a system that can support electronic transactions, whether it is for a vending machine or an electronic purchase?
• Identifiability–Any system used to support transactions is likely to be used for many transactions. It should be possible to isolate one transaction from another. In other words, records in the system should appear as a sequence of separate transactions.
• Consistency–A system is considered consistent if it is able to capture comparable information for comparable transactions. The information captured by the system would depend upon the type of transaction in question. For example, the equivalent of an electronic cash transaction may be backed by a minimal set of data, while a purchase through the Internet may need to be supported by a more extensive record of the transaction.
• Scalability–The system should be scalable both in the volume of transactions it can support as well as in the size of transactions. It is expected that a very large number of micro-transactions would occur using electronic business systems. For example, a customer may be required to pay a few pennies if he chooses to download a news item from a news archive, listen to the broadcast of a baseball game from a distant sports arena or listen to classical music live from Vienna, Austria. Some transactions do not fit into this micro-transactions framework. An example would be buying a custom-trimmed Mercedes direct from the manufacturer in Stuttgart, Germany. A scalable system should be able to accommodate these varying sizes of transactions, which may differ greatly in traffic volume. It is reasonable to expect that there would be many more penny transactions than Mercedes purchases.
• Interoperability–A system is considered interoperable if the owner of one form of money is able to convert it into another form. Suppose you are interested in listening to a live concert from a famed opera house in Sydney, Australia. If your electronic money is in US dollars, you would need to convert it into Australian dollars when you confirm your order for the broadcast. The debit from your account should be in your currency, and the credit in the local currency of the other party. Suddenly, the transaction is not only significant from the local banking system point of view, but also from the Bank for International Settlements point of view. There is yet another form of interoperability that an electronic payment system needs to consider. Suppose you carry a certain type of stored value card and you use it to buy a can of soda from a vending machine. The machine carries an interface that reads cards and deducts the price of the can before dispensing. How can you be sure that your card will be accepted by the vending machine, especially if you are far away from home? Will the card reader interface be the same in New Delhi, India, as in San Francisco, California? Will the card be compatible with different brands of
vending machines, or are you expected to carry different cards for different brands? How are we going to persuade competing brands to adopt a standardized system? Answers to these questions are not yet clear.
• Vulnerability–A payment system is vulnerable if the transfer of funds can be intercepted and hijacked. Consider a credit card payment for a purchase made through the Internet. If the credit card information can be stolen during the transmission from the customer to the merchant, the system would fail the vulnerability test. Fortunately, advances in cryptographic technologies have significantly deterred unauthorized access to data. A number of references are available for readers interested in encryption and cryptography (Garceua, Matos & Misra, 1998; Greenstein & Feinman, 2000).
• Reliability–System reliability is a major operational characteristic required of an electronic payment system. Reliability requires that all recorded payments truly reflect the terms of the transaction. If electronic cash is used, the system should correctly debit the payer and credit the receiver by the amount of the transaction. Similarly, if a credit card is used in a Web-based transaction, the transaction should correctly be charged to the account holder. We must also expect a system to be operationally reliable. It must be available for use anytime and anywhere. There must be failsafe alternatives available for contingencies when the electronic system, for any reason, is not accessible. Consider some form of electronic cash. It should be possible for an individual to pay for purchases using electronic cash anytime and anywhere, just as the person would pay using cash. For cash transactions, merchants do not usually contact a third party such as a bank. We would expect a similar kind of convenience from electronic cash with a reliable operational system. Credit card-based transactions, on the other hand, are a three-party activity. The seller usually contacts a bank before accepting the credit card payment.
• Transaction cost–Unfortunately, nothing in life is free. It is to be expected that an electronic payment system would cost us. So long as the cost of an electronic transaction is comparable to current physical money standards, we may not have anything to complain about.
• Privacy–Privacy is one of the most critical aspects of an electronic payment system and is of interest to anyone participating in electronic payments. All electronic transactions must be private, i.e., the facts associated with the transaction must not be divulged voluntarily or involuntarily to anyone not legally entitled to such information. An electronic payment system must install safeguards to make total privacy possible. The notion of privacy for electronic cash transactions goes beyond the ordinary definition of privacy. Such transactions must be memory-less and anonymous. The system must not keep any residual records for such transactions beyond what would be expected for equivalent physical money-based transactions. The implications of the above definition need careful consideration. Nominally, there should be a guarantee that various pieces of information about a transaction are kept private. The information includes identities of buyers and sellers, items
purchased and price paid. A more complete privacy is achieved when there are safeguards against extrapolation of information from nominal facts. For example, it should not be possible for a merchant to analyze our consumption habits if we specifically do not permit such analysis. Similarly, it should not be possible for a person to hide his/her bad credit rating using privacy as a shield while seeking a credit from a merchant. Protection of privacy must also apply to keeping one’s information private from various government agencies. For example, it should not be possible for the IRS to audit your transactions without your approval or a court order.
SMART CARDS

Futurists have been speculating about the prospects for a cashless society for many years. Such predictions became more frequent following the introduction of "smart" cards–cards containing a computer chip–in the mid-1970s. The introduction of smart cards was expected to reduce our reliance on cash and checks, especially for low-value purchases. Even though smart cards and other types of electronic payment systems have yet to substitute for cash and checks, conversion, nevertheless, has started. For example, the use of debit cards has significantly accelerated in recent years, perhaps reducing the use of paper checks.

The smart card has its origin in development work done in Japan, Germany and France in the early '80s. However, its use was not very widespread until the mid-1980s, with most work confined to research and development. A historical perspective on smart card development can be found in an online museum at http://www.cardshow.com/EN/Public/museum/bienvenue.html. The use of smart cards has since taken off, as can be seen from data published by the Smart Card Industry Association (http://www.scia.org). Not only has the number of smart cards grown from 805 million in 1996 to 2.8 billion in 2000, but these cards are also used by many different industries, as shown in Table 1. Smart cards can be used for small-value purchases such as at a vending machine, or for more data-intensive activities such as carrying one's medical records.

A smart card, sometimes known as a chip card, is a plastic card with an embedded microchip. Smart cards are generally of two types: memory cards and microprocessor cards. Memory cards store data and may be viewed as a small floppy disk. Microprocessor cards contain a CPU for data processing and security functions, memory for storing data and interim calculations, and read-only memory (ROM) for storing programs. These cards can also contain either EPROM or EEPROM for storing specific applications and cardholder-specific data. Data and programs stored in a smart card are protected through strong encryption. Some microprocessor cards can also run multiple applications on one card, thereby improving the card's versatility and appeal to a user. A card reader is required to read a smart card.
Table 1: Smart card use by application area (Source: SCIA–http://www.scia.org/knowledgebase/default.htm)

Card Application      1996 (in millions)   2000 (in millions)   Average Annual Growth
Pay Phone             605                  1,500                29%
GSM                   20                   45                   25%
Health Care           70                   120                  14%
Banking               40                   250                  105%
Identity/Access       20                   300                  280%
Transportation        15                   200                  247%
Pay TV                15                   75                   80%
Gaming                5                    200                  780%
Metering/Vending      10                   80                   140%
Retail/Loyalty        5                    75                   280%
Smart cards can be of either the contact or the contact-less type. When a contact type of card is inserted into a card reader, data are transferred through direct contact with an electrical connector in the card. Contact-less smart cards do not need to be inserted into a reader, but are brought into the proximity of a reader equipped with an antenna. These cards have embedded electronic microchips and an antenna that enable them to communicate with readers without physical contact. Figures 2 and 3 show the scheme of these two types of cards.

Smart cards can be used in many different places, including as stored value cards. Stored value cards, at the time of purchase, come preloaded with a fixed amount of money and are thrown away after the money is used up. Some of these cards are reloadable and are not thrown away after one use. Smart cards, because of their nature, may be considered electronic cash. Contact-less smart cards can be used for applications such as highway tolls, where the motorist does not have to stop to pay the toll. Since smart cards can be used to hold any kind of information, we can expect to see their applications grow in the future.
Figures 2 & 3: Contact and contactless smart card (Source: Gemplus–http://www.gemplus.com)
Figure 4: Electronic purse (from the uses of electronic purses by disabled people, John Gill, Chief Scientist, RNIB)
Electronic Purse

An electronic purse is a smart card that holds an electronic equivalent of cash and is viewed as a substitute for a conventional purse. An electronic purse carries a preloaded amount of money that can be used to pay for goods and services. Unlike earlier smart cards, electronic purses are reloadable. Reloading can be done at smart card terminals or automated teller machines (ATMs).

Many of us are familiar with various types of prepaid cards. We have been using them to make phone calls, copy documents, buy food in university cafeterias and so on. Most of these prepaid cards are thrown away when the initial money stored in the card is used up. These cards are usually meant for one type of application only. For example, if you purchased a copy card from Kinkos, your card could be used for that company only. A better version of the prepaid card has been used by a number of universities for some time. In this version, you could use the same card to buy a variety of goods and services from merchants around the university. These cards are also reloadable. However, true interoperability has been lacking in these cards, since such cards cannot be used in lieu of cash at faraway places. The advent of smart cards and electronic purses aims to overcome that limitation.

In general, these cards require a card reader to complete a transaction. Devices such as Point of Sale (POS) terminals, smart card kiosks and ATMs can be equipped with card readers. Whenever a customer inserts a card into a card reader to complete a transaction, the reader would either debit or credit the transaction value from or to the card. If it is a purchase transaction, the merchant's account will also be credited.

The electronic purse technology is beginning to mature. The emerging maturity is evidenced by the release of two global specifications for electronic payments. The first of these specifications comes from CEPSCO (http://www.cepsco.com) and the other from the World Wide Web Consortium (http://www.w3c.org). We review these specifications briefly.
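Before turning to those specifications, the debit/credit exchange just described can be illustrated with a toy model. This is a deliberately simplified sketch; the class names are invented for illustration, balances are kept in integer cents, and none of the security checks a real purse scheme requires are shown.

class ElectronicPurse:
    """Toy model of a reloadable purse; balances are integer cents."""

    def __init__(self, balance_cents: int = 0):
        self.balance_cents = balance_cents

    def load(self, amount_cents: int) -> None:
        # Reloading would normally happen at a smart card terminal or ATM.
        self.balance_cents += amount_cents

    def debit(self, amount_cents: int) -> None:
        if amount_cents > self.balance_cents:
            raise ValueError("insufficient stored value")
        self.balance_cents -= amount_cents


class PointOfSaleReader:
    """Toy point-of-sale reader: debits the purse and credits the merchant."""

    def __init__(self):
        self.merchant_balance_cents = 0

    def purchase(self, purse: ElectronicPurse, price_cents: int) -> None:
        purse.debit(price_cents)                     # reduce the stored value on the card
        self.merchant_balance_cents += price_cents   # credit the merchant's account


purse = ElectronicPurse()
purse.load(20_00)              # load 20.00 at an ATM
reader = PointOfSaleReader()
reader.purchase(purse, 1_50)   # buy a 1.50 item at a POS terminal
print(purse.balance_cents, reader.merchant_balance_cents)  # 1850 150

In a real system the two balance updates would also have to be authenticated and reconciled, which is precisely what the specifications reviewed next address.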
Common Electronic Purse Specification (CEPS): CEPS is a product of CEPSCO, a consortium of large payment system operators. The consortium was originally incorporated in October 1999. Its original shareholders include:
• CEPSCO Española A.I.E.
• EURO Kartensysteme
• Europay International (http://www.europay.com)
• Visa International (http://www.visa.com)
In July 2000, the following two also joined:
• Groupement des Cartes Bancaires (CB)
• Proton World

Figure 5: Money kiosk

The primary goal of CEPSCO is to develop, maintain and implement CEPS throughout the world. This group has considerable market presence. CEPSCO Española A.I.E. is a product of major system operators in Spain: SERMEPA and Sistema 4B. SERMEPA provides card banking to more than 150 financial institutions and operates more than four million Visa cards in Spain. Sistema 4B has 32 member banks, 10,000 ATMs and 200,000 POS devices. EURO Kartensysteme is a jointly owned company of the German banking industry and provides services for Eurocard-MasterCard, Eurocheque and GeldKarte. Europay International, headquartered in Waterloo, Belgium, is a leading provider of personal payment and related services. Through its alliance with MasterCard, Europay offers bank products accepted at almost 494,000 ATMs and 17 million locations worldwide. Groupement des Cartes Bancaires (www.cartes-bancaires.com) is a private non-profit business organisation designed to serve the common interests of its members. It brings together a large number of French and other financial institutions belonging to the "CB" interbank system. Cartes Bancaires, a leader in payment systems, represents 37.6 million
cards, 32,500 ATMs and 620,000 affiliated merchants or service providers. Proton World (http://www.protonworld.com) is a creation of American Express, Banksys, ERG, Interpay and Visa International. It is licensed in more than 20 countries and has more than 280,000 terminals worldwide. Visa International, the owner of the leading credit card symbol in the world, is one of the original members of CEPSCO. With a worldwide membership of more than 21,000 financial institutions and smart card programs in more than 35 countries, Visa International is pioneering the Secure Electronic Transaction (SET™) standard to promote Internet-based commerce.

The CEPS standard is embodied in three publicly available documents: CEPS Technical Specification (Version 2.3, March 2001), Business Requirements (Version 7.0, March 2000) and Functional Requirements (Version 6.3, September 1999). These three documents, together, set out criteria for the operation and development of electronic purse systems that would support interoperability. The specification addresses the following issues:
• Security
• Card application
• Terminal application
• Point-of-sale transactions
• Load transactions
• Unload transactions
• Currency exchange transactions
• Transaction processing
• Settlement and reconciliation
• Clearing and administration

Figure 6 shows a generic scheme for a load transaction conforming to the CEP Specification. The electronic purse card is inserted into an interface in a card reader.

Figure 6: A generic load operation (Source: CEPS Business Requirements, Version 7)
This card reader is connected to an organisation (Load Acquirer/Load Operator) through which funds would be transferred from the account of the cardholder. When a load transaction is initiated, the Load Acquirer would be expected to send an authorization and authentication request to the Card and Funds Issuer via the network (Legend 1). The Card and Funds Issuer would debit the cardholder's account by the amount of the requested funds and send an authorization to the Load Acquirer. The card reader, on command from the Load Acquirer, would then load the electronic purse by the requested amount (Legend 2). Needless to say, this sequence of activities would fail if the cardholder's account does not have sufficient funds or credit.

The scheme shown above assumes that the Card Issuer and the Funds Issuer are the same. Realistically, two different organisations may be involved in a transaction. Figure 7 shows a variation that takes this possibility into account. As can be seen from Figure 7, the initial request for authorization goes from the Load Acquirer to the Funds Issuer. The Funds Issuer would be expected to approve the request based on the sufficiency of funds. The Load Acquirer then seeks authorization from the Card Issuer. The authorization request also acts as a guarantee to the Card Issuer that it will be credited with funds. The electronic purse is loaded only when the electronic purse is authenticated. The card loading operation is finally completed when the transfer of funds is completed from the Funds Issuer to the Load Acquirer and then from the Load Acquirer to the Card Issuer.

How is a purchase made using an electronic purse? Figure 8 shows the scheme. The Merchant sends the transactions to the Merchant Acquirer, an organisation that collects and possibly aggregates many transactions from several purchase devices.

Figure 7: An enhanced load operation (Source: CEPS Business Requirements, Version 7)
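The basic load flow of Figure 6 can be sketched in code. The roles, class names and messages below are simplified illustrations derived from the description above; they are not the normative CEPS message formats, and all authentication and cryptographic steps are omitted.

class CardAndFundsIssuer:
    """Stands in for the institution that holds the cardholder's account."""

    def __init__(self, account_balance_cents: int):
        self.account_balance_cents = account_balance_cents

    def authorize_load(self, amount_cents: int) -> bool:
        # Debit the cardholder's account if funds are sufficient, then authorize.
        if amount_cents > self.account_balance_cents:
            return False
        self.account_balance_cents -= amount_cents
        return True


class LoadAcquirer:
    """Forwards the authorization and authentication request (Legend 1)."""

    def __init__(self, issuer: CardAndFundsIssuer):
        self.issuer = issuer

    def request_load(self, amount_cents: int) -> bool:
        return self.issuer.authorize_load(amount_cents)


class LoadTerminal:
    """On the acquirer's command, loads the purse by the requested amount (Legend 2)."""

    def __init__(self, acquirer: LoadAcquirer):
        self.acquirer = acquirer

    def load_purse(self, purse_balance_cents: int, amount_cents: int) -> int:
        if self.acquirer.request_load(amount_cents):
            return purse_balance_cents + amount_cents
        return purse_balance_cents  # the load fails if the account lacks funds or credit


issuer = CardAndFundsIssuer(account_balance_cents=100_00)
terminal = LoadTerminal(LoadAcquirer(issuer))
print(terminal.load_purse(purse_balance_cents=0, amount_cents=25_00))  # 2500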
Figure 8: A purchase transaction (Source: CEPS Business Requirements, Version 7)
The Merchant Acquirer in turn sends the transactions to the Card Issuer for transfer of funds from Card Issuer to Merchant Acquirer. This scheme presupposes authentication of cardholder either online or offline. There are of course other issues associated with the operation of an Electronic Purse. For example, what happens if we want to unload a purse, or what happens if we transfer funds between currencies? These matters are governed by other schemes. The reader is referred to the CEPS specification for details. The discussion regarding CEPS would not be complete without a mention of the security requirements. After all, we would not have much confidence in a purse system that cannot guarantee privacy, authenticity, non-repudiability and integrity. Online transactions with electronic purse require secret key matching between the card and the card issuer. Offline transactions on the other hand require the POS device to authenticate the card using RSA Public Key algorithm (Flinn & Jordan, http://www.rsa.com) and a session key. It is too early to say whether CEPS would emerge as the global standard for electronic purses and smart cards. However, given the financial clout of the players supporting CEPS, it is an early favorite to become the de facto standard. Common Markup for Micropayment Per-Fee-Links: This standard originates from the World Wide Consortium (http://www.w3c.org). The primary goal of this standard is to ensure interoperability for all “Web micropayments” at minimum cost. According to W3C, it should be possible for a user to “use the same user interface metaphor as for regular Web content” and “just click on a hypertext link.” This new type of link, called per-fee-link, would result in reaching micropayment
content. Even though many of the existing micropayment systems do follow this approach, each of them uses its own proprietary method of creating a per-fee-link and encoding the vital information in the per-fee-link. The W3C standard proposes "an extensible and interoperable way to embed in a Web page all the information necessary to initialize a micropayment, and in particular, for encoding per-fee-links." The Common Markup for Micropayment Per-Fee-Links standard focuses on the server-to-browser link and does not address the link between the electronic wallet (a synonym for electronic purse) and the Fee Link Handler-Browser. The standard is specified in terms of a number of required fields, which are reproduced in Table 2 below.

Figure 9: W3C architecture (Source: http://www.w3.org/TR/Micropayment-Markup/#origin-goals)
Table 2: Micropayment fields
• price (character string; MUST be provided): Specifies the default amount and currency that the customer will be charged upon following the per-fee-link.
• textlink (character string; MUST be provided): Textual description of what the client is requesting; the text source of the per-fee-link.
• imagelink (URI; MAY be provided): Graphical description of what the client is requesting; the graphic source of the per-fee-link (here textlink provides a textual equivalent of the image for accessibility).
• requesturl (URI; MUST be provided): Identifies what the client is actually requesting.
• paymentsystem (URI, MUST be provided; system-specific data as character string, MAY be provided): Identifies the micropayment systems supported by the merchant and can also provide information unique to each payment system.
• buyid (URI; MAY be provided): Identifies the merchant offer the client is buying or has already bought.
• baseurl (absolute URI; MAY be provided): Provides a common prefix for relative URIs (e.g., the buyid and requesturl parameters).
• title (character string; SHOULD be provided): Titles the content of the merchant offer the client is buying or has already bought.
• longdesc (URI; MAY be provided): Describes in detail the content of the merchant offer the client is buying or has already bought.
• merchantname (character string; MAY be provided): Specifies a merchant designation.
• duration (integer number; MAY be provided): Indicates the time after purchase during which any URIs with the same buyid can be retrieved without further payment.
• expiration (date/time string, YYYYMMDDThh:mm:ss; MAY be provided): Indicates a date until which the offer from the merchant is valid.
• target (character string; MAY be provided): Specifies the name of a frame where a document is to be loaded.
• hreflang (language codes; MAY be provided): Specifies the base language of the resource designated by the per-fee-link.
• type (MIME types; MAY be provided): Specifies the content type of the linked resource designated by the per-fee-link.
• accesskey (character string; MAY be provided): Assigns an access key to a per-fee-link.
• charset (charset; MAY be provided): Specifies the character encoding of the resource designated by the per-fee-link.
• ExtData (URI; MAY be provided): Allows linking to an external metadata file describing additional information regarding the per-fee-link.
• ExtDataParm (character string; MAY be provided): Provides a parameter to be applied for interpreting the contents of the ExtData file.

SECURITY IN PAYMENT SYSTEMS

One of the key issues affecting any electronic payment is the security of data. These issues come into play in many circumstances:
• loading and unloading an electronic purse;
• sending credit card numbers for Web-based purchases; and
• making electronic cash payments using smart card technology.

Data security for electronic payments is achieved through encryption and other supporting technologies such as digital certificates. Even though the technology for encryption is well known, there are at least two other standards currently being used to make electronic transactions secure. These standards are discussed in the rest of this section.

SET (Secure Electronic Transaction): In the words of Visa, SET is a specification designed to utilize technology for authenticating the parties involved in payment for card purchases on any type of online network, including the Internet. The SET standard was jointly developed by Visa and MasterCard with participation from leading technology companies including Microsoft, IBM, Netscape, SAIC, GTE, RSA, Terisa Systems and VeriSign. By using sophisticated cryptographic techniques, SET makes cyberspace a safer place for conducting business and is expected to boost consumer confidence in electronic commerce. The focus of SET
is on maintaining confidentiality of information, ensuring message integrity and authenticating the parties involved in a transaction. The significance of SET, over existing Internet security protocols, is found in the use of digital certificates. Specifically, SET:
• Establishes industry standards to keep order and payment information confidential.
• Facilitates interoperability among different types of devices, the software running on those devices to read various types of cards, and the network supporting the electronic transaction.
• Increases integrity for all transmitted data through encryption.
• Provides authentication that a cardholder is a legitimate user of a branded payment card account.
• Provides authentication that a merchant can accept branded payment card transactions through its relationship with an acquiring financial institution.
• Uses digital signatures and cardholder certificates to authenticate cardholder accounts.
• Uses digital signatures and merchant certificates to authenticate a merchant.
• Allows the use of the best security practices and system design techniques to protect all legitimate parties in an electronic commerce transaction.

SET is expected to be the primary means through which Internet transactions around the world will be made secure for all consumers and merchants. How does SET work? We rely on the SET Secure Electronic Transaction Specification to answer this question. The SET process requires five steps: 1) cardholder registration, 2) merchant registration, 3) purchase request, 4) payment authorization, and 5) payment capture. Cardholder and merchant registrations are one-time activities that may be repeated when a registration expires. The other three activities occur per transaction. The SET specification also includes other types of transactions such as credit reversal, digital certificate inquiry and status, and inquiry about a purchase.

The basic process of using SET may start with the acquisition of a digital certificate by a cardholder and a merchant. The process of getting a certificate is rather involved. Figure 10, extracted from the SET Specification standard, illustrates the process of a cardholder's registration.

Figure 10: Cardholder registration process (Source: SET Specification Standard)

As can be seen from this figure, a cardholder may initiate a certificate request by contacting a certificate authority such as VeriSign. Upon completion of the necessary forms, the cardholder would receive a digital certificate. A cardholder's certificate, according to the SET protocol, does not contain the account number or expiration date of the card. Instead, a one-way hashing algorithm is used to encode the account information and a secret key value known only to the cardholder's software. When the cardholder wishes to carry out a transaction, the purchase request and encrypted payment instructions are transmitted to the merchant along with the digital certificate. All of these activities occur in the background automatically through the software used for completing a transaction. Since the cardholder's certificate is proof of authenticity, a merchant is assured about the legitimacy of the purchase.

A merchant's certificate is, in concept, similar to that of a cardholder. However, before a merchant can be issued a certificate, it must also get the approval of the merchant's financial institution through the institution's digital signature. A merchant and a cardholder may have many certificates.

A purchase transaction using SET may follow these steps:
1. A cardholder initiates a purchase request to a merchant.
2. The merchant receives the request, generates a response and digitally signs it by generating a message digest of the response. The merchant's private key (using an asymmetric key protocol) is used for the digital digest.
3. The cardholder verifies the merchant's signature by decrypting the transmission with the merchant's public key.
4. Upon successful verification, the cardholder sends the purchase request to the merchant. Purchase information is encrypted and sent along with a message
digest of the order information. The cardholder's certificate is also sent to the merchant at this time.
5. The merchant verifies the cardholder's certificate and then decrypts the transmission using the cardholder's public key.
6. Finally, the merchant creates the purchase confirmation, digitally signing the confirmation using his private key.

Even though we use terms like cardholder and merchant to describe the transaction, the process is completely automated through SET-compatible software running at the purchase station and the merchant's authorization gateway.

Secure Socket Layer (SSL): SSL is the Internet security layer for point-to-point connections. Any electronic transaction can use this protocol to build a communication pipeline to transmit data in a secure manner. The conceptual use of SSL is shown in Figure 11. The SSL protocol can be considered to be composed of two subprotocols. The first is the SSL Handshake Protocol, which enables the client and server to authenticate each other and establish an encryption algorithm as well as a key for encryption. The second is the Record Level protocol, which uses the Handshake protocol to transmit data. What are some of the advantages of using SSL?
• Once an SSL connection is established, it is private.
• The connection is reliable.
• Different cryptographic protocols can be used to authenticate the identities of communicating parties.
• Browsers come with SSL support.
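As a concrete illustration of the handshake-then-record-layer structure just described, the following minimal client sketch uses Python's standard ssl module. It is offered only as an illustration of how an application obtains an SSL-protected channel; the host shown (www.example.com) is a placeholder, not a merchant site discussed in this chapter.

import socket
import ssl

# The Handshake sub-protocol runs inside wrap_socket(); afterwards the Record
# layer encrypts all application data exchanged over the same socket.
context = ssl.create_default_context()   # verifies the server's certificate chain

with socket.create_connection(("www.example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="www.example.com") as tls:
        tls.sendall(b"GET / HTTP/1.0\r\nHost: www.example.com\r\n\r\n")
        print(tls.recv(4096)[:80])        # first bytes of the (encrypted-in-transit) reply

Note that, unlike SET, this gives the application a private pipe but says nothing about payment semantics; the merchant still sees the card number in the clear once it is delivered.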
Figure 11: SSL in a Web system

COMMERCIAL SYSTEMS
As can be expected, a variety of commercial systems have emerged that support or enable electronic payments of various types: person-to-person, person-to-business, business-to-business and business-to-person. A number of these systems are also designed to assist mobile-commerce (mCommerce) users, where a payment may be made using a handheld cellular device such as a phone or a personal digital assistant. It is difficult to catalog all the products that are currently available in the global market since the set is very large. The set is also constantly changing with the addition of new products and the demise of a few old ones. The character of an individual product also changes as a product may offer new services or new technology support. Those interested in a comprehensive (but slightly dated) list of products are referred to a survey done by the Committee on Payment and Settlement Systems (CPSS) of the Bank for International Settlements, Basel, Switzerland (http://www.bis.org).

A survey of commercial payment systems yields several identifiable patterns of services and features. Table 3 describes the general categories that follow these patterns. This categorization does not imply that commercial systems neatly fit into one category or another; as stated earlier, commercial systems evolve and change their service mix, and therefore may very well offer services from multiple categories.

Table 3: Types and descriptions of services

Substitutes for physical cash: These systems are generally promoted as cost-effective, secure and ready payment tools that can behave as cash substitutes. Products belonging to this category, of which Mondex (www.mondex.com) is an example, use reloadable smart card technology and need some form of card reader to operate. A user may use these products to pay for goods and services.

Electronic check processing tools: These systems are Web-based and generally require the establishment of an account. Once an account is established, it can be used to make payments by writing checks against invoices. Invoices may be for transactions conducted over the Web or ordinary bills received from various service providers such as a utility company. Money for a transaction originates from the account holder's checking or credit card account. This segment of the payment system is beginning to mature through the efforts of the Financial Services Technology Consortium (FSTC) (www.fstc.org), which has successfully completed field trials of an electronic check processing protocol.

Micro-payment systems: These systems specialize in micro-payments, which may range from a fraction of a penny to a few dollars. Such payments are not very conveniently handled using credit or debit cards. Micro-payment systems may be Web-based or hosted on the user's personal computer using electronic wallet software such as Microsoft Money.

Money remittance services: A number of companies specialize in money order services. These services may be used to remit money to a person so long as the recipient has an email address. In contrast to the first two categories above, money remittance is itself the electronic product and is designed to substitute for various types of money remittance services already in operation.

Miscellaneous: A number of other innovative systems exist. For example, one electronic payment system operates as an online coupon service (Cybergold, www.cybergold.com); another uses scrips for fund raising (eScrip, www.escripinc.com/jsp/index.jsp).

Cash Substitutes

Cash substitute systems are designed as replacements for physical cash. Electronic cash is carried in digital form using a one-time-use or reloadable smart card. Operationally, smart card technology may use a standard protocol such as CEPS or a proprietary protocol of the card issuer. It is safe to assume that common international standards governing the issue and use of smart cards will emerge as this industry matures. We review a few such systems now in common use.

Mondex (www.mondex.com or www.mondexinternational.com)

Mondex is an electronic cash system that uses electronic purse technology. This system, originally invented by a couple of bank managers from NatWest (National Westminster Bank, London, England) in 1990, has grown into arguably the most recognized electronic cash system. NatWest introduced "Byte" smart cards to its staff in London in 1991 to prove the concept of electronic cash systems. Byte cards could then be used to pay for goods and services. Mondex, as an international player, made its debut in 1993 when NatWest formed a partnership with Midland Bank of England. There were major developments in the life of this fledgling technology in the year 1994:
the announcement of a product development specification that would eventually make Mondex a global product; the selling of franchise rights for the Asia Pacific region; and
the availability of an advanced “attack-resistant” technical platform from Hitachi for the wider introduction of Mondex. The year 1996 is yet another landmark year for Mondex. During this year, MasterCard International signed a letter of intent to acquire 51% of Mondex with an aim to adopt its technology as the MasterCard’s future choice of strategic platform. Since then, the ownership of Mondex has grown to include many leading banks in the Americas, Europe, Asia, Australia and the Asia-Pacific region. Currently, Mondex can be used as a substitute for physical cash. Mondex contains a microchip (purse) that holds the cash value electronically. This purse contains five pockets, each of which can hold a cash value in a different currency. Mondex cards can be used without the need of bank authorization at the time of the transaction unlike credit and debit cards. A user of the Mondex system may carry a smart card that incorporates the Mondex chip, or carry a variety of gadgets such as mobile phones and TV set-top boxes that include the Mondex chip. Before the card (or Mondex-enabled gadget) is used, it must be loaded with money using a compatible terminal device. The heart of the Mondex system is the MULTOS operating system that uses a Hitachi-developed MULTOS chip. MULTOS or multi-application operating system, as the name implies, provides an open and high-security environment through which smart card application developers can carry out their tasks without committing to a specific operating system or hardware. Chip level integrity is established through a specification that requires rigorous testing to demonstrate security, interoperability and tamper-resistant behavior. Smart cards using MULTOS also check for validity of applications being sent to them and, once found valid, are stored in separate memory areas in the chip. Each new application is kept isolated from other applications on the chip in a “firewall”-protected environment so that the running of one application is guaranteed to be isolated from any other application. To protect the integrity of the card further, each card is installed with a unique cryptographic key. Each application supported in the card is also given a unique key. The loading of an application into a card is allowed only after the MULTOS Certificate Authority issues an appropriate certificate. What benefits can a user of Mondex derive? A Mondex user can carry electronic cash of any amount. The card itself does not specify any limit even though each issuer (or the country in which it is issued) may impose limits on the monetary amount. The electronic money is carried in a secure and convenient way and can be used to make purchases of any size. Such cards can effectively become an international electronic money system since Mondex is now licensed in more than 57 countries spanning all of the geographic regions. For a graphical presentation of the use of Mondex, visit Mondex, Canada at http://www.mondex.ca/eng/dayinlife/dayinlife_four.cfm or http://www.mondex.com.
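The idea of a purse chip with separate currency "pockets" can be illustrated with a small sketch. This is purely illustrative: the class, the choice of currencies and the integer minor-unit balances are assumptions made for the example, and nothing here reflects Mondex's or MULTOS's actual implementation.

class MultiCurrencyPurse:
    """Toy purse with per-currency pockets; balances are integer minor units."""

    def __init__(self, currencies=("GBP", "USD", "CAD", "HKD", "EUR")):
        self.pockets = {code: 0 for code in currencies}  # one pocket per currency

    def load(self, currency: str, amount: int) -> None:
        self.pockets[currency] += amount

    def pay(self, currency: str, amount: int) -> None:
        # Payment draws directly on the stored value, with no bank authorization step.
        if self.pockets[currency] < amount:
            raise ValueError(f"insufficient {currency} in purse")
        self.pockets[currency] -= amount


purse = MultiCurrencyPurse()
purse.load("GBP", 10_00)     # load 10.00 pounds at a compatible terminal
purse.pay("GBP", 3_50)       # pay 3.50 pounds offline
print(purse.pockets["GBP"])  # 650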
Visa Cash (www.visa.com)

This is a chip-based smart card available from Visa™. Visa Cash cards can be of the throw-away type and may be purchased with a predetermined value. When the pre-
loaded cash value is used up, purchase of a new card would be necessary. Alternatively, one can purchase a reloadable Visa Cash card that can be reloaded on demand at specialized terminals and ATMs. To complete a transaction, the card needs to be inserted into the merchant's Visa Cash card reader. The card reader debits the amount of the transaction from the card and credits the merchant's account balance. Visa Cash is an electronic money application that complies with CEPS standards and is based on the Visa Cash Electronic Purse Specification. Other cash substitute cards include Proton (www.protonworld.com) and eCash (formerly DigiCash) (www.ecashtechnologies.com; its successor is known as Monneta).
Electronic Checks and Processing Systems

Electronic checks and check processing systems are designed to substitute for paper checks. A user of an electronic check system writes a check by completing an online check form. A completed check may be processed electronically using established clearing protocols. Electronic Funds Clearinghouse (www.efunds.com) protocols are used by many of the electronic check providers to provide a secure protocol for clearing house functions using the Internet. An alternative form of electronic check processing automates the creation of a check; the check service provider then prints the check for transmission to the payee. A number of companies provide the capability for writing online checks.

We start the discussion with a brief review of the electronic check project of the Financial Services Technology Consortium (FSTC) (www.fstc.org). This project may be considered a significant attempt to standardize electronic check processing. FSTC is a not-for-profit consortium of several major banks, financial institutions, research laboratories, universities, technology companies and government agencies. One of the projects undertaken by this organisation was the development and field trial of electronic checks, or eChecks. The Phase I market trial of eChecks was successfully completed in the year 2000 with the help of the U.S. Treasury, with over $10 million in payments distributed over the Internet. A Phase II trial is now in operation with a goal of broadening the use of eChecks. The core eCheck technology, secured by patents issued in 1997, 2000 and 2001, has also been licensed to private companies for further development. Goals driving the development of eChecks include the following (Wade):
• eChecks should work just like paper checks;
• eChecks should have the same legal validity as paper checks;
• the mechanism of tendering and clearing eChecks should be all electronic;
• eChecks should be digitally signed and allow for multiple signatures; and
• they should be safe enough to be used over the Internet.

Figure 12 shows the operation of a basic eCheck-based transaction. A payer originates an electronic check by starting a computer program that generates a form representing a blank check. The payer then completes the form by filling in payee information such as the name of the payee, the amount of the check and a memo. To sign this check, the payer needs to have an electronic checkbook and a PIN to
activate the checkbook (see Figure 13).

Figure 12: Basic electronic check processing scheme

The checkbook itself is a smart card application. This checkbook not only reads the check information but also attaches the digital signature of the payer and logs the transaction into the checkbook's log. The check can then be sent through electronic mail or the Internet. The receiver of the check also needs an electronic checkbook with a PIN so that he can verify the validity of the incoming check and digitally sign the check for deposit. As Figure 12 shows, these checks are then sent to the payee's bank for verification and settlement.

The FSTC electronic check scheme supports a protocol that goes beyond the basic flow shown in the figure. For example, the payee of a transaction may insist on a certified check from a financial institution as payment. To satisfy such a need, the payer can electronically request a certified electronic check from its financial institution. The check can either be returned to the payer, who then forwards it to the payee, or the issuing bank may send it directly to the payee. Either way, the check is accompanied by the issuing institution's digital signature and certificate to ensure authenticity and integrity. For those transactions where an escrow agent is involved, the flow of the check can be altered to incorporate the escrow agent. The escrow agent would receive the payment through an electronic check and, on completion of the transaction, issue a check to the payee.

Key to the success of electronic check technology is cryptography. The FSTC scheme uses public and private key pairs. Digital certificates are used to establish authenticity. The cryptographic protocol and digital certificates guarantee the confidentiality, authenticity and non-repudiability of payments. Additional security for transactions can be achieved by using the secure socket layer (SSL) for Web-based delivery of an electronic check. Similarly, secure email (S/MIME) can be used for transmitting an electronic check.
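The sign-and-verify step at the heart of this scheme can be sketched with an RSA key pair. This is a minimal illustration using the third-party Python "cryptography" package; it is not the FSTC eCheck format, and certificates, endorsements and the smart card electronic checkbook are all omitted. The check contents and names are invented for the example.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# The payer's key pair; in the FSTC scheme the private key lives in the electronic checkbook.
payer_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

check = b"Pay to: Example Payee; Amount: USD 100.00; Memo: example invoice"
signature = payer_key.sign(
    check,
    padding.PKCS1v15(),   # signature scheme chosen for brevity in this sketch
    hashes.SHA256(),
)

# The payee (or the payee's bank) verifies the check against the payer's public key;
# verify() raises InvalidSignature if the check was altered after signing.
payer_key.public_key().verify(signature, check, padding.PKCS1v15(), hashes.SHA256())
print("signature verified")

Because only the holder of the private key could have produced the signature, a verified check supports the non-repudiability discussed earlier in this chapter.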
Figure 13: FSTC check system (Source: An overview and explanation of security measures, Sept. 22, 1999 [www.fstc.org]). The figure shows a sample eCheck form (payee, amount, memo and invoice number) together with a PIN pad, a smartcard reader and the electronic checkbook.
What are some of the potential advantages of an electronic check system?
• Electronic checks are designed to function in a manner similar to traditional paper checks and thus support authorization and endorsements. Users' comfort level with time-honored paper check systems is not sacrificed.
• Elimination of paper checks saves not only paper but also the systems needed to print paper checks. The costs associated with printing paper checks are not trivial.
• Use of electronic checks eliminates the data entry required to capture paper check information. There is the additional benefit of eliminating the verification of manually entered data needed to ensure that paper check information is coded correctly.
• Use of a paper check requires manual verification of the check's authenticity. Electronic checks, on the other hand, are automatically verifiable since they carry digital signatures and certificates.
• Potential delays in processing a transaction can be eliminated with electronic checks since all aspects of a payment can be verified electronically.
An electronic check system must also guard against potential pitfalls, including disallowing duplicate checks. If the success of the U.S. Treasury trial is a barometer of things to come, many of us are likely to use electronic checks in the not-so-distant future.
A number of companies provide online check processing capabilities. These include Online Check System (www.onlinecheck.com), PayByCheck (www.paybycheck.com), TeleCheck (www.telecheck.com), Checkfree (www.checkfree.com), CheckSpace (www.checkspace.com), Cybercash (www.cybercash.com) and eCheck Secure (www.echecksecure.com). Even though
details of operations vary, some of the common characteristics of these systems may be summarized as follows:
• Most commercial systems require the establishment of a customer account.
• Money to support online checks is drawn from the customer's checking account or a credit card account.
• Check system providers often impose restrictions on the volume and amount of transactions allowed through online checks.
• A customer usually completes a check facsimile online and commits the action through a 'SEND' operation.
Variations to these basic operations are many. For example, in the case of Online Check System, the payee (a merchant) receives a paper check and not an electronic transaction. A customer fills out an online check form giving details of the payment to be made. Once this data is completed, Online Check System prints out a paper check matching the online information and forwards the check to the payee. The printed check can be sent to the merchant or the merchant's bank. Similar check writing services are available from PayByCheck. Checkfree, on the other hand, is electronic and its payment mechanism is Internet based. Payment is made from the payer's checking account or money market account. The payer's account is also debited automatically for the service fee imposed by Checkfree. Checkfree can also be used to make several payments at one time to multiple payees as well as scheduled payments; the payment information can be downloaded to the user's personal finance software, such as Quicken 98 and Microsoft Money. Many of these electronic check-processing companies provide facilities for receiving electronic invoices for purchases as well as routine payments such as electric and phone bills. For example, Checkfree has established partnerships with a large number of financial institutions and others to provide an Internet-based service for receiving bills and making payments electronically. A number of other companies offer this type of service, including CheckSpace, PayTrust (www.paytrust.com) and PayMyBills (www.paymybills.com). PayTrust issues electronic mail notifications to its customers when a bill arrives. Users of PayMyBills can download bill information into MS Money, Quicken and Excel and make payments by completing a form. These users may also establish an automatic payment scheme for routine payments. eCheckSecure operates in a niche area within the overall check processing category. In the case of eCheckSecure, a customer may establish a secure brokerage account or merchant account. Payment is made using paper checks printed at the brokerage or the merchant when shoppers fill in a check form online. Another such service comes from Worldpay (www.worldpay.com). The Worldpay system can process payments in many different currencies using credit and debit cards.
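The common flow just described–complete a check facsimile online, commit it with a 'SEND' operation, and let the provider either print a paper check or clear the payment electronically–can be summarized in a brief sketch. The fragment below is purely illustrative: the field names, the per-check limit and the two delivery modes are assumptions distilled from the descriptions above, not any provider's actual interface.

```python
# Illustrative sketch of an online check facsimile and the two delivery styles
# described above (paper printing vs. electronic clearing). Field names, the
# limit and the provider behaviours are assumptions, not any vendor's real API.
from dataclasses import dataclass

@dataclass
class OnlineCheck:
    payer_account: str      # funding source: checking account or credit card
    payee: str
    amount: float
    memo: str = ""

def send_check(check: OnlineCheck, provider_mode: str, per_check_limit: float = 2500.00) -> str:
    """Commit the check ('SEND'); providers typically impose amount limits."""
    if check.amount > per_check_limit:
        return "rejected: exceeds provider's per-check limit"
    if provider_mode == "print":        # e.g., a service that mails a printed check
        return f"paper check for ${check.amount:.2f} printed and mailed to {check.payee}"
    if provider_mode == "electronic":   # e.g., an Internet-based clearing service
        return f"${check.amount:.2f} debited from payer and cleared electronically to {check.payee}"
    raise ValueError("unknown provider mode")

print(send_check(OnlineCheck("checking-001", "Example Utility Co.", 84.50), "electronic"))
```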
Micropayment Systems
Micropayment systems specialize in small payments that would normally be unattractive for credit or debit card transactions. The amount of a payment may be as small as 1/10th of a penny and may go up to several dollars, based on limits placed by the system owner. Why do we need micropayments? Think of an electronic business
that distributes music titles. This company does not limit its music selling activity to entire albums; it may sell you a single song from an album for a fraction of the album's price. How does one pay for such purchases? Micropayments are expected to fill this niche. Micropayments can be applied in a number of other cases such as business news and information, financial research and data, archives, images, music, video and other digital services. We discuss examples of several systems that support micropayments.
• eCoin (www.ecoin.net)–This system is designed to support micropayments for the purchase of goods and services over the Internet. A customer needs to establish an account that enables him to download the eCoin wallet software. eCoins can then be downloaded from his online account to the wallet. eCoins are downloaded in the form of tokens, each of which is uniquely identifiable. A merchant submits an invoice through an invoice tag in the Web document. The wallet manager in the customer's machine interprets the invoice tag and sends eCoins from the customer's wallet, along with the invoice, to a broker. The broker completes the cycle by processing and completing the transaction after verification of the coins' authenticity. Tokens dispensed are internally marked as used to avoid duplicate use (a minimal sketch of such a check appears at the end of this section).
• Millicent (www.millicent.com)–This is a micropayment system from Compaq. The system can be used to make payments in denominations starting as low as 1/10th of a cent. Users are required to open an account. This account can be funded in one of three possible ways: an online credit or debit transaction, billing through the user's ISP or telephone company, or a prepaid card. Vendors participating in the Millicent system either need to run the Millicent software at their Web site or use a licensed Commerce Hosting Provider.
Other micropayment systems include InternetCash (www.internetcash.com), Amadigi (www.oakington.com) and Qpass (www.qpass.com).
A number of micropayment systems require the use of a special class of software generally called Electronic Wallets. These applications are designed to facilitate Web-based electronic commerce. An Electronic Wallet contains the information that would ordinarily be required to complete a transaction over the Internet: account numbers, passwords, credit card numbers, names and shipping addresses. Some wallets, as in the case of eCoins, may also contain electronic tokens representing cash. Wallet software is usually downloaded from providers of wallets and micropayment systems, and resides on the user's personal computer. Owners of wallet software can complete payment forms at the click of a button using previously stored information. The transaction itself is credit card based and requires connectivity and authorization.
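As promised above, the duplicate-use safeguard for uniquely identifiable tokens can be illustrated with a minimal sketch. The class below is hypothetical (it is not eCoin's actual design); it only shows the general idea of a broker that accepts each token once and rejects any attempt to spend it again.

```python
# Minimal, hypothetical sketch of a broker-side duplicate-use (double-spend) check.
# Token identifiers and the storage scheme are illustrative, not eCoin's design.
import secrets

class MicropaymentBroker:
    def __init__(self):
        self.issued = {}     # token_id -> face value
        self.spent = set()   # token_ids already redeemed

    def issue_token(self, value: float) -> str:
        """Dispense a uniquely identifiable token to a customer's wallet."""
        token_id = secrets.token_hex(16)
        self.issued[token_id] = value
        return token_id

    def redeem(self, token_id: str, invoice_amount: float) -> bool:
        """Accept a token submitted with a merchant invoice; mark it as used."""
        if token_id not in self.issued or token_id in self.spent:
            return False                      # unknown or already-spent token
        if self.issued[token_id] < invoice_amount:
            return False                      # token does not cover the invoice
        self.spent.add(token_id)              # marked as used to avoid duplicate use
        return True

broker = MicropaymentBroker()
t = broker.issue_token(0.25)
print(broker.redeem(t, 0.25))  # True: first use succeeds
print(broker.redeem(t, 0.25))  # False: duplicate use is rejected
```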
Money Remittance Service
These services, as the name signifies, are Internet-based services to send or receive cash instantly and in a secure manner. Owners of these services hope to replace conventional money orders. In general, the sender and receiver need to have email addresses for money remittance to work. We review a few of these systems.
PayPal (www.paypal.com)
PayPal is a Web-based payment system that provides an "instant and secure online payment service." Money can be sent to a recipient electronically using the send option of the PayPal system. Conversely, money can be received electronically from a sender. In order to send or receive money through this system, one needs to set up an account with PayPal. Accounts can be of three different types: personal, premier and business. All customers can send and receive money online from either a credit card or a checking account. Business account holders can send or receive money using business or corporate names. How does this system work? Let us say that you are interested in setting up a PayPal account. You will start by entering a set of personal information about yourself. One of the key pieces of information entered is your email account name (e.g.:
[email protected]). The email account name is used to identify the account holder. Once the identification is verified, you will be able to send or receive money electronically. You may also link a credit card or a checking account number to your PayPal account and transfer money into that account. Sending and receiving money is a free service for all Personal Account holders. However, the amount of money that can be received in a month by a personal account holder is limited to $100. Premier and Business Account holders are required to pay a transaction fee for receiving payments but enjoy better transaction volume limits. Sending a payment is always free of charge. Money can be sent to anyone, including those who do not have a PayPal account; the receiver would be expected to create an account before he or she can get the money. The PayPal system is now international and is available in 42 different countries. The rules governing sending and receiving money internationally are, as one would expect, somewhat different from those for transactions within the U.S. As always, security is an issue in Web-based transactions. PayPal uses the SSL protocol with an encryption key length of 128 bits. PayPal servers are protected by firewalls and are not directly connected to the Internet. In addition, shoppers using the PayPal system to pay for their purchases can enhance their data security by taking normal precautions such as requesting a digital certificate from the seller.
Many other systems support money remittance. These include c2it (www.c2it.com) from Citibank, eMoneyMail (www.bankone.com/emoneymail/home/) from BankOne, MoneyZap (www.moneyzap.com) from Western Union, PocketPass (www.pocketpass.com), Rocketcash (www.rocketcash.com), Paydirect (paydirect.yahoo.com) and Propay (www.propay.com).
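The account rules described above for PayPal-style remittance can be restated as a small sketch. The fragment below is illustrative only: the $100-per-month personal receiving limit and the free-sending rule come from the text, while the fee rate and the absence of a cap for premier and business accounts are invented placeholders.

```python
# Illustrative sketch of account-type rules for a remittance service.
# The personal receiving limit and free sending come from the text; the fee
# rate and premier/business limits are hypothetical placeholders.
MONTHLY_RECEIVE_LIMITS = {"personal": 100.00, "premier": None, "business": None}  # None = no fixed cap assumed
RECEIVE_FEE_RATE = {"personal": 0.0, "premier": 0.03, "business": 0.03}           # hypothetical rates

def can_receive(account_type: str, received_this_month: float, amount: float) -> bool:
    """Check whether an incoming payment stays within the monthly receiving limit."""
    limit = MONTHLY_RECEIVE_LIMITS[account_type]
    return limit is None or received_this_month + amount <= limit

def receive_fee(account_type: str, amount: float) -> float:
    """Transaction fee charged to the recipient; personal accounts receive free."""
    return round(amount * RECEIVE_FEE_RATE[account_type], 2)

def send_fee(amount: float) -> float:
    """Sending a payment is free of charge for all account types."""
    return 0.0

# Example: a personal account that has already received $80 this month
print(can_receive("personal", 80.00, 30.00))   # False: would exceed the $100 limit
print(receive_fee("business", 250.00))         # 7.50 under the assumed 3% rate
```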
Other Miscellaneous Systems
There are many other electronic payment systems that dot the world of electronic commerce. All of these systems support electronic commerce transactions in some manner. However, they exhibit enough distinctive behavior to merit separate consideration.
BidPay (www.bidpay.com)–This organisation is a facilitator for online auction-related purchases not exceeding $500.
Clickshare (www.clickshare.com)–This is a micropayment intermediary service through which a pre-registered client can procure digital information from electronic commerce sites.
Cybergold (www.cybergold.com)–This site can best be described as a product portal that offers 'online coupons' to its customers.
eGold (www.e-gold.com)–This is a payment system that uses precious metal as the currency to buy and sell goods. The precious metal is held by a trust and is intended to back transactions conducted through eGold. A member of the system may designate a payment using a measure of the chosen precious metal, such as gold, silver or platinum.
eScrip (www.escripinc.com/jsp/index.jsp)–This program is designed as a fundraiser for schools. The program requires its participants to register their shopping cards with eScrip. These cards may be store cards, debit cards or credit cards. When a customer makes a purchase from any of the participating merchants using these pre-registered cards, a fraction of the purchase amount is sent to eScrip as a donation.
Flooz (www.flooz.com)–This is a system that specializes in sending a greeting. A user can send a gift, optionally accompanied by an online greeting card, designated in 'flooz' dollars to a recipient with an email address. Flooz dollars can be used for purchases online.
iBill (www.ibill.com)–Internet Billing Company, Ltd. provides secure transaction services to Web merchants to accept and process real-time payments for goods and services purchased over the Internet. A Web merchant can outsource its payment processing through iBill.
i-Escrow (www.i-escrow.com)–This is an Internet-based escrow service.
Kagi (www.kagi.com)–This site acts as a payment intermediary for any vendor interested in such a service. Kagi provides customer order processing capability, including securing payment using a credit card. Upon completion of the purchase information, Kagi confirms the order to both the buyer and seller.
O-Card (www.orbiscom.com)–This card provides a variation on the normal credit card-based payment scheme. O-Card is a single-use payment card established at the time of completing a transaction. The use of O-Card requires connectivity to the payer's bank. On receipt of a transaction request, the participating bank returns an O-Card with a unique account number for that transaction.
LEGAL AND REGULATORY IMPLICATIONS
A discussion of electronic money and payment systems would be incomplete without a brief examination of their legal and regulatory impact. Since a full treatment of this topic is beyond the scope of this chapter, we highlight some of the major issues in this section.
Can any business entity create electronic money? As the current state of electronic commerce stands, this issue is quite moot since no organisation has yet created electronic money. All devices supporting electronic money (smart cards, prepaid cards, phone cards, etc.) and all systems supporting electronic money transactions are firmly based on national currencies. But is this situation likely to continue in the future? We do not know the answer to this question at the current time. However, it is conceivable that, sometime in the future, entrepreneurs may attempt to create their own brand of money. In fact, the operation of eGold may be considered an early but 'very conservative' example of the creation of a new monetary unit. Let us suppose that a business entity other than the central bank of a national government attempts to issue electronic money in some form. Would this be allowed? Even though the answer to this question is still being debated, several attempts have been made to address this policy issue. Early thinking in the United States is captured in a 1997 report of the President's Information Infrastructure Task Force (IITF):
• The Internet should develop as a market-driven arena (and) not a regulated industry. Governments should encourage self-regulation and private sector leadership.
• Government should refrain from imposing new and unnecessary regulations, bureaucratic procedures or new taxes and tariffs on commercial activities that take place via the Internet.
• The commercial and technological environment for electronic payments is changing rapidly, making it difficult to develop policy that is both timely and appropriate. For these reasons, inflexible and highly prescriptive regulations and rules are inappropriate and potentially harmful. In the near term, case-by-case monitoring of electronic payments is preferable to regulation.
The European Central Bank has also studied the issue of electronic money and its policy and regulatory impacts. Its 1998 report concludes the following:
• The issuance of electronic money is likely to have significant implications for monetary policy in the future. Above all, it must be ensured that price stability and the unit of account function of money are not endangered. A significant development of electronic money could have implications for the monetary policy strategy and the control of the operational target.
• A number of additional regulatory concerns, i.e., the efficient functioning of payment systems and confidence in payment instruments, the protection of customers and merchants, the stability of financial markets and protection against criminal abuse, also have to be taken into account.
• Clear rules on the conditions under which electronic money can be issued need to be established. The following minimum requirements must be fulfilled:
  • Issuers of electronic money must be subject to prudential supervision.
  • The rights and obligations of the respective participants (customers, merchants, issuers and operators) in an electronic money scheme must be clearly defined and disclosed.
  • Electronic money schemes must maintain adequate technical, organisational and procedural safeguards to prevent, contain and detect threats to the security of the scheme, particularly the threat of counterfeiting.
  • Protection against criminal abuse, such as money laundering, must be taken into account when designing and implementing electronic money schemes.
  • Electronic money schemes must supply the central bank in each relevant country with whatever information may be required for the purposes of monetary policy.
  • Issuers of electronic money must be legally obliged to redeem electronic money against central bank money at par at the request of the holder of the electronic money.
  • The possibility must exist for central banks to impose reserve requirements on all issuers of electronic money.
The policy papers from the IITF and the European Central Bank give us a good foundation on the regulatory and legal issues that may arise out of the private issuance of electronic money. There are other issues as well. For example, how does the volume of electronic money affect the total volume of money in circulation in an economy, and what impact will this have on the price structure? Answers to such questions may emerge in the future as the use of electronic money and payment systems matures and becomes commonplace.
SUMMARY
We reviewed the basic requirements of money, namely, serving as a medium of exchange, possessing a standard stored value and being durable. Next, we reviewed the basic requirements of a transaction, namely, atomicity, anonymity, durability and non-repudiability. We then reviewed the basic requirements of a payment system, namely, identifiability, consistency, scalability and interoperability. We believe electronic payment systems will mature as the underlying technology advances, and these systems will gain efficiencies comparable to current cash and notational systems. We also saw the use of smart cards in many types of electronic payment systems. These systems will mature by following one of a few emerging standards, such as CEPS for large payments or the Common Markup for micropayment links. Security of transactions, currently based on SET or SSL, will also continue to be perfected by further advances in cryptography and digital certification. We also reviewed several existing commercial smart card systems, electronic check processing systems, electronic micropayment systems and Internet-based money remittance systems. While the variety of these systems can benefit the public through competition, it also highlights the need for standardization. Finally, we examined the legal and regulatory implications of electronic money. Although no one has yet created electronic money as such, it will very likely be created in the near future.
REFERENCES
Camp, L. J., Marvin, S. and Tygar, J. D. (1999). Token and Notational Money in Electronic Commerce. http://www.cs.cmu.edu/afs/cs/academic/class/15712s99/www/summaries/Camp95.html.
CEPSCO. (2000). CEPS Business Requirements. http://www.cepsco.com.
Flinn, P. J. and Jordan III, J. M. (1997). Using the RSA Algorithm for Encryption and Digital Signatures: Can You Encrypt, Decrypt, Sign and Verify without Infringing the RSA Patent? http://www.cyberlaw.com/rsa.html.
A Framework for Global Electronic Commerce–A Report from The President's Information Infrastructure Task Force. (1997). July. http://www.iitf.nist.gov/eleccomm/ecomm.htm.
Garceau, L., Matos, V. and Misra, S. (1998). The use of electronic money in electronic commerce transactions. IS Audit Control Journal, 3, 14-24.
Gill, J. (year). The use of electronic purses by disabled people. What are the needs? http://www.tiresias.org/epurse/.
Greenspan, A. (1996). Regulating electronic money. Presented at the U.S. Treasury Conference on Electronic Money and Banking: The Role of Government, Washington, DC, September 19. http://www.cato.org/pubs/policy_report/cpr-19n2-1.html.
Greenstein, M. and Feinman, T. M. (2000). Electronic Commerce: Security, Risk Management and Control. New York: Irwin/McGraw-Hill.
Implications for Central Banks of the Development of Electronic Money. (1996). Bank for International Settlements, Basel, Switzerland, October.
Le Tocq, C. and Young, S. (1998). SET comparative performance analysis. Gartner Group, November 2.
Levy, S. (1994). E-money (That's what I want). Wired Magazine, 2, December.
Micropayments Overview. W3C Working Group. http://www.w3.org/ECommerce/Micropayments/.
Mondex. (2002). www.mondex.com or www.mondexinternational.com.
Proton. (2002). www.protonworld.com.
Report on Electronic Money. (1998). August. European Central Bank. http://www.ecb.int.
SET Secure Electronic Transaction Specification, Version 1.0. (1997). May. http://www.setco.org.
Survey of Electronic Money Developments. (2002). Bank for International Settlements, Basel, Switzerland. http://www.bis.org.
Wade, C. (1999). eCheck: An Overview and Explanation of Security Measures. September 22. www.fstc.org.
OTHER REFERENCES AND READINGS
Bernkopf, M. Electronic Cash and Monetary Policy. http://www.firstmonday.dk/issues/issue1/ecash/–Bernkopf served at the Open Market Operations
Department of the Federal Reserve Bank of New York and in the research unit of the White House Office of Communications.
Checkfree Corporation's Electronic Payment Services–http://www.checkfree.com.
Clarke, R. Net-Based Payment Schemes. http://www.anu.edu.au/people/Roger.Clarke/EC/EPMEPM.html–This document is concerned with a particular class of payment mechanisms, viz. those that support circumstances in which the payer is not present at the point of sale or service, but does have electronic communications facilities available.
Clickshare–http://www.clickshare.com/home/–A service that tracks movements and settles charges for digital transactions–down to as little as a dime per query–as users browse the Web.
Coyle, D. Virtual money can take on the central banks. The Independent, 12 September 2000–http://www.independent.co.uk/news/Digital/Update/2000-09/monetary120900.shtml.
Cybank–http://cybank.net/–This system was developed by Oxford Media, which is based in Malaysia. It offers a means whereby any Web site can sell items and receive immediate payment from anyone in the world. Suggested applications include selling pay-per-view access to Web pages, online magazines, games, finance services, gambling, entertainment, etc.
eCoin–http://www.ecoin.net/–A token-based micropayment system.
The Electronic Frontier Foundation's Online Commerce and Digital Money Archive–http://www.eff.org/pub/Privacy/Digital_money/.
E-Gold–http://www.e-gold.com/–Electronic money 100% backed by gold!
Electronic Purse links–http://csecmc1.vub.ac.be/cfec/purses.htm.
EU Financial Issues Working Group–http://europa.eu.int/ISPO/fiwg/–The FIWG is part of the Electronic Commerce actions of the European Commission. Its overall purpose is to stimulate the development and deployment of innovative payment systems and transaction mechanisms within the European Union. The site also provides an open forum for individuals to discuss e-finance.
Financial Transaction Models in the Electronic World–http://www.hpl.hp.co.uk/projects/vishnu/main.html–A project of Hewlett-Packard Laboratories, Bristol.
The Future of Money in the Information Age–http://www.cato.org/moneyconf/money14.html–Papers delivered at the Cato Institute's (http://www.cato.org/) 14th Annual Monetary Conference, May 23, 1996. The papers are also available as a book (http://www.cato.org/pubs/books/money/tableof.htm). Digital Money: A divine gift or Satan's malicious tool?
G-10 Working Party Releases Study On Key E-Money Issues–http://www.ustreas.gov/press/releases/pr1674.htm–Details of a report by the Deputies of the G-10 finance ministers and central bank governors that outlines a broad consensus of their Working Party on Electronic Money regarding key considerations that should help guide national approaches to emerging electronic money technologies.
Grigg, I. The Effect of Internet Value Transfer Systems on Monetary Policy. http://www.systemics.com/docs/papers/monpol.html–This paper argues that, in actuality, Internet cash issuance will not be a strong force, neither against the tools of monetary policy, nor for its own mercantile purposes.
Matonis, J. W. Digital Cash & Monetary Freedom. http://www.isoc.org/HMP/PAPER/136/html/paper.html.
Mondex: Electronic cash on a smartcard. http://195.157.97.145:8016/.
Mondex in Japan: can e-money succeed?, Noriko Takezaki–http://www.cjmag.co.jp/magazine/issues/1999/July99/mondex.html–Mondex International's long-time efforts to bring electronic money to Japan may finally be paying off.
Multibanco Electronic Purse (PMB)–http://www.sibs.pt/eng/porta_moedas.html–Brief information about a Portuguese system aimed at replacing small cash payments that was introduced experimentally in 1994 and has been in normal use since 1995.
NetBill–http://www.ini.cmu.edu/netbill/–Acts like an electronic credit card service to provide financial services in support of electronic commerce. The NetBill electronic commerce mechanisms will be used by CMU's Informedia Digital Library Project and by the Networked Multimedia Information Services project, and to charge for information delivered via the World Wide Web.
NetCheque–http://www.isi.edu/gost/info/NetCheque/–An electronic payment system for the Internet developed at the Information Sciences Institute of the University of Southern California.
Oakington Corporation: Transact electronic currencies–http://www.oakington.com/–Online technology that makes it not just possible but easy to issue and use electronic currencies.
Orlin Grabbe, J. The End of Ordinary Money, Parts 1 and 2. http://www.aci.net/kalliste/money1.htm, http://www.aci.net/kalliste/money2.htm.
PayPal–http://www.paypal.com.
Rahn, R. W. The End of Money and the Struggle for Financial Privacy. http://www.endofmoney.com/–A Web site based on the book of the same title.
Sifers, R. W. Regulating Electronic Money in Small-Value Payment Systems–http://taxi-l.org/emoney.htm–An article that originally appeared in the Federal Communications Law Journal, April 1997. Sifers urges that telecommunications law should be taken as a regulatory model.
Smart Card Forum–http://www.smartcardforum.org/–This forum was founded to accelerate the widespread acceptance of multiple application smart card technology by bringing together leading users and technologists from both the public and private sectors.
Stored-Value Cards–http://minneapolisfed.org/sylloge/cbo2.html–Chapter 2 of a US Congressional Budget Office study of Emerging Electronic Methods for Making Retail Payments.
Smartshield–http://www.megsinet.net/~jeffp/–This is a magnetically shielded cardholder which is claimed to protect contactless smartcards from surreptitious, wireless access by hackers, electronic pickpockets or the government.
Szabo, N. http://www.best.com/~szabo/–Articles on e-money, smart contracts, etc. by a member of the DigiCash team.
Turk, G. Money and Currency in the 21st Century–http://www.goldmoney.com/futuremoney.html–What are these brand-new forms of payment? Who will use them? And most importantly, which of the emerging electronic money technologies will survive into the next century?
Van Hove, L. A selected bibliography on electronic purses–http://cfec.vub.ac.be/cfec/purses.htm–It contains links to the full text of quite a lot of the articles.
Chapter XIV
A Managerial Perspective on E-Commerce: Adoption, Diffusion and Culture Issues
Thuong T. Le, S. Subba Rao and Dothang Truong
University of Toledo, USA
ABSTRACT
From a trading perspective, Internet-based e-commerce (electronic commerce), the most widely known form of e-commerce, refers to the exchange of products and services via electronic networks that may include value-added networks (VANs), the Internet, corporate intranets and extranets. E-commerce can be described and discussed from a strategic perspective using the ICDT framework (information exchange (I), communication (C), distribution (D) and transaction (T)) as well as the Four Layer framework. This chapter aims to discuss the issues of adoption and diffusion of e-commerce at the macro level in relation to the Four Layer and the ICDT frameworks. The influences of cultural differences among countries on diffusion and success of e-commerce are briefly discussed in relation to organisational models of national cultures. Future areas of research are pointed out at the end.
Copyright © 2003, Idea Group, Inc.
INTRODUCTION
Electronic commerce (or e-commerce), defined somewhat narrowly here as the exchange of information, goods, services and payments over the Internet, is projected to rocket from $657 billion in 2000 to $6.8 trillion in 2004 (Forrester Research, 2000). It has until now been concentrated mainly in North America, but it is expected to reach the threshold level for hyper-growth in Japan and key European economies by the beginning of the new millennium or shortly thereafter (The Economist, 1999). Table 1 shows the e-commerce growth projection worldwide, classified by regions and countries. North America leads the rest of the world, but the projected growth rates for Europe and Asia are remarkably higher than for North America.

Table 1: Worldwide e-commerce growth (Source: Forrester Research, 2000)

                       Year 2000   Year 2001   Year 2002   Year 2003   Year 2004
Total (billions US$)   $   657.0   $ 1,233.6   $ 2,231.2   $ 3,979.7   $ 6,789.8
North America          $   509.3   $   908.6   $ 1,495.2   $ 2,339.0   $ 3,456.4
Asia Pacific           $    53.7   $   117.2   $   286.6   $   724.2   $ 1,649.8
Western Europe         $    87.4   $   194.8   $   422.1   $   853.3   $ 1,553.2
Latin America          $     3.6   $     6.8   $    13.7   $    31.8   $    81.8
Rest of World          $     3.2   $     6.2   $    13.5   $    31.5   $    68.6

Table 1 is an indication of the proliferation of e-commerce among the economies of regions of the world. The proliferation varies significantly among the regions, and even more significantly among the economies within a region. South Korea, for example, whose economy is trade oriented, and whose firms are well placed in the global supply chains of key industries such as electronics and automotive, was expected to reach e-commerce hyper-growth in 2002. Meanwhile, India is held back, probably until 2006, by its restrictive policies on trade and foreign investments, its underdeveloped telecommunication system and low PC (personal computer) penetration (Forrester Research, 2000).
Opinions differ as to whether e-commerce will widen or narrow the development gap between richer and poorer economies (PricewaterhouseCoopers, 2000). It can be an "equalizer" that provides some emerging economies and their firms a means to leapfrog into a knowledge-based economy, or at least to strengthen their competitive posture. Yet, it can leave others further behind for their inability to invest in costly, fast-changing information technology and/or to adapt to the complexity of competing in highly interconnected markets and industries.
The importance of e-commerce goes far beyond conducting transactions online. E-commerce is about defining and implementing a meaningful Web presence
that opens up the firm to its partners, suppliers and customers (openness); connects them as an extended enterprise through electronic media (interconnectivity); and aligns people, processes and technology to offer a new value proposition (realignment) (Andersen Consulting, 1998). It represents a new way to conduct business that enables forward-looking, fast-moving firms to seize emerging market opportunities, leapfrog competition and redraw the competitive landscape to their advantage. It is essentially a new management mindset. There are significant issues to be considered and managed in the adoption and diffusion of e-commerce. In addition to the usual economic and human capital (resource) issues, cultural and national differences play a part in the adoption and diffusion of e-commerce. It is the purpose of this chapter to describe and discuss the issues of adoption and diffusion of e-commerce from a macro perspective as well as the impact of cultural issues on e-commerce. In the next few sections we describe briefly the e-commerce concept and development, the issues of adoption and diffusion of e-commerce from a macro perspective, and cultural issues related to adoption and diffusion from a national perspective.
E-COMMERCE CONCEPT AND DEVELOPMENT
From a "trading" perspective, e-commerce refers to the exchange of products and services via electronic networks that may include value-added networks (VANs), the Internet, corporate intranets and extranets. The first network type has been the backbone for an older form of e-commerce, known as Electronic Data Interchange (EDI). The high cost and technical limitations of EDI confine its adoption to very large firms and its applications mainly to the automated processing of common documents in routine business transactions (e.g., purchase orders, shipment notices, commercial invoices, etc.). After several decades of existence, it barely penetrates three percent of firms in North America and not even half of that elsewhere; its future growth is expected to be very modest (Boston Consulting Group, 1999). In contrast, the low cost, easy access and open standards of the Internet make it an ideal platform for e-commerce. Internet-based e-commerce, hereafter referred to simply as e-commerce, has an invaluable trading capability that EDI lacks. It offers virtual marketplaces where buyers and sellers can seek out each other, products and services can be searched and compared, prices can be set dynamically and interactions can be in real time; by comparison, EDI requires pre-established relationships between the two trading partners, pre-determined prices and batch data processing.
A transaction is only one step in the business processes that link buyers and sellers. From "information exchange" and "activity" perspectives, e-commerce encompasses a wide range of pre- and post-transaction exchanges that facilitate seller and buyer discovery, product and service search, payment settlement, order fulfillment and customer care (Maira and Taylor, 1999). Not all of these have to, or can, be conducted online (e.g., a consumer shops at a retailer's Website but makes an actual purchase at its physical store). From an "effect" perspective, e-commerce is not just
about exchanges, but also about defining and implementing a meaningful presence in the virtual marketplace that opens the firm to its partners, suppliers and customers, bringing about greater cost efficiency, extended reach, enhanced customer satisfaction and channel dis- (re-) intermediation, among others (Timmers, 1999). Ultimately, from a "value chain" perspective, it connects human performance, business processes and technology to offer new value propositions (Wigand, 1997).
The development of e-commerce has evolved through distinct phases, as shown in the figure below. These phases are brochureware, interaction, e-commerce, c-commerce and e-business (Le and Koh, 2001). The rise of business-to-business (B2B) e-commerce, which places a premium on relationship building and inter-firm workflow coordination (not just transaction automation), underscores the importance of process integration throughout the value chain, "from search to select, to order and delivery, to settlement and accounting–providing complete customer care throughout the process" (Kalakota, 2000). The focus begins to shift toward what is termed collaborative (c-) commerce (Phillips and Meeker, 2000). Whereas e-commerce focuses on transactions, c-commerce "mandates accessibility and visibility to supply chain information" (Scheller and Creech, 2000) and acts as a "conduit for virtual collaborations (between trading partners) across a wide range of business processes" (Genovese, 2000). Eventually, in many industries, online business will become inseparable from "old line" business. Competing successfully in that environment necessitates redefining customer relationships, reconfiguring value chain activities, reengineering business processes and reinventing business models or, in short, totally transforming business operations and the enterprise through the use of Internet technologies (Callahan and Pasternack, 1999). The scope of e-business transformation encompasses not only e-commerce but also customer relationship management (CRM) and supply chain management (SCM) (Gartner Group, 1998).
Figure 1: Development of e-commerce. The figure depicts five phases along an evolution-to-revolution continuum, with a "We are here" marker dated June 2001: brochureware (static corporate and product information; one-way, broadcast communication); interaction (real-time, two-way communication; personalization features; capture of customer profiles); e-commerce (online transactions; no or few linkages to back-office functions and other business processes; led initially by the B2C sector); c-commerce (intra- and inter-firm collaboration; business process integration; dominated by the B2B sector); and e-business (online business inseparable from "old line" business; e-commerce + CRM + SCM; transformation of the enterprise).
ADOPTION
The development of e-commerce is essentially a collective state of adoption and diffusion of e-commerce among individual firms in an economy. Adoption refers to the firm's decision to provide the mandate and resources for changes, and that in turn reflects its strategic intent to take advantage of the interconnectivity and interactivity among participants. Contrary to a widespread perception, e-commerce has not rendered strategy obsolete. If anything, e-commerce tends to weaken existing entry barriers and, hence, industry profitability, so that it is more important than ever for the firm to distinguish itself through strategy (Porter, 2001).
In e-commerce, digital information substitutes totally or partially for physical products and services as the content. Online experience and collaborations, rather than brick-and-mortar storefronts, provide the context. Market infrastructure (i.e., distribution channels and logistics functions) migrates to computer networks. In the marketspace, these three elements–content, context and infrastructure–are no longer inseparable from one another as they are in the physical marketplace. Information technology adds or alters the content, changes the context of the interaction, and enables the delivery of varied content and a variety of contexts over different infrastructures (Rayport and Sviokla, 1994). E-commerce therefore has to function not only as a low-cost channel for the retrieval and distribution of company and product information, but also as a new platform for relationship building, revenue generation and market development. Yet most firms have used e-commerce without considering its strategic role (McBride, 1997).
E-commerce has been discussed in three different strategic frameworks: those of Ho (1997), Schubert and Selz (1997), and Dutta, Kwan and Segev (1998). According to Ho (1997), businesses treated e-commerce with respect to three primary purposes–promotion of products and services, provision of data and information, and processing of transactions. Noting the lack of depth in Ho's framework, Schubert and Selz (1997) used a traditional three-phased transaction framework–information, agreement and settlement–and added a fourth phase–community–to reflect the interactive nature of e-commerce. Dutta, Kwan and Segev (1998) adopted the time-tested 4Ps framework of marketing (product, price, place and promotion) and added customer relationships and Internet technology to form a six-dimensional framework they termed marketspace. The three frameworks share one common weakness: they lack a perspective on the maturity of e-commerce. This perspective is integrated into the ICDT framework (Angehrn, 1997; Angehrn and Myer, 1997) that views e-commerce as taking place not in a single marketspace, but rather in a series of "spaces" for information exchange (I), communication (C), distribution (D) and transactions (T). Firms that adopt e-commerce as a virtual information space (VIS) treat it as an online billboard, mainly to display brochures and catalogs, and to broadcast advertising messages like an extension of traditional marketing communication tools. Those treating e-commerce as a virtual communication space (VCS) use it for building relationships, facilitating collaborations and networking with customers, suppliers,
and partners. Those using a virtual distribution space (VDS) or virtual transaction space (VTS) exploit electronic networks to deliver greater customer value, displace existing intermediaries and/or introduce new business formats. A firm's presence in any of these ICDT spaces can range from technically simple to sophisticated, and from generic to customized. It is likely to progress through these spaces in sequence. Using the logic of Quelch and Klein (1996), the sequence can be either "information-to-transaction" or "transaction-to-information." Established multinational enterprises are likely to adopt the former sequence as they face an immediate need to provide information and to communicate with their existing customers. Internet startup firms, on the other hand, have to begin with distribution and transaction, and later use e-commerce for building brand image, providing product support and winning repeat purchases.
The firm's ability to pursue its strategic e-commerce intent is facilitated and constrained by the broader macroeconomic environment, or what is often referred to as the Internet economy. A recent study by the University of Texas at Austin shows the Internet economy as comprising four layers. The top two layers are widely known as e-commerce. At the top is the "Internet Commerce" layer that involves the sales of products and services to businesses and consumers over the Internet by companies such as Amazon (books, initially), Dell (microcomputers) and Expedia (travel services). Supporting it is the "Internet Intermediary" layer that increases the efficiency of e-commerce by facilitating the meeting and interaction of buyers and sellers over the Internet. This second layer consists of a variety of intermediaries such as content aggregators (e.g., ZDnet and Emarketer.com), portals (e.g., Yahoo and Excite), vertical market makers (e.g., ChemNet), auction sites (e.g., eBay and OnSale) and so forth. Two further layers must support these two: "Internet Applications" and "Internet Infrastructure." The "Internet Applications" layer provides computer software applications (e.g., Microsoft, Netscape and Macromedia) and e-commerce consulting services (e.g., Forrester Research, IDC and USWeb) that make it technologically feasible to conduct business online. The bottom layer–"Internet Infrastructure"–provides the network backbone and local access that carry the e-commerce information flows (e.g., MCI Worldcom) (Internet Indicators, 2001). This approach to describing e-commerce is referred to as the Four Layer framework.
The Internet economy is projected to produce $830 billion in revenues in 2000, a 58 percent increase over 1999. The revenues of the four Internet economy layers in the first two quarters of 2000 grew significantly over the corresponding quarters of 1999. The Internet Commerce layer generated more than $127 billion in revenues in the first half of 2000, growing 11 percent between the first and second quarters of 2000; the Internet Intermediary layer grew an impressive 34.5 percent, generating almost $64 billion in revenues in the first half of the year; the Internet Applications layer grew 14.7 percent, generating $72.8 billion in revenues; and the Internet Infrastructure layer generated $142.8 billion in revenues in the first half of the year 2000, growing 11.2 percent between the first and second quarters. Thus, the study documents impressive revenue growth for the emerging Internet economy. Despite the
failure of some highly publicized dot com companies during the last two quarters (dot com companies are a very small part–about 9.6%–of the overall Internet economy), this new economy keeps growing rapidly and continues to create opportunities for all types of companies (Internet Indicators, 2001).
At the macroeconomic level, the development of the top (Internet Commerce) layer depends on that of the Intermediary and Applications layers, in addition to the development of the Internet infrastructure. In the business-to-business e-commerce sector, for example, a majority of firms have a strong interest in bringing their procurement operations online, but have taken a wait-and-see approach toward participating in e-marketplaces, partly due to the limited functionalities being offered by these intermediaries (Deloitte Consulting, 2001). Nearly half of these e-marketplaces still lack dynamic trading capabilities (auctions and bid-ask pricing mechanisms), which are essentially basic transaction capabilities; they are a long way off from offering collaborative capabilities that would enable their customers–business buyers and suppliers–to move their existing business relationships online (Kearney, 2000). The latter capabilities depend on the availability of integration software applications that are currently still under development.
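For readers who prefer to see the Four Layer framework laid out explicitly, the fragment below restates it as a simple data structure. It only collects the example companies and first-half-2000 revenue figures already quoted above; the field names are ours and carry no significance beyond this illustration.

```python
# Illustrative restatement of the Four Layer framework of the Internet economy.
# Example companies and first-half-2000 revenues (billions of US$) are those
# quoted in the text; the field names are our own.
FOUR_LAYERS = [
    {"layer": "Internet Commerce",      "role": "online sales of products and services",
     "examples": ["Amazon", "Dell", "Expedia"],                   "h1_2000_revenue_bn": 127.0},
    {"layer": "Internet Intermediary",  "role": "matching and interaction of buyers and sellers",
     "examples": ["Yahoo", "eBay", "ChemNet"],                    "h1_2000_revenue_bn": 64.0},
    {"layer": "Internet Applications",  "role": "software and consulting that enable online business",
     "examples": ["Microsoft", "Netscape", "Forrester Research"], "h1_2000_revenue_bn": 72.8},
    {"layer": "Internet Infrastructure","role": "network backbone and local access",
     "examples": ["MCI Worldcom"],                                "h1_2000_revenue_bn": 142.8},
]

# The top two layers (commonly called e-commerce) rest on the bottom two.
for entry in FOUR_LAYERS:
    print(f'{entry["layer"]}: {entry["role"]} (e.g., {", ".join(entry["examples"])})')
```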
Some Issues of E-Commerce Adoption
While many of the adoption issues are at the firm level, a few are macro-level issues. These macro-level issues in the adoption of e-commerce relate to environmental factors, service infrastructure, security, legal issues and taxation. A brief description of these follows.
Environmental factors are those changes in the business environment that create threats as well as opportunities for an organisation and are usually beyond the control of management. They include the intensity of competition in the industry, the information intensity of the industry, and government support and policies for e-commerce adoption (Teo et al., 1998).
Government entities are among the most powerful institutional forces affecting innovation. Government policies that enhance, or appear to enhance, the ability of the firm to compete in the marketplace have a strong positive influence on e-commerce development strategy at the corporate level. These policies include direct research and development (R&D) funding, agency-level research policy, investment tax credits, industrial policy and R&D tax credits (Teo et al., 1998).
E-commerce needs a variety of services to support all potential functions, activities, requirements and applications. These services need a coherent infrastructure to ensure integrity, uniformity, efficiency and effectiveness. The infrastructures may include infrastructures for public keys; for payment and banking; for information services that organise, search, retrieve, filter and summarize information; and for processing business-to-business transactions, sharing supplier-catalog information and supply-chain coordination (Shaw, 1999).
Ensuring security for e-commerce is a fundamental prerequisite before any commercial activities involving sensitive information can take place. The principles of security for e-commerce over an open network are based on the five requirements
that rely heavily on each other: (1) authentication, (2) authorization, (3) confidentiality, (4) integrity and (5) nonrepudiation of origin. These requirements pose the greatest design challenges for secure e-commerce systems; the challenges lie in the formulation, specification and enforcement of comprehensive data protection policies to meet these requirements (Adam et al., 2000).
The era of e-commerce will bring about greater use of electronic documents as the substitute for traditional paper-based documents. This shift requires the development of a new framework of legal precedent (Shaw, 1999). The adoption of e-commerce involves many legal issues, including contractual settlement, privacy, intellectual property, free speech, consumer protection and other issues such as jurisdiction over trades, encryption policies and Internet gambling (Shim et al., 2000; Adam et al., 2000; Turban et al., 2000).
Taxing e-commerce is an important issue that requires the consideration of organisations due to the large volume of trade forecast for the next decade. Because of the enormous commercial potential, state and local governments are examining ways to tax both ISPs (Internet Service Providers) and transactions occurring in cyberspace. Applying existing law to new mediums of exchange is far more difficult than ever imagined. The area of taxing e-commerce is ambiguous, confusing and unsettled, potentially involving several tax jurisdictions, at both domestic and international levels, in a single transaction (Shim et al., 2000; Turban et al., 2000).
DIFFUSION
Diffusion refers to the spreading (proliferation) of an innovation within and across organisations. It is the process by which an innovation is communicated through certain channels over time among the members of a social system. The social system consists of a set of entities that are constituted according to an organising principle, with definite boundaries and interrelationships among them, and with stated social (societal) goals to achieve. Thus, the social system consists of individuals, organisations or agencies that share a common culture and are potential adopters of an innovation (Rogers, 1983). Examples of interrelationships among the entities are inter-firm, inter-industry and inter-national relationships, as well as the channels of communication among these entities. In addition to the social system, three other key elements determine the characteristics of the diffusion process of an innovation: the innovation itself, time and communication channels (Rogers, 1983; Mahajan & Peterson, 1985). An innovation is any idea, object or practice that is perceived as new by the members of a social system. Time relates to the time period over which the diffusion of the innovation or idea takes place. An important characteristic of the time dimension is the rate at which the innovation is diffused, or the relative speed with which it is adopted by members of the social system. Communication channels are the means by which the innovation and/or information is transmitted to or within the social system (Hu et al., 1997).
In the context of e-commerce diffusion, the social system refers to a set of organisations, firms in an industry (and the customers and/or suppliers of the
organisations), industries in an economy, and economies in a regional economy leveraging e-commerce to achieve their mission. Innovation refers to Internet-based e-commerce and c-commerce, and their associated software and hardware, having significant impact on the adopting organisations and their customers/suppliers, as well as on national economies. Channels of communication are described by horizontal channels (e.g., direct interpersonal contacts, indirect observations within the e-commerce user community) and vertical channels (e.g., interaction with outside agents, promotional efforts by the e-commerce vendor, etc.). These channels could be physical, electronic and informational. Finally, time refers to the time period of e-commerce diffusion under study and the rate of diffusion (relative speed) (Raman and Raisinghani, 2000).
An understanding of these elements is needed in tracing the diffusion of e-commerce within and across organisations. Since e-commerce crosses national borders, an additional element in understanding diffusion across national borders will be governmental policies. All these elements will have substantive influences on the diffusion of e-commerce.
The diffusion of e-commerce can be discussed in qualitative as well as quantitative terms. Two aspects of the diffusion process to be considered are: i) the extent and ii) the rate. The former depends upon the latter. The extent is the proportion of the population adopting e-commerce at a given time. Another way to describe extent is the time it takes for the entire population to adopt e-commerce. The rate of diffusion depends on the type of e-commerce application, the channels of communication and their characteristics, and any internal or external driving or opposing forces. Quantitatively, diffusion models of e-commerce describe the trajectory of the diffusion process over time, so that one can mathematically predict the number of adopters by a certain time (a simple illustration appears at the end of this section).
Diffusion of e-commerce can be discussed in the context of the Four Layer framework, in which the spread of Internet Commerce, involving the online transactions of products and services to businesses and consumers, is supported by the other three layers. Internet Applications and Internet Infrastructure influence the e-commerce diffusion speed (rate) as supporting infrastructure, providing the network backbone, computer software applications and e-commerce consulting services; they help businesses carry e-commerce information flows and make conducting business online more technologically flexible. In addition to this supporting infrastructure, the Internet Intermediary layer substantively influences the diffusion of e-commerce by expanding the relationships between entities. The Internet Intermediary layer can facilitate the interaction between buyers and sellers over the Internet and, accordingly, increase the efficiency of e-commerce across organisations.
Understanding the nature of the diffusion process is important for directly planning and guiding the proliferation of e-commerce in national economies and across countries. Diffusion models help planners to forecast the diffusion process and then to allocate resources to achieve planned growth in e-commerce (Raman & Raisinghani, 2000). Diffusion speed (rate) and relationships among the entities are the key issues, and these are influenced by the nature of the economic system, industry
structure, governmental policies, supporting infrastructure as well as availability and commitment of resources (both public and private).
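As a concrete illustration of the quantitative view of diffusion mentioned above, the sketch below uses the classic Bass diffusion model, in which cumulative adoption is driven by a coefficient of innovation p (external influence, such as vendor promotion) and a coefficient of imitation q (internal influence, such as word of mouth). The chapter does not prescribe a particular model, and the parameter values here are hypothetical, chosen only to show how planners might project the number of adopters by a given time.

```python
# Illustrative Bass diffusion model: cumulative fraction of adopters at time t.
# The coefficient values below are hypothetical, not estimates for e-commerce.
import math

def bass_cumulative(t: float, p: float, q: float) -> float:
    """Cumulative fraction of the potential market that has adopted by time t."""
    e = math.exp(-(p + q) * t)
    return (1.0 - e) / (1.0 + (q / p) * e)

def adopters_by_year(market_size: int, p: float = 0.03, q: float = 0.38, years: int = 10):
    """Predicted cumulative number of adopters for each year."""
    return [round(market_size * bass_cumulative(t, p, q)) for t in range(1, years + 1)]

# Example: a hypothetical potential population of 10,000 firms
print(adopters_by_year(10_000))
```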
THE ROLE OF CULTURE
It has been demonstrated that the development of technological infrastructure (on which e-commerce depends) is affected by cultural ideologies. In other words, technology and culture exhibit "contextual interactions" in which cultural and political forces can affect innovation and the further use of technologies, and vice versa (Barbour, 1991; Rudraswamy & Vance, 2001). It can also be argued here that the impact of culture extends beyond technology adoption. Culture has unique values that guide other business and economic behaviors shaping all four layers of the Internet economy.
Hofstede (1991) defines culture as the collective programming of the mind that distinguishes members of one society from another. He proposes a model of national culture that comprises five dimensions: individualism-collectivism, masculinity-femininity, power distance, uncertainty avoidance and time orientation. First, in individualistic societies (e.g., the USA), people are expected to look after themselves and their immediate families; by contrast, in collective societies (e.g., China) people are integrated into strong cohesive groups from cradle to grave. Second, the gender dimension extends from masculine societies (e.g., Latin America) that tend to reserve high-status positions for men, to feminine societies where the distribution of roles is more equal between the two genders. Third, power distance defines the extent to which the less powerful members of an institution expect and accept an unequal distribution of power. Many societies, Singapore among them, are marked by high power distance. Fourth, uncertainty avoidance reflects the extent to which members of a society feel threatened by uncertain situations. Fifth, the time horizon continuum extends from the long-term orientation of societies such as Japan to the short-term orientation of those such as the USA.
National culture dimensions can be used to differentiate countries based upon various aspects. Among them, power distance and uncertainty avoidance are useful in describing organisations and structures, whereas individualism and masculinity affect thinking about people in organisations (Hofstede, 1991). For the time horizon dimension, it is difficult to identify where to place the different countries on the scale (Garfield and Watson, 1998). Therefore, in investigating the influences of national culture on e-commerce infrastructure, the two appropriate dimensions are power distance and uncertainty avoidance. Based on these two dimensions, four distinct organisational models were identified by Hofstede (1991): the village market, the family, the pyramid of people and the well-oiled machine (Figure 2).
The village market tends to exhibit organisational structures in which the creation or formation of any project is driven by the marketplace. In the village market structure, market conditions, not people or formal rules, dictate what should take place.
Figure 2: Organisational models of different cultures, plotted by power distance (vertical axis, low to high) and uncertainty avoidance (horizontal axis, low to high): the village market (low power distance, low uncertainty avoidance), the well-oiled machine (low power distance, high uncertainty avoidance), the family (high power distance, low uncertainty avoidance) and the pyramid of people (high power distance, high uncertainty avoidance).
take place. Following the study by Garfield and Watson (1998), the United States and the United Kingdom are classified into the village market structure. The family structure aims to protect the head of the family and is less concerned with equality. In the family structure, power is unevenly distributed so that a few people hold most of the power. Organisations tend to be very centralized and guided by visions rather than rules. China and Singapore are examples of the family structure. In the pyramid of people, personal power and formal rules within an organisation play an important role. A good leader in this culture has personal authority and deploys formal rules and regulations to guide his employees. France and Japan are examples of the pyramid of people structure. In the well-oiled machine structure, work processes are well defined, and the role of relationships among people is not an important element. Finland and Germany are well-oiled machine structures (Hofstede, 1991; Garfield and Watson, 1998). Studies have shown that cultural differences impact organisational and economic structures and, by implication, e-commerce infrastructures. Because the bottom two supporting layers of the Four Layer framework of e-commerce are infrastructure layers, countries should design infrastructure policies that are appropriate for their culture if e-commerce is to be successful. Experience in the USA and some East Asian nations to date demonstrates the profound role of culture in the development of e-commerce and the Internet economy. Typical of a society with the "village" cultural trait, the USA has seen its Internet economy driven primarily by market forces; the government's role has been limited to regulatory actions to prevent predatory practices and, hence, preserve market competition. The top two layers of its Internet economy (the e-commerce and Intermediaries layers) have been built first by a wave of entrepreneurial dot com
upstarts seeking golden opportunities in the Cyber Gold Rush, fueled by easy venture capital funding and sky-high stock prices, and second, by cautious responses from brick-and-mortar industry incumbents, which were held back initially by uncertain e-commerce prospects, organisational inertia and concerns over possible conflicts with traditional distribution channels. Meanwhile, the bottom layers (the Physical Infrastructure and Applications layers) have undergone a massive capacity build-out and capability expansion in response to the real or perceived demand explosion to be brought about by the top two layers, and in response to the battle for market domination among the telecommunication network builders, as well as among application developers, such as the "browser war" between Microsoft Internet Explorer and Netscape Navigator. Free from governmental interference, the development of these two bottom layers has also benefited from an ongoing convergence between the information, telecommunications and entertainment industries that opens the way, for example, for cable TV service providers (essentially entertainment network builders) to become Internet service providers (via their broadband cable modem connections) (Beardsley and Evans, 1998), or for entertainment media companies such as Time Warner to bring their rich content to the Internet (via its acquisition by America Online–AOL). The "family" cultural trait of societies such as Singapore or Malaysia, in contrast, has placed the government as the leading player in the development of the bottom two layers of their Internet economies. Driven by a national plan and ambition to develop modern telecommunication networks and software and IT industries, these governments have poured large-scale investments into developing such infrastructures (e.g., the Multimedia Super Corridor project as part of Malaysia's vision of becoming a developed economy by 2020). Despite such relatively developed infrastructures and the availability of other government incentives, the development of their e-commerce and Intermediaries layers has been lackluster for lack of the creative talent and private initiative that are more ample in "village" societies (Le and Koh, 2001; Far Eastern Economic Review, 1999). As the above examples point out, national and cultural differences influence the way diffusion of e-commerce takes place, and how national governments, through their policies, can affect the development of e-commerce layers, especially the bottom two infrastructure layers. Culture plays a significant role in the development of e-commerce. Countries that design e-commerce infrastructure policies that are appropriate for their culture are likely to be more successful in e-commerce. In the organisational model of Figure 2, countries that fall into the same quadrant will be more similar to one another than to those that fall into a different quadrant. A country that looks towards culturally similar countries to learn what works is more likely to create its e-commerce infrastructure successfully. Lessons from the successful e-commerce policies of a culturally dissimilar country may be inappropriate and may mean limited success, if not failure.
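As a small illustration of how the two dimensions of Figure 2 combine to assign a country to one of the four organisational models, the sketch below classifies a country from its power distance and uncertainty avoidance scores. The 50-point threshold and the sample scores are illustrative placeholders only, not Hofstede's published index values.

```python
def organisational_model(power_distance, uncertainty_avoidance, threshold=50):
    """Map two cultural-dimension scores onto the four organisational models of Figure 2."""
    high_pd = power_distance >= threshold
    high_ua = uncertainty_avoidance >= threshold
    if high_pd and high_ua:
        return "pyramid of people"   # e.g., France, Japan
    if high_pd:
        return "family"              # e.g., China, Singapore
    if high_ua:
        return "well-oiled machine"  # e.g., Finland, Germany
    return "village market"          # e.g., USA, UK

# Illustrative (made-up) scores only, to show how the classification works.
sample_scores = {"Country A": (30, 40), "Country B": (75, 20), "Country C": (70, 85)}
for country, (pd, ua) in sample_scores.items():
    print(country, "->", organisational_model(pd, ua))
```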
CONCLUSION
E-commerce is becoming the major form of commerce, especially in B2B, and the future survival or success of organisations may depend on adopting e-commerce. It is projected to expand significantly in coming years, not only in North America but also to reach the threshold level for hyper-growth in Europe and Asia in the next few years. E-commerce can be viewed from "trading," "information exchange," "activity," "effect" and "value-chain" perspectives, as well as progressing through distinct evolutionary phases including brochureware, interaction, e-commerce, c-commerce and e-business. As with any technology, e-commerce needs to be adopted and diffused through organisations, economies, countries and cultures. The issues of adoption and diffusion of e-commerce are discussed in relation to two frameworks: the ICDT (Information–Communication–Distribution–Transaction) framework and the Four Layer framework (Internet economy, Intermediary, Application Infrastructure and Infrastructure). The decision to adopt e-commerce depends on the strategic view of organisations: Information oriented, Communication oriented, Distribution oriented, Transaction oriented or combinations of them. The capability of an organisation to adopt e-commerce depends on its supporting infrastructures and the interaction with customers (the bottom layers of the Four Layer framework). From a macro perspective, some of the external issues to be considered in adopting e-commerce are: environmental factors, service infrastructures, security, legal issues and taxation of e-commerce. The diffusion of e-commerce across organisations and countries is supported by the bottom three layers of the Four Layer framework. Diffusion processes are influenced by the nature of the economic system, industry structure, governmental policies and supporting infrastructure, as well as the availability and commitment of resources. The rate of e-commerce diffusion and the success of e-commerce implementation are influenced by the cultural differences among countries. Four distinct organisational models of cultures were considered: the village market, the family, the pyramid of people and the well-oiled machine. The village market and the family models were illustrated in the context of the Four Layer framework to bring out how national and cultural differences influence the way diffusion of e-commerce takes place. In this chapter we have pointed to the issues of adoption and diffusion of e-commerce at the macro level. These issues need to be addressed in detail in future work. Future research should concentrate on the following: 1) developing models for e-commerce based on the ICDT and Four Layer frameworks, and testing and validating them; 2) developing contingency models for the adoption of e-commerce at the firm level and testing their relationships, which could lead to a better understanding of the adoption process for e-commerce as well as help in planning the implementation of e-commerce projects; 3) investigating models of e-commerce diffusion to understand the quantitative side of e-commerce proliferation; and 4) developing national policies for e-commerce development based on cultural understanding.
REFERENCES Adam, N. R., Dogramaci, O., Gangopadhyay, A. and Yesha, Y. (2000). Electronic Commerce: Technical, Business, and Legal Issues. Upper Saddle River, NJ: Prentice Hall. Andersen Consulting. (1998). What is e-commerce, e-enterprises conduct ecommerce in the e-economy? Commerce Showcase. Angehrn, A. (1997). Designing mature Internet business strategies: The ICDT model. European Management Journal, 15(4), 361-369. Angehrn, A. and Meyer, J. F. (1997). Developing mature Internet strategies: Insights from the banking sector. Information Systems Management, Summer, 37-43. Barbour, I. (1991). Ethics in an age of technology. The Gifford Lecture. New York: HarperCollins. Beardsley, S. C. and Evans, A. L. (1998). Who will connect you? The McKinsey Quarterly, 4, 18-31. Boston Consulting Group. (1999). The business-to-business e-commerce markets. BCG Research Bulletin. Available on the World Wide Web at: http:// www.bcg.com/practice/btb_ecommerce_bulletin.asp. Callahan, C. and Pastemack, B. (1999). Corporate strategy in the digital age. Strategy and Business, 15, (Quarter 2), 1-5. Available on the World Wide Web at: http://www.strategy-business.com/pdf/099202.pdf. Deloitte Consulting. (2001). Realizing the B2B procurement vision. Trends, challenges, and best practices in e-sourcing and e-procurement. The 2nd Annual Survey. Available on the World Wide Web at: http://www.dc.com/obx/library/ pdf/b2bprocurement.pdf. Dutta, S., Kwan, S. and Segev, A. (1998). Business transformation in electronic commerce: A study of sectoral and regional trends. European Management Journal, 16(5), 540-551. The Economist. (1999). Business and the Internet. The Net Imperative, June. The Economist. (1999). E-commerce. Asia Online, April. Far Eastern Economic Review. (1999). Asia’s race to go digital. July, 8-11. Forrester Research. (2000). Global ecommerce approaches hypergrowth. Available on the World Wide Web at: http://www.forrester.com/ER/PDF/ 0,1521,8408,00.pdf. Garfield, M. J. and Watson, R. T. (1998). Differences in national information infrastructures: The reflection of national cultures. Journal of Strategic Information Systems, 6(4), 313-337. Gartner Group. (1998). The future of e-business. A presentation at the 1998 Gartner Symposium and IT Expo, October. Available on the World Wide Web at: http://gartner6.gartnerweb.com:80/glive/static/ussym98_36c.pdf. Genovese, Y. (2000). Collaborative commerce: The future of manufacturing and distribution in the Internet age, in collaborating commerce: Helping manufacturers e-interact with the supply chains. Manufacturing Systems, Multimedia Library: Technology Broadcast, August. Available on the World Wide Web at: http://www.manufacturingsystems.com/seminar/default.asp.
Ho, J. (1997). Evaluate the World Wide Web: A global study of commercial Web sites. Journal of Computer Mediated Communication, 3(1). Available on the World Wide Web at: http://www.usc.edu/dept/annenberg/vol3/issue1/ho.html. Hofstede, G. (1980). Culture’s Consequences: International Differences in Work-Related Values. Beverly Hills, CA: Sage Publications. Hofstede, G. (1991). Cultures and Organisation: Software of the Mind. Berkshire: McGraw-Hill. Hu, Q., Saunders, C. and Gebelt, M. (1997). Research report: Diffusion of information systems outsourcing: A reevaluation of influence sources. Information Systems Research, 8(3), 288-301. Internet Indicators. (2001). Measuring the Internet economy. January 2001. Available on the World Wide Web at: http://www.internetindicators.com/ jan_2001.pdf. Kalakota, R. (2000). Next Generation B2B Solutions. Available on the World Wide Web at: http://www.hsupply.com/shop/news/winners.pdf. Kearney, A. T. (2000). Building the B2B Foundation. Positioning Net Market Makers for Success. Available on the World Wide Web at: http:// www.atkearney.com/pdf/eng/WP_B2B.pdf. Le, T. T. and Koh, A. (2001). A managerial perspective on electronic commerce development in Malaysia. Electronic Commerce Research. Mahajan, V. and Peterson, R. (1985). Models for Innovation Diffusion. Beverly Hills, CA: Sage Publications. Maira, A. and Taylor, M. (1999). The big picture: An overview of electronic commerce. Prism, Quarter 1. Available on the World Wide Web at: http:// www.arthurdlittle.com/prism/prism_1q99/maira.html. McBride, N. (1997). Business use of the Internet: Strategic decision or another bandwagon? European Management Journal, 15(1), 58-67. Phillips, C. and Meeker, M. (2000). The B2B Internet report. Collaborative commerce. Morgan Stanley Dean Witter Equity Research, April. Available on the World Wide Web at: http://www.msdw.com/techresearch/b2b/info.html. Porter, M. E. (2001). Strategy and Internet. Harvard Business Review, March, 63-78. PricewaterhouseCoopers. (2000). Inside the Mind of the CEO. Europe: A Survey for the Year 2000. Available on the World Wide Web at: http:// www.pwcdavos.com/pdfs/CEO_Survey_Europe.pdf. Quelch, J. A. and Klein, L. R. (1996). The Internet and international marketing. Sloan Management Review, Spring, 60-75. Raman, S. M. and Raisinghani, M. S. (2000). Electronic Commerce: Opportunity and Challenges. Hershey, PA: Idea Group Publishing. Rayport, J. F. and Sviokla, J. J. (1994). Managing in the marketspace. Harvard Business Review, November-December, 141-150. Rogers, E. (1983). Diffusion of Innovations. New York: The Free Press.. Rudraswamy, V. and Vance, D. A. (2001). Transborder data flows: Adoption and diffusion of protective legislation in the global electronic commerce environment. Logistics Information Management, 1(2), 127-136.
Scheller, B. and Creech, W. (2000). Collaborative commerce: A requisite to survival in the new economy. Presented at i2 Technologies Planet2000 San Diego Conference, October. Available on the World Wide Web at: http://planet.i2.com/ home/sandiego2000/presentations/Wednesday_Breakout/ 6C_B2B2C_Scheller_W245_1_3.htm. Schubert, P. and Selz, D. (1997). Web assessment–Measuring the effectiveness of electronic commerce sites going beyond traditional marketing paradigms. Proceedings of the 32nd HICSS Conference, Hawaii, “Internet and the Digital Economy Track,” January. Available on the World Wide Web at: http:/ /www.businessmedia.org/netacademy/publications.nsf/all_pk/1142. Shaw, M. J. (1999). Electronic commerce: Review of critical research issues. Decision Support Systems, 1(1), 95-106. Shim, J. K., Qureshi, A. A., Siegel, J. G. and Siegel, R. M. (2000). The International Handbook of Electronic Commerce. The Glenlake Publishing Company. Teo, T., Tan, M. and Buk, W. K. (1998). A contingency model of Internet adoption in Singapore. International Journal of Electronic Commerce, 2(2), 95-118. Timmers, P. (1999). Electronic Commerce. Strategies and Models for Businessto-Business Trading. New York: John Wiley & Sons. Turban, E., Lee, J., King, D. and Chung, H. M. (2000). Electronic Commerce: A Managerial Perspective. Upper Saddle River, NJ: Prentice Hall. Wigand, R. (1997). Electronic commerce: Definition, theory and context. The Information Society, 13, 1-16.
Section V Human and Social Aspects of Knowledge and Information Technology Management
Chapter XV
Human and Social Perspectives in Information Technology: An Examination of Fraud on the Internet C. Richard Baker University of Massachusetts, USA
ABSTRACT
This chapter adds to the discussion of human and social perspectives in information technology by examining the existence and extent of fraudulent activities conducted through the Internet. The principal question addressed by this chapter is whether fraudulent activities perpetrated using the Internet constitute a new type of fraud, or whether they are classic forms of fraud appearing in a new medium. Three areas of fraud are investigated, namely: securities fraud, fraud in electronic commerce, and fraud arising from the rapid growth of Internet companies. The U.S. Securities and Exchange Commission (SEC) has cited more than 100 companies for committing securities fraud using the Internet. Actions prohibited under U.S. securities laws are now being conducted through the Internet, and the SEC has taken steps to suppress these frauds (SEC, 2001). The rapid growth of electronic commerce, and the natural desire on the part of consumers to feel secure while engaging in electronic commerce, has prompted the creation of mechanisms, such as web site seals and logos, to reduce concerns about fraudulent use of information. It is, however, questionable whether these mechanisms are effective in reducing fraud conducted through the Internet. A third potential area for fraud on the Internet involves the rapid growth of Internet companies, often with little economic substance and lacking in traditional managerial controls. This
chapter seeks to examine areas with significant potential for fraud on the Internet and to assess implications of such activities for the management of information technology.
INTRODUCTION We will say then that a considerable advance has been made in mechanical development when all men, in all places, without any loss of time, are cognizant through their senses, of all that they desire to be cognizant of in all other places, at a low rate of charge, so that the back country squatter may hear his wool sold in London and deal with the buyer himself, may sit in his own chair in a back country hut and hear the performance of Israel in Egypt at Exeter Hall, may taste an ice on the Rakaia, which he is paying for and receiving in the Italian opera house Covent garden. Multiply instances ad libertum–this is the grand annihilation of time and place which we are all striving for, and which in one small part we have been permitted to see actually realized. (Attributed to Samuel Butler with reference to the opening of the first telegraph between cities in New Zealand in 1863.) Speculation about the effects of new information technology is not a new phenomenon. As the quotation cited above indicates, the invention of the telegraph in the early 19th century prompted the belief that the world would quickly become smaller and more closely connected, thereby eliminating wars and conflicts. Sadly, this was not to be the case. Similar speculation has arisen in recent years with regard to the Internet. Is the Internet a liberating tool offering the possibility of rapid increases in human freedom, or does the Internet threaten our right to privacy? By using the Internet, musicians can now by-pass recording companies and publish their own music directly online for fans to download. Day traders can buy and sell shares of stock without the intervention of brokers. Readers of newspapers, books, and magazines can choose the news, entertainment, and even people that they wish to interact with. There is a common thread running through these and similar Internet developments. What appears to be going on here is a radical shift in power, whereby individuals use technology to take control of information away from governments and corporations (Kaplan, 1999). Many observers feel that the advent of the Internet is an unmitigated positive trend, while others believe that there is a dark side to cyberspace. This latter perspective argues that when individuals use technology excessively and avoid contact with other human beings, there is the danger that they will remove themselves from the wider world. The result may be that cyberspace, which has been prized for its diversity and wealth of information, will lead to a certain type of ignorance through over-involvement in virtual communities at the expense of citizenship in real-world communities (Shapiro, 1999). While the Internet has the potential to shift control of information away from organisations and institutions in interesting ways, individual power and control can be
misused. Examples of this misuse include hacking, virus spreading, sending massive amounts of e-mail spam, distributing pornography, and perpetrating fraudulent schemes. To prevent the abuse of individual power, it may be necessary to curb some of the freedom that has heretofore reigned in cyberspace. The question is whether a balance can be achieved between individual freedom and the needs of civil society. This chapter focuses on one aspect of this question, namely the existence and extent of fraud perpetrated through the Internet. The chapter will discuss whether fraud using the Internet constitutes a new category of fraud or whether it is a classic form of fraud committed through a new medium. Before addressing this question in more detail, the following section will briefly discuss the issue of what fraud is or may be.
A THEORY OF FRAUD
Mitchell et al. (1998) indicate that fraud is a form of white-collar crime. They argue that white-collar crime is: "a contested concept which is invoked to cover abuse of position, power, drug trafficking, insider trading, fraud, poverty wages, violation of laws, theft, exploitation, and concealment, resulting in financial, physical, psychological damage to some individuals and a disruption to the economic, political, and social institutions and values" (Mitchell et al., 1998, p. 593). Mitchell et al. suggest that opportunities to commit white-collar crime have expanded as free-market policies have become the reigning political economic philosophy, rendering it more likely that fraud and other white-collar crimes will go unpunished and unprevented. They also argue that some professionals, including lawyers, accountants, and information technology specialists, have been implicated in white-collar crime and fraudulent activities. Because the Internet has been a repository of strong beliefs about the inadvisability of government regulation, the potential for white-collar crime and fraud to proliferate through the Internet may be greater than it is for other media (Kedrosky, 1998). From a general point of view, fraud is defined as any act where one party deceives or takes unfair advantage of another. From a legal perspective, fraud is defined more specifically as an act, omission, or concealment, involving a breach of legal or equitable duty or trust, resulting in disadvantage or injury to another. By law, it is necessary to prove that a false representation was made as being true, and that the statement was made with intent to deceive and to induce the other party to act upon it. Ordinarily it must be proven that the person who was defrauded suffered an injury or damage from the act. In sum, fraud is a deliberate misrepresentation of fact for the purpose of depriving someone of a valuable possession (Encyclopedia Britannica Online, 2001). While fraud can be viewed as a crime, often it is an element of a crime, such as in the act of taking money by false pretenses or by impersonation. European legal codes often define fraud to include not only intentional misrepresentations of fact designed to deceive another into parting with valuable property, but also misunderstandings arising out of normal business transactions. Thus, any omission or
concealment that is injurious to another, or that allows a person to take unfair advantage of another, may constitute criminal fraud in some countries. In Anglo-American legal systems, this latter type of fraud is often treated as deceit, subject to civil action rather than criminal penalties (Encyclopedia Britannica Online, 2001). Managers of information technology are often concerned about fraud in their organisations. This is understandable because the cost of fraud is high on an annual basis. It is estimated that businesses lose approximately six percent of their annual revenue to fraudulent schemes. On average, organisations lose $9 per day per employee to fraud (Association of Certified Fraud Examiners, 2001). Research indicates that the persons most likely to commit fraud are college- or university-educated white males. Men are responsible for almost four times as many frauds as women. On average, if the perpetrator is male, the loss is $185,000 versus $48,000 for a female. Losses arising from persons with college degrees are five times greater than from high school graduates. Fifty-eight percent of fraud is committed by employees, with an average of $60,000 per case, while 12 percent is caused by owners, with an average cost of $1 million per case. Fifty percent of fraud involves the cash account of the organisation. About 10 percent arises from conflicts of interest, and about five percent of fraud arises from fraudulent financial statements (Association of Certified Fraud Examiners, 2001). There has been a growing realization in recent years that the Internet offers a fertile venue for fraudulent schemes. The focus of this chapter is on three particular areas with significant potential for fraud on the Internet, namely: securities fraud, fraud in electronic commerce, and fraud arising from the rapid growth of Internet companies. The next section will address the issue of securities fraud using the Internet.
SECURITIES FRAUD ON THE INTERNET
While the Internet can be helpful in obtaining investment information, it can also be used to commit securities fraud. The U.S. Securities and Exchange Commission has cited more than 100 companies and individuals for committing securities fraud using the Internet (SEC, 2001). Among other things, the perpetrators of securities fraud through the Internet have been cited for failing to tell investors that they were paid for recommending shares of companies, for hiding their lack of independence from the companies they were recommending, for issuing false or misleading information about the companies they recommended, and for using false information to drive up the price of shares so that they could be sold before accurate information became known. Because the Internet allows information to be communicated easily and inexpensively to a vast audience, it is easy for persons intent on committing securities fraud to send credible-looking messages to a large number of possible investors. Investors are often unable to tell the difference between legitimate and false claims. Some of the ways that securities fraud has been committed using the Internet include:
online investment newsletters, bulletin boards, and e-mail spam. Many of the fraudulent activities cited by the SEC have been classic investment frauds, such as: The Pump and Dump, The Pyramid, The Risk-Free Fraud, and Off-Shore Frauds (SEC, 2001).
Online Investment Newsletters There have been a large number of investment newsletters appearing in recent years on the Internet. Online newsletters offer investment advice and recommend the purchase of a specific company’s shares. Legitimate newsletters help investors gather investment information, but some are fraudulent. Companies may pay newsletters to recommend their shares. This practice is not illegal, but U.S. securities laws require newsletters to disclose who paid them, as well as the amount and the type of payment. If the newsletter does not disclose its relationship with the company being recommended, the newsletter has committed securities fraud. The newsletter may appear to be legitimate, but it earns a fee if it persuades investors to buy or sell a particular company’s shares. Some online newsletters commit securities fraud by claiming to perform research on the companies they recommend when in fact they do not. Other newsletters spread false information or promote worthless shares. The goal is to drive up the price of the shares in order to sell before investors can obtain truthful information about the companies (SEC, 2001).
Bulletin Boards Online bulletin boards exist in several different formats, including chat rooms, newsgroups, and web site-based bulletin boards. Bulletin boards have become a popular way for investors to share information concerning investment opportunities. While some messages are true, many are fraudulent. Persons engaged in fraudulent schemes pretend to reveal inside information about upcoming announcements, new products, or lucrative contracts. It is often difficult to ascertain the reliability of such information because bulletin boards allow users to hide their identity behind aliases. Persons claiming to be unbiased observers may be company insiders, large shareholders, or paid promoters. Acting alone, an individual may be able to create the illusion of widespread interest in a thinly traded stock by posting a large number of messages under various aliases (SEC, 2001).
E-Mail Spam E-mail spam is similar to junk mail. Because e-mail spam is inexpensive and easy to create, persons intent on committing securities fraud use it to locate potential investors for investment schemes or to spread false information about a company. E-mail spam allows solicitation of many more potential investors than mass mailing or cold calling. Through the use of bulk e-mail programs, personalized messages can be sent to thousands of Internet users simultaneously (SEC, 2001).
Classic Investment Frauds Using the Internet
Investment frauds on the Internet are similar in many respects to frauds using the telephone or the mail. The following are some examples:
The Pump and Dump–This type of fraud involves online messages that urge investors to buy shares quickly or recommend selling before the price goes down. The sender of the message claims to have inside information about a company or the ability to pick shares that will increase in price. The perpetrator of the fraud may be an insider or paid promoter who stands to gain by selling their shares after the stock price is pumped up. Once the perpetrator sells his shares and stops promoting the company, the price falls and investors lose their money. This scheme is often employed with small, thinly traded companies because it is easier to manipulate share prices when there is relatively little information available about the company.
The Pyramid–This type of fraud involves a message such as: "How To Make Big Money From Your Home Computer!!" The message might claim that investors can turn $5 into $60,000 in just three to six weeks. The promotion is an electronic version of a classic pyramid scheme, where participants make money only if they can recruit new participants into the program (a brief numerical sketch of why such schemes collapse follows these examples).
The Risk-Free Fraud–This type of fraud involves a message like: "Exciting, Low-Risk Investment Opportunities," inviting participation in wireless cable projects, prime bank securities, or eel farms. The investment products usually do not exist.
Off-Shore Frauds–Off-shore frauds targeting U.S. investors are common. The Internet has removed barriers imposed by different time zones, different currencies, and the high costs of international telephone calls and postage. When an investment opportunity originates in another country, it is difficult for U.S. law enforcement agencies to investigate and prosecute the frauds.
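To see why the pyramid scheme described above must collapse, consider how quickly the number of required recruits grows. The following is a small illustrative sketch; the recruitment factor of 6 and the population figure are arbitrary assumptions chosen for the example, not figures from this chapter.

```python
def pyramid_levels(recruits_per_person, population):
    """Count how many recruiting levels fit before a pyramid scheme
    needs more participants than the available population."""
    level, total_participants, current_level_size = 0, 1, 1
    while total_participants <= population:
        level += 1
        current_level_size *= recruits_per_person
        total_participants += current_level_size
    return level, total_participants

# Illustrative assumption: each participant must recruit 6 new investors,
# drawn from a population of 280 million people.
levels, needed = pyramid_levels(recruits_per_person=6, population=280_000_000)
print(f"After {levels} levels the scheme already needs {needed:,} participants,")
print("more people than the whole population, so the latest recruits must lose.")
```

The exponential growth in required recruits is what guarantees that the great majority of participants, those who join last, cannot be paid.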
Examples of Securities Fraud on the Internet Francis Tribble and Sloane Fitzgerald, Inc. sent more than six million unsolicited e-mails, and distributed an online investment newsletter to promote the shares of two small, thinly traded companies (SEC, 2001). Because Tribble and Sloane failed to tell investors that the companies they were recommending had agreed to pay them in cash and securities, the SEC sued to stop them and imposed a $15,000 penalty on Tribble. The massive amount of e-mail spam distributed by Tribble and Sloane resulted in hundreds of complaints being received by the SEC’s online Enforcement Complaint Center (SEC v. Tribble, 1998). The SEC also cited an Internet newsletter called Future Superstock (FSS), written by Jeffrey Bruss of West Chicago, Illinois. Bruss recommended the purchase of shares in 25 Microcap (i.e., small capitalization) companies and predicted that the share prices would double or triple in the months following dissemination of the recommendations. In making these recommendations, FSS: (1) failed to disclose more than $1.6 million of compensation, in cash and stock, from profiled issuers; (2) failed to disclose that it had sold shares in many of the issuers shortly after
dissemination of recommendations; (3) said that it had performed independent research and analysis in evaluating the companies profiled by the newsletter when it had conducted little, if any, research; and (4) lied about the success of certain prior stock picks (SEC v. The Future Superstock et al., 1998). The SEC also cited Charles Huttoe and 12 other defendants for secretly distributing to friends and family nearly 42 million shares of Systems of Excellence, Inc., known by its ticker symbol SEXI (SEC, 2001). In a pump and dump scheme, Huttoe drove up the price of SEXI shares through false press releases claiming multimillion dollar sales which did not exist, an acquisition that had not occurred, and revenue projections that had no basis in reality. He also bribed co-defendant, SGA Goldstar, to tout SEXI to readers of SGA Goldstar’s online newsletter called Whisper Stocks. The SEC fined Huttoe $12.5 million. Huttoe and Theodore Melcher, the author of the online newsletter, were sentenced to federal prison. In addition, four of Huttoe’s colleagues pled guilty to criminal charges (SEC, 2001). Matthew Bowin recruited investors for his company, Interactive Products and Services, in a direct public offering completed entirely through the Internet. Bowin raised $190,000 from 150 investors. Instead of using the money to build the company, Bowin pocketed the proceeds. The SEC sued Bowin in a civil case, and the Santa Cruz, California, District Attorney’s Office prosecuted him criminally. He was convicted of 54 felony counts and sentenced to jail (SEC, 2001). IVT Systems solicited investments to finance the construction of an ethanol plant in the Dominican Republic. The Internet solicitations promised a return of 50% or more with no reasonable basis for the prediction. The solicitations included false information about contracts with well-known companies and omitted other important information about the company. After the SEC filed a complaint, IVT Systems agreed to stop breaking the law (SEC, 2001). In another case, Gene Block and Renate Haag were charged by the SEC with offering prime bank securities through the Internet, a type of security that does not exist. Block and Haag collected over $3.5 million by promising to double investors’ money in four months. The SEC froze their assets and prevented them from continuing their fraud (SEC, 2001).
Combating Securities Fraud on the Internet
It should be recognized that securities frauds using the Internet are similar to frauds that existed before the Internet. The perpetrators of securities fraud often engage professional advisors such as lawyers, accountants, and information technology specialists for advice concerning accounting, taxation, information systems design, and other matters. Mitchell et al. (1998) indicate that professionals are implicated in white-collar crimes such as money laundering. While there is no specific evidence that lawyers, accountants, and information technology professionals have been involved in securities frauds using the Internet, it seems improbable that such frauds could be perpetrated without at least the tacit involvement of knowledgeable professionals. It is important for information technology managers and professionals to be aware of the activities of their associates. If these activities include
securities fraud using the Internet, there should be an attempt to prevent such activities. If an appropriate response is not forthcoming through these efforts, the IT manager should cease further contact with such associates. Obviously, if the IT manager is facilitating such activities, they could be subject to SEC enforcement actions or even criminal prosecution. Securities fraud on the Internet is not just a U.S. phenomenon. The Fraud Advisory Panel of the Institute of Chartered Accountants in England and Wales (ICAEW) estimates that Internet fraud costs the United Kingdom as much as five billion pounds per year. This estimate includes both securities fraud and other types of fraud in electronic commerce, which is the subject of the next section.
FRAUD IN ELECTRONIC COMMERCE There is widespread recognition that the Internet offers an innovative and powerful way to conduct business activities (Tedeschi, 1999). Forrester Research, Inc. indicates that participants in electronic commerce purchase an average of $4 billion per month online (Forrester Research, 2001). Many transactions in electronic commerce are consummated with credit cards. The use of credit cards provides a certain degree of comfort to consumers because there are legal limits on losses arising from unauthorized use of credit card information. Nevertheless, perpetuators of fraudulent schemes using the Internet often look for opportunities to obtain credit card information as well as other private information such as e-mail addresses, home addresses, phone numbers, birth dates, social security numbers, and other similar types of information which can be sold to e-mail spam lists. This is a ripe area for fraud. Participants in electronic commerce are frequently concerned about the potential for fraud or other forms of misuse of information transmitted through the Internet. Gray and Debreceny (1998) have detailed some of the concerns that participants in electronic commerce have, including: • Is this a real company? • Is this a trustworthy company? • If I send credit card or bank information, is it safe? • If I provide information to a company on its web site, where will the information end up? • If I place an order, will I receive what I asked for? • Will I receive delivery when promised? • Will any problems I have be resolved quickly? • Is a money-back guarantee honored? • How soon will I get credit for returned items? • How quickly will the company perform service on warranty items? • Will the company be able to send me necessary replacement parts quickly? It should be recognized that the above expressed concerns can exist in any type of transaction, whether conducted face-to-face, over the telephone, or through the Internet. Unscrupulous people will be unscrupulous regardless of the medium through which the transaction is conducted.
Several mechanisms have been developed in recent years to reduce the concerns of participants in electronic commerce, including electronic logos, encryption techniques, and firewalls. The idea behind an electronic logo is that if an online merchant meets certain specified criteria, the merchant is allowed to place a logo on its web site. The logo is provided by an assurance provider, such as a public accounting firm, or another entity organised for that purpose. Examples include: AICPA/CICA’s WebTrust, Verisign, TRUSTe, ICSA, and BBBOnline. The logo is intended to provide assurance that the merchant has complied with standards established by the assurance provider. Usually, the logo is linked to the assurance provider’s web site. The online consumer can navigate to the assurance provider’s web site to read about the degree of assurance provided by the logo (Gray and Debreceny, 1998). An example is the VeriSign logo (www.verisign.com) which provides assurance that a web site is capable of transmitting and receiving secure information and that the site and company are real. The VeriSign logo focuses primarily on the security of the transaction and the validity of the web site and the electronic merchant. WebTrust is another logo assurance service that was developed jointly between the American Institute of CPAs (AICPA) and the Canadian Institute of Chartered Accountants (CICA). Other accounting associations in the United Kingdom, Australia, and New Zealand are also participating in the WebTrust program. WebTrust operates under the assumption that consumers seek assurance in the following areas: • They are dealing with a real company, rather than a fraudulent company seeking to obtain and sell credit card numbers, addresses, and other private information. • They will receive the goods and services ordered, when promised, at the agreed-upon price. • They have the option to request that the Internet seller not give or sell any private information provided in an online transaction. • Private information cannot be intercepted while being transmitted (Primoff, 1998). WebTrust is an attestation service provided by a licensed public accounting firm. During the assurance engagement, the WebTrust practitioner “audits” the online business to verify compliance with certain principles and criteria. The principles and criteria address matters such as privacy, security, availability, confidentiality, consumer redress for complaints, and business practices. The WebTrust Principles and Criteria were developed jointly by the AICPA and the CICA. In the United States, the WebTrust engagement is performed in accordance with standards specified by the AICPA. At the client’s request, the WebTrust practitioner may also provide consulting advice as part of the preparation for the WebTrust examination. If the online business meets the WebTrust Principles and Criteria, the site can display the WebTrust seal of approval. By “clicking” on the WebTrust seal, online customers can review the site’s business practice disclosures, report of the independent accountant, and
management’s assertions, as well as viewing a list of other sites with seals and a digital certificate that authenticates the seal. At least every 90 days, the WebTrust practitioner must update their testing of the relevant activities to determine continued compliance with the WebTrust Principles and Criteria. If the site fails to comply, the seal can be revoked.
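The seal mechanism just described is essentially a small verification protocol: the consumer follows the seal back to the assurance provider, confirms that the record really refers to the merchant being visited, and checks that the most recent examination is still current within the 90-day window. The sketch below is a purely hypothetical illustration of that consumer-side logic; the record format, field names and URL are invented for the example and do not describe the actual WebTrust or VeriSign services.

```python
import json
from datetime import date, timedelta
from urllib.request import urlopen

SEAL_VALIDITY = timedelta(days=90)  # re-examination interval described above

def seal_is_credible(seal_record_url, merchant_domain, today=None):
    """Hypothetical consumer-side check of a web assurance seal.

    Fetches a (made-up) JSON record from the assurance provider and checks
    that it names the merchant's domain, has not been revoked, and that the
    last examination falls within the 90-day re-verification window.
    """
    today = today or date.today()
    with urlopen(seal_record_url) as response:
        record = json.load(response)
    last_exam = date.fromisoformat(record["last_examination"])
    return (
        record["merchant_domain"] == merchant_domain
        and not record.get("revoked", False)
        and today - last_exam <= SEAL_VALIDITY
    )

# Example call (the URL and domain are placeholders, not real endpoints):
# seal_is_credible("https://assurance.example.org/seals/shop.json", "shop.example.com")
```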
Combating Fraud in Electronic Commerce
The use of logo assurance services and other forms of encryption techniques is intended to reduce the concerns of participants in electronic commerce. IT managers may seek to convince potential online consumers to rely on logos as providing assurance against fraud and misuse of information. However, it is important for online consumers to be aware of the limits of the assurance provided by these logos. It must be recognized that providers of logos disclaim responsibility if the electronic merchant violates the principles and criteria of the logo provider or if fraud is present. Consequently, logo assurance programs do not provide protection against fraud; rather, they are primarily marketing devices. In addition, the National Consumers League Internet Fraud Watch indicates that the greatest number of complaints concerning fraud in electronic commerce concern on-line auctions (National Consumers' League, 1999). In an on-line auction, the auction web portal takes no responsibility for the quality, the suitability, or even the existence of the merchandise offered for sale. Fraud in online auctions has occurred frequently. For example, in December 1998, using a number of aliases, Jamison Piatt promised on eBay auctions that he had more than 1,500 copies of the popular Furby toy ready for delivery by Christmas. In January 1999, the state of Pennsylvania's attorney general announced that Piatt had agreed to reimburse 29 persons who never received their Furbys because they had never existed (Wice, 1999). Many online auction sites are legitimate business enterprises, and they try to ensure that the persons offering items for sale do not mislead buyers, but some sites and some sellers are not legitimate businesses. A typical type of scheme is to induce online purchasers to submit bids for software, such as Microsoft Office, at below market prices. Bidders are told that they won the auction and are legally obliged to pay within 24 hours, but the product never arrives and the buyer is left holding the bag (BBC, 1999a). What appears to be happening, both in the area of Internet securities fraud and fraud in electronic commerce, is that the ability of the perpetrator of the fraud to contact a large number of people at relatively low cost allows the fraud to be conducted more easily. In addition, the lack of face-to-face contact appears to induce people to be more credulous of unlikely claims. The creation of virtual communities and the corresponding decrease in the level of participation in real-world communities reduces the propensity of individuals to question the reasonableness of claims, thereby facilitating the growth of fraud on the Internet.
FRAUD IN THE RAPID GROWTH OF INTERNET COMPANIES A third area of potential fraud using the Internet lies in the rapid growth of companies whose existence depends solely on the Internet. This potential has been highlighted during the last several years by the rapid rise in prices of Internet company shares followed by an equally rapid decline, with many dot com companies going bankrupt during the years 2000 and 2001. Even though electronic commerce has been growing very rapidly, it can be described as still in the development stage. In the dot com industry, there are many companies struggling to succeed, and, as recent events have demonstrated, many of these companies will ultimately fail. During a period of rapid growth and contraction in an industry, it is likely that fraudulent practices will develop. Even if most Internet companies are legitimate, some have no economic basis. The business practices of some Internet companies border on fraud in the broader sense defined in the first part of this chapter. In addition, as with other rapidly growing industries, there is often a lack of control over data and systems, particularly when a significant portion of a company’s transactions are conducted through the Internet. In this environment, Internet companies may not have control over the information systems that are essential to their business. This is an environment ripe for fraud. The Internet has sometimes been viewed as a rainbow with a pot of gold at the end, but we now realize that there is a grim reality to this picture. Most Internet companies do not make money (Hansell, 1998; Kedrosky, 1998). Even Amazon.com, one of the best-known Internet companies, has not made a profit since its inception. The economic basis of many Internet companies is not the sale of products or services, but rather the sale of advertising. Many Internet companies were created on the basis of projections about advertising revenues drawn from market research done by consulting firms. These projections may be suspect for several reasons. As with other forms of advertising, Internet advertising revenue is based on the number of persons who view the advertisement, but, it has been estimated that the top 10 Internet sites receive 50% of the advertising revenue, and the top 100 sites receive almost 95% of the revenue (Kedrosky, 1998). A second area in which projections concerning Internet advertising revenues may be suspect lies in the area of banner exchanges. Internet companies earn advertising credits by showing advertisements for other Internet companies. In other words, one Internet company provides advertising for another Internet company and vice versa. The payments are in the form of credits for advertising on the other company’s web site. Revenues are produced, but there is no cash flow (Kedrosky, 1998). A third area in which Internet advertising revenues may be suspect lies in the measurement of the number of visitors to a web site. A web site may report that it receives one million hits (i.e., visitors). However, the number of actual visitors may be as low as one percent of that number (i.e., 10,000). This is because the measurement of hits is based on factors such as the number of links and graphic
images on the site. Consequently, the number of actual visitors is difficult to measure with any degree of accuracy (Kedrosky, 1998). Beyond the issue of questionable projections concerning Internet advertising revenues, there is the issue of technologies such as autonomous agents that may reduce the probability of earning a profit from Internet sales. The purpose of an autonomous agent is to locate every seller on the Internet that sells a particular item and then to sort them by price. Consequently, whatever an Internet company may try to do to create brand identity, or provide a service, autonomous agents will drive the market to the lowest price (Kedrosky, 1998) (a simple sketch of this price-sorting behavior is given below). In addition, it is questionable whether Internet companies make money even in a period of rapidly growing electronic commerce. It is estimated that despite large increases in online commerce during recent years, less than five percent of online retailers earned a profit (High, 1999). Another area with potential for fraud lies in the initial public offering of Internet company shares. During 1998 and 1999, there was a stock market fascination with Internet companies which resembled a classic speculative bubble. Internet companies with no earnings, and in some cases no sales or even negative net worth, were able to complete initial public offerings at highly inflated prices. Because most Internet companies did not have earnings, financial analysts invented the price-to-revenues ratio as a comparative indicator. This precipitated a host of misleading accounting practices related to premature recognition of revenues. The lack of economic substance underlying many Internet IPOs resulted in a sharp decline in the price of Internet company shares in 2000 and 2001. A final area for potential fraud arising from the rapid growth of Internet companies lies in the lack of managerial and internal controls in these companies. Until recently, the cost of the hardware, software, and professional expertise necessary for electronic commerce served as a barrier to entry. The costs are now much lower. Internet Service Providers (ISPs) offer turnkey solutions that combine hardware, software, payment processing, and communications in one package. Since the ISP packages are outsourced, they operate solely on the ISP's computers (Primoff, 1998). In a turnkey ISP approach, all of the information is located with the ISP, potentially compromising the Internet company's access to information and the ability to exclude unauthorized persons from obtaining access. It is important for companies to understand how their information is controlled and by whom. It is also important to ascertain whether the Internet company and the ISP personnel have the skills necessary to deal with issues of security and internal control and what security techniques are employed (Primoff, 1998).
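The price-driving behavior of autonomous agents mentioned above is easy to picture in code. The following is a minimal, generic sketch (the seller names and prices are invented): the agent gathers every offer it can find for an item and sorts them by price, so brand identity matters little once buyers routinely start from the cheapest offer.

```python
def cheapest_offers(offers, item):
    """Return all offers for an item, cheapest first, as a shopping agent would."""
    matching = [offer for offer in offers if offer["item"] == item]
    return sorted(matching, key=lambda offer: offer["price"])

# Invented example data: three sellers offering the same book.
offers = [
    {"seller": "store-a.example", "item": "example-book", "price": 74.95},
    {"seller": "store-b.example", "item": "example-book", "price": 69.00},
    {"seller": "store-c.example", "item": "example-book", "price": 71.50},
]
for offer in cheapest_offers(offers, "example-book"):
    print(offer["seller"], offer["price"])
```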
Combating Fraud in the Rapid Growth of Internet Companies Many would say that the recent rapid rise and fall of Internet companies is an example of free markets at work. However, what is overlooked in this assessment
is that previous speculative bubbles, such as the world wide stock market crash of 1929, have usually resulted in calls for greater government regulation of private sector economic activity. The U.S. Securities and Exchange Commission has been observing and in some cases punishing securities fraud using the Internet, but they have not taken any visible steps to scrutinize the issuance of shares in Internet companies when there is little or no economic substance. Whether or not these companies will ultimately prove to be successful in a traditional economic sense remains to be seen. What is true is that the issuance of shares of Internet companies during the late 1990s had all of the hallmarks of a classic speculative bubble. As has been historically true of all previous speculative bubbles, this bubble burst, causing economic losses to many investors. Some would say that if losses have occurred, it is all in the normal course of business, but is this merely an example of the virtual community triumphing over any sense of real-world community?
CONCLUSION
This chapter has examined the issue of fraud on the Internet, focusing on three areas with significant potential for fraud, namely: securities fraud, fraud in electronic commerce, and fraud arising from the rapid growth of Internet companies. The SEC has cited many companies and individuals for committing securities fraud on the Internet. Activities prohibited under U.S. law are being conducted through the Internet, and the SEC has taken action to suppress these activities. A second potential area for fraud on the Internet lies in electronic commerce. The rapid growth of electronic commerce in recent years, and the corresponding desire by consumers to feel secure when engaging in electronic commerce, has prompted the creation of logo services such as WebTrust which are designed to reduce concerns about misuse of information. Nonetheless, it must be recognized that providers of logos and other seals do not actually offer any assurances regarding the lack of fraud. A third area for potential fraud on the Internet discussed in this chapter involves the rapid growth of Internet companies, often based on little economic substance and without traditional management or internal controls. These three potential areas for fraud on the Internet have developed rapidly, and it may well be that we are seeing opportunistic fraudulent schemes perpetrated by clever individuals. However, as Mitchell et al. (1998) point out, complex fraudulent schemes are difficult to perpetrate without the assistance of knowledgeable professionals. Have lawyers, accountants, and information technology professionals been involved with fraud on the Internet? The evidence on this question is unclear, but the possibility is there.
REFERENCES
Association of Certified Fraud Examiners. (2001). Report to the Nation on Occupational Fraud and Abuse. Available on the World Wide Web at: http://www.cfenet.com/media/report/reportsection1.asp.
BBC. (1999a). Internet scam file. BBC On-Line Network, April 7. Available on the World Wide Web at: http://news.bbc.co.uk/hi/english/business/your_money/ newsid_313000/313051.stm. BBC. (1999b). Internet fraud to cost UK business billions. BBC On-Line Network, May 13. Available on the World Wide Web at: http://news.bbc.co.uk/hi/ english/business/the_economy/newsid_342000/342644.stm. Cahners Publishing Company. (1998). Auditing the website. Electronic News, 44(2219), 48. Encyclopedia Britannica Online. (2001). Fraud. Available on the World Wide Web at:http://www.eb.com:180/bol/search?type=topic&query=fraud&DBase= Articles&x=20&y=. Forrester Research. (2001). Forester Online Retail Index. Cambridge, MA: Forrester Research, Inc. Available on the World Wide Web at: http:// www.forrester.com/NRF/1,2873,0,00.html. Garcia, A. M. (1998). Global e-commerce explodes: Will you thrive, survive, or die? e-Business Advisor, October. Gray, G. L. and Debreceny, R. S. (1998). The electronic frontier. Journal of Accountancy, 185(1), 32-37. Hansell, S. (1998). A quiet year in the Internet industry. The New York Times, December 28, C1. High, K. (1999). What the holiday web boom hid. Fortune & Your Company, January 4. Available on the World Wide Web at: http://cgi.pathfinder.com/ yourco/briefs/0,2246,142,00.html. Kaplan, C. (1999). Writer seeks balance in Internet power shifts. New York Times Cyber Law Journal, June 18. Available on the World Wide Web at: http:// www.nytimes.com/library/tech/99/06/cyber/cyberlaw/18law.html. Kedrosky, P. (1998). There’s little but fool’s gold in the Internet boomtown. The Wall Street Journal, November 23, A22. Lohr, S. and Markoff, J. (1998). AOL lays out plan for marriage to Netscape. New York Times On-Line, November 28. Available on the World Wide Web at: http://www.nytimes.com. Nagel, K. D. and Gray, G. L. (1998). Guide to Electronic Commerce Assurance Services. New York: Harcourt Brace Professional Publications. National Consumers League. (1999). Internet Fraud Watch. Available on the World Wide Web at: http://www.nclnet.org/Internetscamfactsheet.html. Primoff, W.M. (1998). Electronic commerce and webtrust. The CPA Journal, 68(November), 14-23. Schmidt, W. (1998). Webtrust services: AICPA launches webtrust for assurance. The CPA Journal, 68, 70. SEC. (2001). Internet Fraud: How to Avoid Internet Investment Scams. Washington, DC: Securities and Exchange Commission, October. Available on the World Wide Web at: http://www.sec.gov/investor/pubs/cyberfraud.htm.
SEC v. John Wesley Savage et al. (1998). Washington, DC: Securities and Exchange Commission, October. Available on the World Wide Web at: http://www.sec.gov/enforce/litigrel/lr15954.txt.
SEC v. The Future Superstock et al. (1998). Washington, DC: Securities and Exchange Commission, October. Available on the World Wide Web at: http://www.sec.gov/enforce/litigrel/lr15958.txt.
SEC v. Tribble. (1998). Washington, DC: Securities and Exchange Commission, October. Available on the World Wide Web at: http://www.sec.gov/enforce/litigrel/lr15959.txt.
Shapiro, A. L. (1999). The Control Revolution: How the Internet is Putting Individuals in Charge and Changing the World We Know. New York: Public Affairs.
Tedeschi, B. (1998). Real force in e-commerce is business-to-business sales. New York Times Online, January 5. Available on the World Wide Web at: http://www.nytimes.com.
Wice, N. (1999). Furby fraud on eBay. Time Digital, January 25. Available on the World Wide Web at: http://cgi.pathfinder.com/time/digital/daily/0,2822,18831,00.html.
Chapter XVI
The Role of Trust In Information Technology Management István Mezgár Computer and Automation Research Institute, Hungary Zoltán Kincses Eötvös Loránd University of Sciences, Hungary
ABSTRACT
All information systems are based on human beings (users), so taking basic human aspects into consideration when approaching information management is of vital importance. Trust and confidence are essential for the users of networked systems, as for all members of the Information Society. The lack of trustworthy security services is a main reason why electronic and mobile technologies are not used, whether in private, business or public services. Trust is essentially linked to consumers’ rights, such as identification, authentication, privacy and confidentiality. The chapter gives a summary of the challenges in information management and of the definitions and elements of trust. It briefly introduces the basic elements of secure information management and the present technologies and tools for achieving trust. Trends in information management systems are also outlined. As the chapter covers a very broad area, references are given for each important part. Copyright © 2003, Idea Group, Inc.
INTRODUCTION
The developments in the fields of information technology, telecommunication and consumer electronics are extremely fast. The ability of different network platforms to carry essentially similar kinds of services, and the coming together of consumer devices such as the telephone, television and personal computer, is called “technology convergence.” ICT (Information and Communication Technology), the “infocom” technology, covers the fields of telecommunication, informatics, broadcasting and e-media. Mobile communication, a very fast-developing field of telecommunication, has a growing role in many fields as well. The connection of mobile devices to the Internet has established fundamentally new services for users. The low cost of establishing a presence on the World Wide Web is making it possible both for businesses of all sizes to develop a regional and global reach, and for consumers to benefit from the wider choice of goods and services on offer. Globalisation is therefore the key theme in these developments. This convergence is not just about technology. It is also about services and about new ways of doing business and of interacting within society. The impact of the new services resulting from convergence can be felt in the economy and in society as a whole, as well as in the relevant sectors themselves. Because of this great impact of information technologies and the level of knowledge content in products and services, the society of the twenty-first century is called the Information and Knowledge Society. The availability of individuals independently of location and time means mobility, and that is an important attribute of this society. The knowledge content of a product or process might not always appear spectacularly; in many cases it remains hidden. Today the greatest added value is in the areas of software, electronics and exotic materials. An important aspect is that these three areas refer not only to the end product, but also to the tools and organisations that build and produce the product. This information and knowledge age has three main characteristics: dematerialisation (e.g., information is the source of three-fourths of added value in manufacturing), connectivity (connecting computing and communication) and virtual networks (virtual technologies, a networked economy with deep interconnections within and between organisations) (Ungson and Trudel, 1999). In order to meet the demands of the present era, networked information (info-communication) systems play an outstanding role. In managing these new types of systems, new aspects have come into focus in information management. The final goal of all information systems is to provide data, information, knowledge or different services for the users (human beings), so taking basic human aspects (e.g., psychological ones) into consideration when approaching information management is of vital importance. Trust and confidence are thus essential to the Information and Knowledge Society. The lack of trustworthy security services is a major obstacle to the use of information systems in private business (B2B) as well
as in public services. Trust is intimately linked to consumers’ rights, such as security, identification, authentication, privacy and confidentiality. Secure authentication of users and communication security are the main problems in networked systems. The chapter concentrates on the problem of trust, namely what information security services and mechanisms have to be applied to provide an acceptable level of trust for the users at different system levels, during the life cycle of information management systems (design, development, operation, maintenance). The reader will get an overview of the possible dangers of attacks against information systems, in parallel with the possibilities to counter them. The chapter also briefly introduces the present tools and technologies that are appropriate for increasing the trust level of the users. As the chapter covers a very broad area, it is not possible to introduce all these aspects in detail. References for each important part are given.
CHALLENGES IN INFORMATION MANAGEMENT
The Trends in Information Technology
Computer network technologies, as one of the main drivers of convergence and globalisation, are integrated into all fields of the economy, in different applications of industry, banking, health care, etc. Network connections are no longer limited to one enterprise (intranet), to a country or to a certain sector of the economy, but extend to many functions and to the whole world. This globalisation trend can be identified in most sectors of the economy. Functional integration and globalisation have effected the integration of material flows, information flows and money circulation, which are the three basic components of complex production and service processes. This deep integration of information and communication technologies into the whole company is changing the culture, the structures and the (business) processes of companies. The globalisation of the economy means close cooperation of firms worldwide, and this cooperation means intensive application of information and communication technologies. Distributed, networked information systems can fulfill these demands, and information management methods, technologies and tools have to adapt to these challenges. The integration of computer networks and mobile technologies has made the communication channels more crowded, as a “mobile citizen” has access to different data sources and information systems independently of his/her location and the time of day. These new infocom systems have generated plenty of new problems, but one of the main challenges is security, both of information handling and of communication. Today globalisation is based not only on multinational (giant) firms; SMEs (Small- and Medium-sized Enterprises) are deeply involved as well, so the problem of security affects a very broad group of organisations from all sectors of the economy, as well as finance and government bodies.
Types and Trends of Cyber Crimes
The logical approach to introducing security mechanisms is to start with the definition of the threat model of the information system. The threat model is the collection of probable attack types, so it defines the system protection requirements as well. Attacks on information and communication systems are classified into the groups of passive attacks (which only observe communications or data) and active attacks (which actively modify communications or data). In the following the active attacks will be described, but passive attacks precede active attacks in many cases. The “Computer Crime and Security Survey” of the Computer Security Institute (CSI) is based on responses from 538 computer security practitioners in U.S. corporations, government agencies, financial institutions, medical institutions and universities (FBI, 2001). The survey confirms that the threat from computer crime and other information security breaches continues unabated and that the financial toll is mounting. The total financial loss reported by 186 respondents was $377,828,700 in 2001, while in 2000 the corresponding sum was “only” $265,589,940 from 249 respondents. These numbers demonstrate that the loss/damage caused by the attacks is increasing dramatically. The most frequent types of attacks are viruses (94%), insider abuse of net access (91%), unauthorized access (49%) and denial of service (36%) (percentages give the rate of respondents affected by the attack type). It is worth giving a short description of the most common attack types in order to understand the counter-measures discussed later. A detailed description of attack types can be read, e.g., in Anderson (2001) and Sams (1997). A denial of service attack causes dead intervals of a computer system: an attacker uses one or more computer systems to force another system off-line by overloading it with useless traffic. A denial of service attack is a form of traffic jam on the network; an attacker can paralyze, for example, a business’s web server in this way. Computer viruses are the best-known form of Internet security attack. A virus is a piece of software programmed with the unique ability to reproduce and spread itself to other computers. A virus may be merely annoying, or completely destructive. The most destructive viruses can erase the contents of the computer’s hard drive, or make it completely useless. If no back-ups were made, important data can be lost or damaged, which could result in serious financial losses. A Trojan horse is similar to a virus in the way it is transmitted; however, unlike a virus, a Trojan horse does not replicate itself. It stays in the target machine, inflicting damage or allowing somebody from a remote site to take control of the computer. If attackers get control of a computer, they can access all the files that are stored on it, including all types of sensitive information (personal or company financial information, credit card numbers, and client or customer data or lists). It is obvious that this could do significant damage to any business. If data is altered or stolen, a company risks losing the trust and credibility of its customers. In addition to the possible financial loss that may occur, the loss of information can cause the loss of competitiveness in the market.
Sometimes the biggest problem is that, since information can also simply be copied, the original owner will not notice the attack, as no information loss can be detected. The data, however, will also be present in another location (disk), and without the knowledge of the rightful owner the valuable information will be used by the illegal owner. The cited survey contains a lot of interesting and instructive statistics and some case studies as well, but what is most important is the trend confirmed by the statistics. The main conclusions of the analysis are as follows:
• Organisations are under cyber attack both from inside and outside.
• Many cyber attacks have been detected (but more remain hidden from the owners).
• Serious financial losses can result from cyber attacks.
• Successful defense against such attacks needs more than just the application of information security technologies.
These grim conclusions should spur organisations and information managers to take effective, comprehensive steps to defend their systems and companies.
Challenges for Information Technology Management
Information technology (IT) can be defined as the acquisition, processing, storage and dissemination of all types of information using computer technology and telecommunication systems. Information management (IM) is a fuzzy term covering the various stages of information processing, from production to storage and retrieval to dissemination, towards the better working of an organisation, where the information can come from internal and external sources and be in any format. Information technology management is an even more complex activity that integrates the functions given in the above two definitions. As security involves many topics (as will be introduced in the following sections), not only information security technology, but also the structure of firms and management techniques have to follow the ongoing changes of the IC technologies. When the security challenges are put into a broader, management-oriented interpretation, they can be classified into four categories according to Dhillon (2001):
• Establishing good management practices in a geographically distributed environment and yet being able to control organisational operations.
• Establishing security policies and procedures that adequately reflect the organisational context and new business processes.
• Establishing correct structures of responsibility, given the complex structuring of organisations and information processing activities.
• Establishing appropriate information technology emergency recovery plans.
Problems in Secure Information Technology Management
In order to provide the appropriate security, networked information systems can be analyzed from different aspects. From the geographical viewpoint, the application levels of the security policy can be classified into four levels:
1. International–management of networks across borders (Internet).
2. National–management of networks within a country.
3. Local–within an enterprise (LAN, local area network–intranet).
4. Individual–workstation or PC used by an individual.
Other viewpoints of classification can be:
• Life-cycle phases–design, development, operation, maintenance.
• Technical–computer network, mobile network, etc.
• Management–size of the firm, organisational structure, cultural orientation.
• Legal–international, national.
• Fields of application–e.g., commerce (eCommerce, mCommerce), industry (eManufacturing), banking (eBanking, mBanking).
Developing the proper security policy–selecting the proper equipment and tools and the best-fitting methodology and algorithms–needs high-level expertise, since in such a multidimensional, interdisciplinary decision problem there is in many cases no optimal solution, only a suboptimal one. The problem space is extremely complex, as the whole economy is based on networked information management and all sectors are strongly influenced by ICT. In the information society the behavior and habits of people are changing dynamically, and government-supported programs can speed up certain processes. In all information and communication systems there is a common factor: the human being. This factor plays the most important role at every level and in every aspect. A human can be a designer, a developer or a user (sometimes a hostile user–a cracker) of the system. The most frequent instantiation of the human being is the average user, who may not be well informed or skilled in computer science, but has his/her own personality and psyche. In order to move individuals to use a certain information system, they have to be convinced that it is safe to use the system, that their data will not be modified, lost or used in a way other than previously defined, etc. Once the individuals have been convinced, they will trust the system and they will use it. In the following sections the meaning and content of trust will be introduced, and the possibilities (technologies, methods, policies, etc.) for gaining this trust will be shown as well.
ELEMENTS OF TRUST
What Is Trust?
The word “trust” is used by different disciplines, so there are many definitions of the term, each fulfilling the demands of the actual theory or application. Trust has several meanings, and different facets of trust are used in psychology, in management, in communication, in sociology, in economics and in political science. Common users and researchers agree that trust is very important, as it makes cooperative efforts happen. It is a key to positive interpersonal and/or inter-organisational relationships in various settings because it strongly influences how individuals interact with other individuals, organisations or (computer) systems.
In everyday usage trust has many definitions. A wide range of definitions can be found in different dictionaries (between nine and 24), so it can be stated that trust is a highly complex and multidimensional phenomenon. An interesting fact is that trust definitions change over time–the “center of mass” of the definitions is shifting. According to Guralnik (1958) the basic term “trust” means “reliability of some person, or thing” or “to allow to do something without fear of the outcome.” The second interpretation of trust describes very well the feeling of a client who performs a transaction with a bank, or shops through the Internet using his/her credit card. The following functions, techniques and services together form the sense of “trust” for a human being who uses a service or a given piece of equipment: security, identification, authentication, privacy and confidentiality.
An Overview of Trust Definitions
Classification of the Meanings of Trust
McKnight and Chervany (1996) made a very deep and thorough analysis of the word “trust” from many aspects in their working paper entitled “The Meanings of Trust.” The working paper analyses the definitions of trust as given in 60 research articles and books. It has a multi-disciplinary approach, as 18 of the sources come from the management/communication-related literatures, 19 come from sociology/economics/political science and 23 from psychology or social psychology. The goal of the working paper was to develop a classification system for the types of trust and to develop trust definitions/types that can be accepted by most of the disciplines.
Construct
It is not possible to define trust in an appropriately narrow way, as this single word covers too many concepts. A way to overcome the problem of complexity is to build the definition from constructs. A construct is one basic building block/element of a complex definition. By developing empirical and theoretical trust constructs, the classification of trust became possible. Trust refers to a relatively broad set of constructs, both in terms of the trust research literature and in terms of everyday usage of the term. The final proposal is that trust has to be characterized as a set of inter-related constructs. The classification of trust constructs results in three basic categories: Impersonal/Structural trust means that trust is founded upon social or institutional structures in the situation, not on personal attributes of the trusting or trusted parties. Dispositional trust means that trust is based on the personality attributes of the trusting party. That is, the truster has a general tendency to trust others across situations or has a general faith in human nature. Dispositional trust can be described as an “essential trustfulness of others as well as a fundamental sense of one’s own trustworthiness.”
Personal/Interpersonal trust means that one person trusts another person, persons or thing(s) in the situation. That is, the trusting entity is one person, and trust is directed to another party or parties. In the case of Interpersonal trust, two or more people (or groups) trust each other in the situation. The states that belong to this group are the Affective State (Attitude and Feeling) and the Cognitive State (Expectancy, Belief, Intention, Behavior). Based on the analysis, three conclusions can be made:
• Trust is most often defined in terms of expectancies or beliefs. Expectancies and expectations reflect the future orientation of trust. Beliefs reflect the critical role that perceptions about the other party play in trust.
• Many definitions include affective, or cognitive/affective, aspects. These definitions of trust typically include a phrase about feelings of security about, or confidence in, the trusted party (e.g., “emotional security”).
• A large number of definitions refer to trust as a behavior.
The Six Trust Types
Guided by the classification system, six related types of trust have been defined in the working paper. The six types cover the more common of the dictionary definitions of trust. This multi-dimensional view of trust provides a parsimonious way to organize measurable trust types, while clearly distinguishing one type from another.
Trusting Intention–Trusting Intention can be defined as the extent to which one party is willing to depend on the other party in a given situation with a feeling of relative security, even though negative consequences are possible. It is personal (originating in a person) and (one-way) directional.
Trusting Behavior–Trusting Behavior means the extent to which one person voluntarily depends on another person in a specific situation with a feeling of relative security, even though negative consequences are possible.
Trusting Beliefs–Trusting Beliefs means the extent to which one believes (and feels confident in believing) that the other person is trustworthy in the situation. Trustworthy means one is willing and able to act in the other person’s best interests. The Trusting Beliefs construct is shown as person- and situation-specific. The most prevalent (and probably the most important) trusting beliefs involve benevolence, honesty, competence and predictability.
System Trust–System Trust means the extent to which one believes that proper impersonal structures are in place to enable one to anticipate a successful future endeavor. Personal attributes of the other party are not at issue with System Trust. Two types of impersonal structures can be differentiated: (a) structural assurances and (b) situational normality. Structural assurances include such safeguards as regulations, guarantees or contracts. Situational normality may include one’s own role and others’ roles in the situation. System Trust supports Trusting Intention.
Dispositional Trust–Trust can be viewed as a cross-situational, cross-personal construct, which can be called Dispositional Trust. This construct recognizes
that people develop, over the course of their lives, generalized expectations about the trustworthiness of other people.
Situational Decision to Trust–Situational Decision to Trust means the extent to which one intends to depend on a non-specific other party in a given situation. It means that one has formed an intention to trust every time a particular situation arises, irrespective of one’s beliefs about the attributes of the other party in the situation. It is simply an individual, situational strategy.
The interdisciplinary nature of the constructs is apparent: System Trust comes from sociology, Situational Decision to Trust comes from economics, Dispositional Trust from psychology, while Trusting Beliefs, Trusting Intention and Trusting Behavior reflect research in several disciplines.
Trust in Software Development
Sabherwal (1999) introduced the role of trust in Outsourced Information System Development (OISD) projects. In this environment the best-fitting definition of trust was “confidence that the behavior of another will conform to one’s expectations and in the goodwill of another.” The analysis concentrates on trust between groups of people working together. According to this approach, trust arises between individuals or between organisations (groups of people) during project realisation and services. Characterizing trust from this aspect, it can be classified into four categories:
• Calculus-based trust–Originates from the rewards or punishments connected to a project. The base of this type of trust is the working structures (reporting mechanisms, change management procedures).
• Knowledge-based trust (KBT)–The base of KBT is the partners knowing each other well from working together on previous projects.
• Identification-based trust–The partners identify common goals, reach mutual understanding and appreciate each other’s efforts. They will act for one another.
• Performance-based trust–This type of trust depends on a project’s early success. Accomplishing a project goal will improve trust and cooperation, while performance problems can cause distrust and conflict.
Trust in Web Applications
Internet- (Web-) based applications, usually called eTechnologies, are very popular and have spread extremely fast. Today eTechnologies are experiencing a strong setback, reflected, for example, in the values of “dot-com” companies on the NYSE (in spring 2001). According to many experts, the main reason for this decreasing trend is the consumers’ lack of trust in Web providers. Based on the data collection and detailed analysis done by Hoffman, Novak and Peralta (1999), interesting results were published. The perceptions of Web shopping (based on 45 million U.S. Web users) are as follows: not safe to give credit card number (29 mill.), secondary use of personal data by the providers (26 mill.), not all Web sites are legitimate (16 mill.), will not get what was ordered (8 mill.).
The reasons for not buying on the Web (based on 12.6 million non-buyers) are: don’t trust security (4.77 mill.), privacy problems (1.43 mill.), no need (0.932 mill.), no interest (0.444 mill.). The conclusion of the analysis is that the most effective way for a commercial Web provider to build a profitable exchange relationship with users/buyers is to earn their trust, and the way to earn this trust is to develop more cooperative interaction between the on-line business and its customers. Using this conclusion as a starting point, trust is an important factor between the user and the following actors of a Web-based system/service:
• Internet provider–structure, regulations, law, contract.
• Content provider–handling of data.
• Individuals–who designed, developed the system and operate it.
• Technology–the applied security mechanisms in ICT.
Trust in Information Management
The previous subchapters gave an overview of different approaches to trust. In the following, the connection between trust components and information technology management is outlined. Approaching from the users’ side, there is an emotional component (feeling of security, confidence) and a cognitive one (beliefs, expectancies). According to the classification of McKnight and Chervany, this relation can be described with the Trusting Intention and Trusting Beliefs constructs. These two components are related to the institutional phenomena (System Trust). During the development phase of an information system, the willingness to depend, the trusting beliefs and the situation-specific trusting behaviors of future users are present (Trusting Intention, Trusting Beliefs and Trusting Behavior constructs). For the managers of information systems, belief, intention and behavior are the most important components of trust in their contact with their subordinates. In this contact the relationship between trust and power is also important, as managers have power originating from their position. The sometimes unstable power situation between employees and managers can be controlled by well-defined rules and control mechanisms of the firm (System Trust).
SECURE INFORMATION MANAGEMENT
The owners, the providers and sometimes the state as well intend to push people to use Internet-based services. People will use these services when they think, or know, that they are based on mutual advantages. If there is a risk in using these systems, people do not trust them and will not use them. The main risk factor in these cases is the security of the systems. In the following the components of security are introduced.
Security
Security consists of a number of measures that organisations implement to protect information and systems. It includes efforts not only to maintain the
confidentiality of information, but also to ensure the integrity and availability of that information and of the information systems used to access it. Security is a matter of conscious risk-taking, so in every phase of a computer system’s life cycle the security level applied must cost less than the expense of a successful attack. In other words, security must be strong enough that attacking the system is not worthwhile, because the investment needed for a successful attack would be higher than the expected benefits. At different levels different security solutions have to be applied, and these separate parts have to cover the entire system consistently. The main security requirements are as follows:
• Confidentiality–Protection from disclosure to unauthorized persons.
• Integrity–Maintaining data consistency.
• Authentication–Assurance of the identity of a person or the originator of data.
• Non-repudiation–The originator of communications cannot deny it later.
• Availability–Legitimate users have access when they need it.
• Access control–Unauthorized users are kept out.
These are often combined; for example, user authentication is used for access control purposes, and non-repudiation is combined with authentication (a minimal illustration follows below).
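As a purely illustrative sketch (not taken from any specific product or standard), the following Python fragment shows how two of the requirements above are typically combined: a user is first authenticated against a stored, salted password hash and then checked against an access control list before an operation is permitted. The user names, the ACL contents and the parameter choices are assumptions made for this example only.

    import hashlib, hmac, os

    USERS = {}                                             # login -> (salt, password hash)
    ACL = {"alice": {"read", "write"}, "bob": {"read"}}    # permitted operations per user

    def register(login, password):
        salt = os.urandom(16)
        USERS[login] = (salt, hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100000))

    def authenticate(login, password):
        """Authentication: verify the claimed identity against the stored hash."""
        if login not in USERS:
            return False
        salt, stored = USERS[login]
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100000)
        return hmac.compare_digest(candidate, stored)      # constant-time comparison

    def authorize(login, operation):
        """Access control: unauthorized users (or operations) are kept out."""
        return operation in ACL.get(login, set())

    register("alice", "correct horse battery staple")
    if authenticate("alice", "correct horse battery staple") and authorize("alice", "write"):
        print("access granted")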
Elements of Security
The building blocks of security are the security services and the security mechanisms. The security services are:
• Access control: Protects against unauthorized use.
• Authentication: Provides assurance of someone’s identity.
• Confidentiality: Protects against disclosure to unauthorized identities.
• Integrity: Protects against unauthorized data alteration.
• Non-repudiation: Protects against the originator of communications later denying it.
The means of achieving these properties depend on the collection of security mechanisms that supply the security services, on the correct implementation of these mechanisms and on how these mechanisms are used. Regarding security mechanisms with crypto functions, three basic building blocks are used:
• Encryption is used to provide confidentiality, and can also provide authentication and integrity protection.
• Digital signatures are used to provide authentication, integrity protection and non-repudiation.
• Checksums/hash algorithms are used to provide integrity protection and can provide authentication.
Usually one or more security mechanisms are combined to provide a security service, and a typical security protocol provides one or more services. The hierarchy of services, mechanisms and algorithms is the following: services are built from mechanisms, and mechanisms are implemented using algorithms. The application possibilities of security mechanisms to provide security services are shown in Table 1. Several algorithms that realize mechanisms will be introduced later in the “Technologies and Tools” subchapter.
Table 1: Application of security mechanisms to provide security services

                               Encryption    Hash functions    Digital signatures
  Privacy or confidentiality       X
  Integrity                        X               X                   X
  Authentication                   X               X                   X
  Access control
  Non-repudiation                                                      X
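As a minimal sketch of this mapping (assuming the third-party Python package "cryptography" is installed; the message text and the algorithm choices are illustrative assumptions, not recommendations), each mechanism family in Table 1 can be exercised in a few lines:

    import hashlib
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    message = b"transfer 100 EUR to account 1234"

    # Encryption -> confidentiality: only holders of the key can read the message.
    key = Fernet.generate_key()
    ciphertext = Fernet(key).encrypt(message)
    assert Fernet(key).decrypt(ciphertext) == message

    # Hash function -> integrity: any change to the message changes the digest.
    digest = hashlib.sha256(message).hexdigest()
    assert hashlib.sha256(message).hexdigest() == digest

    # Digital signature -> authentication, integrity and non-repudiation: only the
    # holder of the private key can produce the signature, and anyone holding the
    # public key can verify it.
    signer = Ed25519PrivateKey.generate()
    signature = signer.sign(message)
    signer.public_key().verify(signature, message)   # raises InvalidSignature if tampered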
Different services can be applied along the information handling process. The services that make users feel that a system is secure, and thus induce trust, are confidentiality/privacy, integrity and authentication (identification of the user). The more different mechanisms are used, the higher the level of the user’s trust becomes. The weights of the services are different, so the trust level generated in the user’s mind also differs according to the service. Consider an example: the main factor for trust is confidentiality, and it is realized through encryption algorithms; the result is a trust level T2. If a digital signature mechanism is also applied in the same process, it will add a trust level T1. The cumulative level of trust will be T1 + T2 (Figure 1 shows the approach in a graphical way). The figure gives a qualitative representation, as it has not been proven that the connection between the trust level and the level of the applied security service (weak, strong) is linear.
Figure 1: Qualitative connection between trust level and security services (trust level, from low to high, plotted against the level of the applied security service, from weak to strong; the confidentiality/authenticity curve reaches level T2 and the integrity curve reaches level T1)

Security Domains
Security is a complex domain; it would be a big mistake to take into consideration only software-related fields. The following list gives the main topics that belong to
the term “computer security” and, when managing information systems, have to be taken into account:
• Physical security–Controlling the incoming and outgoing of people and materials. Protection against the elements and natural disasters.
• Operational/procedural security–Includes everything from managerial policy decisions to reporting hierarchies.
• Personnel security–Hiring employees, background screening, training, security briefings, monitoring and handling departures.
• System security–User access and authentication controls, assignment of privilege, maintaining file and file-system integrity, backups, monitoring processes, log-keeping and auditing.
• Network security–Protecting network and telecommunications equipment, protecting network servers and transmissions, combatting eavesdropping, controlling access from untrusted networks, firewalls and detecting intrusions.
Human Aspects in Security Services
Identification
The security service authentication, according to its definition, assures the identity of a person or of the originator of data. The identification of a user of an information system is a complex task. The original meaning of the term “identification” is the process “to make identical; treat as the same,” or “to show to be the same as described or claimed; to show to be a certain person or thing” (Guralnik, 1958). In computer science an identifier is a string of bits or characters that names an entity, such as a program, device or system, so that other entities can call that entity. In the context of information systems, the purpose of identification is very concrete: it is used to link a stream of data with a certain person, so the following definition can be given: “human identification is the association of data with a particular human being” (Clarke, 1994). Information systems have tended to use codes rather than names as the primary identification mechanism. As information technology developed, artificial codes gave way to combinations of natural names and codes. The most reliable way to identify a person is to apply biometric techniques. A short overview of biometric techniques is given in the following:
• appearance (e.g., the familiar passport descriptions of height, weight, color of skin, hair and eyes, visible physical markings; gender; race; facial hair, wearing of glasses; supported by photographs);
• social behavior (e.g., habituated body-signals; general voice characteristics; style of speech; visible handicaps; supported by video-film);
• bio-dynamics (e.g., the manner in which one’s signature is written; statistically analysed voice characteristics; keystroke dynamics, particularly in relation to login-ID and password);
• natural physiography (e.g., skull measurements; teeth and skeletal injuries; thumbprint, fingerprint sets and handprints; retinal scans; earlobe capillary patterns; hand geometry; DNA-patterns); and
• imposed physical characteristics (e.g., dog-tags, collars, bracelets and anklets; brands and bar-codes; embedded micro-chips and transponders).
The techniques applied in biometric systems in IT include a physiological element or factor (fingerprint, iris, facial features, etc.) as well as a behavioral one (e.g., vocal patterns, typing rhythm). The Layered Biometrics Verification (LBV) technology entails layering different biometric technologies into a complex identification process (a simple sketch of this layered idea is given below). When selecting the proper technique for identification of information system users, there are several hard limits that have to be taken into consideration, such as:
• Adaptability–the technique must be easy to realize for computer input.
• Uniformity–the feature should not change over time.
• Thrift–acceptable costs of realisation.
Biometrics has recently become a very important field of research. The reason for this enhanced interest is that with the methods and techniques of biometrics it is possible to identify the person (the user him/herself) reliably, as biometrics applies unique biological characteristics for identification. With passwords and PIN codes, only the computer or the equipment can be identified, but not the person. As trust is the base of secure information systems and network communication, and the starting point for developing trust is secure identification, the importance of biometrics is obvious. An additional important question is the identification of special users. Usability is vital for avoiding the great divide between those who have and those who do not have access to the resources of the Information Society. All citizens have to be able to identify themselves wherever they are and at all access points with any equipment. Based on statistical data, the rate of disabled/handicapped (mentally, physically or sensorially) people is about 10% in a general population. Very young and elderly people also need special handling. These groups of citizens can be considered special users.
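The layered verification idea mentioned above can be illustrated with a short, purely hypothetical Python sketch; the feature sets, the similarity measure and the thresholds are invented for this example, and a real system would rely on dedicated matching engines for each biometric layer.

    def similarity(sample, template):
        """Toy matcher: fraction of enrolled features found again in the live sample."""
        return len(sample & template) / len(template) if template else 0.0

    def layered_verify(samples, templates, thresholds):
        """Accept the user only if every enrolled biometric layer passes its own threshold."""
        return all(
            similarity(samples.get(layer, set()), template) >= thresholds[layer]
            for layer, template in templates.items()
        )

    enrolled = {"fingerprint": {"f1", "f2", "f3", "f4"}, "voice": {"v1", "v2", "v3"}}
    thresholds = {"fingerprint": 0.75, "voice": 0.66}
    live_sample = {"fingerprint": {"f1", "f2", "f3"}, "voice": {"v1", "v3"}}
    print(layered_verify(live_sample, enrolled, thresholds))   # -> True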
Role of Interfaces
Generally speaking, the goal of an interface is to interconnect two or more entities at a common point or shared boundary. As a communication/information system term, an interface is the point of communication between two or more processes, persons or other physical entities. Interfaces are the key points for gaining the trust of the user/customer. They are the first connection point between the user and the system, and the identification of the users takes place at this point (e.g., password input, fingerprint reader, smart card reader), so they have to be designed very carefully. Interfaces can be based on software (which generates the screen of the display) or on hardware (equipment for input). In the design of both types of interfaces, ergonomic and psychological aspects are taken into consideration besides the technical ones.
The problem of special users has to be taken into consideration. The question that designing an interface raises is, “For all, or for the able-bodied only?” For an information system manager, the selection of the interfaces in many cases also determines the security level of the system.
User Training and Education
Another factor that has increased the risk of intrusion for Internet users is the tremendous rate of technological change. The speed of technological development has never been faster, and the world is trying frantically to catch up with it. One of the biggest security concerns that a small business may face today is a lack of information about the threats that exist on the Internet. This does not mean that people don’t care, or aren’t concerned, but in today’s world of doing business at light speed, managers do not have the time or resources to stay on top of the latest developments in information security. The knowledge of new attack technologies and the latest results of defence techniques has to be learned; otherwise, the internal network of the enterprise is open to intruders. The time invested in learning or maintaining the level of security knowledge is not wasted; it pays for itself when the first attack is averted. If an organisation or an individual user neglects continuous education in the topic, information security can become a very expensive afterthought.
Standards
In the field of security, standards and quasi-standards have an important role. In the following, some of the most relevant ones are introduced, to show the directions and status of these significant works. In order to classify the reliability and security level of computer systems, an evaluation system has been developed and the criteria have been summarized in the so-called “Orange Book” (Orange Book, 1995). Its purpose is to provide technical hardware/firmware/software security criteria and associated technical evaluation methodologies. The multi-part (1-8) ISO/IEC 10181 “International Standard on Security Frameworks for Open Systems” (ISO, 1996) addresses the application of security services in an “Open Systems” environment, where the term “Open System” is taken to include areas such as databases, distributed applications, open distributed processing and OSI. The ISO/IEC 15408 standard (ISO, 1999) consists of three parts, under the general title “Evaluation Criteria for Information Technology Security.” This multi-part standard defines criteria to be used as the basis for the evaluation of the security properties of IT products and systems. The standard originates from the well-known work called the “Common Criteria” (CC). By establishing such a common criteria base, the results of an IT security evaluation become meaningful to a wider audience.
TECHNOLOGIES AND TOOLS FOR ACHIEVING TRUST
As has been stated earlier, security is a very complex term. There is computer security, communication security, information system security, physical security and a lot of other “securities,” and these terms overlap each other in many cases. The development of a security system can start with the building of the threat model that describes all possible attack and failure types for the actual system. Based on this threat model, the specification of the security functions can also be described. The security policy can be defined as a succinct statement of a (generic or specific) system’s protection strategy. A protection profile is a description of security mechanisms in an implementation-independent way. A security target is a more detailed description of security mechanisms with a specific implementation offer (Anderson, 2001). The totality of protection mechanisms within a computer system, including hardware, orgware and software, the combination of which is responsible for enforcing a security policy, is called the trusted computing base (TCB). The ability of a trusted computing base to enforce a unified security policy correctly depends on the correctness of all types of mechanisms within the trusted computing base, on the protection of those mechanisms to ensure their correctness, and on the correct input of parameters related to the security policy. As there are too many security technologies, tools and pieces of equipment to be introduced here, only the most frequently used and some new ones will be described in the following. Detailed descriptions can be found, for example, in Anderson (2001), Tipton and Krause (1998), Sams (1997), Menezes, van Oorschot and Vanstone (1996) and Schneier (1996).
Technologies
Virus Defence
Viruses and other malicious code (worms and Trojans) can be extremely destructive to vital information and computing systems, both private and business. There have been big advances in anti-virus technology, but malicious code remains a permanent threat. The reason is that even the highest-level security technology can only be as effective as the users operating it. In the chain of computer security, human beings seem to be the weakest point, so there is no absolute security in virus defence. There are some basic rules that have to be followed; in this way users can achieve an acceptable level of virus protection:
• Do not allow anybody else to use your computer.
• Install an anti-virus program and update it regularly.
• Use different anti-virus technologies.
• Open e-mail attachments only from trusted sources.
• Be wary of new software, even from a trusted source.
• Check CDs and floppy disks before using them.
• Back up files regularly.
• In case the computer has been infected by a virus, contact professionals (network/system administrator, or a specialized firm).
Achieving Confidentiality
The main factor of trust is confidentiality, which can be achieved by technologies that convert or hide the data into a form that cannot be interpreted by unauthorized persons. There are three major techniques to fulfill this goal: encryption, steganography and winnowing.
• Encryption transforms the message into a ciphertext such that an enemy who monitors the ciphertext cannot determine the message sent. The legitimate receiver possesses a secret decryption key that allows him to reverse the encryption transformation and retrieve the message. The sender may have used the same key to encrypt the message (with symmetric encryption schemes) or a different, but related, key (with public key schemes). Public key infrastructure (PKI) technology is widely used; DES and RSA are well-known examples of encryption schemes, while AES (with the Rijndael algorithm) belongs to the new generation.
• Steganography is the art of hiding a secret message within a larger one in such a way that the opponent cannot discern the presence or contents of the hidden message. For example, a message might be hidden within a picture by changing the low-order pixel bits to be the message bits.
• Winnowing is a new technique, called “chaffing and winnowing”–to winnow is to “separate out or eliminate (the poor or useless parts),” as in the process of separating grain from chaff. Winnowing does not employ encryption, and so does not have a “decryption key.” A confidentiality system based on winnowing works in the following way: there are two parts to sending a message–authenticating (adding MACs) and adding chaff. The recipient removes the chaff to obtain the original message (a minimal sketch of this idea follows below).
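A purely illustrative Python sketch of the chaffing-and-winnowing idea, using only standard-library HMACs, is given below; the one-bit-per-packet framing and all names are assumptions made for this example, not a description of a real protocol.

    import hashlib, hmac, os, random

    SECRET = os.urandom(16)    # authentication key shared only by sender and receiver

    def mac(serial, bit):
        return hmac.new(SECRET, ("%d:%s" % (serial, bit)).encode(), hashlib.sha256).digest()

    def chaff_and_send(message_bits):
        """Sender: for every bit emit one genuine packet (valid MAC) and one chaff packet."""
        packets = []
        for serial, bit in enumerate(message_bits):
            packets.append((serial, bit, mac(serial, bit)))                        # wheat
            packets.append((serial, "0" if bit == "1" else "1", os.urandom(32)))   # chaff
        random.shuffle(packets)
        return packets

    def winnow(packets):
        """Receiver: keep only packets whose MAC verifies, then reassemble by serial number."""
        wheat = {s: b for s, b, tag in packets if hmac.compare_digest(tag, mac(s, b))}
        return "".join(wheat[s] for s in sorted(wheat))

    bits = "10110"
    assert winnow(chaff_and_send(bits)) == bits

An eavesdropper who does not know the shared authentication key cannot tell which packets are wheat and which are chaff, so confidentiality is obtained without any decryption key, exactly as described above.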
Tools and Equipment
Security Architectures
The goal of security in distributed environments is to reflect, in a computing and communication-based working environment, the general principles that have been established in society for policy-based resource access control. Each involved entity/node should be able to make its assertions without reference to a mediator, and especially without reference to a centralized mediator (e.g., a system administrator) who must act on its behalf. Only in this way will computer-based security systems achieve the decentralisation needed for scalability in large distributed environments. Security architectures represent a structured set of security functions (and the needed hardware and software methods, technologies, tools, etc.) that can serve the security goals of the distributed system. In addition to the security and
distributed enterprise functionality, the issue of security is as much (or more) a deployment and user-ergonomics issue as a technology issue. That is, the problem is as much one of finding out how to integrate good security into the industrial environment so that it will be trusted to provide the protection that it offers, be easily administered and be really useful.
Security Framework for Managers
Information and knowledge, and the systems that handle and process them, are among the most valuable property of any organisation. Adequate security of these assets is a fundamental responsibility of management. Recognizing this need and the uneven security knowledge of users and managers of systems, the National Institute of Standards and Technology (NIST) has developed an evaluation framework to help protect information systems (NIST, 2000). The Federal Information Technology (IT) Security Assessment Framework (or Framework) provides a method for agency officials to determine the current status of their security programs relative to existing policy and, where necessary, establish a target for improvement. The Framework does not establish new security requirements.
Firewall
A firewall is a system or group of systems that enforces an access control policy between two networks. The actual means by which this is accomplished varies widely, but in principle the firewall can be thought of as a pair of mechanisms: one exists to block traffic, and the other to permit traffic (a simple sketch follows below). The firewall can also filter both incoming and outgoing messages, so it is a popular tool for protecting computers on the net. The case of false trust, which occurs in many cases when installing off-the-shelf security software, also has to be mentioned. This is the case when a situation is considered trusted, but should not be. An example of such a case is when a company buys a firewall to protect its network, but the system is installed with factory defaults, and even the default password remains unchanged. In this case the company thinks the environment is secure, while attackers can enter the system very simply.
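The block/permit pair of mechanisms can be illustrated with a small, hypothetical Python sketch of a rule-based packet filter with a default-deny policy; the rule fields, addresses and ports are invented for this example and do not describe any particular firewall product.

    from ipaddress import ip_address, ip_network

    # Ordered rule list: (action, source network, destination port or None for "any").
    RULES = [
        ("permit", "10.0.0.0/8", 80),     # internal clients may reach the web proxy
        ("block",  "0.0.0.0/0", 23),      # telnet is blocked from everywhere
        ("permit", "0.0.0.0/0", 443),     # inbound HTTPS is allowed
    ]

    def filter_packet(src_ip, dst_port):
        """Return the action of the first matching rule; anything unmatched is blocked."""
        for action, src_net, port in RULES:
            if ip_address(src_ip) in ip_network(src_net) and port in (None, dst_port):
                return action
        return "block"                    # default deny

    print(filter_packet("10.1.2.3", 80))    # -> permit
    print(filter_packet("192.0.2.7", 23))   # -> block

Such a filter is only as effective as its rule set; leaving factory-default rules or passwords in place recreates exactly the false-trust situation described above.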
Integrated Software
Integrated software is the most flexible and comprehensive security/monitoring program available today for a PC or a network. It can provide any combination of stealth and security, from invisible monitoring of all activities on the computer to the complete locking of the system if necessary.
Smart Cards
There is a strong need for a tool that can fulfill the functions connected to trustworthy services. Smart card (SC) technology can offer a solution to current problems of secure communication by simultaneously fulfilling the main
demands of identification, security and authenticity, besides the functions of the current application. The smart card is a plastic plate that contains a microprocessor, a chip, similar to a computer. It has its own operating system, memories, file system and interfaces. A smart card can handle all authorized requests coming from the “outside world.” It is also called an IC card. There are different SC configurations equipped with different interfaces. The crypto-card has a built-in chip for doing encryption/decryption, other cards have keyboards, and the SC for secure identification has a fingerprint sensor (Balaban, 2001; Koller, 2001). The smart card can help in the secure signing of digital documents as well. Smart cards can be read by SC readers integrated into or connected to PCs or any other equipment. There are many special applications, e.g., for the military, using a multi-function card for identification, encryption and physical access (Peck, 2001). The application of SCs in the security field can lead to the next step of the technological revolution, because it offers new possibilities for effectively integrating the functions of security with the actual application field. In this way the SC can become the general, and at the same time personalized, “key” of the citizens for the information society. Governments have also realized the importance of smart cards in secure communication; the title of the 5th point of the eEurope initiative (issued by the EC) is “Smart cards for secure electronic access” (European Commission, 2001).
Personal Trusted Device
People like smart little tools that fit in their hands and can be carried with them, so that they can control them both physically and in time. This physical and temporal controllability makes people think that these devices are secure (physically nobody else can access them), so they trust them (even though this assumption is not always true). When such a device can be used for communication, it is called a mobile phone. Today mobile phones represent the first generation of Personal Trusted Devices (PTDs), as they can be used not only for talking but for various other functions as well. The connection of mobile phones with the Internet (WAP) was a big leap toward turning mobile phones into PTDs. The range of functions has become wide, and different mobile technologies (mTechnologies) have appeared. According to manufacturers (Nokia, 2001), market trends in the field of applications are mobile entertainment (downloading of games, music, video), location-based services (getting local information), personalisation (packaging personal services in a unique and user-friendly way), multimedia messaging (electronic postcards, video clips) and mobile e-commerce (buying different things easily and remotely, not only from home but from anywhere). The weak point of mTechnologies is again security. According to a study (VAS, 2000), “in order to market m-commerce successfully, extra effort needs to be put on convincing the customers about safety and reliability of m-commerce.”
The mobile phone will become a trusted device in e-mail or Web communication by using PKI and other crypto-systems. User authentication could be done based on biometrics (fingerprint or voice). Moreover, the application management in such devices could be done dynamically, and every user could create his/her own profile and environment. The application possibilities of a PTD are nearly infinite; only imagination limits them. Research is emerging in this field, and such devices could become a reality very soon.
Trends in Information Management Systems
Nearly all types of systems in all fields of the economy have become distributed, and virtual structures have appeared. The result is large structural and cultural changes in enterprises. Based on the integration of information and communication technologies, infocom technology has appeared. In parallel, a new communication technology has appeared as well: mobile technology. Mobile devices have become far more popular than was estimated before, and thanks to the extremely fast development of the electronics industry, these devices have grown into multi-functional tools. The mobile Internet rewrites many rules. All types of people are using new services that are relevant to their personal needs and preferences, and that are accessible anytime, anywhere. New terms are appearing in all sectors of industry (e.g., e&mManufacturing, e&mCommerce), in finance (e&mBanking), in government (eGovernment) and in society as well (eLearning, e&mMedia). Organisations have a continuously increasing dependence on information and communication technologies, and these technologies have become strategic ones, not only in key fields. The information architectures, the structures, the business processes and the business models of enterprises have to be modified according to the new infocom technology. Mobile technologies add new value by keeping users continuously connected. Short response times assure the validity of information, so productivity is no longer restricted by place or time. Best of all, it is possible to experience new ways of sharing information. In an information and communication-centered world, security is exceptionally important, as the value and strategic role of reliable information is extremely high. Information technology management systems and the managers themselves have to adapt to this fast, non-stop changing environment.
CONCLUSIONS
The chapter has briefly introduced the main characteristics of trust, its connections to security services and mechanisms, and its role in information technology management. The importance of trust is increasing very fast, as the main characteristic of the Information and Knowledge Society is network-based organisations and services. As pointed out by different analyses based on real-life statistics, when users do not trust a system/service, they do not use it. Organisations have to adapt themselves to this requirement, even by changing their culture or organisational structures.
Today the global nature of communications platforms, particularly the Internet, is providing a key which opens the door to the further integration of the world economy. Integrated mobile technologies will speed up this tendency, as they offer mobility and freedom for citizens. Distributed information systems of different sizes will play a definite role, but because of their openness and flexibility, information systems will always carry a security risk. As absolute (100%) security cannot be reached, risk management techniques and technologies have to guarantee the lowest possible risk. There is a need for complex, flexible security systems that are user friendly and platform independent at the same time. The hardware and software elements of such systems are being developed, and potential users have to become acquainted with them. The managers of information technology have to integrate these technologies, tools and devices into their systems to provide a high security level that can induce trust in all the humans involved in the different phases of the life cycle of the information system.
ACKNOWLEDGMENTS Part of the work included in this chapter has been done with the support of the OTKA (Hungarian Scientific Research Foundation) project with the title “The Theoretical Elaboration and Prototype Implementation of a General Reference Architecture for Smart Cards (GRASC)” (Grant No.: T 030 277).
REFERENCES
Anderson, R. (2001). Security Engineering: A Guide to Building Dependable Distributed Systems. New York: John Wiley & Sons.
Balaban, D. (2001). Fortifying the network. CardTechnology, May, 70-82.
Clarke, R. (1994). Human Identification in Information Systems: Management Challenges and Public Policy Issues. Available on the World Wide Web at: http://www.anu.edu.au/people/Roger.Clarke/DV/HumanID. Accessed September 8, 2001.
Council of the European Commission. (2002). eEurope. An Information Society For All, Action Plan.
Dhillon, G. (2001). Information Security Management: Global Challenges in the New Millennium. Hershey, PA: Idea Group Publishing.
FBI. (2001). CSI/FBI Computer Crime and Security Survey, 7(1).
Guralnik, D. B. (1958). Webster’s New World Dictionary of the American Language. Cleveland & New York: The World Publishing Company.
McKnight, D. H. and Chervany, N. L. (1996). The meanings of trust. University of Minnesota Management Information Systems Research Center (MISRC), Working Paper 96-04.
Hoffman, D. L., Novak, T. P. and Peralta, M. (1999). Building consumer trust online. CACM, 42(4), 80-85.
ISO/IEC 10181-1. (1996). Information Technology—Open Systems Interconnection—Security Frameworks for Open Systems: Overview.
ISO/IEC 15408. (1999). Evaluation Criteria for Information Technology Security.
Koller, L. (2001). Biometrics get real. CardTechnology, August, 24-32.
Menezes, A., van Oorschot, P. and Vanstone, S. (1996). Handbook of Applied Cryptography. New York: CRC Press.
NIST. (2000). Federal Information Technology Security Assessment Framework, November 28. National Institute of Standards and Technology (NIST).
Nokia. (2001). Why Mobile Internet? Available on the World Wide Web at: http://www.nokia.com/networks/mobile_internet/. Accessed October 14, 2001.
Orange Book. (1995). Trusted Computer System Evaluation Criteria. DoD 5200.28-STD, Department of Defense, December 26, Revision: 1.1, Date: 95/07/14.
Peck, M. (2001). Smart cards for smart soldiers. CardTechnology, May, 98-108.
Sabherwal, R. (1999). The role of trust in outsourced IS development projects. CACM, 42(2), 80-86.
Sams, N. (1997). Maximum Security: A Hacker’s Guide to Protecting Your Internet Site and Network. New York: Macmillan Computer Publishing.
Schneier, B. (1996). Applied Cryptography. New York: John Wiley & Sons.
Tipton, H. and Krause, M. (Eds.). (1998). Handbook of Information Security Management. New York: CRC Press.
Ungson, G. R. and Trudel, J. D. (1999). The emerging knowledge-based economy. IEEE Spectrum, 36(5), May, 60-65.
VAS. (2000). Demand for m-commerce–On-line shopping. Market Study of VAS, June. Available on the World Wide Web at: http://www.nokia.com/networks/mobile_internet/. Accessed October 9, 2001.
Chapter XVII
Inexperienced Software Team and Global Software Team Kim Man Lui and Keith C. C. Chan The Hong Kong Polytechnic University, Hong Kong
ABSTRACT
Software project management in the 21st century requires that a manager deal with either an Inexperienced Software Team or a Global Software Team, or both. This is because well-developed and less well-developed countries have exploited information technology to different extents. The former requires managing a software team consisting of remotely located talent, whereas the latter requires managing a team of local, inexperienced developers. This chapter examines the management challenges involved and explains how these two supposedly different types of software development can be managed with one framework: Plagiarism-based Programming.
INTRODUCTION
Despite the burst of the dot com bubble and the economic slowdown in most parts of the world, the demand for programmers has never been greater. To meet the demand, some companies have started recruiting overseas and some have outsourced software projects offshore. Unfortunately, neither of them seems to be a very satisfactory solution to the manpower-shortage problem. The former requires programmers to physically relocate to a new place, and it takes time for them to adjust to a new environment and, possibly, a new and very different culture. By the time these programmers become productive, they could well be headhunted to work elsewhere. As for the latter, the risks and costs involved in managing an outsourced off-shore project could be very high.
Given the fact that the number of qualified programmers will not increase drastically or rapidly, software managers in most parts of the world will likely have to live with the manpower-shortage problem for some time. In fact, in the U.S. alone, according to an estimate by the Information Technology Association of America in April 2000, 850,000 IT positions were expected to remain unfilled in 2001 (Information Technology Association of America, 2000). Clearly, this figure would be many times larger if we were to compile global statistics. To deal with the manpower-shortage problem, the formation of global software teams, where members are recruited from all over the world and software is developed in a “distributed” manner, has to be considered. Forming such a global software team can have many advantages. In addition to alleviating the problems caused by the scarcity of human resources, programmers on a global team are free to work without the confines of physical location. If they happen to work in different time zones, they can develop a novel, non-stop, around-the-clock working style (see Figure 1). A company with a global software team may therefore be able to reduce costs and improve efficiency. Besides these benefits, forming a global software team also has advantages from the risk-management viewpoint. A software team located in one single location might be brought to a complete halt by events such as a strike, political turmoil, natural disasters or a terrorist attack. For a global software team, however, even if such incidents occur in one location, the team will likely be able to continue to function, as members in other parts of the world may be unaffected. This is also expected to increase customer confidence and trust: customers will be pleased to know that support services can continue even in crisis situations. Other than the idea of forming a global software team, some software managers are advocating a review of the software engineering curriculum to include compulsory or elective courses in software engineering for students majoring in other disciplines, especially engineering (Meyer, 2001). By doing so, it is hoped that more programmers can be produced more rapidly, here and elsewhere. While the idea of forming a global software team may increase the size of the pool of programmers that one can recruit, there is always a concern about quality.
Figure 1: Around-the-clock development
Software managers do not want only programmers; they want good programmers. Booch (1994) felt that there should not be any reason to believe that the software engineering community had an inordinately large proportion of smart people. It is up to the people within and outside the community to decide if this is the case. The problem of quality versus quantity and a country’s educational policy is beyond the scope of this chapter. Logically, one would prefer to establish a software team consisting of experienced programmers. But since this is almost impossible, a software manager can, at best, hope for a team that consists of both experienced and inexperienced programmers. Managers are thus faced with the problem of managing inexperienced programmers on a software project. It is not difficult to observe that maintaining a team with a large proportion of inexperienced members significantly reduces running expenses (Figure 2), as there can be a tremendous salary gap between skilled and unskilled developers. Companies that operate under tight cash flow will normally have an inexperienced software team as they try to minimize costs.
Figure 2: From a professional team to an inexperienced team
There have been few success stories about the management of such a team, and even those few rarely report a universally applicable success model. Furthermore, their repeatability is questionable because the management is people-oriented. Replicating previous results in project management must be based on a well-defined mechanism under the same set of assumptions. This chapter will discuss some of our experiences in running the inexperienced and the global software team. We believe that sharing our experience is useful to organisations thinking of exploiting the relatively cheaper talent in developing countries such as China. In the following, we first present some background on the environment that compelled us to develop techniques to manage an inexperienced software team and a global software team. Our objective is to present the details of a real case for study; it was our own past experience that motivated us to form these teams. In other words, the study did not start out with a purely academic motivation. In addition to discussing the motivation of our work, we present here a conceptual framework for software management practices in relation to an inexperienced and a global software team. We also describe the common connections between the inexperienced team and the global team from the management and software development points of view. Given the characteristics of these teams, we propose a software model called plagiarism-based programming, which is based on the concept of pattern
theory (Alexander, 1964, 1979; Coplien, 1999) and plagiarism–a forbidden misconduct that so many students have been tempted to engage in. This is proposed as a solution to the management problem so that programming assignments can be handled with less effort and time by inexperienced programmers cooperating remotely.
THE CASE OF SOFTWARE DEVELOPMENT WITHIN LESS-DEVELOPED REGIONS AND AROUND THE GLOBE
This section introduces some real cases that have driven the formation of the inexperienced software team and the global software team, respectively. The mainstay of the managerial decision to build them is finance. The argument that a computerized system is cheaper than clerical administration may not hold true in parts of the less-developed world, where the price of a personal computer is several times the monthly salary of a programmer, or where an industrial developing area may seem a utopia. Likewise, the argument that centralized support is cheaper than distributed service may not hold true when a corporate business covers a number of small-scale local operations across several remote regions or countries. Imagine being able to see and talk to several of your colleagues at once by videoconference on a mobile phone: centralisation transforms into geographical distribution in structure but remains virtual centralisation in function. This concept parallels the philosophy of a global software team. We will now look at these problems in detail.
Less-Developed Region–Inexperienced Team
There has been a trend of active rural industrialisation, in which manufacturing plants move from more-developed regions to less-developed ones so as to exploit the cheaper costs of land, labor and distribution channels (Otsuka, 2001). In order to manage these plants better, there is always a need for management information systems to be developed. While recruiting laborers for manufacturing is easy in less-developed regions, recruiting IT professionals for the phased development of an integrated, customized MIS is much more difficult. In China, the demand for IT professionals in the larger cities is so high that it is almost impossible for a manufacturing plant located in a rural area to recruit any. Those available in a poor rural area are usually inexperienced. Even though the alternative of employing expatriates may sound reasonable, it is not practical. Instead of in-house development, we might evaluate a third-party solution, but the additional expenses incurred in the purchase of vendor products, consultancy services, maintenance, version upgrading, training, traveling, etc. are expected to be much larger than what one can save from exploiting cheaper land and labor. In less-developed areas, many programmers do not receive proper training in computing. Besides this, the rate of personnel turnover is typically high. As long as they have
received some training, many of them will seek a job with better career prospects in the more developed cities in China. This results in a vicious cycle where the project manager always has to work with programmers who are green and inexperienced. The situation of a high turnover rate is aggravated, sometimes, by resignation without any advance notice: people tender their resignation and choose to leave on the same day, having held on to their current job while seeking any new opportunity. Clearly, handing over work is very difficult, if not impossible, and the team constantly has to work under-staffed. In Denmark, a technical manager from a small software company personally admitted that their team was energetic but inexperienced; some were even far from smart. He further explained that one of the highest costs in the software industry is people. The company could not afford to employ many qualified people; they are expensive. If the annual staff budget remains unchanged, people who gain experience from projects may soon leave. The budget, however, relies on profit, which is affected by external factors such as market competition from bigger companies. The good news was that his projects were of relatively small scope, which softened some of the impact of personnel turnover. One of the goals of knowledge management is to help employees within an organisation to contribute more to their employer (Wilson & Snyder, 1999). Even though the statement is clear, the way to achieve such a goal is vague indeed. One may argue that educating inexperienced people, or allocating suitable jobs according to an individual’s ability, may well fulfill the same purpose. However, when the knowledge and experience of staff members are not properly aligned with the tasks assigned, the learning curve can be steep and long (Amrine, Ritchey, Moodie & Kmec, 1993). Nevertheless, when a staff member becomes well trained in a less-developed region of China, or in a small company in Denmark, for example, his determination to look for better job prospects somewhere else becomes stronger. Contrary to what we might expect in well-developed regions, a certified professional program psychologically encourages people to leave a company with fewer opportunities, or a less-developed region, sooner. Some senior managers are disturbed by this phenomenon and say: “We are always training other companies’ staff.” As for allocating developers according to their skill sets, the idea is not feasible when all team members are inexperienced. Human resource allocation can therefore only be implemented to a limited extent. Better knowledge management, beyond adopting conventional principles, is required.
Borderless Region–Global Team
A company headquartered in New York had a number of small offices of about 40 employees each in different parts of the world. For each such office, one or two staff were hired to provide IT support. Whenever there was a need to modify the MIS system to meet requirements for local processing, requests for modification would be sent to the head office. The result was that more resources were required
at the head office to provide ongoing support to the branches. While a larger software team was thus required at the head office, it should be noted that the IT staff at the branches could well have much time to spare. The load-balancing problem got worse as the number of branches increased. The question, naturally, was whether it would be possible to link people together to establish a global software team. The ideal and more productive approach to the above would be a global software team. The team at each site then acts more or less as a distributed agent following a communication scheme set by a coordinating agent. Roughly, there are two main types of global team. One is by function, with a number of sub-teams each working on a particular function such as design, programming, testing or maintenance, as shown in Figure 3. This is now common practice, as many organisations speak of their development centers and support centers. The other is by agent, which is more aggressive and involves more technical, managerial and communication problems, also shown in Figure 3. All agents, located in different places, can perform similar assignments; one of them is the control agent, which delegates assignments and monitors progress. It is obvious that the two types of team differ in their foundations, and the global team by agent raises more new issues than the team by function. The framework proposed in the section Plagiarism-Based Programming does not make any assumption about the architecture of the global software team. Although we implemented it in a global software team by agent for small-scale projects, we believe the method sheds light on both types. Referring to the earlier problem, load-balancing can easily be resolved by the agent structure. In practice, the two types are not exclusive, and a hybrid model of the two can be used. Table 1 summarizes the key features of both types.
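To make the “team by agent” idea above concrete, here is a minimal sketch (not from the original chapter) of a control agent that delegates an assignment to whichever site agent is currently within office hours; the site names, time-zone offsets and office-hours rule are illustrative assumptions only.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class SiteAgent:
    name: str
    utc_offset: int                      # hours ahead of UTC (illustrative values below)
    assignments: list = field(default_factory=list)

    def on_duty(self, now_utc):
        local_hour = (now_utc + timedelta(hours=self.utc_offset)).hour
        return 8 <= local_hour < 17      # crude "office hours" rule, an assumption

class ControlAgent:
    """Delegates assignments to the first site that is currently on duty."""
    def __init__(self, sites):
        self.sites = sites

    def delegate(self, task, now_utc=None):
        now_utc = now_utc or datetime.now(timezone.utc)
        for site in self.sites:
            if site.on_duty(now_utc):
                site.assignments.append(task)
                return site.name
        raise RuntimeError("no site on duty; queue the task for the next shift")

sites = [SiteAgent("Hong Kong", 8), SiteAgent("Copenhagen", 1), SiteAgent("New York", -5)]
control = ControlAgent(sites)
print(control.delegate("adapt invoice module for the local branch"))
```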
Figure 3: Two types of global teams

Table 1: Two types of global software teams
1. By function: clear-cut functioning at each site. By agent: multi-functioning at each site.
2. By function: similar to a traditional company's organization, so past managerial experience might be further applied. By agent: requires a technical framework for the technical, managerial and communication problems of this new team structure (note that plagiarism-based programming is proposed for the team by agent).
3. By function: less flexibility for resource allocation and load-balancing. By agent: more flexibility for resource allocation and load-balancing.

THE CONCEPTUAL FRAMEWORK
The concept of a global software team can also be implemented locally, in the sense that a software team can be set up in different locations within the same country or in nearby countries or regions, where there may not be much difference in time zone or culture. In such a case, the term “multi-site” software team can be used more generally to describe the situation in which a software system is developed by teams that are physically separated from each other in different cities of a country or in different countries. Compared with a global team, a multi-site team in nearby time zones can be managed with less complexity and fewer challenges. On the other hand, a multi-site team of this scale has the constraint that service hours are relatively limited. Clients who send their requests over the Internet from around the world normally demand a prompt reply, but if all teams are in the same time zone, meeting that demand immediately outside office hours is not easy. In any case, the management framework required for a global or a multi-site team should be very similar. Exploring around-the-clock development and global development further, we realize that the intrinsic difference lies in how the synchronisation of work-in-progress proceeds. It follows that the challenges of managing around-the-clock tasking largely cover the managerial and technical problems of non-around-the-clock global software development. Around-the-clock development does relieve the severe pressure to improve time-to-market, by exploiting time zone differences. Unfortunately, there has not been a
sophisticated framework for this kind of software development. Yet this rarely stands in the way of building global software teams in practice, because a number of sites can work and communicate together under less strict synchronisation. For example, while waiting for a result from another site, a center can work on other tasks of the same project. In around-the-clock development, the synchronisation of work-in-progress and communications conforms to a rigid schedule based on a natural rhythm, just like the sun rising at 5:30 am and setting at 6:00 pm. The progress of a team relies not only on itself but also on the progress of another team, which posts its deliverables to the first team by the end of each day. Traditional frameworks hardly provide any sound solution. However, light is thrown on the problem when we recall how quickly we copied our classmates’ work at school, intending to learn the parts that might be included in an exam paper. Given a statistics question and its model answer, we are able to solve a similar one by following each step of the standard solution, such as getting the sum first, then the mean and the deviation, and so on. If an interruption occurs while solving the problem, any of our colleagues is able to continue the calculation at the exact point where we stopped. When the problem has a large amount of data and can only be finished in a week, the time of interruption will be sunset: by forwarding our work to a colleague in a different time zone, he carries on the work, and this process repeats until the problem is finished (a small code sketch of this hand-over idea is given at the end of this section). The above demonstrates that the success of around-the-clock development depends highly on what type of application we build. An application that is too innovative implies that a similar system cannot be found for the development sites to use as a reference. In this case, our statistics example can be interpreted as follows: the model answer corresponds to a generic system for a particular application, such as a database or a web system, and the similar assignment is a new IT project of the same type as that generic system. The bad news is that the approach cannot cope with entirely new types of project. Fortunately, a large number of information projects nowadays are related to commercial database applications and web applications. The problem of managing inexperienced programmers has received relatively little attention in the past (Lui & Chan, 2000). Most software development projects are design-intensive (Wang & King, 2000), and hence an inexperienced person or fresh graduate might well do things improperly or even harmfully. Many project leaders have been involved in situations where a developer’s mistake created extra work for other team members. While these may be problems of an individual’s ability to control his work and debug his code, they can turn into a terrible managerial problem, or even office politics, when the person tries to hide his own mistake through irresponsible behavior. No manager is surprised about this, as it has been a well-known phenomenon in many organisations. Superficially, the answer to the scarcity of professionals is followed by the riddle of managing inexperienced programmers. Assume that we are involved in a software project to develop a real-time intelligent control system or an image recognition system. We might think of how complex the internal workings of such systems can be. In our daily life, however, most
software projects are related to the development of applications for a company’s database system or for popular web applications, even though there are many products available in the market. As for a commercial database, its heart is data manipulation such as record insertion or deletion. Intuitively, a small-sized database could even be substituted by many linked spreadsheets; database applications are therefore no longer high technology to us, but they are still in high demand in the market. We believe that the inexperienced team and the global team are a solution for these kinds of projects: business database applications and web applications. In our statistics example, with reference to the standard answer, many students who have little knowledge of mathematics in that area could complete it by themselves. The statistics example, although trivial, describes in the abstract how a common framework can be applied to both the global team and the inexperienced team. To overcome the problems associated with managing the global team and the inexperienced team, we propose an innovative software implementation paradigm, plagiarism-based programming (Lui & Chan, 2001). More precisely, it is “old concept, new use.” Plagiarism can be both a fast road to learning and a way to reduce the workload of re-production. The two key steps of plagiarism-based programming for software development are (1) developing representative pieces of source code as originals, and (2) mimicking those originals through patterns and regulations. We discovered that plagiarism-based programming sheds light on software development by an inexperienced team and by a global or multi-site team. It is interesting to point out our experience with a virtual software team running a number of small projects, such as Internet administration and home networking, based on plagiarism-based programming. A recent study has introduced many definitions and interpretations of the term virtual software team. Some would suggest that teams are virtual when producing work deliverables across different locations, at differing work cycles and across cultures (Gray & Igbaria, 1996; Palmer & Speier, 1998); however, this overlaps with the global software team. We prefer to define the virtual team as one in which part of the team members are unknown and unpaid but intelligently and temporarily contribute to projects. This interlude will be taken up briefly later, for it could open up an exciting vista of IT development and management in the future.
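The statistics hand-over analogy used above can be made concrete with a small sketch (not from the original chapter): the “model answer” is a fixed sequence of steps, and each site runs as many steps as its working day allows before forwarding the saved state to the next site. The step names and the JSON hand-over format are illustrative assumptions.

```python
import json

# The "model answer": a fixed sequence of steps that anyone can follow in order.
def step_sum(state):
    state["sum"] = sum(state["data"])

def step_mean(state):
    state["mean"] = state["sum"] / len(state["data"])

def step_deviation(state):
    n = len(state["data"])
    state["deviation"] = (sum((x - state["mean"]) ** 2 for x in state["data"]) / n) ** 0.5

STEPS = [step_sum, step_mean, step_deviation]

def work_shift(handover_json, steps_this_shift):
    """Resume at the recorded step, do some work, then hand the state on again."""
    state = json.loads(handover_json)
    start = state["next_step"]
    for i in range(start, min(start + steps_this_shift, len(STEPS))):
        STEPS[i](state)
        state["next_step"] = i + 1
    return json.dumps(state)

handover = json.dumps({"data": [4, 8, 15, 16, 23, 42], "next_step": 0})
handover = work_shift(handover, 2)   # site A works until sunset: sum and mean
handover = work_shift(handover, 2)   # site B continues at the exact point A stopped
print(json.loads(handover))
```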
CHALLENGES
At a glance, an inexperienced development team and a global software team conjure up two totally different pictures in Knowledge and Information Technology Management. However, some of the issues involved can be dealt with by the same common solution. We start with an examination of the commonalities between them, shown in Table 2. Their lexical meanings alone might not reveal the relationship between the two. From point one of Table 2, it is hard to see any connection between high personnel turnover and around-the-clock development, much as it is hard to see that a whale whirling in the sea and a bat flying in the dark both belong to the mammal group.
Here we explain the roadmap of our findings. First, the mechanism of managing the inexperienced team was studied. Later, we discovered that our solution was applicable to managing the global team. In short, the two share commonalities on the solution side rather than on the problem side. We now elaborate on each item in Table 2.

Table 2: Characteristics of software development teams: an inexperienced team and a global team
1. Inexperienced software team in a less-developed region: high personnel turnover. Global development team involving multiple sites: around-the-clock development.
2. Inexperienced team: weak IT knowledge and lack of IT project experience. Global team: varied experience and knowledge at each site.
3. Inexperienced team: deliverables being less affected by the capability and personality of members. Global team: deliverables being less affected by the factors of cultural values, mentality and talent.

As mentioned, programmers are mostly inexperienced in the less-developed regions within the country. As soon as they gain experience and receive some training, many of them will look for a new job in the more developed cities of the same country or in other countries. As a result, the personnel turnover rate is high. The situation is aggravated by the fact that many of those who tender their resignation prefer to leave immediately in order to maximize their own personal benefits. Obviously, handing over work is very difficult, if not impossible. A phenomenon that shares the same characteristics as a rapid turnover rate arises when a global team attempts around-the-clock software development. Here, a team that is to follow up on the work done by another team has to understand the code without being able to communicate with the other team, as those team members are asleep on the other side of the globe. The basic challenge of around-the-clock working is the time needed to pick up the task delivered electronically from another site, plus the remaining time for working on it and then relaying it to another site at sunset, as depicted in Figure 4. There are two unconventional issues. First, the sub-process at a site must not be interrupted or stopped; otherwise the whole process, and the progress, will be suspended. In addition, the sum of the working hours spent following up and continuing the work is just one day. Around-the-clock software development is thus similar to the problems associated with a high personnel turnover rate: neither guarantees that the expected outcome is achievable for sure, and both strongly require a very quick job hand-over without face-to-face, lengthy explanations from the previous developers. We will
discuss how plagiarism-based programming can be used to cope with sudden hand-overs in the next section. Now we look at the second point. To develop a CIS database application, a software team may encounter many different kinds of technical problems that require different skill sets, such as inserting records into a database, deleting records from a database, updating records, control of data integrity, control of transactions and the like. In order to do the programming, a software team must basically be equipped with some minimum expertise that allows it to complete part, if not all, of the programming job; developers below that level can do nothing by themselves. Figure 5 depicts the idea. Our goal is to pull the line of minimum expertise down to a lower position. But how? Let us look at an example. Suppose you have a group of people who are able to count but do not understand addition. If we would like them to do addition without a calculator, the best possible way seems to be to teach them the calculation, which seems to be the minimum expertise for this problem. Still, the learning curve may be long. (If they are your employees, your boss cannot help wondering why you really hired those people and turned the office into a learning center; subsequently, you cannot help worrying about the full support that your boss had committed to you before.) Another approach to getting the same work done is a mechanical-style method: ask them to follow a predefined mechanism for counting marbles. For 4+3, the rule may be as follows: (Step 1) count four marbles and put them aside; (Step 2) count three marbles and put them aside, repeating Step 1; (Step 3) mix the piles and count all the marbles.
Figure 4: Anatomy of the process of around-the-clock development
Figure 5: Minimum expertise for programmer
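The marble rule above can be written down as a purely mechanical procedure. The sketch below (an illustration added here, not part of the original chapter) encodes the three steps literally; the point is that every line follows a stated rule and requires no understanding of addition.

```python
def add_by_marbles(a, b):
    pile = []
    for _ in range(a):       # Step 1: count a marbles and put them aside
        pile.append("marble")
    for _ in range(b):       # Step 2: count b marbles and put them aside, as in Step 1
        pile.append("marble")
    total = 0                # Step 3: mix the piles and count all the marbles
    for _ in pile:
        total += 1
    return total

print(add_by_marbles(4, 3))  # 7, obtained without ever "understanding" addition
```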
The minimum expertise is now transformed into counting marbles and putting the counted marbles aside. Obviously, to learn to add numbers in our heads, we must first understand counting; counting marbles, by contrast, is simply an act of following rules. In short, the minimum expertise is pulled down to some degree (Figure 6), although the approach might not appear to be intelligent. In fact, we found that management is delighted by the approach, as employees are able to start working and get the work done in a predictable way and time.
Figure 6: Pulling down minimum expertise
The problem of different technical skills among team members parallels the situation of different knowledge levels among the multiple sites of a global software team (Figure 7). In this connection, we would like to apply patterns and/or regulations covering the knowledge of programming, testing and support, and to implement them at all the sites. Such a mechanism will be addressed in the next section. A framework for point two also fits the management of around-the-clock development.
Figure 7: Minimum expertise for development center
The idea of the framework behind points one and two in Table 2 is easy to follow. Cognitively, easy-to-do and easy-to-do-by-steps (i.e., easy-to-follow) are two different concepts. The first tends to be a subjective judgment of the implementers; the second is more objective. For instance, coding a factorial program by recursion is easier for people who are accustomed to doing it that way. As for easy-to-follow, a do-loop can be more straightforwardly expressed in steps, a style everyone should already be familiar with. Interestingly, for factorial the best “easy-to-follow” form is if-then-else. As shown below, a seven-level if-then-else program is inflexible, since it only covers 0 ≤ n ≤ 7, but weaker programmers should be able to write it with some guidance. The pseudo-code below is so simple that even a pianist who hates playing around with logic could follow it. Interestingly enough, he would also be able to enhance the program for 0 ≤ n ≤ 20 or to revise it to compute 3^n.

f(n) defined as
  if n=0 then answer=1
  if n=1 then answer=1       /* 1 */
  if n=2 then answer=2       /* 1 x 2 */
  if n=3 then answer=6       /* 1 x 2 x 3 */
  ...
  if n=7 then answer=5040    /* 1 x 2 x 3 x 4 x 5 x 6 x 7 */
  if n>=8 then print "Out of range! n must be between 0 and 7."

The final point we discuss concerns the deliverables from a team. In the short term, the deliverable has to meet the requirements and do so within the projected timeframe; for around-the-clock development, a partial deliverable should be digestible by another team within a day, and there is no buying time. In the longer term, the deliverable is related to implementation and project maintenance. The quality of most IT products should never be judged by their look and packaging. Although a user-friendly interface attracts and impresses people at the beginning, positive comments always come from functionality, fault tolerance, performance and on-going support, all of which correlate directly with the internal design and coding. This intangible product, like other intellectual property, connects to human and social values. Recent research has reported that personal and cultural values have implications for work (Schwartz, 1999). We believe the implications could be even greater in information technology, as it requires the mastering of skills for design-intensive work and rapid learning of the new things that come out every day. In software development, the real meaning of a smart developer’s code might lie between the lines, as he exercises some tricky techniques, whereas a dull developer might keep re-applying useful fragments of source in an ad hoc manner without bothering to build a set of routines. The expectation of a deliverable must be well projected. By common consent, the software deliverable is affected by personal values in the inexperienced team and by cultural values in the global team. The scope is too wide to understand both sets of values fully. More unfortunately, to study groups such as Israelis and Palestinians, or black and white (particularly in South Africa), in connection with their intelligence in software development and management might even arouse racial anger. For instance, who would dare to study whether the color of skin relates to programming capability and coding speed, or to study the relationships between racism in a global software team and age discrimination in an inexperienced software team? If we take three groups, Japanese, Roman and African, and ask each to build a house, they will produce three types of building, and it is likely we would be able to identify which architecture was built by which group: the culture goes into the deliverables. But when you provide a fine miniature as a blueprint, they will produce three buildings in the same style. If we further control their methods and daily working hours, the buildings could be completed in almost the same time. To
deal with human and social problems, we had better apply a well-defined paradigm to confine the know-how of the work. Such a paradigm is not easily affected by personal characteristics and cultural factors, and it focuses on the process and guidance of the work to be done rather than on people’s attitudes and interpersonal concerns. In all of the above we have pointed to an easy-to-follow, referential model for replicating similar work: given a model of the work, we are able to guide people to produce similar deliverables based on that model. The primary constraint is that, for some advanced applications, we are unable to obtain sample subsystems. Thus our method is best applied to business database applications, web applications and the like.
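To illustrate the easy-to-do versus easy-to-follow distinction drawn above with the factorial example, the following sketch (added here for illustration, not part of the original chapter) contrasts the recursive form, natural for an experienced developer, with a step-by-step form that an inexperienced developer can follow and extend mechanically.

```python
# Easy-to-do (for someone already comfortable with recursion):
def factorial_recursive(n):
    return 1 if n == 0 else n * factorial_recursive(n - 1)

# Easy-to-follow (every step is explicit; extending the range or switching to
# another running product, such as powers of 3, is a mechanical change):
def factorial_steps(n):
    answer = 1
    for i in range(1, n + 1):   # step i: multiply the running answer by i
        answer = answer * i
    return answer

assert factorial_recursive(7) == factorial_steps(7) == 5040
```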
PLAGIARISM-BASED PROGRAMMING
Plagiarism is an act that is considered unacceptable by many. Research on plagiarism in programming has long concentrated on detecting it (Jankowitz, 1988; Whale, 1990; Spafford & Weeber, 1993), rather than on studying plagiarism as a quick way of transferring knowledge or re-producing work, especially in a cost-conscious sense and as a means of helping weak programmers. Here we describe a programming paradigm based on it, and we encourage inexperienced programming teams to adopt it. This paradigm, which we call PbP (Plagiarism-based Programming), can be interpreted as an approach to writing software programs so as to ensure that these programs, or parts of them, can easily be copied or modified for re-use. While computer programs can be very complicated and their logic difficult to understand, programs written according to the PbP must lend themselves to plagiarism without programmers having to spend a lot of time understanding them. The code that needs to be modified, the types of changes that need to be made, and so on, have to be easily and quickly identifiable if programs are to be written according to the PbP principles and methodologies. Under the PbP, there are those who produce code to be plagiarized and those who plagiarize. The former adopt the PbP because of the need to supervise an inexperienced software team; the latter are involved because they are either not capable of producing similar work entirely on their own or they intend to minimize their workload. By adopting the PbP, both can save development costs. This section briefly outlines the PbP, which we implemented and tested in a less-developed region in China (Lui & Chan, 2000). We also used it for the management of several remote development sites. Alexander’s pattern theory (Alexander, 1964) can be perceived as a way to arrange workspaces so that new employees can learn by being in proximity to their mentors (i.e., “Master and Apprentices”). Our proposal is developed with the same objective in mind, but in addition, we would like to make sure that the development of patterns is easy to follow. Figure 8 illustrates the principle it adopts.
Figure 8: Illustration of programming by plagiarism
Path 1 indicates that an original program is written to handle one particular task. By modifying part of the source, such
as converting a for-loop into a while-loop, the program can be used for the same task (i.e., path 2). Such modification is, of course, quite common among computing students who try to complete a programming assignment at the last minute by plagiarism. Many of us may even remember plagiarizing other people’s programs along these paths. At times, one may even plagiarize something to achieve something other than what it was originally intended for. It is thus possible that a program modified from the original ends up accomplishing a completely different task (i.e., path 3). A piece of code revised on the basis of the original program may therefore perform a similar yet different task. For the PbP, our concern is how to write (or find) the source or design patterns so that they can be used as originals to facilitate plagiarism. We introduced the MDC (Managing Design-Coding) for the design of patterns (Lui & Chan, 2000). A generic, application-independent architecture can be built as a set of patterns. An extension of the MDC is to take into consideration the concept of easy-to-follow for plagiarism. To do so, we highlight each pattern in three different colors. Given a piece of code, we need to identify three structures in order to reuse it and produce similar works: (1) neither read nor write, (2) read but no write, and (3) read and write. These are visualized by coloring for quick lookup: blue represents no change; green indicates that the code concerned requires reading; and red marks the part of the code that requires modification. A piece of code is therefore like a series of colors of the form [B G R B G B R B … B]. The coloring makes plagiarism possible in real, competitive business environments. The purpose is to make sure that copiers know where to pay attention and where changes need to be made. Note that, for the object-oriented approach or the component-based methodology, the internals of an object are blue and the interface is red. Inexperienced programmers, for example, read the green part of a source and read/write the red part; the blue part can be copied exactly. Many
technical problems covered are transparent to them. For a global software team to solve the problems stated in the previous section, what we need to do is first send out a set of templates to each site. One site can start the work and, once its time is up, another site can continue the work with less difficulty, as they are all doing a kind of plagiarism that is confined by the blue part. We now exemplify the PbP by coding a delete operation in a database. Readers who do not know SQL need not be afraid; the intention is to demonstrate how quickly a reader can see what to care about, and what to ignore, in a piece of unfamiliar code. The following code fragment performs a row deletion in a database. The code includes logical deletion (updating a flag column), transaction rollback and a mechanism for error checking. For plagiarism, we are concerned with three pieces of semantic information: (1) the name of the table from which a row is to be deleted, (2) the condition under which a row is to be deleted, and (3) the number of rows the operation affects (by default, only one row of deletion is allowed). For example, when a programmer wants to delete a unique invoice INV1MAY01, all he needs to do is copy the following code and replace "tablename" with "INVOICE" and "col='condition'" with "INVOICE_NBR='INV1MAY01'":

update tablename set status = 'D' where col='condition'
if @@rowcount <> 1
   -- @@rowcount is the number of rows affected by this operation
   -- (by default only one row may be affected; if a different number of rows is expected,
   --  adjust this test, e.g., for exactly 2 rows use @@rowcount <> 2 and @@error <> 0)
begin
   raiserror 50000 'Error in update tablename set status'
   exec master..xp_logevent 50000
   rollback transaction
   return
end

Note: Bold text represents blue (neither read nor write), italic text represents green (read only, write occasionally), and normal text represents red (read and write). When the above is highlighted with colors, we can tell by visualisation alone how much of this piece of code has to be changed. A person should reuse (or plagiarize) the code as quickly as possible. Furthermore, the method should not be confused with education or training: it is not our concern whether the person has the motivation or the capability to understand the original code fully. When working, he reads the green part of the source and reads/writes the red part; the blue part is copied exactly. Many of the technical problems covered are transparent to him. Inexperienced programmers work with fewer difficulties as they are all performing plagiarism, and what they can do is confined by the blue part. Recently, we have studied the method in the application of the Virtual Software Team and briefly explain it here, as we believe the type of workflow in our virtual
software team will become significant in the future. The definition of the virtual software team is that part of the members are unknown or unpaid, but they are a key part of the project. For the PbP, some effort is needed to discover a model of work to serve as the original. We found that a few newsgroups can provide a number of source fragments for the original in the PbP. Here we share our latest experience of developing an email alert by SMS (short message service) using the Perl language. Figure 9 illustrates the process flow of the virtual software team. We first posted our requests to a Perl newsgroup, asking for samples for sending an SMS request over the Internet, samples for checking an email account in Perl, and so on (path 1). Normally, we would get a number of replies (path 2). Without any filtering, we forwarded those emails, organized by subject and accompanied by our requirements, to two different development centers (path 3). After modification, a number of code fragments or functions were developed with coloring. We collected them (path 4) and integrated all the fragments into a system using the hints given by the coloring. The end product was then delivered (path 5). We have enjoyed success with a number of small-sized projects developed in this style.
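As a hedged illustration of the blue/green/red idea described above (added here, not part of the original chapter), the sketch below marks a logical-delete template with comments instead of colors, using Python's standard sqlite3 module rather than the chapter's Transact-SQL; the database file, table and column names are hypothetical.

```python
import sqlite3

# [BLUE]  lines: copy exactly (transaction and error scaffolding).
# [GREEN] lines: read to understand (the logical-delete convention: status flag 'D').
# [RED]   lines: read and change for each new task (table, key column, key value).

def logical_delete(db_path, table, key_col, key_val):           # [RED] arguments per task
    conn = sqlite3.connect(db_path)                              # [BLUE]
    try:
        cur = conn.execute(                                       # [GREEN] update a flag, no physical delete
            f"UPDATE {table} SET status = 'D' WHERE {key_col} = ?",  # [RED]
            (key_val,),
        )
        if cur.rowcount != 1:                                      # [BLUE] exactly one row must be affected
            raise RuntimeError(f"{cur.rowcount} rows matched; rolling back")
        conn.commit()                                              # [BLUE]
    except Exception:
        conn.rollback()                                            # [BLUE]
        raise
    finally:
        conn.close()                                               # [BLUE]

# A copier "deletes" invoice INV1MAY01 by changing only the [RED] arguments:
# logical_delete("erp.db", "INVOICE", "INVOICE_NBR", "INV1MAY01")
```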
Figure 9: Workflow of virtual software team
SUMMARY
We have addressed some interesting issues of the inexperienced software team and the global software team in depth. Both require new managerial, technical and social approaches at large. We also discussed the virtual software team for interest. In the future, we expect that the economic value gained from building such teams will become obvious when IT and our lives are fully merged everywhere around the world. The demand for IT professionals can be predicted to become an IT crisis if we simply look at the rapid increase in the number of web sites. As a consequence, we can guess that the number of job vacancies will be huge. How many people are needed to develop and maintain them? There are two things we can do about it right away. One is resource reallocation, either virtually (i.e., the global software team) or physically. The other is to hire less-
qualified people who plagiarize the work of qualified professionals. One might argue that this is a trick and far from a long-term solution. Well, there is no long-term thing in IT, as it continues to advance almost every day. Remember: by the time you learn all the answers, IT has already changed all the questions. Many social conflicts and managerial issues are external or ungovernable events for us, so some of them cannot be avoided or stopped. A high turnover rate in IT in less-developed areas is a typical issue over which we cannot have full control. On this account, Plagiarism-based Programming takes an active approach. In addition, it also helps avoid problems arising from cultural characteristics, such as pride in one’s own writing style, that can make remote management difficult. Technically, Plagiarism-based Programming works well only on the predictable behaviors of mature applications and in software development involving well-established technology (e.g., database and web applications). Although these kinds of systems do not belong to the cutting edge of computing, they are still in high demand nowadays. The proposed approach does not require that much emphasis be put on people training, personal drive, work challenges, continuing education and cultural respect. When the demand for programmers is high everywhere in the world, many applications can be written with similar technical skills but different requirements. The idea of Plagiarism-based Programming will definitely shed light on many areas in Knowledge and Information Management.
REFERENCES
Alexander, C. (1964). Notes on the Synthesis of Form. Cambridge, MA: Harvard University Press.
Alexander, C. (1979). The Timeless Way of Building. New York: Oxford University Press.
Amrine, H. T., Ritchey, J. A., Moodie, C. L. and Kmec, J. F. (1993). Manufacturing Organization and Management. Upper Saddle River, NJ: Prentice Hall.
Booch, G. (1994). Object-Oriented Analysis and Design, 4. Reading, MA: Addison Wesley.
Carmel, E. (1999). Global Software Teams. Upper Saddle River, NJ: Prentice Hall.
Coplien, J. O. (1999). The origins of pattern theory: The future of the theory and the generation of a living world. IEEE Software, September/October.
Gray, P. and Igbaria, M. (1996). The virtual society. ORMS Today, December, 44-48.
Information Technology Association of America. (2000). Major new study finds enormous demand for IT workers: Research pinpoints hot jobs and skills needed, offers insights on employer preferred training approaches. Available on the World Wide Web at: http://www.itaa.org/news/pr/PressRelease.cfm?ReleaseID=955379119.
Jankowitz, H. T. (1988). Detecting plagiarism in student Pascal programs. The Computer Journal, 31, 1-8.
Lui, K. M. and Chan, K. C. C. (2000). Managing inexperienced programmers by managing design-coding. Proceedings of European Software Process Improvement 2000, November.
Lui, K. M. and Chan, K. C. C. (2000). Managing design-coding for software development in China. Proceedings of Software Engineering and Applications, November.
Lui, K. M. and Chan, K. C. C. (2001). PbP: A programming paradigm for inexperienced software teams. Proceedings of European Software Process Improvement 2001, October.
Meyer, B. (2001). Software engineering in the academy. IEEE Computer, 28-35.
Otsuka, K. (2001). Book reviews: Growth and development from an evolutionary perspective. Journal of Development Economics, 65(June), 237-241.
Palmer, J. W. and Speier, C. (1998). Teams: Virtualness and media choice. Proceedings of HICSS, 4.
Schwartz, S. H. (1999). A theory of cultural values and some implications for work. Applied Psychology: An International Review, 48(1), 23-47.
Spafford, E. H. and Weeber, S. A. (1993). Software forensics: Can we track code to its authors? Computers & Security, (12), 585-595.
Wang, Y. and King, G. (2000). Software Engineering Processes–Principles and Applications. New York: CRC Press.
Whale, G. (1990). Software metrics and plagiarism detection. Journal of Systems and Software, 13, 131-138.
Wilson, L. T. and Snyder, C. A. (1999). Knowledge management and IT: How are they related? IEEE IT Pro, (March), 73-74.
Chapter XVIII
The Knowledge Edge: Knowledge Management and Social Learning in Military Settings Leoni Warne, Katerina Agostino and Irena Ali Defence Science and Technology Organisation, Australia Celina Pascoe University of Canberra, Australia Derek Bopping Defence Science and Technology Organisation, Australia
ABSTRACT
This chapter reports on the methodologies used and the findings of the research done by the Enterprise Social Learning Architectures (ESLA) Task into learning processes occurring in two diverse environments, tactical and strategic, within the Australian Defence Organisation (ADO). The research focused on identifying factors that enable and motivate social learning. More specifically, the chapter describes a number of environmental and cultural factors, processes and strategies that, when positively applied, facilitate social learning and knowledge management within organisations. These factors fall into two categories. The first comprises motivators–characteristics of the organisational environment and culture that provide a context and motivation for individuals to learn. The second comprises enablers–processes and strategies that, if present in an enterprise, can facilitate social learning. The chapter concludes with a set of recommendations that could be implemented by managers who seek to enhance social learning, knowledge management and knowledge sharing in their organisations.
INTRODUCTION
Imagine, for a moment, asking a group of practitioners from various organisations and disciplines what they understand to be effective knowledge management. It is likely that, before too long, such a survey would turn up a considerable array of loosely accepted formalisms, personal experiences, and intuitions based on ‘gut feeling.’ Consensus in such a group (regardless of whether it is desirable or not) would indeed be a surprising finding. Not because the true meaning of “knowledge management” resides in the minds of only a privileged few, but because the essence of what it means to ‘manage knowledge’ is difficult to ascertain, and hence comes to mean different things to different people. In the Australian Defence Organisation (ADO), the military Executive often refer to the importance of maintaining the ‘Knowledge Edge.’ As with ‘knowledge management,’ a shared understanding of the Knowledge Edge is difficult to formulate, even though it is used frequently in Defence Executive publications. What can be said, though, is that the concept of the Knowledge Edge has little to do with how ADO personnel perceive their day-to-day work, how they acquire knowledge and how they share it with others. Our use of the term acknowledges these shortcomings, rather than signalling their benign acceptance. We do know, however, that knowledge exists in the minds of individuals and is generated and shaped through interaction with others. In an organisational setting, knowledge management must, at the very least, be about how knowledge is acquired, constructed, transferred, and otherwise shared with other members of the organisation, in a way that seeks to achieve the organisation’s objectives. Put another way, knowledge management seeks to harness the power of individuals by supporting them with information technologies and other tools, with the broad aim of enhancing the learning capability of individuals, groups, and in turn, organisations. The research reported in this chapter provides a deeper understanding of the ways in which day-to-day work activities have a direct impact on an organisation’s ability to maintain the knowledge advantage. With such an understanding, workers will be better able to see and therefore modify their day-to-day activities to help their organisations achieve and maintain the competitive edge. Researchers interested in knowledge management are increasingly employing qualitative methods, specifically ethnography, to understand the interplay of social, organisational, and information systems (Myers, 1999). This chapter reports on the use of such a methodology by the Enterprise Social Learning Architectures (ESLA) team of the Defence Science and Technology Organisation in Australia. Broadly speaking, the ESLA task is a three-year research study investigating social learning within the Australian Defence Organisation (ADO). Social learning is defined as learning occurring within a group, an organisation, or any cultural cluster and includes the procedures by which knowledge and practice are transmitted across posting cycles, across different work situations, and across time; and the procedures that facilitate ‘generative learning,’ that is, learning that enhances the enterprise’s ability to adjust to dynamic and unexpected situations and to react creatively to them.
The immediate aim of the research is to understand the issues inherent in building sustainable and adaptive learning organisations. A long-term objective, however, is to develop architectures capable of supporting the development of information systems to guide and enhance organisational learning and thereby facilitate knowledge management in the ADO. The ESLA study began in June 1998 and has acquired data from several different ADO settings. The first setting was a ‘tactical’ single-service Headquarters, where a pilot study was conducted to determine the feasibility of the project’s aim and methods. The research team returned to this Headquarters in April and May 2001 to confirm that the findings remained stable over time. The second setting was a joint ‘strategic’ Headquarters within the main Australian Defence Headquarters. Currently, further studies are underway in a single-service strategic headquarters and a single-service ‘operational’ headquarters. Results show a tightly coupled relationship between knowledge management and effective social learning. The team has identified a number of environmental or cultural factors, processes, and strategies that, when positively applied, facilitate social learning and knowledge management within the ADO. In this context, the environmental and cultural factors are termed “motivators” and the processes and strategies are termed “enablers”; however, some of these factors and processes can also act as inhibitors of social learning when they are not thoughtfully applied. Although some of these issues are specific to a military organisation, most are equally applicable to any organisation that is attempting to improve its learning capability and knowledge management. All things considered, the results strongly suggest that the interplay between human, social, and organisational issues within an organisation must be considered first if social learning and knowledge management are to be facilitated effectively. Technology alone, however well designed it may be, is unlikely to produce effective knowledge management solutions in the absence of appropriate attention to such issues. This chapter provides a brief overview of the study methodology and of the findings to date, including an overview of the motivators and enablers, and concludes with a list of practical recommendations for managers.
EVOLUTION OF THE STUDY METHODOLOGY
Given the exploratory nature of the research, as well as the importance of the context and the need to understand the social process of learning, ethnography was the appropriate methodological tool to adopt. Prior to the commencement of the research study, the original team members were thoroughly briefed on the principles and ethics of ethnographic research by Gitte Jordan (then from the Institute for Research on Learning, associated with Xerox PARC), who helped to popularise the use of ethnography in industrial settings. Generally speaking, ethnographers ‘immerse’ themselves in the situation for sufficient time so as to gradually see and understand the key factors that influence the setting being studied. Ethnography is most useful when “one needs to understand complex functioning systems from a holistic perspective” (Jordan, 1993), as it deals with real-world practices in real-world situations. The main body of techniques that fit these criteria falls under the domain of ethnographic approaches (Harvey & Myers, 1995). Ethnography is often used, for example, to provide information systems researchers with rich insights into the human, social, and organisational aspects of information system development and implementation. The primary form of ethnography used here was ‘field work,’ which entailed observing the work taking place in different settings and using directed questioning to clarify issues. The individual team members embedded code terms, or textual labels, within the text of field notes and collected documents. The embedding of code terms serves several purposes. Code terms help the researcher to develop and map an analytical structure onto the data. Coding also allows the raw field data to be gathered into more meaningful chunks so that related material can be retrieved together. The overall aim is the same as that of the ethnography it is intended to support: to discover the underlying patterns and regularities in the data (Miles & Huberman, 1994). As the study proceeded, the different work environments provided a motivation to expand the research methods used. For instance, in the more recent settings, surveys were constructed based on the observations, and the team undertook extensive unstructured interviews with a stratified sample of personnel. Therefore, the research study is subject to triangulation by data source (different times and places) and by method (observations, interviews, and, in two settings, a quantitative survey).
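Purely as an illustration of the coding idea described above (the chapter does not describe the team’s actual tooling, so the tag format, file names, and helper function below are assumptions), a minimal sketch of gathering field-note passages by embedded code terms might look like this:

```python
import re
from collections import defaultdict
from pathlib import Path

# Assumed convention for this sketch: analysts embed tags such as
# [CODE:morale] or [CODE:goal_alignment] directly in plain-text field notes.
TAG = re.compile(r"\[CODE:(?P<label>[a-z_]+)\]")

def group_by_code(note_files):
    """Collect field-note paragraphs under each embedded code term."""
    chunks = defaultdict(list)
    for path in note_files:
        text = Path(path).read_text(encoding="utf-8")
        for paragraph in text.split("\n\n"):
            for match in TAG.finditer(paragraph):
                chunks[match.group("label")].append((path, paragraph.strip()))
    return chunks

if __name__ == "__main__":
    # Hypothetical note files; in practice these would be field-note transcripts.
    grouped = group_by_code(["site1_notes.txt", "site2_notes.txt"])
    for label, passages in grouped.items():
        print(f"{label}: {len(passages)} passage(s)")
```

Retrieving all passages that share a label is what lets an analyst view the “meaningful chunks” side by side, which is the point of the coding step.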
FINDINGS OF THE STUDY
The findings reported in this chapter represent the collective research results of all the different settings studied to date and focus on those likely to be relevant to all organisations, not just military ones. These findings are multilayered and have allowed the research team to pinpoint a number of environmental and cultural issues, processes, and strategies that facilitate effective social learning and knowledge management. These factors fall into two categories. The first category refers to characteristics in the environment and organisational culture that provide a context in which personnel are motivated to learn, construct and share knowledge, and excel in their work. These factors are referred to as Motivators. The second category is referred to as Enablers and represents processes and strategies that, if present in an enterprise, can facilitate social learning. Both of these categories are discussed under a number of social learning constructs identified by the study. An outline of each of these factors is given in the next section. However, these two sets of factors cannot function in isolation, and both are required to optimise effective social learning and knowledge management in the settings studied. While technology can provide enabling tools for these enablers and motivators, it cannot, in itself, provide an environment that will support and evolve effective knowledge management. The enablers and motivators must first be encultured into, or engendered within, the organisation.
MOTIVATING AND ENABLING FACTORS FOR EFFECTIVE SOCIAL LEARNING
The enablers and motivators discussed in this chapter were identified from the data gathered so far and are, therefore, not necessarily exhaustive. For ease of discussion, the identified enablers and motivators have been assigned to seven categories: Common Identity, Problem Solving, Team Building, Access to Information, Development of Individual Expertise, Communication, and, finally, Induction and Enculturation. These categories, or elements, are believed to be essential constructs for effective social learning. However, it is important to point out that several of the motivators and enablers (for example, ‘Workplace Design’ and ‘Inquiry and Reflection’) appear in more than one category because they play a role in supporting a number of social learning constructs, and all of the identified factors appear to be inextricably intertwined.
COMMON IDENTITY
The ESLA team identified a common identity as one of the important factors acting as a motivator for social learning to occur. A common identity implies a similar way of describing and making sense of the world, of determining what is significant and important, and of how to use resources in the environment (Jordan, 1993). In turn, having this common view of the world enables effective communication and the development of shared understandings, as well as acting to expedite social learning processes. The data gathered so far suggest that systems thinking is tightly coupled with effective social learning and contributes to the development of common identity. Systems thinking, according to Senge (1992), requires a shift of mind: from seeing ourselves as separate to seeing ourselves as connected to, and part of, the world (or part of any other system, such as an organisation or organisational sub-unit). The ESLA team observed that, regardless of the study setting, the presence of this type of thinking was accompanied by generally higher levels of interaction between staff as well as sharing of information. For the most part, the research team found that this common identity is influenced by issues surrounding goal alignment, cultural identity, gendered identity, language, morale, and workplace design, and that all are integral to effective social learning. While a common identity and its sub-categories are discussed as independent features in this chapter, in reality they are not mutually exclusive, since they significantly impact on one another.
Goal Alignment Motivates and Enables
Goal alignment is an important value within the work culture, particularly in terms of social learning. In the tactical environment studied, the team observed a strong and consistent degree of goal alignment within the aircrew community. In contrast, at a strategic headquarters facility, the research team found nothing uniform about the ways in which goal alignment takes place; cohesiveness and work relationships varied in accordance with social positioning, influenced by hierarchy, civilian as opposed to military discourses, and rank. Cohesive work arrangements were very much in evidence at the small-team level, but less so as the teams expanded up the hierarchy of the organisation. During the initial reorganisation which created a particular strategic headquarters, for example, observational data indicated a disparity between the shared vision and goals of the Executive and those of the staff at lower ranks, who seemed to feel excluded from the process. Within this workplace, a deep cultural divide was seen between those at the Deputy Director level (lower management level) and below, and those of higher ranks. This was validated by survey results showing that staff at Deputy Director level and below held corporate views contrary to those expressed by staff at the Director level and above. Doney, Cannon et al. (1998) discuss the relationship between goal alignment and group cohesiveness, claiming that the extent of group cohesiveness relies on the extent to which a team’s goals are clear and accepted, and also on the degree to which all members adopt team behaviours. Such behaviours include sharing knowledge, networking, and teamwork, and all require a certain level of trust; therefore, trust itself becomes a matter that underpins all working relationships and goal alignment generally. The ESLA team found that trust, rather than goal alignment, becomes more important as working environments become riskier and more uncertain.
Cultural Identity Motivates
The term cultural identity refers to a member’s sense of self in relation to the specific ‘tribe’ and ‘tradition’ to which they belong and how this distinctiveness applies in their workplace. In the Australian military, this may mean Army, Air Force, or Navy cultures; for civilians, their cultural identity may be as accountants, computing professionals, or civil servants. Cultural identity is another important motivator for social learning because, like common identity, it impacts on the extent to which a member of staff feels that they are part of the system or alienated from it. There have been some significant cultural changes that have impacted on the sites researched. For instance, in the strategic headquarters environment, the ESLA team found that there appears to be a cultural shift away from the belief that only the ‘traditional warriors’ should hold senior positions within Defence, emerging in tandem with a shift in the skills that are valued. The clash of old and emerging values and worker identity is also reflected in more traditional areas, such as the clash between civilians and the military and in gender equity. Such clashes could also undermine team cohesiveness and individual commitment to common goals. As one informant indicated: “Cultural clashes between the various service groups and civilians can be problematic, particularly when there is a fine line here between duty to self–in relation to promotion prospects and following the requirements of your service–and duty to the business at hand–in relation to working on joint projects.” Moreover, the needs of families and concerns over domestic issues are becoming much more openly visible. Perhaps this is partly because partners of military personnel are becoming more vocal and demanding of their rights. The fact that senior members acknowledge that family needs are important demonstrates a shift in the overall understanding of the typical service member, who traditionally had to prioritise his work above other commitments. This trend towards valuing a more balanced life is also evident in civilian organisations. The clash of values that occurs as this cultural shift takes place threatens the extent to which staff feel that they are a part of the system and may result in higher levels of alienation, and thereby reduced common identity. The aircrew and headquarters communities at the tactical headquarters were quite homogeneous, in that they were staffed primarily by officers from the same service, with only a small number of civilians and reservists. The strategic headquarters environment does not have such a homogeneous staff profile.
Gendered Identity Motivates
Gendered identity relates specifically to one’s sense of self, which is imbued with the social, cultural, and historical constructions surrounding femininity and masculinity. Gendered identity, because of its relationship with common identity, was also seen to impact on social learning. Issues around male bonding and masculinity are central here. The pressure to conform to dominant notions of masculinity not only denies the feminine, but also rejects ‘other’ masculinities that lie outside of the prevailing interpretation of ‘maleness.’ Such ‘rejection’ can have an impact on social learning through its influence on individual staff members’ degree of common identity. Some women, for example, take up what have typically been deemed masculine traits. Watching them in action is powerful evidence of how so-called masculine traits are not masculine at all, but have merely been constructed in that way (Agostino, 1998). Interestingly, during the semi-structured interviews, a substantial number of informants, largely male, stated their belief that women generally have different skills to men; networking, being attuned to body language, better non-verbal skills, and stronger focus were cited as examples.
A Common Language Motivates and Enables
Language is another factor fundamental to the overall social learning process. By reflecting the social and political relationships between various members, language can impact on common identity. At a strategic headquarters reorganisation workshop, for example, it was stated that the Defence Executive had chosen the term ‘staff’ rather than ‘division’ to refer to sections within this workplace because the latter is seen as connoting separation and division between sections within the Branch. Language is also important in terms of creating a shared understanding among workers and their relationship to the wider organisation. The importance of language was highlighted at a joint strategic headquarters induction, when one presenter said: “Words are bullets. Never, never use imprecise language (when appraising performance). It can mean a career hit for somebody...There are subtle differences in language use between the services. Words that may signify praise in the Navy or Air Force may be poison in Army.” A shared knowledge of the language specific to the organisation, and of overall administrative and work processes, can help to create a common identity for individual personnel and the organisation as a whole. It is a prerequisite for staff working together. Thus learning the specific work-related language is of central importance to broader social learning development, and is an important outcome of the enculturation process.
Good Morale Motivates
Morale has been a significant focus in the overall study because the research team found evidence of low morale being coupled with higher levels of alienation towards senior management. Such alienation has obvious implications for the broader understanding of a common identity and thus for social learning. Perceptions of low morale have frequently been coupled with comments about not understanding the motivation or agenda of more senior staff. The team also found that this lack of understanding not only affects morale, but also has an impact on trust, organisational cohesiveness, goal alignment, and common identity, and consequently, on opportunities and motivation for social learning. A comment from one interviewee exemplifies this: “When you’re sitting in a meeting and someone says something that you know would not be kosher with your chain, then you can stand up and say, No…you can’t do things that way because you know how the people think…You would know what your boss thinks, whether he would approve or disapprove of that particular activity, conflict, whatever. I think that’s important for you to have that kind of interaction to actually know what your peers, your chain [of command] thinks.” Other morale issues raised during the research include budget cuts for travel and training, loss of control over the work people do, lack of career path, job insecurity, gender inequality, slow promotion, and poor records management.
Workplace Design Enables
A particular problem within the joint service environment was that its staff were scattered among several sites within the city, a consequence of the reorganisation which created it. The dislocation had impacted on many of the social learning constructs. There was strong evidence of reduced morale, of problems in sharing information (because some sites were unable to access file servers and classified networks), and of the impact that the reorganisation had on individuals’ power bases. Workplace design and proximity also threaten common identity when staff are not working in the same location. One informant summed up the general feeling in this way: “[Building X] and us. We don’t see them. There is not any spirit that we are belonging to one branch. I have more to do with [a specific area] than anything else and I’ve made some good contacts in there...who I sit around. Like the people that I tend to sit more with have really helped me in showing me some of the ropes....” Many joint service personnel interviewed by the ESLA team stated that they enjoyed the spontaneity of face-to-face discussion on work-related issues as they crop up, and this became problematic for them when teams were not co-located. The converse also applied. Staff located in the smaller outposts, who often tended to be of junior ranks, often felt little identification with their organisation. Several ‘dislocated’ staff members expressed significant confusion about their cultural and common identity, in some cases identifying more with the workplace at which they were based than with their Branch within the joint service workplace. This has serious implications for the effectiveness of social learning in these areas.
PROBLEM SOLVING
For knowledge workers in any organisation, problem solving is a core activity. Importantly, the process of problem solving fosters knowledge generation and thus social learning. For instance, routine tasks often need to be done slightly differently in different circumstances, and doing so involves an element of problem solving that requires generative learning (Lave & Wenger, 1991). Both the tactical and strategic settings studied provided numerous examples of this. The gap between formal or routine procedures and the ways in which people actually do their work was highlighted many times.
Networking Enables Problem Solving
An individual’s network is one of their most important resources, as personal and social networks are an important means of acquiring, propagating, and sharing knowledge. The individuals who comprise the network can make available their own knowledge, expertise, and experience. In this way, the knowledge resources available to any one person, in their work and when problem solving, can be greatly increased. Staff at both settings generally did not hesitate to draw upon their networks. Personal networks can also play a pivotal role in the propagation of knowledge. A few members of staff in the settings studied were seen to be systematic in passing on relevant knowledge to colleagues, and many others told the research team that, time permitting, they would pass information on to colleagues. As Davenport and Prusak (1998) claim, when those who are in a position of ‘know-how’ share their expertise, they contribute to problem solving. Hence, personal networks were seen to function as channels supporting both ‘information pull’ and ‘information push.’ A good deal of effort can go into generating, maintaining, and obtaining value from these networks. The research team was told that, often, the primary benefit of attending a course was the new contacts that were made. One informant said: “One of the techniques that we use in the military,...the Commander’s notebook...which is in its general sense, a contact list of the people who work for you and the people you work with and actually your boss and things like the birthdays and the kids, any particular thing that you need to remind yourself about. It’s the sort of information that you might glean and it’s useful to remember.” Several times the research team was told that an officer’s personal list of professional contacts would be passed on to their replacement in a good handover.
Accurate Perceptions of the Organisation Enable
Individual and shared perceptions of the organisation, and of how it operates, provide an essential backdrop to problem solving within an organisational context. These perceptions may consist of deeply ingrained assumptions, generalisations, or even pictures or images that influence how people within an organisation understand their organisational world and how they should act within it (Senge, 1992). The importance of these perceptions cannot be stressed enough, because they directly influence the construction of the knowledge and understandings that individuals draw upon in their day-to-day activities. One general example is appreciating the ways in which an organisation’s formal rules and processes can be bent to achieve a desired outcome. This class of knowledge can empower people to solve problems by expanding the range of solutions which may be available, and by giving them confidence to improvise or innovate. Conversely, a lack of knowledge or incorrect perceptions will constrain the types of solutions that can be found.
Systemic Understanding Enables
Similarly, effective problem solving often requires a systemic understanding of organisational and inter-organisational issues. Achieving a systemic understanding requires a holistic view of an organisation and its inter-relationships, an understanding of the fabric of relationships and the likely effect of interrelated actions (Senge, 1992). Such systemic understandings are becoming more important for organisations as they position themselves in, and derive benefit from, a constantly changing environment. A number of staff members observed in the research effectively applied this systemic understanding of the workplace and its context to enhance work practices in tandem with social learning, but most did not. The military posting cycle and high staff turnover ensure a continuing need both to make this class of information readily available and to ensure that it is brought to the attention of all staff. Hence the need remains to develop, encourage, and nurture systemic understandings of the relevant parts of the organisation. The ESLA team was told on numerous occasions that a general awareness of where one’s job or project fits into the big picture is an important element in job satisfaction and hence morale. However, there was little effort on the part of management to provide this ‘big picture’ perspective, particularly in the strategic environment.
Inquiry and Reflection Enables and Motivates
Inquiry and reflection together are a powerful means of enhancing social learning and knowledge creation. Inquiries, or questions, can be triggered by problems or needs that require solutions or explanation. Reflection allows time for examination, contemplation, and often, resolution of the inquiries. To use a common metaphor, it is perhaps the best means of distinguishing between the forest and the trees of everyday working life. Inquiry and reflection involving more than one staff member provide an opportunity for both creating and sharing knowledge. Some of the processes at the tactical setting studied, such as the debrief held after each mission, were seen to include elements of inquiry and reflection. In contrast, and with very few exceptions, most of the staff interviewed at the strategic-level headquarters regretted that their high workloads meant they had insufficient time for inquiry and reflection at the workplace, either individually or as a group. Some pertinent comments are: “We spend so much time in the detail on the process that we’re in, that we rarely give ourselves the opportunity to think strategically about what we’re doing and to really determine the priorities of where our effort is worthwhile and where it isn’t. I think you need to do that both individually and at a group level again with the people that you work with.” “So reflection time at work…would be nice. At the moment it is a luxury because we’re all just so busy with doing–fighting the bush fires and trying to meet all the objectives.” In order to solve problems and innovate effectively, time for inquiry and reflection must be factored into the workplace. Staff are generally more motivated to do so if they know that inquiry and reflection are recognised as a valid and valuable use of their time.
TEAM BUILDING
Very few people work by themselves and achieve results by themselves. People who interact together and yet have different tasks and responsibilities therefore need to understand what each of them is trying to do, why they are doing it, how they are doing it, and what results to expect. In this relationship of interdependencies, communication and trust play vital roles (Drucker, 1999).
The Knowledge Edge 335
Good Leadership Enables and Motivates
In general, the calibre of leadership within the settings studied was admirable. The leaders and managers were innovative, and they motivated and developed their staff. They articulated their vision, their goals, and their preferred outcomes to their staff. The research team witnessed many meetings where leaders used inclusive language, were receptive to new ideas, and openly shared their thinking processes and imperatives. They empowered others, and mutual respect within teams was evident regardless of rank. The ESLA team spoke to many staff members who expressed their appreciation for the accessibility of their leaders, their non-judgemental approach, and their courage in admitting their shortcomings. Such qualities and such an atmosphere build trust and encourage collaboration and teamwork. The military leadership exhibited two other motivating dimensions: a willingness to demonstrate that staff are highly valued and a willingness to acknowledge expertise and knowledge regardless of rank. Another team-building issue that emerged during the study was that people appreciated informal ‘drop-ins’ by senior managers inquiring how they were doing. This ‘roving management,’ as it was referred to, was said to contribute to better team cohesion, to promote systems thinking, to help staff focus on overall goals, and to facilitate communication and feedback.
Strong Team-Based Morale Motivates
‘Team spirit’ and ‘team cohesiveness’ are both important values within the work culture and work ethic in the settings studied. Nonetheless, there is nothing uniform about this. In this environment, so-called team cohesiveness and work relationships vary in accordance with social positioning, hierarchy, and cultural identity. During some of the interviews, a strong indication emerged of poor team-based morale and a lack of cultural cohesion and team spirit. These indications were sometimes explicit and at other times implicit, and were particularly an issue with staff who were either of low rank or remotely located. An attitude of ‘them and us,’ as well as a feeling of being undervalued, was clearly prevalent. These individuals did not identify themselves as team members of the unit they belonged to, but rather as members of the organisation where they were physically located. Moreover, some felt they were not encouraged to operate in a coordinated way to support organisational goals, because they did not see the significance of their particular tasks to the overall goals of the organisation. This was a serious de-motivator, and these staff members were not eager to learn or to share their knowledge with other team members. However, some examples of teamwork and team spirit were evident in the settings studied. The researchers observed instances where teamwork was well integrated into daily work and where people worked collaboratively. Such teams were goal oriented and bound together by the achievement of business results. They were teams not only in structure but in spirit. They were usually formed within individual sections, and were led and energised by a leader who saw his or her role as serving team members rather than just holding the position of leader. Members of these teams were co-located and made information (such as calendars, contacts, and computer work folders) available to one another. This not only contributed to knowledge sharing among team members and aided communication, but also reinforced trust within the team. In these areas, morale was perceived to be high. People worked together happily and looked forward to socialising together. Regular social functions were organised for the workplace teams.
Constructive Performance Management Enables
Assessment, reporting, and performance management form a significant part of the overall management of military personnel throughout their careers; however, if not handled constructively, they may have adverse impacts on team spirit and thus on social learning. The outcome of a performance report often determines the prospects of one’s career progression; the ESLA team was told that a poor performance rating, at a critical point in one’s career, would severely reduce the prospects for promotion. This strong emphasis on individual performance management may be divisive, as it may influence a proportion of individuals to focus on achieving their individual goals at the expense of assisting their team to achieve its goals. Team-based performance management systems have yet to gain wide acceptance. The Somerset Maugham statement, “People ask you for criticism but they only want praise” (Morgan, 1989), captures the problem endemic to performance appraisal systems. Performance appraisals are supposed to meet the needs of both the organisation and the individual. The aim of a performance appraisal is to introduce management practices in which merit is recognised and rewarded in a systematic way. Furthermore, a well-planned performance appraisal system should help to make equitable and unbiased decisions regarding staff selection, placement, development, and training (Wood, 1989). The criteria and standards used in a performance appraisal provide a focus for performance measurement and therefore must be clearly related to the individual’s job. Many problems associated with performance appraisals stem from criteria that are not valid and/or from a lack of clear communication about performance expectations. For the military, the performance cycle is annual. Some of the interviewees felt somewhat uneasy because their performance evaluation was due relatively early in their posting cycle. It was reported to ESLA team members that there was often a lack of clear communication about performance expectations. Some of the people interviewed were unclear about what was expected of them and what was stated in their duty statements. An annual performance appraisal appears to be too long to wait for recognition of good work and too late to correct a performance problem. Morgan (1989) and Wood (1989) explain that, to maximise positive results, the appraisal process should be two-way: it should facilitate and coach staff in doing their jobs effectively, and it should be frequent and informal. In this way, performance management can contribute not only to achieving organisational goals but also to social learning.
Judicious Use of Humour Motivates
Senior managers frequently used humour to smooth discussions that were becoming heated. This helped to stop the conflict from escalating while also enabling the conflicting subordinates to save face. In the settings studied, humour was often used at meetings to help unite people around common themes. Humour was also used to make criticism palatable.
Valuing Skills Motivates and Enables
At the tactical setting studied, the skills of individuals were seen to be highly valued. Current expertise and competence were often seen to take precedence over rank. One of several instances observed occurred when a senior officer was reviewing a proposed software update for a weapons system. The person with the most current operational experience with the affected system was a relatively junior officer. It was illuminating to observe the junior officer’s specific expertise and the senior officer’s experience being combined to generate what they believed to be the most viable solution, and to share and transfer knowledge between them. This type of interaction is acknowledged as an important cultural requirement for organisations that wish to further their generative learning (Bokeno, 2000; Isaacs, 1999; Schein, 1993). The researchers were told that this attitude had changed over previous years. Before that change, senior officers believed that they had to maintain the pretence of being superior in all spheres of performance in order to maintain their authority. However, informants explained that the results of that attitude were counterproductive. A senior officer stated: “There has been a huge change with the culture regarding senior and junior officers. Previously, junior officers would not go out of their way to help them– the attitude was, if you knew it all, then do it yourself. It couldn’t be more different now. They will offer help when they see it is needed. You simply need to say what you require. They respect the fact that we have other jobs to do and are not as current as we once were.” One way of valuing skills and increasing morale is to publicly acknowledge outstanding work. Making employees feel appreciated, focusing attention on their good ideas, inviting them to extend themselves, and saying, ‘Thank you, we know that you are a good employee, we value you and your work,’ is a big factor in motivation (Mitchell, 2000). Key informants stated that public recognition of good work was scarce in the strategic setting studied. Many personnel told the research team that a written or verbal word of praise or a pat on the back often means more than, for example, a pay raise (“praise is better than money”), and that praise is needed at all levels.
Peer Review Enables
For the military, peer review is an important method for sharing expertise and problem solving. When done constructively, it also has a positive effect on team building because it demonstrates that colleagues are willing to contribute to achieving an individual’s tasks and goals, and it helps to build and sustain relationships and to generate trust. Peer review was seen to form an essential part of team building at the tactical headquarters and, to a lesser extent, at the strategic setting. At the tactical setting, the mission debrief is a major component of peer review. Each mission is analysed to see how well its objectives were met, and how each individual’s actions contributed towards the success or failure of that component of the mission. The importance of peer review was stressed by a senior officer who said, “Interaction with peers is an important way of getting feedback, constructive criticism, and validation.”
Effective Workplace Design Enables
Workplace design was seen to have many negative impacts on social learning. Staff located at small, isolated outposts were at risk of feeling isolated and did not identify strongly with the parent organisation. As stated elsewhere, outposted staff identified more with the workplace at which they were based than with the Branch to which they administratively belonged. This was further exacerbated by the fact that these workers often felt excluded by their colleagues; the ESLA team was told about a number of instances when they were overlooked or not informed about Branch meetings or events. Several staff members expressed confusion about who was in their line management, or even about their place in the ADO. Those teams whose staff were co-located demonstrated the highest team-based morale and team spirit. However, while co-location can be helpful, it is insufficient on its own to guarantee successful team building.
ACCESS TO INFORMATION
The ready availability of corporate information has a direct bearing on knowledge acquisition and generation and, thereby, on social learning. Information, therefore, is an important organisational resource which, if properly managed, can lead to improved decision making and increased productivity.
Good Records-Keeping Enables
The research team observed that general familiarity with records-keeping procedures in the settings studied was quite poor, and that adherence to formal processes was almost non-existent in the strategic headquarters. Records management and access to information contained in paper records pose a problem. Some people have developed their own personal records-keeping systems, but there is little uniformity in these, and no adherence to file naming conventions and standards. Consequently, there seems to be little faith in the integrity of organisational records. Those areas where there was good local practice in records management were very much the exception. As two informants stated:
“I believe that physical files in the … are no longer managed well because their management has been farmed out to outside bodies. With the file clerks there was consistency of procedures but the file clerks are no longer part of the procedure.”
“I think we have problems with passing on information in the organisation as a whole. We just don’t do it very well. You know we go upstairs and ask people where they save their work to and [if] you ask five people, you’d find different answers. We have an information management problem right from the outset. Where do you save your e-mails? Well I save them somewhere. Where do you save yours? I put mine in a public folder. Where do you save yours? I put them into private folders. It’s corporate information. Unless it’s personal stuff which you could argue you shouldn’t be doing anyway. We have an information management problem and therefore passing on information becomes quite difficult because everybody does it in a different way.”
Furthermore, the preference for accessing and transferring information electronically seems to be growing, and the use of electronic tools for communication and decision making is prevalent at all levels. The issue of electronic records, particularly e-mail messages containing evidence of business transactions, posed problems not only in the setting studied but also in the ADO at large. Research reported in the professional literature on records management (Enneking, 1998; Henricks, 1999; Robles & Langemo, 1999) suggests that the capture of e-mail messages into a records management system offers the best solution to this problem. Several staff members have suggested that the Defence Intranet should become a de facto records and knowledge management system. The argument is that the intranet, with the overlay of appropriate access privileges, should be the repository of all finished work and living documents. However, the current reality falls short of this in the ADO. One informant said: “It’s a bastardised Intranet in its current form. It’s useful because there is nothing else that does it. But it’s very limiting. You can’t search information unless you know what you’re looking for on a file, you won’t find it. So, I would call it only 10% useful, 20% useful.” The fact remains that, in the absence of organisation-wide protocols, technology is only as good as the people using it. Those who are very particular about record keeping do not want to trust all records to an electronic system. In the ADO, many electronic directories and other directory-style databases remain poorly structured.
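The records management literature cited above stops at the recommendation. Purely as an illustrative sketch (the table layout, database file, and function below are assumptions, not a description of any ADO or commercial system), capturing an e-mail message as a corporate record essentially means storing it together with enough metadata that it can later be searched:

```python
import email
import sqlite3
from email import policy

def capture_email(db_path, raw_message):
    """Store one e-mail message (raw RFC 822 text) as a searchable record."""
    msg = email.message_from_string(raw_message, policy=policy.default)
    body = msg.get_body(preferencelist=("plain",))
    conn = sqlite3.connect(db_path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS records
           (sender TEXT, recipients TEXT, sent TEXT, subject TEXT, body TEXT)"""
    )
    conn.execute(
        "INSERT INTO records VALUES (?, ?, ?, ?, ?)",
        (msg["From"], msg["To"], msg["Date"], msg["Subject"],
         body.get_content() if body else ""),
    )
    conn.commit()
    conn.close()
```

The point of the sketch is simply that the business evidence ends up in one queryable store rather than in each person’s private folders, which is the behaviour the informants above describe as the problem.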
Personal Networking Enables
Personal networks from previous postings, as well as newly acquired contacts in the new environment, play a vital role in knowledge construction and acquisition. Those who are in a position of ‘know-how’ share their expertise, and newcomers to an area often rely on these networks to gain insights into the complexities of the work they are involved with. Apart from satisfying social needs, informal networks play a pivotal role in knowledge propagation. New knowledge often begins with the individual, and making personal knowledge available to others is the central activity of knowledge-creating organisations. Through conversations people discover what they know and what others know, and in the process of sharing, new knowledge is created. Technologies such as e-mail, faxes, and telephones are invaluable aids in the process of knowledge sharing, but they are only supporting tools. Knowledge sharing depends on the quality of the conversations, formal or informal, that people have. Webber (1993) aptly describes it as follows: “Conversations–not rank, title, or the trappings of power–determine who is literally and figuratively ‘in the loop’ and who is not.”
Protocols at Meetings Enable
Meetings are another means of accessing information, and those observed in the different settings studied varied significantly in format and in the protocols in place. The meetings in the tactical environment were seen to be more conducive to social learning than those in the strategic environment, largely by design. At the tactical headquarters, meetings that were mission related were observed to be excellent opportunities for learning. Strict protocols were observed at briefings like these, such as allowing participants to discuss errors or problems encountered during missions without assigning blame or shame to individuals. As well as providing access to information, this facilitated the sharing of mistakes and the sharing of responsibility for solutions. The ESLA team had been told that, prior to the introduction of these protocols, the debriefs commonly became very heated and it was not uncommon for fighting to take place. Even briefings whose main aim was information transfer from senior to junior staff had sections where staff were, for example, invited to speak to topics, discuss tactics, or identify potential objectives. There were few equivalent meetings at the strategic headquarters, other than some induction meetings and briefings. In an environment which in many ways seemed more complex and ambiguous, learning how to do the job was not given quite the same priority. The staff were simply too busy.
Effective Information Exchange at Meetings Enables
Many meetings at both of the settings studied were held with the stated aim of propagating and exchanging information. Information transfer is generally considered to be a uni-directional process, while information exchange is a two-way or multi-directional process. The concept of information exchange is further delineated by some theorists (Senge, 1992) into discussion and dialogue, where ‘dialogue’ requires members of a team to suspend assumptions and enter into a genuine “thinking together.” Genuine dialogue is not a heaving of ideas back and forth in a winner-takes-all competition. Instead it is a free flowing of meaning through a group, allowing the group to discover insights not attainable individually (Senge, 1992). While meetings observed in the strategic environment tended to have a basic structure, their purpose appeared to be information transfer rather than information exchange or information sharing. The ESLA team observed a vast difference among Branches in the way information is disseminated at meetings. In one Branch the Director General (DG) holds a whole-of-Branch meeting after his get-together with his manager in the chain of command. In other Branches, the DG meets only with the Directors, who convey the necessary information to their Deputy Directors, sometimes only on an ad hoc basis; the Deputy Directors in turn disseminate information to their respective directorates. Members of those Branches told the research team that this ‘multi-step’ means of disseminating information somewhat compromises the content (filtering of information inevitably takes place) as well as the accuracy of the information. In addition, they do not have an opportunity to meet as a Branch to discuss issues and get feedback. One survey respondent stated that: “Whilst at a social level members of … interact well, there is very poor group cohesiveness in the work environment. I believe that this is in part due to the failure to conduct regular progress/section meetings at lower levels and the failure to clearly identify who is doing what.” A number of staff told the research team that a lack of active listening seems to be an endemic problem, and examples were given of managers either avoiding questions put at meetings or terminating meetings prematurely if the questioning became too uncomfortable. These staff would have preferred frank openness and to be told ‘I do not know what the answer is’ or ‘I need to investigate this more’ or ‘at this stage this has low priority,’ and so on, rather than being brushed off. While it is sometimes necessary to have meetings for information transfer only, social learning is more effective when there is open information exchange and dialogue at meetings, and where questions can be asked, views expressed, and problems raised without fear of blame or shame.
Workplace Design Enables
Workplace design is also an enabler in the category of Access to Information because it impacts on the ways in which individuals can access information. In the first instance, it is important that individuals know that a given type of information exists. Information is also more likely to be used if it is readily accessible than if it is hidden or scattered across many offices. The goals of the tactical headquarters’ military exercise, for example (flying missions safely and meeting targets), necessitated the centralisation of information sources in the operations rooms. The information was available in a variety of formats, and there were easy means of accessing it. In a multi-goal area like the strategic environment, it is not possible to centralise information resources to the same extent. This problem is exacerbated by the geographical distribution of units within the organisation. Therefore, there seemed to be a greater reliance on personal networks and on obtaining information from people rather than from documents and centralised sources. In the strategic environment, because of the geographical distribution and the architectural design of the workplace, the team observed that workplace design tended to inhibit rather than enable social learning.
Good IT Infrastructure Enables
The ESLA team observed that failings in the IT infrastructure inhibited access to information within the strategic settings. This was especially a problem for those individuals who were not co-located in the central headquarters. They lacked access to classified networks and to the main information resources stored on shared drives; conversely, staff located within the central headquarters had difficulty accessing some of the shared information resources of outposted staff. Another issue often brought to the team’s attention was the difficulty of finding information on the shared drive. Since no specific person was responsible for maintaining information on the shared drive and for naming folders, it was left to the discretion of the document originator where a document would be stored. The research team observed that in some individual sections people were more precise in giving explicit names to documents and folders, but this practice was not uniform across the whole of the headquarters.
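One low-cost response to the shared-drive problem described above is an agreed naming convention that can be checked automatically. The sketch below is illustrative only; the naming pattern and mount point are assumptions, not an ADO standard:

```python
import re
from pathlib import Path

# Assumed convention for illustration: YYYYMMDD_branch_topic_vN.ext
PATTERN = re.compile(r"^\d{8}_[a-z0-9]+_[a-z0-9-]+_v\d+\.[a-z0-9]+$")

def nonconforming_files(shared_drive):
    """List files on the shared drive whose names break the agreed convention."""
    root = Path(shared_drive)
    if not root.exists():
        return []
    return [p for p in root.rglob("*")
            if p.is_file() and not PATTERN.match(p.name.lower())]

if __name__ == "__main__":
    for path in nonconforming_files("/mnt/shared"):  # hypothetical mount point
        print(path)
```

A report of this kind does not replace a responsible records owner, but it makes deviations from the agreed practice visible instead of leaving them to each document originator’s discretion.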
DEVELOPMENT OF INDIVIDUAL EXPERTISE
The acquisition and development of individual expertise is an integral part of social learning, and a number of enablers were observed to facilitate it.
Career Trajectories Enable
An individual’s career trajectory describes the positions, roles, and experience that they have accumulated, up to and including the position they currently hold. While not excluding personal experiences outside of a work or training context, a well-designed career trajectory generally equips an individual with the skills, experience, maturity, and personal networks needed to successfully fill a particular position or role. A senior officer at the tactical headquarters stated that “It is assumed that officers who have gone through an appropriate career development path would have 90% of the knowledge required for a position.” If one asks how an individual learns to be a Commanding Officer, the response generally is that, by the time a person reaches that position, they should already know how (O’Neill & Gori, 1998). This answer assumes that these people have undergone an appropriate career trajectory. Appropriate career trajectories facilitate social learning because they provide a foundation of knowledge upon which an individual can become fully productive more quickly and, as a consequence, be better able to generate new knowledge. Good career management and thoughtful recruitment and selection procedures could serve the same purpose in civilian organisations.
Professional Currency Enables
Professional currency is an enabler of social learning in the same way that appropriate career trajectories are: by providing a foundation for the generation of new knowledge. However, the term professional currency has a somewhat different meaning in different environments. Within a tactical environment, in operational positions, except for the most senior officers, professional currency translates into the ability to do one’s job. For a pilot working on a reconnaissance project, for example, an indicator of their professional currency might be the number of hours they have flown over the last two years in the reconnaissance variant of the aircraft. The training available reflected this: new members of the community did whatever basic training they required, and officers returning from a non-tactical posting took whatever training they required to refamiliarise, refresh, and update their knowledge. In the ADO, this may mean that some degree of valid operational experience is also essential for all staff members, both civilian and military, if they are to have sufficient understanding of the military to allow them to do their job. In other environments, professional currency merely means staying up to date with changes in the professional skills required to do one’s job effectively.
Professional Training Enables and Motivates
Appropriate professional training is a significant component of the development of individual expertise and, therefore, again a fundamental enabler for generating new knowledge. Generally, the military excel at supporting their officers through in-house, graduate, and postgraduate training. Training courses are important for furthering individuals’ expertise, as well as for the personal networks that subsequently develop. However, in times of budgetary constraint, training money is often the first to go, with damaging consequences for the organisation’s ability to learn and manage its knowledge. The ESLA team observed that numerous individuals at the strategic level perceived a lack of access to operational experience and a lack of available funds for training. This perceived shortage of funding may have a direct impact on the trust that staff have in management. Numerous researchers, for example McCauley and Kuhnert (1992) and Argyris (1973), have found that a general trust in management is associated with professional development opportunities at work. The implication is that an employee’s sense of trust is promoted when the supervisor provides career growth opportunities, because doing so demonstrates the supervisor’s commitment to that employee’s professional development. This is important to social learning because, as explained in the section entitled Team Building, trust plays a key role in collaboration and teamwork.
Mentoring Enables and Motivates
A degree of mentoring may be an important element in the development of individual expertise. Mentoring is regarded as an effective method of assisting staff development, especially for junior staff. At the tactical headquarters, for example, a degree of informal mentoring was seen to be built into elements of the training program. In terms of developing a career trajectory, the knowledge acquired through mentoring may also be important when individuals want to prepare themselves for specific roles in the future. The active management of junior officers’ careers in the tactical environment provided another avenue for mentoring. The ESLA team, however, saw little evidence of systematic attempts at mentoring within the strategic environment, although a mentoring program was on the ‘wish list’ of many staff members: “…it might range from ex-politicians, ex-Secretaries, senior members of the Strategy Committees, or something like that–mentors that have previously been through the system, just saying, ‘Hey, look, this is–these are the hidden rules about how you do the thing.’”
COMMUNICATION
Communication is one of the social learning constructs essential to effective learning within an organisation, and to the effective functioning of any workplace.
A Supportive Communication Climate Enables and Motivates
Supportive communication climates are recognised as being positively linked to open and free exchange of information, constructive conflict management procedures, a high degree of worker involvement in solving organisational problems, and job satisfaction (Gibb, 1967). Such communication climates also encourage openness in supervisors, but demand of them particular effort and a willingness to allow employees to express opinions, feelings, and ideas openly (Larsen & Folgero, 1993). Characteristics of a supportive communication climate include a culture of knowledge sharing, treating one another with respect, generally behaving in a cooperative rather than competitive manner, and breaking down cultural barriers arising from inaccurate stereotyping. Research has established the link between supportive organisational communication climates and generative learning (Bokeno, 2000; Ruppel, 2000). Supportive organisational climates have also been associated with higher levels of organisational commitment (Guzley, 1992). Furthermore, two elements of communication climate found to predict organisational commitment are participation in decision making (Hall, 1977; Welsch, 1981) and participation in goal setting (Hall, 1972). In the joint environment, the research team more often observed behaviours that lead to a defensive communication climate than behaviours that foster a supportive one. Defensive organisational communication climates encourage workers to keep things to themselves and to make only guarded statements (Gibb, 1967). For instance, findings of the quantitative survey conducted as part of the study indicated that the majority of joint strategic headquarters staff did not perceive that they were involved in decision making and goal setting. Further, findings from semi-structured interviews with these staff also indicated lower levels of organisational commitment. In such settings, social learning is likely to be inhibited.
Effective Formal and Informal Information Flows Enable
Within a tactical environment such as the single service tactical headquarters studied, communication flows, both formal and informal, are the lifeblood of the organisation. It simply could not function without the meetings, briefings, email, telephone calls, and conversations that take place, and the orders that are issued. These activities provide information, align goals between individuals and different parts of the organisation, and help to coordinate their activities. When communication channels and information flows break down, as was observed several times at a high-stress military exercise, the consequences may be severe. The situation is similar in a strategic environment such as the joint service environment, even though many of the time pressures are not as dire. The data gathered suggest that formal information flows were not always effective within the strategic organisation. Information flowing from the top levels of the organisation to the lower levels was perceived to be excessively filtered and fragmented in the process. Information flows from the lower levels to the senior ranks, and lateral information flows, were seen to be particularly ineffective. Informal information flows, however, seemed more effective in the joint service environment. On numerous occasions, it was pointed out that informal meetings (for instance, morning teas) provide an invaluable forum for the exchange and transfer of information. In fact, these informal gatherings are, for many staff, a preferred means of communicating and exchanging views. At the same time, the ESLA team was told that there is less and less networking between colleagues within the strategic headquarters, as people do not mix as much and a lunch hour is often used to catch up on work. Apart from socialising, a shared lunch provided a forum where people could talk informally about work, and decisions were often made at these lunches. Similar comments were heard at the tactical headquarters studied. The ESLA team was told of “a strong informal flow of information in the crew room around lunchtime. There is also an informal brief around coffee at about 13:15.” The team was also told that the opportunities for informal interaction were diminishing.
Inquiry and Reflection Motivates and Enables
An important element of generative learning is for organisational members to be able to engage in dialogue that is open and based on inquiry and reflection. A supportive communication climate is a prerequisite for such dialogue, and it requires learning how to recognise the defensive patterns of interaction in teams that undermine learning (Senge, 1992). However, an additional and obvious requirement for such dialogue is having the time to engage in it. As stated earlier, on numerous occasions the ESLA team encountered comments from strategic headquarters staff that there is little time to reflect; to learn from experiences, whether they be successes or failures; and generally to discuss work matters. These comments were often made with an indication of bitterness, and overwork was cited as the factor preventing people from setting some time aside for thinking and reflection.
Many of the processes at the tactical headquarters, from mission debriefs to tactics meetings and training, show that the importance of inquiry and reflection at a group level is recognised there. As well as improving the quality of the reflection, conducting it at a group level also aids the development of a common identity and shared understanding among the participants.
Workplace Design Enables
The issue of workplace design and its impact on team and network building, and on accessing the information necessary to getting one’s job done, arose repeatedly during the study. Numerous interviewees were aware that physical location and proximity to each other had the potential to promote the transfer of pertinent knowledge. Indeed, the point was even made that, in addition to more quickly obtaining answers to questions about particular tasks, an open plan workplace enabled one to tap into pertinent knowledge by overhearing others’ conversations. Hutchins (1996) uses the term ‘horizon of observation’ to describe the area of the task environment that team members can see and that is therefore available to them as a context for learning. Officers at the tactical headquarters setting believed that opportunities for learning have significantly decreased, citing the decline of open plan areas as a factor: “These days this has all but gone out of the window. The crew rooms are no longer used in the same way. Nowadays, they all have desks and work separately. This change has meant that the learning, which used to take place informally in the crew room, happens less frequently.” However, as Davenport and Prusack (1998) point out, co-location in itself does not guarantee the sharing of knowledge; a common training or experience, or at least a common language, is essential. Unless individuals are prepared to ask and answer questions of one another, or even just to chat with each other, the knowledge advantage provided by open plan workplaces will be lost. An example of this was brought to the research team’s attention when it was told that two workers had been co-located for three months before they realised that they were both working on the same project. Many staff members of the strategic headquarters setting consider the open plan arrangement noisy and an inhibitor of effective communication, as the following comment illustrates: “If the environment was such that a group of people who worked together were located together and had some form of privacy to do their work as a group, that would be fine, but when you’re all lumped together and everybody can hear what everybody is saying and everybody’s saying a hundred things at the same time, sometimes it can be an absolute nightmare.” An organisational culture that recognises the value of knowledge and its exchange is a crucial element in whether knowledge work is successfully carried out or not. Such a culture provides the opportunity for personal contact so that tacit knowledge, which cannot effectively be captured in procedures or represented in documents and databases, can be transferred. Webber (1993) claims that
“Conversations are the way knowledge workers discover what they know, share it with their colleagues, and in the process create new knowledge for the organisation.” In a culture that values knowledge, managers recognise not just that knowledge generation is important for business success, but also that it can be nurtured with time and space (Davenport & Prusack, 1998).
INDUCTION AND ENCULTURATION
A substantial majority of the knowledge that individuals and groups hold is tacit, and therefore cannot be taught formally. Many of the sorts of knowledge required for a particular job or role, being tacit, tend to be invisible in that most people are generally not conscious that they have this knowledge. Learning to be a member of an organisation, for example, also entails learning what is acceptable behaviour and how conflicts are resolved. Induction and enculturation, the latter defined as the process by which humans acquire the culturally constructed meanings attached to various actions in a particular society or subculture (Merten, 1999), are two processes by which staff can learn both explicit and tacit knowledge. Reports in the literature suggest that orientation of new employees is one of the most overlooked aspects of employee training (Cooke, 1998; Ganzel, 1998; Tyler, 1998). Like appropriate career trajectories and professional currency, effective induction and enculturation programs facilitate social learning by providing a foundation of knowledge upon which the individual can become fully productive more quickly and, as a consequence, is more likely to generate new knowledge. Good induction, however, is more than just an introduction to a new job and workmates; it is a way of helping people find their feet. Attitudes and expectations are shaped during the early days of new employment, and the issue of work satisfaction cannot be considered without examining more basic issues of work orientation (Dunford, 1992; George & Cole, 1992). There are numerous advantages that come from good induction programs. These include morale building, minimisation of misunderstanding (because rules and regulations have been clearly explained), establishment of good working relationships, reduction of anxiety, and reduction of inefficiency.
Comprehensive, Timely Induction Programs Enable
Induction, or the perceived lack of it, was a problem in the joint strategic setting studied. Although not everybody interviewed was explicitly critical about the lack of job induction, some felt frustrated because they often had to labour to find basic organisational information required for their work. This “discovery learning,” as it was referred to, was regarded as very time consuming and seemed to lead to poor morale and frustration, and to negatively influence people’s perceptions of the organisation. The plight of some informants was made clear in the following remarks: “…there was just no guidance, no handover, nothing about where this paper was at and what I should do with it.”
“We had no … handover in terms of the status of projects that we were going to assume responsibility for. No handover in relation to file or information management within the section. So we just foraged. We’re still foraging.” Timing is one of the most important elements of employee job induction. If employees have to wait for weeks to be introduced to the job and the organisation, they remain largely unproductive for that period of time. An issue that emerged from the interviews with staff in the strategic headquarters was a relationship between meaningful and timely induction and subsequent job satisfaction. What was also interesting was that those who were not properly inducted or enculturated into the organisation saw no need or responsibility to prepare any form of handover for anyone who might take over their position in the future. If learning is to occur, a timely and comprehensive induction program provides a solid foundation on which to begin this process.
‘Buddy’ and Mentoring Systems Enable and Motivate
Although highly desirable, an induction program is not always feasible to conduct when a new posting cycle begins. In the interim, a ‘buddy’ or ‘mentoring’ system could fill the gap. A ‘buddy’ would be an experienced workmate who is available to answer questions and to assist with the orientation of new members during their first few weeks. Some interviewees mentioned that a colleague acted as a buddy when they first came to the headquarters, and that they found this to be immensely useful in settling into a new job and in learning effectively. For instance, one person commented: “Well, because [Name] did the job before, and was pretty much my bible for the first three months so I was relying on [Name] for a lot of stuff.” Similarly, mentoring is regarded as an effective method of staff enculturation and development. Its advantages include helping to align staff with organisational goals, and providing a context for the transfer of knowledge between senior and junior staff (Davenport & Prusack, 1998).
Early Training Enables
The ESLA team was repeatedly told that early training is an important part of effective induction and enculturation. It is an opportunity to learn the explicit knowledge that is taught as part of formal training. It is also an opportunity to be exposed to the attitudes and cultural perceptions of colleagues and peers.
RECOMMENDATIONS FOR MANAGERS
The ESLA research to date has identified a number of complex and interwoven factors that enable social learning. The identified enablers and motivators have been assigned to seven categories: Common Identity, Problem Solving, Team Building, Access to Information, Development of Individual Expertise, Communication, and finally, Induction and Enculturation. These
categories, or elements, are believed to be essential constructs for effective social learning. It has been the aim of the authors of this chapter to show that it is these human, social, and organisational perspectives in an organisation that must be addressed first, to effectively facilitate social learning and knowledge management. Therefore, the following recommendations are made to managers who seek to enhance social learning, knowledge management, and knowledge transfer in their organisation:
• Induction programs should include full information about the hierarchy within the organisation and important inter-organisational links, as well as points of contact for personnel, pay, and conditions information. This information should also be easily accessible electronically.
• Ideally, organisations should be co-located. If this is not possible, attempts should be made to co-locate at Branch or Division level.
• Staff from all levels of the organisation should be invited to contribute to the formation and implementation of policy in relation to issues of equity and the celebration of diversity.
• Junior staff should be involved in discussions and policies to improve morale and cohesion within the organisation.
• The imperatives and decision-making processes of senior managers should be made more transparent to junior staff. For example, minutes from senior staff meetings could be circulated by e-mail.
• Encourage and facilitate personal networks of information experts by providing tools for staff to map and record these networks.
• Establish formal and informal processes for providing information on how the organisation operates, what its rules are, how much autonomy individuals have to bend the rules when necessary, and how free individuals are to act within its perceived boundaries.
• Encourage systems thinking by providing learning opportunities for staff to develop a systemic understanding of organisational and inter-organisational issues and their inter-relationships.
• Institute formal processes for group sessions of inquiry and reflection, and encourage individual inquiry and reflection by allowing staff members official time for this process and by rewarding positive outcomes from it.
• Implement the practice of public acknowledgment of good work. This could be in the form of a ‘special celebration’ during social events, or the public awarding of certificates. It is important that this is seen as a genuine commitment to acknowledging good work and not just as a token exercise.
• Encourage social and work-related opportunities for work teams to build team cohesion.
• Institute team-based performance management and reward good teamwork.
• Set out criteria for team and individual work expectations and performance measures. Make these publicly available electronically.
• Where performance appraisal is held annually, allow for individual and team feedback, and peer review, throughout the year.
• Some meetings should be set aside primarily for the purpose of information exchange and sharing. This should include an opportunity for staff at all levels to freely vent their concerns and frustration without fear of retribution. This could be combined with a ‘Reflection and Inquiry’ meeting as discussed earlier.
• Protocols on the structure and use of the shared electronic resources should be developed and promulgated. It would be useful if a member of staff could be appointed responsible for maintaining accuracy and relevancy.
• Records management systems should be developed for both electronic and hard files. All staff members should be trained in the use of this system, and compliance and adherence to records standards and procedures should be given a high priority in the organisation.
• Career management, both within the organisation and outside of it, should be an integral part of the performance management cycle and should be supported by appropriate staff development measures.
• Where recent professional currency is regarded as important for a position, planning to maintain or improve the currency of an individual’s knowledge should be included as part of their performance management cycle.
• All staff should have an informal skills audit when they first join the organisation to map their skills and background. When necessary, appropriate training on local practices should be organised.
• Establish and actively support a mentoring program. Mentors should not be the same people as those who are managing the performance of individual staff members.
• Train managers to adopt an approachable and tolerant management style, while simultaneously providing clear guidelines and expectations of required work standards.
• Develop opportunities through formal structures and protocols for all staff to readily voice their opinions.
• Institute meetings and codes of conduct where members can openly share mistakes and lessons learned without fear or shame.
• Provision should be made for newcomers to observe workplace proceedings, policy meetings, and work practices. An experienced member of staff should accompany new employees to answer questions and clarify any confusing issues.
• Regular induction programs should be instituted in the organisation. An induction or handover pack should be developed for each position, with information on the organisation, the duties of the position, relevant reading material, files, and contacts. Because many aspects of this handover pack would be useful to the staff member, ideally the staff member should maintain and update it as a living document.
• A buddy should be appointed to all newcomers with a view to facilitating knowledge about the organisation and their duties.
• New technologies should be investigated to assess the feasibility of delivering customised training videos on work processes directly to desktops.
The above recommendations are indicative rather than prescriptive. A good manager will know which strategies will best suit the workplace culture in his or her organisation. The challenge for IT managers is to find ways that the technology can support and strengthen these enablers and motivators.
CONCLUSION
The findings reported in this chapter, and the recommendations made, represent the collective research results from the different Australian Defence Organisation settings studied, to date, as part of the Enterprise Social Learning Architectures Task at the Australian Defence Science and Technology Organisation. These findings are multilayered and allowed the research team to pinpoint a number of environmental and cultural issues, processes, and strategies that facilitate effective social learning and knowledge management. These factors provide a context in which personnel are motivated to learn and to construct and share knowledge, and they point to processes and strategies that, when present in an enterprise, can facilitate social learning and knowledge management. While information technology can provide facilitating tools that support these enablers and motivators, it cannot, in isolation, provide an environment in which effective knowledge management will develop. The enablers and motivators that support the human, social, and organisational needs of an organisation must first be embedded.
REFERENCES
Agostino, K. (1998). The making of warriors: Men, identity and military culture. JIGS: Australian Masculinities, 3(2).
Argyris, C. (1973). On Organisations of the Future. Beverly Hills, CA: Sage Publications.
Bokeno, R. M. and Gantt, V. W. (2000). Dialogic mentoring. Management Communication Quarterly, 14(2), 237-270.
Cooke, R. (1998). Welcome aboard. Credit Union Management, 21(7), 46-47.
Davenport, T. H. and Prusack, L. (1998). Working Knowledge: How Organisations Manage What They Know. Boston, MA: Harvard Business School Press.
Doney, P. M., Cannon, J. P., et al. (1998). Understanding the influence of national culture on the development of trust. The Academy of Management Review, 23(3), 601-620.
Drucker, P. (1999). Managing oneself. Harvard Business Review, March-April, 65-74.
Dunford, R. W. (1992). Organisational Behaviour: An Organisational Analysis Perspective. Sydney: Addison Wesley.
Enneking, N. E. (1998). Managing email: Working toward an effective solution. Records Management Quarterly, 32(3), 24-43.
Ganzel, R. (1998). Elements of a great orientation. Training, 35(3), 56.
George, C. S. and Cole, K. (1992). Supervision in Action: The Art of Managing. Sydney: Prentice Hall.
Gibb, J. R. (1967). Defensive and supportive communication. Journal of Communication, 11, 141-148.
Guzley, R. M. (1992). Organizational climate and communication climate: Predictors of commitment to the organization. Management Communication Quarterly, 5(4), 379-402.
Hall, D. T. and Schneider, B. (1972). Correlates of organizational identification as a function of career pattern and organizational type. Administrative Science Quarterly, 17, 340-350.
Hall, R. H. (1977). Organisations: Structure and Process. Englewood Cliffs, NJ: Prentice-Hall.
Harvey, L. and Myers, M. D. (1995). Scholarship and practice: The contribution of ethnographic research methods to bridging the gap. Information Technology & People, 8(3), 13-27.
Henricks, M. (1999). Sorting out electronic filing. Office Systems, 16(2), 30-34.
Hutchins, E. (1996). Cognition in the Wild. Cambridge, MA: MIT Press.
Isaacs, W. (1999). Dialogue and the Art of Thinking Together. New York: Currency Doubleday.
Jordan, B. (1993). Ethnographic Workplace Studies and Computer Supported Cooperative Work. Interdisciplinary Workshop on Informatics and Psychology. Schärding, Austria.
Larsen, S. and Folgero, I. S. (1993). Supportive and defensive communication. International Journal of Contemporary Hospitality Management, 5(3), 22-25.
Lave, J. and Wenger, E. (1991). Situated Learning: Legitimate Peripheral Participation. Cambridge, UK: Cambridge University Press.
McCauley, D. P. and Kuhnert, K. W. (1992). The Professional Manager. New York: McGraw-Hill.
Merten, D. E. (1999). Enculturation into secrecy among junior high school girls. Journal of Contemporary Ethnography, 28(2), 107-137.
Miles, M. B. and Huberman, A. M. (1994). Qualitative Data Analysis: An Expanded Sourcebook. London: Sage Publications.
Mitchell, S. (2000). Be Bold and Discover the Power of Praise. East Roseville: Simon & Schuster.
Morgan, T. (1989). Performance management–The missing link in strategic management and planning. In Corbett, D. C. (Ed.), Public Sector Policies for the 1990s, 243-250. Melbourne, Australia: Public Sector Management Institute, Faculty of Economics and Politics, Monash University.
Myers, M. D. (1999). Qualitative research in information systems. ISWORLD NET Web Site.
O’Neill, J. and Gori, R. (1998). Knowledge representations for supporting an activity-based approach to command and control. Fourth International Command and Control Research and Technology Symposium.
Robles, M. and Langemo, M. (1999). The fundamentals of records management. Office Systems, 16(1), 30-36.
Ruppel, C. P. and Harrington, S. J. (2000). The relationship of communication, ethical work climate, and trust to commitment and innovation. Journal of Business Ethics, 25, 313-328.
Schein, E. H. (1993). On dialogue, culture, and organizational learning. Organizational Dynamics, 22(2), 40-51.
Senge, P. M. (1992). The Fifth Discipline: The Art and Practice of the Learning Organization. Sydney: Random House.
Tyler, K. (1998). Take new employee orientation off the back burner. HR Magazine, 43(6), 49-57.
Webber, A. M. (1993). What’s so new about the new economy? Harvard Business Review, January/February, 24-42.
Welsch, H. P. and LaVan, H. (1981). Inter-relationships between organizational commitment and job satisfaction, professional behaviour, and organizational climate. Human Relations, 34, 1079-1089.
Wood, R. (1989). Performance appraisal in the reform of public sector management practices. In Corbett, D. C. (Ed.), Public Sector Policies for the 1990s, 225-242. Melbourne, Australia: Public Sector Management Institute, Faculty of Economics and Politics, Monash University.
About the Authors
Angappa Gunasekaran is an Associate Professor of Operations Management in the Department of Management at the University of Massachusetts, Dartmouth. He has a PhD in Industrial Engineering and Operations Research from the Indian Institute of Technology, Bombay (India). Dr. Gunasekaran has held academic positions at Brunel University (UK), Monash University (Australia), the University of Vaasa (Finland), the University of Madras (India), and the University of Toronto, Laval University, and Concordia University (Canada). He has over 125 articles published in journals such as the International Journal of Production Research, International Journal of Systems Science, International Journal of Operations and Production Management, Computers in Industrial Engineering: An International Journal, European Journal of Operational Research, Logistics Information Management, TQM Magazine, Management Decision, Managerial Auditing Journal, International Journal of Advanced Manufacturing Technology, International Journal of Production Economics, Journal of Operational Research Society, Enterprise Innovation and Change, International Journal of Technology Management, Technovation, Computers in Industry: An International Journal, Total Quality Management, International Journal of Information Management, International Journal of Quality and Reliability Management, and International Journal of Computer Integrated Manufacturing. He has presented over 50 papers at conferences and given a number of invited talks in more than 20 countries. He is on the Editorial Board of over 15 international journals that include International Journal of Production Economics, International Journal of Computer-Integrated Manufacturing, International Journal of Production Planning and Control, International Journal of Systems Science, Computers in Industry: An International Journal, CERA, Technovation, Journal of Product and Process Development, Logistics Information Management, Business Process Management Journal, Journal of Operations Management, Supply Chain Management: An International Journal, and International Journal of Quality and Reliability Management. He has edited special issues for a number of highly reputed international journals. Dr. Gunasekaran is involved in several national and international collaborative projects that are funded by private and government agencies. He has supervised more than 30 dissertations and several industrial projects. Most of these projects are industry based. Dr. Gunasekaran is currently interested in researching agile manufacturing, concurrent engineering, management information systems, technology management, supply chain management, computer-integrated manufacturing, and total quality management. He is also
the Editor of Benchmarking: An International Journal, an Associate Editor of Integrated Manufacturing Systems: The International Journal of Manufacturing Technology Management, and a Regional Editor (USA) for Supply Chain Management: An International Journal. Dr. Gunasekaran has organized several international conferences in the emerging areas of operations management and information systems.

Omar Khalil is currently the Assistant Dean for Graduate Programs and Activities, and a Professor of Information Systems at the Charlton College of Business, University of Massachusetts, Dartmouth, USA. He has a PhD and an MS in Information Systems from the University of North Texas and an MBA from Alexandria University, Egypt. He has taught many graduate and undergraduate courses at a number of universities in different countries. His teaching interests include database design and implementation, data communications, systems analysis and design, and information resources management. His research interests include information systems effectiveness, information systems and organizational change, management of global information systems, information quality, IT utilization in developing countries, and the human side of information systems. His publications in these areas have appeared in many refereed journals and peer-reviewed proceedings.

Syed Mahbubur Rahman is currently a Professor at the Minnesota State University, Mankato, USA. He worked for several other institutions around the world, including NDSU in USA (1999), Monash University in Australia (1993-98), Bangladesh University of Engineering and Technology (BUET, 1982-92), and Ganz Electric Works in Budapest (1980-82). He was the head of the department of Computer Science and Engineering at BUET from 1986 to 1992. He has co-chaired and served as a program/organizing committee member for a number of international conferences. He obtained his doctoral degree from Budapest Technical University in 1980. He supervised more than 30 research projects leading to masters’ and PhD degrees. His research interests include electronic commerce systems, multimedia computing and communications, image processing and retrieval, computational intelligence, pattern recognition, distributed processing, and security. He has published 100+ research papers in his areas of interest. He has published four edited books on e-commerce, internet commerce, software agents, multimedia, and networking. He also edited the special issue of IEEE Multimedia on distance education.

***

Katerina Agostino recently joined the Land Operations Division of the Defence Science and Technology Organisation as a Senior Research Scientist. Prior to that, Dr. Agostino taught sociology at Macquarie University in Sydney. She also worked as an Academic Consultant to DSTO doing research on workplace practices in the Australian military, and as a part-time member of the Enterprise Social Learning Architectures research team. She writes extensively on aspects of feminism, gender, sexuality, and military culture.
Irena Ali is a Defence Scientist in the Joint System Branch, Defence Systems Analysis Division. Ms. Ali’s background is in information management and information seeking, and she joined the DSTO in August of 1999 as a member of the Enterprise Social Learning Architectures team. She has published and presented both nationally and internationally in the field of organisational and social learning. Ms. Ali is part of a multidisciplinary team recently awarded a three-year Australian Research Council grant, together with academic colleagues from Wollongong and Sydney Universities, to look into systems to support knowledge creation in learning organisations. The proposed project will investigate the capacity of IT to support human activities of knowledge making and use. C. Richard Baker is Associate Professor of Accounting in the Charlton College of Business at University of Massachusetts, Dartmouth. Prior to joining the University of Massachusetts, he held academic positions at Columbia, Fordham, and St. John’s Universities in New York City. His current research interests are focused on the regulatory, legal, and ethical aspects of the accounting profession. Dr. Baker holds a PhD from the Anderson School of Management at the University of California, Los Angeles. He is a Certified Public Accountant in New York State. Reto Bolliger has a master’s degree in Computer Science and Economics from Switzerland. For seven years he has worked as a Project Manager in a large travel agency in Switzerland and networked all the different flight and travel reservation systems together from all over the world. He was the Network Technician of the NSF Science and Technology Center for Environmentally Responsible Solvents and Processes (STC-ERSP) for the last two years, and was responsible for developing and implementing a variety of information and communications technology initiatives, including networking, videoconferencing, and web mastering within the STC-ERSP. Derek Bopping has been part of DSTO since 1996. Currently, he is studying towards a PhD at the School of Psychology, Australian National University. His research examines the social-psychological processes implicated in disclosure behaviour within the context of the Australian Defence Force. His broader interests concern the impact of ideology upon human social-identity processes, and the psychology of organizational secrecy. A keen fisherman, Mr. Bopping holds a Bachelor of Arts majoring in Psychology and Sociology, and has masters’ studies in Cognitive Science. Keith C. C. Chan has a BMath (Hons) degree in Computer Science and Statistics and MASc and PhD degrees in Systems Design Engineering from the University of Waterloo, Ontario, Canada. He has a number of years of academic and industrial experience in software development and management. Before joining The Hong Kong Polytechnic University, he was with the IBM Canada Laboratory where he was involved in every phase of software development. Dr. Chan had worked as an Associate Professor at the Department of Electrical and Computer Engineering at
Ryerson Polytechnic University, Ontario, Canada. He joined the Department of Computing, The Hong Kong Polytechnic University in 1994 where he is currently Acting Head and Associate Professor and the Director of the Intelligent Home Group. Dr. Chan was an Adjunct Faculty Member of the Department of Systems Design Engineering, University of Waterloo, Ontario, Canada. He is currently an Adjunct Professor of the Institute of Software, The Chinese Academy of Sciences. Dr. Chan’s research interests are in software engineering and data mining. Michael F.S. Chan is a PhD Research Student at the Department of Manufacturing Engineering, The Hong Kong Polytechnic University. He received his degree of Engineering and degree of Commerce from the University of Melbourne, Australia. His research focuses on the use of modeling and design management tools to examine the relationship between data model, behavior model, and process model for organizing economic activities in an extended enterprise context. His work is largely based on research conducted in an industrial and practical environment. Walter W. C. Chung is Associate Professor at the Department of Manufacturing Engineering, The Hong Kong Polytechnic University. He graduated in Industrial Engineering and received an MBA from UNSW Australia. His research interests are in enterprise transformation for management succession. A recipient of the President’s Awards for Achievement 1996/97: Research & Scholarly Activities, he has successfully supervised research students up to the PhD level and published numerous research papers. He is active in consulting with business and industry, and has secured research grants (HK$7.1 millions). He is the Project Director of the Teaching Company Scheme for a couple of companies. Thomas H. Cox holds a clinical faculty appointment at the University of North Carolina (UNC) at Chapel Hill, School of Education. He is also a computer consultant and manager of Academic Technology and Networks Video Services at UNC. He has 19 years of experience in interactive distance education and video conferencing, more than anyone else in the state of North Carolina at the university level. He holds a master’s degree in Instructional Design from UNC. His work over the past few years has been in the outcomes-based aspect of distance education and videoconferencing. Maria Manuela Cunha received her Dipl Eng in Informatics and MSci in Computer Integrated Manufacturing from Minho University, and is preparing a doctoral thesis in Production and Systems Engineering. Her current position is Assistant Professor in the Department of Informatics and Mathematics of Instituto Politécnico do Cávado e do Ave, Portugal, for the subjects Information Systems and Information Technology on undergraduate studies. She is also Director of the Department of Informatics and Mathematics of Instituto Politécnico do Cávado e do Ave. Her interests are agile/virtual enterprises and information systems.
Pradeep Gopalakrishna is Associate Professor of Marketing and Assistant Chairman of the Marketing and International Business department at Pace University’s Lubin School of Business. His research has appeared in several journals, including Journal of Business Research and Journal of Global Marketing. Jatinder N. D. Gupta is currently Eminent Scholar of Management, Professor of Management Information Systems, and Chairperson of the Department of Accounting and Information Systems in the College of Administrative Science at the University of Alabama in Huntsville, Huntsville, Alabama. Most recently, he was Professor of Management, Information and Communication Sciences, and Industry and Technology at Ball State University, Muncie, Indiana. He holds a PhD in Industrial Engineering (with specialization in production management and information Systems) from Texas Tech University. Co-author of a textbook in operations research, Dr. Gupta serves on the editorial boards of several national and international journals. Recipient of the Outstanding Faculty and Outstanding Researcher awards from Ball State University, he has published numerous papers in such journals as Journal of Management Information Systems, International Journal of Information Management, INFORMS Journal of Computing, Annals of Operations Research, and Mathematics of Operations Research. More recently, he served as a co-editor of a special issue on “Neural Networks in Business” of Computers and Operations Research and a book entitled, Neural Networks in Business: Techniques and Applications. His current research interests include information and decision technologies, scheduling, planning and control, organizational learning and effectiveness, systems education, and knowledge management. Dr. Gupta is a member of several academic and professional societies including the Production and Operations Management Society (POMS), the Decision Sciences Institute (DSI), and the Information Resources Management Association (IRMA). Noriko Hara, PhD, is Visiting Assistant Professor of Information Science at Indiana University. Her research agenda includes organizational learning, online learning, and communities of practice within social informatics. She has earned master’s degrees in Psychology of Learning and Instructional Systems Technology from Indiana University. She has published influential articles on online learning, and her current research focuses on communities of practice and information technologies. In addition, she serves on the editorial board of The Technology Source. Previously she was an NSF postdoctoral research fellow in the School of Information and Library Science, University of North Carolina at Chapel Hill. Zoltán Kincses got his Diploma at the Eötvös Loránd University of Sciences as a mathematician-programmer in 1996. He had a PhD scholarship at the same university in the Department of General Computer Science. He worked for the Giro Bankcard Ltd. as the head of the security group from 1999-2001. At present he is working in the SEARCH Laboratory at the Budapest University of Technology and Economics as a researcher. He is in the last phase of his PhD
studies. He has 21 publications in international journals and conferences. His fields of interest are policy- and standard-based information security, ontology-based reference architectures, and security taxonomies.

Henry C. Lau is currently an Assistant Professor in the Department of Manufacturing Engineering at The Hong Kong Polytechnic University, involved in research and teaching activities. He received his master’s degree at Aston University in Birmingham in 1981 and his doctorate at the University of Adelaide in 1995. His current research areas cover manufacturing information systems and artificial intelligence applications. He has authored and co-authored over 100 refereed research articles covering multi-agent modeling, knowledge management, global manufacturing, and computational intelligence applications.

Benn Lawson holds a Bachelor of Commerce (with Honors) in accounting from Monash University, Melbourne, Australia. He is currently working toward a PhD in the management of organizational innovation at the Department of Management, The University of Melbourne, Australia. His research interests include innovation management, corporate entrepreneurship, and management control systems. Mr. Lawson is a member of the faculty of the Department of Accounting at The University of Melbourne. Previously he held the position of Financial Analyst with a large Australian insurer.

Thuong T. Le (PhD, Michigan State University) is a Professor of Marketing, E-Commerce, and Supply Chain Management in the Department of Information Systems, Marketing, E-Commerce, and Professional Sales, in the College of Business Administration at the University of Toledo, Ohio, USA. He has written extensively on e-commerce management, international logistics, maritime economics and policy, airline management, and global business. His works have appeared in several publications, including Electronic Commerce Research, Transportation Journal, Maritime Management and Policy, Journal of Maritime Law and Commerce, Transportation Quarterly, International Journal of Transport Economics, and Journal of Purchasing and Materials Management.

Kim Man Lui received a B Eng from Tamkang University, Taiwan. He was awarded a bursary to study for his MSc in Computer Science at the University of the Witwatersrand, South Africa, where he received the MSc. He has worked in a number of IT positions in the commercial sector, ranging from system engineer, analyst programmer, system analyst, and project leader to IT manager. Mr. Lui was a certified Sybase Database Administrator and also a certified Oracle Database Administrator. While studying towards his PhD degree at The Hong Kong Polytechnic University, Mr. Lui is currently working as the IT Manager for Carlsberg, responsible for Pan China IT development and support. Mr. Lui has published several international papers on pattern recognition, qualitative reasoning, data mining, and software engineering.
Isidre March-Chorda, PhD, in Economics and Business Administration by the University of Valencia since 1994 and MsC in Technology and Innovation Management by SPRU (University of Sussex) since 1991, has published several articles in international journals such as: International Journal of Technology Management, Technovation, The Journal of High Technology Management Research, International Journal of Innovation Management, and Management Decision. Senior Lecturer at the University of Valencia, he has headed several research projects with EU funding in the field of strategic innovation audits. He is responsible for the research line in “Innovation Management” at his University. István Mezgár (Diploma in ME, 1977; CSc in TechSci., 1995) is a Senior Researcher at the CIM Research Laboratory at the Computer and Automation Research Institute, Hungarian Academy of Sciences. Previously he served as Visiting Researcher/Professor at different universities in Italy in 1989-91, at the University of Waikato (New Zealand, 1995), and at the Korea Institute of Science and Technology (1995-96). An IPC member of several international IFAC, IFIP conferences, he has more than 100 scientific publications in international journals and conferences. His current interests focus on artificial intelligence, integration of product and enterprise models, virtual enterprises, computer security, and smart card technology. Dinesh A. Mirchandani is an Assistant Professor of Management at Grand Valley State University. He teaches information systems management courses. His research has been published in journals such as Communications of the ACM, Journal of Organizational Computing and Electronic Commerce, International Journal of Electronic Commerce, and Information & Management, among others. Santosh K. Misra is an Associate Professor of Computer and Information Science at Cleveland State University. He has a strong interest in object-oriented technologies, including UML and patterns, and has conducted seminars in this area. He has published in several areas including performance measurement and electronic commerce. Dr. Misra holds his doctorate in business administration from Kent State University. He can be reached at
[email protected]. Jaideep Motwani is Chair, Department of Management at Seidman School of Business, Grand Valley State University. He teaches operations management courses at both the undergraduate and the MBA levels. His research has been published in Operations Research, European Journal of Operational Research, IEEE Transactions on Engineering Management, Omega, Journal of Operational Research Society, and International Journal of Production Research, among others. Celina Pascoe is a Lecturer from the University of Canberra with research interests in the relationship between workplace communication and effective management practices, including knowledge creation. Other areas of interest are the impact of information and communication technologies on job satisfaction and
motivation. She is a member of the Centre for Communication, Media and Cultural Studies, and she consults for the Australian Department of Defence Science and Technology Organisation (DSTO) on communication for management.

Goran D. Putnik received his Dipl Eng, MSci and DrSci from the Belgrade University, both MSci and DrSci in the domain of Intelligent Manufacturing Systems. His current position is Associate Professor in the Department of Production and Systems Engineering, University of Minho, Portugal, for the subjects CAD/CAPP, CAM, flexible manufacturing systems and virtual enterprises on undergraduate studies, and CAD/CAPP/CAM systems, concurrent engineering, enterprise organization, intelligent systems for manufacturing, and design theory on post-graduate studies. He is also the Director of the research centre, the Centre for Production Systems Engineering (CESP), and the Director of the Postgraduate Course of Computer Integrated Manufacturing (CIM) of the University of Minho. His interests are manufacturing system and enterprise design and control theory, and implementations and machine learning as a general design theory model.

S. Subba Rao holds a PhD in Operations Research and is a Professor of Operations Management at The University of Toledo. He has taught at Case Western Reserve University, Southern Methodist University, Virginia Commonwealth University, Washington State University, University of Rochester, and at the Indian Institute of Management, Bangalore. Dr. Rao has taught graduate and undergraduate courses in the areas of operations management, management science, queuing theory, forecasting and energy management. His research interests span a wide range: operations research modeling, quality management, supply chain management, and operations management. He is extensively published in international and national journals like Operations Research, Naval Research Logistics Quarterly, Journal of Optimization Theory and Applications, IEEE Transactions, Journal of Applied Probability, OPSEARCH, Total Quality Management, Journal of Quality Management, OMEGA, European Journal of Operations Research, International Journal of Quality and Reliability, etc. Over 50 papers are to Dr. Rao’s credit, in addition to another 60 or so publications in conference proceedings, presentations, and technical memoranda.

Violina Ratcheva is currently a Lecturer at the University of Nottingham (UK), teaching Creative Problem Solving and Enterprise Knowledge Management. She received her MA degree in Economics and Retail Marketing from Varna University of Economics (Bulgaria), and an MPhil in Business Studies from Loughborough University Business School (UK). She has worked as an accountant, a lecturer in strategic management, a researcher in European business, and a research fellow in growing SMEs. Her main research interests are in the area of knowledge creating interactions in inter-organisational business networks. She is currently investigating interaction processes and behaviour changes of virtual teams, and knowledge capturing and sharing practices in dispersed organisational environments.
Danny Samson is Chair of Management in the Department of Management, University of Melbourne. He has published over 50 research papers in academic journals and five books in fields ranging from decision sciences to operations, technology and quality management, and business improvement strategies. He has previously worked in the chemicals industry and has held academic positions at the University of NSW and the University of Illinois. He is a Director of a major insurer in Australia and regularly contributes to the translation of theory into practice, through executive education and the provision of strategic advice to corporations and governments.

Sushil K. Sharma is currently Assistant Professor of Management at Ball State University, Muncie, Indiana. He received his PhD in Information Systems from Pune University, India, and taught at the Indian Institute of Management, Lucknow, for 11 years before joining Ball State University. Prior to joining Ball State, Dr. Sharma held the position of Visiting Research Associate Professor at the Department of Management Science, University of Waterloo, Canada. His primary teaching interests are e-commerce, computer communication networks, database management systems, management information systems, and information systems analysis and design. He has extensive experience in providing consulting services to several government and private organizations, including World Bank-funded projects, in the areas of information systems, e-commerce, and knowledge management. Dr. Sharma is the author of two books and has numerous articles in national and international journals. His current research interests include database management systems, networking environments, electronic commerce (e-commerce), knowledge management, and corporate information systems.

Nancy C. Shaw received her PhD in Information Systems from the National University of Singapore. She holds an MBA and a BBA from the University of Kentucky. Dr. Shaw has been a practitioner and consultant in the information systems industry for over 20 years. She has worked for AT&T and General Electric, and most recently as a Senior Systems Analyst for the Central Intelligence Agency. She also served as a Military Intelligence Officer in the U.S. Army Reserves during the Persian Gulf War. Currently she is an Assistant Professor of Information Systems at George Mason University in Fairfax, Virginia. Her current research interests include end-user computing support and knowledge management.

Paul Solomon has earned degrees from Penn State (BS), the University of Washington (MBA), and the University of Maryland (MLS, PhD). He worked in the private sector and with the federal government before joining the School of Information and Library Science, University of North Carolina at Chapel Hill in 1991, where he is currently Associate Professor and Associate Dean. He spent the 1996-1997 academic year as Fulbright Professor at the University of Tampere, Finland. His current research focuses on how people create information as they engage in work and life, and the influence of information structures on information-related behavior.
Diane H. Sonnenwald is an Associate Professor at the University of North Carolina at Chapel Hill. She has published over 40 journal papers, conference papers, and book chapters; given more than 80 presentations throughout the world; and received approximately 20 grants from national and state research agencies and corporations. Currently she leads the Social Science Research Team of the NSF Science and Technology Center for Environmentally Responsible Solvents and Processes, and the nanoManipulator Collaboratory Design and Evaluation Project (with M. Whitton) at the NIH National Computing Research Resource at the University of North Carolina. Jayavel Sounderpandian is Professor of Operations Management at the University of Wisconsin-Parkside. He has a strong interest in the use of technology in industry and academia. He has published in, among others, Operations Research, Interfaces, Abacus, Journal of Risk and Uncertainty, and International Journal of Production Economics. He holds a doctorate in business administration from Kent State University. He can be reached at
[email protected].

Ram Subramanian is Associate Professor of Management at Grand Valley State University. He teaches the strategic management course at both the undergraduate and the MBA levels. His research has been published in Journal of Management, Journal of Business Research, and Management International Review, among others.

Edward Szczerbicki has had very extensive experience in the theory of information, autonomous systems analysis, and decision support systems development over an uninterrupted 25-year period, 13 years of which he spent in top systems research centers in the USA, UK, Germany, and Australia. He has published over 160 refereed papers, 98 of which appeared in international journals covering the areas of systems science, decision support, and autonomous systems modeling and simulation. His DSc degree (1993) was gained in the area of the theory of information flow for autonomous systems. His PhD (1983) was gained in uncertainty modeling for design, and his MSc (1976) in engineering management. He is now with the University of Newcastle, Australia.

Colin K. S. Tam is Business Development Manager (Business Equipment and Personal Products Division) for Johnson Electric Manufacturing Ltd. (JE), which is the world’s second largest micro-motor manufacturer. He has over six years of experience in leading engineering project development for most major multimedia and business equipment corporations in Asia. He has personally developed the prototype of the Internet-based knowledge system with QFD and successfully implemented it for JE.

Antonio Torres-Perez is a Lecturer at the University of Valencia; he has published several articles in Spanish journals. He has participated in research projects led by Isidre March and holds wide experience as a consultant in the field of information technology management. He works as Chief Information Officer in a medium-sized company in Spain.
Dothang Truong is a PhD student in Manufacturing Management, in the College of Business Administration at the University of Toledo, Ohio, USA. He received the bachelor’s degree of Engineering from Hanoi University of Technology, Hanoi, Vietnam, and the master’s degree of Business Administration from the Asian Institute of Technology, Bangkok, Thailand. He taught in the Department of Economics and Management at Hanoi University of Technology for three years before coming to the PhD program in Manufacturing Management at the University of Toledo. His research interests are e-commerce management, business-to-business e-marketplaces, management information systems, and supply chain management. Francis D. (Doug) Tuggle is in transition to become Professor and Dean of the George L. Argyros School of Business and Economics at Chapman University in Orange, California. Previously, he was Professor and Dean of the Kogod School of Business at American University in Washington, DC; Jesse H. Jones Professor of Management and Dean of the Jones Graduate School of Management at Rice University in Houston, Texas; and Professor of Computer Science and Professor of Business Administration at the University of Kansas in Lawrence, Kansas. He has written two books, and he has more than 50 refereed publications in outlets such as Management Science, Academy of Management Journal, Academy of Management Review, and Interfaces. He sits on several corporate boards of directors and consults in the field of knowledge management. He has his BS degree from MIT, and his MS and PhD degrees from Carnegie Mellon University. Leoni Warne is a Senior Research Scientist within the Defence Systems Analysis Division of the Defence Science and Technology Organisation (DSTO) in Australia. She works in the Joint Systems Branch where she is the Task Manager and Team Leader of the research team responsible for researching and developing enterprise social learning architectures. Dr. Warne has been with DSTO for three years. Prior to this, she spent 10 years lecturing in Information Systems at the University of Canberra. Dr. Warne’s research is primarily focused on the social and organisational aspects of information systems. T. T. Wong is an Associate Professor of the Department of Mechanical Engineering at The Hong Kong Polytechnic University. He is a Local Representative of the Hong Kong Institution of Engineers and the Honorary Secretary of the Hong Kong Institute of Marine Technology. His current research interests include data mining, heat transfer enhancement, laser surface treatment of alloys, recycling of plastics, maintenance engineering, and management. He also serves as an editorial board member of the International Journal of Plant Engineering and Management.
Index

A
adoption 251; adoption of KMS 83; adoption of technological change 73; agile/virtual 169; agile/virtual enterprise 170; agile/virtual enterprise dynamic integration 170; around-the-clock development 311; assessment process 84; attracting consumers 198; autonomous agent 94

B
B2B online exchanges 53; bulletin boards 272; business alignment 173; business process reengineering 73; business value 53; business-consumer interaction 193; business-consumer relationships 193, 208

C
capacity management 55; career path 331; case studies 73; channel interactions 202; chief knowledge officers (CKOs) 104; Cisco Systems, Inc. 1; collaboration 115; common identity 328; communication 344; communication systems 197; communication technologies 285; competitive advantage 18, 34; complex systems 89; computer integrated manufacturing 93; cross-functional teams 106; culture issues 251; customer intimacy 207; customer loyalty 197; customer relationship management (CRM) 195, 205; customer relationship management system 85; customer relationships 53; customer situation analysis 65; customer-company interactions 203; customization 204

D
data fusion 206; data maps 105; data mining technologies 153; data warehouse 160, 197; database system 85; databases 61; debit card transactions 241; decision support system 31; diffusion 251; distance learning situations 119; dynamic networks 138

E
e-commerce 193, 251, 254; e-mail message 123; e-mail policy 63; e-mail spam 272; effectiveness 106; efficiency 106; electronic check technology 239; electronic commerce 53, 268; electronic money 214; electronic notational money 217; electronic purse technology 225; electronic reader 215; empowered consumers 200; enablers 327; enterprise information portals 195, 197; enterprise resource planning 195; enterprise resource planning system 85; entrepreneurial culture 40; environmental factors 257; ethnography 326; executive information system 31

F
fair play 210; feedback mechanisms 209; firewall 295; four layer framework 251; fraud 268; fraudulent activities 268

G
gender 329; geographically distributed 116; global software team 306, 307; government policies 257; group interaction 115

H
hacking 270; handling returns 207; hierarchical communication model 93; human capital empowerment 40; human resources 306; human-to-human interaction 139

I
ICDT framework 251; induction 347; inexperienced software team 307; information and communication technology 283; information diversity 90; information flow 89; information processing theory 106; information technology 1, 268; innovation 1; integrated software 300; integration 90; inter-organizational information system 52; interaction 123; interconnectivity 253; Internet 268; Internet-based interactive technologies 209; IT crisis 321

K
knowledge 1, 15; knowledge acquisitions 60; knowledge base 209; knowledge brokers 15; knowledge creation 138; knowledge layer 61; knowledge management (KM) 14, 73, 193, 309; knowledge management architecture 196; knowledge repositories 105; knowledge-sharing networks 81; knowledge society 284; knowledge transfer 142

L
language 330; leadership 335; local area network 196, 287; Lotus Domino Notes 62

M
macro perspective 253; management information system 31; management of information 89; manpower-shortage problem 305; manufacturing technologies 56; market of resources 170; meetings 340; mentoring 343; military 327; morale 331; motivators 327; multi-user database 19

N
network security 295; neural network 161; niche marketing 201

O
on-line analytical processing 160; on-line services 209; one-to-one customer marketing 201; online auction sites 277; online customers 198; operational management level 90; organizational behavior 17; organizational competitiveness 14; organizational culture 72, 74; organizational learning 126; organizational processes 18; organizational structures 1; outsourced information system development 291; outsourcing 153

P
partial expertise 106; pattern theory 318; PayPal 243; performance management 336; personal networks 332; personalized products and services 209; personnel turnover 308; plagiarism-based programming 307; point of sale 225; privacy 206; problem domain 106; problem-solving teams 105; professional currency 342

R
reach customers 201; realignment 253; records keeping 338; reengineering business processes 254; reflection 334; resource-based strategy 16; response time 204; retain customers 203; risk-management 306

S
science research teams 121; securities fraud 268, 271; security concerns 206; segmentation 201; self-service technologies (SSTs) 201; small- and medium-sized enterprise 285; smart card 300; smart card technology 216; social learning 325; socio-technical approach 120; software innovation 80; software upgrades 79; strategic business plan 35; strategic planning model 38; system security 295

T
taxing e-commerce 258; team composition 106; team knowledge 106; telecommunication 283; telecommunications infrastructure 129; total quality management 73; training 343; trust 155, 156, 288

U
understanding consumers 198; user interface 200

V
value chain 33, 154, 254; value-added services 199; videoconference rooms 124; videoconferences 116; virtual collaborations 254; virtual communication space 255; virtual distribution space 256; virtual enterprises 153, 154; virtual organization 141; virtual partnerships 143; virtual software team 313; virtual teams 138, 139; virtual transaction space 256

W
web services technologies 202; web-based transactions 243; workflow management 55; workplace design 331
Knowledge Management and Business Model Innovation
Yogesh Malhotra, PhD, Syracuse University, USA
ISBN: 1-878289-98-5; Copyright: 2001; Pages: 464 (h/c); Price: US $149.95; Available: Now!

Knowledge Media and Healthcare: Opportunities and Challenges
Rolf Grutter, PhD, University of St. Gallen, Switzerland
ISBN: 1-930708-13-0; eISBN: 1-59140-006-6; Copyright: 2002; Pages: 296 (h/c); Price: US $74.95; Available: Now!

Internet-Based Organizational Memory and Knowledge Management
David G. Schwartz, PhD, Bar-Ilan University, Israel; Monica Divitini, PhD, Norwegian University of Science and Technology; Terje Brasethvik, PhD, Norwegian University of Science and Technology
ISBN: 1-878289-82-9; eISBN: 1-930708-63-7; Copyright: 2000; Pages: 269 (s/c); Price: US $119.95; Available: Now!

Knowledge Management and Virtual Organizations
Yogesh Malhotra, PhD, Syracuse University, USA
ISBN: 1-878289-73-X; eISBN: 1-930708-65-3; Copyright: 2000; Pages: 450 (h/c); Price: US $149.95; Available: Now!

Visit the Idea Group Online bookstore at www.idea-group.com for detailed information on these titles!
NEW Knowledge Management books from IGI!
Recommend these KM books to your library!