Intelligent Support Systems Technology: Knowledge Management

Vijayan Sugumaran

IRM PRESS
Intelligent Support Systems: Knowledge Management

Vijayan Sugumaran, Ph.D.
Oakland University, USA
IRM Press
Publisher of innovative scholarly and professional information technology titles in the cyberage
Hershey • London • Melbourne • Singapore • Beijing
Acquisitions Editor: Mehdi Khosrow-Pour
Managing Editor: Jan Travers
Assistant Managing Editor: Amanda Appicello
Copy Editor: Jane Conley
Cover Design: Tedi Wingard
Printed at: Integrated Book Technology
Published in the United States of America by
IRM Press
1331 E. Chocolate Avenue
Hershey PA 17033-1117, USA
Tel: 717-533-8845
Fax: 717-533-8661
E-mail: [email protected]
Web site: http://www.irm-press.com

and in the United Kingdom by
IRM Press
3 Henrietta Street
Covent Garden
London WC2E 8LU
Tel: 44 20 7240 0856
Fax: 44 20 7379 3313
Web site: http://www.eurospan.co.uk

Copyright © 2002 by Idea Group, Inc. All rights reserved. No part of this book may be reproduced in any form or by any means, electronic or mechanical, including photocopying, without written permission from the publisher.

Library of Congress Cataloguing-in-Publication Data
Intelligent support systems : knowledge management / [edited by] Vijay Sugumaran.
p. cm.
Includes bibliographical references and index.
ISBN 1-931777-00-4 (paper)
1. Database management. 2. Artificial intelligence. 3. Knowledge management. I. Sugumaran, Vijay, 1960-
QA76.9.D3 I5495 2002
006.3--dc21
2002017310

eISBN: 1-931777-19-5

British Cataloguing-in-Publication Data
A Cataloguing-in-Publication record for this book is available from the British Library.
Other New Releases from IRM Press
• Effective Healthcare Information Systems, Adi Armoni (Ed.) / ISBN: 1-931777-01-2 / eISBN: 1-931777-20-9 / approx. 340 pages / US$59.95 / © 2002
• Human Computer Interaction Development and Management, Tonya Barrier (Ed.) / ISBN: 1-931777-13-6 / eISBN: 1-931777-35-7 / approx. 336 pages / US$59.95 / © 2002
• Data Warehousing and Web Engineering, Shirley Becker (Ed.) / ISBN: 1-931777-02-0 / eISBN: 1-931777-21-7 / approx. 334 pages / US$59.95 / © 2002
• Information Technology Education in the New Millennium, Mohammad Dadashzadeh, Al Saber and Sherry Saber (Eds.) / ISBN: 1-931777-05-5 / eISBN: 1-931777-24-1 / approx. 308 pages / US$59.95 / © 2002
• Information Technology Management in Developing Countries, Mohammad Dadashzadeh (Ed.) / ISBN: 1-931777-03-9 / eISBN: 1-931777-23-3 / approx. 348 pages / US$59.95 / © 2002
• Strategies for eCommerce Success, Bijan Fazlollahi (Ed.) / ISBN: 1-931777-08-7 / eISBN: 1-931777-29-2 / approx. 352 pages / US$59.95 / © 2002
• Collaborative Information Technologies, Mehdi Khosrow-Pour (Ed.) / ISBN: 1-931777-14-4 / eISBN: 1-931777-25-X / approx. 308 pages / US$59.95 / © 2002
• Web-Based Instructional Learning, Mehdi Khosrow-Pour (Ed.) / ISBN: 1-931777-04-7 / eISBN: 1-931777-22-5 / approx. 322 pages / US$59.95 / © 2002
• Modern Organizations in Virtual Communities, Jerzy Kisielnicki (Ed.) / ISBN: 1-931777-16-0 / eISBN: 1-931777-36-5 / approx. 316 pages / US$59.95 / © 2002
• Enterprise Resource Planning Solutions and Management, Fiona Fui-Hoon Nah (Ed.) / ISBN: 1-931777-06-3 / eISBN: 1-931777-26-8 / approx. 308 pages / US$59.95 / © 2002
• Interactive Multimedia Systems, Syed M. Rahman (Ed.) / ISBN: 1-931777-07-1 / eISBN: 1-931777-28-4 / approx. 314 pages / US$59.95 / © 2002
• Ethical Issues of Information Systems, Ali Salehnia (Ed.) / ISBN: 1-931777-15-2 / eISBN: 1-931777-27-6 / approx. 314 pages / US$59.95 / © 2002
• Human Factors in Information Systems, Edward Szewczak and Coral Snodgrass (Eds.) / ISBN: 1-931777-10-1 / eISBN: 1-931777-31-4 / approx. 342 pages / US$59.95 / © 2002
• Global Perspective of Information Technology Management, Felix B. Tan (Ed.) / ISBN: 1-931777-11-4 / eISBN: 1-931777-32-2 / approx. 334 pages / US$59.95 / © 2002
• Successful Software Reengineering, Sal Valenti (Ed.) / ISBN: 1-931777-12-8 / eISBN: 1-931777-33-0 / approx. 330 pages / US$59.95 / © 2002
• Information Systems Evaluation Management, Wim van Grembergen (Ed.) / ISBN: 1-931777-18-7 / eISBN: 1-931777-37-3 / approx. 336 pages / US$59.95 / © 2002
• Optimal Information Modeling Techniques, Kees van Slooten (Ed.) / ISBN: 1-931777-09-8 / eISBN: 1-931777-30-6 / approx. 306 pages / US$59.95 / © 2002
• Knowledge Mapping and Management, Don White (Ed.) / ISBN: 1-931777-17-9 / eISBN: 1-931777-34-9 / approx. 340 pages / US$59.95 / © 2002
Excellent additions to your institution's library! Recommend these titles to your Librarian!

To receive a copy of the IRM Press catalog, please contact (toll free) 1/800-345-4332, fax 1/717-533-8661, or visit the IRM Press Online Bookstore at http://www.irm-press.com!

Note: All IRM Press books are also available as ebooks on netlibrary.com as well as other ebook sources. Contact Ms. Carrie Stull at [email protected] to receive a complete list of sources where you can obtain ebook information or IRM Press titles.
Intelligent Support Systems: Knowledge Management Table of Contents
Foreword ........................................................................................... vii
    Vijayan Sugumaran, Oakland University, USA
Preface ............................................................................................... x
Chapter 1. Intelligent Agents and the World Wide Web: Fact or Fiction? ............ 1
    Sudha Ram, University of Arizona, USA
Chapter 2. Comparing U.S. and Japanese Companies on Competitive Intelligence, IS Support and Business Change ............................................................. 4
    Tor Guimaraes, Tennessee Technological University, USA
    Osamu Sato, Tokyo Keizai University, Japan
    Hideaki Kitanaka, Takushoku University, Japan
Chapter 3. Knowledge Assets in the Global Economy: Assessment of National Intellectual Capital ........................................................................... 22
    Yogesh Malhotra, @Brint.com and Syracuse University, USA
Chapter 4. Knowledge-Based Systems as Database Design Tools: A Comparative Study ................................................................................................ 43
    W. Amber Lo, Millersville University and Knowledge-Based Systems, Inc., USA
    Joobin Choobineh, Texas A&M University, USA
Chapter 5. Policy-Agents to Support CSCW in the Case of Hospital Scheduling ... 72
    Hans Czap, University of Trier, Germany
Chapter 6. Building an Agent: By Example ............................................... 84
    Paul Darbyshire, Victoria University of Technology, Australia
Chapter 7. Intelligent Agents in a Trust Environment .................................. 98
    Rahul Singh, University of North Carolina, Greensboro, USA
    Mark A. Gill, Arizona State University, USA
Chapter 8. A Case Study on Forecasting of the Return of Scrapped Products through Simulation and Fuzzy Reasoning ............................................... 109
    Jorge Marx-Gómez and Claus Rautenstrauch, Otto-von-Guericke-University, Magdeburg, Germany
Chapter 9. Newshound Revisited: The Intelligent Agent That Retrieves News Postings ................................................................................... 124
    Jeffrey L. Goldberg, Analytic Services Inc. (ANSER), USA
    Shijun S. Shen, Tygart Technology, Inc., USA
Chapter 10. Investigation into Factors That Influence the Use of the Web in Knowledge-Intensive Environments ........................................................ 135
    Yong Jin Kim and H. Raghav Rao, SUNY at Buffalo, USA
    Abhijit Chaudhury, Bryant College, USA
Chapter 11. A Study of Web Users' Waiting Time ..................................... 145
    Fiona Fui-Hoon Nah, University of Nebraska-Lincoln, USA
Chapter 12. Stickiness: Implications for Web-Based Customer Loyalty Efforts ... 153
    Supawadee Ingsriswang and Guisseppi Forgionne, University of Maryland, Baltimore, USA
Chapter 13. "Not" is Not "Not" Comparisons of Negation in SQL and Negation in Logic Programming ......................................................................... 164
    James D. Jones, University of Arkansas at Little Rock, USA
Chapter 14. Knowledge Management and New Organization Forms: A Framework for Business Model Innovation .............................................. 177
    Yogesh Malhotra, @Brint.com, L.L.C. and Florida Atlantic University, USA
Chapter 15. Implementing Virtual Organizing in Business Networks: A Method of Inter-Business Networking ................................................................ 200
    Roland Klueber, Rainer Alt and Hubert Osterle, University of St. Gallen, Switzerland
Chapter 16. Managing Knowledge for Strategic Advantage in the Virtual Organization .................................................................................... 225
    Janice M. Burn and Colin Ash, Edith Cowan University, Australia
Chapter 17. Virtual Organizations That Cooperate and Compete: Managing the Risks of Knowledge Exchange ............................................................. 248
    Claudia Loebbecke, Copenhagen Business School, Denmark
    Paul C. van Fenema, Erasmus University, The Netherlands
Chapter 18. Becoming Knowledge-Powered: Planning the Transformation ....... 274
    Dave Pollard, Ernst & Young, Canada
About the Editor ............................................................................... 296
Index ................................................................................................ 297
Foreword

Organizations use a variety of computer-based systems such as management information systems, decision-support systems and executive information systems to support decision making. These systems deliver business data and information in a highly aggregated form. However, they have not been able to keep up with the new flood of information, particularly with the explosion in the amount of data being generated, stored, accessed and processed by the ubiquitous Internet technologies. This information overload, coupled with competitive pressures, signals the need for "intelligent support systems" that can minimize the cognitive load on knowledge workers and decision makers. In addition, fierce competition, globalization, and the digital economy have forced organizations to search for new ways to improve customer satisfaction and competitive advantage. This has created tremendous pressure on businesses to minimize cost, increase quality, and reduce time-to-market for products to meet customer demand. In order to satisfy these objectives, businesses are reorganizing themselves into smaller, more efficient units by pruning the organizational hierarchy and becoming decentralized. Consequently, there is a great need for improving communication and information flow, and for providing decision-making capabilities to sites that have to respond quickly to market changes.

Organizations are increasingly turning to technologies to support their problem-solving and decision-making activities. To gain dramatic improvement in organizational productivity, emerging information technologies (such as intelligent agents) are being applied to create a cooperative and group-based work environment. Although artificial intelligence (AI) technologies such as expert systems and neural networks have been successfully used in aerospace, communication, medicine, finance, etc., they have not made a significant impact on improving overall productivity due to their narrow scope. In contrast, the new breed of "intelligent support system technologies" holds greater potential in that it can be applied to a large number of domains and a diverse set of problems. For example, a generic intelligent agent-based application can be customized for different domains and a variety of problem scenarios.

Intelligent support systems are generally characterized as systems that help users in carrying out difficult tasks by minimizing complexity and, hence, the users' cognitive load. These systems have a learning component and gain "experience" over time. They respond to changes in the environment and new situations with minimal human intervention. They are context sensitive and capable of making sense out of ambiguous or contradictory information. They also maintain user profiles
including user preferences and previous actions, and serve as a tutor, critic, consultant or advisor by providing suggestions and/or courses of action to take. These systems exhibit "intelligent" behavior by dealing with complex situations and applying their knowledge to manipulate the environment by recognizing the relative importance of different elements within a problem scenario. The following is a partial list of enabling technologies that are used in creating intelligent support systems: a) intelligent agents, b) data mining and knowledge discovery, c) data warehousing, d) fuzzy computing, e) neural networks, f) machine learning, g) client-server and Web technologies, h) business components, i) Java and XML technologies, and j) evolutionary algorithms. This book discusses the various aspects of designing and implementing intelligent support systems using one or more of the aforementioned technologies.

Intelligent agent technology is finding its way into many new systems, including decision-support systems, where it performs many of the necessary decision-support tasks formerly assigned to humans. Agents are loosely defined as "software entities that have been given sufficient autonomy and intelligence to enable them to carry out specified tasks with little or no human supervision." Software agents are useful in automating laborious and repetitive tasks, such as locating and accessing necessary information, filtering away irrelevant and unwanted information, intelligently summarizing complex data, and integrating information from heterogeneous information sources. Like their human counterparts, intelligent agents can have the capability to learn from their managers and even make recommendations to them regarding a particular course of action. Generally, agents are designed to be goal driven, i.e., they are capable of creating an agenda of goals to be satisfied.

Organizations are investing heavily in systems that help capture and manage Business Intelligence (BI). One technology used to generate BI is data mining and knowledge discovery. Data mining applications are coming to the forefront of business data analysis and decision making. However, to successfully execute these applications, a significant amount of a priori knowledge is required about data mining techniques, their applicability to different scenarios, relevant data selection and transformation, etc. Hence, for a casual user interested in deciphering trends and buying behaviors from customer "digital footprint" data, being shielded from some of the nuances of normal data mining operations would be a welcome change. Intelligent agent technology can play a major role in the design and development of such data mining systems, particularly in hiding the complexity and implementing a scalable system. For example, an "interface agent" can assist decision makers (users) in performing actions on a data warehouse that they cannot, or prefer not to, perform themselves.

Thus, intelligent agent technology is emerging as one of the most important and rapidly advancing areas in support system technologies. A number of agent-based applications and multi-agent systems are being developed in a variety of fields, such as electronic commerce, supply chain management, resource allocation, intelligent manufacturing, mass customization, industrial control, information retrieval
and filtering, collaborative work, mobile commerce, decision support, and computer games. While research on various aspects of intelligent agent technology and its application is progressing at a very fast pace, there are still a number of issues that have to be explored in terms of agent design, implementation, integration, and deployment. For example, identifying salient characteristics of agents in different domains, developing formal approaches for agent-oriented modeling, designing and implementing agent-oriented information systems, collaborating and coordinating multi-agent systems, and analyzing the organizational impact of agent-based systems are some of the areas in need of further research.

Intelligent support system technologies will attain a permanent place in industry and will be deployed for the purpose of increasing industrial productivity in many roles, such as assistants to human operators and autonomous decision-making components of complex systems. One can easily envision a world filled with millions of knowledge agents where the boundary between human knowledge agents and machine agents is invisible. Intelligent agents have the potential to radically change the way organizational work is currently performed. Human agents can delegate a range of tasks to personalized software agents that can not only make decisions based on the criteria provided by their human counterparts, but also model the reasoning, action, communication, and collaboration skills involved in performing human job functions. Capturing organizational knowledge in a reusable form, and designing intelligent agents with access to this corporate knowledge, are going to revolutionize the organizational work environment in the near future.
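As an editorial illustration (not from the book), the characterization above can be made concrete with a minimal sketch of an agent skeleton; every class and method name here is hypothetical:

```python
# Hypothetical sketch of the properties described above: a user profile
# (preferences and previous actions), a learning hook that accumulates
# "experience" from feedback, and an advisory method that ranks suggestions.
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    preferences: dict = field(default_factory=dict)  # item -> learned weight
    history: list = field(default_factory=list)      # previous user actions

class SupportAgent:
    def __init__(self, profile: UserProfile):
        self.profile = profile

    def observe(self, action: str) -> None:
        """Record a user action so later advice can be context sensitive."""
        self.profile.history.append(action)

    def learn(self, item: str, feedback: float) -> None:
        """Nudge the stored preference weight toward the feedback signal."""
        w = self.profile.preferences.get(item, 0.0)
        self.profile.preferences[item] = w + 0.1 * (feedback - w)

    def suggest(self, candidates: list) -> list:
        """Act as an advisor: rank candidate actions by learned preference."""
        return sorted(candidates,
                      key=lambda c: self.profile.preferences.get(c, 0.0),
                      reverse=True)

agent = SupportAgent(UserProfile())
agent.observe("opened-sales-dashboard")
agent.learn("quarterly-report", 1.0)
print(agent.suggest(["raw-logs", "quarterly-report"]))  # report ranked first
```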
Vijayan Sugumaran
Department of DIS
School of Business Administration
Oakland University
Rochester, MI 48309
Preface
The Internet and associated technologies are playing an ever-increasing role in the lives of businesses and academic organizations. As these technologies grow in use, organizations are striving to use them more effectively. Intelligent Support Systems play an important role in developing competitive technologies in the Internet world. Additionally, knowledge capture, use and sharing are extremely timely issues for businesses as they deal with virtual communities and virtual organizations. In order to get the most from these emerging technologies and understand how best to manage knowledge, academics, researchers and practitioners must have access to the latest information describing the most current research and best practices in the use and development of these technologies. This book provides just that.

The chapters in this timely new book are a compilation of research on how to develop and implement information systems. Additionally, the authors tackle the difficult topics of defining virtual organizations and utilizing them to gain advantage. Furthermore, the chapters describe optimal knowledge management techniques and practices. The authors represent a wide variety of organizational and cultural backgrounds and share their insights in the following chapters.

Chapter 1, "Intelligent Agents and the World Wide Web: Fact or Fiction?" by Sudha Ram of University of Arizona (USA), proposes that collaborative multi-agent systems are a very promising approach for managing information overload. The author indicates that it will be necessary to move beyond the current Web interaction paradigm of direct manipulation to indirect management of the Web, and describes how multi-agent systems have the capabilities to make this transition more smoothly.

Chapter 2, "Comparing U.S. and Japanese Companies on Competitive Intelligence, IS Support and Business Change" by Tor Guimaraes of Tennessee Technological University (USA), Osamu Sato of Tokyo Keizai University and Hideaki Kitanaka of Takushoku University (Japan), reports on the findings of a field test of how effectively U.S. and Japanese business organizations are identifying strategic problems and opportunities, and how effectively they implement business changes and use IS technology to do so.

Chapter 3, "Knowledge Assets in the Global Economy: Assessment of National Intellectual Capital" by Yogesh Malhotra of Syracuse University (USA), discusses the developing need for assessing knowledge capital at the national economic level. The chapter further reviews a national case study of how intellectual capital assessment was undertaken, suggests implications of such assessment methods, and offers areas needing advancement.
Chapter 4, "Knowledge-Based Systems as Database Design Tools: A Comparative Study" by W. Amber Lo of Millersville University and Knowledge-Based Systems, Inc. and Joobin Choobineh of Texas A&M University (USA), surveys tools used in prototype database design and compares these tools with respect to four aspects: database design support, tool flexibility, expert system features and implementation characteristics. The results of the study indicate that, in general, there is a lack of support for all phases of design, for group database design, for graphic support, for empirical verification of the tools' effectiveness, for long-term maintenance of the tools, and for specialized knowledge representation.

Chapter 5, "Policy Agents to Support CSCW in the Case of Hospital Scheduling" by Hans Czap of University of Trier (Germany), demonstrates the concept of a policy agent used in hospital scheduling. This agent is able to represent individual preferences and goals, and thus may act as a personal assistant to support solving standard problems like operating room scheduling. The chapter demonstrates the representation of preferences and goals in order to make adaptations to changes in the environment and shows how the interaction works.

Chapter 6, "Building an Agent: By Example" by Paul Darbyshire of Victoria University of Technology (Australia), is written in response to the growing number of people who are interested in the emerging Web-based technologies and desire to build their own agents. This chapter demonstrates the problems of building an agent using the example of an email helper.

Chapter 7, "Intelligent Agents in a Trust Environment" by Rahul Singh of University of North Carolina, Greensboro, and Mark Gill of Arizona State University (USA), addresses the need for intelligent agents to include a mechanism for handling trust. The chapter then discusses how the agents can be used as intermediaries in electronic commerce. This work responds to the growing use of the Internet for commerce and banking activities and acknowledges the important role trust plays in online transactions.

Chapter 8, "A Case Study on Forecasting of the Return of Scrapped Products through Simulation and Fuzzy Reasoning" by Jorge Marx-Gómez and Claus Rautenstrauch of Otto-von-Guericke-University, Magdeburg (Germany), suggests a method to forecast the timing and quantities of scrapped products. The method combines a simulation approach with fuzzy reasoning. The prediction model presented is based on life-cycle data, such as sales figures and failures, and impact factors, such as lifetime wear and tear. The chapter presents the results of an empirical study wherein the model was used with life-cycle data of photocopiers to forecast the returns.

Chapter 9, "Newshound Revisited: The Intelligent Agent That Retrieves News Postings" by Jeffrey Goldberg of Analytic Services (ANSER) and Shijun Shen of Tygart Technology (USA), reports on the authors' experiences implementing an Intelligent Internet Agent, Newshound. Newshound can be trained to recognize a desired topic and scan Usenet newsgroups looking for new examples of that topic. The chapter also introduces two additional intelligent agents: Chathound and Webhound. Finally, the authors discuss the inter-agent communication layer, the facilitator for cooperation between ANSER's intelligent agents.
Chapter 10, "Investigation into Factors That Influence the Use of the Web in Knowledge-Intensive Environments" by Yong Jin Kim and H. Raghav Rao of SUNY at Buffalo and Abhijit Chaudhury of Bryant College (USA), develops a set of hypotheses regarding the relationship between the Technology Acceptance Model (TAM) constructs and external variables. The study reported here gives insights into the questions of when to implement a new technology and which users are eager to learn new technologies. The chapter is also one of the first papers to use TAM in the context of knowledge-management systems.

Chapter 11, "A Study of Web Users' Waiting Time" by Fiona Fui-Hoon Nah of University of Nebraska-Lincoln (USA), evaluates Nielsen's hypothesis of 15 seconds as the maximum waiting time of Web users and provides approximate distributions of waiting time for Web users. The chapter discusses the literature on waiting time and reports on a study conducted by the author. The chapter recommends that researchers and practitioners understand users' waiting time behavior, propose and evaluate techniques to reduce users' perception of waiting time, and recommend a trade-off between aestheticism of Web page design and download/access time.

Chapter 12, "Stickiness: Implications for Web-Based Customer Loyalty Efforts" by Supawadee Ingsriswang and Guisseppi Forgionne of University of Maryland (USA), applies the concept of customer loyalty in traditional businesses to digital products or services in order to describe a conceptual model of online stickiness. Using the conceptual model, the authors identify the measures that determine the stickiness of the Website and describe the applications of the stickiness value.

Chapter 13, "'Not' is Not 'Not' Comparisons of Negation in SQL and Negation in Logic Programming" by James Jones of University of Arkansas at Little Rock (USA), focuses on the expressive power of weak negation in logic programming. Weak negation is not presently well understood and is easily confused with negation in SQL. The author describes weak negation and, to a lesser extent, discusses strong negation in logic programming.

Chapter 14, "Knowledge Management and New Organization Forms: A Framework for Business Model Innovation" by Yogesh Malhotra of Syracuse University, proposes a sense-making model of knowledge management for new business environments. The chapter then applies this framework in order to facilitate the business model innovations necessary for sustainable competitive advantage in the new business environment, characterized by a dynamic, discontinuous and radical pace of change.

Chapter 15, "Implementing Virtual Organizing in Business Networks: A Method of Inter-Business Networking" by Roland Klueber, Rainer Alt and Hubert Österle of University of St. Gallen (Switzerland), describes a method that addresses the need for a holistic view and for methods that support the implementation of business networks. The method described includes the dimensions of strategy, process and IS required for establishing and managing business networks. The authors describe
a project implementing a business-networking solution for electronic procurement. The scenario described shows how a structured approach helps to identify scenarios, aids in implementation, and applies previously created and newly created knowledge.

Chapter 16, "Managing Knowledge for Strategic Advantage in the Virtual Organization" by Janice Burn and Colin Ash of Edith Cowan University (Australia), looks at the virtual organization and suggests that the basic concepts of virtual management are so poorly understood that there are likely to be very few such organizations gaining strategic advantage from their virtuality. The authors provide clear definitions of virtual organizations and the different models of virtuality that can exist. The chapter presents six virtual models within a dynamic framework of change and offers specific examples applying the models to organizations.

Chapter 17, "Virtual Organizations That Cooperate and Compete: Managing the Risks of Knowledge Exchange" by Claudia Loebbecke of Copenhagen Business School (Denmark) and Paul van Fenema of Erasmus University (The Netherlands), explores the art of controlling knowledge flows in cooperative relationships. The chapter conceptualizes types of knowledge flows and dependencies, resulting in four configurations. The authors propose control strategies that allow companies engaged in cooperation to anticipate deviant trajectories and define adequate responses.

Chapter 18, "Becoming Knowledge-Powered: Planning the Transformation" by Dave Pollard of Ernst & Young (Canada), identifies possible strategies, leading practices and pitfalls to avoid in each phase of his award-winning process to transform the company from a knowledge-hoarding to a knowledge-sharing enterprise. The chapter describes the challenges involved in identifying and measuring intellectual capital, encouraging knowledge creation, capturing human knowledge in structural form, and enabling virtual workgroup collaboration.

The role of intelligent agents in optimizing Website performance and development and in enhancing the security of Websites, as well as knowledge management's place in the virtual organization and in establishing and maintaining competitive business advantage, are just some of the timely topics contained in this important new book. The information contained herein will be useful to academics as they attempt to understand the theory of intelligent agent systems, to researchers as they attempt to evaluate the efficacy of these systems and understand the intricacies of the emerging field of virtual organizations, and to business people and practitioners as they strive to implement the most current, best practices in knowledge management, intelligent systems and virtual organizations. This book is a "must have" for all those who want to understand how to achieve and maintain competitive advantage in this increasingly virtual world.

IRM Press
January 2002
Chapter 1
Intelligent Agents and the World Wide Web: Fact or Fiction?

Sudha Ram, University of Arizona
(Previously published in the Journal of Database Management, vol. 12, no. 1. Copyright © 2001, Idea Group Publishing.)

We are fortunate to be experiencing an explosive growth and advancement in the Internet and the World Wide Web (WWW). In 1999, the global online population was estimated to be 250 million WWW users worldwide, while the number of pages on the Web was estimated at 800 million (http://www.internetindicators.com/facts.html). The bright side of this kind of growth is that information is available to almost anyone with access to a computer and a phone line. However, the dark side of this explosion is that we are now squarely in the midst of the "Age of Information Overload"! The staggering amount of information has made it extremely difficult for users to locate and retrieve information that is actually relevant to their task at hand. Given the bewildering array of resources being generated and posted on the WWW, the task of finding exactly what a user wants is rather daunting. Although many search engines currently exist to assist in information retrieval, much of the burden of searching is on the end-user. A typical search results in millions of hits, many of which are outdated, irrelevant, or duplicated. One promising approach to managing the information overload problem is to use "intelligent agents" for search and retrieval. This editorial explores the current status of intelligent agents and points out some challenges in the development of intelligent agent-based systems. An intelligent agent is a piece of software that performs a given task using information from its environment and acts in such a way that it can complete the given task successfully. Some desirable properties for such
agents are: autonomy, adaptability, mobility, and communication ability. To deal with complex real-world problems, it is desirable to have different types of agents, each specializing in a different type of task, collaborate with one another to solve a problem. Given the number of sources of information on the Web, using a network of collaborating agents is bound to ease the task of information discovery and retrieval and therefore appears to be very promising. The Virtual Enterprise Model of collaborating agents (see Figure 1) uses software agents of three different kinds: demand, supply, and broker agents, who interact with each other to supply answers to users. Such systems are also known as multi-agent systems. Demand agents interact with end-users to determine their background and understand their information needs. Supply agents understand specific sources of information and advertise their "information wares". Broker agents interact with demand and supply agents to match the needs of end-users with what is available.

[Figure 1: Virtual Enterprise Model. End-users interact with demand agents; demand agents interact with broker agents; broker agents interact with supply agents, which draw on the underlying information sources.]

For the virtual enterprise model to be successfully deployed, it is essential for the agents to understand and communicate with each other. This requires a common ontology that the agents can use to facilitate interaction. An ontology is a set of terms or vocabulary that describes a subject area. It includes a description of how the terms are related to each other. A number of systems based on the virtual enterprise model are currently being designed and tested to handle the "information overload" problem. Infosleuth and Warren are two such systems that provide information finding, filtering and integration functions in the context of helping a user manage his or her financial portfolio. Such systems consist of agents that cooperatively self-organize to monitor stock quotes, financial news, financial
analysis reports and company earnings reports. The agents also continuously filter incoming news flashes to alert users about events that may affect their portfolios. While such systems exist as prototypes, I believe a number of key research challenges need to be addressed to make them truly useful in the real world. These include:

1. Semantic Heterogeneity: As stated earlier, there is a staggering number of sources of information available on the Web. Most of these are textual or unstructured sources. The semantic heterogeneity problem has been addressed successfully in the context of structured data sources such as relational or object-relational databases. However, the Web poses a new problem. We need solutions to automatically detect and resolve semantic heterogeneity in an unstructured environment. Mediators (a type of broker agent) may be one way to tackle this challenge.

2. Support for Dynamic Evolution of Information Sources: A major problem with the Web is that it is continuously evolving, i.e., new information sources are being added and existing ones removed. This evolution is exacerbated by the problem of the sources themselves changing over time. A comprehensive set of techniques to keep track of information sources and their changes (via brokers or supply agents) needs to be developed.

3. Scalability and Performance: The Web presents an unprecedented scale because of its sheer size and number of sources. For a multi-agent system to be effective in light of this large scale, we need to address important questions such as: (a) How many types of collaborating agents are necessary? (b) How many instances of each type of agent will be necessary to provide quick responses? (c) How do we minimize the amount of communication to provide adequate response times to users?

4. Generalization across Application Domains: Current prototype multi-agent systems are built to address specific domains such as financial portfolio analysis and technology tracking. However, it remains to be seen how these systems can be adapted and reused for application domains other than the ones for which they were originally designed.

In conclusion, I believe collaborative multi-agent systems are a very promising approach for managing the information overload problem. However, given the growth rate of the WWW, we have to move beyond our current dominant Web interaction paradigm of "direct manipulation" to "indirect management" of the WWW. Multi-agent systems provide us with the capability to make this transition, provided we can tackle the challenges presented in this article. I exhort the information systems research community to respond to these challenges and help eradicate the information overload problem.
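As a reading aid appended by the editor (not part of the original editorial), the following sketch illustrates the demand/broker/supply pattern and the shared ontology described above; all class names, topic terms, and data are hypothetical.

```python
# Hypothetical sketch of the Virtual Enterprise Model: demand agents state
# user needs, supply agents advertise their "information wares", and a
# broker matches the two. The shared ontology is reduced here to a common
# vocabulary of topic terms that every agent must use.
ONTOLOGY = {"stock-quote", "financial-news", "earnings-report"}

class SupplyAgent:
    def __init__(self, name, topics):
        assert topics <= ONTOLOGY, "agents must speak the common ontology"
        self.name, self.topics = name, topics

class DemandAgent:
    def __init__(self, user, needs):
        assert needs <= ONTOLOGY, "agents must speak the common ontology"
        self.user, self.needs = user, needs

class BrokerAgent:
    def __init__(self):
        self.suppliers = []

    def advertise(self, supplier):
        # Supply agents register the topics they can serve.
        self.suppliers.append(supplier)

    def match(self, demand):
        # Route each stated need to every supplier advertising that topic.
        return {need: [s.name for s in self.suppliers if need in s.topics]
                for need in demand.needs}

broker = BrokerAgent()
broker.advertise(SupplyAgent("quote-feed", {"stock-quote"}))
broker.advertise(SupplyAgent("news-wire", {"financial-news"}))
user = DemandAgent("portfolio-user", {"stock-quote", "financial-news"})
print(broker.match(user))
# e.g. {'stock-quote': ['quote-feed'], 'financial-news': ['news-wire']}
```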
Chapter 2
Comparing U.S. & Japanese Companies on Competitive Intelligence, IS Support, and Business Change

Tor Guimaraes, Tennessee Technological University, USA
Osamu Sato, Tokyo Keizai University, Japan
Hideaki Kitanaka, Takushoku University, Japan
(Previously published in the Journal of Global Information Management, vol. 7, no. 3. Copyright © 1999, Idea Group Publishing.)

The increase in business competitiveness forces companies to adopt new technologies to redesign business processes, improve products, and support organizational changes necessary for better performance. The literature on Competitive Intelligence (CI) touts its importance in providing corporate strategic vision to improve company competitiveness and success. To implement their strategic vision, companies have to implement changes to their business processes, products, and/or to the organization itself. The voluminous body of literature on the management of change, including subareas such as Business Process Reengineering (BPR), Total Quality Management (TQM), and product improvement, implicitly or explicitly proposes that company strategic intelligence is a prerequisite for change, and that effective Information Systems (IS) support is a critical requirement for
implementing change. There is some empirical evidence supporting these two hypotheses based on U.S. business organizations, and there is little reason to believe that the relationships do not hold for Japanese companies. Whether or not U.S. and Japanese organizations are different in any way along these important variables is an interesting question. A field test of how effectively U.S. and Japanese business organizations are identifying strategic problems and opportunities, how effectively they implement business changes, and how they use IS technology to do so, was undertaken to empirically explore any differences. Despite the relatively small sample size, the results corroborate the importance of competitive intelligence and IS support for effectively implementing business change in U.S. and Japanese companies. The findings indicate that, on the average, American companies are more effective in providing IS support for business change and Japanese companies are more effective in CI activities.

Increasing business competition has forced managers to recognize the importance of business innovation. American business organizations have derived substantial benefits from widespread changes to the old business ways. For example, the American manufacturing sector is thought to have become more productive, and the erosion of our manufacturing base and the loss of initiative to Japan and Europe have been reversed [Howard, 1994]. In the process of exploring the basic differences between the Japanese and American manufacturing management approaches and applying a host of new methods and techniques, many U.S. firms are redefining the very nature of their businesses [Patterson & Harmel, 1992]. On the other hand, success implementing the required changes is far from assured, with many organizations reporting very disappointing results, given the cost and turmoil caused by the changes [Guimaraes and Bond, 1996].

Two primary approaches for implementing organization change worldwide are known as Total Quality Management (TQM) and Business Process Reengineering (BPR). BPR differs from TQM in two important respects. First, TQM focuses on continuous improvement (an incremental performance improvement approach), while reengineering is founded on the premise that significant corporate performance improvement requires discontinuous improvement (breaking away from the outdated rules and fundamental assumptions that underlie operations). With BPR, rather than simply eliminating steps or tasks in a process, the value of the whole process itself is questioned [Gotlieb, 1993]. In conformance with TQM principles, the focus of change is also market driven [Guimaraes and Bond, 1996]. Second, reengineering makes a significant break with previous performance improvement
approaches by requiring a high level of state-of-the-art information technology awareness among the entire reengineering team prior to, rather than after, the definition of process changes or improvements [Cypress, 1994]. Some technologies (e.g., imaging systems and expert systems) can provide substantial opportunities for the redesign of business processes [Guimaraes, 1993; Guimaraes, Yoon and Clevenson, 1998]. Regardless of the change methodology being employed (i.e., BPR or TQM), the factors important to innovation success or failure are many, but most authors would agree that strategic awareness or competitive intelligence is an important prerequisite for success. This is deemed particularly important in highly competitive industries [Luecal & Dahl, 1995; Cartwright, Boughton & Miller, 1995].

Competitive intelligence (CI) is the process by which organizations gather and use information about products, customers, and competitors for their short- and long-term strategic planning [Ettorre, 1995]. It is the first step guiding the planning and redesign of processes, products, and organization structure. Without this strategic vision, business changes will be conducted in haphazard fashion and are less likely to produce significant results. To implement their strategic vision, take advantage of strategic opportunities, and address problems, companies have to implement changes to their business processes, products, and/or organization. It is reasonable to assume that knowledge about their markets (customers, competitors, etc.) is a prerequisite for effective change, and effective Information Systems (IS) support is a critical requirement for implementing business change. There is some empirical evidence supporting these two hypotheses based on U.S. business organizations [Guimaraes & Armstrong, 1998], and little reason to believe the relationships do not hold for Japanese companies. However, an interesting question is whether U.S. and Japanese organizations are different in any way along these important variables. If any differences can be detected, managerial attention can be focused on the impact of strengths or weaknesses on company performance in the two nations. Also, any differences may provide further motivation to explore these important issues from different perspectives, addressing other theoretical constructs, and using improved measures. A field test of how effectively U.S. and Japanese business organizations are identifying strategic problems and opportunities, how effectively they implement business changes, and how they use IS technology to do so, was undertaken to explore any differences.
[Figure 1: The Main Conceptual Model. Two boxes, CI Effectiveness and IS Support Effectiveness, each point to a third box, Effectiveness Implementing Business Innovation.]
CONCEPTUAL FRAMEWORK AND PROPOSED HYPOTHESES

The basic conceptual model for this study is graphically represented in Figure 1. It proposes that effectiveness in competitive intelligence and in using IS technology to support business change will be directly related to company effectiveness innovating in the areas of products, processes, organization structure and culture. An extensive survey of the literature reveals that academics have neglected to address some of these constructs and their relationships from a practical perspective. For example, there is very little work in theory building in the competitive intelligence area, and there is practically nothing in this area regarding intercultural differences. Most of the discussion on these extremely important constructs and their relationships comes from the practitioner literature. This situation provides a rich opportunity for rigorous academic research attempting to build theory useful in practice.

Implementing Business Change

To take advantage of strategic opportunities and address problems, companies worldwide have to implement changes to their business processes, products, and/or to the organization itself. Similar to the earlier study by Guimaraes and Armstrong [1998], the dependent variable in this case is the degree of company effectiveness in implementing business change. As business competitiveness increases, many business organizations have reacted to expand the value of their products and services to customers by redesigning their business processes to increase efficiency, deliver new products and services, and improve the quality of their offerings [Tsang, 1993].
The literature contains considerable evidence showing that U.S. and Japanese management differ substantially in many ways [Badawy, 1991; Sherman, 1996; Billings & Yaprak, 1995; Herbig & Jacobs, 1996]. However, there is no evidence that organizations in one culture are better managers of innovation than in the other. Quite to the contrary, effectiveness in innovation seems to be a shared gift, with neither East nor West excelling at sustained innovation [Sherman, 1996]. Thus we propose:

H1: There is no difference in effectiveness implementing business change between U.S. and Japanese companies.

Company Competitive Intelligence

Again, the importance of competitive intelligence and knowledge as a key asset is increasingly recognized by managers [Darling, 1996]. Even though most of the necessary operational knowledge within a company is in the employees' minds [Sawka, 1996], with the increase in business competition, a company's survival and success are increasingly determined by its rate of learning. If learning is faster than external changes, the organization will experience long-term success; otherwise, it is at risk [Darling, 1996]. The antecedents and consequences of CI dissemination were studied by Maltz and Kohli [1996]. Competitor Analysis (CA) was proposed by Ghoshal & Westney [1991], and approaches useful for companies to collect information from competitors were addressed by Heil and Robertson [1991]. The importance of organization intelligence to financial performance has also been demonstrated. Companies with well-established CI programs on the average showed earnings per share of $1.24, compared to those without CI programs, which lost 7 cents per share [King, 1997].

The literature contains many examples of benefits that can be derived from CI. Among these are improved competitive edge [McCune, 1996; Sawka, 1996; Westervelt, 1996] and improved overall company performance [Babbar and Rai, 1993], two essential company goals that can be brought about with effective application of competitive intelligence. More specific benefits of CI include: uncovering business opportunities and problems that will enable proactive strategies [Ellis, 1993; Westervelt, 1996]; providing the basis for continuous improvement [Babbar and Rai, 1993]; shedding light on competitor strategies [Harkleroad, 1993; Westervelt, 1996]; improving speed to markets and supporting rapid globalization [Baatz, 1994; Ettorre, 1995]; improving the likelihood of company survival [Westervelt, 1996]; increasing business volume [Darling, 1996]; providing better customer assessment [Darling, 1996]; and aiding in the understanding of external influences
[Sawka, 1996]. Benefits like these provide the basis for firms to better understand the potential impact of the proposed changes and the means by which they can be infused into the company's fabric. Based on the above discussion, we propose the following hypothesis:

H2a: Regardless of nationality, company CI effectiveness is directly related to effectiveness implementing business change.

CI effectiveness is proposed as an important requirement for effective implementation of business change. Ironically, even though as much as 68% of U.S. companies have an organized approach to providing information to decision makers [Westervelt, 1996], "probably less than 10% of U.S. corporations know their way around the CI process and effectively integrate the information into their strategic plans..." [Ettorre, 1995]. Japanese organizations are known for greater dedication to this area. Perhaps as a necessary requirement for shifting from imitation to innovation, leading Japanese companies developed CI as part of their research and development [Kokubo, 1992]. Business organizations in the U.S. are moving slowly to develop intelligence about competitors, markets, and important technologies [Shermach, 1995; Anonymous, 1996] and, in general, CI has been considered low priority [Herring, 1991]. U.S. companies "have not relied on CI as much as they should or as much as non-U.S. companies do" [Bertrand, 1990]. Based on this discussion, we propose:

H2b: On the average, Japanese companies have greater CI effectiveness than their U.S. counterparts.

Using IS Technology to Support Business Change

Also as discussed by Guimaraes and Armstrong [1998], the effects of IS technology on organization design, intelligence and decision making have been studied by Huber [1990]. Many authors have proposed the importance of a wide variety of IS technologies to support business change. Computer Telephony Integration has been touted as a powerful tool to improve the relationship with customers [McCarthy, 1996]. The use of IS for data mining and warehousing is seen as essential for decision support [Anonymous, 1995]. Friedenberg and Rice [1994] and Guimaraes, Yoon and Clevenson [1998] have proposed Expert Systems as viable implementation vehicles for business change because they are effective in capturing and distributing knowledge and knowledge processing capability across an organization. The IS technologies available to support the necessary business changes are endless:
DSS, Group DSS, EDI, Client Server Systems, Imaging Systems, the Internet, and Intranets. Without effective IS support, the change implementation processes would be severely hindered, and in many cases rendered impossible. Based on the above discussion we propose:

H3a: Regardless of nationality, company effectiveness using IS technology to support business change is directly related to effectiveness implementing business change.

Japanese companies have generally lagged behind their Western counterparts in the implementation of IS technology [Davenport, 1996], and the latter have received considerable credit for driving the latest renewal of US business competitiveness. Correspondingly, U.S. companies have increased their spending in IS an average of 14% in 1996, compared with 8% by Japan [Moshella, 1997]. In other areas, such as Internet use, similar to American organizations, Japanese business use has lately increased substantially as a tool to reduce communication costs [Sasaki, 1998]. Nevertheless, the time lag is significant: "Nine years after the network revolution swept into American offices, the wiring of corporate Japan has begun in earnest" [Anonymous, 1997]. Further, Japanese companies are expected to speed their adoption of IS technologies, but face obstacles such as more rigid corporate culture and tighter state control over electronic communication links, leading to greater costs and equipment obsolescence [Anonymous, 1997]. In many areas which are heavily dependent on electronic communications, Japanese companies also have lagged behind their U.S. counterparts to a large degree [Patton, 1995]. Based on the above discussion, we propose a final hypothesis:

H3b: On the average, U.S. companies will be more effective in the use of IS technology to support business change than Japanese companies.
STUDY METHODOLOGY

Data Collection Procedure

A questionnaire was used to collect data from a convenience sample of 52 top managers from companies in the U.S. and 39 from Japanese companies headquartered in Japan. A cover letter described the purpose of the study and provided instructions for the respondents. Much of the data was collected through personal interviews with top managers (VP or higher). A similar questionnaire was used in a previous study which included only U.S.
companies [Guimaraes & Armstrong, 1998]. As discussed later, the questionnaire content and readability were extensively tested through several meetings and phone conversations with U.S. and Japanese managers and employees. These managers are known personally to the researchers and have expressed their personal opinions about their company's processes and activities for identifying strategic problems and opportunities, business changes, and IS support to business activities.

Sample Description

The companies represented in the sample range widely in terms of their industry sector and size. Among the U.S. organizations, 51% of the firms identified their primary business as manufacturing, with the remaining companies distributed fairly evenly across other sectors. In terms of gross revenues, among the U.S. and Japanese companies there was also broad representation. Table 1 presents these sample demographics in more detail.

Table 1: Sample Demographics

A) Industry Sectors                Japan           US
   Manufacturing                   24 ( 62%)       26 ( 51%)
   Communications                   2 (  6%)        7 ( 13%)
   Health Care                      0               4 (  8%)
   Retail                           4 (  9%)        3 (  5%)
   Banking                          2 (  6%)        2 (  4%)
   Other                            7 ( 17%)       10 ( 19%)
   Total                           39 (100%)       52 (100%)

B) Gross revenues (in dollars)     Japan           US
   $50 million or less             12 ( 30%)       19 ( 37%)
   $51 to $500 million             19 ( 48%)       23 ( 44%)
   Above $500 million               9 ( 22%)       10 ( 19%)
   Total                           39 (100%)       52 (100%)

Validity of the Measures

Several precautions were taken to ensure the validity of the measures used, and many of the recommendations by Carmines and Zeller [1979] were followed. To ensure content validity, and that any important dimension of each of the constructs would not be neglected [Guimaraes & Armstrong, 1998], a thorough survey of the relevant literature was undertaken to understand the important aspects of each major variable and its components. As discussed earlier, the theoretical underpinnings of this study are quite intuitive but have not been empirically well established. A previous study used the same questions to collect data from U.S. companies [Guimaraes
& Armstrong, 1998]. To reduce the possibility of any non-random error (the main source of invalidity), several managers and employees in the areas of CI, IS management, and management of change reviewed the questionnaire for validity, completeness, and readability. This validation process occurred before the data collection process in the U.S. and Japan. A few questions were reworded to improve readability; otherwise, the items composing each major variable remained as in Appendix A.

Reliability of the Measures

The earlier study on U.S. companies was based on a sample too small to assess the psychometric qualities of the measures [Guimaraes & Armstrong, 1998]. In this case the U.S. and Japanese data were combined. Exploratory factor analysis produced 3 factors and showed the items for each scale loading unambiguously (>.50 into one factor and <.35 into the others), thus indicating construct unidimensionality, a requirement for computing the Cronbach's Alpha coefficient of scale reliability. As discussed in the next section, the reliability coefficients for the constructs are all above the level of .70 acceptable for exploratory studies [Nunnally, 1978].

Variable Measurement

Effectiveness in Implementing Business Changes was measured by the respondents rating the effectiveness of the firm in making changes to address strategic problems and opportunities in four areas: products, processes, organization structure and organization culture. Each item was rated on a 7-point Likert-type scale ranging from extremely below average to extremely above average (average being the company's main competitors). The ratings for the four areas were averaged to produce a single measure for effectiveness in implementing business changes. The Cronbach's Alpha coefficient of internal reliability for this scale was .71. Effectiveness in Competitive Intelligence was measured by asking the respondent to rate the effectiveness of the firm in identifying strategic business opportunities and problems in six specific areas: traditional industry competitors, emerging competitors, traditional customer needs and wants, non-traditional customer needs and wants, relationships with business partners, and product or service development. Each item was rated on the same 7-point scale as above. The overall measure of CI effectiveness was the average rating for the six areas. The Alpha coefficient in this case was .76. IS Effectiveness in Supporting Business Activities was measured by asking the respondents to rate the extent to which the company's needs for IS technology have been met. This was asked in terms of overall effectiveness
in four specific areas: technology leadership in the industry, knowledge of how to get the best technology, effectiveness with which technology has been used over the years, and effectiveness in using technology in comparison with main competitors. Respondents were asked to use the same 7-point scale described above. The measure for IS effectiveness in supporting business activities is the average rating for these five items. The Alpha coefficient was .83.

Data Analysis Procedure
The relatively small sample sizes require the use of simple but robust statistical analysis techniques. Multivariate regression (stepwise) analyses were used to test the relationships between the two independent variables (CI effectiveness and IS support effectiveness) and the dependent variable (effectiveness implementing business change). Multivariate regressions for U.S. and Japanese companies were computed separately. T-tests were used to test the statistical significance of any differences between U.S. and Japanese companies along the main variables and their subcomponents.
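To make the scale construction and reliability assessment concrete, the sketch below averages each respondent's item ratings into a composite score and computes Cronbach's Alpha from the standard variance formula. It is a minimal illustration using hypothetical ratings, not the study's data:

    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's Alpha for an (n_respondents x k_items) rating matrix."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)      # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical 7-point ratings of the four business-change items
    ratings = np.array([[5, 4, 4, 3],
                        [6, 5, 5, 4],
                        [4, 4, 3, 3],
                        [5, 6, 5, 4],
                        [3, 3, 4, 2]])
    composite = ratings.mean(axis=1)  # one change-effectiveness score per respondent
    print(round(cronbach_alpha(ratings), 2))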
DISCUSSION OF RESULTS
Table 2 shows the means for the three aggregated research variables and their respective component items for the U.S. and Japanese samples. On average, the companies in the U.S. sample perform significantly below their Japanese counterparts in the area of CI and significantly above them in IS support effectiveness. No statistically significant difference exists in overall effectiveness implementing business change. Based on these results, the following hypotheses are accepted at the .05 level of significance or better:
H1: There is no difference in effectiveness implementing business change between U.S. and Japanese companies.
H2b: On the average, Japanese companies have greater CI effectiveness than their U.S. counterparts.
H3b: On the average, U.S. companies will be more effective in the use of IS technology to support business change than Japanese companies.
For exploratory purposes, Table 2 also shows the results of comparing Japanese and U.S. companies along the component items for CI effectiveness,
effectiveness implementing business change, and IS support effectiveness. No difference was detected among the component items for effectiveness implementing business change except for changes in organization culture, where American companies seem to be slightly more effective. Average performance along the CI component items differs significantly and consistently in favor of the Japanese companies, except for effectiveness in identifying traditional industry competitors' actions/reactions and traditional customer needs and wants. Among the IS support effectiveness components, the results show significant and consistent differences in favor of the U.S. companies, except for company knowledge of how to get the best technology and the effectiveness with which technology has been used over the years, where the differences between the two groups were not significant.

Table 3 presents the results from stepwise regression analyses for U.S. and Japanese companies separately. In both cases CI effectiveness and IS support effectiveness are confirmed as important factors in explaining the variance in business change effectiveness. Thus, the following hypotheses are corroborated at the .05 level of significance or better:
H2a: Regardless of nationality, company CI effectiveness is directly related to effectiveness implementing business change.
H3a: Regardless of nationality, company effectiveness using IS technology to support business change is directly related to effectiveness implementing business change.
However, for U.S. companies the first variable to enter the regression equation (thus the one contributing the most to explaining the variance in the dependent variable) is IS support effectiveness. For Japanese companies, CI effectiveness is the strongest determinant of effectiveness in implementing business change.
CONCLUSIONS AND MANAGERIAL IMPLICATIONS
This study had two main objectives. One was to empirically confirm the previously reported [Guimaraes & Armstrong, 1998] relationship between company CI effectiveness, IS support effectiveness, and the ability to implement the changes necessary to improve business competitiveness. The second objective was to empirically test the similarities and differences between U.S. and Japanese business organizations in terms of their effectiveness in defining strategic opportunities and problems (competitive intelligence), implementing business changes, and supporting these changes with IS technology.
Table 2: Means of Study Variables and T-tests for Differences

Aggregated Variables and Individual Items              US Avg.  Japanese Avg.  t-test p
Competitive Intelligence (CI) Effectiveness              4.0        4.5          .03
* traditional industry competitors                       4.5        4.9          NS
* emerging (new or same industry) competitors'
  actions/reactions                                      3.3        3.8          .00
* traditional customer needs and wants                   4.6        4.8          NS
* non-traditional customer needs and wants               3.5        4.1          .01
* relationships with business partners                   4.3        4.9          .02
* product/service development                            4.0        4.4          .04
Effectiveness Implementing Business Change               4.2        4.1          NS
* products                                               4.3        4.5          NS
* processes                                              4.4        4.1          NS
* organization structure                                 4.3        4.4          NS
* organization culture                                   3.9        3.5          .05
IS Support Effectiveness                                 4.2        3.8          .04
* overall support                                        4.2        3.6          .02
* technology leadership position in the industry         4.6        4.0          .00
* knowledge of how to get the best technology            4.5        4.3          NS
* effectiveness using technology over the years          3.6        3.5          NS
* effectiveness using technology vs. main competitors    4.2        3.8          .05
Rating Scale: 1 (extremely lower than average), 2 (very much lower), 3 (somewhat lower), 4 (average), 5 (somewhat higher than average), 6 (very much higher), and 7 (extremely higher). NS means not significant at the .05 level.
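The comparisons reported in Table 2 are two-sample t-tests on such composite ratings. A minimal sketch with hypothetical scores is shown below; it uses Welch's variant of the test, since the chapter does not state whether equal variances were assumed:

    import numpy as np
    from scipy import stats

    # Hypothetical composite CI-effectiveness scores for the two samples
    us_scores = np.array([4.1, 3.8, 4.3, 3.9, 4.0, 3.7, 4.2])
    jp_scores = np.array([4.6, 4.4, 4.7, 4.3, 4.5, 4.8, 4.2])

    t_stat, p_value = stats.ttest_ind(us_scores, jp_scores, equal_var=False)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # p < .05 marks a significant difference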
Table 3: Multivariate Regressions (Stepwise)
Dependent Variable: Business Change Effectiveness
Independent Variables: CI Effectiveness and IS Support Effectiveness

A) US Companies                  R2     S.L.
IS Support Effectiveness         .24    .02
CI Effectiveness                 .19    .03
Total R squared                  .43    .03

B) Japanese Companies            R2     S.L.
CI Effectiveness                 .22    .02
IS Support Effectiveness         .16    .05
Total R squared                  .38    .04

R2 = Incremental R squared. S.L. = Significance level.
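The stepwise procedure behind Table 3 adds predictors one at a time, recording each variable's incremental contribution to R-squared. The sketch below is a simple forward-selection illustration on simulated data; it assumes an entry criterion of p < .05 and is not the exact algorithm or data used in the study:

    import numpy as np
    import statsmodels.api as sm

    def forward_stepwise(y, predictors, alpha=0.05):
        """Add, at each step, the candidate that most raises R-squared,
        provided its coefficient is significant at `alpha`; report each
        variable's incremental R-squared (as in Table 3)."""
        selected, report, prev_r2 = [], [], 0.0
        remaining = dict(predictors)
        while remaining:
            best = None
            for name, x in remaining.items():
                cols = [predictors[n] for n in selected] + [x]
                X = sm.add_constant(np.column_stack(cols))
                res = sm.OLS(y, X).fit()
                if res.pvalues[-1] < alpha and (best is None or res.rsquared > best[1]):
                    best = (name, res.rsquared)
            if best is None:
                break
            name, r2 = best
            report.append((name, round(r2 - prev_r2, 2)))  # incremental R-squared
            prev_r2 = r2
            selected.append(name)
            del remaining[name]
        return report

    # Simulated composite scores for two predictors and the dependent variable
    rng = np.random.default_rng(42)
    ci_eff = rng.normal(4.0, 1.0, 40)
    is_eff = rng.normal(4.0, 1.0, 40)
    change_eff = 0.4 * ci_eff + 0.5 * is_eff + rng.normal(0, 0.8, 40)
    print(forward_stepwise(change_eff, {"CI": ci_eff, "IS support": is_eff}))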
The results confirm the importance of an effective CI program and effective IS support if organizations worldwide are to successfully implement business changes to their products, business processes, organization structure, and organization culture. Given the importance of effective business change implementation in these days of hyper-competitiveness, it behooves top managers to do whatever they can to improve their company's CI program and its ability to provide effective IS support for the necessary changes. To improve CI programs, managers need to consider the collection of market intelligence in the six areas addressed in this study: traditional industry competitors, emerging competitors, traditional customer needs and wants, non-traditional customer needs and wants, relationships with business partners, and new product or service development. Good performance in these areas, whenever applicable to the company's industry sector and lines of business, is likely to lead to more effective implementation of required business changes.

Coy [1993] stated that during troubled financial times Japanese companies tend to turn inward, implying that American organizations are relatively more externally oriented when seeking solutions. This opinion seems to be contradicted by the apparent superiority Japanese companies have shown in the CI area in general.

To the CI professional, the results provide the basis for two major conclusions. First, CI professionals must take the initiative to convey to top managers and other change agents in their organizations the importance of CI. In operational terms this means that before embarking on major programs of change such as TQM and/or BPR, which are supposedly market driven, the strategic competitiveness of the changes should be validated with CI information rather than with the superficial guesswork of top managers and BPR consultants more focused on the change process than on the strategic reasons for change. Needless to say, CI personnel must become integral members of any teams charged with projects involving strategic change. Second, CI professionals must ensure that their company's CI program includes the collection of market intelligence in the six areas addressed in this study: traditional industry competitors, emerging competitors, traditional customer needs and wants, non-traditional customer needs and wants, relationships with business partners, and new product or service development. The importance of any one of these areas may be relatively higher or lower, and in some cases some of these sources may be irrelevant, depending on the company's specific industry sector, line of business, products, and processes being considered. Nevertheless, whenever the strategic CI information
provided by the source is relevant, it is likely to play an important role in helping define the changes necessary to enhance organization competitiveness. To improve IS support for implementing business improvements, managers must look at the company's IS leadership position in its main industry sector, its knowledge of how to get the best technology, its effective use of specific technologies, and benchmarking of the use of specific technologies against the company's main competitors.

Implications from Differences of Japanese and U.S. Firms
As proposed by Davenport [1996] and Patton [1995], the differences between Japanese and U.S. companies corroborate the hypothesis that U.S. firms are in general more advanced in the use of IS technology to support business change. While companies from the two nations are not significantly different in their effectiveness managing business change, one can surmise that both groups can improve by learning to perform better in their respective weaker areas. Japanese companies should consider measures to improve IS support for business change, while their U.S. counterparts are likely to benefit from attempting to improve their CI activities. A significant reason for the differences may be historic. As suggested by Ettorre [1995], Westervelt [1996], and others, Japanese companies have had a head start in the CI arena. Similarly, as suggested by Sasaki [1998], the U.S. companies' lead in the use of IS technologies may also be temporary. The fast pace of change in these areas, critical for company survival, is likely to soon provide more conclusive evidence about whether the differences between the two groups are due to basic cultural differences or are just temporary historical differences. In the latter case, the differences will quickly narrow as American companies increasingly improve their CI programs and their Japanese counterparts do the same with IS technologies. Finally, a significant factor in bridging the gaps between the two groups over time is the accelerating globalization of business, in which firms operate across the two nations or become partners in business operations that require compatibility in business models as well as in the IS technologies that bind the organizations together.

Study Limitations and Research Opportunities
Based on an extensive survey of the relevant literature, this study is a first attempt at empirically testing the importance of CI to business change effectiveness and the role IS support plays in the implementation of necessary business changes. The literature also indicates that while the constructs are well established, much can be done to further test and
perhaps improve the measures used. It would be useful for other researchers to further explore their psychometric properties and attempt to develop and test new measures. Further, this study deliberately used a highly focused model that needs to be expanded to include other factors potentially important to effective implementation of strategic business change and to study the intercultural similarities and differences between business organizations. Another field test with a larger sample should be used to test the extent to which effective IS support directly influences the effectiveness of the business change process versus moderates the relationship between CI and business change. Specifically, the use of a path analytic modeling technique is suggested in this case. The results should provide valuable information on the extent to which the use of IS should be coupled with the establishment of effective CI programs for companies to improve their business competitiveness.
REFERENCES
Anonymous (1995). Data mining: A new weapon for competitive advantage. Software Quarterly, 2(4), 15-19.
Anonymous (1996). Symposium: Understanding the competition: The CEO's perspective. Competitive Intelligence Review, 7(3), 4-14.
Anonymous (1997). Doing it differently: Wiring corporate Japan. The Economist, 342(8013), 62-64.
Baatz, E. B. (1994). The quest for corporate smarts. CIO, 8(3), 48-58.
Babbar, S. & Rai, A. (1993). Competitive intelligence for international business. Long Range Planning, 26(3), 103-113.
Badawy, M. K. (1991). Technology and strategic advantage: Managing corporate technology transfer in the USA and Japan. International Journal of Technology Management, 205-215.
Bertrand, K. (1990). Competitive intelligence: The global spyglass. Business Marketing, 75(9), 52-56.
Billings, B. A. & Yaprak, A. (1995). Inventive efficiency: How the U.S. compares with Japan. R & D Management, 25(4), 365-376.
Carmines, E. G. & Zeller, R. A. (1979). Reliability and Validity Assessment. Sage University Paper.
Cartwright, D. L., Boughton, P. D. & Miller, S. W. (1995). Competitive intelligence systems: Relationships to strategic orientation and perceived usefulness. Journal of Managerial Issues, 7(4), 420-434.
Coy, P. (1993). When the going gets tough, Yanks get yanked. Business Week, (3316), 30.
Cypress, H. L. (1994). Reengineering. OR/MS Today, 21(1), 18-29.
Darling, M. S. (1996). Building the knowledge organization. Business Quarterly, 61(2), 61-66.
Davenport, T. (1996). Haiku, cherry blossoms, sushi and...notes? CIO, 10(4), 40-43.
Ellis, J. R. (1993). Proactive competitive intelligence: Using competitive scenarios to exploit new opportunities. Competitive Intelligence Review, 4(1), 13-24.
Ettorre, B. (1995). Managing competitive intelligence. Management Review, 84(10), 15-19.
Friedenberg, R. & Rice, A. (1994). Knowledge re-engineering as a BPR strategy. Working notes of the AAAI-94 Workshop on Artificial Intelligence in Business Process Reengineering, Seattle, WA, 21-26.
Ghoshal, S. & Westney, D. E. (1991). Organizing competitor analysis systems. Strategic Management Journal, 12(1), 17-31.
Gotlieb, L. (1993). Information technology. CMA Magazine, 67(2), 9-10.
Guimaraes, T. (1993). Exploring the determinants of imaging systems success. 26th Hawaii International Conference on System Sciences.
Guimaraes, T. & Armstrong, C. (1998). Exploring the relation between company intelligence, IS support and business change. Competitive Intelligence Review, 9(3), 45-54.
Guimaraes, T. & Bond, W. (1996). Empirically assessing the impact of BPR on manufacturing firms. IJOPM, 16(8), 5-28.
Guimaraes, T., Yoon, Y. & Clevenson, A. (1998). Exploring ES success factors for BPR. JETM, 15(2/3), 179-199.
Harkleroad, D. (1993). Sustainable growth rate analysis: Evaluating worldwide competitors' ability to grow profitability. Competitive Intelligence Review, 4(2/3), 36-45.
Heil, O. & Robertson, T. S. (1991). Toward a theory of competitive market signaling: A research agenda. Strategic Management Journal, 12(6), 403-418.
Herbig, P. & Jacobs, L. (1996). Creative problem-solving styles in the USA and Japan. International Marketing Review, 13(2), 63-71.
Herring, J. P. (1991). Senior management must champion business intelligence programs. Journal of Business Strategy, 12(5), 48-52.
Howard, J. S. (1994). Reinventing the manufacturing company. D&B Reports, Jan./Feb., 18-21.
Huber, G. P. (1990). A theory of the effects of advanced information technology on organizational design, intelligence, and decision making. Academy of Management Review, 15(1), 47-71.
King, M. (1997). Corporations take snooping mainstream. Indianapolis Business Journal, 17(2), 1-4.
Kokubo, A. (1992). Japanese competitive intelligence for R & D. Research Technology Management, 35(1), 33-34.
Luecal, S. & Dahl, P. (1995). Gathering competitive intelligence. Management Quarterly, 36(3), 2-10.
Maltz, E. & Kohli, A. K. (1996). Market intelligence dissemination across functional boundaries. Journal of Marketing Research, 33(1), 47-61.
McCarthy, V. (1996). CTI lets you coddle customers at lower cost. Datamation, 42(13), 46-49.
McCune, J. C. (1996). Checking out the competition. Beyond Computing, 5(2), 24-29.
Moschella, D. (1997). There's no place like home. Computerworld, 31(1), 39.
Nunally, J. C. (1978). Psychometric Theory. New York: McGraw Hill.
Patterson, M. C. & Harmel, R. M. (1992). The revolution occurring in US manufacturing. IM, Jan./Feb., 15-17.
Patton, R. (1995). Computer networking and competition catch up in Japan. Electronics, 68(1), 8.
Sasaki, S. (1998). Internet answers call for lower costs. Nikkei Weekly, 36(1813), 9.
Sawka, K. (1996). Demystifying business intelligence. Management Review, 85(10), 47-51.
Shermach, K. (1995). Much talk, little action on competitor intelligence. Marketing News, 29(18), 40.
Sherman, S. (1996). Hot products from hot tubs, or how middle managers innovate. Fortune, 133(8), 165-167.
Tsang, E. (1993). Business process reengineering and why it requires business event analysis. CASE Trends, March, 8-15.
Westervelt, R. (1996). Gaining an edge: Competitive intelligence takes off. Chemical Week, 158(25), 29-31.
APPENDIX A
Unless otherwise indicated, use the following rating scale, where + means higher and - means lower than average: 1 = extremely-, 2 = very-, 3 = somewhat-, 4 = average, 5 = somewhat+, 6 = very+, 7 = extremely+.

Is the company competitive intelligence process effective in that few opportunities and problems have been/are missed by top management over the years? Separately rate the process effectiveness in the following areas:
Traditional industry competitors' actions/reactions ................ 1 2 3 4 5 6 7
Emerging (new/same industry) competitors' actions/reactions ........ 1 2 3 4 5 6 7
Traditional customers' needs and wants ............................. 1 2 3 4 5 6 7
Non-traditional customers' needs and wants ......................... 1 2 3 4 5 6 7
Relationships with business partners ............................... 1 2 3 4 5 6 7
Product/service development ........................................ 1 2 3 4 5 6 7

How effective is the company in changing the following areas to address strategic problems/opportunities?
Products? .......................................................... 1 2 3 4 5 6 7
Processes? ......................................................... 1 2 3 4 5 6 7
Organization structure? ............................................ 1 2 3 4 5 6 7
Organization culture? .............................................. 1 2 3 4 5 6 7

In general, is IS support effective in that few company needs for IS technology have been/are neglected? .......... 1 2 3 4 5 6 7
Also, specifically please rate this company's:
Technology leadership position in the industry ..................... 1 2 3 4 5 6 7
Its knowledge of how to get the best technology .................... 1 2 3 4 5 6 7
Its effectiveness using technology over the years .................. 1 2 3 4 5 6 7
Its effectiveness using technology in comparison with main competitors ... 1 2 3 4 5 6 7
Chapter 3
Knowledge Assets in the Global Economy: Assessment of National Intellectual Capital

Yogesh Malhotra
Florida Atlantic University and @Brint.com

This article has the following objectives: developing the need for assessing knowledge capital at the national economic level; reviewing a case study of how intellectual capital assessment was carried out for one nation state; suggesting implications of the use of such assessment methods and the areas in which they need to advance; and highlighting caveats in existing assessment methods that underscore directions for future research. With increasing emphasis on aligning national information resource planning, design and implementation with the growth and performance needs of businesses and nations, a better understanding of new valuation and assessment techniques is necessary for information resource management policymakers, practitioners and researchers.

"Our government is filled with knowledge…We have 316 years' worth of documents and data and thousands of employees with long years of practical experience. If we can take that knowledge, and place it into the hands of any person who needs it, whenever they need it, I can deliver services more quickly, more accurately and more consistently." — From 'Knowledge Management: New Wisdom or Passing Fad?' in Government Technology, June 99
Previously published in the Journal of Global Information Management, vol. 8, no. 3, Copyright © 2000, Idea Group Publishing.

The emergence of the service society after the last world war brought increased realization of the role of employees' knowledge and creativity in adding value to the
company. Attempts to capitalize company investments in people on the balance sheet in the 1970s failed because of measurement problems. The subject gathered increased interest in the 1990s with the rapid emergence of information and communication technologies (ICT). As business processes became increasingly 'enabled' by large-scale information systems, information systems designers attempted to capture employees' implicit and explicit knowledge in "corporate memory" by means of intranets and other similar applications (Malhotra, 2000a, 2000b). In contrast to the knowledge of individual employees, such corporate knowledge contributes to the company's value-creation capabilities as well as to its financial valuation by analysts. Hence, such organizational knowledge or intellectual capital must be accounted for in the company's balance sheet, which has generally focused on the traditional factors of production: land, labor and capital. The topic is pertinent not only to individual enterprises but also to national economies that are making a rapid transition to a society based on knowledge work.

This article develops the case for assessment of national intellectual capital by drawing upon existing research, practice, and a recent study of an Asian nation representative of countries making the transition from 'developing' to 'developed' status. The issues discussed herein are important for information resource management policymakers, practitioners and researchers for assessing their contributions in terms of new measures of performance. More importantly, as the world economies transition from the world of "atoms" to the world of "bits," they will be expected to plan, devise and implement information and knowledge management systems that provide differential advantage in terms of 'intellectual capital.'

Knowledge Assets and Intellectual Capital
Traditional assessment of national economic performance has relied upon understanding GDP in terms of the traditional factors of production: land, labor and capital. Knowledge assets may be distinguished from the traditional factors of production in that they are governed by what has been described as the 'law of increasing returns.' In contrast to the traditional factors of production, which are governed by diminishing returns, every additional unit of knowledge used effectively yields increasing marginal returns. The success of companies such as Microsoft is often attributed to the fact that every additional unit of information-based product or service results in an increase in marginal returns. Given the changing dynamics underlying national performance, it is not surprising that some less developed economies with significant assets in ICT knowledge and Internet-related expertise are hoping to leapfrog more developed economies (San Jose Mercury News, 2000).
Despite the increasingly important role of knowledge-based assets in national performance, most countries still assess their performance based on the traditional factors of production. Today's measurement systems are limited in their capability to account for tacit knowledge embedded in human resources, although there is some agreement on measuring a few categories of knowledge-related assets, such as patents and trademarks. However, the emerging knowledge economy is characterized by industries that are more knowledge intensive and by a service economy that is increasingly based on information-based intangible assets.

Knowledge assets or intellectual capital may be described as the "hidden" assets of a country that underpin and fuel its growth and drive stakeholder value. There is increasing realization that knowledge management is a key driver of national wealth, of innovation and learning, and of the country's gross domestic product (GDP). The increasing importance of knowledge assets and intellectual capital has been drawing the attention of not only company CEOs but also national policymakers to non-financial indicators of future growth and performance. Knowledge asset measurement relates to the valuation, growth, monitoring and management of a number of intangible but increasingly important factors of business success. In the context of knowledge assets, knowledge represents the collective body of intangible assets that can be identified and measured. This interpretation of knowledge differs from the notion of knowledge as knowing and learning, which concerns how organizations acquire, share and use knowledge – either helped or hindered by technology and organizational processes. In contrast, the notion of knowledge assets is about the identifiable aspects of the organization that, although "intangible," can be considered as adding some kind of value to it. Knowledge capital is the term given to the combined intangible assets that enable the company to function. Examples of such knowledge assets include shared knowledge patterns, service capability and customer capability.

Assessment of Knowledge Capital and Intellectual Assets
The worth of knowledge assets, taking the difference between market and book values as a proxy, is hidden by current accounting and reporting practices. However, as is evident from the current valuations of many Net-based enterprises, one observes a significantly widening gap between the values of enterprises stated in corporate balance sheets and investors' assessment of those values. The increasing proportion of intangible vis-à-vis tangible assets in most industrial sectors has been affirmed by various other observations (Edvinsson and Malone, 1997; Hope and Hope, 1997; Stewart, 1995). In the case of major corporations, such high market valuations are often attributed to brands. Recent business history has shown that huge investments in human capital and information technology are the key tools of
value creation, yet they often do not show up on company balance sheets as positive values themselves. Measurement of institutional or organizational value using traditional accounting methods is increasingly inadequate and often irrelevant to real value in today's economy. For instance, while traditional accounting practices treat a brand as an entity that depreciates over time, in today's economy intangible assets like brands and trademarks often increase in value over time, often over periods longer than those used to account for their depreciation. Even specific kinds of intellectual capital, such as patents, copyrights and trademarks, are not valued according to their potential value in use, but are recorded at registration cost. Similarly, the distinction between assets and expenses is made arbitrarily on many balance sheets: an advertising campaign could be recorded in either column, as evident from a case such as that of AOL.

The traditional balance sheet, a legacy of the last five centuries of accounting practice, provides a picture of historic costs, assuming that the cost of purchase reflects the actual value of the asset. However, it does not account for the hidden value inherent in people's skills, expertise and learning capabilities; the value in the network of relationships among individuals and organizations; or the structural aspects relevant to servicing customers. These hidden values or intangible assets assume an increasingly important role in an economy characterized by a transition from 'programmed' best practices to the 'paradigm shifts' of the new business world of 're-everything' (Malhotra, 2000c). Such factors are assuming greater importance in assessing the potential for future growth of an enterprise or a national economy. This issue is compounded by an apparent paradox: the more a company invests in its future, the lower its book value [although the recent astronomical valuations of various Net-related stocks suggest increasing realization about intangible assets]. Extrapolating from the case of such companies to the organizations within a national economy, one may understand the implications of accounting for intangible assets that do not show up in accounting reports but may underpin future success or failure.

Valuation from the perspective of intellectual capital and knowledge assets takes into consideration not only financial factors, but also human and structural factors (Stewart, 1997). Stewart defines intellectual capital as the intellectual material that has been formalized, captured, and leveraged to create wealth by producing a higher-valued asset. Intellectual capital is defined as encompassing: i) human capital; ii) structural capital; and iii) relational capital. These aspects of intellectual capital include such factors as strong business relationships within networked partnerships, enduring customer loyalty, and employee knowledge and competencies. The compelling reasons for valuation and measurement of
intellectual capital and knowledge assets include understanding where value lies in the company and in the sectors of the national economy, and developing metrics for assessing the success and growth of companies and economies.
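As a toy illustration of the market-to-book proxy described above (the figures are hypothetical, not drawn from any reported valuation):

    # Hypothetical figures (in $ millions) for a single firm
    market_value = 12_000  # investors' assessment of the enterprise
    book_value = 3_500     # value stated on the corporate balance sheet

    knowledge_capital_proxy = market_value - book_value  # the "hidden" assets
    intangible_share = knowledge_capital_proxy / market_value
    print(f"proxy = ${knowledge_capital_proxy}M ({intangible_share:.0%} of market value)")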
TE
AM
FL Y
Measuring Knowledge Assets and Intellectual Capital
Managers of business enterprises and national economies are trying to find reliable ways of measuring knowledge assets in order to understand how they relate to future performance. The expectation is that reliable measures of knowledge assets can help managers better manage the intangible resources that increasingly determine the success of enterprises and economies. The terms knowledge capital and intellectual capital are used synonymously in this article. Within the scope of the subsequent discussion, such terms refer to "the potentiality of value as it exists in various components or flows of overall "capital" in a firm; the relationships and synergistic modulations that can augment the value of that capital; and the application of its potential to real business tasks… [it] includes an organization's unrefined knowledge assets as well as wealth generating assets whose main component is knowledge" (Society of Management Accountants of Canada, 1999, p. 17). One may observe that it is the application of intellectual capital to practical situations that contributes, primarily, to the translation of its potential value into financial assets. Or, as observed by Stewart (1997, p. 67): "Intelligence becomes an asset when some useful order is created out of free-floating brainpower – that is, when it is given coherent form (a mailing list, a database, an agenda for a meeting, a description of a process); when it is captured in a way that allows it to be described, shared, and exploited; and when it can be deployed to do something that could not be done if it remained scattered around like so many coins in a gutter." Unless effectively utilized and applied, knowledge assets may not necessarily yield any returns in terms of financial performance measures. In other words, "knowledge assets, like money or equipment, exist and are worth cultivating only in the context of strategy…you cannot define and manage intellectual assets unless you know what you are trying to do with them" (Stewart, 1997). [For instance, a detailed account of how knowledge management is relevant to e-business strategy and performance is presented in a forthcoming article (Malhotra, 2000c).]

The subsequent discussion reviews the case of a nation state that utilized one of the more popular methods for assessing its national intellectual capital. The concluding discussion will highlight the caveats in the adopted methodology and underscore the important issues that need to be addressed in future research and practice.
KNOWLEDGE CAPITAL OF A NATION STATE: THE CASE OF ISRAEL
The nation state of Israel, having been classified as an industrialized nation in April 1997, represents an interesting case study for both less developed countries and industrialized nations. Having bridged this gap over its recent past, it provides a vantage point for understanding the transition from both sides of the industrial divide. Since 1950, Israel's economy has grown 21-fold, producing rapid overall development, significant growth in per capita income and an exponential increase in the number of hi-tech start-up companies. These developments have occurred despite a population growth of 330% and periodic wars that have impacted the region's economies.

A popular method for assessing intellectual capital, originally proposed by the Swedish company Skandia, was recently applied in a joint Swedish-Israeli study that examined how to assess Israel's intellectual capital. The study represented the first attempt to document Israel's core competencies, key success factors and hidden assets that provide comparative advantage and high potential for growth. The study compared Israel with other developed countries, not developing countries, since the objective was to assess the country's ability to compete with other industrialized nations in the global economy. The study aimed to develop an assessment of the country's intellectual capital which, along with the more traditional focus on financial capital, could support an integrated and comprehensive view of the nation's assets as well as its potential for future growth. The study used Skandia's model for measuring intellectual capital, a model that had earlier been used to develop the Intellectual Capital Balance Sheet for Sweden.

Skandia Model for Measuring Intellectual Capital
In Skandia's view, intellectual capital denotes intangible assets including customer/market capital, process capital, human capital, and renewal and development capital. The value of intellectual capital is represented by the potential financial returns attributable to these intangible or non-financial assets. The Skandia model attempts to provide an integrated and comprehensive picture of both financial capital and intellectual capital. Generally, national economic indicators supported by hard quantitative data are used for examining the internal and external processes occurring in a country. However, the model questioned whether such indicators provide a full and accurate assessment of the country's assets and an indication of its potential for future growth. In doing so, it developed the framework of intellectual capital as a complement to financial capital.
In this model, there are four components of intellectual capital: market capital (also denoted as customer capital); process capital; human capital; and renewal and development capital. While financial capital reflects the nation's history and achievements of the past, intellectual capital represents the hidden national potential for future growth. The value chain according to Edvinsson and Malone (1997, p. 11) expresses the various components of market value on the basis of the following model:

Market Value = Financial Capital + Intellectual Capital

The key determinants of hidden national value, or national intellectual capital, are human and structural capital, defined thus:

Intellectual Capital = Human Capital + Structural Capital

Human Capital: The combined knowledge, skill, innovativeness, and ability of the nation's individuals to meet the tasks at hand, including values, culture and philosophy. This includes knowledge, wisdom, expertise, intuition, and the ability of individuals to realize national tasks and goals. Human capital is the property of individuals; it cannot be owned by the [organization or] nation.

Structural Capital: Structural capital signifies the knowledge assets that remain when human capital – the property of individual members – is set aside. It includes organizational capital and customer capital [also known as market capital]. Unlike human capital, structural capital can be owned by the nation and can be traded.

Structural Capital = Market Capital + Organizational Capital

Market Capital: In the original model, applied to market enterprises, this component of intellectual capital was referred to as customer capital, representing the value embedded in the firm's relationships with its customers. In the context of national intellectual assets, it is referred to as market capital to signify the market and trade relationships the nation holds within global markets with its customers and suppliers.

Organizational Capital: National capabilities in the form of hardware, software, databases, organizational structures, patents, trademarks, and everything else among the nation's capabilities that supports individual productivity through the sharing and transmission of knowledge. Organizational capital consists of two components: process capital and renewal and development capital.
Organizational Capital = Process Capital + Renewal & Development Capital

Process Capital: National processes, activities, and related infrastructure for the creation, sharing, transmission and dissemination of knowledge that contribute to individual knowledge workers' productivity.

Renewal and Development Capital: This component of intellectual capital reflects the nation's capabilities and actual investments for future growth, such as research and development, patents, trademarks, and start-up companies, that may be considered determinants of national competence in future markets.

Figure 1: Components of Intellectual Capital (based upon Edvinsson & Malone, 1997). [The figure depicts the hierarchy: Market Value splits into Financial Capital and Intellectual Capital; Intellectual Capital into Human Capital and Structural Capital; Structural Capital into Market Capital and Organizational Capital; and Organizational Capital into Process Capital and Renewal & Development Capital.]
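Chaining the four identities above, the model's full additive decomposition follows by substitution:

Market Value = Financial Capital + Human Capital + Market Capital + Process Capital + Renewal & Development Capital

This is consistent with the five clusters of the nation's balance sheet used later in this article: financial, market, process, human, and renewal and development capital.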
In the context of the national intellectual capital assessment, while financial capital reflects the nation's history and achievements of the past:
1. Process capital and market capital are the components upon which the nation's present operations are based;
2. Renewal and development capital determines how the nation prepares for the future; and,
3. Human capital lies at the crux of intellectual capital. It is embedded in the capabilities, expertise and wisdom of the people and represents the necessary lever that enables value creation from all the other components.

Process of Measuring Intellectual Assets
This article provides an overview of the various factors taken into consideration in assessing Israel's national intellectual assets. The details of the study and the related statistical data about Israel are the subject of the report The Intellectual Capital of the State of Israel (Pasher, 1999). Discussion here focuses on the key aspects of the national intellectual capital assessment process, with the motivation of providing a general framework that could be adapted for similar assessments of other national economies and businesses.
Figure 2: Financial Capital and Intellectual Capital (based upon Edvinsson & Malone, 1997). [The figure maps each component to a time horizon: financial capital reflects the past; market capital, process capital and human capital constitute the intellectual capital (IC) underpinning the present; renewal and development capital points to the future.]
The process of assessment of national intellectual assets as applied in the case of Israel comprised four phases: developing a vision of the nation's future; identifying the core competencies needed to realize the vision; identifying the key success factors for those competencies; and identifying the key indicators for the key success factors. The vision for the country's future was identified through brainstorming sessions and interviews with national leaders in fields relevant to the country's future growth and performance, as well as with young leaders whose views were relevant to the country's future progress. The core competencies devolved from the above process and its participants. These competencies were mapped in the form of clusters along each of the dimensions of intellectual capital based on Skandia's model discussed earlier. The key success factors, the most important determinants of the respective competencies needed for future performance, were then identified. Finally, specific indicators considered reliable measures of the critical success factors were determined, based on analysis of historical data as well as of the results of the brainstorming sessions and interviews.

The study found that Israel's vision centers on substantiating its position as a developed, modern, democratic and pluralistic nation attractive to world Jewry, investors, tourists and its own citizens. Two key areas determined to be relevant to Israel's future growth and progress were enhancing the quality of life of its citizens, and making the country attractive to future generations by improving its standing among developed nations. While the former goal could be achieved through cultural and regulatory interventions, the latter was to be achieved through economic growth fueled by knowledge-based industries. It was also determined that both of these growth-related areas would depend upon the country's capability in nurturing peaceful relations in a geographical region that has been characterized by periodic inter-country wars.
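The four-phase cascade just described is hierarchical: a vision decomposes into core competencies, each with key success factors, each measured by indicators. A minimal sketch of that structure, with entries that are purely illustrative rather than the study's actual findings:

    # Hypothetical, abbreviated cascade; the entries are illustrative only.
    assessment = {
        "vision": "a developed, pluralistic nation attractive to citizens and investors",
        "core_competencies": {
            "knowledge-based industry": {
                "key_success_factors": {
                    "entrepreneurship": {
                        "indicators": ["venture capital raised",
                                       "incubator graduation rate"],
                    },
                },
            },
        },
    }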
The study identified the key competencies necessary for the nation's current and future performance and clustered them along the five components of the nation's balance sheet: financial capital, market capital, process capital, human capital, and renewal and development capital. The specific indicators identified for each component represent the criteria of Israel's long-term competitive strength in comparison with other countries. As noted earlier, the specific criteria used as indicators of each component may differ for other countries.

Financial Capital: As noted before, financial capital is an indicator of a nation's past success and achievements. The valuation of assets as they appear on a traditional balance sheet does not reflect the nation's real value as assessed by the global market. This component of the nation's balance sheet is based upon past performance and statistical data that express the rate of change in tangible assets. Such factors include gross domestic product (GDP), dollar exchange rate, external debt, unemployment, productivity rates within various sectors of the national economy, breakdown of exports according to industries, and inflation.

Gross Domestic Product (GDP): This indicator represents the total value of all services and goods produced in the country. The change in GDP per capita (in real terms) represents the change in citizens' well-being and in the country's economic strength. Since its origin, Israel has enjoyed rapid economic growth: its GDP per capita (in real terms) grew from $3,500 annually in 1950 to $17,200 in 1995, although interrupted by stagnation and recession in 1996. In terms of purchasing power, this change amounts to an increase of 370%, reflecting a narrowed gap in the standard of living between Israel and the developed countries.

Dollar Exchange Rate: As in other national economies, an inflationary process leads to an increase in the cost of domestically produced goods and services, a relative decrease in the prices of imported products and services, and a devaluation of the domestic currency. Israel's high level of inflation resulted in the devaluation of its currency in the past, although inflation has been controlled in recent years.

External Debt: Due to the financial crisis of the 1980s, Israel's net external debt rose to 75% of GDP in 1985. This indicator decreased until 1993, when it rose again due to loans associated with absorbing a large wave of immigrants. Eventually these loans translated into increased production, restoring the country's external debt-to-GDP ratio to about 25%.

Unemployment: Higher employment enables a national economy to maximize production efficiency from its existing resources. Until 1985, unemployment levels in the Israeli economy were below 5%, when they started rising due to an influx of immigrants. After peaking at 11.5% in 1992, these levels fell again and in 1997 were lower than in most industrialized nations.
Productivity Within Various Economic Sectors: Over the decade 1986-1996, Israel's agricultural productivity grew at an annual rate of 8%. In the post-1990s era, the productivity of the industrial sector has been growing at a moderate average annual rate of 1.5%, a slowdown resulting from structural changes in the industry. In the commercial and services sector, the average annual growth rate has been about 2%, with the greatest growth in financial and business services as production has shifted from traditional sectors to more sophisticated, knowledge-based sectors.

Breakdown of Exports According to Industries: Exports have reflected production in the various economic sectors. Coming from an agriculture-intensive background, in 1950 agricultural products accounted for 70% of Israel's $50 million in exported goods. The transition from a developing economy to a developed nation has been characterized by a shift in production and exports to knowledge-intensive economic sectors such as electronic products, computer software, and pharmaceuticals. In 1994, agricultural products accounted for only half a billion dollars of $25 billion in exported goods and services. In 1997, hi-tech exports constituted 33% of Israel's total exports.

Inflation: The 1980s were characterized by very high inflation rates in Israel, reaching a magnitude of 450% in 1984 and causing economic imbalance. Concerted efforts to reduce inflation thereafter resulted in dramatic decreases, bringing the inflation rate to about 20% in 1986, 10% in 1996 and 7% in 1997.

The study asserts that Israel's economic history and the economic picture of the mid-1990s do not provide an accurate assessment of the country's true growth potential. Hence, there is a need to consider the country's core competencies and key success factors in the form of intellectual capital, which provides it with long-term advantage in terms of future growth and performance. Such core competencies are delineated in the form of market capital, process capital, human capital, and renewal and development capital.

Market Capital: Market capital reflects the intellectual capital embedded in Israel's relations with other countries. The intellectual assets in this area derive from a country's capabilities and successes in providing attractive and competitive solutions to the needs of international clients. Israel's investments and achievements in foreign relations, along with its export of quality products and services, contribute significantly to the intangible assets that comprise its market capital. Indicators of market capital include outgoing tourism, openness to foreign cultures, international events and language skills. Such core capabilities create a basis for assessing the country's attractiveness from the perspective of international clients.

Providing Solutions to Market Needs: Given a dynamic business environment characterized by changing customer needs, a country's capability in meeting
such needs represents a competitive edge in the global marketplace. Israel is ranked among the top countries in the speed with which new products and services are introduced and penetrate the market.

International Events: The country's level of participation in international events is an indicator of its strong desire for renewal as well as its openness and willingness to gain knowledge. Given its high rate of participation, Israel is seen as having tremendous motivation to expose itself to new intellectual fronts. In addition, the high rate of hosting international conferences in Israel is an indicator of Israel's attractiveness to business people from around the world. This indicator reflects the extent of Israel's international openness and the increasing interest of international entities in Israel.

Openness to Different Cultures: People's desire to meet others, learn, see, broaden their horizons, and develop and renew themselves may be considered another indicator of market capital. Such openness of Israel's citizens toward different cultures constitutes an important channel of communication in learning about trends and needs in the global village.

Language Skills: Knowledge of foreign languages alleviates problems of communication both in the local culture and in the global market. There is a realization in Israeli society that the willingness to learn languages contributes greatly to a country's relations with other countries. Accordingly, Israeli schools are rated highly in the professional teaching of foreign languages.

Process Capital: This component represents the country's intellectual assets that support its present activities, including the sharing, exchange, flow, growth and transformation of knowledge from human capital to structural capital. Such assets include information systems, laboratories, technology, management attention and procedures. A nation's long-term growth can be achieved if human capital is integrated within existing structural systems. Such integration through information and communication systems enhances the nation's capability to anticipate market needs and translate them into product and service applications. Information technology serves as a key tool for the production of high-quality products and services and for opening access channels to new markets. Indicators of process capital include communications and computerization, education, agriculture, management, employment, development of the service sector and absorption of immigrants.

Communications and Computerization: A strong infrastructure for domestic and international communications between the nation's citizens and the rest of the world facilitates the rapid exchange of information and its translation into the knowledge inherent in innovative processes, products and services. Parameters that may be used to assess this indicator include communications and computerization infrastructure, extent of Internet use, circulation of daily newspapers, and extent of software use.
• Communications and Computerization Infrastructure: An index of computer infrastructure that measured variables such as the number of PCs per capita and the number of PCs in homes and schools ranked Israel high among developed and developing countries. Similarly, an index of communications infrastructure that rates the level to which the communication infrastructure meets business organizations' needs ranks Israel ahead of developed countries such as Germany, Japan, Belgium and Italy.
• Extent of Internet Use: Internet use makes it possible to rapidly share information and to communicate and collaborate even across geography and time zones. The report asserts that the extent of Internet use is also an important indicator of a country's effective management of knowledge. An index that measured the extent of Internet use relative to population size ranks Israel high within the list of developed nations.
• Circulation of Daily Newspapers: Per capita newspaper distribution is assumed to be another indicator of the level of knowledge sharing and involvement in happenings around the world. According to a World Bank report, Israel ranks high on the list of nations with the highest per capita newspaper distribution.
• Extent of Software Use: The extent of software use reflects the level of knowledge sharing and the effort to turn human capital into structural capital. It also serves as an indicator of the quality of the country's current infrastructure for the effective management of information and knowledge. An index based on the relationship between expenditure on hardware and expenditure on software places Israel among the top ranks of developed nations.

Education: Education enhances knowledge sharing and the building and assimilation of mechanisms for the flow of knowledge in society. Three indicators used for assessing Israel's investments in education were: student-teacher ratio (lower is better), PC-student ratio (higher is better), and freedom of expression in the school system. Based on available data and national surveys, Israel ranks high on all these criteria.

Agriculture: In making the transition from a developing country to a developed nation, Israel – like other developed nations – has shown greater focus on knowledge- and service-based industry with diminishing emphasis on agriculture. However, technological innovation in the agricultural sector has produced higher efficiencies, resulting in higher added value per agricultural worker.

Management: The quality of management in a nation's economy is an important determinant of the future health of its enterprises and of long-term comparative advantage. Three criteria used in the study for assessing this aspect of Israel's intellectual capital were: top management's international experience; entrepreneurship and risk-taking; and venture capital funding.
• Top Management International Experience: The international experience of management gives the country's enterprises a better ability to penetrate global markets and exploit opportunities.
• Entrepreneurship and Risk Taking: Government support of entrepreneurship and risk-taking through financing is necessary for technological innovation. Israel has championed such a program to support technological incubators for raising financing at the early stage, when a technological idea is considered too high-risk for private sector funding. Israel's high success rate, with 56% of companies graduating from the incubator stage, compares favorably with other countries such as the USA, where the success rate is 10%.
• Venture Capital Funding: A venture capital fund is an important basis for supporting entrepreneurship and ensuring the success of start-ups. Israel has been successful in cultivating a number of hi-tech enterprises because of its infrastructure and venture capital funds, which invest in start-up companies.

Employment: Israel owes much of its economic growth to its service industry, which has enjoyed a high growth rate compared to other economic sectors. The financial and business sector, characterized by a relatively small number of employees and the application of advanced information and communication technologies, has been leading in production output among the various service sectors. Israel ranked high in the average annual growth rate of the service sector over the past decade or so, suggesting a greater share of experience and knowledge in the nation's economy. Israel also ranks high in computer skills among the developed nations, an indicator of the superior use of its information technologies.

Development of the Service Sector: The trend of an increasing percentage of commercial services based on the development of advanced, knowledge-based sectors is common among developed nations. The high growth rate of Israel's service sector – characterized by its contribution to GDP, investments in R&D, high yield on invested capital, and the productivity, wages and percentage of exports in this sector – points to growth in knowledge-based fields.

Immigration and Absorption: Successful integration of highly skilled and professional immigrants is a key factor in the country's ability to benefit from immigration and its human capital. Sustained migration of high-quality scientists and professionals into Israel's economy and their successful absorption have resulted in a consistent increase in GDP.

Human Capital: Human capital, as noted earlier, lies at the crux of intellectual capital. It constitutes the capabilities of the nation's people as reflected in education, experience, knowledge, intuition and expertise. Human capital embodies the key success factors that provide a competitive edge in the past, present, and future, and it is the most important component in value creation.
However, due to the "soft" nature of these assets, it is often difficult to devise measures for many of them. As noted by Pasher (1999): "The analysis is especially complex when dealing with wisdom, intellect, experience and knowledge. The attempt to assess wisdom or motivation ultimately differs from the quantitative evaluation of 'hard' assets, such as the extent of personal computer use or the proportion of employees in R&D." Despite the acknowledged difficulty of measuring such assets, the study considers the following factors as key indicators of human capital.

Education: This component is assessed in terms of the percentage (and growth in the percentage) of students holding, or working towards, advanced degrees (including certification studies), and the number of graduates and holders of doctorate degrees in fields considered fundamental for long-term growth, including computer sciences, life sciences, and engineering.

Equal Opportunities: The study asserts that a country that grants its citizens equal opportunity to utilize their inherent human resources wisely generates greater human capital. The indicators used to measure this component were female students at institutions of higher education and women in the professional work force, two criteria on which Israel ranks high among the developed nations.

Culture: This factor was based on two indicators: the number of published books per 100,000 inhabitants and the annual number of museum visits per capita.

Health: Maintaining good living conditions while guaranteeing the population a decent level of health was considered important for maintaining the nation's attractiveness to its citizens.

Crime: A low rate of crime was considered a positive correlate of human capital, given that fewer resources need be directed to fighting crime and more positive contributions can be made to society.

Renewal and Development Capital: Renewal and development capital reflects the country's desire and ability to improve and renew itself in order to progress. Early identification of changes in the dynamic business environment, and their translation into business opportunities, contributes to the nation's future growth and performance. The six indicators used for this component of intellectual capital in the study included the following.
• National Expenditure on Civilian R&D: Investments in civilian R&D are expected to facilitate the incubation of innovative ideas and their translation into value-adding products and services that contribute to future economic growth.
• Scientific Publications in the World: The extent of scientific activity, represented in terms of scientific publications, and the quality of that activity, in terms of citations by other scientists, are considered another indicator of renewal and development capital.
• Registration of Patents: In terms of per capita patent registrations, Israel ranks high among developed nations.
• Work Force Employed in R&D: Human capital in technological fields is considered Israel's most important success factor.
• Start-up Companies: The study reports that Israel has the third largest concentration of start-up companies in the world, behind only Silicon Valley and the Boston area.
• Biotechnology Companies: Considered one of the industries that reflects a country's scientific and technological progressiveness, the biotech sector is another indicator of renewal and development capital. This is an area of emerging growth for Israel.

Synopsis of Israel's Intellectual Capital Assessment

The reported study and its assessment of the national intellectual capital of Israel represented an initial attempt at presenting a holistic and organized picture of the knowledge and intellectual assets of a country. The distinction between financial capital and intellectual capital was underscored to suggest that while the former is a reflection of the country's past progress and achievements, the latter provides a more accurate depiction of future growth and performance. The expectation was that the report would be used by government and other policy-makers to upgrade tools for exploiting knowledge and thereby accelerate long-term economic and social growth. In addition, the focus on intellectual capital, and on its key components and indicators, brings into perspective key areas in which the country has growth potential. As noted by the investigators, the national intellectual capital balance sheet needs to be updated every year, with reassessment of the key success factors and related indicators.
DISCUSSION AND ISSUES FOR FUTURE RESEARCH

The reported study used specific indicators of the various components of intellectual capital that represent critical success factors pertinent to long-term future success and growth. However, such indicators may vary across nation states depending upon their specific national economic strengths in the global market. Also, the case study discussed one popular method for the assessment of national intellectual capital and illustrated its application. This does not imply that it is the only method that may be used for such assessment. Diverse methods have been applied for the assessment of intellectual capital at the level of the business enterprise, and they may be extrapolated to similar assessments at the level of nations and countries (see, for instance, Society of Management Accountants of Canada, 1999, for a review of some of these methods). For national policymakers who plan to undertake intellectual capital assessments for their national
economies, another document of interest is the Netherlands Government's Ministry of Economic Affairs pilot project, "Balancing Accounts with Knowledge," which provides a comparison of the methodologies used by four different accounting firms (Government of Netherlands Ministry of Economic Affairs, 1999). While the presented framework of intellectual capital and the illustrative case study have merit in communicating these issues to information professionals, they also raise important issues for advancing research and practice in information systems. From the perspective of information professionals and researchers interested in strategic, organizational, and behavioral issues, such issues provide avenues for advancing the understanding of knowledge assets and intellectual capital. The following discussion provides a brief synopsis of such issues for future research.

Information, Knowledge and Performance

Several practitioners and researchers have acknowledged that tacit knowledge is a key component of intellectual capital. However, superficial distinctions between data, information, and knowledge are often criticized, as one person's data could be another person's knowledge. Or, to put it in one such critic's terms (Stewart, 1997): "Knowledge exists in the eye of the beholder." Does this imply that information professionals and researchers can do nothing about the management of knowledge assets or intangible assets? Not necessarily so! As noted earlier, knowledge assets, like money or equipment, exist and are worth cultivating only in the context of strategy. Keeping in perspective the [future] outcomes-driven focus of intellectual capital, rather than focusing upon information or information technology, one needs to focus upon "what gets done" with that information. This shift in perspective would bring the focus closer to performance, which is the key motivation for investments in information and technology. Although one person's data may be another person's knowledge, that distinction may spell the difference between effective use, misuse, abuse, or non-use of information. Hence, it is important to understand why the same information often results in different actions (or inactions) when processed by different individuals. Seminal work in this area by Malhotra and Kirsch (1996), Malhotra and Galletta (1999), and Malhotra (1999) could serve as a basis for further understanding of how information and knowledge relate to performance.

Taking a Hard Look at the "Soft Issues"

Human capital lies at the crux of intellectual capital. It is embedded in the capabilities, expertise, and wisdom of people and represents the necessary lever that enables value creation from all other components. Several practitioners and
researchers have acknowledged that human capital, often characterized by "soft" issues such as individual motivation and commitment, is difficult to measure. The same assumption has often resulted in the use of inappropriate surrogates for such "soft issues." Given the relevance of such soft issues, the author recommends that researchers and practitioners develop more rigorous measures of such constructs. Seminal work by Malhotra (1998), which sought to develop "hard" measures for such "soft" issues in the context of effective use of information systems, could provide a base for developing a better understanding of human capital. Based on this work, one may argue that many published accounts have incorrectly assumed that "organizational capital" is what remains after the employees "go home." Based on existing research on motivation, compliance, and commitment, one may argue that many employees may be on the job but still "at home," while others may telecommute from home and yet contribute more to human capital. In essence, given the increasing importance of knowledge work, the post-industrial concepts of organization and work need to be reconsidered in the same hard terms of "outcomes" and "performance."

Intellectual Capital Entangled with Networked Systems

Several popular accounts of the intellectual capital framework, including the one discussed in this article, have taken a simplistic view of the role of information systems. For instance, many such accounts have assumed that information systems, hardware, software, and databases form a part of structural capital or process capital. However, it is the author's argument that, given the new networked economy and the advent of "free agents" and "knowledge intrapreneurs" (Malhotra, 2000d), individual education, knowledge, and experience are increasingly tied to personal pursuits and quality of life. In essence, as the new workers empower themselves by appropriating the networked technologies, they assume self-control and self-leadership over their own development, regardless of their affiliation with a "closed" concept of an organization or a nation. In other words, they become denizens of the global electronic village. Similarly, with increasing automation, production processes become increasingly efficient; however, the ability to produce, as such, does not generate sufficient market differentiation. The focus shifts towards excellence in marketing, product development, quality assurance, and customer management, as is evident from the recent popularity of e-business issues such as customer relationship management, supply chain management, and selling chain management (Kalakota and Robinson, 1999). The role of knowledge management and information systems in developing new market niches, creating and distributing innovative products, and ensuring the "stickiness" of "portals" by cultivating the loyalty of customers has also been recognized (cf. Malhotra, 2000c). Hence, information and communication
systems also become a key part of the market capital, as well as the renewal and development capital, with the increasing "virtualization" of products, processes, and delivery agents (Turban et al., 1999).

Post-Industrialization of Intellectual Capital Measures

As suggested by existing research in information systems, investments in information technologies may not necessarily correlate with increases in performance (Brown, 1996; Strassmann, 1997). Hence, in all such contexts, the emphasis should be not only on investments in relevant technologies but also on the effective utilization of such technologies. A large number of desktops or PCs may not necessarily correlate with higher performance in terms of outcomes. In other words, the concept of "intellectual capital" is based on the notion of "intangible assets"; however, many of the indicators seem to be grounded in the world of "tangible assets." For instance, the use of an indicator such as per capita distribution of newspapers needs to be reassessed, given that such information is no longer a "scarce good" but an "abundant product." Those not subscribing to any print-based publications may be using more current and varied push- and pull-based channels (many of which are free) for staying on top of what is important and relevant to them. Similarly, the number of scientific publications and citations needs to be assessed for its relevance to "real outcomes" in the form of economic growth or performance. As many authors have demonstrated (Kealey, 1996; Sobel, 1996), there is convincing evidence that the new knowledge (and its economic value) generated in the course of technological or application-oriented research far outweighs that of basic research. The latter is the subject of publications, while the former is not. As peer recognition is traditionally based on the number of publications and citations, the wrong conclusion is inevitably drawn that basic research adds more to the body of knowledge than technological or application-oriented research.
CONCLUSIONS

The transition of most developing and developed nations to knowledge economies has resulted in an increasing awareness of "knowledge" as a key lever for economic growth and performance. Despite the increasing importance of knowledge as a factor of production, most accounting systems are still based on the traditional factors of production. While accountants have been trying to determine how to capitalize the knowledge assets captive in the minds of human employees, information system designers have been attempting to capture those assets in technology-based databases and programmed logic.
The article discussed a framework for developing an understanding of intellectual capital and knowledge assets and provided an illustrative case study of a nation state that has applied this assessment method. The framework of intellectual capital – popularized by the Swedish company Skandia – was described and then illustrated through its application to the national intellectual capital assessment of Israel. In an attempt to bridge the gap between accountants and information resource management practitioners and researchers, some caveats were observed. These caveats were explained in the discussion as points deserving attention in future research and practice. One important issue that was not discussed in the article is that of fundamental and radical change, which requires ongoing reassessment of all given models, frameworks, premises, and assumptions. This issue is discussed in detail elsewhere (Malhotra, 2000c). Such dynamic, radical, and discontinuous change has significant implications for the stability of models and frameworks that are based on a static view of the business environment.
REFERENCES

Brown, J.S. (1996-1997). The Human Factor, Information Strategy, December 1996-January 1997.
Edvinsson, L. and Malone, M.S. (1997). Intellectual Capital, Harper Collins, New York, NY, 5.
Government of Netherlands Ministry of Economic Affairs, Directorate-General for Economic Structure, Technology Policy Department (1999). Balancing Accounts with Knowledge, VOS number 25B 19a, The Hague, Netherlands, October. [Available in PDF format from http://info.minez.nl]
Hope, J. and Hope, T. (1997). Competing in the Third Wave, Harvard Business School Press, Boston, MA, 12.
Kalakota, R. and Robinson, M. (1999). e-Business: Roadmap for Success, Addison Wesley, Reading, MA.
Kealey, T. (1996). The Economic Laws of Scientific Research, St. Martin's Press, Inc.
Malhotra, Y. (1998). Role of Social Influence, Self Determination and Quality of Use in Information Technology Acceptance and Utilization: A Theoretical Framework and Empirical Field Study, Ph.D. thesis, July, Katz Graduate School of Business, University of Pittsburgh, 225 pages.
Malhotra, Y. (1999). Bringing the Adopter Back Into the Adoption Process: A Personal Construction Framework of Information Technology Adoption, Journal of High Technology Management Research, 10(1).
Malhotra, Y. (2000a). From Information Management to Knowledge Management: Beyond the 'Hi-Tech Hidebound' Systems, in K. Srikantaiah and M.E.D. Koenig (Eds.), Knowledge Management for the Information Professional, Information Today, Inc., Medford, NJ, 37-61.
Malhotra, Y. (2000b). Knowledge Management and New Organization Forms: A Framework for Business Model Innovation, Information Resources Management Journal, 13(1), 5-14.
Malhotra, Y. (2000c). Knowledge Management for E-Business Performance: Advancing Information Strategy to 'Internet Time', Information Strategy: The Executive's Journal.
Malhotra, Y. (2000d). Information Ecology and Knowledge Management: Toward Knowledge Ecology for Hyperturbulent Organizational Environments, in Kiel, Douglas L. (Ed.), UNESCO Encyclopedia of Life Support Systems (EOLSS), theme: Knowledge Management, Organizational Intelligence and Learning, and Complexity.
Malhotra, Y. and Galletta, D.F. (1999). Extending the Technology Acceptance Model to Account for Social Influence: Theoretical Bases and Empirical Validation, in Proceedings of the Hawaii International Conference on System Sciences (HICSS 32) (Adoption and Diffusion of Collaborative Systems and Technology Minitrack), Maui, HI, January 5-8.
Malhotra, Y. and Kirsch, L. (1996). Personal Construct Analysis of Self-Control in IS Adoption: Empirical Evidence from Comparative Case Studies of IS Users and IS Champions, in Proceedings of the First INFORMS Conference on Information Systems and Technology (Organizational Adoption and Learning Track), Washington, D.C., May 5-8, 105-114.
Pasher, E. (1999). The Intellectual Capital of the State of Israel: A Look to the Future – The Hidden Values of the Desert, Herzlia Pituach, Israel.
San Jose Mercury News (2000). Presidential Visit Marks India's Clout: Clinton to Push Economic Ties, Reforms, February 22. [www.mercurycenter.com]
Sobel, D. (1996). Longitude, Fourth Estate Limited, London.
The Society of Management Accountants of Canada (1999). Measuring Knowledge Assets (Management Accounting Guideline Focus Group Handout), Ed. M. Tanaszi and J. Duffy, Toronto, Ontario, April 16.
Stewart, T. (1995). Trying to Grasp the Intangible, Fortune, October 2.
Stewart, T. (1997). Intellectual Capital: The New Wealth of Organizations, Doubleday, New York, NY.
Strassmann, P.A. (1997). The Squandered Computer: Evaluating the Business Alignment of Information Technologies, Information Economics Press, New Canaan, CT.
Turban, E., Lee, J., King, D., and Chung, H.M. (1999). Electronic Commerce: A Managerial Perspective, Prentice Hall, New York, NY.
Chapter 4
Knowledge-Based Systems as Database Design Tools: A Comparative Study

W. Amber Lo
Millersville University & Knowledge-Based Systems, Inc., USA

Joobin Choobineh
Texas A&M University, USA
Previously published in the Journal of Database Management, vol. 10, no. 3. Copyright © 1999, Idea Group Publishing.

The database design process is a knowledge-intensive task that requires expertise, practical experience, and judgment. It is not surprising, therefore, that over the last few years many research-prototype database design expert systems have been reported in the literature. This paper is a survey of such tools. The tools are compared with respect to four major aspects: database design support, tool flexibility, expert system features, and implementation characteristics. The study reveals that, in general, there is a lack of 1) support for all the phases of design, 2) support for group database design, 3) graphic support, 4) empirical verification of the effectiveness of the tools, 5) long-term maintenance of the tools and database schemata, and 6) specialized knowledge representation schemes, inference, and learning techniques.

A database is a collection of related data that represents some relevant reality. The task of designing a database has traditionally been performed manually by database designers. The design process turns informal end-user requirements into the design of static database structures, the specification of integrity rules, and, possibly, the specification of dynamic aspects of the data (transactions and queries to be made). The four major steps of database design
are Requirements Collection, Conceptual Design, Logical Design, and Physical Design. This process is complex and error-prone. With the maturity of AI techniques, there is significant potential to automate parts or all of the design process by developing knowledge-based systems as design tools.

The following terms will be used throughout the paper. A "database design methodology" is a system of principles, procedures, techniques, rules, data models, tools, documentation, planning, management, control, and evaluation applied to the entire design process. A methodology should describe each of the above components in detail (Maddison et al. 1983, p. 4). A "database design technique" is a systematic procedure by which a complex task within a step of the database design process is performed. A "data model" is a set of logical concepts that can be used to describe the structure of a database. It should consist of two parts: a notation for describing data and a set of operations used to manipulate that data (Ullman, 1988, p. 32). A data model usually is not as comprehensive as a methodology. It has a set of notations and a method of using them; however, it usually lacks a unified step-by-step guideline on how to use the concepts to represent a database structure. A "database design tool" is computer software used to perform or assist in one or more of the steps of the database design process. A tool is based on a data model or design technique; this can enhance the validity and uniformity of the design.

The objectives of this paper are threefold: first, to define desirable features of a knowledge-based database design tool; second, to compare 23 existing knowledge-based systems so as to provide an overview and evaluation of the current state of research; and third, to identify future research directions in the development of computer-aided software engineering (CASE) tools for database design. Desirable database design tool features are reviewed in the next section. The major body of the paper is contained in the third section, where 23 existing intelligent tools are compared. Based on this survey, we conclude the paper with a discussion of research progress and future research directions in section four.
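To make these terms concrete, the short sketch below (in Python; it is an illustration added for the reader, not code from the paper or from any of the tools discussed, and every name in it is hypothetical) represents a tiny E-R data model and applies one classic design technique: mapping entity types and binary 1:N relationship types to relation schemas, as a Logical Design step would.

from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    attributes: list   # attribute names
    key: str           # primary key attribute

@dataclass
class Relationship:    # a binary 1:N relationship type
    name: str
    one_side: Entity   # the "1" side
    many_side: Entity  # the "N" side

def to_relational(entities, relationships):
    """One relation per entity type; each 1:N relationship is represented
    by placing the key of the "1" side as a foreign key on the "N" side."""
    relations = {e.name: list(e.attributes) for e in entities}
    for r in relationships:
        if r.one_side.key not in relations[r.many_side.name]:
            relations[r.many_side.name].append(r.one_side.key)
    return relations

dept = Entity("Department", ["dept_id", "dept_name"], key="dept_id")
emp = Entity("Employee", ["emp_id", "emp_name"], key="emp_id")
print(to_relational([dept, emp], [Relationship("WorksIn", dept, emp)]))
# {'Department': ['dept_id', 'dept_name'],
#  'Employee': ['emp_id', 'emp_name', 'dept_id']}

A knowledge-based tool differs from this fixed procedure mainly in that the mapping rules would reside in an updatable knowledge base rather than being hard-coded.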
DESIRABLE FEATURES OF KNOWLEDGE-BASED DATABASE DESIGN TOOLS

Based on published literature, we have chosen four distinct sets of desirable features of knowledge-based database design tools. These are database design support, tool flexibility, knowledge-based system features, and implementation features. A summary of these is presented in Figure 1.

Figure 1: Desirable Features of Database Design Tools
• Database Design Support: Design Steps Covered (Table 1); Support for View Integration (Table 1); Completeness of the Output (Table 2); Data Model Used (Table 3); Extent of Support (Table 4)
• Tool Flexibility: Application Domain Independence, Data Model Independence, and Maintenance Features (Table 5)
• Knowledge-Based Systems Features: Knowledge Representation, Inference Mechanism, and Explanation of Actions/Decisions (Table 6)
• Implementation Features: Development Tool, User Interface, and System Testing (Table 7)

The extent of usefulness of database design support includes the number of design steps covered, support for view integration, completeness of the
output produced within the steps covered, the data models used, and the extent of the tool's support of database design. The more steps a tool can automate, the more comprehensive it is. View integration support is important because, in a multi-user database environment, the views of different user groups must first be documented and then integrated. Completeness of output matters because the more complete the output is, the more useful the system is to its users. The data models supported by a design tool are the theoretical bases of that tool. In order to be rigorous and produce acceptable and reproducible output, a tool must be well grounded in some database design theory. The extent of a tool's support of the design process depends on the level of design expertise possessed by the tool, which in turn may determine its users. Targeted users, with a given level of database design expertise, are an important factor in determining the design of the tool, the user interface, the amount of expertise to reside in the knowledge base, and the role of the system in supporting database design activities.

Tool flexibility issues include application domain independence, data model independence, and maintenance features. A flexible tool should be able to aid in designing databases for any application domain; a tool is application domain independent if it can be used for designing databases for any domain. As design techniques and data models may change over time, an ideal tool should be flexible enough to facilitate the incorporation of such future changes. A tool is independent of any data model if its knowledge base is built in a manner that it can accept and use a new data model without extensive
modifications, such that existing data models can be updated and new ones can be added. With more than one kind of data model residing in the knowledge base, a tool user can make use of different ones for the same application and compare the output produced. The comparison enables the designer to obtain deeper insights into the nature of the application, and it can help in deciding on a proper data model if that decision has not yet been made. A data model independent tool is more difficult to develop. More research and development is required because meta-level knowledge is needed to accept new models. Meta-level knowledge here is knowledge about the proper way of defining a data model so that its constructs, and the rules for using them, are rigorous and technically sound. With such tools, database design experts can enter the specification of any data model into the system, and database designers can make use of them to design databases. The last flexibility issue is tool maintenance. Maintenance facilities are special components of a knowledge-based system that are written for future reprogramming of the system. An example is a facility that enables insertion of a rule into a rule base through a menu-driven sub-system instead of direct insertion into the source code. A tool that includes maintenance features is more flexible and can save a substantial amount of human time and effort in the long run.

Knowledge-based system features include knowledge representation, inference mechanism, and explanation. Knowledge representation refers to the format in which a system stores its knowledge. Two important inference issues are the problem-solving approach used and whether a system can incorporate uncertainties. From a tool user's point of view, a very important knowledge-based system feature that a design tool should have is the ability to explain its actions and answer questions about database design. The availability of this feature is an important advantage of knowledge-based systems in overcoming the black-box problem.

Three implementation issues are discussed here: development environment, user interface, and system testing. Development environments can vary from programming languages that run on general-purpose machines to knowledge-based systems that run only on specialized machines. The user interface should be flexible and friendly. A tool is flexible if, in order to communicate the same idea, it offers different user interfaces for different kinds of tool users. A tool user who is very familiar with a tool may find a command-based interface more direct and convenient than having to go through several levels of menus to perform a task. On the other hand, menu-driven or window-based interfaces can guide a novice tool user while he or she is learning to use the tool. System-initiated question-and-answer in simple English phrases lets the system take charge and is simple to
understand. Graphic interface for database design tools usually involves either the input or output of a schema diagram to be analyzed or produced by the tool. A tool can use a combination of interfaces for different purposes. The rigor of a prototype is demonstrated through testing. System testing of any computer software should comprise both validation and verification. Validation is the process of ensuring that a system correctly solves the problem being addressed (Liebowitz 1986). It involves running some cases and comparing the output against known results or expert opinion. Verification is the process of ensuring that a computer system is useful to its targeted users (O’Keefe et al. 1987).
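Since most of the tools surveyed in the next section store design knowledge as IF-THEN rules, the following toy sketch (hypothetical Python, not taken from any surveyed system; the two design rules are simplified examples) shows what a data-driven inference cycle with an explanation trace might look like. The trace is the raw material an explanation facility can replay to answer "why" questions.

RULES = [
    ("R1", {"attribute is unique", "attribute is non-null"},
           "attribute is a candidate key"),
    ("R2", {"attribute is a candidate key", "designer prefers this attribute"},
           "attribute is the primary key"),
]

def forward_chain(initial_facts, rules):
    """Data-driven inference: fire every rule whose conditions are met,
    recording which rule derived which conclusion, until quiescence."""
    facts = set(initial_facts)
    trace = []                                   # (rule id, conclusion)
    changed = True
    while changed:
        changed = False
        for rule_id, conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append((rule_id, conclusion))
                changed = True
    return facts, trace

facts, trace = forward_chain(
    {"attribute is unique", "attribute is non-null",
     "designer prefers this attribute"}, RULES)
for rule_id, conclusion in trace:                # explanation trace
    print(rule_id, "derived:", conclusion)
# R1 derived: attribute is a candidate key
# R2 derived: attribute is the primary key

A goal-driven (backward-chaining) interpreter would instead start from a target conclusion and work back through the rules to the facts that support it.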
A COMPARATIVE SURVEY OF INTELLIGENT DATABASE DESIGN TOOLS

This section discusses the development of knowledge-based design tools and compares 23 systems that have been implemented and published in the literature to date. Computer-aided database design tools have been in use since the early 1970s. Beginning in the 1980s, researchers have been developing knowledge-based design tools with artificial intelligence (AI) technologies. There are four motivations for using this approach. First, knowledge-based systems capture human expertise, which is scarce and costly. Equipped with it, a knowledge-based system can take over some of the burden of inference, decision making, and continuous design validity checking of the evolving output schema. Second, the explanation facility of a knowledge-based system can minimize the black-box problem of conventional computer-aided design tools. It also serves to educate a tool user by explaining the rationale behind a given action. Third, the flexibility of updating the knowledge base of such a system is another important motivation. In rule-based systems, for instance, because individual rules are independent of each other, the addition, update, or deletion of a rule leaves other parts of the knowledge base relatively intact. This is in sharp contrast to procedural programs, where such changes are not as benign. In fact, a system that has an evolving and incremental knowledge base can improve its own performance over time. Lastly, the design process itself (particularly the first two steps) is a suitable problem domain for the knowledge-based system approach, which is suited to problems that are non-algorithmic, non-trivial, and not completely deterministic.

Tables 1 through 7 summarize the features of the twenty-three systems surveyed. A more detailed description of each is available from the authors. Systems for Physical Design are not included here. Interested readers can
refer to Bitton et al. (1985), Kao (1986), and Dabrowski et al. (1989). Moreover, systems that are for purposes other than designing a new database are not included. Examples include the system by Wohed (1990), which is used for the diagnosis of already designed database structures, and ERMCAT (Huffman and Zoeller, 1990), which is used for entity-relationship clustering. Lastly, systems that aid in designing object-oriented databases are excluded, as they involve an approach very different from that of traditional ones. Examples are A Tool for Modular Database Design (Tucherman et al., 1985; Casanova et al., 1991) and KRISYS (Mattos and Michaels, 1990).
A closely related work is that of Storey and Goldstein (1993), where 13 experimental tools are compared. In addition to including close to twice as many systems, the present survey provides additional dimensions of analysis for each. These include completeness of design output, data model independence, inference, availability of explanation, and user interface. Areas from that reference that are not included here are discussions of the source of knowledge of each surveyed system and the criteria to be used to classify a system as "knowledge-based." The two surveys complement each other. Similar, but not as comprehensive, surveys are those of Bouzeghoub (1992) and Reiner (1992). A related survey is that of Loucopoulos and Theodoulidis (1992), where various CASE tools are presented; the emphasis there is, however, on requirement specifications and reuse rather than on database design.

Database Design Support

Design Steps Covered. The four steps of database design are Requirements Collection, Conceptual Design, Logical Design, and Physical Design. We determined the steps covered by each tool in two ways. First, some are explicitly stated in the references cited. Second, the form of input received and output produced by a tool determines which steps are covered. Table 1 shows the design steps covered by various tools. The first three columns correspond to the first three steps in database design. The fourth column is view integration. Although we believe that view integration must be performed during Conceptual Design, we have assigned a separate column to it for two reasons. First, not all the systems perform view integration during Conceptual Design. Second, we wish to explicitly separate the systems that perform this important function from those that do not.

Table 1: Design Steps Covered by Various Tools
(RC = Requirements Collection; CD = Conceptual Design; LD = Logical Design; VI = View Integration)

System                                                             RC   CD   LD   VI
Consulting System for Database Design (Holsapple et al., 1982)     *    *    *    *
Expert System for Translating an E-R Diagram to Databases
  (Briand et al., 1985)                                                      *
GAMBIT (Braegger et al., 1984, 1985)                                    *
Computer-Aid for E-R Modeling (Hawryszkiewycz, 1985)                    *
SECSI (Bouzeghoub, 1992; Bouzeghoub and Gardarin, 1984, 1985)                *
CARS (Demo and Tilli, 1986)                                             *
ACME (Kersten et al., 1987)                                        *    *
EDDS (Choobineh et al., 1988, 1992)                                     *         *
VCS (Storey and Goldstein, 1988)                                   *    *    *
PROEX (Obretenov et al., 1988)                                          *    *
EXIS (Yasdi and Ziarko, 1988)                                           *
GESDD (Dogac et al., 1989)                                         *    *    *
Form Definition System (Tseng and Mannino, 1989; Choobineh
  et al., 1992)                                                    *
ERDDS (Springsteel and Chuang, 1989)                                         *
OPTIM_ER (Catarci and Ferrara, 1989)                                         *
VDIES (Civelek et al., 1989)                                            *         *
OICSI (Cauvet et al., 1990)                                        *    *
Knowledge-Based Information Analysis Support
  (Falkenberg et al., 1990)                                             *
Diet and Lochovsky (1990)                                               *         *
MODELLER (Tauzovich, 1990)                                              *
CHRIS (Tucherman et al., 1990)                                          *    *
FOBFUDD (Choobineh and Venkatraman, 1992)                                    *    *
CABSYDD (Lo et al., 1991; Lo, 1994; Lo and Choobineh, 1995a,
  1995b; Lo et al., 1996; Lo and Choobineh, 1998)                       *

Three systems cover all of the first three steps in the design process; they are the most comprehensive ones. Two cover Requirements Collection and Conceptual Design. Two systems cover only Conceptual and Logical Design. Ten tools cover only Conceptual Design. One tool, the Form Definition System, is solely for the front-end purpose of Requirements Collection. Five tools perform only Logical Design. In summary, the most popular step for automation is Conceptual Design: seventeen systems cover this step. Ten systems cover Logical Design and six cover Requirements Collection.

View Integration. As shown in Table 1, view integration is performed by five systems. All perform this function during Conceptual Design except FOBFUDD. Systems that do not perform it assume either that only a single view is being modeled or that view integration has been done prior to Requirements Collection.
Consulting System for Database Design accepts individual report descriptions as input to Conceptual Design; each report represents a view to be defined. Similarly, EDDS and the system of Diet and Lochovsky accept separate form descriptions as input, where each form is considered to be a single view. The ultimate database structure produced can be viewed as a global schema that incorporates the information needs of all end-users. These three tools produce the global schema without preserving the individual views in the corresponding conceptual data models. VDIES is a tool that puts its emphasis on view integration: it accepts separate view descriptions and integrates them to form a global schema while preserving the initial individual external schemata. Like EDDS, FOBFUDD takes business form structures as input. As output, it produces the set of functional dependencies inherent in the forms. Each form is considered a separate view that contributes to the integrated logical view.
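The form-by-form integration just described can be pictured with a minimal sketch (hypothetical Python, and a deliberate simplification: real tools must also resolve naming and structural conflicts between views, which this sketch ignores by matching entities on name alone).

def integrate_views(views):
    """Fold each single-view schema into an evolving global schema.
    Entities are matched by name; their attribute sets are unioned."""
    global_schema = {}                    # entity name -> attribute set
    for view in views:
        for entity, attrs in view.items():
            global_schema.setdefault(entity, set()).update(attrs)
    return global_schema

order_form = {"Customer": {"cust_id", "name"},
              "Order": {"order_no", "date", "cust_id"}}
invoice_form = {"Customer": {"cust_id", "billing_address"},
                "Invoice": {"invoice_no", "order_no", "amount"}}

print(integrate_views([order_form, invoice_form]))
# 'Customer' ends up with the union of the attributes seen on both forms.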
Completeness of Output. Besides covering a certain design step, another important factor in the support of database design is the output produced in the step covered. The output of the Requirements Collection step is a set of documented user requirements; if there are multiple views, there should be a separate output for each view. The output of Conceptual Design is one conceptual schema. It is the same as the global schema if there is only one view to be modeled. For a multi-user database, the output includes the different external schemata as well as a global schema, if view integration is performed. Some authors, including Elmasri and Navathe (1989, p. 467), suggest that the dynamic update operations of the database must also be specified in this step. The output of the Logical Design step for a single-view database is one logical schema. For a multi-user database, there should be a global schema with a set of external schemata in an implementation data model. Integrity constraints and dynamic database transactions should also be defined here.

Table 2 shows the output produced by all the systems. The three major columns correspond to the first three steps in database design. Minor columns under each major column correspond to the output of that step. An asterisk in a cell denotes that the system (in the row) produces the output (in the column heading); absence of an asterisk denotes that the corresponding output is not produced by that system.

Table 2: Completeness of Output
(Doc. = formal documentation of users' requirements; Sch. = schema; Ext. = external schemata; Int. = integrity constraints; Tr. = transaction definitions)

                                                Req.     Conceptual Design      Logical Design
System                                          Doc.   Sch.  Ext.  Int.  Tr.  Sch.  Ext.  Int.  Tr.
Consulting System for Database Design            *      *                      *
Expert System for Translating an E-R
  Diagram to Databases                                                         *
GAMBIT                                                  *          *     *
Computer-Aid for E-R Modeling                           *
SECSI                                                                          *           *
CARS                                                    *
ACME                                             *      *                *
EDDS                                                    *
VCS                                             (a)     *                      *
PROEX                                                   *                      *           *
EXIS                                                    *                *
GESDD                                            *      *          *     *    *           *     *
Form Definition System                           *
ERDDS                                                                          *           *
OPTIM_ER                                                                       *
VDIES                                                   *     *
OICSI                                            *      *          *     *
Knowledge-Based Information Analysis Support            *          *
Diet and Lochovsky                                      *          *     *
MODELLER                                                *          *     *
CHRIS                                                   *          *     *    *           *     *
FOBFUDD                                                                                    *
CABSYDD                                                 *

(a) VCS derives a conceptual schema directly, with no distinct set of requirements specifications.
For the 13 systems that cover single-view conceptual database design, the five most comprehensive systems (in terms of their output) are GAMBIT, GESDD, OICSI, MODELLER, and CHRIS. They all produce a conceptual schema, integrity constraints, and dynamic transaction descriptions. ACME produces both the conceptual schema and transaction specifications. Knowledge-Based Information Analysis Support produces the conceptual schema and integrity constraints. EXIS produces one conceptual schema and claims to model transactions, without further elaboration. The other systems (Computer-Aid for E-R Modeling, CARS, VCS, PROEX, and CABSYDD) produce only the basic conceptual schema.
For the four systems that include view integration in conceptual database design, the most comprehensive tool is that of Diet and Lochovsky. It produces a global schema, integrity constraints, and a set of basic transaction descriptions (insertion, update, and deletion). VDIES produces a global schema while preserving the individual external schemata. Consulting System for Database Design and EDDS produce a global schema as output. For the ten systems that cover Logical Design, the two most comprehensive tools are GESDD and CHRIS. They produce one logical schema, a set of integrity constraints, and transaction specifications. Three systems, ERDDS, SECSI, and PROEX, produce the logical schema and integrity constraints, but not the transaction specifications. FOBFUDD produces integrity constraints in the form of functional dependencies. Finally, the systems that produce only the basic logical schema are Consulting System for Database Design, Expert System for Translating an E-R Diagram to Databases, VCS, and OPTIM_ER.

Data Models Used in Each Step of Database Design. Table 3 shows the models used by the surveyed tools. The top three major headings of the table correspond to the first three steps of database design; under each of them are the form of input received and the output produced by that step. The absence of an entry in a cell indicates that the tool does not cover the corresponding step.

Table 3: Models Used by Different Tools (for each of the 23 tools, the input and output models of Requirements Collection, Conceptual Design, and Logical Design)

While Conceptual and Logical Design usually follow well-defined data models, user requirements can be expressed in many ways, either as output of Requirements Collection or as input to Conceptual Design. Different tools that deal with user requirements have chosen different approaches; here, they are classified into four categories. The first category collects and works on requirements from specific examples of the real world. Consulting System for Database Design, EDDS, Form Definition System, the system by Diet and Lochovsky, and FOBFUDD use instances or structures of actual documents to collect information for design. They bypass, as much as possible, the need to interview human end users to obtain the required information. This approach can avoid the problem of inexactness in specifications obtained from human users. OICSI accepts structured statements that describe specific instances of the relevant business activities. It then generalizes them to obtain a description of the end-users' needs, organized in a "syntactic tree" data structure. The second category obtains user requirements through natural language descriptions or short answers to questions (or menu prompts). ACME is the only system equipped with a natural language parser to analyze such sentences and derive descriptions of users' needs in structured statements called "predictions."
VCS obtains information through short answers to questions via menu prompts; a parser is not needed here. Assuming that the end users are not data model sophisticates, it avoids using data modeling terms when obtaining data from them. The third category demands that the user requirements be in a predefined format. Computer-Aid for E-R Modeling elicits design data from a human modeler using semantic modeling terms. EXIS requires a tool user to supply design data in structured statements or in a diagram. MODELLER accepts user requirements in a general-purpose knowledge acquisition language based on first-order logic. Knowledge-Based Information Analysis
Support uses "tables" to express user requirements. CARS uses "data and relations" to express user requirements; however, the actual form and examples of such documentation formats are not elaborated in its references. The fourth category demands that user requirements be supplied directly in the concepts of a conceptual data model. Systems that use this approach are GAMBIT, PROEX, VDIES, CHRIS, and CABSYDD. Later, in the discussion of Table 4, it will be shown that, except for CABSYDD, they participate in Conceptual Design in a passive way, because a tool user supplies input data already in terms of the output conceptual schema; in this way, they mainly act as documentation tools. As discussed later, CABSYDD can participate either in an active manner, with case-based reasoning, or in a passive manner if there is no existing case from the case library to be used as a basis for designing a new database.

Conceptual data models are used as a means to express both the output of Conceptual Design and the input to Logical Design. The E-R Model (Chen, 1976) and its various extended versions form the most popular family of conceptual data models. Systems that produce conceptual schemata in this data model include GAMBIT, Computer-Aid for E-R Modeling, CARS, ACME, EDDS, VCS, VDIES, the system of Diet and Lochovsky, MODELLER, CHRIS, and CABSYDD. Systems that accept conceptual schemata in this data model as input to Logical Design include Expert System for Translating an E-R Diagram to Databases, SECSI, VCS, ERDDS, OPTIM_ER, and CHRIS. Other conceptual models are used in Consulting System for Database Design, PROEX, OICSI, Knowledge-Based Information Analysis Support, and EXIS; they are listed in Table 3. GESDD can use any conceptual data model installed by a separate module of the system. The relational model is the most popular implementation model among the tools: six systems (VCS, PROEX, CHRIS, ERDDS, OPTIM_ER, and FOBFUDD) produce this model exclusively. Others use a variety of models, including a network model and its extensions and Bachman diagrams. GESDD is the most powerful because it can produce logical schemata in any of the three implementation data models.

Targeted Users and Extent of Support. Targeted users are classified as either database designers or end-users. Database designers should possess knowledge about database design, such as the proper usage of a certain data model. End-users are seen as novices in database design: they possess knowledge about their own application domains but are not familiar with the concepts of data modeling. In Table 4, the targeted users stated by the authors and the degree of involvement of the tools are shown.
The first column of Table 4 shows the targeted users of the tools. Fourteen systems are designed solely for database designers. Five systems are exclusively for end users. One system (Form Definition System) is developed with two different modes, one for end users and the other for database designers. Of the six systems that may be used directly by end users, four cover Requirements Collection. This may partially be due to the fact that Requirements Collection starts with end-users.
Table 4: Targeted Users and the Extent of Support of the Tools (for each of the 23 tools, the targeted users, database designers and/or end-users, and the active or passive mode of support in each of the first three design steps)

Although SECSI and EXIS both claim
to have end-users as targets, their emphases are respectively on Logical and Conceptual Design. This may not be consistent with the presumption that end-users are not familiar with data modeling. The last three columns of Table 4 correspond to the first three steps in database design. Each column shows the mode of support that a tool provides for that step; a blank entry denotes that the tool does not cover the corresponding step. According to its relative degree of involvement at each step of the design process, a tool plays either a "passive" or an "active" role. Similar to a conventional documentation tool, a passive database design tool lets a user take control of a design session, accepting input and performing design validity checking. In contrast, active involvement means that the tool can direct the sequence of defining the various constructs in a schema or make important inferences and decisions on behalf of the user. Eighteen of the twenty-three tools surveyed assume a completely active role in all the steps that they cover. Of these eighteen systems, four can switch between passive and active modes. As an example, Form Definition System can be actively involved in the production of form descriptions through inference on a particular given form instance; it can also accept form descriptions passively. Similarly, SECSI, EXIS, and CABSYDD can accept input passively in their data model or actively interact with a user in obtaining it. As shown in Table 4, four of the 23 systems (PROEX, GESDD, VDIES, and CHRIS) play different roles in the different steps that they cover. GAMBIT is the only tool that is completely passive; it is included in this survey because its knowledge base of design rules is used to check for consistency and design validity of the output schema. Overall, of the six tools that cover Requirements Collection, five can be classified as active. Of the 17 tools that cover Conceptual Design, 13 can be considered active. Finally, all ten systems that cover Logical Design actively turn conceptual schemata into logical schemata. There is a connection between the role that a tool plays and its targeted users: if the designated users are end users, the tool must actively participate in the design process, and all the tools targeted to end users are capable of taking an active role. As reviewed here, most intelligent database design tools are knowledgeable enough to play an active role in the database design process.

Tool Flexibility Issues

Application Domain Independence. As shown in the first column of Table 5, all the surveyed tools are application domain independent. All the
tools, except a later version of SECSI as reported in Bouzeghoub (1992) and CABSYDD, do not retain any domain-specific knowledge. In fact, accumulating such knowledge can serve as a basis for a tool to learn from its own experience. While SECSI retains specific domain knowledge within its data dictionary for later reuse, CABSYDD is the first system that makes use of case-based reasoning to retain and reuse schemata in a systematic manner. It has an evolving library (called a case base) of domain-specific schemata for its own use; its problem solving approach is discussed in detail later and sketched below. Advantages of a tool that embodies application domain knowledge in its knowledge base include a) ease of start-up of a design, b) knowledge about most of the required entities, and c) knowledge about the relationships between entities. Such knowledge enables the tool to remind the user of potentially missing items. This, in turn, makes the design process more efficient and its output more complete.

Data Model Independence. As shown in the second column of Table 5, only GESDD is data model independent. It provides menu-based facilities for design experts to supply new design techniques. Data model independence can be achieved through the implementation of a maintenance module, such as the one in GESDD (details are discussed later), or through some machine learning capability; from a certain point of view, the two overlap. These two issues are discussed in further detail below.

Maintenance Features. As shown in the third column of Table 5, SECSI, GESDD, and OICSI include system maintenance facilities for future extensions. SECSI provides an interactive interface that allows database design experts to modify and add design rules. This is done through a function called LEARN, which directly accepts rules in PROLOG. Similarly, OICSI claims to provide an expert interface for adding rules. GESDD has a more sophisticated maintenance module: a database design expert can define a new data model in terms of its "concepts," "fields," and "design rules," and once this definition is completed, a database designer can make use of it to design databases. Mechanisms to check the model validity of any new data model are not discussed.

Since "maintenance" is a means of improving a system, functionally it is very similar to machine learning concepts. When a computer system learns, it improves its performance at a designated task over time without reprogramming. According to Simon (1983), learning is any change in a system that allows it to improve its performance in a certain task. Such changes can be in the body of knowledge or in one's inference ability. The result can be solutions of higher quality or more economical problem solving methods (Tanimoto, 1987, p. 284).
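CABSYDD's case-based problem solving, mentioned above, can be pictured as retrieval of the most similar stored design followed by adaptation. The sketch below is a hypothetical Python rendering of the retrieval step only; the similarity measure (Jaccard overlap of domain keywords) and the case descriptions are this sketch's assumptions, not CABSYDD's documented design.

def jaccard(a, b):
    """Similarity between two keyword sets: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def retrieve(case_library, new_domain):
    """Return the stored case whose domain keywords best match the
    description of the new design problem."""
    return max(case_library,
               key=lambda case: jaccard(case["keywords"], new_domain))

case_library = [
    {"name": "airline reservations",
     "keywords": {"flight", "passenger", "booking", "seat"}},
    {"name": "university registration",
     "keywords": {"student", "course", "section", "enrollment"}},
]

best = retrieve(case_library, {"student", "course", "grade"})
print(best["name"])   # the retrieved schema is then adapted by the designer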
58
Knowledge-Based Systems as Database Design Tools
Table 5: Flexibility Issues

System                                          App. Domain   Data Model    Maintenance
                                                Independence  Independence  Facilities
Consulting System for Database Design           Yes           No
Expert System for Translating an E-R
  Diagram to Databases                          Yes           No
GAMBIT                                          Yes           No
Computer-Aid for E-R Modeling                   Yes           No
SECSI                                           Yes           No            Yes
CARS                                            Yes           No
ACME                                            Yes           No
EDDS                                            Yes           No
VCS                                             Yes           No
PROEX                                           Yes           No
EXIS                                            Yes           No
GESDD                                           Yes           Yes           Yes
Form Definition System                          Yes           No
ERDDS                                           Yes           No
OPTIM_ER                                        Yes           No
VDIES                                           Yes           No
OICSI                                           Yes           No            Yes
Knowledge-Based Information Analysis Support    Yes           No
Diet and Lochovsky (1990)                       Yes           No
MODELLER                                        Yes           No
CHRIS                                           Yes           No
FOBFUDD                                         Yes           No
CABSYDD                                         Yes           No

Assuming the existence of a rule-based system, changes made
to the knowledge base to improve system performance include the addition, deletion, or update of a rule. A program maintenance module that facilitates these operations achieves the effect of learning from instructions: a new rule added to the knowledge base through a maintenance module represents new knowledge learned directly from a human design instructor. However, the definition of machine learning given above states that a computer system learns to improve without reprogramming, and maintenance itself is reprogramming. Moreover, from a procedural standpoint, program maintenance does not follow the steps of learning from instructions. According to Hayes-Roth et al. (1981), there are three major steps in learning from instructions. The first step is parsing and interpretation, in which the learner
transforms the advice from its initial linguistic form to a declarative knowledge representation. The second step is operationalization, in which the advice in the declarative form is transformed into executable or procedural form. The last step is execution and monitoring, in which the learner uses the acquired knowledge and monitors the results. A maintenance module that accepts new rules or facts bypasses the step of parsing and interpretation: if a rule is added in a predefined template, the input instruction is already in a declarative form. Viewed from the procedural standpoint, no system has machine learning capabilities to improve its permanent procedural knowledge base or inference abilities. This niche provides a fertile opportunity for future research.

Knowledge-Based System Features
Table 6 shows the knowledge-based system features of the tools. The first column displays the data structures used for representing factual knowledge. All the references that describe their procedural knowledge representation use rules. Inference approaches used are shown in the second column of Table 6. Two systems, Knowledge-Based Information Analysis Support and FOBFUDD, incorporate uncertainties, in the form of probabilities, in their rules and derived data. The former system uses probabilities in three ways. First, they are used to derive missing entity types, label types (which are similar to attributes), and relationship types from those already defined. For example, a rule is as follows: IF an entity type "address" is recognized, THEN the label type "street" must be recognized with p = 0.8 (Falkenberg et al. 1990). The second use of probabilities is to resolve conflicts between different pieces of knowledge. Because the system incorporates three sources of knowledge (general world knowledge, design knowledge, and specific application-related knowledge), there is a chance of having contradicting pieces of knowledge. The system resolves this by simply ignoring the piece(s) that has (have) a probability below a certain threshold value. Third, as the probability of one conclusion can become the probability of the antecedent of the next rule to be fired, probabilities are propagated. The probability of the overall conclusion can indicate the strength of belief in the final result. In the other system, FOBFUDD, which derives a set of functional dependencies from a set of form definitions, certainty factors (CF) are associated with the assertion of single-attribute potential determinants, dependents of potential determinants, and multi-attribute potential determinants. It uses CFs to
choose the rule to fire when a piece of data can satisfy more than one rule at the same time. The rule with the highest certainty will be fired. Similar to Knowledge-Based Information Analysis Support, this CF value is propagated to the derived conclusions. An interesting aspect of FOBFUDD is its meta-rules for combining certainty factors. These rules are unique to database design.
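To make the propagation scheme concrete, the sketch below shows a small forward-chaining rule base in Java that attaches certainties to facts, fires the enabled rule with the highest combined certainty first, derives CF(conclusion) = CF(antecedent) * rule strength, and ignores conclusions below a threshold. The class and the multiply-and-threshold combination are our own illustrative assumptions in the spirit of the two systems; they are not the published KBIAS or FOBFUDD algorithms, whose meta-rules for combining certainty factors are more specialized.

import java.util.*;

// Illustrative sketch only: certainty propagation in a forward-chaining
// rule base, loosely modeled on the KBIAS/FOBFUDD mechanisms described above.
class UncertainRuleBase {
    record Rule(String antecedent, String consequent, double strength) {}

    private final List<Rule> rules = new ArrayList<>();
    private final Map<String, Double> belief = new HashMap<>(); // fact -> certainty

    void tell(String fact, double cf) { belief.merge(fact, cf, Math::max); }
    void rule(String ante, String cons, double strength) { rules.add(new Rule(ante, cons, strength)); }

    // Forward-chain to a fixed point. Among enabled rules, the one with the
    // highest combined certainty fires first; conclusions whose certainty
    // falls below the threshold are ignored (conflict resolution).
    void run(double threshold) {
        for (boolean changed = true; changed; ) {
            changed = false;
            Rule best = null;
            double bestCf = 0.0;
            for (Rule r : rules) {
                Double cfAnte = belief.get(r.antecedent());
                if (cfAnte == null) continue;
                double cf = cfAnte * r.strength();       // propagate certainty
                Double old = belief.get(r.consequent());
                if (cf >= threshold && (old == null || cf > old) && cf > bestCf) {
                    best = r;
                    bestCf = cf;
                }
            }
            if (best != null) {
                belief.put(best.consequent(), bestCf);
                changed = true;
            }
        }
    }

    public static void main(String[] args) {
        UncertainRuleBase kb = new UncertainRuleBase();
        kb.tell("entity:address", 1.0);
        // "IF entity type 'address' is recognized, THEN label type 'street'
        //  must be recognized with p = 0.8" (Falkenberg et al. 1990)
        kb.rule("entity:address", "label:street", 0.8);
        kb.rule("label:street", "label:house-number", 0.9);
        kb.run(0.5);
        System.out.println(kb.belief);  // street = 0.8, house-number = 0.72
    }
}

Running the example derives "street" with certainty 0.8 and "house-number" with 0.72, illustrating how the strength of belief in the final result emerges from propagation.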
Table 6: Knowledge-Based System Features

Tool                                            Factual Knowledge              Procedural Knowledge         Inference                                  Explanation
Consulting System for Database Design           Predicate calculus             Rules                        Goal- or data-driven in different steps    Yes
Expert System for Translating an E-R Diagram
  to Databases                                  Semantic network               Rules                        Goal- or data-driven in different steps
GAMBIT                                          Semantic network               Rules
Computer-Aid for E-R Modeling
SECSI                                                                          Rules                                                                   Yes
CARS
ACME
EDDS                                            Frames, relations              Rules                        Data-driven
VCS                                             Predicate calculus             Rules                        Goal-driven                                Yes
PROEX                                           Semantic network               Rules                        Goal- or data-driven in different steps    Yes
EXIS                                            Semantic network               Rules                                                                   Yes
GESDD                                           Predicate calculus             Rules                                                                   Yes
Form Definition System                                                         Rules                                                                   Yes
ERDDS                                                                                                                                                  Yes
OPTIM_ER                                                                       Rules                                                                   Yes
VDIES                                                                          Rules                                                                   Yes
OICSI                                           Semantic network               Rules
Knowledge-Based Information Analysis Support                                   Rules (with probabilities)   Data-driven
Diet and Lochovsky 1990
MODELLER
CHRIS                                           Predicate calculus             Rules
FOBFUDD                                         Frames and relations           Rules                        Data-driven (with certainty factors)      Yes
CABSYDD                                         Frames and predicate calculus  Rules                        Data-driven, case-based reasoning          Yes
Among the 23 tools surveyed, all but one system rely on first principles to design a database every time a design session begins. CABSYDD is the only tool that can use case-based reasoning to reuse database schemata. It maintains a library of schemata in a case base. The library is indexed according to features such as industry and department. Instead of solving every design problem from scratch, the system first tries to retrieve the closest conceptual schema (called a case) from its case base. Once such a schema is located, the system takes control and converts the existing constructs to those of the new problem by engaging the user in a conversation. If there is no feasible existing schema in the case base, the system engages the user in the design process from first principles, accepting (and verifying) the conceptual schema in the constructs of an enhanced E-R model. After the new conceptual schema has been defined, the system learns from this experience by retaining the new case and adding any new indexes to the case base as appropriate. CABSYDD is the first system that formally implements this problem-solving approach in database design. Lastly, as shown in the last column of Table 6, mid-run explanations are available in 12 of the surveyed systems.

Implementation Issues
User Interface. Cited literature on most of the surveyed systems did not contain details of the user interface. The third column of Table 7 contains the input and output interfaces of the surveyed systems. Popular means of interface include menus, windows, and commands. There are four different approaches to accepting design input. The first method is through the use of a restricted subset of a natural language. This can be simple question-and-answer or structured natural language sentences in a predefined format. Systems that use this approach include Computer-Aid for E-R Modeling, VCS, SECSI, EXIS, OICSI, CABSYDD, and PROEX, which uses this approach during logical design only. The second method of accepting input is through the use of a natural language parser to accept sentences of a natural language in free format. The only system that uses this approach is ACME. The third method of accepting input is through the use of a predefined declarative format in text. It is less English-like than the structured natural language sentences of the first method above. SECSI, EXIS, and MODELLER can use this method. PROEX accepts input in a formal language during conceptual design. The fourth method is accepting input in a graphic format. SECSI and EXIS can accept input in the form of a semantic network diagram. GAMBIT displays an evolving conceptual schema diagram while it accepts input descriptions through its window-based system.
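CABSYDD's retrieve-adapt-retain cycle, described at the start of this subsection, can be rendered as the following control-flow sketch. The class names, the feature-overlap similarity measure, and the method signatures are illustrative assumptions of ours; CABSYDD itself was implemented in CLIPS (see Table 7), not Java.

import java.util.*;

// Illustrative sketch of a CABSYDD-style case-based design cycle:
// retrieve the closest stored schema, adapt it in dialogue with the user,
// otherwise design from first principles, then retain the new case.
class CaseBasedDesigner {
    record Case(Set<String> indexes, String schema) {}  // e.g. {"industry:retail", "dept:sales"}

    private final List<Case> caseBase = new ArrayList<>();

    String design(Set<String> problemFeatures) {
        Case best = retrieve(problemFeatures);
        String schema = (best != null)
                ? adaptWithUser(best.schema(), problemFeatures)  // convert constructs in a dialogue
                : designFromFirstPrinciples(problemFeatures);    // enhanced E-R elicitation
        caseBase.add(new Case(problemFeatures, schema));         // learn: retain the new case
        return schema;
    }

    // Nearest case by index overlap; null when no feasible schema exists.
    private Case retrieve(Set<String> features) {
        return caseBase.stream()
                .max(Comparator.comparingLong(c ->
                        c.indexes().stream().filter(features::contains).count()))
                .filter(c -> !Collections.disjoint(c.indexes(), features))
                .orElse(null);
    }

    // Placeholders for the two design modes discussed in the text.
    private String adaptWithUser(String schema, Set<String> features) {
        return schema + " adapted for " + features;
    }
    private String designFromFirstPrinciples(Set<String> features) {
        return "new E-R schema for " + features;
    }
}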
Table 7: Implementation Features

Tool                                     Development Tool      Hardware                 Input                                          Output           System Testing
Consulting System for Database Design    PROLOG                                                                                        Text
Expert System for Translating an E-R
  Diagram to Databases                   PROLOG
GAMBIT                                   MODULA-2              Lilith work-station      Windows with evolving graphics                 Text, graphics
Computer-Aid for E-R Modeling            PROLOG                                         Question and answer
SECSI                                    PROLOG                MULTICS                  1. graphics, 2. declarative statements,        Text
                                                                                        3. restricted natural language statements
CARS                                     PROLOG under UNIX     VAX/730
ACME                                     C-PROLOG under UNIX   VAX/750                  Free-form natural language statements          Text
EDDS                                     PASCAL                VAX/780                  Menu-/command-driven                           Text             Yes
VCS                                      PROLOG                Amdahl 5850, IBM PC/AT   Question and answer                            Text             Yes
PROEX                                    PROLOG under DOS      IBM PC/AT/XT             Declarative statements, question and           Text
                                                                                        answer during logical design
EXIS                                     PROLOG                                         1. menu-driven, 2. graphics, 3. declarative    Text
                                                                                        statements, 4. restricted natural
                                                                                        language statements
GESDD                                    PROLOG                Burroughs A9F            Menu-driven                                    Text
Form Definition System                   Light-speed Pascal    Macintosh SE             Windows-/command-driven                        Text, graphics   Yes
ERDDS                                    PROLOG                PC                       Menu-driven                                    Text, graphics   Planned
OPTIM_ER                                                       PC                       Menu-driven                                    Text
VDIES                                    PROLOG                IBM PC                   Menu-driven
OICSI                                    PROLOG                                         Restricted natural language statements         Text
Knowledge-Based Information Analysis
  Support                                PROLOG
Diet and Lochovsky                       C under UNIX          Sun work-station
MODELLER                                 Arity PROLOG          Compaq 386/20            Windows, declarative statements                Text
CHRIS                                    VM/PROLOG             IBM XT/AT
FOBFUDD                                  PROLOG                IBM compatible           Menu-driven                                    Text             Yes
CABSYDD                                  Clips v. 6.0                                   Menu-driven, restricted natural                Text             Yes
                                                                                        language statements
The most flexible tools, as far as the input format is concerned, are SECSI and EXIS. With SECSI, a tool user can choose to feed input either in restricted natural language statements, in a declarative form in text, or in a diagram. EXIS can use all these three plus the menu-driven approach. EDDS and Form Definition System both have two modes of accepting input, menus for novice users and commands for expert tool users. A user can switch between the two modes at will. As shown in Table 7, GAMBIT, ERDDS, and Form Definition
System can produce graphic output, while most others produce output in text. Table 7 also has two other self-explanatory columns: the development tools and hardware used.

System Testing and Commercialization. As shown in the fourth column of Table 7, of all the tools surveyed, five systems explicitly state that system testing has been performed. EDDS has been tested with textbook-type cases. VCS has been tested with seven different database design problems using users other than the developer of the system. Form Definition System has been tested with cases and several users to collect evidence about the usability of the system. FOBFUDD has been tested using a case. CABSYDD has gone through both formal validation and verification. Two database design experts separately judged the correctness of several output schemata. An empirical experiment was also carried out comparing the effectiveness of the case-based approach and design from first principles (Lo and Choobineh 1998). As for commercialization, we have received some brochures from the authors of SECSI about its commercial version sold in Europe. We are not aware of commercialization of any of the other surveyed systems.
CONCLUSIONS

Research on the development of intelligent database design tools has resulted in many prototypes for the demonstration of various special features. The overall research progress and potentially fruitful research directions conclude the chapter in this section. They are discussed from the viewpoints of database design and artificial intelligence research.

Database Design Research
From a tool user's standpoint (either a database designer or an end-user who needs to design database schemata), a database design tool should support the activities of database design and, if possible, all related activities for a particular design project and for the long run. Therefore, future research directions with respect to database design support include the following.

The Development of Active Integrated Database Design Tools for Full Database Design Support
According to this survey, present tools automate different steps in the database design process with different combinations of output. Only two tools were found which integrate the three major steps of the design. This (and the
next finding) is further supported by Wasserman (1982), Elmasri and Navathe (1989, p. 483), and Yourdon (1989, pp. 469-474). In order to fully support the database design process, tools are needed that can automate all the steps and produce all the required output for each step of the design. Other desirable features, to complete the design and development process, include subsystems for documentation, code generation, system testing and simulation. Of the surveyed systems, simulation of transaction processing with the designed database structures is mentioned in GAMBIT and CHRIS. A related issue is the lack of a means of translation and communication among different tools. Ideally, the compatibility between different tools should be enhanced such that a user can potentially switch among different tools throughout the design process.

The Development of Database Design Tools to Support Group Design Activities
At present, none of the prototypes surveyed supports group design activities. The development of design tools for group database design (also supported by Yourdon, 1989) is a research direction with high potential. Since the development of large-scale database systems requires group effort, design tools with project team blackboards and project management facilities are needed for support and coordination of the design effort. Useful administrative capabilities, such as keeping track of tool-user information with usage profiles and controls, are definitely desirable.

The Development of More Powerful Graphic Interfaces in Design Tools
Of the surveyed systems, only four tools support a graphic interface. The development of tools with graphic support customized for database design is an important research direction. Since database design depends heavily on graphic data models, a design tool equipped with graphic support will help a tool user visualize the overall picture more vividly.

The Long-Term Maintenance of Database Design Tools
Using artificial intelligence techniques in developing database design tools allows for adherence to sound data modeling rules as well as ease of maintenance in modifying these rules. The most advanced case (to date) in support of this fact is GESDD, described in Dogac et al. (1989), where a separate maintenance module is capable of accepting a new data model description. With this module, a complete new data model can be added to the knowledge base and used for future design activities. This is an example of
data model independence as discussed previously. The approach in GESDD gives designers the flexibility to choose any data model or to change the usage of a certain concept of a data model. It would be even more useful if the tool could automatically translate designed schemata among different data models. As for the long-term maintenance of database schemata stored in a design tool, CABSYDD can systematically store and index new schemata for future use. However, it does not take long-term maintenance of the collection of cases into consideration. The next step is to define methodologies to efficiently manage the case base, with capabilities such as removal of non-useful old cases and reorganization of case base indexes to accommodate changes in the real world.

Formal Testing and Commercialization of Prototypes that Have Been Implemented
Of all the surveyed papers, only CABSYDD has been statistically tested for usability, correctness, and completeness. Therefore, a recommended research direction is formal statistical and empirical testing of the effectiveness of the tools. Storey and Goldstein (1993) made the same recommendation in a similar survey. Validation of effectiveness may enhance the opportunity for commercialization.

Artificial Intelligence Research
As presented earlier, most intelligent database design tools are capable of inference, decision-making, and checking the validity of the output. This has been made possible because the design knowledge is permanently stored. The role of a knowledge-based system is, to a certain extent, to mimic the behavior of the corresponding human experts. To various degrees, all intelligent systems for database design do this. However, the present systems all use existing knowledge representation schemes and inference approaches. Only CABSYDD improves its factual knowledge through case-based learning. Therefore, future research directions include the development of new knowledge representation schemes, inference, and learning techniques for intelligent database design tools. New knowledge representation techniques have been developed in other domains in the past. Examples include the synergism of frames and rules that originated in CENTAUR (Atkins, 1983) in the field of medicine, and the blackboard architecture with opportunistic reasoning developed in the speech-recognition field in Hearsay-II (Erman, 1980). Another very interesting and challenging issue is how to use machine learning techniques to improve the procedural knowledge of a design tool. It
is hoped that future research efforts may result in the development of new knowledge representation schemes, inference, and learning techniques specifically for database design.
REFERENCES
Atkins, J. S. (1983). Prototypical knowledge for expert systems. Artificial Intelligence, 20(2), 163-210.
Bitton, D., Mannila, H., & Raiha, K-J. (1985). Design-by-example: A design tool for relational databases. Working Paper, Dept. of Computer Science, Cornell University, Ithaca, NY.
Bouzeghoub, M. (1992). Using expert systems in schema design. In Conceptual Modeling: Databases and CASE. Edited by P. Loucopoulos & R. Zicari. New York: Wiley.
Bouzeghoub, M. & Gardarin, G. (1984). The design of an expert system for database design. In New Applications of Data Bases. Edited by G. Gardarin & E. Gelenbe. New York: Academic Press.
Bouzeghoub, M. & Gardarin, G. (1985). Database design tools: An expert system approach. In Proceedings of the Eleventh International Conference on Very Large Data Bases, IEEE Computer Society, Stockholm, pp. 82-95.
Braegger, R. P., Dudler, A. M., Rebsamen, J., & Zehnder, C. A. (1984). Gambit: An interactive database design tool for data structures, integrity constraints, and transactions. In Proceedings of the IEEE International Conference on Data Engineering, Washington, D.C., IEEE Computer Society Press, pp. 399-407.
Braegger, R. P., Dudler, A. M., Rebsamen, J., & Zehnder, C. A. (1985). Gambit: An interactive database design tool for data structures, integrity constraints, and transactions. IEEE Transactions on Software Engineering, 11(7), 574-583.
Briand, H., Habrias, H., Hue, J-F., & Simon, Y. (1985). Expert system for translating an E-R diagram into databases. In IEEE International Conference on Entity Relationship Approach, IEEE Computer Society, Silver Spring, MD, IEEE Computer Society Press, pp. 199-206.
Casanova, M. A., Furtado, A. L., & Tucherman, L. (1991). A software tool for modular database design. ACM Transactions on Database Systems, 16(2), 209-234.
Catarci, T. & Ferrara, F. M. (1989). OPTIM_ER: An automated tool for supporting the logical design within a complete CASE environment. In Entity Relationship Approach. Edited by C. Batini. New York: Elsevier Science Publishers B. V. (North-Holland).
Cauvet, C., Proix, C., & Rolland, C. (1990). Information systems design: An expert system approach. In Artificial Intelligence in Databases and Information Systems. Edited by R. A. Meersman; Z. Shi; & C-H Kung. New York: Elsevier Science Publishers B. V. (North-Holland).
Chen, P. P. (1976). The entity-relationship model: Toward a unified view of data. ACM Transactions on Database Systems, 1(1), 9-36.
Choobineh, J. (1985). Form driven conceptual data modeling. Ph.D. Dissertation, Dept. of Management Information Systems, University of Arizona.
Choobineh, J., Mannino, M. V., Nunamaker, J. F., & Konsynski, B. R. (1988). An expert database design system based on analysis of forms. IEEE Transactions on Software Engineering, 14(2), 242-253.
Choobineh, J., Mannino, M. V., & Tseng, V. P. (1992). A form-based approach for database analysis and design. Communications of the ACM, 35(2), 108-120.
Choobineh, J. & Venkatraman, S. S. (1992). A methodology and tool for derivation of functional dependence from business forms. Information Systems, 17(3), 269-282.
Civelek, F. N., Dogac, A., & Spaccapietra, S. (1989). An expert system for automated ER model clustering. In Entity Relationship Approach to Database Design and Querying. Edited by F. H. Lochovsky. New York: Elsevier Science Publishers B. V. (North-Holland).
Dabrowski, C. E., Jefferson, D. K., Carlis, J. V., & March, S. T. (1989). Integrating knowledge-based components into a physical database system. Information & Management, (17), 171-186.
Demo, B. & Tilli, M. (1986). Expert system functionalities for database design tools. In Applications of Artificial Intelligence in Engineering Problems. Edited by D. Sriram & R. Adey. New York: Springer-Verlag.
Diet, J. & Lochovsky, F. H. (1990). Interactive specification and integration of user views using forms. In Entity Relationship Approach to Database Design and Querying. Edited by F. H. Lochovsky. New York: Elsevier Science Publishers B. V. (North-Holland).
Dogac, A., Yuruten, B., & Spaccapietra, S. (1989). A generalized expert system for database design. IEEE Transactions on Software Engineering, 15(4), 479-491.
Elmasri, R. & Navathe, S. B. (1989). Fundamentals of Database Systems. Menlo Park, CA: The Benjamin/Cummings Publishing Company, Inc.
Erman, L. D. (1980). The Hearsay-II speech-understanding system: Integrating knowledge to resolve uncertainties. ACM Computing Surveys, 12(2), 213-253.
Falkenberg, E. D., Van Kempen, H., & Mimpen, N. (1990). Knowledge-based information analysis support. In R. A. Meersman; Z. Shi; & C-H Kung (eds.), Artificial Intelligence in Databases and Information Systems (DS-3). New York: Elsevier Science Publishers B. V. (North-Holland).
Hawryszkiewycz, I. T. (1985). A computer-aid for ER modeling. In Proceedings of the Fourth International Entity-Relationship Conference, pp. 64-69.
Hayes-Roth, F., Klahr, P., & Mostow, D. J. (1981). Advice taking and knowledge refinement: An iterative view of skill acquisition. In Cognitive Skills and Their Acquisition. Edited by J. R. Anderson. Hillsdale, NJ: Lawrence Erlbaum Associates, Publishers.
Holsapple, C., Shen, S., & Whinston, A. (1982). A consulting system for database design. Information Systems, 7(3), 281-296.
Huffman, S. & Zoeller, R. V. (1990). A rule-based system tool for automated entity relationship model clustering. In Entity Relationship Approach to Database Design and Querying. Edited by F. H. Lochovsky. New York: Elsevier Science Publishers B. V. (North-Holland).
Kao, S. (1986). DECIDES: An expert system tool for physical database design. In Proceedings of the 1986 International Conference on Data Engineering, Washington, D.C., IEEE Computer Society Press, 671-676.
Kersten, M. C., Weigand, H., Dignum, F., & Boom, J. (1987). A conceptual modeling expert system. In Entity-Relationship Approach: Ten Years Experience in Information Modeling. Edited by S. Spaccapietra. New York: Elsevier Science Publishers B. V. (North-Holland).
Liebowitz, J. (1986). Useful approach for evaluating expert systems. Expert Systems, 3(2), 86-96.
Lo, W. A. (1994). Feasibility and Effectiveness of Case-Based Conceptual Database Design. Ph.D. Dissertation. Dept. of Business Analysis and Research, Texas A&M University, College Station, Texas 77840.
Lo, W. A. & Choobineh, J. (1995a). Design reusability and learning in CABSYDD (CAse-Based SYstem for Database Design). In Proceedings of the Inaugural Americas Conference on Information Systems. Association for Information Systems, Pittsburgh, Pennsylvania, August 25-27, 451-453.
Lo, W. A. & Choobineh, J. (1995b). Case-based approach to conceptual database design. In Proceedings of the 26th Annual Decision Sciences Institute Meeting. Boston, Massachusetts, November 20-22, 567-569.
Lo, W. A. & Choobineh, J. (1998). Empirical evaluation of a case-based conceptual database design. In Proceedings of the 29th Annual Decision Sciences Institute Meeting, Las Vegas, Nevada, November 21-24, 1998.
Lo, W. A., Choobineh, J. & Courtney, J. (1991). Application of machine learning to database design. In Proceedings of the 22nd Annual Decision Sciences Institute Conference, Miami Beach, Florida, November 24-26, 1991, 953-955.
Lo, W. A., Choobineh, J. & Courtney, J. (1996). Architecture of a case-based system for database design. In Proceedings of the 27th Annual Decision Sciences Institute Meeting, Orlando, Florida, November 24-26, 568-570.
Loucopoulos, P. and Theodoulidis, B. (1992). Case methods and support tools. In Conceptual Modeling, Databases, and CASE. Edited by P. Loucopoulos & R. Zicari. New York: Wiley.
Maddison, R. N., Baker, G. J., Bhaluta, L., Fitzgerald, G., Hindle, K., Song, J. H. T., Stokes, N. & Wood, J. R. G. (1983). Information System Methodologies. Great Britain: Wiley Heyden, Ltd.
Mannino, M., Choobineh, J. & Hwang, J. (1986). Acquisition and use of contextual knowledge in a form-driven database design methodology. In Proceedings of the Fifth International Conference on Entity-Relationship Approach, Dijon, France, November 1986, IEEE Computer Society Press, 141-157.
Mattos, N. M. & Michels, M. (1990). Modeling with KRISIS: The design process of database applications reviewed. In Entity Relationship Approach to Database Design and Querying. Edited by F. H. Lochovsky. New York: Elsevier Science Publishers B. V. (North-Holland).
Obretenov, D., Angelov, Z., Mihaylov, J., Dishlieva, P., & Kirova, N. (1988). A knowledge-based approach to relational database design. Data and Knowledge Engineering, 3(3), 173-180.
O'Keefe, R. M., Balci, O., & Smith, E. P. (1987). Validating expert system performance. IEEE Expert, 2(4), 81-89.
Reiner, D. (1992). Database design tools. In Conceptual Database Design: An Entity-Relationship Approach. Edited by C. Batini; S. Ceri; & S. B. Navathe. Redwood City, CA: Benjamin/Cummings Publishing Company.
Simon, H. A. (1983). Why should machines learn? In Machine Learning - An Artificial Intelligence Approach, Vol. 1. Edited by R. S. Michalski; J. G. Carbonell; & T. M. Mitchell. Palo Alto, CA: Tioga Publishing Company.
Springsteel, F. N. & Chuang, P-J. (1989). ERDDS: The intelligent entity-relationship based database design system. In Entity Relationship Approach. Edited by C. Batini. New York: Elsevier Science Publishers B. V. (North-Holland).
Storey, V. C. (1988). View Creation: An Expert System for Database Design. Washington, D.C.: ICIT Press.
Storey, V. C. (1991). Relational database design based on the entity-relationship model. Data and Knowledge Engineering, 7(1), 47-83.
Storey, V. C. & Goldstein, R. C. (1988). A methodology for creating user views in database design. ACM Transactions on Database Systems, 13(8), 305-388.
Storey, V. C. & Goldstein, R. C. (1993). Knowledge-based approaches to database design. MIS Quarterly, March, 25-46.
Tanimoto, S. L. (1987). The Elements of Artificial Intelligence. Rockville, MD: Computer Science Press, Inc.
Tauzovich, B. (1990). An expert system for conceptual data modeling. In Entity Relationship Approach to Database Design and Querying. Edited by F. H. Lochovsky. New York: Elsevier Science Publishers B. V. (North-Holland).
Tseng, V. P. & Mannino, M. V. (1989). A method for database requirements collection. Journal of Management Information Systems, 6(2), 51-75.
Tucherman, L., Casanova, M. A., & Furtado, A. L. (1990). The CHRIS consultant - a tool for database design and rapid prototyping. Information Systems, 15(2), 187-195.
Tucherman, L., Furtado, A. L., & Casanova, M. A. (1985). A tool for modular database design. In Proceedings of the Eleventh International Conference on Very Large Data Bases, IEEE Computer Society, pp. 436-447.
Ullman, J. D. (1988). Principles of Databases and Knowledge-Base Systems, Vol. 1. Rockville, MD: Computer Science Press.
Wasserman, A. I. (1982). Automated tools in the information system development environment. In Automated Tools for Information Systems Design. Edited by J. J. Schneider and A. I. Wasserman. New York: North Holland Publishing Company.
Wohed, R. (1990). Diagnosis of conceptual schemas. In Artificial Intelligence in Databases and Information Systems (DS-3). Edited by R. A. Meersman; Z. Shi; & C-H Kung. New York: North-Holland.
Yasdi, R. & Ziarko, W. (1988). An expert system for conceptual schema design: A machine learning approach. International Journal on Man-Machine Studies, 29(4), 351-376.
Yourdon, E. (1989). Modern Structured Analysis. Englewood Cliffs, NJ: Yourdon Press.
Chapter 5
Policy-Agents to Support CSCW in the Case of Hospital Scheduling

Hans Czap
University of Trier, Germany

Previously published in Managing Information Technology in a Global Economy, edited by Mehdi Khosrow-Pour. Copyright © 2001, Idea Group Publishing.
Computer Supported Co-operative Work encompasses the collaboration of different parties in order to achieve a common goal. Human beings, or organizations made up of human beings, are guided by preferences, goals, aims, intentions, etc., which in general are not consistent, i.e., they cannot be summed up into a common goal without conflicting with individual preferences. For the case of hospital scheduling, we present the concept of a policy-agent that is able to represent individual preferences and goals and thus may act as a personal assistant to support the solution of standard problems like the scheduling of operating room activities. The main focus of the chapter lies in the representation of preferences and goals, such that adaptations to changed environments may be done easily. In addition, we show how interaction will work. Questions of the optimality of any scheduling algorithm are not within the scope of the chapter. Rather, we focus on negotiation processes and acceptable negotiation results. Implementation of the concepts shown is currently in progress. My thanks go to the German Research Foundation (Deutsche Forschungsgemeinschaft) for its support.
INTRODUCTION

In CSCW, different parties are engaged in order to fulfil a common goal. In the context of hospital scheduling, especially scheduling of the operating room, we show the basic concept of an agent-based solution. The idea, at first glance, is very simple: every person or resource involved or needed in fixing a time slice for
treatment of the patient will be represented by an intelligent agent. Thus, not only every patient, surgeon, nurse, and anaesthetist, but also every operating room, every separately scheduled piece of equipment, the whole hospital organization, the department in question, etc., is individually mapped to an intelligent agent, which represents, like the "Use Case" approach of UML, the specific constraints and policies. Agents of this kind are called policy-agents. These policy-agents have to interact in order to achieve a common goal, here a common time-slice for an operating room treatment. In general, the different perspectives and the resulting goals will not be compatible with a common goal. Thus, we need a negotiation process that reveals the weights of the different goals and aims of the policy-agents involved in order to arrive at a compromise solution. In this chapter, we describe the basic concepts relevant to the hospital scheduling problem and offer a deeper understanding of coordination and cooperation by agents designed as personal assistants of natural persons obeying the management policies of surrounding organizations. We are deeply convinced that any planning software that is not able to consider the preferences and goals of the persons and organizations involved will encounter acceptance difficulties in the near future. The problem considered here may be found in many organizations and may easily be generalized to other environments (Czap and Haas, 1995; Czap and Reiter, 1997). The specific hospital scheduling problem shows severe planning deficiencies. It is not possible to plan in the long run, since any activity is subject to a high degree of indeterminacy regarding the kind of activity needed and the intensity and amount of services. This simply results from the fact that after the start of a surgical treatment, the diagnosis might change, or encountering multi-morbidity might result in an increased intensity of care. To keep up with the indeterminacy of the planning process, suitable software must support a high degree of planning flexibility (Fitzpatrick, Baker and Dave, 1993). In addition, as with the "manual" methods actually in use, local goals and interests of the involved parties must be integrated into the global planning process. In order to ensure acceptance, this aspect is considered indispensable in the development of automated methods. As a result, we focus on multi-agent systems (Czap and Reiter, 2000; Durfee, Lesser and Corkill, 1992), representing a dialogue-oriented approach that seems suited to grasping the distribution and instability of the problem domain. For this purpose, existing agent technology is extended and further developed into the concept of a policy-agent. The state of the art in agent technology shows a central deficit: only simple and static goal representations (Mattern and Sturm, 1989; Rao and Georgeff, 1995) are possible.
The rest of the chapter is organized as follows. In the next section, we will show the elements of workflow, and how they are synchronized and relate to resources or groups of resources. The section following deals with the need for preferences and goals and shows how this is incorporated into the concept of a policy object. In the next section after that, selection and negotiation in the case of conflicting preferences is treated.
ACTIVITIES, WORKFLOWS AND ACTIVITY-RELATED CONSTRAINTS

The workflow will be described by a set A of activities a, a ∈ A, and synchronization constraints relating different activities to each other. We assume that every activity a ∈ A has a definite duration d(a), which for planning purposes is considered to be constant. Examples of activities:
• a1: having lunch
• a2: massage treatment
• a3: appendectomy
Different activities of the same type may be summarized into activity-classes. For example, the
• activity a1, having lunch, belongs to the class "having a meal";
• activity a2, massage treatment, belongs to the class "physical treatment";
• activity a3, appendectomy, belongs to the class "surgical treatment in an operating room."
Single activities or activity classes may be connected to each other by synchronization constraints. Example:
• Any activity of the class "physical treatment" starts no earlier than one hour after the start of any activity belonging to the class "having a meal."
In general, synchronization constraints on the level of activities are in accordance with the Metra Potential Method, MPM (Schwarze, 1990):

(1) a --(t)--> b, t ≥ 0: activity b starts at the earliest t time-units after the start of activity a.

(2) a --(−t)--> b, t ≥ 0: activity b starts at the latest t time-units after the start of activity a.

Synchronization constraints on the level of activity-classes are inherited by the class members. Consequently, they establish synchronization constraints on each of the respective activities, i.e., on each element of the classes involved.
Any single activity might belong to different activity-classes (multiple inheritance). Decker and Li synchronize and order activities by the functions enables(), requires(), delay(), and inhibits(). Clearly, the modeling tools of MPM networking technology used here are more general. Besides synchronization constraints, we augment the resulting network by alternative paths, yielding an "and-or" graph. Thus we have an "and"-node (as is usual) and an "xor"-node:

(3) "and"-node: a branches to both b and c; activities a, b and c have to be done.

(4) "xor"-node: a branches to either b or c (xor); either activities a and b, or a and c, have to be done.

In general, in health care, different patient management guidelines for the same diagnosis are possible and thus need to be represented in order to model the possible workflows. Synchronization constraints, as mentioned so far, belong to the class of activity-related constraints. In addition, activity-related constraints also cover the resources that are necessary to perform the addressed activity. In order to express the requirement of specific resources, we introduce the concept of a resource class as a collection of resources of the same type. Thus:
(RC1) Any activity or any activity class needs a class of resources to be performed.
Example:
(RC2) Any activity of the class "surgical treatment in an operating room" needs the resource class "operating team," which consists of a member out of each of the resource classes
• surgeons,
• anesthetists, and
• operating room nurses.
Any person or any piece of equipment needed in an activity is considered a resource. Since a person might have different roles, and roles generalize to the concept of a resource class as introduced here, we consider the relationship between resources and resource classes to be of type (n:m). For example: James is a surgeon, thus he belongs to the resource class "surgeons." But he also has a license to work as an anesthetist, thus he belongs to the resource class "anesthetists." In general, any resource class will consist of different members. Summarizing, we model the hospital scheduling problem by the entity-types activity, activity class, resource and resource class, related to each other as follows (a minimal object model is sketched below):
(1) Any activity has a definite duration. Activities and activity classes are in an (n:1) relationship.
(2) Activities and activity classes are ordered by synchronization constraints of the type "at earliest" or "at latest." Synchronization constraints of activity classes are inherited by the class members.
(3) The possible workflow of a patient treatment is represented by an "and-or" graph. The sequence of activities is controlled by MPM networking technology.
(4) Any activity needs one or more resource classes, each consisting of resources or resource classes. In the usual setting, the resource classes correspond to different roles. Resources and resource classes establish an (n:m) relationship.
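The following sketch renders these four entity types and their relationships in Java. The class and field names are our own illustration and are not taken from any of the systems discussed here.

import java.util.*;

// Illustrative object model: activities with fixed durations, activity
// classes, MPM-style synchronization constraints, and the (n:m)
// resource/resource-class membership summarized in (1)-(4) above.
class ActivityClass {
    final String name;                                        // e.g. "having a meal"
    final List<SyncConstraint> constraints = new ArrayList<>(); // inherited by members
    ActivityClass(String name) { this.name = name; }
}

class Activity {
    final String name;
    final int duration;                                   // d(a), constant for planning
    final Set<ActivityClass> classes = new HashSet<>();   // multiple inheritance
    final Set<ResourceClass> requires = new HashSet<>();  // resource classes needed
    Activity(String name, int duration) { this.name = name; this.duration = duration; }
}

// atEarliest: start(b) >= start(a) + t; otherwise start(b) <= start(a) + t.
record SyncConstraint(ActivityClass a, ActivityClass b, int t, boolean atEarliest) {}

class ResourceClass {
    final String role;                                    // e.g. "surgeons", "operating team"
    final Set<Resource> members = new HashSet<>();        // (n:m): a resource may fill many roles
    ResourceClass(String role) { this.role = role; }
}

class Resource {
    final String name;                                    // e.g. "James"
    Resource(String name) { this.name = name; }
}

In this model, James, who is licensed both as a surgeon and as an anesthetist, would simply appear in the member sets of both resource classes, which is exactly the (n:m) relationship described above.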
PREFERENCES, GOALS AND CONTEXTS
Preferences and Goals
Any resource, i.e., a person or a piece of equipment, is represented by an intelligent agent. We assume that resources have specific preferences and goals. This seems clear if a resource denotes a person. As far as equipment is involved, goals very often reduce to only one goal, namely, to be used as intensively as possible. Persons have very differentiated preferences and goals. For example, George is a patient who needs a time-slice for an appendectomy. George has preferences regarding the surgeons. In general, he might have the following preference ordering (written a ≻ b for "a is preferred to b") with respect to the respective classes of surgeons:
(P1) senior surgeons ≻ surgeons ≻ assistant surgeons
But in the concrete case, he personally knows James very well. He is convinced James will do a good job, although James is a surgeon but not a senior one. Thus, in the concrete case, he has the additional specific preference relating a resource to a resource class:
(P2) James ≻ senior surgeons
overriding the general one: senior surgeons ≻ surgeons and James ∈ surgeons,
which implies senior surgeons ≻ James. In addition to the orderings (P1) and (P2), George would like to have the operation in the morning hours (a different set of alternatives):
(P3) morning ≻ noon ≻ afternoon ≻ evening
In other words, orderings relate to resources and/or to resource classes. Since alternative methods of patient care exist, resulting in alternative branches of workflow, preferences with respect to alternative activities and activity classes must be considered too. With respect to preference orderings, one should consider two perspectives, namely the user side, which has been emphasized in the orderings (P1), (P2), and (P3), and the machine side, which simply states:
• A preference ≻(AL) is an ordering of a set of alternatives AL. An alternative a ∈ AL may be a resource, an activity or a possible workflow.
In our concept, the user side, i.e., the orderings (P1), (P2), and (P3) together with a sequencing relation, is stored in a so-called policy-object PO (Czap et al., 1997, 2000; Reiter, 2000):
• A policy-object consists of the collection of preference statements of a resource, i.e., of a user, an organization or a piece of equipment. Decisions of agents representing resources are controlled by policy-objects.
• On the level of policy-objects, an ordering consists of preference statements regarding resources, resource-classes, activities, activity-classes or possible workflows. In addition, the precedence of conflicting policy statements is expressed by a sequencing relation.
• On the user side, the policy-object serves as the user interface for recording user preferences. On the machine side, it serves as an intermediate level in order to establish orderings on the respective sets of alternatives.
Very often the set of alternatives AL is not known a priori. In this case, a preference cannot be expressed explicitly as an order of the alternatives in question. Rather, preferences are implicitly articulated by selection functions acting as general goals on suitably defined sets of alternatives AL. Examples:
(G1) minimal_wait_time(R, T)
(G2) equal_resource_utilization(R, T)
(G3) max_use_of_equipmt(R, A, T)
The parameter R stands for a set of resources, T for a set of time-slots, and A for a set of activities. (G1) minimal_wait_time() selects a resource r ∈ R and a time-slice t ∈ T such that the waiting time for the patient is minimized.
(G2) equal_resource_utilization() selects a resource r ∈ R and a time-slice t ∈ T such that resources are utilized equally. Similarly, (G3) max_use_of_equipmt(R, A, T) selects a resource, an activity and a time-slice in order to maximize the use of equipment and thus (for private patients) maximize earnings. Each general goal, expressed by a selection function select_f(AL) working on a set of alternatives AL, implicitly defines a preference ordering on AL by successive calls:

a1 := select_f(AL)
a2 := select_f(AL \ {a1})
a3 := select_f(AL \ {a1, a2})
…
an := select_f(AL \ {a1, a2, …, an−1})

and a1 ≻ a2 ≻ … ≻ an. Since each explicitly given preference order on a set AL naturally defines a selection function, we no longer distinguish between explicitly given preference orders and general goals. Still, for the user interface, the distinction is important.

Policies and Context-Dependencies
Nothing has been said so far about whether and under what conditions these general goals are applied. They represent possible scheduling policies, which might, in the concrete case, be combined into the actual scheduling policy. For example, the management of the hospital might install the following set of management policies (Po1) and (Po2) in order to guide the decision process.

(Po1) if (insurance(patient) == privat_insurance)
      then { (R', a1, T') := max_use_of_equipmt(R, A, T)
             (r1, t1) := minimal_wait_time(R', T') }

(Po2) if (planned_patient_stay < 2 days) then (G1) else (G2)

As is very common, after some time management policies undergo revision and might be changed. Therefore, it is very advisable to have a clear separation of policies from execution tasks, and the concept of a policy object should be augmented to cover not only preferences. The policies (Po1) and (Po2) make use of the context information "insurance(patient) == privat_insurance." We assume all needed context information is collected into a programming object "context". In general, a policy is defined to be a production rule

if (PREDICATE(context))
then { sequence_1_of_goals }
else { sequence_2_of_goals }

where PREDICATE(context) means an arbitrary predicate connecting context information. Policy objects are connected to the agents representing decision subjects, such as natural persons or organizational units, and offer a place to store and administer the goals and preferences of the respective principals. If a policy object contains inconsistent rules or preference relations that are not resolved by a precedence relation acting on the rules, we assume a user dialogue will be started in order to end up with a definite order. Altogether, a policy object defines, in dependency on the context, an order relationship on the join ∪i∈I ALi of the respective sets of alternatives, where ALi is the set of alternatives of goal Gi.
In the section titled Processing of Requests, we use a substructure of a policy object, defined as that part consisting of all the policies acting on a specific set of alternatives.
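One way to realize such a policy object in software is sketched below: an ordered list of context-guarded production rules, each selecting goals (selection functions), and a method that turns the first applicable goal into an explicit order a1 ≻ a2 ≻ … ≻ an by successive selection. The Java names and the representation of the context as a simple map are illustrative assumptions, not the chapter's implementation.

import java.util.*;
import java.util.function.*;

// Illustrative policy object: production rules "if PREDICATE(context)
// then goals else goals", where each goal is a selection function that
// induces a preference order on a set of alternatives.
class PolicyObject<A> {
    record Policy<A>(Predicate<Map<String, Object>> guard,
                     List<Function<Set<A>, A>> thenGoals,
                     List<Function<Set<A>, A>> elseGoals) {}

    private final List<Policy<A>> policies = new ArrayList<>(); // sequencing relation: list order

    void add(Policy<A> p) { policies.add(p); }

    // Derive an explicit order a1 > a2 > ... > an from the first applicable
    // goal by successive calls select_f(AL \ {a1, ..., ak}).
    List<A> order(Map<String, Object> context, Set<A> alternatives) {
        for (Policy<A> p : policies) {
            List<Function<Set<A>, A>> goals = p.guard().test(context) ? p.thenGoals() : p.elseGoals();
            if (goals.isEmpty()) continue;
            Function<Set<A>, A> select = goals.get(0);    // first goal of the selected sequence
            List<A> ranked = new ArrayList<>();
            Set<A> rest = new HashSet<>(alternatives);
            while (!rest.isEmpty()) {
                A next = select.apply(rest);
                ranked.add(next);
                rest.remove(next);
            }
            return ranked;
        }
        return new ArrayList<>(alternatives);             // no applicable policy
    }
}

A (Po2)-like rule would, for instance, guard on context.get("planned_patient_stay") and choose between a minimal-wait-time and an equal-utilization selection function.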
SELECTION AND NEGOTIATION

The hospital scheduling problem is modeled as a decision problem, where the decision subject has to select one alternative a out of a set of alternatives AL. The operation select() poses no problems as long as there is only one preference order ≻(AL) given on the set of alternatives AL. This is the case with intra-agent decisions. In the more general case, different persons or organizational instances participate and will in general hold inconsistent preferences. Assuming there are n agents involved, we end up with n preference orderings: ≻1(AL), ≻2(AL), …, ≻n(AL).
If, for example, the patient George needs a time slice for an appendectomy, the preferences of George, of the hospital management, of the surgeon, anesthetist, nurse, etc., are involved. By Arrow's impossibility theorem, there is no fair aggregation into a common preference order ≻(AL). Consequently, a negotiation process has to be started in order to arrive at an acceptable preference order. In any case, the process of arriving at acceptability will be subject to intensive discussion and possible changes. To support ease of change, this mechanism should be written as a separate component accessed by a clearly defined interface. In the rest of the chapter we present an idea of how this procedure could work. The following negotiation process will be controlled by an intermediate instance represented by an intelligent agent. For each policy, the intermediate instance determines a weight showing the relative importance of the preference
implied by the policy object. Also, for each agent, a threshold is fixed to measure what can reasonably be expected as tolerable. The weights of relative importance are successively lowered, forcing a final solution. Criteria applied for initially fixing weights and for their adaptation might be the organizational power of the principals defining their policies and the individual ordering of policies (for example, by weights) within the policy objects. We assume for each agent i and each preference ordering the existence of a utility function ui: AL → ℝ, where ℝ designates the set of real numbers.

1) The intermediate instance determines weights wi,k^0 for the policies (Pok) within the policy-objects POi (i = 1,…,n; k = 1,…,ni) and sets j := 0. For each agent i a threshold si is defined.

2) Each agent i determines the resulting individual preference order ≻i(wi,k^j, AL) (i = 1,…,n; k = 1,…,ni), the corresponding utility function ui and the preferred alternative ai ∈ AL.

3) The intermediate instance inspects the set of preferred alternatives { ai | i = 1,…,n } and determines a compromise candidate aC ∈ AL. If the difference between the utility of aC and that of the individual solution ai is less than or equal to the threshold, then agent i will accept: if for all i: | ui(aC) − ui(ai) | ≤ si, accept aC as the compromise solution; else go to (4).

4) The intermediate instance reduces the weights wi,k^j for the policies (Pok): wi,k^(j+1) ← wi,k^j with wi,k^(j+1) < wi,k^j. Set j := j+1 and go to (2).

If the weights in step (4) are reduced in every iteration by a constant amount, the utility differences in (3) will approach zero, i.e., after a finite number of iterations | ui(aC) − ui(ai) | ≤ si holds for all i. For an alternative method of finding a solution by negotiation, see Reiter (2000).
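The four steps translate into a small mediator loop, sketched below. The compromise heuristic (maximizing total utility), the uniform weight decay and all names are illustrative assumptions; as noted in the abstract, the chapter's own implementation was still in progress.

import java.util.*;
import java.util.function.ToDoubleFunction;

// Illustrative mediated negotiation: each agent i derives, for the current
// weights, a utility function u_i and preferred alternative a_i; the mediator
// proposes a compromise a_C and accepts once |u_i(a_C) - u_i(a_i)| <= s_i
// holds for all i, lowering the weights otherwise.
class Mediator<A> {
    interface Agent<A> {
        ToDoubleFunction<A> utility(double weight);  // step 2, under current weights
        double threshold();                          // s_i: tolerable utility loss
    }

    A negotiate(List<Agent<A>> agents, List<A> alternatives, double decay) {
        for (double w = 1.0; ; w -= decay) {                       // step 4: reduce weights
            final double weight = Math.max(w, 0.0);
            List<ToDoubleFunction<A>> u = agents.stream()
                    .map(ag -> ag.utility(weight)).toList();
            List<A> preferred = u.stream()                         // step 2: each a_i
                    .map(ui -> alternatives.stream()
                            .max(Comparator.comparingDouble(ui::applyAsDouble))
                            .orElseThrow())
                    .toList();
            A candidate = alternatives.stream()                    // step 3: one compromise heuristic
                    .max(Comparator.comparingDouble((A a) ->
                            u.stream().mapToDouble(ui -> ui.applyAsDouble(a)).sum()))
                    .orElseThrow();
            boolean accepted = true;                               // step 3: threshold test
            for (int i = 0; i < agents.size(); i++)
                if (Math.abs(u.get(i).applyAsDouble(candidate)
                        - u.get(i).applyAsDouble(preferred.get(i)))
                        > agents.get(i).threshold()) { accepted = false; break; }
            if (accepted || weight == 0.0) return candidate;
        }
    }
}

With a constant decay, the utilities flatten as the weights approach zero, so the loop terminates after finitely many iterations, mirroring the convergence argument above.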
PROCESSING OF REQUESTS

For illustration purposes, let us assume that a patient agent PA, examining the workflow of a patient treatment, issues a request RX0 for the reservation of a time-slot, say for an appendectomy, activity a3:
• RX0: schedule(activity a3),
where the set of alternatives AL (resources and resource-classes) connected with a3 is given by
• AL = ALtimes × ALteams × ALrooms
with ALtimes = available set of time-slices for an operation, ALteams = set of OP-teams and ALrooms = set of OP-rooms. RX0 is addressed to the management agent of the hospital. This agent examines the set of alternatives AL. This set consists of different components and thus it is structured.
Therefore, the management agent decides that, before the request can be answered, the different components of the set of alternatives AL must be collected; that is, an indirection of the initial call RX0 occurs. The management agent takes the policy object of the caller, i.e., POpatient, and its own policy object POmanagement and issues, for every component of AL, the request/message establish() to the resource agent or resource class agent connected with that component. Since the respective resources (time-slices, OP-teams, OP-rooms) have their own agents, the following messages are issued:

(PO1, AL1) := establish(POpatient, POmanagement, POtimes, ALtimes)
(PO2, AL2) := establish(POpatient, POmanagement, POteams, ALteams)
(PO3, AL3) := establish(POpatient, POmanagement, POrooms, ALrooms)
combine((PO1, AL1), (PO2, AL2), (PO3, AL3))

Summing up, the general form of the function establish() is given by
establish(POreceived, POsender-agent, POresource, ALresource),
where POreceived consists of the collection of all policy objects along the path back to the initial call RX0. This function establish() determines the set of alternatives ALresource and connects this set with the policy objects POreceived collected so far, the specific policy object PO of the sending agent, and that of the receiving resource agent. Finally, the resulting combinations of policy objects and alternatives have to be combined into a common set of policy objects and alternatives, which is done by the function combine(). This function acts as the select() function: it has to realize a common preference on the combined resources. Thus, if necessary, a negotiation process is started at this point. combine() sends the result back to the caller, the patient-agent.
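The indirection scheme can be sketched as a recursion over structured alternative sets: a structured resource agent fans establish() out to its component agents and combines the results, which is also the point at which the negotiation process of the previous section would be triggered. The types and method bodies below are illustrative assumptions of ours, not the chapter's code.

import java.util.*;

// Illustrative sketch of the establish()/combine() indirection over a
// structured set of alternatives such as AL = times x teams x rooms.
class ResourceAgent {
    final String name;
    final Object policyObject;                  // PO of this resource (class)
    final List<ResourceAgent> components;       // non-empty => structured set
    final List<String> alternatives;            // leaf alternatives

    ResourceAgent(String name, Object po, List<ResourceAgent> comps, List<String> alts) {
        this.name = name; this.policyObject = po; this.components = comps; this.alternatives = alts;
    }

    record Established(List<Object> policies, List<String> alternatives) {}

    // establish(PO_received, PO_sender, PO_resource, AL_resource): collect the
    // policy objects along the path and determine the set of alternatives.
    Established establish(List<Object> received) {
        List<Object> collected = new ArrayList<>(received);
        collected.add(policyObject);
        if (components.isEmpty())               // unstructured: indirection ends here
            return new Established(collected, alternatives);
        List<Established> parts = components.stream()
                .map(c -> c.establish(collected))   // further indirection, possibly in parallel
                .toList();
        return combine(parts);
    }

    // Combine component results into a common set (cartesian product); a real
    // implementation would start the negotiation process here if needed.
    Established combine(List<Established> parts) {
        List<Object> policies = new ArrayList<>();
        List<String> combined = new ArrayList<>(List.of(""));
        for (Established part : parts) {
            policies.addAll(part.policies());
            List<String> next = new ArrayList<>();
            for (String prefix : combined)
                for (String alt : part.alternatives())
                    next.add(prefix.isEmpty() ? alt : prefix + " / " + alt);
            combined = next;
        }
        return new Established(policies, combined);
    }
}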
Figure 1: Tree structure of the initial request RX0 [tree diagram: RX0 decomposes through parallel ("p"), sequential ("s") and exclusive ("xor") connector nodes into sub-requests RX1, RX2, RX3 and RX4, and further into RX11, RX12, RX13, RX21 and RX22]
For illustration, consider the selection of the possible operating teams:
• George has the preferences (P1) and (P2) related to the operating team (POpatient).
• Management wants to further qualify the assistant surgeons. Therefore, it would like the assistants to operate under the supervision of a senior surgeon as often as possible (POmanagement).
• We assume the policy object heading the resource OP-teams to be empty: POteams = ∅.
By (RC2), ALteams consists of combinations of a surgeon, an anesthetist, and an operating room nurse. Thus, ALteams is a structured set of alternatives. The receiving agent of the resource class OP-teams recognizes ALteams to be structured, and thus a second indirection has to take place. The agent of resource class OP-teams issues the message establish() to each of the component resource agents:

(PO2,1, AL2,1) := establish((POpatient, POmanagement), POteams, POsurgeons, ALsurgeons)
(PO2,2, AL2,2) := establish((POpatient, POmanagement), POteams, POanesthetists, ALanesthetists)
(PO2,3, AL2,3) := establish((POpatient, POmanagement), POteams, POnurses, ALnurses)
combine((PO2,1, AL2,1), (PO2,2, AL2,2), (PO2,3, AL2,3))

In the policy objects of surgeons, anesthetists or nurses, specific policies like "equal amount of work for all" or "reduction of overtime work" could be implemented. Since the respective sets of alternatives ALsurgeons, ALanesthetists and ALnurses are not structured, indirection ends at this point. The combination of policy-objects and alternatives will be sent back to the caller, here the management agent, by the function combine(). As mentioned above, this will generally initiate a negotiation process. By this sequence of indirections, the final structure of the overall decision process is built up. The initial request schedule() works on this structure and successively evaluates the preferences as described in the last section. The nodes of this control structure correspond to the needed indirections. In the case above, the different messages establish() may be executed in parallel. In general, the control structures sequence and selection will be needed in addition (see Figure 1). In Figure 1, three kinds of indirection are shown: sequential connectors, marked by "s," parallel ones, marked by "p," and exclusive ones, marked by "xor." In the control structure above, an "xor"-node may be necessary to model alternative workflows and an "s"-node to model a sequence of activities that must be evaluated as a substructure of a higher-situated activity.
SUMMARY

Essential to the coordination of human beings are negotiation processes in which the prospective goals are adapted in order to achieve consensus. The presented concept shows how to implement preferences and goals in a way that supports the flexibility of agent activities and agent coordination. By separating the preference structures of principals and their agents into so-called policy objects, a construct is introduced that allows ease of adaptation to changing requirements.
REFERENCES

Czap, H. and Haas, J. (1995). Organizational modelling in distributed corporations. Proceedings of the Third European Conference on Information Systems ECIS '95, Volume II, Athens, 947-954.
Czap, H. and Reiter, J. (1997). OMC: An organisational model for co-operations. ACM SIGGROUP Bulletin, 18(2), August, pp. 41-44.
Czap, H. and Reiter, J. (2000). Goal representation and scheduling in hospitals by intelligent agents. The Fourth International Conference on Autonomous Agents 2000, Workshop on Intelligent Agents for Co-operative Work: Technology and Risks, June.
Decker, K. and Li, J. Coordinating mutually exclusive resources using GPGP.
Durfee, E. H., Lesser, V. R., and Corkill, D. (1992). Distributed problem solving. In Shapiro, S. C. (Ed.), Encyclopedia of Artificial Intelligence, pp. 379-388. New York: John Wiley.
Fitzpatrick, K. E., Baker, J. R., and Dave, D. S. (1993). An application of computer simulation to improve scheduling of hospital operating room facilities in the United States. International Journal of Computer Applications in Technology, 6(4), pp. 215-224.
Mattern, F. and Sturm, P. (1989). An automatic distributed calendar and appointment system. Microprocessing and Microprogramming, 27, 455-462.
Rao, A. S. and Georgeff, M. P. (1995). BDI agents: From theory to practice. Proceedings of the 1st International Conference on Multi-Agent Systems, San Francisco.
Reiter, J. (2000). Das Policy-Negotiation-Konzept: Heuristische kollektive Entscheidung in überbetrieblichen Kooperationen mit hybrider Leitungsstruktur. Dissertation, University of Trier (in German).
Schwarze, J. (1990). Netzplantechnik. Eine Einführung in das Projektmanagement, 6th revised ed. Herne/Berlin.
Chapter 6
Building an Agent: By Example

Paul Darbyshire
School of Information Systems
Victoria University of Technology, Australia

Previously published in Managing Information Technology in a Global Economy, edited by Mehdi Khosrow-Pour. Copyright © 2001, Idea Group Publishing.
Since the emergence of agent technology, many papers and articles have been written on the advantages and use of the technology. In particular, in the last two years, the number of papers discussing the use of agent systems seems to have risen exponentially. Whether this rise in interest in agent technology corresponds to the emergence of eCommerce, Internet banking and the explosion in Web-based systems, or to the maturity of the technology and the programming languages used to develop agents, is another matter. For people interested in the technology and wanting to build their own agents, most of the material provides little insight into how to actually build an agent. This chapter discusses the problem of actually building an agent using an example of an "email helper" agent.
INTRODUCTION

Over the last decade, the programming languages and the support technology used to build agents have matured to a level where agents have migrated from the laboratory to real-world applications. In particular, over this time the Web has gained prominence, and the number of Web-based applications has grown enormously. The emergence of Web-based education, eCommerce, and eBusiness-type applications has coincided with the rise in the development and popularity of the Web. The ensuing activity has generated enormous interest in agent technology and solutions. This popularity and interest in agent technology is also reflected in the number of papers devoted to agent-based systems. However, for the novice trying to understand and develop his or her
own agents, the majority of papers and articles offer little insight as to the architecture and framework used for building such systems. The purpose of this chapter is to outline the process of building an agent, and this is done through an example of an email helper agent. The example uses the Java language to develop an architecture for building an agent, and a communication framework in which the agents operate. An interface agent is also outlined, which can be used as a generic intermediary between a human operator, other agents, and the agent environment. The agents developed in this chapter are not fully functional, but they do give enough information to enable the novice developer to get started with some basic agent coding. In the following sections, a brief background on agents is given, followed by a treatment of the selection of Java as the development language. A framework for the environment in which the agents operate is described, and then a general, simple architecture for building an agent is shown. Finally, an interface agent is shown to allow the user to interact with other agents and the environment.
BACKGROUND

Before we begin to develop agents, we really must be in a position to define what an agent is. This in itself is not an easy task, as no universally acknowledged definition of a software agent is currently available. Even the "experts" cannot agree on a definition for an agent (Nwana, 1996; Wooldridge & Jennings, 1995). Although the concept of an agent has been around for some time, agent software is an emerging technology that can still be regarded as being in an embryonic stage. Despite this, the range of organizations and disciplines researching and pursuing agent technology is broad. The term "agent" is being increasingly used by computer researchers, software developers and even the average computer user, yet when pressed, many would be unable to give a satisfactory explanation of just what an agent really is. Agent technology emerged from the field of AI research, so the term "intelligent agent" is often used. However, agents need not be intelligent, and in fact most tasks do not warrant the use of "smart agents" (Nwana, 1996). Other adjectives often used with agents are: interface, autonomous, mobile, Internet, information and reactive. The term "agent" can be thought of as an umbrella under which many software applications may fall, but it is in danger of becoming a noise term due to overuse (Wooldridge & Jennings, 1995). Many agents are currently characterized by the descriptive terms that accompany them, for example intelligent, smart, autonomous. What makes agents different from standard software are the characteristics that agents must possess in order to be termed such. There are a number of classification schemes that can be used to typecast existing agents, and these include
Figure 1: Nwana’s Classification
mobile or static, deliberative or reactive. Nwana (1996) classifies agents according to primary attributes that agents should exhibit, shown in Figure 1. The three primary attributes are cooperation, learning and autonomy. These attributes are laid out as intersecting circles and, to be classified as an agent, software must exhibit characteristics from any of the intersecting areas. However, Nwana concedes that the categories in the diagram are not definitive, and agents can also be classified by their roles. Wooldridge and Jennings (1995) take a more formal approach to the definition of agent, falling back on the more specific meanings from AI researchers. However, they note that, as the AI community cannot agree on the question of "What is intelligence?", a less formal definition may be needed to include many software applications being developed by researchers in related fields. To this end, Wooldridge and Jennings introduce the notions of weak and strong agency. Strong agency takes on the specific meaning from AI research, implying that agents must exhibit mentalistic notions such as knowledge, belief, intention and obligation, with some researchers considering emotional characteristics a requirement. If this definition of agent were strictly adhered to, many software applications claiming to use agent technology would be rejected as such. In the weak notion of agency, the term agent can be applied to software that exhibits the following characteristics:
• Autonomy: agents operate without direct human intervention and have some control over their actions and internal state.
• Social ability: agents interact with other agents and humans through some defined protocol.
• Reactivity: agents can perceive their environment and can respond to it in a timely fashion.
• Pro-activeness: agents do not just respond to the environment, but can take a proactive role and exhibit some goal-oriented behavior.
Figure 2: 3D-Space model of agency
The definitions of Nwana and of Wooldridge and Jennings above are not altogether incompatible. They identify common characteristics that agents should exhibit, and most of the agent types identified by Nwana would come under Wooldridge and Jennings' weak-agent classification. Other researchers have also identified the characteristics of autonomy, cooperation, intelligence, reactivity and pro-activity. Gilbert et al. (1995) provide a model where the degree of agency can be crudely measured by an agent's position in a three-dimensional space relative to a 3-D axis. This model has been refined (Anonymous, 1999) using the three dimensions of Intelligence, Autonomy and Social Ability defined from the list above. This model is shown in Figure 2. In order to qualify as an agent in this model, software must exhibit at least the minimal characteristics in each dimension. That is, it must be able to communicate with the user, allow users to specify preferences, and be able to work out how to accomplish tasks assigned to it. While this model may not be ideal, it does provide a simpler definition based on the common agent characteristics identified by a number of researchers.
AUTONOMOUS AGENTS

One of the important characteristics of agents just identified is that of autonomy. Agents do not have to be intelligent. After 40 years of trying to build knowledge-based agents, researchers have still not been able to build a base of common-sense information that intelligent agents may need to operate within their environment (Maes, 1995). Until truly smart agents are feasible and commercially viable, it is the degree of autonomy that agents exhibit that will determine their usefulness to many users. Franklin and Graesser (1996) define an autonomous
agent as: "An autonomous agent is a system situated within and part of an environment that senses that environment and acts on it, over time, in pursuit of its own agenda and so as to affect what it senses in the future." These types of agents will take on the role of personal assistants to help users perform tasks, and in time may learn to become more effective. We can see this type of software already in existence; for example, the subroutines in word processors that automatically check spelling as you type and sometimes offer suggestions for the completion of words based on what was previously typed. Some of these also automatically capitalize words at the beginning of sentences. While some of these software examples may not exactly fall under our previous definitions of agent, they are small examples of the kinds of autonomy and cooperation that agents, and particularly autonomous agents, will exhibit in the future. Autonomous agents have the potential to radically alter the way we work, but this will begin to appear in commercially available products as an evolutionary process rather than a revolutionary one (Nwana, 1996). In addition to the emphasis on autonomy, autonomous agents have a sense of temporal continuity in that they persist over time. Most software programs are invoked, perform their tasks, and then finish. For example, a payroll program may be invoked once a week to perform its payroll run, but it would fail the test of agency, as its output would not normally affect what the payroll program would sense the following week. The payroll program does not persist beyond the completion of its task. An autonomous agent, on the other hand, would continue to persist and monitor/affect a portion of its environment according to its specific task.
PROGRAMMING AN AGENT IN JAVA

Before programming an agent, we must select a language (if indeed we do have a choice). A number of languages are suitable for programming agents, for example, C, C++, Smalltalk, Java, Telescript and others (Choi, 1998). Each of these languages will obviously have its advocates and opponents, but each is suitable for building agents of some kind. The choice will ultimately depend on the developer, and his or her experience and willingness to choose a language suited to the problem. Typically, however, Object-Oriented languages lend themselves more easily to the construction of agent systems (Nwana & Wooldridge, 2000). The examples given in this chapter are programmed using Java. Java has a number of features that make it ideal for programming agents. Two of these are Java's portable, architecture-neutral language, and Java's network communication ability. It is not a coincidence that the rise in interest in agent technology has coincided with the development and evolution of the Web, as the Web provides a fertile framework in which to place agents. Recall that in order to be an agent, software must exhibit minimal characteristics, as indicated in Figure
Figure 3: Depiction of a thread (threads within a process share its memory; each thread has its own execution stack)
2. Some of these characteristics are mobility, social ability and intelligence. The infrastructure of the Web provides a mechanism for most of an agent's required characteristics to be realized. Agents can implement mobility and communication requirements by using standard Internet protocols. The global infiltration of the Web also helps ensure the infrastructure is mostly in place, providing worldwide connectivity. When using Java to program agents, all Java programs run on the Java Virtual Machine (VM), which is itself a process on the physical platform. Thus we implement an agent in Java as a Java thread. A Java thread is a lightweight process that has its own execution path, but shares memory and variables with the process that created it (Figure 3). Once initiated, a thread has a life of its own, independent of other threads executing in the Java VM. Provided we build the threads in such a way that they don't rely on common shared memory, they become as independent as separate processes. Although Java claims platform independence, there are aspects of Java that are not platform independent when using threads–in particular, the scheduling mechanism used. The scheduling of threads in the Java VM follows a simple deterministic scheduling algorithm, known as fixed-priority scheduling (Campione & Walrath, 1996); that is, at any time, the thread that is currently executing is the thread with the highest priority amongst all the threads in a Runnable state. The scheduler is pre-emptive; therefore, the currently executing thread will be pre-empted the moment a thread with a higher priority becomes Runnable. However, this is not guaranteed: the thread scheduler may choose to run a lower priority thread to avoid starvation. When there is more than one thread in a Runnable state at the same highest priority, the Java VM thread scheduler chooses the next thread to run using a simple non-preemptive round-robin scheduling order. These highest priority threads may or may not be pre-emptively time-sliced. The Java VM does not
implement time-slicing, and therefore does not guarantee time-slicing of equal highest-priority threads. Instead, the Java VM relies on the architecture of the underlying Operating System. The "White Paper" on Java (Gosling & McGilton, 1996) states: "Java's threads are pre-emptive, and depending on the platform on which the Java interpreter executes, threads can also be time-sliced. On systems that don't support time-slicing, once a thread has started, the only way it will relinquish control of the processor is if another thread of a higher priority takes control of the processor." Windows 95/98/2000/NT platforms are all multi-threaded Operating Systems and implement their own pre-emptive scheduling algorithms. On these platforms, our agents, as multiple threads, will be time-sliced on a round-robin basis. On platforms that do not time-slice threads, we must explicitly build our agents in a manner that ensures they will not dominate processor time and so behave in an orderly manner.
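To make the scheduling discussion concrete, the following is a minimal sketch (not part of the chapter's example system; the class name and one-second interval are illustrative) of an agent started as a Java thread that voluntarily gives up the processor, so that it behaves even on platforms whose VM does not time-slice:

// Minimal sketch: an agent thread that voluntarily relinquishes the
// processor so it cannot starve other equal-priority threads on
// platforms whose Java VM does not time-slice.
public class PoliteAgentThread implements Runnable {

    public void run() {
        while (true) {
            // ... perform one small, bounded unit of agent work here ...
            try {
                Thread.sleep(1000);   // give other threads a turn
            } catch (InterruptedException e) {
                return;               // allows a host to shut the agent down
            }
        }
    }

    public static void main(String[] args) {
        Thread agent = new Thread(new PoliteAgentThread());
        agent.setPriority(Thread.NORM_PRIORITY);  // fixed-priority scheduling applies
        agent.start();                            // the thread now runs independently
    }
}

Calling Thread.yield() instead of sleep() would also relinquish the processor, but a sleep gives lower-priority threads a chance to run as well.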
EXAMPLE DESCRIPTION

In this and the following sections, an example agent system is discussed. For obvious practical reasons the full code could not be included in this chapter, only small sections that highlight particular points. The complete source of the sample agent system can be obtained by sending an email to the author. Before we begin to program an agent system, we should carefully consider what the intended system is to do, and design an appropriate architecture or general framework for the system. As a novice agent builder, I found that the design of such a framework was not immediately obvious. Also, in order for the software we design to be classified as agents, it must pass whatever "test of agency" we adopt. While many papers and articles dwell on aspects of mobility and intelligence, recall from previous sections that mobility is not a necessary condition for agency, and not everybody needs truly intelligent agents. I have used the 3D-Space model shown in Figure 2 as the test of agency in this chapter; thus the agents must exhibit minimal characteristics of autonomy, social ability and intelligence. In a series of articles, Sundsted (1998) describes an agent architecture and a general framework for building agents. The source code for the entire framework is available for download, and while it offers some excellent insights, a simpler example better serves the new agent designer. The example provided here is loosely based on Sundsted's design, but greatly simplified to concentrate on some basic aspects of design. The agents included in this example form part of a continuing research project, described in Darbyshire and Lowry (2000), to evolve an interface for Web-based teaching. Three agents are included: an Interface Agent; an Email Reader Agent;
and the Agent Host (also implemented as an agent). These agents are not fully functional as described in Darbyshire and Lowry (2000), but include enough functionality to demonstrate the basic principles used for building the agent system. The Interface Agent is the user's interface to the other agents and the Agent Host. This agent allows users to start, stop and suspend other agents, and also allows the display of messages transmitted from other agents for user viewing. By allowing the Interface Agent to accept the name of another agent to load or unload into the system, the agents become pluggable, in the sense that new agents can be plugged into the existing framework without upsetting agents already running. The Email Reader Agent is designed to hook into a specific mail account of a mail system attached to the Internet. The one included in this example uses the IMAP protocol to log into my own email account. Dummy names and passwords have been substituted into the example code for obvious security reasons. This agent periodically scans the email account's INBOX, searching for email whose subject-header field begins with the token "EVE:". EVE (an acronym for Evolutionary VEhicle) is the name given to the collection of agents described in Darbyshire and Lowry (2000). This token is a flag indicating that the rest of the subject-header, and possibly the body of the email, contains further instructions for the agents. The Email Reader then gets these instructions and passes them on to the Agent Host for distribution to the appropriate handler agent. The current message is then deleted. The Agent Host is a Java class that keeps track of all the other currently active agents. This program provides some basic functionality to the other agents and is responsible for starting, stopping and suspending other agents in the system through communication with the Interface Agent. The Agent Host can also be regarded as an agent itself.
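As a sketch of how one polling pass of such an IMAP scan might look with the standard JavaMail API (the host name, account details and the commented forwarding call are placeholders, not the chapter's actual code):

import java.util.Properties;
import javax.mail.*;

// Hypothetical sketch of one polling pass of an Email Reader agent
// using the JavaMail IMAP provider. Connection details are dummy values.
public class EveMailScan {

    public static void scanOnce() throws MessagingException {
        Session session = Session.getInstance(new Properties());
        Store store = session.getStore("imap");
        store.connect("mail.example.edu", "dummyUser", "dummyPassword");

        Folder inbox = store.getFolder("INBOX");
        inbox.open(Folder.READ_WRITE);

        Message[] messages = inbox.getMessages();
        for (int i = 0; i < messages.length; i++) {
            String subject = messages[i].getSubject();
            if (subject != null && subject.startsWith("EVE:")) {
                String command = subject.substring(4).trim();
                // hand the command to the Agent Host via the output channel,
                // e.g., sendMessage("AgentHost " + command);
                messages[i].setFlag(Flags.Flag.DELETED, true);  // delete the message
            }
        }
        inbox.close(true);   // true = expunge messages flagged as deleted
        store.close();
    }
}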
AGENT ARCHITECTURE AND COMMUNICATION FRAMEWORK

The basic architecture of the example system is depicted in Figure 4. The Agent Host class is described in the previous section; for every agent it initiates, it maintains an Agent Reference object. This object is used to keep track of every active agent in the system, including its name (used for communication purposes) and its I/O channels. The code for the Agent Reference object can be seen in Listing 1. There is exactly one agent for every Agent Reference object. Every agent in the system (apart from the Agent Host) is represented by the Agent Implementation class in Figure 4. There are two such agents in the example system, the Email Reader and Interface Agents. Every agent must implement the AgentInterface class. This class is a Java Interface and defines the
import java.util.Vector;

class AgentReference {
    Thread agent;
    String agentName;
    Vector ic;    // the agent's input channel
    Vector oc;    // the agent's output channel
}
Listing 1 Agent Reference object
Figure 4: Simple agent architecture (the AgentHost holds one AgentReference per agent, 1 to *; each AgentReference refers to exactly one AgentImplementation, which implements AgentInterface)
import java.util.Vector;

public interface AgentInterface extends Runnable {
    public void agentStart();
    public void agentStop();
    public void agentShutdown();
    public void setInputChannel(Vector ic);
    public void setOutputChannel(Vector oc);
    public String getName();

    // messaging functions
    public String getMessage();
    public void sendMessage(String msg);
}
Listing 2 AgentInterface
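To show what Listing 2 demands of an implementor, the following is a bare-bones sketch of an agent class (our own illustration; the class name and five-second cycle are invented, and the chapter's real agents carry more state and behavior):

import java.util.Vector;

// A bare-bones agent built against the AgentInterface of Listing 2.
// The name and five-second cycle are illustrative only.
public class SkeletonAgent implements AgentInterface {
    private Vector ic, oc;                   // I/O channels set by the Agent Host
    private volatile boolean running = true; // agentStop() flips this flag

    public void agentStart()    { running = true; }
    public void agentStop()     { running = false; }
    public void agentShutdown() { running = false; }

    public void setInputChannel(Vector ic)   { this.ic = ic; }
    public void setOutputChannel(Vector oc)  { this.oc = oc; }
    public String getName()                  { return "SkeletonAgent"; }

    // Primitive messaging over the Vector channels. Vector's own methods
    // are synchronized, and with a single consumer per channel the
    // check-then-remove sequence is safe.
    public String getMessage() {
        return ic.isEmpty() ? null : (String) ic.remove(0);
    }
    public void sendMessage(String msg) { oc.addElement(msg); }

    public void run() {
        while (running) {
            String msg = getMessage();
            if (msg != null) {
                // ... react to the incoming message here ...
            }
            try { Thread.sleep(5000); } catch (InterruptedException e) { }
        }
    }
}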
basic functionality that each agent must implement in order to exist and operate within the system. The Java code for the AgentInterface is shown in Listing 2, and defines methods for starting, stopping, and suspending the agent, as well as methods for setting the I/O channels, and primitive messaging functions. This interface also extends the Java Runnable interface, ensuring that any class implementing it is executable as a separate Java thread. In our test for agency, our agents must exhibit a minimal social ability. The Interface Agent exhibits this, as the user is able to interact with its GUI. However, typically, the other agents within the system all perform small tasks and cooperate with each other in performing those tasks. In the example system, the Email Reader looks for incoming messages with specific commands for EVE, then on finding these commands passes them on to other agents equipped to handle them. Thus, EVE as a whole has a social ability in that it can communicate with humans via email, but the component agents must also communicate the content of the email to each other. So our agents do display a high social ability according to our test. There are a number of ways to get agents to communicate, and a number of Agent Communication Languages (ACLs) have been recommended (Nwana & Wooldridge, 2000). However, for the sake of simplicity, this example implements a simple message passing system. Message passing is more complex than some simpler forms of communication that could have been implemented, but it is very flexible and imitates how separate processes communicate. This form of communication could also be extended to include passing messages to different Agent Hosts residing on different physical computers. In order to implement message passing, each agent is assigned a separate input and output channel. These I/O channels are implemented as Java Vectors and are created for the agent by the Agent Host after it instantiates the agent as a thread, but before it places it in a Runnable state by calling its start() method. The I/O channels are assigned to the agent by calling the appropriate set-channel method shown in Listing 2. The messages in the example are simply structured strings with addressing and command information identified by keywords. Each of the agents has simple get and send message methods that retrieve or send a message via the appropriate I/O channel. The message delivery mechanism is implemented by functionality supplied by the Agent Host. The Agent Host periodically scans all the output channels of all active agents (by searching through the reference object list) looking for messages. When it discovers one, it retrieves the first token from the message string, which names the destination agent, and then passes the message to the appropriate agent by placing it into that agent's input channel. That agent will then pick it up the next time it checks for incoming messages. No guarantee is made as to the
order of the messages delivered between agents. Some of the messages may be for the Agent Host itself, for example, a message from the Interface Agent to start another agent. In this case, the Agent Host deals with the message and then discards it. This is a simple delivery mechanism constructed for the example, but it could easily be extended to a more comprehensive one. Figure 5 depicts the model for our framework for agent communication. The structure of the messages is simple, but our agents now satisfy the social ability criterion in our test for agency.

Figure 5: Framework for agent communication (the Agent Host's messaging service connects the In and Out channels of the Interface Agent and each specific agent)
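The delivery scan just described might look roughly like the following fragment of the Agent Host. This is a simplified reconstruction, not the author's exact code: `agents` is assumed to be the Vector of AgentReference objects, and `handleHostCommand` and `findAgent` are assumed helper methods.

// Simplified reconstruction of the Agent Host's delivery scan. It drains
// each agent's output channel and routes every message by its first token.
private void checkMessages() {
    for (int i = 0; i < agents.size(); i++) {
        AgentReference ref = (AgentReference) agents.elementAt(i);
        while (!ref.oc.isEmpty()) {
            String msg = (String) ref.oc.remove(0);
            java.util.StringTokenizer st = new java.util.StringTokenizer(msg);
            String destination = st.nextToken();      // first token = target agent
            if (destination.equals("AgentHost")) {
                handleHostCommand(msg);               // e.g., start another agent
            } else {
                AgentReference target = findAgent(destination);
                if (target != null) {
                    target.ic.addElement(msg);        // deliver to the input channel
                }
            }
        }
    }
}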
Our next test for agency is that of autonomy. All agents in this example execute as separate threads in the Java VM, and thus run asynchronously, which is the bare minimum (Gilbert et al., 1995). However, both the Email Reader and the Agent Host lie somewhere between the reactive and proactive ends of the "Autonomy" axis of our test. In fact, given the nature of what these two agents do, they certainly seem to satisfy Franklin's test of an autonomous agent (Franklin & Graesser, 1996) as outlined in a previous section. Each agent's asynchronous execution is governed by the thread's run() method. An agent can be designed to react to events as they occur, or a more proactive one may perform its own tasks routinely on a periodic basis, responding to what it finds. The run() methods of the Agent Host and Email Reader agents are governed by a simple sleep-perform loop, as shown in Listing 3. The run() method of the Agent Host shown in Listing 3 begins by starting the Interface Agent, and is then governed by a loop which causes the agent to continuously sleep for 15 seconds and then check its messages. During this checking, the Agent Host delivers messages and responds to messages sent to it, as described above. This agent can be interrupted during its sleep cycle, and hence catches any InterruptedExceptions, but in this example it does not respond to them. The length of the sleep cycle shown here is fixed, but it could easily be made configurable by allowing the agent to accept a new time via a message.
public void run() {
    // now start the Interface agent
    startAgent("InterfaceAgent");

    while (true) {
        try {
            Thread.sleep(15000);
            checkMessages();
        } catch (InterruptedException e) {
        }
    }
}
Listing 3 Agent Host run method
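As an aside, making the fixed 15-second interval configurable, as suggested above, needs only a field and a small message handler inside the Agent Host class. A sketch follows; the "SLEEP" keyword and method name are our own invention, not part of the example system:

// Sketch: accept a "SLEEP <milliseconds>" control message so the agent's
// cycle length can be changed at run time; Thread.sleep(15000) in the
// run() method above would then become Thread.sleep(sleepMillis).
private long sleepMillis = 15000;

private void handleControlMessage(String msg) {
    if (msg.startsWith("SLEEP ")) {
        try {
            sleepMillis = Long.parseLong(msg.substring(6).trim());
        } catch (NumberFormatException e) {
            // ignore malformed values and keep the current interval
        }
    }
}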
The last test of agency is for intelligence. Clearly the agents presented in this example are not intelligent, at least not in the strict AI sense. The framework we have developed here does not support intelligent behavior, though the agents themselves do exhibit the minimal characteristic of having preferences. In this example, these preferences are not realized in the form of rules but by simple flags and variables. Most applications do not need truly intelligent agents, and preferences could easily be realized by extending the message passing system so that sending an appropriate message to an agent could change its preferences. While the framework itself does not support intelligence, it does not preclude the addition of intelligence to the individual agents by some sort of reasoning system. There are some rule-based engines that can be added to a Java program to help support intelligent behavior through inference based on rules. The Java Expert System Shell (JESS) (Friedman-Hill, 1999) is perhaps one of the best known, and a JESS rule-based engine can easily be added to any of the agents discussed to support intelligent behavior.
CONCLUSIONS

This chapter has described a simple architecture for the development of agents. Examples of code have been presented from a small example system developed using this architecture, and the full sample code can be obtained by contacting the author via email. The agents in this example system are not fully functional, but were taken from another research project (EVE) for simple demonstration purposes. The examples do provide enough functionality to demonstrate some basic
principles in the development of some simple agents. A framework for communication between agents was also presented, and the agents included in the examples use this framework to perform very basic message passing. The simple architecture and framework developed in this chapter are provided to assist the novice agent builder in understanding some of the basics needed for the development of agents. The framework for communication can be enhanced to support a fully functioning agent system that requires a more sophisticated message passing system. Also, the functionality of the Agent Host can be further developed to support mobility between different host machines by allowing transfer via Internet protocols. The ideas presented here are intended to provide a starting point only, for someone who wishes to develop an agent system but finds complex examples too daunting.
REFERENCES
Anonymous. (1999). Agent Technology – An Overview. Proceedings of the 10th Australasian Conference on Information Systems, ACIS '99, Wellington, New Zealand.
Campione, M., & Walrath, K. (1996). The Java Tutorial: Object-Oriented Programming for the Internet. Reading, MA: Addison-Wesley.
Choi, J. (1998). Agent Concepts and Models. Dept. of Computer Science and Engineering, Hanyang University.
Darbyshire, P., & Lowry, G. (2000). An Overview of Agent Technology and its Application to Subject Management. Paper presented at the Information Resources Management Association International Conference, IRMA 2000, Anchorage, Alaska.
Franklin, S., & Graesser, A. (1996). Is it an Agent, or just a Program?: A Taxonomy for Autonomous Agents. Proceedings of the 3rd International Workshop on Agent Theories, Architectures, and Languages.
Friedman-Hill, E. J. (1999). Jess, The Java Expert System Shell [Web page]. Distributed Computing Systems. Available: http://herzberg.ca.sandia.gov/jess [Accessed 8/25/99].
Gilbert, D., et al. (1995). The Role of Intelligent Agents in the Information Infrastructure. USA: IBM.
Gosling, J., & McGilton, H. (1996). The Java Language Environment: A White Paper. Sun Microsystems.
Maes, P. (1995). Intelligent Software. Scientific American, 273(3), September, 84-86.
Nwana, H. (1996). Software Agents: An Overview. Knowledge Engineering Review, 11(3).
Nwana, H., & Wooldridge, M. (2000). Software Agent Technologies: Intelligent Systems Research, Applied Research and Technology. BT Labs.
Sundsted, T. (1998). An introduction to agents. JavaWorld, June 1998.
Wooldridge, M., & Jennings, N. (1995). Intelligent Agents: Theory and Practice. Knowledge Engineering Review, 10(2), 115-152.
Chapter 7
Intelligent Agents in a Trust Environment
Rahul Singh
University of North Carolina, Greensboro, USA
Mark A. Gill
Arizona State University, USA
Intelligent agents and multi-agent technologies are emerging technologies in computing and communications that hold much promise for a wide variety of applications in Information Technology. Agent-based systems range from the simple, single-agent system performing tasks such as email filtering, to a very complex, distributed system of multiple agents, each involved in individual and system-wide goal-oriented activity. With the tremendous growth in the Internet and Internet-based computing and the explosion of commercial activity on the Internet in recent years, intelligent agent-based systems are being applied in a wide variety of electronic commerce applications. In order to be able to act autonomously in a market environment, agents must be able to establish and maintain trust relationships. Without trust, commerce will not take place. This research extends previous work in intelligent agents to include a mechanism for handling the trust relationship and shows how agents can be fully used as intermediaries in commerce.
Previously Published in Managing Information Technology in a Global Economy, edited by Mehdi Khosrow-Pour, Copyright © 2001, Idea Group Publishing.

INTRODUCTION

As we look towards the future of electronic commerce, one can see a world where intelligent agents will play a larger and increasingly more important role in
day-to-day transactions between both businesses and people. These agents will be used in situations previously addressed only by humans. As more businesses move towards this point, agents must be able to address one of the most important parts of electronic commerce–the trust relationship. This research builds upon the prior work of Maes, Guttman and Moukas (1999) and Bakos (1998) to extend the notion of intelligent agents working in an electronic market to include the building and maintaining of trust relationships. Bakos (1998) establishes a framework (see Figure 1) for both electronic and non-electronic markets, identifying three main functions: matching buyers and sellers, facilitation of transactions, and institutional infrastructure. Intermediaries usually provide the first two of these functions, while the third is the domain of governmental agencies.

Figure 1: Functions of a market (Bakos, 1998)
• Matching buyers and sellers: determination of product offerings; search; price discovery
• Facilitation of transactions: logistics; settlement; trust
• Institutional infrastructure: legal; regulatory
Maes et al. (1999) argue that intelligent agents are well suited for the roles that intermediaries play in commerce. In an electronic environment, agents fill many roles involved in matching buyers with sellers. Maes et al. have shown the application of intelligent agents to brokering and negotiating purchases between buyer and seller in the Kasbah project. However, in this project, issues involving trust are identified as more limiting than the issues involving artificial intelligence. In order for these agents to be widely accepted, the user must understand and easily control the mechanisms the agent uses to determine its behavior (Moukas, forthcoming). In an extension of the framework built by Maes et al., we will show how agents can be built to include the trust relationship necessary to fulfill the requirements put forth in the market framework of Bakos (1998).
INTELLIGENT AGENTS

Intelligent agents and multi-agent technologies are an emerging technology in computing and communications that holds much promise for a wide variety of applications in Information Technology. An intelligent agent is "a computer system situated in some environment and that is capable of flexible autonomous action in this environment in order to meet its design objectives" (Jennings and Wooldridge, 1998). Agent-based systems range from the simple, single-agent system performing tasks such as email filtering, to a very complex, distributed system of multiple agents, each involved in individual and system-wide goal-oriented activity. With the tremendous growth in the Internet and Internet-based computing and the explosion of commercial activity on the Internet in recent years, intelligent agent-based systems are being applied in a wide variety of electronic commerce applications, including online consumer purchasing, network management, supply chain systems, information retrieval, Internet-based auctions, and online negotiations. Agent-based systems may consist of a single agent engaged in autonomous goal-oriented behavior, or multiple agents that work together to exhibit granular as well as overall goal-directed behavior. The general multi-agent system is one in which the interoperation of separately developed and self-interested agents provides a service beyond the capability of any single-agent model. Such multi-agent systems provide a powerful abstraction that can be used to model systems where multiple entities exhibiting self-directed behaviors must coexist in an environment and achieve the system-wide objective of the environment. It is clear that agents are a powerful abstraction with which to model and design systems that require independent action at various levels of granularity of the system. In the past few years, the Internet and the World Wide Web have become major vehicles for the growth of online commerce. The Census Bureau of the Department of Commerce estimate of U.S. retail e-commerce sales for the second quarter of 2000, not adjusted for seasonal, holiday, and trading-day differences, was $5.52 billion, an increase of 5.3 percent from the revised first quarter 2000 level. The first quarter estimate was revised from $5.26 billion to $5.24 billion (US Department of Commerce, 2000). The Department of Commerce measures e-commerce sales as the sales of goods and services over the Internet, an extranet, Electronic Data Interchange (EDI), or other online system, where payments may or may not be made online. A recent report on the Digital Economy attributes the recent rapid growth of the Internet to its strength as a medium of communication, education and entertainment, and as a tool for electronic commerce. It is clear from all sources that advances in Information Technology and Electronic Commerce have been a significant contributor to the recent success and growth in the national economy.
Bakos (1998) points out that markets match buyers and sellers, facilitate transactions between them, and provide an institutional infrastructure to support the transactions. In the contemporary marketplace, the first two of these three functions are conducted with intermediaries. In the electronic marketplace, these functions may be facilitated using electronic intermediaries by leveraging the efficiencies afforded by Information Technologies (Bakos, 1998). Intelligent agent technologies hold great promise for fulfilling the role of intermediary in the electronic marketplace and supporting, or conducting on behalf of the user, the processes involved in matching buyers and sellers and facilitating the transactions between them. The Internet is a large distributed environment platform where multiple agencies conduct commercial activity. This activity involves: the search for sellers with products to suit the buyers, defined by price, quality, and other business considerations; the search for buyers who will buy the products of a seller; and the facilitation of such transactions. Intelligent agent technology has the capability to search through a large information space for specific needs and identify such sources. Intelligent agents can perform such searches within the parameters defined by the user and facilitate the transaction by bringing the resource to the user and acting on behalf of the user to conduct transactions. Therefore, it is not surprising that significant attention is being paid to this technology for the facilitation and empowerment of electronic commerce by the academic and business communities. Intelligent agents that are primarily directed at Internet and Web-based activities are commonly referred to as Internet Agents. There are many agent systems in the electronic commerce area that perform limited functions to support users. Examples include Andersen Consulting's BargainFinder, which undertakes price comparison, as does Jango (see Doorenbos, Etzioni and Weld, 1997), and AuctionBot (see Wurman, Wellman, and Walsh, 1998) and Kasbah (Chavez, Dreilinger, Guttman and Maes, 1997), which support product transactions (Macredie, 1998).

Figure 2: Stages of the consumer buying behavior model (Maes et al., 1999)
1. Need Identification
2. Product Brokering
3. Merchant Brokering
4. Negotiation
5. Purchase and Delivery
6. Service and Evaluation
Table 1: The consumer buying behavior model and intelligent agent support.

Need Identification
Activities involved: Realization of unfulfilled needs by the consumer.
Intelligent agent facilitation: Tools that alert the user to needs or opportunities based on knowledge of the user's preferences or business environment are useful in this regard.

Product Brokering
Activities involved: Retrieval of information about products.
Intelligent agent facilitation: In this stage, the agent is primarily involved in search activities to determine products and services to suit the consumer's needs.

Merchant Brokering
Activities involved: This stage provides a filtering of the information retrieved by the agents on the various products, based on the criteria of the buyer. It results in the creation of a set of feasible sellers.
Intelligent agent facilitation: This stage is analogous to many traditional decision-support activities that require choice on the part of the user. Agents may facilitate the ranking of alternatives, thereby facilitating the generation of choice by the user.

Negotiation
Activities involved: This stage determines the terms of the transaction. It varies in duration based on a number of factors, including the complexity of the marketplace, the number of feasible alternatives, and the monetary and business value of the transaction.
Intelligent agent facilitation: This is a dynamic stage where the buyer and seller(s) agents communicate preferences to find a mutually agreeable set of terms for the transaction. This activity may be facilitated by agents through communication abilities and matching of the needs of the buyer with the capabilities of the seller.

Purchase and Delivery
Activities involved: Upon completion of negotiations, the buyer makes the purchase, and delivery of the goods or services occurs based on the terms agreed upon in the negotiation stage.
Intelligent agent facilitation: This is typically the stage where the transaction moves from the electronic to the physical in the case of tangible goods or services, or comprises content delivery that simply requires a communication medium.

Service and Evaluation
Activities involved: After the purchase has been made, the buyer will engage in post-purchase evaluation to determine an overall satisfaction with the purchase decision.
Intelligent agent facilitation: This is a subjective stage where the user decides the utility of the product or service delivered. This stage forms the input for developing a preference for one seller over another, which is useful input for the merchant brokering and negotiation stages. This information provides guidance and input for the development of learning and the adaptive behavior of the intelligent agent.
The Communications of the ACM ran a special issue in March 1999 that focused on how "software agents independently and through their interaction in multi-agent systems are transforming the Internet's character." Agents and the business performance they deliver will be involved in up to $327 billion worth of Net-based commerce in five years, according to Forrester Research (Rosenbloom, 1999). Intelligent agents carry out activities in the electronic marketplace on behalf of the human user. Maes et al. (1999) present a model (Figure 2) for the behavior of intelligent agents over the Web through traditional marketing consumer buying behavior models. The consumer buying behavior model illustrates the actions and decisions involved in buying and using goods and services. This model is adapted in their research to consumer buying in the electronic marketplace and the use of intelligent agents in facilitating this activity. They present six stages to categorize agent-mediated electronic commerce. Intelligent agents may be used to facilitate a number of these stages in the consumer buying model. Table 1 presents a summary of the activities involved in each of these stages and provides suggestions for the facilitating role agents may play in each. The above model and the associated applications of intelligent agent technology provide a foundation for the analysis and development of intelligent agent-based systems for Internet-based application deployment. Individual components of the consumer behavior model's application to agent-assisted electronic commerce may have greater significance than others, based on the nature of the application. Even with the completeness and wide scope of this model, there is a need to extend the model to account for trust. It is the one component of the Bakos (1998) framework for markets that is not addressed. In the absence of trust, agents will not be able to fully operate on behalf of their human masters, but instead can only provide basic data-gathering functions or operate only under strict instructions. Barney and Hansen wrote that "Interpersonal and inter-organizational trust have been widely cited as important components of economic exchanges" (Barney and Hansen, 1994). For intelligent agents, trust is the missing piece of the puzzle.
ELECTRONIC TRUST

Trust is perhaps the most important aspect of electronic commerce. It is at the very heart of its foundation. In the absence of trust, commerce usually breaks down (Keen, Balance, Chan and Schrump, 2000). The current use of intelligent agents in the electronic environment does not explicitly account for this necessary ingredient for success. This is a modification needed in the model proposed by Maes et al. to enable agents to fully function in the current electronic marketplace.
While it is true that we can program agents to buy and sell things, in order for them to become autonomous, we need to account for trust relationships. In the current literature, electronic trust is gaining importance and garnering much attention. Studies about trust can be found in many diverse disciplines such as anthropology, economics, organization behavior, psychology and sociology. Bhattacharya, Devinney, and Pillutla (1998) attempt to synthesize many different definitions of the construct to develop a research framework for this area. They define different criteria that a definition must include to truly represent the richness of its meaning. First, trust exists in an environment that includes uncertainty. Kollock (1999) and others (Deutsch, 1958; Sheppard and Sherman, 1998) view trust as existing only in an uncertain and risky arena. In commerce, there is an exchange of information. Whether it is electronic or face-to-face, there is a sharing of personal information such as credit card numbers, addresses, phone numbers and other such information. The concept of sharing this information places the parties involved at risk. This places a price or value for the consumer on the presence of trust in a transaction. The second aspect of trust is that it can be predicted to exist. One can count on its presence. This could also be looked at as dependability (Sitkin and Roth, 1993). It really should be considered a distribution of the expectancy: at any given time, there is a percentage chance that the trust one has in a company will fail (Bhattacharya et al., 1998). Two other aspects to be considered are the strength and importance of the trust relationship. If the purchase is something of great importance or cost, the importance of the relationship will be heightened. If it were a minor purchase, the reverse would be true. The importance of trust in these relationships can be seen in the work of Zucker (1986) and others (Arrow, 1974; Williamson, 1975), who claim that the basis for stability in markets is trust. The final aspect of the trust framework is that trust is good (Bhattacharya et al., 1998; Lewicki, McAllister and Bies, 1998). It is to be considered a positive outcome. In the absence of trust, we have distrust, which will inhibit the growth of the online relationship. This research will show how this framework is used to explain behavior in markets that take place in a virtual space, and how intelligent agents can build and maintain a trust relationship. Given the range of disciplines that trust entails, it is important to clearly define the aspects of trust that are in the working definition for this research. That definition is as follows: "Trust is an expectancy of positive (or nonnegative) outcomes that one can receive based on the expected action of another party in an interaction characterized by uncertainty" (Bhattacharya et al., 1998). Mayer, Davis and Schoorman (1995) synthesize multiple disciplines with the definition of trust as "the willingness of a party to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party."
To put it succinctly, Barney and Hansen (1994) describe trust as the avoidance of bad outcomes.
AGENTS THAT TRUST

Having established the importance of trust in commerce, we now turn to how agents can carry out this most human of functions. This research presents a model for the application of intelligent agents in the implementation of electronic trust to facilitate electronic commerce. Using the framework defined by Bhattacharya et al. (1998), we will show that intelligent agents can be used to build and maintain trust relationships on behalf of the human owner for whom the agent is acting. The first requirement is that the agent can handle uncertainty and risk. These elements will be present in any transaction. For the agent to be successful at brokering a deal to the satisfaction of its owner, it must first know something about the preferences of that person. This information will be stored in a profile that the intelligent agent maintains and modifies over time. Much like the profiling techniques currently employed on the Internet, information about buying preferences and approved vendors can be entered. This information gives the agent a starting point as it begins buying and selling activities. As transactions take place, the agent can modify its behavior based on the response from the user. For example, initially, when trust in the agent is low, the user may wish the agent to search for prices, but not complete the transaction until authorization is approved. Through a series of acceptances or denials of suggestions, the agent can build a better profile of the buying and selling patterns of its user, which may later lead the user to allow the agent to complete transactions without interacting with the user. Two other important aspects of trust identified by Bhattacharya et al. are the notions of importance and strength. As agents negotiate on our behalf, the importance of the items being considered for sale matters. If the transaction involves a small amount of money, it may not be critical that the best price be found or that the transaction involve a certain vendor. As the importance of the transaction grows, the behavior of the agent will be modified. As an example, the agent's owner may have one set of preferences when the transaction cost is below a certain threshold, but require an additional step for the agent to complete a transaction involving high-priced items. What is considered the threshold is something that will be set by the user and can be modified over time as the agent obtains a greater case history of transactions to search when making a decision. The strength of the trust relationship may be used to decide which vendors or buyers an agent will deal with. The agent can store a database of trust ratings for trading partners with which it has previous experience. In certain transactions, the agent can be required by its user to interact only with vendors or buyers that have the desired trust rating. These
ratings could be obtained through experience, or the agent could use an outside source such as a trust assurance service provided by a third party. One final consideration in the trust relationship that can be programmed into the agent is the ability to store and utilize a risk rating for its user. Some people will be more risk seeking than others. The intelligent agent will need to reflect this characteristic in order to be used without constant supervision. The agent will need to know in what types of situations a person is risk seeking and when he or she is risk averse. For instance, when buying commodities, an individual may be indifferent as to vendors and want to search for the best price. In other cases, the vendor may play a large role, and the person is willing to accept a higher price in order to transact business with a particular company. All of these considerations can be built into the next generation of intelligent agents. The end result is that agents will be able to reflect the characteristics that users maintain and modify in their trust relationships as they transact business. When we have agents that can account for trust, we have agents that can act as intermediaries and fulfill the first two roles identified in the framework by Bakos (1998).
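As an illustration of how such bookkeeping might be coded (the class, score range and thresholds below are our own invention, not a published design), an agent could keep a table of vendor trust ratings and consult both the rating and a price threshold before acting autonomously:

import java.util.Hashtable;

// Illustrative sketch of an agent's trust bookkeeping: vendor ratings
// plus a price threshold decide whether the agent may act on its own.
public class TrustProfile {
    private Hashtable ratings = new Hashtable();  // vendor name -> Double score in [0, 1]
    private double minTrust;         // user-set minimum acceptable trust rating
    private double priceThreshold;   // above this amount, ask the user first

    public TrustProfile(double minTrust, double priceThreshold) {
        this.minTrust = minTrust;
        this.priceThreshold = priceThreshold;
    }

    // Record experience with a vendor (or a rating from a third-party service).
    public void rate(String vendor, double score) {
        ratings.put(vendor, new Double(score));
    }

    // May the agent complete this purchase without asking its user?
    public boolean mayTransactAutonomously(String vendor, double price) {
        Double r = (Double) ratings.get(vendor);
        if (r == null || r.doubleValue() < minTrust) {
            return false;             // unknown or insufficiently trusted vendor
        }
        return price <= priceThreshold;  // costlier deals need explicit approval
    }
}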
CONTRIBUTION
Intelligent agents have matured a great deal from the “filter email and find information” agents of a short time ago, but they still have more growing to do before they will be able to act on our behalf in a commercial situation. In this research we have outlined how an extension of the model by Maes et al. can allow trust to be represented. By having agents that can build and maintain trust relationships, we are able to build agents that can operate in the market framework identified by Bakos. Intelligent agents can take the place of human intermediaries that previously have acted on our behalf. These agents can be flexible and can adjust their profile of the user based on user input and market interaction. Most importantly, this next generation of intelligent agents will be able to provide a mechanism for operationalizing trust, the cornerstone of commerce.
REFERENCES

Arrow, K. (1974). The Limits of Organization. New York: Norton.
Bakos, Y. (1998). The emerging role of electronic marketplaces on the Internet. Communications of the ACM, 41(8), August, 35-42.
Barney, J. & Hansen, M. (1994). Trustworthiness as a source of competitive advantage. Strategic Management Journal, 15 (Winter Special Issue), 175-190.
Bhattacharya, R., Devinney, T., & Pillutla, M. (1998). A formal model of trust based on outcomes. Academy of Management Review, 23(3), 459-472.
Chavez, A., Dreilinger, D., Guttman, R. and Maes, P. (1997). A real-life experiment in creating an agent marketplace. Proceedings of the Second International Conference on the Practical Application of Agents and Multi-Agent Technology (PAAM '97), London, UK, April.
Deutsch, M. (1958). Trust and suspicion. Journal of Conflict Resolution, 2, 265-279.
Doorenbos, R., Etzioni, O. and Weld, D. (1997). A scalable comparison-shopping agent for the World Wide Web. Proceedings of the First International Conference on Autonomous Agents, Marina del Rey, CA, February.
Jennings, N. R. & Wooldridge, M. (1998). Agent Technology: Foundations, Applications, and Markets. London: Springer.
Keen, P., Balance, C., Chan, S., and Schrump, S. (2000). Electronic Commerce Relationships. Upper Saddle River, NJ: Prentice Hall.
Kollock, P. (1999). The production of trust in online markets. In E. Lawler, S. Thye & H. Walker (Eds.), Advances in Group Processes, 16. Greenwich, CT: JAI Press.
Lewicki, R., McAllister, D., and Bies, R. (1998). Trust and distrust: New relationships and realities. Academy of Management Review, 23(3), 438-458.
Macredie, R. D. (1998). Mediating Buyer-Seller Interactions: The Role of Agents in Web Commerce. In B. F. Schmid, D. Selz, and R. Sing (Eds.), EM - Electronic Contracting. EM - Electronic Markets, 8(3), 10/98. URL:
[10/05/2000].
Maes, P., Guttman, R., and Moukas, A. (1999). Agents that buy and sell. Communications of the ACM, 42(3), March, 81-91.
Mayer, R. C., Davis, J. H. and Schoorman, F. (1995). An Integrative Model of Organizational Trust. Academy of Management Review, 20(3), 709-734.
Moukas, A., Guttman, R., and Maes, P. (to appear). Agent-mediated Electronic Commerce: An MIT Media Laboratory Perspective. Proceedings of the International Conference on Electronic Commerce. URL: [10/05/2000].
Rosenbloom, A. (1999). Editorial Pointers. Communications of the ACM, 42(3).
Sheppard, B., and Sherman, D. (1998). The grammars of trust: A model and general implications. Academy of Management Review, 23(3), 422-437.
Sitkin, S. and Roth, N. (1993). Explaining the limited effectiveness of legalistic "remedies" for trust/distrust. Organization Science, 4, 367-392.
Williamson, O. (1975). Markets and Hierarchies. New York: Free Press.
Wurman, P., Wellman, M. and Walsh, W. (1998). The Michigan Internet AuctionBot: A configurable auction server for human and software agents. Proceedings of the Second International Conference on Autonomous Agents, May.
Zucker, L. (1986). The production of trust: Institutional sources of economic structure, 1840–1920. In B. Staw & L. Cummings (Eds.), Research in Organizational Behavior, 8, 53-111. Greenwich, CT: JAI Press.
Chapter 8
A Case Study on Forecasting of the Return of Scrapped Products through Simulation and Fuzzy Reasoning
Jorge Marx-Gómez and Claus Rautenstrauch
Otto-von-Guericke-University, Magdeburg, Germany
Forecasting the return of scrapped products for recycling poses severe problems for remanufacturing companies, due to uncertainties in the timing and quantities of returns. A method is suggested that combines a simulation approach with fuzzy reasoning. The prediction model presented here is based on life-cycle data (e.g., sales figures and failures) and impact factors (e.g., lifetime, wear and tear, usage intensity). First, these data serve to develop a simulation model, which consists of sub-models describing sales, failures, usage and returns, respectively. Furthermore, the forecasting approach will be extended by a fuzzy component, introducing expert knowledge into the model design to obtain more accurate forecasting results. An empirical study has been conducted using life-cycle data of photocopiers to forecast the returns. The results of this study are presented in this chapter as well.
Previously Published in Managing Information Technology in a Global Economy, edited by Mehdi Khosrow-Pour, Copyright © 2001, Idea Group Publishing.

MOTIVATION

Today, companies are forced by voluntary or legal liabilities to predict their return of scrapped products at the end of the product life cycle. On the one hand, these prognoses are needed for the planning of recycling and waste disposal. On the other hand, they are needed in material requirements planning for the calculation of the return of secondary materials from recycling processes into production. The return of products to be recycled from consumers to producers varies depending on consumers' behavior and product life cycle figures. Up to now, neither production planning, scheduling and control systems (PPS-systems) nor recycling or disassembly planning systems (RPS-systems) have contained methods for the prognosis of the return of scrapped products to be recycled (Guide, 1999; Rautenstrauch, 1997). If recycling is implemented at an industrial scale, active behavior would be expected, which means that recycling planning would be based on a forecast of returns for the next planning period, analogous to manufacturing program planning. Since uncertainties concerning the timing and quantity of returns are among the main problems, a robust and precise forecasting method is a main prerequisite for cost-effective remanufacturing. Contemporary forecasting methods used in manufacturing program planning, such as moving average, exponential smoothing, and linear programming approaches, cannot be applied, because the past data required by these methods are usually unavailable, insufficient or inconsistent. Further difficulties arise from most companies' reluctance to provide the required data and figures. Therefore, a forecasting model based on life-cycle data regarding the impact factors sales, failures, usage intensity and return quotas must be applied. The model itself is a two-stage approach. The forecasting itself is performed by a set of fuzzy controllers, which are created in the second stage. In the first stage, a simulation model is developed to generate basic data for the adjustment and calibration of the fuzzy controllers. In the following, a forecasting method based on this technology will be proposed. The method is illustrated by a case study on photocopiers.
APPROACHING FORECASTING THROUGH SIMULATION
To obtain reliable data for forecasting the time and amount of scrapped products returning to the recycling process, the developed simulation model contains a sub-model for each impact factor, describing sales, failures, usage intensity, and return quotas, respectively. The sub-models, which are combined mathematically, provide a framework for parameterizing the simulation model and yield functions over time for sales, failures, and return amounts. The simulation model is used to determine in which periods what amount of scrapped products returns to the producer, providing approximate values. It also serves to calibrate the fuzzy controllers.
Sub-Models
The sub-models listed below have been considered and incorporated into the simulation model to identify the return characteristics of scrapped photocopiers.
Sales Model
Product Life Cycle
The Product Life Cycle (PLC) can be understood as a general model describing the development of turnover over all life periods of a product (Strebel and Hildebrandt, 1989). The PLC is divided into four stages (Sachs, 1984; Kotler, 1995):
• Introduction stage: The product is rolled out into the market.
• Growth stage: The new product is increasingly accepted by the market.
• Maturity stage: This stage is characterized by decreasing growth of turnover and stagnant profits.
• Decrease stage: The traditional PLC model ends with the decrease stage, in which turnover and profits are decreasing.
Empirically Investigated Model
The investigation is based on the sales figures of three types of photocopiers over a long period of time. For data protection reasons, the name of the company that supplied the data is not mentioned.
Figure 1: PLC of Photocopier 1 (copiers sold per half-year, 1988 to 1997.5)
Through the analysis of the sales
data presented in Figure 1, only three stages could be identified (the sales figures of the other types of photocopiers are very similar):
• Stage 1 with a relatively sharp growth of sales,
• Stage 2 with stagnation on a high level, and
• Stage 3 with decreasing sales over a longer period.
In the figure illustrating the PLC of a photocopier, a steeply rising curve on the left side and a slightly dropping curve on the right side can be identified. The attempt to describe the sales curves with a Gaussian distribution failed: it was impossible to construct a reasonable adjustment line in a probability net for a Gaussian distribution. In practice, the steeply rising beginning of sales that would be typical of a Gaussian distribution could not be demonstrated for photocopiers. This is caused by customers' behavior: in a branch with a high innovation rate, it is typical that customers demand the latest products. Therefore, the Weibull distribution (Sachs, 1984; Reichelt, 1978; DGQ 11-04, 1995) is more appropriate for describing the sales curves over the whole PLC, because two parameters are employed to describe the appearance of the curve on the left and right sides. The reliability function R(t) of a Weibull distribution with two parameters is the following (Wilrich, 1987):

R(t) = exp(-(t/T)^b)
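As an illustration only (the chapter itself works in a spreadsheet, and the function names here are our own), the following minimal Python sketch evaluates the two-parameter Weibull reliability function and the corresponding sales density, with parameter values echoing the case study (b = 2, T = 36 months):

import math

def weibull_reliability(t, T, b):
    # Two-parameter Weibull reliability: R(t) = exp(-(t/T)^b)
    return math.exp(-((t / T) ** b))

def weibull_sales_density(t, T, b):
    # Density f(t) = (b/T) * (t/T)^(b-1) * exp(-(t/T)^b), read here as the
    # share of total sales occurring around time t
    return (b / T) * (t / T) ** (b - 1) * weibull_reliability(t, T, b)

# With T = 36 months, 63.2% of all sales have occurred by month 36:
print(1 - weibull_reliability(36, T=36, b=2))   # -> 0.632...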
For the sales model, the two-parameter Weibull distribution is described by the following parameters:
• T: the scale parameter. In our model, T determines the beginning of the stage in which the curve becomes slightly dropping. This stage is reached when 63.2% of all sales have been carried out.
• b = 2: b is defined as the form parameter.
The grade of this adjustment can be shown by applying a life cycle network (DGQ 17-26, 1995) if a reasonable adjustment line can be constructed.
Figure 2: Life cycle network (Copier Type 3), plotting cumulative sales in % against life time t on a double-logarithmic grid
In Figure 2, the sales figures of a photocopier are sketched in a life cycle network. Now it is obvious that the Weibull distribution can be applied for the sales model.
Failure Model
Failures are unexpected events interrupting or abnormally ending usage. The evaluation of failure data has shown that the Gaussian distribution cannot reasonably be applied to describe the behavior of failures over time (Zacks, 1992); nor does a logarithmic transformation put things right in most cases. Again, the Weibull distribution is useful for the failure model in a PLC (Kühlmeyer, 1996; DGQ 11-04, 1995), but in this case with three parameters. Many failures are caused by damages that appear with a (long) delay between cause and effect: a long-term accumulation of "microscopic" damages is necessary until a macroscopic failure becomes visible. The period in which only microscopic damages appear causes a delay between the rollout of a new product and the first failures. This delay is called the "failure-free period" (t0) and is the third parameter of the Weibull distribution underlying the failure model. Furthermore, a main advantage of the Weibull distribution is the variety of failure types it can describe, e.g., very early failures or failures caused by accident or by wear and tear. This flexibility comes from the form parameter b, which makes the Weibull distribution a generalization of the Gaussian distribution. The reliability function R(t) of the Weibull distribution with three parameters is given below:

R(t) = exp(-(((t - t0) / (T - t0))^b))
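Purely as a sketch (not the authors' code), the three-parameter variant with a failure-free period t0 can be written as follows; the parameter values anticipate the case-study assumptions listed below.

import math

def failure_reliability(t, t0, T, b):
    # Three-parameter Weibull reliability with failure-free period t0:
    # R(t) = exp(-(((t - t0) / (T - t0))^b)) for t > t0, else 1.0
    if t <= t0:
        return 1.0   # only "microscopic" damages; no visible failures yet
    return math.exp(-(((t - t0) / (T - t0)) ** b))

# Wear-and-tear failures (b = 3) with t0 = 200,000 and T = 400,000 pages:
for pages in (150_000, 250_000, 400_000, 600_000):
    print(pages, round(failure_reliability(pages, 200_000, 400_000, 3), 3))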
As in the sales model, characteristic parameters can be given for the three-parameter Weibull distribution of the failure model:
• T: as given in the previous subsection.
• b: the form parameter, which in this case is a measure for the characteristic type of failure. The following assignments of b to types of failures are reasonable: b < 1: early failures; b = 1: failures by accident; b > 1: failures by wear and tear.
• t0: the failure-free period.
All types of failures are relevant for technical products. Failures by accident are characterized by the fact that time has no influence on their appearance; therefore, the failure rate is assumed to be constant. Failures by wear and tear appear as products become increasingly unreliable over time; in this case, the failure rate depends on time. Early failures appear particularly in the introduction stage of the PLC. Here, the failure rate also depends on time, but the effect of time is positive, i.e., the rate decreases over time. Summing all types of failures yields the well-known "bathtub curve." In this simulation model, it is assumed that early failures are sorted out by the producers or suppliers, and failures by accident can be neglected until the end of t0 is reached. After that, failures by accident and by wear and tear have to be taken into account increasingly. This overall behavior of failures is typical for high-quality technical products.
Usage Model
The usage model (or usage intensity model) is based on the observation that many users make few copies per day while few users make many copies per day. This can be modeled with a logarithmic normal distribution, whose lower bound is 0, since fewer than 0 copies per day are not possible. A logarithmic normal distribution has likewise been observed in the automotive industry for kilometers driven per car per year. Within the scope of this study, the behavioral patterns concerning different types of photocopiers have been analyzed using a sample of 35 types of photocopiers over five years. The graphical interpretation shows a good fit to the adjustment line in the logarithmic probability network. Furthermore, a geometric mean of 5,550 pages per month and a dispersion measure of 1.4 were evaluated. It is remarkable that most of the photocopiers are designed for a throughput of only 3,000 pages per month; this means that nearly 90% of all users use their photocopiers more intensively than planned by the producer. The histogram depicting the probability density function of a logarithmic normal distribution shows a good correspondence between the empirical and theoretical frequency distributions.
Return Model
Despite existing incentive systems, it cannot be assumed that all sold products return to the producer at the end of the life cycle; non-returns can be caused, for example, by ignorance, damage, export, or convenience. Therefore, the return quota is less than 100%. Depending on incentives or sanctions (often prescribed by law), different return rates are possible. The return quota (the probability of returns over time) is assumed to be uniformly distributed.
Executing Simulation
The simulation model was developed and executed with the conventional spreadsheet program Microsoft Excel. The simulation of sales over time, lifetime, and usage characteristics is based on the sub-models described above. The assumptions underlying the simulation model are the following:
• The decrease stage of a photocopier begins three years after its rollout (T = 36 months).
• The form parameter for sales is 2, following the results of the previous subsection (b = 2).
• The minimum lifetime of a photocopier is set to 200,000 pages (t0).
• The characteristic lifetime of a photocopier is set to 400,000 pages (T).
• The form parameter for failures is set to 3 (b = 3).
• The average amount of copies is 5,000 pages per month.
• The measure for the asymmetry (dispersion) of the logarithmic normal distribution is assumed to be 1.4.
Figure 3: Simulation results with return quota 60% (relative frequencies of the sales, failure, and return-amount curves over time in months, 0 to 336)
Figure 4: Simulation results with return quota 80% (relative frequencies of the sales, failure, and return-amount curves over time in months, 0 to 336)
The simulation relates to a production series of a photocopier of which 1,000 pieces were produced and sold. A series of experiments has shown that increasing the number of photocopiers sold has no influence on the informative value of the model; therefore, 1,000 pieces are sufficient. In the first run of the simulation model, it was assumed that the return quota is 60% for each photocopier at the end of its life cycle, i.e., the incentives or sanctions are not implemented very successfully. In the second run, the return quota is assumed to be 80% (well-working incentives or sanctions). The results of these runs are presented graphically in Figure 3 and Figure 4.
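The authors implemented the simulation in Excel; as an assumption-laden illustration of the same logic, the Monte-Carlo sketch below combines the four sub-models in code. The helper names are ours, and the log-normal sigma of ln(1.4) is our reading of the dispersion measure of 1.4.

import math
import random
from collections import Counter

def weibull_sample(T, b, t0=0.0):
    # Inverse-transform sampling from a (shifted) Weibull distribution
    u = random.random()
    return t0 + (T - t0) * (-math.log(1.0 - u)) ** (1.0 / b)

def simulate_returns(n=1000, return_quota=0.6, seed=1):
    random.seed(seed)
    returns_by_month = Counter()
    for _ in range(n):
        sale_month = weibull_sample(T=36.0, b=2.0)               # sales sub-model
        lifetime = weibull_sample(T=400_000, b=3.0, t0=200_000)  # failure sub-model
        pages_per_month = random.lognormvariate(math.log(5000), math.log(1.4))
        months_in_use = lifetime / pages_per_month               # usage sub-model
        if random.random() < return_quota:                       # return sub-model
            returns_by_month[int(sale_month + months_in_use)] += 1
    return returns_by_month

hist = simulate_returns(return_quota=0.6)   # cf. Figure 3; use 0.8 for Figure 4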
Interpretation of Simulation Results
The first photocopiers return after a period of two to three years, i.e., recycling starts during the decrease stage. When the PLC is finished, only 15% of all returns are back; in other words, about 85% of all recycling has to be done after the PLC is finished. This means that only approximately 15% of secondary goods can be reused for the primary production of the same type of photocopier. Another key result of this study is the so-called "triplication rule":
• After three years, the peak value of sales is passed.
• After nine years, the PLC is finished.
• After approximately 25 years, the return of scrapped products is finished.
These results were discussed with representatives of several recycling firms, who acknowledged that they correspond to their experiences. The results are also consistent with life cycle data and data on cars taken off the road published by the German "Straßenverkehrsamt" (DGQ Z.42-25, 1990). The peaks in sales and returns show a broad range of variation, which can be described by the chance variation of a binomial distribution; therefore, decreasing sales figures do not smooth out the ends of the curves.
INTRODUCING FUZZY REASONING FOR FORECASTING
Developing a Fuzzy Controller
As described above, the return of scrapped products is influenced by several impact factors, for example, the amount of sold products, life expectancy, usage intensity, frequency of failures, and return quota. Since these impact factors are characterized by uncertainty and vagueness, a fuzzy reasoning approach is applied to the forecasting of returns of scrapped products. The basic idea of rule-based inference applying fuzzy logic (fuzzy reasoning) is to introduce qualitative expert knowledge into the design of a fuzzy controller. The fuzzy logic approach is employed because it is particularly suitable for the adequate modeling of knowledge that is available in the shape of qualitative conceptions rather than quantitative figures (Mayer et al., 1993).
Figure 5: General architecture of a fuzzy controller. Input variables I1 ... In are fuzzified and processed by an inference machine evaluating a rule base of IF-THEN rules (e.g., "IF Sales = little AND Phase = rise AND Incentive = medium THEN Return = little"), yielding the output variable O1
In the fuzzy forecasting model presented here, knowledge verbally expressed from the experience of recycling experts is transformed into a rule base applying linguistic variables and fuzzy sets. Furthermore, empirically determined life cycle data for the impact factors mentioned above, supplied by a producer of photocopiers, are used to parameterize the model. The fuzzy controller is equipped with a fuzzification module, a rule base, an inference machine, and a defuzzification module. Figure 5 shows the general architecture of a fuzzy controller. The design process of the fuzzy controller for forecasting returns to recycling can be described by the following steps:
• Determining input and output variables;
• Implementing the rule base;
• Choosing the inference strategy; and
• Calculating the sharp output values.
A closer look at Figure 3 and Figure 4 shows that the shape of the curve representing the return amount (the return curve) is similar to the shape of the sales curve. Analogously, three stages can be identified for the returns as for the sales. Since the membership functions of the input variables and the rule base depend on the stage of the return curve, three differently parameterized fuzzy controllers have to be created in this case.
Determining Input and Output Variables
Input variables represent the impact factors influencing the amount of returns, which is the output variable in our model. A prerequisite for a fuzzy-based forecasting of returns is the fuzzification of input and output variables. Therefore, properties like domains, granularity, and curve type, and attributes like "large" and "small" with their membership functions, have to be specified for each variable.
Depending on the type of a variable, two kinds of fuzzification can be distinguished (Bothe, 1997):
• If the type of a variable is a physical dimension (e.g., sales figures in pieces), fuzzification is done by defining attributes and their membership functions. Each attribute represents a fuzzy set, and the variable contains a membership function for each attribute.
• Impact factors like the incentive system cannot be scored with numbers. They are therefore modeled as linguistic variables arranged on a scale like "poor," "medium," "good." Based on such a scale, attributes and membership functions are defined analogously to the linguistic scale.
Reasoning with Fuzzy Rules
Fuzzy rules, representing the knowledge base, determine how fuzzy input values are combined mathematically to produce fuzzy output values, i.e., input and output variables are connected through fuzzy inference. These rules are built up as "IF ... THEN" predicates consisting of a condition (IF ...) and an inference term (THEN ...). An example of such a rule is: "IF assessment is good THEN return is much." Here, "assessment" is a fuzzy input variable, "return" is a fuzzy output variable, and "good" and "much" are linguistic terms describing their attributes. In this case, such fuzzy rules are implementations of expert knowledge. If the output value of a variable is determined by only one rule, one speaks of a simple fulfillment of the rule, otherwise of a multiple fulfillment. Expert knowledge for forecasting the return of scrapped products (in this case, photocopiers) is implemented in a set of rules, which forms a rule base. For fuzzy reasoning, the rule base is structured as follows:
Rule A1: IF (condition A AND condition B AND ...) THEN conclusion 1, conclusion 2
OR
Rule A2: IF (condition C AND condition D AND ...) THEN conclusion 3, conclusion 4
OR ...
All rules of the rule base are connected by OR, which is equivalent to the maximum operator in fuzzy logic. The condition clause of a rule begins with IF, and its terms are connected by AND, which is equivalent to the minimum operator. The rule base has to be modeled in such a way that all combinations of input variables are taken into consideration.
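To make the fuzzification and the min/max rule scheme concrete, here is a small illustrative sketch (not taken from the chapter); the attribute names mirror the example, but the triangular shapes and break-points are invented.

def triangular(a, m, c):
    # Triangular membership function with feet a, c and peak m
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (m - a) if x <= m else (c - x) / (c - m)
    return mu

# Hypothetical fuzzy sets for the input variable "Sales" (sold copiers)
sales_sets = {
    "little": triangular(0, 2000, 9000),
    "medium": triangular(2000, 9000, 16000),
    "much": triangular(9000, 16000, 23000),
}

def fuzzify(value, sets):
    # Degree of membership of a sharp value in each fuzzy set
    return {name: mu(value) for name, mu in sets.items()}

def fire_rules(rules, memberships):
    # AND -> minimum over the condition memberships of a rule;
    # OR -> maximum over all rules concluding on the same output attribute
    out = {}
    for conditions, conclusion in rules:
        strength = min(memberships[var][attr] for var, attr in conditions)
        out[conclusion] = max(out.get(conclusion, 0.0), strength)
    return out

memberships = {"Sales": fuzzify(4500, sales_sets)}
rules = [([("Sales", "little")], "Return = little"),
         ([("Sales", "medium")], "Return = medium")]
print(fire_rules(rules, memberships))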
Table 1: Impact factors and knowledge about them

Influencing Factor | Knowledge
Sales figures | Over a period of 11 years (1986-1997), 107,509 photocopiers have been sold.
Life cycle phase | Three phases (see Figure 1): Phase 1 with steeply rising sales; Phase 2 with stagnation on a high level; Phase 3 with tapering off over a longer period.
Usage intensity | Photocopiers are designed for 3,000 pages per month; average usage is 5,500 pages per month; photocopiers are worked at full capacity by 90% of all users.
Life expectancy | Minimum life expectancy is 200,000 pages; characteristic life expectancy is 425,000 pages; few early failures only; defective products are sorted out by the supplier; failures by accident are extremely rare; failures by wear and tear occur only after the minimum life expectancy.
Incentive system | Three-stage incentive system; caused by ignorance, damage, export, or convenience, a certain amount of photocopiers does not return.
Sharp Output Values Through Defuzzification
The results of fuzzy reasoning, i.e., the evaluation of the rule base, are returned as fuzzy values and have to be defuzzified for practical use. Analogous to input variables, properties like domain, granularity, defuzzification method, curve type, and attributes have to be specified for output variables. From a more detailed perspective, defuzzification means that the evaluation of the rule base returns a fuzzy value for each attribute of an output variable, which then has to be transformed into a single sharp output value. The most frequently applied method for this is center-of-area (Tuma, 1994; Zimmermann, 1991; Altrock and Zimmermann, 1991). In our example, the sharp output value represents the predicted amount of returned photocopiers for one planning period.
Application of Fuzzy Reasoning for the Forecasting of Photocopiers
Based on the fuzzy reasoning introduced above and the impact factors, it will now be shown how a model for forecasting the return of scrapped products (in our example, photocopiers) can be developed. For the validation of the model, life cycle data beginning in 1991 are available. Furthermore, expert knowledge regarding the impact factors was provided by the producer. Using the example of one photocopier type, the impact factors (sales figures, life cycle phase, usage intensity, life expectancy, and incentive system) will be parameterized and introduced into the fuzzy reasoning model.
Initial Situation
For this research study, the sales figures of 35 different types of photocopiers and their average numbers of photocopies over their whole life cycles were available. Furthermore, we obtained data about life expectancy and information about the incentive system. Table 1 gives an overview of the impact factors and the knowledge about them.
Fuzzification of Input and Output Variables
Based on the expert knowledge given in Table 1, a fuzzy input variable is defined for each impact factor. Figure 6 shows the input variables, their attributes, the membership functions, and the results of the fuzzification. All attributes and membership functions were calibrated and validated by the simulation model described above.
Figure 6: Results of fuzzification for our example, showing membership functions and fuzzified values for the input variables "Sales" (sold copiers), "Phase" (life cycle phase, in years), "Usage" (pages/month), "Life expectancy" (pages), and "Incentive" (%)
Building a Rule Base and Reasoning
The conditions of the rule base are connected through AND, and the rules of a block are connected through OR. Table 2 illustrates the fuzzy inference with the assignments derived from the rule base: if the fuzzification values given in Figure 6 are assigned to the rule base, the inferences given in Table 2 can be calculated. The main task of the rule base is to determine how the fuzzy variables have to be connected to each other, i.e., whether a minimum or a maximum operator has to be applied. For our case study, experiments have shown that minimum and maximum operators are most suitable; however, this might be different in other cases.
Table 2: Fuzzy inference
Rule Block A: Rule A1 MIN(0.2, 0.25, 0.25) = 0.2 OR Rule A2 MIN(0.2, 0.25, 0.79) = 0.2 OR Rule A3 MIN(0.2, 0.75, 0.25) = 0.2 OR Rule A4 MIN(0.2, 0.75, 0.79) = 0.2 OR Rule A5 MIN(0.8, 0.25, 0.25) = 0.25 OR Rule A6 MIN(0.8, 0.25, 0.79) = 0.25 OR Rule A7 MIN(0.8, 0.75, 0.25) = 0.25 OR Rule A8 MIN(0.8, 0.75, 0.79) = 0.75
Rule Block B: Rule B1 MIN(1, 0, 0.25) = 0 OR Rule B2 MIN(1, 0, 0.79) = 0 OR Rule B3 MIN(1, 0.8, 0.25) = 0.25 OR Rule B4 MIN(1, 0.8, 0.79) = 0.79
Rule Block C: Rule C1 MIN(0.25, 1, 0.4) = 0.25 OR Rule C2 MIN(0.25, 1, 0.8) = 0.25 OR Rule C3 MIN(0.75, 1, 0.4) = 0.4 OR Rule C4 MIN(0.75, 1, 0.8) = 0.75
Determining Sharp Output Values
A sharp value for the output variable "return" has to be calculated through defuzzification. Several defuzzification methods with different characteristics are available; the method to be applied therefore has to be selected by criteria depending on the concrete situation. The most frequently applied methods are the center-of-area and the maximum-height methods. For our purposes, the center-of-area method is suitable. First, all membership functions are reduced to the membership value calculated by fuzzy inference for each attribute of the output variable, as described above. Beneath the membership functions, a surface limited by their edges is marked off, with the upper bound of the surface determined by the membership value. The center of this surface, which yields the sharp output value, is calculated with the following formula:
y = (Σ_{j=1}^{n} h_j · y_j) / (Σ_{j=1}^{n} h_j)
where h_j is the grade of the membership function of the output obtained by rule j, y_j is the center of the area of the corresponding output membership function, and n is the number of active rules for the output. In our example, the sharp output value for the forecast of photocopier returns for one period is 3,824 pieces. Figure 7 shows the output variable with the sharp output value. A key technique in the presented method is to adjust the shape and location of the output membership functions to minimize errors.
Figure 7: Defuzzification with the center-of-area method, for the output variable "Return" in [copiers]; the sharp output value of 3,824 lies between the attribute centers at 1,500 and 12,000
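For illustration (the chapter gives only the formula), the discrete center-of-area computation can be sketched as below; the grades and attribute centers are invented numbers, not the case-study values.

def center_of_area(grades, centers):
    # y = sum(h_j * y_j) / sum(h_j) over the n active rules
    total = sum(grades)
    if total == 0.0:
        raise ValueError("no active rules")
    return sum(h * y for h, y in zip(grades, centers)) / total

# Hypothetical inferred grades for the output attributes of "Return"
# (e.g., little/medium/much) and the centers of their membership areas:
print(center_of_area([0.2, 0.75, 0.4], [1500.0, 4000.0, 9000.0]))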
CONCLUSIONS
Although all methods in this chapter are illustrated by a case study, a general approach for forecasting returns of scrapped products can be identified. The steps for the development of a forecasting model are the following:
1. Identify the impact factors.
2. For each impact factor, find a formal description of its behavior over time (these become the sub-models of the simulation model).
3. Combine the sub-models, execute the simulation, and obtain a model for the returns.
4. Create a fuzzy controller (as shown in Figure 5) for each phase of the return model.
5. Parameterize the membership functions and the rule base applying the simulation results.
6. Calculate the amount of returns using the inference mechanism of the fuzzy controllers.
It was shown that the combination of simulation and fuzzy reasoning can be applied successfully to forecasting the returns of scrapped products to be recycled. The simulation model presented here considers the impact factors PLC (represented by a Weibull distribution of sales figures), product lifetime, and intensity and frequency of usage (modeled with a logarithmic normal distribution). These impact factors on the return of scrapped products are the base data for the simulation model. From the simulated frequency distribution, the demand for recycling of scrapped products can be predicted. Applying upper and lower bounds for the random variables, recycling resources can be made available conservatively; these bounds can be computed empirically through repeated simulation. In addition, the simulation model serves to calibrate the fuzzy controller. To this end, the return characteristics identified by the simulation model, life cycle data, and expert knowledge have been used to build a model consisting of fuzzy variables and a fuzzy rule base. The impact factors are implemented as input variables and the return of scrapped products as the output variable. The rule base
determines how the input variables have to be connected by fuzzy operations. Further research has to be done to validate the model with other case studies and to extend it to multi-period forecasting.
REFERENCES
Altrock, C., and Zimmermann, H. (1991). Wissensbasierte Systeme und Fuzzy Control. In RWTH-Themen (01/1991), pp. 86-92.
Bothe, H. (1997). Neuro-Fuzzy-Methoden – Einführung in Theorie und Anwendungen. Berlin: Springer.
DGQ 11-04 (1995). Begriffe zum Qualitätsmanagement. Berlin: DGQ-Verlag.
DGQ 17-26 (1995). Das Lebensdauernetz. Berlin: DGQ-Verlag.
DGQ Z.42-25 (1990). Gesamtlehrgang. Berlin: DGQ-Verlag.
Guide, Jr., V. D. R. (1999). Remanufacturing production planning and control: U.S. industry practice and research issues. In Flapper, S. D. P., and de Ron, A. J. (Eds.), Proceedings of the Second International Working Seminar on Re-Use. Eindhoven, pp. 101-114.
Kotler, P. (1995). Marketing-Management. Stuttgart: Schäffer-Poeschel-Verlag.
Kühlmeyer, M. (1996). Qualitätsmanagementmethoden Teil 4. Augsburg: WEKA Fachverlag für technische Führungskräfte.
Mayer, A. et al. (1993). Fuzzy Logic - Einführung und Leitfaden. Bonn: Addison-Wesley.
Rautenstrauch, C. (1997). Fachkonzept für ein integriertes Produktions-, Recyclingplanungs- und Steuerungssystem (PRPS-System). Berlin: de Gruyter.
Reichelt, C. (1978). Rechnerische Ermittlung der Kenngrößen der Weibullverteilung. VDI-Fortschrittbericht Nr. 56. Düsseldorf: VDI-Verlag.
Sachs, L. (1984). Angewandte Statistik. Berlin: Springer.
Strebel, H., and Hildebrandt, T. (1989). Produktlebenszyklus und Rückstandszyklen. In ZfO (2/1989), pp. 101-106.
Tuma, A. (1994). Entwicklung emissionsorientierter Methoden zur Abstimmung von Stoff- und Energieströmen auf der Basis von fuzzifizierten Expertensystemen, Neuronalen Netzen und Neuro-Fuzzy-Ansätzen. Frankfurt/Main: Lang.
Wilrich, P.-Th. (1987). Formeln und Tabellen der angewandten mathematischen Statistik. Berlin: Springer.
Zacks, S. (1992). Introduction to Reliability Analysis. New York: Springer.
Zimmermann, H. (1991). Fuzzy Set Theory and its Applications. Boston, MA: Kluwer Academic.
Chapter 9
Newshound Revisited: The Intelligent Agent That Retrieves News Postings
Jeffrey L. Goldberg
Analytic Services Inc. (ANSER), USA
Shijun S. Shen
Tygart Technology, Inc., USA
INTRODUCTION
A great deal of research has been done in the area of intelligent Internet agents. In this chapter, we report our experience in implementing such an agent. It is called Newshound, and it can be trained to recognize a desired topic and then scan Usenet newsgroups looking for new examples of that topic. Recently, Newshound has been in use by law enforcement personnel, and in response to their feedback, we have extended its capabilities. We also introduce two additional intelligent agents: Chathound and Webhound. Finally, we describe the inter-agent communication layer, the facilitator for cooperation between ANSER's intelligent agents.
Organization
This chapter is organized into eight sections: (1) Introduction; (2) Newshound, which describes the requirements and implementation of an intelligent Internet agent; (3) Chathound and Webhound, which briefly describes two additional agents; (4) Inter-Agent Communication Layer, which outlines the common database supporting inter-agent communication; (5) Future Work; (6) Conclusions; (7) Acknowledgments; and (8) References.
Previously Published in Managing Information Technology in a Global Economy edited by Mehdi Khosrow-Pour, Copyright © 2001, Idea Group Publishing.
NEWSHOUND
Newshound Requirements
The original purpose of Newshound is to look for specified, trainable content in Usenet newsgroups. By specified and trainable, we mean that given a set of example postings (positive and negative), it must be able to learn a classifier function and find new postings that are "like" the positive examples. Newshound is to operate as an intelligent agent: it must allow a human agent to specify the parameters of operation, including the news server, the newsgroups in which to look, and the categories to be on the lookout for. Once the parameters of operation have been selected, the intelligent agent must operate autonomously, requiring interaction only when the user wants to check the results of what has been matched so far or to change the parameters of operation. Once a Newshound agent has found postings that match its categories, the human agent indicates which of the results are correct and which are not. This last requirement is called user feedback and retraining, and it allows the originally learned text categorizers to be refined and personalized.
Newshound Implementation
Newshound is an intelligent Internet agent that recognizes postings of interest to a human user from Usenet newsgroups. It uses text categorization technology (Goldberg, 1996a) to train a classifier function for each desired category based on a set of examples. The classifiers, or text categorizers, are then used to recognize documents (Usenet postings) that are like the positive examples. Newshound has been employed in a pilot program with an organization of the federal government and is being operationally tested by Special Agents.
Newshound Architecture
Newshound is a complex and dynamic software system. As shown in Figure 1, it makes a connection from the computer on which the Newshound client is running to a news server, examines the articles one by one, and compares them to its text categorizers. If an article matches, Newshound takes a snapshot of the posting and stores it in a database. Newshound also makes a connection to a database server (not shown) on the local area network (LAN). In the upper right-hand corner of Figure 1, the offline learning component is shown. This component produces the classifier functions, or text categorizers. For each desired category, the input to the learning algorithm is a set of pre-labeled training documents, and the output is a text categorizer. Currently, the algorithms applied for text categorization are batch algorithms, i.e., the categorizers are learned offline, prior to the performance part of the system when the categorizers are used
Figure 1: Newshound Architecture
to recognize new documents. These algorithms include Naïve Bayes (Lewis, 1998), Support Vector Machines (SVM) (Cristianini and Shawe-Taylor, 1999), and the Category Discrimination Method (CDM), which was developed at ANSER (Goldberg, 1996b).
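Purely as an illustrative sketch (not ANSER's implementation), a minimal Naïve Bayes text categorizer of the kind mentioned above could be trained offline as follows; the tokenization, Laplace smoothing, and helper names are our own assumptions.

import math
from collections import Counter

def train_naive_bayes(docs, labels):
    # Learn log-probabilities for a binary text categorizer from
    # pre-labeled training documents (batch, offline learning)
    counts = {True: Counter(), False: Counter()}
    priors = Counter(labels)
    for text, label in zip(docs, labels):
        counts[label].update(text.lower().split())
    vocab = set(counts[True]) | set(counts[False])
    model = {}
    for label in (True, False):
        total = sum(counts[label].values())
        model[label] = {w: math.log((counts[label][w] + 1) / (total + len(vocab)))
                        for w in vocab}
        model[label]["__prior__"] = math.log(priors[label] / len(labels))
        model[label]["__unseen__"] = math.log(1 / (total + len(vocab)))
    return model

def classify(model, text):
    # True if the posting looks more like the positive examples
    scores = {}
    for label in (True, False):
        m = model[label]
        scores[label] = m["__prior__"] + sum(
            m.get(w, m["__unseen__"]) for w in text.lower().split())
    return scores[True] > scores[False]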
Newshound's Interface: The Tail That Wags the Dog
Because intelligent agents are like personal assistants, the user interface is sometimes personified. The graphical depiction of an intelligent agent has been shown to have an important influence on the human user's perception of the agent's friendliness, likeability, cooperativeness, and usefulness. It has been suggested that an intelligent agent's graphical depiction should help the user understand the agent's capacities, limitations, and way of operating. Newshound's personification is shown below in Figure 2.
Figure 2: Newshound's Personification
Figure 3: Control Agents Window
Figure 4: Create Agent Interface
Newshound's graphical user interface (GUI) consists of the following windows: the Control Agents Interface, the Configure New Agent Interface, the Select Newsgroups Interface, the View Agent Results Interface, and the View Posting window. In Figure 3, Newshound's Control Agents Window is shown. It allows the user to control all the agents that have been created. Newshound has a client-server architecture, and the various agents may be running on separate machines, but the results are visible from any machine connected to the LAN on which the Newshound client has been installed. A user can start, stop, delete, and view the results of an existing agent from the Control Agents Interface. A new agent can easily be created through the Create Agent Interface shown in Figure 4. Configuring a new agent consists of giving the agent a name, selecting a news server, and selecting the categories the user wants the agent to look for. Once the agent is configured, step 1 is completed and the user goes on to step 2 (Figure 5), which allows the user to specify a set of Usenet newsgroups. When done, selecting the Finish button creates the new agent. The user is then returned to the Control Agents Interface window, shown in Figure 3, and the agent can now be started. Once the agent has run for a while, it will report postings that match the category. By selecting the agent and the View Results button, the user gets the listing of matching news postings, as shown in Figure 6, the View Agent Results Interface.
Figure 5: Select Newsgroups
Figure 6: View Agent Results and View Posting
The View Agent Results window contains a list of subject lines, plus a few other fields, of the postings that matched the category. A user can display a posting itself by selecting it. The posting is displayed in the View Article Interface, shown in the far right-hand window in Figure 6. The entire posting is displayed, plus its context, including the time and date, the name of the human agent running Newshound, the name of the machine running Newshound, and any attached MIME types, including any images. The attachments are displayed in the third window; they can be fully displayed by pulling up the window border at the top of the third pane of the interface. An important feature of the View Article Interface is the set of radio buttons at the bottom of the window. These are for user feedback: the user can indicate whether matched documents are true positives (the categorizer says it's a match, and the human agrees) or false positives (the categorizer says yes, but the human disagrees). The postings for which the user provides feedback are then used for retraining and refinement of the text categorizers. The learning algorithm can also learn much faster from the postings the user gives feedback on than from the original training examples, because they lie closer to the border between positive and negative in the feature space of examples than arbitrary postings do. Since the job of the learning algorithm is to locate that border
between positive and negative examples within the feature space used to represent the examples, the algorithm can learn a good classifier function much faster by choosing examples close to the border (Lewis, 1994).
Feedback From Law Enforcement
At the time of publication of the original Newshound paper (Goldberg, 2001), Newshound had just been delivered to end-users in the law enforcement community. We have added this section to describe the additional requirements and their implementation.
Additional Requirements and Implementation
There are two main approaches to monitoring the Internet via intelligent agents: autonomous agents and user-directed search. In the case of autonomous agents, our original approach to Internet monitoring, machine learning is first used to train an agent; the agent then goes onto the Internet looking for that content and informs the user of whatever it finds. Alternatively, in the user-directed search approach, the user enters a "query" based on a Boolean combination of keywords, plus some additional constraints. For example, using the User Query Interface shown in Figure 7, the user enters the keywords "Windows and 2000" and selects "executables" for MIME attachments, which creates a query agent to find postings in the selected newsgroups containing the words "Windows" AND "2000" with attached executables.
Figure 7: The User Query addition
An unanticipated synergy emerged between the two kinds of searches: the autonomous agent approach is first used to look for content on a desired topic, and then a user-specified query is used to drill down for more information about a matched posting. A scenario in use by law enforcement agents is called "From-Id tracking": a query is used to determine the prevalence of the perpetrator of a posting
that matches the classifier for "individuals trading in child pornographic materials." Using the keyword "[email protected]," the from-id is tracked across a large cross-section of Usenet, perhaps the entire Alt hierarchy. This determines the kind and number of posts made by the perpetrator and the newsgroups they were made to. Since there are currently more than 30,000 newsgroups in the Alt hierarchy alone, this is a query that could never be accomplished directly by a human agent. End-users from law enforcement felt it too cumbersome to review matched postings through the View Agent Results interface, as described above and shown in Figure 6. They felt their work would be enhanced if they could see an overview of a set of matched documents all at once; when viewing the summary, they could select the postings that warranted a more detailed review and view them using the View Agent Results interface. In Figure 8, the View Results Overview is shown. Each of a set of matched documents is depicted at once, including a miniature of any attached images. Depicting summaries of other forms of MIME attachments is a difficult problem, and we have designed Overview screens for sound and video in addition to images.
Figure 8: The View Results Overview addition
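As an illustration only (the chapter gives no code), the user-directed query described above could be represented as a simple predicate over postings; the field names and MIME type are assumptions.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Posting:
    # Assumed minimal shape of a Usenet posting record
    from_id: str
    newsgroup: str
    body: str
    attachment_types: List[str] = field(default_factory=list)

def matches_query(p: Posting, all_words: List[str],
                  required_mime: Optional[str] = None) -> bool:
    # Boolean AND of keywords, plus an optional MIME-attachment constraint
    text = p.body.lower()
    if not all(w.lower() in text for w in all_words):
        return False
    if required_mime and required_mime not in p.attachment_types:
        return False
    return True

# Example: the "Windows AND 2000 with attached executables" query
query = lambda p: matches_query(p, ["Windows", "2000"],
                                required_mime="application/octet-stream")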
CHATHOUND AND WEBHOUND: TWO AGENTS IN DEVELOPMENT
Chathound Requirements
The purpose of Chathound is to apply text categorization to public Internet Relay Chat (IRC) chatrooms and to provide other support for human agents who monitor IRC chatrooms. Additional requirements have been identified by observing the daily work practices of law enforcement agents who monitor IRC chatrooms. These include the ability to monitor a set of user handles across a set of IRC chatrooms, to notify the user whenever one of those handles is currently logged on, and to display a list of the available lookup information for all users currently logged into a chatroom who reside in the United States.
Once the parameters of operation have been selected, including the handles to monitor and the categories to look for, the intelligent agent can operate autonomously, requiring interaction only when the user wants to check the results of what has been matched or to change the parameters of operation.
Chathound Implementation
Chathound is an early prototype of an intelligent agent that monitors chatrooms. It makes a connection to one or more IRC chatrooms, pulls down windows of text, learns to recognize desired content, and stores snapshots of text windows in a database. The current implementation is shown in Figures 9 and 10. Figure 9 shows Chathound after it has been initialized, made a connection with a prespecified chatroom, and made a connection to its local database. Figure 10 shows Chathound after it has joined a chatroom, along with the window of text currently available in the chatroom. What still needs to be added to meet the requirements for user-handle monitoring is a GUI that allows the user to select a user handle appearing in the current IRC text window and add it to the list of handles to monitor. Then, whenever that handle appears in the current text window of a monitored chatroom, a graphical notification will be given, such as the handle in a blinking text label at the top of the user's desktop.
Figures 9 and 10: Chathound making connections (Figure 9) and displaying windows of text from selected IRC Chatrooms (Figure 10)
Webhound Requirements
The goal of Webhound is to perform heuristic search on the Web for specified, trainable content in Web pages. Since Web pages are multi-structured units of information, the specified trainable content must take this multi-part structure into account. The top level of the classifier function will be the coefficients of a weighted average across several units of information, including the links on the page, the URL (the location of the page), windows of text on the page, images on the page, and XML and HTML tags on the page. Once the heuristic function is trained, Webhound allows a human agent to specify the parameters of operation, including starting pages and the categories to look for. After the parameters of operation have been selected, the intelligent agent can operate autonomously, requiring interaction only when the user wants to check the results of what has been matched so far or to change the parameters of operation.
Webhound Implementation
At the time of this writing, Webhound is still in the design phase. Its requirements and our implementation plan have been described in other ANSER publications (Iseman, Goldberg and Davis, 2001).
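For illustration only (Webhound was still in design, so this is not its implementation), a top-level score that weights per-part classifier outputs could look like the following sketch; the part names and weights are assumptions.

# Hypothetical per-part scores in [0, 1], produced by separate classifiers
# for each structural unit of a Web page and combined by a weighted average.
PART_WEIGHTS = {
    "links": 0.15, "url": 0.10, "text": 0.45, "images": 0.20, "tags": 0.10,
}

def page_score(part_scores: dict) -> float:
    # Weighted average across the page's structural units; parts with no
    # score (e.g., a page without images) are skipped and the remaining
    # weights are renormalized.
    used = {p: w for p, w in PART_WEIGHTS.items() if p in part_scores}
    total = sum(used.values())
    return sum(part_scores[p] * w for p, w in used.items()) / total

# Example: a page whose text strongly matches the trained topic
print(page_score({"text": 0.9, "url": 0.3, "links": 0.5}))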
INTER-AGENT COMMUNICATION LAYER
In the section on Newshound, we mentioned that its basic operation involves two phases. First, there is a training phase, done a priori and offline, in which a classifier function is learned for each desired category. Second, in the performance phase, new documents are compared to the classifiers learned in the first phase; whenever a document matches, a snapshot of it is stored in a database. We now briefly describe the requirements of the database and how it facilitates the strategy of cooperating intelligent agents collectively accomplishing goals. First, the database must store prosecutable snapshots of the postings matched by a Newshound agent. Later, the postings can be reviewed by human agents, in either summary or detailed fashion. Users require the capability to extract, delete, and/or print individual records. There is also a requirement that records thereby extracted be as indistinguishable as possible from leads obtained by existing manual methods. Second, to facilitate cooperation, the database layer must be common across all ANSER intelligent agents. At the application level, the database schema must be shared, and all classes that interact with it must be common across applications (e.g., Newshound, Chathound, and Webhound). At the protocol level, the agents
can communicate indirectly via common tables, with one agent producing leads and another consuming them.
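The chapter does not give the schema; purely as an assumption-laden sketch, a shared lead table enabling this producer/consumer pattern might look like the following.

import sqlite3

# Hypothetical shared schema (table and column names are assumptions)
# allowing one agent to produce leads and another to consume them.
SCHEMA = """
CREATE TABLE IF NOT EXISTS leads (
    lead_id     INTEGER PRIMARY KEY,
    producer    TEXT NOT NULL,      -- e.g., 'newshound', 'chathound'
    category    TEXT NOT NULL,      -- matched topic name
    snapshot    BLOB NOT NULL,      -- prosecutable snapshot of the posting
    matched_at  TEXT NOT NULL,
    consumed_by TEXT                -- NULL until another agent claims it
);
"""

def produce_lead(conn, producer, category, snapshot, matched_at):
    conn.execute("INSERT INTO leads (producer, category, snapshot, matched_at)"
                 " VALUES (?, ?, ?, ?)",
                 (producer, category, snapshot, matched_at))

def consume_lead(conn, consumer):
    # Claim the oldest unconsumed lead for another agent to drill into
    row = conn.execute("SELECT lead_id FROM leads WHERE consumed_by IS NULL "
                       "ORDER BY lead_id LIMIT 1").fetchone()
    if row:
        conn.execute("UPDATE leads SET consumed_by = ? WHERE lead_id = ?",
                     (consumer, row[0]))
    return row

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)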
FUTURE WORK Newshound would benefit from integration with several other intelligent Internet agents currently under development at ANSER. With Newshound, Chathound, and Webhound all working in conjunction, each storing the information it finds on the same topic, a more comprehensive index of information about that topic could be accumulated.
CONCLUSIONS Intelligent Internet agents are complex systems. They involve the integration of many technologies including artificial intelligence and Internet agents. The intelligent Internet agents we have developed at ANSER, particularly Newshound, have had some successes, but they are currently far from perfect. Newshound, the most developed agent, is still only a functional prototype. It requires more development to add desired robustness, functionality, and user friendliness.
ACKNOWLEDGMENTS This research has been partially supported by the National Institute of Justice, Grant numbers 97-LB-VX-K025 and 98-LB-VX-K021, as part of the work on advanced face recognition and intelligent software agents.
REFERENCES
Cristianini, N. and Shawe-Taylor, J. (1999). An Introduction to Support Vector Machines (and Other Kernel-Based Learning Methods). Cambridge University Press.
Goldberg, J. L. (1996a). The CDM Learning Algorithm: An Approach to Learning for Text Categorization. PhD thesis, Texas A&M University, August.
Goldberg, J. L. (1996b). CDM: An approach to learning in text categorization. International Journal on Artificial Intelligence Tools, 5(1 & 2), pp. 229-253, July.
Goldberg, J. L. (2001). Newshound: An intelligent agent that retrieves news postings. In Proceedings of IRMA'2001, the 13th IRMA International Conference on Managing Information Technology in a Global Economy, pp. 104-106, May.
Iseman, J., Goldberg, J. L., and Davis, G. P. (2001). Biannual Report to the National Institute of Justice Office of Science and Technology, Intelligent Internet Agents for the New Millennium, by Analytic Services Inc., Arlington, VA, January.
Lewis, D. D. (1994). A sequential algorithm for training classifiers. In Proceedings of SIGIR'94, the 17th ACM International Conference on Research and Development in Information Retrieval, pp. 3-12, July.
Lewis, D. D. (1998). Naive (Bayes) at forty: The independence assumption in information retrieval. In European Conference on Machine Learning.
Chapter 10
Investigation into Factors That Influence the Use of the Web in Knowledge-Intensive Environments
Yong Jin Kim
SUNY at Buffalo, USA
H. Raghav Rao
SUNY at Buffalo, USA
Abhijit Chaudhury
Bryant College, USA
This chapter develops a set of hypotheses regarding the relationship between the TAM (Technology Acceptance Model) constructs and external variables such as individual differences, organizational factors, and risk factors. It uses TAM as a basis to hypothesize the effects of each external variable on the use of the Web as a knowledge-transfer tool in the university context. The sample of this study will be professors at a university. The contributions of this chapter are twofold. First, this study may give insight into the question of when and for whom new learning technologies are eagerly adopted. Second, this chapter is the first to use the technology acceptance model in the context of knowledge-management systems.
Previously Published in Managing Information Technology in a Global Economy edited by Mehdi Khosrow-Pour, Copyright © 2001, Idea Group Publishing.

INTRODUCTION
With the proliferation of the Internet, professors develop their own websites to communicate with students and colleagues. Typical examples of such sites are
Silva Rhetoricæ, developed by Dr. Burton of Brigham Young University (http://humanities.byu.edu/rhetoric/silva.htm), and American History 102: Civil War to Present, developed by Dr. Schultz (http://us.history.wisc.edu/hist102/). Dr. Burton employs the metaphor of a forest, trees, and flowers to guide users to classical and Renaissance rhetoric. Dr. Schultz incorporates comprehensive lists of information related to the history class. These websites primarily serve as teaching aids for students, although some are also helpful to researchers. Recently, learning over the Web has become an issue that all universities are concerned with, and Web-based training is likewise a major issue in companies' employee training. These phenomena can be understood in the context of the university's role and knowledge management. The main role of the university lies in research and teaching, and among knowledge management processes these roles correspond to knowledge creation and knowledge transfer, respectively. Some researchers (Grant, 1996; O'Dell and Grayson, 1998) contend that knowledge transfer and integration are fundamental to an organization's ability to create and sustain competitive advantage. The Internet provides an online environment and an interactive channel between teachers and students, which makes the Web the main alternative to paper materials. The degree of acceptance of the Web as a teaching tool, however, varies from professor to professor, even within the same university environment. What are the factors that influence the adoption of the Web as a teaching tool, and how are these factors related to each other? Knowing these factors will help to provide the right sort of environment in which professors are inclined to use the new tool. The question we focus on in this chapter is: how can we predict the extent of the Web support developed by individual instructors? Despite the importance of studying the factors leading to Web usage as a knowledge-transfer tool, very few papers have studied the technology acceptance model with individual/demographic and organizational factors in the context of knowledge management. The contributions of this chapter are twofold. First, this study may give insight into the question of when and for whom new learning technologies are eagerly adopted. Second, this chapter is the first to use the technology acceptance model in the context of knowledge-management systems; its results are of interest to researchers in two fields, knowledge-management systems and technology acceptance. The chapter is organized as follows: in the next section, we describe the general concept of the Technology Acceptance Model (TAM) as the theoretical background, together with individual and organizational factors. Then, we propose the research model of this chapter, including TAM and the external factors that influence the beliefs (perceived usefulness and ease of use). In the final section, we discuss future work and summarize the research model.
CONCEPTUAL BACKGROUND
External Variables in TAM
One of the most influential models of technology acceptance is TAM, proposed by Davis (1986). Its antecedent is the theory of reasoned action (TRA) of Ajzen and Fishbein (1980), which in turn originated in psychology (Taylor and Todd, 1995; Venkatesh and Davis, 1996; Chau, 1996). TAM explains the relationship among beliefs, attitudes, behavioral intentions, and system usage. According to Davis (1986), perceived usefulness and ease of use represent the beliefs that affect attitude toward use, eventually leading to system usage, as shown in Figure 1. Perceived usefulness is the degree to which a person believes that using a particular information technology enhances his or her job performance, whereas perceived ease of use refers to the degree to which a person believes that using a particular information technology would be free of effort (Davis, 1986). Attitude towards use is the user's evaluation of the desirability of using a particular information technology, and behavioral intention is a measure of the likelihood that a person will adopt the technology (Ajzen and Fishbein, 1980). In TAM, the dependent variable is actual usage, which is usually a self-reported measure of time or frequency of using the technology. The external factors, or "external variables" (Fishbein and Ajzen, 1975), such as system design characteristics, user characteristics, task characteristics, the nature of the development or implementation process, political influences, organizational structures, and so on, are clearly defined in TRA, while TAM focuses on internal psychological variables such as beliefs and attitudes towards use (Davis, Bagozzi, and Warshaw, 1989). By excluding any direct effect of the external variables on behavioral intention, TAM assumes that the internal variables such as beliefs and attitudes fully mediate the effects that all other external variables may have on the system usage of individuals. However, many researchers (e.g., Fichman, 1995; Igbaria et al., 1997; Keegan et al., 1992; Moore and Benbasat, 1991; Venkatesh, 1999) have challenged this basic assumption of complete mediation by the TAM constructs.
Figure 1: The TAM. Perceived usefulness and perceived ease of use shape the attitude towards the Web, which leads to the intention to use the Web
The Davis’ TAM has been modified in several ways. A concern in using the model has been about how far the reported intention matches closely with actual technology adoption. Agarwal and Prasad (1997) measure technology adoption by intentions towards future use and actual figures for current use. Another concern has been the limitation of having only two elements of beliefs or perceptions about technology. Moore and Benbasat (1991) studied 7 elements, which were motivated by diffusion theory literature. The third direction in which researchers have tried to modify the TAM model has been by including Lewin’s stimuli elements such as individual factors and external factors (Keegan et al. 1992). The individual factors and the external influences are in the nature of stimuli that are processed by the human mind and result in some observable behavior. Use of organizational variables such as economies of scale in learning, diversity of existing knowledge base, and others has been studied by Fichman (1995). The organizational elements were motivated by organizational learning literature. The diffusion literature has a rich history of studying social factors that influence the diffusion rate. Study of individual factors on technology acceptance also has a rich history in MIS research. Broadly, they have been grouped as personality, situational, demographic and cognitive (Zmud, 1979). Individual Differences Individual differences play a crucial role in the implementation of any technological innovation (Agarwal and Prasad, 1999). Particularly in the information systems domain, a relationship between individual differences and information systems success has been theoretically posited and empirically tested in a large body of prior research (e.g., Garrity and Sanders, 1998; Harrison and Rainer, 1992; Zmud, 1979). Among the research focusing on the individual differences, Harrison and Rainer (1992) investigate the relationship between individual differences and the outcome variable of computer skill in the context of end-user computing. In their research, individual differences included gender, age, experience, and personality. Fishbein and Ajzen (1975) posit that belief formation is essentially a learning process, and therefore understanding the learning process is critical to understanding the behavior of attitudes. Cronbach and Snow (1977) define the role of individual differences on the learning process. They assert that variables representing individual differences, such as knowledge, skills, and previous experiences, determine what an individual learns in a given situation. Likewise, in social learning theory, individual differences are also expected to influence learning through observation and then belief formation (Bandura, 1977). In summary, the research mentioned above seeks to elucidate the relationship between individual differences and the usage of a new information technology
innovation by investigating possible intervening factors. In university environments, which allow professors autonomy and flexibility of structure, individual differences among instructors are expected to play a critical role in the acceptance and usage of a new technology innovation.
Organizational Factors
TRA assumes that behavior is under volitional control (Karahanna, Straub, and Chervany, 1999). There is broad agreement in the organizational change literature that there is a relationship between authority decisions and resistance to change (Argyris, 1976; Chatfield, 1990). The level of voluntariness in new technology use is a main organizational factor in the technology acceptance literature. “Voluntariness of use” is defined as “the degree to which use of the innovation is perceived as being voluntary or of free will” (Moore and Benbasat, 1991). This construct is related to the question of whether individuals are free to implement personal adoption or rejection decisions. Organizational supports such as budget, human resources, and time also play an important role in adopting a new technology. Chau (1996) finds that in a mandatory setting, where system usage can be measured by user satisfaction, implementation and transitional support influence perceived usefulness and perceived ease of use.
RESEARCH MODEL AND HYPOTHESES
As described in the previous section, the external variables influencing the acceptance of a new technology innovation include individual differences and organizational factors. We model an adapted technology acceptance model in Figure 2. In this model, we use perceived system usage, measured by the self-reported amount of time spent using the technology and the frequency of use.
Figure 2: Proposed Technology Acceptance Model (Individual Differences, Organizational Factors and Risk Factors → Perceived Usefulness and Perceived Ease of Use → Perceived Use of the Web as KMS)
Individual differences. According to Keegan et al. (1992), consumer response, such as purchase or further product study, is related to the interaction of
individual differences, such as demographics and cognition, and external influences, such as organizational incentives and promotional elements. In the information systems literature discussed above, there is considerable evidence that individual differences influence the usage of a new information technology innovation. Among the variables representing individual differences, knowledge, skills, and previous experiences (Cronbach and Snow, 1977; Agarwal and Prasad, 1999) are the most frequently addressed. In this chapter, we confine the individual difference variables to length of job tenure, previous experience including skills, and knowledge about new information technologies. Previous research asserts that older workers and workers with greater company tenure are most likely to resist new technologies (Kerr and Hiltz, 1988; Majchrzak and Cotton, 1988). Majchrzak and Cotton found in their study of new production technology that workers with less experience were more committed to the changes resulting from new technologies. Gattiker (1992) also found that age significantly affected skill acquisition and retention. These results indicate that age and tenure in the workforce may be negatively associated with perceived beliefs. Thus, this relationship can be hypothesized as follows:
H1a: The length of tenure in the academic field is negatively associated with perceived usefulness beliefs about the Web technology.
H1b: The length of tenure in the academic field is negatively associated with perceived ease of use beliefs about the Web technology.
Prior literature shows that there is a positive relationship between experience with information technology and its acceptance (Levin and Gordon, 1989; Harrison and Rainer, 1992). Through previous experiences with new technologies, end-users build up their own conceptual knowledge schemas about new technologies (Soloway et al., 1988). Such conceptual schemas then allow individuals to learn and acclimate themselves to a new technology readily (Gick and Holyoak, 1987). Accordingly, the relationship between prior experience and technology acceptance can be hypothesized as H1c and H1d.
H1c: The extent of prior experience with similar technologies is positively associated with perceived usefulness beliefs about the Web technology.
H1d: The extent of prior experience with similar technologies is positively associated with perceived ease of use beliefs about the Web technology.
Organizational factors. As addressed in the previous section, research based on TAM focuses on environments in which technology acceptance behaviors are largely volitional (Karahanna et al., 1999). Individuals in organizations, however, tend to resist a change when the change is made by authority decision (Argyris, 1976; Chatfield, 1990). Particularly in the university environ-
ment, as a surrogate for knowledge-intensive organizations, professors, as surrogates for experts, enjoy autonomy and flexibility of structure. Knowledge-intensive organizations do not resort to authority or formal controls; instead, they rely on autonomous small teams to meet performance standards (Starbuck, 1992). Experts in knowledge-intensive firms tend to resist new ideas imposed by top management. A study by Karahanna et al. (1999) reveals that the degree of perceived voluntariness of use influences not only attitudes toward usage but also the extent to which attitudes toward usage predict use. Thus, voluntariness may be hypothesized as H2a.
H2a: The less voluntary the behavior, the less one’s attitude toward usage predicts use.
Organizational supports such as budget, human resources, and time also play an important role in adopting a new technology (Kettinger, Teng and Guha, 1997; Guha, Grover, Kettinger and Teng, 1997; Grover, Fiedler and Teng, 1999). Chau (1996) finds that in a mandatory setting, where system usage can be measured by user satisfaction, implementation gap and transitional support influence perceived usefulness and perceived ease of use. Research also suggests that there are different levels of dimensionality in cognitive structures pertaining to computer systems among different classes of users and programmers (Vaske and Grantham, 1990). This implies that further investigation is needed to generalize the Chau model to the end-user environment. Hence, in this chapter, we hypothesize only about organizational support:
H2b: The more the organizational supports, the more one’s attitude toward usage predicts use.
Risk factors. Uncertainty is involved when a new information technology is adopted, leading to new work processes, new technical terminology, and potentially different communication structures (Burkhardt and Brass, 1990; Burkhardt, 1994). In the context of Web usage, there are two risk factors affecting perceived usefulness of the Web and perceived use of the Web as a Knowledge Management System (KMS): the job-related risk and the financial risk that professors may bear. Given that the Internet as a knowledge-transfer vehicle is a relatively new concept, there is bound to be much uncertainty regarding the value of the service. Thus, we hypothesize the relationship between risk factors and TAM as H3a and H3b:
H3a: The higher the risk associated with the Web as a teaching tool, the less one’s attitude toward usage predicts use.
H3b: The higher the risk associated with the Web as a teaching tool, the weaker one’s perceived usefulness beliefs.
CONCLUDING REMARKS AND FUTURE WORK
This study attempts to address several issues in knowledge management systems research: How should success be measured? Can existing measures of success be applied? How do instructors react to the Web as a teaching tool? It uses TAM as a theoretical foundation to hypothesize the effects of each external variable on the use of the Web as a knowledge-transfer tool in the university setting, but challenges the basic assumption of complete mediation by the two beliefs (perceived usefulness and perceived ease of use). The research model will be tested in the context of a knowledge transfer process in the university. The subjects of this study, professors in a university, will be given a questionnaire and asked to return it via email. It is expected that most respondents will have directly used the Web as a presentation tool or for personal use. Assessment of the research model will be conducted using Partial Least Squares (PLS), a powerful approach for analyzing models because of its minimal demands on measurement scales, sample size, and residual distributions (Wold, 1985). In addition, component-based PLS avoids two serious problems: inadmissible solutions and factor indeterminacy (Fornell and Bookstein, 1982). The contributions of this chapter are twofold. First, this study will help answer the question of who the eager users of new technologies for learning are, and when they adopt them. Second, this chapter is one of the first to use the technology acceptance model in the context of knowledge management systems.
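To make the mediation logic concrete, the following is a minimal illustrative sketch, not the authors' planned PLS analysis: it simulates data consistent with the hypothesized model and checks, with ordinary least squares, whether the external variables add explanatory power once the two beliefs are controlled for. The variable names, effect sizes, and the use of statsmodels are our own assumptions.

```python
# Sketch of a full-mediation check on simulated data (all coefficients invented).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200

tenure = rng.normal(size=n)                 # individual difference (H1a/H1b)
experience = rng.normal(size=n)             # prior experience (H1c/H1d)
# Beliefs driven by the external variables, plus noise.
usefulness = -0.3 * tenure + 0.5 * experience + rng.normal(scale=0.5, size=n)
ease = -0.2 * tenure + 0.4 * experience + rng.normal(scale=0.5, size=n)
# Usage driven only by the beliefs (full mediation, as classic TAM assumes).
usage = 0.6 * usefulness + 0.3 * ease + rng.normal(scale=0.5, size=n)

# If full mediation holds, tenure and experience should contribute little
# once the two beliefs are already in the usage regression.
X = sm.add_constant(np.column_stack([usefulness, ease, tenure, experience]))
print(sm.OLS(usage, X).fit().summary())
```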
REFERENCES
Ajzen, I., and Fishbein, M. (1980). Understanding Attitudes and Predicting Social Behavior, Englewood Cliffs, NJ: Prentice-Hall.
Argyris, C. (1976). Single-Loop and Double-Loop Models in Research on Decision Making, Administrative Science Quarterly, Vol. 21 (3), pp. 363-377.
Bandura, A. (1977). Social Learning Theory, Englewood Cliffs, NJ: Prentice-Hall.
Bartlett, C. (1996). McKinsey & Company: Managing Knowledge and Learning, Case 9-396-357, Boston, MA: Harvard Business School.
Burkhardt, M.E. (1994). Social Interaction Effects Following a Technological Change: A Longitudinal Investigation, Academy of Management Journal, Vol. 37 (4), pp. 869-898.
Burkhardt, M.E., and Brass, D.J. (1990). Changing Patterns or Patterns of Change: The Effects of a Change in Technology on Social Network Structure and Power, Administrative Science Quarterly, Vol. 35 (1), pp. 104-127.
Chatfield, A.T. (1990). A User Learning Based DSS Implementation Methodology, Texas Tech University.
Chau, P.Y.K. (1996). An Empirical Assessment of a Modified Technology Acceptance Model, Journal of Management Information Systems, Vol. 13 (2), pp. 185-204.
Davis, F.D. (1986). A Technology Acceptance Model for Empirically Testing New End-User Information Systems: Theory and Results, doctoral dissertation, Cambridge, MA: Sloan School of Management, MIT.
Davis, F.D., Bagozzi, R.P., and Warshaw, P.R. (1989). User Acceptance of Computer Technology: A Comparison of Two Theoretical Models, Management Science, Vol. 35 (8), pp. 982-1003.
Fishbein, M., and Ajzen, I. (1975). Belief, Attitude, Intention and Behavior: An Introduction to Theory and Research, Reading, MA: Addison-Wesley.
Fornell, C., and Bookstein, F. (1982). Two Structural Equation Models: LISREL and PLS Applied to Consumer Exit-Voice Theory, Journal of Marketing Research, Vol. 19, pp. 440-445.
Garrity, E.J., and Sanders, G.L. (1998). Information Systems Success Measurement, Hershey, PA: Idea Group Publishing.
Gattiker, U. (1992). Computer Skills Acquisition: A Review and Future Directions for Research, Journal of Management, Vol. 18 (3), pp. 547-574.
Gick, M.L., and Holyoak, K.J. (1987). The Cognitive Basis of Knowledge Transfer, in S.M. Cormier and J.D. Hagman (Eds.), Transfer of Learning: Contemporary Research and Applications, San Diego, CA: Academic Press.
Grant, R.M. (1996). Prospering in Dynamically Competitive Environments: Organizational Capability as Knowledge Integration, Organization Science, Vol. 7 (4), pp. 375-387.
Grover, V., Fiedler, K.D., and Teng, J.T.C. (1999). The Role of Organizational and Information Technology Antecedents in Reengineering Initiation Behavior, Decision Sciences, Vol. 30 (3), pp. 749-781.
Guha, S., Grover, V., Kettinger, W.J., and Teng, J.T.C. (1997). Business Process Change and Organizational Performance: Exploring an Antecedent Model, Journal of Management Information Systems, Vol. 14 (1), pp. 119-154.
Harrison, A.W., and Rainer, R.K., Jr. (1992). The Influence of Individual Differences on Skill in End-User Computing, Journal of Management Information Systems, Vol. 9 (1), pp. 93-111.
Karahanna, E., Straub, D.W., and Chervany, N.L. (1999). Information Technology Adoption Across Time: A Cross-Sectional Comparison of Pre-Adoption and Post-Adoption Beliefs, MIS Quarterly, Vol. 23 (2), pp. 183-213.
Kerr, E.B., and Hiltz, S.R. (1988). Computer Mediated Communication Systems: Status Report, New York, NY: Academic Press.
Kettinger, W.J., Teng, J.T.C., and Guha, S. (1997). Business Process Change: A Study of Methodologies, Techniques, and Tools, MIS Quarterly, Vol. 21 (1), pp. 55-80.
Majchrzak, A., and Cotton, J. (1988). A Longitudinal Study of Adjustment to Technological Change: From Mass to Computer-Automated Batch Production, Journal of Occupational Psychology, Vol. 61, pp. 43-66.
Moore, G.C., and Benbasat, I. (1991). Development of an Instrument to Measure the Perceptions of Adopting an Information Technology Innovation, Information Systems Research, Vol. 2 (3), pp. 192-220.
O’Dell, C., and Grayson, C.J. (1998). If Only We Knew What We Know: Identification and Transfer of Internal Best Practices, California Management Review, Vol. 40 (3), pp. 154-174.
Ruggles, R. (1998). The State of the Notion: Knowledge Management in Practice, California Management Review, Vol. 40 (3), pp. 80-89.
Sarvary, M. (1999). Knowledge Management and Competition in the Consulting Industry, California Management Review, Vol. 41 (2), pp. 95-107.
Sensiper, S. (1997). AMS Knowledge Centers, Case N9-697-068, Boston, MA: Harvard Business School.
Soloway, E., Adelson, B., and Ehrlich, K. (1988). Knowledge and Processes in the Comprehension of Computer Programs, in M.T.H. Chi, R. Glaser, and M.J. Farr (Eds.), The Nature of Expertise, Hillsdale, NJ: Lawrence Erlbaum.
Starbuck, W.H. (1992). Learning by Knowledge-Intensive Firms, The Journal of Management Studies, Vol. 29, pp. 713-740.
Stata, R. (1997). Organizational Learning – The Key to Management Innovation, Sloan Management Review, Spring, pp. 63-74.
Taylor, S., and Todd, P. (1995). Assessing IT Usage: The Role of Prior Experience, MIS Quarterly (December), pp. 561-570.
Vaske, J.J., and Grantham, C.E. (1990). Socializing the Human-Computer Environment, Norwood, NJ: Ablex Publishing Corporation.
Venkatesh, V. (1999). Creation of Favorable User Perceptions: Exploring the Role of Intrinsic Motivation, MIS Quarterly, Vol. 23 (2), pp. 239-260.
Venkatesh, V., and Davis, F.D. (1996). A Model of the Antecedents of Perceived Ease of Use: Development and Test, Decision Sciences, Vol. 27 (3), pp. 451-481.
Wold, H. (1985). Partial Least Squares, in S. Kotz and N.L. Johnson (Eds.), Encyclopedia of Statistical Sciences, Vol. 6, New York, NY: Wiley, pp. 581-591.
Zmud, R.W. (1979). Individual Differences and MIS Success: A Review of the Empirical Literature, Management Science, Vol. 25 (10), pp. 966-979.
Chapter 11
A Study of Web Users’ Waiting Time Fiona Fui-Hoon Nah University of Nebraska – Lincoln
The explosive expansion of the World Wide Web (WWW) is one of the most significant events in the history of the Internet. Since its public introduction in 1991, the WWW has become an important channel for electronic commerce, information access, and publication. However, the long waiting time for accessing web pages has become a critical issue, especially with the popularity of multimedia technology and the exponential increase in the number of Web users. Although various technologies and techniques have been implemented to alleviate the situation and to placate impatient users, fundamental research is still needed to investigate what constitutes an acceptable waiting time for a typical WWW user. This research not only evaluates Nielsen’s hypothesis of 15 seconds as the maximum waiting time of WWW users, but also provides approximate distributions of WWW users’ waiting time.
Previously Published in Challenges of Information Technology Management in the 21st Century edited by Mehdi Khosrow-Pour, Copyright © 2000, Idea Group Publishing.
INTRODUCTION
The growth of the Internet is one of the most astonishing technological phenomena of our time. One facet of this growth is the explosive development of the World Wide Web (WWW). The WWW is an Internet client-server distributed hypertext system that originated at the CERN High-Energy Physics Laboratories in Geneva, Switzerland. Since its public introduction in 1991, the WWW has become an important channel for electronic commerce, information access, and publication. With exponential growth in the WWW market, the long waiting
time for accessing web pages has become a major problem for WWW users (Lightner, Bose and Salvendy, 1996), especially with the increasing use of multimedia technology and the doubling of Internet users every 18-24 months. A survey conducted by the Graphics, Visualization, & Usability (GVU) Center at the Georgia Institute of Technology also indicates that long download time is the biggest problem experienced by WWW users (GVU, 1998). This problem is so noticeable that WWW users sometimes equate the “WWW” acronym with “World Wide Wait”! The WWW has been used for both business and personal purposes. It is used not only by businesses to gain competitive advantage, but also by individuals and groups to publish information in an electronic format for worldwide access. Any individual, group, or organization can make its web pages available to the public. Initially developed to provide scientists and researchers with an easy means of sharing information, the WWW has become one of the world’s most widely used environments for publishing information (Vacca, 1995). In addition, the WWW has become one of the most popular information search tools. The wealth of information available on the WWW makes it an excellent information repository. It puts global information at our fingertips and enables us to access it from the comfort of our homes. The WWW also provides an avenue for people to “kill time” by surfing the net (Hayes, 1995). Since its public introduction, the WWW has diffused rapidly to different geographical locations and different age groups. Although initial use of the Internet was dominated by technology developers/pioneers and early adopters/seekers of technology, a study reveals that more older and younger users are becoming Web literate (Pitkow and Kehoe, 1996). With this ever-increasing number of Web users, WWW access time has become a serious concern (Lightner et al., 1996). The findings from GVU’s (1998) 10th WWW User Survey indicate that the top two problems faced by Web users are the long download time of web pages (61.4%) and the long loading time of advertising banners (62.3%). The situation is gradually worsening with the increasing and excessive use of multimedia data (i.e., audio and video clips). This problem affects both information users and authors, since users are less likely to visit websites requiring long access times (Reaux and Carroll, 1997). Therefore, it is important to conduct research on WWW access time, ways to reduce the access time, and ways to extend Web users’ waiting time. Although technologies and techniques have been introduced to partly resolve the problem and to ease the frustration of impatient users, fundamental research to assess Web users’ waiting time is lacking. This research attempts to investigate the question, “What constitutes an acceptable waiting time for WWW users?”
The rest of the chapter is organized as follows: the next section discusses the concept of WWW waiting time, and the following section describes an experimental study that was carried out to assess Web users’ waiting time. The results of the study are then presented and discussed.
LITERATURE ON WAITING TIME
Long access time has been a consistent problem encountered by Web users (GVU, 1998; Lightner et al., 1996). The waiting time for accessing a web page is the physical (objective) time from the moment the user clicks on a hyperlink (or performs any other operation that requests a new web page, such as typing in a new URL and hitting the “Enter” key) to the moment the requested information is presented in the browser window or the moment the user gives up waiting. It can be affected by the performance of the browser, the speed of the Internet connection, the local network traffic, the load on the remote host, and the structure and format of the web page requested.
Tolerable Waiting Time
Research on computer response times has yielded the following results (Miller, 1968; Nielsen, 1993):
1) 0.1 second is about the limit for having the user feel that the system is reacting instantaneously, meaning that no special feedback is necessary beyond displaying the result.
2) 1.0 second is about the limit for the user’s flow of thought to stay uninterrupted, even though the user will notice the delay. Normally, no special feedback is necessary during delays of more than 0.1 but less than 1.0 second, but the user does lose the feeling of operating directly on the data.
3) 10 seconds is about the limit for keeping the user’s attention focused on the dialogue. For longer delays, users will want to perform other tasks while waiting for the computer to finish, so they should be given feedback indicating when the computer expects to be done.
Even though the traditional human factors guideline suggests ten seconds to be the maximum response time before computer users lose interest (Miller, 1968; Nielsen, 1993), Nielsen (1995, 1996) estimates a maximum of 15 seconds in the case of WWW access. According to Nielsen, Web users have been trained or conditioned by past experience accessing the Web to endure longer “suffering.” Is 15 seconds the maximum “tolerable waiting time” for accessing the Web? Unfortunately, there is no empirical evidence indicating that 15 seconds is the “magic number.” This question needs to be investigated empirically.
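The three limits above amount to a simple lookup table; the sketch below encodes them as a function. The function name and messages are our own, not Miller's or Nielsen's.

```python
# A minimal encoding of the Miller (1968) / Nielsen (1993) response-time limits.
def feedback_guideline(delay_seconds: float) -> str:
    if delay_seconds <= 0.1:
        return "feels instantaneous; no special feedback needed"
    if delay_seconds <= 1.0:
        return "flow of thought uninterrupted; delay noticed, no feedback needed"
    if delay_seconds <= 10.0:
        return "attention still on the dialogue; keep the user informed"
    return "attention lost; show progress and an estimated completion time"

for d in (0.05, 0.5, 5, 30):
    print(f"{d:>5} s -> {feedback_guideline(d)}")
```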
RESEARCH QUESTION AND METHODOLOGY
The aim of this study is twofold: (1) to assess Nielsen’s hypothesis of 15 seconds as Web users’ tolerable waiting time for accessing web pages, and (2) to determine the range and distribution of the tolerable waiting time of WWW users. An exploratory experimental study was conducted to assess users’ tolerable waiting time in accessing the WWW. Two scenarios were studied – with and without feedback provided to users in the form of a status bar, a moving bar indicating that the system is working on the request. The bar moves in a bi-directional manner (left to right, right to left, left to right, and so on) until the user’s request is satisfied (i.e., the web page is downloaded). It does not provide information about the duration of the wait. Seventy subjects participated in the experiment. The subjects were undergraduate students enrolled in an introductory MIS class. They were randomly assigned to one of two groups for the experiment. The first group (i.e., the control group) was provided with a browser that did not have a status bar. The second group (i.e., the experimental group) was provided with the same browser with a status bar included. The experiment was given to the students as a class assignment that required them to look up specific information on the Web. The subjects were proficient users of the WWW. All subjects used the same browser and interface, and they accessed exactly the same web pages. All subjects began their browsing task from a standard web page designed specifically for the experiment. This standard web page provided links to the information needed to complete the assignment. The subjects were asked to look up the names of ten Web acceleration tools using the standard web page provided. Of the ten hyperlinks provided on the standard web page, only seven were working. Upon clicking on any of these seven working hyperlinks, the corresponding web page would appear instantaneously (i.e., with negligible access time). The fourth, seventh, and ninth hyperlinks triggered an infinite waiting time. For these three non-working hyperlinks, the subjects had to click the “STOP” icon to terminate the wait. The elapsed
Table 1: Statistics on Users’ Waiting Time for WWW Access (subjects’ average waiting time for the first access to each non-working hyperlink)

                         | 1st non-working hyperlink                       | 2nd non-working hyperlink                    | 3rd non-working hyperlink
Control (36 subjects)    | 13 sec. (8 of 36 accesses [22%] > 15 sec.)      | 4 sec. (0 of 36 accesses > 15 sec.)          | 3.3 sec. (0 of 36 accesses > 15 sec.)
Treatment (34 subjects)  | 37.6 sec. (27 of 34 accesses [79%] > 15 sec.)   | 17 sec. (7 of 34 accesses [21%] > 15 sec.)   | 6.7 sec. (4 of 34 accesses [12%] > 15 sec.)
Mann-Whitney Test        | p<.000                                          | p<.002                                       | p<.004
time between the moment the hyperlink was clicked and the moment the “STOP” button was clicked was captured automatically by the computer log and used for data analysis.
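As an illustration of the kind of instrumentation involved, a minimal sketch of elapsed-wait logging follows. The class and event names are hypothetical; this does not reproduce the study's actual experimental software.

```python
# Record the click time and the STOP time, then log the elapsed wait.
import time

class WaitLogger:
    def __init__(self):
        self.pending = {}   # hyperlink id -> click timestamp
        self.log = []       # (hyperlink id, elapsed seconds)

    def on_click(self, link_id: str) -> None:
        self.pending[link_id] = time.monotonic()

    def on_stop(self, link_id: str) -> None:
        start = self.pending.pop(link_id, None)
        if start is not None:
            self.log.append((link_id, time.monotonic() - start))

logger = WaitLogger()
logger.on_click("hyperlink_4")
time.sleep(0.2)             # the user waits, then gives up
logger.on_stop("hyperlink_4")
print(logger.log)
```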
RESEARCH RESULTS
The results of the data analysis indicate that the inclusion of a status bar prolongs users’ waiting time (see Table 1). The average waiting time for the first access to a non-working hyperlink was 13 seconds for the control group (no status bar) and 38 seconds for the experimental group (with status bar). The Mann-Whitney test indicates that this difference is significant (p<.000). As subjects proceeded with the task, their average waiting time for accessing the non-working hyperlinks decreased. This was probably because subjects became more confident that these web pages, having come from the same server, would not be successfully downloaded. The average waiting time for the first access to the next non-working hyperlink was four seconds for the control group (no status bar) and 17 seconds for the experimental group (with status bar). The average waiting time for the first access to the last non-working hyperlink was three seconds for the control group (no status bar) and seven seconds for the experimental group (with status bar). The Mann-Whitney test indicates that both of these differences are significant (p<.002 for the former, and p<.004 for the latter). Although subjects’ waiting time declined with the number of non-working hyperlinks encountered, their waiting time might not decline if these non-working hyperlinks were associated with web pages residing on different servers. This hypothesis will be tested in future research.
Table 2: Distribution of Users’ Waiting Time in the Absence of a Status Bar
[Histogram of first waiting time (without status bar): frequency and cumulative percentage by waiting-time interval (in sec.), in bins from 0-5 to >70; the 61% and 78% cumulative levels are marked.]
Table 3: Distribution of Users’ Waiting Time in the Presence of a Status Bar
[Histogram of first waiting time (with status bar): frequency and cumulative percentage by waiting-time interval (in sec.), in bins from 0-5 to >70; the 12% and 21% cumulative levels are marked.]
In the data analysis that follows, we look at the “best case” scenario, in which subjects were surprised to encounter the first non-working hyperlink and were, therefore, relatively patient in waiting to access the web page. Thus, the results that follow should be interpreted from the perspective of a “best case” scenario. As indicated in Tables 2 and 3, the subjects’ waiting time was significantly prolonged when a status bar was provided on the web browser (p<.000 – refer to Table 1). In the case where no status bar was provided, the mode for waiting time was in the interval of 5-10 seconds, as shown in Table 2. On the other hand, the mode was in the interval of 20-25 seconds when a status bar was provided, as shown in Table 3. Thus, when no status bar was provided, few users (only 22%) waited more than 15 seconds (see Table 2). On the other hand, when a status bar was provided, most users (79%) waited more than 15 seconds – only 21% of users waited less than 15 seconds (see Table 3). Hence, Nielsen’s proposed 15-second guideline is contingent upon whether a status bar is provided. As for the first accesses to the other two non-working hyperlinks, none of the users in the control setting (i.e., no status bar) waited more than 15 seconds. However, the scenario was different when a status bar was provided: 21% of the first accesses to the next non-working hyperlink had a waiting time of more than 15 seconds, and 12% of the first accesses to the last non-working hyperlink had a waiting time of more than 15 seconds.
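For readers who wish to replicate this style of analysis, the Mann-Whitney comparison can be run directly in scipy. The waiting-time samples below are invented for illustration and do not reproduce the study's data.

```python
# Compare two groups' waiting times with the Mann-Whitney U test.
from scipy.stats import mannwhitneyu

no_status_bar = [3.1, 5.4, 6.2, 7.8, 9.0, 11.5, 12.3, 14.0, 16.2, 18.5]
with_status_bar = [12.0, 18.4, 21.7, 24.9, 30.2, 36.8, 42.5, 51.0, 60.3, 72.1]

stat, p = mannwhitneyu(no_status_bar, with_status_bar, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")
```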
FUTURE RESEARCH
Waiting time varies under different circumstances and contexts. For example, if a user is sure that the information needed is available on a web page that is currently being accessed, s/he may be willing to wait until the web page is displayed. However, for a user who is just surfing the Web at leisure, even a short delay in the display of a web page is likely to prompt him/her to click the “STOP” button and try another website. As such, three scenarios have been identified for WWW access – netsurfing, browsing, and querying. The effect of task nature (i.e., netsurfing, browsing, querying) on waiting time is one area that we are interested in studying in the near future.
CONCLUSIONS
To the best of our knowledge, no prior experimental study has assessed the degree of users’ tolerance toward long access times on the WWW. Given that long waiting time for Web access has consistently been one of the leading concerns of Web users (GVU, 1998; Lightner et al., 1996), it is important for researchers and practitioners to 1) understand users’ waiting behavior in accessing the WWW, 2) propose and evaluate techniques to reduce users’ perception of waiting time, and 3) recommend a trade-off between the aesthetics of a web page and its download/access time. We intend to use the results of this study and the experience gained from this research to develop a foundation for future research on WWW waiting time. We also hope that this exploratory research will encourage other researchers to examine user-related issues and problems in this important area of Web users’ waiting time.
REFERENCES
Graphics, Visualization & Usability (GVU) Center. (1998). GVU’s User Surveys, Georgia Tech Research Corporation, April 1995 to October 1998. Available at http://www.cc.gatech.edu/gvu/user_surveys/.
Hayes, M. (1995). Working online or wasting time?, Information Week, May 1, pp. 38-51.
Lightner, N.J., Bose, I., & Salvendy, G. (1996). What is wrong with the World-Wide Web?: A diagnosis of some problems and prescription of some remedies, Ergonomics, 39(8), pp. 995-1004.
Miller, R.B. (1968). Response time in man-computer conversational transactions, Proceedings of AFIPS Fall Joint Computer Conference, 33, pp. 267-277.
Nielsen, J. (1993). Response times: the three important limits. Available at http://www.useit.com/papers/responsetime.html. Excerpt from Chapter 5 of Usability Engineering by Jakob Nielsen, Academic Press.
Nielsen, J. (1995). Guidelines for multimedia on the web, Jakob Nielsen’s Alertbox, December 1995. Available at http://www.useit.com/alertbox/9512.html.
Nielsen, J. (1996). Top ten mistakes in web design, Jakob Nielsen’s Alertbox, May 1996. Available at http://www.useit.com/alertbox/9605.html.
Pitkow, J.E. & Kehoe, C.M. (1996). Emerging trends in the WWW user population, Communications of the ACM, 39(6), pp. 106-108.
Reaux, R.A. & Carroll, J.M. (1997). Human factors in information access of distributed systems, in Salvendy, G. (Ed.), Handbook of Human Factors & Ergonomics (2nd ed.), New York: John Wiley & Sons, Inc.
Vacca, J.R. (1995). Mosaic: beyond net surfing, Byte, 20(1), January, pp. 75-86.
Chapter 12
Stickiness: Implications for Web-Based Customer Loyalty Efforts Supawadee Ingsriswang and Guisseppi Forgionne Department of Information Systems University of Maryland, Baltimore
Previously Published in Managing Information Technology in a Global Economy edited by Mehdi Khosrow-Pour, Copyright © 2001, Idea Group Publishing.
INTRODUCTION
The past few years have borne witness to a revolution in business, with acceleration in the use of the World Wide Web to support or, in many cases, supplant traditional modes of marketing and selling products and services. The Internet consumer base is continually growing. According to a report by Computer Industry Almanac, Inc. (www.c-i-a.com, 1999), 490 million people around the world will have online access by the year 2002. With the rapid increase in the number of online consumers, managers and marketers are moving to exploit this opportunity to reach millions of customers worldwide. Between 1997 and 1999, Internet hosts grew from 16 million to over 72 million worldwide (www.isc.org, 2000). The explosive growth of websites raises the question for Web designers and marketers of how to attract consumer attention to their sites and how to differentiate their sites from others. In the physical world, time and cost considerations make it difficult for people to change grocery stores for product selection, while searching for a product on the Web costs consumers very little. Consumers can switch to other websites or competing URLs in seconds, at minimal financial cost. Every commercial website is exploring a variety of efforts to hold its existing
customers, because acquiring new customers is expensive (Hanson, 2000). Web managers and marketers have been paying more attention to the “stickiness” of websites (Anders, 1999; Davenport, 2000; Murphy, 1999; O’Brien, 1999; Pappas, 1999). Some measurement companies, for instance Media Metrix and Nielsen//NetRatings, report website rankings with a stickiness rating, which indicates how long the average user spent on a site in a given period of time. Stickiness is the same concept as customer loyalty. It is the competence of a website in creating both customer attraction and customer retention for the purpose of revenue or profitability. Customer attraction concerns how to attract and keep customers at the site, whereas customer retention is the ability to retain customers’ loyalty. Some sites have spent millions of dollars combining many stickiness programs in an attempt to hold their customers, but these investments may not be wisely made, since not every method is profitable. Therefore, what makes a site sticky is debatable. Moreover, there are many possible strategies for achieving site stickiness. Web managers and marketers would benefit from reliable and consistent measuring tools for choosing the proper site alternative. Although Web marketers monitor aggregate measures of Web traffic, such as average time spent, in order to determine the stickiness of a site, they gain little insight into the effectiveness of stickiness programs. For example, if the average time spent at a site is increasing monthly, is the site sticky? It is possible that each month the site attracts some new visitors who connect to the site and leave it on their screens for days, while existing customers are dropping out completely. In addition, longer time spent at a site might be inflated by network delays at download time. Other traffic measures representing consumers’ behavior should be considered as well. Beyond the traffic volumes, it is important to quantify how much revenue and profit are affected, as well as to understand the underlying consumer behavior. In this chapter, the concept of customer loyalty in traditional businesses is applied to digital products and services in order to describe a conceptual model of online stickiness. Using the conceptual model, we identify the measures that determine the stickiness of a website and describe the applications of the stickiness value.
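A toy illustration of this pitfall, assuming a hypothetical session log, shows how a rising site-wide average can coexist with declining engagement among returning users. The data and column names are invented for the example.

```python
# Aggregate average time spent vs. per-user engagement.
import pandas as pd

sessions = pd.DataFrame({
    "user":    ["a", "a", "b", "b", "c", "new1", "new2"],
    "month":   [1,   2,   1,   2,   1,   2,      2],
    "minutes": [30,  10,  25,  8,   20,  400,    500],  # new1/new2 left the page open
})

# The aggregate average rises from month 1 to month 2 ...
print(sessions.groupby("month")["minutes"].mean())
# ... while the median time spent per returning user falls.
returning = sessions[sessions["user"].isin(["a", "b", "c"])]
print(returning.groupby("month")["minutes"].median())
```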
STICKINESS AS CUSTOMER LOYALTY
Loyal customers who persistently repeat their purchases or visits are valuable assets of a website. The goal of a stickiness program is to establish a high level of customer loyalty by providing increased satisfaction and value to the customer. Increases in customer satisfaction and loyalty have a positive influence on long-term financial performance (Anderson, Fornell, and Lehmann, 1994; Reichheld and Sasser, 1990). To determine the long-term efficacy of a stickiness program, Web marketers must assess the program’s influence on customers’ usage or purchase behavior and verify the cost-effectiveness and profitability of the program.
Customer loyalty has been a core element of marketing since the early eighties (Morgan & Hunt, 1994). The idea is to develop and maintain long-term relationships with customers by creating superior customer value and satisfaction. An understanding of current customers’ behavior and its determinants is an important basis for identifying optimal marketing actions.
A CONCEPTUAL MODEL OF CUSTOMER LOYALTY
Prior research in marketing has identified the key drivers of customer satisfaction and customer loyalty. It is widely known that customers determine their satisfaction via their perceptions. According to Engel et al.’s (1990) buying decision-making process, customers evaluate product/service alternatives on the basis of the benefits and costs that satisfy their needs. Perceived value is the total utility the customer assigns to each available alternative based on his or her requirements. Figure 1 shows the determinants that drive perceived value, customer satisfaction and customer loyalty.
Figure 1: A Conceptual Model of Customer Loyalty (Customer Benefits – from product/site, service and promotion – less Customer Costs – monetary, time and risk – yield Customer Perceived Value, which drives Customer Satisfaction and, in turn, Customer Loyalty: revisit, repurchase, reuse, spend more time, spend more money, view more pages)
Drivers of Stickiness or Customer Loyalty
Perceived value depends on the benefit or quality of the products/services that customers gain and the cost that customers pay, or Perceived value = Customer Benefits – Customer Costs. Customers usually select and stay at a site providing high perceived value and satisfaction. In this section, the values of electronic commerce sites to consumers are described in the following categories of customer interaction with the site.
§ Product/Site
Before customers make a purchase, they must interact with the content posted on the e-commerce site. Forrester Research reported that content is the most important feature, one that makes 75 percent of online consumers return to their favorite sites (Murphy, 1999). The quality of a site’s content is related to:
Customer Loyalty } Customer Satisfaction } Customer Perceived Value } Customer Benefits } Product/ Service Promotion Site
}
} Customer Calls Monetary Cost
} Time Cost
Risk Cost
• The Variety of Goods and Services. Jarvenpaa and Todd (1997) reported that a limited number of good or service offerings left most customers unsatisfied. To build stickiness, a predominant strategy used by several sites is to offer a wide range of services. A variety of services is available at each sticky site, including auctions, communication services (e.g., e-mail, voice mail, Internet access, chat rooms, homepages, instant messaging), search services (e.g., information, people), shopping malls, financial services (e.g., banking, insurance, brokerage, mortgages, mutual funds), entertainment and recreation services (e.g., sports, games, travel, horoscopes, movies, music) and so on.
• Freshness. The content of the website should provide accurate and up-to-date information available on demand at any time. As with publishing sites, the content should be updated frequently, and previous issues of newspapers or magazines should be archived at the site.
• Interface Usability. Herschlag (1998) reported that 8% of respondents don’t shop online because the sites are too hard to use. According to Lohse and Spiller (1998), user interface design features influence Web traffic and sales. The organization and navigation of Web pages are important factors of the website interface. Tilson et al. (1998) and Rohn (1998) found that helpful product organizations make it easy for participants to choose the required product. The design of the user interface should maximize ease of use and minimize response time for users. In addition, the interface should be planned to encourage and stimulate customers to produce positive emotional responses leading to increased purchases.
• Personalization. Perkowitz and Etzioni (1997) stated that good websites should adapt automatically by gaining knowledge from user access patterns. Many websites attempt to generate Web pages in real time based on the needs of individual consumers. Personalization or customization is a technique for adjusting the site’s presentation to an individual consumer. Most portal sites provide personalized options for visitors to create and update their preferences. At Excite (www.excite.com), users who personalized the site were likely to repeat their visit (Pappas, 1999). According to Fletcher Research, 68% of Web users who personalized an e-commerce site made a purchase from the site (Freedman, 1999).
§ Service
Service quality is a key competitive weapon of service providers on the Internet. For example, search engines must minimize search and response time and maximize the accuracy, precision and reliability of search results (Harter and Hert, 1997; Losee and Paris, 1999), while the e-commerce site must also provide accurate order processing (Loshin, 1995). The values of service at a site are created from:
• Customer Interaction. In the digital world, providing a communication channel for customers is an important part of a customer service program, because customers cannot interact directly with sales representatives. A survey conducted by NFO Interactive reported that around 35% of 2,321 respondents would buy more from e-commerce sites if they could interact with online salespersons (Freedman, 1999). Customer feedback provides valuable suggestions, such as existing product shortcomings and design strategies for new products and services. However, research shows that 40% of the surveyed sites did not respond at all to consumer emails (Rubric Inc., 1999). Responsiveness is one factor customers use to determine service quality (Parasuraman, 1988). The communication channel and response time affect customer satisfaction.
• Technology and Performance. The performance of any application depends on the capability of the network, server hardware/software, and client hardware/software. Speed is very critical for Web business. The site should speed up consumer interaction and response time. Moreover, if speed is sufficient, delivering 3D images, voice and video to users makes users feel better about the product.
• Reliability. Besides performance, reliability is important to customers in using a website’s service. For example, if a customer tries to make a call at a free long-distance call website but never succeeds, the customer will not believe in the website’s ability to deliver the service. Several technical errors, such as website not found, page not found, code errors and server busy, can reduce the reliability of the website. Reliability implies the consistency of the website’s service performance as well as the accuracy of the major functions of the website. Since the website promises to deliver the service at all times and in all places, the service at different times and different places should be performed consistently.
§ Promotion
Banner ads and user rewards are usually used to increase traffic at several sites (Davenport, 2000; Strauss and Frost, 1999). To bring customers to the sites, rewards – including coupons, redeemable points, rebates, free gifts, airline mileage, and cash payments – may be given out as a promotion tool for simply visiting the site or performing some action on the site.
§ Monetary Cost
Price is an important element of the marketing mix, since it affects the consumers’ product selection process directly. When consumers pay nothing, they easily try the product or service. Consumers also want the highest return on their investment. This means that consumers want to minimize the cost or price they pay for any product/service. As with commercial television and free print publications, most advertising-supported websites offer free services to consumers to increase their satisfaction. Pricing strategy has also been adapted from other
business models; for example, a retailing site would set lower prices than physical stores and competing sites.
§ Time Cost
On the Internet, beyond the monetary cost, time is a critical cost for the customer. E-commerce sites act as self-service places where customers have to do everything by themselves. Websites should minimize the time for any processes at the site, such as access, search, and download, so customers can focus on desired activities.
§ Risk Cost
• Privacy. Most online consumers are concerned about disclosing personal information. According to Hoffman, Novak and Peralta (1999), 94% of Web users have refused to provide information at a website, and 40% have given false information.
• Security. Unsecured transmissions are a major concern for online consumers. During the back-and-forth data transmission between client and server on the Internet, there are many ways to break into a computer. One study reported that 21% of consumers would not buy things online because of fear of hackers (Krantz, 1998). Likewise, as cited by Strauss and Frost (1999), a study conducted for the Lycos Corporation by Cyberdialogue in 1998 reported that transaction security is the number one concern of all online users.
These perceived values consolidate into an overall satisfaction measure, which affects customer loyalty. Some studies show that satisfied customers are not necessarily loyal customers (Gale, 1997; Reichheld, 1996), but there is a high probability that satisfied customers will be loyal. For instance, Bolton and Lemon (1999) showed that customers’ usage of two continuously provided services depended on their prior satisfaction levels. Demographic and other characteristics of customers may also affect their satisfaction. For example, a customer with long experience of the website will have had more opportunities to have a bad experience with the site. In general, loyal customers will perform the following actions:
• Revisit/repurchase/reuse;
• Spend more time at the site;
• Generate a large amount of revenue and profits for the site; and
• Recommend the website to others.
STICKINESS MEASURES
Loyal customers are the target of the website and its stickiness programs. The conceptual model of customer loyalty indicates that customer loyalty is a kind of customer behavior. These loyal behaviors can be used as measures of stickiness, with the following indicators.
• Revisit/repurchase/reuse. Loyal customers should frequently visit, purchase or use services at the site. We can measure this behavior by the frequency of visits and the frequency of purchases.
• Spend more time at the site. Loyal customers should spend a longer period of time at the site. The duration of visits and the number of website features customers use can represent this customer behavior.
• Generate a large amount of revenue and profits at the site. This behavior can be signified by the number of page views, the number of ad exposures, the number of clicks on ads, and the purchase amount during a visit.
• Recommend the website to others. Many content provider sites provide an option for users to recommend the site to their friends. At these sites, the number of recommendations can be counted as well.
Remarkably, the duration of visits, the most popular measure of website stickiness, represents only one of many loyal behaviors. A longer duration of visits does not necessarily imply a higher number of page views, since a customer could spend one hour on one page. Other relationships among loyal behavior measures are still in question. This confirms that an aggregate-only measure cannot tell whether a site is sticky. Through loyal behaviors, customers generate high traffic and sales volumes, with an increase in revenue for the site. Consistent with the purpose of any business, monetary value is most frequently used to measure the effectiveness of investments. We can measure the value of stickiness as the profitability of the site:
Profit = Revenue – Cost. (1)
Corresponding to the model of customer loyalty, loyal customers tend to spend large amounts of money at the site. The revenue that customers generate for the site can be calculated from the volume of product bought or service used and the price of the product/service. In general, at retailing sites,
Revenue = Volume of product sold * Price, (2)
whereas at advertising-based sites,
Revenue = Volume of impressions or clicks on ads * Price. (3)
Revenue at some sites might include both (2) and (3). The costs of the website can be calculated by
Costs = Volume of transactions * Cost per transaction. (4)
Based on customer perception and satisfaction, the volume of transactions is determined by the stickiness of the site. The benefits of the product, service and promotion, and the cost or price to the customers, are antecedents of customer perception and satisfaction. Hence, we can formulate a function
Volume = f(Stickiness), (5)
where
Stickiness = f(product, service, promotion, price). (6)
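As a sketch of how the behavioral indicators listed above might be computed in practice, the following assumes a hypothetical per-visit log; the schema and numbers are invented for illustration.

```python
# Compute per-user stickiness indicators from a raw visit log.
import pandas as pd

visits = pd.DataFrame({
    "user":       ["a", "a", "a", "b", "c", "c"],
    "duration":   [12.0, 8.5, 15.0, 3.0, 7.0, 9.5],   # minutes per visit
    "page_views": [10, 6, 14, 2, 5, 8],
    "purchase":   [0.0, 25.0, 0.0, 0.0, 40.0, 0.0],   # dollars spent per visit
})

indicators = visits.groupby("user").agg(
    visit_frequency=("duration", "size"),
    avg_duration=("duration", "mean"),
    total_page_views=("page_views", "sum"),
    total_purchases=("purchase", "sum"),
)
print(indicators)
```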
Figure 2: Stickiness as the driver of volume and revenue of the website (Stickiness → Loyal Behaviors → Volume → Revenue)
It is clear that stickiness is the consequence of an appropriate marketing mix. In sum, the relationship between sticky features and the revenue of a website is as shown in Figure 2. Under our stickiness concept, the advertising-supported site is a good example, since its revenue depends directly on traffic volume (Rappa, 2000). At advertising-supported sites, larger volumes of impressions and higher click-through rates lead to greater revenues. The number of impressions and the click-through rate are determined by customer behavior, such as the frequency of visits, the number of page views, the duration of visits and the number of features used. As shown above, the loyal behaviors and the transaction volume at a website are driven by site stickiness. If Web managers can measure the determinants of site stickiness and capture the site’s stickiness value, they can assess the effectiveness of their site design.
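A worked example of equations (1)-(4) for a hypothetical advertising-supported site follows; every figure is invented for illustration.

```python
# Profit from the stickiness-driven revenue model, per equations (1)-(4).
impressions = 2_000_000         # ad impressions generated by loyal visits
click_through_rate = 0.012
price_per_click = 0.40          # dollars
transactions = 50_000           # served page requests
cost_per_transaction = 0.05     # dollars

revenue = impressions * click_through_rate * price_per_click  # eq. (3)
costs = transactions * cost_per_transaction                   # eq. (4)
profit = revenue - costs                                      # eq. (1)
print(f"revenue=${revenue:,.2f} costs=${costs:,.2f} profit=${profit:,.2f}")
```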
USING STICKINESS MEASURES TO IMPROVE WEBSITE DESIGN
The conceptual model of customer loyalty and the proposed stickiness measures are designed to yield high levels of customer satisfaction, customer loyalty and profitability. Most designers focus only on functionality and usability; however, developers of an e-commerce site also need to consider marketability.
Figure 3: An iterative website development process (Preproduction → Production → Postproduction, with iteration between stages)
Figure 4: Stages in an iterative website development process

Stage          | Website Development Process                                     | Team Members
Preproduction  | Website & target market definition: planning, analysis, design  | Web manager, Web marketers, Web designers, target customers
Production     | Implementation: Web application development, pilot trials       | Web designers, Web developers, Web marketers, target customers
Postproduction | Evaluation & maintenance; promotion                              | Web marketers, Web designers, Web developers, customers
The e-commerce site is a product in its own right. In electronic markets, customers have more opportunities to compare and select a site to shop at and a product to buy. We view the website development process as a product development process – preproduction, production and postproduction – as shown in Figure 3. As illustrated in Figures 3 and 4, in iterative website development, marketers and designers must work together at all stages, and each stage has iterations. The preproduction stage determines the definition of the website and whether any related websites are available on the market. In addition, the target customers’ needs and their perceptions of existing related websites and of the new website ideas are analyzed and used to refine the new website’s features. In the refinement process, features are approved or rejected based on target customers’ perceptions and technology constraints. In the production stage, Web designers and developers implement a prototype of the website. The same target groups as in the previous stage are shown the prototype; again, the target group’s perceptions are collected and used to redesign the prototype. Once the website is launched, the postproduction stage starts. Advertising and promotional campaigns are dispatched to attract customers to the site. Customers’ behavior and perceptions are used to determine which features satisfy or dissatisfy customers. Then the process of redefining or redesigning the website starts again at the preproduction stage. From this website development perspective, customer perceptions are very important in designing a marketable website. The stickiness measures proposed here, derived from customer perceptions, will support both the Web marketer and the Web designer in tuning their website. Based on customer behavior, stickiness adds detail to marketing data. Performing as a marketing response function, stickiness values suggest the need to develop new products/services or redesign the existing products/services at the site. To be successful, a new product must create superior stickiness value and a desired level of revenue and profitability for the site. On the Web design side, stickiness acts as an evaluation function that helps focus the effects of site design. Stickiness measures can also be used to indicate customer interests, so the Web designer can adapt the site’s features and interface based on the interests of the customer. Overall, using the stickiness measures, Web marketers and designers can adapt and create combinations of Web-based marketing strategies to yield a high level of customer loyalty.
CONCLUSIONS
In the electronic business environment, competition between websites to keep their customers is intensifying. Stickiness is one of the Web attributes for measuring the effectiveness of a site. As with the concept of customer loyalty, stickiness indicates how well a website can keep its customers. The rates of success of a loyalty program or of particular stickiness drivers certainly vary by business model and by website. The development and maintenance of website stickiness require systematic measurement and understanding of customer satisfaction and loyalty.
With regard to the effectiveness of stickiness/Web-based loyalty programs on customer behavior, the Web manager can justify these programs based upon their ability to increase customer satisfaction and to encourage customers to use more products and services at the site. Many websites suffer from a low ability to hold their target customers. Measurement of customer satisfaction, loyalty and stickiness can uncover customers’ preferences and what caused a customer to leave. Once Web managers can identify the potential drivers of stickiness, they can select the appropriate marketing instruments and Web-based techniques to develop and maintain the customer relationship. Moreover, many websites have the same target group, and the Web manager must be aware of present and future competition. Stickiness measurements can provide information to support website investment decisions. Through a stickiness analysis, the Web marketer can demonstrate the value of the products/services provided and can determine the return on investment of those services for the Web manager. By modeling the stickiness concept, it is possible to predict the revenue impact of product and service improvements. Furthermore, improvements in the loyalty of profitable customers can improve market share and reduce marketing and operating costs.
Chapter 13
“Not” is Not “Not”: Comparisons of Negation in SQL and Negation in Logic Programming

James D. Jones
Computer Science, College of Information Science and Systems Engineering
University of Arkansas at Little Rock
Logic programming presents an excellent paradigm within which to develop intelligent systems. In addition to the routine sorts of reasoning we would expect such a system to perform, the state of the art in logic programming allows one to reason explicitly about false information, to reason explicitly about unknown or uncertain information (the subject of this chapter), and to reason introspectively in spite of holding multiple competing ways of viewing the world. Each of these abilities is a necessary form of reasoning, and each is lacking from current technology. Needless to say, each of these abilities is also lacking from relational databases. In this chapter, the focus is upon the expressive power of weak negation in logic programming. Weak negation is not well understood and can be easily confused with negation in SQL. In particular, some may falsely equate weak negation with the “not exists” clause of SQL. It is true that both forms of negation have some similarity. However, the expressive power of weak negation far exceeds that of “not exists” in SQL. To a much lesser extent, we shall also consider strong negation in logic programming.
Previously Published in Managing Information Technology in a Global Economy edited by Mehdi Khosrow-Pour, Copyright © 2001, Idea Group Publishing.
INTRODUCTION

As we consider a future of very intelligent machines, we need enabling technologies that will allow us to represent and reason about all forms of knowledge. Those enabling technologies will concern themselves not only with hardware improvements but, much more importantly, with software improvements. We could have the fastest chips, buses, and memory conceivable, and we could have the most capacious memory and storage medium, yet without very significant improvements to our reasoning methods, all we gain is faster turnaround time and greater throughput. We need processing abilities, reasoning abilities, that are orders of magnitude beyond current technology. The software improvements we need are not concerned so much with programming languages and operating systems, important as these are. Rather, we are concerned with the essence of thought. How do we get an inanimate object to perform such thought? The quest of the artificial intelligence community at large is to produce a self-sufficient reasoning machine much like the computer “HAL” in the movies “2001: A Space Odyssey” and “2010: The Year We Make Contact.” A more contemporary example is the robot in the movie “Bicentennial Man.” While others exalt the nature of such a successful inanimate object (Moravec, 1999), the bottom line is that we have to devise ways to get something akin to a doorstop to perform the magnificent feat that we call thought. This is incredibly difficult. We need strategies or methods to perform very difficult forms of reasoning. We also need these strategies to rest on solid theoretical foundations. This is the goal of the declarative semantics of logic programming. The quip is that we need to reason in semantically correct ways. Reasoning methods are only part of the requirement for deep reasoning. Another part of the picture is that we need knowledge. A successful machine reasoner must possess both a large quantity of knowledge and the ability to use that knowledge to create new knowledge (make inferences, make decisions, etc.). Traditionally, databases have been viewed as the repository of vast amounts of information, while logic programming has been viewed as an inference engine that allows us to derive new information from existing information. In this chapter, we are concerned about negation in these fields, since: 1) these fields are related, and 2) we desire to provide a greater appreciation for the power of logic programming. The overlap of these two fields–that is, intelligent databases–has come to be known as deductive databases. The next section primarily discusses what weak negation means. In cursory manner, it also discusses strong negation. Strong negation in logic programming is something for which SQL does not have an analogue. The subsequent section
illustrates by way of examples how weak negation is far more powerful than negation in SQL. While formal proofs are beyond the scope of this chapter, it should be apparent that weak negation completely subsumes negation in SQL and far exceeds the expressive power of negation in SQL. It is assumed that the reader has a working knowledge of SQL. (For those readers not very familiar with SQL, Connolly and Begg (1999) provide an excellent and very readable discussion of SQL.) Throughout, reference will be made to relational databases. From the point of view of semantics, relational databases have a very strong mathematical foundation, and therefore are a good subject for us to consider. From this standpoint, the reader should not view relational databases as passé, preferring object-oriented databases or multimedia databases instead. For the purposes of this discussion, from the view of semantics, object-oriented databases and multimedia databases do not offer greater expressive power. This is not to say that these advanced databases are not very useful, and this is not to say that these databases do not allow us to incorporate new types of information. This is to say that semantically speaking, there is not a difference between pointing to a scalar value and pointing to an X-ray in terms of abstract computational complexity.
WEAK NEGATION IN LOGIC PROGRAMMING
Negation is quite an important topic in logic programming (Apt and Bol, 1994). From the point of view of semantics, there are two primary forms of negation in logic programming. One form of negation is strong negation (Gelfond and Lifschitz, 1990), or classical negation, and is designated by the operator ¬. This form of negation means that something is definitively false. An example would be something such as:

    ¬angry(john).

which means that it is a fact (with respect to our view of this world) that john is not angry. Now such a fact may be explicitly stated, or it may be inferred. Nonetheless, with respect to the beliefs of our program, it is definitively believed that john is not angry. The other form of negation is weak negation (or negation-as-failure) (Lifschitz, 1989). This form of negation is designated by the operator not, which means that a fact is not known (or more precisely, not provable). For instance,

    not angry(john)
means it cannot be proven that john is angry. That is, with respect to our view of the world, it is not currently believed that john is angry. In fact, john may be angry, or john may not be angry; however, with respect to our knowledge (as embodied by our database or logic program), neither of these facts can be determined. (Actually, this statement is too strong, and a more correct statement is beyond this chapter. However, the intuitive meaning of this statement can be easily implemented.) The first example states that it is definite that john is not angry; the second example states that it is not believed that john is angry. The second is a far weaker statement about john’s anger. In the first example, it is the presence of information that causes us to hold this belief; in the second example, it is the absence of information that causes us to hold this belief. This is a vital distinction. It is weak negation that provides the ability to represent and reason explicitly about unknown or uncertain information. It is important to note that weak negation is part of an inference mechanism and is not at all mutually exclusive with or in competition with other forms of uncertain reasoning, such as fuzzy sets, certainty factors, neural nets, or probability theory. Further, not only can we represent the fact that there is an absence of information, but we can also use such a fact to infer new information. Consider the following:

    safe(sue) ← not angry(john)

which states that sue is safe if it is not known that john is angry. (Perhaps sue knows that if john were angry, she would know about it, because john is so transparent.)

The Confusion

These forms of negation in logic programming, and in particular weak negation, can easily be confused with negation in SQL. It is easy to see how such a confusion can arise. First of all, there is a syntactic similarity between weak negation’s operator not and SQL’s not exists. Secondly, there is an intuitive similarity in that both express the fact that some information is missing. To illustrate these two reasons for the confusion, let us consider the following example.

Example 1

Assume that the following is a database for a company that places its employees on contract with other firms.
EMPLOYEE

    Name     Skill        Availability
    John     carpenter    full-time
    Jay      plumber      full-time
    Mark     manager      full-time
    Sally    accountant   part-time

CURRENTLY ASSIGNED

    Name     Contracting Org.    Hrs. of Contract Remaining
    John     Alltel              500 hours
    Sally    RCA                 120 hours
The EMPLOYEE relation lists all employees. The CURRENTLY_ASSIGNED relation lists those employees that are already out on contract. These relations could be equivalently expressed as a logic program, as in the following:

    employee(john, carpenter, full-time).
    employee(jay, plumber, full-time).
    employee(mark, manager, full-time).
    employee(sally, accountant, part-time).
    currently_assigned(john, alltel, 500).
    currently_assigned(sally, rca, 120).

Suppose we are interested in identifying those employees that are not yet contracted out. In SQL, such a request would be satisfied by the query:

    SELECT *
    FROM employee
    WHERE NOT EXISTS
        (SELECT *
         FROM currently_assigned
         WHERE employee.name = currently_assigned.name);

This query will produce a report on which only jay and mark appear. The exact same information could be gleaned from a logic program using the following rule:

    available_for_work(Name, Skill, Availability) ←
        employee(Name, Skill, Availability),
        not currently_assigned(Name, Contracting_org, Hours_remaining).

This rule states that those employees available for work (contract) are those not currently assigned. In this example, the two paradigms produce exactly the same results. It is easy to see the syntactic similarity: SQL uses the form NOT EXISTS (... currently_assigned), and logic programming uses the form not currently_assigned. In both cases, not appears. It is also easy to see the intuitive similarity: in both cases we trigger on the fact that something is missing. As we will see in the next section, the similarity ends here.
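As an aside for readers who wish to experiment, Example 1 runs essentially as written in a standard Prolog system such as SWI-Prolog, whose \+ operator implements negation-as-failure; for a stratified program like this one, \+ produces the same answers as the weak negation discussed in this chapter. The following is a minimal sketch, not part of the original example; the hyphenated constants (e.g., full-time) are rewritten as full_time because hyphens are not legal in Prolog atoms.

    % Facts from the EMPLOYEE and CURRENTLY_ASSIGNED relations.
    employee(john, carpenter, full_time).
    employee(jay, plumber, full_time).
    employee(mark, manager, full_time).
    employee(sally, accountant, part_time).

    currently_assigned(john, alltel, 500).
    currently_assigned(sally, rca, 120).

    % An employee is available for work if no current assignment
    % can be proven for that employee (negation-as-failure).
    available_for_work(Name, Skill, Availability) :-
        employee(Name, Skill, Availability),
        \+ currently_assigned(Name, _Org, _Hours).

The query ?- available_for_work(N, S, A). then succeeds exactly twice, binding N to jay and to mark, mirroring the report produced by the SQL query.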
COMPARISON OF WEAK NEGATION AND NEGATION IN SQL

Those who are not intimately informed of the semantics of logic programming may fail to see the additional expressive power that logic programming provides over SQL. This section focuses on only one aspect of logic programming–weak negation. The purpose here is to examine, by way of examples, negation in each of these two paradigms to demonstrate the greater expressive power that weak negation provides.

Example 2

Let us return to Example 1 of the previous section. The example, as stated, produces the exact same results as the SQL query. That example uses the logic programming rule:

    available_for_work(Name, Skill, Availability) ←
        employee(Name, Skill, Availability),
        not currently_assigned(Name, Contracting_org, Hours_remaining).

Admittedly, this rule matches our intuition, and it produces the desired results. Both paradigms, SQL and logic programming, yield the same result, that is, that jay and mark are available for work. From this point on, the similarities between SQL and logic programming cease. The remainder of this example and the other examples exceed the expressive power of SQL. If we dropped the goal employee(Name, Skill, Availability), then we would have the following rule:
    available_for_work(Name, Skill, Availability) ←
        not currently_assigned(Name, Contracting_org, Hours_remaining).

This is a much more powerful rule and would have yielded all objects in our Herbrand interpretation that are not currently assigned. Consider, for instance, that we also had the following relation:

FORMER EMPLOYEE

    Name
    Rachel
    Suzzie
    Bob

Then, in addition to jay and mark, this rule would also infer that rachel, suzzie, and bob are available to work. The additional inferences that rachel, suzzie, and bob are also available to work are beyond the expressive ability of SQL. There may be applications where ascribing a property or a relation to the rest of the objects of the Herbrand interpretation is appropriate. Unfortunately, in this example, the rule as it stands would also infer that carpenter, plumber, part-time, etc., are available for work, because these are also objects in our Herbrand interpretation. However, appropriate constraints could be placed upon the rule so that we do get the results we desire. The point is not that the above rule yields counter-intuitive results; the point is that, given the same data, logic programs have the ability to infer far more than what SQL will produce for us. As already mentioned, there may be applications where this unthrottled approach is suitable. Yet, we also have the ability to restrict our inferences to get the same results as does SQL. It is very important to note that with logic programming, we have the power of inference. By contrast, with SQL, we have static reporting capability. Inference is dynamic. In a sense, it updates our database. So regardless of which version of the logic programming rule we use (the rule from Example 1, or the rule from Example 2), the result is a more informed database. It is also important to note that, in some respect, we have inferred something from nothing. (There is not truly nothing, because the definition of a Herbrand interpretation identifies the object constants in our language.) In Example 1, SQL had success in producing results by referring to an existing relation, EMPLOYEE. By contrast, the logic programming rule introduced in Example 2 yielded names that exist in the FORMER_EMPLOYEE relation without ever referring to that relation.
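To make the notion of “appropriate constraints” concrete, here is one hypothetical refinement (not part of the original example), shown in the same executable SWI-Prolog form as the earlier sketch. It guards the rule with a positive goal so that only persons, rather than arbitrary Herbrand objects such as carpenter or part_time, can be inferred to be available; former_employee/1 is an assumed predicate holding the names from the FORMER_EMPLOYEE relation.

    former_employee(rachel).
    former_employee(suzzie).
    former_employee(bob).

    % Guarded version: the positive goal restricts Name to former
    % employees, throttling the otherwise unconstrained inference.
    available_for_work(Name) :-
        former_employee(Name),
        \+ currently_assigned(Name, _Org, _Hours).

With this guard, ?- available_for_work(N). yields exactly rachel, suzzie, and bob; combined with the Example 1 rule for current employees, it recovers all five intended names while excluding objects such as carpenter and plumber.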
A very powerful feature of weak negation is that it allows us to express uncertainty in the sense that we may express that one or more of several alternatives may be true. This ability induces several competing views of the world. Yet, we still have the ability to reason in spite of these competing views, as demonstrated in this next example.

Example 3

Let us return to the relations from Example 1. Assume that we are recruiting a new hire for a position, and we have narrowed the field down to two candidates: tom and victoria. That is, at this moment, we know that we will hire either tom or victoria, but we do not yet know which. Therefore, with respect to this example, we have two competing views of the world: one in which tom will be hired, and another in which victoria will be hired. Let us lay aside the additional complexities that a temporal database would have (that is, the ability to reason across multiple views of time), and consider that whichever candidate we will hire is available for work (at some time in the future). We have two candidates for what our relations would look like. One view of our world indicates that we will hire tom, represented by the following relations.

EMPLOYEE

    Name     Skill        Availability
    John     carpenter    full-time
    Jay      plumber      full-time
    Mark     manager      full-time
    Sally    accountant   part-time
    Tom      doctor       full-time

CURRENTLY ASSIGNED

    Name     Contracting Org.    Hrs. of Contract Remaining
    John     Alltel              500 hours
    Sally    RCA                 120 hours
Given this view of the world (with respect to this example), the same SQL query or the same logic programming rule from Example 1 would inform us that jay, mark, and tom have not been assigned to a contract and are available for work. The other view of our world indicates that we will hire victoria. The relations in this view of the world are represented by the following:
EMPLOYEE

    Name      Skill        Availability
    John      carpenter    full-time
    Jay       plumber      full-time
    Mark      manager      full-time
    Sally     accountant   part-time
    Victoria  doctor       full-time

CURRENTLY ASSIGNED

    Name     Contracting Org.    Hrs. of Contract Remaining
    John     Alltel              500 hours
    Sally    RCA                 120 hours
Given this view of the world, the same SQL query or the same logic programming rule from Example 1 would inform us that jay, mark, and victoria have not been assigned to a contract and are available for work. Clearly, the results of these two queries are different. Relational database technology does not have the capacity to represent these two competing ways of viewing the world. Certainly, if we cannot represent it, we cannot query against it with SQL. By contrast, this information is very easily represented by the following logic program:

    employee(john, carpenter, full-time).
    employee(jay, plumber, full-time).
    employee(mark, manager, full-time).
    employee(sally, accountant, part-time).
    currently_assigned(john, alltel, 500).
    currently_assigned(sally, rca, 120).
    employee(tom, doctor, full-time) ←
        not employee(victoria, doctor, full-time).
    employee(victoria, doctor, full-time) ←
        not employee(tom, doctor, full-time).

This logic program has two belief sets: one in which tom is considered an employee, and one in which victoria is considered an employee. (Note that in terms of logical entailment, the order of the rules is completely unimportant. The new information has been added to the bottom to make it easier to identify the differences between the examples.) In addition to being able to represent and reason about different views of the world, logic programs can also reason about sets of items (Beeri, Naqvi, Shmueli and Tsur, 1991; Dovier, Pontelli, and Rossi, 1996; Jones, 1999). That is, a set
can be a term, an object of the universe of discourse. A term is akin to a single cell in a relation. (A single cell being the intersection of a tuple and an attribute.) The concept of a set of values to be taken as the current value of a cell is totally lacking from relational technology. Consider the following example.

Example 4

Let us consider who is on the payroll at a particular company. This company has a basketball team that it supports as a publicity aid. Since the composition of the team constantly changes (personnel turnover, the total number of players fluctuates, etc.), the company has adopted the practice of writing one check to the team as a whole, and the team distributes those proceeds as it sees fit. So, those on the payroll include the employees of the company and the basketball team (which is treated as a single entity). The following represents the basketball team:

    basketball_player(tony).
    basketball_player(hakim).
    basketball_player(clyde).
    basketball_player(michael).
    basketball_player(stuart).
    basketball_player(henry).

The following rules (appealing to the information from Example 1) represent who is on payroll.

    on_payroll(Name) ←
        employee(Name, Skill, Availability).
    on_payroll(Set) ←
        setof(Name, basketball_player(Name), Set).

The intensional relation on_payroll is the following:

    on_payroll(john).
    on_payroll(jay).
    on_payroll(mark).
    on_payroll(sally).
    on_payroll({tony, hakim, clyde, michael, stuart, henry}).

We see that there are five entities on payroll: john, jay, mark, sally, and the set of players {tony, hakim, clyde, michael, stuart, henry}.
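As a practical aside (not in the original chapter), ordinary Prolog systems such as SWI-Prolog already provide the setof/3 construct used above, with the caveat that the collected set is represented as a sorted list rather than a true set term, so the example can be tried directly:

    basketball_player(tony).
    basketball_player(hakim).
    basketball_player(clyde).
    basketball_player(michael).
    basketball_player(stuart).
    basketball_player(henry).

    % Payroll: individual employees plus the team as one entity.
    on_payroll(Name) :- employee(Name, _Skill, _Availability).
    on_payroll(Team) :- setof(Name, basketball_player(Name), Team).

Assuming the employee/3 facts from the first sketch are also loaded, the query ?- on_payroll(X). yields john, jay, mark, sally, and the single list [clyde, hakim, henry, michael, stuart, tony].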
This ability to represent and reason about collections of objects is very important. Further, the ability to define such collections intensionally (that is, by a somewhat mathematical definition rather than explicitly listing the collection) is very powerful. Allowing the intensional definition to use the full power of logic programming formulae, including weak negation, is powerful indeed. The following example shows the construction of such a set using weak negation.

Example 5

Continuing from the previous example, let us assume that this same company also provides athletic scholarships to the local university. The company has the stipulation that recipients of the athletic scholarship must also play for the company team. However, since the individual is receiving a scholarship, the individual is not to receive compensation from the team. In this strange world that we are constructing, this does not violate NCAA rules. The following fact represents that henry is on scholarship.

    on_scholarship(henry).

Now let us rewrite the rules for defining who is on payroll.

    on_payroll(Name) ←
        employee(Name, Skill, Availability).
    on_payroll(Set) ←
        setof(Name, (basketball_player(Name), not on_scholarship(Name)), Set).

Now the intensional relation on_payroll is the following:

    on_payroll(john).
    on_payroll(jay).
    on_payroll(mark).
    on_payroll(sally).
    on_payroll({tony, hakim, clyde, michael, stuart}).

In the previous example, henry was among the set of basketball players considered on payroll. In this example, henry is not among the set of basketball players considered on payroll. This difference has been achieved by the use of weak negation in the intensional set definition, all of which is not achievable in relational databases and SQL.
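The same filtered set can again be tried in SWI-Prolog (a sketch under the same assumptions as the earlier sketches, not the chapter’s own code); Prolog’s \+ plays the role of weak negation inside the setof/3 template:

    on_scholarship(henry).

    % Scholarship players are filtered out of the paid team.
    % (These clauses replace the earlier on_payroll/1 definition.)
    on_payroll(Name) :- employee(Name, _Skill, _Availability).
    on_payroll(Team) :-
        setof(Name,
              ( basketball_player(Name),
                \+ on_scholarship(Name) ),
              Team).

Now ?- on_payroll(X). returns the list [clyde, hakim, michael, stuart, tony], with henry excluded.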
SUMMARY AND FUTURE WORK

This chapter has examined the simplest form of weak negation–that which appears in extended logic programs. Even extended logic programs are beyond current practice in terms of expressive power. There are yet more powerful semantics for logic programs that introduce additional opportunities for weak negation to represent different classes of problems. This present work could be extended to consider those more powerful languages. In comparing weak negation with negation in SQL, we have seen by example that weak negation can represent the same information as that represented by SQL. We have also seen several other examples where weak negation can represent problems not able to be expressed in relational databases. These examples include: the fact that, in its simplest form, weak negation can perform more inferences than SQL (appealing to ground terms in the Herbrand interpretation); the fact that weak negation allows us to represent multiple views of the world; the fact that extended logic programs (which are the minimum platform for weak negation) can be extended to allow representation of sets (extensionally and intensionally); and the fact that weak negation can be used in those intensional set definitions. There are other ways that this present work can be expanded. Future work can recast the present work into a more formal work with proofs. Considering a totally different matter, a concept related to negation is missing information. Future work could examine the relationship between null values in database technology and weak negation. Further, SQL does allow nesting of subqueries. (Such would be of the form of a subquery that uses the “not exists” clause.) Perhaps the limits of nesting should be considered. Very much related to this is the idea that SQL deals with only one relation at a time, and deals sequentially and deterministically with multiple relations. In logic programming, multiple relations can easily be dealt with in all possible combinations of Horn clauses, strong negation and weak negation, and order does not matter (to semantics). Further, parallelism does not affect the semantics. The comparison between database technology and logic programming could be continued along these lines of nesting, and considering multiple relations nondeterministically.
REFERENCES

Apt, K. R., and Bol, R. N. (1994). Logic Programming and Negation: A Survey, Journal of Logic Programming, Vol. 19/20, May/July.
Beeri, H. C., Naqvi, S., Shmueli, O., and Tsur, S. (1991). Set Constructors in a Logic Database Language, Journal of Logic Programming, 10(3):181-232.
Connolly, T., and Begg, C. (1999). Database Systems: A Practical Approach to Design, Implementation, and Management, (2nd Ed.), Addison-Wesley.
Dovier, A., Pontelli, E., and Rossi, G. (1996). {log}: A Language for Programming in Logic with Finite Sets, Journal of Logic Programming, 28(1):1-44.
Gelfond, M., and Lifschitz, V. (1990). Logic Programs with Classical Negation. In D. Warren and P. Szeredi (Eds.), Logic Programming: Proceedings of the 7th Int’l Conf.
Jones, J. D. (1999). A Declarative Semantics for Sets Based on the Stable Model Semantics, Declarative Programming with Sets (DPS ’99), in conjunction with Principles, Logics, and Implementations of High-Level Programming Languages, Paris, France.
Lifschitz, V. (1989). Logical Foundations of Deductive Databases, Information Processing 89, North-Holland.
Moravec, H. (1999). Robot: Mere Machine to Transcendent Mind, New York, NY: Oxford University Press.
Chapter 14
Knowledge Management and New Organization Forms: A Framework for Business Model Innovation

Yogesh Malhotra
@Brint.com, L.L.C. and Syracuse University, USA
The concept of knowledge management is not new in information systems practice and research. However, radical changes in the business environment have suggested limitations of the traditional information-processing view of knowledge management. Specifically, it is being realized that the programmed nature of the heuristics underlying such systems may be inadequate for coping with the demands imposed by the new business environments, which are characterized not only by a rapid pace of change but also by the discontinuous nature of such change. The new business environment, characterized by dynamically discontinuous change, requires a reconceptualization of knowledge management as it has been understood in information systems practice and research. One such conceptualization is proposed in the form of a sense-making model of knowledge management for new business environments. Application of this framework will facilitate the business model innovation necessary for sustainable competitive advantage in the new business environment characterized by a dynamic, discontinuous and radical pace of change.

“People bring imagination and life to a transforming technology.”
— Business Week, The Internet Age (Special Report), October 4, 1999, p. 108
Previously Published in Knowledge Management and Virtual Organizations edited by Yogesh Malhotra, Copyright © 2000, Idea Group Publishing.
The traditional organizational business model, driven by pre-specified plans and goals, aimed to ensure optimization and efficiencies based primarily on building consensus, convergence and compliance. Organizational information systems, as well as related performance and control systems, were modeled on the same paradigm to enable convergence by ensuring adherence to organizational routines built into formal and informal information systems. Such routinization of organizational goals for realizing increased efficiencies was suitable for an era marked by a relatively stable and predictable business environment. However, this model is increasingly inadequate in the e-business era, which is often characterized by an increasing pace of radical and unforeseen change in the business environment (Arthur, 1996; Barabba, 1998; Malhotra, 1998b; Kalakota and Robinson, 1999; Nadler et al., 1995). The new era of dynamic and discontinuous change requires continual reassessment of organizational routines to ensure that organizational decision-making processes, as well as underlying assumptions, keep pace with the dynamically changing business environment. This issue poses an increasing challenge as the ‘best practices’ of yesterday turn into the ‘worst practices’ of today and core competencies turn into core rigidities. The changing business environment, characterized by dynamically discontinuous change, requires a reconceptualization of knowledge management systems as they have been understood in information systems practice and research. One such conceptualization is proposed in this article in the form of a framework for developing organizational knowledge management systems for business model innovation. It is anticipated that application of this framework will facilitate development of new business models that are better suited to the new business environment characterized by a dynamic, discontinuous and radical pace of change. The popular technology-centric interpretations of knowledge management prevalent in most of the information technology research and trade press are reviewed in the next section. The problems and caveats inherent in such interpretations are then discussed. The subsequent section discusses the demands imposed by the new business environments that require rethinking such conceptualizations of knowledge management and related information technology-based systems. One conceptualization for overcoming the problems of prevalent interpretations and related assumptions is then discussed along with a framework for developing new organization forms and innovative business models. Subsequent discussion explains how the application of this framework can facilitate development of new business models that are
better suited to the dynamic, discontinuous and radical pace of change characterizing the new business environment.

Figure 1: Information Processing Paradigm: Old World of Business (a risk/return plot in which Automation, Rationalization and Reengineering mark successively higher risk and higher return)
KNOWLEDGE MANAGEMENT: THE INFORMATION PROCESSING PARADIGM

The information-processing view of knowledge management has been prevalent in information systems practice and research over the last few decades. This perspective originated in an era when the business environment was less vacillating, when products and services and the corresponding core competencies had a long, multi-year shelf life, and when organization and industry boundaries were clearly demarcated over the foreseeable future. The relatively structured and predictable business and competitive environment rewarded firms’ focus on economies of scale. Such economies of scale were often based on a high level of efficiencies of scale in the absence of any impending threat of rapid obsolescence of product and service definitions, or of the demarcations of existing organizational and industry boundaries. The evolution of the information-processing paradigm over the last four decades to build intelligence and manage change in business functions and processes has generally progressed over three phases:
1. Automation: increased efficiency of operations;
2. Rationalization of procedures: streamlining of procedures and elimination of obvious workflow bottlenecks for enhanced efficiency of operations; and,
3. Reengineering: radical redesign of business processes that depends upon information technology-intensive radical redesign of workflows and work processes.

The information-processing paradigm has been prevalent over all three phases, which have been characterized by technology-intensive, optimization-driven, efficiency-seeking organizational change (Malhotra 1999c, 1999d, in press). Deployment of information technologies in all three phases was based on a relatively predictable view of products and services as well as of the contributing organizational and industrial structures. Despite the increase in risks and corresponding returns across the three kinds of information technology-enabled organizational change, there was little, if any, emphasis on business model innovation, that is, on ‘rethinking the business,’ as illustrated in Figure 1. Based on the consensus- and convergence-oriented view of information systems, the information-processing view of knowledge management is often characterized by benchmarking and the transfer of best practices (cf. Allee, 1997; O’Dell and Grayson, 1998). The key assumptions of the information-processing view are often based on the premise of the generalizability of issues across the temporal and contextual frames of diverse organizations. Such interpretations have often assumed that the adaptive functioning of the organization can be based on explicit knowledge archived in corporate databases and technology-based knowledge repositories (cf. Applegate et al., 1988, p. 44; italics added for emphasis): “Information systems will maintain the corporate history, experience and expertise that long-term employees now hold. The information systems themselves — not the people — can become the stable structure of the organization. People will be free to come and go, but the value of their experience will be incorporated in the systems that help them and their successors run the business.” The information-processing view, evident in scores of definitions of knowledge management in the trade press, has considered organizational memory of the past as a reliable predictor of the dynamically and discontinuously changing business environment. Most such interpretations have also made simplistic assumptions about storing the past knowledge of individuals in the form of routinized rules-of-thumb and best practices for guiding future action. A representative compilation of such interpretations of knowledge management is listed in Table 1.
Table 1. Knowledge Management: The Information Processing Paradigm

• The process of collecting, organizing, classifying and disseminating information throughout an organization, so as to make it purposeful to those who need it. (Midrange Systems: Albert, 1998)
• Policies, procedures and technologies employed for operating a continuously updated linked pair of networked databases. (Computerworld: Anthes, 1991)
• Partly as a reaction to downsizing, some organizations are now trying to use technology to capture the knowledge residing in the minds of their employees so it can be easily shared across the enterprise. Knowledge management aims to capture the knowledge that employees really need in a central repository and filter out the surplus. (Forbes: Bair, 1997)
• Ensuring a complete development and implementation environment designed for use in a specific function requiring expert systems support. (International Journal of Bank Marketing: Chorafas, 1987)
• Knowledge management IT concerns organizing and analyzing information in a company’s computer databases so this knowledge can be readily shared throughout a company, instead of languishing in the department where it was created, inaccessible to other employees. (CPA Journal, 1998)
• Identification of categories of knowledge needed to support the overall business strategy, assessment of the current state of the firm’s knowledge and transformation of the current knowledge base into a new and more powerful knowledge base by filling knowledge gaps. (Computerworld: Gopal & Gagnon, 1995)
• Combining indexing, searching, and push technology to help companies organize data stored in multiple sources and deliver only relevant information to users. (Information Week: Hibbard, 1997)
• Knowledge management in general tries to organize and make available important know-how, wherever and whenever it’s needed. This includes processes, procedures, patents, reference works, formulas, “best practices,” forecasts and fixes. Technologically, intranets, groupware, data warehouses, networks, bulletin boards and videoconferencing are key tools for storing and distributing this intelligence. (Computerworld: Maglitta, 1996)
• Mapping knowledge and information resources both online and offline; training, guiding and equipping users with knowledge access tools; monitoring outside news and information. (Computerworld: Maglitta, 1995)
• Knowledge management incorporates intelligent searching, categorization and accessing of data from disparate databases, e-mail and files. (Computer Reseller News: Willett & Copeland, 1998)
• Understanding the relationships of data; identifying and documenting rules for managing data; and assuring that data are accurate and maintain integrity. (Software Magazine: Strapko, 1990)
• Facilitation of the autonomous coordinability of decentralized subsystems that can state and adapt their own objectives. (Human Systems Management: Zeleny, 1987)
Based primarily upon a static and ‘syntactic’ notion of knowledge, such representations have often specified the minutiae of machinery while disregarding how people in organizations actually go about acquiring, sharing and creating new knowledge (Davenport 1994). By considering the meaning of
knowledge as “unproblematic, predefined, and prepackaged” (Boland 1987), such interpretations of knowledge management have ignored the human dimension of organizational knowledge creation. Prepackaged or taken-for-granted interpretation of knowledge works against the generation of the multiple and contradictory viewpoints that are necessary for meeting the challenge posed by wicked environments characterized by radical and discontinuous change: this may even hamper the firm’s learning and adaptive capabilities (Gill 1995). A key motivation of this article is to address the critical processes of creation of new knowledge and renewal of existing knowledge and to suggest a framework that can provide the philosophical and pragmatic bases for better representation and design of organizational knowledge management systems.

Philosophical Bases of the Information-Processing Model

Churchman (1971) had interpreted the viewpoints of the philosophers Leibnitz, Locke, Kant, Hegel and Singer in the context of designing information systems. Mason and Mitroff (1973) had made preliminary suggestions for designing information systems based on Churchman’s framework. A review of Churchman’s inquiring systems, in the context of the extant thinking on knowledge management, underscores the limitations of the dominant model of inquiring systems being used by today’s organizations. Most technology-based conceptualizations of knowledge management have been primarily based upon heuristics – embedded in procedure manuals, mathematical models or programmed logic – that, arguably, capture the preferred solutions to the given repertoire of the organization’s problems. Following Churchman, such systems are best suited for:
a) well-structured problem situations for which there exists a strong consensual position on the nature of the problem situation, and
b) well-structured problems for which there exists an analytic formulation with a solution.

Figure 2: From Best Practices to Paradigm Shifts (Automation: replacing humans with machines; Rationalization: streamlining bottlenecks in workflows; Reengineering: IT-intensive radical redesign; culminating in “re-everything” paradigm shifts)

Type (a) systems are classified as Lockean inquiry systems and Type (b) systems are classified as Leibnitzian inquiry systems. Leibnitzian systems are closed systems without access to the external environment; they operate based on given axioms and may fall into competency traps based on diminishing returns from the ‘tried and tested’ heuristics embedded in the inquiry processes. In contrast, Lockean systems are based on consensual agreement and aim to reduce the equivocality embedded in diverse interpretations of the world-view. However, in the absence of a consensus, these inquiry systems also tend to fail. The convergent and consensus-building emphasis of these two kinds of inquiry systems is suited for stable and predictable organizational environments. However, a wicked environment imposes the need for the variety and complexity of interpretations that are necessary for deciphering the multiple world-views of the uncertain and unpredictable future.
BEYOND EXISTING MYTHS ABOUT KNOWLEDGE MANAGEMENT

The information-processing view of knowledge management has propagated some dangerous myths about knowledge management. Simplistic representations of knowledge management that appear in the popular press may result in misdirected investments and system implementations that do not yield the expected returns on investment (Strassmann, 1997, 1999). Given the impending backlash against such simplistic representations of knowledge management (cf. Garner 1999), it is critical to analyze the myths underlying the ‘successful’ representations of knowledge management that worked in a bygone era. There are three dominant myths based on the information-processing logic that are characteristic of most popular knowledge management interpretations (Hildebrand, 1999 – an interview of the author with CIO Enterprise magazine).

Myth 1: Knowledge management technologies can deliver the ‘right information’ to the ‘right person’ at the ‘right time.’ This idea applies to an outdated business model. Information systems in the old industrial model mirror the notion that businesses will change incrementally in an inherently stable market, and that executives can foresee change by examining the past. The new business model of the Information Age, however, is marked by fundamental, not incremental, change. Businesses can’t plan long-term; instead, they must shift to a more flexible “anticipation-of-surprise” model. Thus, it’s
impossible to build a system that predicts who the right person at the right time even is, let alone what constitutes the right information.

Myth 2: Knowledge management technologies can store human intelligence and experience. Technologies such as databases and groupware applications store bits and pixels of data, but they can’t store the rich schemas that people possess for making sense of data bits. Moreover, information is context-sensitive. The same assemblage of data can evoke different responses from different people. Even the same assemblage of data, when reviewed by the same person at a different time or in a different context, could evoke differing responses in terms of decision-making and action. Hence, storing a static representation of the explicit representation of a person’s knowledge – assuming one has the willingness and the ability to part with it – is not tantamount to storing human intelligence and experience.

Myth 3: Knowledge management technologies can distribute human intelligence. Again, this assertion presupposes that companies can predict the right information to distribute and the right people to distribute it to. As noted earlier, for most important business decisions, technologies cannot communicate the meaning embedded in complex data as it is constructed by human minds. This does not preclude the use of information technologies for rich exchange between humans to make sense of bits and pixels. However, the dialog that surfaces the meaning embedded in information is an intrinsic human property, not a property of the technology that may facilitate the process. Often it is assumed that the compilation of data in a central repository would somehow ensure that everyone who has access to that repository is capable of, and willing to, utilize the information stored therein. Past research on this issue has shown that despite the availability of comprehensive reports and databases, most executives take decisions based on their interactions with others who they think are knowledgeable about the issues. Furthermore, the assumption of a singular meaning of information, though desirable for seeking efficiencies, precludes the creative abrasion and creative conflict that is necessary for business model innovation. Similarly, data archived in technological ‘knowledge repositories’ does not allow for the renewal of existing knowledge and the creation of new knowledge.

The above observations seem consistent with observations by industry experts such as John Seely Brown (1997): “In the last 20 years, U.S. industry has invested more than $1 trillion in technology, but has realized little improvement in the efficiency of its knowledge workers and virtually none in their effectiveness.” Given the dangerous perception of knowledge management as seamlessly entwined with technology, “its true critical success factors will be
lost in the pleasing hum of servers, software and pipes” (Hildebrand, 1999). Hence, it is critical to focus attention on the critical success factors that are necessary for business model innovation. To distinguish it from the information-processing paradigm of knowledge management discussed earlier, the proposed paradigm will be denoted as the sense-making paradigm of knowledge management. This proposed framework is based on Churchman’s (1971, p. 10) explicit recognition that “knowledge resides in the user and not in the collection of information…it is how the user reacts to a collection of information that matters.” Churchman’s emphasis on the human nature of knowledge creation seems more pertinent today than it seemed 25 years ago, given the increasing prevalence of the ‘wicked’ environment characterized by discontinuous change (Nadler & Shaw 1995) and a “wide range of potential surprise” (Landau & Stout 1979). Such an environment defeats the traditional organizational response of predicting and reacting based on pre-programmed heuristics. Instead, it demands more anticipatory responses from the organization members, who need to carry out the mandate of a faster cycle of knowledge creation and action based on the new knowledge (Nadler & Shaw 1995).

Philosophical Bases of the Proposed Model

Churchman had proposed two alternative kinds of inquiry systems that are particularly suited for the multiplicity of world-views needed for radically changing environments: Kantian inquiry systems and Hegelian inquiry systems.
Figure 3: Paradigm Shifts: New World of Business (a risk/return plot in which Automation, Rationalization and Reengineering are followed by Paradigm Shifts, positioned at roughly 70% risks and 70% returns)

Kantian inquiry systems attempt to give multiple explicit views of a complementary nature and are best suited for moderately ill-structured problems. However, given that there is no explicit opposition to the multiple views, these systems may also be afflicted by competency traps characterized by a plurality of complementary solutions. In contrast, Hegelian inquiry systems are based on a synthesis of multiple, completely antithetical representations that are characterized by intense conflict because of their contrary underlying assumptions. Knowledge management systems based upon Hegelian inquiry systems would facilitate multiple and contradictory interpretations of the focal information. This process would ensure that ‘best practices’ are subject to continual re-examination and modification given the dynamically changing business environment. Given the increasingly wicked nature of the business environment, there seems to be an imperative need for consideration of the Kantian and Hegelian inquiring systems that can provide multiple, diverse, and contradictory interpretations. Such systems, by generating multiple semantic views of a future characterized by an increasingly rapid pace of discontinuous change, would facilitate the anticipation of surprise (Kerr, 1995) over prediction. They are most suited for dialectical inquiry based on dialogue: “meaning passing or moving through...a free flow of meaning between people...” (Bohm, cited in Senge, 1990). As explained in the following discussion, the critical role of the individual and social processes underlying the creation of meaning (Strombach, 1986, p. 77) is important; without them, dialectical inquiry would not be possible. Therein lies the crucial sense-making role of humans in facilitating knowledge creation in inquiring organizations. By continuously challenging the current ‘company way,’ such systems provide the basis for the ‘creative abrasion’ (Eisenhardt et al., 1997; Leonard, 1997) that is necessary for promoting the radical analysis required for business model innovation. In essence, knowledge management systems based on the proposed model prevent the core capabilities of yesterday from becoming the core rigidities of tomorrow (Leonard-Barton, 1995). It is critical to look at knowledge management beyond its representation as “know what you know and profit from it” (Fryer, 1999) to “obsolete what you know before others obsolete it and profit by creating the challenges and opportunities others haven’t even thought about” (Malhotra, 1999e). This is the new paradigm of knowledge management for the radical innovation needed for sustainable competitive advantage in a business environment characterized by radical and discontinuous change.
KNOWLEDGE MANAGEMENT FOR BUSINESS MODEL INNOVATION: FROM BEST PRACTICES TO PARADIGM SHIFTS

As discussed above, in contrast to the information-processing model based on deterministic assumptions about the future, the sense-making model is more conducive to sustaining competitive advantage in the “world of re-everything” (Arthur, 1996). Without such radical innovation, one wouldn’t have observed the paradigm shifts in core value propositions served by new business models. Such rethinking of the nature of the business and the nature of the organization itself characterizes the paradigm shifts that are the hallmark of business model innovation. Such paradigm shifts will account for about 70 percent of the previously unforeseen competitive players that many established organizations will encounter in their future (Hamel, 1997). Examples of such new business models include Amazon.com and eToys, relatively new entrants that are threatening traditional business models embodied in organizations such as Barnes and Noble and Toys R Us. Such business model innovations represent ‘paradigm shifts’ that characterize not transformation at the level of business processes and process workflows, but a radical rethinking of the business as well as of the dividing lines between organizations and industries. Such paradigm shifts are critical for overcoming managers’ “blindness to developments occurring outside their core [operations and business segments]” and for tapping the opportunities in the “white spaces” that lie between existing markets and operations (Moore, 1998). The notions of ‘best practices’ and ‘benchmarking’ relate to the model of organizational controls that are “built, a priori, on the principle of closure” (Landau & Stout 1979, p. 150; Stout 1980) to seek compliance to, and convergence of, the organizational decision-making processes (Flamholtz et al. 1985). However, the decision rules embedded in ‘best practices’ assume the character of predictive ‘proclamations’ which draw their legitimacy from vested authority, not because they provide adequate solutions (Hamel & Prahalad 1994, p. 145). Challenges to such decision rules tend to be perceived as challenges to the authority embedded in ‘best practices’ (Landau 1973). Hence, such ‘best practices’ that ensure conformity by ensuring task definition, measurement and control also inhibit creativity and initiative (Bartlett & Ghoshal 1995; Ghoshal & Bartlett 1995). The system that is structured as a ‘core capability’ suited to a relatively static business environment turns into a ‘core rigidity’ in a discontinuously changing business
environment. Despite the transient efficacy of ‘best practices,’ the cycle of doing “more of the same” tends to result in locked-in behavior patterns that eventually sacrifice organizational performance at the altar of the organizational “death spiral” (Nadler and Shaw 1995, p. 12-13). In the e-business era, which is increasingly characterized by faster cycle time, greater competition, and lesser stability, certainty and predictability, any kind of consensus cannot keep pace with the dynamically discontinuous changes in the business environment (Bartlett and Ghoshal, 1995; Drucker, 1994; Ghoshal and Bartlett, 1996). With its key emphasis on the obedience of rules embedded in ‘best practices’ and ‘benchmarks’ at the cost of correction of errors (Landau and Stout, 1979), the information-processing model of knowledge management limits creation of new organizational knowledge and impedes renewal of existing organizational knowledge. Most of the innovative business models such as Cisco and Amazon.com didn’t devolve from the best practices or benchmarks of the organizations of yesterday that they displaced, but from radical re-conceptualization of the nature of the business. These paradigm shifts are also increasingly expected to challenge the traditional concepts of organization and industry (Mathur and Kenyon, 1997) with the emergence of business ecosystems (Moore, 1998), virtual communities of practice (Hagel and Armstrong, 1997) and infomediaries (Hagel and Singer, 1999).
HUMAN ASPECTS OF KNOWLEDGE CREATION AND KNOWLEDGE RENEWAL
Knowledge management technologies based upon the information-processing model are limited in their capabilities for the creation of new knowledge or the renewal of existing knowledge. No doubt, such technologies provide the optimization-driven, efficiency-seeking behavior needed for high performance and success in a business environment characterized by a predictable and incremental pace of change. Highly integrated technologies such as ERP systems are examples of knowledge management technologies based upon the information-processing model. However, in a radically and discontinuously changing business environment, these technologies fall short of sensing changes that they have not been pre-programmed to sense and are accordingly unable to modify the logic underlying their behavior. Until information systems become capable of anticipating change and changing their basic assumptions (heuristics) accordingly, we will need to rely upon humans to perform the increasingly relevant functions of self-adaptation and knowledge creation.
Figure 4: Knowledge Management for Business Model Innovation [under radical discontinuous change (the wicked environment) and the organizational need for new knowledge creation and knowledge renewal, the figure contrasts two guiding frameworks of knowledge management: the information-processing model, a Lockean/Leibnitzian model of convergence and compliance that is 'tight', optimization-driven and efficiency-oriented, providing efficiencies of scale and scope; and the sense-making model, a Hegelian/Kantian model of divergent meanings that is 'loose', providing the agility and flexibility needed for knowledge creation and knowledge renewal]
The vision of information systems that can autonomously revamp their past history based upon their anticipation of future change is as yet far from reality (Wolpert, 1996). Given the constraints inherent in the extant mechanistic (programmed) nature of technology, the human element assumes greater relevance for maintaining the currency of the programmed heuristics (programmed routines based upon previous assumptions).
Therefore, the human function of providing a reality check, by means of the repeated questioning, interpretation and revision of the assumptions underlying the information system, assumes an increasingly important role in an era marked by discontinuous change. The human aspects of knowledge creation and knowledge renewal that are difficult, if not impossible, to replace completely with knowledge management technologies are listed below.
• Imagination and creativity latent in human minds
• Untapped tacit dimensions of knowledge creation
• Subjective and meaning-making basis of knowledge
• Constructive aspects of knowledge creation and renewal
The following discussion explains these issues in greater detail and suggests how they can help overcome the limitations of the information-processing model of knowledge management.
Imagination and Creativity Latent in Human Minds: Knowledge management solutions characterized by the memorization of 'best practices' may tend to define the assumptions that are embedded not only in information databases, but also in the organization's strategy, reward systems and resource allocation systems. The hardwiring of such assumptions in organizational knowledge bases may lead to perceptual insensitivity (Hedberg et al., 1976) of the organization to the changing environment. Institutionalizing 'best practices' by embedding them in information technology might facilitate the efficient handling of routine, 'linear,' and predictable situations during stable or incrementally changing environments. However, when change is discontinuous, there is a persistent need for the continuous renewal of the basic premises underlying the 'best practices' stored in organizational knowledge bases. The information-processing model of knowledge management is devoid of the capabilities essential for the continuous learning and unlearning mandated by radical and discontinuous change. A more proactive involvement of human imagination and creativity (March, 1971) is needed to facilitate a greater internal diversity [of the organization] that can match the variety and complexity of the wicked environment.
Untapped Tacit Dimensions of Knowledge Creation: The information-processing model of knowledge management ignores tacit knowledge, which is deeply rooted in action and experience, ideals, values, and emotions (Nonaka & Takeuchi 1995). Although tacit knowledge lies at the very basis of organizational knowledge creation, its nature renders it highly personal and hard to formalize and communicate. Nonaka and Takeuchi (1995) have suggested that knowledge is created through four different modes: (1) socialization, which involves conversion from tacit knowledge to tacit knowledge; (2) externalization, which involves conversion from tacit knowledge to explicit knowledge; (3) combination, which involves conversion from explicit knowledge to explicit knowledge; and (4) internalization, which involves conversion from explicit knowledge to tacit knowledge.
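The direction of each conversion can be kept straight with a small sketch; this is purely illustrative, and the data-structure names are ours, not Nonaka and Takeuchi's:

```python
from dataclasses import dataclass

# The two kinds of knowledge distinguished by Nonaka and Takeuchi (1995).
TACIT, EXPLICIT = "tacit", "explicit"

@dataclass(frozen=True)
class ConversionMode:
    name: str      # SECI mode
    source: str    # kind of knowledge converted from
    target: str    # kind of knowledge converted to

SECI_MODES = [
    ConversionMode("socialization", TACIT, TACIT),        # shared experience
    ConversionMode("externalization", TACIT, EXPLICIT),   # metaphors, analogies, models
    ConversionMode("combination", EXPLICIT, EXPLICIT),    # combining bodies of knowledge
    ConversionMode("internalization", EXPLICIT, TACIT),   # documents, manuals, stories
]

for mode in SECI_MODES:
    print(f"{mode.name}: {mode.source} -> {mode.target}")
```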
The dominant model of inquiring systems is limited in its ability to foster the shared experience necessary for relating to others' thinking processes, and this limits its utility in socialization. It may, by virtue of its ability to convert tacit knowledge into explicit forms such as metaphors, analogies and models, have some utility in externalization; this utility is, however, restricted by its limited ability to support dialogue or collective reflection. The current model of inquiring systems, apparently, may have a greater role in combination, which involves combining different bodies of explicit knowledge, and in internalization, which involves knowledge transfer through verbalizing or diagramming into documents, manuals and stories. A more explicit recognition of tacit knowledge and related human aspects, such as ideals, values, and emotions, is necessary for developing a richer conceptualization of knowledge management.
Subjective and Meaning-Making Bases of Knowledge Creation: Wicked environments call for the interpretation of new events and the ongoing reinterpretation and reanalysis of the assumptions underlying extant practices. However, the information-processing model of knowledge management largely ignores the important construct of meaning (cf: Boland 1987) as well as its transient and ambiguous nature. 'Prepackaged' or 'taken-for-granted' interpretations of the knowledge residing in organizational memories work against the generation of the multiple and contradictory viewpoints necessary for ill-structured environments. The simplification of contextual information for storage in IT-enabled repositories works against the retention of the complexity of multiple viewpoints, and the institutionalization of definitions and interpretations of events and issues works against the exchange and sharing of diverse perspectives. To some extent the current knowledge management technologies, given their ability to communicate metaphors, analogies and stories by using multimedia technologies, may offer some representation and communication of meaning. However, a more human-centric view of knowledge creation is necessary to enable the interpretative, subjective and meaning-making nature of knowledge creation. Investing in multiple and diverse interpretations is expected to enable Kantian and Hegelian modes of inquiry and, thus, to lessen oversimplification and premature decision closure.
Constructive Aspects of Knowledge Creation and Renewal: The information-processing model of knowledge management ignores the constructive nature of knowledge creation and instead assumes a pre-specified meaning of the memorized 'best practices,' devoid of ambiguity or contradiction.
It ignores the critical process that translates information into the meaning and action necessary for knowledge-based performance (Malhotra, 1999a; Malhotra & Kirsch, 1996; Bruner, 1973; Dewey, 1933; Strombach, 1986). The dominant model of inquiring systems downplays the constructive nature of knowledge creation and action. For most ill-structured situations, it is difficult to ensure a unique interpretation of the 'best practices' residing in information repositories, since knowledge is created by individuals in the process of using that data. Even if pre-specified interpretations were possible, they would be problematic whenever future solutions need to be thought afresh or developed in discontinuation from past solutions. Interestingly, the constructive aspect of knowledge creation is also expected to enable the multiple interpretations that can facilitate the organization's anticipatory response to discontinuous change.
CONCLUSIONS AND RECOMMENDATIONS FOR FUTURE RESEARCH
The proposed sense-making model of knowledge management enables an organizational knowledge creation process that is "both participative and anticipative" (Bennis and Nanus, 1985, p. 209). Instead of a formal rule- or procedure-based, step-by-step rational guide, this model favors a "set of guiding principles" for helping people understand "not how it should be done" but "how to understand what might fit the situation they are in" (Kanter, 1983, pp. 305-306). This model assumes the existence of "only a few rules, some specific information and a lot of freedom" (Margaret Wheatley, cited in Stuart, 1995). One organization that has proven the long-term success of this approach is Nordstrom, the retailer with a sustained reputation for its high level of customer service. Surprisingly, the excellence of this organization derives from its one-sentence employee policy manual, which states (Taylor, 1994): "Use your good judgment in all situations. There will be no additional rules." The primary responsibility of most supervisors is to continuously coach employees in this philosophy in carrying out the organizational pursuit of "serving the customer better" (Peters, 1989, p. 379).
The proposed model, illustrated in Figure 4, is anticipated to advance the current conception of 'Knowledge-Tone' and related e-business applications (Kalakota and Robinson, 1999) beyond the performance threshold of highly integrated technology-based systems. By drawing upon the strengths of both convergence-driven [Lockean-Leibnitzian] systems and divergence-oriented [Hegelian-Kantian] systems, the proposed model offers a combination of flexibility and agility while retaining the efficiencies of the current technology architecture.
Such systems are loose in the sense that they allow for the continuous reexamination of the assumptions underlying best practices and the reinterpretation of this information. They are tight in the sense that they also allow for efficiencies based on the propagation and dissemination of best practices. Knowledge management systems based on the proposed model do not ignore the notion of 'best practices' per se, but consider the continuous construction and reconstruction of such practices as a dynamic and ongoing process. Such loose-tight knowledge management systems (Malhotra, 1998a) would need to provide not only for the identification and dissemination of best practices, but also for their continuous reexamination. Specifically, they would need to include a simultaneous process that continuously examines the best practices for their currency, given the changing assumptions about the business environment. Such systems would need to contain both learning and unlearning processes. These simultaneous processes are needed to assure efficiency-oriented optimization based on the current best practices while ensuring that such practices are continuously reexamined for their viability.
Some management experts (cf: Manville and Foote, 1996) have discussed selected aspects of the proposed sense-making model of knowledge management in terms of the shift from the traditional emphasis on transaction processing, integrated logistics and workflows to systems that support competencies for communication building, people networks, trust building and on-the-job learning. Many such critical success factors for knowledge management require a richer understanding of human behavior in terms of people's perceptions about living, learning and working in technology-mediated and cyberspace-based environments. Some experts (cf: Davenport and Prusak, 1997; Romer in Silverstone, 1999) have emphasized formal incentive systems for motivating the loyalty of employees, to sustain the firm's intellectual capital, and the loyalty of customers, to sustain the 'stickiness' of portals. However, given recent findings on the performance and motivation of individuals using such systems (cf: Malhotra, 1998c; Kohn, 1995), these assertions need to be reassessed. The need for a better understanding of the human factors underpinning the performance of knowledge management technologies is also supported by our observation of informal 'knowledge sharing' virtual communities of practice affiliated with various Net-based businesses (cf: Knowledge Management Think Tank at forums.brint.com) and related innovative business models. In most such cyber-communities, success, performance and 'stickiness' are often driven by hi-touch technology environments that effectively address the core value proposition of the virtual community.
It is suggested that the critical success factors of the proposed model of knowledge management for business innovation are supported by a redefinition of 'control' (Flamholtz et al., 1985; Malhotra and Kirsch, 1996; Manz et al., 1987; Manz and Sims, 1989) as it relates to the new living, learning and working environments afforded by emerging business models. Hence, business model innovation needs to be informed by the proposed model of knowledge management, based upon a synergy of the information-processing capacity of information technologies and the sense-making capabilities of humans.
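The loose-tight principle discussed above can be made concrete with a minimal sketch: a hypothetical best-practice repository entry that carries, alongside the practice itself, the environmental assumptions under which it is valid and a review date that forces periodic reexamination. The names and fields are illustrative assumptions, not a prescribed design:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class BestPractice:
    """Hypothetical repository entry in a loose-tight KM system."""
    name: str
    description: str
    assumptions: list[str]   # environmental premises the practice depends on
    review_due: date         # 'loose': forces periodic reexamination
    endorsed: bool = True    # 'tight': endorsed practices are disseminated

    def needs_review(self, today: date, invalidated: set[str]) -> bool:
        # A practice must be reexamined when its review date has passed
        # or when any of its underlying assumptions no longer holds.
        return today >= self.review_due or any(a in invalidated for a in self.assumptions)

practice = BestPractice(
    name="centralized order approval",
    description="Purchase orders above a threshold are routed to one approver.",
    assumptions=["stable supplier base", "weekly ordering cycle"],
    review_due=date(2000, 6, 1),
)
if practice.needs_review(date.today(), invalidated={"weekly ordering cycle"}):
    practice.endorsed = False  # 'unlearning': suspend dissemination until revalidated
```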
REFERENCES
Albert, S. (1998). "Knowledge Management: Living Up To The Hype?" Midrange Systems, 11(13), 52.
Allee, V. (1997). "Chevron Maps Key Processes and Transfers Best Practices," Knowledge Inc., April.
Anthes, G.H. (1991). "A Step Beyond a Database," Computerworld, 25(9), 28.
Applegate, L., Cash, J. & Mills, D.Q. (1988). "Information Technology and Tomorrow's Manager," in McGowan, W.G. (Ed.), Revolution in Real Time: Managing Information Technology in the 1990s, pp. 33-48, Harvard Business School Press, Boston, MA.
Arthur, W.B. (1996). "Increasing Returns and the New World of Business," Harvard Business Review, 74(4), 100-109.
Bair, J. (1997). "Knowledge Management: The Era Of Shared Ideas," Forbes, 1(1) (The Future of IT Supplement), 28.
Barabba, V.P. (1998). "Revisiting Plato's Cave: Business Design in an Age of Uncertainty," in D. Tapscott, A. Lowy & D. Ticoll (Eds.), Blueprint to the Digital Economy: Creating Wealth in the Era of E-Business, McGraw-Hill.
Bartlett, C.A. & Ghoshal, S. (1995). "Changing the Role of Top Management: Beyond Systems to People," Harvard Business Review, May-June, 132-142.
Bennis, W. & Nanus, B. (1985). Leaders: The Strategies for Taking Charge, Harper & Row, New York, NY.
Boland, R.J. (1987). "The In-formation of Information Systems," in R.J. Boland & R. Hirschheim (Eds.), Critical Issues in Information Systems Research, pp. 363-379, Wiley, Chichester.
Bruner, J. (1973). Beyond the Information Given: Studies in the Psychology of Knowing, J.M. Anglin (Ed.), W.W. Norton & Co., New York.
Business Week (1999). The Internet Age (Special Report), October 4.
Chorafas, D.N. (1987). "Expert Systems at the Banker's Reach," International Journal of Bank Marketing, 5(4), 72-81.
Churchman, C.W. (1971). The Design of Inquiring Systems, Basic Books, New York, NY.
CPA Journal (1998). "Knowledge Management Consulting Gives CPAs a Competitive Edge," 68(8), 72.
Davenport, T.H. (1994). "Saving IT's Soul: Human-Centered Information Management," Harvard Business Review, March-April, 119-131.
Davenport, T.H. & Prusak, L. (1997). Working Knowledge: How Organizations Manage What They Know, Harvard Business School Press, Boston, MA.
Dewey, J. (1933). How We Think, D.C. Heath and Company, Boston, MA.
Drucker, P.F. (1994). "The Theory of the Business," Harvard Business Review, September-October, 95-104.
Eisenhardt, K.M., Kahwajy, J.L. & Bourgeois III, L.J. (1997). "How Management Teams Can Have a Good Fight," Harvard Business Review, July-August.
Flamholtz, E.G., Das, T.K. & Tsui, A.S. (1985). "Toward an Integrative Framework of Organizational Control," Accounting, Organizations and Society, 10(1), 35-50.
Fryer, B. (1999). "Get Smart," Inc. Technology, 3, September 15.
Garner, R. (1999). "Please Don't Call it Knowledge Management," Computerworld, August 9.
Ghoshal, S. & Bartlett, C.A. (1995). "Changing the Role of Top Management: Beyond Structure to Processes," Harvard Business Review, January-February, 86-96.
Ghoshal, S. & Bartlett, C.A. (1996). "Rebuilding Behavioral Context: A Blueprint for Corporate Renewal," Sloan Management Review, Winter, 23-36.
Gill, T.G. (1995). "High-Tech Hidebound: Case Studies of Information Technologies that Inhibited Organizational Learning," Accounting, Management and Information Technologies, 5(1), 41-60.
Gopal, C. & Gagnon, J. (1995). "Knowledge, Information, Learning and the IS Manager," Computerworld (Leadership Series), 1(5), 1-7.
Hagel, J. & Armstrong, A.G. (1997). Net Gain: Expanding Markets Through Virtual Communities, Harvard Business School Press, Boston, MA.
Hagel, J. & Singer, M. (1999). Net Worth, Harvard Business School Press, Boston, MA.
Hamel, G. (1997). Keynote address at the Academy of Management Meeting, Boston, MA.
Hamel, G. & Prahalad, C.K. (1994). Competing for the Future, Harvard Business School Press, Boston, MA.
Hedberg, B., Nystrom, P.C. & Starbuck, W.H. (1976). "Camping on Seesaws: Prescriptions for a Self-Designing Organization," Administrative Science Quarterly, 21, 41-65.
Hibbard, J. (1997). "Ernst & Young Deploys App For Knowledge Management," InformationWeek, July 28, 28.
Hildebrand, C. (1999). "Does KM=IT?" CIO Enterprise, September 15. Online version accessible at: http://www.cio.com/archive/enterprise/091599_ic.html.
Kalakota, R. & Robinson, M. (1999). e-Business: Roadmap for Success, Addison-Wesley, Reading, MA.
Kanter, R.M. (1983). The Change Masters: Innovation & Entrepreneurship in the American Corporation, Simon & Schuster, New York, NY.
Kerr, S. (1995). "Creating the Boundaryless Organization: The Radical Reconstruction of Organization Capabilities," Planning Review, September-October, 41-45.
Kohn, A. (1995). Punished by Rewards: The Trouble With Gold Stars, Incentive Plans, A's, Praise, and Other Bribes, Houghton Mifflin Co., Boston, MA.
Landau, M. (1973). "On the Concept of Self-Correcting Organizations," Public Administration Review, November-December, 533-542.
Landau, M. & Stout, R., Jr. (1979). "To Manage is Not to Control: Or the Folly of Type II Errors," Public Administration Review, March-April, 148-156.
Leonard, D. (1997). "Putting Your Company's Whole Brain to Work," Harvard Business Review, July-August.
Leonard-Barton, D. (1995). Wellsprings of Knowledge: Building and Sustaining the Sources of Innovation, Harvard Business School Press, Boston, MA.
Maglitta, J. (1995). "Smarten Up!," Computerworld, 29(23), 84-86.
Maglitta, J. (1996). "Know-How, Inc.," Computerworld, 30(1).
Malhotra, Y. (in press). "From Information Management to Knowledge Management: Beyond the 'Hi-Tech Hidebound' Systems," in K. Srikantaiah & M.E.D. Koenig (Eds.), Knowledge Management for the Information Professional, Information Today, Inc., Medford, NJ.
Malhotra, Y. (1998a). "Toward a Knowledge Ecology for Organizational White-Waters," Invited Keynote Presentation for the Knowledge Ecology Fair 98: Beyond Knowledge Management, February 2-27, accessible online at: http://www.brint.com/papers/ecology.htm.
Malhotra, Y. (1998b). "Deciphering the Knowledge Management Hype," Journal for Quality & Participation, July-August, 58-60.
Malhotra, Y. (1998c). Role of Social Influence, Self Determination and Quality of Use in Information Technology Acceptance and Utilization: A Theoretical Framework and Empirical Field Study, Ph.D. thesis, July, Katz Graduate School of Business, University of Pittsburgh, 225 pages.
Malhotra, Y. (1999a). "Bringing the Adopter Back Into the Adoption Process: A Personal Construction Framework of Information Technology Adoption," Journal of High Technology Management Research, 10(1).
Malhotra, Y. (1999c). "High-Tech Hidebound Cultures Disable Knowledge Management," Knowledge Management (UK), February.
Malhotra, Y. (1999d). "Knowledge Management for Organizational White Waters: An Ecological Framework," Knowledge Management (UK), March.
Malhotra, Y. (1999e). "What is Really Knowledge Management?: Crossing the Chasm of Hype," @Brint.com Web site, September 15. [Letter to the editor in response to Inc. Technology #3, September 15, 1999, special issue on Knowledge Management.] Accessible online at: http://www.brint.com/advisor/a092099.htm.
Malhotra, Y. & Galletta, D.F. (1999b). "Extending the Technology Acceptance Model to Account for Social Influence: Theoretical Bases and Empirical Validation," Proceedings of the Hawaii International Conference on System Sciences (HICSS 32) (Adoption and Diffusion of Collaborative Systems and Technology Minitrack), Maui, HI, January 5-8.
Malhotra, Y. & Kirsch, L. (1996). "Personal Construct Analysis of Self-Control in IS Adoption: Empirical Evidence from Comparative Case Studies of IS Users & IS Champions," Proceedings of the First INFORMS Conference on Information Systems and Technology (Organizational Adoption & Learning Track), Washington, D.C., May 5-8, 105-114.
Manville, B. & Foote, N. (1996). "Harvest Your Workers' Knowledge," Datamation, 42(13), 78-80.
Manz, C.C., Mossholder, K.W. & Luthans, F. (1987). "An Integrated Perspective of Self-Control in Organizations," Administration & Society, 19(1), 3-24.
Manz, C.C. & Sims, H.P. (1989). SuperLeadership: Leading Others to Lead Themselves, Prentice-Hall, Berkeley, CA.
March, J.G. (1971). "The Technology of Foolishness," Civilokonomen, May, 7-12.
Mason, R.O. & Mitroff, I.I. (1973). "A Program for Research on Management Information Systems," Management Science, 19(5), 475-487.
Mathur, S.S. & Kenyon, A. (1997). "Our Strategy is What We Sell," Long Range Planning, 30, June.
Moore, J.F. (1998). "The New Corporate Form," in D. Tapscott (Ed.), Blueprint to the Digital Economy: Creating Wealth in the Era of E-Business, McGraw-Hill, New York, NY, 77-95.
Nadler, D.A. & Shaw, R.B. (1995). "Change Leadership: Core Competency for the Twenty-First Century," in D.A. Nadler, R.B. Shaw & A.E. Walton (Eds.), Discontinuous Change: Leading Organizational Transformation, Jossey-Bass, San Francisco, CA.
Nadler, D.A., Shaw, R.B. & Walton, A.E. (Eds.) (1995). Discontinuous Change: Leading Organizational Transformation, Jossey-Bass, San Francisco, CA.
Nonaka, I. & Takeuchi, H. (1995). The Knowledge-Creating Company, Oxford University Press, New York, NY.
O'Dell, C. & Grayson, C.J. (1998). "If Only We Knew What We Know: Identification And Transfer of Internal Best Practices," California Management Review, 40(3), Spring, 154-174.
Peters, T. (1989). Thriving on Chaos: Handbook for a Management Revolution, Pan Books, London, UK.
Seely-Brown, J. (1997). "The Human Factor," Information Strategy, December 1996-January 1997.
Senge, P.M. (1990). The Fifth Discipline: The Art and Practice of the Learning Organization, Doubleday, New York, NY.
Silverstone, S. (1999). "Maximize Incentives," Knowledge Management, October, 36-37.
Stout, R., Jr. (1980). Management or Control?: The Organizational Challenge, Indiana University Press, Bloomington, IN.
Strapko, W. (1990). "Knowledge Management," Software Magazine, 10(13), 63-66.
Strassmann, P.A. (1997). The Squandered Computer: Evaluating the Business Alignment of Information Technologies, Information Economics Press, New Canaan, CT.
Strassmann, P.A. (1999). "The Knowledge Fuss," Computerworld, October 4.
Strombach, W. (1986). "Information in Epistemological and Ontological Perspective," in C. Mitcham & A. Huning (Eds.), Philosophy and Technology II: Information Technology and Computers in Theory and Practice, D. Reidel Publishing Co., Dordrecht, Holland.
Stuart, A. (1995). "Elusive Assets," CIO, November 15, 28-34.
Taylor, W.C. (1994). "Control in an Age of Chaos," Harvard Business Review, November-December, 72.
Willett, S. & Copeland, L. (1998). "Knowledge Management Key to IBM's Enterprise Plan," Computer Reseller News, July 27, 1, 6.
Wolpert, D.H. (1996). "An Incompleteness Theorem for Calculating the Future," Working Paper, The Santa Fe Institute.
Zeleny, M. (1987). "Management Support Systems," Human Systems Management, 7(1), 59-70.
Chapter 15
Implementing Virtual Organizing in Business Networks: A Method of Inter-Business Networking
Roland Klueber, Rainer Alt and Hubert Österle
University of St. Gallen, Switzerland
Virtual organizations and knowledge management have been discussed on a very broad scale in the literature. However, a holistic view and methods that support the implementation of these concepts are rare. Based on the understanding derived from the literature and on the experience of many action research-based projects, a method is described that addresses these issues for business networks. It covers the dimensions of strategy, process and IS required for establishing and managing business networks. By providing a systematic and documented procedure model, techniques and results, this method aims to improve the efficiency of setting up business networks, thus improving a company's networkability. In order to illustrate why this method is needed and how it can be applied, a project implementing a business-networking solution for electronic procurement is described. It shows how a structured approach helps to identify the scenarios, aids implementation and applies both previously created and newly created knowledge. The outlook describes areas for future research and new developments.
Previously Published in Knowledge Management and Virtual Organizations edited by Yogesh Malhotra, Copyright © 2000, Idea Group Publishing.
INTRODUCTION TO BUSINESS NETWORKING
Essence of Business Networking
Business Networking (BN) has become one of the most powerful strategic business trends. A deconstruction of the economy is taking place, involving a move from vertically integrated hierarchies towards flexible network organizations, and the ability to quickly and efficiently set up, maintain, develop and dissolve partnerships with business partners—a competence we refer to as networkability (Österle et al., 2000)—is a critical success factor. Networkability includes the collaborative advantage termed by Moss-Kanter (1994) "the propensity to be a good partner" and, applied to a specific relationship, aims at pursuing common goals. Achieving networkability is at the heart of Business Networking, which describes the design and management of relationships between (internal or external) business units.
There are two main, highly interrelated driving forces behind the need for Business Networking. First, management is being confronted with trends such as globalization, shorter innovation cycles and deregulation, leading to increasingly dynamic markets. This requires new strategies, such as a core competence focus, outsourcing, and a stronger customer orientation; Business Networking is an inherent element of these strategies. Second, information technology (IT) allows for the efficient exchange of information among organizations and acts as a main enabler of networking among businesses. Wigand et al. describe the consequences as follows: "Classical corporate boundaries are beginning to blur, to change internally as well as externally, and in some cases, even dissolve" (Wigand, Picot, & Reichwald, 1997).
During the last decade companies have integrated their functional information systems (IS) in enterprise resource planning (ERP) systems, which provide an integrated database for various functions such as finance, marketing, and production. These ERP systems have emerged on a large scale and have become the backbone for Business Networking. ERP vendors such as SAP, Baan, Oracle or Peoplesoft are eagerly adding Business Networking functionality for electronic commerce (EC), supply chain management (SCM) and (customer) relationship management (RM). Enhancing and extending existing ERP systems as well as implementing Business Networking strategies is of foremost importance for companies and requires decisions concerning strategy, processes and systems.
Role of Virtual Organizing as a Method in Business Networking
Virtual organizations denote an organizational form which is based on a "temporary network of independent companies – suppliers, customers and rivals – linked by IT to share skills, costs and access to one another's market" (Byrne, 1993; Wang, 1997). This organizational form is largely enabled by IT in order to overcome the limitations of time, space and stable organizational forms (Skyrme, 1998). Following Faucheux (1997), Venkatraman and Henderson (1998) and DeSanctis and Monge (1998), the term virtual organizing is chosen instead of virtual organization in order to emphasize its process-like nature as well as to avoid connotations of a static nature and a limitation to one organizational form. According to Venkatraman and Henderson (1998), virtual organizing is defined as "a strategic approach that is singularly focused on creating, nurturing, and deploying key intellectual and knowledge assets while sourcing tangible, physical assets in a complex network of relationships". As described by Rockart (1998), virtuality allows for both economies of scale and local innovation as well as a 'single face to the customer'.
We see an increasing acceptance of these ideas both in academia and in practice. However, these forms are temporary in nature and require more information and coordination. This can be achieved by a more intensive use of IT, as IT can lead to more coordination-intensive structures due to reduced coordination costs (Malone and Rockart, 1991). This mainly concerns transaction partners and patterns, the products and services exchanged, and the conditions negotiated. Typically, this level is covered by ERP, EC and SCM systems.
Figure 1: Role of Virtual Organizing and Knowledge Management in Business Networking [the figure maps the Business Networking strategies of electronic commerce, supply chain management and relationship management onto the dimensions of virtual organizing: customer interaction, asset configuration and knowledge leverage]
Role of Knowledge Management in Business Networking
Knowledge about setting up relationships with partners, and about their preferences and performance profiles, is not bound to a specific transaction, although some of it may be derived from individual transactions. This is the domain of knowledge management, which aims at making unstructured information and implicit or tacit knowledge available. According to Wiig (1999), "knowledge management is the systematic and explicit management of knowledge-related activities, practices, programs and policies within the enterprise". Multiple phases of the knowledge management process are usually distinguished: goal definition, identification, acquisition, development, distribution, application, maintenance and assessment of knowledge (Probst, Raub, & Romhardt, 1997). In their survey of knowledge management projects, Davenport, De Long and Beers (1998) identified four categories:
• creation of knowledge repositories (e.g., databases with research reports, marketing materials, techniques, competitive intelligence or experiences),
• improvement of knowledge access (e.g., Yellow Pages, competence directories or videoconferencing),
• enhancement of the knowledge environment (e.g., guidelines for performing activities) and
• management of knowledge as an asset (e.g., a patents database for improved monitoring of licensing revenues).
Knowledge management is a key process for establishing and sustaining networkability. Firstly, knowledge about business partners, their processes and their systems is required. Secondly, knowledge of how to set up and configure these relationships allows the rapid identification and qualification of potential partners. Thirdly, it enables an increase in efficiency due to the effects of experience in linking processes and systems. One focus of knowledge management in BN is the identification of relevant cooperative areas and partners, its support with IT and IS tools, and the acquisition of knowledge about partners from transaction systems. Therefore, both concepts, knowledge management and virtual organizing, are inherent elements of Business Networking projects. This argument is supported by the model of Venkatraman and Henderson (1998), who distinguish three dimensions of virtual organizing (Figure 1): customer interaction, asset configuration and knowledge leverage. The advantage of this model is that it already includes knowledge management as an element of all Business Networking strategies.
Business Networking Challenges
Due to the enabling role of IT, Business Networking per se includes the usage of information systems in order to contain increasing coordination costs.1 This represents a major challenge, since Business Networking is not a technological concept and involves decisions concerning strategy and processes as well. As Klein (1996) shows, multiple layers and options have to be included when configuring relationships among businesses. For example, companies can decide to outsource, insource or enter new market segments with Business Networking, and they can decide to pursue various networking strategies such as EC, SCM or RM (Alt, Österle, Reichmayr & Zurmuehlen, 1999). More details need to be discussed at the process level when, for instance, an electronic purchasing service or a vendor managed inventory (VMI) process is being implemented. At the systems level, a variety of tools are on the market for performing these activities, and choosing the right standards strongly determines networkability.
As outlined above, Business Networking involves a variety of complex issues which have to be tackled. More complexity is added by the different nature of inter-business relationships compared to internal relationships (Alt & Fleisch, 1999). In particular, the relative autonomy of business partners permits less direct influence and entails greater potential for conflict. In addition, there is—traditionally—only a lower level of knowledge about the business partner's processes, which can be a result of the more frequent change of partners (lower stability) or of less information being exchanged between partners.
Figure 2: Business Engineering Model Applied to Business Networking [the figure connects two business units at three levels: a business network with a cooperation strategy at the strategy level, a process network with transactions and coordination techniques at the process level, and an IS network with communication links at the IS level]
RESEARCH APPROACH FOR METHOD DEVELOPMENT
In the following we present four conceptual foundations which form the basis for developing a method of inter-Business Networking: action research, business engineering, the business model of the information age and method engineering.
Action Research
In general, a method for Business Networking is only useful when it is applicable in practice. Action research is a research tradition which combines pure (management) action and pure scientific research (Checkland & Holwell, 1998). Action research has two characteristics (Mansell, 1991): "1. The researcher seeks to add to knowledge, but is also concerned to apply knowledge and become involved in the implementation of plans. 2. The problem to be solved is not defined by the researcher but by participants in human activity (e.g. managers in organizations)." Since IT is an applied discipline, action research methods have proven to be appropriate research methodologies in this area (Baskerville & Wood-Harper, 1998). Furthermore, action research provides advantages when studying organizational networks (Chisholm, 1998). Therefore, action research is chosen as the leading research approach.
Figure 3: Business Model for Business Networking [suppliers, an aggregator/integrator and customers connect through business ports (middleware, EAI, converters, standards) to a business bus that provides protocols, access to services and information-based eServices; example services include ATP services (e.g., FedEx, UPS), payment services (e.g., Visa, Intuit), directory services (e.g., D&B), trust services (e.g., Verisign) and message services (e.g., Harbinger), which together enable solutions]
The setting for the method development project is the Competence Center inter-Business Networking (CC iBN) at the Institute for Information Management of the University of St. Gallen. Eight companies (Bayer AG, Robert Bosch Group, Deutsche Telekom AG, ETA SA, HiServ, Hoffmann-LaRoche, Riverwood International Inc., and SAP AG) are currently collaborating in the field of Business Networking with the researchers from the university. This approach follows the principles of action research as suggested by Probst and Raub (1995): it is problem-driven, action-oriented, and practitioners actively participate in the projects.
Business Engineering Model
Action research provides the general direction for conducting research, but it does not provide a framework for structuring research in a specific field. An approach which is geared towards the business-oriented conceptualization of information systems is business engineering. The framework of Österle (1995) combines various theoretical disciplines and "structures the organization, data and function dimensions at the business strategy, process and information systems levels". It encompasses business, organizational and information systems aspects in a structured approach that tries to overcome the shortcomings of isolated approaches. Distinguishing these three layers has proved helpful for analyzing and designing information systems. The generic model of business engineering has been enhanced in several respects: political and change management have been added (Gouillart and Kelly, 1995) and, as shown in Figure 2, the model has been applied to network settings.
Business Model of the Information Age
Closely interrelated with this, the technological and management trends described above are leading towards a business model for Business Networking. This model has three main characteristics (Österle et al., 2000):
• Customer processes determine the design of the value chain. To give an example from the travel industry, solutions to satisfy a customer's entire demand would include products and services from flights to hotels, cars, theater tickets and the like. Computer reservation systems are models which will become relevant in other industries as well, e.g. the health sector.
• An aggregator/integrator which mirrors the customer's processes manages the relationship. This new role forms the basis for new business (Hagel & Singer, 1998). Current examples of such 'infomediaries' are Dell and Amazon.com. Dell also highlights that infomediaries outsource all non-core competencies to suppliers and invest in the management of these supplier relationships.
• A business bus supports communication among business partners. The business bus is a concept based upon the increasing availability of modular electronic services and standards for processes, data, and interfaces. As shown in Figure 3, the services include not only basic infrastructure services (e.g. messaging) but also directory services (e.g. the database from Dun & Bradstreet), payment services and logistics services. Standardization is reflected in the business bus by defining the syntactic and semantic standards used by partners. These are implemented by business ports, which denote a company's ability to interface with a large number of partners. Examples are standards for catalogs (e.g., RosettaNet, CXML, UN/SPSC) and processes (e.g., SCOR or CPFR).2 Initial solutions for business ports are already on the market (e.g., SAP Business Connector) and are expected to develop with the spread of XML-related standards.
Electronic services offered via the business bus can be considered as enablers for emerging virtual organizations (Klueber, Alt & Österle, 1999).
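To make the business-port idea tangible, the following sketch builds a minimal catalog-item message. It is purely illustrative: the element names are invented for this example and do not follow the actual RosettaNet, cXML or UN/SPSC specifications.

```python
import xml.etree.ElementTree as ET

def build_catalog_item(supplier: str, sku: str, description: str,
                       price: float, currency: str) -> bytes:
    """Serialize one catalog item as a simple XML message.

    A business port would translate such messages to and from the
    partner-specific formats agreed on via the business bus.
    """
    item = ET.Element("CatalogItem", supplier=supplier, sku=sku)
    ET.SubElement(item, "Description").text = description
    ET.SubElement(item, "Price", currency=currency).text = f"{price:.2f}"
    return ET.tostring(item, encoding="utf-8")

message = build_catalog_item("ACME Office Supply", "A-4711",
                             "Laser printer toner cartridge", 89.90, "EUR")
print(message.decode("utf-8"))
```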
Figure 4: General Procedure Model [the BN procedure model defines the cooperation strategy at the strategy level, the transaction and coordination processes at the process level, and the applications, services and communication links at the IS level; its top-level activities are: 1. Potentials Analysis (assess existing networkability, elaborate future potentials), 2. Strategy Definition (decide on and elaborate the BN strategy), 3. Project Set-up (select partners, define a shared business plan), 4. Process Network Analysis (define alternative processes, check eService types, select the future process), 5. Process Network Design (assign partners to processes, select a software solution, define process measurement), 6. IS/IT Network Design, 7. Bus Configuration & Port Design (IS/IT architecture and parameters, select standards, select eServices and define the port), 8. IS/IT Implementation, 9. Port Implementation (define (master) data standards, coordinate implementation, implement), 10. Continuation]
Principles of Method Engineering
In order to structure the method, we rely on the method engineering principles defined by Gutzwiller (1994), which have been used successfully to define several methods of business engineering (see IMG, 1999). The main objective of a method is to "decompose a project into manageable activities, determine techniques, tools and roles as well as defining results" (Österle, 1995). The procedure model reflects an ideal sequence of top-level activities. Techniques describe how one or more results can be achieved, facilitated by the use of specific tools (e.g., the software selection process). Tools offer conceptual or structural support to produce the result documents with appropriate semantics. For example, the SCOR model facilitates the categorization, modeling and measuring of supply chain processes by providing common semantics for interorganizational supply chain projects; an example of a computer-assisted tool is ARIS Easy-SCOR. Finally, roles describe the know-how and competencies needed to complete the required result documents.
However, the benefits of using a method are not limited to structuring a project. Methods are also used to facilitate training and (self-)learning by example. One major advantage is that they aid coordination and understanding by providing a common language for people with heterogeneous skills, knowledge and backgrounds. Also, a method provides specimen result documents that can be used in similar projects. These benefits should lead to improvements in terms of hard (time, cost, quality) and soft (flexibility and knowledge) factors.
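The method elements just described can be represented in a minimal data-model sketch; the field names and the sample entries are our illustrative assumptions, not Gutzwiller's notation:

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str
    results: list[str]       # result documents the activity produces
    techniques: list[str]    # techniques used to achieve the results
    tools: list[str]         # conceptual or computer-assisted tools
    roles: list[str]         # know-how needed to complete the results

@dataclass
class ProcedureModel:
    name: str
    activities: list[Activity] = field(default_factory=list)

# Hypothetical instantiation for the first activity of the BN method.
potentials_analysis = Activity(
    name="Potentials Analysis",
    results=["networkability assessment", "list of attractive BN areas"],
    techniques=["information intensity analysis"],
    tools=["information intensity matrix"],
    roles=["strategy analyst"],
)
model = ProcedureModel("BN procedure model", [potentials_analysis])
print([a.name for a in model.activities])
```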
Figure 5: Information Intensity Matrix [a 2x2 matrix with the information intensity of the product or service on the horizontal axis and the information intensity of the co-ordination on the vertical axis; the low/low quadrant has little relevance for iBN, high product intensity with low coordination intensity means IS supports production and delivery, low product intensity with high coordination intensity means IS supports co-ordination, and the high/high quadrant has a high potential for applying IS]
It is a challenge for a method of Business Networking to comply with the principles of action research, to include the analytical layers of business engineering and to take into account the nature of upcoming business models.
Figure 6: Activities in Strategy Definition [five sequential activities: 1. Alignment with corporate strategy; 2. Organizational resource strategy; 3. Networking strategy; 4. Organizational scope; 5. Partner type]
TOWARDS A METHOD OF BUSINESS NETWORKING
The motivation to design a method stems from the complexity and novelty of Business Networking projects. Complexity is a function of the number of partners involved, their heterogeneity and the multiple strategies of Business Networking (e.g., EC, SCM, RM).
Figure 7: Major strategy decision [a matrix crossing the organizational resource decision (insourcing, virtual organization, outsourcing) with the networking decision (electronic commerce, supply chain management, relationship management); example entries are eProcurement for C goods and content management for Z catalogs under electronic commerce, and SCM for X parts with Y partners under supply chain management]
Novelty is based on new approaches towards partnering (e.g., collaborative planning) and on processes supported by new software solutions and infrastructures such as the Internet (Rockart, 1998). The method tries to handle these issues and to avoid pitfalls and fallacies by offering an ideal but still flexible and customizable path for deciding on and implementing one combination of strategies.
Method Overview
It is an assumption of the method that Business Networking projects pursue comparable activities. Figure 4 depicts these activities, which can be further detailed in sub-activities. The philosophy of the method is that the first two phases and the last phase are common to all Business Networking projects. Due to their higher degree of process and application specificity, all other activities are developed in specific procedure models.3
Potentials Analysis
Typically, Business Networking projects start with a potentials analysis, which transforms vague ideas on cooperation potentials into specific alternatives. A prerequisite is an initial health check of internal capabilities and processes in order to avoid solutions which address the wrong problems. For example, we have seen that external improvements often require internal excellence. Potentials analysis contains three main sub-activities:
• For the assessment of networkability, a clear understanding of corporate resources and of customer and market strategies is required. It requires information about cooperation strategies and capabilities as well as future cooperation areas (Stein, 1997; Kuglin, 1998; Hillig, 1997). Networkability includes strategy, culture, cooperation process, IS connectivity and architectural issues (Rockart, 1998) as well as the human resource and organizational dimension (Hillig, 1997). The degree of networkability can be measured by the time required to establish a cooperation, the content of the cooperation and the number of cooperations that can be managed in parallel. It has implications at the strategy level (Doz and Hamel, 1998), for the operational processes, and for the potentials of innovative information systems. Our findings show that mental models, organizational structures and the information systems and architectures often have to be adapted if a company wishes to reap the benefits and take advantage of the opportunities of virtual organizing. One major result of this sub-activity may be a clearer view of the as-is situation as well as the initiation of a change of cultural, political and mental frames.
• Identification of specific BN areas. To identify the area(s) which are attractive for Business Networking, we apply the framework of Venkatraman and Henderson (1998). It can be used to analyze the current state along the dimensions of asset configuration (virtual sourcing), customer interaction (virtual encounter) and knowledge leverage (virtual expertise). The architecture of virtual organizing integrates organizational and exchange considerations with knowledge management. In doing so, it serves as a high-level classification scheme to identify the development potentials.
• The information intensity analysis addresses the question of the information intensity of the product or service group and of its interorganizational coordination mechanism. This is done by adding an aggregated vector to the previous framework (cf. Klueber, 1998), which is determined by the product's or service's information intensity (X-axis) (Porter and Millar, 1985) and by the information intensity of the interorganizational coordination mechanism (Y-axis) (see Figure 5). Examples of measures of information intensity are the degree of shared information objects (e.g., shared planning data) (Ludwig, 1997), the degree of mutual adjustment supported via IT (e.g., electronic discussion groups or shared access to knowledge databases), and the support of consumer-producer interdependencies (Malone and Crowston, 1994).
Potentials analysis concerns not only the assessment of the status quo but also helps to define the to-be scenario. This enables an initial assessment of future benefits and completes the understanding of a company's networkability and of the areas with a high potential for Business Networking.
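As a minimal illustration of how the two dimensions of the information intensity analysis combine, the quadrants of Figure 5 can be encoded directly; the labels follow the figure, while the example at the end is our own:

```python
def classify_information_intensity(product_intensity: str,
                                   coordination_intensity: str) -> str:
    """Place a product/service group in the information intensity matrix.

    Both arguments are 'low' or 'high'; the quadrant labels follow
    Figure 5 and are illustrative rather than formal thresholds.
    """
    quadrants = {
        ("low", "low"): "little relevance for iBN",
        ("high", "low"): "IS supports production & delivery",
        ("low", "high"): "IS supports co-ordination",
        ("high", "high"): "high potential of applying IS",
    }
    return quadrants[(product_intensity, coordination_intensity)]

# A digital catalog managed through collaborative planning would score
# high on both axes and is thus a prime candidate for Business Networking.
print(classify_information_intensity("high", "high"))
```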
Strategy Definition
Input from the potentials analysis serves as the starting point for the strategy definition phase, which consists of five steps (Figure 6). First, the general strategy (e.g., cost or service leadership) is applied to the related organizational areas and the BN goals are identified. The second decision concerns the organizational resources that are necessary to achieve the goals in the identified area and how they are accessed; the choices are outsourcing, virtual organization and insourcing (cf. Marakas and Kelly, 1999). Third, a decision about the networking strategy is required to define the category of solutions that will dominantly be used to achieve the defined BN goals. The relevant IS-related solution strategies are EC, SCM and RM. The decision is supported through process categorization and best practices as well as through the process characteristics that are likely to be achieved with each of the networking strategies.
The result is documented in a matrix which shows the decisions on the organizational resource and the networking dimensions for a product group/process/organizational unit combination as alternative future cooperation areas in the portfolio (see Figure 7). If the choice is building a virtual organization, the networking strategies are pursued in collaboration with partners; if it is outsourcing, the cooperation intensity with partners is lower, and if it is insourcing, it is usually higher. The alternatives are assessed and chosen on the basis of a scoring model which includes multiple dimensions (e.g., cost, time, quality, flexibility, knowledge, strategic fit).
The sequence is ideal, but we assume that all these interdependent decisions have to be made before a successful BN project can be started. At a minimum, the sequence can serve as a checklist to verify that the top management decisions have been made, the resource questions solved, the IS potentials analyzed and the development path aligned with the overall direction. Furthermore, the process can be recursive: the entry points may vary and results are elaborated in later phases.
Continuation Activity
Finally, the continuation activity aims at realizing the whole potential of the BN projects identified in the BN project portfolio, or at systematically identifying new BN projects. If the strict sequence, starting from the top, cannot be applied in a real situation where a variety of constraints or actual problems have to be taken into account, the clear description of the requirements for starting with any one activity allows lower starting points, parallel activities and cycles.
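The scoring model mentioned in the strategy definition phase above can be sketched as a simple weighted sum; the weights and scores below are invented for illustration (in practice they would be agreed on by the project team), and the alternatives reuse the example entries of Figure 7:

```python
# Hypothetical weights for the dimensions named above (sum to 1.0).
DIMENSIONS = {"cost": 0.25, "time": 0.15, "quality": 0.20,
              "flexibility": 0.15, "knowledge": 0.10, "strategic fit": 0.15}

# Invented scores (0-10) for two alternative cooperation areas.
alternatives = {
    "eProcurement for C goods": {
        "cost": 8, "time": 7, "quality": 6,
        "flexibility": 7, "knowledge": 5, "strategic fit": 6},
    "SCM for X parts with Y partners": {
        "cost": 6, "time": 5, "quality": 8,
        "flexibility": 6, "knowledge": 8, "strategic fit": 9},
}

def weighted_score(scores: dict) -> float:
    return sum(DIMENSIONS[d] * s for d, s in scores.items())

for name, scores in alternatives.items():
    print(f"{name}: {weighted_score(scores):.2f}")
best = max(alternatives, key=lambda a: weighted_score(alternatives[a]))
print("preferred alternative:", best)
```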
Figure 9: Categories of Goods and Services for Electronic Procurement [of the total procurement value (100%), primary demand accounts for 45% and secondary goods and services for 55%; the latter split into a transaction-intensive part (25%) and a transaction-poor part (30%); example goods are office materials and furniture, computers, marketing material, magazines and books, laboratory demand and tools; example services are hotel and travel bookings, car rentals, copy and printing services, training and IT consulting, catering, and cleaning and security; source: Compendium Electronic Commerce Benchmarks (www.compendium.nl)]
Specific Procedure Models
Work with project partners has shown the difficulty of supporting the variety of Business Networking strategies. The challenges vary considerably in the intensity of the cooperation, the quality and quantity of the information exchanged and the categories of IS solutions needed. We therefore decided to introduce specific procedure models as a new element of method engineering, which offer the following advantages:
• Higher problem-specific support and wording,
• Easier comparisons between similar projects,
• Less abstract and more action-oriented guidance, and
• A better basis for capturing and transferring knowledge.
Specific procedure models represent a bottom-up approach compared to the top-down development of the method. They inherit elements of the general procedure model and add specific activities, techniques, result documents, roles and cases (see Figure 8). The underlying common structure and origin in the BN procedure model facilitate the coordination of multiple BN projects.
Figure 8: Combination of Method and specific procedure models [the general PROMET® iBN method, consisting of activities, result documents, techniques and roles, is inherited by specific elements such as the eP (eProcurement) procedure model and enriched with cases (Case 1, Case 2)]
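The inheritance relationship sketched in Figure 8 can be illustrated in code; the activity names are taken from the procedure models described in this chapter, while the result documents are hypothetical placeholders:

```python
# Illustrative sketch: a specific procedure model inherits the structure
# of the general BN procedure model and adds problem-specific elements.
class BNProcedureModel:
    activities = ["potentials analysis", "strategy definition",
                  "project set-up", "continuation"]
    result_documents: list[str] = []

class EProcurementModel(BNProcedureModel):
    # Inherits the common activities and adds eProcurement specifics
    # (activity names follow the PROMET iBN-eP model; the result
    # documents are invented examples).
    activities = BNProcedureModel.activities + [
        "procurement process potentials assessment",
        "eProcurement process scenario design",
    ]
    result_documents = ["catalog strategy", "supplier shortlist"]

print(EProcurementModel.activities)
```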
Figure 10: Elements of electronic procurement processes [the strategic procurement process comprises the selection of, negotiation with and monitoring of partners and the supplier's electronic product catalogue, linked to customer management and supplier management; the operational procurement process links the electronic product catalogues of suppliers, content and catalogue management, a desktop purchasing system (DPS) and payment and logistics services; among the eProcurement activities, content management merges multiple catalogues, catalogue management provides the platform for catalogues, the DPS provides the workflow, and integration connects payment and logistics]
Link to Virtual Organizing and Knowledge Management
Virtual organizing is the main concept behind the organization strategies in Business Networking. The strategic configurations for internal and external resources are insourcing, virtual organization and outsourcing. A company which plans to introduce a Business Networking strategy would start to analyze its position in the framework of Venkatraman and Henderson (1998).
For example, a low level of virtual sourcing implies an improvement potential for EC and/or more intense SCM solutions. The method should help to evaluate when virtual organizing is adequate, what configuration and combination of EC and SCM solutions are required in specific situations, and how these could be implemented. The structuring of these new or complementary solutions and the guidance towards a feasible implementation are at the heart of the method. It adds important elements, such as the SCM networking strategy, to assess different alternatives and to move further towards an implementation. Thus, virtual organizing is an inherent element of the entire method.
The role of knowledge management becomes apparent when the use of the method itself is regarded as a knowledge management tool (Probst et al., 1997). The definition of the knowledge goals pursued in BN projects is a salient element of the strategy definition phase. The identification of knowledge gaps, and of sources to fill them, is also vital when deciding on BN strategies. Knowledge acquisition is supported by providing a structured approach with context knowledge embedded in the procedure model, techniques and result documents. Furthermore, in using the method, explicit knowledge is internalized and subsequently supports the knowledge distribution process and its application (cf. Nonaka & Takeuchi, 1995). Also, implicit knowledge that is codified in cases and best-practice examples is made accessible to other individuals. The knowledge application phase is entered when the method is applied to a specific business context. The method helps project members to communicate internalized knowledge and may aid the externalization and creation of new knowledge for a project. In a second step, knowledge management gains importance after the operational processes have been implemented.
Figure 11: Procedure Model for eProcurement (PROMET® iBN and PROMET® iBN-eP). The model spans eight activities, from the analysis of iBN potentials and definition of the iBN strategy, through procurement process potentials assessment, eProcurement process scenario design, eProcurement project set-up and the analysis and re-design of the procurement process network, to IS/IT network design, detail IS implementation, bus configuration and port design, and port implementation, followed by the implementation projects and continuation. The initiator and the eP partners are involved throughout.
management gains importance after operational processes have been implemented. This is based on better information about business partners, which can be used to intensify the relationship in order to develop potentials for further improvements on a win-win basis (Österle et al., 2000). To summarize, the knowledge management aspect is addressed from two perspectives. First, the method helps to identify process areas where a company has vital knowledge gaps and sustains the processes of knowledge acquisition, development, application, storage and maintenance. Second, the implementation of new BN strategies and processes leads to new operative information systems and provides new information bases that can be used as new areas for knowledge management.
CASE: IMPLEMENTING BUSINESS NETWORKING AT DEUTSCHE TELEKOM AG

Following the principles of action research, case studies undertaken at partner companies (see Chapter 2) drive the development of the
method, especially for specific procedure models. In the following, we present the case of Deutsche Telekom AG, which focuses on the procurement of indirect goods via Internet services. We briefly describe the history and business context of Deutsche Telekom, the procurement process, the project itself, and the procedure map for implementing the Business Networking solution for the electronic procurement of indirect goods (eProcurement) which was defined in this project.
Business Context of Deutsche Telekom AG

Deutsche Telekom AG is a formerly state-owned telecommunications company that was converted into a stock corporation on January 1, 1995 and went public in November 1996. In 1998, its revenues were DM 69 billion (approx. US$38 billion) with a net income of DM 4.2 billion (approx. US$2.27 billion). The core business is a national full-service telecommunications operation with 179,500 employees (1998). This included 46 million fixed-line customers and 10.1 million ISDN customers in 1998, more ISDN lines than the USA and Japan combined. Furthermore, Deutsche Telekom is a major cellular provider with 5.5 million customers, Europe's biggest online and Internet service provider, and serves 17.6 million customers with its cable TV infrastructure (Deutsche Telekom, 1999). The transformation process from a vertically integrated monopolist towards a worldwide active competitive organization involved major changes in all dimensions. This has partly been enforced by deregulation and liberalization in the European telecommunications market since January 1, 1998. Another major driving force is the convergence of the media, telecommunications and IT industries enabled through digitalization. Two of the challenges Deutsche Telekom faces are to determine (1) what products and services should be provided for which customers and (2) in which areas partnerships and alliances are important for competitiveness. As these questions were not as prevalent in the protected monopolist past, new skills in the management of internal competencies and partnerships have to be acquired in all dimensions of business engineering (cf. Chapter 2). Therefore, a knowledge, skill and IS/IT gap had to be filled in order to sustain success in the global telecommunications marketplace. One of the first steps in the Deutsche Telekom project concerned the definition of potential areas for Business Networking in terms of knowledge acquisition, internalization and leverage as well as operational improvements and IS/IT innovation. In workshops with Deutsche Telekom, the main strategic options to initiate a networking project were presented as best
practice examples. An area which was further examined was the process of procuring indirect goods (eProcurement). The decision to set up the first project to improve the procurement process was based on a high-level scoring model derived from a previous market, resource and strategy analysis.

Procurement Process for Indirect Goods

Indirect goods encompass MRO goods [4], office materials and other C-goods [5]. Figure 10 shows a categorization and some examples of the product range that could be procured electronically via the Internet. We will refer to the electronic procurement process for indirect products as eProcurement. The specifics of the eProcurement process for indirect goods are that demand is not planned, the variety of products is large, standardization of products and processes is typically high, the value of goods is low, the number of potential users is high, and the process involves catalog and authorization steps (Killen and Associates, 1997). The procurement process for these products consists of a strategic and an operational element (Figure 9). Strategic procurement includes customer and supplier management and deals with the selection, contracting and evaluation of the partners. These partners are suppliers of indirect goods as well as content and catalog management service providers. Although this process has a strong knowledge component, it relies on (operational) transaction data. The goal of knowledge management is to gain information about the performance of partners in order to improve contracts and to establish win-win relationships. The operational procurement process consists of five elements (see Figure 10). Desktop purchasing systems (DPS) permit end users to purchase indirect goods directly via an Internet browser. These systems include conditions which were agreed upon in outline agreements for the goods listed in the catalog. Additionally, these systems cover integration into ERP systems, the authorization workflow, and the link to procurement departments for exceptional requests. The interorganizational element is the electronic integration and transaction-based monitoring of suppliers and of service providers for catalog and content management, as well as the integration of payment and logistics services.
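To illustrate the flow just described, here is a minimal sketch of a DPS order-and-approval path. It is not based on any of the systems named in this chapter; all identifiers, the approval threshold and the example data are hypothetical.

```python
# Hypothetical sketch of a desktop purchasing system (DPS) order flow:
# catalog prices come from outline agreements, orders pass an authorization
# workflow, and approved orders are handed to the ERP system.

from dataclasses import dataclass

@dataclass
class CatalogItem:
    sku: str
    description: str
    contract_price: float  # price agreed in the outline agreement

APPROVAL_LIMIT = 500.0  # hypothetical threshold for automatic approval

def order(item: CatalogItem, quantity: int, requester: str) -> str:
    total = item.contract_price * quantity
    if total <= APPROVAL_LIMIT:
        return post_to_erp(item, quantity, requester)           # auto-approved
    return route_to_approver(item, quantity, requester, total)  # workflow step

def post_to_erp(item, quantity, requester):
    # In a real DPS this would create a purchase order in the ERP system.
    return f"PO created: {quantity} x {item.sku} for {requester}"

def route_to_approver(item, quantity, requester, total):
    # Exceptional requests are linked to the procurement department.
    return f"Approval requested for {requester}: {total:.2f} ({item.sku})"

print(order(CatalogItem("TONER-01", "Laser toner", 49.90), 4, "j.smith"))
```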
Benefits of eProcurement

E-Procurement leads to some major economic benefits. The entire procurement process is IT supported, involves end users and covers products which were not procured electronically before. Three important aspects are:
• Process improvements. Systems for indirect procurement such as Ariba's Operations Resource Management System, Commerce One's MarketSite and BuySite, or SAP's B2B Procurement allow users to browse and order goods directly which have been pre-configured according to central standards and responsibilities. This not only increases the transparency of the procurement process, thus allowing for the pooling of procurement volume, but also improves the management of suppliers and reduces transactions that bypass outline contracts ('maverick buying').
• Cost savings for buyer. Findings from software and service providers indicate that systems which support indirect procurement can lead to savings of up to $50 per transaction (cf. Reinhardt, 1998). Other sources report savings of 5-15% of the expenses on indirect goods (Killen & Associates, 1997). Assuming a user population of more than 10,000 and standard installations, savings of several million US$ and a prospective ROI of less than a year were calculated (a worked example follows this list).
• Opportunities for sellers. Incentives for suppliers emerge when physical catalogs are replaced and printing and distribution costs are reduced. Suppliers profit from a faster cash-in cycle, and suppliers that quickly establish eProcurement capabilities have the opportunity of increasing their turnover with customers (e.g., Deutsche Telekom).
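To make the magnitude of these figures concrete, the following back-of-the-envelope calculation combines the cited per-transaction saving with an assumed transaction volume and project cost; the assumed values are purely illustrative and not taken from the case.

```python
# Illustrative savings arithmetic based on the figures quoted above.
# Transactions per user and project cost are assumptions, not case data.

users = 10_000                 # user population (as cited: more than 10,000)
transactions_per_user = 10     # assumed yearly eProcurement orders per user
saving_per_transaction = 50    # up to US$50 per transaction (Reinhardt, 1998)

annual_saving = users * transactions_per_user * saving_per_transaction
print(f"Annual process savings: ${annual_saving:,}")   # $5,000,000

project_cost = 3_000_000       # assumed implementation cost
payback_years = project_cost / annual_saving
print(f"Payback period: {payback_years:.1f} years")    # below one year
```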
Procedure Model for eProcurement

During the project at Deutsche Telekom, a procedure model was developed for implementing an eProcurement solution. Once the decision was made for the procurement of indirect goods, eight steps were identified (see Figure 11). In the first activity, procurement process potentials were assessed in more detail. After setting up a joint project team from the IT and procurement departments, the project portfolio analysis technique showed that similar projects already existed within the Deutsche Telekom group. The generation of alternative scenarios for the procurement process was supported by templates of solutions that have been implemented in the USA (Dolmetsch, 1999). Based on these potentials, scenarios were detailed within the eProcurement process scenario design for a catalog and content management service. Since a market survey did not identify any operational solution providers in Germany, the decision was made to enhance internal competencies for establishing an internal service. The strategic procurement technique produced the first feasible scenarios. It helped to identify the potentials and supported the analysis of different scenarios. It includes qualitative aspects such as strategic fit, impact on strategic flexibility, knowledge and risk, as well as specific categories to assess the more quantifiable benefit and cost implications based on a financial model for eProcurement. The activity led to the evaluation of and the decision for one or more feasible scenarios. The eProcurement project set-up converts strategic ideas and scenarios into tangible BN goals and deliverables with the identified partners and settled cooperation contracts. The identification is supported by the partner profiling technique (cf. Alt and Fleisch, 1999) to select appropriate partners. At Deutsche Telekom, early talks with software providers and a reality check with possible suppliers were conducted. Also, a first impact analysis on the IS architecture was performed. The decision on the software solution was not completed, as the value of a customized solution was perceived to be higher than the benefits of implementing existing software processes. In the more detailed analysis and re-design of the procurement process network activity, the to-be processes were defined. The chosen process network was specified in more detail and partner companies or external service providers were assigned. The results were used to detail the software requirements, which were aligned with Deutsche Telekom's strategic IS architecture planning and legacy systems. The idea of the business bus (see Chapter 2) raised the potential not only to create the catalog and content management for internal use but also to exploit the buying power, high number of transactions, brand name, critical mass and IT infrastructure of Deutsche Telekom to establish a dedicated electronic service (eService) for business-to-business eProcurement. Since these requirements increased the complexity, the eService was set up as a parallel project. The method supported software requirements analysis and selection, supplier selection and project management by providing best practice examples from the USA and results from desk research. Furthermore, the activities and techniques helped to produce result documents to push and document the progress of the project. The implementation-oriented section of the method envisages four activities. The IS/IT network design activity defines the implementation details such as data, protocol and application standards. It is followed by the detail IS implementation activity, which provides activities to coordinate the implementation projects. The final implementation may be supported by implementation methods such as PROMET® (IMG, 1999). In parallel, the integration of business bus components, which were identified earlier, is supported by the bus configuration and port design activity. Specific eServices are implemented in the port implementation activity.
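The eight activities, as described above, can be summarised schematically; the ordering follows the prose description and may differ from the numbering in Figure 11.

```python
# The eight activities of the eProcurement procedure model, paraphrased
# from the description in the text; ordering follows the prose account.

STRATEGY_ORIENTED = [
    "Procurement process potentials assessment",
    "eProcurement process scenario design",
    "eProcurement project set-up",
    "Analysis & re-design of the procurement process network",
]
IMPLEMENTATION_ORIENTED = [
    "IS/IT network design",
    "Detail IS implementation",
    "Bus configuration & port design",  # runs in parallel with the above
    "Port implementation",
]

for number, activity in enumerate(STRATEGY_ORIENTED + IMPLEMENTATION_ORIENTED, 1):
    print(f"{number}. {activity}")
```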
The lessons of Deutsche Telekom reflect the challenges of Business Networking on different levels:
• Early addressing of the IS/IT solution potentials and the integrative process perspective can be considered a key success factor.
• Knowledge transfer was included by providing successful cases and strategic options for the scenario development to support the decision processes.
• The procedure model and the techniques of the method served as a vehicle to enable this knowledge transfer. They were particularly helpful since software solutions and awareness of the business opportunities in this area were nearly nonexistent among non-English-speaking companies in Europe.
• Action research proved beneficial for mutual learning, achieving results and acquiring knowledge that leads to practical solutions.
• Change management and the building of reciprocity and trust, as further enablers for implementation in business networks (Klein, 1996), have also proven to be critical.
CONCLUSIONS AND OUTLOOK

Business Networking is one of the major trends for companies today, with virtual organizing and knowledge management as inherent elements. Implementing Business Networking requires multiple decisions at the strategic, process and IS/IT levels which have to be coordinated among multiple partners. Therefore, a method which ensures the coherent organization and management of knowledge in virtual organizations has been presented. This method not only helps in identifying and assessing strategic options as some existing organizational approaches do (e.g., the Venkatraman/Henderson model), but also offers a structured route from (strategic) analysis and conceptualization to implementation. It supports solutions for different levels of cooperation intensity by addressing IS solution potentials at an early stage and, therefore, enables organizations to follow a development path towards more virtual and knowledge-based ways of doing business with closer and more intense relationships. The method has also been described as a knowledge management tool itself, since critical configuration and implementation know-how is included: learning from past experiences, lessons learned and best practices. The refinement of the method is performed in close collaboration with the partner companies during the development phase. It is open for further elaboration in specific case studies or company-specific techniques. However, the concept is not limited to the implementation of transactional software solutions. A further implication of the research is that knowledge management in business networks may be addressed after the operative transaction-oriented systems have been improved.
Several foundations were used for method development in order to meet the challenges of establishing a general method for Business Networking which has the flexibility to include all strategic patterns of Business Networking (e.g., EC, SCM, RM). Based on the principles of action research and method engineering, a top-down approach and a bottom-up approach were combined. The former was mainly derived from business and method engineering and the latter from projects pursued with companies. One of these projects, eProcurement at Deutsche Telekom, was presented in this chapter. Currently, the project proceeds further towards implementation and the evaluation of extending the eProcurement solution towards an eService offered on the market. This option also includes the definition of a market concept and considerations of critical mass, network externalities and the like. In parallel, other projects are pursued which concentrate on a procedure model for SCM (Klueber et al., 2000). A future development area might be the application and extension towards learning and more knowledge-oriented collaborations in order to build new competencies or to access specific know-how and other immaterial resources such as patents (Doz & Hamel, 1998). For example, such business relations are common in the pharmaceutical industry, where companies work together with many research institutions. In these areas, the organizational form may be closer to the ideal of a virtual organization.
ENDNOTES
1. Network governance typically involves higher coordination costs than (traditional) hierarchical governance (cf. Wigand, Picot, & Reichwald, 1997).
2. cXML stands for commerce eXtensible Markup Language, SCOR for the Supply Chain Operations Reference-model (www.supply-chain.org) and CPFR for Collaborative Planning, Forecasting and Replenishment.
3. The objective is not to provide a full account of the activities but to focus on the major findings of our research. For more details on the method and its elements see Klueber and Alt (1999) or Klueber et al. (2000).
4. MRO stands for Maintenance, Repair and Operations goods (Dobler & Burt, 1996).
5. C-goods are goods and services with a low procurement value. The C stems from the classification according to a typical ABC analysis.
REFERENCES

Alt, R., & Fleisch, E. (1999, June 7-9). Key Success Factors in Designing and Implementing Business Networking Systems. In Proceedings of the 12th International Electronic Commerce Conference, Bled, Slovenia.
Alt, R., Reichmayr, C., Österle, H., & Zurmühlen, R. (1999). Business Networking in the Swatch Group. EM-Electronic Markets, 3(9).
Baskerville, R., & Wood-Harper, A. T. (1998). Diversity in Information Systems Action Research Methods. European Journal of Information Systems, (7), 90-107.
Checkland, P., & Holwell, S. (1998). Action Research: Its Nature and Validity. Systemic Practice and Action Research, 11(1), 9-21.
Chisholm, R. F. (1998). Developing Network Organizations: Learning from Practice and Theory. Reading, Mass.: Addison-Wesley.
Davenport, T. H., De Long, D. W., & Beers, M. C. (1998). Successful Knowledge Management Projects. Sloan Management Review, (Winter), 43-57.
DeSanctis, G., & Monge, P. (1998). Communication Processes for Virtual Organizations. Journal of Computer-Mediated Communication, 3(4).
Dobler, D., & Burt, D. (1996). Purchasing and Supply Management: Text and Cases. New York: McGraw-Hill.
Dolmetsch, R. (1999). Desktop Purchasing: IP-Netzwerkapplikationen in der Beschaffung von indirekt/MRO-Produkten. Ph.D. thesis, University of St. Gallen, St. Gallen.
Doz, Y. L., & Hamel, G. (1998). Alliance Advantage: The Art of Creating Value through Partnering. Boston, Mass.: Harvard Business School Press.
Deutsche Telekom. (1999). Company Portrait. Available: http://www.dtag.de/english/company/profile/index.htm [accessed May 25, 1999].
Faucheux, C. (1997). How Virtual Organizing is Transforming Management Science. Communications of the ACM, 40(9), 50-55.
Gouillart, F. J., & Kelly, J. N. (1995). Transforming the Organization. New York: McGraw-Hill.
Gutzwiller, T. A. (1994). Das CC RIM-Referenzmodell für den Entwurf von betrieblichen, transaktionsorientierten Informationssystemen. Heidelberg: Physica-Verlag.
Hagel, J., & Singer, M. (1998). Net Worth: Shaping Markets When Customers Make the Rules. Boston: Harvard Business School Press.
Hillig, A. (1997). Die Kooperation als Lernarena in Prozessen fundamentalen Wandels: Ein Ansatz zum Management von Kooperationskompetenz. Bern: Haupt.
IMG. (1999). Methods. Available: http://www.img.com/jscript/E/S_310.htm [accessed May 25, 1999].
Killen & Associates. (1997). Operating Resources Management: How Enterprises Can Make Money by Reducing ORM Costs (White Paper). Palo Alto: Killen & Associates.
Klein, S. (1996). The Configuration of Inter-Organizational Relationships. European Journal of Information Systems.
Klueber, R. (1998, April 27-28). A Framework for Virtual Organizing. Paper presented at the Workshop on Organizational Virtualness, Bern.
Klueber, R., & Alt, R. (1999). PROMET® iBN Method: Development of a Method for Inter-Business Networking (Working Paper 05, Version 0.5). St. Gallen: Institute for Information Management, University of St. Gallen.
Klueber, R., et al. (2000). Towards a Method for Business Networking. In H. Österle, E. Fleisch, & R. Alt, Business Networking: Shaping Enterprise Relationships on the Internet (pp. 257-276). Berlin: Springer.
Kuglin, F. (1998). The Customer-Centered Supply Chain Management: A Link-by-Link Guide. New York: AMACOM.
Ludwig, H. (1997). Koordination objektzentrierter Kooperationen: Metamodell und Konzept eines Basisdienstes für verteilte Arbeitsgruppen. Ph.D. thesis, University of Bamberg, Bamberg.
Malone, T. W., & Crowston, K. (1994). The Interdisciplinary Study of Coordination. ACM Computing Surveys, 26(1), 87-119.
Malone, T. W., & Rockart, J. F. (1991). Computers, Networks, and the Corporation. Scientific American, 91(9), 92-99.
Mansell, G. (1991). Action Research in Information Systems Development. Journal of Information Systems, (1), 29-40.
Marakas, C., & Kelly, W. (1999). Building the Perfect Corporation: From Vertical Integration to Virtual Integration. VoNet Newsletter, 3(1).
Moss-Kanter, R. (1994). Collaborative Advantage: The Art of Alliances. Harvard Business Review, (July-August), 96-108.
Nonaka, I., & Takeuchi, H. (1995). The Knowledge-Creating Company: How Japanese Companies Foster Creativity and Innovation for Competitive Advantage. Oxford: Oxford University Press.
Österle, H. (1995). Business in the Information Age: Heading for New Processes. Berlin: Springer.
Österle, H., Fleisch, E., & Alt, R. (2000). Business Networking: Shaping Enterprise Relationships on the Internet. Berlin: Springer.
Porter, M. E., & Millar, V. E. (1985). How Information Gives You Competitive Advantage. Harvard Business Review, (4), 149-160.
Probst, G., & Raub, S. (1995). Action Research: Ein Konzept angewandter Managementforschung. Die Unternehmung, (1), 3-19.
Probst, G., Raub, S., & Romhardt, K. (1997). Wissen managen. Wiesbaden: Gabler.
Reinhardt, A. (1998, June 22). Extranets: Log On, Link Up, Save Big: Companies Are Using Net Tech to Forge New Partnerships and Pile Up Eye-Popping Savings. Business Week.
Rockart, J. F. (1998). Towards Survivability of Communication-Intensive New Organizational Forms. Journal of Management Studies, 35(4), 417-420.
Skyrme, D. J. (1998, April 27-28). The Realities of Virtuality. In P. Sieber & J. Griese (Eds.), Organizational Virtualness (pp. 25-34). Bern: Simowa.
Stein, J. (1997). On Building and Leveraging Competences Across Organizational Borders: A Socio-Cognitive Framework. In A. Heene & R. Sanchez (Eds.), Competence-Based Strategic Management (pp. 267-284). Chichester: John Wiley & Sons.
Venkatraman, N., & Henderson, J. C. (1998). Real Strategies for Virtual Organizing. Sloan Management Review, (Fall), 33-48.
Wang, H. Q. (1997). A Conceptual Model for Virtual Markets. Information and Management, 32, 147-161.
Wigand, R., Picot, A., & Reichwald, R. (1997). Information, Organization and Management. Chichester: John Wiley & Sons.
Wiig, K. M. (1999). Introducing Knowledge Management into the Enterprise. In J. Liebowitz (Ed.), Knowledge Management Handbook (pp. 1-41). Boca Raton: CRC Press.
Chapter 16
Managing Knowledge for Strategic Advantage in the Virtual Organisation

Janice M. Burn and Colin Ash
Edith Cowan University, Australia
This chapter looks at the virtual organisation and suggests that the basic concepts of virtual management are so poorly understood that there are likely to be very few such organisations gaining strategic advantage from their virtuality. The authors begin by providing some clear definitions of virtual organisations and different models of virtuality which can exist within the electronic market. Degrees of virtuality can be seriously constrained by the extent to which organisations have preexisting linkages in the marketplace and the extent to which these can be substituted by virtual ones, but also by the intensity of virtual linkages which support the virtual model. Six virtual models are proposed within a dynamic framework of change. In order to realise strategic advantage, virtual organisations must align their virtual culture with the virtual model for structural alignment. This chapter further proposes a model for virtual organisational change which identifies the factors internal to the virtual organisation that need to be managed. Critical to this is the role of knowledge management. The authors develop this concept within a framework of virtual organising and relate this to organisations using ERP in an Internet environment. Specific examples will be used relating such developments to organisations employing SAP and illustrating strategic advantage.
Previously Published in Knowledge Management and Virtual Organizations edited by Yogesh Malhotra, Copyright © 2000, Idea Group Publishing.
Virtual organisations are very much in vogue, but there is very little empirical research to show how "virtuality" can provide a strategic advantage to organisations. There is even less guidance provided with respect to the management of change in organisations that embrace some degree of virtuality by leveraging their competencies through effective use of information and communication technologies (ICT). It could be argued that there is a degree of virtuality in all organisations, but at what point does this present a conflict between control and adaptability? Is there a continuum along which organisations can position themselves in the electronic marketplace according to their needs for flexibility and fast responsiveness as opposed to stability and sustained momentum? To what extent should the organisation manage knowledge both within and without the organisation to realise a virtual work environment? While there may be general agreement with regard to the advantages of flexibility, the extent to which virtuality offers flexibility and the advantages which this will bring to a corporation have yet to be measured. There is an assumption that an organisation that invests in as little infrastructure as possible will be more responsive to a changing marketplace and more likely to attain global competitive advantage, but this ignores the very real power which large integrated organisations can bring to the market in terms of sustained innovation over the longer term (Chesbrough and Teece, 1996). Proponents of the virtual organisation also tend to underestimate the force of virtual links. Bonds which bind a virtual organisation together may strongly inhibit flexibility and change rather than nurture the concept of the opportunistic virtual organisation (Goldman, Nagel and Preiss, 1995). Aldridge (1998) suggests that it is no accident that the pioneers of electronic commerce fall into three categories:
• Start-ups, organisations with no existing investment or legacy systems to protect;
• Technology companies with a vested interest in building the channel to market products and services;
• Media companies, attracted by low set-up costs and immediate distribution of news and information.
When is a virtual organisation really virtual? One definition would suggest that organisations are virtual when producing work deliverables across different locations, at differing work cycles, and across cultures (Gray and Igbaria, 1996; Palmer and Speier, 1998). Another suggests that the single
common theme is temporality. Virtual organisations centre on continual restructuring to capture the value of a short-term market opportunity and are then dissolved to make way for restructuring to a new virtual entity (Byrne, 1993; Katzy, 1998). Yet others suggest that virtual organisations are characterised by the intensity, symmetricality, reciprocity and multiplexity of the linkages in their networks (Powell, 1990; Grabowski and Roberts, 1996). Whatever the definition (and this chapter hopes to resolve some of the ambiguities), there is a consensus that different degrees of virtuality exist (Hoffman, Novak, and Chatterjee, 1995; Gray and Igbaria, 1996; Goldman, Nagel and Preiss, 1995) and, within this, different organisational structures can be formed (Palmer and Speier, 1998; Davidow and Malone, 1992; Miles and Snow, 1986). Such structures are normally inter-organisational and lie at the heart of any form of electronic commerce, yet the organisational and management processes which should be applied to ensure successful implementation have been greatly under-researched (Finnegan, Galliers and Powell, 1998; Swatman and Swatman, 1992). A virtual organisation's knowledge base is inevitably distributed more widely than a conventional one, both within the organisation and without – among suppliers, distributors, customers, and even competitors. This wide spread can deliver enormous benefits; a wider range of opportunities and risks can be identified, costs can be cut, products and services can be improved, and new markets can be reached by using other people's knowledge rather than recreating it. However, this does make it both more important and more difficult to manage knowledge well. It is harder to share knowledge, and hence exploit it, in a dispersed organisation, and there is an increased risk both of knowledge hoarders and of duplication leading to possible loss of integrity and wasted effort. While competencies and their associated knowledge may be more effectively bought from business partners or outsourced if there are economies of scale, expertise or economic value, care must also be taken to avoid losing the knowledge on which core competencies are based or from which new competencies can be developed quickly.
This chapter addresses these aspects as follows. Firstly, a definition of virtual organisations is developed and related to the concept of virtual culture, which is the organisational embodiment of its virtuality. This may take a variety of different virtual models which will reflect the strength and structure of inter-organisational links. The chapter identifies six virtual models—the Virtual Alliance Models (VAM)—and suggests that each of these will operate along a continuum and within a framework of dynamic change. In order to maximise the value derived from the VAM, the organisation needs to ensure that there is consistency between the alignment of its Virtual Strategic Positioning and the VAM and the organisation and management of internal and external virtual characteristics. The ability of the organisation to change from one VAM to another, or to extend itself as a virtual entity, will reflect the extent to which an understanding of these concepts has been embedded into the knowledge management of the virtual organisation as a Virtual Organisational Change Model (VOCM). Managing these change factors is essential to gain and maintain strategic advantage and to derive virtual value (Burn and Barnett, 1999). The authors expand these concepts by using case examples of organisations using SAP and illustrate the three levels of development mode – virtual work, virtual sourcing and virtual encounters – and their relationship to knowledge management, individually, organisationally and community-wide through the exploitation of ICT.
VIRTUAL ORGANISATIONS AND VIRTUAL CULTURES

Virtual organisations are electronically networked organisations that transcend conventional organisational boundaries (Barner, 1996; Berger, 1996; Rogers, 1996), with linkages which may exist both within (Davidow and Malone, 1992) and between organisations (Goldman, Nagel and Preiss, 1995). In its simplest form, however, virtuality exists where IT is used to enhance organisational activities while reducing the need for physical or formalised structures (Greiner and Mates, 1996). Degrees of virtuality (the extent to which the organisation operates in a virtual rather than physical mode) then exist which will reflect:
• The virtual organisational culture (strategic positioning)
• The virtual network (the intensity of linkages and the nature of the bonds which tie the stakeholders together as internal and external structures)
• The virtual market (IT dependency and resource infrastructure, product, customer)
Culture is the degree to which members of a community have common shared values and beliefs (Schein, 1990). Tushman and O'Reilly (1996) suggest that organisational cultures that are accepting of technology, highly decentralised, and change oriented are more likely to embrace virtuality and proactively seek these opportunities both within and without the organisation. Virtual culture is hence a perception of the entire virtual organisation (including its infrastructure and product) held by its stakeholder community, and operationalised in choices and actions which result in a feeling of globalness with respect to value sharing (e.g., each client's expectations are satisfied in the product accessed) and time-space arrangement (e.g., each stakeholder has the feeling of continuous access to the organisation and its products). The embodiment of this culture comes through the Virtual Strategic Perspective (VSP) which the organisation adopts. Networks can be groups of organisations but also groups within organisations where the development and maintenance of communicative relationships is paramount to the successful evolution of a virtual entity (Ahuja and Carley, 1998). However, the ability to establish multiple alliances and the need to retain a particular identity creates a constant tension between autonomy and interdependence, competition and cooperation (Nouwens and Bouwman, 1995). These relationships are often described as value-added partnerships based on horizontal, vertical or symbiotic relationships. These in turn relate to competitors, value chain collaborators and complementary providers of goods and services, all of whom combine to achieve competitive advantage over organisations outside these networks. The nature of the alliances which form the virtual organisation, their strength and their substitutability define the inherent virtual structure. Markets differ from networks since markets are traditionally coordinated by pricing mechanisms. In this sense, the electronic market is no different, but further, "central to the conceptualisations of the electronic marketplace is the ability of any buyer or seller to interconnect with a network to offer wares or shop for goods and services. Hence, ubiquity is by definition a prerequisite" (Steinfield, Kraut and Plummer, 1995). There are different risks associated with being a market-maker and a market-player, and different products will also carry different risks. Criteria for successful electronic market development include products with low asset specificity and ease of description, and a consumer market willing to buy without recourse to visiting retail stores (Wigand and Benjamin, 1995).
Figure 1: Virtual Organisations and Virtual Cultures – the virtual organisation culture sits within an e-business culture, which in turn sits within the e-market culture.
Necessarily, the most important asset of an electronic market is the availability of pervasive ICT infrastructures providing a critical mass of customers. A virtual organisation is both constrained and supported by the electronic market in which it operates and the stage to which its business environment has developed as an e-business. Figure 1 shows this set of relationships. Despite the growth of on-line activity, many firms are nervous of the risks involved and fear a general deterioration of profit margins coupled with a relinquishment of market control (Burn and Barnett, 1999). Nevertheless, as existing organisations are challenged by new entrants using direct channels to undercut prices and increase market share, solutions have to be found that enable organisations to successfully migrate into the electronic market. The authors suggest that there are six different models of virtuality which may be appropriate.
MODELS OF VIRTUALITY

This section identifies six different forms of virtual organisations:
• Virtual faces
• Co-alliance models
• Star-alliance models – core or satellite
• Value-alliance models – stars or constellations
• Market-alliance models
• Virtual brokers
Figure 2: The Virtual Face

Put simply, virtual faces are the cyberspace incarnations of an existing non-virtual organisation (often described as a "place" as opposed to "space" organisation, Rayport and Sviokla, 1995) and create additional value, such as enabling users to carry out the same transactions over the Internet as they could otherwise do by using telephone or fax, e.g., Fleurop selling flowers or Travelocity selling air tickets. The services may, however, reach far beyond this, enabling the virtual face to mirror the whole activities of the parent organisation and even extend these, e.g., the Web-based versions of television channels and newspapers with constant news updates and archival searches. Alternatively they may just extend the scope of activities by use of facilities such as electronic procurement, contract tendering or even electronic auctions, or extend market scope by participating in an electronic mall with or without added enrichment such as a common payment mechanism. There is obviously an extremely tight link between the virtual face and the parent organisation. This model can be actualised as an e-shop, e-auction or even e-mall.

Figure 3: Co-alliance Model

Co-alliance models are shared partnerships with each partner bringing approximately equal amounts of commitment to the virtual organisation, thus forming a consortium. The composition of the consortium may change to reflect market opportunities or the core competencies of each member (Preiss, Goldman and Nagel, 1996). Focus can be on specific functions such as collaborative design or engineering, or on providing virtual support with a virtual team of consultants. Links within the co-alliance are normally contractual for more permanent alliances or by mutual convenience on a project-by-project basis. There is not normally a high degree of substitutability within the life of that virtual creation.

Figure 4: Star-alliance Model

Star-alliance models are coordinated networks of interconnected members reflecting a core surrounded by satellite organisations. The core comprises leaders who are the dominant players in the market and supply competency or expertise to members. These alliances are commonly based around similar industries or company types. While this form is a true network, typically the star or leader is identified with the virtual face and so the core
organisation is very difficult to replace, whereas the satellites may have a far greater level of substitutability.

Figure 5: Value-alliance Model

Value-alliance models bring together a range of products, services and facilities in one package and are based on the value or supply chain model. Participants may come together on a project-by-project basis but generally coordination is provided by the general contractor. Where longer term relationships have developed, the value alliance often adopts the form of value constellations, where firms supply each of the companies in the value chain and a complex and continuing set of strategic relationships are embedded into the alliance. Substitutability will relate to the positioning on the value chain and the reciprocity of the relationship.

Figure 6: Market-alliance Model

Market-alliances are organisations that exist primarily in cyberspace, depend on their member organisations for the provision of actual products and services, and operate in an electronic market. Normally they bring together a range of products, services and facilities in one package, each of which may be offered separately by individual organisations. In some cases the market is open and in others serves as an intermediary. These can also be described as virtual communities, but a virtual community can be an add-on such as exists in an e-mall rather than a cyberspace organisation perceived as a virtual organisation. Amazon.com is a prime example of a market-alliance model where substitutability of links is very high.

Figure 7: Virtual Broker

Virtual brokers are designers of dynamic
networks (Miles and Snow, 1986). These prescribe additional strategic opportunities, either as third-party value-added suppliers such as in the case of common Web marketing events (e-Xmas) or as information brokers providing a virtual structure around specific business information services (Timmers, 1998). This model has the highest level of flexibility, with purpose-built virtual organisations created to fill a window of opportunity and dissolved when that window is closed. As discussed previously, each of these alliances carries with it a set of tensions related to autonomy and interdependence. Virtual culture is the strategic hub around which virtual relationships are formed and virtual links implemented. In order to be flexible, links must be substitutable, to allow the creation of new competencies, but links must be established and maintained if the organisation is going to fully leverage community expertise. This presents a dichotomy. The degree to which virtuality can be implemented effectively relates to the strength of existing organisational links (virtual and non-virtual) and the relationship which these impose on the virtual structure. However, as essentially networked organisations they will be constrained by the extent to which they are able to redefine or extend their virtual linkages. Where existing linkages are strong (e.g., co-location, shared culture, synchronicity of work and shared risk (reciprocity)), these will both reduce the need for, or perceived benefits from, substitutable linkages and inhibit the development of further virtual linkages. Figure 8 provides a diagrammatic representation of these tensions and their interaction with the Virtual Alliance Models (VAM). These six models are not exclusive but are intended to serve as a way of classifying the diversity of forms which an electronic business model may assume. Some of these are essentially an electronic re-implementation of traditional forms of doing business, others are add-ons for added value possibly through umbrella collaboration, and others go far beyond this through value chain integration or cyber communities. What all of these have in
Figure 8: Virtual Alliance Models. Virtuality increases with the autonomy and substitutability of virtual links and decreases with the interdependence and strength of organisational links (co-location, culture, synchronicity, shared risks); the models range from the virtual face, through the co-alliance, star-alliance (core and satellite), value-alliance and market-alliance models, to the virtual broker.
common is that they now seek innovative ways to add value through information and change management and a rich functionality. Creating value through virtuality is only feasible if the processes which support such innovations are clearly understood.
VIRTUAL ORGANISATIONAL CHANGE MODEL

These six forms of virtual organisations all operate within a dynamic environment where their ability to change will determine the extent to which they can survive in a competitive market. Organisational theorists suggest that the ability of an organisation to change relates to internal and external factors (Miles and Snow, 1986), including the organisation's technology, structure and strategy, tasks and management processes, individual skills and roles, and culture (DeLisi, 1990; Venkatraman, 1994), and the business in which the organisation operates and the degree of uncertainty in the environment (Donaldson, 1995). These factors are also relevant to virtual organisations but need further refinement. Moore (1997) suggests that businesses are not just members of certain industries but parts of a complex ecosystem that incorporates bundles of different industries.
Table 1: E-Market Ecosystem

Ecosystem Stage  | Leadership Challenges             | Cooperative Challenges                        | Competitive Challenges
Birth            | Maximise customer delivered value | Find and create new value in an efficient way | Protect your ideas
Expansion        | Attract critical mass of buyers   | Work with suppliers and partners              | Ensure market standard approach
Authority        | Lead co-evolution                 | Provide compelling vision for the future      | Maintain strong bargaining power
Renewal or Death | Innovate or perish                | Work with innovators                          | Develop and ...
The driving force is not pure competition but co-evolution. The system is seen as "an economic community supported by a foundation of interacting organisations and individuals. Over time they coevolve their capabilities and roles, and tend to align themselves with the direction set by one or more central companies" (p. 26). The ecosystems evolve through four distinct stages:
• Birth
• Expansion
• Authority
• Renewal or Death
At each of these stages the system faces different leadership, cooperative and competitive challenges (see Table 1). This ecosystem can be viewed as the all-embracing electronic market culture within which the e-business maintains an equilibrium. The organisational "virtual culture" is the degree to which the organisation adopts virtual organising, and this in turn will affect the individual skills, tasks and roles throughout all levels of the organisation. Venkatraman and Henderson (1998) identify three vectors of virtual organising:
• Virtual Encounters
• Virtual Sourcing
• Virtual Work
Virtual encounters refers to the extent to which the organisation virtually interacts with the market, defined at three levels of greater virtual progression:
• Remote product/service experience
• Product/service customisation
• Shaping customer solutions
Virtual sourcing refers to competency leveraging from:
• Efficient sourcing of standard components
• Efficient asset leverage in the business network
• New competencies through alliances
Virtual work refers to:
• Maximising individual experience
• Harnessing organisational expertise
• Leveraging of community expertise
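The three vectors and their stages can be restated as a small lookup structure; this only re-encodes the lists above (Venkatraman & Henderson, 1998) and is not code from the chapter.

```python
# The three vectors of virtual organising and their three stages of
# progression (Venkatraman & Henderson, 1998), restated as a lookup table.

VIRTUAL_ORGANISING = {
    "virtual encounters": [
        "remote product/service experience",
        "product/service customisation",
        "shaping customer solutions",
    ],
    "virtual sourcing": [
        "efficient sourcing of standard components",
        "efficient asset leverage in the business network",
        "new competencies through alliances",
    ],
    "virtual work": [
        "maximising individual experience",
        "harnessing organisational expertise",
        "leveraging of community expertise",
    ],
}

def stage(vector: str, level: int) -> str:
    """Return the description of a vector at level 1-3."""
    return VIRTUAL_ORGANISING[vector][level - 1]

print(stage("virtual work", 3))  # leveraging of community expertise
```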
Figure 9 is an adaptation of the 'Virtual Organising' model proposed by Venkatraman and Henderson (1998). The component parts of this chapter have been embedded into their original diagram. As a holistic model, it summarises the way the four dimensions (activities) work together with synergy to enable an ERP organisation to deliver information-rich products and services, and thus sustainable competitive advantage. Observe that value and complexity increase for each activity as you step up the axes away from the origin. The small triangle, as it moves away from the origin, represents an ERP organisation able to deliver its products and services with increased value. As organisations step up the ICT axis, there is a cause and effect or pull of 'enabling technologies' on the other axes. This is illustrated in the model by a shift of the small triangle (the ERP organisation) away from the origin along
Figure 9: Information-rich products and services by ERP organisations. The axes are information and communication technology (Internet site and 'value triad', intranet value chain, autonomous software agents), knowledge management (individual expertise, organisational expertise, community expertise) and competency leverage; progression along them moves the organisation through knowledge value networks and knowledge-empowered service towards electronic consultative commerce, supported by information 'pull', consultation/cooperation and collaboration.
this axis. It also means a shift to higher levels in the other three dimensions of competency, management, and market behaviour, thus migrating the organisation towards an electronic consultative enterprise. Furthermore, there is the potential to take the organisation beyond an electronic consultative enterprise, where collaboration and competition are in tension with each other at all levels. To obtain returns on investment, the networked or virtual organisation must establish explicit processes to increase collaboration and to facilitate the flow of knowledge throughout the enterprise. In an organisation where an Enterprise Resource Planning (ERP) package is used to align usage of ICT with the virtual alliance model, the stages of development involved in virtual organising are shown in Figure 9, where the third levels all relate to an organisation with an "information rich" product and the highest degree of use of ICT.
Figure 10: Virtual Organisational Change Model (VOCM). Within the electronic market ecosystem, the e-business aligns strategy, structural alliances, knowledge management and ICT around its virtual culture.
If we view this as the virtual culture of the organisation, then it needs to be articulated through the strategic positioning of the organisation and its structural alliances. It also needs to be supported by the knowledge management processes and the ICT. These relationships are depicted in a dynamic virtual organisational change model, as shown in Figure 10. The degree to which virtuality can be applied in the organisation will relate to the extent to which the VOCM factors are in alignment. When these are not aligned, the organisation will find itself dysfunctional in its exploitation of the virtual marketspace and so be unable to derive the maximum value benefits from its strategic position in the VAM framework. The organisation needs to examine the VOCM factors in order to evaluate effectiveness and identify variables for change, either within that VAM or to move beyond that VAM according to the virtual culture. Change directions should be value led, but there is as yet very little empirical research to identify how value is derived in a virtual organisation, and even less to identify how that knowledge should be built into the management of the virtual organisation. For virtual organisations, performance measurements must cross organisational boundaries and take collaboration into account, but it is also necessary to measure value at the individual level, since it is feasible that one could be effective without the other (Provan and Milward, 1995).
VIRTUAL KNOWLEDGE MANAGEMENT DEVELOPMENT MODELS

This new world of knowledge-based industries is distinguished by its emphasis on precognition and adaptation, in contrast to the traditional emphasis on optimisation based on prediction. The environment is characterised by radical and discontinuous change, demanding anticipatory responses from organisation members and leading to a faster cycle of knowledge creation and action (Denison and Mishra, 1995). Knowledge management is concerned with recognising and managing all of an organisation's intellectual assets to meet business objectives. It "caters to the critical issues of organisational adaptation, survival and competence in the face of increasingly discontinuous environmental change. Essentially, it embodies organisational processes that seek synergistic combination of data and information processing capacity of information technologies, and the creative and innovative capacity of human beings" (Malhotra, 1997).
Knowledge does not come from processes or activities; it comes from people and communities of people. An organisation needs to know what knowledge it has and what knowledge it requires – both tacit and formulated – who knows about what, who needs to know, and the importance of the knowledge to the organisation and the risks attached. The goal of a knowledge management strategy should be to understand the presence of knowledge communities and the various channels of knowledge sharing within and between them, and to apply ICT appropriately. This takes place at the level of the individual, networks of knowledge within the organisation, and community networks.
EMPOWERING THE INDIVIDUAL

Figure 11: Deploying Web Technology. Individual users pull information from the technology infrastructure; return on investment (ROI) follows from this information pull.
Table 2: A Summary of Traits of Knowledge Workers

          | INDIVIDUAL                    | TEAM                             | ORGANISATION
behaviour | Learning                      | Sharing                          | Codifying
beliefs   | I am responsible for learning | My knowledge grows when it flows | My company benefits from my knowledge
values    | Self-esteem                   | Respect                          | Trust
The key characteristic of ICT is that it enables a shift in the control of information flow from the information creators to the information users (Telleen, 1996). Individuals using the Web are able to select the information they want, a model of retrieval referred to as information pull. This contrasts with the old 'broadcast' technique of information push, where the information is sent to them 'just-in-case', normally determined by a prescribed list. Such technology empowers individuals (Figure 11).
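The contrast between push and pull can be made concrete with a toy sketch; both functions and the sample data below are invented for illustration.

```python
# Toy contrast between information push ('broadcast' to a prescribed list,
# just-in-case) and information pull (user-initiated retrieval).

DOCUMENTS = {
    "hr-policy": "Holiday policy ...",
    "sales-q3": "Q3 sales figures ...",
    "it-faq": "How to reset a password ...",
}

def push(distribution_list):
    """Creator-controlled: every user receives every document."""
    return {user: list(DOCUMENTS.values()) for user in distribution_list}

def pull(query):
    """User-controlled: the user selects only what is wanted."""
    return {k: v for k, v in DOCUMENTS.items() if query in k}

print(len(push(["ann", "bob"])["ann"]))  # 3 documents, wanted or not
print(pull("sales"))                     # only the document the user asked for
```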
For success in deploying ICT, management needs to focus on internal effectiveness, in particular on effective integration of the technology into the enterprise infrastructure and a shift in the control of information flow to the users. Being effective, and not just efficient (high ROI), requires not only a new information infrastructure but also a shift in individual attitudes and organisational culture. This can be summarised as in Table 2. To supplement the ideas expressed in Figure 11, Gonzalez (1998) gives two key factors for successful intranet development. Firstly, the intranet must fulfill its value proposition. Secondly, employees must want to pull content to themselves. Here the term value proposition is used to expand the requirements for a successful web site, which must:
• satisfy employees' communication and information needs, e.g., help me do my job better,
• possess outstanding product features, e.g., intuitive navigation and a visually pleasing design,
• exhibit operating excellence, e.g., be convenient and reliable.
These three elements, referred to as the Value Triad, work together to create a value proposition. If any one of the three is weak or fails, then the value proposition is reduced.
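One way to read the claim that a weak element reduces the whole proposition is as a multiplicative score; this scoring rule is our illustrative assumption, not Gonzalez's (1998) formulation.

```python
# Illustrative reading of the Value Triad: if any of the three elements is
# weak (score near 0), the overall value proposition collapses. The
# multiplicative rule is an assumption made purely for illustration.

def value_proposition(needs: float, features: float, operations: float) -> float:
    """Each element is scored in [0, 1]."""
    return needs * features * operations

print(value_proposition(0.9, 0.9, 0.9))  # strong across the triad -> 0.729
print(value_proposition(0.9, 0.9, 0.1))  # one weak element -> 0.081
```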
KNOWLEDGE VALUE NETWORKS

Prior to the development of the Internet, manufacturing companies successfully utilised the value chain approach to increase their ability to compete. Faced with increasing cost pressure from global competitors with significantly more favourable labour costs, companies understood that pure price competition was not a viable option. Through the use of the value chain model, companies determined that speed and service would offer the best hope for continued success and growth.
Figure 12: Electronic Consultative Commerce (adapted from Carlson, 1995). Autonomous software agents link the business functions (production, financials, marketing, distribution, HRM, customer service and customer support) with customers.
But are they able to sustain their success? The sustainable competitive advantage of the firm derives from the "synergy" of the firm's various capabilities. Porter (1985) has proposed a similar concept in his notion of "complementarities." He argues that the various competitive capabilities of the firm should be "complementary" or "synergistic" so that the synergy resulting from them cannot be easily imitated by current or potential competitors. Carlson (1995) uses the idea of synergy to develop a 'totally new' model called the Value Network. This model involves creating a shared knowledge infrastructure that enables and "harnesses the flow of knowledge within and between communities". The premise used here is that sustainable competitive advantage can only be attained through a careful integration of activities in a firm's value chain (network), with knowledge being the basis for this activity integration. Whereas a chain implies sequential flow, a network carries a "connotation of multidimensional interconnectedness". He has developed a model for guiding or managing the change of an old-world enterprise through three stages of migration to a knowledge-based enterprise that is able to deliver information-rich products and services, namely:
• Knowledge Value Networks—extend the value chain model for competitive advantage to a highly interconnected internet of knowledge flows;
• Knowledge Empowered Service—builds on the value network, enabling customer service representatives to become more effective agents of the organization by giving them better access to the shared knowledge;
• Electronic Consultative Commerce—creates competitive advantage by taking e-commerce to the next higher plane, where customers have direct access to the organization's intelligence.
The knowledge value network and knowledge empowered service are the first steps towards electronic consultative commerce. With electronic consultative commerce, a customer would engage in a collaborative process, where human and computer software agents both perform tasks, monitor events and initiate communication. Figure 12 illustrates how the various communication links or channels are assisted by software agents and/or human consultants. The various channels for doing business are usually categorised as consumer-to-business, business-to-business, and intranet employee-relationship interactions.
Together they contribute to an increasing level of knowledge creation. Many organisations are expanding the role of consultative customer interaction beyond sales to include consultation in every customer contact. For example, SAPNet is SAP's main medium for information and communication between SAP and its customers and partners. SAPNet contains nearly everything you may wish to know about SAP AG: products and processes, partners and solutions, news groups and SIGs. Most of these roles can be supported, at least partially, with a simple Internet site design that includes an underlying information base and a consultative interaction (SAPNet, 1998). However, more advanced solutions are being developed that employ the knowledge-based system technology traditionally found in expert systems. One way to bring consultative sales directly to the customer is through the use of autonomous (software) agents that provide assistance to users: 'Instead of user-initiated interaction via commands and/or direct manipulation, the user is engaged in a cooperative process in which human and computer agents both initiate communication, monitor events and perform tasks' (Carlson, 1995). Market niche players like Radnet provide tools and expertise to develop collaborative enterprise applications that fully leverage relational database technology and the entire range of intranet, extranet, and Internet standards. Radnet's InternetShare products and technology help companies realise their competitive advantage by bringing advanced Internet-based collaborative features to their core product offerings (Radnet, 1998). Autonomous agents can make decisions on a set of options learned from past experiences; they are thus fundamentally knowledge-based and belong to the world of artificial intelligence. These agents can be classified into two types: business agents that perform tasks based on business practices, and learning agents that act as electronic teachers. For example, business agents can search product catalogs or 'smart catalogs', while learning agents can assist in configuring products of all combinations, all accessible via an Internet browser (Curran and Keller, 1998). It is often appropriate, and necessary, for organisations to supplement an electronic commerce strategy with human involvement. A software agent underlying the customer's system interface determines when human assistance is necessary and automatically establishes a telephone connection with a service representative. A more advanced use of learning agents for product configuration can be extended to solve problems associated with R/3 installations. Learning agents have the capacity to filter out irrelevant detail and deliver the most appropriate information for the user to learn, addressing the problem of information overload. R/3 installation
learning agents would greatly reduce the time for business consultants to implement R/3 as well as radically change the way industry-specific applications are deployed. In response to this problem SAP has developed employee self-service intranet application components that deliver preconfigured access to R/3 application servers, making implementation simple and fast (SAP, 1998). Ultimately, learning agents will enable non-technical employees to configure new business processes. This assumes that IT specialists and employees come together to perform the activities of the value chain, so that it becomes possible for users to play a part in enterprise reengineering. Furthermore, this merging of roles represents a change in ownership of the electronic consultative enterprise's business processes. There are many claims about enabling technologies which can help to capture and leverage knowledge within the organisation, but little is said about explicit knowledge-sharing strategies. Although knowledge is a strategic asset (Eisenhardt and Schoonhoven, 1996; Winter, 1987), embedded or tacit knowledge is difficult to transfer and also vulnerable to loss through misfortune or asset transfers and terminations. Such an important asset should be cultivated and measured, but this becomes an impossible task without trust and a close relationship at all levels of the organisation (Scott and Gable, 1997; Badaracco, 1991). This is particularly true of the virtual organisation. To leverage the benefits of supply chain modeling and management via the Internet, you need to be aware of the influences beyond your company. Success in exposing your business partners to enterprise systems depends as much on people issues (trust, understanding and communication) as it does on technology (Chirgwin, 1998). This implies a shared vision of culture across all levels of the enterprise.
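As a toy illustration of the agent-initiated escalation described above (our own construction; the keyword matching and threshold are invented assumptions, not SAP's or Carlson's design), a software agent might decide to hand the customer over to a human representative as follows:

class SupportAgent:
    """Hypothetical sketch: answer simple questions, escalate when stuck."""
    def __init__(self, max_failed_answers: int = 2):
        self.failed_answers = 0
        self.max_failed_answers = max_failed_answers

    def answer(self, question: str) -> str:
        # A trivial stand-in for the agent's knowledge base.
        known = {"price": "See the online catalog.", "status": "Your order has shipped."}
        for keyword, reply in known.items():
            if keyword in question.lower():
                return reply
        self.failed_answers += 1
        if self.failed_answers > self.max_failed_answers:
            return self.escalate()
        return "Sorry, could you rephrase that?"

    def escalate(self) -> str:
        # In the scenario above this would establish a telephone connection.
        return "Connecting you to a service representative..."

agent = SupportAgent()
print(agent.answer("What is the price?"))
print(agent.answer("My widget hums"))
print(agent.answer("It still hums"))
print(agent.answer("Now it rattles"))  # third unanswered question triggers escalation

The point of the sketch is only that the decision to involve a human is made by the software agent itself, in the background, rather than by the customer.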
CONCLUSIONS
The virtual organisation is recognised as a dynamic form of interorganisational system (Burn, Marshall and Wild, 1999) and hence one where traditional hierarchical forms of management and control may not apply. Little, however, has been written about the new forms which management and control might take, other than to espouse a "knowledge management" approach. Managing knowledge about what? In an organisation where change is the only constant, there has to be a system which can capture the organisational core competencies and leverage these to provide strategic advantage. This may be a competitive advantage or a strategic advantage in collaboration with the competition. Knowledge has become the major asset of the organisation, and its recording, communication and management
deserve attention. Without the ability to identify who has the key information, who the experts are, and who needs to be consulted, management decisions are unlikely to be optimal. Both the importance and the difficulty of the issue are magnified by virtuality in the form of decentralisation and dispersion, empowerment and continual change. In interdependent organisations the synergy of knowledge may be the principal benefit of the interdependence, and the issue is again magnified. In dispersed organisations more conscious efforts and explicit procedures are needed: skills may not be available where they are wanted, and data may not be shared, or may be used inefficiently or wrongly. New skills need to be developed quickly, and employees will have to take personal responsibility for their own knowledge development. This implies that the virtual organisation will need a number of managers with converging expertise in the areas identified within the VOCM. There may no longer be a separate ICT or knowledge management function. Indeed there may no longer be any management function which does not explicitly demand expertise in these areas. The implications for IS professionals are quite frightening. Whole areas of new skills need to be acquired, and these skills are themselves constantly in a process of development, demanding continual updates. We are still struggling with the information age as we are being thrust into the knowledge age, but without the intermediation services to support this. Opportunities abound for skilled IS professionals at every level of the organisation, but this must be supported by an on-going education programme at the heart of every organisation. The virtual organisation that succeeds will be the learning organisation, where people are regarded as its greatest core asset.
REFERENCES
Ahuja, M. K. and Carley, K. M. (1998). Network structure in virtual organizations. Journal of Computer-Mediated Communication [On-line], 3(4). Available: http://www.ascusc.org/jcmc/vol3/issue4/ahuja.html.
Aldridge, D. (1998). Purchasing on the Net: The New Opportunities for Electronic Commerce. EM – Electronic Markets, 8(1), 34-37.
Badaracco, J. L. (1996). The Knowledge Links. In Myers, P. S. (Ed.), Knowledge Management and Organisation Design. Butterworth-Heinemann, USA, 133.
Barner, R. (1996). The New Millennium Workplace: Seven changes that will challenge managers and workers. Futurist, 30(2), 14-18.
Berger, M. (1996). Making the Virtual Office a Reality. Sales and Marketing Management, SMT Supplement, June, 18-22.
Burn, J. M. and Barnett, M. L. (1999). Communicating for Advantage in the Virtual Organisation. IEEE Transactions on Professional Communication, 42(4), 1-8.
Burn, J., Marshall, P. and Wild, M. (1999). Managing Change in the Virtual Organisation. ECIS, Copenhagen, Denmark, Vol. 1, 40-54.
Byrne, J. (1993). The Virtual Corporation. Business Week, 36-41.
Carlson, D. A. (1995). Harnessing the Flow of Knowledge. [http://www.dimensional.com/~dcarlson/papers/KnowFlow.htm].
Chirgwin, R. (1998). The Culture of the Model Enterprise. Systems, February, Australia, 14-22.
Chesbrough, H. W. and Teece, D. J. (1996). When is Virtual Virtuous? Harvard Business Review, Jan-Feb, 65-73.
Davidow, W. H. and Malone, M. S. (1992). The Virtual Corporation. New York: Harper Business.
DeLisi, P. S. (1990). Lessons from the Steel Axe: Culture, Technology and Organisation Change. Sloan Management Review.
Denison, D. R. and Mishra, A. K. (1995). Toward a Theory of Organizational Culture and Effectiveness. Organization Science, 6(2), March-April, 204-223.
Donaldson, L. (1995). American Anti-Management Theories of Organisation. Cambridge, UK: Cambridge University Press.
Eisenhardt, K. M. and Schoonhoven, C. B. (1996). Resource-Based View of Strategic Alliance Formation: Strategic and Social Effects in Entrepreneurial Firms. Organization Science, 7(2), March-April, 136-150.
Finnegan, P., Galliers, B. and Powell, P. (1998). Systems Planning in an Electronic Commerce Environment in Europe: Rethinking Current Approaches. EM – Electronic Markets, 8(2), 35-38.
Goldman, S. L., Nagel, R. N. and Preiss, K. (1995). Agile Competitors and Virtual Organisations: Strategies for Enriching the Customer. New York: Van Nostrand Reinhold.
Gonzalez, J. S. (1998). The 21st-Century INTRANET. Prentice-Hall, NJ, 189-215, 240.
Grabowski, M. and Roberts, K. H. (1996). Human and Organisational Error in Large Scale Systems. IEEE Transactions on Systems, Man and Cybernetics, 26(1), 2-16.
Gray, P. and Igbaria, M. (1996). The Virtual Society. ORMS Today, December, 44-48.
Greiner, R. and Metes, G. (1996). Going Virtual: Moving your Organisation into the 21st Century. Englewood Cliffs, NJ: Prentice Hall.
Hoffman, D. L., Novak, T. P. and Chatterjee, P. (1995). Commercial scenarios for the Web: Opportunities and challenges. Journal of Computer-Mediated Communication [On-line], 1(3). Available: http://www.ascusc.org/jcmc/vol1/issue3/hoffman.html.
Katzy, B. R. (1998). Design and Implementation of Virtual Organisations. Proceedings of HICSS, 44-48.
Malhotra, Y. (1997). Knowledge Management for the New World of Business. [http://www.brint.com/km/whatis.htm].
Miles, R. E. and Snow, C. C. (1986). Organisations: new concepts for new forms. California Management Review, 28(3), 2-73.
Moore, J. F. (1997). The Death of Competition: Leadership and Strategy in the Age of Business Ecosystems. New York: Harper Business.
Nouwens, J. and Bouwman, H. (1995). Living apart together in electronic commerce: The use of information and communication technology to create network organizations. Journal of Computer-Mediated Communication [On-line], 1(3). Available: http://www.ascusc.org/jcmc/vol1/issue3/nouwens.html.
Palmer, J. W. and Speier, C. (1998). Teams: Virtualness and Media Choice. Proceedings of HICSS, Vol. IV.
Porter, M. E. (1985). Competitive Advantage. Macmillan, NY.
Powell, W. W. (1990). Neither Market nor Hierarchy: Network Forms of Organisation. Research in Organisational Behaviour, 12, 295-336.
Preiss, K., Goldman, S. L. and Nagel, R. N. (1996). Cooperate to Compete. New York: Van Nostrand Reinhold.
Provan, K. and Milward, H. (1995). A Preliminary Theory of Inter-Organisational Network Effectiveness: A Comparative Study of Four Community Mental Health Systems. Administrative Science Quarterly, 14, 91-114.
Radnet (1998). [http://www.radnet.com/].
Rayport, J. F. and Sviokla, J. (1995). Exploiting the Virtual Value Chain. Harvard Business Review, 73(6), 75-86.
Rogers, D. M. (1996). The Challenge of Fifth Generation R and D. Research Technology Management, 39(4), 33-41.
SAP (1998). [http://www.sap.com/internet/].
SAPNet (1998). [http://www.sap.com/SAPNet/].
Schein, E. (1990). Organisational Culture. American Psychologist, 45(2), 109-119.
Scott, J. E. and Gable, G. (1997). Goal Congruence, Trust, and Organizational Culture: Strengthening Knowledge Links. ICIS 97 Proceedings, 107-119.
Steinfield, C., Kraut, R. and Plummer, A. (1995). The impact of electronic commerce on buyer-seller relationships. Journal of Computer-Mediated Communication [On-line], 1(3). Available: http://www.ascusc.org/jcmc/vol1/issue3/steinfld.html.
Swatman, P. M. C. and Swatman, P. A. (1992). EDI System Integration: A Definition and Literature Survey. The Information Society, 8, 165-205.
Telleen, S. L. (1996). Intranets and Adaptive Innovation. [http://www.amdahl.com/doc/products/bsg/intra/adapt.html].
Timmers, P. (1998). Business Models for Electronic Markets. EM – Electronic Markets, 8(2), 3-8.
Tushman, M. L. and O'Reilly III, C. A. (1996). Ambidextrous Organisations: Managing Evolutionary and Revolutionary Change. California Management Review, 38(4), 8-29.
Venkatraman, N. (1994). IT-Enabled Business Transformation: From Automation to Business Scope Redefinition. Sloan Management Review, Winter.
Venkatraman, N. and Henderson, J. C. (1998). Real Strategies for Virtual Organizing. Sloan Management Review, Fall, 33-48.
Wigand, R. T. and Benjamin, R. I. (1995). Electronic Commerce: Effects on electronic markets. Journal of Computer-Mediated Communication [On-line], 1(3). Available: http://www.ascusc.org/jcmc/vol1/issue3/wigand.html.
Winter, S. G. (1987). Knowledge and Competence as Strategic Assets. In Teece, D. J. (Ed.), The Competitive Challenge. Harper and Row, 159-184.
Chapter 17
Virtual Organizations That Cooperate and Compete: Managing the Risks of Knowledge Exchange

Claudia Loebbecke
Copenhagen Business School, Denmark

Paul C. van Fenema
Erasmus University, The Netherlands
'Co-opetition' describes the phenomenon of firms engaging in a virtual form of interaction in which they cooperate and compete with their counterparts. Such hybrid relationships challenge traditional notions of firm boundaries and strategic resource management. There seems to be a contradiction in the fact that partners are supposed to share knowledge which is, at the same time, a key determinant of their competitive advantage. This balancing act suggests the need for special competencies that enable companies to reap the benefits of temporary synergy while avoiding the risks associated with making knowledge available to external partners. This chapter explores the art of controlling knowledge flows in 'co-opetitive' relationships. We conceptualize types of knowledge flows and dependencies, resulting in four configurations. For each of these, risks in terms of deviations from the original agreement are examined. We propose control strategies that allow companies engaged in co-opetition to anticipate deviant trajectories and define adequate responses. Directions for future research on this topic are indicated.
Previously published in Knowledge Management and Virtual Organizations, edited by Yogesh Malhotra, Copyright © 2000, Idea Group Publishing.
THE VIRTUAL ECONOMY
Digital technologies are changing economic relationships for the exchange of products, services, and knowledge. Electronic interaction facilities and information environments complement and substitute for traditional business models for customer transactions (Venkatraman & Henderson, 1998). Clients start to experience the Internet as a vast resource of information and a facilitator of their consumption cycles, ranging from obtaining information on products, services and outlets a priori, to purchasing and ex post support (Schwartz, 1999). In turn, companies approach existing and new clientele on the Web with a digital identity and experience environment (Breen, 1999). Behind the emerging digital façade, organizations are changing their operations. 'Virtuality' impacts companies along two lines. First, companies start to operate in a distributed fashion. Electronic media and infrastructure allow employees to interact remotely on the same project or business process (Evaristo & van Fenema, 1999). Digital communication infrastructures make real-time and asynchronous connectivity possible, independent of the location of the actors involved (Dertouzos, 1999). New organizational forms emerge that translate the advantages of electronic communications into flexible modes for organizing work (DeSanctis & Fulk, 1999). Virtuality also has a second connotation that is different from, but often interacts with, the first one. It implies cooperation among multiple companies in such a way that a quasi-organizational entity emerges. Traditional business models assume that each firm is responsible for a well-defined and complete portion of the supply chain. This relative independence is transformed into a tissue of firms that are strongly connected. Market opportunities trigger combinatorial processes that result in ad hoc forms of cooperation (Meyerson, Weick, & Kramer, 1996). Each firm contributes interactively to a coherent, aggregated performance that individual organizations could not achieve (Goldman, Preiss, & Nagel, 1997). The intricate connectivity among contributing firms implies the exchange of valuable resources like knowledge and information. In this chapter, we are interested in organizations that form a quasi-single entity but have interests that partially diverge (Preiss, Goldman, & Nagel, 1997).
KNOWLEDGE EXCHANGE AND CO-OPETITION
Theorists adopting a resource-based approach to strategic management have emphasized a firm's need for unique, internal resources and competencies
(Nelson & Winter, 1982; Wernerfelt, 1984). Further refinements and extensions stress the role of corporate competencies in enabling dynamic adaptation and competitive advantage (Barney & Hesterley, 1996). Ever since the contribution of Penrose (1959), this approach has recognized the importance of knowledge as one of the supreme enablers of competitive differentiation. Recently, some papers in Strategic Management Journal, in particular the Winter Special Issue 1996, and in Organization Science proceeded along this route by investigating synergies of knowledge management and strategic management theory (Grant, 1996a). From multiple perspectives this growing body of literature contributes to our understanding of managing knowledge transfer, integration and creation within corporations (Nonaka & Takeuchi, 1995). In addition to intra-corporate knowledge sharing, some academics have started to investigate knowledge-sharing processes across organizational boundaries (Loebbecke & van Fenema, 1998; Wathne, Roos, & von Krogh, 1996). Knowledge sharing has been defined as "the transfer of useful know-how or information across company lines" (Appleyard, 1996: 138). Research on inter-organizational knowledge sharing recognizes the fact that firms are nowadays involved in multiple temporary or more permanent agreements for cooperation (Kodama, 1994). Organizations find temporary modes for leveraging knowledge as one of their primary resources. However, inter-organizational collaboration may confront companies with a paradox (Hamel, Doz, & Prahalad, 1989). On the one hand, reciprocal knowledge sharing may enhance the summed and individual added value: partners can translate unique, hardly accessible resources from their counterparts into new business opportunities. On the other hand, from a resource-based perspective, inter-firm knowledge sharing may affect the uniqueness, and thus the competitive contribution, of a firm's knowledge repository. Opportunistic behaviors of counterparts may erode anticipated benefits of cooperation and result in unevenly distributed value. In their book Co-opetition, Brandenburger and Nalebuff (1996) point to the potential tension of relationships where firms cooperate and compete, the latter possibly in other markets or at other points in time. Since companies increasingly open up to engage in these hybrid organization modes, it becomes important to understand and develop the phenomenon (Loebbecke, van Fenema, & Powell, 1999). The purpose of this chapter is to investigate strategies for controlling knowledge as one of the primary resources in 'co-opetitive' relationships. We investigate inter-firm collaboration involving knowledge with assumed operational and business value beyond the context of the cooperative agreement. We assume that both parties can translate the collaborative knowledge into
adjacent or overlapping business capabilities and hence exploit additional opportunities beyond the collaboration. This suggests partially diverging interests between collaborating partners and motivates the development of a strategic perspective on managing knowledge flows across organizational boundaries. We discuss background theory on inter-organizational governance and elaborate modes for controlling inter-firm transactions. We interpret the strategic issues and paradoxes of inter-firm knowledge sharing as a problem of coordinating and controlling the behaviors of people within the corporation as well as exchanges across organizational boundaries. The chapter then develops a concept for distinguishing knowledge flows, and presents four configurations of knowledge exchange in virtual organizations. For each of these we examine potential risks and control strategies.
GOVERNANCE OF INTER-FIRM TRANSACTIONS
Virtual organizations operate in a fluid environment with little enduring connectivity among participating firms. For that reason, transactions among these firms become a pivotal unit of analysis (Williamson, 1994). Transaction Cost Economics (TCE) has structured our understanding of transaction governance by contrasting markets and hierarchical forms of exchange (Williamson & Ouchi, 1981). Determinants of economic governance modes include environmental factors like uncertainty and the number of transactions. In addition, a set of assumptions concerning human behaviors plays a role, like bounded rationality and opportunism (Williamson, 1975). Market governance typically applies to situations where reciprocal performances are specified in detail (Williamson, 1994). Contracts thus facilitate the process of ensuring compliance between intended and actual exchange, leaving little room for opportunistic deviations (Ouchi, 1979).1 An alternative to market governance is the internal organization, characterized by extensive horizontal and vertical differentiation (Rice & Shook, 1990). The functioning of individuals who are part of such a 'bureaucracy' is closely prepared, tracked, and evaluated. Moreover, the availability of collective experience implies refined communication processes that allow actors to economize on problem solving. For that reason, internal governance forms are apt to handle transactions that concern incompletely specified activities (Williamson, 1975).
New Perspectives on Transaction Governance
Scholars have extended the original premises of TCE in several directions. First, the rational perspective on the operation of bureaucracies is complemented with insights from Japanese firms (Ouchi, 1979). Ouchi
suggests a clan mode of organizing, in which the selection and promotion of individuals is not only based on task-related competence, but also relies on their commitment to company goals. In the Academy of Management Review, Ghoshal and Moran (1996) critique the underlying assumption of TCE that individual behavior is driven by opportunism. They warn that organizations may translate this assumption into coercive control systems that rely on measurable behaviors and work outcomes. As a consequence, firms may retreat from work that requires fluid adaptation and instead focus on specifiable work. In turn, companies may fail to leverage one of their original advantages over markets: their capability to accomplish innovative work and achieve dynamic efficiency (Ghoshal & Moran, 1996; Williamson, 1991). Second, researchers nuance the opportunistic drive of firms operating in markets (Macneil, 1978). Organizations may decide to build sustainable relationships and focus on common interests (Kumar & van Dissel, 1996). Situations where instantaneous exchanges of tightly controlled performances are not feasible may necessitate closer examination of the counterpart's identity to still ensure quality (Ben-Porath, 1980). For example, transactions evolve over time and need reciprocal interactions among firms to identify expectations (Rousseau & McLean Parks, 1993; Thompson, 1967). This occurs in large, complicated projects where parts of the work are outsourced or even subcontracted to different firms (Bryman, Bresnen, Beardsworth, Ford, & Keil, 1987). Similarly, empirical research claims that relational contracts provide the primary means for governing transactions in regional network structures (Powell & Smith-Doerr, 1994), like the Italian industrial districts (Kumar, van Dissel, & Bielli, 1998; Lazerson, 1995). A third stream of research claims that firms mix elements of 'price' (market governance), 'authority' (hierarchy) and 'trust' (clan modes) to sculpt their internal operations and exchanges with other companies (Bradach & Eccles, 1989). Bradach elaborates examples of these hybrid or plural forms. His research on restaurant chains shows that these organizations combine an internal bureaucracy with a franchising network to create large numbers of outlets that have the same outward appearance to customers (Bradach, 1997). In fact, this quasi-single entity provides an example of a virtual organization, as different governance forms are combined to pursue (temporarily) shared business objectives. Finally, researchers have explored the variety of coordination mechanisms employed in inter-firm relationships (Grandori & Soda, 1995). Depending on contingencies like the type of workflow interdependence and structurability, organizations choose modes for interacting and planning exchanges (Grandori, 1997; Kumar & van Dissel, 1996).
CONTROL STRATEGIES
Coordination and control approaches have dominated organization theory, and are still at the core of scholarly thinking on organizational phenomena. Theorizing has long followed two separate lines of inquiry, with one group focussing on intra-corporate linkages (Chandler Jr. & Daems, 1979), and other scholars studying inter-firm strategies for managing transactions (Williamson, 1975). As indicated, the field is starting to intermingle both perspectives as more complex, hybrid forms emerge that combine elements of both (Bradach & Eccles, 1989). In the spirit of that emerging tradition, we categorize control strategies along four dimensions. We briefly introduce both intra- and inter-firm equivalents, to be used later when we investigate the control of knowledge exchanges.
Intra-organizational coordination and control refers to the mechanisms that structure, execute and evaluate organizational task accomplishment (Ching, Holsapple, & Whinston, 1992). Management of inter-firm tasks includes contractual formalization as well as inter-organizational roles like liaisons or project teams (Grandori, 1997).
Procedural strategies indicate a process of conceiving work beforehand, and documenting that understanding in formal boundary objects like schedules, plans, and generic work instructions (Star & Griesemer, 1989). The same principle returns in the case of inter-firm transactions; classical contracts govern exchanges that are "sharp in by clear agreement; sharp out by clear performance" (Macneil, 1974: 738). The fact that work is conceived and prescribed a priori implies that control efforts are simplified to monitoring for deviations in the actual execution of work (O'Reilly & Chatman, 1996).
Organizational structures stand for the design of roles that are interconnected and intended to enact the control process (Gupta, Dirsmith, & Fogarty, 1994). Within organizations, vertical control relationships are embedded in a managerial hierarchy, while lateral roles include peer assessment (McCann & Galbraith, 1981). The responsibility for inter-firm transactions is often exclusively delegated to liaisons or formal linking pins (Grandori, 1997).
Social control strategies refer to norms governing interpersonal communications in working relationships (Gabarro, 1990), groups (Barker, 1993) or organizations (Kunda, 1992). Actors shape norms for behaviors and monitor each other to ensure compliance (Schein, 1992). Organizations may also foster relationships across their boundaries as actors on both sides get to know each other (Ben-Porath, 1980) and can identify with their counterparts' preferences and interests (Bryman et al., 1987).
Technology supports the process of work definition, and the monitoring of actual behaviors and outputs. Traditional forms like mechanization (Edwards, 1981) have been complemented with advanced monitoring devices and integrated business applications like ERP (Orlikowski, 1991). Technology also supports inter-organizational transactions with EDI, supply chain applications, and access to intranets or databases (Kumar & van Dissel, 1996).
KNOWLEDGE-INTENSIVE TRANSACTIONS IN VIRTUAL ORGANIZATIONS
Research on strategic knowledge management has predominantly focused on cognitive processes within a firm's boundaries (Nonaka & Takeuchi, 1995). These processes include the creation of knowledge, making tacit knowledge explicit (Nonaka & Takeuchi, 1995), knowledge transfer (Szulanski, 1996) and knowledge integration (Grant, 1996b). The importance of knowledge management for intra-organizational processes equally applies to inter-firm transactions. Companies are engaged in diverse modes of external cooperation (Bradach & Eccles, 1989), and the life cycle of goods and services is becoming more knowledge intensive (Grant, 1996b). Our analysis of knowledge exchange in inter-firm relationships proceeds along three lines. First, we expand on the distinction between tacit and explicit knowledge, and provide examples for our argument. Second, the direction of knowledge flows between organizations is elaborated. Finally, we propose a model that combines these dimensions and provides a stepping-stone for analyzing risk control strategies.

Table 1. A model for inter-firm knowledge flows
• Explicit, structured knowledge flows / unidirectional knowledge sharing: Configuration 1 (outsourcing strategies: client-supplier software specifications).
• Explicit, structured knowledge flows / reciprocal knowledge sharing: Configuration 2 (exchange of complementary market research information between competitors).
• Tacit, non-structured knowledge flows / unidirectional knowledge sharing: Configuration 3 (client-supplier nexus in the automotive industry).
• Tacit, non-structured knowledge flows / reciprocal knowledge sharing: Configuration 4 (collaboration of R&D units in the semiconductor industry).

Tacit and Explicit Knowledge Flows
The literature provides a rich basis for exploring the different types and characteristics of knowledge. Defining the concept itself seems a challenge more feasible for philosophers and social scientists.
Hence, as an alternative, we pursue Grant's (1996b) suggestion to focus on types of knowledge and their consequences for managerial actions (Machlup, 1980). Knowledge is commonly distinguished into explicit and tacit knowledge, as initially proposed by Polanyi (1967). These two types have influenced subsequent conceptual and empirical research on strategic and organizational knowledge management (Kogut & Zander, 1992; Nonaka & Takeuchi, 1995). Explicit knowledge refers to concepts, information and insights that are specifiable, and that can thus be formalized in rules and procedures (Walsh & Dewar, 1987). Access, storage and transfer of this knowledge are achieved by corporate documents and information systems like databases. Examples include detailed engineering specifications for software development or product manufacturing, which capture and support inter-human communications (Star & Griesemer, 1989). On the other hand, implicit or tacit knowledge refers to less specifiable insights and skills which are carried in individuals' minds or embedded in an organizational context (Weick & Westley, 1996). Employees collectively develop and refine routines to achieve organizational adaptation and learning (Nelson & Winter, 1982). March and Simon (1958: 142) referred to 'programs' to describe these routines: "Most programs are stored in the minds of the employees who carry them out, or in the minds of superiors, subordinates, or associates." Understanding and transferring this type of knowledge depends on direct participation and inclusion in the context where it resides (Tyre & von Hippel, 1997). Researchers refer to this phenomenon as 'stickiness' (Szulanski, 1996), and point to the arduous process of explaining or even integrating tacit knowledge (Grant, 1996a). Exchanging tacit knowledge across organizational boundaries is supposed to exacerbate these issues, as professionals lack the set of commonly shared concepts and values provided by an organization's culture (Weick & Westley, 1996).
Direction of Knowledge Sharing
Inter-organizational knowledge sharing is achieved by patterns of transmitting and receiving information. These knowledge-based workflows may take the character of one-way traffic. For example, in an outsourcing agreement, clients share knowledge with their vendors to enable delivery of the product or service (Hamel et al., 1989). This does not necessarily mean that a reverse flow exists, that is, vendors sharing knowledge with their clients. We call this unidirectional knowledge sharing. One-way knowledge flows also occur in organizations like marketing research firms or news agencies, whose very business is selling knowledge and expertise.
On the other hand, in many cases the underlying logic of collaboration suggests bidirectional or reciprocal knowledge flows. The success of such cooperative endeavors relies on the integration of complementary knowledge and competencies. Hence, reciprocal sharing of knowledge is a principal determinant for reaping the anticipated benefits of cooperative synergies. These include taking advantage of complementary knowledge and synergistically creating knowledge. An example is the collaboration of R&D units where companies share costs by jointly investing in development and manufacturing facilities. Often, as in the semiconductor industry, collaboration is required because investments would exceed an individual firm's resources and require economies of scale. On an operational level, the different modes of knowledge exchange are associated with different types of workflow interdependencies (Thompson, 1967). Unidirectional, one-way knowledge flows are of a pooled or sequential nature. They comprise subsequent steps of identifying and transferring, in a single direction, previously agreed-upon knowledge and information. On the other hand, organizations engaged in reciprocal knowledge sharing face more complicated workflows. Managing these requires inter-firm taskforces of professionals to elaborate and control knowledge exchanges. The work of such a team flows back and forth between both organizations and has been referred to as reciprocal (Thompson, 1967) or team interdependence (Van de Ven, Delbecq, & Koenig Jr., 1976). This intricate mode of cooperation implies that specifying the scope and content of the flows is often not feasible (Kumar & van Dissel, 1996).

[Figure 1: Controlling the unidirectional flow of structured knowledge. Teams A1-A3 at Company A channel outgoing knowledge through a single coordinator, who alone interfaces with teams B1-B3 at Company B.]

Model for Analysis
When the two dimensions are combined, an interesting model emerges, as depicted in Table 1. We refer to each type of interaction among the variables as a configuration. The use of configurations for investigating organizational phenomena is a common approach in organization science (Meyer, Tsui, & Hinings, 1993). Scholars like Burns and Stalker (1961) and Mintzberg (1979) have built typologies of organizational forms. Choosing relevant variables, they reduce real-life complexity to a limited set of templates. In our case, each
configuration epitomizes how virtual organizations can be interconnected. Their distinct properties have different implications for potential risks and control strategies. After the next section, we explore and illustrate each configuration in turn.
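As a purely illustrative aid (our own construction, not part of the chapter; all names are hypothetical), the following Python sketch encodes the two dimensions of Table 1 and expresses a deviation trajectory, of the kind examined in the next sections, as a shift along one dimension:

from dataclasses import dataclass
from enum import Enum

class Knowledge(Enum):
    EXPLICIT = "explicit, structured"
    TACIT = "tacit, non-structured"

class Direction(Enum):
    UNIDIRECTIONAL = "unidirectional"
    RECIPROCAL = "reciprocal"

# Numbering follows Table 1: rows are knowledge types, columns are directions.
CONFIGURATION = {
    (Knowledge.EXPLICIT, Direction.UNIDIRECTIONAL): 1,
    (Knowledge.EXPLICIT, Direction.RECIPROCAL): 2,
    (Knowledge.TACIT, Direction.UNIDIRECTIONAL): 3,
    (Knowledge.TACIT, Direction.RECIPROCAL): 4,
}

@dataclass(frozen=True)
class Flow:
    knowledge: Knowledge
    direction: Direction

    @property
    def configuration(self) -> int:
        return CONFIGURATION[(self.knowledge, self.direction)]

# A deviation trajectory is a shift along one dimension: here a vendor in an
# outsourcing deal (Configuration 1) pulls tacit know-how on top of the agreed
# explicit specifications, moving the relationship towards Configuration 3.
agreed = Flow(Knowledge.EXPLICIT, Direction.UNIDIRECTIONAL)
actual = Flow(Knowledge.TACIT, agreed.direction)
print(agreed.configuration, "->", actual.configuration)  # prints: 1 -> 3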
CO-OPETITION AND THE RISKS OF KNOWLEDGE FLOWS
Virtual organizations involved in cooperative-cum-competitive relationships may experience deviation between intended and actual knowledge flows. Be it deliberately or unconsciously, parties may have different perspectives on the direction and boundaries of the knowledge component in their exchange relationship. Understanding these risks is important to avoid an undesirable distribution of valuable knowledge at the end of the cooperative life cycle. Here we assess, for the dimensions presented in Table 1, potential deviations between the originally assumed and the actually evolving knowledge interactions.
Explicit Knowledge Flows
Knowledge that is explicit can be specified and documented. This enables storage, transfer and sharing by means of corporate documents or information systems. Coordinating these flows requires determining which knowledge companies are willing to share, and reaching agreement on transfer modes. Contracts formalize the contents, the procedures and the deliverables, supplemented by control procedures to verify that actual delivery of knowledge occurs within contractually predefined standards. However, as indicated earlier, our focus is on companies sharing knowledge that both can leverage to adjacent business opportunities beyond their initial agreement. Hence, access to a partner's knowledge repository seems a tempting opportunity to absorb knowledge in excess of the boundaries agreed upon in the contract. This may include collecting more knowledge of the same type the parties formalized in the contract. Alternatively, a company tries to pull tacit knowledge on top of the explicit knowledge that was specified in the contract.

[Figure 2: Bilateral control strategies. Teams at Company A and Company B exchange knowledge exclusively through a coordinator on each side.]
An example of such 'overgrazing' behavior is an outsourcing agreement in which the vendor indicates its need for more detailed specifications and in-depth corporate knowledge that does not bear direct relevance to the execution of the contract (Hamel et al., 1989).
Risks of Tacit Knowledge Flows
Virtual organizations that agree to share tacit knowledge expose themselves to the risk of minimally specified interactions. The embedded, intricate nature of this knowledge makes it difficult to confine the exchanges in advance. Assuming some degree of opportunistic behavior, the receiving firm may employ dynamic tactics to enlarge the flows beyond initial agreements. Similarly, an organization pretending to share tacit knowledge may in practice structure its knowledge flows and thus reduce the value of the cooperation to its counterpart.
Risks Related to Direction of Knowledge Flows
Unidirectional knowledge flows occur when companies subscribe to a research agency to keep them updated on market trends and developments. At the outset, the agency is supposed to deliver valuable information to its clients, for example by means of a controlled Internet environment. Yet in an attempt to customize its products or resell client-related information, it may track and trace clients' search behaviors. Unilateral provision of knowledge thus transforms into reciprocal exchange, a possible deviation from the clients' original intentions. The inverse situation occurs when partners agree upon reciprocal exchanges. Firms formally agree upon collegial collaboration and the necessity of bidirectional information flows. However, if firms pursue a resource-based strategy, they may attempt to restrict mutual sharing to one-way knowledge absorption. Hamel, Doz and Prahalad (1989: 138) quote managers from a Japanese firm employing this strategy: "We don't feel any need to reveal what we know. It is not an issue of pride for us. We're glad to sit and listen. If we're patient we usually learn what we want to know." Of course, combinations of the risks discussed so far may surface. The next four sections assess, for each configuration, possible deviation strategies and modes for controlling them.
[Figure 3: Controlling one-way tacit knowledge flows. Teams A1-A3 and B1-B3 interact directly, while Company A's coordinator coaches its teams from a background position.]
UNIDIRECTIONAL, STRUCTURED KNOWLEDGE FLOWS (CONFIGURATION 1)
In a strategy aimed at concentrating on and nurturing core competencies, firms increasingly outsource peripheral business services like IT projects, marketing, and investment management (McFarlan & Nolan, 1995). To some extent, vendors need corporate knowledge and information to provide their services, and client firms will hence allow vendors to pull from their know-how repository. Our focus here is on the unidirectional transmission of explicit knowledge, like specifications for building and maintaining software. In principle, the outsourced activities do not bear strategic relevance to the client's business, implying a low-risk knowledge flow. Yet both client and vendor run the risk that exchanges evolve in an undesirable manner. First, vendors may attempt to absorb more (tacit) knowledge than initially agreed upon, a shift to Configuration 3 in Table 1. As they are nestled in a particular industry and serve clients with competing positions, they may build increasingly sophisticated industry-specific knowledge and use it synergistically in their network. If a client's knowledge is thus leveraged to the vendor's clientele, it becomes a commonly shared good and may lose its uniqueness. In addition, vendors may collect industry-specific know-how to strengthen their own competitive position. This enables them to bypass clients and enter their market. Second, clients may change the direction of knowledge flows from unidirectional to reciprocal, encouraging vendors to display their internal competencies (a shift to Configuration 4 in Table 1). Incorporation of this know-how decreases the uniqueness of the vendor's performance and hence its competitive position.
Control Strategies
We outline strategies for controlling transitions of exchange relationships using Figure 1. Continuing our example, Company A stands for the client, Company B for the vendor. Knowledge is supposed to flow one-way from A to B. At each organization three teams are involved in the transaction process. Since the transaction concerns the structured transmission of
knowledge, it is formalized in detail. The contract specifies which knowledge will be shared, how transmission will be organized, and what procedure will be followed for special requests. The structure of the process substitutes for direct interactions between staff. The articulated character of the knowledge enables transfer modes with low information-processing capacity, such as document handovers or controlled access to databases (Daft & Lengel, 1986). Company A reinforces the prevention of undesirable knowledge leakage through indirect organizational structures (Perrow, 1999). That is, a centralized coordinator assembles knowledge intended for the external partner prior to the actual exchange process (Allen, 1984). Interaction with Company B is exclusively handled by this gatekeeper, either in person or by controlled access to a digital environment. Unilateral flows imply that teams at Company B (the vendor) have direct contact only with A's liaison. B can avoid reciprocal interactions between teams on both sides by sticking to that procedure. Enforced by internal guidelines, this approach substitutes for an elaborate safety structure like A's.
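The gatekeeper arrangement just described can be pictured, under the assumption of a simple digital environment, as a single controlled interface. The sketch below is our own hypothetical illustration (class and item names are invented), not a description of any actual system:

class Gatekeeper:
    """Company A's coordinator: the only channel vendor teams may pull from."""
    def __init__(self, contracted_items: dict[str, str]):
        # Knowledge assembled for the external partner prior to the exchange.
        self._released = dict(contracted_items)
        self.audit_log: list[tuple[str, str]] = []

    def request(self, team: str, item: str) -> str:
        # Every request is logged, so out-of-scope pulls ('overgrazing'
        # attempts) remain visible for later review.
        self.audit_log.append((team, item))
        if item not in self._released:
            raise PermissionError(f"{item!r} is outside the contract scope")
        return self._released[item]

gate = Gatekeeper({"module_spec_v2": "...interface definitions..."})
print(gate.request("Team B1", "module_spec_v2"))

The design point is that the structure itself, rather than case-by-case judgment, confines the flow: vendor teams never see the client's wider repository, only the pre-assembled set.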
RECIPROCAL, STRUCTURED KNOWLEDGE FLOWS (CONFIGURATION 2)

[Figure 4: Organizing two-way tacit knowledge flows. Teams A1-A3 and B1-B3 interact directly, with a coordinator at each company mediating between the project teams and the rest of their own organization.]

Presence in, and knowledge of, local markets often differs between companies having comparable R&D and marketing competencies. In order to enable both companies to leverage their competencies, the exchange of complementary local knowledge often seems a viable strategy. This may trigger a process of exchanging, for example, marketing and sales information, and knowledge of local business opportunities. From a strategic point of view, such exchange processes make sense. However, when it comes to the actual execution, partners may choose two deviation trajectories. First, each partner can attempt to limit outgoing
knowledge flows. As long as only one actor succeeds in pulling knowledge while holding back deliverables, a shift to Configuration 1 occurs. Bilateral attempts to abuse the transactional agreement result in a deadlock situation. Second, one of the firms may try to collect more extensive, tacit knowledge on top of the structured information agreed upon. In response, the counterpart may choose to pursue the same strategy. Lacking a sufficient a priori mandate, the cooperation tends to evolve into an intricate exchange relationship (a shift to Configuration 4).
Controlling Reciprocal Exchange
The fact that both partners receive and deliver knowledge provides for a complex form of interdependence with recurrent interactions (Thompson, 1967). Still, they can formalize the exchange process as a basic form of transaction governance. The contract stipulates how reciprocal deliveries are intertwined across the life cycle of the cooperation. Teams on both sides do not cooperate directly, but funnel their interactions through respective coordinators (Figure 2). Internal control procedures stress the exclusive use of coordinating liaisons and technology to submit knowledge for external use (Jaeger & Baliga, 1985). Exchange between partners is quid pro quo: "I will share this with you if you share that with me" (Hamel et al., 1989: 136). Hence, coordinators closely monitor and document knowledge flows to avoid uneven accumulation of know-how. Since the behaviors of partners are complexly interlocked, deviations from the planned exchange process easily occur (Van de Ven et al., 1976). This leads to feedback loops to assess mutual constraints and feasible action patterns. Technology may support the coordinating liaison in planning, monitoring and documenting exchanges. Coordinators have a pivotal role in funnelling interactions between teams from both sides. As a substitute for direct interactions between teams, this procedure helps avoid the risk of flow expansion; that is, partners attempting to collect additional, tacit knowledge contrary to initial agreements.
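How coordinators might monitor and document flows to guard reciprocity can be suggested with a small sketch; again, this is our own hypothetical construction (the names and the tolerance threshold are invented), not part of the chapter:

from collections import Counter

class ExchangeLedger:
    """Logs quid pro quo deliveries and flags uneven accumulation."""
    def __init__(self, tolerance: int = 2):
        self.delivered = Counter()   # items delivered, keyed by partner
        self.tolerance = tolerance   # allowed lead before flagging

    def record(self, sender: str, item: str) -> None:
        self.delivered[sender] += 1
        print(f"{sender} delivered {item!r}")

    def imbalance(self, a: str, b: str) -> int:
        return self.delivered[a] - self.delivered[b]

    def check(self, a: str, b: str) -> None:
        # A persistent imbalance suggests one partner is holding back,
        # i.e., a drift towards Configuration 1.
        if abs(self.imbalance(a, b)) > self.tolerance:
            debtor = a if self.imbalance(a, b) < 0 else b
            print(f"Warning: {debtor} is holding back deliverables")

ledger = ExchangeLedger()
ledger.record("A", "sales data Q1")
ledger.record("A", "local outlet survey")
ledger.record("A", "channel margins")
ledger.check("A", "B")  # prints a warning: B is holding back deliverables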
UNIDIRECTIONAL, TACIT KNOWLEDGE FLOWS (CONFIGURATION 3)
The automotive industry has featured many examples of adaptive coordination between a focal organization and its network of suppliers. Pivotal organizations like Toyota tend to intertwine with suppliers to share tacit knowledge in a keiretsu network structure (Powell, 1996). In turn, this enables
the supplier network to fine-tune their strategic development and business processes (Reve, 1990). In this example, the focus is not on reciprocal knowledge sharing in local industrial networks. Instead, this section analyzes the one-way adaptive behavior of a supplier to the client's processes. This induces the client to share knowledge that is intricate, contextual and tacit, enabling adjustment and integration of the supplier's operational processes. The knowledge we discuss here resides in a tissue of actors accustomed to cooperating on a daily basis (Asch, 1952). The fluid evolution of interaction patterns implies that know-how resides in the minds of participants rather than being sedimented in extensive documentation (Kogut & Zander, 1992). Hence, access to this knowledge tissue can hardly rely on remote electronic communications. People need to explain the context of their know-how, and show artifacts like drawings (van Fenema & Kumar, 1999; von Hippel, 1994). Moreover, unlike the previous configurations, knowledge transfer cannot rely on mediating coordinators, because contextual information would get distorted. Besides, clarifying feedback loops would suffer from long turnaround times, as coordinators would need to screen and pass on reciprocal exchange flows.
Possible Deviation Trajectories
As teams from both sides start to interact directly, a number of risks emerge. First, the supplier in our example may try to pull more information from the client than agreed upon. The supplier's legitimate access to tacit knowledge enables it to gain an in-depth understanding of the client's competencies and integrative capabilities (Grant, 1996b). If the supplier maintains connections to firms competing with the client organization, this access becomes rather undesirable: the supplier may share, leverage or even sell its understanding to these competing firms. For the supplier this is a tempting option, as tacit knowledge is assumed to provide more intricate and thus more valuable information. Alternatively, under the guise of a temporary role as a supplier, a firm can copy the client's competencies and subsequently intrude into their market (Hamel et al., 1989). The distinction between permissible and excessive knowledge absorption is not easy to draw, as partners can only pre-specify broad dimensions of the interaction flows. Second, the client may attempt to limit and quasi-structure outgoing knowledge flows, a vertical shift to Configuration 1. For the supplier this implies a dysfunctional restriction that undermines its process of identifying with the client's needs and concerns. Third, as teams from both organizations interact directly, the transmitting client organization may attempt to absorb knowledge from the receiving supplier, a shift to Configuration 4. Client teams may abuse requests for clarification to pull additional information from the
supplier context. Reciprocal flows enable the client to decrease the uniqueness of the supplier's competencies and business.
Tacit Knowledge Control Strategies
The need for direct exchange among teams changes the nature of the contract. As the contractual specifications necessarily remain vague and general, control strategies will focus on progressively managing the dynamics of inter-firm cooperation. The position of the coordinator also changes, as team members take over his role of facilitating external connectivity (Ancona, 1992). His role of mediating knowledge flows alters to coaching teams from a background position (Figure 3). That includes promoting a clan-type environment to make sure that team members remain committed and comply with organizational goals (Ouchi, 1979). As Hamel, Doz and Prahalad (1989: 138) point out: "Limiting unintended transfer ultimately depends on employee loyalty and self-discipline." This suggests people-based strategies that cover the selection, socialization and training of people to internalize organizational values and commitment (Pfeffer, 1978). The purpose is to develop and internalize collective routines and commitment that enable staff to define the boundaries of tacit knowledge sharing. Bureaucratic control strategies bear less relevance in this environment, since detailed specification and observation of appropriate behaviors is not feasible (Kunda, 1992). Still, the organization's interests and goals are translated into generic rules for external knowledge transfer. This so-called semistructure 'exhibits partial order, and lies between the extremes of very rigid and highly chaotic organization' (Brown & Eisenhardt, 1997: 28). The client can impose generic structure on the supplier organization. For example, supplier personnel are only granted access for a limited period of time or to distinct locations. The supplier organization must avoid the client limiting the type of knowledge shared, or even triggering reverse information flows. As Figure 3 depicts, a more complex interaction environment arises between the firms. Hence, the client may attempt to abuse the multiplicity of available channels between teams to press for more information on the supplier's organization. At the same time, outgoing flows may be unduly codified and deliver insufficient resources for the supplier's processes. Similarly to the client organization, supplier teams should relate in a clan-type fashion to avoid these risks. Maintaining lateral contacts, they reinforce and enact external behaviors that are consistent with company goals. At the same time, internal procedures specify escalation processes in case the client organization violates earlier agreements. Managers are alerted when deviations occur, to liaise with counterparts from the client.
RECIPROCAL, TACIT KNOWLEDGE FLOWS (CONFIGURATION 4)
In high-tech industries that thrive on rapid R&D developments, information and knowledge sharing is crucial to remain on the competitive edge. Examples include the semiconductor industry, in which knowledge transfer has a prominent role (Appleyard, 1996). In conjunction with the pace of technological progress, such industries often require considerable investments in R&D. This motivates external cooperation to mutually benefit from complementary know-how and resources (Powell, 1996). Co-opetition implies here that although partners shape some form of sustainable entity, they may use the final results to compete, as the examples of NEC/Honeywell and GM/Toyota show (Hamel et al., 1989). A similar business model is becoming common in the airline industry. Partners leverage resources to achieve economies of scale and enhance the quality of their products and services (Jain, 1999). Often, these virtual organizations start an exploratory process of exchanging resources without having a clear notion of the operational consequences and risks. Staff on both sides are expected to team up and connect. As team members materialize strategic objectives, an intricate form of connectivity unfolds (Van de Ven et al., 1976). Incomplete specification of the scope and content of knowledge transfer necessitates heedful interrelating to adjust and refine progress (Weick & Roberts, 1993). Hence, a quasi-single team emerges, consisting of staff from both sides. Subsequent phases of socialization and interpersonal contact promote feelings of collegiality and commitment to the group's functioning (Katzenbach & Smith, 1993). Such a context induces team members to share their know-how as a natural part of the cooperative effort. Yet the successful formation of inter-organizational teams also increases the risk that partners lose their grip on the knowledge exchange process. Co-opetition implies that the cooperation may end at some point in time, with each partner counting their take-home value. A cohesive team may lose sight of that strategic context and share valuable resources in excess of initial agreements and intentions (Hamel et al., 1989). In addition, their exclusive contacts outside the firm may alienate them from the internal organization, and reduce the incorporation of newly acquired knowledge across other business units. A different type of risk occurs when partners deliberately deviate from the initial agreement. Because organizations can leverage information beyond their cooperative relationship and have partially conflicting interests, they are tempted to free ride on their counterpart's input, for example by
providing less or inaccurate information. They may also structure and restrict outgoing knowledge flows, contrary to initial agreements. This strategy obviously undermines the partnership, as it results in an asymmetrical distribution of know-how.
Bilateral Control Strategies
As virtual organizations engage in a fluidly evolving exchange process, they need adjustable and flexible control strategies. A similar control structure will emerge on both sides, as knowledge flows back and forth and each partner runs comparable risks. Prior to starting operational connections, some form of relationship or reciprocal familiarity will probably exist. The intention to cooperate is translated into a relational contract that broadly outlines areas of exchange and codes of conduct (Macneil, 1978). It roughly structures the scope, duration and content of exchange to provide a minimal backbone for steering the actual interaction process over time (Brown & Eisenhardt, 1997). Complementary strategies are required to cater for the remaining open ends and risks. Both companies install a coordinator who regularly meets with the team members involved in the exchange process, and mediates between project teams and the stable organization (Figure 4). As the figure depicts, the coordinator is not directly involved in operational exchanges but has an internal role. The purpose of that role is to enhance and ensure knowledge reception from the partner organization. He also makes sure that novel resources are leveraged to the rest of the organization. In fact, this liaison role may require a small group that interfaces between the company and its teams that interact with the partner firm. Maintaining regular contact with the team, the coordinator keeps them focused on organizational goals. Although his role is in the background, he traces and guards knowledge exchanges to ensure reciprocity and quality of the flows. Team members are put into contact with other staff from their organization to share recently acquired insights (Hamel et al., 1989). To some degree, experiences can be summarized and documented. Technologies like intranets or groupware facilitate the controlled circulation of relevant information to others in the company. Long-term or remote cooperative endeavors suggest job rotation of team members to foster company-wide learning (Edström & Galbraith, 1977). Novel insights are leveraged and anchored to prepare the organization for subsequent competitive phases. Rotation also maintains the relationship between the organization and employees working on the fringe, and may avoid unwelcome turnover. Reciprocal, tacit exchange in virtual organizations calls for gradually adapting and elaborating control strategies. Temporary partners juggle to
make a hybrid focus work. The complexity they face traces back to their effort to combine temporary collegiality with a 'co-opetitive' relationship.
DISCUSSION AND CONCLUSION
The virtual economy changes the mode of organizing transactions between firms. The traditional notion of individual organizations taking care of well-defined portions of supply chains is giving way to an open and embedded perspective on their functioning. One implication is that firms seek to cooperate with partners in adjacent business domains, even including competitors. Cooperation means that firms combine specific resources to take advantage of novel opportunities. For virtual organizations that cooperate and compete, this exchange process introduces a complex decision environment. Each organization can use knowledge made available for purposes beyond the definition of the hybrid relationship. As long as both partners comply with the original agreement of cooperation, temporary exchanges evolve without too much risk. Yet at the same time, access to unique knowledge from the counterpart seems a tempting opportunity to enhance benefits derived from the relationship. Mastering this balancing act seems to be one of the novel competencies required in the virtual economy.
This chapter extends earlier literature by focusing specifically on knowledge exchanges. We elaborate a model for knowledge flows and identify four configurations of co-opetitive transaction modes. Each configuration features its own risk profile, depending on the deviation trajectory counterparts may choose. Control strategies are proposed to anticipate and monitor the actual exchange processes. Uneven distribution of knowledge resources is avoided by a combination of control strategies that apply to both the internal organization and the external relationship with the partner firm. Four categories of control mechanisms are distinguished: bureaucratic mechanisms like work specification and monitoring; organizational roles like coordinators; social relationships and interpersonal exchanges; and technology employed for organizing transfer of, and access to, knowledge.
The analysis extends the field of co-opetition in virtual organizations and has relevance to professionals and academics alike. Professionals may use the analysis to determine feasible configurations and anticipate risk profiles. In addition, they can detect emerging patterns of deviation and implement remedies to increase the likelihood of a satisfactory co-opetitive relationship. From an academic point of view, the model provides a starting point for conceiving and empirically investigating the complexity of hybrid relationships between firms. Academics may use the proposed line of thought to elaborate connections with other theoretical areas like supply chain
management, management of information systems, innovation management, management of joint ventures, and strategic management. In addition, researchers may want to introduce time as a variable to assess how cross-company interactions evolve. Empirical observation may include survey research to validate hypotheses derived from the model. Another opportunity is (longitudinal) case study research in which exchange processes are documented and analyzed. Finally, grounded research in the spirit of Burgelman (1983) may shed light on the intra- and inter-organizational communications that help shape co-opetitive relationships.
ENDNOTE
1. As a spin-off from transaction cost economics, incomplete contracting theory elaborates on situations where parties cannot specify their transaction in advance. In particular, this theory investigates the consequences of contingencies for the distribution of unanticipated value differences among parties (Hart, 1991).
REFERENCES
Allen, T. J. (1984). Managing the Flow of Technology. Cambridge, MA: MIT Press.
Ancona, D. G. (1992). Bridging the Boundary: External Activity and Performance in Organizational Teams. Administrative Science Quarterly, 37, 634-665.
Appleyard, M. M. (1996). How Does Knowledge Flow? Inter-firm Patterns in the Semiconductor Industry. Strategic Management Journal, 17(Winter), 137-154.
Asch, S. E. (1952). Social Psychology. Englewood Cliffs, NJ: Prentice-Hall.
Barker, J. R. (1993). Tightening the Iron Cage: Concertive Control in Self-Managing Teams. Administrative Science Quarterly, 38(3), 408-437.
Barney, J. B., & Hesterly, W. (1996). Organizational Economics: Understanding the Relationship between Organizations and Economic Analysis. In S. R. Clegg, C. Hardy, & W. R. Nord (Eds.), Handbook of Organization Studies. London: Sage.
Ben-Porath, Y. (1980). The F-Connection: Families, Friends, and Firms and the Organization of Exchange. Population and Development Review, 6(March), 1-30.
Bradach, J. L. (1997). Using the Plural Form in the Management of Restaurant Chains. Administrative Science Quarterly, 42(2), 276-303.
Bradach, J. L., & Eccles, R. G. (1989). Price, Authority, and Trust: From Ideal Types to Plural Forms. Annual Review of Sociology, 15, 97-118.
Brandenburger, A. M., & Nalebuff, B. J. (1996). Co-opetition. New York: Doubleday.
Brown, S. L., & Eisenhardt, K. M. (1997). The Art of Continuous Change: Linking Complexity Theory and Time-paced Evolution in Relentlessly Shifting Organizations. Administrative Science Quarterly, 42(1), 1-34.
Bryman, A., Bresnen, M., Beardsworth, A. D., Ford, J., & Keil, E. T. (1987). The Concept of the Temporary System: The Case of the Construction Project. In S. B. Bacharach & N. Ditomaso (Eds.), Research in the Sociology of Organizations (Vol. 5, pp. 73-104). Greenwich, Connecticut: JAI.
Burgelman, R. A. (1983). A Process Model of Internal Corporate Venturing in the Diversified Major Firm. Administrative Science Quarterly, 28, 223-244.
Burns, T., & Stalker, G. M. (1961). The Management of Innovation. London: Tavistock Publications.
Chandler Jr., A. D., & Daems, H. (1979). Administrative Coordination, Allocation and Monitoring: A Comparative Analysis of the Emergence of Accounting and Organization in the U.S.A. and Europe. Accounting, Organizations and Society, 4(1/2), 3-20.
Ching, C., Holsapple, C. W., & Whinston, A. B. (1992). Reputation, Learning and Coordination in Distributed Decision-Making Contexts. Organization Science, 3(2), 275-297.
Daft, R. L., & Lengel, R. H. (1986). Organizational Information Requirements, Media Richness and Structural Design. Management Science, 32(5), 554-571.
DeSanctis, G., & Fulk, J. (Eds.). (1999). Shaping Organization Form: Communication, Connection, and Community. Walnut Creek, CA: AltaMira.
Edström, A., & Galbraith, J. R. (1977). Transfer of Managers as a Coordination and Control Strategy. Administrative Science Quarterly, 22(June), 248-263.
Edwards, R. C. (1981). The Social Relations of Production at the Point of Production. In M. Zey-Ferrell & M. Aiken (Eds.), Complex Organizations: Critical Perspectives. Glenview, IL: Scott, Foresman.
Evaristo, R., & van Fenema, P. C. (1999). A Typology of Project Management: Emergence and Evolution of New Forms. International Journal of Project Management, 17(5), 275-281.
Gabarro, J. J. (1990). The Development of Working Relationships. In J. Galegher, R. E. Kraut, & C. Egido (Eds.), Intellectual Teamwork: Social and Technological Foundations of Cooperative Work. Hillsdale, New Jersey: Lawrence Erlbaum Associates.
Ghoshal, S., & Moran, P. (1996). Bad for Practice: A Critique of the Transaction Cost Theory. Academy of Management Review, 21(1), 13-47.
Goldman, S. L., Preiss, K., & Nagel, R. N. (1997). Agile Competitors and Virtual Organizations: Strategies for Enriching the Customer. New York: John Wiley.
Grandori, A. (1997). An Organizational Assessment of Inter-firm Coordination Modes. Organization Studies, 18(6), 897-925.
Grandori, A., & Soda, G. (1995). Inter-firm Networks: Antecedents, Mechanisms and Forms. Organization Studies, 16(2), 183-214.
Grant, R. M. (1996a). Prospering in Dynamically-competitive Environments: Organizational Capability as Knowledge Integration. Organization Science, 7(4), 375-387.
Grant, R. M. (1996b). Toward a Knowledge-Based Theory of the Firm. Strategic Management Journal, 17(Winter), 109-122.
Gupta, P. P., Dirsmith, M. W., & Fogarty, T. J. (1994). Coordination and Control in a Government Agency: Contingency and Institutional Theory Perspectives on GAO Audits. Administrative Science Quarterly, 39, 264-284.
Hamel, G., Doz, Y., & Prahalad, C. K. (1989). Collaborate With Your Competitors - And Win. Harvard Business Review, (January-February), 133-139.
Hart, O. D. (1991). Incomplete Contracts and the Theory of the Firm. In O. E. Williamson & S. G. Winter (Eds.), The Nature of the Firm: Origins, Evolution, and Development. New York: Oxford University Press.
Jaeger, A. M., & Baliga, B. R. (1985). Control Systems and Strategic Adaptation: Lessons from the Japanese Experience. Strategic Management Journal, 6, 115-134.
Jain, M. (1999). SOC Success Needs 'Coopetition'. Electronic Business, 25(7), 28.
Katzenbach, J. R., & Smith, D. K. (1993). The Wisdom of Teams: Creating the High-Performance Organization. Boston, MA: Harvard Business School Press.
Kodama, F. (1994). Technology Fusion and the New R&D. In K. B. Clark & S. C. Wheelwright (Eds.), The Product Development Challenge: Competing Through Speed, Quality, and Creativity. Boston: Harvard Business School Press.
Kogut, B., & Zander, U. (1992). Knowledge of the Firm, Combinative Capabilities and the Replication of Technology. Organization Science, 3(3), 383-397.
Kumar, K., & van Dissel, H. G. (1996). Sustainable Collaboration: Managing Conflict and Co-operation in Inter-Organizational Systems. MIS Quarterly, 20(3).
Kumar, K., van Dissel, H. G., & Bielli, P. (1998). The Merchant of Prato Revisited: Towards a Third Rationality of Information Systems. MIS Quarterly, 20(3).
Kunda, G. (1992). Engineering Culture: Control and Commitment in a High-tech Corporation. Philadelphia: Temple University Press.
Lazerson, M. (1995). A New Phoenix? Modern Putting-out in the Modena Knitwear Industry. Administrative Science Quarterly, 40, 34-59.
Loebbecke, C., & van Fenema, P. C. (1998). Interorganizational Knowledge Sharing during Co-opetition. Paper presented at the European Conference on Information Systems (ECIS), Aix-en-Provence, France.
Loebbecke, C., van Fenema, P. C., & Powell, P. (1999). Co-opetition and Knowledge Transfer. Database, 30(1).
Machlup, F. (1980). Knowledge: Its Creation, Distribution, and Economic Significance. Princeton, NJ: Princeton University Press.
Macneil, I. R. (1974). The Many Futures of Contracts. Southern California Law Review, 47, 691-816.
Macneil, I. R. (1978). Contracts: Adjustment of Long-Term Economic Relations under Classical, Neoclassical, and Relational Contract Law. Northwestern University Law Review, 72, 854-906.
March, J. G., & Simon, H. A. (1958). Organizations. New York: Wiley.
McCann, J. E., & Galbraith, J. R. (1981). Interdepartmental Relations. In P. C. Nystrom & W. H. Starbuck (Eds.), Handbook of Organizational Design. New York: Oxford University Press.
McFarlan, F. W., & Nolan, R. L. (1995). How to Manage an IT Outsourcing Alliance. Sloan Management Review, (Winter), 9-23.
Meyer, A. D., Tsui, A. S., & Hinings, C. R. (1993). Configurational Approaches to Organizational Analysis. Academy of Management Journal, 36(6), 1175-1195.
Mintzberg, H. (1979). The Structuring of Organizations. Englewood Cliffs, NJ: Prentice-Hall.
Nelson, R., & Winter, S. (1982). An Evolutionary Theory of Economic Change. Cambridge, MA: Belknap Press.
Nonaka, I., & Takeuchi, H. (1995). The Knowledge-Creating Company. New York: Oxford University Press.
O'Reilly, C. A., & Chatman, J. A. (1996). Culture as Social Control: Corporations, Cults, and Commitment. In L. L. Cummings & B. M. Staw (Eds.), Research in Organizational Behavior (Vol. 18, pp. 157-200). Greenwich, Connecticut: JAI Press.
Orlikowski, W. J. (1991). Integrated Information Environment or Matrix of Control? The Contradictory Implications of Information Technology. Accounting, Management & Information Technology, 1(1), 9-42.
Ouchi, W. G. (1979). A Conceptual Framework for the Design of Organizational Control Mechanisms. Management Science, 25(6), 833-848.
Penrose, E. (1959). The Theory of the Growth of the Firm. Oxford: Oxford University Press.
Perrow, C. (1999). Organizing to Reduce the Vulnerabilities of Complexity. Journal of Contingencies and Crisis Management, 7(3), 150-155.
Pfeffer, J. (1978). Organizational Design. Arlington Heights, IL: AHM.
Polanyi, M. (1958). Personal Knowledge: Towards a Post-Critical Philosophy. Chicago, IL: University of Chicago Press.
Powell, W. W. (1996). Trust-Based Forms of Governance. In R. M. Kramer & T. R. Tyler (Eds.), Trust in Organizations: Frontiers of Theory and Research. Thousand Oaks, CA: Sage.
Powell, W. W., & Smith-Doerr, L. (1994). Networks and Economic Life. In N. J. Smelser & R. Swedberg (Eds.), The Handbook of Economic Sociology. Princeton, NJ/New York: Princeton University Press/Russell Sage Foundation.
Preiss, K., Goldman, S. L., & Nagel, R. N. (1997). Cooperate to Compete: Building Agile Business Relationships. New York: John Wiley.
Reve, T. (1990). The Firm as a Nexus of Internal and External Contracts. In M. Aoki, M. Gustafsson, & O. E. Williamson (Eds.), The Firm as a Nexus of Treaties. London: Sage.
Rice, R. E., & Shook, D. E. (1990). Voice Messaging, Coordination, and Communication. In J. Galegher, R. E. Kraut, & C. Egido (Eds.), Intellectual Teamwork: Social and Technological Foundations of Cooperative Work. Hillsdale, New Jersey: Lawrence Erlbaum Associates.
Rousseau, D. M., & McLean Parks, J. (1993). The Contracts of Individuals and Organizations. In L. L. Cummings & B. M. Staw (Eds.), Research in Organizational Behavior (Vol. 15, pp. 1-43). Greenwich, Connecticut: JAI Press.
Schein, E. H. (1992). Organizational Culture and Leadership (2nd ed.). San Francisco: Jossey-Bass Publishers.
Schwartz, E. (1999). Digital Darwinism: Seven Breakthrough Business Strategies in the Cutthroat Web Economy. New York: Broadway Books.
Star, S. L., & Griesemer, J. R. (1989). Institutional Ecology, 'Translations' and Boundary Objects: Amateurs and Professionals in Berkeley's Museum of Vertebrate Zoology, 1907-39. Social Studies of Science, 19, 387-420.
Szulanski, G. (1996). Exploring Internal Stickiness: Impediments to the Transfer of Best Practice within the Firm. Strategic Management Journal, 17(Winter), 77-91.
Thompson, J. D. (1967). Organizations in Action. New York: McGraw-Hill.
Tyre, M. J., & von Hippel, E. (1997). The Situated Nature of Adaptive Learning in Organizations. Organization Science, 8(1), 71-83.
Van de Ven, A. H., Delbecq, A. L., & Koenig Jr., R. (1976). Determinants of Coordination Modes Within Organizations. American Sociological Review, 41(April), 322-338.
van Fenema, P. C., & Kumar, K. (1999). Coupling, Interdependence and Control in Global Projects. In F. Hartman, R. A. Lundin, & C. Midler (Eds.), Projects and Sensemaking. Hingham, MA: Kluwer Academic Publishers. (forthcoming)
Venkatraman, N., & Henderson, J. C. (1998). Real Strategies for Virtual Organizing. Sloan Management Review, 40(1), 33-48.
von Hippel, E. (1994). "Sticky Information" and the Locus of Problem Solving: Implications for Innovation. Management Science, 40(4), 429-439.
Walsh, J. P., & Dewar, R. D. (1987). Formalization and the Organizational Life Cycle. Journal of Management Studies, 24(3), 215-231.
Wathne, K., Roos, J., & von Krogh, G. (1996). Towards a Theory of Knowledge Transfer in a Cooperative Context. In G. von Krogh & J. Roos (Eds.), Managing Knowledge: Perspectives on Cooperation and Competition. London: Sage.
Weick, K. E., & Roberts, K. (1993). Collective Mind in Organizations: Heedful Interrelating on Flight Decks. Administrative Science Quarterly, 38, 357-381.
Weick, K. E., & Westley, F. (1996). Organizational Learning. In S. R. Clegg, C. Hardy, & W. R. Nord (Eds.), Handbook of Organization Studies. London: Sage.
Wernerfelt, B. (1984). A Resource-Based View of the Firm. Strategic Management Journal, 5(2), 171-180.
Williamson, O. E. (1975). Markets and Hierarchies: Analysis and Antitrust Implications. New York: Free Press.
Williamson, O. E. (1985). The Economic Institutions of Capitalism: Firms, Markets, Relational Contracting. New York: Free Press.
Williamson, O. E. (1991). Comparative Economic Organization: The Analysis of Discrete Structural Alternatives. Administrative Science Quarterly, 36, 269-296.
Williamson, O. E. (1994). Transaction Cost Economics and Organization Theory. In N. J. Smelser & R. Swedberg (Eds.), The Handbook of Economic Sociology. Princeton, NJ/New York: Princeton University Press/Russell Sage Foundation.
Williamson, O. E., & Ouchi, W. G. (1981). The Markets and Hierarchies Program of Research: Origins, Implications, Prospects. In A. H. Van de Ven & W. F. Joyce (Eds.), Perspectives on Organization Design and Behavior. New York: John Wiley.
Chapter 18
Becoming Knowledge-Powered: Planning the Transformation
Dave Pollard
Ernst & Young Canada
In this article, Dave Pollard, Chief Knowledge Officer at Ernst & Young Canada since 1994, relates the award-winning process his firm has used to transform the company from a knowledge-hoarding to a knowledge-sharing enterprise, a process that many of the corporations that have visited the Centre for Business Knowledge in Toronto are adapting for their own needs. The article espouses a five-phase transformation process:
• Developing the Knowledge Future State Vision, Knowledge Strategy and Value Propositions
• Developing the Knowledge Architecture and Determining its Content
• Developing the Knowledge Infrastructure, Service Model and Network Support Mechanisms
• Developing a Knowledge Culture Transformation Program
• Leveraging Knowledge into Innovation
The author identifies possible strategies, leading practices, and pitfalls to avoid in each phase. He also explores the challenges involved in identifying and measuring intellectual capital, encouraging new knowledge creation, capturing human knowledge in structural form, and enabling virtual workgroup collaboration.
Previously Published in Knowledge Management and Virtual Organizations edited by Yogesh Malhotra, Copyright © 2000, Idea Group Publishing.
KNOWLEDGE: DEFINITION, TYPES, AND EXAMPLES
Ask most business leaders if knowledge is important to their company's future and they'll say "yes" without hesitation. However, most of these leaders can't articulate why it's so important, or how they plan to leverage their organization's knowledge for competitive advantage. The purpose of "Planning the Transformation" is to help business leaders and knowledge officers answer these questions and start to implement the answers.
Our working definition of knowledge is any intangible resource of a business that helps its people do something better than they could do without it. Using the models developed by Hubert Saint-Onge1, Dr. Nonaka2 and others, we can say that an organization's knowledge (i.e., its intellectual capital) consists of:
1. Tacit Knowledge (Human Capital)—the skills, competencies, know-how, and contextual knowledge in people's heads
2. Explicit Knowledge (Structural Capital)—the knowledge that is captured or codified in the company's knowledge-bases, tools, catalogues, directories, models, processes and systems
3. Customer Knowledge (Customer Capital)—the collective knowledge about, and of, the company's customers, their people, their needs, buying habits, etc.
4. Innovated Knowledge (Innovation Capital)—the collective knowledge about as-yet undeveloped or unexploited markets, technologies, products, and operating processes
As Dr. Nonaka3 has shown, knowledge creation is largely a result of the process of converting Tacit Knowledge to Explicit Knowledge (or to Customer Knowledge or Innovated Knowledge), and back again, as shown in Figure 2.
Figure 1: Types of Knowledge. The figure divides an organization's capital into Balance Sheet Capital (Physical Capital: inventory and fixed assets; Financial Capital: cash, net receivables and investments) and Intellectual Capital (Human Capital/Tacit Knowledge: competencies of individuals and teams; Structural Capital/Explicit Knowledge: intelligence in databases, tools, products and processes; Customer Capital: relationships with and solutions for customers; Innovation Capital: application of knowledge in new tools, products, processes, solutions and customers), linked by knowledge transfer through codification and re-use, and learning and sharing.
Figure 2: The Cycle of Knowledge Creation. The figure shows a cycle between Tacit Knowledge and Explicit Knowledge with four conversion steps: (1) Codification: an employee comes up with and posts an idea to improve response to service calls; (2) Re-Use & Combination: the idea gets built into the company's automated Call Response System processes; (3) Learning & Internalization: a new Call Centre employee uses the process and gets complimented by the customer; (4) Conversation: the new employee discusses the process with a colleague over coffee, and it provokes a further improvement idea.
As knowledge-focused business games like Celemi's Tango and Apples & Oranges4 have shown, the value of the organization's knowledge is the incremental discounted cash flow that comes to the organization from applying this knowledge. These games also make it clear that the amount and balance of the company's investment in each type of knowledge (versus alternative financial and physical investments), and its ability to use (and reuse) this knowledge effectively, will determine its success in leveraging knowledge and creating value for the organization beyond its net tangible book value.
Here are some specific examples of the four types of organizational knowledge, to give you an idea of how difficult it often is for companies to decide which, and how much, knowledge to invest in:
Tacit Knowledge Investments:
• Salaries for new expert hires
• Training programs
• Mentoring and retention programs
• Profit sharing programs
Explicit (Structural) Knowledge Investments:
• Call centre with customer history database
• E-commerce systems
• Competitive analysis database
• Accelerated solutions centre
Customer Knowledge Investments:
• Customer satisfaction survey with follow-up visit blitz
• Multimedia marketing & branding program
• Customer Care program
• Customer need diagnostic programs
Innovated Knowledge Investments:
• New Product and Knowledge-Embedded Product Incubator
• Product & Service Life-Cycle value-add program
• Value Exchange program
• Pathfinder and thought leadership programs
These investments need to compete for people, time and dollars with physical and financial investments like:
• New distribution centers or extra inventory
• Upgraded staff computers
• Investments in strategic businesses
• Extended credit terms to volume buyers
• Production robotics
• Dividends to shareholders
as well as investments needed to sustain existing capital and legacy programs. It takes a great deal of business acumen, or knowledge about knowledge if you like, to be able to objectively and skillfully balance and prioritize the competing demands of these different investments on a company's scarce resources, and to forecast and measure the value they produce.
STARTING THE TRANSFORMATION: KNOWLEDGE GOALS, OBJECTIVES AND STRATEGIES
So where does the organization start in deciding what types of intellectual capital investment it should make, and how to improve its existing organizational knowledge?
One of the challenges is that much of the existing legacy knowledge of organizations is widely dispersed and under diverse control. In many cases, even senior management may be unaware of the existence of much of the intellectual capital hidden in the organization. For this reason, many organizations start the transformation to sound knowledge management by cataloguing and assessing current state knowledge, an exercise called knowledge mapping. The need for, and value of, such an exercise will vary from company to company, however, and there is a risk that starting from the current state can lead to incremental, rather than innovative, thinking.
Planning the transformation to improved knowledge management is usually best begun by doing these three things, iteratively:
A1. Envision your ideal knowledge Future State ("where are we going")
A2. Ascertain your knowledge Value Propositions ("why")
A3. Determine your Knowledge Strategy ("how will we get there")
The Future State Vision serves several important functions:
• Makes the benefits of knowledge investments more compelling and concrete
• Illustrates the benefits of bringing the company's scattered knowledge resources under a single, virtual organizational umbrella
• Serves as a high-level blueprint for the knowledge improvement plan
It's often useful to lay out the Future State Vision as a "Day in the Life" scenario, showing what knowledge-leveraging activities will be possible in the Future State that are not currently possible. This requires a synthesis of the needs of diverse users in the company, as well as diverse knowledge behaviours (ranging from "I do all my own research" to "Do it for me—the only thing I do with my computer is e-mail") and diverse knowledge-sharing distribution channels ("push it out to me" versus "I'll look it up when I need it").
The exercise of canvassing different users in the organization to prepare the Future State Vision is also useful:
• It ensures the CKO or Knowledge Management leaders are aware of the diversity of knowledge needs throughout the organization.
• It helps identify synergy between the knowledge processes and content (both Current and Future State) in various departments of the organization, and identify leading practices that can be deployed company-wide.
Your Future State Vision should, as its name suggests, be far-reaching, ambitious and visionary, rather than focused on quick wins and short-term projects. The Knowledge Plan will break the improvements suggested by the
Vision into manageable (and affordable) chunks, and set priorities for what should be done in the short run versus later.
The knowledge Value Proposition(s) will vary from company to company, and even between business units. They specify the reasons, and drivers, behind the knowledge programs and investments of the company. They are the answer to the question that will inevitably and repeatedly be asked at all levels of the company: "Remind me again why we are doing this?" In line with the quadrants of the Balanced Scorecard5, knowledge Value Propositions tend to fall into four categories:
• Growth—if the purpose of knowledge investments is to increase company revenues by adding value to the company's products and services, or by developing new markets, products, services, or innovations
• Efficiency—if the purpose of knowledge investments is to reduce cost or cycle time through reuse of knowledge objects and process improvements
• Customer—if the purpose of knowledge investments is to strengthen customer relationships or customer satisfaction
• Employee—if the purpose of knowledge investments is to improve employee satisfaction, learning, recruitment or retention
In many cases, it is impossible to demonstrate directly the degree to which knowledge investments contribute to the achievement of the Value Propositions above. If the company achieves its revenue targets, for example, sponsors of company programs other than knowledge management will be quick to claim credit for the success. More direct measures of knowledge creation are: (a) the rate at which tacit knowledge is codified to become explicit (i.e., the rate of new knowledge submissions), and (b) the rate at which explicit knowledge is internalized to become tacit (i.e., the rate of use of codified knowledge). Although these are only surrogate measures of knowledge transfer, they are fairly easy to measure using automated methods, and a strong correlation between the rates of knowledge submission and use and the achievement of the selected Value Proposition objectives (e.g., increased revenue or employee retention) is fairly compelling evidence that knowledge is being effectively leveraged and knowledge ROI is high.
There are two key tests of the soundness of a company's Knowledge Strategy:
• It must be congruent with and contribute to the company's overall business strategy—a knowledge strategy that takes the company in a
different direction from that dictated by the company's other business imperatives is almost certain to fail.
• It must be a true strategy, i.e., a selection between alternatives—if the "strategy" claims there is only one way to achieve the company's knowledge objectives and realize the Value Propositions, it's probably missing something.
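The surrogate measures described above (the rate of new knowledge submissions and the rate of use of codified knowledge) lend themselves to simple automation. The following sketch in Python is purely illustrative: the event log, field names, and sample data are assumptions, not a description of any actual Ernst & Young system. It shows how monthly submission and use counts per user group might be computed from a repository's access log:

```python
from collections import Counter
from datetime import date

# Each event is (date, user_group, action), where action is
# "submit" (tacit knowledge codified) or "access" (explicit knowledge used).
events = [
    (date(1999, 3, 2), "tax", "submit"),
    (date(1999, 3, 5), "tax", "access"),
    (date(1999, 3, 9), "audit", "access"),
    (date(1999, 4, 1), "audit", "submit"),
]

def monthly_rates(events):
    """Count submissions and accesses per (year, month, user group)."""
    submissions, accesses = Counter(), Counter()
    for when, group, action in events:
        key = (when.year, when.month, group)
        if action == "submit":
            submissions[key] += 1
        elif action == "access":
            accesses[key] += 1
    return submissions, accesses

submissions, accesses = monthly_rates(events)
for key in sorted(set(submissions) | set(accesses)):
    year, month, group = key
    print(f"{year}-{month:02d} {group}: "
          f"{submissions[key]} submitted, {accesses[key]} used")
```

Trends in these two counters, broken down by user group and subject area, can then be correlated with the Value Proposition outcomes (e.g., revenue growth or employee retention) the company already tracks.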
KNOWLEDGE ARCHITECTURE AND CONTENT
So now you've decided, at least tentatively, on your Knowledge Vision, Value Propositions, and Strategy. The next two steps in Planning the Transformation are:
B1. Design and build (or re-build) the Knowledge Architecture, i.e., how the company's knowledge is organized and stored.
B2. Identify and acquire the appropriate Knowledge Content, both company-proprietary and externally sourced.
Your Knowledge Architecture includes:
• The way in which your knowledge is indexed and organized (taxonomy)
• The platforms (technical and logical) on which the knowledge content is stored
• The tools by which content is added to, and accessed from, your knowledge repositories
Taxonomy, at a basic level, specifies the subject matter, date, type, author, and other data for each "knowledge object", so that your catalogues and search tools can find what the user is looking for. Subject matter can further be broken down by the affected business process, business segment, or other indexing and filtering tags relevant to your particular company. While there are many generic subject matter taxonomies, you will need to refine the ways in which you index your organizational knowledge to reflect the most common uses in your company. You must also take care that your taxonomy is not so specialized that new hires and outsiders with whom you want to share your knowledge find your index impenetrable.
At a more robust level, taxonomy should specify context about knowledge. Information about a leading industrial practice, for example, can be worthless, or even dangerous, if you don't have information on where, how, how quickly, by whom (so you can get further information if necessary from the source) and at what cost it was developed. Context is what provides the
bridge between tacit knowledge (know-how) and explicit, codified knowledge, allowing the user to appreciate the value, risks, and likelihood of successful (re-)deployment of knowledge in another situation. Your taxonomy must walk a fine line between being too rigid (so that most of your knowledge can't be accurately indexed with the available tags) and too loose (so that the indexing terms are so vague that searches return too many false positives). Furthermore, the taxonomy must be flexible enough to accommodate changes in your business and in the world at large, but stable enough that you need not constantly re-index everything in your knowledge repositories.
Your knowledge platforms are of two types: technical platforms (the software that "contains" your knowledge objects) and logical platforms (the layout of repositories, and of tools such as your intranet homepage). Again, there is no one right answer for all organizations, and there is a trade-off between power and flexibility. Platforms can vary from massive knowledge warehouses built on established, sophisticated database tools, to individual homepages using common "look and feel" templates that are regularly polled by search tools. Some companies prefer to integrate data and knowledge together, and encourage everyone to submit their knowledge, while others choose filters and submission processes that "promote" only the knowledge deemed by Subject Matter Specialists to be exceptionally valuable and transferable. It is generally wise to limit the number of different layouts of knowledge-bases and to use standard templates for them. This ensures that users become familiar with their structure and where to find certain types of knowledge, and that new knowledge-bases can be launched and populated quickly and easily.
Many IT departments have learned a great deal, very quickly, about how organizational knowledge fits with traditional financial, HR and sales information systems, and about the different tools and technologies that enable them to be developed. Nevertheless, close interaction between the organization's knowledge leaders, executive sponsors, and IT management is needed to ensure the architecture design is feasible and affordable, meshes appropriately with legacy IT systems, and is efficiently and effectively built.
Ideally, both the taxonomy and architecture should be all but invisible to end users. Users need to be provided with a suite of knowledge navigation tools that fit their personal knowledge behaviours ("push it out to me" versus "I'll go pull it out when I need it"), and that locate and deploy knowledge when and how it is needed. The knowledge navigation tools Ernst & Young has seen thus far tend to fall into four categories (listed after Figure 3):
Figure 3: Knowledge Value Chain (© 1999 Ernst & Young). The figure depicts a chain running from information sources (proprietary customer, industry and market information; third-party information; proprietary product information; proprietary analytic outputs) through five stages: access cost-effective information sources and structure; through research and analysis, convert information to knowledge (frameworks, analyses, insights, models, templates, etc.); through push and pull mechanisms, share knowledge with user communities across business areas and geography; apply this knowledge to meet customer and internal needs; and achieve corporate objectives (growth, profitability, innovation, relationships, learning, satisfaction, etc.), continuously improving the scorecard and results for each business unit. The stages draw on human capital, structural capital, customer and innovation capital, and financial capital. (The original figure also includes a sample balanced business scorecard with customer, product, risk, and human resource focus areas.)
• Catalogues and Directories—that allow users to browse sequentially through relevant knowledge (analogous to reading a book's Table of Contents)
• Search Engines—that allow users to find a list of knowledge objects that contain certain keywords or meet other specified search criteria (analogous to reading a book's Index)
• Portals—that point users to a small, organized subset of knowledge from a much larger knowledge warehouse, which can then be browsed
• Road Maps—that provide users with dynamic step-by-step instructions to learn or find pertinent knowledge about a particular subject
The tools that users select will depend on the nature of their search situation and on the style of knowledge acquisition they prefer. There are three main styles of knowledge acquisition:
• Browsing—reading through something sequentially until something of value is found (e.g., the way most people read a newspaper)
• Searching—using an index or search term to locate knowledge that meets specific criteria, on a one-time basis
• Profiling or Subscribing—using a "net" to continuously catch knowledge that meets specific criteria
The process by which a user navigates through knowledge will often involve both "push" and "pull" mechanisms, one or more of the three styles of acquisition, and one or more of the four knowledge navigation tool types. Your knowledge architecture must be powerful enough to accommodate these diverse acquisition processes, without being too complex for users to learn easily—this is not an easy design challenge.
Another aspect of knowledge architecture is the development of appropriate vendor management processes (for external-source content) and submission processes (for company-proprietary content). Negotiation of site licenses with vendors partial to per-head or pay-per-use contracts requires excellent negotiation skills, thorough knowledge of alternative sources of supply, and patience, but it can dramatically reduce (or increase) your organization's total knowledge budget depending on how well it is done. These contracts also often have complex copyright, redistribution, and indemnification clauses that require competent legal review. The submission process must be simple enough (and ideally transparent) to the end user to encourage frequent contributions, yet sophisticated enough to capture the taxonomy basics and context needed to enable effective location and reuse of submitted knowledge. The submission process must also be reinforced with measure-
ment, reward and recognition programs that encourage sharing, and discourage hoarding, of knowledge. Finally, your knowledge architecture must be permeable—it must interface both with existing legacy information systems (FIS, HRIS, SMIS) so that relevant data like personnel and sales information can be shared, and with emerging inter-enterprise systems (Internet, extranets, e-commerce systems) that will use your internal knowledge content and processes as their engine.
Determining the appropriate Knowledge Content for your organization also involves several considerations:
• What types of knowledge to acquire, maintain and deploy
• What mix of company-proprietary and external-source knowledge to acquire, and how to integrate it
• What mix of information content (e.g., news stories), which is available in huge quantity and relatively inexpensive, versus knowledge content (e.g., leading practices), which is scarcer and costlier, to acquire for your organization
• How long to archive content, and what QA processes to use to ensure accuracy and relevance
• Access security for various types of content and various classes of users
• How much knowledge to keep in your own domain (secure and proprietary, but expensive to maintain) rather than in a vendor's or outsourcer's domain (less secure and proprietary, but cheaper)
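To make the taxonomy and profiling ideas above concrete, here is a minimal sketch in Python. It is illustrative only: the KnowledgeObject fields and the subscription-matching rule are assumptions drawn from the basic taxonomy elements discussed above (subject, date, type, author, context), not a description of any particular product.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class KnowledgeObject:
    """A codified knowledge item with basic taxonomy tags and context."""
    title: str
    subject: str            # e.g., "leading practices / call centres"
    doc_type: str           # e.g., "template", "analysis", "news"
    author: str
    created: date
    business_segment: str
    context: str = ""       # where/how/by whom/at what cost it was developed
    keywords: set = field(default_factory=set)

@dataclass
class SubscriptionProfile:
    """A 'net' that continuously catches matching knowledge (push)."""
    subjects: set
    keywords: set

    def matches(self, obj: KnowledgeObject) -> bool:
        return obj.subject in self.subjects or bool(self.keywords & obj.keywords)

# Usage: each new submission is checked against each subscriber's profile.
obj = KnowledgeObject(
    title="Call centre scripting template",
    subject="leading practices / call centres",
    doc_type="template",
    author="j.smith",
    created=date(1999, 6, 1),
    business_segment="retail",
    keywords={"call centre", "scripting"},
)
profile = SubscriptionProfile(subjects={"leading practices / call centres"},
                              keywords={"e-commerce"})
print(profile.matches(obj))  # True: pushed out to this subscriber
```

A real repository would index these same tags for one-time searches (pull) as well as evaluating them against standing profiles (push).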
Categories of business knowledge can include knowledge about:
• Your customers, their needs, businesses and people
• Your industry, markets & competitors
• Your people's competencies & experiences
• Your products & services
• Your processes, practices, policies and procedures
• Your suppliers
• Your tools, models, methods & resources
Within each of these and other categories, there is a continuum of types from raw data (low value, low cost) to sophisticated, synthesized knowledge (high value, high cost). In addition, as shown in Figure 3, the “raw material” of a knowledge system can contain a mix of user-submitted proprietary and purchased external-source content, which should be integrated in a way that makes them useful, together, for your users. The process of canvassing your users to ascertain what content is, or
would be, valuable to the organization is not as straightforward as one might expect. If you ask users what they think of current state information and how to improve it, you can get misleading answers:
• Users might not want to admit they don't know what is currently available, or don't know how to access it, or that they delegate knowledge searches to subordinates.
• Users probably don't know what other knowledge could be made available, unless you coach them on the possibilities.
• Users often don't differentiate between knowledge content and the tools and technologies that deliver it, and tend to over-value content when the underlying technology is reliable and powerful, and under-value it when it isn't.
• The "not invented here" and "knowledge is power" syndromes, among other cultural challenges, can make users skeptical of the value of using, or contributing, proprietary knowledge.
The process must therefore be iterative: training users on what is currently or could be available, brainstorming with them on what else they might need, and taking them through a Future State "Day in the Life" to make the processes and benefits of knowledge contribution and knowledge use more tangible will help gradually identify the content, and the related processes and cultural obstacles, to be addressed for each user group.
KNOWLEDGE INFRASTRUCTURE AND SERVICES
Once the architecture and content have been put in place, the next three steps in Planning the Transformation are:
C1. Define the roles of knowledge providers, users, and Knowledge Centre support personnel.
C2. Integrate and re-engineer existing library and research functions to reflect the new disintermediated architecture and add more value to knowledge services.
C3. Create, enable, connect and support the company's Communities of Interest.
One of the current tenets of business is that "nothing gets done unless it's someone's job", and the new tasks of knowledge management are no exception. Until new knowledge-powered activities and behaviours have been baked into the company's business processes so that they become second nature, it is important that all participants in knowledge sharing—providers, users, and support personnel—understand their knowledge roles and how they fit into the
company's new knowledge processes. A simple example of a Knowledge Centre organization has eight defined roles: knowledge navigation, research, analysis, knowledge-base management, knowledge stewardship, subject matter specialist, network coordination, and user. The user role includes responsibilities to contribute the user's own knowledge as a subject matter specialist, and accommodates both "do it yourself" and "do it for me" knowledge behaviours. Alternative models of knowledge roles might focus on the customer relationship manager, or on the new product development or production process improvement team, depending on the nature of the business and its knowledge drivers, strategy and value propositions. The important thing is that the new or changed roles be clearly articulated, and that performance of those roles be appropriately measured and rewarded.
As the new knowledge process allows end users to access both external-source and company-proprietary knowledge directly (a process Gartner Group calls disintermediation)5, the traditional "rip-and-ship" role of librarians (accessing and forwarding information without any value added) and the unleveraged role of researchers (doing everything from filing to analysis for a small group of users) must give way to new, centrally managed, virtually connected, high-value-added functions. Researchers previously unconnected and providing similar services with broad and shallow knowledge (what Gartner calls T-shape skills) must now work together, support each other, pool best practices, replace designed-from-scratch deliverables with reusable template deliverables, provide proactive "standing order" services in lieu of just reacting to incoming requests, and sharpen and deepen their knowledge to develop what Gartner calls I-shape skills.6
Likewise, new knowledge-paradigm analysts must work in tandem with researchers, users, other analysts and subject matter specialists to take the researchers' distilled and organized deliverables, add insight and fact-based strategic and implications analysis, and produce much more sophisticated multimedia deliverables ready for presentation to senior management, customers and other key decision makers. New specialty responsibilities in larger knowledge organizations, like content coordination and competitive intelligence, must be developed and deployed.
This is a major re-engineering of knowledge functions, but beyond the definition of new knowledge competencies, it is not an especially difficult one, if the people affected are change-resilient and up to the new job. Probably the greatest challenge is getting your knowledge workers' current supervisors to cede authority over those people to the knowledge centre, in return for access to a much larger and more powerful knowledge organization.
It has been noted that, because of internal office politics and jealousies,
knowledge often flows more freely between departments, and between organizations, than between people in a single department. Although this is certainly a factor, it is equally true that sometimes the most valuable, objective, and creative knowledge exchanges are extra-organizational, and that even within many organizations the most productive and valued "people networks" aren't represented on the organization chart. To encourage this virtual networking activity, enable people with common problems to work together, and allow free cross-pollination of ideas, the knowledge infrastructure needs to be able to create, quickly and on demand, versatile and powerful "knowledge spaces" where these people can collaborate, share ideas, and capture and post relevant knowledge. Furthermore, the knowledge centre needs to provide network coordinators and/or knowledge stewards who can manage and help populate these virtual spaces for the network team, identify and tie in other possible network members, and otherwise support the effective collaboration of the network team. Some companies with whom the author has worked consider this network support to be potentially the most important function of their knowledge organizations.
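As a rough illustration of the on-demand "knowledge space" idea, the sketch below (in Python, and purely hypothetical: the structure and method names are assumptions, not any vendor's groupware API) models a space with a coordinator, members, and posted knowledge items:

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeSpace:
    """An on-demand virtual workspace for a community of interest."""
    name: str
    coordinator: str                      # network coordinator / knowledge steward
    members: set = field(default_factory=set)
    postings: list = field(default_factory=list)

    def invite(self, person: str):
        # The coordinator ties in other possible network members.
        self.members.add(person)

    def post(self, author: str, item: str):
        # Only members of the space may capture and post knowledge.
        if author in self.members:
            self.postings.append((author, item))

# Usage: a coordinator spins up a space for a cross-border tax community.
space = KnowledgeSpace(name="cross-border-tax", coordinator="m.lee")
space.invite("j.smith")
space.invite("a.ng")
space.post("j.smith", "Draft checklist for transfer-pricing reviews")
print(len(space.postings), "item(s) posted by", len(space.members), "members")
```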
CREATING A KNOWLEDGE-SHARING CULTURE
Experts have long warned infrastructure builders to beware of the assumption that "if we build it they will come". This is particularly true in the knowledge arena, where building the appropriate architecture and infrastructure is a necessary precondition to knowledge sharing. But it is not a sufficient condition. Once the architecture and infrastructure are in place, the hardest task begins: getting users to willingly contribute their own knowledge, and to use others'. The tasks involved in changing this culture are:
D1. Creating a sense of urgency, and obtaining executive leadership and sponsorship
D2. Outreach programs: training, two-way communication, internal marketing
D3. Reinforcement programs: measurement, reward and recognition, early wins
D4. Embedding knowledge activities in your business processes and technologies
D5. Tackling the Collaboration Dilemma
In his book Leading Change7, John Kotter outlines the steps needed to bring about any kind of change in an organization, and his first two steps are creating a sense of urgency and obtaining executive sponsorship. Creating a sense of urgency is not the same as convincing people that knowledge creation and management are important. At the time of writing, most business leaders, judging
from the best-seller lists of business bookstores, know that knowledge is vitally important to their future, but their spending budgets indicate that Y2K issues, for example, are more urgent. Since it seems to be human nature to look after urgent, unimportant matters before tackling important non-urgent ones8, it is up to you as a knowledge leader to create the urgency that will bring attention, time and spending priority to knowledge issues. Compounding the difficulty of making knowledge urgent is the fact that it is intangible, not very sexy (a legacy perhaps of our unfortunate stereotype of librarians), and difficult to tie directly to short-term, bottom-line results. But there are some things that can be done:
• Use a knowledge Future State Vision to make the benefits of knowledge process transformation more concrete; make it daring and enjoyable, and suggest the risk to the company if competitors embrace the vision before your company does
• Demonstrate, particularly to key decision makers in your organization, how achievement of your knowledge strategy will go a long way toward achieving the company's overall business strategy (and how failure to do so will prevent it)
• Customize your pitch to different audiences: different business leaders have different knowledge needs and priorities, and different Value Propositions for investments in knowledge that you must appeal to
Once you have top management on side, give them the material that you used to convince them (in brief, punchy, jargon-free format) to enable them to articulate clearly and push your message down to the rest of the organization. You'll probably find that your new hires won't need much convincing, so as long as your executive sponsors get through to middle management, your selling job is mostly done.
Outreach programs like training, communications, and internal marketing can take, in our experience, as much as 30% of the time of your knowledge transformation team. No matter how simple your knowledge systems may be, it is almost impossible to give people too much training. Some other pointers:
• Combine technology training with knowledge training, using case studies that introduce participants to the most valuable knowledge and learning resources you can offer, and that illustrate powerful returns on their learning investment
• Remember that eventually knowledge use must become second nature to your people, so plan to migrate and embed the material from "knowledge training" courses into the mainstream sales, technical, and service training programs of the business
• Consider specialized training sessions or case studies customized to the needs and interests of specific audiences
• Use desktop learning modules instead of, or in addition to, classroom training if you know your audience can use them and will save time doing so
Communications is, of course, a two-way street. You should use specialized knowledge communications (ideally in electronic form) to communicate the high-level whats, whys, and hows of the knowledge infrastructure. You should plant knowledge-related articles in existing house organs (both general and specialized) as often as possible. You should inform users of any technical or content problems with your knowledge systems before the users stumble on them themselves, and indicate when the problems will be resolved. You should also spend as much "face time" as possible with audiences and individuals in your organization, selling them on the value of what you're doing, getting their assessments and ideas for improvement, and reinforcing key training and how-to messages. Regular user surveys should be used to canvass user satisfaction and suggestions and to automate their collection. Focus groups of specialized user groups should also be held regularly, so that you can remind them of what you've done for them and brainstorm ideas on what to do next.
Figure 5: Overview of the Knowledge Transformation Process (© 1999 Ernst & Young). The figure summarizes five components: Knowledge Strategy and Vision (principles and policies; value propositions; strategy; Future State Vision); Knowledge Architecture (knowledge-base organization; access tools and roadmaps; taxonomy; external-source and proprietary content; intranet/Internet/extranet and e-commerce integration; technology platform and interfaces with other IT applications); Knowledge Infrastructure & Services (roles and responsibilities; value-added services such as research and analysis; knowledge networks and communities); Knowledge Culture Programs (leadership and sponsors; sense of urgency; training; communication; internal marketing; measurement and rewards; early wins; embedding knowledge activities in core business processes); and Knowledge Innovation Programs (knowledge-embedded products; new product innovation; new processes and channels; new tools and centres of excellence).
Figure 6: The Evolution of Knowledge Management
Prevalent Knowledge Sharing Behaviour
  Past: Knowledge is power, so don't share anything
  Present: Knowledge is valued, so share knowledge content but keep context hidden so users have to come to you
  Future: Knowledge sharing is valued, so share everything
Prevalent Knowledge Use Behaviour
  Past: Don't use it if it's Not Invented Here
  Present: Scavenge knowledge from other sources when there is an obvious re-use saving
  Future: Use templated knowledge from others almost always
Prevalent Learning Model
  Past: Learn at the foot of the Master
  Present: Learn by yourself, from all the sources you can find
  Future: Just-in-time collaborative team learning
Knowledge Centre Model
  Past: People scattered throughout the company with broad, shallow knowledge
  Present: Centrally-managed group, collaborating virtually, each with narrow, deep knowledge
  Future: Networked, boundaryless, global group using common research and analysis methods and tools
Knowledge Value Model
  Past: Knowledge is inherently valuable forever
  Present: Knowledge in context is valuable in use, for a limited time
  Future: Knowledge in context is valuable in use, for a limited time, and otherwise it's free
Knowledge Access Model
  Past: Knowledge is kept within the originating group
  Present: Knowledge is kept within the originating company
  Future: Knowledge is shared globally, even with "competitors"
Greatest Cultural Obstacle to Knowledge Sharing
  Past: "I don't have time" to look at/learn about this
  Present: "I don't have time" to look at/learn about this
  Future: "We don't collaborate effectively" in using knowledge
Dominant Knowledge Value Proposition
  Past: Deploying knowledge reduces cost and cycle time
  Present: Deploying knowledge accelerates growth
  Future: Deploying knowledge powers innovation
As mentioned earlier, surrogate measures of knowledge success (the rate of acquisition of new explicit knowledge, and the rate of use of that explicit knowledge) are not difficult to automate, and they can demonstrate progress and quickly pinpoint problem areas (parts of your knowledge system that are not being used, and user groups that are not contributing or using knowledge). At Ernst & Young, we have found the following set of measures, and measurement methods, to be useful, compelling, and inexpensive to collect:
• Quantity of new knowledge acquisitions (both bought and submitted), by user group and subject matter area (collected automatically by polling software)
• Quantity of accesses of explicit knowledge (number of different users, sessions and hits), by user group and subject matter area (collected automatically by polling software)
• Knowledge success stories (identified and collected by a dedicated member of our communications team from a variety of firm-wide success reports, including sales reports, process improvement reports and satisfaction surveys, followed up by "how did knowledge play a role in this success" interviews)
• Quality scores from user surveys—surveys of both knowledge repository users and knowledge (research and analysis) service users
• Annual penetration surveys and self-assessments by our knowledge management team
To the extent our knowledge success stories reflect "early wins" on new knowledge initiatives, they are especially valuable, since they can persuade senior management that not all the returns from knowledge investments are long term. In some cases early wins are vital to move beyond seed funding of controversial or risky knowledge projects.
The reward and recognition processes you use to reinforce the importance of sharing and using knowledge will depend on the nature of your business and how far you have progressed in creating a knowledge culture. We have found that sometimes people are uncomfortable with being overtly rewarded for knowledge contributions, because they feel they are just doing their job. Coercive measures (e.g., minimum knowledge contribution "quotas") can work well in some companies but backfire in others. Companies that use individual Balanced Scorecard-type measures of performance will often welcome new intellectual capital-related measures to add to the Scorecard.
The ultimate goal of most Chief Knowledge Officers is to move knowledge-sharing from "something to do on top of everything else we do" to "the way we do things around here". This means that knowledge activities and behaviours must be embedded in the regular business processes of the organization: sales, new product innovation, production, service, recruiting, performance measurement, training, etc. It has even been suggested9 that the CKO role may be a temporary one, lasting only until this embedding is complete. Some of the ways of embedding knowledge in regular business processes include the following (the online-forms idea is sketched at the end of this section):
• Embedding knowledge training materials into the company's established business training programs
• Embedding knowledge activities into the company's process and procedure manuals
• Ensuring that forms (both paper and online) that must be completed include reference to accessing and contributing applicable knowledge (ideally, online
• Ensuring that forms (both paper and online) that must be completed include references to accessing and contributing applicable knowledge (ideally, online forms can include “hot-buttons” that allow you to access and contribute knowledge automatically as you complete the form)
• Programs which, by default, push out or jump to applicable knowledge as part of their routines, and programs which, by default, capture knowledge as it is entered by users, unless overridden by security considerations (a minimal sketch of this pattern closes this section)
• Program changes that simplify work processes (e.g., those that access knowledge-bases and complete fields for you) instead of complicating them
Most of the major corporations that have visited our Centre for Business Knowledge have affirmed that these cultural challenges are more difficult and take longer than building the knowledge architecture and infrastructure, and none is more challenging than creating a culture of collaboration. As most of us who have experienced the frustration and futility of dysfunctional committees and unnecessary meetings can testify, open collaboration, sharing and listening to others’ ideas, and objective, egalitarian decision-making among business colleagues are very difficult to achieve, even with an impartial and skilled facilitator. The challenge is even greater when the collaboration is virtual: the body language is missing, discussion threads can go off on tangents, and multiple discussions can go on simultaneously. Until new generations of workgroup tools are developed, producing a robust, open collaboration environment in (and increasingly between) businesses will remain a huge challenge to the effectiveness of organizational knowledge-sharing.
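To make the default push-and-capture idea concrete, here is a minimal sketch. The chapter names no particular system, so the repository interface, the form-handling function, and the security rule below are all hypothetical:

class KnowledgeRepository:
    """Stand-in for a real knowledge base; both methods are assumptions."""
    def __init__(self):
        self.items = {}  # subject area -> list of contributed entries

    def search(self, subject_area):
        return self.items.get(subject_area, [])

    def submit(self, subject_area, entry):
        self.items.setdefault(subject_area, []).append(entry)

# Security override: areas whose form content must never be auto-captured.
RESTRICTED_AREAS = {"merger pipeline"}

def complete_form(form_fields, subject_area, repository):
    # Push: by default, surface applicable knowledge as the form is completed.
    for item in repository.search(subject_area)[:3]:
        print("Related knowledge:", item)
    # Capture: by default, harvest what the user entered,
    # unless a security consideration overrides.
    if subject_area not in RESTRICTED_AREAS:
        repository.submit(subject_area, form_fields)
    return form_fields

A sales-call report form built this way would surface earlier reports on the same customer while the salesperson types, and would file the finished report with no extra effort, except where the engagement falls in a restricted area.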
KNOWLEDGE AND INNOVATION
Earlier in this chapter, Innovated Knowledge was identified as one of the four main types of knowledge. Because the link between knowledge-sharing and innovation is intuitive rather than obvious, the reader may need to be convinced that almost all innovation is knowledge-powered. Following are five ways of using knowledge to make organizations more innovative:
E1. Employ knowledge to create knowledge-embedded products
E2. Employ knowledge to enhance the New Product Development process
E3. Employ knowledge to innovate business processes and delivery channels
E4. Employ knowledge to tap new markets
E5. Employ knowledge to engineer new business work-tools
What are “knowledge-embedded products”? They are products that contain programmed intelligence that makes them more valuable and (if the programs are upgradable) extends their useful lives10. Two examples are the module in some cars that shows current gas mileage, and the software in satellite dish receivers that downloads changes to station and program lineups. The advent of the Internet is allowing many more such products to be developed, such as on-line diagnostic systems. It is codified (structural) knowledge that gives these products their value.
Shared knowledge also allows non-R&D departments to contribute to the new product development process. For example, if in a meeting with a sales representative a customer identifies a need not currently satisfied by existing products, and that knowledge is codified and shared so that the R&D department becomes aware of it, R&D can work with the sales representative and the client (possibly using a shared extranet) to develop and commercialize a solution.
Knowledge can help businesses innovate processes, not just products. So-called “sales force automation” tools effectively capture and deploy knowledge about customers and products to dramatically streamline and enhance the sales process. The change is sometimes revolutionary, enabling virtual sales-force deployment, allowing effective use of sales call centres, and substantially altering the skill set needed by an effective salesperson.
Much has been written about how the Internet and e-commerce are allowing local businesses to “go global”. Too often the technology is credited with this success, when in fact it is the knowledge shared between the company and its new customers and suppliers (including, in some cases, knowledge of the very existence of new suppliers) that opens up new markets. On-line surveys and customer knowledge-bases that electronically canvass the product attributes new customers are prepared to pay for also help identify new niche markets and customer segments.
As illustrated earlier in Figure 2, new knowledge-powered tools like Call Centre software, which captures knowledge about customer needs and buying habits and matches it to product specifications, can innovate the workplace, using technology to drive process and behaviour change and enhance performance. New data mining tools are also being developed that use knowledge-bases in conjunction with neural network logic to identify and capture sales opportunities, trends and competitive threats.
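As a toy illustration of that matching idea, the sketch below ranks products by keyword overlap between a captured customer need and each product’s specification terms. It is deliberately naive (real Call Centre and data mining tools are far more sophisticated, and the chapter names none), and all product names are invented:

def match_needs_to_products(need_text, product_specs):
    """Rank products by term overlap with a stated customer need."""
    need_terms = set(need_text.lower().split())
    ranked = sorted(product_specs.items(),
                    key=lambda item: len(need_terms & item[1]),
                    reverse=True)
    return [name for name, terms in ranked if need_terms & terms]

# Hypothetical product catalogue, keyed by specification keywords.
specs = {
    "dish-receiver": {"satellite", "upgradable", "software", "lineup"},
    "mileage-module": {"fuel", "mileage", "dashboard", "car"},
}
print(match_needs_to_products("upgradable satellite software", specs))
# -> ['dish-receiver']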
CONCLUSION
This chapter has attempted to illustrate a five-phase process to plan and
navigate the transformation from a knowledge-hoarding organization (“we don’t know what we don’t know”) to a knowledge-sharing organization. As a recap, Figure 5 illustrates these five phases. The transformation, even in leading-edge organizations, is not yet over: Figure 6 compares the changes that have already occurred with those that are yet to come.
Speaking as someone who has held the Chief Knowledge Officer role at Ernst & Young for the last five years, and who has spoken with more than 30 major corporations, most of which are just beginning this transformation, I can tell you that there probably isn’t a more challenging, important, and ultimately satisfying job to be found in the work world today. I would welcome any questions or continuing dialogue on any facet of knowledge management at [email protected], and hope to hear from readers visiting the definitive site on knowledge management matters at www.brint.com.
ENDNOTES
1. Tapping into the Tacit Knowledge of the Organization, Hubert Saint-Onge, Toronto, CIBC Leadership Centre (unpublished), 1995.
2. The Knowledge-Creating Company, Ikujiro Nonaka & Hirotaka Takeuchi, New York, Oxford University Press, 1995.
3. Ba: A Place for Knowledge Creation, Ikujiro Nonaka, in “Perspectives on Business Innovation Vol. 2”, Boston, Ernst & Young Center for Business Innovation, 1998 (available through www.businessinnovation.ey.com).
4. These business simulations are more fully described and available through www.celemi.com; much of the work in their development was done by Karl-Erik Sveiby, who is on Celemi’s Advisory Board.
5. The Balanced Scorecard, Kaplan & Norton, Boston, HBS Press, 1996; the Harvard Business Review has since published several follow-up articles.
6. The Knowledge Management Scenario: Trends and Directions for 1998-2003, Gartner Group, Inc., March 18, 1999 (available through their database service).
7. Leading Change, John P. Kotter, Boston, HBS Press, 1996.
8. The Urgency Addiction, a chapter in First Things First, Stephen R. Covey, New York, Simon & Schuster, 1994.
9. Intellectual Capital: The New Wealth of Organizations, Thomas A. Stewart, New York, Doubleday, 1997.
10. As described in Blur: The Speed of Change in the Connected Economy, Stan Davis & Christopher Meyer, Perseus, 1998.
Additional Useful Readings:
11. Wellsprings of Knowledge, Dorothy Leonard-Barton, Boston, HBS Press, 1995.
12. Perspectives on Business Innovation, Volumes 1-3, Boston, Ernst & Young Center for Business Innovation, 1996-99 (available through www.businessinnovation.ey.com).
13. Jumping the Curve, Nicholas Imparato & Oren Harari, San Francisco, Jossey-Bass, 1997.
14. The Knowledge-Enabled Organization, Daniel R. Tobin, New York, AMACOM, 1998.
15. California Management Review, Annual Special Editions on Knowledge Management.
About the Editor
Vijayan Sugumaran is an Assistant Professor of Management Information Systems in the Department of Decision and Information Sciences at Oakland University. His research interests are in the areas of Intelligent Support Systems, Intelligent Agent and Multi-Agent Systems, Domain Modeling and Software Reuse, Component-Based Software Development, Knowledge-Based Systems, Data & Information Modeling, and Internet Technologies. His most recent publications have appeared in Communications of the ACM, Data Base, Information Systems Journal, Data and Knowledge Engineering, Journal of Healthcare Technology and Management, Journal of Logistics and Information Management, Automated Software Engineering, and Expert Systems. He has also presented papers at various national and international conferences. Dr. Sugumaran has served on the program committees of several international conferences, such as the International Conference on Information Systems (ICIS 2001), Conceptual Modeling (ER 2002), the Workshop on Information Technologies and Systems (WITS 2001, 2002), and the International Workshop on Applications of Natural Language to Information Systems (NLDB 2001, 2002). He is also the Chair of the Intelligent Information Systems track for the IRMA Conference (2001, 2002) and of the Intelligent Agent and Multi-Agent Systems in Business mini-track for AMCIS (1999, 2000, 2001).
Index
A
activity-related constraints 75
agent 84
agent host 91, 93
agent implementation 91
agent reference 91
agent technology 84
agent-based systems 98
aggregator/integrator 206
anesthetists 76
application domain 3
application domain independence 57
artificial intelligence (AI) 47
Artificial Intelligence Research 65
assessing strategic options 220
automation 179
autonomous agents 87
autonomy 86
B
Bilateral Control Strategies 265
browsing 283
bus configuration and port design 219
business bus 207
business engineering 206
business enterprises 26
business environment 177
Business Model Innovation 177
Business Networking 201, 202, 203, 209
Business Networking Challenges 204
Business Networks 200
Business Process Reengineering (BPR) 4, 5
business strategy 279
business-networking solution 200
business-oriented conceptualization 206
C
catalogues 283
change management 220
Chathound 130
Chathound Implementation 131
Chathound Requirements 130
chatrooms 130
Chief Knowledge Officers 292
circulation of daily newspapers 34
co-alliance models 232
co-opetition 248, 249
collaborating agents 2
collaboration 256
collaborative knowledge 250
commercialization 63
communications 289
communications infrastructure 33
competitive advantage 226
Competitive Intelligence (CI) 4
Computer Supported Co-operative Work 72
computerization 33
conceptual data models 54
conceptual design 44
consensus building 183
constraints 74
context-dependencies 78
continuation activity 212
contribution 106
Control Strategies 253, 259
controlling knowledge 250
coopetitive 248
critical success factor 201
culture 229
Customer Capital 275
customer knowledge 275
customer knowledge investments 277
customer loyalty 154, 155
customer processes 206
customization 156
D
data collection 10
Data Model Independence 57
Data Models 52
database 43
database design 63
database design methodology 44
database design support 49, 64
database design tools 43, 64
decrease stage 111
defuzzification 119
detail IS implementation 219
Deviation Trajectories 262
digital communication 249
digital technologies 249
disintermediation 286
dollar exchange rate 31
Dynamic Evolution 3
E
E-Commerce systems 284
Economic Sectors 31
education 34
electronic commerce (EC) 201
electronic communications 262
electronic consultative commerce 241
electronic consultative enterprise 237
electronic marketplace 227
electronic procurement 200
electronic purchasing service 204
electronic trust 103
Email Reader Agent 91
enterprise resource planning 201
Enterprise Resource Planning (ERP) 238
eProcurement 218
executing simulation 114
Explicit (Structural) Knowledge Investments 277
explicit knowledge 254, 275
explicit knowledge flows 257
external debt 31
Extranets 284
F
Failure Model 113
Financial Capital 31
firm boundaries 248
fixed priority scheduling 89
forecasting 109
formal testing 65
Future State Vision 278
fuzzy controller 116
fuzzy reasoning 109, 116
fuzzy rules 118
G
globalness 229
goal definition 203
gross domestic product (GDP) 31
growth stage 111
H
hospital scheduling problem 73
Hospital-Scheduling 72
Human Capital 28, 35, 275
I
indirection 81, 82
inflation 32
information and communication technologies (ICT) 226
information intensity analysis 211
information processing 180
information sources 3
information systems (IS) 4, 177, 180, 201
information technology (IT) 98, 201
Information-Processing Model 182
information-processing paradigm 180
Innovated Knowledge 275
Innovated Knowledge Investments 277
Innovation Capital 275
intellectual assets 29
intellectual capital 23, 27, 39
intellectual capital assessment 22
intellectual capital measures 40
intelligent agent 1, 85, 98, 99, 100, 124, 126
intelligent database design tools 47
intelligent machines 165
intelligent systems 164
Inter-Agent Communication 132
Inter-Business Networking 200
inter-firm relationships 252
inter-firm transactions 251
inter-organizational collaboration 250
Interface Agent 91
Interface Usability 156
intermediate instance 79
International Events 32
Internet 284
Internet Relay Chat (IRC) 130
Internet use 34
introduction stage 111
IS/IT network design 219
K
keiretsu network structure 261
knowledge 275
knowledge application phase 214
knowledge architecture 280
knowledge assets 22, 23
knowledge base 227
knowledge creation 191, 275, 279
knowledge empowered service 241
knowledge exchanges 248, 249, 253
knowledge flows 248, 254, 257, 258
knowledge infrastructure 286
knowledge investments 278
knowledge management 177, 179, 180, 183, 186, 200, 203, 213
knowledge mapping 278
knowledge navigation tools 281
knowledge object 280
knowledge plan 278
knowledge repository 203, 250
knowledge roles 286
knowledge sharing 255
knowledge strategy 279
knowledge transfer 254
Knowledge Value Networks 241
knowledge-based database design tools 44
knowledge-based system 43, 46
knowledge-based workflows 255
knowledge-focused business games 276
knowledge-intensive transactions 254
Knowledge-Intensive Environments 135
knowledge-powered 274
knowledge-sharing culture 288
L
learning agents 242
logic programming 164
logical design 44
M
management agent 81
Market Capital 28, 32
market needs 32
market-alliances 233
markets 230
maturity stage 111
method development 205
method engineering 208
minutiae of machinery 181
monetary cost 158
multi-agent systems 73
multi-agent technologies 100
multiple inheritance 75
N
National Intellectual Capital 22
networkability 210, 211
networked systems 39
networks 229
news postings 124
Newshound 124
Newshound Architecture 125
Newshound Implementation 125
Newshound Requirements 125
Newshounds Interface 126
non-working hyperlink 149
O
organizational business model 178
Organizational Capital 28
organizational structures 253
organizations 252
P
personal assistants 126
personalization 156
physical design 44
policy objects 79
Policy-Agents 72
port implementation 219
portals 283
predictions 52
pro-activeness 86
procedural strategies 253
process capital 28, 33
procurement process 218
procurement process potentials 218
Product Life Cycle 111
profiling 283
program maintenance module 58
programming languages 84
proposed model 185
“push” and “pull” mechanisms 283
R
rationalization 179
reactivity 86
Reciprocal Exchange 261
reciprocal knowledge 256
reengineering 180, 287
relationship management (RM) 201
Renewal and Development Capital 29, 36
requirements collection 44
resource 76
resource agent 81
resource class agent 81
resource-based approach 249
Return Model 114
risk cost 158
road maps 283
S
Sales Model 111
scalability 3
search engines 283
searching 283
security 158
Semantic Heterogeneity 3
shared knowledge 293
simulation 109
simulation results 116
smart agents 85
social ability 86
social control strategies 253
software agents 241
software use 34
star-alliance models 233
stickiness measures 159
strategic advantage 225
strategic resource management 248
structural capital 28, 275
sub-models 110
subscribing 283
supply chain 208
supply chain management 201
supply chain management (SCM) 201
surgeons 76
synchronization constraints 75
system testing 63
T
tacit knowledge 203, 255, 275
tacit knowledge control strategies 263
tacit knowledge flows 258
TAM (Technology Acceptance Model) 135
targeted users 54
taxonomy 280
teaching tool 136
theory of reasoned action (TRA) 137
time cost 158
Tolerable Waiting Time 147
Total Quality Management (TQM) 4, 5
training 289
Transaction Cost Economics (TCE) 251
Transaction Governance 251
transformation 274, 277
Trust Environment 98
U
unemployment 31
unidirectional knowledge sharing 255
usage model 114
user interface 61, 78
V
value-alliance models 233
vendor managed inventory 204
View Integration 49
Virtual Alliance Models (VAM) 228
virtual brokers 234
virtual cultures 228
virtual economy 249
virtual faces 232
virtual management 225
virtual organisation 225, 226
virtual organisational change 225
Virtual Organisational Change Model (VOCM) 228
virtual organizations 200, 208, 220, 228, 248, 254
virtual organizing 200, 213
Virtual Strategic Perspective (VSP) 229
virtuality 231
W
waiting time 147
weak negation 166
Web Users’ Waiting Time 145
Web-Based Customer Loyalty 153
Webhound Implementation 132
Webhound Requirements 132
website design 161
World Wide Web (WWW) 1, 145
Y
Y2K issues 288