Annals of Cases on Information Technology
Mehdi Khosrow-Pour
Information Resources Management Association, USA
This book is a release of the Cases on Information Technology Series
Idea Group Publishing
Information Science Publishing
Hershey • London • Melbourne • Singapore • Beijing
Editor-in-Charge
Mehdi Khosrow-Pour, Pennsylvania State University, USA

Associate Editors
Adi Armoni, Tel-Aviv University, Israel
Guisseppi A. Forgionne, University of Maryland Baltimore County, USA
Ray Hackney, Manchester Metropolitan University, UK
Sherif Hussein Kamel, American University of Cairo, Egypt
Hans Peter Lehmann, University of Auckland, New Zealand
Sorel Reisman, California State University, Fullerton, USA
Kees van Slooten, University of Twente, The Netherlands

Managing Editor
Jan Travers, Idea Group Publishing

Assistant Managing Editor
Amy Poole, Idea Group Publishing
Idea Group Publishing 1331 E. Chocolate Avenue Hershey PA 17033-1117 USA Tel: 717/533-8845 Fax: 717/533-8661 URL: http://www.ideagroup.com
International Editorial Advisory Board
Muhammad Al-Khaldi, King Fahd Univ., Saudi Arabia
Adel M. Aladwani, Kuwait University, Kuwait
Amir Albadvi, Tarbiat Modarres University, Iran
Marios C. Angelides, South Bank University, UK
Norman Archer, McMaster University, Canada
Tonya Barrier, Southwest Missouri State University, USA
Shirley Becker, Florida Institute of Technology, USA
Harry C. Benham, Montana State University, USA
Dennis Bialaszewski, Indiana State University, USA
Amita Goyal Chin, VA Commonwealth University, USA
Ronald Clute, Metropolitan State University of Denver, USA
Eli Boyd Cohen, Informing Science Institute, USA
Jakov Crnkovic, State University of NY at Albany, USA
Connie Wilson Crook, University of NC, Charlotte, USA
Maeve Cummings, Pittsburg State University, USA
George Ditsa, University of Wollongong, Australia
David Feinstein, University of South Alabama, USA
Eugenia Fernandez, Indiana Purdue University, USA
Gerry Gingrich, National Defense University, USA
Janis Gogan, Bentley College, USA
Gerald Grant, Carleton University, Canada
Syed Zahoor Hassan, Lahore Univ. of Mgmt Science, Pakistan
Herman P. Hoplin, Syracuse University, USA
Nancy J. Johnson, Capella University, USA
Annette Marie Jones, University of Canterbury, New Zealand
Eugene L. Kaluzniacky, University of Winnipeg, Canada
Jahangir Karimi, University of Colorado at Denver, USA
Julie Kendall, Rutgers University-Camden, USA
Omar E.M. Khalil, University of Massachusetts, USA
Barbara Klein, University of Michigan-Dearborn, USA
Mathew J. Klempa, Information Systems Consultant, USA
Ram L. Kumar, University of North Carolina, Charlotte, USA
Gwynne Larsen, Metropolitan State College-Denver, USA
Susan K. Lippert, George Washington University, USA
Hao Lou, Ohio University, USA
Mo Adam Mahmood, University of Texas, El Paso, USA
Nina McGarry, George Washington University, USA
Kathleen Moffitt, California State University, Fresno, USA
Janette W. Moody, The Citadel, USA
Fiona Fui-Hoon Nah, University of Nebraska-Lincoln, USA
Karen S. Nantz, Eastern Illinois University, USA
Ali Nazemi, Roanoke University, USA
Robert Neilson, National Defense University, USA
Michael L. Nelson, NASA Langley Research Center, USA
David J. Paper, Utah State University, USA
Raymond Papp, Central CT State University, USA
Arun Rai, Georgia State University, USA
Rohit Rampal, University of Rhode Island, USA
Eugene J. Rathswohl, University of San Diego, USA
Syed Rahman, Minnesota State University-Mankato, USA
Ali Salehnia, South Dakota University, USA
Keng Siau, University of Nebraska-Lincoln, USA
Pascal Sieber, University of Bern, Switzerland
Eileen Trauth, Pennsylvania State University, USA
Wim Van Grembergen, University of Antwerp, Belgium
Dennis Viehland, Massey University Albany, New Zealand
Jennifer L. Wagner, Roosevelt University, USA
Nancy C. Weida, Bucknell University, USA
Stu Westin, University of Rhode Island, USA
Marilyn Wilkins, Eastern Illinois University, USA
Jessie Yuk Yong Wong, Nanyang Tech University, Singapore
Vincent Yen, Wright State University, USA
Ira Yermish, St. Joseph's University, USA
Acquisitions Editor: Mehdi Khosrow-Pour
Managing Editor: Jan Travers
Development Editor: Michele Rossi
Copy Editor: Maria Boyer
Typesetter: LeAnn Whitcomb
Printed at: Integrated Book Technology
Published in the United States of America by
Idea Group Publishing
1331 E. Chocolate Avenue
Hershey PA 17033-1117
Tel: 717-533-8845
Fax: 717-533-8661
E-mail: [email protected]
Web site: http://www.idea-group.com

and in the United Kingdom by
Idea Group Publishing
3 Henrietta Street
Covent Garden
London WC2E 8LU
Tel: 44 20 7240 0856
Fax: 44 20 7379 3313
Web site: http://www.eurospan.co.uk

Copyright © 2002 by Idea Group Publishing. All rights reserved. No part of this book may be reproduced in any form or by any means, electronic or mechanical, including photocopying, without written permission from the publisher.

Annals of Cases on Information Technology is part of the Idea Group Publishing series named Cases on Information Technology Series (ISSN 1537-9337).

ISSN: 1537-937X (formerly published under the title Annals of Cases on Information Technology Applications and Management in Organizations, ISSN 1098-8580)
ISBN 1-930708-40-8
eISSN: 1533-8002
eISBN: 1-59140-026-0

British Cataloguing in Publication Data
A Cataloguing in Publication record for this book is available from the British Library.
Annals of Cases on Information Technology (ISSN 1537-937X) is published annually in January by Idea Group Publishing, 1331 E. Chocolate Avenue, Hershey, PA 17033-1117, USA. The annual cost is US$89. The views expressed in this publication are those of the authors and not necessarily those of Idea Group Inc.
Annals of Cases on Information Technology 2002/Volume 4
Table of Contents
Preface .................................................................................................................... viii

DataNaut Incorporated: Growing Pains of a Small Company on the Verge of an Internet Revolution ........ 1
Nancy Shaw, George Mason University, USA
Joan O'Reilly Fix, Citibank N.A., USA

Military Applications of Natural Language Processing and Software ........ 12
James A. Rodger, Indiana University of Pennsylvania, USA
Tamara V. Trank, Naval Health Research Center, USA
Parag C. Pendharkar, Pennsylvania State University at Harrisburg, USA

IT-Based Decision Tools for Item Processing Operations Management in Retail Banking ........ 29
Charles J. Malmborg, Rensselaer Polytechnic Institute, USA

The Dilemma of Dairy Farm Group Between Redesigning of Business Processes and Rebuilding of Management Information Systems ........ 39
Eugenia M.W. Ng, Hong Kong Institute of Education, Hong Kong
Ali F. Farhoomand, University of Hong Kong, Hong Kong
Probir Banerjee, City University of Hong Kong, Hong Kong

Developing a Hypertext Guide Program for Teaching the Simple Tasks of Maintaining and Troubleshooting Educational Equipment ........ 58
Kamel Hussein Rahouma, Minia University, Egypt
Peter Zinterhof, University of Salzburg, Austria

IS Strategy at NZmilk ........ 73
Paul Cragg, University of Canterbury, New Zealand
Bob McQueen, University of Waikato, New Zealand

Implementing Information Technology to Effectively Utilize Enterprise Information Resources ........ 84
Yousif Mustafa, Central Missouri State University, USA
Clara Maingi, Central Missouri State University, USA
Implementation of Information Technology in a Job Shop Manufacturing Company – A Focus on ManuSoft ........ 103
Purnendu Mandal, Marshall University, USA

Shared Workspace for Collaborative Engineering ........ 119
Dirk Trossen, Nokia Research Center, USA
Andre Schuppen, Aachen University of Technology, Germany
Michael Wallbaum, Aachen University of Technology, Germany

IT in Improvement of Public Administration ........ 131
Jerzy Kisielnicki, Warsaw University, Poland

The Foreign Banks' Influence in Information Technology Adoption in the Chinese Banking System ........ 141
Michelle W.L. Fong, Victoria University, Australia

Adopting the Process View: A Case Study of Modeling Change in the Not-For-Profit Sector ........ 162
Antony Bryant, Leeds Metropolitan University, UK
Veena Syan, Forzani Group, Canada

Developing Interorganizational Trust in Business-to-Business E-Commerce Participation – Case Studies in the Automotive Industry ........ 184
Pauline Ratnasingam, University of Vermont, USA

Analyzing the Evolution of End User Information Technology Performance: A Longitudinal Study of a County Budget Office ........ 195
John Sacco, George Mason University, USA
Darrene Hackler, George Mason University, USA

Adopting IT: Food Program Sponsor Discovers It's No Picnic ........ 209
John M. Anderson, University of North Carolina Wilmington, USA
William H. Gwinn, University of North Carolina Wilmington, USA

Everyone's Watching: The Remarkable Public Reorganization of the Nevada Department of Motor Vehicles ........ 225
William L. Kuechler, University of Nevada-Reno, USA
Dana Edberg, University of Nevada-Reno, USA

IT Help Desk Implementation: The Case of an International Airline ........ 241
Steve Clarke, University of Luton, UK
Arthur Greaves, London Borough of Hillingdon, UK

Application of Tree-Based Solutions: A Case Study with INEEL ........ 260
David Paper, Utah State University, USA
Kenneth B. Tingey, Utah State University, USA
Recognizing Runaway IS Projects When They Occur: The Bank Consortium Case ........ 272
Joan Ellen Cheney Mann, Old Dominion University, USA

Long-Term Evolution of a Conceptual Schema at a Life Insurance Company ........ 280
Lex Wedemeijer, ABP, The Netherlands

Incentives and Knowledge Mismatch: The Deemed Failure of a BPR Project in a Large Banking Organization ........ 297
Parthasarathi Banerjee, National Institute of Science, Technology and Development Studies, India

Risks in Partnerships Involving Information Systems Development: Lessons from a British National Health Service Hospital Trust ........ 316
G. Harindranath, Royal Holloway College, University of London, UK
John A.A. Sillince, Royal Holloway College, University of London, UK

A Case on Communications Management ........ 328
Susanne Robra-Bissantz, Universitat Erlangen-Nurnberg, Germany

A Case Study of One IT Regional Library Consortium: VALE – Virtual Academic Library Environment ........ 345
Virginia A. Taylor, William Paterson University, USA
Caroline M. Coughlin, Consultant, USA

Prudential Chamberlain Stiehl: The Evolution of an IT Architecture for a Residential Real Estate Firm, 1996-2001 ........ 360
Andy Borchers, Kettering University, USA
Robert Mills, Prudential Chamberlain Stiehl Realtors, USA

Seaboard Stock Exchange's Emerging E-Commerce Initiative ........ 376
Linda V. Knight and Theresa A. Steinbach, DePaul University, USA
Diane M. Graf, Northern Illinois University, USA

Added-Value Benefits of Application of Internet Technologies to Subject Delivery ........ 390
Stephen Burgess, Victoria University, Australia
Paul Darbyshire, Victoria University, Australia

Enterprise Information Portal Implementation: Knowledge Sharing Efforts of a Pharmaceutical Company ........ 410
Alison Manning, Washington State University, USA
Suprateek Sarker, Washington State University, USA

Design and Implementation of a Wide Area Network: Technological and Managerial Issues ........ 427
Rohit Rampal, Portland State University, USA
An Experience of Software Process Improvement Applied to Education: The Personal Work Planning Technique ........ 440
D. Antonio de Amescua Seco, Carlos III University of Madrid, Spain
Javier Garcia Guzman, Carlos III University of Madrid, Spain
Maria-Isabel Sanchez-Segura, Carlos III University of Madrid, Spain
Paloma Martinez Fernandez, Universidad Politecnica of Madrid, Spain
Juan Lloreas Morillo

SEIU Local 36 Benefits Office: The Y2K Crisis and Its Aftermath ........ 456
Ira Yermish, St. Joseph's University, USA

Credit Card System for Subsidized Nourishment of University Students ........ 468
Kresimir Fertalj, Damir Kalpic, Vedran Mornar & Slavko Krajcar, University of Zagreb, Croatia

Designing a First-Iteration Data Warehouse for a Financial Application Service Provider ........ 487
Nenad Jukic, Loyola University of Chicago, USA
Tania Neild, InfoGrate Incorporated, USA

Reengineering the Selling Process in a Showroom ........ 499
Jakov Crnkovic, University at Albany, State University of New York, USA
Nebojsa Janicijevic, University at Belgrade, Yugoslavia
Goran Petkovic, University at Belgrade, Yugoslavia

Leveraging IT and a Business Network by a Small Medical Practice ........ 513
Simpson Poon, Charles Sturt University, Australia
Daniel May, Monash University, Australia

Systems Design Issues in Planning and Implementation: Lessons Learned and Strategies for Management ........ 526
Mahesh S. Raisinghani, University of Dallas, USA

Index ..................................................................................................................... 535
Preface
The decade of the 1990s brought Web-enabled technologies and their most popular application, e-commerce. Many predicted that e-commerce technologies would revolutionize the way organizations conduct their business and manage their resources. E-commerce remains strong and vibrant in assisting organizations of all sizes and types, but now, in the first decade of the 21st century, many organizations are trying to learn from the pitfalls and successes of these new technologies and their applications. Like the information technologies that preceded them, the Web-enabled technologies of e-commerce will benefit greatly from the lessons of their earlier application and management. The case studies included in this publication address many issues, including the e-commerce-related challenges facing modern information technologies and their management. This book consists of 36 case studies authored by more than 60 scholars and practicing managers from all over the world. The following paragraphs provide summaries of the cases included in this publication.

DataNaut Incorporated: Growing Pains of a Small Company on the Verge of an Internet Revolution, by Nancy Shaw, George Mason University (USA) and Joan O'Reilly Fix, Citibank N.A. (USA). This case discusses a small, locally run company that faces several strategic decisions at the end of 1999: marketing its new high-tech products, securing sufficient venture capital financing and creating a profit-sharing plan for current and future employees. The case describes the challenges this company faced with the Internet revolution knocking at its front door. Combining new technology development, HR decisions, marketing and finance, it is a truly cross-disciplinary case describing the challenges of small businesses and the Internet.

Military Applications of Natural Language Processing and Software, by James A. Rodger, Indiana University of Pennsylvania (USA), Tamara V. Trank, Naval Health Research Center (USA), and Parag C. Pendharkar, Pennsylvania State University at Harrisburg (USA). This case describes a preliminary feasibility study aboard U.S. Navy ships of using voice interactive technology to improve medical readiness. A focus group was surveyed about reporting methods in health and environmental surveillance inspections in order to develop criteria for designing a lightweight, wearable computing device with voice interactive capability. The case study describes the process of planning, analysis, design and implementation of an integrated voice interactive device (VID) for the Navy, and reports the challenges that must be considered to enhance health protection and improve medical readiness by applying voice interactive technology to environmental and clinical surveillance activities aboard U.S. Navy ships.

IT-Based Decision Tools for Item Processing Operations Management in Retail Banking, by Charles J. Malmborg, Rensselaer Polytechnic Institute (USA). This case reports the IT challenges facing Merit Bank, a multi-line financial services company with
$75 billion in assets and approximately 1,000 retail branches distributed across 20 geographic divisions in 16 states. Merit's aggressive acquisition and consolidation strategy in its retail and commercial banking divisions has significantly increased check processing volumes and motivated major investments in automated imaging technology and branch operations reporting systems. The case describes the refocusing of IT resources to improve item processing operations in retail banking: branch operations and item processing software tools are integrated to develop courier scheduling tools that minimize uncollected checks at branch offices, and automated encoding systems are adapted for just-in-time processing to maximize cost savings in check clearing operations.

The Dilemma of Dairy Farm Group Between Redesigning of Business Processes and Rebuilding of Management Information Systems, by Eugenia M.W. Ng, Hong Kong Institute of Education (Hong Kong), Ali F. Farhoomand, University of Hong Kong (Hong Kong), and Probir Banerjee, City University of Hong Kong (Hong Kong). This case reports the IT challenges of the Dairy Farm Group (DFG) of Companies, a leading food and drugstore retailer in the Asia-Pacific region. DFG and its associates operated supermarkets, hypermarkets, convenience stores and drugstores in nine territories and had sales of US$6.9 billion in 1997. However, DFG's profit margin was low compared to its competitors in Hong Kong and China and to other retailers in Europe and the U.S.; consequently, a new chief executive officer was hired in June of that year. The case study describes a preliminary investigation of the existing DFG information systems and the changes recommended by two independent consulting firms that were brought in to conduct the investigation and to determine how DFG could better utilize its IT resources to improve its profitability and enhance its strategic position.

Developing a Hypertext Guide Program for Teaching the Simple Tasks of Maintaining and Troubleshooting Educational Equipment, by Kamel Hussein Rahouma, Minia University (Egypt) and Peter Zinterhof, University of Salzburg (Austria). This case reports the challenges facing educational technology programs at Minia University, Egypt, where hypertext technology was utilized to remedy shortcomings in assessing the effectiveness of the educational programs delivered. The case study describes the process of designing, implementing and applying a hypertext GUIDE program for teaching the educational technologists who graduated from the Department of Educational Technology, and ways that improvements can be made to the existing educational technology programs.

IS Strategy at NZmilk, by Paul Cragg, University of Canterbury (New Zealand) and Bob McQueen, University of Waikato (New Zealand). This case describes the situation of NZmilk, a small, fresh-milk supplier contemplating greater use of IS to become more competitive following deregulation of the industry, under which supermarkets and home delivery contractors could purchase milk from wherever they chose rather than from a required local manufacturer. Deregulation opened up both competition and expansion opportunities within the industry. The case reports the process of developing a new IS strategy to help NZmilk become more competitive and improve its strategic posture.
Implementing Information Technology to Effectively Utilize Enterprise Information Resources, by Yousif Mustafa, Central Missouri State University (USA) and Clara Maingi, Central Missouri State University (USA). This is a typical case of implementing information technology to help an enterprise effectively utilize its production information resources. The enterprise, a world-class leader in the pharmaceutical industry, keeps a huge number of technical research reports on shared network media. The case reports that the best solution to the problem is an information system that keeps track of these reports, provides a concise synopsis of each, enables researchers to search by keyword and gives a direct link to locate each report via a friendly Web-based user interface.

Implementation of Information Technology in a Job Shop Manufacturing Company – A Focus on ManuSoft, by Purnendu Mandal, Marshall University (USA). This case describes A.B.C. Engineering, a Melbourne-based job shop manufacturing company that attempted a major improvement in the information technology area by implementing and enhancing the capability of an MIS software package called 'ManuSoft.' The case reports the challenges of implementing ManuSoft, a generic MIS package, and of enhancing its value to management through the development of object-oriented interfacing programs.

Shared Workspace for Collaborative Engineering, by Dirk Trossen, Nokia Research Center Boston (USA), André Schüppen, Aachen University of Technology (Germany), and Michael Wallbaum, Aachen University of Technology (Germany). This case deals with the difficult task of supporting collaborative engineering, given the variety of proprietary data and tools to be integrated in a shared workspace in the field of chemical engineering research. The case study describes the design process for a collaborative engineering workspace under development at the University of Technology, Aachen, Germany, within a research project that takes distributed chemical engineering as an example. Current solutions and challenges as well as future work are outlined, including the lessons learned from the study.

IT in Improvement of Public Administration, by Jerzy Kisielnicki, Warsaw University (Poland). The case study describes the process of implementing IT for the improvement of public administration in Bialystok, Poland, a city of 280,000 inhabitants. The new management system is based on new IT solutions, including an extranet network and an integrated database. The result of implementing the new IT was a reduction of decision-making time by an average of 30% and a reduction of routine-affairs handling time by an average of 25%.

The Foreign Banks' Influence in Information Technology Adoption in the Chinese Banking System, by Michelle W.L. Fong, Victoria University (Australia). This case study examines the foreign banking sector's potential for transferring technology to the domestic banks in the People's Republic of China. Although the rationale of the Chinese government's admission of foreign banks into its domestic banking industry was to attract foreign capital and banking expertise, the case reports the difficulties involved in foreign banks'
transfer and how potential information technology transfer can be fully utilized as a secondary benefit.

Adopting the Process View: A Case Study of Modeling Change in the Not-For-Profit Sector, by Antony Bryant, Leeds Metropolitan University (UK) and Veena Syan, Forzani Group (Canada). This case study focuses on the operation of an adoption agency in the UK, illustrating the issues involved when a small, not-for-profit organization seeks to respond to pressures to streamline and automate its routines and procedures. It illustrates the limitations of inadequately planned IT-centered initiatives, and how such strategies can be redeemed by process-oriented methods – specifically those derived from a combined BPR and soft systems approach. It also exemplifies the critical importance of organizational issues and the constraints they impose on effective implementation of IT.

Developing Interorganizational Trust in Business-to-Business E-Commerce Participation, by Pauline Ratnasingam, University of Vermont (USA). This case reports on interorganizational systems such as EDI, which have been the main form of business-to-business e-commerce participation in the automotive industry for the last two decades.

Analyzing the Evolution of End User Information Technology Performance: A Longitudinal Study of a County Budget Office, by John Sacco, George Mason University (USA) and Darrene Hackler, George Mason University (USA). This study reports the evolution of personal computer utilization in the public sector and how the budget office of a large county government designed and implemented end user information technology (IT), from personal computers (PCs) and local area networks (LANs) to an intranet and Web pages, over a 15-year period. The study evaluates end user information technology performance and comments on the organizational, technical and social issues that accompany information technology implementation and how public organizations can deal with them.

Adopting IT: Food Program Sponsor Discovers It's No Picnic, by John M. Anderson, University of North Carolina Wilmington (USA) and William H. Gwinn, University of North Carolina Wilmington (USA). This case reports on how small companies are often reluctant to try innovative approaches to information management because of the cost of hardware and software, the potential disruption of processes already dependent on overstressed resources and the lack of in-house expertise. The case looks at the experience of one small nonprofit company, Quality Care, Inc., which provides administrative services for child care providers, and discusses the difficulties it encountered in implementing information technology.
Everyone's Watching: The Remarkable Public Reorganization of the Nevada Dept. of Motor Vehicles, by William L. Kuechler, University of Nevada-Reno (USA) and Dana Edberg, University of Nevada-Reno (USA). This case reports the situation at the Nevada Department of Motor Vehicles and Public Safety, which launched the "Genesis" project in 1999 for planning, organizational restructuring and system development; to the accompaniment of great publicity, the project fell dramatically short of expectations. The case provides the background necessary to understand the origins and shortcomings of the system, then focuses on the turn-around effort that took the system to a point of successful operation within a year of its going into production.

IT Help Desk Implementation: The Case of an International Airline, by Steve Clarke, University of Luton (UK) and Arthur Greaves, London Borough of Hillingdon (UK). This case study concerns IT help desk management within an international airline. The core of what is described relates to attempts at implementing help desk procedures in practice, and illustrates the problems of treating these both as predominantly technology systems and as predominantly human systems. The case discusses the failed attempts and an alternative approach, proposed on the basis of methods drawn from an understanding of critical social theory. The practical problems and theoretical issues are discussed, and a theoretically informed framework is applied retrospectively to the case.

Application of Tree-Based Solutions: A Case Study with INEEL, by David Paper, Utah State University (USA) and Kenneth B. Tingey, Utah State University (USA). This case describes a tree-based solution at the Idaho National Engineering and Environmental Laboratory (INEEL) for the rapid development of a computerized system to meet complex yet exacting compliance requirements for thousands of employees. The case discusses the advantages and disadvantages of the project, its implementation issues and challenges, its overall effects on other components of the information systems and working environments, and its implications for management at INEEL with respect to all aspects of enterprise systems development.

Recognizing Runaway Projects When They Occur: The Bank Consortium Case, by Joan Ellen Cheney Mann, Old Dominion University (USA). This case reports the situation at KPMG, the challenge of 35% of its largest clients having a runaway project, and how in 1991 that number increased to 60%. The traditional definition of a runaway project is any project that grossly exceeds budget and time targets yet has failed to produce an acceptable deliverable. Given that each runaway project is a dysfunctional use of organizational resources, it is important for practitioners to be able to identify them early and react appropriately. This case discusses many issues related to the dilemma of runaway projects and provides remedies for dealing with complex runaway projects.
Long-Term Evolution of a Conceptual Schema at a Life Insurance Company, by Lex Wedemeijer, ABP (The Netherlands). This case discusses how enterprises need data resources that are stable and at the same time flexible enough to support current and new ways of doing business. However, there is a lack of understanding of how the flexibility of a conceptual schema design is demonstrated in its evolution over time. The case study outlines the evolution of a highly integrated conceptual schema in its business environment. It reports that a real conceptual schema is the result of 'objective' design practices as well as the product of negotiation and compromise with the user community, and discusses drivers of change that include not only 'accepted' causes such as new legislation, but also error correction, changing user perceptions and elimination of derived data.

Incentives and Knowledge Mismatch: The Deemed Failure of a BPR Project in a Large Banking Organization, by Parthasarathi Banerjee, National Institute of Science, Technology and Development Studies (India). This case reports the situation at a large public bank in an economy under transition to liberalization, and how the organization attempted to reengineer its structure and business processes to deal with sharpening competition. The case describes a process reengineering project at this organization and the issues and challenges the organization faced regarding IT strategy, structure, technology, process and the personnel involved in planning and implementing the project. It shows that integrating disparate processes on an information technology platform can prove a mixed success.

Risks in Partnerships Involving Information Systems Development: Lessons from a British National Health Service Hospital Trust, by G. Harindranath, Royal Holloway College, University of London (UK) and John A.A. Sillince, Royal Holloway College, University of London (UK). This case describes a US$30 million project to establish a new form of rapid healthcare service delivery within the context of a highly politicized National Health Service (NHS) Hospital Trust in the United Kingdom. The project involved large-scale redesign of long-established healthcare procedures and the development of sophisticated new information systems (ISs) through a unique partnership between the public sector (the UK's NHS) and a number of private-sector companies (a software developer, a facilities manager, a hardware vendor and a builder). The case study concentrates on what is often one of the more important determinants of the success or failure of such partnerships in information systems development: risk.

A Case on Communications Management, by Susanne Robra-Bissantz, Universität Erlangen-Nürnberg (Germany). This case reports the situation at Bissantz & Company GmbH, a small software-producing company that is enjoying rapid growth and is in need of a strategic concept for its communication activities with external partners. The case study describes the application of a concept for communication management at Bissantz & Company GmbH and the challenges of achieving communication goals and strategies for all forms of communication in the organization. The case offers many proposals for the content of messages and for media selection, especially in the field of external business communication.
A Case Study of One IT Regional Library Consortium: VALE – Virtual Academic Library Environment, by Virginia Taylor, William Paterson University (USA) and Caroline M. Coughlin, Consultant (USA) This case discusses how, in recent years, historic models of library management have been tested and modified in the digital age due to several interrelated factors. First, the importance of place, or a home library space, changes as electronic opportunities for dispersal of library collections increase with IT innovations and availability. Second, the high cost of IT has made library managers more sensitive to issues of cost in general, while the ability of IT systems to provide easy access to managerial data, data previously difficult to capture, has allowed library managers to begin to differentiate costs for services based on use. This case reports findings on the role of IT, its implications for regional library information delivery, and the challenges of a virtual library environment in the age of digital information transmission. Prudential Chamberlain Stiehl: The Evolution of an IT Architecture for a Residential Real Estate Firm, 1996-2001, by Andy Borchers, Kettering University (USA) and Robert Mills, Prudential Chamberlain Stiehl Realtors (USA) This case describes the evolution of an IT architecture for Prudential Chamberlain Stiehl Realtors (PCSR), a 14-office, 250-sales-agent real estate firm located in Southeast Michigan. Initially, the CIO of the firm concentrated on providing basic connectivity to sales agents and a simple World Wide Web presence. Although this was accepted by users and moved the firm forward technically, management questioned the value of the technology. In the next phase of development, PCSR worked to build a “rich” set of applications that enhance the firm’s relationships with clients and agents. Seaboard Stock Exchange’s Emerging E-Commerce Initiative, by Linda V. Knight, DePaul University (USA), Theresa Steinbach, DePaul University (USA) and Diane M. 
Graf, Northern Illinois University (USA) This case describes the situation at the Seaboard Stock Exchange, one of the top stock exchanges in the United States, and how its relative position in the world is threatened and slipping due to e-commerce and the entrance of new competitors into Seaboard’s market. The case study describes how this traditional organization is now on the verge of coming back through the use of new integrated Internet-based technology strategies, and some of the organizational struggles it had to deal with in order to adopt the technology supporting these new strategies. The case also discusses system development methodologies and the impact of standards and controls in an emerging technology environment. Added Value Benefits of Application of Internet Technologies to Subject Delivery, by Stephen Burgess, Victoria University (Australia) and Paul Darbyshire, Victoria University (Australia) This case examines a range of subjects taught in the School of Information Systems at Victoria University, Australia. Each subject uses Internet technologies for different ‘added-value’ benefits. Subject coordinators comment upon the use of Internet technologies for both academic and administrative purposes. The case study explores the similarities between businesses
using Internet technologies to “add value” to their products and services, and the reasons academics use Internet technologies to assist in traditional classroom delivery. This case examines benefits derived by faculty and students when using the Internet to supplement four different subjects at Victoria University, Australia. Enterprise Information Portal Implementation: Knowledge Sharing Efforts of a Pharmaceutical Company, by Alison Manning, Washington State University (USA) and Suprateek Sarker, Washington State University (USA) This case study provides a detailed account of the formation of a knowledge management (KM) division within a multinational pharmaceutical company, and the subsequent undertaking of its first major KM project, which involved the implementation of a portal software technology. Specific issues discussed include the rationale for replacing the existing intranet with portal technology, selection of the portal, justification for this selection, and challenges in organizing and linking documents, as well as the social and behavioral factors influencing the implementation. A number of dilemmas and tradeoffs are presented with respect to each of these issues. Design and Implementation of a Wide Area Network: Technological and Managerial Issues, by Rohit Rampal, Portland State University (USA) This case deals with the experience of a school district with about 2,700 students in five schools, and the Board of Education that oversees those schools and the bus garage. The buildings that house these seven entities are spread over four towns, and the distance between locations is more than ten miles. The case discusses the design and implementation of a wide area network and enumerates the problems faced by the school district that made the WAN a necessity. 
The choice of hardware and software is explained within the context of the needs of the school district, showing how the choice of technology can greatly impact the utilization and management of a WAN in organizations. An Experience of Software Process Improvement Applied to Education: The Personal Work Planning Technique, by D. Antonio de Amescua Seco, Javier Garcia Guzman, Maria-Isabel Sanchez-Segura, Carlos III University of Madrid (Spain), Paloma Martinez Fernandez, Universidad Politecnica of Madrid (Spain) and Juan Llorens Morillo, Carlos III University of Madrid (Spain) This case describes the use of the Personal Work Planning (PWP) technique as a time management tool for student projects in a software engineering course at Carlos III University in Madrid. The case reports the methodology used to implement the activities associated with the PWP technique in an academic institution. In addition, the case discusses how the institution determined the level of student satisfaction after using this technique, and how many students came to realize the usefulness of PWP for their assignments. SEIU Local 36 Benefits Office: The Y2K Crisis and Its Aftermath, by Ira Yermish, St. Joseph’s University (USA) This case describes how a service organization approached the Y2K compliance issue and how a complex decision-making process led to near operational disaster. The case reports how
software vendor relations can be complicated by vendor viability and technological innovations. The case also explores issues of IT management, the role that outsourcing of software and support plays in an organization’s operations, and the management problems caused by vendor-client relationships. Credit Card System for Subsidized Nourishment of University Students, by Vedran Mornar, Kresimir Fertalj, Damir Kalpic and Slavko Krajcar, University of Zagreb (Croatia) This case describes the situation in the Croatian Ministry of Science and Technology and its major role in providing funds for higher education. There are four universities, each consisting of a number of relatively independent and geographically dispersed faculties and academies. The case reports the process and challenges of computerizing the system of subsidized nourishment of university students. The initial plan was to establish a simple credit card system, but faced with political and technical infrastructure difficulties, the developers had to build a heterogeneous distributed database scheme with a proprietary replication mechanism capable of exchanging high volumes of data over a slow network or over dial-up networking. Designing a First-Iteration Data Warehouse for a Financial Application Service Provider, by Nenad Jukic, Loyola University of Chicago (USA) and Tania Neild, InfoGate, Incorporated (USA) This case describes the efforts behind designing the first iteration of an evolutionary, iterative enterprise-wide data warehouse for AIIA Corp., a financial application service provider. The case reports the importance of a well-defined mission, effective requirement collection, detailed logical definitions, and an efficient methodology for source systems and infrastructure development during a data-warehousing project. The case discusses issues and challenges dealing with this data-warehousing project at AIIA Corp. 
Reengineering the Selling Process in a Showroom, by Jakov Crnkovic, State University of New York at Albany (USA), Nebojsa Janicijevic, University of Belgrade (Yugoslavia), and Goran Petkovic, University of Belgrade (Yugoslavia) This case describes the reengineering efforts of a small Yugoslavian showroom wholesaler. Following an initial period of success, the company subsequently became unable to deliver the promised level of quality and service. A team of consultants was engaged, who recommended business-process reengineering in order to help improve performance. The strategy they devised for the company involved replacing functional specialists with case managers. While the strategy was successfully implemented, it was not followed by appropriate changes in information technology, thus limiting the effectiveness of the entire process. Leveraging IT and a Business Network by a Small Medical Practice, by Simpson Poon, Charles Sturt University (Australia) and Daniel May, Monash University (Australia)
This case shows that, although many medical information technologies require significant financial investment and are often out of reach of small medical practices, it is possible, through careful alignment of IT and customer strategy together with a network of strategic alliances, to
exploit IT effectively. The case reports on how a small medical practice managed to leverage skills, expertise and opportunities in this professional, knowledge-based industry, using strategic alliances to improve its strategic posture and enhance its competitive advantage with IT without heavy up-front financial investment. The case also discusses the pros and cons of strategic alliances and potential issues related to building trust, consolidating relationships among members, and managing the risks of such alliances on an ongoing basis. Systems Design Issues in Planning and Implementation: Lessons Learned and Strategies for Management, by Mahesh S. Raisinghani, University of Dallas (USA) This case describes a Schedule Graph (SG) system that was designed to automate a sales scheduling process that had previously been handled with paper and pencil at a telecommunications company. The system was designed and implemented in a matter of months in order to reduce cost and deliver an application that was long overdue. The project had been proposed for years, but funding issues had routinely delayed its initiation. The case discusses the planning and implementation of this integrated software design, where, after a lengthy delay, the system was released with numerous software, hardware and network problems, with significant negative impacts on the customer community, the information systems department and other stakeholders. We hope that the cases included in this publication will be instrumental in better understanding the issues, trends and challenges of information technology utilization and management in modern organizations. 
In addition, the practices and lessons described in the above cases, covering both the successes and pitfalls of various IT applications and technologies, should help information technology students, researchers and practicing managers devise more effective management strategies and programs to achieve greater utilization and management of IT applications and resources. Mehdi Khosrow-Pour, DBA Executive Director Information Resources Management Association October 1, 2001
DataNaut Incorporated: Growing Pains of a Small Company on the Verge of an Internet Revolution Nancy C. Shaw George Mason University, USA Joan O'Reilly Fix Citibank, N.A., USA This case was written for the 8th annual Kogod School of Business Case Competition at American University. It discusses a small, locally run company that faced several strategic decisions at the end of 1999: marketing its new high-tech products, securing sufficient venture capital financing, and creating a profit-sharing plan for current and future employees. The case involves an actual corporation (although some of the employee names have been changed) and the issues that confronted the management team at the end of 1999. The case includes a complete description of the company’s products, a glossary of terms, a list of Web sites summarizing existing radio market research, detailed operating expenses and pro-forma financial statements (numbers have been altered for confidentiality). This case combines new technology development, HR decisions, marketing and finance, which makes it a true cross-disciplinary case that can be used in several different courses.
EXECUTIVE SUMMARY At the end of 1999, a small software development company located on the outskirts of Washington, D.C. is faced with several strategic decisions regarding the marketing and financing of its high-tech products. The principals of the company must decide the type and dollar amount of financing they will try to secure, which of their two products should be the focus of their marketing efforts and how they should structure an equitable compensation plan for their existing and future employees. Cash flow has been an ongoing problem for this small company, which began as a one-person technical consulting company and has grown into a consulting and product development company with several full and part-time employees. While consulting has traditionally paid the bills, the CEO is interested in becoming a part of the “Internet Revolution” with the development of multimedia streaming applications.
BACKGROUND On the evening of September 3, 1999, Mark Snuffin and his small staff sat around the living room of Mark’s house, which also served as an office, and contemplated the future of their company, DataNaut Incorporated (“DataNaut”). Copyright © 2002, Idea Group Publishing.
DataNaut was at a critical stage in its development. The three-year-old consulting company had just completed a business plan for a new product idea and was in the early stages of developing a demonstration model (a “demo”) that would be used to illustrate the product’s features to potential investors. Although Mark and his team were confident that the new product would be a success in the marketplace, they were also aware that raising sufficient capital to finance the development of this product at such an early stage would be a challenge. Since its inception, DataNaut had financed its daily operations with a steady flow of income from consulting work. Mark’s goal in founding DataNaut was to create a company that would focus on developing next-generation technologies for the Internet. Mark started the company with an advanced concept for broadcasting audio and information, and the resulting product was extremely innovative. Mark had always been “ahead of the curve” with his inventions, and he was sensitive to timing issues with respect to Internet technologies. His team was also acutely aware of the importance of timing, and the product issue had become an increasingly important topic of discussion within DataNaut. DataNaut’s reputation for expert consulting services was growing, and Mark was involved with several simultaneous projects that consumed the majority of his time. The existing contracts were scheduled to last into the following year, and Mark remained busy planning his life around these contracts. Even though the consulting revenue was increasing steadily, DataNaut often found itself in a cash-crunch. The management of cash flow became a delicate issue in Mark’s small company, as the receipt of payments for consulting services rendered did not always correspond to the payment of bills and payroll. In addition, Mark subcontracted much of his consulting work to individual software developers, and the cost of doing so was high (Exhibit 1). 
Mark often felt that the time spent on consulting was an opportunity cost to pursuing product development. Mark knew that he could maintain his consulting practice and grow it steadily over time, but his passion was in product development. DataNaut’s situation had changed dramatically over a period of four months, and Mark had recently hired a strategic consultant to help him sort out the various issues that confronted his company. It was time to make a decision.
SETTING THE STAGE Prior to forming DataNaut in May 1996, Mark had worked for several years in prestigious consulting firms. By 1996, the Internet had exploded, becoming a legitimate environment in which to conduct business. Mark decided to venture out on his own and form a consulting company that would specialize in extending Microsoft technologies to the Internet, while maintaining a product business that would focus on the development of turnkey Internet applications called “Weblications”. For the past three years, DataNaut has operated as a virtual corporation, using an outsource model to support business operations. DataNaut has utilized outsourcing partners to assist with software development, telecommunications and visual imagery, as well as functional areas such as accounting and legal services. In October 1998, Mark hired a full-time software engineer, Eric Lorenzo, to assist with the consulting practice. In May of 1999, Mark hired two MBA students, Monique LaChance and Paul Lee, to handle the business aspects of the company’s operations, including marketing, business development and financial planning. In hiring the MBA students, Mark hoped to rekindle the product development side of his business, which had become a lower priority due to an increase in consulting work.
CASE DESCRIPTION DataNaut is divided into two core businesses, one dedicated to Weblication (product) development and the other focused on consulting services for Microsoft BackOffice solutions. As of September 1999, DataNaut outsourced a portion of its consulting and Weblication development to five different consultants, four of whom lived between Washington, D.C., and Baltimore. The fifth consultant lived in Australia and assisted mainly with highly technical graphic
design work. Each consultant specialized in either software development or graphic design and had at least 10 years of work experience. Two of the local consultants, George and Chris, had been working exclusively for DataNaut for several months and planned to continue for the duration of the existing consulting contracts. Mark had often discussed the possibility of George and Chris joining the DataNaut team, as DataNaut would save money by hiring them full time. Consulting fees averaged $85.00 per hour per consultant. George and Chris, both entrepreneurs with their own start-up companies, were interested in DataNaut’s product technology. However, they were hesitant to abandon their respective practices to join a company that had neither a capital investment nor a tested product. Operating out of Mark’s townhouse in Bethesda, Maryland, DataNaut is not unlike many startup companies in the Washington, D.C., “technology corridor.” Space is limited, and the software engineer, Eric, and the two business-oriented employees, Monique and Paul, share the three workstations that are located in the former living room of the townhouse. On a typical day, the full-time employees work anywhere from 8 to 16 hours. Each employee has the flexibility to determine his or her own work schedule, some preferring to arrive early, and others not arriving until well into the afternoon. Mark prefers to work into the early hours of the morning.
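The savings Mark anticipated from hiring George and Chris full time can be roughed out from the one figure the case supplies, the $85.00 hourly consulting rate. The sketch below is illustrative only: the annual hours, assumed salary and benefits overhead are hypothetical placeholders, not figures from DataNaut's books.

```python
# Rough comparison of outsourced vs. full-time cost for one developer.
# Only the $85/hour rate comes from the case; all other numbers are
# illustrative assumptions.

HOURLY_RATE = 85.00          # consulting fee per hour (from the case)
BILLABLE_HOURS = 40 * 48     # assume 40 hrs/week, 48 weeks/year

outsourced_annual = HOURLY_RATE * BILLABLE_HOURS

# Hypothetical full-time package: salary plus ~25% benefits overhead.
ASSUMED_SALARY = 90_000
full_time_annual = ASSUMED_SALARY * 1.25

print(f"Outsourced: ${outsourced_annual:,.0f}")   # $163,200
print(f"Full-time:  ${full_time_annual:,.0f}")    # $112,500
print(f"Savings:    ${outsourced_annual - full_time_annual:,.0f}")
```

Under these assumptions a full-time hire saves roughly $50,000 per developer per year, which is consistent with Mark's intuition that converting the two consultants would save money.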
The Entrepreneur Mark has many of the typical characteristics found in pioneers of the Internet. He has been writing software since he was 14 years old and has always been interested in technology. While he began his career in prestigious institutions, Mark always knew that he would one day work for himself. Mark has spent the last three years building his business from scratch, working late into the night on consulting projects, while squeezing time in between client meetings during the day to work with his employees on new product development. He is dedicated to his work, and his entire life revolves around this endeavor. Mark, himself, is a study in contrasts. With his engaging personality and surfer looks, Mark is not the typical software developer. He is a former standout college wrestler, and his athletic discipline fuels a work ethic and determination that will not allow him to give up once his mind is set on a particular project or idea. He is very comfortable around people, as well as computers, and enjoys speaking about his industry in public. Mark runs his company as he would a family. He cares about each employee’s welfare, offering a solid insurance package, and in exchange, he expects each employee to exhibit similar enthusiasm, drive and dedication to the company and its ideas. Mark believes that his technology has true market potential, and he desperately wants his fledgling company to succeed. He has realized that, in order to maintain the momentum that has brought the company to this stage, he will need to give up full control and begin to offer equity shares to his employees, as the development effort and search for venture capital becomes more intense.
Consulting Services Since its formation in 1996, DataNaut has provided consulting services to large organizations wishing to implement or extend Microsoft BackOffice technologies on the Internet. DataNaut’s clients have included companies such as NASDAQ, Level 3 Communications, Sylvan Learning Systems and Microsoft Consulting Services (see Exhibit 2). DataNaut has traditionally relied upon consulting services to offset internal Weblication development costs and to gain exposure to cutting-edge Microsoft technologies. Understanding and applying these technologies has allowed DataNaut to remain current with the evolution of Internet software, develop contacts and uncover potential applications for new software (see Exhibit 2). Through the knowledge and experience gained by consulting, DataNaut has been able to develop Weblications that meet real-world business needs.
Weblications Product #1: virtualFan™ VirtualFan™ is an enhancement to Internet broadcasting that allows users to receive real-time audio broadcasts of sports events synchronized with other media events such as live scoreboards, interactive interviews, action photos and video. The broadcasts can also be replayed at a later time with all of the live elements intact and synchronized with a television look and feel. VirtualFan™ is a content management service targeted to organizations, mainly universities that wish to broadcast their content but do not want the technical headaches associated with Internet broadcasting. In a typical client service contract, DataNaut would host a university sports Web site and provide the technical support to update information such as player profiles, schedules, or team rosters. Events would be broadcast from a customized Web site as a multimedia stream to Internet fans, who could view the event using a standard Web browser configured with a multimedia player. The eventual goal for virtualFan™ is to enable universities to manage their own Web sites through a licensing agreement, thereby empowering them with a Web-publishing tool to obtain jurisdiction over their content. As a former American University (“AU”) athlete, Mark was able to sell this product to the AU Athletics Department and use AU as a prototype for the development of this multimedia streaming technology, as well as the content management system. While Mark has spent a significant amount of time developing virtualFan™, he has recently become discouraged about the prospects of marketing and selling this product directly to university athletic departments. Within the last two years, the university sports market has become saturated with Internet companies offering similar services at no charge. For example, companies such as TotalSports and University Netcasting offer free Web sites to universities in exchange for a percentage of advertising revenues generated by the sites. 
In addition, companies, such as Rivals.com, generate revenues by consolidating university sports information on non-official sites, thereby capturing revenues that could otherwise have been gained by the universities themselves. By combining multiple universities’ information on one site, the Rivals.com model presents a threat to universities, as recruiters and sports fans will be tempted to go to the “unofficial” site to read about their favorite teams. Advertisers are also more likely to pay for advertisements on such a site, as it would attract more viewers. In pursuing the development and sale of virtualFan™, DataNaut is faced with the issue of whether to target companies that consolidate content (Rivals.com), or companies that host university sites (Total Sports, University Netcasting or Broadcast.com). Alternatively, DataNaut could sell its products and services directly to the universities at a cost, but the schools must be willing to pay this cost. DataNaut believes that universities will want a tool that allows them to maintain jurisdiction over their content and maximizes the Internet as an additional source of revenue without depending on an external provider. However, the initial costs to the university would be significant (see Exhibit 3), considering that many Web-hosting companies now provide free sites. Also, the costs associated with establishing a sales force to sell to each school and host the sites would be significant for DataNaut. Nevertheless, DataNaut is interested in this opportunity, as companies such as TotalSports have signed four-year agreements with universities for which they host Web sites. According to DataNaut’s research, many of these contracts will expire within the next two years. Product #2: MusicBeam MusicBeam is a hardware and software solution for traditional radio stations that wish to broadcast their live signal and bring their brand equity to the Internet. 
MusicBeam is based on the virtualFan™ platform of multimedia streaming. Several years of development of streaming technologies in the university sports market led Mark to a broader application of the technology. Mark recognized an opportunity in Internet radio and created a turn-key solution that adapts to the popular multimedia players available (e.g., RealPlayer, Windows Media Player), allowing for the simultaneous
streaming of audio and interactive content. The following are two key benefits of MusicBeam to radio stations: (1) the ability to add multiple, secondary channels, thereby promoting brand equity; and (2) the ability to manage content and gather demographic information on listeners, thereby facilitating targeted e-commerce and advertising. Benefits to the Internet radio listener include the ability to interact with the radio station directly, via computer, without waiting for an open telephone line (including song voting, quizzes, purchasing of CDs, etc.) and the ability to receive a greater variety of information from a favorite radio station than would be possible through conventional radio transmission (“one-stop shopping”). MusicBeam will allow a radio station to quickly, easily and inexpensively add secondary channels to its Internet broadcast, in order to service a wide variety of listener tastes and manage mandatory programming requirements (see Exhibit 4). The content management feature of MusicBeam will place the radio station in charge of the content that is streamed over the Internet, and the “push” aspect of MusicBeam will create a “sticky” environment for the end-user. Listeners will be compelled to stay on the site by the TV-like fashion in which information is presented to them. However, unlike television, MusicBeam will allow the end-user to interact with the radio station and become a participant in the broadcast. For example, if a radio station uses MusicBeam to play a Rolling Stones CD, the end-user, in addition to interacting with Rolling Stones trivia, quizzes, etc. will have an opportunity to submit additional content for a site that he or she has found which relates to the Rolling Stones. Additional content could include a Uniform Resource Locator (“URL”) that identifies the Web address of a site. 
The content manager, or program director, will consolidate and screen incoming URLs, monitor listener habits, and “push” applicable sites out in future Rolling Stones broadcasts. In addition to attracting people to the site, the gathering of URLs will present a powerful information opportunity to the radio station that can be used for targeted e-commerce and advertising. The end-user may also submit quizzes, photos and movies. In May 1999, Mark hired an MBA student, Monique LaChance, to focus on business development for DataNaut. With an increase in consulting work in June, Mark was able to hire another MBA student, Paul Lee, to help out with technical as well as business issues. Mark intended for Monique and Paul to focus on the development of the virtualFan™ concept and the consulting practice. However, after seeing a demonstration of the MusicBeam concept, which was based on virtualFan™ technology, both Monique and Paul recognized the potential of this product to succeed in the market. The Internet radio industry had become a “hot” area, and DataNaut would have a chance to take advantage of the current interest among radio stations in such a product. Only half of the 12,000 radio stations in the United States had an online presence (a Web site), and only a fraction of those stations engaged in Internet broadcasting. Monique and Paul spent the summer working on a business plan to obtain capital investment so DataNaut would be able to produce, market and sell MusicBeam to radio station owners in America and Europe. Monique had connections in Paris that she was eager to explore once the product was ready. After two months of market research and a continuous evolution of the MusicBeam conceptual design (with back-end development from Mark), Monique and Paul completed the business plan. As of September, they were ready to begin contacting venture capital firms and “angel” investors. 
DataNaut estimated that it would require approximately $1,000,000 of initial investment to bring MusicBeam to market. Monique and Paul felt this money could come from a venture capitalist, a group of angel investors, or a potential customer, such as a large radio station company that would absorb DataNaut and all of its technology. Mark knew that raising this type of money would not be easy. DataNaut was a young company with no prior external financing, and the risks to the investor would be significant, given the early stage of product development. In addition, DataNaut was still heavily committed to consulting contracts, and Mark’s time would be divided until a better solution could be reached.
CURRENT CHALLENGES FACING THE ORGANIZATION On the evening of September 3, Mark gathered Eric, Monique and Paul into the conference room (previously known as Mark’s living room) for an important staff meeting. Mark began by reflecting upon the last few months and all of the changes that had recently occurred at DataNaut. Within a period of three months, DataNaut’s focus had changed from exploring market opportunities for virtualFan™ to searching for investment capital for MusicBeam. As MusicBeam technology was based on the virtualFan™ platform, Mark felt that both products were important and that development of one would enhance the other in the long run. The important issues were competition and access to customers in each distinct market. Both Paul and Monique felt that, given the company’s current limitations with respect to resources, DataNaut should focus on the development and marketing of MusicBeam, as MusicBeam represented an immediate growth opportunity for DataNaut. However, Mark was hesitant to abandon his first development effort completely, believing that the university sports market would soon be ready for a quality product such as virtualFan™. The four-year contracts signed by many universities with other vendors would soon expire, and perhaps these universities would begin to seek alternative solutions to meet their Internet broadcasting needs. In the meantime, Mark had to contend with cash-flow issues and the time constraints posed by the consulting contracts. Mark wanted to learn how to make the best use of his time - how to work smarter, rather than harder. He also wanted to make some internal adjustments by implementing an incentive system whereby his employees would obtain equity shares in the company. However, Mark was uncertain as to the types of models that existed for structuring such internal equity. 
Monique and Paul had begun to develop relationships with venture capital firms, and Mark was enthusiastic about the prospects of financing his product development. VirtualFan™ and MusicBeam were two great opportunities in Mark’s opinion, and market research indicated that both were feasible at the time. Mark’s consulting practice was becoming increasingly lucrative, and his good reputation was spreading rapidly. Mark wondered how he should position DataNaut, with respect to strategy and product development, in order to obtain venture capital investment. Would microinvestment be a better option for solving short-term cash-flow issues? If so, how would DataNaut attract angel investors, and how would Mark determine the appropriate equity amounts to offer in exchange for investment dollars? Which product or combination of products should be the focus of DataNaut’s marketing efforts? How many products would DataNaut have to sell in order to break even? If DataNaut focused on product development and marketing, could the company handle the loss of Mark’s consulting revenues, which were fueling its day-to-day operations? Mark had many decisions to make, and he was hoping for some guidance from his staff.
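The break-even question lends itself to a simple unit-economics calculation. The sketch below is illustrative only: the fixed-cost figure is hypothetical, not from the case, while the per-unit margins are the markups implied by Exhibits 3 and 4.

```python
import math

def break_even_units(fixed_costs, margin_per_unit):
    """Units needed so that total per-unit margin covers fixed costs."""
    return math.ceil(fixed_costs / margin_per_unit)

# Margins implied by Exhibits 3 and 4 (markup over operating cost).
VIRTUALFAN_MARGIN = 30_720 - 25_720   # $5,000 per university
MUSICBEAM_MARGIN = 71_088 - 59_240    # $11,848 per station

# Hypothetical annual fixed-cost figure, for illustration only.
fixed = 250_000
print(break_even_units(fixed, VIRTUALFAN_MARGIN))  # 50 universities
print(break_even_units(fixed, MUSICBEAM_MARGIN))   # 22 stations
```

On these assumed numbers, MusicBeam's larger per-customer margin means far fewer sales are needed to cover a given fixed-cost base, which is consistent with Paul and Monique's argument for focusing on it.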
The Alternatives

One week earlier, Mark had engaged a strategic consulting firm for outside assistance, as the sudden changes within DataNaut and the tremendous market opportunities ahead prompted Mark to reevaluate his position. Mark felt that an outside perspective would assist him in making important decisions about capital investment, product development and operations. At tonight’s meeting he wanted to outline the alternatives proposed by the consulting firm and obtain a consensus from the team as to the direction in which they should move. The consulting firm had sketched out three courses of action for Mark.

Proposal One: As a first alternative, DataNaut would continue product development on MusicBeam, accompanied by heavy target marketing, and continue the consulting practice as a source of revenue. In this scenario, Mark would manage the consulting side of the business, and DataNaut would hire an experienced CEO to oversee daily operations and product development. The consultants suggested that Mark raise an initial $1 million in venture capital to finance ongoing operational costs and product development of MusicBeam, and recommended that Mark retain 20% of the company himself, offer 20% to the incoming CEO and allow no more than 30% to VC investors. The remaining 30% would be used for employee profit sharing.
DataNaut Incorporated 7
Proposal Two: As a second alternative, DataNaut would phase out the consulting business and focus solely on the pursuit of funding, with the intention of becoming a product-oriented company. The consultants felt that the MusicBeam demo, along with Mark’s dynamic personality and confidence, would quickly sell the concept, and DataNaut would be able to secure financing (either VC or angel) within a period of four months, during which time the remaining consulting income would cover expenses. Under this aggressive plan, Mark would remain as CEO with 40% equity, allowing up to 40% for investors and 20% for employee profit sharing.

Proposal Three: As a third alternative, DataNaut would remain a technical consulting company, but focus its efforts strategically towards clients that would offer the company the opportunity to engage in streaming application development. Under this scenario, DataNaut would grow gradually, acquiring knowledge and connections, and perhaps eventually be able to engage in a joint product venture with one of its partners. This alternative did not indicate a total abandonment of current product efforts, but rather a strategic re-focusing of client targets. VirtualFan™ and MusicBeam technology could be used as part of a custom client solution. Mark would retain 60% ownership and use the remaining 40% to offer attractive compensation packages to his new and existing employees.

Mark and his team now had to choose among the three scenarios outlined by the consultants, or develop a new one of their own.
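The three equity structures can be tabulated and sanity-checked. A small sketch follows; the implied post-money valuation is an inference from the Proposal One numbers, not a figure stated in the case.

```python
# Equity splits (percent) under each of the consultants' proposals.
proposals = {
    "Proposal One":   {"Mark": 20, "CEO": 20, "VC investors": 30, "Employees": 30},
    "Proposal Two":   {"Mark": 40, "Investors": 40, "Employees": 20},
    "Proposal Three": {"Mark": 60, "Employees": 40},
}

# Each split should account for the whole company.
for name, split in proposals.items():
    assert sum(split.values()) == 100, name

# If $1,000,000 buys the 30% investor stake in Proposal One, the implied
# post-money valuation is investment / fraction sold.
post_money = 1_000_000 / 0.30
print(f"${post_money:,.0f}")
```

The implied valuation of roughly $3.3 million is one way an investor might frame the Proposal One terms; real negotiations would of course turn on far more than this arithmetic.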
FURTHER READING

Further information on DataNaut can be found on the company Web site: http://www.datanaut.com. An article from The Industry Standard by Maryann Jones Thompson contains an excellent summary of radio market research: http://www.thestandard.com/metrics/display/0,1902.9954.00.html. The ultimate guide to streaming media can be found at http://www.streamingmedia.com. Additional reading on similar products and companies that would compete with DataNaut can be found at the following two sites:

• http://www.spinner.com “Spinner.com is the first and largest Internet music service, broadcasting over 22 million songs each week to listeners all over the world. With over 375,000+ songs in rotation on 150+ music channels, Spinner spans an extraordinarily diverse range of musical styles. The free Spinner Plus downloadable music player offers reliable, high-quality audio while providing dynamic links to comprehensive artist information and music purchase options. High-profile music downloads and promotional features with marquee artists are also available from the Spinner.com Web site. Based in San Francisco, CA, Spinner is dedicated to providing an exciting, interactive alternative to traditional broadcasting, effectively revolutionizing the Internet music listening experience with its breadth and depth of quality content. Spinner.com was acquired by America Online, Inc. in May 1999, and merged with Nullsoft, Inc., providing us with greater resources to produce innovative products and extend our reach.” (http://www.spinner.com, April 19, 2001)

• http://www.totalsports.com “The Revolution will not be televised. It will be streamed. Downloaded. Uploaded. Digitized, analyzed and customized. It will be synchronized. Layered. Played, replayed and emailed. It’s taking place RIGHT NOW on the Southern Ocean. At the top of Everest. In stadiums and ballparks and the most god-forsaken, far-flung corners of the globe.
But, most of all, this revolution is taking place in the hearts and minds of sports fans suddenly given powerful new tools. Suddenly given the ability to get inside their favorite sports. To understand what really goes on behind the screaming engines, the blasting windstorms, the sweaty grimaces of heroic exertion. To see what the athletes see. To hear
what they hear. And to feel some of what they feel as they explore the boundaries of human ability. It’s all accessible through Quokka Sports. On the Internet. From your home or office. And the way the world experiences sport will never be the same. In 1996, Quokka began bringing down the “old ways” of following sports by launching a whole new form of digital entertainment. It’s called Quokka Sports Immersion and it’s changing the face of sports coverage, as we know it” (http://www.totalsport.com, April 19, 2001).
GLOSSARY OF TERMS

Angel Investors: Wealthy individuals (usually successful entrepreneurs) who invest in start-up companies, usually taking an active role in the management of the company.
Internet Broadcasting: See Multimedia Streaming.
Microinvestment: The receipt of small amounts of investment from individuals for a small equity stake in a company.
Microsoft BackOffice: Suite of software applications written for the Windows NT platform (see http://www.microsoft.com/backoffice).
Multimedia Streaming: The delivery of electronic data types - text, audio, video or spatial data (such as maps) - over the Internet such that viewing begins instantly, without the need for downloading.
Sticky Web site: A site that attracts repeat visitors who stay longer.
Turnkey Solution: A packaged solution with no customization required.
Weblication: Turnkey Internet application.
Exhibit 1: Distribution of Time (Hours Per Day) as of September 1999

Individual   Status       Consulting   MusicBeam   virtualFan™   Admin   Salary ($/hr)
Mark         owner        12           2           0             2       30
Eric         full-time    10           0           0             0       20
Monique      part-time    0            7           0             3       15
Paul         part-time    0            8           1             1       10
George       consultant   10           0           0             0       100
Patrick      consultant   8            0           0             0       75
Chris        consultant   8            5           0             0       75
Mike         consultant   variable     variable    0             0       90
Sharon       consultant   variable     variable    0             0       95
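Combining the hours and hourly rates in Exhibit 1 gives a rough daily labor cost for the company. A sketch, treating the consultants' "variable" hours as zero for lack of data:

```python
# (consulting, MusicBeam, virtualFan, admin) hours/day and $/hour,
# taken from Exhibit 1; "variable" entries (Mike, Sharon) are set to 0 here.
staff = {
    "Mark":    ((12, 2, 0, 2), 30),
    "Eric":    ((10, 0, 0, 0), 20),
    "Monique": ((0, 7, 0, 3), 15),
    "Paul":    ((0, 8, 1, 1), 10),
    "George":  ((10, 0, 0, 0), 100),
    "Patrick": ((8, 0, 0, 0), 75),
    "Chris":   ((8, 5, 0, 0), 75),
    "Mike":    ((0, 0, 0, 0), 90),
    "Sharon":  ((0, 0, 0, 0), 95),
}

daily_cost = sum(sum(hours) * rate for hours, rate in staff.values())
print(daily_cost)  # 3505
```

On these assumptions DataNaut's known labor runs about $3,505 per day, with the consultants accounting for the bulk of it despite Mark's long hours, which helps explain the cash-flow pressure described in the case.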
Exhibit 2: Projected Statement of Income – DataNaut Consulting

                              1999       2000       2001
Revenue
  Consulting               600,000    900,000  1,350,000
  Total Revenue            600,000    900,000  1,350,000

Operating Expenses
  Employee Salaries        165,000    220,000    300,000
  Consultant Fees          360,000    468,000    608,400
  Telecommunications        20,000     13,000     13,000
  Miscellaneous             10,000     10,000     10,000
  Total Operating Expenses 555,000    711,000    931,400

EBIT                        45,000    189,000   $418,600
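The projections in Exhibit 2 can be verified arithmetically; EBIT in each year equals total revenue minus total operating expenses, and the operating margin widens sharply across the three years.

```python
# Projected income statement figures from Exhibit 2.
years = {
    1999: {"revenue": 600_000, "expenses": 555_000},
    2000: {"revenue": 900_000, "expenses": 711_000},
    2001: {"revenue": 1_350_000, "expenses": 931_400},
}

ebit = {y: p["revenue"] - p["expenses"] for y, p in years.items()}

for y in sorted(ebit):
    margin = ebit[y] / years[y]["revenue"]
    print(y, ebit[y], f"{margin:.1%}")
```

The margin grows from 7.5% in 1999 to 31.0% in 2001, reflecting the assumption that revenue grows 50% per year while telecommunications and miscellaneous costs stay roughly flat.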
Exhibit 3: virtualFan™ Projected Operating Costs per University

1. Web Page Design Costs – virtualFan Web Features
   Hours of Software Design:                  15
   Software Design Hourly Rate:               $85
   Software Design Costs:                     $1,275
   Hours of Creative Design:                  15
   Creative Design Hourly Rate:               $75
   Creative Design Costs:                     $1,125
   Total Design & Construction Costs          $2,400

2. Event Broadcast Costs
   Number of Events:                          70
   Average Event Duration (hrs.):             3
   Broadcasting Hours:                        210
   Event Technician Hours:                    42
   Event Technician Hourly Rate:              $10
   Event Technician Cost:                     $420
   Technical Administrator Hours:             42
   Technical Administrator Hourly Rate:       $50
   Technical Administrator Cost:              $2,100
   Photographer Costs Per Event:              $240
   Number of Events Photographed:             70
   Photographer Cost:                         $16,800
   Long Distance Charge per Hour Broadcast:   $10
   Total Long Distance Charge:                $2,100
   Total Broadcast Costs                      $21,420

3. Network & Equipment Costs
   Network Contract Duration (yrs.):          1
   Space Dedicated to Web Site (Mb):          100
   Annual Cost of Mb on Network:              $10
   Cost of Residing Web Site on Network:      $1,000
   Equipment
     New Equipment:                           $700
     Computer Lease for Data Entry at Site:   $200
   Total Network & Equipment Costs            $1,900

Total Operations Costs                        $25,720
Mark Up Costs                                 $5,000
Cost to University                            $30,720
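The exhibit's line items can be reproduced as a small cost model; each subtotal below matches the figures shown in the exhibit.

```python
def virtualfan_cost_per_university():
    """Reproduce Exhibit 3's per-university cost build-up."""
    design = 15 * 85 + 15 * 75                    # software + creative = $2,400
    broadcast_hours = 70 * 3                      # 70 events x 3 hrs = 210
    broadcast = (42 * 10) + (42 * 50) \
        + (240 * 70) + (10 * broadcast_hours)     # $21,420
    network = 100 * 10 + 700 + 200                # hosting + equipment = $1,900
    operations = design + broadcast + network     # $25,720
    return operations + 5_000                     # + markup -> $30,720

print(virtualfan_cost_per_university())  # 30720
```

Note that the $5,000 markup on a $25,720 cost base is a margin of roughly 19%, close to the 20% rate implied by the MusicBeam exhibit.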
Exhibit 4: MusicBeam Projected Operating Costs per Radio Station

1. MusicBeam System Costs
   System Software Cost:                      $5,000
   Hours of Software Design:                  64
   Software Design Hourly Rate:               $60
   Software Design Costs:                     $3,840
   Hours of Creative Design:                  60
   Creative Design Hourly Rate:               $50
   Creative Design Costs:                     $3,000
   Total Design & Construction Costs          $11,840

2. Station Broadcast Costs
   Number of Simultaneous Listeners:          500
   Bandwidth Cost/Year:                       $30,000
   Technical Administrator Hours/Year:        260
   Technical Administrator Hourly Rate:       $40
   Technical Administrator Cost:              $10,400
   Total Station Broadcast Costs              $40,400

3. Hosting & Equipment Costs
   Hosting Contract Duration (yrs.):          1
   Site Resource Consumption (Mb):            1,000
   Annual Cost of Mb on Network:              $6
   Cost of Site Residing on Network:          $6,000
   Lease for Onsite Equipment:                $1,000
   Total Network & Equipment Costs            $7,000

4. Summary
   Site Design & Construction Costs:          $11,840
   Broadcast Costs:                           $40,400
   Network & Equipment Costs:                 $7,000
   Total Operations Costs                     $59,240
   Mark Up Costs                              $11,848
   Cost to Customer                           $71,088
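The MusicBeam figures follow the same pattern. Notably, the $11,848 markup is exactly 20% of the $59,240 operations cost, so the markup can be modeled as a rate (an observation from the numbers, not a policy stated in the case).

```python
def musicbeam_cost_per_station(markup_rate=0.20):
    """Reproduce Exhibit 4's per-station cost build-up."""
    design = 5_000 + 64 * 60 + 60 * 50         # system + design = $11,840
    broadcast = 30_000 + 260 * 40              # bandwidth + admin = $40,400
    hosting = 1_000 * 6 + 1_000                # network + equipment = $7,000
    operations = design + broadcast + hosting  # $59,240
    return operations * (1 + markup_rate)

print(musicbeam_cost_per_station())  # 71088.0
```

At $71,088 per station against an $11,848 margin, each MusicBeam sale contributes more than twice the margin of a virtualFan™ university contract.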
BIOGRAPHICAL SKETCHES

Nancy C. Shaw received her Ph.D. in Information Systems from the National University of Singapore. She holds an MBA and a BBA from the University of Kentucky. Dr. Shaw has been a practitioner and consultant in the information systems industry for over twenty years. She has worked for AT&T, General Electric and most recently as a senior systems analyst for the Central Intelligence Agency. She also served as a Military Intelligence Officer in the US Army Reserves during the Persian Gulf War. Currently she is an Assistant Professor of Information Systems at George Mason University in Fairfax, Virginia.

Joan O’Reilly Fix received her M.B.A. in Finance from American University (“AU”) in May 2000. While studying at AU, she worked at DataNaut Inc. as the Director of Business Development. Ms. O’Reilly Fix also co-founded a student investment club and served as the Executive Director of the 1999 Kogod Case Competition, for which this case was written. Prior to attending business school, Ms. O’Reilly Fix worked for six years as an international banker. She is currently a Vice President in the Worldwide Securities Services area of Citibank, N.A. with a focus on product management.
12 Rodger, Trank & Pendharkar
Military Applications of Natural Language Processing and Software

James A. Rodger, Indiana University of Pennsylvania, USA
Tamara V. Trank, Naval Health Research Center, USA
Parag C. Pendharkar, Pennsylvania State University at Harrisburg, USA
EXECUTIVE SUMMARY

A preliminary feasibility study aboard U.S. Navy ships utilized voice interactive technology to improve medical readiness. A focus group was surveyed about reporting methods in health and environmental surveillance inspections to develop criteria for designing a lightweight, wearable computing device with voice interactive capability. The voice interactive computing device included automated user prompts, enhanced data analysis, presentation and dissemination tools in support of preventive medicine. The device was capable of storing, processing and forwarding data to a server. The prototype enabled quick, efficient and accurate environmental surveillance. In addition to reducing the time needed to complete inspections, the device supported local reporting requirements and enhanced command-level intelligence. Where possible, existing technologies were utilized in creating the device. Limitations in current voice recognition technologies created challenges for training and user interface.
BACKGROUND

Coupling computer recognition of the human voice with a natural language processing system makes speech recognition by computers possible. By allowing data and commands to be entered into a computer without the need for typing, computer understanding of naturally spoken languages frees human hands for other tasks. Speech recognition by computers can also increase the rate of data entry, improve spelling accuracy, permit remote access to databases utilizing wireless technology and ease access to computer systems by those who lack typing skills.
Variation of Speech-to-Text Engines

Since 1987, the National Institute of Standards and Technology (NIST) has provided standards to evaluate new voice interactive technologies (Pallett, Garofolo & Fiscus, 2000). In a 1998 broadcast news test, NIST provided participants with a test set consisting of two 1.5-hr subsets obtained from the Linguistic Data Consortium. The task associated with this material was to implement automatic speech recognition technology by determining the lowest word error rate (Herb & Schmidt, 1994; Fiscus, 1997; Greenberg, Chang & Hollenback, 2000; Pallett, 1999). Excellent performance was achieved at several sites, both domestic and abroad (Przybocki, 1999). For example, IBM-developed systems achieved the lowest overall word error rate of 13.5%. The application of statistical significance tests indicated that the differences in performance between systems designed by IBM, the French national laboratories’ Laboratoire d’Informatique pour la Mécanique et les Sciences de l’Ingénieur and Cambridge University’s Hidden Markov Model Toolkit software were not significant (Pallett, Garofolo & Fiscus, 2000). Lai (2000) also reported that no significant differences existed in the comprehension of synthetic speech among the five different speech-to-text engines used. Finally, speaker segmentation has been used to locate all boundaries between speakers in the audio signal. It enables speaker normalization and adaptation techniques to be used effectively to integrate speech recognition (Bikel, Miller, Schwartz & Weischedel, 1997).

Copyright © 2002, Idea Group Publishing.
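Word error rate, the benchmark metric in the NIST tests above, is conventionally computed as the word-level edit distance between a reference transcript and the recognizer's hypothesis (substitutions + deletions + insertions), divided by the number of reference words. A minimal sketch:

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference length,
    computed via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[-1][-1] / len(ref)

# One substitution ("passed" -> "past") in a 4-word reference.
print(word_error_rate("the ship passed inspection",
                      "the ship past inspection"))  # 0.25
```

A WER of 13.5%, as reported for the best 1998 systems, thus means roughly one word in seven was misrecognized, inserted, or dropped.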
Speech Recognition Applications

The seamless integration of voice recognition technologies creates a human-machine interface that has been applied to consumer electronics, Internet appliances, telephones, automobiles, interactive toys, and industrial, medical, and home electronics and appliances (Soule, 2000). Applications of speech recognition technology are also being developed to improve access to higher education for persons with disabilities (Leitch & Bain, 2000). Although speech recognition systems have existed for two decades, widespread use of this technology is a recent phenomenon. As improvements have been made in accuracy, speed, portability, and operation in high-noise environments, the development of speech recognition applications by the private sector, federal agencies, and armed services has increased. Some of the most successful applications have been telephone based. Continuous speech recognition has been used to improve customer satisfaction and the quality of service on telephone systems (Charry, Pimentel & Camargo, 2000; Goodliffe, 2000; Rolandi, 2000). Name-based dialing has become more ubiquitous, with voice control of call answering, hang-up, and call management (Gaddy, 2000a). These applications use intuitive human communication techniques to interact with electronic devices and systems (Shepard, 2000). BTexact Technologies, the Advanced Communications Technology Centre for British Telecommunications (Adastral Park, Suffolk, England), uses the technology to provide automated directory assistance for 700 million calls each year at its UK bureau (Gorham & Graham, 2000). Studies in such call centers have utilized live customer trials to demonstrate the technical realization of full speech automation of directory inquiries (McCarty, 2000; Miller, 2000).
Service performance, a study of customer behavior and an analysis of service following call-back interviews suggest user satisfaction with the application of speech automation to this industry (Gorham & Graham, 2000). Speech recognition technologies could expand e-commerce into v-commerce with the refinement of mobile interactive voice technologies (McGlashan, 2000; Gaddy, 2000b; Pearce, 2000). As an enabler of talking characters in the digital world, speech recognition promises many opportunities for rich media applications and communications with the Internet (Zapata, 2000). Amid growing interest in voice access to the Internet, a new Voice Extensible Markup Language (VoiceXML™, VoiceXML Forum) has surfaced as an interface for providing Web hosting services (Karam & Ramming, 2000). VoiceXML promises to speed the development and expand the markets of Web-based speech recognition/synthesis services, as well as spawning a new industry of “voice hosting.” This model will allow developers to build new telephone-based services rapidly (Thompson & Hibel, 2000). The voice-hosting service provider will lease telephone lines to the client and voice-enable a specific URL, programmed in VoiceXML by the client. This model will make it possible to build speech and telephony services for a fraction of the time and cost of traditional methods (Larson, 2000).
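As an illustration of the VoiceXML model described above, a minimal document can be assembled with standard XML tooling. The structure below (a `vxml` root containing a `form` with a spoken `prompt`) follows the basic element vocabulary of the VoiceXML specification; real voice-hosted services would add fields, grammars, and call-control logic.

```python
import xml.etree.ElementTree as ET

# Minimal VoiceXML document: a single form that speaks one prompt.
vxml = ET.Element("vxml", version="1.0")
form = ET.SubElement(vxml, "form", id="greeting")
block = ET.SubElement(form, "block")
prompt = ET.SubElement(block, "prompt")
prompt.text = "Welcome to the voice-hosted service."

doc = ET.tostring(vxml, encoding="unicode")
print(doc)
```

Under the hosting model described above, a document like this would live at a client's URL, and the voice-hosting provider's platform would fetch and render it to callers over leased telephone lines.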
Haynes (2000) deployed a conversational Interactive Voice Response system to demonstrate site-specific examples of how companies are leveraging their infrastructure investments, improving customer satisfaction and receiving quick return on investments. Such applications demonstrate the use of speech recognition by business. The investigation of current customer needs and individual design options for accessing information utilizing speech recognition is key to gaining unique business advantages (Prizer, Thomas, & Suhm, 2000; Schalk, 2000). A long-awaited application of speech recognition, the automatic transcription of free-form dictation from professionals such as doctors and lawyers, lags behind other commercial applications (Stromberg, 2000). Due to major developments in the Internet, speech recognition, bandwidth and wireless technology, this situation is changing (Bourgeois, 2000; Pan, 2000). Internationalizing speech recognition applications has its own set of problems (Krause, 2000). One such problem is that over-the-phone speech applications are more difficult to translate to other languages than Web applications or traditional desktop graphic user interface applications (Head, 2000). Despite the problems, internationalizing speech applications brings with it many benefits. Internationalization of an application helps to reveal some of the peculiarities of a language, such as differences in dialects, while providing insight on the voice user interface design process (Scholz, 2000; Yan, 2000). Furthermore, speech comprehension can work effectively with different languages; studies have documented both English and Mandarin word error rates of 19.3% (Fiscus, Fisher, Martin, Przybocki & Pallett, 2000). Speech technology has been applied to medical applications, particularly emergency medical care that depends on quick and accurate access of patient background information (Kundupoglu, 2000). The U.S. 
Defense Advanced Research Projects Agency organized the Trauma Care Information Management System (TCIMS) Consortium to develop a prototype system for improving the timeliness, accuracy, and completeness of medical documentation. One outcome of TCIMS was the adoption of a speech-audio user interface for the prototype (Holtzman, 2000). The Federal Aviation Administration conducted a demonstration of how voice technology supports a facilities maintenance task. A voice-activated system proved to be less time-consuming to use than the traditional paper manual approach, and study participants reported that the system was understandable, easy to control, and responsive to voice commands. Participants felt that the speech recognition system made the maintenance task easier to perform, was more efficient and effective than a paper manual, and would be better for handling large amounts of information (Mogford, Rosiles, Wagner & Allendoerfer, 1997). Speech recognition technology is expected to play an important role in supporting real-time interactive voice communication over distributed computer data networks. The Interactive Voice Exchange Application developed by the Naval Research Lab, Washington, DC, has been able to maintain a low data rate throughput requirement while permitting the use of voice communication over existing computer networks without causing a significant impact on other data communications, such as e-mail and file transfer (Macker & Adamson, 1996). Pilots must have good head/eye coordination when they shift their gaze between cockpit instruments and the outside environment. The Naval Aerospace Medical Research Lab, Pensacola, FL, has investigated using speech recognition to support the measurement of these shifts and the type of coordination required to make them (Molina, 1991). Boeing Company, Seattle, WA, has investigated ways to free pilots from certain manual tasks and sharpen their focus on the flight environment.
The latest solution includes the use of a rugged, lightweight, continuous-speech device that permits the operation of selected cockpit controls by voice commands alone. This technology is being applied in the noisy cockpit of the Joint Strike Fighter (Bokulich, 2000).
Existing Problems—Limitations of Speech Recognition Technology

Even though applications of speech recognition technology have been developed with increased frequency, the field is still in its infancy, and many limitations have yet to be resolved. For
example, the success of speech recognition by desktop computers depends on the integration of speech technologies with the underlying processor and operating system, and the complexity and availability of tools required to deploy a system. This limitation has had an impact on application development (Markowitz, 2000; Woo, 2000). Use of speech recognition technology in high-noise environments remains a challenge. For speech recognition systems to function properly, clean speech signals are required, with high signal-to-noise ratio and wide frequency response (Albers, 2000; Erten, Paoletti, & Salam, 2000; Sones, 2000; Wickstrom, 2000). The microphone system is critical in providing the required speech signal, and, therefore, has a direct effect on the accuracy of the speech recognition system (Andrea, 2000; Wenger, 2000). However, providing a clean speech signal can be difficult in high-noise environments. Interference, changes in the user’s voice, and additive noise—such as car engine noise, background chatter and white noise—can reduce the accuracy of speech recognition systems. In military environments, additive noise and voice changes are common. For example, in military aviation, the stress resulting from low-level flying can cause a speaker’s voice to change, reducing recognition accuracy (Christ, 1984). The control of the speech recognition interface poses its own unique problems (Gunn, 2000; Taylor, 2000). The inability of people to remember verbal commands is even more of a hindrance than their inability to remember keyboard commands (Newman, 2000). The limited quality of machine speech output also affects the speech recognition interface. As human-machine interaction becomes increasingly commonplace, applications that require unlimited vocabulary speech output are demanding text-to-speech systems that produce more human-sounding speech (Hertz, Younes, & Hoskins, 2000). The accuracy of modeling has also limited the effectiveness of speech recognition. 
Modeling accuracy can be improved, however, by combining feature streams with neural nets and Gaussian mixtures (Ellis, 2000). The application of knowledge-based speech analysis has also shown promise (Komissarchik & Komissarchik, 2000). Pallett, Garofolo and Fiscus (1999) pointed out that potential problems associated with the search and retrieval of relevant information from databases have been addressed by the Spoken Document Retrieval community. Furthermore, standards for the probability of false alarms and miss probabilities are set forth and investigated by the Topic Detection and Tracking Program (Doddington, 1999). Decision error trade-off plots are used to demonstrate the trade-off between the miss probabilities and false alarm probabilities for a topic (Kubala, 1999). Security issues and speech verification are major voids in speech recognition technology (Gagnon, 2000). Technology for the archiving of speech is also undeveloped. It is well recognized that speech is not presently valued as an archival information source because it is impossible to locate information in large audio archives (Kubala, Colbath, Liu, Srivastava & Makhoul, 2000).
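The signal-to-noise ratio discussed above is conventionally expressed in decibels as ten times the base-10 logarithm of the ratio of signal power to noise power. A sketch estimating SNR from raw sample sequences:

```python
import math

def snr_db(signal, noise):
    """Signal-to-noise ratio in decibels, from raw sample sequences.
    Power is estimated as the mean of the squared samples."""
    p_signal = sum(s * s for s in signal) / len(signal)
    p_noise = sum(n * n for n in noise) / len(noise)
    return 10 * math.log10(p_signal / p_noise)

# A signal with 10x the amplitude of the noise gives 100x the power: 20 dB.
signal = [10.0, -10.0] * 100
noise = [1.0, -1.0] * 100
print(round(snr_db(signal, noise)))  # 20
```

This is why high-noise settings like a ship's machinery spaces or a fighter cockpit are so hostile to recognizers: engine noise and background chatter raise the noise power, driving the SNR down and recognition errors up, and putting a premium on the microphone system.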
Army Voice Interactive Display

Until recently, few practical continuous speech recognizers were available. Most were difficult to build, resided on large mainframe computers, were speaker dependent, and did not operate in real time. The Voice Interactive Display (VID) developed for the U.S. Army has made progress in eliminating these disadvantages (Hutzell, 2000). VID was intended to reduce the bulk, weight, and setup times of vehicle diagnostic systems while increasing their capacity and capabilities for hands-free troubleshooting. The capabilities of VID were developed to allow communication with the supply and logistics structures within the Army’s common operating environment. This effort demonstrated the use of VID as a tool for providing a paperless method of documentation for diagnostic and prognostic results; it will culminate in the automation of maintenance supply actions. Voice recognition technology and existing diagnostic tools have been integrated into a wireless configuration. The result is a hands-free interface between the operator and the Soldier’s On-System Repair Tool (SPORT).
The VID system consists of a microphone, a hand-held display unit and SPORT. With this configuration, a technician can obtain vehicle diagnostic information while navigating through an Interactive Electronic Technical Manual via voice commands. By integrating paperless documentation, human expertise, and connectivity to provide user support for vehicle maintenance, VID maximizes U.S. Army efficiency and effectiveness.
SETTING THE STAGE

In support of Force Health Protection, the U.S. Navy has launched a VID project that leverages existing technologies and automates the business practices of Navy medicine. The goal of the Naval Voice Interactive Device (NVID) project is to create a lightweight, portable computing device that uses speech recognition to enter shipboard environmental survey data into a computer database and to generate reports automatically to fulfill surveillance requirements.
Shipboard Environmental Surveys—The Requirement

To ensure the health and safety of shipboard personnel, naval health professionals—including environmental health officers, industrial hygienists, independent duty corpsmen (IDCs) and preventive medicine technicians—perform clinical activities and preventive medicine surveillance on a daily basis. These inspections include, but are not limited to, water testing, heat stress, pest control, food sanitation, and habitability surveys. Chief of Naval Operations Instruction 5100.19D, the Navy Occupational Safety and Health Program Manual for Forces Afloat, provides the specific guidelines for maintaining a safe and healthy work environment aboard U.S. Navy ships. Inspections performed by medical personnel ensure that these guidelines are followed. Typically, inspectors enter data and findings by hand onto paper forms and later transcribe these notes into a word processor to create a finished report. The process of manual note-taking and entering data via keyboard into a computer database is time consuming, inefficient, and prone to error. To remedy these problems, the Naval Shipboard Information Program was developed, allowing data to be entered into portable laptop computers while a survey is conducted (Hermansen & Pugh, 1996). However, the cramped shipboard environment, the need for mobility by inspectors, and the inability to have both hands free to type during an inspection make the use of laptop computers during a walk-around survey difficult. Clearly, a hands-free, space-saving mode of data entry that would also enable examiners to access pertinent information during an inspection was desirable. The NVID project was developed to fill this need.
Strengths of NVID

The NVID project was developed to replace existing, inefficient, repetitive survey procedures with a fully automated, voice interactive system for voice-activated data input. In pursuit of this goal, the NVID team developed a lightweight, wearable, voice-interactive prototype capable of capturing, storing, processing, and forwarding data to a server for easy retrieval by users. The voice interactive data input and output capability of NVID reduces obstacles to accurate and efficient data access and reduces the time required to complete inspections. NVID’s voice interactive technology allows a trainee to interact with a computerized system and still have hands and eyes free to manipulate materials and negotiate his or her environment (Ingram, 1991). Once entered, survey and medical encounter data can be used for local reporting requirements and command-level intelligence. Improved data acquisition and transmission capabilities allow connectivity with other systems. Existing printed and computerized surveys are voice activated and reside on the miniaturized computing device. NVID has been designed to allow voice prompting by the survey program, as well as voice-activated, free-text dictation. An enhanced microphone system permits improved signal detection in noisy shipboard environments. All of these capabilities contribute to the improved efficiency and accuracy of the data collection and retrieval process by shipboard personnel.
CASE DESCRIPTION
Shipboard medical department personnel regularly conduct comprehensive surveys to ensure the health and safety of the ship’s crew. Currently, surveillance data are collected and stored via manual data entry, a time-consuming process that involves typing handwritten survey findings into a word processor to produce a completed document. The NVID prototype was developed as a portable computer that employs voice interactive technology to automate and improve the environmental surveillance data collection and reporting process. This prototype system is a compact, mobile computing device that includes voice interactive technology, stylus screen input capability, and an indoor readable display that enables shipboard medical personnel to complete environmental survey checklists, view reference materials related to these checklists, manage tasks, and generate reports using the collected data. The system uses Microsoft Windows NT®, an operating environment that satisfies the requirement of the IT-21 Standard to which Navy ships must conform. The major software components include initialization of the NVID software application, application processing, database management, speech recognition, handwriting recognition, and speech-to-text capabilities. The power source for this portable unit accommodates both DC (battery) and AC (line) power options and includes the ability to recharge or swap batteries to extend the system’s operational time. The limited laboratory and field-testing described for this plan were intended to support feasibility decisions, not rigorous qualification for fielding purposes. The objectives of this plan are to describe how to:
• Validate NVID project objectives and system descriptions
• Assess the feasibility of voice interactive environmental tools
• Assess the NVID prototype’s ease of use
Components of Questionnaire
To develop an appropriate voice interactive prototype system, the project team questioned end users to develop the requirement specifications. On July 11, 2000, a focus group of 14 participants (13 hospital corpsmen, 1 medical officer) completed a survey detailing methods of completing surveys and reporting inspection results. The questionnaire addressed the needs of end users as well as their perspectives on the military utility of NVID. The survey consisted of 117 items ranging from nominal, yes/no answers to frequencies, descriptive statistics, rank ordering, and perceptual Likert scales. These items were analyzed utilizing a Windows software application, the Statistical Package for the Social Sciences (SPSS, 1999). Conclusions were drawn from the statistical analysis, and recommendations were suggested for the development and implementation of NVID. The following discussion presents the results of the survey and how the information was incorporated into the prototype. The commands and ranks of the participants are shown in Table 1. These participants possessed varying clinical experience while assigned to deployed units (ships and Fleet Marine Force), including IDCs, preventive medicine, lab technicians, and aviation medicine.
Section 1: Environmental Health and Preventive Medicine Afloat
In the first section of the questionnaire, inspectors were asked about the methods they used to record findings while conducting an inspection (see Table 2). Response to this section of the questionnaire was limited: the percentage of missing data ranged from 7.1% for items such as habitability and food sanitation safety to 71.4% for mercury control and 85.7% for polychlorinated biphenyls. An aggregate of the information in Table 2 indicates that the majority of inspectors relied on preprinted checklists. Fewer inspections were conducted utilizing handwritten reports.
Only 7.1% of the users recorded their findings on a laptop computer for inspections focusing on radiation protection, workplace monitoring, food sanitation safety and habitability.
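The authors ran these tabulations in SPSS. As an illustration of the underlying computation, a minimal Python sketch might look like the following; the response data here are invented for the example, not the study's raw data:

```python
from collections import Counter

def frequency_table(responses, n_respondents):
    """Percentage breakdown of one survey item; None answers count as missing."""
    counts = Counter("missing" if r is None else r for r in responses)
    return {choice: round(100.0 * k / n_respondents, 1)
            for choice, k in counts.items()}

# Hypothetical item: method used to record asbestos inspection findings
# (14 focus-group members; 5 left the item blank).
answers = ["preprinted checklist"] * 7 + ["handwritten"] * 2 + [None] * 5
print(frequency_table(answers, n_respondents=14))
# {'preprinted checklist': 50.0, 'handwritten': 14.3, 'missing': 35.7}
```

With n = 14, each respondent contributes 7.1 percentage points, which is why that figure recurs throughout the tables below.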
18 Rodger, Trank & Pendharkar
Table 1: NVID Focus Group Participants

Command                                          Rank/Rate
Navy Environmental Preventive Medicine Unit-5    HM2
Navy Environmental Preventive Medicine Unit-5    HM1
Navy Environmental Preventive Medicine Unit-5    HM3
Commander Submarine Development Squadron Five    HMCS
Naval School of Health Sciences, San Diego       HMCS
Naval School of Health Sciences, San Diego       HM1
Naval School of Health Sciences, San Diego       HMC
Commander, Amphibious Group-3                    HMC
Commander, Amphibious Group-3                    HMC
USS CONSTELLATION (CV-64)                        HMC
USS CONSTELLATION (CV-64)                        HMC
Commander, Naval Surface Force Pacific           HMCS
Commander, Naval Surface Force Pacific           HMCS
Regional Support Office, San Diego               CDR

HM1, Hospital Corpsman First Class; HM2, Hospital Corpsman Second Class; HM3, Hospital Corpsman Third Class; HMC, Hospital Corpsman Chief; HMCS, Hospital Corpsman Senior Chief; CDR, Commander.
In addition to detailing their methods of recording inspection findings, the focus group participants were asked to describe the extensiveness of their notes during surveys. The results ranged from “one to three words in a short phrase” (35.7%) to “several short phrases, up to a paragraph” (64.3%). No respondents claimed to have used “extensive notes of more than one paragraph.” The participants were also asked how beneficial voice dictation would be while conducting an inspection. Those responding that it would be “very beneficial” (71.4%) far outweighed those responding that it would be “somewhat beneficial” (28.6%). No respondents said that voice dictation would be “not beneficial” in conducting an inspection. In another survey question, participants were asked if portions of their inspections were performed in direct sunlight. The “yes” responses (92.9%) were far more prevalent than the “no” responses (7.1%). Participants also described the types of reference material needed during inspections. The results are shown in Table 3. “Yes” responses ranged from a low of 28.6% for procedure description information to 78.6% for current checklist in progress information. When asked how often they utilized reference materials during inspections, no participants chose the response “never.” Other responses included “occasionally” (71.4%), “frequently” (21.4%) and “always” (7.1%). In another survey question, participants were asked to describe their methods of reporting inspection results, which included the following: preparing the report using SAMS (the Shipboard Non-Tactical ADP Program (SNAP) Automated Medical System) (14.3%), preparing the report using word processing other than SAMS (57.1%), and preparing the report using both SAMS and word processing (28.6%). No respondents reported using handwritten or other methods of reporting inspection results. Participants were also asked how they distributed final reports.
The following results were tabulated: hand-carry (21.4%); guard mail (0%); download to disk and mail (7.1%); Internet e-mail (64.3%); upload to server (0%); file transfer protocol (FTP) (0%); and other, not specified (7.1%). When asked if most of the problems or discrepancies encountered
Table 2: Methods of Recording Inspection Findings (% of respondents)

Inspection                                 Handwritten   Preprinted     Laptop       Missing
                                           %             Checklists %   Computer %   %
Asbestos                                   14.3          50.0           0            35.7
Heat Stress                                14.3          71.4           0            14.3
Hazardous Materials                        21.4          50.0           0            28.6
Hearing Conservation                       21.4          64.3           0            14.3
Sight Conservation                         7.1           71.4           0            21.4
Respiratory Conservation                   0             71.4           0            28.6
Electrical Safety                          14.3          50.0           0            35.7
Gas-Free Engineering                       14.3          28.6           0            57.1
Radiation Protection                       7.1           28.6           7.1          57.1
Lead Control                               0             64.3           0            35.7
Tag-Out Program                            7.1           50.0           0            42.9
Personal Protective Equipment              7.1           42.9           0            50.0
Mercury Control                            0             28.6           0            71.4
PCBs                                       0             14.3           0            85.7
Man-Made Vitreous Fibers                   7.1           28.6           0            64.3
Blood-Borne Pathogens                      0             50.0           0            50.0
Workplace Monitoring                       0             42.9           7.1          50.0
Food Sanitation Safety                     14.3          71.4           7.1          7.1
Habitability                               28.6          57.1           7.1          7.1
Potable Water, Halogen/Bacterial Testing   35.7          57.1           0            7.1
Wastewater Systems                         21.4          50.0           0            28.6
Other                                      0             0              0            100

PCBs, polychlorinated biphenyls.

Table 3: Types of Reference Information Needed During Inspections

Information                                              Yes %   No %
Current Checklist in Progress                            78.6    21.4
Bureau of Medicine Instructions                          71.4    28.6
Naval Operations Instructions                            71.4    28.6
Previously Completed Reports for Historical References   71.4    28.6
Exposure Limit Tables                                    57.1    42.9
Technical Publications                                   57.1    42.9
Type Commander Instructions                              50.0    50.0
Local Instructions                                       42.9    57.1
Procedure Descriptions                                   28.6    71.4
Other                                                    21.4    78.6
during an inspection could be summarized using a standard list of “most frequently occurring” discrepancies, 100% of respondents answered “yes.” The average level of physical exertion during inspections was reported as “light” by 42.9% of respondents, “moderate” by 50.0% and “heavy” by 7.1%. Survey participants were also asked to describe their level of proficiency at ICD-9-CM coding (Department of Health and Human Services, 1989). An “expert” level of proficiency was reported 7.1% of the time. Other responses included “competent” (14.3%), “good” (28.6%), “fair” (28.6%) and “poor” (7.1%). Missing data made up 14.3% of the responses.
Section 2: Shipboard Data Communications Technology
In the second section of the questionnaire, end users addressed characteristics of shipboard medical departments, NVID, medical encounters and SAMS. When asked if their medical departments were connected to a local area network (LAN), respondents answered as follows: “yes” (71.4%), “no” (7.1%) and “uncertain” (14.3%). Missing responses totaled 7.1%. When asked if their medical departments had LANs of their own, participants responded “yes” (14.3%), “no” (57.1%) and “uncertain” (21.4%); another 7.1% of responses were missing. When asked if their medical departments had access to the Internet, participants responded “yes, in medical department” (85.7%) and “yes, in another department” (7.1%); another 7.1% of responses were missing. Various methods for transmitting medical data from ship to shore were also examined in the survey. It was found that 78.6% of those surveyed said they had used Internet e-mail, while 14.3% said that they had downloaded data to a disk and mailed it. No users claimed to have downloaded data to a server or utilized File Transfer Protocol (FTP) for this purpose. Missing responses totaled 7.1%. Table 4 shows respondents’ rankings of the desirable features of the device. “Voice-activated dictation” and “durability” were tied for the top ranking. “Wearable in front or back” and “earphones” were tied for the lowest ranking. “Voice prompting for menu navigation” and “LAN connectivity” were the number 3 and 4 choices, respectively. In another question, participants rated their computer proficiency: 14.3% rated themselves “expert,” 42.9% chose “competent,” and “good” and “fair” were each selected by 21.4% of respondents. Participants most often used “name of area” (85.7%) to identify an inspected area (Table 5). Table 6 provides respondents’ rankings of the areas of environmental surveillance in which voice automation would be of the greatest value. According to this rank ordering, “Food Sanitation Safety” would most benefit from voice automation. “Heat Stress” and “Potable Water, Halogen” were also popular choices.
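Rank orderings like those reported here are conventionally produced by averaging each item's rank across respondents and sorting by the mean, with equal means sharing a position. A small illustrative sketch (the respondent votes below are invented, not the study's data):

```python
def aggregate_rankings(rankings):
    """rankings: one dict per respondent mapping item -> rank (1 = most desired).
    Returns (item, mean_rank) pairs ordered from most to least preferred."""
    means = {item: sum(r[item] for r in rankings) / len(rankings)
             for item in rankings[0]}
    return sorted(means.items(), key=lambda kv: kv[1])

# Three hypothetical respondents ranking three device features.
votes = [
    {"voice-activated dictation": 1, "durability": 2, "LAN connectivity": 3},
    {"voice-activated dictation": 2, "durability": 1, "LAN connectivity": 3},
    {"voice-activated dictation": 1, "durability": 3, "LAN connectivity": 2},
]
for item, mean_rank in aggregate_rankings(votes):
    print(f"{item}: {mean_rank:.2f}")  # most preferred first
```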
Table 4: Ranking of Device Features

Feature                               Average   Rank
Voice-Activated Dictation             2.64      1 (tie)
Durability                            2.64      1 (tie)
Voice Prompting for Menu Navigation   2.93      3
LAN Connectivity                      4.21      4
Belt or Harness Wearability           4.57      5
Wireless Microphone                   5.29      6
Touch Pad/Screen                      5.93      7
Earphones                             6.14      8 (tie)
Wearable in Front or Back             6.14      8 (tie)
Table 5: Elements Used in Reports to Identify Inspected Areas

Identifying Element   Yes %   No %
Compartment Number    57.1    42.9
Department            57.1    42.9
Name of Area          85.7    14.3
Other                 0       100
Table 6: Surveillance Areas Benefiting from Voice Automation

Areas                                     Average   Rank
Food Sanitation Safety                    1.21      1
Heat Stress                               3.29      2
Potable Water, Halogen                    3.86      3
Habitability                              4.14      4
Potable Water, Bacterial                  4.21      5
Inventory Tool                            4.43      6
Hazard-Specific Programs with Checklist   4.86      7
Table 7: Frequencies of Desirable Attributes (% of respondents)

Opinion                             Strongly   Agree   Unsure   Disagree   Strongly
                                    Agree                                  Disagree
Care for Patients                   71.4       28.6    —        —          —
Reduce Data Entries                 21.4       71.4    7.1      —          —
Reduce Paperwork                    14.3       57.1    14.3     14.3       —
Conduct Outbreak Analysis           21.4       35.7    21.4     21.4       —
On-Line Tutorial                    14.3       57.1    21.4     7.1        —
Lightweight Device                  21.4       71.4    7.1      —          —
See an Overview                     28.6       50.0    14.3     7.1        —
Automated ICD-9-CM                  35.7       42.9    7.1      14.3       —
Difficulties Using ICD-9-CM Codes   14.2       28.6    28.6     28.6       —

ICD-9-CM: internationally recognized system used to code and classify morbidity data (Department of Health and Human Services, 1989).
Section 3: Professional Opinions
In the third section of the survey, participants were asked which attributes of NVID they would find most desirable (Table 7). Other survey questions provided insights into the workloads of respondents and their preferences related to NVID training. It was reported that 64.3% of respondents saw 0 to 24 patients in sick bay daily. A daily count of 25 to 49 sick bay visits was reported by 28.6% of respondents, while 7.1% reported 50 to 74 visitors per day.
When asked how much time they would be willing to devote to training a software system to recognize their voice, 21.4% of respondents said that a training period of less than 1 hour would be acceptable. According to 57.1% of respondents, a training period of 1-4 hours would be acceptable, while 21.4% of respondents said that they would be willing to spend 4-8 hours to train the system. To train themselves to use the NVID hardware and software applications, 42.9% of survey respondents said they would be willing to undergo 1-4 hours of training, while 57.1% said that they would train for 4-8 hours. All respondents agreed that a longer training period would be acceptable if it would guarantee a significant increase in voice recognition accuracy and reliability.
Environmental Surveillance Module
Based on the responses from the surveyed population, we chose to automate the following surveys: 1) Food Sanitation Safety, 2) Habitability, 3) Heat Stress, 4) Potable Water, and 5) Pest Control. Besides being requested by the focus group, choices 3-5 were appealing because they already existed in the SAMS program. By including these surveys in the prototype, the research effort hooked into systems already resident in ship medical departments, increasing the appeal of the prototype. The surveys indicated that the inspectors utilized preprinted checklists most often. NVID automated these business practices by providing checklists for each survey. While some “free dictation” was incorporated, allowing the inspector to include comments during the inspection, predetermined checklists with a limited necessary vocabulary (command and control) allowed the NVID team to use smaller computer devices with slower processors. Extensive “free dictation” capability requires faster processors that do not yet exist on small, portable computing platforms. The survey also showed that all respondents agreed that most problems encountered during an inspection can be summarized using a standardized list of frequently occurring discrepancies. A master tickler, a calendar that tracks the progress of surveys and the dates of their required completion, was included in the module. Navy references and instructions were made resident on the system, allowing inspectors access to regulations during surveys. Compatibility of the NVID system with medical department computer equipment was ensured so that downloads and sharing of information between computing platforms could easily be achieved. Final reports may be printed and delivered or distributed electronically via e-mail.
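The command-and-control design described above can be pictured as a checklist object that accepts only a small fixed vocabulary, which keeps the recognizer's search space small enough for a low-powered wearable device. The sketch below is an illustrative assumption about how such dispatch might be structured, not the actual NVID implementation; the survey name comes from the text, while the vocabulary and item wording are invented:

```python
# Restricted command-and-control vocabulary: every utterance the recognizer
# must distinguish at a given moment is drawn from this small set.
VOCAB = {"yes", "no", "not applicable", "next", "back"}

class Checklist:
    def __init__(self, name, items):
        self.name = name        # e.g., "Food Sanitation Safety"
        self.items = items      # ordered checklist questions
        self.pos = 0            # current question index
        self.findings = {}      # question -> recorded answer

    def handle(self, utterance):
        """Dispatch one recognized utterance; reprompt if out of vocabulary."""
        if utterance not in VOCAB:
            return "please repeat"
        if utterance == "next":
            self.pos = min(self.pos + 1, len(self.items) - 1)
        elif utterance == "back":
            self.pos = max(self.pos - 1, 0)
        else:  # "yes" / "no" / "not applicable" answers the current question
            self.findings[self.items[self.pos]] = utterance
        return self.items[self.pos]

survey = Checklist("Food Sanitation Safety",
                   ["Galley deck clean?", "Food held at safe temperature?"])
survey.handle("no")      # record a discrepancy on item 1
survey.handle("next")    # voice navigation to item 2
survey.handle("yes")
print(survey.findings)
```

In the real system a further command would presumably switch into free dictation for comments; the point of the restricted set is that each checklist state needs only a handful of distinguishable commands.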
Improving Study Validity
One of the major limitations of this study is the small sample size (n = 14). In the focus group study, the small, conveniently available sample detracts from the external validity of the results. These results may not be generalizable to other populations and situations. More data must be collected to improve the external validity of the study. In the future:
• Having end users from a more geographically dispersed setting would add to the validity of the research.
• Conducting in-depth interviews with end users may help determine missed variables.
Several valid conclusions were drawn from the sample population, though non-response biases may be an issue in determining the generalizability of the results. These results were used to determine the features and surveys included in the NVID prototype. First and foremost, a majority of end users indicated that NVID would be very beneficial. This validated the need for NVID and convinced us that this is a worthwhile effort for the Navy. The Fujitsu Stylistic™ 3400 Tablet (Santa Clara, CA) with an Intel Pentium® III processor (Santa Clara, CA) was the chosen computing platform. The commercial software included L&H Dragon NaturallySpeaking® 5.0 (Flanders, Belgium). For most purposes, an SR1 headset microphone (Plantronics Inc., Santa Cruz, CA) positioned at the lips was adequate for the system tested under conditions of 70-90 decibels of background noise.
CURRENT CHALLENGES/PROBLEMS
Specific challenges and problems of the NVID prototype system that were examined and tested included:
• Shipboard operation in tight spaces
• Operation in high-noise environments
• Data gathering and checklist navigation
• Report generation
• Access to reference materials
• Comment capture capability
• Access to task schedule and survey data
• User system training
• Prototype effectiveness
Shipboard Operation in Tight Spaces
Space and resource constraints on Navy ships make it necessary to complete surveys in enclosed, tight spaces. Ease of use, portability, and wearability of the NVID unit when maneuvering through these areas were validated based on surveys of military users. A study of the ergonomics associated with the use of an NVID computer was also performed. The human factors evaluated included, but were not limited to, the following parameters:
• Safety equipment compatibility
• Work clothing, including gloves, glasses, and hard hats
• Sound suppressors/hearing protection
• Respirators
• Data input comparison and user acceptance (voice command vs. touchscreen) based on the opinions of Navy personnel aboard ship
• User interface evaluation (ease of use)
• User comfort
• User adjustability
• Subcomponent connection procedure
• Assessment of mean time to proficiency
Operation in High-Noise Environments
Naval ships are industrial environments with the potential for high noise levels. To verify the effectiveness of the NVID prototype under such conditions, the difference in error rates using the unit with and without background noise was determined. Voice recognition software training was first conducted by using a script consisting of a repeatable set of voice commands.
The following sets of tests were performed with consistent background noise:
• Lab test in a normal office environment (<70 decibels)
• Lab test with baseline background noise up to the expected level (90 decibels)
• Field test aboard ship with typical background noise (75-90 decibels)
Error rates were then recorded for each test and compared between tests.
Data gathering and checklist navigation. NVID prototype system users were capable of navigating through survey checklists by using voice commands, as well as other computer navigational tools, such as a mouse, touch pad, and stylus. The data collected were then automatically stored in an on-system database. To determine whether the system could successfully open each checklist and allow entry and storage of the required data, a script was developed that thoroughly tested the
functionality of the hardware and software.
Report generation. The ability to generate reports and save them as files for downloading or printing was verified. Tests were performed to verify that the data were captured during inspection procedures and properly rendered into a usable working report.
Access to reference materials. Users may require access to survey reference materials, schedules, previous survey results or discrepancies found during the survey process. Tests were performed to verify that the application software enabled access to designated reference material as specified within each checklist.
Comment capture capability. The NVID application provides the ability to document the inspector’s notes via handwriting recognition, voice dictation and a touch screen keyboard. Verification of all three methods of data capture was performed using a predefined script of repeatable voice commands.
Access to task schedule and survey data. The NVID application software provides the ability to schedule tasks and review past reports. Verification of the software was performed using both voice command and touchscreen technologies.
User system training. To evaluate the effectiveness of user system training, the amount of training time required to achieve the desired level of voice recognition accuracy was first determined. Minimum training for the voice recognition software was conducted, and the length of time required to complete the training was documented. The system was then operated using a scripted, repeatable set of voice commands, and the number of errors was recorded. This process was repeated with additional training until the desired level of voice recognition accuracy was achieved.
Prototype effectiveness. Throughout the test and evaluation process, the current manual method of shipboard surveillance was compared with the NVID prototype system. A test script was used to exercise the functionality of all components of the software application.
The test parameters included:
• The time it takes to perform tasks
• Ease of reviewing past survey data, task schedules and comments
The NVID prototype power source accommodates AC/DC power options. In a lab environment, the battery power meter was periodically monitored to determine the expected usage range. The functionality of the NVID prototype system’s voice prompting and speech-to-text capabilities was verified. The usefulness of the device’s ability to read menu selections and checklist questions was measured through user feedback. Although users reported positive responses to the prototype tested, the device exhibited the limitations of current speech recognition technologies. The processors in lightweight, wearable devices were not fast enough to process speech adequately, yet faster processors added unwelcome weight, and inspectors objected to the unit’s 3.5 pounds during walk-around surveys. In addition, the throat microphones used in the prototype to limit interference from background noise also limited speech recognition. These microphones pick up primarily guttural utterances, and thus tended to miss sounds created primarily with the lips, as well as women’s higher voice ranges. Heavier necks also impeded the accuracy of throat microphones. Accuracy of speech recognition also depended on the time a user committed to training the device to recognize his or her speech, and on changes in voice quality due to environmental or physical conditions. Accuracy rates varied from 85-98% depending on the amount of time users took to train the software. Optimal training time appeared to be one hour for the Dragon NaturallySpeaking software and one hour for the NVID software. In addition, current software interprets utterances in the context of an entire sentence, so users had to form complete utterances mentally before speaking for accurate recognition. As speech recognition technologies evolve, many of these limitations should be addressed.
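The error-rate comparisons described in this section reduce to scoring the recognizer's output against the scripted command set under each condition. A minimal sketch of such scoring follows; the commands and the recognizer outputs are invented for illustration and are not the project's actual test data:

```python
def command_error_rate(script, recognized):
    """Fraction of scripted commands misrecognized; a missing output
    (the recognizer produced nothing for that command) counts as an error."""
    errors = sum(1 for i, cmd in enumerate(script)
                 if i >= len(recognized) or recognized[i] != cmd)
    return errors / len(script)

script = ["open checklist", "next item", "yes", "comment", "save report"]

# Hypothetical output in a quiet office (<70 dB)...
quiet = ["open checklist", "next item", "yes", "comment", "save report"]
# ...and aboard ship with 75-90 dB of background noise.
noisy = ["open checklist", "next item", "yes", "commence"]

print(command_error_rate(script, quiet))   # 0.0
print(command_error_rate(script, noisy))   # 0.4 (one substitution, one deletion)
```

Repeating the same script before and after each increment of user training gives the accuracy-versus-training-time curve the testers used to decide when training was sufficient.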
CONCLUSIONS
The NVID survey established criteria for developing a lightweight, wearable, voice-interactive computer capable of capturing, storing, processing, and forwarding data to a server for retrieval by users. The prototype met many of these expectations. However, limitations in the current state of voice recognition technologies create challenges for training and user interface design. Integration of existing technologies, rather than development of new technology, was the intent of the design. A state-of-the-art review of existing technologies indicated that commercial, off-the-shelf products cannot yet provide simultaneous walk-around capability and accurate speech recognition in the shipboard environment. Adaptations of existing technology involved trade-offs between speech recognition capabilities and wearability. Despite current limitations in speech recognition technology, the NVID prototype was successful in reducing the time needed to complete inspections, in supporting local reporting requirements, and in enhancing command-level intelligence. Attitudes of the users toward the device were favorable, despite these restrictions. Users believed that the prototype would save time and improve the quality of reports.
ACKNOWLEDGMENTS
The authors would like to thank Dr. Cheryl Reed for her timely appearance and Ms. Susan Paesel for her expert editing. Report No. 01-17 was supported by the Bureau of Medicine and Surgery, Washington, DC, under Work Unit No. 0604771N-60001. The views expressed in this chapter are those of the authors and do not necessarily reflect the official policy or position of the Department of the Navy, Department of Defense or the U.S. Government. Approved for public release; distribution unlimited. Human subjects participated in this study after giving their free and informed consent. This research has been conducted in compliance with all applicable Federal Regulations governing the Protection of Human Subjects in Research.
REFERENCES
Albers, J. (2000). Successful speech applications in high noise environments. SpeechTEK Proceedings, 147-154.
Andrea, D. (2000). Improving the user interface: Digital far-field microphone technology. SpeechTEK Proceedings, 155-160.
Bikel, D., Miller, S., Schwartz, R., & Weischedel, R. (1997). Nimble: A high performance learning name finder. Proceedings of the Fifth Conference on Applied Natural Language Processing, Association for Computational Linguistics, Washington, DC, 194-201.
Bokulich, F. (2000). JSF [Joint Strike Fighter] voice recognition. Aerospace Engineering Online [online]. Available: http://www.sae.org/aeromag/techupdate_5-00/03.htm.
Bourgeois, S. (2000). Speech-empowered mobile computing. SpeechTEK Proceedings, 223-228.
Charry, M., Pimentel, H., & Camargo, R. (2000). User reactions in continuous speech recognition systems. AVIOS Proceedings of the Speech Technology & Applications Expo, 113-130.
Christ, K. A. (1984). Literature review of voice recognition and generation technology for Army helicopter applications (Army Report No. HEL-TN-11-84). Aberdeen Proving Ground, MD: Human Engineering Lab.
Department of Health and Human Services. (1989). International classification of diseases, 9th revision, clinical modification (3rd ed.). Washington, DC: Government Printing Office.
Doddington, G. (1999). Topic detection and tracking: TDT2 overview and evaluation results. Proceedings of DARPA Broadcast News Workshop, Herndon, VA, February.
Ellis, D. (2000). Improved recognition by combining different features and different systems. AVIOS Proceedings of the Speech Technology & Applications Expo, 236-242.
Erten, G., Paoletti, D., & Salam, F. (2000). Speech recognition accuracy improvement in noisy environments using clear voice capture (CVC) technology. AVIOS Proceedings of the Speech Technology & Applications Expo, 193-198.
Fiscus, J. G. (1997). A post-processing system to yield reduced word error rates: Recognizer Output Voting Error Reduction (ROVER). Proceedings, 1997 IEEE Workshop on Automatic Speech Recognition and Understanding.
Fiscus, J. G., Fisher, W. M., Martin, A. F., Przybocki, M. A., & Pallett, D. S. (2000). 2000 NIST evaluation of conversational speech recognition over the telephone: English and Mandarin performance results. Proceedings of DARPA Broadcast News Workshop, February-March.
Gaddy, L. (2000a). The future of speech I/O in mobile phones. SpeechTEK Proceedings, 249-260.
Gaddy, L. (2000b). Command and control solutions for automotive applications. SpeechTEK Proceedings, 187-192.
Gagnon, L. (2000). Speaker recognition solutions to secure and personalize speech portal applications. SpeechTEK Proceedings, 135-142.
Goodliffe, C. (2000). The telephone and the Internet. AVIOS Proceedings of the Speech Technology & Applications Expo, 149-151.
Gorham, A., & Graham, J. (2000). Full automation of directory enquiries: A live customer trial in the United Kingdom. AVIOS Proceedings of the Speech Technology & Applications Expo, 1-8.
Greenberg, S., Chang, S., & Hollenback, J. (2000). An introduction to the diagnostic evaluation of switchboard-corpus automatic speech recognition systems. Proceedings of DARPA Broadcast News Workshop, February-March.
Gunn, R. (2000). ‘Voice’: The ultimate in user-friendly computing. SpeechTEK Proceedings, 161-178.
Haynes, T. (2000). Conversational IVR: The future of speech recognition for the enterprise. AVIOS Proceedings of the Speech Technology & Applications Expo, 15-32.
Head, W. (2000). Breaking down the barriers with speech. SpeechTEK Proceedings, 93-100.
Herb, G., & Schmidt, M. (1994, October). Text independent speaker identification. Signal Processing Magazine, 18-32.
Hermansen, L. A., & Pugh, W. M. (1996). Conceptual design of an expert system for planning afloat industrial hygiene surveys (Technical Report No. 96-5E). San Diego, CA: Naval Health Research Center.
Hertz, S., Younes, R., & Hoskins, S. (2000). Space, speed, quality, and flexibility: Advantages of rule-based speech synthesis. AVIOS Proceedings of the Speech Technology & Applications Expo, 217-228.
Holtzman, T. (2000). Improving patient care through a speech-controlled emergency medical information system. AVIOS Proceedings of the Speech Technology & Applications Expo, 73-81.
Hutzell, K. (2000). Voice Interactive Display (VID) (Contract Summary Report: Apr 98-May 2000). Johnstown, PA: MTS Technologies, Inc.
Ingram, A. L. (1991). Report of potential applications of voice technology to armor training (Final Report: Sep 84-Mar 86). Cambridge, MA: Scientific Systems Inc.
Karam, G., & Ramming, J. (2000). Telephone access to information and services using VoiceXML. AVIOS Proceedings of the Speech Technology & Applications Expo, 159-162.
Komissarchik, E., & Komissarchik, J. (2000). Application of knowledge-based speech analysis to suprasegmental pronunciation training. AVIOS Proceedings of the Speech Technology & Applications Expo, 243-248.
Krause, B. (2000). Internationalizing speech applications. AVIOS Proceedings of the Speech Technology & Applications Expo, 10-14.
Kubala, F. (1999). Broadcast news is good news. Proceedings of DARPA Broadcast News Workshop, Herndon, VA, February.
Kubala, F., Colbath, S., Liu, D., Srivastava, A., & Makhoul, J. (2000). Integrated technologies for indexing
spoken language. Communications of the ACM, 43(2), 48-56.
Kundupoglu, Y. (2000). Fundamentals for building a successful patent portfolio in the new millennium. AVIOS Proceedings of the Speech Technology & Applications Expo, 229-234.
Lai, J. (2000). Comprehension of longer messages with synthetic speech. AVIOS Proceedings of the Speech Technology & Applications Expo, 207-216.
Larson, J. (2000). W3C voice browser working group activities. AVIOS Proceedings of the Speech Technology & Applications Expo, 163-174.
Leitch, D., & Bain, K. (2000). Improving access for persons with disabilities in higher education using speech recognition technology. AVIOS Proceedings of the Speech Technology & Applications Expo, 83-86.
Macker, J. P., & Adamson, R. B. (1996). IVOX: The Interactive VOice eXchange application (Report No. NRL/FR/5520—96-980). Washington, DC: Naval Research Lab.
Markowitz, J. (2000). The value of combining technologies. AVIOS Proceedings of the Speech Technology & Applications Expo, 199-206.
McCarty, D. (2000). Building the business case for speech in call centers: Balancing customer experience and cost. SpeechTEK Proceedings, 15-26.
McGlashan, S. (2000). Business opportunities enabled by integrating speech and the Web. SpeechTEK Proceedings, 281-292.
Miller, R. (2000). The speech-enabled call center. SpeechTEK Proceedings, 41-52.
Mogford, R. M., Rosiles, A., Wagner, D., & Allendoerfer, K. R. (1997). Voice technology study report (Report No. DOT/FAA/CT-TN97/2). Atlantic City, NJ: FAA Technical Center.
Molina, E. A. (1991). Continued Performance Assessment Methodology (PAM) research (VORPET). Refinement and implementation of the JWGD3 MILPERF-NAMRL Multidisciplinary Performance Test Battery (NMPTB) (Final Report: 1 Oct 89 – 30 Sep 91). Pensacola, FL: Naval Aerospace Medical Research Laboratory.
Newman, D. (2000). Speech interfaces that require less human memory. AVIOS Proceedings of the Speech Technology & Applications Expo, 65-69.
Pallett, D. S., Garofolo, J. S., & Fiscus, J. G. (1999). 1998 broadcast news benchmark test results: English and non-English word error rate performance measure. Proceedings of DARPA Broadcast News Workshop, Herndon, VA, February.
Pallett, D. S., Garofolo, J. S., & Fiscus, J. G. (2000). Measurements in support of research accomplishments. Communications of the ACM, 43(2), 75-79.
Pan, J. (2000). Speech recognition and the wireless Web. SpeechTEK Proceedings, 229-232.
Pearce, D. (2000). Enabling new speech-driven services for mobile devices: An overview of the ETSI standards activities for distributed speech recognition front-ends. AVIOS Proceedings of the Speech Technology & Applications Expo, 175-186.
Prizer, B., Thomas, D., & Suhm, B. (2000). The business case for speech. SpeechTEK Proceedings, 15-26.
Przybocki, M. (1999). 1998 broadcast news evaluation information extraction named entities. Proceedings of DARPA Broadcast News Workshop, Herndon, VA, February.
Rolandi, W. (2000). Speech recognition applications and user satisfaction in the imperfect world. AVIOS Proceedings of the Speech Technology & Applications Expo, 153-158.
Schalk, T. (2000). Design considerations for ASR telephony applications. AVIOS Proceedings of the Speech Technology & Applications Expo, 103-112.
Scholz, K. (2000). Localization of spoken language applications. AVIOS Proceedings of the Speech Technology & Applications Expo, 87-102.
Shepard, D. (2000). Human user interface: HUI. SpeechTEK Proceedings, 262-270.
Sones, R. (2000). Improving voice application performance in real-world environments. SpeechTEK Proceedings, 179-210.
BIOGRAPHICAL SKETCHES

James A. Rodger is an Associate Professor of Management Information Systems at Indiana University of Pennsylvania (IUP). He received his doctorate in MIS from Southern Illinois University at Carbondale in 1997. Dr. Rodger teaches Network Administration, System Architecture, Microcomputer Applications and Intro to MIS at IUP. He has worked as an Installation Coordinator for Sunquest Information Systems, and presently does consulting work on telemedicine connectivity for the Department of Defense and Management Technology Services Inc. Dr. Rodger has published several journal articles related to these subjects. His most recent article, “Telemedicine and the Department of Defense,” was published in Communications of the ACM.

Lieutenant Tamara V. Trank is Deputy Manager of Field Medical Technologies at the Naval Health Research Center in San Diego, CA. She received a BS degree in Kinesiology and a Ph.D. in physiological science from UCLA. LT Trank has published her research in neuroscience and the neural control of locomotion in major scientific journals, and has presented numerous papers at national and international research conferences. In 2001, she received the Naval Achievement Medal. LT Trank is a member of the Society for Neuroscience, the American Association for the Advancement of Science, and the American College of Sports Medicine.

Parag C. Pendharkar is an Assistant Professor of Information Systems at Penn State Harrisburg. His research interests are in artificial intelligence and expert systems. His work has appeared in Annals of Operations Research, Communications of the ACM, Decision Sciences and several other journals. He is a member of INFORMS, APUBEF, and DSI.
IT-Based Decision Tools For Item Processing Operations Management in Retail Banking Charles J. Malmborg Rensselaer Polytechnic Institute, USA
EXECUTIVE SUMMARY

Merit Bank is a multi-line financial services company with $75 billion in assets and approximately 1,000 retail branches distributed across 20 geographic divisions in 16 states. In 1999, Merit’s retail banking operations generated $2.1 billion of revenues and $1 billion in net income. Over the past decade, Merit’s aggressive acquisition and consolidation strategy in its retail and commercial banking divisions has significantly increased check processing volumes and motivated major investments in automated imaging technology and branch operations reporting systems. When these investments failed to reduce overall check processing costs, a consulting team was formed to define the breakthrough opportunities and best-in-class management practices needed to restructure under-performing operations. By using updated scheduling criteria reflecting current business conditions and more fully exploiting imaging and branch reporting software, the consulting team successfully developed and implemented interfacing tools responsible for significant cost savings in check processing operations.
BACKGROUND

Technology and business conditions in the retail banking industry are changing more rapidly today than at any time in recent history. Consolidations of independent financial institutions are occurring at an unprecedented rate. Some analysts predict that fewer than half of the financial institutions operating today will exist as independent enterprises by the end of this decade (Malmborg, 1999). Merit Bank is a 50-year-old financial institution with a history of innovation and successful adaptation to change. During the past five years, multi-line financial services institutions (MFSIs) such as Merit have been vying to become top players in a select group of high-growth regional markets as well as market share leaders in geographic areas where they have an established presence. Following a “build and buy” strategy of acquisitions and regional asset exchanges, Merit is seeking to develop profitable niches in various areas of consumer finance and investment services as well as in its
traditional fields of retail and commercial banking. Among these multiple business lines, retail operations have retained their importance to Merit by serving as an inexpensive source of operating funds (deposits) and a significant source of revenues. In the retail segment of the banking industry, Merit follows a strategy of building broad-based client relationships through continuous pursuit of best-in-class services. In a number of regional markets, inroads made by Merit in the retail area are pressuring smaller independent banks to reduce costs and re-focus on the specialized financial services for which they are best positioned. Many of these smaller institutions have been forced to follow Merit’s lead in rapidly introducing new products and services, improving customer access to funds and reducing operating costs while maintaining the “personal banking” relationships on which their businesses were founded. Following a merger with a major investment firm in the late 1990s, Merit accelerated the acquisition-based growth strategy that had been its hallmark since the early 1980s. By 1999, Merit had transitioned into a leading MFSI with over $75 billion in assets and approximately 1,000 retail branches in 20 geographic divisions spread over 16 states. Throughout the period of rapid expansion, retail operations remained a priority, responsible for over $2.1 billion of revenues (almost 40% of total revenues) and over $1 billion in net income during 1999 alone. Under pressure to maintain annual growth in retail operations in the 10% range, Merit launched a productivity improvement program during 1999 that targeted cost reductions of over $80 million. A significant portion of this was focused on leveraging investments in new information technology to achieve economies of scale in item processing operations to enable closings and consolidations of acquired item processing facilities.
Item processing (IP) operations involve the retrieval of paper checks from retail points of transaction and associated processing, distribution and funds transfers to and from correspondent banks. Despite explosive growth in paperless financial transactions over the past two decades, strong consumer resistance has frustrated efforts by Merit and other MFSIs to drastically reduce consumer dependence on paper checks. Consequently, IP operations have remained a major back-room operation for Merit and most other full-service financial institutions. Moreover, IP processing volumes have expanded rapidly at Merit as formerly independent local and regional banks have been acquired and Merit has sought to grow into a provider of IP services for smaller institutions. Merit’s IP operating policies in retail and commercial banking involve the coordinated management of document retrieval courier operations, check encoding and cash lettering. Like many financial institutions, Merit outsources courier services to vendors that provide contracts to financial institutions for a fixed number of vehicles and drivers to be directly scheduled by the management of the bank’s local IP operations centers. A predominant management objective in courier operations is to determine the routings and departure times for vehicles to retrieve checks and other paper transit that accumulate at retail branch locations during business hours. After pickup, the material is transported to the local IP operations center. As Merit expanded rapidly in the 1990s, its role as a vendor of IP services for smaller institutions grew substantially in many of its regional markets. This usually took the form of Merit’s local IP facilities taking in transit for various levels of processing from smaller banks, utilities, cable television companies, insurance companies and other businesses with high check volumes.
The addition of external transit to Merit’s IP work stream significantly increased the variability of check processing volumes and has complicated the forecasting of workloads at IP operations centers. Since the large majority of check volumes through the early 1990s had been from Merit-operated retail branches, IP workloads had been considerably more forecastable based on historical business patterns in Merit’s retail branch networks. During most of this period, monthly transactions reports from retail branches were a useful guide in developing courier, encoder and cash letter schedules based on standard personnel scheduling models for minimizing operator idle time (Fitzsimmons and Fitzsimmons, 1998; Starr, 1996). The processing sequence at most of Merit’s IP operations centers is essentially the same. Transit dropped off by couriers is encoded, batched for proofing, sorted by destination bank and cash lettered
to a federal reserve or commercial correspondent bank in the district where the transfer of funds is effected. Encoding of the large majority of individual items is accomplished at workstations where the destination bank, item amount and other data are recorded in magnetic characters on each check. Encoded items are transferred to high-speed reader/sorters that create a photographic record of each check and physically batch checks by the federal reserve or correspondent bank to which they are sent for collection. A cash letter is then generated for each batch, which is sent, along with the paper checks, to the correspondent according to that institution’s schedule of cutoff times for off-peak item handling fee discounts and one-day or two-day collected funds availability. For high volume correspondents, several cash letters can be sent on the same shift. The operating shift at most IP operations centers starts during the late morning and, at some facilities, can run as late as 2:00 a.m. on high-volume weekdays. When courier schedules result in late arrivals of transit to an IP operations center, more checks miss early cash letter deadlines, resulting in higher item handling fees and delayed funds availability. Courier schedules with pickups that are too early may miss large accumulations of transit at branch offices that result from the daily pattern of retail business. These accumulations may then have to wait for pickup on the next business day. During the 1980s, Merit was a pioneer in successfully applying complex, integrated cost models for coordinating courier routing and scheduling, encoder scheduling and correspondent bank fee and availability schedules to support IP operations management (see Malmborg and Lutley, 1989; Malmborg and Simons, 1989).
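The cost pressure created by these cutoff schedules can be illustrated with a toy calculation. The cutoff hours, per-item fees and float rate below are invented for illustration and do not represent Merit’s or any correspondent’s actual schedule:

```python
# Toy model of a correspondent bank's cash letter schedule: batches dispatched
# before an earlier cutoff earn an off-peak fee discount and faster funds
# availability. All figures here are hypothetical.
def cash_letter_cost(dispatch_hour, items, avg_item_value, daily_rate=0.0002):
    """Return handling fees plus the cost of delayed funds availability
    for one cash letter, or None if every cutoff is missed."""
    cutoffs = [  # (latest dispatch hour, fee per item, days until funds available)
        (14, 0.010, 1),  # early cutoff: discounted fee, one-day availability
        (20, 0.015, 2),  # late cutoff: full fee, two-day availability
    ]
    for latest, fee_per_item, days in cutoffs:
        if dispatch_hour <= latest:
            fees = items * fee_per_item
            float_cost = items * avg_item_value * daily_rate * days
            return fees + float_cost
    return None  # held over to the next business day
```

Under these assumed figures, a 1,000-item letter of $500 checks costs $110 if dispatched by 2:00 p.m. but $215 at 6:00 p.m., which is the kind of penalty that makes late courier arrivals expensive.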
However, these tools had not been effective during much of the 1990s as IP operations were expanded to serve more external clients and item work streams from acquired institutions were merged for processing in the same facilities. Concurrently, rising competition for market share among federal reserve and commercial correspondent banks was fostering new marketing strategies in the industry for check clearing services, resulting in lower unit item handling charges. However, Merit had not been successful in exploiting this opportunity due to its inability to adapt IP operations to more frequent and rapid changes in the timing of off-peak item handling fee discounts and funds availability schedules offered by its correspondents. Recognizing the importance of operations to overall competitiveness, Merit management began to focus on alternative, more conventional methods for improving the efficiency of document retrieval operations (Chase and Hayes, 1991; Haksever et al., 2000; Kotter et al., 1986).
SETTING THE STAGE

Merit’s overall item processing costs had not decreased significantly during the 1990s despite two major investments in information technology upgrades meant to exploit higher operating volumes. Among these was the acquisition of automated encoding technology. Although it was hoped that these systems could be cost justified within a five-year period, the initial motivation in adopting the new technology was strategic. Merit sought to position itself as a leading IP services provider in most of the regional areas where it operated a retail branch network. The automated encoding systems were intended to displace manual encoding stations where part-time hourly operators would read and encode the face amount of individual checks for subsequent processing by reader/sorters. Early challenges in implementation of automated encoding technology included unacceptably high error rates in the imaging software tools used to interpret handwritten check amounts. During the 1990s, newer generations of imaging software had become increasingly robust. By the late 1990s, the automated encoding system was capable of processing the majority of Merit’s transit work stream with a level of speed and accuracy sufficient to relegate manual encoders to the supporting role of handling exception items. Although this improvement did reduce encoding labor costs and the direct dependence of encoder scheduling on courier scheduling, the investment in automated encoding software and hardware technology had not proven cost effective. IP operating costs had not been significantly reduced after the five-year start-up period as initially envisioned, even though imaging software and automatic encoders had significantly increased processing rates and potential encoding capacity.
Given rising check volumes and the de-coupling of courier scheduling and encoder scheduling, Merit executives believed that the investment could be better adapted to achieve meaningful cost savings in IP operations. Consequently, IP was targeted as a priority area for restructuring during 1999, with an emphasis on improving business process flows on the part of the organization (Anupindi et al., 1999). A second information technology upgrade identified as a potential source of IP cost reductions was the Branch Operations Reporting and Information System (BORIS). BORIS is a proprietary software system developed by Merit to track retail transactions at the branch level. It was originally developed and rolled out to branches in 1998 as a general-purpose operations management and marketing support system to track business activity and build branch activity databases. A key feature available in BORIS is a code value assigned by tellers during customer transactions indicating the type of transaction processed. This parameter can be used to identify transactions resulting in checks for collection from banks outside the regional collection district and thereby identify transit accumulations sensitive to the timing of courier stops at a branch office. BORIS was not fully successful in achieving some of its original objectives, including support of courier scheduling and auditing of branch performance for timely proofing of transit bags for courier pickup. Delays in transit proofing by branch managers were believed to be one factor contributing to under-performing IP operations. Aside from its investments in information technology upgrades, Merit identified business trends that were favorable to achieving cost savings in IP operations. The most obvious was the consolidation of IP operations associated with the closure of processing centers operated by acquired banks as well as some of its own satellite encoding centers.
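The BORIS transaction code described above lends itself to a simple filtering computation over branch transaction records. The record layout, code values and function below are hypothetical stand-ins for illustration, not BORIS’s actual schema:

```python
# Hypothetical BORIS-style transaction records: (branch, time, code, n_checks).
# A code such as "XD" (assumed here) would mark checks drawn on banks outside
# the regional collection district, i.e., accumulations whose value is
# sensitive to the timing of courier stops. Layout is illustrative only.
OUT_OF_DISTRICT_CODES = {"XD"}

def transit_accumulation_by_branch(records):
    """Total the time-sensitive transit items accumulated at each branch."""
    totals = {}
    for branch, _when, code, n_checks in records:
        if code in OUT_OF_DISTRICT_CODES:
            totals[branch] = totals.get(branch, 0) + n_checks
    return totals
```

A dispatcher-facing tool could then rank branches by these totals when deciding which stops most urgently need a late pickup.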
While aggressive marketing of IP services to regional businesses was beginning to yield some increases in the utilization of the capacity of automated encoding systems, competition in the market for check clearing services was driving down unit handling fees charged by correspondent banks. The barrier to exploiting this development was that this competition also resulted in greater uncertainty and frequency of changes in funds availability and item handling fee schedules. Ironically, the same business changes promising lower IP operating costs were now contributing to Merit’s under-performance. The previous system bottleneck associated with the coupling of courier scheduling and encoder scheduling had just been shifted to a new bottleneck associated with the coupling of courier scheduling and cash letter dispatching.
CASE DESCRIPTION

Challenged to turn around the situation in IP operations, Merit executives identified IP as a strategic opportunity for restructuring and made it a key element of an $80 million cost savings program implemented in 2000. The first step was to focus a team of internal consultants on a typical IP operations center to better understand current operations and define breakthrough opportunities. The key criterion for selecting a regional site was that it had to be representative of IP facilities serving important regional markets. Once the breakthrough opportunities were identified, the team would develop tools and management practices for best-in-class IP operating procedures that could be rolled out to other facilities. Prior to selecting a site, the project team examined the information systems used by Merit’s IP operations managers. The primary system for courier scheduling was the IP Decision Support System (IPDSS). The IPDSS module for courier scheduling was based on a relatively complex, integrated cost model that could be used for interactively optimizing courier schedules based on a static forecast of hourly check volumes by branch, and a cash letter dispatching plan assuming fixed funds availability and per-item handling fee schedules. The system was designed for interactive, simultaneous development of courier and cash lettering schedules that would remain static for a fixed period (weekly to monthly or longer), depending on the variability in item volumes for the branches included on a courier route. From the analysis of the IPDSS, the project team determined that its failure to effectively adapt to new technology and prevailing business conditions was attributable to changes in the relative
importance of different capabilities for managing IP operations, particularly courier schedules. The frequent revision and dynamic development of good-quality courier schedules for check retrieval was clearly more important in 2000 than construction of near-optimal, static schedules based on fixed demands in a branch network with predictable item volumes. It was no longer feasible to directly couple check retrieval and processing with correspondent bank handling fee and funds availability schedules due to the instability of these schedules. Nonetheless, it was obvious that cost reductions from minimizing delays in item retrieval to achieve off-peak discounts in check clearing and maximization of funds availability were still fundamental to the success of IP operations. Whatever new innovations correspondent banks were introducing, the key to exploiting them was still the ability to expediently retrieve, process and dispatch checks on the same day that they are received at the branch office (Malmborg, 1999). Consequently, the breakthrough opportunity identified by the project team focused on the networking of check accumulation data by branch from BORIS with the IPDSS, and then developing revised functionality for courier scheduling based on realistic cost and performance models. This innovation, combined with the faster turnaround of transit associated with automated encoding systems, would position Merit’s IP managers to adapt courier schedules to transactions flows in the branch network and rapidly changing cash letter schedules. Using interstitial software to exploit Web connectivity linking BORIS with the IPDSS would enable access to near-real-time data on check volumes by branch. These updates were a means of partially overcoming the non-forecastability of check volumes arising from the merging of transit streams from different institutions, a development that had made short-term item volume forecasting far more accurate relative to long-term forecasting.
This was opposite to the situation existing through the early 1990s, when long-term forecasting was considerably more accurate and volumes coming into a given IP operations center were relatively stable. Knowing the location of the highest item accumulations in the branch network, IP managers could be more effective in dynamically scheduling couriers to minimize pick-up delays. High-speed automated encoding could then make checks available for cash lettering shortly after drop-off at the IP facility. After detailed analysis and brainstorming by the consulting team, it was determined that the key to turning around Merit’s under-performing IP operations was courier scheduling that would assure that checks received at branch offices were cash lettered to correspondent banks on the same day that they were received. At the same time, it could not be assumed that branch personnel could be required to wait for after-hours courier pickups. Therefore, the operational challenge for the team was to develop a general method for scheduling couriers to complete all stops before branch closing times while simultaneously minimizing the volume of checks that had to be held over for pickup on the next day. To develop and test a prototype implementation of this concept, the team selected the Merit IP operations center in Springfield, Massachusetts. The Springfield site was typical of many of Merit’s other facilities in several respects. Work flows were heavily influenced by external transit streams and were highly variable and difficult to predict. Only about 40% of stop locations on courier routes were Merit branches using BORIS. In addition, Springfield was making heavy use of aggressive commercial banks based in New York City for check clearing as well as the regional federal reserve banks. Potentially, BORIS could support dynamic scheduling of couriers by enabling dispatchers to adjust courier routings and departure times on a more frequent basis.
Therefore, Springfield was a good site to test the concept of courier scheduling with the simple goal of minimizing the number of checks held over for next-day pickup. This was a significant departure from the explicit linkage of courier schedules with item accumulation forecasts and cash letter deadlines that was programmed in the IPDSS. The advantage of the simple hold-over volume reduction concept was that it would avoid reliance on outdated assumptions concerning item processing rates, funds availability schedules and correspondent bank fee structures, while exploiting information provided through BORIS to address these criteria in an approximate fashion. High-speed automated encoding would also eliminate the added complication of offsetting courier drop off times to accommodate limitations in encoding capacity.
Another key feature of the Springfield facility was that it resembled other IP facilities by serving two distinct groups of stop locations. The first group included branches and other stop locations in the geographic core of the region. This area had the highest density of stop locations that were served by overlapping courier schedules requiring simultaneous, multiple-vehicle routings. After an investigation of current practices in Springfield, the team determined that the IPDSS was being applied with reasonable effectiveness for scheduling couriers serving this first group of stop locations. In this case, dispatchers took full manual control to direct the route-building process while utilizing the IPDSS to estimate courier travel times and the spacing of courier pickups at individual stop locations (often by different couriers). The compressed timing of large check accumulations, the high density of stop locations and the generally earlier closing times for urban branches reduced this part of the scheduling process to one of simply avoiding obvious errors such as high-mileage routings, early final pickups or redundant stops. Opportunities for improvement seemed limited. The second group of stop locations consisted of suburban and rural branches served by couriers dedicated to mutually exclusive clusters of stop locations. The problem of scheduling couriers serving this group of stop locations involved the determination of a starting time and sequence of stop locations to establish the departure times from branch offices that define a courier schedule. Like the branches in the first group, courier schedules for these branches were generated manually by dispatchers in Springfield using the interactive portions of the IPDSS. These schedules were then printed, implemented, and reviewed periodically when weekly or monthly summaries of check receipts (generated using BORIS) were reported by stop location.
The same courier schedule could remain in effect for several months or longer. The second group of stop locations involved more geographically dispersed branch offices but was still responsible for a high proportion of total item volume. Dispatchers seemed less effective in applying the interactive scheduling tools of the IPDSS to serve these stop locations and reported the process as being more difficult and time-consuming. In particular, pick-up volumes were more sensitive to the schedule since only one or two stops were made at any given location on the same day, suburban branches usually had later closing times, and item accumulations were spread more widely over the operating period. It was not possible to provide frequent stops at individual locations, and simply avoiding redundant stops and inefficient routings was no guarantee against holding over large check accumulations. For these branches, the problem was one of routing couriers to minimize the number of checks accumulating at the branch offices between the final pickup of the day and the branch closing time. The prospect of more dynamic courier scheduling with revision of courier schedules on a weekly or even daily basis seemed more promising for the second group of locations. To achieve this, a method was needed to quickly and automatically generate efficient courier schedules based on near-real-time downloads of transactions data from BORIS. Focusing on the simple criterion of minimizing the number of checks held over, the team carefully analyzed the Springfield operation, which included a total of 217 stop locations and an average of 14 courier schedules per day. For the suburban courier schedules, the operational issues boiled down to setting the time for dispatching each courier from the operations center and the sequence of branch locations for the final pickup.
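The hold-over criterion for these suburban routes can be made concrete with a short sketch. The data structures below (a travel-time table standing in for the IPDSS driving-time model, and per-branch arrival profiles standing in for BORIS transaction data) are illustrative assumptions, not the actual systems:

```python
# Sketch of the hold-over volume objective for one dedicated suburban route:
# checks that arrive at a branch after the courier's final pickup there, but
# before the branch closes, are held over to the next business day.
def holdover_volume(route, depart, travel_min, arrivals, close_time):
    """Sum, over the branches visited in `route`, of check volumes arriving
    after the courier's pickup time at that branch but before it closes."""
    t = depart            # courier clock, minutes from start of the window
    prev = "CENTER"       # route begins at the IP operations center
    held_over = 0
    for branch in route:
        t += travel_min[(prev, branch)]  # drive from the previous stop
        held_over += sum(volume for when, volume in arrivals[branch]
                         if t < when <= close_time[branch])
        prev = branch
    return held_over
```

A scheduler would compare candidate routings and departure times by this figure, preferring the combination that leaves the fewest checks behind.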
Estimating driving times between branches was not an issue of concern since this functionality was already available within the existing IPDSS. Driving routes, pick-up time allowances and an electronic map were used by the IPDSS database to compute travel time estimates with an accuracy of approximately +/- 4% based on weighting factors for route length, local area density and traffic patterns. Generating the software interfaces and coding to compute expected delay times using the IPDSS driving time models and transactions data from BORIS would be a simple matter for programmers once the basic scheduling method was established. Based on two fundamental relationships, the team concluded that development of a simple but reliable method for scheduling couriers to minimize the volume of checks held over at retail branches provided the most leverage for improving under-performing IP operations. First, minimizing hold-over volume was the key to utilizing the high item throughput capacity provided through automated
encoding systems for just-in-time processing. These systems could easily encode a high volume of checks arriving relatively late in the day, leaving ample time for meeting most early cash letter deadlines. Second, maximizing the volume of checks cash lettered to correspondent banks on the same day of receipt at the branch was the prerequisite for exploiting item handling fee discounts and funds availability schedules offered by correspondent banks. Refining this scheduling concept and validating its effectiveness for the Springfield operation, and then rolling it out to other regional IP operations centers, had the potential to contribute directly and significantly to the $80 million cost savings target established for Merit’s retail division.
CURRENT CHALLENGES/PROBLEMS FACING THE ORGANIZATION

There were four factors pressuring the team to achieve a quick success in turning around the Springfield IP operation. The first was a widespread recognition of the importance of retail operations to Merit’s balance sheet. As a source of low-cost operating funds and direct revenues, retail operations were a central part of Merit’s business strategy and were apt to remain so for the foreseeable future. Growing Merit’s market share as a provider of IP services was an integral part of this strategy. The articulation of this strategy as one aimed at achieving “best-in-class” performance in key areas made it all the more essential for the team to show results. The second factor was management’s emphasis on restructuring IP operations to achieve near-term cost reductions. There was an expectation that a significant portion of the 1999 cost savings target of $80 million would be achieved through improvements in IP operations. The team’s project was among the most visible initiatives in this area. The third factor was a decreasing level of institutional patience with a lack of return on investments in information technology tools designed to improve the efficiency of retail operations. The high expectations associated with the rollout of BORIS a year earlier were contributing to a growing concern that the system could prove to be a costly failure. A significant cadre of managers who had supported its development were eager to see its reputation rehabilitated. The fourth factor was a problem created by the team itself. This stemmed from overselling the concept that a simpler approach to managing IP operations would prove more effective than the traditional operations strategy. To build support and buy-in for its plan, the team had to convince management stakeholders that scheduling couriers to minimize hold-over volume could turn around under-performing IP operations.
This tended to oversimplify the problem by under-emphasizing other changes needed in the way that Merit’s IP operations were managed. The result was a perception that the problem was considerably easier than it actually was and that results should be forthcoming in the short run.

There were also a number of implementation-related and technical problems associated with the primary user community of courier dispatchers. The chief technical problem was designing a scheduling method that would work effectively for IP centers throughout the Merit system, where there was wide variation in the urban/suburban mix of courier schedules, the average number of stops on courier routings and other factors influencing the scheduling problem. The method had to feature an initial schedule development routine that was mostly independent of manual user inputs and then provide an efficient interface for schedule adjustment that exploited the domain knowledge of dispatchers. In addition, a clear understanding of the core scheduling logic of the system was seen as a key to gaining credibility and, ultimately, acceptance among the dispatcher community. Recognizing the time pressure under which most dispatchers were working, the team had to sell the concept that the new tool would produce immediate results by better exploiting the computational power of the IPDSS. Finally, there was the problem of reliance on the somewhat discredited BORIS as a key source of inputs. The user community was skeptical of the idea of adapting databases within BORIS to support the IPDSS. The team had to demonstrate that the use of this data would produce schedules of sufficient quality with respect to hold-over times to either replace or at least benchmark existing schedules.
Recognizing these implementation issues, the technical approach developed by the team was to use Monte Carlo simulation for generating courier schedules within starting time windows provided by local dispatchers. The technique used “breadth” and “depth” parameters, where the breadth parameter corresponded to the number of different branch sequences to generate for a particular courier schedule and the depth parameter represented the extent to which sequences were “improved” prior to fixing a schedule. For example, for a breadth parameter of 50 and a depth parameter of 1,000, the procedure would generate 50 random routings (branch sequences), and then evaluate the expected volume of checks held over for each sequence using data from BORIS and the travel time calculation functions of the IPDSS. Any sequence that was infeasible (i.e., could not be completed before a given deadline) would be discarded and replaced with another one until 50 feasible random routings were obtained. For each of these 50 random routings, the improvement step would be applied 1,000 times. The improvement step involved exchanging the locations of two randomly selected stops in the routing, and then reevaluating the corresponding hold-over volume. If the exchange yielded an improvement (i.e., a lower hold-over volume) and still met the feasibility criteria, it was retained; otherwise it was discarded. The IPDSS could be used to print any dispatcher-specified subset of the 50 improved schedules, and its existing schedule editing functions could be applied to evaluate and implement dispatcher-defined changes to any schedule prior to printing. The team’s concept was therefore to use real-time transactions data from BORIS together with the schedule building and editing utilities of the IPDSS. The method was practical, simple and would scale up efficiently for courier routes with a very large number of stop locations.
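The breadth/depth procedure described above can be sketched in a few lines. This is a hedged illustration, not the IPDSS implementation: `evaluate` and `is_feasible` are hypothetical stand-ins for the BORIS data feeds and the IPDSS travel-time and deadline checks.

```python
import random

def simulate_schedules(branches, evaluate, is_feasible,
                       breadth=50, depth=1000, seed=None):
    """Breadth/depth Monte Carlo heuristic for courier routings.

    `evaluate(seq)` returns the expected hold-over volume of a branch
    sequence; `is_feasible(seq)` checks the cash letter deadline. Both are
    caller-supplied stand-ins for the BORIS/IPDSS functions in the text.
    """
    rng = random.Random(seed)
    results = []
    for _ in range(breadth):
        # Draw random routings until one meets the deadline constraint.
        while True:
            seq = list(branches)
            rng.shuffle(seq)
            if is_feasible(seq):
                break
        best_seq, best_cost = seq, evaluate(seq)
        # Improvement step: exchange two randomly selected stops and keep
        # the exchange only if it lowers hold-over volume and stays feasible.
        for _ in range(depth):
            i, j = rng.sample(range(len(best_seq)), 2)
            candidate = list(best_seq)
            candidate[i], candidate[j] = candidate[j], candidate[i]
            if is_feasible(candidate):
                cost = evaluate(candidate)
                if cost < best_cost:
                    best_seq, best_cost = candidate, cost
        results.append((best_seq, best_cost))
    # The dispatcher could then inspect any subset of the improved schedules.
    return sorted(results, key=lambda r: r[1])
```

Because only improving, feasible exchanges are retained, each of the `breadth` routings can never get worse during its `depth` improvement passes, which matches the accept/reject rule the team used.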
It can be illustrated using the sample data set with the check accumulation rates and courier driving times summarized in Table 1. The sample problem has eight branches that operate over the same eight-hour period each day. The daily check volume associated with each branch office (branch office j is designated Bj) is obtained by totaling the volume in each hour over the eight-hour operating period:

Branch Office:         B1    B2    B3    B4    B5    B6    B7    B8
Daily Total (checks):  913   1364  1862  2271  2457  2479  2157  1952

These values are obtained by multiplying the estimated number of checks accumulating per minute in each hour of operation by 60 minutes per hour, e.g., 913 = 60(2.29 + 1.21 + 2.10 + 1.65 + 2.03 + 2.94 + 1.48 + 1.51). With a breadth parameter of two and a depth parameter of five, a total of 10 schedules would be examined. To illustrate, suppose the following two sequences, each starting and finishing at the item processing operations center (IPOC), define the two randomly generated branch sequences:

Sequence 1: IPOC-B2-B7-B6-B3-B5-B1-B4-B8-IPOC
Sequence 2: IPOC-B3-B6-B8-B1-B7-B2-B5-B4-IPOC
The first step would be to evaluate each of these sequences. The schedule driving times are obtained using the driving time data from Table 1, and the schedules are generated by setting the schedule starting time to allow exactly enough time for a courier to complete the schedule. That is, the schedule starting time is set to allow the courier to arrive at the IPOC at the end of the eight-hour (480-minute) operating period. For example, the schedule associated with sequence 1 starts at minute 235 in order to complete at minute 480. The schedule associated with sequence 2 starts at minute 222 in order to complete at minute 480. For each stop on each schedule, the pickup volume is computed by tabulating the number of checks that arrive at the branch office prior to the pickup time. The hold-over volume is then obtained as the difference between the total daily volume and the pickup volume. The calculations for sequences 1 and 2 are summarized below:

Sequence 1:
Stop:         IPOC  B2    B7    B6    B3    B5    B1    B4    B8    IPOC
Stop Time:    235   280   315   351   394   428   434   438   464   480
Pickup Vol.:  -     739   1264  1792  1571  2225  843   2033  1882  -
Hold Over:    -     625   893   687   291   232   69    238   70    -
Sequence 2:
Stop:         IPOC  B3    B6    B8    B1    B7    B2    B5    B4    IPOC
Stop Time:    222   248   291   337   357   366   401   455   463   480
Pickup Vol.:  -     904   1326  1300  724   1615  1137  2381  2129  -
Hold Over:    -     958   1153  652   188   542   227   76    142   -
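The pickup-volume arithmetic can be reproduced with a short, self-contained script. The per-minute accumulation rates are copied from Table 1; function and variable names are illustrative rather than taken from the IPDSS.

```python
# Check accumulation rates (checks per minute) for each hour of operation,
# branches B1..B8, as tabulated in Table 1.
RATES = {
    "B1": [2.29, 1.21, 2.10, 1.65, 2.03, 2.94, 1.48, 1.51],
    "B2": [4.29, 1.48, 2.17, 2.96, 2.12, 4.35, 2.31, 3.05],
    "B3": [5.31, 2.98, 2.03, 4.43, 2.34, 7.00, 3.70, 3.24],
    "B4": [8.55, 3.51, 2.24, 4.40, 2.54, 6.72, 4.22, 5.67],
    "B5": [8.11, 4.08, 2.41, 5.03, 2.14, 10.20, 4.52, 4.46],
    "B6": [10.04, 3.34, 2.33, 4.65, 2.09, 8.77, 4.67, 5.47],
    "B7": [7.52, 2.83, 2.16, 4.84, 1.93, 7.14, 4.92, 4.61],
    "B8": [6.72, 2.32, 2.19, 4.30, 2.27, 6.27, 4.08, 4.38],
}

def pickup_volume(branch, minute):
    """Checks accumulated at `branch` by `minute` of the 480-minute day."""
    full_hours, remainder = divmod(min(minute, 480), 60)
    rates = RATES[branch]
    volume = 60 * sum(rates[:full_hours])   # complete hours of accumulation
    if full_hours < 8:
        volume += remainder * rates[full_hours]  # partial current hour
    return volume

# Arrival times for Sequence 1 (start at minute 235, driving times per Table 1).
arrivals = {"B2": 280, "B7": 315, "B6": 351, "B3": 394,
            "B5": 428, "B1": 434, "B4": 438, "B8": 464}
pickups = {b: round(pickup_volume(b, t)) for b, t in arrivals.items()}
# e.g., pickups["B2"] is 739 and pickups["B7"] is 1264, as in the text.
```

Rounding the computed volumes reproduces the Sequence 1 pickup figures, except that B6 comes out at 1794 rather than the 1792 printed in the text, which appears to be a small rounding or transcription difference in the source data.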
Thus, total hold-over volumes of 3106 and 3938 checks result from schedules 1 and 2 respectively. To perform the second, “improvement” step in the scheduling procedure, two branches in a sequence are randomly interchanged. If the hold-over volume is reduced, the interchange is retained; otherwise it is discarded. The results below summarize the improvement steps for each of the two initial sequences for a depth parameter of five:

Initial Sequence 1: IPOC-B2-B7-B6-B3-B5-B1-B4-B8-IPOC (hold-over volume of 3106)

Interchange:  Adjusted Sequence:                  Hold Over:    Retain/Reject:
B1 & B3       IPOC-B2-B7-B6-B1-B5-B3-B4-B8-IPOC   3696 > 3106   Reject
B2 & B5       IPOC-B5-B7-B6-B3-B2-B1-B4-B8-IPOC   4205 > 3106   Reject
B7 & B5       IPOC-B2-B5-B6-B3-B7-B1-B4-B8-IPOC   3007 < 3106   Retain
B4 & B6       IPOC-B2-B5-B4-B3-B7-B1-B6-B8-IPOC   3801 > 3007   Reject
B1 & B8       IPOC-B2-B5-B6-B3-B7-B8-B4-B1-IPOC   3090 > 3007   Reject

Initial Sequence 2: IPOC-B3-B6-B8-B1-B7-B2-B5-B4-IPOC (hold-over volume of 3938)

Interchange:  Adjusted Sequence:                  Hold Over:    Retain/Reject:
B4 & B7       IPOC-B3-B6-B8-B1-B4-B2-B5-B7-IPOC   4232 > 3938   Reject
B2 & B3       IPOC-B2-B6-B8-B1-B4-B3-B5-B7-IPOC   3440 < 3938   Retain
B1 & B5       IPOC-B2-B6-B8-B5-B4-B3-B1-B7-IPOC   3430 < 3440   Retain
B6 & B8       IPOC-B2-B8-B6-B5-B4-B3-B1-B7-IPOC   3281 < 3430   Retain
B1 & B4       IPOC-B2-B8-B6-B5-B1-B3-B4-B7-IPOC   3136 < 3281   Retain

Thus, the best schedule obtained using a breadth parameter of two and a depth parameter of five is IPOC-B2-B5-B6-B3-B7-B1-B4-B8-IPOC, with a total of 3007 checks held over.
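Two details of the worked example are easy to verify mechanically: the latest feasible starting time (the courier must return to the IPOC at minute 480) and the pairwise interchange that produced the retained schedule. A minimal sketch, with driving times copied from Table 1 and illustrative function names:

```python
# Driving times (minutes) for the legs of Sequence 1, taken from Table 1.
DRIVE = {
    ("IPOC", "B2"): 45, ("B2", "B7"): 35, ("B7", "B6"): 36,
    ("B6", "B3"): 43, ("B3", "B5"): 34, ("B5", "B1"): 6,
    ("B1", "B4"): 4, ("B4", "B8"): 26, ("B8", "IPOC"): 16,
}

def start_time(route, day_end=480):
    """Latest start: day's end minus total driving time over all legs."""
    total = sum(DRIVE[leg] for leg in zip(route, route[1:]))
    return day_end - total

route1 = ["IPOC", "B2", "B7", "B6", "B3", "B5", "B1", "B4", "B8", "IPOC"]
print(start_time(route1))  # 235, as in the text

def interchange(route, a, b):
    """One improvement step: exchange the positions of stops a and b."""
    r = list(route)
    i, j = r.index(a), r.index(b)
    r[i], r[j] = r[j], r[i]
    return r

# The retained B7 & B5 exchange yields the best schedule found above.
print(interchange(route1, "B7", "B5"))
# ['IPOC', 'B2', 'B5', 'B6', 'B3', 'B7', 'B1', 'B4', 'B8', 'IPOC']
```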
APPENDIX

Table 1: Summary of Data for Test Problem

Estimated Driving Times (in minutes):

             Proc. Ctr.  #1   #2   #3   #4   #5   #6   #7   #8
Proc. Ctr.   -           17   45   26   29   17   72   13   16
Branch #1    17          -    40   27   4    6    26   9    20
Branch #2    45          40   -    29   54   48   106  35   30
Branch #3    26          27   29   -    35   34   43   20   10
Branch #4    29          4    54   35   -    8    40   12   26
Branch #5    17          6    48   34   8    -    37   17   28
Branch #6    72          26   106  43   40   37   -    36   46
Branch #7    13          9    35   20   12   17   36   -    12
Branch #8    16          20   30   10   26   28   46   12   -

Item Accumulation Averages Per Minute for Each Hour of Operation:

Branch No.:  1     2     3     4     5     6      7     8
Hour 1       2.29  4.29  5.31  8.55  8.11  10.04  7.52  6.72
Hour 2       1.21  1.48  2.98  3.51  4.08  3.34   2.83  2.32
Hour 3       2.10  2.17  2.03  2.24  2.41  2.33   2.16  2.19
Hour 4       1.65  2.96  4.43  4.40  5.03  4.65   4.84  4.30
Hour 5       2.03  2.12  2.34  2.54  2.14  2.09   1.93  2.27
Hour 6       2.94  4.35  7.00  6.72  10.20 8.77   7.14  6.27
Hour 7       1.48  2.31  3.70  4.22  4.52  4.67   4.92  4.08
Hour 8       1.51  3.05  3.24  5.67  4.46  5.47   4.61  4.38
FURTHER READING
Crainic, T.G., & Laporte, G. (1998). Fleet Management and Logistics. Hingham, MA: Kluwer.
Evans, J.R., & Olson, D.L. (2000). Introduction to Simulation and Risk Analysis. Upper Saddle River, NJ: Prentice Hall.
Fishman, G.S. (1996). Monte Carlo: Concepts, Algorithms and Applications. New York, NY: Springer-Verlag.
Golden, B.L., & Assad, A.A. (1988). Vehicle Routing: Methods and Studies. Amsterdam, The Netherlands: Elsevier Science Publishers.
Tapiero, C.S. (1998). Applied Stochastic Models and Control for Finance and Insurance. Hingham, MA: Kluwer.
REFERENCES
Anupindi, R., Chopra, S., Deshmukh, D., Van Mieghem, J., & Zemel, E. (1999). Managing Business Process Flows. Upper Saddle River, NJ: Prentice Hall.
Chase, R.B., & Hayes, R.H. (1991). Operations’ role in service firm competitiveness. Sloan Management Review, 33(1), 15-26.
Fitzsimmons, J.A., & Fitzsimmons, M.J. (1998). Service Management: Operations, Strategy and Information Technology. Boston, MA: Irwin/McGraw-Hill.
Haksever, C., Render, B., Russell, R.S., & Murdick, R.G. (2000). Service Management and Operations. Upper Saddle River, NJ: Prentice Hall.
Kotter, J.P., Schlesinger, L.A., & Sathe, V. (1986). Management of Service Operations. Homewood, IL: Irwin.
Malmborg, C.J., & Lutley, R. (1989). A PC based system for financial transit retrieval operations. Industrial Engineering, 21(12), 30-36.
Malmborg, C.J., & Simons, G.R. (1989). Integrating logistical and processing functions through mathematical modelling. Applied Mathematical Modelling, 13(6), 357-364.
Malmborg, C.J. (1999). Current modeling practices in bank courier scheduling. Applied Mathematical Modelling, 24(4), 315-325.
Starr, M.K. (1996). Operations Management: A Systems Approach. New York, NY: International Thompson Publishing Company.
BIOGRAPHICAL SKETCH Charles J. Malmborg is Professor and Acting Chair in the Department of Decision Sciences and Engineering Systems and Professor of Information Technology at Rensselaer Polytechnic Institute in Troy, New York. His research interests are in material flow logistics, facility design and operations management. He teaches courses in Rensselaer’s information technology, industrial engineering, management of technology distance learning and executive MBA programs. He is author or co-author of more than 60 refereed technical papers in leading international journals and has served as Project Director and Principal Investigator on over $2 million in sponsored research and educational projects.
The Dilemma of Dairy Farm Group: Between Redesigning of Business Processes and Rebuilding of Management Information Systems

Eugenia M. W. Ng, Hong Kong Institute of Education, Hong Kong
Ali F. Farhoomand, University of Hong Kong, Hong Kong
Probir Banerjee, City University of Hong Kong, Hong Kong
EXECUTIVE SUMMARY
The Dairy Farm Group of Companies (DFG) is a leading food and drugstore retailer in the Asia-Pacific region. DFG and its associates operated supermarkets, hypermarkets, convenience stores and drugstores in nine territories and had sales of US$6.9 billion in 1997. However, the profit margin of DFG was low compared to those of its competitors in Hong Kong and China and of other retailers in Europe and the U.S. Consequently, a new chief executive officer was hired in June of that year. The new management team hired the services of two consulting firms to independently carry out a preliminary investigation of existing systems at DFG and to recommend solutions. Firm A stressed primarily the development of a management information system and the use of emerging trends in technology, while Firm B focused on the re-engineering of crucial business processes with supporting technology. If you were the management team, which firm would be awarded the contract?
Copyright © 2002, Idea Group Publishing.

BACKGROUND
The Dairy Farm Group of Companies (DFG; http://www.dairyfarmgroup.com/dfarm_graphic/corporate/default.html) is a major Hong Kong-based food retailer with operations in a large number of major cities in the Asia-Pacific region. DFG’s shares were listed on the Hong Kong, Singapore, Bermuda, London and New York Stock Exchanges. The primary share listing of the parent company, Dairy Farm International Holdings Limited, was in London, and the bulk of its shares were traded in Singapore. The Company was incorporated in Bermuda and its businesses were managed from Hong Kong by Dairy Farm Management Services Limited through regional offices in Asia and Australasia. Fifty-five per cent of DFG’s shares were owned by Jardine Matheson Ltd. and the balance was held by the public (http://www.dairyfarmgroup.com/dfarm_graphic/corporate/annual/97/ann10.htm).

Sir Patrick Manson, a Scottish surgeon, and five prominent Hong Kong businessmen formed Dairy Farm in 1886 with the objective of supplying cow’s milk to the people of Hong Kong. In 1904, the Company began importing frozen meat and opened its first retail store at the Central District depot, and by 1957, it had three retail stores and had started expanding its product range, marking the start of its transformation into a major food retailer and distributor. By 1986, its centenary year, it had more than 300 retail outlets, including the Wellcome grocery chain, and had become a leading force in the manufacturing, wholesaling and distribution of dairy and other food products in the Pacific region and in China. In the same year, the Company acquired a 50 per cent interest in the Maxim’s chain of restaurants in Hong Kong. In 1987, it acquired 25 per cent of Kwik Save Group plc, the sixth-largest grocery retailer in the UK, and commenced supermarket operations in Taiwan. The Company subsequently acquired 228 branches of the 7-Eleven convenience stores from Jardine Matheson in 1989, followed by the acquisition of the 108-store Simago chain in Spain and the 61-store Woolworths chain in New Zealand in 1990. Other major expansion activities included the establishment of a 49 per cent-owned joint venture with Nestlé to develop dairy factories throughout China in 1992, the acquisition of the 142-store Cold Storage chain in Singapore in 1993, and the establishment of a 50/50 joint venture with Cold Storage and of joint ventures to develop supermarkets and discount stores in Malaysia and Japan in 1994 and 1995 respectively.
In 1995, it also signed agreements with the Hero group in Indonesia and the RPG group in India to manage and develop supermarket chains in the two countries. In 1996, Guardian pharmacy joint ventures were established in Malaysia and India, and a 51/49 supermarket joint venture was formed in Sichuan. By 1997, it had operations in all major cities in the Asia-Pacific region, Australia, New Zealand and Europe, with the Asia-Pacific being the most profitable region (Exhibit 1)¹. In order to concentrate on its core retailing business in the Asia-Pacific region, DFG disposed of its 49 per cent interest in Nestlé Dairy Farm to Nestlé and decided to close down the loss-making Mannings drugstores in Taiwan and the Wellsave discount stores in Japan. As of December 31, 1997, DFG operated 1,352 outlets, principally supermarkets, convenience stores and drugstores, and employed some 45,600 people. It had also entered the restaurant business through a 50 per cent interest in Maxim’s Caterers Limited, Hong Kong’s largest restaurant and catering company, with more than 300 outlets in Hong Kong and Mainland China. The reported sales and profit figures for the year 1997 were US$6.9 billion and US$154 million respectively (Exhibit 2).

While DFG operated on a sales-to-profit margin of 2.23 per cent in 1997 and 1.98 per cent in 1996, competitor A.S. Watson Group, comprising the Park’N Shop, Watsons and Fortress chains of stores in Hong Kong and China, had a 9.22 per cent sales-to-profit margin in 1997, up from 5 per cent in 1996. Another competitor, US-based Wal-Mart, reported a 2.92 per cent margin in 1997. The profit margin of DFG was thus low compared to those of its competitors in Hong Kong and China and of other retailers in Europe and the U.S. DFG’s business mission was: “To be the leading food and drug store operator in sales and shareholder value creation in Asia-Pacific”.
To successfully pursue its business mission, it was crucial for DFG to redefine its business strategy towards “sensing and responding to customer needs” as opposed to the traditional “buying and selling”. In order to retain its dominant position in the Asia-Pacific region, DFG had decided on a new business strategy. Firstly, the strategy entailed defending its existing markets through a process of rationalisation focused on the disposal, or closure, of its non-core operations. Such a process was expected to put DFG in a position to expand its core operations and become a dominant player in each of its chosen markets. A combination of acquisition and rapid establishment of new formats, funded from its cash-rich position, was decided upon. The second component of the strategy was to respond to the increased competitive threat through changes to its organisational structure, and in this respect it had decided to de-federate its businesses and operate as a single entity wherever possible. Such a move, it was felt, would enable DFG to capitalise on both its size and resources and work cohesively as a single company with a single vision, identity and brand where appropriate. The third dimension of the strategy was to improve on its market share and customer base through exploiting new markets and opportunities consistent with its core market capitalisation strategy.
SETTING THE STAGE

Existing Systems
The existing systems within DFG were built around business functions to provide transaction information rather than information for management or decision making. Basically, there were two classes of systems, namely, the Store Systems and the Operational Systems.

Store Systems
These systems were those deployed at the retail stores of DFG’s various business units. The retail store applications were point-of-sale systems, procured from several vendors and customised for each store’s requirements. Each store had its own local area network (LAN) on which the store system was implemented. The store systems did not interface with any of DFG’s operational systems. Some stores had optical scanners, while others had cash registers at the customer checkout points. Sales data was transmitted to the related business unit data centres through fax and modem for the purpose of consolidation and operational decisions. Historical data storage at the store level was minimal, owing to the high volume of daily transactions.

Operational Systems
The operational systems that supported DFG’s retail operations were the following:
• Central Merchandise Management, which included Item and Vendor Management, Pricing and Promotions, Stock Management, Trading Terms/Costs and Store Replenishment
• Financial and Accounting Management
• Inventory Management
• Warehouse and Distribution Management
• Human Resource Management
Operational systems in DFG’s business units were large in scale, complex in operation and business-critical in nature. Each business unit had its own merchandising, inventory and warehousing systems, implemented on diverse hardware and software platforms. Managers did not have direct access to the database on the mainframe. Some standard reports were generated for management analysis, but any new reports required coding by programmers, and the turnover rate of IT staff had been fairly high, as elsewhere in Hong Kong.
Problems With Existing Systems
Historically, information systems developers had built systems to meet the requirements of specific business functions within an organisation, and DFG was no exception. It had a wide range of disparate, independent application systems, each built around a specific business function such as finance, merchandising or warehousing. These systems were closed walls, with no provision for information exchange across business functions. Furthermore, as DFG expanded its operations into various countries in the Asia-Pacific, the applications had to be customised to adapt to the multilingual and multicultural environments of those countries. As a result, several different versions of the software existed, significantly increasing the maintenance overhead. Additionally, many business processes within DFG were manual and inefficient, thereby slowing information processing and decision-making. Store replenishment was an example of one such process that required considerable manual action and intervention.
Another handicap with DFG’s systems was that the store systems were not designed to capture the full details of customer transactions. As such, the customer database allowed only vague notions about the customer to be developed. The anonymity of cash transactions, personal privacy laws and the sheer volume of transactions prevented DFG and other similar retailers from developing the intimate customer knowledge that was taken for granted in many other industries. This lack of customer knowledge was seen as a formidable barrier to realising the quantum-level leap that DFG intended to achieve in “sensing and responding” to customer needs.

There was also a dearth of information for management decisions. The relatively scant information that was available was fragmented, voluminous, spread across diverse formats and not current. As such, it was difficult to integrate or analyse, and sometimes important information was not available at all. As an example, calculating the sales opportunity lost at the stores through stock outages was not possible because such data was not captured. Information was made available exclusively in paper form, and to analyse it in any other way required that it be entered into a spreadsheet and modified into a new structure.

Like other retail businesses, DFG’s business was also highly distributed. Mobile employees, such as travelling salesmen and those in the warehouse and distribution functions, had problems accessing corporate information from remote locations. All DFG employees were tied to specific locations in order to access corporate information resources. In the first instance, this occurred due to the specific nature of the access devices (i.e., IBM 3270/5250 terminals) and their corresponding dedicated networks. Subsequently, access locations had remained fixed because the system security services had not been extended to allow access from more than one location. User identity was often combined with the notion of a fixed, location-dependent network identity.
Problems With Existing Processes
DFG operated as a federated organisation, more by compulsion than by choice. The compulsion stemmed from the fact that a large number of small companies at geographically dispersed locations had been acquired over a period of time. Since there was no communications network, each of these companies operated independently. There was duplication of some assets and waste of human resources. For example, although Wellcome operated in the daytime and 7-Eleven mostly at night, each had its own fleet of trucks for stock movement, thereby increasing transportation costs. In terms of business processes, store replenishment was one process that required significant manual intervention. Each company had its own set of suppliers from whom goods were purchased. This resulted in different prices being paid for the same items because they came from different suppliers, and economies of scale were not possible. There was little power over vendors in terms of negotiating terms of trade, particularly in regard to discounts on bulk purchases. Monitoring of stock “shrinkage” was also a major problem; sometimes excess goods were supplied and charged to the stores. Average stockholding for some items (except fresh food) was about 35 days, against the industry norm of seven days. A consequence of this high inventory cost was that DFG’s business units operated at margins much lower than those of other operators such as Tesco in the UK and Wal-Mart and K-Mart in the U.S. Another business process that wasted human resources was the accounting function. The function being centralised, copies of purchase orders raised by the stores were sent to the Central Office. Copies of goods receipt notes and supplier invoices were also received at the Central Office. Manually matching such a large number of orders with supplier invoices and goods receipt notes was a daunting task, with phenomenal waste of man-hours.
In 7-Eleven alone, 375,000 invoices were matched annually by 120 people.

The shareholders were critical of the extant management’s ability to provide DFG with the direction needed to fulfill its mission. They were worried that 1997 marked the beginning of a slump in retail sales for DFG. The economic crisis that gripped Asian countries during the latter part of 1997 could be one reason for the downturn. A second and more important reason was the increasing competitive pressure that DFG faced from European and U.S. retail chains that were poised to gain a foothold in the Asian market. Consequently, a new CEO was hired in June 1997. The new management team hired the services of two consulting firms through an “open bidding” system to independently carry out preliminary investigations of existing systems at DFG and to recommend solutions. The final contract was to be awarded to the firm whose recommendations were seen as being actionable and directly contributing to the bottom line (i.e., competitive advantage through quantum-level leaps in customer satisfaction and shareholders’ wealth, at significantly reduced costs). The two consulting firms locked horns to win the contract.
CASE DESCRIPTION
After a detailed investigation and analysis of the existing business operations, systems and procedures, and the supporting infrastructure, the two consulting firms submitted the following recommendations:
Firm A
The firm felt that DFG had an excellent network of retail stores and product lines, which gave it a head-and-shoulders advantage over its competitors. The snag was management control, attributable to geographically dispersed, disparate and close-walled systems. It found that significant improvements in DFG’s operations and profitability could be achieved by building a corporate management information system, supported by the right technology and communications infrastructure. This would enable managers to access corporate information in various ways and across all functions, thereby facilitating the decision-making process. The systems could also provide an arena for developing electronic commerce (EC) with business partners and customers. As the telephone density in Hong Kong, at 70 telephones or 55 exchange lines per 100 population, was one of the highest in the world, business-to-consumer service was seen as a sure area for growth. The major recommendations were as follows:

Store Operations
It was felt that DFG could significantly improve sales and customer satisfaction by adopting electronic retailing as a subset of its retailing function. In this respect it suggested that the store operate in two environments, the physical and the virtual (electronic). (A schematic view of the proposed application is shown in Exhibit 3.) Features suggested for inclusion in the electronic store application were as follows:
• It would be a World Wide Web (Web)-based retail system using client-server architecture, aimed at providing an Internet-based home shopping system consisting of an Electronic Store and an Electronic Distribution Centre (E-DC). The E-DC was to provide the services of a fulfillment centre, including product availability, delivery status, physical delivery and product warehousing.
• The application would interface with other functional units linked to the retail operations, such as the head office, distribution centres, vendors, consumers, and the financial institutions for settlement of electronic transactions.
• It would provide full retail capability, similar to that provided by a physical store, to the virtual customer. The functions recommended for inclusion in the application were product browsing, product purchasing, product information, purchase methods, purchase status, purchase history and personalised customer service.

Network Computing and Application Re-Design
The development of the Web and its corresponding Web-server and browser technologies had created an opportunity for organisations such as DFG to re-think the architecture of their applications.
It was recommended that applications, centrally deployed in the mainframe environment, be rewritten and ported on the latest n-tier client server architecture, with the Internet browser (or other thin-client access device) providing the user interface and an application server or information broker as the middle tier. This design model would help in realising the benefits of information access and data sharing. Other benefits of adopting such a topology would be scalability, application maintenance and software component reuse. To support the rewritten applications, it was recommended that an organisational intranet be developed wherein the LANs at the stores would be interconnected with each other through Common Gateway Interfaces (CGI), forming a Wide Area Network (WAN). (See Exhibit 4 for a suggested plan for the communications network). Application Integration and Information Sharing The firm’s investigation revealed that DFG’s applications had developed over time, independent of one another. Stove-piped applications all communicated with their own databases. Little attention had been paid to the requirement to exchange information between applications, or to reliably create new information from data sets derived from two or more applications. The firm recommended that it was important for DFG to have an integrated set of applications within each business unit so as to facilitate information interchange across business functions. In this regard, it was recommended that store applications interface with all the operational systems, such as finance, merchandising, warehousing and human resources. Remote Access DFG had three distinct classes of mobile users with three different requirements for mobile services. They were the telecommuters, travelling employees and task-oriented mobile users working for the warehouse, distribution, inspection departments and the stores. 
It was recommended that these users be provided with secured, location-independent access to corporate information resources. Such facilities had to be usable, secure, manageable and provisioned at acceptable cost. Data Warehouse and Data Mart Like most organisations, DFG only stored transaction records arising from inputs to the operational systems in the course of daily business. It was the only repository of raw data. Analysis of raw data was a time-consuming and tedious process. Data aggregation, collation etc. was a repetitive process in such analysis. The consulting team recommended that DFG should develop Data Warehouse and Data Marts to facilititate data analysis and decision-making. Inventory Management Firm A found that inventory was the largest single cost-factor in DFG’s retail operations. DFG had an average inventory holding of 35 days, as against a norm of seven days for the industry. The high stockpiling was attributed to delayed processing of purchase orders. It was recommended that access points on the proposed corporate intranet be provided to the vendors so that in cases where direct buying by the stores was involved, store orders could be sent to the vendors in EDI formats. Similarly, vendor invoices could also be sent to the Central Office in EDI formats. The method would facilitate faster processing of orders, leading to reduced inventory at warehouses. Additionally, it would enable computerised matching of orders with invoices and thereby significantly reduce manpower overheads. Development of Core Competencies Firm A believed that in seeking to maximise IT effectiveness, an organisation should not seek to minimise the cost of its technology but maximise the effectiveness of its staff. To this end, it recommended that a systematic assessment of the required technical skills be made, and
The Dilemma of Dairy Farm Group 45
necessary training plans and recruitment schemes be charted out so as to build core competence within the organisation.
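Firm A's recommended n-tier topology (a thin client talking only to a middle-tier application server, which alone touches the back-end data tier) can be sketched minimally as follows. This is an illustrative sketch only; all class names, actions and stock items are hypothetical and not drawn from DFG's actual systems.

```python
class DataTier:
    """Back-end data store (stand-in for the existing mainframe databases)."""
    def __init__(self):
        self._stock = {"milk": 120, "bread": 80}

    def query(self, sku):
        return self._stock.get(sku, 0)


class ApplicationServer:
    """Middle tier: business logic and data access live here, so clients
    stay thin and the data tier can be swapped or scaled independently."""
    def __init__(self, data_tier):
        self._data = data_tier

    def handle_request(self, action, sku):
        if action == "stock_level":
            return {"sku": sku, "on_hand": self._data.query(sku)}
        raise ValueError(f"unknown action: {action}")


class ThinClient:
    """Browser-like client: renders responses, holds no business logic."""
    def __init__(self, server):
        self._server = server

    def view_stock(self, sku):
        resp = self._server.handle_request("stock_level", sku)
        return f"{resp['sku']}: {resp['on_hand']} units on hand"


client = ThinClient(ApplicationServer(DataTier()))
print(client.view_stock("milk"))   # milk: 120 units on hand
```

Because each tier depends only on the interface of the tier below, the sketch illustrates why such a partitioning eases maintenance and component reuse: the data tier or the client can be replaced without touching the middle tier's business logic.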
Firm B

Firm B believed that technology alone could not bring competitive advantage unless it was planned in the context of an organisation's present and future business needs. In addition to technology, it placed emphasis on business processes, viewing them as major determinants of an organisation's ability to develop and retain competitive advantage. Accordingly, it saw the identification and re-engineering of business processes as the way to support DFG's business strategy, with IT as the essential enabler (Hammer & Champy, 1993). The ultimate goal was not just to cut cost but to reduce time and improve quality (O'Neill & Sohal, 1998). In this regard, it recommended a technology planning process, the output of which was to be known as "DFG's Technical Architecture": essentially a framework enabling rapid changes in DFG's business processes and its supporting applications by providing a clear definition of endorsed standards, technologies and policies governing their use for DFG as a whole rather than for individual business units. (A schematic representation of the Technology Planning Process is shown in Exhibit 5.)

Technology Planning Process

The recommended technology planning process comprised a business component and a related technical component. The business component consisted of the identification of business processes and the re-engineering of those considered critical to the successful pursuit of DFG's business strategy. The technical component pertained to the identification of technologies to support the business processes and an implementation plan.

Business Component

The business component included the identification and re-engineering of crucial business processes. In addressing this issue, Firm B viewed the entire set of operations within DFG's business units as a set of Business Process Domains (BPDs).
These were essentially logical groupings of business processes and their associated systems dedicated to a common purpose, with the possibility of such groupings being geographically dispersed. Five such BPDs were identified: In-Store Systems, Business Unit Operational Systems, Business Unit Analytical Systems, Group Systems and Core Infrastructure Services (see Exhibit 6 for a schematic view of the BPDs and associated processes). The primary purpose of grouping the processes under BPDs was the ability to specify the desired system characteristics for a BPD as a whole, as opposed to individual processes. Specifically, the following characteristics were considered crucial for the success of systems within each BPD:
• Availability, specifying recovery plans, tolerance for system outages, etc.
• Assurance, specifying security, integrity and credibility requirements
• Usability, specifying user interface requirements
• Adaptability, specifying interoperability and porting requirements
The following systems and business processes were identified for re-engineering:

Store Systems

In respect of store applications, the recommendation was that common store applications be built around standardised technology. The store applications would reside on a back-end server, which would also store transaction data so as to facilitate stock monitoring at the store level. Store LANs would be networked with the data centres and the central office in Hong Kong for data transmission. For scalability, a client-server architecture was proposed for deployment. (See Exhibit 7 for the suggested application partitioning logic.) It was also recommended
46 Ng, Farhoomand & Banerjee
that the store applications be redesigned to ensure that all data conducive to decision-making was captured at the transaction points. The applications would be extended to support new channels of marketing, such as facsimile, telephone, IVR and ITV. It was suggested that Store Systems have the following attributes:
• Share a common information base
• Be integrated
• Support coordinated actions; in-store systems needed to support distributed transactions
• Interface with systems external to the Store Systems
• Be flexible enough to accommodate changing business requirements and operation in multiple geographic territories
• Be scalable from a minimum of two to a maximum of 50 shopping lanes
• Be adaptable to operate across DFG's multiple retail formats
• Be extensible to accommodate new technologies and releases
• Be remotely maintainable
• Integrate with standard DFG management mechanisms

Inventory Management

It was felt that the suggested business strategy of operating as a single entity in a de-federated organisational structure would enable DFG to achieve economies of scale in inventory management. It was recommended that store systems integrate with the operational systems and also be extended to include the buyers and suppliers. A central merchandising system within each business unit would monitor the inventory levels at each store and generate individual store orders. The store orders would then be routed to the respective stores for approval. Approved orders would be consolidated and purchase orders generated to suppliers. For example, if a certain soft drink were running low at a 7-Eleven store and at a Wellcome supermarket, a single order would be put together, but the goods would be delivered to the respective stores. In generating purchase orders, leverage would be obtained in respect of the terms of trade with different suppliers. (A descriptive model of the suggested method is shown in Exhibit 8.)
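The central merchandising flow described above (monitor per-store stock, raise suggested store orders when stock falls below a reorder point, then consolidate approved orders into one purchase order per supplier) can be sketched as follows. The reorder thresholds, store names, products and supplier names are invented for illustration and are not DFG's actual figures.

```python
from collections import defaultdict

# Hypothetical replenishment policy: reorder when stock falls below
# REORDER_POINT, ordering enough to bring it back up to ORDER_UP_TO.
REORDER_POINT = 20
ORDER_UP_TO = 100


def suggest_orders(stock_by_store):
    """stock_by_store: {store: {sku: on_hand}} -> [(store, sku, qty)]."""
    suggestions = []
    for store, stock in stock_by_store.items():
        for sku, on_hand in stock.items():
            if on_hand < REORDER_POINT:
                suggestions.append((store, sku, ORDER_UP_TO - on_hand))
    return suggestions


def consolidate(approved, supplier_of):
    """Group approved store orders into one purchase order per supplier,
    as the merchandising system would before negotiating trading terms."""
    per_supplier = defaultdict(lambda: defaultdict(int))
    for store, sku, qty in approved:
        per_supplier[supplier_of[sku]][sku] += qty
    return {s: dict(lines) for s, lines in per_supplier.items()}


stock = {"7-Eleven #12": {"cola": 5, "milk": 60},
         "Wellcome #3":  {"cola": 12}}
approved = suggest_orders(stock)   # stores approve the suggestions as-is here
pos = consolidate(approved, {"cola": "DrinksCo", "milk": "DairyCo"})
print(pos)   # {'DrinksCo': {'cola': 183}}
```

Note how the two stores' cola orders are merged into a single supplier order (95 + 88 units) while the delivery allocation per store is still known from `approved`, which is the leverage-plus-individual-delivery pattern of the 7-Eleven/Wellcome example.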
Significant benefits were expected to accrue from such a process. It would minimise stock outages at stores while ensuring that the stores maintained optimal stock levels. It would also give DFG additional power to negotiate terms of trade with its suppliers and substantially reduce the number of purchase orders and invoices. The process entailed the integration of the POS, merchandising and warehousing functions across the in-store and the operational BPDs. A future direction recommended in this regard was the integration of the supply chain across business units (Copacino, 1997), with continual stock-level monitoring at individual stores directly by the suppliers, who would then supply stocks as necessary.

Category Management

Traditionally, DFG, like all retailers, maintained separate buying and selling functions. Buying was not fully geared towards customer requirements, and the selling function entailed selling whatever was available in the stores. The result was warehouses carrying large inventories of slow-moving stocks and stock outages at stores for fast-moving items. The establishment of a common process, one that would enable customer requirements to be sensed and catered to, depended to a large extent on detailed, accurate and timely information on customer transactions. While the suggested changes in store operations would enable detailed data capture on customer transactions, it was recommended that data warehousing technology be adopted to facilitate the organisation and analysis of data for marketing to customer segments.

Electronic Commerce

The firm felt that DFG could benefit from advertising its products on the Web. It expressed reservations, however, about buying and selling through the Internet, particularly because of the
security problems inherent in electronic transactions. It believed it would be a long time before consumers felt comfortable giving their credit card information on the Internet to settle Internet-based transactions.

Technical Component

For DFG to evolve from a federation to a group business structure as per its business strategy, significant additional investment in a common supporting infrastructure linking the various business units was required. The level of linkage required between the business units, and the support required to integrate the supply chain across business units, depended upon successful deployment of the services under the Core Infrastructure Services. (See Exhibit 9 for the applications planned for deployment within this BPD.) A corporate-wide area network providing group-wide information systems through interconnections across the various BPDs identified within DFG was recommended. Over time, the Core Infrastructure Services BPD was expected to develop into the foundation for distributed computing within DFG and hence provide group-wide information management capability. It was initially planned to extend only up to the boundary of each business unit so as to provide inter-business services. As the functions available within the BPD became more relevant to the information systems within business units, the domain was planned to be extended to provide intra-business-unit services. To facilitate this progression, all internal services pertaining to a business unit were required to be provided in accordance with the standards and technologies selected for the Core Infrastructure Services BPD. A key function of the Core Infrastructure Services was to provide enterprise-wide system security, integrity and credibility, particularly within the In-Store BPD.
Some of these services were to be invoked explicitly by application programmes, while others were to be used implicitly through operating system, database, communications, security or message-based components. For each BPD, the Core Infrastructure Services also provided for continuous or near-continuous system availability, interoperability with other systems, usability in terms of user interfaces, and so on.
Technology Planning Team

To carry out the recommended technology planning, it was proposed that a Technical Architecture Programme Group (TAPG) be established under the auspices of a Technology Strategy and Architecture Group (TA Team), the body formally established to conceive, develop and maintain DFG's Technical Architecture. TAPG membership would be drawn from three groups: the DFG TA Team, DFG's Technology Partners and consultant organisations with specific technology expertise.
CURRENT CHALLENGES/PROBLEMS FACING THE ORGANIZATION

Both consulting firms had successful track records. Firm A stressed primarily the development of a management information system and the use of emerging trends in technology, while Firm B focused on the re-engineering of crucial business processes with IT as the enabler. Each proposal had advantages and disadvantages, and some of the recommendations were not mutually exclusive. Furthermore, it was difficult to put quantitative values on some of the qualitative benefits accruing from such projects. For example, Firm A's solution would cause less disruption to the existing organizational structure, with good potential for reducing cost if e-commerce were developed for B2B business (Fox, 2001). Re-engineering, on the other hand, fit DFG's vision of being the leading food and drugstore retailer in the Asia-Pacific region (Whitman & Gibson, 1997), but the high failure rate of BPR (Kliem, 2000) and the organisational and cultural change it requires (Olalla, 1999) were of great concern. Even though most
of the employees were ready to adopt IT widely, they were not comfortable with drastic changes to the nature of their jobs. Therefore, quantitative cost-benefit analyses, such as payback periods and the net present value of expected future cash flows, were only gross approximations. DFG's management team faced the dilemma of selecting the proposal that would achieve customer satisfaction in an affordable and feasible way while remaining accountable to shareholders. If you were a member of the management team, how would you evaluate the proposals, and which would you recommend?
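The quantitative measures mentioned (payback period and net present value of expected future cash flows) can be computed directly. The cash flows below are invented purely for illustration; they are not DFG's projections for either proposal.

```python
def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is the initial (year-0) outlay."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))


def payback_period(cash_flows):
    """Years until cumulative cash flow turns non-negative (None if never)."""
    cumulative = 0.0
    for t, cf in enumerate(cash_flows):
        cumulative += cf
        if cumulative >= 0:
            return t
    return None


# Hypothetical project: US$10m up front, benefits over four years.
flows = [-10_000_000, 3_000_000, 4_000_000, 4_000_000, 3_000_000]
print(payback_period(flows))        # 3 (cumulative turns positive in year 3)
print(round(npv(0.10, flows)))      # NPV at a 10% discount rate
```

The case's point stands in the sketch too: both figures depend entirely on the estimated cash flows, so when the benefits are qualitative and hard to quantify, the outputs are only gross approximations.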
ENDNOTE
1. US$1 = HK$7.74
FURTHER READING
Altinkemer, K., Chaturvedi, A. & Kondareddy, A. (1998). Business process re-engineering and organizational performance: An exploration of issues. International Journal of Information Management, 18(6), 381-392.
Caron, J. R., Jarvenpaa, S. & Stoddard, D. (1994). Business reengineering at CIGNA Corporation: Experiences and lessons learned from the first five years. MIS Quarterly, 18(3), September, 233-250.
Cooper, R. & Markus, M. L. (1995). Human re-engineering. Sloan Management Review, 36(4), 39-50.
Dairy Farm Group (http://www.dairyfarmgroup.com/dfarm_graphic/corporate/default.html).
Davenport, T. H. & Short, J. E. (1990). The new industrial engineering: Information technology and business redesign. Sloan Management Review, 31(4), 12-26.
Dewar, R. D. & Dutton, J. R. (1986). The adoption of radical and incremental innovations. Management Science, 32(11), 1422-1433.
Grover, V. & Kettinger, W. J. (1995). Business Process Change: Concepts, Methods and Technologies. Harrisburg, PA: Idea Group Publishing.
Hammer, M. (1990). Reengineering: Don't automate, obliterate. Harvard Business Review, 68(4), 104-112.
REFERENCES
Copacino, W. C. (1997). Supply Chain Management: The Basics and Beyond. Boca Raton, Florida: St. Lucie Press.
Fox, P. (2001). B2B: You can find success in private sites. Computerworld, 35(19), May 7, 24-25.
Hammer, M. & Champy, J. A. (1993). Reengineering the Corporation: A Manifesto for Business Revolution. New York: Harper Collins.
Hong Kong (1999). Hong Kong Year Book. Retrieved May 19, 2001 from the World Wide Web: http://www.info.gov.hk/hkar99/eng/18/18_11.htm.
Kliem, R. L. (2000). Risk management for business process reengineering projects. Information Systems Management, Fall, 71-73.
Olalla, M. F. (1999). Information technology in business process reengineering. Proceedings of Forty
O'Neill, P. & Sohal, A. S. (1998). Business process re-engineering: Application and success – an Australian study. International Journal of Operations and Production Management, 18, 832-864.
APPENDIX

Exhibit 1: Summary by Regions
[Charts in the original: Capital Expenditure and Investments (Gross) by Region, and Sales by Region (Including Associates), in US$m for 1993-1997, broken down by Asia, Australasia and Europe and by Subsidiaries vs. Associates.]
[Charts in the original: Profit Before Interest by Region, and Profit After Tax and Minorities by Region, in US$m for 1993-1997, broken down by Asia, Australasia, Europe and Corporate.]
Exhibit 2: Five-Year Summary of Financial Statements (US$m, except per-share figures)

SALES AND PROFIT                                            1993      1994      1995      1996      1997
Sales                                                    4,979.6   5,585.3   6,235.5   6,968.4   6,888.3
Sales including associates                               9,605.1  10,402.7  11,695.4  12,766.8  12,658.1
Profit after taxation, minority interests
  and preference dividends                                 188.8     213.8     135.2      27.9     115.7
Net profit excluding discontinued activities
  and exceptional items                                    197.1     187.6     192.4     138.3     154.1
Earnings per ordinary share (US cents)                     11.28     12.52      7.86      1.60      6.45
Earnings per ordinary share excluding discontinued
  activities and exceptional items (US cents)              11.77     10.99     11.18      7.94      8.59
Dividends per ordinary share (US cents)                     5.65      6.00      6.00      6.00      6.00

BALANCE SHEET
Intangible assets                                             --        --        --       1.4       1.1
Tangible assets                                            967.2   1,105.9   1,221.8   1,369.3   1,267.6
Associates and investments                                 234.3     284.5     332.1     338.4     333.0
Net other current assets/(liabilities)                     (28.8)    131.1    (170.9)    160.7      61.4
Term loans                                                (159.0)   (334.0)   (162.9)   (602.4)   (422.5)
Other non-current liabilities                               (9.2)    (12.6)    (16.5)    (14.2)     (1.7)
Net operating assets                                     1,004.5   1,164.9   1,203.6   1,253.2   1,238.9
Exhibit 3: Electronic Store Application–Schematic View
[Diagram in the original: the Electronic Store (ESTORE) data store and communication layer linking the customer (through a Customer Interface Server), the CSR/store management (through a Management Interface Server) and external parties: courier, banks, fulfillment center, head office and suppliers.]
Key: CIS - Customer Interface Server; MIS - Management Interface Server; CSR - Customer Service Representative
Source: Technical Architecture document (v 1.0) of Dairy Farm Group
Exhibit 4: System Schematic of Proposed Communications Network
[Diagram in the original: leased-circuit and frame-relay permanent-circuit (PVC) links connecting Dairy Farm Group, via the Internet, OpenNet and MDNS, to country hubs in Hong Kong, Australia and Singapore; business units (Hong Kong Convenience Stores Ltd., Sims Trading Company Ltd., Manning Retail Ltd., Wellcome Company Ltd., Oliver's); country offices (Australia, Singapore, New Zealand, Indonesia, Taiwan, Malaysia, PRC*, Japan); and travelling users. *Refer to section 2.5 for details of the China connection.]
Source: Technical Architecture document (v 1.0) of Dairy Farm Group
Exhibit 5: Recommended Technical Planning Process
[Diagram in the original: DFG direction and business requirements, current (de facto) architectures and industry technology trends feed a future technical architecture covering applications, hardware, software and communications; the architecture in turn drives standards, principles, investment decisions and product selection.]
Source: Technical Architecture document (v 1.0) of Dairy Farm Group
Exhibit 6: Business Process Domains
[Diagram in the original, showing the BPDs and associated processes. Group: executive information systems. Business Unit Analytical: data warehouse, data marts, analytical applications, loyalty. Business Unit Operational (high availability): financial, human resources, merchandising, warehouse, labour scheduling, time and attendance. In-Store (very high availability, disaster recovery): in-store merchandising, POS. Core Infrastructure Services: directories (DNS, DHCP, LDAP), enterprise application integration (EAI), transaction management, security management, network and system management, email, office automation, file and print.]
Source: Technical Architecture document (v 1.0) of Dairy Farm Group
Exhibit 7: Electronic Store Application–Suggested Partitioning Logic
[Diagram in the original, showing a three-layer partitioning. User interface layer: browser with remote application services (ActiveX), connected over SSL/HTTP. Application layer: Web server and application services. Data and transaction layer: data store.]
Source: Technical Architecture document (v 1.0) of Dairy Farm Group
Exhibit 8: Store Replenishment–Suggested Logistics
[Diagram in the original: numbered flows between stores 1 … N, the merchandising system, the supplier, the warehouse and the trading-terms database.]
Legend:
1, 2, … n: Suggested orders for stores 1, 2 … N (raised by merchandising system)
1a, 2a, … na: Confirmed orders (confirmed by stores 1, 2 … N)
3: Supplier-wise orders (raised by merchandising system with leverage of trading terms)
4: Supplier delivery
5: Warehouse delivery receipt
6, 7, … n: Store delivery of allocated orders
6a, 7a, … na: Store delivery receipt
Source: Technical Architecture document (v 1.0) of Dairy Farm Group
Exhibit 9: Core Infrastructure Services–Suggested Application Topology
[Diagram in the original: three platform stacks (standard client; application services; core systems & DBMS) linked by LAN or WAN. Each stack runs application business logic and software engineering tools over a common infrastructure layer comprising: Data Interchange - IBM MQSeries; Distributed Computing & Object Services - Microsoft DCOM; Network - TCP/IP; Transaction Processing - Nil; Security - IBM Tivoli; Network, System & Application Management - IBM Tivoli.]
Source: Technical Architecture document (v 1.0) of Dairy Farm Group
BIOGRAPHICAL SKETCHES

Eugenia Ng is the Deputy Head of the Department of Information and Applied Technology, the Hong Kong Institute of Education. She is responsible for teaching information technology modules, ranging from the sub-degree to post-graduate diploma level. She has taught diverse learners, from primary school to master's level, locally and overseas. She is interested in various research areas and has had more than 40 articles published in conference proceedings, journals and book chapters in the areas of Web-based learning, decision support systems, information systems education, information systems skills and career development. More information can be found at: http://www.ied.edu.hk/iat/staff/eugenia.htm.

Ali F. Farhoomand is Director of the Centre for Asian Business Cases (CABC) at the University of Hong Kong School of Business (http://www.business.hku.hk/research.centres/cabc). He is the author of numerous academic articles and books, including Global e-Commerce: Text and Cases (Prentice Hall, 2001).

Probir Banerjee is a graduate research student at the City University of Hong Kong (CityU). He has over 25 years of experience in a wide range of business and IT functions in India, Hong Kong and Canada. He has co-authored several business cases as a Senior Researcher with the University of Hong Kong and is currently actively engaged in research in the area of electronic commerce adoption. He has a B.Sc. (Engineering) from India and an MBA from Fort Hays State University, USA.
58 Rahouma & Zinterhof
Developing a Hypertext Guide Program for Teaching the Simple Tasks of Maintaining and Troubleshooting Educational Equipment Kamel Hussein Rahouma Minia University, Egypt Peter Zinterhof University of Salzburg, Austria
EXECUTIVE SUMMARY

Educational technologists (ETs) are needed at all stages of the teaching and learning processes, at all educational levels. In these processes, they become responsible for the educational equipment (EE) in use. To perform their roles effectively, they should acquire the skills for the simple maintenance and troubleshooting (MaT) tasks of that equipment. The hypertext GUIDE system has been used to implement a hypertext program for this purpose. A list of the commonly used EE and its simple MaT tasks was first collected and then validated. The list includes 81 MaT items divided into 11 groups, each group representing one piece or one class of EE. The first group includes seven general MaT tasks. The following 10 groups cover the simple MaT tasks of projectors (11 items), audio and video cassette recorders and players (6 items), the standard television (4 items), the photographic camera (5 items), the microfiche (9 items), photocopiers (7 items), audio mixers (5 items), personal computers (8 items for hardware and 8 for software), speakers, headphones and microphones (8 items) and lighting devices (3 items). An exploratory study showed a common view between Arab and non-Arab experts in the fields of education and educational technology regarding the importance of the listed equipment and of the simple MaT tasks that ETs should acquire. The list was implemented in a hypertext program that was then validated. The program was applied to a group of educational technology students of the Faculty of Specific Education, Minia University, Egypt. A special test for evaluating the program's effectiveness was also designed, validated and applied to the study sample.

Copyright © 2002, Idea Group Publishing.

The test results showed the
effectiveness of the designed program in teaching the simple MaT tasks of the EE. The study concluded with some main points and suggested some useful recommendations and directions for future work. Important note: a copy of all the materials of this chapter, including the validated list of EE and its simple MaT tasks, the achievement test and the hypertext program, can be requested from the first author, who holds the copyright for the materials.
BACKGROUND

New media and technologies affect all aspects of today's life, and education is one of the important fields in which these effects are evident. Theories of teaching and information delivery are much more easily applied with the help of computers and other educational aids. These aids depend, directly or indirectly, on the use of equipment, referred to here as EE. Because of the wide use of such equipment in the different educational processes, it became essential to have a specialist responsible for its maintenance and troubleshooting. For this purpose, departments of educational technology were founded. The students in these departments study the educational curricula, alongside the necessary courses covering the different aspects of EE, including its operation, use and MaT. After graduation, these students become ETs. These technologists differ from maintenance engineers in two respects. First, their main role is to help carry out the educational process well. Second, they are responsible only for the simple MaT tasks of the EE used in their workplaces. They may work in schools, universities, libraries or media centers, for example as computer teachers or as media specialists in a library or media center. To perform their roles effectively, these specialists should be trained in performing the simple MaT tasks of the EE. Thus, in case of trouble while using a piece of EE, a lot of time, effort and money can be saved, instead of requesting and waiting for a maintenance engineer or specialist. The use of new computer technologies and software such as hypertext, hypermedia and multimedia has proved effective in many educational applications, such as training programs. Various platforms are available for developing such programs.
The hypertext GUIDE system has been used to implement a program for teaching ETs the simple MaT tasks of commonly used EE. The program combines engineering (or technical) and educational dimensions with the aid of the computer. These dimensions, some other theoretical aspects of hypertext (Rahouma, Zinterhof and Astleitner, 2000) and the GUIDE system are introduced in the following subsections. This includes the definition of hypertext, the difference between hypertext and printed materials, the advantages and disadvantages of hypertext, hypertext in courseware production and some aspects of the GUIDE system. The rest of this case is organized as follows: the third section sets the stage and the fourth section describes the case. We then explain the current challenges and problems facing the organization and list some related readings and the references used in the present case.
The Program Dimensions

The program has two main dimensions: an engineering (or technical) dimension and an educational dimension. The engineering dimension refers to the skills for handling the EE that might be used in the educational process, such as projectors, video and audio cassette recorders and players, copy machines, etc. Awareness of the simple MaT tasks of the equipment is an important issue, especially in places of information delivery such as schools, universities and training centers. While maintenance here means protecting the equipment from developing problems, troubleshooting means repairing the problems the equipment might have. By simple maintenance tasks, we mean performing the regular maintenance instructions usually given by the manufacturer, such as cleaning the different parts and oiling the mechanical parts
of the equipment. By simple troubleshooting tasks we likewise mean repairing the simple, non-severe problems and faults the equipment might develop during operation, such as replacing the projector lamp and cleaning the heads of audio and video equipment. Severe problems such as electronic faults should, however, be left to the maintenance engineer. The educational dimension refers to several points. First, the problems mentioned might prevent the equipment from working fully effectively, so that the educational process cannot continue as planned and required. Second, these problems might be simple enough that there is no need to call the maintenance specialist or engineer and cost the educational organization the time and money to troubleshoot them. Third, the ETs who have graduated from a department of educational technology in a faculty of education or specific education become responsible for assisting in operating such equipment, and they should know how to maintain and troubleshoot it. Although these graduates take some maintenance courses in their undergraduate study, they lack the corresponding skills and competencies. This combination of the engineering and educational dimensions was the motivation for developing a program to train ETs in the simple MaT of EE. A list of the commonly used EE and its simple MaT tasks was first collected and validated in a study by the first author. Given the available computer technology and software applications, there were many options for implementing the intended program. One idea was to build the program using hypermedia facilities, mixing the different elements of information such as text, graphics, audio and video.
The program was aimed at the students of the Department of Educational Technology, Faculty of Specific Education, Minia University, Minia, Egypt, in the academic year 1998/1999. Because of the limited computer facilities there at that time and the non-availability of hypermedia tools such as video cards, hypertext facilities and capabilities were used instead. The program was developed using the hypertext GUIDE system and applied to the study sample. A special test was also designed, validated and applied to the study sample. The test results were used to evaluate the effectiveness of the designed program in teaching the simple MaT tasks of the EE. The study concluded with some main points and suggested some useful recommendations, as well as some directions for future work.
Hypertext Definition

Hypertext represents an important change in the way we think about and organize information. Hypertext is a preferred medium for delivering complex and dynamic information because it provides a self-explanatory interface, facilitates direct searching, and encourages more general exploration via browsing facilities (Bassetti, Pagani and Smyth, 1991). The tremendous amount of interest generated by hypertext has led to a multitude of competing definitions, different implementations and a considerable amount of confusion. Defining hypertext, however, is much like describing beauty: a great part of it lies in the eye of the person providing the definition. Hypertext can reasonably be defined as an associative information management system using a series of links and nodes to associate units of information (Franklin, 1989), or as a specific form of data retrieval (Carr, 1988). It may also be defined as a collection of nodes, cards, windows and note cards connected by links. There may be many types of links, depending on the reason behind the use of each link. A hypertext can be modeled by a directed graph with labels on its nodes corresponding to the information they contain, and labels on its links corresponding to the different link types (Lai and Manber, 1991; Nielsen, 1990). According to Conklin (1987), a hypertext system is a medium that facilitates thinking and communication along parallel and multiple lines of argument in non-linear, holistic and coherent succession. Hypertext has also been defined as a combination of natural language text with the computer's capacity for interactive branching, or the dynamic display of a nonlinear text that cannot be printed conveniently on a conventional page (Nelson, 1978).
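The directed-graph definition just given (labelled nodes for content, labelled links for link types) can be sketched directly as a small data structure. The node identifiers and link types below are invented for illustration; they are not from the GUIDE program itself.

```python
class Hypertext:
    """A hypertext as a directed graph: nodes carry content, links carry
    type labels (Lai and Manber, 1991; Nielsen, 1990)."""

    def __init__(self):
        self.nodes = {}     # node id -> content
        self.links = []     # (source id, destination id, link type)

    def add_node(self, node_id, content):
        self.nodes[node_id] = content

    def add_link(self, src, dst, link_type):
        self.links.append((src, dst, link_type))

    def follow(self, src, link_type=None):
        """All nodes one hop from src, optionally filtered by link type."""
        return [d for s, d, t in self.links
                if s == src and (link_type is None or t == link_type)]


ht = Hypertext()
ht.add_node("projector", "Simple MaT tasks of projectors")
ht.add_node("lamp", "Replacing the projector lamp")
ht.add_node("cleaning", "Cleaning lenses and filters")
ht.add_link("projector", "lamp", "troubleshooting")
ht.add_link("projector", "cleaning", "maintenance")
print(ht.follow("projector"))                   # ['lamp', 'cleaning']
print(ht.follow("projector", "maintenance"))    # ['cleaning']
```

The link-type label is what distinguishes this model from a plain graph: a reader can browse all outgoing links or restrict navigation to one kind of association, which is the sense in which link types encode the "reason behind the use" of a link.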
Developing a Hypertext Guide Program 61
Difference Between Hypertext and Printed Materials
Printed materials are easily understood by ordinary people. They can be defined as items that store messages printed by a mechanical process such as letterpress or lithography. Based on the definitions given in the previous subsection, it may be noticed that hypertext is an expansion of the concept of traditional printed materials. Obviously, the two technologies share some characteristics, but they differ in many respects. Some of the characteristics of hypertext can be attributed to the use of computers, since it is conveyed primarily through a computer (Conklin, 1987; Nelson, 1978). Hypertext is a non-linear or dynamic text, and the term was coined to describe non-sequential writing. In traditional text the readers/learners are expected to follow the author's style and organization of the text, which reflect the author's knowledge and structure. Learners most often read the text from the beginning and follow it through sequentially to the end. Hypertext, on the other hand, allows the user immediate access to any of the information in the knowledge base. Users may seek or select information from a hypertext-based knowledge structure on any one of several criteria, including personal relevance to the reader, the reader's interest level, curiosity about the information, the reader's experience level, the reader's information needs, and task demands causing the reader to access the text (Conklin, 1987; Wei, 1991).
Advantages and Disadvantages of Hypertext
Hypertext systems provide links between pieces of information such as text, figures and pictures so that the reader can follow many different paths corresponding to lines of thought, levels of description, levels of detail, and so on. Navigation is one of the main problems in using hypertext systems (Nielsen, 1990). Users tend to get lost, partly because the information they are reading can have a complicated structure which is usually unknown to them. Flying through hypertext is one solution to this problem. It is analogous to flipping the pages of a book, with one notable exception: the flipping is not necessarily in a linear order. Flying is not intended to replace any of the other navigation tools; it is an additional tool (Lai and Manber, 1991; Nielsen, 1990). The goal of hypertext is to provide an electronic environment that facilitates knowledge exploration by the learner. Hypertext may also be linked to other media, such as videodisc, CD-ROM, audio, etc. These mixed-media systems controlled by hypertext are referred to as hypermedia systems. Because hypertext is a node-link system based upon semantic structures, it can map fairly directly onto the structure of the presented knowledge (Jonassen, 1988). Some of the problems with hypertext concern the integration of the learners' own knowledge structure with what they learn in the hypertext, as well as cognitive overhead: the exponentially greater number of learning options places increased cognitive demands upon the learners, demands which they are often unable to meet. Authors also face problems of their own, such as how to start a hypertext, where to begin, how the hypertext should be structured, what options it should provide to the users, and finally how the knowledge structure of an expert, or the content structure of the subject matter, can be made manifest and then mapped onto a hypertext (Jonassen, 1988).
Hypertext in Courseware Production
Hypertext, to a large extent, leaves control of the process in the hands of the user. At first sight, this may seem as much a drawback as an advantage; after all, one characteristic of teaching/training is that the learner must be led. However, a hypertext platform can be modified to serve as a tutor that leads the learners by offering avenues and paths through the material: the learners retain control, while the course designer can direct them through the material in
62 Rahouma & Zinterhof
helpful ways. The effectiveness of this is clearer when looked at in the context of the intuitive nature of a hypertext system. For all practical purposes, every screen in a hypertext system provides the user with information and, through hot buttons, links to other information. Control is provided by the fact that links are made to certain topics and not to others, but the ease and intuitiveness of the system make the screen appear more like an invitation than a control. Furthermore, this structure adapts itself to the needs of the specific learner; a hypertext system can accept wide variations among users without a corresponding increase in complexity (Franklin, 1989). Electronic libraries, or hypertext databases, provide very fast routes to search, view and extract materials. Word processors with outline editing facilities, or hypertext authoring systems, can help users organize materials into different sections or chapters (Anton, Lam and Chang, 1992). A course can be prepared to be usable by individuals with a wide variety of backgrounds and motivations. This allows the designer to pay less attention to "learner modeling", that is, to building a picture of where the student is and how he is learning; to some extent, hypertext allows learners to "model" themselves and make selections based on that model. The actual creation of teaching material in hypertext is also relatively easy. A useful hypertext training package is nevertheless complex and, like any other complex structure, takes time to plan and implement (Franklin, 1989). Hypertext is a rapidly expanding technology that offers phenomenal instructional potential. Its flexibility in structure and style makes it perhaps the most effective technology for individualizing instruction (Jonassen, 1988). One major requirement of the courseware production and presentation system is to provide a user-friendly environment.
Since the GUIDE user interface has gained recognition over the last two decades for being consistent and easy to use, its PC version (3.1) has been selected as the development platform.
Aspects of the GUIDE System
There are many different hypertext systems. The GUIDE system, for instance, was developed at the University of Kent at Canterbury (UK) from 1982 onwards and is now available as two separate products: a) a PC/Mac version, developed by OWL (Office Workstations Ltd. of Edinburgh); and b) a version for UNIX workstations which remains a product of the University of Kent at Canterbury. The PC GUIDE version (3.1) is used in the present work. It is a menu-driven authoring system for hypertext. Creation of a hypertext document begins with the entry of text in a format very similar to that of a word processor. Window overlays, replacements and links between different parts of the text are incorporated into the document through menu-driven selection procedures. The GUIDE system of menus is very easy to use, giving powerful programming capabilities to relatively novice computer users. Both text and graphics can be incorporated into the GUIDE window, along with sound and videodisc controls. Recent releases of the program allow a record of user responses to be stored. The GUIDE software runs on both IBM PC and Macintosh systems (Nelson, 1978; OWL International Inc., 1992). The GUIDE system offers a new approach to information delivery and a comprehensive authoring environment which enables the user to create, format and distribute dynamic, interactive electronic documents optimized for display on a computer screen. With the GUIDE system, rapid online access can be provided to virtually any reference material. Updates can also be generated and distributed quickly, at a fraction of the cost of paper revisions. The GUIDE system is a simple and flexible authoring tool that allows the user to combine text, graphics and multimedia elements. Paths can be created between related topics for readers to follow, presenting information in a way that informs rather than overwhelms.
Document layout tools and user interface configuration give the user complete control over the application of the documents. GUIDE documents offer an effective, intuitive framework for readers. By using hypertext to electronically link topics, the natural document structure can be mirrored, such that headings, cross
references and reference links instantly display related topics, and footnotes appear in pop-up windows at the click of a mouse. Readers instinctively know how to get to the information they want. The GUIDE electronic document publishing system is designed around an object-oriented information model which presents information as a series of linked "objects" and manages the relationships between them. Every component in a GUIDE document, such as a single word, a phrase, a paragraph or a graphic, can be represented as an Object. Objects are defined in GUIDE with commands on the Make menu and, once defined, can be linked to other Objects. The object-oriented framework of GUIDE is transparent to users, so reading GUIDE documents is largely an intuitive activity. The GUIDE system provides a variety of Object types which perform different functions. Buttons constitute one class of GUIDE Objects. Buttons enable readers to move quickly and easily between related topics in GUIDE documents; using a mouse, readers click on Buttons to display linked Objects. In contrast to paper documents, which are limited by a left-to-right, top-to-bottom sequential structure, GUIDE documents allow readers to move from one topic (Object) to another non-sequentially. This makes full use of the computer's ability to locate and present related information quickly. Released from the limited linear format of conventional paper documents, readers browse easily through GUIDE documents, quickly locating topics according to individual needs and interests. The GUIDE system offers another advantage to the process of creating and accessing information. While it matches paper's ability to encompass text and graphics, it adds new dimensions by offering options to create links to video, animation and audio data. With the appropriate hardware and software, these multimedia accompaniments can be integrated directly into GUIDE documents.
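The Object-and-Button idea can be sketched as follows (an illustrative Python analogy only, not GUIDE's actual object model; the class and field names are invented):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Obj:
    """A document component: a single word, phrase, paragraph or graphic."""
    name: str
    content: str
    links: List["Obj"] = field(default_factory=list)  # linked Objects

def click(button: Obj) -> List[str]:
    """Clicking a Button displays the Objects linked to it."""
    return [target.content for target in button.links]

# A heading linked to a pop-up footnote, as in a GUIDE document.
footnote = Obj("fn1", "Pop-up footnote text")
heading = Obj("h1", "Maintenance of projectors", links=[footnote])
print(click(heading))  # ['Pop-up footnote text']
```

The point of the sketch is only that every component is a uniform "object" and navigation is nothing more than following its links.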
The GUIDE system (version 3.1) provides the following features and capabilities:
1. An intuitive, flexible authoring environment where interactive electronic documents can be created for on-screen display.
2. Easy integration of existing text and graphics in a variety of formats.
3. Powerful text search capabilities using hypertext links to search related documents.
4. Navigation tools supported by a powerful engine for fast access to information.
5. LOGiiX™, an embedded scripting language, which allows complex Object manipulation while keeping the user interface simple and intuitive. LOGiiX also enables authors to add capabilities to documents that are not available in GUIDE's standard feature set. For example, a log can be created to monitor the amount of time a user spends in each section of a document.
6. Extensive text formatting features, including rulers, a wide variety of fonts and sizes, justification, and colored text.
7. Document and application window customization, which can be used to determine the final appearance of an application.
8. Object and link editing and verification tools to help test and maintain the electronic documents that are built.
9. GUIDE Viewer™, a royalty-free document viewer that enables distribution of electronic documents to readers. The GUIDE Viewer's simple iconic interface provides the most commonly used features for displaying and navigating GUIDE documents.
SETTING THE STAGE
Maintenance of the EEs is one of the main courses taught to educational technology students. The faculty of specific education at Minia University in Egypt is one of the faculties which have a department of educational technology. The department and university authorities give these courses great attention and continuously look for good ways and methods to improve the teaching process. This is because of the importance of the maintenance work that their graduates may take on in their workplaces, whether in schools and universities, in libraries, or in media centers. Because of the well-known effectiveness of using computer facilities in such cases, the
present case describes a hypertext program for teaching the students in these departments the simple MaT of the EEs. Hypertext was used because of its ease of use and its modest installation requirements. Moreover, using hypertext could strongly motivate the students to benefit from the program, because they believe that learning it as a new computer technology will give them more confidence in their workplaces. The corresponding study was based on the following hypotheses:
1. There is a common view among the educational technology experts regarding the commonly used EEs and the simple MaT tasks which the ETs should acquire.
2. Using the concepts, tools, capabilities and facilities of hypertext programming has a strong positive effect on acquiring the simple MaT skills and tasks of the EEs.
To test the given hypotheses, a list of the commonly used EEs and their simple MaT tasks which the ETs should acquire was compiled. This list was distributed as a questionnaire to a group of experts in the fields of education and educational technology. The referees were asked about the importance of the listed equipment to the ETs in their workplaces and the responsibility of these technologists to acquire the listed MaT tasks. The results of the questionnaire were used to test the first hypothesis. The study sample was randomly chosen from the students of the Educational Technology Department, Minia University, Egypt, and divided into two groups: an experimental group and a control group. A special achievement test was developed and distributed to a group of experts in the fields of education and educational technology for validation. The test was applied to the study sample to measure their acquirement of the listed MaT tasks before and after applying the developed program. The results of applying the test were used to test the second hypothesis. The GUIDE system, version 3.1, was installed and used to apply the program to the study sample.
This system does not require very high computer specifications. To use it, one needs an IBM PC or 100%-compatible computer with an Intel 386 or higher processor; a high-resolution VGA display or better (a color display is recommended); MS-DOS version 4.0 or later; MS Windows version 3.1 or higher; a hard drive with at least 3 MB of free space; at least 2 MB of memory; the printer specified when the MS Windows software was installed; and, finally, a mouse or other pointing device compatible with MS Windows. These requirements, however, can be met by even an old computer system. The GUIDE system has been used before in developing a hypertext tutorial platform for microcomputer MaT (Rahouma, Astleitner and Zinterhof, 1999), and it is used here to implement a program for the MaT of the EEs which the ETs deal with.
CASE DESCRIPTION
The case description introduces the different steps and procedures of developing and applying the program. These procedures are described in the following sections.
Listing and Validating the Educational Equipment and their Simple Maintenance and Troubleshooting Tasks
A list of the commonly used EEs and their simple MaT tasks which the ETs should acquire was compiled from different resources, including educational technology books, literature and courses. Also, experts in the fields of education and educational technology were asked about these EEs and their simple MaT tasks. The list includes 81 MaT items divided into 11 groups, where each group represents one piece or one group of the EEs. The first group includes seven general MaT tasks of the EEs. The following 10 groups concern the simple MaT tasks of the projectors (11 items), the audio and video cassette recorders and players (6 items), the normal television (4 items), the photographic camera (5 items), the microfiche (9 items), the photocopiers (7 items), the audio mixers (5 items), the personal computers (8 items for hardware and 8 items for software), the speakers-headphones-microphones (8 items), and the lighting devices (3 items). The projectors include the overhead projector, the opaque projector, the 2x2 inch slide projector, the 35 mm filmstrip projector, and the video projector.
The list was prepared in questionnaire form and distributed to a group of five referees from among the experts in the field of educational technology to explore their views and opinions about the listed equipment and their simple MaT tasks. The distribution was done twice. The referees were asked to answer whether each listed task is important (yes) or not important (no) to the ETs. They answered all the items of the questionnaire and returned it. The importance averages of the listed items were computed for the two rounds as 61.4% and 62.2%, respectively, and the correlation coefficient between these averages was computed as 0.86 at a significance level less than 0.01. This indicates that the list has a highly significant reliability. Also, the referees' answers coincided across the two rounds and they agreed on most of the given items, indicating that the list was highly valid. The validated list was then distributed to a group of 50 Arab and 14 non-Arab experts in the fields of education and educational technology. The distribution was done by hand, by post, by electronic mail (e-mail), or via the Internet. This distribution aimed to test the difference between the Arab and non-Arab referees' interests regarding the listed EEs and the importance of their MaT tasks to the ETs. The questionnaire asked the experts whether the given MaT tasks are important or not. Importance here means that the ETs should acquire the skills or competencies of performing these tasks to be able to perform their roles in the educational process in an effective way. The experts were asked to choose either Y (for important) or N (for not important). Only 37 (74%) of the 50 Arab referees (36 from Egypt and 1 from Yemen) and 7 (50%) of the 14 non-Arab referees answered and returned the questionnaire. The averages of the referees' selections regarding the importance of the different groups of the listed tasks were computed.
It should be pointed out here that, because the Arab and non-Arab referee groups were different in size, the rates of the obtained averages were computed and are given in Table 1. To test the agreement between the Arab and non-Arab referees' views regarding the importance of the listed groups of tasks, the correlation coefficients between the averages of their selections were computed and are given in Table 1 as well. Almost all the equipment got correlation coefficients of 0.8 or above at a significance level less than 0.05, except the normal television, which got a correlation coefficient of 0.9 at a significance level above 0.05. These high and significant correlation coefficients mean a high agreement, or a common view, between the Arab and non-Arab referees regarding the importance of the listed equipment and their simple MaT tasks. This confirms the first hypothesis, which states that there is a common view among the educational technology experts regarding the commonly used EEs and the simple MaT tasks which the ETs should acquire. Note that the present case is interested only in the equipment which got an importance rate of at least 40% from both the Arab and non-Arab referee groups. As the 4th, 5th, 8th and 11th rows of Table 1 show, only 7 of the 11 listed groups of simple MaT tasks fulfill that condition. These include the group of general MaT tasks as well as the groups of the projectors, the audio and video cassette recorders and players, the microfiche, the photocopiers, the personal computers, and the speakers-headphones-microphones.

Table 1: The average rates of the Arab and non-Arab referees' groups regarding the importance of the listed MaT tasks, and the corresponding correlation coefficients (C) according to these averages. The 4th, 5th, 8th and 11th rows indicate equipment not included in the designed program.

No.  Maintenance and troubleshooting     Arab      Non-Arab   C
1    General tasks                       74.13 %   71.42 %    0.8
2    Projectors                          70.27 %   58.44 %    0.9
3    Recorders and players               49.10 %   40.48 %    1.0
4    Normal television                   39.86 %   32.14 %    0.9
5    Photographic camera                 41.10 %   37.14 %    1.0
6    Microfiche                          45.00 %   47.62 %    0.9
7    Photocopier                         50.58 %   53.00 %    0.8
8    Audio mixers                        44.86 %   37.14 %    0.9
9    Personal computer                   72.80 %   75.89 %    0.9
10   Speakers, headphones, microphones   45.95 %   41.00 %    0.8
11   Lighting devices                    42.34 %   33.33 %    1.0
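The per-equipment coefficients in Table 1 were computed from the referees' item-level answers, which are not reproduced here. As an illustration of the computation, the Pearson coefficient over the eleven group averages themselves can be obtained in a few lines (a sketch only, not the original statistical analysis):

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

# Average importance rates from Table 1, rows 1-11 (Arab vs. non-Arab).
arab = [74.13, 70.27, 49.10, 39.86, 41.10, 45.00, 50.58, 44.86, 72.80, 45.95, 42.34]
non_arab = [71.42, 58.44, 40.48, 32.14, 37.14, 47.62, 53.00, 37.14, 75.89, 41.00, 33.33]

print(round(pearson(arab, non_arab), 2))  # a strong positive correlation
```

The high coefficient over the group averages is consistent with the per-equipment agreement reported in the table.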
Developing and Validating the Achievement Test
An achievement test was developed to measure the students' acquirement of the different MaT tasks of the EEs. The test is composed of three parts. The first part consists of 50 true/false items, of which the student selects 40 to answer; the total score of this part is 20 points. The second part consists of 50 multiple-choice items, of which the student selects 40 to answer; the total score of this part is also 20 points. The third part includes 20 questions that must be performed by the student in the laboratory; the score of each question in this part is 10 points, giving a total of 200 points. Thus, the total score of the test is 240 points (40 points for the first two parts and 200 points for the third part). The pass score was chosen to be 168 points, or 70% of the total score. Each of the first and second parts was allocated 20 minutes, and the third part was allocated 155 minutes, giving a total test time of 195 minutes. Answer sheets for the first and second parts were given to the students to write down their answers, and answer keys for these two parts were prepared to help the assistants mark the students' answers. Observation sheets were also prepared and given to the assistants for watching the students and evaluating their performance on each question of the third part. The designed test and its related materials, including the answer sheets, answer keys and observation sheets, were distributed to a group of five experts in the fields of education and educational technology. The distribution was done twice, and the referees were asked a group of questions about:
1. Scientific accuracy
2. Scientific content
3. Content relevance to the current reality
4. Question difficulties
5. Repetition of the question items
6. Suitability of the test scoring and timing
7.
Suitability of the possible pass score
The referees were asked to weight each of the above items out of 100 points, and to give a general evaluation weight for the whole test. The referees' answers coincided across the two rounds of distribution: they mentioned the same comments and modifications and gave almost the same weight averages (Table 2). This means that the designed test was reliable and valid.
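The scoring and timing figures described for the test can be checked with a few lines of arithmetic:

```python
# Part 1: 50 true/false items, 40 answered, worth 20 points in total.
# Part 2: 50 multiple-choice items, 40 answered, worth 20 points in total.
# Part 3: 20 laboratory questions at 10 points each.
part3_total = 20 * 10                      # 200 points
test_total = 20 + 20 + part3_total         # 240 points
pass_score = round(0.70 * test_total)      # 70% of the total
total_time = 20 + 20 + 155                 # minutes for the three parts

print(test_total, pass_score, total_time)  # 240 168 195
```

The figures agree with the stated totals: 240 points overall, a 168-point pass mark, and a 195-minute time slot.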
Applying the Designed Test for the First Time (Pre-Testing)
The designed test was applied for the first time to the whole study sample, including the experimental and control groups, to evaluate the students' background and knowledge regarding the simple MaT tasks of the EEs. The students' results were collected and statistically treated using the Statistica program for Windows (Release 4.0, Copyright StatSoft, Inc., 1993). Table 3 gives the rates of the control and experimental group students' scores in the first application of the designed test.
Table 2: Judgment Items and the Referees’ Selection Averages in Points from 100 (as a Maximum Weight) in the Two Times of Judging the Designed Test
Item No.  Item of judgment                           1st time  2nd time
1         Scientific accuracy                        82        80
2         Scientific content                         80        79
3         Content relevance to the current reality   81        82
4         Question difficulties                      73        72
5         Repetition of the question items           82        83
6         Test scoring and timing                    83        82
7         Pass score                                 90        87
8         General evaluation                         81        80
Table 3: Student Number (No.) and Rates of the Students' Scores in the First-Time Application of the Measuring Test for the Control Group (C.G.) and the Experimental Group (E.G.)
No. C.G. E.G.
1 39.3 37.5
2 36.4 38.0
3 34.1 38.2
4 34.1 38.9
5 37.5 37.5
6 37.5 36.6
7 40.0 37.0
8 36.6 38.0
9 39.1 35.9
10 38.9 37.3
The mean values of the students' score rates given in Table 3 for the control and experimental groups were computed as 37.3% and 37.5%, respectively. This shows a very small difference (0.2%) between the mean values. In other words, it indicates that, on average, the students had almost the same knowledge and background about the MaT tasks of the EEs before the hypertext program was applied.
Developing and Validating the Hypertext Program
The designed program utilizes the different hypertext facilities and capabilities, especially the branching concept. It starts with a main window offering the following options:
1. The title page
2. Preface
3. Summary
4. Introduction
5. Educational technologists (ETs)
6. Educational equipment (EEs)
7. List of the EEs
8. The related study
9. Navigating the program
10. Recommendations
11. Future work
Each of these options opens a window explaining the topic. For example, the title page option opens a window that shows the title of the program and the author information. If the chosen option has sub-options, a corresponding window appears listing them. For instance, the option "Navigating the program" has the following list of sub-options:
1. General guidelines and tasks for maintenance and troubleshooting
2. Maintenance and troubleshooting of the microcomputer
3. The main tasks of maintaining projectors
4. The main tasks of maintaining photocopiers
5. The main tasks of maintaining microfiches
6. Recorders and players
7. Headphones, speakers and microphones
Each of these sub-options branches into further sub-options that appear in a corresponding window, and so on. The list of MaT tasks is considered in the implementation of this program. The program was distributed to a group of five experts in the fields of education and educational technology. The distribution was done twice, to judge the program with respect to:
1. General presentation of the program elements
2. Coverage of the instructional material regarding the MaT of the listed EEs
3. Ease of using the program
4. Technical depth of the presented elements
5. Technical accuracy of the presented elements
6. Suitability of the program to the intended audience
7. The overall rating of the program
The referees were asked to rate these items out of 100 points and to add any other comments they might have. The referees' answers and comments coincided across the two rounds; Table 4 gives their ratings. This confirms the reliability of the program, and the coincidence of the referees' answers and comments across the two rounds confirms its validity.
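The branching structure of the program can be sketched as a nested mapping, where a leaf is a content window and a sub-dictionary is a further menu (abbreviated from the option lists above; the "..." placeholders stand for window content not reproduced here):

```python
# Abbreviated menu tree of the hypertext program: string leaves are
# window contents, nested dicts are sub-option menus.
program = {
    "Title page": "Window with the program title and author information",
    "Preface": "Window with the preface text",
    "Navigating the program": {
        "General guidelines and tasks for maintenance and troubleshooting": "...",
        "Maintenance and troubleshooting of the microcomputer": "...",
        "The main tasks of maintaining projectors": "...",
    },
}

def show(menu, indent=0):
    """Print the branching structure as an indented outline."""
    for option, body in menu.items():
        print(" " * indent + option)
        if isinstance(body, dict):  # branch into the sub-option menu
            show(body, indent + 2)

show(program)
```

Selecting an option then corresponds to descending one level of the tree and opening either a window (a leaf) or a further menu (a sub-tree).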
Applying the Program
The program was applied to the experimental group in May 1999. At the same time, the control group members were taught the same instructional material about the MaT of the EEs by the normal or traditional teaching method, through lecture and laboratory. The application of the experiment was done in the following steps.
Preparing the Computer Laboratory and Installing the Program
The number of computers in the computer laboratory at the department of educational technology, faculty of specific education, Minia University, Egypt, was first determined. Ten computers were chosen, and the proposed hypertext program was installed on them. To this end, the chosen computers were checked to satisfy the system requirements given in the section Setting the Stage. Because hypertext-based learning is individualized learning, it was supposed to have a computer system for every student. Accordingly, the experimental group was chosen to be 10 students and, consequently, the whole study sample was decided to be 20 students (10 for the control group and 10 for the experimental group).

Table 4: The Items of Judgment and the Corresponding Averages of the Referees' Selections in the Two Times of Program Judgment

Item No.  Item of judgment                              1st time  2nd time
1         General presentation                          80        79
2         Covering the MaT instructional material       81        82
3         Ease of using the program                     79        79
4         Technical depth of the elements               75        76
5         Technical accuracy of the elements            80        81
6         Suitability of the program to the audience    76        77
7         The overall rating of the program             80        80

Preparing the Equipment Laboratory
Preparing the equipment laboratory started by determining the equipment available at the department of educational technology on which to perform the listed MaT tasks. The following equipment was available:
1. The overhead projector
2. The 35 mm motion sound color picture projector
3. The 2 x 2 inch sound slide projector
4. The opaque projector
5. The video projector
6. The video tape recorder and player
7. The cassette tape recorder and player
8. The microfiche
9. The photocopier
10. The personal computers
11. The speakers, headphones, and microphones
After determining the available equipment, the defective items were identified for use in implementing the listed troubleshooting tasks, while the maintenance tasks were applied to the equipment in good working order. The application of the experiment was done in two weeks.
Applying the Measuring Test for the Second Time (Post-Testing)
The educational contents were taught to the students of the control group through lectures and laboratory work, and to the students of the experimental group through the hypertext program and laboratory work. Then, the measuring achievement test was applied for the second time, as a post-test, to both the control and experimental groups. Table 5 gives the rates of the students' scores. The mean values of the score rates given in Table 5 for the control and experimental groups were computed as 57.3% and 78.3%, respectively. This shows a large difference (21%) between the mean values, which means that, on average, the experimental group members performed more knowledgeably in the post-test than the control group members because of using the hypertext program. In other words, using the concepts, facilities and capabilities of hypertext affected the achievement of the students, supporting the hypothesis that using the concepts, tools, capabilities and facilities of hypertext programming has a strong, positive effect on acquiring the simple MaT skills and tasks of the EEs. Combining the results in Tables 3 and 5 gives m1=37.3%, m2=37.5%, m3=57.3% and m4=78.3% as the mean score rates of the control and experimental groups before and after teaching the instructional material, respectively. Table 6 gives the mean difference, standard deviation difference, and t-test values corresponding to m1, m2, m3 and m4.

Table 5: Student Number (No.) and Rates of the Students' Scores in the Second-Time Application of the Measuring Test for the Control Group (C.G.) and the Experimental Group (E.G.)
No. C.G. E.G.
1 60.5 78.6
2 56.1 78.6
3 55.2 78.4
4 54.5 79.1
5 56.4 78.2
6 59.1 77.0
7 56.8 77.0
8 55.0 77.3
9 63.2 77.7
10 56.6 80.7
70 Rahouma & Zinterhof
a. There are no statistically significant differences between the mean score rates of the control and experimental groups in the pre-test, which measured the students' skills and background regarding the simple MaT tasks before applying the hypertext program.
b. There are statistically significant differences between the mean score rates of the control and experimental groups in the post-test, i.e., after applying the suggested program. These differences are in favour of the experimental group.
c. There are statistically significant differences between the mean score rates of the experimental group in the pre-test and the post-test, i.e., before and after applying the suggested program. These differences are in favour of the post-test.
This confirms the second hypothesis, which states: "Using the concepts, tools, capabilities and facilities of hypertext programming has a strong positive effect on acquiring the simple MaT skills and tasks of the EE."
CURRENT CHALLENGES/PROBLEMS FACING THE ORGANIZATION
Because of the importance of the EE in all educational sectors and departments, it is necessary to have a category of experienced and well-trained personnel who can help keep this equipment in continuous, effective use in the teaching process. This can be achieved by keeping the EEs in good running order through maintenance, and by troubleshooting or repairing defective units when problems arise during operation. Often the EE develops only simple problems and faults. Although the responsible people (i.e., the ETs) take some undergraduate courses on the maintenance and troubleshooting of the EE, they may lack the practical skills and theoretical bases to perform the required tasks. This costs educational institutions and organizations time and money, since a maintenance engineer or specialist must be called in to treat even simple faults and problems. In other words, the ETs should be qualified for such purposes, which means that these technologists should acquire the skills of performing these simple MaT tasks. The problem with educational technology students is that they lack motivation to study the courses delivered in this regard. This might be due to a feeling that these tasks belong
Table 6: The Mean Difference (mean diff), Standard Deviation Difference (st. diff), and T-Test Values Corresponding to the Values of m1, m2, m3, and m4, Which Indicate the Means of the Students' Score Rates of the Control and Experimental Groups Before and After Applying the Proposed Hypertext Program, Respectively. The Significance Level (p) Is Less Than 0.00001 for All the Given Tests Except the First One (0.95).
Statistical test    (m1, m2)   (m3, m4)   (m2, m4)
mean diff             0.2        21         41
st. diff              2.62       3.18       1.12
t                    -0.06     -20.93    -115.45
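The reported group means and the post-test t statistic in Table 6 can be checked directly against the raw scores in Table 5. The following is a minimal sketch using only the Python standard library; it reproduces the reported means (57.3% and 78.3%) and the roughly 21-point mean difference. The exact t value depends on the variance-pooling convention, which the case does not specify, so an equal-n pooled formula is assumed here.

```python
from statistics import mean, stdev
from math import sqrt

# Post-test score rates copied from Table 5 (control vs. experimental group)
cg = [60.5, 56.1, 55.2, 54.5, 56.4, 59.1, 56.8, 55.0, 63.2, 56.6]
eg = [78.6, 78.6, 78.4, 79.1, 78.2, 77.0, 77.0, 77.3, 77.7, 80.7]

def t_independent(a, b):
    """Independent-samples t statistic, assuming equal sample sizes
    and a pooled sample variance (one common convention)."""
    n = len(a)
    pooled_var = (stdev(a) ** 2 + stdev(b) ** 2) / 2
    return (mean(a) - mean(b)) / sqrt(2 * pooled_var / n)

m3, m4 = mean(cg), mean(eg)   # close to the reported 57.3% and 78.3%
t = t_independent(cg, eg)     # large negative t: experimental group scores higher
```

The sign convention matches Table 6 (control minus experimental), which is why the reported t values are negative.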
Developing a Hypertext Guide Program 71
to another area of specialization (i.e., engineering), and consequently to the limited interaction between the students and their instructors (who usually come from the faculty of engineering). To solve these problems, the educational technology department (in the faculty of education or specific education) should have the infrastructure, facilities, teaching staff and methods that help and motivate the students to acquire the mentioned skills in an easy and effective way. The institution should also plan for these challenges. The present program is a step in that direction, and applying and implementing similar programs in different educational settings is recommended. This means that the institution should equip its computer laboratories with complete facilities for hypermedia programs, such as video and audio facilities. These recommendations bring new challenges and problems of their own, including financial, technological and pedagogical ones. Solving them could, however, be scheduled according to a plan that takes the organization's other plans into account.
FURTHER READINGS
Ali, A.M.A.M. (1999). Maintenance of the educational equipment: Practical side, Department of Educational Technology, Al-Azhar University, Cairo, Egypt.
Ali, A.M.A.M. (1999). Maintenance of the educational equipment: Theoretical bases, Department of Educational Technology, Al-Azhar University, Cairo, Egypt.
Ali, A.M.A.M. (1994). The practical skills in the field of educational technology, Department of Educational Technology, Al-Azhar University, Cairo, Egypt.
Black, B.R. (1989). "Troubleshooting micro's", ERIC Document ED 319 365, 13.
Flesher, J.W. (1993). "An exploration of technical troubleshooting expertise in design, manufacturing, and repair contexts", Journal of Industrial Teacher Education, 31(1), 34-56.
Foshay, W.R. (1989). "What we know (and what we don't know) about training of cognitive strategies for technical problem-solving", Journal of Structural Learning, 10, 111-125.
Hawaii State Department of Education (1993). An inventory of skills and knowledge necessary for a career as a data processing equipment maintenance technician, computer technician, or computer repairer, Hawaii State Board for Vocational Education, Honolulu, ERIC Document ED 365 801, August 1993.
Idaho State Department of Education (1990). Industrial maintenance technology: A technical committee report, Division of Vocational Education, ERIC Document ED 346 270.
Johnson, S.D. (1989). "A description of expert and novice performance differences on technical troubleshooting tasks", Journal of Industrial Teacher Education, 16, 19-37.
Johnson, S.D. (1991). "Training technical troubleshooters", Technical and Skills Training, 27(7), 9-16.
Johnson, S.D., Flesher, J.W., and Chung, S.P. (1995). Understanding troubleshooting styles to improve training methods, ERIC Document ED 389 948, December 1995.
Johnson, S.D., Flesher, J.W., Jehng, J.C., and Ferej, A. (1993). "Enhancing electrical troubleshooting skills in a computer-coached practice environment", Interactive Learning Environments, 3(3), 199-214.
Lodahl, D. (1989). Electronics troubleshooting: High-technology training module, ERIC Document ED 347 372, September 1989.
Ontario Ministry of Skills Development (1991). Building maintenance mechanic: Apprenticeship training standards, ERIC Document ED 340 831, May 1991.
Rasmussen, J. and Jensen, A. (1974). "Mental procedures in real life tasks: A case study of electronic troubleshooting", Ergonomics, 17(3), 293-307.
Sappe, H. and Squires, S.S. (1989). Electrical distribution: Project report phase I with research findings, University of Georgia, Athens, Division of Vocational Education, ERIC Document ED 350 466.
Sappe, H. and Squires, S.S. (1989). Instrumentation technology: Project report phase I with research findings, University of Georgia, Athens, Division of Vocational Education, ERIC Document ED 350 466.
Schendel, J.D. (1994). "Training for troubleshooting", Training and Development, May, 89-95.
United Cerebral Palsy Assoc., Inc. (1995). Ideas for organizing, storing, and using equipment/materials, Washington, DC, ERIC Document ED 411 645.
REFERENCES
Anton, S.Y. Lam and Chang, C.S. (1992). "Prototype of a courseware production and presentation system", Educational Technology, xxxii(4), 20-27.
Bassetti, O., Pagani, D. and Smyth, M. (1991). "Applications navigator: Using hypertext to support effective scientific information exchange", Hypertext '91 Proceedings, December 1991, 411-416.
Carr, C. (1988). "Hypertext: A new training tool?", Educational Technology, xxviii(8), 7-11.
Conklin, J. (1987). "Hypertext: An introduction and survey", Computer, 20(9), 17-41.
Franklin, C. (1988). "Hypertext defined and applied", Online, May 1989, 37-47.
Jonassen, D.H. (1988). "Designing structured hypertext and structuring access to hypertext", Educational Technology, xxviii(11), 13-16.
Lai, P. and Manber, U. (1991). "Flying through hypertext", Hypertext '91 Proceedings, December 1991, 123-131.
Nelson, T.H. (1978). "Electronic publishing and literature", in E.C. Deland (Ed.), Information Technology in Health Science Education, New York: Plenum.
Nielsen, J. (1990). "The art of navigating through hypertext", Communications of the ACM, 33, 15-26.
OWL International Inc. (1992). GUIDE: The Complete Electronic Document Design and Publishing Tool, Version 3.1, GUIDE operating manual, OWL International Inc.
Rahouma, K.H., Astleitner, H. and Zinterhof, P. (1999). "Developing a hypertext tutorial platform for microcomputer maintenance and troubleshooting", Proceedings of the ISIMADE Conference, Baden-Baden, Germany.
Rahouma, K.H., Zinterhof, P. and Astleitner, H. (2000). "Hypertext: Some general aspects and considerations", invited paper, Proceedings of the 12th International Conference on Systems Research, Informatics and Cybernetics, Baden-Baden, Germany, July 31-August 5, 2000. (This paper was chosen as the best paper submitted to the conference and nominated for the best paper award of the society of cybernetics.)
Wei, C. (1991). "Hypertext and printed materials: Some similarities and differences", Educational Technology, xxxi(3), 51-53.
BIOGRAPHICAL SKETCHES
Kamel Rahouma is an assistant professor at the Electrical Engineering Department, Faculty of Engineering, Minia University, Minia, Egypt. He is currently visiting the Research Institute for Software Technology, University of Salzburg, Austria. Dr. Rahouma's main work is teaching the various courses of electrical and communications engineering as well as engineering mathematics. His scientific contribution is mainly focused on four areas of research: (1) hypertext and hypermedia educational applications, (2) cryptography and computer security, (3) design of smart card systems, and (4) new emerging communications technologies, including the Jini, PnP, Bluetooth, and JetSend technologies. Dr. Rahouma also joined the Salzburg Research group, where he implemented some Java applications with the SunTrec team in TechnoZ, Salzburg, Austria. Dr. Rahouma is a chartered engineer with the British Engineering Council, an Associate Member of the Institution of Electrical Engineers (IEE) in the UK, a member of the Society for Computer Simulation (SCS) International in San Diego, USA, a member of the Institute of Electrical and Electronics Engineers (IEEE) in the USA, and a member of the Egyptian Society of Electrical Engineers in Egypt.
Peter Zinterhof received his doctoral degree in Mathematics, Physics, and Philosophy from the University of Vienna in 1968 and his Habilitation (tenure for Mathematics) at the Technical University of Vienna in 1972. Since that time, he has worked as a Professor of Mathematics at the Institute of Mathematics at the University of Salzburg, Austria. From 1994 until 1999 he was the head of the Institute of Mathematics; in 1999 he founded the Institute of Scientific Computing and has since served as its head.
IS Strategy at NZmilk 73
IS Strategy at NZmilk
Paul Cragg, University of Canterbury, New Zealand
Bob McQueen, University of Waikato, New Zealand
EXECUTIVE SUMMARY
NZmilk is a small, fresh milk supplier that is contemplating using IS to a greater extent to become more competitive. Following deregulation of the industry in 1997, supermarkets and home delivery contractors could purchase milk from wherever they chose, rather than from a designated local manufacturer. This had opened up both competition and expansion opportunities within the industry. NZmilk recognised that they would have to fight hard to retain and increase their share of the market. They had already lost some of their local market to competitors coming in from outside their region, but had also gained a contract to supply Woolworths supermarkets outside their traditional market area. Improvements to production facilities and distribution systems were in place, but NZmilk knew that a fresh look at how they interacted with their customers would be needed. Their general manager was convinced that information systems had a greater role to play at NZmilk beyond just the accounting and order processing that was presently undertaken. A new direction in using information systems to support NZmilk's rapid growth and new strategy was needed, but he was unsure of which way to proceed.
BACKGROUND Whangarei Milk Company was formed as a private company in 1946 to supply home delivery milk in the growing town of Whangarei. In 1990, the company changed its name to NZmilk, and became a fully owned subsidiary of Northland Milk Products, an established, progressive dairy cooperative operating in the Northland region of New Zealand. This relationship with Northland Milk had brought benefits in terms of a guaranteed supply of whole milk. Previously, a number of dairy farms were directly contracted to supply NZmilk 365 days of the year, so NZmilk had to make use of all the milk provided each day from these suppliers. Now NZmilk could request the volume of milk it required by obtaining a milk tanker delivery from Northland’s major processing factory (during most of the year) on relatively short notice. Another advantage of the association with Northland Milk Products had been the ability to call on their resources when needed, particularly in the managerial, technical and financial areas. The parent company required NZmilk to submit monthly reports on their operations, and any major initiatives required approval from the Directors of Northland Milk Products. Copyright © 2002, Idea Group Publishing.
74 Cragg & McQueen
By 2000, NZmilk had become the fourth largest supplier of fresh white milk in New Zealand, with annual sales of $25 million. Milk had always been the heart of their business, but they had recently increased their product range to include fruit drinks and fruit juices, and were considering developing other food products to add to their product range. NZmilk occupied a modern plant on the outskirts of Whangarei, in one of the fastest growing regions of New Zealand. It employed 80 people, plus a distribution system involving an additional 36 vendors. These vendors were self-employed contractors who delivered on a daily basis to supermarkets, convenience outlets, and homes.
SETTING THE STAGE
Up until 1997, the home delivery of milk had been tightly regulated. Licensed local processors set the retail price of milk but were compelled to provide a home delivery service regardless of economics. Each home delivery milk processor had sole rights to a district. For NZmilk, this effectively meant no competitor could supply milk into the Whangarei region. Any firm could compete outside their restricted territories with other products like flavoured milk and yogurt. However, fresh milk was still the major product sold. Although the fresh white milk industry in New Zealand was worth about $400 million per year, milk consumption was slowly falling and losing market share to other beverages. Sales of flavoured milk were helping to slow the decline. New Zealand's largest dairy company, New Zealand Dairy Group (NZDG), with revenues of $2 billion, was mainly focussed on the export of powdered milk, butter, cheese and other manufactured milk products, but also had a dominant market share of the pre-deregulation fresh white milk market in New Zealand's North Island, where about 80% of the New Zealand population of 3.5 million lived. NZDG's stated strategy was to become the low-cost leader in both the NZ domestic market and its export products. Deregulation was forced on the industry by the government, rather than the industry choosing deregulation. Many milk companies initially resisted deregulation, but some, like NZmilk, saw deregulation as a business opportunity and a way for the company to grow. After deregulation, milk companies began to supply their products into competitors' previously protected regions. Supermarkets were one target for additional sales outlets, but convenience outlets and home delivery drops in remote regions were less attractive, as more complex warehousing and distribution systems were required.
The move into markets outside their region had been anticipated when NZmilk changed its name, and was further reflected with the introduction of “NZ Fresh” as the Company’s major brand. Pricing was another area that had changed. Prior to deregulation, pricing was controlled to the extent that prices in supermarkets had to be within three cents of the price of home-delivered milk from vendors. Deregulation removed such controls. At times, competitors had cut prices of milk, particularly during the spring and summer when milk was plentiful. This meant that pricing policies had to be flexible and able to respond to competitive pressures in the marketplace, particularly in supermarkets. Home delivery vendors had seen further erosion of their sales to homes. Prior to deregulation, supermarkets supplied less than 10% of fresh white milk, with the balance through home delivery and convenience outlets. The supermarkets’ share had risen considerably since deregulation. Various initiatives were taken to protect home delivery systems. For example in Nelson, most home delivery vendors had purchased hand-held computers so they could respond easily to changes in price and demand. Elsewhere, NZ Dairy Group had begun rationalising its distribution system by reducing the number of route contractors and amalgamating various supply companies. While supermarkets now had the advantage of a number of suppliers eager to sell them fresh white milk at competitive prices, they were not solely interested in stocking the lowest priced product. Reliability of supply, product quality (freshness), ease of ordering and obtaining product quantities
matched to daily store demand, delivery frequency and ability to minimize the paperwork required for head office payments were all part of the equation.
CASE DESCRIPTION
NZmilk had grown from a small home milk supply company providing a single product in glass bottles for a local market, into a progressive, highly sophisticated, multifaceted organization which even manufactured its own plastic milk containers. Every day, tankers brought about 85,000 litres of fresh milk to the NZmilk plant. Within only a few hours of arriving, most had been processed into full cream, homogenised or low-fat milk varieties, packaged in plastic bottles or cardboard cartons, and trucked for delivery to retail outlets and homes around the region. The product range included fresh cream, yoghurt and flavoured milks. Developing high standards of product quality was an important priority. NZmilk had established a quality assurance section which changed the emphasis from a random sampling "quality control" philosophy into an ISO 9002-accredited total quality management (TQM) program for ensuring top quality products. The emphasis on quality had helped NZmilk win a major contract to manufacture milk shake and soft-serve ice cream mix for McDonald's restaurants throughout the North Island of New Zealand. Another international food company, Cadbury Schweppes, had its famous Roses Lime Juice bottled on contract by NZmilk. Innovation was another important characteristic of the company. NZmilk manufactured its plastic bottles for its own use, but had sufficient excess blowmoulding capacity to be able to sell containers to outside firms. It had pioneered a scheme for the collection of plastic milk bottles, which were then recycled by plastics manufacturers into other plastic products; the scheme had since been successfully copied by other milk processors throughout the country. Their in-house product research and development programme had produced a New Zealand first in honey-sweetened natural yoghurt. NZmilk planned to grow by competing on quality and service to extend their sales through supermarkets while defending their current local vendor network.
Sales of fresh white milk had been falling since the mid-70s, but it was still a profitable market. The home delivery market might fall by another 50% in the next 10 to 30 years, but there were large barriers to new entrants as good delivery systems were essential. The market for home delivery and convenience outlets seemed to be relatively price insensitive. NZmilk had seen a decline in volume sold through independent home delivery vendors, with a corresponding reduction in the number of vendors from 45 in 1991, to 36 in 1995, to 20 by 2000. Sales in 2000 averaged 300,000 litres per vendor. The selling price of products to vendors varied by the type of end customer. Prepayments were made to NZmilk by vendors weekly, and at month-end vendors reported the exact number and type of product sold in each category. A credit or debit was then issued with payment due on the 20th of the following month. The sales through their main channels for 2001 were expected to be about 25% to home delivery via vendors, 25% to local shops and small supermarkets delivered by vendors, and the rest to large supermarkets through bulk delivery and direct invoicing. Total sales for 2001 were expected to be 21 million litres of milk, and 4 million litres of other products. However, with growing interest in Internet shopping, and in particular the trials of grocery shopping over the Internet which highlighted the requirement for effective home delivery logistics, NZmilk saw a potential opportunity to supplement the home delivery of milk with the delivery of other grocery products. Thus, maintaining a healthy and profitable existing vendor network was important to both current operations and possible future strategic initiatives, and could not be discarded lightly.
CURRENT CHALLENGES Doug Allen was NZmilk’s Sales and Marketing Manager, and William Edwards the Sales Manager. The customer relationships they had to manage were with the home delivery vendors, the
owners of local shops and small supermarkets, and the buyers, product managers and local store deli managers of the large supermarket chains. There were a host of daily problems and complaints that had to be handled promptly and professionally which soaked up most of their time each day. In addition, they had to try and look to the future expansion of sales volumes in the context of a steady annual decrease in per capita consumption of milk products, and increased potential for competition in their own local area. Liaison with the 20 vendors was one of William’s responsibilities. It was important not to take these vendors for granted, as they were the point of contact with NZmilk’s customers and consumers. Most vendors were small operators. The owners usually drove the delivery trucks themselves, hired some students to assist with the evening deliveries and did the paperwork when they returned home at night. Some of these vendors had computers to assist with their accounting, but there was a wide spectrum of computer capability, and computer-to-computer links were only possible with some vendors. Doug and William were aware that reducing administrative burdens of invoice checking and reconciliation was a high priority for supermarket managers. At Woolworths, shipping dockets had to be initialled by department managers, and then reconciled in store with invoices submitted by suppliers. The invoices were batched, and submitted to the head office in Auckland, where the invoices were collected together with others from that same supplier, batched together and then approved for payment. While it seemed at odds with the need to reduce administration, the supermarket head office was indicating that daily invoicing of goods received at each store was the way of the future. Daily invoicing would allow better control over in-store stocks, and avoid some of the batching and reconciliation steps presently required. 
Some snack food vendors had instituted off-truck invoicing, where an invoice was printed by the company truck driver from a computer in the truck as the goods were delivered. This allowed for flexibility in meeting the needs of the store at the exact time the delivery was made. However, this system was somewhat less attractive for perishable products like milk which had short duration “best before” dates. There had been some discussion of electronic data interchange of invoices between suppliers like NZmilk and the supermarket head offices. However, the transfer of the invoice was only a small part of the process of ordering, checking of incoming goods and reconciliation of invoices that was undertaken at each individual supermarket, and systems that supported the whole process electronically seemed still some time away. Most stores now had checkout scanning systems that read the bar code label on each product, and kept track of how much of every product went through the checkouts every day. NZmilk had been assigned unique bar codes for each of its products and sizes, so it was theoretically possible to tap into these checkout systems, either to individual stores or to the head office host computers to determine volumes of product sold each day and how much inventory remained in the store. However, it was an area of rapid change for the major supermarket chains, and it was not known whether they would be keen to provide access to their computers to outside firms like NZmilk. A further complication in this area was that the “best by” date was not presently bar coded, so that tracking of shrinkage from in-store inventory could not be exactly matched to the daily deliveries made.
Manufacturing Facilities
NZmilk's plant contained four filling production lines where containers were filled, labelled, capped and crated. Two lines were dedicated to plastic bottles, one to cardboard containers and the fourth to fruit drinks. Other parts of the factory manufactured plastic bottles and food mixes. Once packed in crates, the milk was transferred to a cooler from which orders were picked and assembled for their customers. The fridge pickers started at 6.30 am, and the first truck for the day would leave at about 8.30 am. Two other large trucks left at about 11 am for local vendors and the town of
Kamo, and smaller trucks for local destinations left at 9.30 am, 10.30 am and 12.30 pm. Other trucks departed for other towns during the afternoon, some only on alternate days. During the evening, one large truck left at 10.30 pm for Auckland. All loads went as a mix of specific orders for vendors or supermarkets, with the exception of the Auckland load for Woolworths, which was a bulk order which was then broken down for specific supermarkets. Often what was delivered to individual Woolworths supermarkets was different from what was ordered the day before, following a last minute telephone call to the Auckland depot to change the order. Seven lock-up depots were located in the Whangarei region, but each vendor had their own secure area for collection at their convenience. John Tobin was the production manager overseeing all of the manufacturing areas, as well as raw material and finished goods stock holding, and distribution. There were 34 production staff, plus nine in blowmoulding. John had implemented changes in the manufacturing process to reduce unit costs, but more efficiency could be gained if longer runs and fewer setups were possible. He was concerned that the production schedule seemed to be disrupted every two or three days in order to meet urgent delivery needs. The daily production schedule determined which products and sizes were to be run on which packaging line. The daily plan often had to be made before all orders for that day had been received, and what went out on a truck was a mix of product that had been in the fridge from the previous night’s production, and what was just coming off the packaging lines as the order was assembled. Vendors and other customers preferred to receive consignments where all product had the same “best before” date. Therefore, production was sometimes disrupted to change the date back one day to be consistent with stocks held overnight. 
Such “best before” dates were conservative estimates; if kept refrigerated, milk could last considerably longer. Because of this dating scheme, it was impossible to stockpile product for future use. What went out on a given day had to be packaged that day or the night before, to the requirements of orders just received that day. Once all orders for the day had been received (sometimes as late as 11 am), it took time to work out if there was likely to be a shortage of any product or size. Therefore, the schedule was not finalised until well after the production run for the day had started. If a shortage was expected, then two changeovers were required; one to run the shortfall product, and another to return to the original plan. Although a shortage was never ignored, mixed “best before” dates were tolerated to avoid additional setup delays, despite concerns expressed by the marketing staff. These disruptions extended production times and led to total overtime hours of 170 to 200 hours per week. John felt that better production planning could reduce overtime to 50 to 70 hours per week. Fridge capacity was limited, but workable. However, the layout presented difficulties to pickers, with only 90% of orders being picked correctly. This led to one or two complaints each day from customers, so NZmilk was investigating total automation and a move to just-in-time manufacturing. However, initial enquiries suggested that this would require an investment of at least $500,000 in additional packaging and computing equipment. As a result, it seemed that NZmilk was unlikely to move in this way within the next three years.
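The scheduling dilemma described above — a daily plan fixed before all orders arrive, with any shortage forcing two extra changeovers — can be sketched as a simple shortfall check. All product names and quantities below are illustrative assumptions, not figures from the case.

```python
# Hypothetical sketch: once all orders are finally in, compare them against
# fridge stock plus the day's planned run to see whether extra changeovers
# are needed. Products and volumes are made up for illustration.
def shortfall(orders, fridge_stock, planned_run):
    """Units still missing per product after fridge stock and today's planned run."""
    gaps = {}
    for product, ordered in orders.items():
        available = fridge_stock.get(product, 0) + planned_run.get(product, 0)
        if ordered > available:
            gaps[product] = ordered - available
    return gaps

orders      = {"full_cream_2L": 9000, "low_fat_1L": 4000, "flavoured_600ml": 1500}
fridge      = {"full_cream_2L": 2500, "low_fat_1L": 1800, "flavoured_600ml": 1600}
planned_run = {"full_cream_2L": 6000, "low_fat_1L": 2500}

gaps = shortfall(orders, fridge, planned_run)
# Any entry in `gaps` forces two changeovers: one to run the shortfall
# product, another to return to the original plan.
```

Because orders sometimes arrived as late as 11 am, this check could only be run well after the production run had started, which is exactly why the schedule was so often disrupted.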
Order Entry
Vendors and major supermarkets were expected to fax, email or phone their daily order to NZmilk, either the night before or up to 10 am on the day of delivery. Order data was collected in both the factory office and the front office. When Diane McCann arrived to work in the factory office at 5 am, she collected the order faxes and the messages left on the answering machine overnight. Some orders had to be ready by 6.30 am for loading the first truck. She analysed the orders just received to check the production plan that had been prepared for that day. Many orders for afternoon deliveries were taken during the morning of the same day. Local vendors telephoned, faxed or emailed their orders to Diane directly, while Jane Roberts and one other in the front office typically spent two hours every morning telephoning the 30
Woolworths supermarkets to obtain each store's order details. They rarely got through to the right person (such as the deli manager) on the first call, and often had to wait on the phone while the person was located, the store's needs worked out and the order finalised. If the deli manager was not available, they had to deal with one of the junior employees in that area, which sometimes caused over- or under-ordering of required products. Jane spent a further hour each day collating the Woolworths order data. An accurate picture of the orders to be manufactured and shipped that day was often not available until late in the morning, by which time the planned packaging run of some products had already been completed. Additional short runs of those products might then have to be done late in the day, entailing significant time to clean out and reconfigure the packaging lines.
Invoicing and Accounts Receivable Control Joan Proudfoot was in charge of NZmilk’s major computer system, which was called Milkflex. Because of the very different business processes and billing arrangements with vendors that had arisen from the previously regulated environment, off-the-shelf order and billing systems were not suitable for NZmilk’s needs, and the development of a custom system had been required. Milkflex had been developed about five years previously by a local software services company in Dataflex, a PC database language. Milkflex had been designed to specifically incorporate NZmilk’s existing business processes during the time of government regulation. The major function of Milkflex was to produce invoices and accounts receivable reports, after sales order data had been entered from the order sheets. Milkflex was originally written to invoice home delivery vendors, but was modified to include centralized weekly billing for Woolworths, and to incorporate different pricing policies. About 50 supermarkets were invoiced weekly, and about 65-70 other customers invoiced monthly. The unique system of discounts and credits for the home delivery vendors complicated this billing cycle. Furthermore, as supermarkets were billed weekly, but often supplied with product from vendors’ trucks, these orders were initially entered into order forms by vendor, later entered into Milkflex under the vendor code, then later recoded by supermarket. Typically, vendors were charged for what they ordered, while supermarkets were charged for what they sold. Other changes had been made to Milkflex over the years. The range of reports was extended to include daily stock reports and a production plan based on stock on hand and past sales. Monthly sales analysis reports were typically available by the middle of the following month. Further minor changes were still outstanding; for example, it was not easy to prepare monthly summaries by supermarket. 
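The recoding step described above — supermarket deliveries first entered under a vendor's code, then recoded by supermarket for the weekly invoice — can be sketched as follows. All names and quantities are invented; this is a sketch of the logic, not of Milkflex itself.

```python
# Hedged sketch of regrouping vendor-coded order lines into weekly
# supermarket invoices (vendor codes and store names are hypothetical).
def recode_by_supermarket(order_lines):
    """order_lines: list of (vendor_code, supermarket, product, units).
    Lines with supermarket=None are vendor home-delivery sales, billed to
    the vendor, so they are excluded from the supermarket invoices.
    Returns {supermarket: {product: units}}."""
    invoices = {}
    for _vendor, supermarket, product, units in order_lines:
        if supermarket is None:          # vendor's own round, not recoded
            continue
        lines = invoices.setdefault(supermarket, {})
        lines[product] = lines.get(product, 0) + units
    return invoices

lines = [
    ("V07", "Woolworths Kamo", "2L std", 120),
    ("V07", None, "2L std", 300),        # home delivery, charged to vendor
    ("V12", "Woolworths Kamo", "2L std", 80),
]
weekly = recode_by_supermarket(lines)
```

Note how two different vendors' deliveries to the same store are merged into one weekly line, mirroring the "vendors charged for what they ordered, supermarkets charged for what they sold" split.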
Joan found that Milkflex worked well for her, and wondered why production and marketing were not keen on their new reports, after asking her for them for months. Angela and Joan assisted Les Brown, the financial controller, with the wages each week, using a Milkflex module, which worked well, despite limited reporting functions and very demanding formatting.
Financial Planning and Control NZmilk had been using modules of the standalone PC-based Charter accounting package since the early 1980s for creditors, general ledger and fixed assets. The system had been upgraded in 1995 after Les Brown looked for a more user-friendly product, but decided to stay with Charter as the only obvious replacement (CBA) could not be supported locally. Only Les and Joan had access to Charter, and much of the data came from the Milkflex system printouts and was transferred manually. Data input times were typically less than 15 minutes each time. The Charter system met most accounting needs, although its reporting features were very limited. NZmilk had about 600 creditors accounts, and 150 payables cheques were sent out each month. Les and Joan also used Excel spreadsheets to produce forecasts, budgets and plans, typically for use by the whole management team. Much of this data was extracted from other reports, either from Charter or Milkflex.
IS Strategy at NZmilk
Production Planning A new report from Milkflex had been created to assist with production planning. However, this report had not been accepted or used by production staff, as the computer data never matched the physical closing stock data. Instead, production staff used a manual approach to plan the day’s production, and hoped that Joan and Les would sort out the problems with the new report. As Robert Kokay put it, “The system seems to be right for the front office, but not user friendly for us”. Production planning was made even more difficult as there were high sales at weekends, but there were no deliveries on Sunday. Ideally they wanted to generate a production plan by packaging line, by product, by time of day, for every shift. Brian English and Diane McCann drew up the production plan late in the afternoon for the following morning. This had to be done when none of the next day’s final orders were known. Instead, data from the corresponding day two weeks earlier was used as the best estimate. A two-week period was used because some of the more distant vendors collected on alternate days, and some social security benefits were paid on a two-week cycle. The preparation of the production plan started after a physical stock-take. The planning task was demanding as there were about 50 to 60 products to consider, with most having to be made on a daily basis. Much of the afternoon was spent preparing the next day’s plan, and it took an hour just to determine the bulk milk needs for the following day. During the day, the plan was checked frequently as incoming order data became available. Revisions to the plan were made during the day if needed. It was desirable to finish a day’s production with sufficient finished goods in the refrigerator overnight to satisfy the first three truck loads the next day, but on 80% of days, this stock was inadequate to completely fill the orders for these trucks.
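The planners' rule of thumb — estimate tomorrow's demand from sales on the same weekday two weeks earlier — amounts to a simple lookup. The sketch below is illustrative only; the dates and figures are invented, and the fortnightly lag is taken from the case's rationale (alternate-day vendor collections and two-weekly benefit payments).

```python
# Illustrative sketch of the two-week-lookback demand estimate.
from datetime import date, timedelta

def estimate_demand(sales_history, plan_date):
    """sales_history: {date: {product: units}}.
    Returns sales from the same weekday two weeks earlier, or {} if
    no record exists for that day."""
    reference_day = plan_date - timedelta(days=14)  # same weekday
    return dict(sales_history.get(reference_day, {}))

history = {date(2001, 3, 5): {"1L std": 18000, "500ml cream": 900}}
plan = estimate_demand(history, date(2001, 3, 19))  # Monday, two weeks on
```

A 14-day lag (rather than 7) keeps both the alternate-day collection pattern and the fortnightly payment cycle in phase with the reference day.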
Part of the Milkflex system was designed to assist with the control of finished goods. However, it was not user friendly, failed to help when stock data did not balance and had a poor data entry screen.
Quality Assurance The Quality Assurance manager, Tony Fineran, had a staff of two. The team conducted tests in the laboratory on samples of both raw and finished product. The results were analysed and presented on typed sheets, but Tony was trying to make greater use of a PC spreadsheet. Quality reports were also available, but only McDonalds requested them for the products made for them. No data could be automatically transferred from the testing equipment to the computer. As a result, Tony was concerned that some patterns/trends might go unnoticed. Furthermore, Tony received about 200 complaints from customers each year, and a further 150 from staff at NZmilk, all of which had to be handled. His team dealt mainly with serious cases, and had time to investigate only some of the others. Tony suspected there was an opportunity to use computers to assist in the monitoring and handling of complaints, which was currently a paper-based system.
Present Use of Information Technology Up to this point, NZmilk could not have been considered an intensive or strategic user of information technology. The early systems installed were financially focused, and to a large extent were automations of existing manual billing and accounting systems. The firm relied on external local firms to supply software, hardware, programming and technical support when needed. Only a few employees were knowledgeable about the systems in operation, and they generated ideas and extensions for the custom software, which were then contracted out to an external firm. The focus of these changes was on tailoring the software to fit existing business processes and manual procedures, rather than on redesigning business processes (supported by IT) to support company strategy. Most changes were intended to make life easier or provide additional reports for the internal employees interacting with the system, rather than to provide information useful to external customers.
Major hardware and software additions occurred in early 1991, when NZmilk installed a network of Windows-based PCs, in early 1995 when Milkflex was expanded and in 1998 when the Internet began to be used for email and WWW browsing. There were no employees of NZmilk dedicated to support applications or develop software, although the hiring of an IT coordinator was being considered. Some training of employees on IT packages had taken place. For example, Les Brown and Joan Proudfoot had spent some time with consultants from Baycom, looking at the ways to change Milkflex report formats. Otherwise, most had learned by being users of the packaged software, and a few were able to specify changes required for custom applications like Milkflex, but the company had to rely on outside people to provide anything beyond these basic functions. Ed Doughty was the local agent for the Charter accounting package, which had about 60 sites installed in New Zealand’s North Island. He had an accounting background, and believed that NZmilk’s needs were unique because of their vendors and their billing requirements. The Charter package used by NZmilk was written in QuickBasic, and could accept data files straight into the General Ledger. There was a Bill of Materials module which might be able to help with bulk milk forecasting, but there was no easy way within Charter to use order data to determine a production plan. The general ledger and creditors modules that NZmilk were using were the latest versions. Other modules existed for Charter which were not presently used by NZmilk, such as a financial report writer. There was also a Global Report Writer, but this was for application developers rather than end-users. Modules averaged about $1,300 each. An external person who had regular contact with NZmilk was Hugh Gurney, who used to work for Whangarei Milk, and developed the first version of Milkflex after forming his own software company called Baycom.
Baycom provided support to NZmilk and a large number of other clients. Baycom used various tools for system development, including Dataflex, a 4GL relational database programming language designed for experienced software developers. Baycom also sold hardware and other software, and had installed the Novell Netware software at NZmilk. Over the years, Baycom had built various systems for their clients using Dataflex, including modules for general ledger and creditors, but not fixed assets. Some of these could be adapted for use by NZmilk if required. Baycom also had products for legal practices, as well as EDI experience with a local bakery for order entry via modem. Hugh Gurney’s business partner Graham Jackson felt that NZmilk’s needs were unique, so no existing product could be used to meet all of their needs. Graham regretted missing out on NZmilk’s contract for the upgrade to their local area network, and attributed it to various factors, including spending too little time determining requirements, and not providing top service to NZmilk at times. He would like to have extended Milkflex beyond the upgrades, and saw this as a viable option rather than NZmilk trying to develop their own systems. He was keen that NZmilk should retain Dataflex as their base technology, and would be happy to offer training so that users could more effectively use the Milkflex report writer for simple inquiries. FlexQL would be needed for more complex relational queries, but this had a steeper learning curve. Neil Dickie, a local independent consultant, won a recent contract to supply and install three new PCs for NZmilk, and a notebook for Maurice Lloyd, the general manager. When extending the network, Neil noticed that the network required better configuration. Security and access were not well set up or managed, and all the Microsoft files were in one directory. He also wondered why some applications were set up to work on only one PC.
Neil expressed interest in spending a day sorting out these problems, which likely resulted from the network being set up by numerous people at various times, with no plan in mind.
Potential Uses of Technology Maurice Lloyd was convinced that information technology could play a key role in NZmilk’s growth strategy. There were a number of exciting ideas for using computers which included:
• EDI with supermarkets, although it seemed that firms like Woolworths were not likely to force this, at least within the next few years.
• Internet-based ordering with vendors and supermarkets (some already placed orders via email).
• Invoicing at point of sale through in-truck computer systems.
• Support for production planning and forecasting decisions.
• A fully automated warehouse.
• Business process reengineering.
• A telemarketing system to contact stores, solicit orders and sell additional products.
• Addition of home delivery of groceries via telephone or computer ordering.
Pressure for Change There had been approaches by several New Zealand representatives of manufacturing and distribution software packages (both MRP II and ERP), and the operations and sales people in NZmilk were clearly interested in looking at what might be done. Maurice Lloyd had been exposed to the use of IT for strategic advantage in the part-time MBA degree he had been undertaking, and had also been following reports in trade magazines about manufacturing and distribution software systems, the industry-transforming impact of the Internet, and the rise of customer relationship marketing. Their parent company, Northland Milk, also had some experience with the purchase of both packaged and custom-developed software, primarily in the manufacturing and financial areas. However, Maurice was unsure whether their focus should be on solving their manufacturing and distribution problems with a tried and proven off-the-shelf packaged system tailored and modified to their requirements, or whether they should take a step back and try to understand the impact of the new ways they might be doing business in the future, and the new trust and information exchange relationships they might develop with both present and future customers and business partners. He could see that a significant investment in IT was looming, and was probably critical for their survival and growth in an increasingly competitive market. However, the potential was also there for an expensive disaster if an appropriate and realistic path was not taken. The key question he kept turning over in his mind was: “How should I get this process underway?” But he was not sure how to proceed, what process he should follow and whom he should involve, including who should be project leader. If NZmilk was to grab the opportunities that were available, and avoid the pitfalls, he had to make the right decision.
APPENDIX: PRODUCTION CAPACITY Typically the filling lines are working from 5 am to 2:30 pm. 55,000 litres of milk are held overnight. Daily supply averages 85,000 litres, typically from 24,000 litre milk tankers. Raw milk is tested for quality on delivery. Typically only three or four loads are rejected per year, and problems occur rarely during peak periods. The blowmoulding plant works 24 hours per day, with one machine producing 1 litre bottles, the other 2 litre bottles. Bottles require 4 hours to cool and shrink, otherwise they expand and thus take more milk to fill. Filling capacities are:
Line one: 60 units per minute (2 litre) or 112 units per minute (1 litre)
Line two: 38 units per minute (1 litre) or 75 units per minute (300 ml)
Line three: 25 units per minute (1 litre carton)
The production cycle moves from low fat milk first through to higher fat content products. During the last few years changeover techniques have improved, so that changeover times are now a maximum
of 5 minutes rather than the previous 15 minutes. The change to cream requires a flush of the system, taking about 20 minutes. On average, there are about eight product changes per line per day.
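The appendix figures allow a rough back-of-envelope capacity estimate per line. The sketch below is purely illustrative arithmetic from the numbers above (a 5 am to 2:30 pm shift, roughly eight changes per day at a 5-minute maximum); it is not a calculation from the case itself.

```python
# Back-of-envelope daily filling capacity from the appendix figures.
SHIFT_MINUTES = 9.5 * 60        # 5:00 am to 2:30 pm
CHANGEOVER_LOSS = 8 * 5         # ~8 product changes/day, 5-minute maximum each

def daily_capacity(rate_per_minute):
    """Maximum units a line could fill in one day at a single rate,
    ignoring cream flushes and other stoppages."""
    return int((SHIFT_MINUTES - CHANGEOVER_LOSS) * rate_per_minute)

line_one_2l = daily_capacity(60)    # line one running 2-litre bottles
```

At 60 units per minute over the remaining 530 minutes, line one could fill on the order of 31,800 2-litre bottles in a day, an upper bound rather than a realistic throughput.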
FURTHER READING AND REFERENCES
Bergeron, F. & Raymond, L. (1992). Planning of Information Systems to Gain a Competitive Edge. Journal of Small Business Management, 30, 21-26.
Currie, W. (1995). Management Strategy for IT: An International Perspective. Pitman.
Earl, M.J. (1989). Management Strategies for IT. Prentice-Hall.
Earl, M.J. (Ed.) (1996). Information Management: The Organizational Dimension. Oxford UP.
Galliers, R.D. (1991). Strategic Information Systems Planning: Myths, Reality and Guidelines for Successful Implementation. European Journal of Information Systems, 1(1), 55-64.
Horne, M., Lloyd, P., Pay, J. & Roe, P. (1992). Understanding the Competitive Process: A Guide to Effective Intervention in the Small Firms Sector. European Journal of Operational Research, 56, 54-66.
Luftman, J. (1996). Competing in the Information Age. Oxford University Press.
Martin, E.W., Brown, C.V., DeHayes, D.W., Hoffer, J.A. & Perkins, W.C. (1999). Managing Information Technology: What Managers Need to Know, 3rd Ed. Prentice-Hall.
Papp, R. (2001). Strategic Information Technology: Opportunities for Competitive Advantage. Idea Group Publishing.
Robson, W. (1997). Strategic Management of Information Systems, 2nd Ed., Ch. 9, IS Resource Management. Pitman.
Thong, J.Y.L., Yap, C.S. & Raman, K.S. (1996). Top Management Support, External Expertise and Information Systems Implementation in Small Businesses. Information Systems Research, 7(2), 248-267.
Ward, J. & Griffiths, P. (1996). Strategic Planning for Information Systems, 2nd Ed., Ch. 8. Wiley.
WEB RESOURCES
Customer Relationship Management. Customer relationship management research centre. http://www.cio.com/forums/crm/other_content.html
Gartner Group report on CRM. http://www3.gartner.com/1_researchanalysis/executive/premierecrmmgt.pdf
Manufacturing Resource Planning (MRP II). A guide to MRP and ERP. http://www.bpic.co.uk/erp.htm
List of over 500 manufacturing software vendors. http://www.softmatch.com/manufact.htm
Site of a prominent manufacturing software vendor. http://www.qad.com/
Strategic Information Systems Planning. Information Technology Management Web. http://www.itmweb.com/
Milk industry in New Zealand. New Zealand Dairy Foods, a manufacturer similar to NZmilk. http://www.nzdf.co.nz/
New Zealand. General information about New Zealand. http://www.nz.com/
BIOGRAPHICAL SKETCHES Paul Cragg is an associate professor in information systems in the Faculty of Commerce at the University of Canterbury, New Zealand, where he teaches on the MBA programme, as well as within the B.Com., M.Com. and PhD degrees. Previously he was on the staff at the University of Waikato, New Zealand, and before that at Leicester Polytechnic, England. Cragg’s research centres on small firm computing. Current studies focus on IT alignment, benchmarking, IT sophistication, and
adoption and use of the Internet. He has published in many international journals including MISQ, EJIS, Information & Management, and JSIS. Bob McQueen is Canadian by birth (Toronto area), and New Zealander by choice. He has a BApSc in Electrical Engineering from the University of Waterloo, an MBA from Harvard Business School, and a PhD (Computer Science) from the University of Waikato. He is presently an Associate Professor in the Department of Management Systems, University of Waikato in Hamilton, New Zealand. He has been living in New Zealand since 1988. He has also worked with IBM in Toronto and Digital Equipment in Vancouver. Teaching is in the area of electronic commerce, information systems, and information technology in organisations. McQueen is an enthusiastic proponent of case method teaching as an effective inductive learning approach. Thirteen cases have been developed in the IT Policy area under his supervision, using the Harvard Business School approach, for teaching IT Policy at fourth year undergraduate and graduate (MBA) levels.
Implementing Information Technology to Effectively Utilize Enterprise Information Resources Yousif Mustafa and Clara Maingi Central Missouri State University, USA
EXECUTIVE SUMMARY This is a typical case of implementing information technology in order to assist an enterprise to effectively utilize its production information resources. The enterprise, a world-class leader in the pharmaceutical industry, currently keeps a large number of technical research reports on shared network media. These reports contain scientific specifications essential to the enterprise’s final products. In order to utilize these reports, a researcher has to navigate and literally read through each report to identify whether it is relevant to what he/she is currently working on. Often, researchers find it more feasible to create their own reports rather than waste time and energy on the searching process. Our solution to the problem is to create an information system which will keep track of these reports, provide a concise synopsis of each report, enable the researchers to search using keywords, and give a direct link to locate each report via a friendly Web-based user interface.
BACKGROUND The subject company is a world leader in life sciences focused primarily on two core business areas: pharmaceuticals and agriculture. Its dedication to improving life has been through the discovery and development of innovative products in the areas of prescription drugs, vaccines, therapeutic proteins, crop production and protection, animal health and nutrition. The company is also involved in the research, development, production, marketing and sales of organic and inorganic intermediate chemicals, specialty fibers, polymers, pharmaceuticals and agricultural chemicals. The company employs over 95,000 professional employees in more than 120 countries around the globe. Financial data are shown in Appendix A of this case.
SETTING THE STAGE The company uses SAP Enterprise Integrated Software. SAP integrates and automates business processes starting with the procurement of raw materials, human resources, manufacturing and ending with the sale of the finished products. In order to manage the organization, the Decision Support
Department frequently requires its employees, report developers, to generate various reports in response to numerous types of queries. These reports are the major source of information for the organization to make decisions at any level of management. However, these report developers are not permitted to access the SAP database directly, for the following reasons:
1. Direct access to the SAP database would greatly slow down SAP system performance.
2. The generic format and contents of the reports generated by SAP have no specific use for most users.
3. Reconfiguring SAP to generate specific reports is very expensive, since the system is huge and written in ABAP (a German programming language), which makes it even more expensive to hire a programmer who knows ABAP.
4. Reconfiguring SAP would make it more difficult for the organization to easily upgrade to newer versions of SAP.
Therefore, the organization decided to set up a process in which data from the SAP tables are automatically copied to DB2 tables. The DB2 tables are updated immediately whenever the SAP data changes. The SAP database is stored in Oracle tables on UNIX servers, while the DB2 database is kept on IBM DB2 servers. The company also decided to adopt a user-friendly report generator called Impromptu as its primary tool for accessing the DB2 database tables. These reports cannot be generated by running a simple query on the DB2 tables, because they often include computations which convert different sets of data into more complex information, such as calculating the cycle time for a product from the moment the raw materials are acquired in the warehouse to the moment the finished products are completed. This part of report generation takes the longest time, because the formulas created must be tested for accuracy. Due to the nature of the company, we are not at liberty to show samples of their actual reports.
However, we have attached some general-purpose sample reports that can be produced with the Impromptu report generator (see Appendix B). Impromptu report developers individually generate their reports and store them in a shared network location. Currently, there are more than 5,000 reports, and close to 60-70 are created daily. However, storing these reports in a shared network location is of little or no use to the Impromptu report developers. Each time a report is needed, developers often build an Impromptu report from scratch, even though a closely similar report may already be available on the network. Searching through the 5,000-plus reports is both time consuming and frustrating: a developer has to retrieve each report and read through it to determine whether or not it is relevant to his/her current needs. Almost all developers prefer to start from scratch rather than try to search the network. A single Impromptu report can be very costly, since each may take anywhere from 15 minutes to 12 months to generate, depending on its complexity. The cost of generating a report can be broken down into:
• Searching the database tables for the required fields.
• Analyzing and deciding on the logical combination of these fields, then generating the correct mathematical and statistical functions required for the report.
• Testing the accuracy of the formulas on the report.
• Fully documenting the report.
Each Impromptu report is saved in two formats, .pdf and .imr. The .pdf format is a snapshot of the report that can only be viewed using Acrobat Reader. The .imr format, on the other hand, is the executable version of the report, which can be “run”. The .pdf format is valuable because an Impromptu report developer can glance at it quickly to decide whether it is the report he/she needs. This is important because “running” an Impromptu report is a CPU-intensive operation.
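The cycle-time computation mentioned above — elapsed time from raw material receipt in the warehouse to completion of the finished product — is the kind of derived value an Impromptu report adds on top of the raw tables. The sketch below is illustrative; the function and timestamps are invented, not drawn from the company's reports.

```python
# Illustrative sketch of a cycle-time calculation (field names invented).
from datetime import datetime

def cycle_time_hours(raw_received, finished_done):
    """Elapsed hours from warehouse receipt of raw materials to
    completion of the finished product."""
    return (finished_done - raw_received).total_seconds() / 3600.0

hours = cycle_time_hours(datetime(2001, 6, 1, 8, 0),
                         datetime(2001, 6, 4, 14, 0))
```

Formulas like this, chained across several date fields and aggregated by product, are what make report generation slow and why each formula must be tested for accuracy.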
CASE DESCRIPTION The first step in our problem-solving approach is to explicitly and clearly identify users’ requirements. We used the personal interviews technique (described in Dennis and Wixom, 2000;
Hoffer, George, and Valacich, 1999; Osborne and Nakamura, 2000; Whitten, Bentley, and Dittman, 2001) with the system users, and identified the following two requirements:
1. Providing developers with the capability of documenting and saving their reports in a searchable manner.
2. Enabling the developers to search quickly and easily, via a user interface, for a target report using different search items (which will be discussed in greater detail later).
Our solution to the problem is to develop an information system that provides a rapid and easy tool to document, store, and search reports. Our system keeps a repository of searchable data for each report developed (in addition to the existing ones), including a link to the storage medium where the report was saved. The system enables developers to quickly search for any report by creation date, developer name, developing department, or a combination of keywords. In order to describe the processes of the new system, which we named the ImpromptuReport Dictionary, along with the data flowing between them, we used DFDs in the Gane and Sarson notation (described in Dennis and Wixom, 2000; Hoffer, George, and Valacich, 1999; Jordan and Machesky, 1990; Osborne and Nakamura, 2000; Whitten, Bentley, and Dittman, 2001). A DFD (Data Flow Diagram) is a graphical modeling tool used to depict the processes that a system will perform, along with the data that flows in and out of each process. Figure 1 shows the context diagram of the system, which is the highest level of abstraction. Usually, the context DFD shows one process representing the whole system, the data which flows in and out of that process, the origin of the data (source), and its final destination (sink). Figure 2 shows the level-0 DFD, a decomposition of the context DFD, in which the system performs two major processes: updating the ImpromptuReport Directory database, and searching it.
The diagram also identifies the following data stores:
1. D1: our proposed searchable repository.
2. D2 and D3: the locations on the network where the .pdf and .imr versions of the Impromptu reports, respectively, reside after they are submitted.
3. D4: the location on the Impromptu report developer’s personal computer where a report is saved so that the developer can modify it at a later time if he/she chooses.
Figures 3 and 4 depict further DFD decomposition in order to identify more processes.

[Figure 1: The Context DFD]
[Figure 2: Level-0 DFD]
[Figure 3: Level-1 DFD for Process 1.0]
[Figure 4: Level-1 DFD for Process 2.0]

Next, we modeled the data which the system needs to function properly, using Chen’s ERD notation (explained in Dennis and Wixom, 2000; Hoffer, George, and Valacich, 1999; Jordan and Machesky, 1990; Whitten, Bentley, and Dittman, 2001). An ERD (Entity Relationship Diagram) is a graphical modeling tool used to depict data entities, their attributes, and relationships. Both the DFD and the ERD are excellent graphical tools for modeling purposes; they are also beneficial communication tools for validating that the software development team has an accurate understanding of the system and users’ requirements. Eventually, each process on the DFD will be translated into a program, and almost every entity on the ERD may become a database table. Figure 5 shows that the system contains three entities relevant to the functions of our system, with only partial attributes shown due to space limitations.

[Figure 5: The ERD for the ImpromptuReport Dictionary System]

However, to increase the efficiency and maintainability of our system, we made the decision to merge these three entities into one database table. This minimizes response time, since querying one table is often quicker than navigating three. We named the resulting table ImpromptuReport; it has the following set of attributes:
• Report Id: a unique numeric identification number, generated automatically whenever a report is archived. This is the primary key of the ImpromptuReport table.
• Developer Id: a string representing the developer’s unique Id within the enterprise.
• Business Function: the department the report was made for (e.g., Inventory, Human Resources).
• Catalog: a string used to identify the various databases where certain reports are saved. Each business area within the company has its own database identified by a unique Id.
• Report Title: a string representing the title of the report as given by the developer.
• Description: a string describing the functions and contents of the report.
• imrPath: a hyperlink to the .imr version of the Impromptu report on the network.
• pdfPath: a hyperlink to the .pdf version of the Impromptu report on the network.
• HotFiles: a list of the data files from the Oracle database needed in the Impromptu report. This data is not available from the SAP database.
• Date Created: the date when the report was created by the report developer.
• Date Revised: the date when the report developer last revised the report.
In order to avoid any anomalous behavior of this table (O’Neil, 1994), we had to make sure that the table is normalized to third normal form (3NF), using the following tests (Ramakrishnan, 1997; Ricardo, 1990):
1. Since there are no multi-valued (repeating) fields, the table is in 1NF.
2. The table is in 2NF if it is in 1NF and all nonkey attributes are fully functionally dependent on the key. In other words, if the key is a single attribute, which is true of our table, then the table is in 2NF automatically.
3. The table is in 3NF if it is in 2NF and no nonkey attribute is transitively dependent on the key. Examining our table shows that the value of every nonkey attribute is determined only by the primary key of the table and not by any other attribute.
Any further testing of a table which is in 3NF is often unnecessary, since many real-world databases in 3NF are also in BCNF (O’Neil, 1994). A typical ImpromptuReport table would look like the one shown in Table 1. Shown on the following pages are our Web-based graphical user interfaces that users will use to invoke the various system functions.
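The merged table and its keyword search can be sketched as runnable code. The case implements this on an Oracle database behind an ASP page; the sketch below uses Python's sqlite3 only so that it is self-contained, and the column names are our rendering of the attributes listed above, with the sample row entirely invented.

```python
# Self-contained sketch of the ImpromptuReport table and a keyword search
# (sqlite3 stands in for Oracle; all data is invented).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE ImpromptuReport (
        report_id         INTEGER PRIMARY KEY,  -- generated when archived
        developer_id      TEXT,
        business_function TEXT,
        catalog           TEXT,
        report_title      TEXT,
        description       TEXT,
        imr_path          TEXT,   -- link to the runnable .imr file
        pdf_path          TEXT,   -- link to the .pdf snapshot
        date_created      TEXT,
        date_revised      TEXT
    )""")
conn.execute(
    "INSERT INTO ImpromptuReport VALUES (1, 'D042', 'Inventory', 'INV01', "
    "'Warehouse cycle time', 'Cycle time from raw material receipt to finish', "
    "'//share/rpt1.imr', '//share/rpt1.pdf', '2001-05-01', '2001-06-01')")

def search_reports(keyword):
    """Read-only keyword search over report title and description."""
    cur = conn.execute(
        "SELECT report_id, report_title FROM ImpromptuReport "
        "WHERE report_title LIKE ? OR description LIKE ?",
        (f"%{keyword}%", f"%{keyword}%"))
    return cur.fetchall()

hits = search_reports("cycle time")
```

Because the single table is in 3NF with `report_id` as its only key, a search like this never needs a join, which is the response-time benefit of the merge described above.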
We followed the design guidelines explained in Dennis and Wixom (2000); Hoffer, George, and Valacich (1999); Jordan and Machesky (1990); Navarro and Tabinda (1998); and Whitten, Bentley, and Dittman (2001). Figure 6, below, shows the system Dialogue Diagram (as described in Dennis and Wixom, 2000; and Hoffer, George, and Valacich, 1999), where the system can be invoked via the company Web page. Developers will then be given the choice to search or submit a report for saving, as shown in Screen 1. Upon selecting the search option, developers can use a number of search keys, as shown in Screen 2. Once the system finds a match, the developer can then highlight the specific report and version to display. Upon selecting the "submit a new report" option, Screen 4 will be displayed and all the information will be submitted to the system administrator. The system administrator, in turn, will use the same information to save the report information to the ImpromptuReport database. The final step in our case was to implement and operate our system. The following systematic steps were followed in order to turn our design into a fully functional system that meets the users' requirements stated at the beginning of the case.
1. Creating the database: This includes creating the ImpromptuReports table using the Oracle database, creating the form necessary to enter data into this database, and populating the table with data. All authenticated users (Impromptu report developers) will have read-only access to this database, while the system administrator will have read-write access.
2. Creating the Web interface and the search mechanism: A Web-based interface was created to navigate through the system. Creating this Web interface includes creating an ASP form and developing all the code required to connect the Web interface to the database and enable the user to search the Oracle database by submitting a search through the HTML forms. Some valuable tips and procedures for executing this step were found in Champeon and Fox (1999), Friedrichsen (2000) and Hentzen (1999).
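The search mechanism of step 2 can be sketched as follows. This is a hypothetical illustration, not the case's actual ASP code: the function name, column names, and the use of Python with SQLite (in place of ASP against Oracle) are our assumptions. The idea is to build a parameterized query from whichever form fields the developer filled in.

```python
import sqlite3

def search_reports(conn, **criteria):
    """Search ImpromptuReports on any subset of fields (substring match).
    Field names are illustrative; blank criteria are ignored."""
    clauses, params = [], []
    for column, value in criteria.items():
        if value:  # skip fields the user left blank on the form
            clauses.append(f"{column} LIKE ?")
            params.append(f"%{value}%")
    sql = "SELECT report_id, report_title FROM ImpromptuReports"
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    return conn.execute(sql, params).fetchall()

# Minimal demonstration with an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ImpromptuReports "
             "(report_id TEXT PRIMARY KEY, report_title TEXT, catalog TEXT)")
conn.executemany(
    "INSERT INTO ImpromptuReports VALUES (?, ?, ?)",
    [("00990", "Manufacturing Goods Receipts", "R3 Battg"),
     ("00991", "Inventory Aging", "R3 Battg")],
)
matches = search_reports(conn, report_title="Goods")
print(matches)  # [('00990', 'Manufacturing Goods Receipts')]
```

Note that user-supplied *values* are passed as bound parameters; in a production ASP/Oracle implementation the column names themselves would also have to come from a fixed whitelist, never from the form.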
90 Mustafa & Maingi
Table 1: An example of an ImpromptuReport table

Attribute Name       Sample Value
Report Id            00990
Developer Id         Nm7435
Business Function    Inventory
Catalog              R3 Battg
Report Title         Manufacturing Goods Receipts
Description          Summarizes goods receipts from process orders for a product, product group, material type, month, year, and plant. Purpose is to provide data on manufacturing performance.
imrPath              \Reports\Planning
pdfPath              R3 Battrchkg.Pdf
HotFiles             Dbrport.MIS
Date Created         04/05/00
Date Revised         0/06/01
3. Training and documenting: All users will be trained to search for any report as well as to submit their own reports for saving. Full-scale documentation of all aspects of the system, including operation and troubleshooting, was produced as part of our project.
Current Challenges/Problems Facing the Organization
We believe that the company will face three types of challenges as a result of implementing our system:
1. Cultural: The system will enforce the concept of teamwork, in which report developers have to adapt to reusing and building on top of other developers' work. The system will also enforce a culture of personal accountability, where each developer has the responsibility of fully and properly documenting his/her reports so that they can be utilized by other developers. Additionally, report developers will have to follow a standard procedure and format when developing and/or saving their reports.
2. Operational: The company must develop an operational procedure and allocate the required resources in order to maintain the system on a regular basis. Maintaining the database and the other files and providing developers with Ids are two examples of ongoing operational procedures.
3. Technological: Report developers will face the challenge of learning and utilizing the advances of information technology in order to improve their performance. The company, on the other hand, will need to search for the most efficient report development tool. SAP is about to release a new version that has more report development features; the company will therefore have to evaluate the SAP development tool against Impromptu.
Figure 6: The Dialogue Diagram for the ImpromptuReport Dictionary System. The diagram shows screen 0 (Company Web Page) leading to screen 1 (Main Menu), which branches to screen 2 (Search), screen 4 (Submit a New Report) and screen 5 (Add/Update a New Report); screen 2 leads to screen 3 (Select from Search Results).
Diagram 1: Narrative Overview, Screen 1
Form: Screen 1
Purpose: Initial Web-page main menu of the ImpromptuReport Dictionary System
Users: All Impromptu report developers
Sample design: "Please make a selection: Search for a Report / Submit a Report," with Exit and Continue buttons.
Diagram 2: Narrative Overview, Screen 2
Form: Screen 2
Purpose: To search the ImpromptuReport Dictionary System
Users: All Impromptu report developers
Sample design: "Please enter one or more search fields:" with fields Report ID, Report Name, Report Author, Description/Purpose, Catalogs, HotFiles and Report Business Function, plus Exit, Clear Fields and Search buttons.
Diagram 3: Narrative Overview, Screen 3
Form: Screen 3
Purpose: To display results obtained from searching the database
Users: Impromptu report developers
Sample design: "Click on the .pdf link to view the Impromptu pdf file in Acrobat Reader," followed by a results grid with columns Report ID, Report Title, Developer Id, Description, Date Created, Catalog, HotFiles, Business Function, .pdf path and .imr path, plus Exit and Back buttons. If there are no matching reports, a text message is displayed instead.
Diagram 4: Narrative Overview, Screen 4
Form: Screen 4
Purpose: To submit a new report to the ImpromptuReport Dictionary System
Users: All Impromptu report developers
Sample design: "Please provide the following information:" with fields Report Title, Developer Name, Developer Id, Description, Catalog, Business Function, HotFiles, Date Created and Date Revised (when applicable), plus Exit, Attach .imr and Submit buttons.
Diagram 5: Narrative Overview, Screen 5
Form: Screen 5
Purpose: To add/update a new report in the ImpromptuReport database
Users: System administrator ONLY
Sample design: fields Report ID, Report Title, Developer Name, Developer Id, Description, Catalog, Business Function, HotFiles, Date Created and Date Revised (when applicable), plus Exit and Save buttons.
APPENDIX A
Financial Summary
For the six months ended 06/2000, net sales rose 9% to EUR 11.09 billion. Net income applicable to common before U.S. GAAP rose 57% to EUR 337 million when compared to 1999 results. Results reflect increased life sciences sales.
Recent Earnings Announcement
For the 3 months ended 09/30/2000, revenues were 5,429; after-tax earnings were 126 (preliminary; reported in millions of euros).
Statistics at a Glance – NYSE:AVE

Price and Volume (as of 5-Dec-2000)
Recent Price: $79.25
52-Week High (30-Nov-2000): $79.938
52-Week Low (8-Mar-2000): $45.50
Beta: 0.46
Daily Volume (3-month avg): 126.0K
Daily Volume (10-day avg): 166.0K

Per-Share Data
Book Value (mrq*): $11.60
Earnings (mrq): $0.14
Sales: N/A
Cash (mrq*): $0.15

Management Effectiveness
Return on Assets: N/A
Return on Equity (ttm): 1.23%

Financial Strength
Current Ratio (mrq*): 1.06
Debt/Equity (mrq*): 1.39
Total Cash (mrq): $120.9M

Valuation Ratios
Price/Book (mrq*): 6.83
Price/Earnings: N/A
Price/Sales: N/A

Short Interest (as of 8-Nov-2000)
Shares Short: 658.0K
Percent of Float: 0.1%
Shares Short (prior month): 490.0K
Short Ratio: 5.93
Daily Volume: 111.0K

Income Statements
Sales: N/A
EBITDA (ttm*): -$153.1M
Income available to common (ttm): $110.9M

Stock Performance
52-Week Change: +23.7%
52-Week Change relative to S&P 500: +26.6%

Share-Related Items
Market Capitalization: $61.8B
Shares Outstanding: 779.8M
Float: 662.8M

Profitability
Profit Margin: N/A
Operating Margin (ttm): -1.3%

ADR Information
Shares/ADR: 1

Fiscal Year
Fiscal Year Ends: Dec 31
Most recent quarter (fully updated): 30-June-2000
Most recent quarter (flash earnings): N/A

Dividends & Splits
Annual Dividend: none
Last Split: none
APPENDIX B
REFERENCES
Champeon, S. and Fox, D. (1999). Building Dynamic HTML GUIs. CA: M&T Books.
Dennis, A. and Wixom, B. (2000). Systems Analysis and Design. NY: John Wiley and Sons, Inc.
Friedrichsen, L. (2000). Access 2000. AZ: Coriolis Group, LLC.
Hentzen, W. (1999). Access 2000 Programming. CA: Osborne/McGraw-Hill.
Hoffer, J., George, J. and Valacich, J. (1999). Modern Systems Analysis and Design. MA: Addison-Wesley.
Jordan, E. and Machesky, J. (1990). Systems Development. MA: PWS-Kent Publishing Company.
Kowal, J. (1988). Analyzing Systems. NJ: Prentice Hall.
Navarro, A. and Tabinda, K. (1998). Effective Web Design. CA: Sybex Inc.
O'Neil, P. (1994). Database: Principles, Programming, Performance. CA: Morgan Kaufmann Publishers, Inc.
Osborne, L. and Nakamura, M. (2000). Systems Analysis for Librarians and Information Professionals. CO: Libraries Unlimited, Inc.
Ramakrishnan, R. (1998). Database Management Systems. NY: WCB/McGraw-Hill.
Ricardo, C. (1990). Database Systems: Principles, Design, and Implementation. NY: Macmillan Publishing Company.
Whitten, J., Bentley, L. and Dittman, K. (2001). Systems Analysis and Design Methods. NY: McGraw-Hill Irwin.
BIOGRAPHICAL SKETCHES Yousif Mustafa received a Ph.D. in Industrial and Manufacturing Engineering in 1998 and an M.S. in Industrial and Manufacturing Engineering in 1993 from Wayne State University, Detroit, MI. Dr. Mustafa is currently an assistant professor at the Computer Information Systems Department of Central Missouri State University, Warrensburg, Missouri. Clara Maingi is a senior with a double major in CIS and Accounting at the College of Business of Central Missouri State University, Warrensburg, Missouri. Currently, Clara is doing her internship as an application developer at the Information Systems Department of Aventis Pharmaceuticals, Kansas City, Missouri.
Implementation of IT in a Job Shop Manufacturing Company 103
Implementation of Information Technology in a Job Shop Manufacturing Company: A Focus On ManuSoft Purnendu Mandal Marshall University, USA
EXECUTIVE SUMMARY A.B.C. Engineering is a Melbourne-based job shop manufacturing company. The company attempted a major improvement in the information technology area by implementing and enhancing the capability of an MIS software package called 'ManuSoft'. The General Manager of A.B.C. Engineering felt that the implementation of this commercially available package would enhance productivity and help managers in the planning process. The company carried out a detailed study of various IT tools and information systems software applicable to the job shop manufacturing situation. Considering the prevailing company situation, it was decided that 'ManuSoft' would satisfy the information requirements. A project team was set up to study the scope of IT improvements and implement the required IT/IS system.
BACKGROUND A.B.C. Engineering Limited is a precision engineering jobbing company, which provides precision machining, fabrication, toolmaking and assembly services to a wide range of industries. The company began as a two-person business in 1971 and has since expanded to become one of Australia's largest precision engineering companies. A.B.C. Engineering employs over 250 personnel with a turnover of A$78 million in 1999. All machine operators are skilled tradesmen, or trades apprentices, fully capable of manufacture from drawings with a minimum of supervision. As can be seen from the company organizational structure, shown in Figure 1, the management structure is flat and product orientated. The General Manager reports to the board of directors and the manager of each functional section reports directly to the General Manager. The production managers (Aerospace, Projects, Manufacturing and Operations) are responsible for customer liaison as well as general project and work management. Copyright © 2002, Idea Group Publishing.
104 Mandal
Figure 1: Organizational Chart. The A.B.C. Engineering General Manager heads the Sales & Marketing, Finance, Aerospace, Projects, Manufacturing, Operations, Press Shop, FMS, Design Office and Quality Assurance managers; these are supported by accounting staff, a purchasing officer, estimators, programmers, sales engineers, production supervisors, project planners, QA staff for the toolroom and press shop, and the shopfloor workforce (EDM, turning, grinding, small and large milling, welding, toolmakers/fitters, auto/mould dies, jigs and fixtures, material handling and cleaning, general, tool and toolroom stores, operators and apprentices).
The main operation at A.B.C. Engineering centres around two units: the Tool Room and the Press Shop. This study is concerned with the Tool Room, as it represents the job shop environment in its most dynamic form. The Press Shop is also a job shop but is far more batch orientated, providing a relatively simple manufacturing environment. A schematic view of the factory layout is shown in Figure 2. The A.B.C. Engineering Tool Room makes parts to customer-designed order. The company provides a machining service to many types of industry. The parts made have been loosely categorised into eleven major 'product streams' by management. The categories are defined primarily for business control and reporting purposes. Every part made by A.B.C. Engineering is pre-ordered by its customers; none are made for stock. The eleven product streams, as defined by A.B.C. Engineering management, are: Canning and Packaging, Wire Cut, Large Machining, Small Machining, Jigs and Fixtures, Large Press Tools, Small Press Tools, Mould Tools, Refurbishments and Repairs, Design Only and Major Projects. The definitions of the products attempt to provide management with a picture of the demand on shopfloor resources made by a particular job, or group of jobs. A.B.C. Engineering's customer profile is also reflected in the product stream definitions. The Canning and Packaging product stream provides the canning and packaging industries with high precision tools for repetitive manufacture of containers such as beverage cans and food tins. This is a highly evolutionary market. The demand of the large food and beverage organizations for innovative packaging, along with wear on existing equipment, provides a steady demand on A.B.C. Engineering for high precision tooling. The tooling for this product stream makes use of leading-edge materials technology to produce the properties required of parts for repetitive manufacture. The
Figure 2: Layout of Work Centers at A.B.C. Engineering. The layout includes the inwards goods store, tool store, small NC and manual milling centres, NC, manual and large turning (including the large Berthiez NC lathe and Okuma large NC mills), EDM and wire cut, medium NC milling (Zayer), miscellaneous milling (large Huron), grinding, jigs and fixtures, tool/die making, quality control and measurement, assembly/storage, the proposed FMS, and planning/production control.
principal properties required of such parts are hardness, toughness and surface finish. To machine such parts successfully requires state-of-the-art facilities. Although the requirements of the canning and packaging industries are highly dynamic, this is the most repetitive market for A.B.C. Engineering and provides a large proportion of the few repeat jobs manufactured. The Wire Cutting stream primarily provides punches and dies for customers in the metal stamping industries. Intricate parts are also cut for the biomedical industry. The technology allows intricate through-cuts in almost any material. The cutting process does not detrimentally affect the properties of pre-hardened materials, as the heat produced is minimal and confined to an extremely small area. The Small and Large Machining streams provide machined parts for many types of industry. These product streams can service any organization that requires metal parts such as shafts, brackets or other simple components. An example of a customer for these product streams is the machine manufacturing industry, which often requires precision parts for specialised machinery. Maintenance departments of large manufacturing organizations also require parts for machine upgrade and repair. The three Tooling streams primarily provide the automotive industry with metal-forming press tools and the plastics industry with injection moulding tools. The building of press and mould tools is a complicated process. Parts for the tools are made throughout the factory, drawing on many resources. Once the parts have been made, they are fitted together to form the tools. The tools are then tested and refitted in an incremental fashion until the products they produce are satisfactory. The process of toolmaking requires precision machining, skilled fitting and close customer liaison to achieve satisfactory results. The Refurbishment and Repairs product stream provides work on highly specialised machinery for a wide variety of industries.
It involves the stripping and rebuilding of complex machinery. This can be the most complicated product stream, as the work requirements are often not known before work begins. Refurbishment of a machine could include simple rewiring, grinding or a complete remake of all machine parts and components.
The Design Only product stream is for work that involves the manufacture of CAD files or drawings only. This is most commonly done by digitising sample parts. The first stage of building a tool to manufacture a plastic part is often producing a 3D CAD model of a solid part model made from clay or wood. This technique can also be used to analyse parts made by competitors within the plastics or metal forming industries. The Major Projects product stream principally provides the automotive and aerospace industries with large, precise jigs and fixtures. Products in this class are often multimillion-dollar operations that involve ongoing customer relations such as installation and maintenance. Major projects have a significant effect on the manufacturing resources of A.B.C. Engineering. They often require internal services from many work centres. As A.B.C. Engineering provides a customer service, the distinction between part categories is often difficult to make. Approximately 65% of parts are produced on a one-off basis for a wide variety of industries in the manufacturing arena. Manufacture in this environment requires a highly flexible facility. From an information requirements point of view, A.B.C. Engineering exhibits the behaviour of a very dynamic environment. Clearly, the company needs an integrated and powerful information system to be successful in this rapidly changing environment.
SETTING THE STAGE In the early 1990s, the company installed an MIS software package called 'ManuSoft', but the package remained underutilized. Among the reasons cited for the poor utilization of the software package are:
• ManuSoft is not user-friendly.
• The software runs on the DOS operating system, which is considered obsolete in today's computing environment.
• There is a shortage of computer-skilled manpower in the company.
• Senior managers do not consider the IS a value-adding activity in the organization.
However, with the change of leadership in 1995, the new General Manager realized the importance of an IS in his organization. He commissioned a project team to investigate the state of IT and was keen to develop an appropriate system that would meet the information requirements of managers at various levels in the organization. We discuss here developments in the IT area, how these developments affect the existing IS at A.B.C. Engineering, to what extent ManuSoft meets the information requirements, and what could be done to alleviate the situation.
IT/IS Developments Manufacturing organizations across the board are struggling to keep pace with the aggressively dynamic computer industry. Apart from general hardware, the largest area of expenditure in the hardware field has been client/server technology. A survey conducted in 1995 in the USA amongst 2,400 organizations showed that, of the many companies moving to client/server technology, 41 percent planned to increase mainframe purchases (Miles, 1995). The survey also showed that 49 percent of the responding companies that did not have client/server strategies planned to boost their purchases of mainframes. There is a definite view in the manufacturing arena that it is necessary to remain up to date with evolving computer hardware technology, and that it is not clear what type of platform, client/server or mainframe, is best suited to MIS development. The roles of computer technology in manufacturing are evolving as the technology itself evolves. The traditionally separate MIS systems (for example, MIS designed for financial applications only), handling few data types with high volumes per data type, and manufacturing systems, handling highly dynamic data subject to great changeability and timeliness, are being blended together into comprehensive systems that can be applied company-wide (Ronen & Palley, 1988).
Client/Server Technology Client/server architecture is an approach to cooperative processing in which the functions of an application are shared between multiple computers on a network. A user's workstation in a client/server architecture is called a client, which is linked to the server on an interactive basis. The client serves as a user interface, processing time-consuming tasks to offload computing from the server. Software run on client/server technology typically makes use of open operating systems such as Unix or Microsoft Windows. One of the main strengths of client/server technology is that it allows users to interface various software applications from multiple vendors and build an MIS environment that suits the organization. There are, however, a number of weaknesses in client/server technology. The high-powered clients that act as the user interface are expensive to purchase and maintain, and the process of integrating data between software applications supplied by independent vendors can be time-consuming and costly.
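As a toy illustration of the client/server division of labour described above (our own sketch, not part of the case), the server process owns the data and performs the lookup, while a lightweight client merely sends a request and displays the reply:

```python
import socket
import threading

# Toy client/server split: the server owns the data and does the lookup;
# the client only formats a request and displays the reply.
REPORTS = {"00990": "Manufacturing Goods Receipts"}  # server-side data

def serve_one(sock):
    conn, _ = sock.accept()
    with conn:
        report_id = conn.recv(64).decode()          # request from the client
        reply = REPORTS.get(report_id, "not found") # lookup happens server-side
        conn.sendall(reply.encode())

server = socket.socket()
server.bind(("127.0.0.1", 0))  # pick any free local port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_one, args=(server,), daemon=True).start()

client = socket.socket()
client.connect(("127.0.0.1", port))
client.sendall(b"00990")
answer = client.recv(64).decode()
client.close()
print(answer)  # Manufacturing Goods Receipts
```

In a real deployment the same split applies regardless of transport: the client could be a browser posting an HTML form, with the server role played by a database host.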
Mainframe Technology Mainframe technology focuses on the power of the server, using one highly developed machine to handle all of the data processing and storage tasks. Users access data via inexpensive 'dumb' terminals, which in themselves have no processing power but allow access to the mainframe. Mainframe operating systems are usually proprietary, developed specifically for a particular hardware configuration, with hardware, operating system and software applications developed by a single vendor or through joint ventures between the hardware and software development teams. The main strengths of mainframe systems lie in the consistent 'data warehousing' that results from having only one data storage area, the pure processing power of the mainframe computers, and the fully integrated solution supplied by a proprietary system. The major weakness of mainframe technology is that proprietary systems typically make it difficult, if not impossible, to interface to systems from other vendors. As users find areas that the proprietary software applications do not handle well (e.g., spreadsheet reporting), small independent data warehouses are developed on personal computers throughout an organization, destroying the integrity of a single database.
Mixed Systems Traditionally, MIS has utilized mainframe computers for corporate transaction processing from vendors such as IBM, Unisys, or Hewlett-Packard. The large process manufacturing industries typically favor proprietary systems by DEC, Fisher Controls, Foxboro, Honeywell, Allen Bradley and Cincinnati Milacron. Smaller manufacturing organizations have traditionally used minicomputers and more powerful workstations on client/server networks for applications such as CAD (Laudon & Laudon, 1991; Piszczalski, 1992). Mainframes were generally run with a refusal to share with manufacturing staff the knowledge that bestows power. Because of this, users became frustrated with mainframe technology and rushed into client/server technology. However, studies suggest that many corporate respondents who invested heavily in client/server networks are yet to see the cost savings, improved efficiencies and increased productivity they expected (Miles, 1995). As computer technology develops, users from the various industries can see that they are less restricted by the limitations of the technology available and are more interested in utilizing the strengths of both client/server and mainframe systems. Hybrid systems that incorporate the strengths of both client/server and mainframe technology were the tools typically being sought by manufacturing organizations in the late 1990s. One such system was the AS/400, originally developed by IBM as a midrange mainframe server running the OS/400 operating system, also from IBM. The question of which system is best suited to a manufacturing environment remains unanswered; however, it seems that a balance must be struck between the processing power of mainframes and the flexibility of client/server systems. Factors such as cost per user interface and
processing times are important. The importance of these factors varies with the size and structure of organizations. It must also be kept in mind that, with the current rate of technology development, the factor of processing time, a strength of mainframe technology, is decreasing in importance on an almost daily basis. It seems that the industry is heading towards the mixed-system approach to make use of the flexibility of client/server architecture. Vendors of software for such hardware environments must be aware that this flexibility is also an important factor in software selection. The flexibility provided should run all the way through the system, from hardware compatibility to user interfaces.
Networking and Communications Whether a mainframe using 'dumb' terminals, a client/server system, or some hybrid system is used as the backbone of an MIS, a means of connecting the machines together and controlling data transfers between them is necessary. This means is termed 'networking', and the type of network used defines the method, speed, consistency and accuracy of data transfers. For communications within a single plant, a local area network (LAN) is the commonly used type of network. The technology of LANs has been the subject of a great deal of development because of its vital importance in computer-integrated manufacturing; the commercial opportunities have been recognized by vendors. Any implementation of a LAN has many possible solutions. Therefore, the design and implementation must be given careful consideration.
Software Developments A continuing major argument in the literature is on the topic of 'off-the-shelf' software versus 'home-grown' applications. There are convincing arguments on both sides. Some benefits of standard software are that it is usually cheaper than writing company-specific software, can be seen in use by other companies before being purchased, and will generally take less time to introduce (Kochan & Cowan, 1986). The major drawback is that off-the-shelf software will rarely suit any manufacturing organization completely. Software development is the number one technical challenge associated with the implementation of CIM systems (Meredith, 1987; Ettlie & Getner, 1989). The application of MIS-orientated software design tools and methodologies to manufacturing applications has generally resulted in poor performance in manufacturing systems. Problems with software arise not only because of the complexities associated with driving and controlling various elements of the factory, but also in interfacing manufacturing with both engineering and business software. Software development practices found in a manufacturing environment are typically poor and lead to major problems in software maintenance. In the press for timely implementation, software "fixes" can be temporary "band-aid" solutions, where the intention is that the correction will be completed later. This rarely happens. Subsequent modifications or upgrades to the software can prove extremely difficult due to the nature of typical manufacturing program development (Ettlie & Getner, 1989). The application of MIS software has developed considerably over the years. The seventies were characterized by a transition from the structured (pre-defined) management reports of the sixties to new concepts in data modeling and decision support.
The need to respond quickly to changing circumstances –corporate and government responsiveness–became a key factor in the design of MIS, leading to the rapid adoption of productivity aids, such as Data Dictionary / Directory, Fourth Generation Languages and Database Management Systems (DBMS).
Major Commercial MIS Packages Several software vendors can be said to have succeeded at various levels of the MIS market. When comparing vendors of software and the applications themselves, it must be kept in mind that it is an extremely dynamic business environment. Many of the major applications such as BPCS, Movex
and Manugistics are being programmed or re-programmed in object-oriented languages, and most now employ external databases. Most of the larger applications can also run on many different external databases, such as Oracle, DB2, Informix, and SQL Server. The larger application developers place great importance on keeping up with the latest available technology, but smaller developers such as ManuSoft can often fall behind. Table 1 presents a comparative assessment of seven large MIS vendors and their products. The seven products listed in Table 1 represent a range of technologies, each very successful in its own field. • SAP: R/3 Application One of the undisputed giants of the enterprise resource planning software development world is Systems, Applications and Products in Data Processing (SAP). Founded in 1972 in Walldorf, Germany, SAP is the leading global provider of client/server business application solutions. By collaborating with businesses, IT executives and partners worldwide, SAP developed a unique understanding of the challenges faced in implementing technology solutions for business users. They developed software that could help companies link their business processes, tying together disparate business functions and helping the whole enterprise run more smoothly. The versatile, modular software could be quickly and easily adapted to new business processes. As a business grew, so did the capabilities of the software. SAP's innovative approach soon made it the top software vendor in Germany. Today, SAP is the largest supplier of business application software in the world and the world's fourth largest independent software supplier overall. SAP's sales revenue increased by 38% to US$2.39 billion in 1996. SAP's customers are served by more than 10,000 employees worldwide. The R/3 software is built with a fully integrated modular structure, making it flexible and scalable.
After installing R/3's core modules, additional modules can be selected as needed and added to the system over time. The incremental modules fully integrate with the system as soon as they are installed, providing immediate enterprise-wide functionality. More than 1,000 business process modules are included in the R/3 model. • Marcam: MAPICS XA Application Like SAP, Marcam is a giant in the enterprise resource planning software development field. However, their major system, MAPICS XA, a modular family of products, currently provides a selection of only 35 application modules for custom-configured solutions, as compared with SAP's 1,000-plus business process modules. Successful industrial and distribution companies worldwide, ranging from producers of soft drink concentrate in Ireland and ball bearings in India to horse trailers in the United States, have formed strategic business partnerships with Marcam and Marcam's MAPICS enterprise solutions. The manufacturing industry's long-term confidence in MAPICS has created one of the largest installation bases of any enterprise software system. Nearly 15,000 Marcam customers are supported in more than 141 locations in 60 countries. A corporate staff of 1,200 employees and 1,530 sales and support affiliates throughout North America, Europe, Africa, the Middle East, Asia Pacific and Latin America are supplemented by 24-hour-a-day call-in centers based in Best, The Netherlands; Kuala Lumpur, Malaysia; and Atlanta, Georgia. The confidence in Marcam's software comes through the flexibility and choice of modular, adaptable, inclusive, integrated solutions. Introduced in 1978, the MAPICS product family was the industry's first comprehensive manufacturing resource planning (MRP II) system. Expanded and quickly changing customer requirements fuelled accelerated market responsiveness when Marcam acquired MAPICS in 1993.
Numerous enhancements and additional applications, including client/server processes, EDI and Windows-based screen presentations, have made MAPICS a functionally advanced system. Continued expansion and evolution are the foundation for confidence in today's MAPICS Extended Advantage, MAPICS XA. MAPICS XA modules can be purchased, installed and added in almost any sequence or combination.
Table 1: Software Developers and Applications

Vendor | Origin | Software | Sales revenue 1996 ($ million) | Major server | Major client(s)
Systems Applications and Products in Data Processing (SAP) | Germany | R/3 | 2,390 | Various | Various
Marcam (MAPICS) | USA | MAPICS XA | 197 | AS/400 | Windows
System Software Associates (SSA) | USA | BPCS | 340 | AS/400 | OS/2, Windows
Intentia International | Sweden | Movex | 167 | AS/400 | Windows
Baan | Netherlands | Triton (Baan IV) | 388 | UNIX, Windows NT | Windows, UNIX
Manugistics | USA | Manugistics | 82 | UNIX, Windows NT | OS/2, Windows
Datalogix International Incorporated (under Oracle) | USA | GEMMS | 835 (Oracle) | UNIX | Windows

Vendor | Database | Software description | Size of business | Caters for job shop environment?
SAP | External, various | ERP/MRPII | Large/medium | Yes
Marcam (MAPICS) | External, various | MRPII, JIT | Medium/large | Yes
SSA | External, various | ERP/MRPII | Medium/large | Yes
Intentia International | External, DB2/400 | ERP/MRPII | Medium/large | Yes
Baan | External, various | ERP/MRPII | Large/medium | Yes
Manugistics | External (Oracle, Informix, Sybase) | Supply-chain management | Medium/large | No
Datalogix International | External (Oracle), various | ERP/MRP | Medium/large | Yes
110 Mandal
Implementation of IT in a Job Shop Manufacturing Company 111
Despite possessing a smaller application range than R/3, MAPICS XA is a functionally rich solution for enterprise requirements, and its Windows-style interfaces make it easier for users who struggle with proprietary systems.

• SSA: BPCS Application

System Software Associates Incorporated (SSA) targets manufacturing companies with annual sales of $100 million to $500 million. SSA's Business Planning and Control System (BPCS) is designed to run on IBM's AS/400 but is also currently being ported to Unix. The BPCS client/server is a comprehensive set of integrated client/server applications that address the core system needs of industrial-sector companies. The BPCS client/server is the worldwide standard for the process industries. It offers integrated electronic commerce and EDI capabilities through a strategic partnership with Harbinger and the BPCS Data Gateways. While designed to run on the AS/400, BPCS can also be configured for HP9000, RISC System/6000, DEC and Alpha servers and can be run on Windows NT and UNIX operating systems in client/server mode, although it will probably be two to three years before the software is tested and running well on Unix. Many companies are committed to moving to Unix because it offers a more open solution.

• Intentia International: Movex Application

The Movex software developed by Intentia International, a $110 million Swedish software company, is a relatively new entry in the MIS market. Intentia has developed applications for the midrange market to run on client/server architecture. The suite of applications is developed to run on the IBM AS/400 and includes integrated financial, manufacturing, distribution and inventory applications. It is sold in the United States through value-added resellers focused exclusively on the AS/400 market. Intentia has licensed the software to 1,500 companies, which have implemented it at more than 3,000 sites worldwide.
In the USA, the Movex system is targeted at companies similar to those targeted by BPCS, i.e., manufacturing companies with annual sales of $100 million to $500 million. Advanced Manufacturing estimated that in 1996 about one-third of the 100,000 U.S. companies with 100 or more employees would decide to purchase enterprise software. Movex operates simultaneously in 22 different languages and supports both discrete and process manufacturing functions. It also contains an executive performance measurement feature that allows business users to individually configure the data they want to track on a regular basis. A plant manager, for example, can set the system to track inventory turns without the intervention of the information systems department.

• Baan: Triton Application

Dutch developer Baan Corporation is aiming its Unix-based enterprise client/server software at a few key vertical markets, the first of which is the automotive industry. The $122 million company has also teamed up with a handful of niche software companies to boost the functionality of its suite of manufacturing and financial applications, known as Triton (Baan IV). Baan is developing Triton as a low-cost alternative to SAP. Compared with SAP's R/3 software suite, Baan software is typically 20% to 40% less costly to purchase. It is believed Baan is more receptive to the practice of customizing the product, as opposed to a supplier such as SAP, whose strength lies in its ability to cover most situations with pre-developed applications through its R/3 software.

• Manugistics: Manugistics Application

Manugistics, in Rockville, Maryland, another leading player in the client/server manufacturing software arena, released Manugistics, a complete reworking of the company's applications across its supply chain management suite of distribution, manufacturing, and transportation planning applications. The new release is built on a three-tier architecture that allows in-house developers to create Manugistics applications.
The new architecture also allows several sites to access the applications at once. The software's new architecture provides networking performance gains and better support for remote applications. Manugistics is also continually developing software applications to improve the scope of its product, such as the demand-planning module that provides tools for forecasting
supply-and-demand ratios as well as analyzing the impact of changes in variables such as price, promotions, and delivery problems. Manugistics supports OS/2, Windows 95 and Windows NT clients. Manugistics is one of the new generation of applications that work with programs such as Unix versions of Oracle7, Sybase SQL Server, Informix OnLine Dynamic Server, and IBM's DB2 to manage their data, rather than going through the process of designing and maintaining software-specific databases.

• Datalogix: GEMMS Application

Datalogix International Incorporated is a successful vendor of client/server manufacturing software, with release 3.2 of GEMMS (Global Enterprise Manufacturing Management System). The package includes a key piece of software that funnels data from the factory floor into enterprise systems that process business data. GEMMS runs on Windows 3.x clients and Unix servers. The system works with any Open Database Connectivity (ODBC)-compliant database. Datalogix, in Valhalla, New York, also has a partnership with Oracle Corporation to pair Oracle's SmartClient line of financial and distribution client/server applications with GEMMS, even though Oracle has begun releasing its own manufacturing application line.

In summary, from the array of available products and solutions it is obvious that one-size software, or even one method of obtaining a software solution, cannot fit all businesses. It is also unlikely that any single developer will meet all of a company's computing requirements. One of the fastest ways to distinguish among systems and find a 'best fit' for an individual company is to find out what is required to adapt the software to a particular (and sometimes peculiar) method of doing business. Flexibility translates into the ability to customize fields, screens and relationships of data. The systems by the large developers offer full customization of fields, screens and layout. As flexibility increases, price tends to go up.
But low price does not always translate into loss of flexibility. Many systems at the low end of the price range have some flexibility. The names of miscellaneous fields may be changed, or perhaps unwanted items can be masked, and in some cases custom screens can be added. But how much customization does an organization truly need? Today companies are forced to distinguish between "must have" requirements and "nice to have" components (O'Connell, 1995). Along with the operation of the MIS, companies must decide which MIS tools are important (e.g., financial management, materials control, personnel management or production control), and choose or develop an MIS that suits their specific needs. It must also always be kept in mind that an MIS is a management information solution, not a management solution. An example of the development of system-user interfaces is given in this case study. To make the steps and logic easier to follow, a description of a commercially available MIS software package, ManuSoft, and the actual company to which it was applied are discussed next.
ManuSoft Management Information System

ManuSoft is a modular package similar to its larger competitors R/3, MAPICS XA and BPCS. There are five modules in the complete package:
• Quoting and Estimating System (QES),
• Production Scheduling and Control (PSC) system,
• Manufacturing Control System (MCS),
• Manufacturing Inventory Control (MIC) package, and
• Materials Requirements Planning (MRP) package.
When all five modules are implemented, the system presents a fully integrated MRPII system, with information control from order to dispatch. The system can be installed in total or in separate modules. The modules used depend on the requirements of the user. The ManuSoft system is designed specifically for the DOS operating system. When recording data it locks only individual records for update, so the rest of the file is open for multiple-user access. This overcomes multiple-user difficulties that can be encountered by programs running on DOS.
Any IBM-compatible PC with 4MB of memory and an 80386 processor (or higher) can handle its operation. To operate in the multi-user configuration, it is preferable to run from a more powerful computer, although this is a recommendation rather than a requirement. The most important technology for ManuSoft with respect to operating speed is the server hard drive. This is because the ManuSoft system, to allow multiple users to operate simultaneously in the DOS environment, must write all transactions that may cause changes in the state of its database directly to disk. As with any system, memory is a valuable asset, in ManuSoft's case specifically for output operations. When compiling reports, the ManuSoft system writes and sorts many temporary files in order to organize data. As these files and the output reporting functions do not make any changes to the database files, the system can make use of main memory for these processes. ManuSoft recommends the program be run on an Ethernet Local Area Network (LAN) controlled by Novell network software. ManuSoft's original design was for this platform, although use of most networking systems is conceivable. Earlier versions of ManuSoft would not run on Windows networking systems, NT or Windows 95. The powerful caching tools of these systems overrode the record-locking facilities of the MIS, causing system failure. ManuSoft has overcome these difficulties in later versions, but only through robust, proprietary programming measures. Unfortunately these measures also slow down the system. The ManuSoft system can collect data from the shop floor in three different ways: 'touch screen' technology, bar code readers or manual keyboard entry. ManuSoft has developed a factory control system that operates primarily through a network of touch screens that allow shop-floor workers to interact with the system to correctly choose priorities and to update work progress.
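The per-record locking and write-through-to-disk behaviour described above can be illustrated with a short sketch. The record size, file layout and function names are assumptions for illustration (ManuSoft's actual formats are proprietary), and POSIX byte-range locks via Python's `fcntl` module stand in for the DOS share/lock calls a real DOS program would use:

```python
import fcntl
import os

RECORD_SIZE = 128  # hypothetical fixed record length


def update_record(path: str, index: int, payload: bytes) -> None:
    """Lock a single record, rewrite it, and force it to disk.

    Only the bytes of this record are locked, so other users can keep
    reading and writing the rest of the file while the update runs.
    """
    payload = payload.ljust(RECORD_SIZE, b" ")[:RECORD_SIZE]
    offset = index * RECORD_SIZE
    with open(path, "r+b") as f:
        # Exclusive byte-range lock covering only this record.
        fcntl.lockf(f, fcntl.LOCK_EX, RECORD_SIZE, offset, os.SEEK_SET)
        try:
            f.seek(offset)
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())  # write through: no cached state to lose
        finally:
            fcntl.lockf(f, fcntl.LOCK_UN, RECORD_SIZE, offset, os.SEEK_SET)


def read_record(path: str, index: int) -> bytes:
    with open(path, "rb") as f:
        f.seek(index * RECORD_SIZE)
        return f.read(RECORD_SIZE)
```

The `fsync` call is what makes the server hard drive, rather than memory, the performance bottleneck for database updates, exactly the trade-off the paragraph above describes.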
The touch screens are nothing more than simple DOS-based PCs with touch-sensitive displays. However, the ManuSoft system has certain disadvantages:
• The structure of the data files makes it necessary to write specific operation programs for every data file. It also means that users are presented with a partially rather than fully integrated relational database.
• The operation of the index system makes it impossible for a user to define or restructure the search priorities or configure specific tracking of items.
• The combination of Ethernet networking and the DOS operating system makes it unlikely that ManuSoft will ever run advanced data collection and distribution routines, using CAD data for example.
• The user interface for system operation is inflexible, forcing users to follow specified steps to produce outputs whether or not they require those steps.
• The structure of the data files makes comprehensive and flexible reporting to management requirements very difficult to achieve. The system's inability to arrange and convey required information inhibits its ability to meet the information requirements of an MIS.
CASE DESCRIPTION

As mentioned earlier, ManuSoft remained largely under-utilized in the company. In order to investigate the truth of this allegation, the project team as a first step inquired into the extent to which ManuSoft meets the information requirements of managers at various levels. Senior management has many requirements of an information system at the strategic level in order to make informed decisions on the direction of the company. Expenditure to upgrade current equipment and recruiting new people to meet production demands are just two examples. Unfortunately the ManuSoft MIS is not capable of fulfilling many of the information requirements at this level. ManuSoft provides no way to compare actual costs with sales values, on the same report, for a number of jobs. To perform such an analysis for a group of jobs, reports have to be produced from the job cost reporting system and the invoice reporting system, and the results manually collated.
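The manual collation step just described (one report from the job cost system, another from the invoice system, matched up job by job) amounts to a simple join on the job number. A minimal sketch follows; the record layouts and field names are assumptions for illustration, not ManuSoft's actual file formats:

```python
def collate_costs_and_sales(job_costs, invoices):
    """Join actual costs and sale values by job number into one report.

    job_costs: list of {"job": ..., "actual_cost": ...} rows, as might be
               exported from the job cost reporting system.
    invoices:  list of {"job": ..., "sale_value": ...} rows, as might be
               exported from the invoice reporting system.
    """
    sales = {row["job"]: row["sale_value"] for row in invoices}
    return [
        {
            "job": row["job"],
            "actual_cost": row["actual_cost"],
            "sale_value": sales.get(row["job"], 0.0),
            "margin": sales.get(row["job"], 0.0) - row["actual_cost"],
        }
        for row in job_costs
    ]
```

A join like this, trivial in any programming environment, is precisely what the stock ManuSoft reports could not express on a single page.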
However, ManuSoft does provide information on work centre loading, but the interface provided to generate reports is so cumbersome that nobody is willing to learn it. The loading charts produced by ManuSoft are half an A4 page in size and are fixed to a three-month period. The information is far too detailed and complicated for a manager. ManuSoft, being a text-based system, does allow a middle-level manager to view both the present and future loading of resources, but not on the same screen. In fact an entirely different section of the program must be accessed to go from current loading to future loading. On top of this, the format of the interface is such that no more than 12 jobs per work centre can be listed. This makes comparison and prioritising of jobs extremely difficult. As managers at the middle level are responsible for maintaining the efficiency of each work centre, they need access to an effective information system. Decisions that can affect the efficiency of a work centre involve innumerable variables such as personnel, maintenance and tooling. It is vitally important that management has a means by which to record and measure the effects of any decisions made. Without a process to provide feedback on such decisions efficiently, management at the planning level has no means with which to monitor and ensure improvement. Supervisors, at the lower level of management, are responsible for allocating individual jobs to resources based on the priorities set by the middle-level managers. Deciding which job should go on what machine, and when, is a highly complex decision-making area. Any job that has a milling operation can be performed on any milling machine, but certain preconditions need to be satisfied. As such, ManuSoft is good at providing the information required by supervisors. In spite of that, the supervisors are not using ManuSoft, primarily due to issues related to motivation.
The factors related to motivation in using ManuSoft are not addressed in this case study. The project team investigated avenues to improve the usage of ManuSoft through user-interface development. Designing software interfaces that fit the work environment is a success factor proven by many researchers. A key factor in successful IS implementation often cited by IS researchers is teamwork between the developers and the end users of the system. When IS developers work closely with the end users of the software, the fit of software to application is much better. In order to develop the MIS-user interfaces, the information requirements were assessed at the strategic, planning and operations levels. Staff were interviewed to discuss reporting requirements and preferences at each level. The requests for information showed that the ManuSoft system caters well for the operations level of the business, but the strategic and planning levels required a great deal of work. In total, 20 program files were developed to meet the shortfalls in information requirements. Table 2 provides a brief description of each program's purpose and file name. To describe the development of the interface programs and their operation, this section examines three examples: delivery performance, production by cost centre and invoicing performance.
Delivery Performance Interface (Program 3)

The delivery performance program file is an example of early interface development. The program simplifies the production of the required reports immensely. At the introduction of this project, ABC Engineering had no measurement of delivery performance. Developing a management interface capable of such measurement was therefore a high priority. The importance of delivery performance measurement was reflected in a survey of customers carried out by ABC Engineering's General Manager in 1995, which ranked 'on-time' delivery as the number one requirement of a good supplier. Delivery performance is an area that is important to companies worldwide, not only small job shops. Surveys of manufacturing executives in large successful companies in Europe, the USA and Japan repeatedly rank dependable and fast delivery among their top five competitive priorities. The problem of tardy deliveries can be tackled on many levels: company-wide, by product type or on an individual job-by-job basis. The frequency of reports varies depending on which level the
Table 2: Developed Interface Programs

No. | Program Description | Program File Name

Strategic level
2 | Work by cost centre | COST_CEN.XLS
3 | Delivery details for a period | DESP_R2.XLS
5 | Invoices by product code for a period | INVOICE3.XLS
8 | Hours by product code with orders, reworks and invoices | NIGHTLY.XLS
9 | Monthly order intake report | ORDER.XLS
11 | Hours, rework, invoices and orders by planner for a period | PLANNERS.XLS
12 | Hours, rework, invoices and orders by product code for a period | PRD_CODE.XLS
14 | Progress claim information for a period | PROGRES2.XLS
15 | Financial information on completed jobs for a period | RJS_COMP.XLS
20 | Open orders at a point in time, with data on costs and sales | OPENSALE.XLS

Planning level
1 | Highlights sub-assemblies on the order book for which the final assembly has been despatched; such sub-assemblies must be removed manually | SUBASS.XLS
4 | Find the planner given a job number | FIND_PL.XLS
6 | Download labour list from ManuSoft | LABOUR.XLS
10 | Download parts and processes from ManuSoft | PARTDOWN.XLS
13 | Manufacturing production sheets | PROCESS3.XLS
16 | Information on hours worked by subcontractors for a period | SUBBIES.XLS
17 | Presents the hours spent by employees on non-productive work activities such as cleaning machines, training and maintenance | ZEROP2.XLS
18 | Presents machine loading and work scheduled for a chosen work centre, in graphical format | SCHED3.XLS
19 | Highlights, by responsible planner, discrepancies between hours estimated and actual hours taken to produce the job on the shop floor | EST01.XLS

Operations level
7 | Adjust nightshift clock-offs to the correct period | NIGHT.XLS
report is aimed at. As there was no delivery performance measurement available at ABC Engineering before this project, the global delivery-reporting task was the priority. The reports are required on a monthly basis. Two delivery reports were developed to meet the management requirements at ABC Engineering. The first is a delivery trend line showing the percentage of 'on-time' deliveries on a monthly basis. The second is a summary report in graphical format, by product code.
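The monthly 'on-time' trend plotted by the first report reduces to a simple calculation: for each month, the share of deliveries made on or before their due date. A sketch of that calculation follows; the input record layout is an assumption for illustration, not ABC Engineering's actual data format:

```python
def on_time_percentage_by_month(deliveries):
    """Percentage of on-time deliveries per month.

    deliveries: iterable of (due_date, delivered_date) pairs. A delivery
    counts as 'on time' if made on or before its due date, and is binned
    by the month in which it was actually delivered.
    Returns {(year, month): percentage}.
    """
    totals, on_time = {}, {}
    for due, delivered in deliveries:
        month = (delivered.year, delivered.month)
        totals[month] = totals.get(month, 0) + 1
        if delivered <= due:
            on_time[month] = on_time.get(month, 0) + 1
    return {m: 100.0 * on_time.get(m, 0) / n for m, n in totals.items()}
```

Plotting the returned percentages month by month gives exactly the trend line described above.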
Production by Cost Center Interface (Program 2)

The production by cost center report was one of the first 'direct access' interfaces developed. The user is not required to interact with the ManuSoft system at all. The program accesses the complex JOBCOST.DAT file directly and extracts data from the file to the appropriate areas on the final report. It is important for the accounting department at ABC Engineering to be able to keep track of hours worked at various cost centers on the shop floor between set dates. This allows a labor recovery value to be assigned to each work center for that period and forms part of the performance evaluation of the individual cost centers. Various management divisions such as the Board of Directors or the General Manager require data from the Accounts department on cost center hours for company evaluation. This data may be required at any time depending on its application, and usually it is required instantly. Often the Accounts department is tasked with providing evaluations for specific time periods with durations of anything from days to years. The report provides data for each work center. All data required for this report is contained in the ManuSoft JOBCOST.DAT file, and as such only a simple program is required to extract it. There is not even a need to use the ManuSoft indexing system. To gather the information through the ManuSoft system would require running 27 reports (one for each cost center) from the job cost reporting section and then transferring the data to a spreadsheet. Each report from that particular section of ManuSoft takes approximately 15 minutes to run (a total of approximately seven hours), while accessing the data through this interface takes only 15 minutes in total. The logic of the program is simple. It compares the operation dates recorded in the job cost file, for each of nine possible operations, in every job cost record.
If a date falls between the user-defined reporting dates, the program records the work center of the operation and converts the time, recorded in base 90 code, to hours. Finally, the work center is located on the report spreadsheet and the converted time is added to any time previously recorded.
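That extraction logic can be sketched as follows. The exact 'base 90' encoding is not documented in the case, so the sketch assumes the raw field stores ninetieths of an hour; the record layout and names are likewise illustrative, not the real JOBCOST.DAT format:

```python
def base90_to_hours(raw):
    """Convert a raw time value to hours.

    Assumption: the 'base 90' field holds ninetieths of an hour; the case
    study does not document the actual encoding.
    """
    return raw / 90.0


def hours_by_work_centre(records, start, end):
    """Accumulate hours per work centre for operations dated in [start, end].

    Each record carries up to nine (work_centre, op_date, raw_time) tuples,
    mirroring the nine possible operations per job cost record described
    above. Returns {work_centre: total_hours}.
    """
    totals = {}
    for rec in records:
        for centre, op_date, raw in rec["operations"]:
            # Only operations falling between the user-defined dates count.
            if start <= op_date <= end:
                totals[centre] = totals.get(centre, 0.0) + base90_to_hours(raw)
    return totals
```

A single pass over the file like this replaces the 27 separate per-cost-center reports the stock ManuSoft interface would require.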
Invoicing Performance Interface (Program 5)

The invoicing performance report reflects the development of auxiliary programs allowing the design of interfaces that automatically access a number of the ManuSoft data files. This program draws on data from three different data files; it sorts, summarizes and presents the final report through a 'one-touch' interface. The senior management of a jobbing shop requires data from the invoice reporting system to help answer many important questions. For example, is there enough money this period to pay everyone? Is there a strong market for product C, or should the business concentrate on product D? The data on invoicing alone cannot answer all of these questions, but it can provide information on money due to come into the business. This information can then be compared with data from cost reporting systems to give an indication of financial progress. To gather the information required by management at ABC Engineering for the monthly invoice report through the ManuSoft system would require the generation of 12 reports from the selective invoice file printout. Such reports take approximately five minutes each, followed by manual data transfer to a spreadsheet for presentation. This process would not only be time consuming but would also run a high risk of error due to manual data handling. To overcome these difficulties, a program was written to access the ManuSoft databases directly and produce a 'one-touch' report that could be run, in a matter of minutes, by any staff member with access to the appropriate equipment.
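The core of such a one-touch report is a single pass over the invoice records, accumulating invoiced value per product code for the chosen month, as in this sketch (the field names are assumptions for illustration, not ManuSoft's actual invoice file layout):

```python
def monthly_invoice_summary(invoices, year, month):
    """Total invoiced value per product code for one calendar month.

    invoices: iterable of {"date": date, "product_code": str, "value": float}
    Returns {product_code: total_value}.
    """
    summary = {}
    for inv in invoices:
        if inv["date"].year == year and inv["date"].month == month:
            code = inv["product_code"]
            summary[code] = summary.get(code, 0.0) + inv["value"]
    return summary
```

The per-product totals answer exactly the "product C versus product D" question posed above, without the 12 manual report runs.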
CURRENT CHALLENGES

As stated at the beginning, ABC Engineering was facing the challenge of making its existing information system more effective for users at various levels in the organization. The MIS ManuSoft, which was introduced to ABC Engineering prior to the start of this project, could not provide the right information to different levels of management at the appropriate point in time. So, what were the remedies? The development of the interface programs was one way to solve this critical problem at ABC Engineering. The system/user interface development in this case is considered a short-term remedy for a possibly long-term problem. The business and work environment of ABC Engineering is very volatile, and to satisfy its vast data storage and analytical requirements one would perhaps advocate a much more powerful information system than ManuSoft. However, the development of interfaces by object-oriented programming gave the existing system an extended lease of life, and produced much-needed cost savings. While this case study concentrated on the provision of business information to management in small manufacturing organizations, it did not tackle the problems of data collection and data integrity. This is an extremely difficult area in practical installations of an information system. The collection of data, for example the hours spent on a job throughout the working day, is no simple task. Reconciling the reported data with actual data can cost an organization thousands of dollars every year; yet it is this information that drives an MIS. If the input information is not accurate, the information supplied to all levels of management will not be accurate. The outputs of the MIS may then be useless, and possibly even damaging to an organization. The method of data collection also poses a challenge.
In the job shop environment, where at present a great deal of work is still done manually, the data collection method presents an even greater challenge than interfacing to management. Because there is little benefit to workers on the shop floor, there is no incentive for them to enter data accurately. This motivational problem is generally overcome by making the data entry process (e.g., clocking on and off jobs) part of shop floor work requirements. Research needs to be done into this side of MIS development. At the time of writing this case study the company is going through a takeover bid, and most likely ABC Engineering will be sold to a large manufacturing corporation. If that happens, the information systems at ABC Engineering will certainly attract a fresh look, which will pose a great opportunity as well as a challenge to the new management.
ACKNOWLEDGMENT The author gratefully acknowledges the contribution of Matthew Sweeney in the preparation of this case study.
FURTHER READING

Mandal, P. and Baliga, B. (2000). 'MIS-user interface design for job shop manufacturing environment', International Journal of Operations and Production Management, 20(4), 468-480.

SAP. 'R/3. Better information, faster', SAP AG [About SAP], http://www.sap-ag.de/aboutus/sapr3.html.

Manufacturing Systems. 'Top 75 Ranking', http://manufacturingsystems.com/software/1_10.html.
REFERENCES

Ettlie, J.E. & Getner, C.E. (1989). Manufacturing software maintenance. Manufacturing Review, 2(2), 129-133.

Kochan, A. & Cowan, D. (1986). Implementing CIM, Computer Integrated Manufacturing. IFS (Publications) Ltd.
Laudon, C. & Laudon, P. (1991). Management Information Systems: A Contemporary Perspective, 2nd ed. Macmillan, New York, NY.

Meredith, R. (1987). Implementing new manufacturing technologies: Managerial lessons over the FMS life cycle. Interfaces, 17(6), 51-62.

Miles, G.L. (1995). Mainframes: The next generation. International Business, Aug., 14-16.

O'Connell, S.E. (1995). HR systems: Does higher price mean a better product? HR Magazine, July, 32-33.

Piszczalski, M. (1992). Power struggle: Can MIS rule the shop? Corporate Computing, 1(3), 217-219.

Ronen, B. & Palley, A. (1988). A topology of financial versus manufacturing management information systems. Human Systems Management, 7(4), 291-298.
BIOGRAPHICAL SKETCH

Purnendu Mandal is an associate professor of MIS. His current teaching interests include principles of MIS, database management, electronic commerce and strategic MIS. He has taught in England, India, Singapore, Australia and the USA, and has published over 100 refereed journal and conference papers. His work has appeared in International Journal of Production and Operations Management, Industrial Management and Data Systems, International Journal of Quality and Reliability Management, Intelligent Automation and Soft Computing: An International Journal, Logistics Information Management, European Journal of Operational Research, and others. Dr. Mandal serves on a number of professional bodies and is currently on the editorial board of three international journals.
Shared Workspace for Collaborative Engineering Dirk Trossen Nokia Research Center, USA André Schüppen and Michael Wallbaum Aachen University of Technology, Germany
EXECUTIVE SUMMARY

In the broad context of the collaborative research center project IMPROVE (Information Technology Support for Collaborative and Distributed Design Processes in Chemical Engineering), the presented case study has concentrated on the provision of appropriate communication technology, specifically shared workspace facilities, to enable collaborative working between distributed engineering teams. Issues like distributed developer meetings, sharing common data, or even sharing entire workspaces, including the off-the-shelf tools used in the development process, are the driving forces for the studies on how to provide appropriate technology for collaborative engineering. The considered case in the field of chemical engineering and development is a difficult candidate for collaborative engineering due to the variety of proprietary data and tools to be integrated into a shared workspace. Furthermore, common aspects of cooperative working among development teams have to be considered as well. The resulting architecture, based on the findings of the current stage of the case, is presented; it tries to use as much existing software as possible. Drawbacks and challenges encountered during the case study due to the a-posteriori approach are outlined, leading to a revised architecture proposal to be used in the future as a common platform for information technology support within the context of the research project. Expected benefits and problems of the introduction of the new architecture are discussed.
BACKGROUND The collaborative research center project 476 IMPROVE (Information Technology Support For Collaborative and Distributed Design Processes in Chemical Engineering), sponsored by the German Research Foundation, is a cooperation between chemical engineers and computer scientists to improve development processes in chemical and plastic engineering and to study information technology aspects to support development processes by means of evolving multimedia, planning, and modeling tools. Copyright © 2002, Idea Group Publishing.
A development process in chemical and plastic engineering concentrates on designing a new, or revising an existing, production process, including its implementation (Nagl & Westfechtl, 1999). A development process in process engineering, as in other engineering sciences, is divided into different phases. In a first phase, the requirements for the material products are determined on the basis of a market analysis. Afterwards, the concept design and the basic engineering take place, finally leading to a definition of the basic data of the engineering process. Detail engineering follows, deepening the data determined in the basic engineering. The design is then transferred into apparatuses, machines and piping, including measuring technique, control engineering and automatic control. Finally, the development process is finished, and the procurement of all units, the assembly of the system, its line-up and the system operation take place. As outlined in Nagl & Marquardt (1997), the first phases of a development process, i.e., basic and detail engineering, are of utmost importance for several reasons. Firstly, around 80 percent of the production costs are defined in these phases. Secondly, the complexity of these phases is fairly high due to the different aspects of the production process to be considered. Hence, proper support of these phases by appropriate methodologies as well as information technology is crucial to guarantee a successful completion of the development process.
SETTING THE STAGE

The focus of the research project IMPROVE is on some subtasks of the concept design which seem particularly important for improving and accelerating future development processes. Typical characteristics of these subtasks include the following (Nagl & Westfechtel, 1999):
• Number and background of developers: Different tasks of the concept design are usually solved by different developers and development teams. The number and size of these teams as well as the personal backgrounds of the individual team members usually complicate information exchange and the understanding of solutions, additionally hampered by differing terminology and the lack of common tool support.
• Geographical and institutional distribution: Due to the globalization of institutions, the aspect of geographically distributed teams becomes more and more important. Intra- and inter-team meetings become a challenging task for the supporting information technology, since arranging physical meetings at a single place adds significant overhead to the development process in terms of additional journeys. Hence, appropriate synchronous as well as asynchronous communication means are desired to support the widely dispersed developer teams.
• Team coordination and planning of the development process: Planning and management tools are desired for team coordination and planning purposes, inherently supporting the dynamic nature of a development process.
• Cooperation and information exchange: Information of all sorts, such as documents and planning information, has to be exchanged among all developers while ensuring the information's consistency. This task places a burden on the supporting information technology, specifically on the version and database management of the project.
• Reusability: Due to the desire to reduce development costs, reusing well-known techniques becomes more and more important.
Specific techniques as well as generalized solution patterns are typical candidates for reuse. Appropriate modeling and documentation means are required for this issue.
Within IMPROVE, a chemical development process is used as a specific case of the different tasks to be covered by the project. The project is divided into four main parts, which are further divided into subtasks. The first part deals with development processes for chemical engineering, the second covers methods and tools to support development processes, while the third investigates the mapping onto new or existing information technology platforms. Finally, the fourth part is responsible for integration, disseminating the results to industrial partners.
Shared Workspace for Collaborative Engineering
121
Figure 1 shows an overview of the different parts and their subtasks within IMPROVE (Nagl & Marquardt, 1997). The case study considered in this chapter covers the information technology support for multimedia communication within IMPROVE. This subproject (dashed box in Figure 1) aims to improve synchronous multimedia-based interaction among distributed developer teams, one of the core elements of today's development processes.
CASE DESCRIPTION

As mentioned in the previous section, multimedia communication and interaction among distributed development teams is crucial to improve the overall design process. Hence, the provision of an appropriate architecture to enable collaborative synchronous working among the teams is the case considered in the following. For that, the development scenario within IMPROVE is outlined, leading to a requirements definition for the communication architecture. Finally, the current architecture within IMPROVE, intended to fulfill the previously defined requirements, is presented.
Collaborative Development Scenario

The typical engineer in a chemical development process is a specialist, responsible for only a small part of the whole project, e.g., special chemical reactors. Usually, specialized applications are used within his knowledge domain. Input of various sorts, such as specifications and parameters, has to be incorporated, but results from his specific part also have to be distributed to other experts for use in their specific tasks. At certain points in the development process, meetings are used to discuss the results and to synchronize the work. In addition to these mostly pre-announced meetings, ad-hoc coordination might be necessary due to ambiguities or errors during the engineering process, which make it necessary to contact another involved specialist. For instance, the specialist for degassing may have problems meeting the specifications because of a particular input parameter. He can then ask the expert for the chemical reactors to modify the reactor's specification by changing this parameter. Besides the audiovisual exchange of information, the data-sharing aspect is of utmost importance, specifically concerning the off-the-shelf applications used by the individual engineers.
[Figure 1: Structure of IMPROVE. Parts: A, Development Processes; B, Methodologies and Tools; C, Mapping onto New and Existing Platforms; I, Integration. Subtasks shown: Processes for Concept Design, Design Process for Extruders, Product Data Models, Scenario-based Analysis, Experience-based Development Processes, Incremental Integration Tools, Multimedia Communication, Reactive Administration System, Information Flow Administration, Service Management and Trading, Work Processes and Communication, Software Integration and Framework.]

Requirements for the Information Technology Support

From a user's perspective, the major requirement of the information technology support within IMPROVE is to provide multimedia conferencing among distributed groups of developers by means
of audiovisual real-time communication and shared workspace facilities, allowing different groups to use their existing local tools. In the following, a non-formal description of the required functionality from a user's perspective is presented to define requirements for the selection process.
As mentioned above, the considered conferencing scenario is typically a project meeting among developers, which mostly takes place in small groups. For that, facilities like special conferencing rooms with special audio/video equipment (Trivedi & Rao, 1999) might be used, as well as standard PC equipment. The potential group of users is usually well known, i.e., the conference might be announced beforehand or ad-hoc meetings might be established among this group. A conference database might reflect the participating members together with additional information, e.g., site address or phone number. Functionality like authentication and authorization is needed to provide secure, restricted-access conferences due to the closed-group character of the case.
As a minimal functionality for a virtual meeting, users should be able to exchange audiovisual information, either mixed from the streams of all participants or selected according to a given access scheme. Furthermore, one participant is often designated to conduct the meeting and enforce the adherence to certain social rules (Walker, 1982) for the interaction among the participants, e.g., signaling a question before sending one's own audiovisual data. Thus, a rule-based communication among the conference members takes place, which has to be supported by the underlying system.
Besides the aspect of sharing audiovisual information among the users, distributing data of legacy applications such as specific development tools is crucial to share specific knowledge among the group. For that, legacy applications have to be shared among different sites while ensuring consistency of the different copies of the application.
Each local change to this application is displayed on each user's terminal and appropriately synchronized. For interactive communication, the system should support giving control of the shared application to certain users.
In the following, this non-formal description of required functionality is broken down into technical requirements for the desired architecture, to highlight the issues to be addressed during the technology selection process.

Conference Management

The issue of managing conferences is divided into several aspects to be covered, namely management of the environment, provision of a conference database, handling of security aspects, and management of network resources. These aspects are described in detail in the following:
• Management of the conference environment: Functionality to announce and to initiate a conference is needed. For that, advertisement functionality is required. Furthermore, conference setup functionality is crucial, including features to invite other users to a (running) conference or to merge conferences by appending or inviting entire conferences. Additionally, splitting capabilities can be used to form new sub-conferences out of existing ones. In general, sophisticated reconfiguration features have to be provided by the management services. Furthermore, the conference management should enable conducted conferences in the sense that a dedicated user is defined with special management rights, like expelling other users from a running conference or denying access to a conference.
• Provision of a common conference database: A common conference database is crucial for the provision of membership information, containing commonly used information (like naming and location information) as well as extended, scenario-specific data. Note that this database does not handle any legacy-specific data, which is addressed by a separate database management system.
• Handling of security aspects: For the provision of secure conferences, authorization and authentication have to be provided to support secure transfer of user data.
• Management of network resources: The conferencing architecture should provide facilities to control the network resources, e.g., by providing a defined and guaranteed quality of service.
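The management features above (setup, invitation, merging, splitting, expelling, and a common member database) can be sketched as a small service interface. This is a minimal illustration only; all class and method names are invented here and are not part of any conferencing standard or of the IMPROVE implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Member:
    """Entry in the common conference database (names are illustrative)."""
    name: str
    site: str            # e.g., host address
    phone: str = ""

@dataclass
class Conference:
    conf_id: str
    conductor: str                               # member with special management rights
    members: dict = field(default_factory=dict)  # name -> Member

    def invite(self, member: Member) -> None:
        self.members[member.name] = member

    def expel(self, name: str) -> None:
        # In a conducted conference, only the conductor may call this.
        self.members.pop(name, None)

class ConferenceManager:
    def __init__(self):
        self.conferences: dict[str, Conference] = {}

    def create(self, conf_id: str, conductor: Member) -> Conference:
        conf = Conference(conf_id, conductor.name)
        conf.invite(conductor)
        self.conferences[conf_id] = conf
        return conf

    def merge(self, dst_id: str, src_id: str) -> Conference:
        """Append all members of one conference to another."""
        dst, src = self.conferences[dst_id], self.conferences.pop(src_id)
        dst.members.update(src.members)
        return dst

    def split(self, conf_id: str, new_id: str, names: list[str]) -> Conference:
        """Form a new sub-conference out of an existing one."""
        old = self.conferences[conf_id]
        new = Conference(new_id, names[0])
        for n in names:
            new.members[n] = old.members.pop(n)
        self.conferences[new_id] = new
        return new
```

A real service would additionally cover announcement, authentication, and resource reservation, which are omitted here.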
Multipoint Transfer

The aspect of secure, location-transparent exchange of user data among several developers, independent of specific networks and end systems, has to be supported by appropriate multipoint transfer means. These should provide communication patterns ranging from one-to-one to many-to-many communication. Furthermore, secure transfer of the data has to be implemented. For the location transparency of the transfer service, a transport layer abstraction has to be provided to address a certain user group in the conference. This abstraction should also enable the use of different kinds of transport facilities depending on the transmitted data type, e.g., streaming protocols for real-time audiovisual data. Moreover, the transfer of user data among heterogeneous end systems has to be provided, i.e., appropriate marshaling capabilities (Trossen, 2000a) should be provided by a common transfer syntax for the exchanged user data.

Floor Control

This topic covers functions for the operation of distributed applications in general. This means in particular the provision of means to synchronize the application state and to enable concurrent access to shared resources such as storage devices or printers, but also to objects or streams. For that, the access to resources is abstracted by the notion of a floor, which was introduced by psychological studies of turn-taking (Walker, 1982). Dommel & Garcia-Luna-Aceves (1995) proposed a floor control protocol with basic operations to allocate, ask for, and pass a floor. These floors might be defined exclusively or non-exclusively. Together with a member query operation, these operations were shown to be sufficient to abstract access control and conventional turn-taking in conferencing scenarios. Thus, this functionality should be provided for the access control in distributed applications.
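A minimal sketch of these floor operations (ask for, release, pass, member query) for exclusive and non-exclusive floors follows. The FIFO queueing policy and all names are illustrative assumptions, not taken from Dommel & Garcia-Luna-Aceves' protocol.

```python
from collections import deque

class Floor:
    """Abstracts access to a shared resource (stream, file, pointer).

    A floor may be exclusive (at most one holder at a time, as for
    application input control) or non-exclusive (several holders).
    """
    def __init__(self, name: str, exclusive: bool = True):
        self.name = name
        self.exclusive = exclusive
        self.holders: set = set()
        self.queue: deque = deque()   # pending "ask for floor" requests

    def ask(self, user: str) -> bool:
        """Request the floor; grant immediately if possible, else queue."""
        if not self.exclusive or not self.holders:
            self.holders.add(user)
            return True
        self.queue.append(user)
        return False

    def release(self, user: str) -> None:
        """Give the floor back; the next queued requester (if any) is granted."""
        self.holders.discard(user)
        if self.exclusive and not self.holders and self.queue:
            self.holders.add(self.queue.popleft())

    def pass_to(self, user: str, successor: str) -> None:
        """Hand the floor directly to another member (conventional turn-taking)."""
        if user in self.holders:
            self.holders.discard(user)
            self.holders.add(successor)

    def members(self) -> set:
        """Member query: who currently holds the floor."""
        return set(self.holders)
```

A conducted conference would layer a privileged role on top of this, e.g., letting the conductor revoke any floor.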
Legacy Support

Since development processes mostly use specific tools during the lifetime of the process cycle, the support of legacy software and its distribution among the different sites is a crucial aspect of the provision of information technology in development processes. Hence, the desired architecture should specifically enable sharing of any legacy software among the sites, to preserve the given pool of software in use within the different processes. Specifically for the considered case, the exchange of chemical process data and its rendering information is desired.

Process Integration

Besides the support of legacy software, an overall integration of the shared workspace functionality into the engineering process is of utmost importance. This includes organizational challenges, such as planning, as well as integration with other information technology means, such as calendar and database systems. Of these, the calendar and scheduling aspect directly influences the architecture for the considered case, since the initiation of collaborative meetings has to be coordinated with appropriate scheduling systems.

Organizational Issues

Although some of the technical requirements above contain organizational aspects, like the integration of legacy data and applications, the overall organizational paradigm for the selection process as such follows an a-posteriori approach, i.e., as much existing software as possible is to be integrated into the solution. However, specialized hardware to provide some of the functionality is not desired, due to the increased cost of such a solution, and thus should be avoided if possible.

Chosen Architecture

Based on the findings concerning the information technology requirements, the current architecture is outlined in the following section. For that, the main components of the architecture are sketched with
respect to their provided functionality, before giving a more technical insight into the chosen technology in terms of the implemented protocol stacks.

Component Overview

The current workspace within IMPROVE provides transfer of audiovisual information. Currently available audiovisual conferencing products, e.g., H.323-based systems (ITU, 1996), require expensive Multipoint Control Units for performing mixing in multi-user conferences, which contradicts the goal of keeping costs low. Furthermore, these centralized solutions increase network load due to traffic duplication, which restricts their applicability due to the high bandwidth usage. As a consequence, self-developed audio/video tools are used to deliver the audiovisual information to all participants, using the Internet's multicast capability and performing the mixing at the end systems. Floor-controlled conferences are currently not supported.
For the presentation of non-real-time data such as slides or whiteboard contents, but also for displaying a local application's output, the shared application functionality of a commercial tool is used, based on the current standard of the International Telecommunication Union (ITU, 1998) for this functionality. This solution facilitates sharing any local application with a set of users.
Figure 2 shows the different components of the chosen architecture. It can be seen that the workspace comprises three components providing audio, video, and data transfer functionality. All components are realized either as separate tools, i.e., for audio and video, or by using commercial products, i.e., for data transfer, enabling a simple replacement of the different components. Additionally, a user interface was realized for launching and terminating the individual components, abstracted in Figure 2 by the gray block surrounding the different tools. Hence, the user is presented with an integrated user interface which transparently executes the desired tool.
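Mixing at the end systems, as the self-developed tools do instead of relying on a central MCU, amounts to summing the decoded sample streams of all senders at each receiver. A minimal sketch for 16-bit PCM follows; the function name and sample format are illustrative assumptions, not the tools' actual code.

```python
def mix_pcm(streams, lo=-32768, hi=32767):
    """Mix decoded 16-bit PCM frames from several senders at the receiver.

    Each stream is a list of signed 16-bit samples covering the same time
    window; samples are summed and clipped (saturated) to the 16-bit range.
    Because every receiver does this locally, no central Multipoint
    Control Unit is needed.
    """
    if not streams:
        return []
    n = len(streams[0])
    mixed = []
    for i in range(n):
        s = sum(stream[i] for stream in streams)
        mixed.append(max(lo, min(hi, s)))
    return mixed
```

The price of this design is that every end system must receive and decode all streams, which IP multicast makes affordable on a local network.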
[Figure 2: Current Workspace Architecture. The video tool, audio tool, and T.120-based data tool are connected via the Internet and wrapped by a common user interface.]

[Figure 3: Implemented Protocol Stacks. A starter tool launches the audio and video tools, which run on RTP/RTCP over UDP, and the T.120-based tool, which runs the T.120 protocols over TCP; both stacks sit on IP/IP multicast.]

Implementation Overview

After outlining the components in Figure 2, this section gives a brief technical insight into them. For that, the realized protocol stack is depicted in Figure 3, outlining the functionality of
the different layers used to implement the workspace architecture of Figure 2. Since this section only gives a rough overview of each layer, see the specific references for a more technical description. The gray blocks outline the different tools residing on top of transport-level protocols. The starter tool executes the different underlying tools.
The audio/video tools have been developed within internal research projects on audio/video transfer over the Internet using adaptive transmission technologies (Wallbaum & Meggers, 1999). With this technique, the transmission rate of the audiovisual information is adapted to the changing bandwidth of an IP-based network using rate adaptation techniques such as adaptive nodes (Meggers, 1999). Both tools use the Real-time Transport Protocol (RTP) (Schulzrinne et al., 1996) for exchanging the audiovisual information, either using a point-to-point connection to a peer station or based on the IP multicast capability (Eriksson, 1994).
For realizing the desired shared application functionality, an application sharing tool conforming to the ITU T.120 standard (ITU, 1998) is used. For that, a parallel data conference is established using the management functionality provided by this ITU protocol suite. The shared application data as well as the conference management information for the data conference part is transmitted using the multipoint protocol defined in the protocol suite (T.122, see ITU, 1998). It is worth mentioning that this protocol maps the multipoint communication of the shared application protocol onto point-to-point transfer independent of the underlying network, even if a multicast-capable network is available. Hence, it can be seen that the chosen implementation realizes a mixture of multicast transfer using IP multicast tools and point-to-point transfer by integrating ITU-based commercial products.
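To make the RTP layer of Figure 3 concrete, the sketch below packs the 12-byte fixed RTP header defined in RFC 1889 (Schulzrinne et al., 1996), which precedes every media packet the audio/video tools send. The helper name is ours, and CSRC lists and header extensions are omitted.

```python
import struct

def rtp_header(seq, timestamp, ssrc, payload_type, marker=False):
    """Pack the 12-byte fixed RTP header (RFC 1889).

    First byte: version=2, no padding, no extension, zero CSRCs.
    Second byte: marker bit plus 7-bit payload type.
    Then 16-bit sequence number, 32-bit timestamp, 32-bit SSRC,
    all in network byte order.
    """
    byte0 = 2 << 6                                  # V=2, P=0, X=0, CC=0
    byte1 = (int(marker) << 7) | (payload_type & 0x7F)
    return struct.pack("!BBHII", byte0, byte1, seq & 0xFFFF,
                       timestamp & 0xFFFFFFFF, ssrc & 0xFFFFFFFF)
```

The sequence number and timestamp let receivers reorder packets and reconstruct timing, which is what makes RTP suitable for both point-to-point and multicast delivery.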
Current Problems

The main motivation for the current realization of the IMPROVE workspace is the integration of existing software following the a-posteriori approach. This minimizes the in-house development of the workspace to the integration of these tools into the common interface. However, this approach has several drawbacks and problems with respect to the specific requirements of the considered application scenario, i.e., enabling shared distributed engineering. These problems are presented in the following section, before outlining a proposal for a revised architecture to be realized in the future.

Drawbacks and Problems of the Current Workspace

Regarding the derived requirements for the considered case, several drawbacks and problems have been encountered when using the current workspace.

Lack of Common Conference Management

The current workspace does not provide sophisticated conference management facilities. Basic conference management features are supported only for the data conferencing part. A common conference database and appropriate invitation and initiation mechanisms are not provided for the entire workspace. This directly affects the integration with calendar and scheduling tools, and therefore the integration with other sub-processes within the engineering process. Moreover, reconfiguration functionality, such as merging or splitting conferences, is not supported. Furthermore, management for closed user groups, such as authorization and user management mechanisms, is not covered by the chosen architecture. Hence, the current architecture does not provide rich functionality to establish ad-hoc or even scheduled closed-group meetings.

Lack of Floor Control

Floor control mechanisms (Dommel & Garcia-Luna-Aceves, 1995) are currently not supported, i.e., role-based communication, such as conducted conferencing, is not feasible. Furthermore, access control to distributed resources, such as files, is not supported.
For the data conferencing part, the access control for the shared application is realized by a simple centralized token mechanism which has proved to scale poorly (Trossen, 2000b) with respect to the response time for a token request.
Performance of Shared Application

The performance of the shared application functionality provided by the ITU-conformant commercial product is poor, for two main reasons. Firstly, the mapping of multipoint communication onto point-to-point transfer by the multipoint transfer part of the T.120 protocol suite (T.122) leads to a bottleneck in transfer performance, even in multicast-capable environments; currently, no implementation based on pure multicast transfer capabilities is available. Secondly, the implementation paradigm of the used application sharing protocol follows the shared GUI approach (Trossen, 2001). This means that a local application is shared by transferring every local output change to the set of users and sending input events from the users back to the hosting end system for control purposes. This approach causes significant overhead at the hosting end system due to the interception delay (Trossen, 2001) necessary for capturing the desired GUI output data. Furthermore, transferring this graphical output leads to a constantly high network load.
This performance drawback becomes even worse in our case, since the chemical developer teams often work with graphics-intensive visualization tools, which cause a high workload on the local machine due to the rendering operations. Sharing this large amount of graphical output adds further workload on the hosting machine and a large amount of data to be transferred over the network. Hence, these tools cannot be shared effectively.
In summary, the lack of conference management and mediation support, i.e., floor control functionality, indicates shortcomings of the underlying conferencing system in general, while the lack of performance specifically affects the considered case due to the necessity of using off-the-shelf software in the engineering process. From a user's perspective, the outlined problems lead to a certain reluctance to use the provided systems, since the user's perception of the workspace functionality suffers from the described problems. In the following, a revised architecture is presented, addressing these problems by introducing new or revised components into the current solution.

Revised Architecture

The encountered drawbacks and problems described in the previous section indicate a lack of core functionality required for the considered case. This could be solved partially by additional hardware, e.g., for mixing purposes, which is not desired due to the enormous costs for these components. Moreover, some of the missing functionality is not provided even in a fully equipped environment, due to shortcomings of the standard as such (Trossen, 2000b). Hence, a revised architecture is proposed, introducing a common middleware layer for the provision of a common conference control infrastructure while leaving the component architecture of Figure 2 intact. With this middleware layer, the missing conference management and floor control are introduced into the shared workspace. For that, the Scalable Conference Control Service (SCCS), as proposed in Trossen (2000a) (see also Trossen, 2000b), is used to realize generic conference control functionality. The generic functionality of the service allows for future developments in the area of conference course control (Bormann et al., 2001) as well as the usage of commercial tools, if desired.
The following functionality is provided by SCCS:
• Conference management features: Membership control, reconfiguration of conferences during runtime, and invitation of users are provided by SCCS. Moreover, a conference database containing information like names, locations, or phone numbers is supported, which can be accessed by each participant. Conducted conferences can be implemented using privileged
functions for expelling users or denying join requests. Moreover, conferences can be set up based on external events, which allows for the integration of calendar and scheduling means, thereby improving the integration into the development process.
• Multipoint transfer abstraction: A channel addressing scheme is used to abstract from the specific transport layer interface. The protocol to be used is specified during creation of the channel. Since the abstraction is independent of the specific protocol, multicast-capable transport protocols can easily be supported.
• Floor control: This service is to be used for the implementation of application state synchronization or access control in distributed environments. Thus, interaction rules for specific conference scenarios can be realized.
The current implementation of SCCS uses a self-developed routing scheme to scale the service to a larger number of users without degrading the response time of floor control requests (Trossen, 2000a). At the protocol level, SCCS establishes a conference control tree topology interconnecting the different conference participants for control purposes. In addition, a multicast transport protocol is used to improve the performance and scalability of the conference database management. It has to be emphasized that user data such as audiovisual information is not transferred over the control topology; it is carried by the specific transport protocols, which are only abstracted by SCCS. In Figure 4, the resulting protocol stack of the revised architecture is depicted. SCCS resides on top of TCP and MTP-SO (Multicast Transport Protocol – Self-Organizing, see Bormann et al., 1997), which are used for transferring the protocol's control data: the former for transfer within the control tree topology, the latter for distributing the conference database content.
The functionality of the simple starter tool in the current architecture is extended by integrating conference management functionality, like conference database access, inviting users, or merging conferences, leading to a common management tool. The audio/video tools are built on top of SCCS, using its services for management, transfer abstraction, and floor control. Different schemes are to be implemented for controlling access to the audiovisual stream by using the floor control facilities of SCCS. For that, the core of the tools is subject to only minor changes, since only the addressing has to be revised. The additional functionality for access control, i.e., the floor control support, has to be added to the tools from scratch.
The second change to the current IMPROVE architecture addresses the performance drawback of the shared application part. For that, a new application sharing module is proposed based on the event sharing paradigm (Trossen, 2001) instead of using the T.120-based solution. With this paradigm, graphics-intensive applications are shared across host boundaries by running the applications locally and sharing only the generated input events. Despite the restrictions of this paradigm, which are shown in Trossen (2001), this approach is very promising for our case since the
[Figure 4: Future Workspace Protocol Stack. A management tool launches the audio, video, and application sharing tools, which run on top of SCCS and RTP/RTCP; SCCS uses TCP and MTP-SO, RTP/RTCP uses UDP, all on top of IP/IP multicast.]
engineering teams are mostly working on a common data basis so that the data inconsistency problem can easily be avoided.
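The bandwidth argument for event sharing over GUI sharing can be made concrete with a back-of-envelope comparison: an input event fits in a few bytes, while a shared-GUI system must ship the redrawn pixels of each output change. The event encoding below is purely illustrative and is not the actual wire format of either paradigm.

```python
import struct

def encode_event(device, action, x, y):
    """Serialize one input event (event sharing): a few bytes per user action."""
    kinds = {"mouse": 0, "key": 1}
    acts = {"press": 0, "release": 1, "move": 2}
    return struct.pack("!BBHH", kinds[device], acts[action], x, y)

def gui_update_bytes(width, height, bytes_per_pixel=3):
    """Bytes a shared-GUI system must ship for one redrawn screen region."""
    return width * height * bytes_per_pixel

# One 6-byte click event vs. ~270 KB of raw pixels for a 300x300 redraw:
event = encode_event("mouse", "press", 640, 400)
ratio = gui_update_bytes(300, 300) // len(event)
```

The orders-of-magnitude gap is why event sharing suits graphics-intensive tools, provided all sites start from the same data basis so that replaying events keeps the local copies consistent.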
FURTHER READING

A detailed overview of the goals, approaches, and structure of the entire IMPROVE research project and its subprojects can be found in Nagl & Marquardt (1997) and Nagl & Westfechtel (1999). In Meggers (1999) and Wallbaum & Meggers (1999), an overview and a more technical description of adaptive networking techniques are given, including a presentation of the audio and video tools that are used in the current architecture of IMPROVE. In Ingvaldsen et al. (1998), studies concerning the importance of delay factors in collaborative computing can be found. An overview of the generic conferencing service SCCS is presented in Trossen (2000a), while a detailed technical description of the service and its implementation can be found in Trossen (2000b); there, the chosen implementation architecture as well as activity diagrams, protocol sequence charts, and modeling means for evaluation are depicted. Other examples of collaborative environments are presented in Altenhofen et al. (1993), Chen et al. (1992), and Gutekunst & Plattner (1993), while Schooler (1996) and Lubich (1995) give a more general introduction to the area of collaborative computing. The notion of a floor to realize access control in distributed environments is presented in Walker (1982) and Dommel & Garcia-Luna-Aceves (1995).
CONCLUSIONS

This case study presented the architectural findings of the IMPROVE project, based on requirements for collaborative engineering in chemical development processes. For that, the current as well as a proposed revised architecture were presented, trying to meet the defined requirements. Four issues can be pinpointed to summarize the major challenges encountered in the current phase of the project. Firstly, currently available solutions for conferencing usually require additional costly hardware like centralized mixing units. Secondly, the integration of legacy data and applications into a common workspace is still a research topic and is only poorly supported in today's systems. The considered application case aggravates the situation due to its variety of off-the-shelf software that has to be shared among the development groups. Thirdly, the integration of access control and social rules into the conferencing scenario, e.g., to provide more sophisticated conducted meetings, is still poorly supported by available commercial systems. Fourthly, current systems lack efficiency concerning their bandwidth usage due to inefficient use of the multicast capabilities of the underlying network. This is especially important in the considered case, which usually connects a small number of local groups, each located in a multicast-capable local area network.
While the first issue was ruled out by our requirements, the proposed revised architecture, to be realized in our future work within IMPROVE, specifically addresses the three latter issues. It introduces a middleware layer which allows individual components to be replaced by more sophisticated mechanisms as the area develops. Additionally, a paradigm shift was proposed for the problem of sharing off-the-shelf software by using the event sharing paradigm, which is specifically suited to the usually graphics-intensive chemical development process.
As a concluding remark, it can be stated that cooperative engineering is still a challenging task when it comes to its realization in real life. Lack of efficiency, poor integration of legacy software, and the missing provision of sophisticated floor control reduce users' acceptance and make the introduction of information technology for collaborative engineering difficult.
REFERENCES

Altenhofen, M. et al. (1993). The BERKOM multimedia collaboration service. Proceedings of the 1st ACM Conference on Multimedia, 457-463.
Bormann, C., Kutscher, D., Ott, J. & Trossen, D. (2001). Simple Conferencing Control Protocol – Service Specification. Internet Draft, Work in Progress. Available: http://globecom.net/ietf/draft/draft-bormann-mtp-so-01.html
Bormann, C., Ott, J. & Seifert, N. (1997). MTP/SO: Self-Organizing Multicast. Internet Draft, Work in Progress. Available: ftp://ftp.ietf.org/internet-drafts/draft-ietf-mtp-so-01.txt
Chen, M., Barzilai, T. & Vin, H. M. (1992). Software architecture of DiCE: A distributed collaboration environment. Proceedings of the 4th IEEE ComSoc International Workshop on Multimedia Communication, 172-185.
Dommel, H.-P. & Garcia-Luna-Aceves, J. J. (1995). Floor control for activity coordination in networked multimedia applications. Proceedings of the Asian-Pacific Conference on Communications.
Eriksson, H. (1994). MBONE: The multicast backbone. Communications of the ACM, 37(8), 54-60.
Gutekunst, T. & Plattner, B. (1993). Sharing multimedia applications among heterogeneous workstations. Proceedings of the 2nd European Conference on Broadband Islands, 103-114.
Ingvaldsen, T., Klovning, E. & Wilkins, M. (1998). A study of delay factors in CSCW applications and their importance. Proceedings of the 5th International Workshop on Interactive Multimedia Systems and Telecommunication Services, 159-170.
ITU-T (1996). Visual Telephone Systems and Equipment for Local Area Networks Which Provide a Non-Guaranteed Quality of Service. ITU-T Recommendation H.323.
ITU-T (1998). Data Protocols for Multimedia Conferencing. ITU-T Recommendation T.120.
Lohmann, B. & Marquardt, W. (1996). On the systematization of the process of model development. Computers & Chemical Engineering, 20, 213-218.
Lubich, H. P. (1995). Towards a CSCW Framework for Scientific Cooperation in Europe. Lecture Notes in Computer Science 889. Berlin: Springer.
Meggers, J. (1999). Adaptive admission control and scheduling for wireless packet communication. Proceedings of the IEEE International Conference on Networks.
Nagl, M. & Marquardt, W. (1997). SFB 476 IMPROVE: Informatische Unterstützung übergreifender Entwicklungsprozesse in der Verfahrenstechnik [IT support for overarching development processes in process engineering]. In Jarke, M., Pasedach, K. & Pohl, K. (Eds.), Informatik 97: Informatik als Innovationsmotor (pp. 143-154). Berlin: Springer-Verlag.
Nagl, M. & Westfechtel, B. (Eds.) (1999). Integration von Entwicklungssystemen in Ingenieuranwendungen [Integration of Development Systems in Engineering Applications]. Berlin: Springer-Verlag.
Schooler, E. M. (1996). Conferencing and collaborative computing. ACM Multimedia Systems, 4(5), 210-225.
Schulzrinne, H., Casner, S., Frederick, R. & Jacobson, V. (1996). RTP: A Transport Protocol for Real-Time Applications. IETF Request for Comments 1889.
Trivedi, M. M. & Rao, B. D. (1999). Camera networks and microphone arrays for video conferencing applications. Proceedings of the SPIE International Symposium on Voice, Video & Data Communications, 384-390.
Trossen, D. (2000a). Scalable conferencing support for tightly-coupled environments: Services, mechanisms, and implementation design. Proceedings of the IEEE International Conference on Communications, 889-893.
Trossen, D. (2000b). Scalable Group Communication in Tightly Coupled Environments. Dissertation, University of Technology Aachen, Germany.
Trossen, D. (2001). Application sharing technology: Sharing the application or its GUI? Proceedings of the Information Resources Management Association Conference, 657-661.
Walker, M. B. (1982). Smooth transitions in conversational turn-taking: Implications for theory. Journal of Psychology, 110(1), 31-37.
Wallbaum, M. & Meggers, J. (1999). Voice/data integration in wireless communication networks. Proceedings of the 50th IEEE Vehicular Technology Conference.
130 Trossen, Schüppen & Wallbaum
BIOGRAPHICAL SKETCHES
Dirk Trossen has been a researcher with Nokia Research since July 2000. In 1996, he graduated with an M.Sc. in mathematics from the University of Technology in Aachen, Germany. Until June 2000, he was a researcher at the same university, where he obtained his Ph.D. in computer science in the area of scalable group communication implementation and modeling. His research interests include evolving IP-based services and protocols for the future Internet, group communication architectures, and the simulation and modeling of distributed systems.
André Schüppen received his Diploma degree in computer science from the Aachen University of Technology (RWTH) in 1996. In the same year, he joined ELSA AG as an engineer for analog modems. Since late 1999, he has been a researcher at the Computer Science Department of the RWTH. His research interests include mobile communication and real-time multimedia technology. He is currently involved in the collaborative research center project 476 IMPROVE, sponsored by the German Research Foundation.
Michael Wallbaum received his Diploma in computer science from the Aachen University of Technology in 1998. He wrote his thesis at the University of Cape Town, South Africa, on the topic of security in communication systems. Since 1998, he has been a researcher and Ph.D. candidate at the Department of Computer Science in Aachen. He has participated in several European research projects and is currently involved in the European IST project ParcelCall. His research interests include mobile communications, multimedia, Internet telephony and active network technology.
IT in Improvement of Public Administration
Jerzy Kisielnicki, Warsaw University, Poland
EXECUTIVE SUMMARY
Bialystok City Hall is an organ of public administration. The city of Bialystok has 280,000 inhabitants. As a result of the political transformation in Poland, the new authorities inherited a bureaucratic and inefficient management system as well as outdated IT. In the electoral programme for 2000-2004, the following objectives were set for the City Hall: to significantly improve the quality of operations and, in particular, to reduce the time needed to handle citizens' affairs; to provide comprehensive and professional customer service; and to improve the management of assets. In order to improve the City Hall management system, reengineering and TQM rules were applied. The new management system is based on new IT solutions, including an extranet network and an integrated database. As a consequence of those changes, some significant results have been achieved, e.g., an improvement in the quality of customer service and the possibility of monitoring the City Hall's operational procedures. The vital result, however, was a reduction of decision-making time by an average of 30% and a reduction of the time needed to handle routine affairs by an average of 25%.
BACKGROUND INFORMATION ON THE PROBLEM
The case concerns the role of IT and its application in improving the quality of operations of the Bialystok City Hall, which serves one of the biggest cities in Poland and the capital of the Podlasie region. It is based on experience gained during the development of an IT system (MIS) for public administration purposes. The basic objectives of the presented case, besides training purposes, are:
• To prove that the improvement of the public administration management system can be achieved only through IT.
• To show that the application of IT allows advanced organisational methods such as reengineering and TQM to be used to improve the management process.
Most of the existing analyses relate to reengineering and TQM applications in business organisations. Here, our objective is to prove that they can be successfully applied to improve public administration operations. Within the Polish public administration, there is a three-level system of management: a voivodship level (Poland is divided into 16 voivodships), a county level and a gmina level. Bialystok
is the capital of Podlaskie voivodship, located in the Northeast part of the country, with about 280 thousand inhabitants. The Bialystok City Hall is in charge of, among other things, public finances, public health care, public security, as well as public education and transport. The organisational structure of the City Hall before the transformation is presented in Appendix 1. In 2001 (according to the plan), the Bialystok City Hall will have at its disposal a revenue of 488,676 thousand zloty, while the projected expenses amount to 543,051 thousand zloty (1 USD = 4.02 PLN, according to the National Bank of Poland exchange rate of April 18, 2001). The analysis of the Bialystok City Hall management system conducted in 1998 exposed the following: the IT system in use was very much outdated, there were numerous gaps to be filled, and the existing IT resources were not being used appropriately. At the time of the analysis, all the data was gathered traditionally on paper or on independent computers not connected into a network. This situation complicated the City Hall's operations considerably. IT in the form of a PC had been used only as a tool to write letters and regulations and to access very simple databases. In consequence, there was no integrated IT system serving the Bialystok City Hall. The analysis therefore concluded that such an integrated IT system was vital for ensuring an efficient flow of data and documents between the City Hall's organisational units and of utmost importance for overall citizen (customer) service. Nor had data been unified across the diversified information-protection environment.
The analysis of the City Hall organisational system showed enormous diversity in the management system as such: 12 people or organisational units reported directly to the City President, while only two or three people reported directly to some members of the City Board. (The literature on the subject recommends five to seven people or units as an optimum for those managerial levels.) The city inhabitants were deeply dissatisfied with the City Hall's work. Their dissatisfaction was documented by:
• numerous complaints about the length of time taken to handle various affairs;
• long queues in front of individual desks;
• critical articles in the local press on the City Hall's work as well as on individual departments and the people responsible for an efficient working system;
• the fear of the party coalition in power as to the results of the coming elections (the coalition had contested the previous elections under a banner promising to improve the existing management system in the city).
SETTING THE STAGE
In 1998, the newly appointed local authorities, in order to improve the Bialystok City Hall operations, began work on changing the existing management system. The statement made by one of the party leaders, "If we do not improve the City Hall operations, we may not survive until the next elections," best illustrates the importance of the problem. On the basis of an analysis of users' needs, which covered the City Hall authorities, clerks and Bialystok inhabitants, it was concluded that a new management system should be based on the options provided by IT and should meet the following criteria:
• improvement of the City Hall organisational structure and management methods around an integrated IT system for the entire City Hall, with a clearly defined hierarchy and links between all the organisational units;
• efficient flow of information in the City Hall within the newly defined organisational structure;
• diversified quality and safety of servicing the institutions in which the IT system is being installed;
• easy adaptation and extension of the IT system's service functions to meet increasing needs and requirements;
• fulfilment of open-system requirements (the X/Open standard), which guarantee compatibility of the existing and future hardware and software.
In order to improve the management system and to develop a new IT system, the following methodology was applied:
1. A reengineering approach supported by TQM methods. This approach recommends sudden and significant changes. In order to introduce those changes, the management system was analysed in terms of the following criteria:
• an increase of cooperation between individual organisational units of the City Hall;
• a reduction of intermediate stages in the task realisation process, i.e., maximum elimination of indirect links;
• an integration of those organisational units which perform similar functions.
Thus, a typically process-oriented approach was applied in this case, focused on improving the management process. In order to significantly improve the quality of citizen service, the reengineering method was supplemented with the rules applied in the TQM method. The application of TQM methods follows from the objective of ensuring that the inhabitants receive well-justified decisions; it aimed at reducing the number of appeals.
2. Integration of computer systems with IT methods. Before the choice was made, several variants of IT solutions were considered. The basic variants were as follows:
• Improvement of the existing computer system, i.e., extension and modernisation of the existing PCs and linking of local databases through a Local Area Network.
• Construction of an extranet-type network connected to the Internet, winding up the local databases in favour of one major database.
The analysis of the costs and results of the individual variants was extremely difficult, mainly because of problems with estimating the results of IT applications in public administration. It was also difficult because the existing regulations on cost registration are still not adjusted to the requirements of management accounting.
On the basis of the existing and available data and estimates of both costs and results, it was concluded that the first solution requires about 60% less investment than the second one. However, the SWOT analysis conducted showed clearly that the second solution offered more prospects for the future and could ensure a more feasible realisation of the electoral postulates. This approach was supported by the recommendation to create, subject to available financial resources, data warehouses. A data warehouse is treated as a complete repository of data created on the basis of the transaction systems already existing in the City Hall and on the basis of outside IT systems such as banks, the statistical office and public records. The choice of solutions that make it possible to benefit from the data warehouse is justified by the fact that it ensures immediate access to the information required by the user.
3. Creation of Function Centres within the City Hall organisational structure. This approach is similar to the methods applied in business organisations, where Profit Centres have been created. Mintzberg, who writes about the creation of so-called Hubs, recommends a similar approach.
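As a purely illustrative aside (the case itself contains no code), the data warehouse described above, a single repository fed by the City Hall's transaction systems and by outside sources such as banks, the statistical office and public records, could be sketched as a minimal consolidation step. All source names and records below are hypothetical:

```python
# Hypothetical sketch: consolidating City Hall transaction systems and
# external feeds into one repository, as the case's data warehouse is
# meant to do. Source names and records are invented for illustration.

def load_warehouse(sources):
    """Merge records from every source into one list, tagging each record
    with its origin so users get immediate access to unified data."""
    warehouse = []
    for source_name, records in sources.items():
        for record in records:
            warehouse.append({**record, "source": source_name})
    return warehouse

sources = {
    "city_hall_finance": [{"citizen_id": 101, "tax_due": 250}],
    "statistical_office": [{"citizen_id": 101, "district": "Centrum"}],
    "public_records": [{"citizen_id": 101, "car_registered": True}],
}

warehouse = load_warehouse(sources)
print(len(warehouse))  # 3 records, each tagged with its source
```

The point of the sketch is only the design choice the case argues for: one logically unified store, rather than separate, unconnected local databases.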
CASE DESCRIPTION
The Procedure of Introducing Changes (Basic Phases)
The changes aimed at improving the Bialystok City Hall management system were developed in the following phases:
1. Defining the problem
2. Analysing the information needs of the City Hall
3. A project of the new management system for the Bialystok City Hall
4. A project of the IT system to support the improved management system
5. Implementing changes and change evaluation
Description of the Individual Phases of the Procedure
The problem of improving the Bialystok City Hall management system presented in "Setting the Stage" was defined on the basis of cooperation between the project designers and the City Hall employees. The analysis of the City Hall's IT needs was an iterative process based on the following sets of documents:
1. Organisational documentation prepared by the Bialystok City Hall employees. The documentation covered, among other things, an analysis of the character of acts and resolutions and also the City Hall regulations.
2. Reports prepared by individual organisational units of the City Hall on their links with the other units and on the citizen service system. The objective of the prepared materials was to identify and evaluate the level and strength of the connections between the City Hall's individual organisational units. These materials served as the basis of the developed data flow diagrams (DFDs).
The written materials were supplemented by interviews and discussions with the City Hall managers and the representatives of the individual City Hall units. The analysis of needs, as previously stated, was conducted in an iterative way. In the first stage, a working hypothesis on information needs was developed and project tasks were determined on the basis of source materials delivered by the City Hall. Then, a number of interviews and discussions with the appropriate representatives of the organisational units were conducted. The data required to answer two vital questions was obtained:
• What type of information do you pass to other City Hall units?
• What type of information do you need to obtain from other City Hall organisational units in order to operate properly?
On the basis of this data, taking into consideration all the materials gathered previously, the appropriate conceptual models of the individual operating procedures of the City Hall were developed.
The Project of the New Management System for the Białystok City Hall
The conducted analysis served as the basis for suggesting a new organisational structure of the City Hall based on functional centres. On the basis of the conducted analysis and the applied methodology, the project of an improved City Hall organisation takes the following shape:
• Centre for Securing the City Hall Operations
• Centre for Finance and Administration
• Centre for Social Affairs
• Centre for Social Infrastructure
• Centre for Technical Infrastructure and its Management
• Security Centre
Two other centres are proposed to be created in the future (2002-2004):
• Information Centre
• Centre for the City Development Strategy
I would like to stress that social problems are the main concern of two centres, namely the Centre for Social Affairs and the Centre for Social Infrastructure. This is a direct result of the fact that social issues are treated very seriously in Białystok and also that the handling of current social problems is not connected to the issue of managing the resources allocated to this field. The list of organisational units within the new organisational structure is presented in Appendix 2. Appendix 3 presents the mutual connections of the Centres. The new organisational structure, approved by the appropriate City authorities, has the following advantages:
• Concentration of units by the similarity of the functions they perform allows the appropriate organisations to be closely linked. In consequence, it allowed the development of an overall policy of the City Hall authorities and also ensured much quicker and more efficient citizen and organisation services.
• Even allocation of tasks, which ensures more effective monitoring of the citizen service system than before.
Within the applied methodology, changes are to be introduced constantly and the improvement of the management system is to be a constant objective. Thus, recommendations for the future directions (for the period 2002-2004) of the Białystok City Hall organisational improvement have now been determined. These include:
• In order to meet the present and future City Hall information needs, it would be advisable to create a special organisational unit in charge of the overall introduction of information technology in the Bialystok City Hall. I would suggest creating, within the Centre for Securing the City Hall Operations, a Department of City Hall Information Services, which should later transform into the City Hall Information Centre, in charge of the overall information flow within the City Hall, between the City Hall and the citizens, and between the City Hall and outside units, including the Council and gmina organisations. Improving the effectiveness of the local authorities will require much stronger assistance than that provided at present.
• I also suggest creating a Centre for the City Development Strategy in the future. This suggestion results from the need to separate operational and tactical management issues from strategic issues. The Centre will focus on the future model of the City of Białystok through the development of overall forecasts connected to such issues as public transport, education, health care, etc.
The new organisational structure is a very modest one.
This is to be considered an advantage, as no additional organisational units are being created except for those absolutely necessary to fulfil the tasks undertaken by the City Hall.
A Project of the IT System to Support the Improved Management System
On the basis of the new organisation, the suggested IT solutions will support the new management system. The developed information system takes into account the new functional division of organisational units. It is based on the functional modularity of the system: each of the seven centres has been allocated an information system module marked with the same number. Such a module forms a unified group of functions supported by computer processing and electronic data interchange (EDI). The life span of the newly created IT system, due to the fast ageing of IT, is about eight years. The basic assumption in creating the system was the integration of data at the logical level, which ensures access to unified data in all the applications used. As a result, the requirement of a common hardware and software platform had to be fulfilled, even in an environment of diversified data safety. The system also allows a certain leeway for modification and further development in line with new needs and requirements arising during the use of the IT system. The basic development tool for analysing the data flows in the system at the level of the introduced centres are Data Flow Diagrams (DFDs), developed in the style of upper-CASE IT tools. These diagrams, presented in Appendix 4, made it possible to create a unified IT system.
Implementing Changes and Change Evaluation
The improvement of the existing City Hall management system, along with the supporting IT system, has been implemented in phases. At present, the basic IT modules have been implemented. These modules service the individual centres, which are linked together through a MAN-type computer network called BIMAN.
This network is connected to the Internet. There are extranet-type networks operating in the City Hall. The Steering Committee, headed by the City President’s Attorney, is monitoring the implementation and development of the IT system in the City Hall. It can be assumed
that the system, in its basic shape, has already been implemented and has been operating since mid-2000. It is currently being developed and modified in accordance with the reengineering rules. The vital rule followed within the implemented IT system was the requirement of a common hardware and software platform, where the software project, adjusted to the proposed organisational structure and data flow, must precede the computer hardware decisions regarding servers, workstations, structural wiring, etc.
Problems Facing the Organisation: Remarks on Project Realisation, Introducing Organisational Changes Within the City Hall, and Development of the IT System
The realisation of the project required the close co-operation of many project teams. It is very difficult to determine the return on investment (ROI) period. In public administration organisations, the most important results are those visible on the outside, i.e., the shortening of customer service time. Those results were estimated on the basis of specially designed questionnaires. There are no such categories as, for example, profit or share value in the organisation under analysis; those categories exist only in business organisations. The investment outlays for IT will be compensated by shortened decision-making time, ease of monitoring activities and fast creation of work teams for complex problem solving, with a parallel lack of arguments over competence and authority.
After the first year of the system's utilisation, the City Hall management listed the following results as the most significant:
• shortening of the decision-making time regarding citizen issues, such as issuing a driving licence or a passport, probate matters, and permits to build houses (the estimated time for considering those issues was shortened by up to 30%);
• ease in monitoring individual employee and team activities, which resulted in a reduction of claims by 20% in comparison with the previous period;
• fast creation of work teams for complex problem solving, with a parallel reduction of significant arguments over competence and authority.
It is believed that the success of the project implementation also depended on:
• training of the IT system users, which ensured correct usage;
• the work of the Steering Committee, which headed the project and directly monitored the work in progress at the individual Centres and Departments (the role of the Steering Committee was played by the Computer Technology Department of the Bialystok City Hall).
British experts from the Cranfield School of Management estimated that, in the first half of the 1990s, more than 70% of the attempts to re-organise institutions through reengineering in Great Britain ended in failure. Why, then, has the project presented here been a success? I think it results from the fact that our project designers cooperated closely with the City Hall employees. However, we shall be able to talk about full success only when the IT project is fully implemented and tested, which will be no sooner than 2002-2004.
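As a purely illustrative closing aside (no code appears in the case), the one-centre-one-module allocation and the logically integrated database described in the IT system project could be sketched as follows. The function names and sample data are hypothetical:

```python
# Hypothetical sketch of the case's design: each functional centre is
# served by an IS module of the same number, and all modules read and
# write one logically integrated data store.

CENTRE_MODULES = {
    1: "Centre for Securing the City Hall Operations",
    2: "Centre for Finance and Administration",
    3: "Centre for Social Affairs",
    4: "Centre for Social Infrastructure",
    5: "Centre for Technical Infrastructure and its Management",
    6: "Security Centre",
    7: "Information Centre / Centre for the City Development Strategy (planned)",
}

SHARED_DB = {}  # stands in for the single integrated database

def record(module_no, key, value):
    """A module writes to the shared, logically integrated data store."""
    if module_no not in CENTRE_MODULES:
        raise ValueError(f"unknown module {module_no}")
    SHARED_DB[key] = (module_no, value)

def lookup(key):
    """Any module reads the same unified data, whoever wrote it."""
    return SHARED_DB.get(key)

record(2, "invoice/2001/17", "paid")
print(lookup("invoice/2001/17"))  # (2, 'paid')
```

The sketch only illustrates the integration-at-the-logical-level assumption: every module sees one consistent view of the data, instead of each centre keeping its own local database.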
APPENDIX I
Previous organisational structure of the City Hall:
The City President: City Council Office (functionally dependent: in the field of human resources it reports to the President, but substantively it reports to the President of the City Council), Spokesman, the Team of Legal Advisors, Department of Geodesy, Land Management and Agriculture, Municipal Inspectorate of Civil Defence.
I Vice President Department of Physical Education, Department of Health, Department of Culture, Registry Office.
II Vice President Department of Architecture, Spatial Management and Environment Protection, Department of Communal Management, Municipal Guard, Department of Computer Technology.
III Vice President Department of Social and Economic Policy, Department of Constructions and Investment, Department of Public Transport.
1st Member of the Board Department of Housing Policy, the Board of Communal Property
2nd Member of the Board City Board Attorney for Public Commission, Municipal Centre for Social Assistance, Daily Social Assistance House, Social Assistance House.
City Secretary Organisational Department, Administrative and Economic Department, Department for Citizen Affairs.
City Treasurer Finance Department, Department of Books and Accounts.
APPENDIX II
Present organisational structure of the City Hall:
• Centre for Securing the City Hall Operations: City Council Office, Organisational Department, City Hall Information Department (after transformation of the Department of Computer Technology), Team of Legal Advisors, Spokesman.
• Financial and Administrative Centre: Department of Finance, Department of Books and Accounts, Administrative and Economic Department, City Board Attorney for Public Commission.
• Centre for Social Affairs: Department of Citizen Affairs, Department of Social and Economic Policy, Registry Office.
• Centre for Social Infrastructure: Department of Culture, Department of Physical Education, Department of Health, to which the following would report: Municipal Centre for Social Assistance, Daily Social Assistance House, Social Assistance House.
• Centre for Technical Infrastructure and its Management: Department of Architecture, Spatial Management and Environment Protection, Department of Geodesy, Land Management and Agriculture, Department of Constructions and Investment, the Board of Communal Property, Department of Public Transport, Department of Housing Policy.
• Security Centre: Municipal Inspectorate of Civil Defence, Municipal Guard.
APPENDIX III
Information Exchange Based on Functional Centres at Municipal Offices
[Figure: the Integrated Information System of the City Hall, accessed via an intranet and linked to the city's co-operating infrastructure, connects seven modules: Module 1, Centre for Securing the City Hall Operations; Module 2, Centre for Finance; Module 3, Centre for Social Affairs; Module 4, Security Centre; Module 5, Centre for Social Infrastructure; Module 6, Centre for Technical Infrastructure; and Module 7, the planned Centres for Information and for City Strategic Development.]
APPENDIX IV
Basic Data Flow Diagram at Functional Centres of the City Hall
[Figure: a data flow diagram in which clients exchange finance documents with the Functional Centres, the Executive assigns jobs to Centres 1-6, Functional Centre 7 provides expertise to the Board of the City, and the centres share a common data warehouse. The legend distinguishes the backbone data flow of the system, personal data, inter-centre data flows, data and job co-ordination, and the centres to be built in the second stage of system development.]
FURTHER READING
Andrews, D.C. (1995). Enterprise Reengineering. The Electronic College of Process Innovation, http://www.c3i.osd.mil/bpr/
Carr, D.K. & Johansson, H.J. (1995). Best Practices in Reengineering. New York, NY: McGraw-Hill.
Caudle, S.L. (1995). Reengineering for Results: Key to Success from Government Experience. Washington, D.C.: National Academy of Public Administration.
Davenport, T.H. (1993). Process Innovation: Reengineering Work through Information Technology. Boston, MA: Harvard Business School Press.
Hammer, M. & Champy, J. (1993). Reengineering the Corporation: A Manifesto for Business Revolution. New York, NY: HarperBusiness.
Laudon, K.C. & Laudon, J.P. (1999). Management Information Systems: Organization and Technology in the Network Enterprise. New Jersey: Prentice Hall.
Laudon, K.C. & Laudon, J.P. (2000-2001). Essentials of Management Information Systems. New Jersey: Prentice Hall.
ProSci study report (1999). Future Role of IT in Reengineering, http://www.prosci.com/IT99.htm
Senn, J.A. (1995). Information Technology in Business: Principles, Practices, and Opportunities. New Jersey: Prentice Hall.
Stair, R.M. (1992). Principles of Information Systems: A Managerial Approach. Boston: Boyd & Fraser Publishing Company.
REFERENCES
Grochowski, L. & Kisielnicki, J. (1999). Reengineering in upgrading of public administration: Modelling and design. International Journal of Services Technology and Management, 1(4), 331-339.
Hammer, M. & Champy, J. (1993). Reengineering the Corporation: A Manifesto for Business Revolution. New York: HarperBusiness.
Hammer, M. & Stanton, S.A. (1995). The Reengineering Revolution. New York: HarperBusiness.
Kisielnicki, J. (1999). Reengineering: Problems with theory and practical application. In BIS'99: 3rd International Conference on Business Information Systems, Poznan, Poland (pp. 191-202). Berlin: Springer-Verlag.
Kisielnicki, J. & Sroka, H. (1999). Systemy Informacyjne Biznesu (Business Information Systems). Warszawa: Placet.
Mintzberg, H. & Van der Heyden, L. (1999). Organigraphs: Drawing how companies really work. Harvard Business Review, Sept.-Oct., 87-94.
Yourdon, E. (1996). Współczesna analiza strukturalna (Modern Structured Analysis). Warszawa: WNT.
BIOGRAPHICAL SKETCH
Jerzy Kisielnicki is a Professor and the Head of the Department of Management Information Systems at the Faculty of Management of Warsaw University. He specialises in organisation and management and, in particular, in systems analysis, management information systems (IT), process innovation (re-engineering), strategic management, and the organisation and management of transition systems in a market economy. He is the author of numerous projects developed for the government and various companies. He is also a member of the Institute for Operations Research and the Management Sciences (TIMS-ORSA) and of IRMA (representative for Poland).
The Foreign Banks’ Influence in Information Technology Adoption in the Chinese Banking System Michelle W. L. Fong Victoria University, Australia
EXECUTIVE SUMMARY
Foreign direct investment has been a common conduit of technology transfer, enabling locally funded enterprises in the host country to adopt foreign technology. It can also be a powerful agent of technology adoption within a technologically backward host country. By contrast, foreign direct investment has not been a significant source of information technology transfer into the Chinese banking system, nor has it been an effective agent of technology adoption in this system. The priority the Chinese government places on protecting, and retaining control of, its domestic banks and financial market has kept foreign direct investment in the banking industry at a relatively modest level. The controlled industry, the long wait for full market competition, and the inadequate infrastructure and operating framework have inhibited the foreign banks from adopting highly sophisticated information technology for their restricted business operations and from serving as an effective conduit of technology transfer.
BACKGROUND
The Chinese Economy
The Chinese economy's GDP (Gross Domestic Product) has grown positively since the initiation of the economic reform program and the transition from a command to a market-based economy in 1979. The new direction undertaken by the Chinese government clearly propelled the growth of the economy between the pre-reform and reform periods, as shown in Graph 1. Real GDP growth between 1979 and 2000 (the reform period) averaged 9.25% annually, surpassing the average annual growth of 5.3% experienced between 1960 and 1978 (the pre-reform period). Although growth lost some of its vigor between 1992 and 1999, many economists and observers remained optimistic about the potential of this emerging market economy.
Copyright © 2002, Idea Group Publishing.
142 Fong
Graph 1: China's Real GDP Growth Rates: Pre-reform and Reform Periods and Annual Rates, 1990-2000 [graph omitted; y-axis: Real GDP Growth Rates (in percentage), 0-16; x-axis: Period/Year, showing 1960-1978 (pre-reform), 1979-1999 (reform) and the individual years 1990-2000]
Source: The World Bank, 1980-1999; China Statistical Yearbook, 1989-1999; National Bureau of Statistics People's Republic of China, 2001.
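As a back-of-the-envelope illustration (not from the source), the gap between the two average growth rates cited above compounds dramatically over the periods in question. The sketch below uses only the two reported averages; the period lengths are approximations.

```python
# Illustrative compounding of the average annual real GDP growth rates
# reported in the text: 9.25% for 1979-2000 (reform period) and
# 5.3% for 1960-1978 (pre-reform period). Period lengths are rough.

def compound_multiplier(rate: float, years: int) -> float:
    """Total growth multiplier implied by a constant annual rate."""
    return (1 + rate) ** years

reform = compound_multiplier(0.0925, 22)     # 1979-2000, about 22 years
pre_reform = compound_multiplier(0.053, 19)  # 1960-1978, about 19 years

print(f"Reform-period multiplier:     {reform:.1f}x")      # roughly 7x
print(f"Pre-reform-period multiplier: {pre_reform:.1f}x")  # roughly 2.7x
```

At these averages, real GDP roughly septuples over the reform period versus less than tripling over the longer pre-reform period, which is the contrast the text draws from Graph 1.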
The Chinese Banking System
Prior to 1979, financial flows in the Chinese socialist economy were largely governed by the predetermined central plan. Under this system, the state-owned banks were the most active and important financial agents in the economy. They provided the amount of money required to produce the predetermined output and supervised the utilization of funds in accordance with the requirements of the central plan. The banks had virtually no independent role in the creation of either money or credit from the funds deposited by households and state-owned enterprises. They merely acted as financial agents of the Ministry of Finance, and the inflow and outflow of money effectively belonged to the latter. The banking system at that time was a monobank system in which a single bank, the People's Bank of China (refer to Appendix 1 for a brief history of this bank), undertook the roles of both central and commercial banking. Compared with a capitalist system, the financial intermediary activity and role of the Chinese socialist banking system was very limited, noncompetitive, and deliberately simple and passive. Accordingly, there was little demand or incentive for banking technology applications.
The decision made by the Eleventh Central Committee of the Chinese Communist Party in 1978 to transform the socialist country into a market economy resulted in the implementation of the economic reform program. Since then, the program has been conducted on a gradual and experimental basis, with emphasis on opening economic sectors (to varying degrees) to market forces, trade and foreign investment. The Chinese government recognized that the support of a well-developed and active financial industry is one of the requisite conditions for the full operation of a market economy.
Hence, the financial sector became one of the initial sectors selected for reform and for eventual full foreign participation. The ultimate aim of the reform of this sector is to achieve a sound financial system that is capable of deploying scarce capital resources in the most efficient way that
supports economic growth. The admission of foreign banks is regarded as a key attraction not only for foreign capital but also for banking expertise to boost the growth of the fledgling market-based financial sector. As a result, several changes in the financial industry were targeted during the reform period, propelling the industry to play an active and pivotal role in the development of the economy. The banking sector has been the central focus of financial reform because of its relatively established standing as the active financial intermediary in the economy since the pre-reform period. The reform of the banking sector has resulted in the People's Bank of China becoming the country's official central bank and in the abolition of restrictions on competition among the state-owned banks. Four Chinese state-owned banks dominated the banking sector during the pre-reform period and the reform period of the 1990s: the Industrial and Commercial Bank of China, the Bank of China, the Agricultural Bank of China and the People's Construction Bank of China (refer to Appendix 1 for a brief history of these banks). The high market share of these four state-owned banks is a legacy of each bank's past monopoly position in the pre-reform period as the exclusive banking unit for a specific market segment. These four banks executed the credit allocation plans of the economy during the pre-reform period and have continued to play a primary role in the provision of financial intermediary services during the reform period. Their high market share is also due to the fact that these banks have been relatively effective direct channels for the government in managing, controlling and regulating the economy since 1979. However, it is expected that, as the reform of the financial system develops, the dominant market position of these banks will be eroded in the long run by the entry and active participation of other financial units.
Table 1 depicts a diminishing financial intermediary role played by the four state-owned banks over time. However, in the interim, the state-owned banks are expected to remain the core financial units at the transitional stage of the country’s movement towards a market economy. In addition to the above-mentioned state-owned banks, there are three state policy banks1 (State Development Bank, Export and Import Bank of China and Agricultural Development Bank of China); three state-held banks (Communications Bank of China, China Everbright Bank and CITIC Industrial Bank); three public-held banks (China Merchants Bank, Huaxia Bank and China Minsheng Banking Corporation); and over 80 city commercial banks, 3,200 urban credit cooperatives and 41,500 rural credit cooperatives in the country.
Table 1: The Extent of the Financial Intermediary Role Played by the Four State-Owned Banks in China

Year   Loan   Deposit
1985   93%    93%
1986   93%    93%
1987   90%    86%
1988   89%    81%
1989   89%    77%
1990   88%    73%
1991   87%    88%
1992   86%    89%
1993   79%    69%
1994   67%    68%
1995   63%    63%
1998   71%    62%

Source: Almanac of China's Finance and Banking, 1991, 1994-1996; Mo, 1999
The Foreign Banking Sector in China
Below is a brief history of the foreign banking sector in China, covering the origins of the foreign banking presence in the country prior to 1949; the demise of the foreign banks during the Communist regime; and the return of the foreign banks to the newly emerged market economy in 1984, after the implementation of the open door policy.
1) Prior to 1949
Foreign financial institutions were first located at treaty ports and in Beijing in the 1840s, after the Opium War. These institutions constituted a powerful influence on the direction of Chinese financial industry development prior to 1949, especially during the reign of the Manchu government. For much of the period, the operation of the Chinese financial industry was under the control of the foreign financial institutions, which even had the power to overturn rules issued by the Chinese authorities (People's Bank of China Education Editorial Committee, 1985). This extensive foreign power largely stemmed from the fact that the weak Chinese government allowed the operation of foreign financial institutions to be governed by the laws of their respective home countries rather than by those of the host country. The vulnerability of the government was also reflected in the operations of joint venture banks in which the government had a capital share, for example the Russo-Asiatic Bank, Banque Industrielle de Chine and the Chinese American Bank of Commerce. The internal organization of these joint venture banks was completely in the hands of the foreign shareholder, irrespective of the shareholding configuration. The public accorded less confidence to these banks in their financial activities than to the independent and fully funded foreign banks (Lee, 1982). In effect, the fully funded foreign banking sector had a monopoly role in the economy's trade with foreign countries.
Prior to 1949, the foreign banks remained powerful financial agents despite the turbulent political events in the country. For example, after the Sino-Japanese war of 1894-1895, when many of the foreign banks withdrew their businesses from the country, the remaining 14 foreign banks still constituted a powerful force in the country's financial system. The power of the foreign banks remained strong even during the rule of the KMT government and amidst attempts to strengthen the local Chinese banking sector. This was evidenced by the share of total assets held by foreign banks within the industry and by their influence on the monetary condition of the economy. In October 1947, the 13 foreign banks located in the active financial market in Shanghai held 26.2% of the total assets in Shanghai's financial market, whereas the 147 local Chinese banks' assets accounted for only 54.2% of the total. In August 1948, when there were only 12 foreign banks left in Shanghai's financial market, their share of total assets was even higher, at 36%. In terms of the monetary situation, the foreign banks always heavily influenced official and black market foreign exchange rates, and were very important in currency issuance. At the end of April 1949, for example, the currency issued by a foreign bank was about 5.8 billion yuan, which constituted two-thirds of the total currency issued for circulation in China (People's Bank of China Education Editorial Committee, 1985). The foreign financial influence and power came to an end in 1949, when the government came under the control of the Chinese communists led by Mao Zedong.
It was also the beginning of the era in which the foreign banks and financial institutions, except the Hong Kong and Shanghai Banking Corporation, the Standard and Chartered Bank, the Overseas Chinese Banking Corporation, and the Bank of East Asia, were either nationalized or had their assets expropriated or frozen by the ruling Chinese Communist Party (Wang et al., 1990).
2) 1949 to 1978
The revolutionary event in 1949 resulted in the withdrawal of many foreign banks from the scene or, in the case of the U.S. banks, heavy penalties (through property expropriation or the
freezing of assets) for their country's role in the Korean War. However, not all the foreign banks were ousted from the Chinese banking system: the Hong Kong and Shanghai Banking Corporation, the Standard and Chartered Bank, the Overseas Chinese Banking Corporation and the Bank of East Asia were allowed to remain, largely for political rather than economic reasons. Since the heavy exodus of the foreign banks, the national banking system has undergone several deliberate changes to the role of those banks that remained, mostly in accordance with the political climate. In spite of the continued involvement of the four banks noted above, the closed-economy era of 1949-1979 cut off any active presence of foreign banks in the industry. Through the People's Bank of China, the government took steps to revoke the privileges enjoyed by the foreign banks in China, and consolidated and transformed private financial institutions firstly into public-private joint-venture banks and then eventually nationalized them (Yan, 1993).
3) 1979 to 1990s
With the open door policy in 1979 and the rapid growth experienced by the Chinese economy, the Chinese government declared the financial market open to a number of foreign financial institutions2 in 1984. Despite this declaration, barriers to entry have remained high, and foreign banks have been heavily restricted in their business locations and activities. Nevertheless, the number of operational establishments created by foreign financial institutions in China has been on the increase since 1979, as shown in Table 2. A majority of these operational establishments are in the banking and insurance sectors.
Table 2: Number of Operational Establishments Created by Foreign Financial Institutions in China

Year   Number of foreign financial institutions in China
1979    33
1987   181
1990   209
1992   304
1994   404
1995   603
1996   694
1997   702
1998   717

Source: Jinrongshibao, 1987, 1992, 1995 and 1997; Dipchand et al., 1994; People's Bank of China, 1996; China Economic Information, 1997.

SETTING THE STAGE
Business Interest of the Foreign Banks
One of the main purposes of the foreign banks in establishing an early presence in the huge potential Chinese market was to provide support to clients from their home countries. The flow of foreign direct investment (FDI) into China has increased as more international corporations move into the country to take advantage of its gradually liberalizing economic environment. FDI in China grew from US$636 million in 1983 to US$45.6 billion in 1998, with a concurrent increase in the number of foreign banking establishments in the Chinese economy. Although FDI dropped to US$40.4 billion in 1999, the country remains one of the largest
FDI recipients in the world. Another main purpose of the foreign banks in establishing an early presence in the Chinese market was to prepare for the opening up of the local currency (Chinese yuan or Renminbi) business to them. In Chinese culture, the propensity to save is a traditional virtue highly regarded and upheld by the populace. Chinese domestic savings as a percentage of GDP averaged above 30% between 1978 and 1998, and stood at 32% in 1998. This penchant for savings, which has made the country a world leader in savings rates, spells substantial market potential for well-established foreign banks. Although the opening of the local currency business has been gradual, experts assessed it to be inevitable, in view of external institutional pressure (for example, China's desire to qualify for WTO (World Trade Organization) membership) and of the evolution of a financial system that supports the development of a market economy. This assessment was in part realized in 1997, when nine foreign banks were permitted to deal in the local currency business in the Pudong region of Shanghai. By September 1999, 25 foreign banks located in Shanghai and Shenzhen had been given approval by the central bank to conduct local currency business. However, the local currency clientele of the approved banks is limited to foreign corporations: foreign investors and Sino-foreign joint ventures. Foreign banks are not allowed to engage in transactions with local Chinese citizens and wholly Chinese institutions. It is expected that the foreign banks will be permitted to offer local currency services to local Chinese companies within two years, and to individual Chinese citizens within five years, of China's becoming a member of the WTO.
China is keen and resolute about joining the WTO, as evidenced by its concessions in opening financial businesses to offshore groups and giving foreign banks greater access to local currency business. These concessions include the easing of geographical limits on the activities of those 25 foreign banks and the lifting of earlier prohibitions on their forming lending consortia, charging related management fees, and making inter-branch transfers. However, concessions have been granted on a gradual basis, which the Chinese government considers necessary because the Chinese banks are not ready for full market competition with the foreign banks. In the government's view, the fledgling state of the financial sector does not justify responding to calls from foreign governments and financial operators for immediate full access. Table 3 shows the types of foreign participation in China's banking industry in 1980, 1994, 1998 and 1999. A representative office merely functions as a liaison office and is prohibited from conducting business; accordingly, staff numbers are kept to between three and five. Except for those operational branches that have a license to conduct local currency business, the usual business scope is confined to foreign exchange deposits and loans, note discounting, remittances, guarantees, import and export settlements and approved foreign exchange investment. Both types of operational branches (with and without the license to conduct local currency business) are restricted to activities with foreigners and foreign-funded enterprises. To qualify to establish an operational branch in China, the foreign bank must have had a representative office in the country for at least two years, its parent company must have total assets of over US$20 billion, and its headquarters must be located in a country with sound financial supervisory and administrative systems (Chen and Thomas, 1999).
On the other hand, to obtain a license to conduct local currency business, the foreign bank must have a three-year history of operation in China, show a profitable position for the past two years, and have assets of at least US$150 million (Chan and Reuters, 2000).

Table 3: Types of Foreign Bank Participation

Types of foreign bank participation   1980   1994   1998   1999
Operational branches                    50    100    173    175
Representative offices                 225    302    253    248

Source: MacCormac, 1993; KPMG, 1994; Asia Intelligence Wire, 1999.
The Performance of Foreign Banks
The market share of the foreign banks in China is estimated at no more than 3%. The total value of assets of foreign-funded banks in China was US$11.8 billion in December 1994, with deposits of US$2.49 billion and loans of US$7.5 billion; these amounted to 2.0%, 0.7% and 1.7% respectively of the aggregate values for the four state-owned banks. By December 1995, these values had increased to US$19.14 billion, US$3.1 billion and US$12.75 billion respectively, with the corresponding percentages at 2.1%, 0.7% and 3.0%. At the end of 1997, 1998 and 1999, the total value of assets of foreign-funded banks in China was US$38 billion, US$34.2 billion and US$31.4 billion respectively, less than 3% of the aggregate values for the four state-owned banks in each year. Thus the level of participation of these foreign banking institutions remains very low. Though a greater role for the foreign banks in the industry (in local currency denominated deposit and loan transactions) is intended, the market will be opened only gradually. The 1996 announcement that the Pudong district in Shanghai would be the first test city open to foreign entry caused concern among the domestic banks. Foreign banks were known to have performed well despite the restrictions imposed on their business scope and activities. When the foreign banks were not authorized to engage in local currency banking business, 90% of their income was earned from trade bill discounting for importers and exporters and from document processing fees. One of the major banks in Shanghai expressed the view that foreign banks would pose a serious challenge to its foreign trade settlement business, with the potential to cut its business by 50% if full market access were granted to the foreign banks. This view is representative of the attitude of Chinese banks towards foreign bank entry.
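As a rough illustration (not from the source), the December 1994 figures above also imply the scale of the four state-owned banks: dividing each foreign-bank value by its reported percentage share back-computes the state-owned aggregate. The sketch below does only that arithmetic; the implied aggregates are estimates, not figures reported in the case.

```python
# Back-computing implied aggregates for the four state-owned banks from
# the December 1994 foreign-bank figures quoted in the text.
# Values are in US$ billions; shares are the reported percentages.

foreign_dec_1994 = {
    "assets":   (11.8, 0.020),   # US$11.8bn was 2.0% of the aggregate
    "deposits": (2.49, 0.007),   # US$2.49bn was 0.7%
    "loans":    (7.5,  0.017),   # US$7.5bn was 1.7%
}

for item, (value, share) in foreign_dec_1994.items():
    implied_aggregate = value / share
    print(f"Implied state-owned {item}: ~US${implied_aggregate:,.0f} billion")
```

For example, assets of US$11.8 billion at a 2.0% share imply a state-owned aggregate of roughly US$590 billion, underscoring how marginal the foreign banks' participation was.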
The Chinese banks felt threatened by the foreign participants, especially in the international business area, and regarded them as competitors with much higher levels of competitive advantage in capital resources, skills, services and technology. The strength of the foreign banks in this business area was illustrated by the case of Dalian, where state-owned banks started to provide foreign exchange deposit services in 1988. After about seven years in the foreign exchange business, the total foreign exchange deposits of the four dominant local players in Dalian's financial market amounted to US$400 million in 1995. The six foreign banks, which had been allowed to deal in foreign exchange business in Dalian only about two years earlier, achieved three-quarters of the state-owned banks' foreign exchange business in 1995. This greatly alarmed the domestic banks. The Chinese authorities have grave concerns that foreign banks may become overly dominant if full market access is granted, leading to a repetition of the pre-1949 situation in which control of the financial system fell into the hands of the foreign banks. On the one hand, the government authorities accorded heavy protection to the Chinese commercial banks, so that protecting the interests of the Chinese banks was a priority in any new changes to be made. On the other hand, they found foreign banking participation indispensable in the emerging market economy, as China's domestic commercial banks are experiencing severe capital and credit shortages in attempting to meet all the needs of growing business activity. The Chinese government preferred to maintain control over the finance industry rather than take the opportunity of rapid financial development offered by the full participation of the foreign banks. Therefore, to avert the loss of control over the banking industry, foreign banks were initially only allowed to serve foreign business investors in the Chinese economy.
Paradoxically, the customer scope of these foreign banks (mainly limited to foreign business enterprises, Sino-foreign joint ventures and cooperative enterprises) has precluded their involvement with the high-risk, default-prone state-owned enterprises. The relative credit standing of these enterprises may be inferred from a survey conducted by a major bank in Guangdong, in which 92% of loan defaults were committed by the state-owned enterprises while the remaining 8%
was attributable to Sino-foreign ownership enterprises. The foreign-funded banks have generally performed well in view of the restrictions and their constricted scope of operations. In 1996, their average return on assets was 0.6%, and their after-tax rate of return on investment was reported to have increased by 31% over 1995. In 1997, their average return on assets was 0.7%. Some of the foreign banks became profitable after two years of operation in China and made substantial profits. For example, 13 of the 25 foreign banks allowed to operate in Tianjin were reported to have achieved a total profit of US$7.4 million with an ROI (return on investment) of 3.08 times at the end of May 1999. However, because of the many restrictions imposed on the foreign banks, their businesses are limited to a small clientele base and short-term loans. As a result, their business becomes saturated very quickly.
CASE DESCRIPTION
Information Technology Adoption in the Chinese Banking System
Prior to 1978, the level of information technology adopted in the banking system was insignificant. Since the opening of the Chinese economy to world trade and the abolition of the monobank system in favor of a market-oriented banking system during the reform period, the adoption of information technology within the Chinese banking industry has increased in line with these developments. However, the increase has occurred mainly in the four dominant state-owned banks, among which the Industrial and Commercial Bank of China has been the leading information technology adopter in the Chinese banking industry. Table 4 shows that this leading bank substantially increased its adoption of information technology between 1985 and 1999. The initial focus of computerization in the four state-owned banks was largely centred on the front counter or front desk in business and savings outlets. This is the most heavily computerized work system compared with others, some of which still rely on manual work processes. As a result, 90% of these banks' business outlets have computerized front-counter or front-desk support. However, a considerable number of the small and medium banks still rely on a manual mode of operation in this work system. When the overall Chinese banking system is taken into consideration, the aggregate status of information technology adoption reflects a shallow pattern of technology applications, which affects the quality of information systems. The initiation and progression of technology adoption by the domestic banks owe much to the government's efforts, transmitted through the reform agenda and the specific projects targeted at establishing the CNFN (China National Financial Network3) infrastructure for the banking system.
Competitive pressure in the highly concentrated Chinese banking market lacks the vigor displayed in a developed market-oriented economy and is not forceful enough to propel rapid and strategic adoption of information technology.

Table 4: Information Technology Adoption in the Industrial and Commercial Bank of China

Forms of Information Technology Adopted   1985           1999
Mainframe computers                       7 units        141 units
Minicomputers                             10 units       963 units
Microcomputers                            -              106,475 units
Mainframe centers                         -              47 centres
Computerized business outlets             100+ outlets   41,216 outlets

Source: Shan, 1999, 2000.

Strategic moves, such as using
information technology to create competitive advantage and innovative positioning, do not characterize the business strategy of the Chinese banks. In addition, there are limited market opportunities for the strategic use of information technology in the industry. Bank customers still perceive banking services in very traditional terms and have not fully appreciated the benefits associated with information-technology-based products and services. For example, the number of bankcards on issue has increased rapidly since 1993, but the incidence of card usage at automatic teller machines and point-of-sale systems remains low. It has been assessed that the status of technology adoption in the Chinese banking system is equivalent to the 1980s standard of the developed countries (Liu, 1999). The pattern of technology applications still constitutes islands of automation. This is evidenced by the existence of manual and dual processing modes, and by the inability of the banks to configure a virtual network capable of comprehensive geographical coverage and extensive interbank linkage. The internal focus of the banks during applications development has led to the construction of proprietary networks. In the mid-1990s, a panel of 38 experts examined the status of information technology adoption within the financial system and pointed out that the absence of a coherent strategy and policy among the banks has hindered the interoperability of the banks' corporate networks. Even though the headquarters set standards and requirements, these were not consistent across the different banks. In addition, the lack of a distinct national direction governing technology adoption strategy led to further incompatible technology applications. These experts stressed that financial computerization should be listed as a national strategy, realigning adoption undertakings to ensure compatible technology applications.
In 1997, the banks located in 12 major cities began to work towards an interoperable system for a unified banking system (Shang, 2000). However, in areas where the telecommunication infrastructure is inadequate, banks experience a connection gap not only among their own inter-organization networks but also with the CNFN (China National Financial Network). This non-interoperability problem has resulted in partially automated or manual work processes. Resolving the system incompatibility and non-interoperability problems is expected to involve considerable time and cost, which, in turn, impinges on the deteriorating profits and tight financial position of the Chinese banks. Another problem facing the Chinese banking system is the shortage of information technology staff. Moreover, the available staff have limited skills for coping with the complexity of advanced user application systems, a difficulty that has resulted in many disparate system applications. There were IBM mainframe systems, open systems and traditional systems, as well as fund, savings and credit card systems, each developed individually and demanding different application environments. Information technology staff knowledgeable in both technology applications and business organization remained scarce and difficult to recruit. Most information technology staff have application skills rather than skills in debugging and resolving problems arising in the applied systems. An unstable information technology support force further aggravated this skill shortage. With the new labor reform policy4 in force, these banks faced tough competition in the labor market in attracting, as well as retaining, a stable pool of the required talent and expertise.
The banks were extremely frustrated at losing their heavily sponsored employees to companies that could afford higher wages and benefits. Even in the Special Economic Zones, where staff resources were comparatively richer in quantity and quality than in other areas, and where the bank branches registered a higher computerization rate, the shortage of highly skilled information technology personnel remained a crucial problem.
Information Technology Adoption: The Implications for Domestic Banks
Information technology provides an opportunity for businesses to improve their efficiency and effectiveness, and even to gain competitive advantage (Benjamin et al., 1984; Earl, 1989; Ives &
Learmonth, 1984; Porter & Millar, 1985; Dierickx & Cool, 1989). In the developed countries, banks are among the biggest investors in information technology, applying leading technology to achieve unprecedented cost efficiency and competitive advantage. Some of the technology investments undertaken by these banks in the past have become necessary tools for operations and competition today, such as the automated teller machine, which constitutes the minimum standard of convenience expected of banks in the developed countries. A leading information technology application expected to create a new standard in the banking industry is the electronic network. Intelligent electronic networks capable of processing huge transaction volumes and handling a multitude of business and consumer applications are emerging in the banking world, and their capabilities stretch seamlessly across geographic borders. Despite the problems of privacy intrusion and security risk associated with online systems, the global banking community is very positive about embracing the Internet in its future strategic operations (Sheshunoff, 2000; Davidson, 2000; Stafford, 2001). Although the types of transaction currently supported by Internet banking are limited (mainly account enquiries, money transfers, bill payments and payroll deposits), it is anticipated that full-service Internet banking will become the industry standard, not the exception, in the not-too-distant future (Sheshunoff, 2000; Wilson, 2001). At present, the Chinese banks lag behind their foreign competitors overseas in using electronic networks for strategic competitiveness, and Internet banking in the country is still highly underdeveloped. The central bank is concerned about the impact of foreign competition on its domestic banks, particularly now that China has become a member of the WTO and as its financial sector is opened to full market competition.
It has urged its domestic banks to gear up for the technological challenges that will be posed by the well-endowed foreign competitors, particularly in Internet banking (Zeng, 2001). The effective adoption and application of information technology to every aspect of banking operations is crucial not only for the Chinese banks to survive the challenges posed by their foreign competitors now that the country has entered the WTO, but also for maintaining a timely, stable and reliable financial system that is vital to market confidence. Furthermore, it would help the Chinese banks integrate with the rest of the banking world and exploit opportunities associated with the rising number and volume of international financial transactions.
Foreign Direct Investment and Technology Transfer: A Literature Review

Foreign participation, particularly through foreign direct investment (FDI), has been identified as an important channel that can provide the host country with ready access to advanced technologies and know-how, as well as to a pool of financing resources (Conroy, 1992). A number of studies suggest that FDI brings positive impacts or spillovers to a host country through technology transfer, such as higher productivity growth, greater competitiveness and higher living standards (Quinn, 1969; Globerman, 1979; Chen, 1983; Blomstrom & Persson, 1983; Morton, 1986; Schive, 1990; Conroy, 1992; Wang & Blomstrom, 1992; Caves, 1995; Borensztein et al., 1998; Sjoholm, 1999). On the other hand, there are studies that suggest otherwise (Cantwell, 1989; Haddad & Harrison, 1993; Aitken & Harrison, 1994; Kokko & Tansini, 1996; Perez, 1998).

Explanations offered for this mixed evidence tend to centre on the characteristics of the host country in encouraging or harnessing maximum benefits from technology transfer. For instance, Blomstrom et al. (1994), Kokko (1996) and Sjoholm (1999) found in their respective studies that a wide technology gap between foreign and domestic firms and a low degree of competition in the market can impede the exploitation of technology transfer. They stressed that FDI is an effective conduit for the transfer of technology only when the host country has sufficient absorptive capability5 for the advanced technologies (Cohen & Levinthal, 1990; Borensztein et al., 1998; and Chuang & Lin, 1999, take a similar view). Otherwise, it will be difficult for the technologically backward domestic firms to close the technology gap and catch up with the foreign firms. They also stressed that the level of technology transferred via FDI depends on the competitive pressure in the host
The Foreign Banks’ Influence In Information Technology Adoption 151
country, because foreign participants from developed countries are likely to bring in relatively modern and efficient technologies to defend their position in a very competitive market (also supported by Wang & Blomstrom, 1992; Blomstrom & Lipsey, 1996). In the process, competitive pressure may also stimulate the adoption of technology or efficiency-enhancing strategies by domestic firms, driven by the desire to stay ahead of the competition or to catch up with the technology-wielding foreign firms (Caves, 1974; Dunning, 1993; Blomstrom & Kokko, 1997; Gonclaves & Duque, 1999).

For a better outcome, it has been suggested that developing host countries, particularly those not blessed with rich endowments, implement measures to stimulate competition while concurrently building up local technological competency and infrastructure for effective transfer of technology, rather than waiting to get the required technological capability in place before introducing full market competition (Wang & Blomstrom, 1992; Frischtak, 1989; Blomstrom et al., 1994; McKendrick, 1995).

This review shows that, although the characteristics of the host country are important in technology transfer, the government and the policymakers play a critical role in ensuring that their policies and actions optimize the benefits and spillovers from foreign technology transfer. Market liberalisation and foreign presence can be disastrous if not managed properly, and may generate adverse impacts on the domestic firms, which is what host governments of developing countries fear. Foreign participants, especially those from developed countries, generally have technological advantages that enable them to compete successfully against the domestic firms, and they have easier access to international capital for investment in sophisticated technology (Mason, 1973; Mitchell, 1989, 1991).
Host governments of developing countries are careful to ensure that the foreign participants do not become too powerful and harm the profitability and survival of their domestic firms (Cowling & Sugden, 1987; Young et al., 1994). This is one reason why some emerging market economies such as China have adopted a gradual liberalizing stance for their markets. Even so, it is important that the host country government and policymakers do not stifle the positive impacts or spillovers from FDI. There are indirect benefits associated with technology transfer that need to be taken into consideration. For example, employment generated by FDI may expand local capabilities to the advantage of the domestic firms. Human capital investment undertaken by foreign firms (to equip the local labor force to run their operations in the host country) may ultimately benefit domestic firms in the long run, as a result of labor migration. Studies have revealed that a considerable number of managers working in local firms in Latin American and East Asian countries received their first training through employment with foreign multinationals (Katz, 1987; Hobday, 1995).

Foreign participation can thus have a great impact on the host country. Rules and policies pertaining to the investment environment must also take into consideration the motivations behind the investments made by foreign firms, because these motivations have heavy implications for the types of incentive that will entice meaningful technology transfer from these firms. A foreign participant’s commitment to technology investment and transfer is likely to be positively associated with the strategic importance of its investment and its foreign market performance (Isobe et al., 2000). A foreign firm will commit fewer resources to FDI if it continuously suffers poor market performance or uncertainty surrounding its investment (Johanson & Vahlne, 1977, 1990).
The Potential of Foreign Banks in Technology Transfer

According to the KPMG (1994) report, the rationale for the government’s admission of foreign banks into the Chinese banking industry was to attract foreign capital and banking expertise. However, the regulated nature of the industry during the reform period has so far highly constricted the scope of foreign banking institutions’ participation, and the overall participation of these institutions is very low. As a consequence, their impact on technology adoption has been insignificant. Even the Sino-foreign joint venture, which has been the most direct means of technology transfer in other industries, is not easily accessible in the Chinese banking system; the highly confidential nature of banking business has kept this type of business formation at a low rate. About 10% of the
relatively active banking business entities were established on this basis.

Primary research was undertaken on foreign banks in late 1994 to assess the role that they played in technology transfer in Beijing. This research showed that there was little opportunity for the domestic economy to tap foreign expertise in banking technology from the fully funded foreign enterprises operating in China. All of the 98 main foreign banking offices located in Beijing at that time were approached, and 67% responded to a telephone survey regarding their level of technology adoption in China. Because of the operational restrictions imposed on the foreign banks in Beijing, staff numbers in the Beijing representative offices were kept at a low level. Seventy-three percent of the respondents described their organization structure as the following simple configuration:
• Chief representative (normally an expatriate from headquarters),
• Assistant representative,
• Secretary,
• Driver.
Seventeen percent of the foreign offices surveyed did not use computers in their activities, and about half of this group were Japanese banks. These banks were nonetheless equipped with basic communication and paperwork-processing equipment, such as a fax machine and copier; word processing and data processing tasks were carried out on a labor-intensive basis, via typewriters and hand-operated calculators. On the other hand, 40% of the respondents had computer facilities on a shared basis of at least two persons to one computer, while 12% of the offices surveyed provided a personal computer to every staff member (except the driver); the latter group tended to be American, European, and Canadian banks. Only 4% of the banks in the survey were equipped with computers that had an international communication linkage, usually with headquarters.
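The survey percentages above can be translated into rough head-counts. The absolute numbers below are derived from the reported shares, not taken from the original study, and the rounding is approximate:

```python
# Back-of-envelope tally of the 1994 Beijing survey figures reported in
# the text. Counts are derived from the published percentages only.

TOTAL_OFFICES = 98            # main foreign banking offices in Beijing
RESPONSE_RATE = 0.67          # share that answered the telephone survey

respondents = round(TOTAL_OFFICES * RESPONSE_RATE)  # about 66 offices

# Reported shares of respondents by level of computerisation.
shares = {
    "no computers": 0.17,
    "shared computers (two or more staff per PC)": 0.40,
    "one PC per staff member": 0.12,
    "international network link": 0.04,
}

counts = {label: round(share * respondents) for label, share in shares.items()}

print(f"respondents: {respondents}")
for label, count in counts.items():
    print(f"{label}: ~{count} offices")
```

The derived counts make the scale of the findings concrete: only a handful of the roughly 66 responding offices had any international network link at all.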
Less than 1% had domestic communication network linkages among their affiliated offices or branches located in China. Just over half (53%) of the offices surveyed had branches or offices in other regions of China, and almost all the business branches in Beijing and other areas that dealt with business transactions had adopted computer technology, but with a low incidence of inter-organizational electronic linkage.

When the foreign banks were further interviewed on their intentions concerning the future adoption of banking technology in China, the interviewees regarded the availability of technology support as most inadequate in China; information technology staff were normally sent from headquarters. In addition, technology hardware and software tended to be sourced from the overseas headquarters or from the main overseas regional office, where the application of technology was at a very sophisticated level. Business branches in most cities of the Special Economic Zones were commonly characterized by a higher number of staff members than the representative office in Beijing. Computers were adopted in business branches for transactional purposes, but the local users were seen as having weak computer skills; training was conducted either in-house or at the overseas offices.

Adoption of sophisticated technologies remained insignificant in the late 1990s because of the low level of participation granted to the foreign banks and the weak infrastructure supporting the adoption of information technology. These foreign banks also experienced an unstable pool of staff. Overall, the plan to adopt further technology in these banks is dependent on the market opening for their participation and the infrastructure support (telecommunication and power) within the country. Thus, the central issue in the availability of technology for transfer from foreign banks to the Chinese market primarily lies in the degree of market participation open to them.
This ability to participate in the market is closely associated with the opportunity for business participation or expansion. Infrastructural support takes second place in attracting foreign technology introduction into the Chinese banking system. Thus far, the opening of the Chinese market to the foreign banks has been a carefully planned and controlled process, which explains the foreign banks’ passive stance toward technology adoption.
On the demand side of this technology transfer process, the ability to integrate or absorb any available foreign technology into the domestic system also depends on technical expertise and managerial skills. In the Chinese banking system, not only is the opportunity for foreign technology transfer low, but the weak capability of the labor resources has also prevented the absorption or assimilation of whatever foreign technology is available in the country. Even though labor mobility6 is high, it does not benefit the pattern of technology diffusion: because the four state-owned banks employed 95% of the employees in the Chinese banking industry, the low level of participation and highly limited employment opportunities offered by the foreign banks do not permit meaningful transfer of technology and knowledge to the local labor force. The role of foreign banks in technology and knowledge transfer is therefore limited in the Chinese system.
CURRENT CHALLENGES/PROBLEMS FACING THE CHINESE GOVERNMENT

If the Chinese banks are to effectively exploit and harness benefits from the technology and knowledge (banking expertise) transferred by the foreign banks through greater market access, further reform efforts are needed to overcome challenges and problems within the Chinese banking system, as well as within those systems that are intimately linked to it. A stable and resilient banking system, the product of successful reform efforts, has the additional benefit of allaying the government’s concern about loss of market control to the foreign banks if greater market access is granted. The major challenges and problems that have to be tackled in these far-reaching reform efforts are as follows:
Loan Default

In the Chinese banking system, non-performing loans stood at 20% of lending and unrecoverable loans at 7%. The Chinese government is eager to clear up these massive and longstanding loans before opening the market fully to foreign competition. The defaulting borrowers behind the non-performing and unrecoverable loans were mainly the state-owned enterprises: their failure to adapt to market forces and their poor investment management had resulted in huge losses and underutilization of capital assets, and hence an inability to meet loan repayments. The accumulating debt problem was also blamed on the absence of a comprehensive bankruptcy law and the lack of strong cooperation from local government7 during the early reform period. In addition, the lack of experience and discipline of bank staff contributed to the inefficiency of the system.

Although China has an unusually high rate of savings even by East Asian standards, these savings have not been efficiently used as an investment resource by its domestic banks, given the lack of prudence in lending decisions and the high proportion of non-performing loans. The debt situation is threatening the profitability of the domestic banks and the viability of the country’s financial system, and the deteriorating profits of the domestic banks have an adverse impact on their attempts at information technology adoption and capability development. To ensure that the banks are disciplined in their lending approach, the central bank has progressively implemented rules that engender prudent lending decisions. However, remedial measures such as ‘debt-to-equity conversions’ for the ailing state-owned enterprises and the transfer of bad debts to the newly created state asset management companies were not expected to eradicate the problem.
More effective measures and reform efforts are needed to resolve the longstanding loans and prevent their further buildup, so that the Chinese banks can focus on establishing a modern and sophisticated commercial banking role. Otherwise, the opening of the banking industry to foreign banks will be a lengthy process.
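As a rough illustration of the scale of the problem, the reported shares can be applied to a loan book. The loan-book size below is a made-up figure purely for demonstration, and the two reported categories are treated as disjoint, which is an assumption the text does not confirm:

```python
# Illustrative arithmetic only. The 20% and 7% shares are as reported in
# the text; the loan-book size is hypothetical, and treating the two
# categories as disjoint is an assumption made for this sketch.

loan_book = 1_000.0        # hypothetical total loans (billions, any currency)
npl_pct = 20               # non-performing loans, % of loan book (reported)
unrecoverable_pct = 7      # unrecoverable loans, % of loan book (reported)

impaired = loan_book * (npl_pct + unrecoverable_pct) / 100
performing = loan_book - impaired

print(f"impaired:   {impaired}")    # 270.0 -> over a quarter of the book
print(f"performing: {performing}")  # 730.0
```

On this reading, more than a quarter of the system’s lending was impaired, which underscores why the government saw cleanup as a precondition for full market opening.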
Regulatory and Legal Framework

Although the regulatory and legal framework has been evolving to keep pace with the transition towards a market-based economy, shortcomings remain in the supervisory, regulatory,
legal and accounting systems, which require further effort to bring them up to international standards. This is also relevant to bankruptcy procedure, which has implications for the accumulation of non-performing loans in the financial system. Legislative incompetence and procedural irregularities have the effect of undermining the confidence of foreign investors. For example, the foreign banks became concerned, and even more cautious, when China did not bail out the foreign creditors of the Guangdong International Trust and Investment Corporation (GITIC), one of the country’s provincial government investment arms, when it went bankrupt on January 16, 1999 with an unpaid accumulated debt of US$4.3 billion. Foreign banks ranked last in the order of payment among the creditors. This eroded the confidence of foreign banks and caused a severe contraction of credit from them. With the added strain of the long wait for full access to the local currency market, a handful of foreign banks are closing or downgrading their presence in China as part of restructuring directed by their headquarters. According to the Foreign Banks’ Association in Beijing, 70% of foreign banks have scaled back their lending or closed their branches in the past two years (Zheng, 2000).

The nascent legal and regulatory framework requires further reform. A strong legal and regulatory regime is necessary for strengthening the central bank’s ability to manage and control the monetary situation, tightening credit supervision of individual commercial enterprises, enhancing banks’ credit discipline, and establishing an integrated and open banking system in which competition is orderly and management is effective.
Infrastructure

Infrastructure is beneficial to the economy “only when it provides services that respond to effective demand and do so efficiently” (The World Bank, 1994, p. 2). A sound and efficient infrastructure is fundamental to the adoption and diffusion of information technology and banking expertise. However, China is still trying to overcome the problem of inadequate basic infrastructure and has yet to reach a level that is fully beneficial to the economy. The Chinese telecommunication and power supply infrastructures impose direct constraints on potential delivery systems in the banking industry, and this in turn limits the range of products and technologies that can be adopted. The institutional frameworks of these two infrastructures (telecommunications and power) share many similarities. The technical and, particularly, institutional constraints imposed by the infrastructure, especially the telecommunication infrastructure8, have been impeding the Chinese banking industry’s adoption of information technology and its ability to realize benefits such as:
• cost-effectiveness in linking the Chinese banking units,
• extensive coverage of all provinces,
• international standards for open access,
• secure and reliable transmission of financial data and information,
• rapid payment settlement without information transmission delay,
• a flexible structure for the creation and further support of financial products and services.
The stage of infrastructure development also affects the foreign banks’ technology adoption position. A technology-oriented foreign bank has to make heavy investments in information technology support to overcome the shortcomings in the infrastructure. However, such an investment strategy is made difficult by the limited nature of the market opening to the foreign banks and the business justification of a return on the investment.
Hence, infrastructure deficiencies hampered both the process of technology adoption within the banks and technology transfer to the banks.
Human Resource

The shortage of appropriately qualified human resources is one of the most pertinent issues facing the banking system in its adoption and diffusion of information technology. The foreign
banks constitute one of the conduits for training and imparting banking expertise to the Chinese locals. However, their low level of participation limits this role. The government is also a critical force in cultivating a pool of the required talent. Government intervention would seem relevant where market imperfections or externalities make it impractical for private participants to fulfill the entire training role. External facilities specializing in training people for information services remain relatively scarce in China, and the average skill level of information technology personnel remains low. There is also a lack of individuals with knowledge of both the technical applications of information technology and the business value systems of the banks; such individuals are in high demand for their potential to enhance business value through the exploitation of technological capability.

Since 1979, it has been the responsibility of the domestic banks to create a pool of information technology talent to support the technology adoption and diffusion process, and this has weighed on their already tight financial position. Although the government’s support for computer courses at the primary and secondary education levels is growing, basic education takes years. Support from the government in providing external training facilities is needed. This would not only alleviate the constraint faced by the banks, but could also serve as a coherent national strategy for concentrating scarce resources on developing indigenous technological capabilities. This is necessary to meet the immediate and pressing demand for talent in those parts of the economy already engaged in information technology adoption.
Consolidating the different learning paradigms by establishing responsive and nationally coordinated training centres, for example, might not only alleviate the financial burden on the technology adopters but also promote the adoption of consistent standards in technology operations and management.
APPENDIX 1: History of the Major Chinese Banks

A) The Central Bank in China

The People’s Bank of China was established in April 1948. Since its establishment, the bank functioned as both a central bank and a commercial bank for about 35 years. From December 1948 onwards, the People’s Bank of China was given the responsibility of issuing the country’s standardized currency, the Renminbi, for circulation within the economy; this remains unchanged to this day. The People’s Bank of China has always retained its principal identity and remained a core bank through the mergers, demarcations and divisions within the banking industry. In 1952, which witnessed the consolidation of the socialist economy, the People’s Bank of China became the monobank regulating and managing the banking industry. The Bank of China’s overseas business activities were incorporated into the People’s Bank of China’s area of operations, while the Agricultural Cooperative Bank was dissolved. As a result, the banking structure became highly centralized and unitary, with the People’s Bank of China operating as the only bank in the economy. However, the arena of the principal bank’s activity was extremely limited and was restricted to a subservient role to the Ministry of Finance in credit, savings and settlement activities. Lending was extended only to state-owned enterprises, and only in the form of working capital of a temporary and seasonal nature.

Several changes occurred in the role of this bank from 1979 onwards, transforming it into a vital economic unit in the economy’s development path. The People’s Bank of China attained its independence from the Ministry of Finance in 1983 and was given wider discretion and independence in its lending activities. In January 1984, the People’s Bank of China severed its direct involvement in commercial credit and deposit operations and assumed the official status of a central bank.
The Industrial and Commercial Bank of China was specially created to take over the severed commercial arm of the bank. Gradually, the People’s Bank of China adopted the functions typical of a central bank in industrialized countries.
B) The Dominant Players: The Four State-Owned Banks

The Industrial and Commercial Bank of China was established on January 1, 1984 under the approval of the State Council. Although it came into existence during the reform period, its significant share of market activities derived from the lateral transfer of established business activities from the People’s Bank of China when the latter became the country’s official central bank and had to sever any direct involvement in commercial credit and deposit operations. The transferred commercial portfolio consisted of urban banking business specializing in savings deposits and lending to commercial enterprises. In terms of assets, this bank is the largest bank in China and was ranked among the largest banks in the world in 1997.

Bank of China was built on the foundation of the Daqing Bank in 1912. It operated as the country’s international bank prior to its nationalization by the government in 1949. In 1953, the bank was appointed by the government to undertake and control all foreign exchange activities within the country, making it the economy’s only specialized foreign exchange center; it operated under the jurisdiction of the People’s Bank of China and was referred to as the Foreign Exchange Bureau of the Economy. In 1979, the Bank of China was appointed by the government to play a crucial role in the country’s import and export policies, and the bank underwent a restructuring exercise in order to fulfill this important assignment. In addition, it took on a new role as the clearinghouse for domestic transactions denominated in foreign currencies. Its activity in foreign exchange increased further under the semi-floating currency system, with the Chinese yuan pegged against a basket of seven major foreign currencies instead of just the Swiss franc used in the earlier reform period.
By pegging the Chinese currency against this basket of currencies, the bank became a major driving force in encouraging high-tech imports for industrial modernization and exports to earn foreign exchange. It also provides financial credit to enterprises with export potential, in line with the country’s trade orientation strategy. However, its prime position in foreign exchange and international transactions has been gradually eroded by the entry of new domestic banks and foreign banks.

The People’s Construction Bank of China was established in 1954 to handle the capital construction segment of banking activities. During the pre-reform period, it handled capital construction fund allocation and credit extension in accordance with the state budget and relevant policies. In 1958, the central government placed the bank under the jurisdiction of the Ministry of Finance as its Capital Construction Financial Division and closed all its branches. It regained its identity as a bank in 1962 but was merged into the People’s Bank of China in 1970, during the Cultural Revolution era. In 1972, it regained its identity as a bank yet again, operating as a department of the Ministry of Finance. Although the bank was closed, merged and resurrected at various times during the pre-reform period, its role remained confined to handling the country’s capital construction. The People’s Construction Bank of China’s banking role developed in the 1980s to include deposit activity and commercial lending to the construction industry. In 1996, the bank changed its English name to ‘China Construction Bank’.

Agricultural Bank of China. During the pre-reform period, several attempts were made to establish a specialized bank to handle the financial matters of the economy’s agricultural sector.
Credit cooperatives were the initial financial institutions established from 1951 onwards to support the agricultural sector. However, they were all closed as part of the major banking industry restructuring exercise in 1952, during the Chinese internal movement against ‘the three evils’ – corruption, waste and bureaucracy. These credit cooperatives were reestablished in March 1955 as the Agricultural Bank of China, but their service to the agricultural sector lacked strength and effectiveness. At that time, the branches of the Agricultural
Bank of China were absent at the ‘grass-roots’ level9 and were situated in larger towns and cities above the county level. In remote agricultural areas where the Agricultural Bank of China did not have a branch, the People’s Bank of China branches played a substitute role. In 1957, the Agricultural Bank of China was shut down for its role in generating inflation. The bank was reestablished in November 1963, but this revival lasted only two years before it was merged into the People’s Bank of China in 1965 (Yan, 1993). After these various attempts and failures to establish a specialized bank for the agricultural sector, a specialized agricultural bank was revived in 1979 and named the ‘Agricultural Bank of China’. This bank has been responsible for financial intermediation, mainly supporting agricultural activity in the rural areas. The most active part of its banking business covers loans to state-owned enterprises in the rural areas and loans for rural agricultural activities such as crop advances, agricultural capital investment, and farm development.
ENDNOTES
1 These banks were established to take over the government-directed or ‘policy’ lending functions of the four state-owned banks.
2 Banks, financial companies, trust and investment companies, insurance companies, insurance brokerage and agent companies, securities companies, investment banks, merchant banks and fund management companies, financial leasing companies, foreign exchange brokerage companies, and companies providing consulting services in finance, insurance and foreign exchange. However, China bans foreign entry into commodity futures, financial futures and other derivative financial services.
3 CNFN (China National Financial Network) is a specialized banking network that supports information system applications for information flow, transaction processing and a range of traditional and modern financial services on an intra-city, inter-city, inter-region and inter-bank basis. The CNFN is to be supported mainly by a satellite-based telecommunication network infrastructure (Jinrongshibao, 1996).
4 During the command era, jobs were allocated to individuals by the government. This practice was abolished in the reform period, in which individuals seek employment in the labor market and companies have to compete to attract quality labor.
5 Absorptive capability refers to the capability to acquire, assimilate and exploit the transferred technology.
6 The movement of labor carries the potential of knowledge flow and constitutes a potential factor in technology diffusion among economic entities in the developed overseas countries.
7 The desire of local governments to prevent business failure or bankruptcy, in order to ensure continuous employment for the locals, has contributed to the high level of bad debt in the banking system.
8 Telecommunication infrastructure is a social overhead capital that has a macro influence on a firm’s pattern of information technology adoption and diffusion. It comprises the telephone lines, satellite communications, broadband communications, institutions and policies that make up a nation’s telecommunication framework, which determines a firm’s internal and external connections with other entities.
9 In geographical localities that are below county level.
FURTHER READING
Chen, J., & Thomas, S. C. (1999, Nov/Dec). Banking on China. The China Business Review, 26(6), 16-19.
Harner, S. M. (2000). Financial services and WTO: Opportunities knock. The China Business Review, 27(2), 10-15.
Kremzner, M. T. (1994, June 15). Foreign banks in China: Financial sector reforms will open opportunities. East Asian Executive Reports, 16(6), 9-13.
Zhang, J. H., & Zheng, J. X. (1993). Challenges and opportunities for foreign banks in China (Policy Paper 7). Western Australia: Murdoch University, Asia Research Centre on Social, Political and Economic Change.
158 Fong
REFERENCES
Aitken, B., & Harrison, A. (1994). Do domestic firms benefit from foreign direct investment? Evidence from panel data. World Bank Policy Research Working Paper (No. 1248). Washington, DC: World Bank.
Almanac of China’s Economy. (1991, 1994-1996). Beijing: State Council Development Research Centre.
Benjamin, R. I., Rockart, J. F., Scott-Morton, M. C., & Wyman, J. (1984). Information technology: A strategic opportunity. Sloan Management Review, 25(3), 3-10.
Blomstrom, M., Kokko, A., & Zejan, M. (1994). Host country competition, labour skills and technology transfer by multinationals. Weltwirtschaftliches Archiv, 130, 521-533.
Blomstrom, M., & Kokko, A. (1997). Regional integration and foreign direct investment. NBER Working Paper (No. 6019). Cambridge, MA: National Bureau of Economic Research.
Blomstrom, M., & Lipsey, R. E. (1996). Multinational firms and the diffusion of skills and technology. NBER Reporter, 11-13.
Blomstrom, M., & Persson, H. (1983). Foreign investment and spillover efficiency in an underdeveloped economy: Evidence from the Mexican manufacturing industry. World Development, 11, 492-501.
Borensztein, E., De Gregorio, J., & Lee, J. W. (1998). How does foreign direct investment affect economic growth? Journal of International Economics, 45(1), 115-135.
Cantwell, J. (1989). Technological innovation and multinational corporations. Oxford: Basil Blackwell.
Caves, R. (1974). Multinational corporations, competition and productivity in host-country markets. Economica, 41, 176-193.
Caves, R. (1995). Multinational enterprise and economic analysis (2nd ed.). Cambridge: Cambridge University Press.
Chan, C., & Reuters. (2000, January 25). Shanghai set to widen Yuan trade. China Web. Retrieved April 26, 2001 from the World Wide Web: http://www.chinaweb.com/english/cw_html/thebigissue/the_renminbi/HK1954.html
Chen, E. K. Y. (1983). Multinational corporations and technology diffusion in Hong Kong manufacturing. Applied Economics, 309-321.
Chen, J., & Thomas, S. C. (1999, November/December). Banking on China. The China Business Review, 26(6), 16-19.
China – foreign banks allowed enlarged business scope. (1999, August 6). Asia Intelligence Wire, p. 1.
China Economic Information. (1997, December 5). More foreign-funded banks set up in China. ChinaVista. Retrieved April 26, 2001 from the World Wide Web: http://www.chinavista.com/business/news/archive/dec/dec11-03.html
China Statistical Yearbook. (1989-1999). Beijing: Statistical Information and Consultancy Service Centre.
Chuang, Y. C., & Lin, C. M. (1999). Foreign direct investment, R&D and spillover efficiency: Evidence from Taiwan’s manufacturing firms. The Journal of Development Studies, 35(4), 117-137.
Cohen, W. M., & Levinthal, D. A. (1990). Absorptive capacity: A new perspective on learning and innovation. Administrative Science Quarterly, 35, 128-152.
Conroy, R. (1992). Technological change in China. Paris: OECD.
Cowling, K., & Sugden, R. (1987). Transnational monopoly capitalism. London: Wheatsheaf Books.
Davidson, S. (2000, November). Internet banking: Key strategic and tactical issues for community bankers. Community Bankers, 9(11), 48-50.
The Foreign Banks’ Influence In Information Technology Adoption 159
Dierickx, I., & Cool, K. (1989). Asset stock accumulation and sustainability of competitive advantage. Management Science, 35(12), 1504-1514.
Dipchand, C. R., Zhang, Y., & Ma, M. (1994). The Chinese financial system. Westport, CT: Greenwood Press.
Dunning, J. G. (1993). Multinational enterprises and the global economy. Reading: Addison-Wesley.
Earl, M. (1989). Implementation: Management strategies for information technology. New York: Prentice-Hall.
Frischtak, C. (1989). Competition as a tool of LDC industrial policy. Finance & Development, 26(3), 27-29.
Globerman, S. (1979). Foreign direct investment and ‘spillover’ efficiency benefits in Canadian manufacturing industries. Canadian Journal of Economics, 12, 42-56.
Gonclaves, V. F. C., & Duque, J. (1999, July/August). Portuguese financial corporations’ information technology adoption patterns. Interfaces, 29(4), 44-57.
Haddad, M., & Harrison, A. (1993). Are there positive spillovers from direct foreign investment? Evidence from panel data for Morocco. Journal of Development Economics, 42, 51-74.
Hobday, M. (1995). Innovation in East Asia: The challenge to Japan. Aldershot: London.
Isobe, T., Makino, S., & Montgomery, D. B. (2000, June). Resource commitment, entry timing, and market performance of foreign direct investment in emerging economies: The case of Japanese international joint ventures in China. Academy of Management Journal, 43(3), 468-485.
Ives, B., & Learmonth, G. P. (1984). The information system as a competitive weapon. Communications of the ACM, 27, 1193-1201.
Jinrongshibao. (1987-1997). Jinrongshibao. Beijing.
Johanson, J., & Vahlne, J. E. (1977). The internationalization process of the firm: A model of knowledge development and increasing foreign market commitments. Journal of International Business Studies, 8(1), 23-32.
Johanson, J., & Vahlne, J. E. (1990). The mechanism of internationalization. International Marketing Review, 7(4), 11-24.
Katz, J. M. (1987). Technology creation in Latin American manufacturing industries. New York: St. Martin’s Press.
Kokko, A. (1996). Productivity spillovers from competition between local firms and foreign affiliates. Journal of International Development, 8, 517-530.
Kokko, A., & Tansini, R. (1996). Local technological capability and productivity spillovers from foreign direct investment in the Uruguayan manufacturing sector. Journal of Development Studies, 32, 602-611.
KPMG. (1994). Banking and finance in China (1st ed.). KPMG.
Lee, F. E. (1982). Currency, banking and finance in China. New York: Garland Publishing, Inc.
Liu, H. (1999). Bank network security. Financial Computer of China, 116(3), 42-46.
MacCormac, S. (1993, May/June). Foreign bank branches on the move. China Business Review, 20(3), 40-43.
Mason, R. H. (1973). Some observations on the choice of technology by multinational firms in developing countries. Review of Economics and Statistics, 55, 349-355.
McKendrick, D. (1995, September). Sources of imitation: Improving bank process capabilities. Research Policy, 24(5), 783-802.
Mitchell, W. (1989). Whether and when? Probability and timing of incumbents’ entry into emerging industrial subfields. Administrative Science Quarterly, 34, 208-230.
Mitchell, W. (1991). Dual clocks: Entry order influences on incumbent and newcomer market share and survival when specialized assets retain their value. Strategic Management Journal, 12, 85-100.
Mo, Y. K. (1999). Strengthening the banking system in China: Issues and experience. In Bank for International Settlements (Ed.), A review of recent banking reforms in China: BIS policy papers no. 7 – October 1999 (pp. 90-109). Basel: Bank for International Settlements.
Morton, K. (1986). Multinationals, technology, and industrialization: Implications and impact in third world countries. Lexington, MA: D.C. Heath and Company.
National Bureau of Statistics, People’s Republic of China. (2001, February 28). Statistical communiqué of the People’s Republic of China on the 2000 National Economic and Social Development. China Statistical Information Network. Retrieved April 27, 2001 from the World Wide Web: http://www.stats.gov.cn/english/gb/gb2000e.htm
People’s Bank of China Education Editorial Committee. (1985). China modern financial history. Beijing: China Financial Publisher.
People’s Bank of China. (1996). China financial outlook, 1996. Beijing: China’s Financial Publishing House.
Perez, T. (1998). Multinational enterprises and technological spillovers. The Netherlands: Harwood Academic Publishers.
Porter, M., & Millar, V. E. (1985). How information gives you competitive advantage. Harvard Business Review, 63(4), 149-160.
Quinn, J. B. (1969). Technology transfer by multinational companies. Harvard Business Review, 47(6), 147-161.
Schive, C. (1990). The foreign factor: The multinational corporation’s contribution to the economic modernisation of the Republic of China. Stanford, CA: Hoover Institution Press.
Shan, H. G. (1999). Realising the main objective of ICBC in computerisation. Financial Computer of China, 114(1), 8-10.
Shan, H. G. (2000). Consolidate the computerization foundation of ICBC for the financial globalisation challenge. Financial Computer of China, 126(1), 2-4.
Shang, F. L. (2000). Speed up the development of bankcard business, drive the Golden Card Project construction. China Credit Card, 44(1), 4-5.
Sheshunoff, A. (2000, January). Internet banking – An update from the frontlines. American Bankers Association Banking Journal, 92(1), 51-53.
Sjoholm, F. (1999). Technology gap, competition and spillovers from direct foreign investment: Evidence from establishment data. The Journal of Development Studies, 36(1), 53-73.
Stafford, B. (2001, February). Risk management and Internet banking: What every banker needs to know. Community Banker, 10(2), 48-49.
The World Bank. (1980-1999). World Development Report. New York: Oxford University Press.
Wang, J. Y., & Blomstrom, M. (1992). Foreign investment and technology transfer: A simple model. European Economic Review, 36, 137-155.
Wang, T., Liu, H., & Zhang, X. (1990). China finance encyclopedia. Beijing: Economics Administration Publishing Bureau.
Wilson, C. (2000, August). Using the Internet to serve business customers. Community Bankers, 9(8), 16-19.
Yan, X. (1993). China business: Financial activity directory. Beijing: Beijing University of Science and Technology.
Young, S., Hood, N., & Peters, E. (1994). Multinational enterprises and regional economic development. Regional Studies, 14(4), 489-502.
Zeng, M. (2001). Internet banking urged. China Daily. Retrieved April 26, 2001 from the World Wide Web: http://chinadaily.com.cn.net/cover/storydb/2001/04/21/cb-2bank.421.html
Zheng, Y. (2000). Foreign banks scale back even as China pledges to open more. China Web. Retrieved December 21, 2000 from the World Wide Web: http://www.chinaweb.com/eng…/cw_html/thebigissue/wtodeal/HK10326.html
BIOGRAPHICAL SKETCH
Michelle W. L. Fong is a lecturer in the School of Applied Economics, Victoria University. Prior to her academic and research career, she worked with a variety of business systems in corporations in Singapore, Malaysia, China and Australia. This experience gave her an insight into the information technology applications within these organizations, and spurred her research interest in the adoption, diffusion and leapfrogging of information technology.
162 Bryant & Syan
Adopting The Process View: A Case Study of Modeling Change In The Not-For-Profit Sector
Antony Bryant, Leeds Metropolitan University, UK
Veena Syan, Forzani Group, Canada
EXECUTIVE SUMMARY
This case study focuses on the operation of an adoption agency in the UK, illustrating the issues involved in a small, not-for-profit organization seeking to respond to the pressures to streamline and automate its routines and procedures. It illustrates the limitations of inadequately planned IT-centered initiatives, and how such strategies can be redeemed by process-oriented methods, specifically those derived from a combined BPR and soft systems approach. It also exemplifies the critical importance of organizational issues and the constraints they impose on the effective implementation of IT. The methods involved demonstrate the strengths and limitations of a business process orientation, and show how BPR can be applied to an organization where professionals, employees and volunteers work together and coordinate their activities. The overall conclusions of the case point to ways in which the organization’s processes can be improved and aligned, placing it in a far better position to take advantage of IT and associated technologies, both within the organization and with regard to its main sources of support and collaboration. As such, it is a case study in organizational preparedness for IT rather than a straightforward study of an IT application.
BACKGROUND
NCH was founded in 1869 by the Methodist Minister Thomas Bowman Stephenson, and was known as ‘The Children’s Home’, providing care and support to orphans and other children in need (Philpot, 1994, p. 23). The organization currently (mid-2001) provides residential care and various types of community-based services to improve the lives of children and families suffering from poverty. It is also influential in government social policy regarding poverty, unemployment, homelessness and children’s rights. Over the 130 years of its existence, the emphasis of service provision has shifted from residential homes and schools to community-based projects throughout the United Kingdom (UK). Copyright © 2002, Idea Group Publishing.
Adopting the Process View 163
The whole organization currently employs approximately 4,000 full- and part-time people. There are approximately 436 projects managed by NCH that are geographically dispersed. These projects are regionally based and are accountable to a Regional Office, which is overseen by NCH Head Office, based in London. The current strategic aim of NCH is ‘to improve the quality of life of the most vulnerable children and young people’ (NCH Action For Children, 1999). Adoption NCH Yorkshire (A-NCH-Y) is based in the north-east region of England. A-NCH-Y is one of four NCH adoption and fostering projects in England. It is a strategic business unit (SBU) of the overall organization and is accountable to the Regional Office in Harrogate (12 miles away). Its activities are governed by the Adoption Agencies Regulations legislation. A-NCH-Y consists of four full-time and two part-time social workers, one full-time and one part-time administrative worker, one Project Manager, one Senior Practitioner and four sessional workers. Sessional workers are qualified social workers who are allocated work when the caseload of the established team of social workers is too heavy to take on new assessments. The Project Manager has overall responsibility for the human and financial resources of the project. The social workers are responsible for recruiting, training and assessing prospective adoptive families, as well as matching and placing children with approved families, and supervising the adoption placements. The strategic aim of A-NCH-Y is ‘to enhance the lives of children needing permanent family placements, by providing a specialized quality adoption service to Local Authorities’.
The project currently provides a wide range of services, such as:
• Recruitment, preparation, assessment and approval of prospective adoptive families
• Post-placement support to families and children
• Post-adoption support to families
• Access to records and counselling for adults previously placed for adoption through NCH
• Adoption counselling under Section 51 of the Adoption Agencies Regulation 1983
Overview of the Adoption Process
The adoption process begins with A-NCH-Y marketing its services through information meetings and publicity to the general public and Local Authorities. Further information is sent out in response to inquiries made by interested parties. Prospective families are selected according to basic criteria for adoption, including age, relationship status, health and the absence of criminal records. If the criteria are fulfilled, the family is eligible to apply to adopt children. Individuals or families who can offer a secure and stable placement, and who meet the recruitment criteria, are eligible to adopt. Families undergo intensive assessment and preparation with a project social worker, and statutory references, including Health, Police and Social Services, must be obtained. The completed assessment report is presented to the Adoption Panel, where recommendations about the suitability of applicants are made. The Regional Director of Children’s Services makes the final decision about approval of the applicants. Profiles of approved families are then circulated to Local Authorities who have children in need of permanent placements. Local Authorities send profiles of children for whom they are seeking families to Adoption NCH (covering all four adoption projects), in order to find a suitable match between a child and an approved family. Local Authorities pay an agreed inter-agency fee on behalf of the child for the approved placement offered by A-NCH. A child placed with a family is monitored for progress and adjustment. If the child and family are progressing well, the family can apply to the Court for an Adoption Order, giving them legal rights as parents. After the Adoption Order has been granted, there is no further statutory involvement for A-NCH-Y or the Local Authority with the family. The family and child are, however, able to obtain post-adoption support at any time.
On completion of the adoption process, the completed files are stored in the Adoption Archives at A-NCH South East, for future access and post-adoption counselling. Legislation and public sector involvement govern the adoption process. The roles and responsibilities of Local Authorities and voluntary agencies are defined in the Children Act 1989. The
Local Authority has a statutory responsibility to provide an adoption service and receives Government grants to provide such services. NCH and Local Authorities work in partnership on many services, such as adoption, in order to provide the best service for children and families. Voluntary organizations are inspected and registered every three years by the Department of Health Inspectors in order to be allowed to continue to function as adoption agencies. Although a voluntary and not-for-profit organization, A-NCH thus operates in an area in which there is competition and diversity. Furthermore, there are varying and inconsistent patterns of demand for placements. As such, A-NCH needs to maintain constant awareness of changes and potential liabilities, and its operations–or business processes–need to be flexible and resilient enough to respond to such pressures.
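The adoption process described above is essentially a linear workflow with defined responsibilities at each stage. A minimal sketch of that workflow as an ordered sequence of stages (the stage names paraphrase the case narrative; A-NCH-Y itself used no such system at the time of the study):

```python
# Illustrative only: stage names are paraphrased from the case description.
STAGES = [
    "enquiry",            # response to marketing and information meetings
    "eligibility_check",  # basic criteria: age, relationship status, health, records
    "assessment",         # intensive assessment with a project social worker
    "panel_review",       # Adoption Panel recommends on suitability
    "approval",           # Regional Director of Children's Services decides
    "matching",           # profiles circulated to Local Authorities
    "placement",          # child placed with a family and monitored
    "adoption_order",     # family applies to the Court
    "archived",           # completed file stored in the Adoption Archives
]

def advance(stage: str) -> str:
    """Return the next stage in the process, or the same stage if final."""
    i = STAGES.index(stage)
    return STAGES[min(i + 1, len(STAGES) - 1)]

# Walk a hypothetical case through the whole process.
case = "enquiry"
while case != "archived":
    case = advance(case)
print(case)  # archived
```

Making the sequence explicit in this way is the simplest form of the process modelling discussed later in the case: each stage can then be annotated with its owner and the information it requires.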
SETTING THE STAGE
The first stage of the study involved an analysis of the specific strengths and weaknesses of the existing context at A-NCH-Y. This was done using a SWOT1 analysis coupled with a PLEST2 exercise. The SWOT analysis highlighted A-NCH-Y’s strengths, specifically:
• its good public image and its reputation for providing a high-quality service to clients and collaborating organizations;
• the project’s management structure, which is simple, with clear lines of accountability, providing support and direction to the staff;
• the importance attached to the concept of best value throughout the organization, whereby best value for money is provided to families and to Local Authorities in their inter-agency agreements.
These strengths can be used as a resource to support the application of BPR and to make business processes perform more consistently (see Table 1). The weaknesses that specifically relate to IT and the general use of information included:
• ineffective management information systems and database;
• lack of training in the use of information technology;
• lack of systematic recording of information gained from networking.
While the SWOT exercise can identify internal aspects of the organization, external ones can similarly be identified using a PLEST approach (Hill & Jones, 1992). Undertaking an external analysis is particularly important for A-NCH-Y, since it is highly sensitive to factors that may affect its reputation and public standing. The PLEST exercise highlighted many external factors and future developments that need to be taken into account. The political factors include government initiatives and policies, such as Quality Protects and the White Paper on Modernizing Social Services (Department of Health, 1998). These initiatives directly affect Local Authorities and voluntary organizations.
All adoption agencies must operate according to existing legislation covering the welfare and rights of children.3 All aspects of A-NCH activity must adhere to the relevant legal framework, and the organization’s practices must change in response to changes in legislation at both national and international levels. The economic cycle has always had an impact on the readiness of people to adopt, but changes in career patterns and the increasing uncertainty of future employment, even for middle-class and professional people, have had a significant effect on motivation and commitment to adopt children. Levels of poverty have been increasing over the years, reaching 10.3 million people in Britain in 1995. Children are the most affected group, with 25% of British children living in poverty.4 Therefore, children are becoming more vulnerable and need additional care and support. Furthermore, the changing political and economic situations in many other countries have resulted in an increasing number of abandoned and orphaned children. Consequently the number of international adoptions may increase, affecting the chances of children in the UK who need to be placed for adoption.5
Table 1: SWOT Analysis

Strengths
• Skilled and experienced staff with vast knowledge of the adoption field and diverse skills
• Flexibility in meeting different needs of children
• Established reputation of NCH through the corporate image
• Continuous support provided to families throughout the adoption process and provision of a post-adoption support service
• Prompt response to initial enquiries
• Provides best value for customers by providing post-adoption support

Weaknesses
• Ineffective management information systems and database
• Inadequate human resources to cope with increasing demands
• Lack of training for administrative staff
• General lack of human resources
• Reliance on slow and unreliable manual systems
• Poor knowledge about the composition of the local community
• Coverage of a large geographical area–increases cost and pressure on staff
• Lack of staff knowledge of, and time for, business processes
• Lack of a systematic system for recording information gained through networking

Opportunities
• New government initiatives offer new opportunities for working in partnership with Local Authorities
• Location in a multi-cultural city–more opportunities to develop relationships in various communities
• Expansion of the Project’s services in the East Coast area
• Change in the structure of Local Authorities–more unitary authorities with which to form partnerships
• The Children’s Promise campaign–sponsorship and raising the profile of NCH
• Formation of consortiums–gathering information about children’s needs

Threats
• Financial insecurity
• Increased inter-agency fee–Local Authorities less willing to buy the services provided by Adoption NCH
• Inconsistent pattern of demand for placements–across children’s age ranges
• Formation of consortiums–preference is given to Local Authorities
• Developments in IVF treatments–fewer families coming forward with requests to adopt
• Widely available contraception–fewer babies and children coming up for adoption
• Competition from new entrants
Wider social factors are also influential in this area, since changes in lifestyle and demographic trends alter the ages at which people decide to marry, the number of children they have and the ages at which they decide to try to adopt. General attitudes to births outside marriage, single-parent families and cultural diversity are also critical. There has been a decline in the number of adoptions in England and Wales from 25,000 in 1968 to 5,306 in 1997, probably a result of wider availability and awareness of contraception, and acceptance of lone parent families (Department of Health, 1999). Attitudes towards illegitimacy have changed, with more acceptance and support for children born to unmarried mothers.
There is more acceptance of, and better provision for, single-parent families, with an estimated 2.8 million single-parent families in the UK in 1996, although one-parent families represent 20% of families living in poverty. Changes in family structures and attitudes mean that A-NCH-Y must be prepared to provide a service to meet changing and diversifying needs. It was forecast that there would be a 1.3% increase in the number of children aged 0-4 years, and a 10.5% increase in the number of children aged 5-14 years, in the period 1991 to 2001 (Mintel, 1994, p. 37). So there will be increased pressure to find placements for 5-14 year-olds. This has taken place against a background of forecasts of a 9.1% increase in the 30-64 year age group; but with an aging population, the actual increase in the 30-50 year age range (i.e., those eligible to adopt) may not be as great. Technological change is affecting organizational routines. In many cases people think that ‘more’ will mean ‘better’, and so organizations seek to increase their use of and reliance on technology – particularly computer and communications technology (IT). On the other hand, organizations such as A-NCH often feel that since their main focus is on human issues and professional expertise, they can ignore or downplay the impact of technology. This tension is tempered by a growing recognition that vast amounts of information must be dealt with from a wide variety of sources. This information has to be managed effectively so that the project can meet its operational and legal requirements. Consequently the demand for IT resources has grown in some project areas and across the organization as a whole. However, this demand extends beyond the use of computers: many organizations are using the Internet to reach a wider audience to market their products and services, and it would be in A-NCH’s interest to market its services on the Internet to reach a wider, more diverse audience.
NCH has now (2001) established a Web site giving basic information – but nothing on the individual projects. In general, the utilization of IT at A-NCH-Y has been minimal. Until 1999 there were only two computers at A-NCH-Y, one used by a full-time and the other by a part-time administrative worker, to record social work visits and type reports for the Adoption Panel. Only these two members of administrative staff had access to the computers and the Family Placement database. There were several manual methods in place to collect and record various information, such as the number of families coming for assessment per quarter. Some manual systems were created to provide information as input to the database for the production of quarterly and annual statistics. Meanwhile, the Project Manager manually produced the statistical information required by Regional and Head Office. Computers were thus poorly exploited at A-NCH-Y. Recently the two computers, which have only word-processing capabilities, were supplemented with two new ones: one with database access and one with Internet access. The social workers can now use the two original computers for word-processing tasks, but the software is incompatible with that on the two new computers. Therefore, when administrative staff are asked to proofread or print any document from the old computers, a great deal of time is spent converting documents and checking with the social workers in case any work is lost. The Internet was introduced to the Project in 2000, although few staff were trained to use e-mail or Internet facilities. Furthermore, there is contention between staff, as the Internet machine is also required for administrative work. This means that there are considerable delays in gaining access to the machine, and frequent disruption to administrative tasks. The lack of any planned and coherent use of IT not only affects the operational effectiveness of the Project, but also threatens its financial viability.
The responsibility for raising fees with Local Authorities for the adoption service is shared between the Project and the Regional Office. There were discrepancies between the financial systems at the two locations, causing delays in raising invoices, delays in receiving payments and the occasional loss of payment. Attempts were made to resolve the discrepancies between the two financial systems, with the Project Manager creating spreadsheets to record financial data; however, these spreadsheets were used only by the Project, while the Regional Office recorded financial information in a different format, so the discrepancy in recording information remained. To resolve these discrepancies, the Finance Manager agreed to meet with the Project Manager regularly
to go through the invoices, but this was not always possible due to other pressures of work. The general position is that a great deal of management time is spent counting the fees received, and nobody on the Project has the time or expertise to develop PC-based facilities, let alone train others to use them. The Modernizing Social Services agenda introduced by the British Government in 1999 sets a clear target for Local Authorities to have improved IT-based management information systems fully operational by 2004. The Government considers IT to be a key enabler for Local Authorities to improve the provision of services to children and families in need. However, the implications of this for voluntary organizations have not been clarified. Regardless of this, organizations such as NCH have to resolve the matter. For instance, in 2000, the Government decided to create a National Adoption Register to consolidate the children and families matching process all over the UK, to prevent mismatches from occurring. This Register is a national database with which all organizations involved with adoption will have to work. In order to retain its role within the adoptions process, NCH has no choice but to improve its technological capabilities and align itself with this national database.
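The invoice discrepancies described above are, at bottom, a record-matching problem: two offices keeping the same fees in incompatible formats. A minimal sketch of how the two ledgers could be reconciled automatically (the invoice numbers, amounts and field names are invented for illustration; neither office used such a tool):

```python
# Hypothetical fee ledgers kept by the Project and the Regional Office.
# All identifiers and figures are invented for illustration.
project_records = {"INV-001": 2500.0, "INV-002": 3100.0, "INV-003": 2500.0}
regional_records = {"INV-001": 2500.0, "INV-002": 1300.0, "INV-004": 2700.0}

def reconcile(project, regional):
    """Compare two invoice ledgers and list every mismatch found."""
    issues = []
    for inv in sorted(set(project) | set(regional)):
        if inv not in regional:
            issues.append((inv, "missing from regional ledger"))
        elif inv not in project:
            issues.append((inv, "missing from project ledger"))
        elif project[inv] != regional[inv]:
            issues.append((inv, f"amounts differ: {project[inv]} vs {regional[inv]}"))
    return issues

for inv, problem in reconcile(project_records, regional_records):
    print(inv, problem)
```

Even a simple routine of this kind would replace the manual invoice-by-invoice meetings described in the case; the harder problem, as the case makes clear, is agreeing on a shared recording format in the first place.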
CASE DESCRIPTION
The situation at NCH provides a classic example of the problems faced by a small organization seeking to harness the power and realize the potential of IT. Stafford Beer, writing in the 1970s, noted that… “The question which asks how to use the computer in the enterprise, is, in short, the wrong question. A better formulation is to ask how the enterprise should be run given that computers exist. The best version of all is the question asking, what, given computers, the enterprise now is” (Beer, 1981; stress in original). This sentiment was echoed more recently, and perhaps notoriously, by Hammer and Champy in their book on BPR, where they state that ‘a company that cannot change the way it thinks about information technology cannot reengineer’ (1993, p. 83). It might now be thought that BPR is outdated and discredited, and of little or no applicability to a small, not-for-profit organization; this would be an unfortunate misconception. The application of the central principles of BPR to A-NCH-Y demonstrates its effectiveness and relevance, while also highlighting some shortcomings and how these can be ameliorated by the use of a soft systems approach. What the extracts from Beer, and from Hammer and Champy, illustrate is that IT has to be incorporated into organizations as part of a wider process of organizational change, and that it is insufficient and fallacious to see IT simply as a neutral tool that makes things happen faster. Organizations that fail to understand this will fall far short of any objectives that have been set for an IT-focused project or general strategy. Worse still, if no specific objectives have been articulated, there will often be a diffuse and indeterminate air of dissatisfaction pervading certain parts of the organization, impairing its operational effectiveness.
In contexts such as these, the key is to move away from a focus on the technology to a larger, more complex and challenging perspective derived from a process orientation. This can be drawn from work on BPR, but must also take account of earlier work on organizational maturity – i.e., the preparedness of an organization to develop its operational routines, and to add new ones – in the light of the potential offered by technological advances. (See for instance Galliers & Sutherland, 1999, which discusses the work of Gibson & Nolan, and others who have developed the topic since the 1970s.) The present study shows that the process perspective can provide a deep understanding of the business context even for not-for-profit organizations, enhancing analysis of organizational performance and providing the basis for improvement and greater effectiveness. From such a foundation it is possible to develop clear ideas about the role and potential of IT and associated technologies in the specific organizational context. The SWOT and PLEST analyses, plus other indications of procedural difficulties, provide a foundation for a thorough analysis and redesign of existing processes. The general strategy must be
168 Bryant & Syan
to expose and overcome weaknesses and threats, respond to challenges emanating from the external environment, and build upon strengths to take advantage of new opportunities. Simply 'throwing computers at the problem' has already been shown to be ineffective and counter-productive; it wastes resources and is proving demoralizing. The application of BPR in this context demonstrates the ways in which the organization needs to be reengineered. This is not BPR in the sense of radical transformation, down-sizing and other negative BPR-associated practices, but process modelling as a way of identifying responsibilities and roles, redesigning inefficient processes, providing information and training for the redesigned processes, and generally improving the overall performance of A-NCH-Y.6 At certain points, decisions were taken within A-NCH regarding the introduction of some form of IT, but without any overall grasp of what the organizational ramifications of IT application might be. This is a good illustration of an organization that is immature in its use of technology (see Galliers & Sutherland, 1999); there is a focus on the tangible aspects of the technology and a neglect of critical management issues such as staff, skills, effects and resources. Here are some classic symptoms:
• Within A-NCH-Y, final decisions regarding the use and introduction of IT were not the sole responsibility of any particular person or group of people.
• The introduction and integration of IT within general operations and procedures was not part of any corporate strategic planning.
• Consequently, changes in the use of IT had taken place by accident or stealth rather than by design.
The most significant development was the introduction of the Family Placement database in 1996.
The need for a database originated from the Social Services Inspectorate (a government body that inspects voluntary adoption agencies every three years) in 1994, after it found inconsistencies in the annual statistics gathered from the four different A-NCH projects. Initial discussions took place between senior NCH managers and external IT suppliers, and a database solution was piloted at the A-NCH project in the Midlands. The system was then installed in the other three projects, with hands-on training given to all the Project managers and a few administrative staff. All four A-NCH Projects were provided with the database so that a standard form of quarterly and annual statistics could be supplied to the Head Office. The decision to introduce the database was made initially by the Project Managers and then by the Adoption Sub-Committee. Each Project was responsible for maintaining its own database. After two to three years of use, the database was found to be inadequate, and it went through extensive modification, carried out by a different IT supplier. One source of the problems with the initial version was that the designer of the system had no knowledge of the adoption process, and made little effort to consult experts. IT support for technical problems was provided centrally from Head Office. The Regional Director of Children's Services in the North East region contributed financially to the costs of the database for A-NCH-Y, but was unable to contribute further. Once the database was operational in Yorkshire, there was little input from this person, or any equivalent colleagues, for the maintenance and improvement of the system. Seemingly, senior management at NCH saw the database simply as a piece of equipment that could be purchased, installed and operated at the flick of a switch – again, a classic symptom of technological immaturity: managing the technology rather than the information.
One administrator at A-NCH-Y and the Project Manager were given training on the use of the database. Unfortunately the administrator subsequently left the Project, and the Project Manager was unable to attend the full training programme. The remainder of the project staff had limited IT training in general, and so were unable to make progress on their own in learning about the database and other applications of IT. Any priority for training and implementation of IT within A-NCH was focused on the Head Office and Regional Office, rather than on Project staff. Senior management at NCH operate with the belief that Head and Regional offices have the primary information management requirements and hence the most pressing need for IT; budgetary units such as A-NCH-Y, being smaller, are
Adopting the Process View 169
considered to have less pressing requirements. Again, this is symptomatic of a focus on the technology rather than the information, since the source of much of the key organizational information is at the project level, where the volume of information flow is vast and increasing. Even when the technology was introduced at the regional level, the limited number of computers and the competing demands for access meant that the Project team relied on one person to update the database and produce statistical information. The database was not always updated on time, and so its output was not considered reliable; manual systems therefore continued to be used for quarterly and annual statistics. This resulted in a situation where no one was sure which source of information was accurate, and so many updating, recording and reporting processes were duplicated. The situation was exacerbated because the introduction of the database had increased the administrative workload. This is not unusual, since any IT system requires an element of housekeeping and maintenance; unfortunately there was no recognition of this at the Regional management level. Nor was any specific action taken to allow for the change in demand for skills and human resources for the new system. The problems mounted:
• Existing staff did not have the skills to use the IT available to them effectively.
• The database was not meeting the Project's basic needs, and the requirements were also changing.
• Regional line management was unable to recognize the issues and pressures surrounding the use of an inadequate database and duplicated processes.
• The cost of further enhancements to the existing database, or the possibility of a new information system, was not considered, particularly since it was felt that the existing technology was not being used in a cost-effective manner – a classic Catch-22.
Nothing could be done at the Project level, since there were considerable financial constraints.
The Project was meant to be developing financial independence, so it was difficult to invest funds in more hardware, more training, additional personnel or extended use of IT in general. In any case, such decisions would have required approval from NCH itself – from both the NCH committee and the adoption sub-committee. Since the latter is accountable under the Adoption Agencies Regulations, any changes involve a great deal of bureaucracy, and hence delay. Moreover, A-NCH-Y could not have embarked on such changes without the agreement of the other adoption Projects, as NCH was keen to ensure consistency across all of them. In addition to the managerial and operational constraints, staff felt fearful about changing from manual to automated systems. The fear originated from lack of knowledge, changing technology and job insecurity. Staff felt they would come under more pressure to ensure that information was regularly updated. The administrative staff felt insecure about social workers using computers to produce their own reports, as this was seen as infringing on administrative responsibilities. Furthermore, management considered social workers using computers to be a poor use of expensive resources, as they are paid more than administrative staff. Again, a case of misunderstanding: as far as management were concerned, dealing with IT was a menial task, akin to typing or filing. The Project's established culture of specialization in the childcare field led staff to concentrate on the provision of a social service. IT was seen purely as a way of automating and hastening support, rather than as facilitating a more effective flow of information and enhanced operational processes. This aligned with senior management policies, which seem to have been guided by a view of IT, and the Family finders database in particular, as a way of retaining centralized control rather than as a way of enhancing and evolving operational effectiveness.
For all the reasons given above, the chances of any smooth transition to an extended use of IT – led by successful operation of the database and changes in procedures – were exceedingly poor. On the other hand, the external and internal pressures to develop IT-based systems were immense and growing. Faced with such an impasse, the Project Manager was eager to try any solution that would alleviate the symptoms and possibly promote genuine progress towards more effective operation,
clearer responsibilities and enhanced use of IT across the various groups involved in the adoption process. The process modelling exercise was one such opportunity.
Process Identification

Following the preparatory SWOT and PLEST exercises, the next stage of the study was to gain an understanding of A-NCH-Y itself. This involved identifying the processes that constitute the main activities of the Project. A process consists of inputs and a transformation that produces the required output. The transformation element is considered to be the 'value-adding activity' (Earl, 1994, p. 13). Organizations often ignore processes or fail to recognize their importance, particularly since they cross departmental boundaries. In this sense BPR is a misnomer: the 'R' stands for re-engineering, but the processes were never engineered in the first place; they simply developed. BPR highlights the significance of processes within organizations: if they are poorly understood, dysfunctional and mismanaged, they will disrupt organizational activities to the detriment of performance. Moreover, efforts to improve performance that fail to take account of processes will falter or fail, particularly schemes limited to 'throwing computers at a problem'. Earl's (1994) categorization of processes was used to identify the processes at A-NCH-Y. First, core processes, vital to business functionality, focus directly on the external customer; they are usually the primary activities of the value chain. Secondly, support processes focus on internal customers and tend to support the core processes; these are often the administrative, secondary activities of the value chain. Thirdly, business network processes extend beyond organizational boundaries to include suppliers, customers and associates; the redesign of external processes can often prompt a re-evaluation of business scope and alter the organization's position in the market. Finally, management processes enable organizations to plan, organize and control resources.
Analyzing an organization in terms of these four types of business processes often confronts the status quo, since it challenges and undermines people's assumptions, accepted structures and hierarchies. At the very least it demands a clear justification of present procedures and operations. In order to identify the processes and activities at A-NCH-Y, the project team were asked to share their working knowledge of the project. The Project Manager, project workers and administrative staff were interviewed face-to-face using a standard, open-ended questionnaire. The project workers were asked questions concerning:
• their involvement and responsibilities in the project;
• what they consider to be the main project processes;
• areas of difficulty or problems;
• areas for improvement;
• information management;
• communication flows;
• the Family Placement database.
The project workers were helpful in explaining the various activities that take place within the project, and in identifying the areas where problems tend to occur. This does not mean that they immediately understood the concept of a process. On the contrary, many had no idea that the organization could be seen in these terms, since a process does not have the same visibility as a department, project or geographical location. More confusingly, many interviewees had their own, often esoteric, interpretation of what 'process' meant, usually one that did not concur with our definition. Time was therefore taken to explain the concept carefully, in terms of inputs, activities and outputs, with defined starting and finishing points. As a result the project workers were able to grasp the idea, and to assist the researchers in identifying the processes that exist at the Project level. One useful consequence of the interviews was that respondents identified problem areas and issues, often pointing to symptoms of problems as well as the overall consequence of delays in the
adoption process. Hammer and Champy (1993) suggest some symptoms of dysfunctional processes, such as vast amounts of information being exchanged, or excessive time spent making and storing numerous copies of documents. Responses from interviewees echoed precisely these observations, and so provided a basis for process identification and assessment for intervention. One outcome of this exercise was a list and categorization of the processes (see Table 2). This was shown and explained to Project staff, the Project Manager and the Regional Director of Children's Services for review and validation. Certain processes could be placed in several categories. For example, the placement planning process is a support process to the core placement process, yet it also extends beyond the organizational boundaries, to include liaison with Local Authorities, placing it in the business network category. Certain difficulties therefore emerged in using Earl's categorization: the definitions of each type of process do not draw sufficiently sharp boundaries where numerous processes cross various functions and activities of the project. Nevertheless, Earl's categorization was useful in clarifying and prioritizing processes, particularly the core processes of the project. Review and categorization of the processes helped in the selection of processes for redesign.
Process Mapping

Once the processes had been identified and categorized, the process stages were mapped onto process flow charts. Process mapping is a useful technique as it 'provides a graphical description of the activities, inputs and outputs of a process' (McManus, 1996, p. 23).7 Furthermore, process mapping helps to identify problem areas, bottlenecks and responsibilities for each process. Exhaustive process mapping was performed for all core, support, business network and management processes – 40 processes in total. Process mapping was carried out by drawing process flow charts for each process, including the step-by-step stages described by interviewees, based on their knowledge of each process. This was validated and extended via discussions with project workers and observation of what they actually did – to check the account against what they said should be done. The process flow charts were reviewed with the Project Manager, in order to clarify the distinction between the processes actually being performed and the managerial view of what should be performed, and how. This was sufficiently successful for A-NCH-Y to make revisions to its own adoption process flow chart based on the chart produced by the process mapping exercise. Process mapping can help to determine whether activities are performed sequentially or in parallel. Furthermore, it assists in identifying the points at which processes cross functional areas, and the relationships between processes.

Table 2: Process Categories

Process | Category | Dysfunction | Important and/or Feasible
Information Management | Core | Lots of information not effectively handled | BOTH
Statutory References | Core | Various local authorities – different procedures | BOTH
Circulation | Core | Unclear responsibility; delays | BOTH
Matching | Core | Information not handled well; roles unclear | BOTH
Placement | Core | Admin & database not systematically informed | BOTH
Post Adoption | Support | Time spent not monitored | BOTH
Adoption Archives | Support | Files prepared for archives – delay | BOTH
Child Protection | Support | Unresolved cases & conflict – no partnership with LAs | BOTH
Networking with Local Authorities | Business Network | No regular meetings to exchange information; issues not recorded | BOTH
Communication | Management | Internal & external messages not effectively handled | BOTH
Human Resource | Management | Delays; conflict of responsibility; pressure due to lack of staff | Important
Financial | Management | Monthly cost/income report not received; unaware of budget details | Important
Technology | Management | Insufficient terminals; unable to extract statistical information from database | Important
Control System | Management | No performance measures for the project or Adoption Process | BOTH
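Earl's four-way categorization, as applied in Table 2, lends itself to a simple machine-readable representation, which can be useful when reviewing and validating a process inventory with staff. The sketch below is illustrative only: the records paraphrase a few Table 2 rows, and the field layout is our own, not something used in the study.

```python
from collections import defaultdict

# Earl's (1994) four process types, as used in Table 2.
CATEGORIES = ("Core", "Support", "Business Network", "Management")

# A few Table 2 rows, paraphrased; (name, category, dysfunction).
processes = [
    ("Information Management", "Core", "information not effectively handled"),
    ("Matching", "Core", "information not handled well; roles unclear"),
    ("Post Adoption", "Support", "time spent not monitored"),
    ("Networking with Local Authorities", "Business Network",
     "no regular meetings; issues not recorded"),
    ("Control System", "Management", "no performance measures"),
]

# Group processes by Earl's category for review and validation.
by_category = defaultdict(list)
for name, category, dysfunction in processes:
    assert category in CATEGORIES, f"unknown category: {category}"
    by_category[category].append(name)

for category in CATEGORIES:
    print(f"{category}: {', '.join(by_category[category])}")
```

Grouping the inventory this way makes it immediately visible, for instance, that most of the dysfunctional processes identified at A-NCH-Y were core or management processes.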
Selection of Processes

In the course of process definition and mapping, a range of issues and problem areas became clear. In order to clarify some of these issues in more detail, a soft systems methodology (SSM) was used to move from identification of a problem situation to an understanding of the diverse nature of the problem, something that was not always apparent to all, or indeed any, of the problem owners.8 One outcome of both the process modelling and the SSM exercises was a table of processes and their respective key issues. A combination of criteria proposed by Hammer and Champy (1993) and Davenport (1993) was then used to identify which processes to consider for redesign. The criteria involve determining:
• Dysfunction – identifying processes that cause significant delay and impair overall effectiveness.
• Importance – identifying processes that are critical to the project and its consumers.
• Feasibility – identifying processes that are likely to be successfully redesigned.
All the processes were considered in terms of all three criteria, resulting in a list of dysfunctional and important processes that are feasible to redesign. All the core processes were important, and all were feasible for redesign. The project workers and Project Manager highlighted processes with problems, such as Circulation and Matching, as they were aware of delays resulting from breaks in information flow or communication. Difficulties arose in assessing the nature and extent of some forms of dysfunction. Moreover, it was sometimes difficult to determine whether a problem had a minor or major impact on the overall adoption process, whether it was a management problem, and whether slight adjustments or radical redesign were required to solve the problem and improve performance. The importance of processes was relatively easy to determine, as core processes must be considered the most important and critical to the functioning of the project.
Nevertheless, there are processes in other categories, such as Networking with Local Authorities in the business network category, that are key to project success and critically dysfunctional. The feasibility criterion has to be treated carefully; it might be supposed that all processes can be successfully redesigned. However, there are often factors that affect the success of redesign efforts, or even prevent redesign from being initiated – e.g., financial constraints, limited human resources, and the time and scale of redesign. A-NCH-Y is a strategic business unit with a small team of employees, a heavy caseload, numerous responsibilities and a limited budget. Any demand for increased investment in technology or more human resources must be carefully considered. Therefore, processes requiring large-scale redesign, with significant investment in technology and the formation of BPR project teams, were deemed unsuitable and unfeasible for A-NCH-Y at this time. Two processes met all the criteria: Information Management is both a core and a management process; Networking with Local Authorities is an important business network process. Both were feasible for redesign. While identifying problems in specific processes, it became apparent that there were many
recurrent problems common to many processes: accumulating paperwork, delays in sending and receiving information, late payments, communication breakdowns, and duplication of effort. Staff were aware of these, but had been unable to confront them given the Project's limited resources. Some process activities were being completed, but only after much delay, and with increasing stress placed on staff. Other activities were often left incomplete or misunderstood. Since there were no written procedures for some processes and basic tasks, many activities were being performed on an ad hoc basis. If nothing else, the process modelling exercise at least clarified what ought to be done, and it was no surprise that Project staff were keenly interested in the details of the process models of the redesigned processes that emerged from the study. Indeed, the process charts were left with the staff on completion. The two processes selected to undergo process redesign were:
• Information Management (IM) – Core Process – Dysfunctional & Important
• Networking with Local Authorities (NLA) – Business Network Process – Dysfunctional & Important
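The three-criteria screen described above amounts to a simple conjunctive filter: a process is a redesign candidate only if it is dysfunctional, important, and feasible to redesign. A minimal sketch follows; the flag values are illustrative, loosely echoing the case's outcome, and are not the study's actual scoring data.

```python
# Each candidate process is screened against the combined
# Hammer & Champy (1993) / Davenport (1993) criteria.
# Values are illustrative, not taken from the study itself.
candidates = {
    "Information Management":            {"dysfunctional": True,  "important": True, "feasible": True},
    "Networking with Local Authorities": {"dysfunctional": True,  "important": True, "feasible": True},
    "Human Resource":                    {"dysfunctional": True,  "important": True, "feasible": False},
    "Post Adoption":                     {"dysfunctional": False, "important": True, "feasible": True},
}

# A process is selected for redesign only if it meets all three criteria.
selected = [
    name
    for name, c in candidates.items()
    if c["dysfunctional"] and c["important"] and c["feasible"]
]

print(selected)
```

Note that a process failing a single criterion (e.g., one whose redesign is currently infeasible on cost grounds) is deferred rather than dropped, which mirrors the case's handling of processes like Human Resource.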
CURRENT CHALLENGES/PROBLEMS FACING THE ORGANIZATION

At this point a study that concentrated on an IT application would continue with consideration of the challenges that arise after completion of the scheme. This case study, on the other hand, concerns the problems – and opportunities – that have arisen in the context of an organization that has introduced IT in a somewhat haphazard manner. As such, many of the problems facing the organization predate the introduction of IT; but the opportunities afforded by the technology provide a way forward. Initially the issues relating to the two selected processes are outlined, followed by a discussion of the ways in which the organization can develop, making best use of IT in the context of a process-driven orientation, eventually leading towards what has been termed 'technological maturity'. It should be understood, however, that solutions to the problems outlined will always require a balance of technical and organizational changes. The key questions to ask while reading this final section of the case concern the nature of this balance, and how best the organization can assess whether it has introduced effective change.
Information Management Process

The IM process is long and complex. Since the activities of A-NCH-Y are information dependent, the IM process affects all aspects of the project, and NCH as a whole. Problems with the process have extensive ramifications for the project, NCH, client families, Local Authorities and other organizations that rely on NCH in some way. Problems occur largely as a result of breaks in information flow. These may range from a minor problem, such as photocopying a medical report for distribution and filing, to a major one, such as a family or Panel members not having the assessment form passed on to them in time. Whatever the cause, the results are delays, conflict and confusion. The information management process is affected in the following ways:
• required information is not reaching people on time;
• the database is not updated in a timely and accurate manner;
• information is collected and processed in a haphazard manner;
• incoming information – particularly referrals – is not dealt with promptly and effectively;
• time is wasted looking for specific information in several places, as it is unclear where it should be held.
These problems are partly caused by a lack of human resources: a small administrative team is dealing with vast amounts of information, and with requests for that information from various sources. Crucially, the responsibility for and ownership of tasks is unclear. Certain tasks are considered administrative, whereas other tasks must be performed by social workers. This leads to conflict
and misunderstanding. For example, the Matching process relies on information received from Local Authorities about children who need adoptive placements. The task of registering receipt of this information is divided between administrative staff and Project social workers. Specific detailed information about the child, background and behavior must be recorded; but administrative staff are not technically qualified to do this, so it is left to one of the social workers, leading to delay at best and incomplete information at worst. This is a procedural issue rather than a technical one. The situation is further aggravated, however, by the lack of training and experience in using the Family Placement database. In order for it to function as an information resource, specific data must be entered into the database accurately and regularly; only then can it be used as the source of information and statistics. Generally, data can be entered into the database without a great deal of know-how, although this does not mean that all Project workers feel confident doing so. Accessing the database and extracting relevant information is another matter, and both Project workers and administrative staff experience mounting frustration with these tasks. This frustration increases when quarterly and annual statistics are required for the Head Office: since staff are unable to produce these using the database, they have to spend significant time and effort compiling the returns from various – often paper-based – sources and completing the forms manually. It was not feasible to redesign the entire process, since it is too complex and there are legal restrictions that have to be heeded at some stages.
Examples of Areas for Redesign – Information Management Process

Statutory Reference Request & Return
Problem: Currently all statutory references are requested from Local Authorities, on the assumption that Local Authorities perform all the checks; however, only some Local Authorities perform all the checks required, so any remaining checks have to be requested elsewhere and re-initiated. Administration must find out which checks have not been performed by the Local Authority – usually Police References – and must start the process again, spending a great deal of time and resources chasing the references requested. There is uncertainty about whether administrative staff or social workers should register the returned statutory references. Files are not centrally located; often they are in the possession of the social worker responsible for the case, and it is not feasible or practical for administrative staff to locate the file in order to register the statutory references.

Paperwork for Panel Meetings
The preparation for a Panel meeting is a cause of great stress for the Project team. The assessment report must be completed and sent to the family a minimum of 28 days before the Panel date; this is a legal requirement. During this time, any issues that arise from the document have to be discussed between the family and the social worker. Any changes must be reported to Administration so that the necessary amendments can be made, and so that Panel members receive the completed case files at least a week before the Panel date. Presently, the time to complete this process, from the typing stage to the Panel date, is six weeks, which includes the legally required 28 days; the time is calculated back from the Panel date. For practical reasons, the Panel papers should be sent to Panel members a week beforehand. This leaves administration less than two weeks to type the documents, which are lengthy and complex, and need to be done to a specified format.
In addition, administrators have to prepare other reports to inform the Panel about the progress of placements. If a family takes the full 28 days, which is not usual, then administration has less than a week to make any amendments and circulate the copies of all the documentation. Although this time scale appears to be reasonable, it is important to note that one member of administrative staff performs most of these tasks, and has other administrative duties. There
needs to be an allowance of extra time for certain aspects of this activity, but not at the expense of delaying the entire process for a family awaiting a decision, nor by adding to the workload of the social workers or administrative staff.

Response to Circulation – Receiving Child Referrals
Problem: There is a constant stream of child referrals and associated documentation into the Project. The child referrals are placed in a tray until the social workers are able to meet to register them. Once registered by the social workers, the referrals are passed back to Administration for all the details to be logged onto the database. However, owing to the number of referrals coming in and a lack of Administration time, referrals are piling up without being registered or logged onto the database, affecting the statistics relating to the number of children in need of placements.

Networking with Local Authorities Process
The second process selected for redesign was Networking with Local Authorities (NLA). Local Authorities use A-NCH-Y to provide approved families for children in need of permanent adoptive placements. This process should be achieving many goals and activities, e.g., regular meetings to exchange information about the need for certain types of families to meet the needs of children, and the establishment of good working relationships that would be beneficial when dealing with other matters such as child protection cases. However, many of these activities were not being performed. Meetings between A-NCH-Y and Local Authorities occurred on an ad hoc basis, and there was no routine for recording the information exchanged at these meetings.
The process symptoms are identified below:
• disagreements and tension between the Project and Local Authorities;
• formal complaints from families about Local Authorities;
• no systematic way of arranging meetings between Project and Local Authority workers;
• no system for recording information exchanged and issues raised at meetings;
• delays when dealing with Local Authorities regarding statutory references, since different Local Authorities use different procedures;
• failure to resolve issues quickly and easily;
• A-NCH-Y not finding out about the needs of children in a systematic way;
• A-NCH-Y used on a low-priority basis by some Local Authorities;
• 'matches' that involve geographically distant Local Authorities are not cost or time effective for the Project;
• no sharing of training resources;
• A-NCH-Y not receiving up-to-date information about changes and new initiatives occurring in Local Authorities;
• minimal and varying cooperation from Local Authorities in Child Protection cases, with blurred communication causing delays and confusion.

Required Outputs of the Networking with Local Authorities Process:
• Identify the needs of children (for the type of family required)
• Identify the backgrounds, ages, ethnicity, gender, special needs and religion of children
• Identify clear procedures and standards for Statutory References
• Resolution of conflict
• Identify new opportunities
• Establish working relationships with Local Authorities
New Challenges The next stage of the study involved articulating a series of challenges and targets for A-NCH-Y, establishing new process objectives and standards for the two selected processes. Having identified
176 Bryant & Syan
specific issues within the selected processes, the next step is for the project staff – and the organization as a whole – to move towards resolving the issues. This is best done by fostering a general acknowledgment of the shortcomings and their causes, complemented by a realization that the aim is to develop more effective processes, supported where relevant with enhanced IT capability. This can be summarized in the form of a new vision for each of the processes analyzed in detail. Process vision provides a means of identifying and clarifying the long-term goals of a specific process. The process vision acts as a point of reference from which a process can be redesigned to help meet the overall business objectives and deliver value. A new vision for an existing process consists of specific measurable objectives, which are not met by the existing process, and that should be met by the new process (Davenport, 1993). The articulated vision may include objectives relating to cost-effectiveness, timeliness and reducing the number of hand-offs (Martinsons, 1995). It is crucial that the process vision is clearly understood and shared by all stakeholders of the process. Davenport’s view of the emergence of a new vision starts from asking the question ‘How could we do things differently?’ He sees the development of a new vision as comprising activities oriented around a sequence of four questions (see Figure 1 - The Visioning Process). This can be applied to all processes identified in the case study, particularly the two selected for detailed analysis. Process objectives are derived from the process vision and consist of the overall process goal and the type of improvements sought (Davenport, 1993). Process objectives break down the overall process vision into component parts, helping to identify medium and long-term goals. 
Measuring performance and setting process standards will enable A-NCH-Y to determine the extent to which the selected processes are achieving their objectives. Reducing the time spent in each stage of the information management process is a key objective. According to Harvey (1994), the more complex a process, the more time it takes to complete; particularly where a process consists of several subprocesses and numerous activities–e.g., the IM process. Performance measurement may be established by benchmarking against best practice. Setting performance standards provides a practical means of assessing progress towards the objectives. Establishing standards should be incorporated into monitoring and evaluation of processes, in order to determine whether a process is reaching an acceptable minimum level of performance.
New Vision for the Information Management Process In order for the IM process vision to be established and its objectives to be achieved (see Table 3), information returned from external sources needs to be available at the right time, with clearly specified and understood roles and responsibilities. Each stage of the process must be formally identified. Furthermore, it is important that open communication exists between project workers. They must be encouraged and able to inform each other of important incoming and outgoing information, changes to assessment reports, the progress of statutory references and receipt of child referrals. In response to Davenport’s (1993) visioning question of why the process might not go right, it is possible that achieving the new objectives may be undermined by a lack of training, the lack of administrative staff and delayed response from external sources. Furthermore, constraints on technological, financial and human resources are a potential barrier to this process reaching its objectives.
New Vision for the Networking with Local Authorities Process In order for the networking process vision to be established, and its objectives achieved and targets to be met (see Table 4), specific conditions must prevail, e.g., effective planning to ensure that visits and meetings are arranged regularly. Furthermore, there must be willingness from both parties to create a working relationship, where communication must be open, in order for ideas and information to be exchanged. The purpose of meetings must be clearly identified at the outset, to ensure that the required outputs are achieved. The outputs of the networking process are identifying the needs of
Adopting the Process View 177
Figure 1: The Visioning Process "How could we do things differently?"
children, identifying clear standards and procedures for statutory references, resolution of conflict, and identification of new opportunities and of changes in the Local Authority environment. Contingencies must be built in to counter factors which may hinder the process – e.g., lack of planning, and the additional pressure of time and workload on Project workers. Setting standards will provide guidelines for A-NCH-Y, and effective planning will help process objectives to be met.
Enablers of Change The results that emerge from a process redesign exercise often confront existing procedures and assumptions. But such challenges have to be developed into a basis for profound change and improvement. This transformation from ‘challenge’ to ‘actualization’ requires identification and incorporation of what Davenport terms ‘enablers of change’. Examples of these potential drivers of change include financial, technological and human resources. IT is increasingly considered to be a powerful enabler of change, but this case study demonstrates all too clearly that technology is no substitute for intelligent analysis; careful preparation; and continuous assessment, learning and guidance. Organizations need to mature to a stage where they can evaluate their current problems and opportunities, and then decide if it is worthwhile seeking to incorporate technical solutions for specific organizational problems. In all cases this will require an insightful combination of technical and human enablers, and should remain focused on improved performance rather than technical gee-whizzadry. This demands not only identifying opportunities provided by human and technological resources, but
Table 3: Information Management Process Vision
Process Vision
The systematic flow of information, to be easily and efficiently collected, recorded, extracted and collated when required, which is to be formalized and documented.
Process Objectives
• achieve a coherent flow of information internally and externally
• handle information on a planned and systematic basis
• minimize delays between stages of the process and build in flexibility
• facilitate extraction of required information and statistics on a regular basis
• circulate family profiles to Local Authorities on a regular basis
• ensure that all referrals received are logged on the database
• ensure roles and responsibilities are clearly understood throughout each stage of the process
• reduce pressure on project workers
Process Standards
• reduce extraction time of statistics by 75%
• monthly circulation
• referrals received must be logged onto the database within two weeks of receipt
• approval decision must be logged on the database within one week
• adoption file must be closed within two months of the Adoption Order and sent to Archives one month later
Process Indicators
• time spent extracting statistical information
• time taken for a Statutory Reference to return and be logged
• time taken to circulate information and documents to relevant people (panel members, family, workers)
• number of circulations sent per year – minimum of 10
also being aware of constraints imposed by application of such resources.
Organizational and Human Resources as Change Enablers
Human resources and organizational structure are critical factors in process change. There are numerous benefits to team working, although the compatibility of team members must be considered, and allowance must be made for such teams to coalesce and evolve. The A-NCH-Y Project consists of a team with cross-functional skills, where each member is a specialist with experience in their particular field, drawn from the general domains of social work and administration. As a consequence of resource constraints, each team member must have general knowledge of the daily functions and processes within the Project. The team draws its motivation from satisfaction at providing a valuable service, particularly the ways in which it benefits children. Additional human resources will enable the Project to enhance its operational efficacy and develop its service. This resource growth should take the form of additional staff, particularly support and managerial, but should also encompass better use of existing resources. An enhanced IM process will facilitate this, since it should make effective use of new, suitably skilled support staff, resulting in far better use of the time and skills of existing professional and administrative personnel. In this sense specific improvements will be enabled by a combination of technical, human and organizational features.
Table 4: Networking with Local Authorities Process Vision
Process Vision
A systematic process to establish a working relationship with Local Authorities, through various forms of networking on a regular basis, from which information obtained must be recorded in a formalized manner.
Process Objectives
• exchange information about children’s needs
• ensure that Local Authorities perform Statutory References to an agreed standard
• establish a good working relationship with neighboring Local Authorities
• market services and raise the Project profile
• solve conflicts and disagreements
• record meeting issues and action to be taken
• review links with Local Authorities regularly
Process Standards
• meetings between A-NCH-Y and Local Authority – every three months
• consortium meeting – monthly
• meet with 10 neighboring Local Authorities
• annual review of links with Local Authorities
Performance Indicators
• number of meetings with Local Authorities taking place per year – four per year
• number of Consortium meetings attended per year – 12 per year
Constraints of Organizational and Human Resources Lack of human resources is the prime constraint facing A-NCH-Y. There are insufficient social workers to deal with the increasing caseload. Administrative workers cannot deal with increasing volumes of information. There is a dearth of qualified IT staff. Unfortunately budgeting for increased staffing is a complicated matter. A-NCH-Y is a fee-based Project; the fee income derives from linking approved families with Local Authorities. Since the number of successful matches is dependent on a series of factors – numbers coming forward for adoption, rate of approval, disrupted placements, and so on–that are outside the control of the Project, predictions about future income are largely guesswork. With such uncertainty surrounding annual income and limited financial resources, A-NCH-Y cannot afford to employ more staff even if they do find themselves with a current surplus, since there is no guarantee that the situation will persist. Information Technology as a Process Enabler IT should support organizational and human resources in process change, supplying staff with information for decision-making and assisting in the performance of the process. Furthermore, computing and communications technologies enable process change, by reducing time and distance, and speeding up activities to be performed. The problem with implementing IT in organizations is that it is often applied to existing processes, seen simply as a way of automating processes which are dysfunctional (see the earlier quote from Beer). It is, therefore, critical to analyze processes in conjunction with any introduction of IT and information systems strategy planning, to ensure the
organization can benefit from redesigned, enhanced modes of working in conjunction with the potential afforded by the technology. The employees at A-NCH-Y believe that they will benefit from having more IT resources, with at least one terminal dedicated to social workers and one terminal for the Project Manager. The Project workers want to take advantage of the information and analytical opportunities provided by IT. The processes at A-NCH-Y would clearly be enabled by IT in transferring documentation and information electronically, reducing the amount of misplaced paperwork, multiple copies and errors. Furthermore, automating certain tasks, with a lower demand for human resources, would result in more efficient use of limited and expensive resources. The management processes would be enabled by IT, helping to produce and analyze statistics and offering a basis for improved decision-making, forecasting and planning. Communication and the exchange of documents within the Project and with other offices, such as Local Authorities and NCH Regional and Head Offices, would be greatly improved by the introduction of an internal communication network, such as an intranet. A further step for A-NCH-Y would be to use IT to market its services on the Internet, thereby increasing its consumer base and its presence. Some families have come to the Project commenting on the lack of an NCH Internet presence whilst they were researching adoption agencies and services available. There are numerous opportunities provided by IT from which A-NCH-Y can benefit; however, there are also constraints.
Information Technology as a Process Constraint
Davenport (1993) explains that IT can be a constraint on process redesign and change implementation. The existing systems and technology infrastructure can itself inhibit possible developments. IT provides many benefits to organizations and process performance, but this is based on the assumption that employees already have the skills and knowledge to use the equipment. IT can be a constraint and a financial liability if a great deal is invested in it and no one is able to use it correctly. A-NCH-Y is in such a situation with the Family Placement Database. The database has been in the Project for five years, during which there has been considerable turnover of administrative staff. Those who began to develop experience and skill with the database have left. The current staff have received some training, but neither sufficient nor comprehensive enough for them to use the database for its intended purpose, i.e., to input data and extract statistics on a quarterly and annual basis. Even those staff who have developed some familiarity with the system have not been kept abreast of updates and new releases of the software. Consequently, staff lack confidence and tend to postpone database tasks. So there is a build-up of data to be entered, and output is produced using inaccurate or outdated statistics. Training must be provided to IT users to ensure the best use of available facilities. Such training must be more than simply learning to use the technology, since A-NCH-Y deals with highly sensitive and confidential personal information that must remain secure. Widening and enhancing use of and access to the database is only feasible with secure and established measures in place to ensure that any information or documentation electronically accessed or circulated remains secure and protected from intruders and hackers.
Developing intranet and Internet technology will demand still higher assurances of security and integrity. The long-term objective of A-NCH-Y is to expand its presence in the North East of England, and to become the established market leader in its field. The process investigation exercise has provided the Project with the basis for a new process vision, objectives and standards for two critical processes –Information Management and Networking with Local Authorities. In order to realize these enhancements human, organizational and technological resources must be reassessed by members of the Project and NCH in general. A perspective that takes into account the overall maturity of the organization and its component parts is critical for the success of any development in the use and application of IT. Without this, enhancement and extensions to the database, whatever its potential benefits, are likely to have a severely negative impact. A-NCH-Y, and NCH as a whole, have to respond
to the challenges and opportunities that have started to be identified from the BPR exercise, avoiding the temptation to go for a quick technological fix in favour of a measured, open-minded and comprehensive understanding of their present situation. The results of the BPR exercise – particularly the new process visions – provide a rigorous and well-understood basis from which A-NCH-Y can initiate such change (with clear indications of where IT can play a role), and against which their progress can be judged.
FURTHER READING
Ahmed, P. & Simintras, A. (1996). ‘Conceptualising business process re-engineering.’ Business Process Re-engineering & Management Journal, 2(2), 84-86.
Archer, R. & Bowker, P. (1995). ‘BPR consulting: An evaluation of the methods employed.’ Business Process Re-engineering & Management Journal, 1(2), 29-30.
Boyle, A., Macleod, M., Slevin, A., Sobecka, N. & Burton, P. (1993). ‘The use of information technology in the voluntary sector.’ International Journal of Information Management, 13, 94-112.
Braganza, A. & Myers, A. (1997). Business Process Redesign: A View from the Inside. International Thomson Business Press, UK.
Bryant, A. (1998). ‘Beyond BPR – confronting the organisational legacy.’ Management Decision, 36(1).
Buchanan, D. (1997). ‘The limitations and opportunities of business process re-engineering in a politicised organisational climate.’ Human Relations, 50(1), 51-72.
Chang, L. & Powell, P. (1998). ‘Towards a framework for business process re-engineering in small and medium-sized enterprises.’ Information Systems Journal, 8, 199-215.
Cliff, V. (1992). ‘Re-engineering becomes the CEO’s policy at Mutual Benefit Life.’ Journal of Strategic Information Systems, 1(2), 102-105.
Edwards, C. & Peppard, J. (1994). ‘Business process redesign: Hype, hope or hypocrisy?’ Journal of Information Technology, 9, 251-266.
Fiedler, K., Grover, V. & Teng, J. (1995). ‘An empirical study of information technology enabled business process redesign and corporate competitive strategy.’ European Journal of Information Systems, 4, 17-30.
Fiedler, K., Grover, V. & Teng, J. (1994). ‘Information technology-enabled change: The risks and rewards of business process redesign and automation.’ Journal of Information Technology, 9, 267-275.
Grimwood-Jones, D. & Simmons, S. (1998). Information Management in the Voluntary Sector. Aslib, London.
Hunt, V. (1996). Process Mapping: How to Reengineer Your Business Processes. John Wiley & Sons, New York.
Jackson, I. (1986). Corporate Information Management. Prentice Hall.
Martinsons, M. & Revenaugh, D. (1997). ‘Re-engineering is dead; long live re-engineering.’ International Journal of Information Management, 17(2), 79-82.
Moreton, R. (1995). ‘Transforming the organisation: The contribution of the information systems function.’ Journal of Strategic Information Systems, 4(2), 149-163.
Riley, L. & Smith, G. (1997). ‘Developing and implementing IS: A case study analysis in social services.’ Journal of Information Technology, 12, 305-321.
Talwar, R. (1993). ‘Business re-engineering – a strategy-driven approach.’ Long Range Planning, 26(6), 22-40.
Willcocks, L. & Smith, G. (1995). ‘IT-enabled business process re-engineering: Organisational and human resource dimensions.’ Journal of Strategic Information Systems, 4(3), 279-301.
Zaidifard, D. (1998). ‘Reframing the behavioural analysis of re-engineering: An exploratory case.’ Journal of Information Technology, 13, 127-138.
Zairi, M. & Sinclair, D. (1995). ‘Business process re-engineering and process management.’ Business Process Re-engineering & Management Journal, 1(1), 8-10.
REFERENCES
Beer, S. (1981). The Brain of the Firm, 2nd edition (1st edition 1971). John Wiley, England, UK.
Checkland, P. & Scholes, J. (1990). Soft Systems Methodology in Action. John Wiley & Sons, England, UK.
Davenport, T. (1993). Process Innovation: Re-engineering Work through Information Technology. Harvard Business School Press, USA.
Dance, C. (1997). Focus on Adoption: A Snapshot of Adoption Patterns in England – 1995. British Agencies for Adoption & Fostering, London.
Department of Health (1999). Modernising Social Services – Executive Summary [Internet]. London. Available from: http://www.doh.gov.uk/scg/execsum.htm [Accessed 14 February 1999].
Department of Health – Social Services Inspectorate (1998). Inspection of NCH Action For Children Family Finders North East Voluntary Adoption Agency. Department of Health, Social Services Inspectorate, Gateshead.
Department of Health (1999). Children Looked After by Local Authorities, Year Ending 31 March, England.
Earl, M. (1994). ‘Viewpoint: The old and new of business process redesign.’ Journal of Strategic Information Systems, 3(1), 5-22.
Galliers, R. & Sutherland, A. (1999). ‘Information systems management and strategy formulation: Applying and extending the “stages of growth” concept.’ In Galliers, R., Leidner, D. & Baker, S. (eds), Strategic Information Management, 2nd edition. Butterworth-Heinemann.
Hammer, M. & Champy, J. (1993). Re-engineering the Corporation: A Manifesto for Business Revolution. Nicholas Brealey Publishing, UK.
Harvey, D. (1994). Re-Engineering: The Critical Success Factors. Business Intelligence Limited, London, UK.
Hill, C. & Jones, G. (1992). Strategic Management: An Integrated Approach. Houghton Mifflin Company, Boston.
Martinsons, M. (1995). ‘Radical process innovation using information technology: The theory, the practice and the future of re-engineering.’ International Journal of Information Management, 15(4), 253-269.
McCluskey, J. & Abrahams, C. (1998). NCH Action For Children Factfile 1999. Chapel Press, Rochester.
McManus, J. (1996). Reengineering Your Business: An Implementation Guide. Technical Communications (Publishing), Hertfordshire.
Mintel Retail Intelligence (1994). Clothing Retailing, Annual Company & Accounts. Mintel, Vol. 6.
Philpot, T. (1994). Action for Children – The Story of Britain’s Foremost Children’s Charity. Lion Publishing Plc, UK.
Family Finders Project Annual Review April 1997 to March 1998 (unpublished).
Family Finders Project North East Business Plan April 1998 to March 1999 (unpublished).
NCH Action For Children (1999). Strategic Plan: 1999-2004. NCH Action For Children, Highbury, London (unpublished).
NCH Action For Children (1998). Annual Review 1997/98. NCH Action For Children, Highbury, London.
NCH Action For Children – Family Placement Database User Guide (unpublished).
www.nchafc.org.uk [Accessed 15 January 2001].
ENDNOTES
1 Strengths, Weaknesses, Opportunities, Threats.
2 Political, Legal, Economic, Social, Technological.
3 This includes the Adoption Agencies Regulations 1983 (amended in 1997), the Adoption Act 1976 and the Children Act 1989.
4 Most of the figures that are quoted in the following section have been taken from the NCH Factfiles 1998-2000.
5 At the time of writing, the lead item in many newspapers concerns the case of twin girls purchased for adoption over the Internet by a UK couple (the Kilshaws), but also claimed by a US couple (the Allens) who had previously bought them.
6 The originators of BPR – Hammer & Champy, and Davenport – later retracted and restated their views, but continued to stress that the primary importance of BPR was the process aspect above all others.
7 The American National Standards Institute symbols were used to map the Project’s processes.
8 Lack of space precludes discussion of soft systems methods at this point, but they will be explored in the case study exercises.
BIOGRAPHICAL SKETCHES
Antony Bryant is Professor of Informatics in the School of Information Management at Leeds Metropolitan University (LMU), Leeds, UK. He graduated from Cambridge with a degree in Social & Political Science, followed by a PhD in Social Sciences from the London School of Economics. Later he obtained his MSc in Computing from the University of Bradford, after which he worked for a number of years in commercial systems and software development. In 1985 he joined Leeds Polytechnic (now LMU), where he held the post of BT Reader in Software Engineering until appointed Professor in 1994. His current research interests include knowledge management and representation, IS development, globalization and the digital economy, and research methods. He is Visiting Professor at the University of Amsterdam, and is in charge of the Asia-Europe Master’s Programme in Information Economics & Management currently being developed with the University of Malaya and other universities in Europe and Asia.
Veena Syan holds a Master’s degree in Information Systems from Leeds Metropolitan University, where she focused on business process analysis and investigated the use of information technology to enhance business performance. She obtained her BA (Hons) in European Studies and French from the University of Wolverhampton, where she became interested in European business and subsequently spent time working in France. She is currently working as a business analyst in Canada.
184 Ratnasingam
Developing Inter-Organizational Trust in Business-to-Business E-Commerce Participation–Case Studies in the Automotive Industry Pauline Ratnasingam University of Vermont, USA
EXECUTIVE SUMMARY
Inter-organizational systems such as EDI have been the main form of business-to-business e-commerce participation in the automotive industry for the last two decades. Previous studies of EDI adoption mostly examined environmental, organizational and technological factors. This study draws on insights developed within the sociology of technology, in which innovation is not simply a technical-rational process of solving problems, but involves the economic, behavioral and political processes required for building inter-organizational trust. The transition to cooperative relationships between buyers and suppliers may be more difficult for automotive companies because of complexity, compatibility, long lead times and ingrained adversarial supplier relationships (Langfield-Smith & Greenwood, 1998). Trust is therefore important: organizations need to cooperate, collaborate and communicate timely and relevant information in order to facilitate EDI, which entails not only technological proficiency but also trust between trading parties, so that business transactions are sent and received in an orderly fashion. An analysis of the trust behaviors that influence EDI adoption will be useful for evaluating EDI participation. The aim of this study is to address the following questions:
• How does trading partner trust impact EDI participation?
• How do issues relating to coercive power among trading partners impact inter-organizational trust?
• What is the importance of trust within an inter-organizational dyad?
Ford has been using EDI since electronic data transmissions commenced in 1988. The aim of EDI is to communicate the production requirements of five car manufacturers (namely Ford, General Motors Holden, Toyota, Mitsubishi and Nissan) to their component suppliers, in order to meet the demands of the Australian and overseas motor vehicle markets.
The automotive industry had more experience than other industries in developing inter-organizational relationships. Ford Australia was well known nationally and internationally because its motor vehicles were exported to New Zealand and the Asia Pacific region. Copyright © 2002, Idea Group Publishing.
BACKGROUND
EDI implementation at Ford started with the Button Car Plan in the mid-1980s. The objectives of the Button Car Plan included:
• Creating a timeframe to restructure and modernize (1985-1992);
• Increasing the industry’s efficiency;
• Holding down vehicle price rises to no more than rises in the consumer price index;
• Minimizing disruption during restructuring; and
• Reducing job losses and providing job stability (Mackay & Rosier, 1996).
In 1984, the Federal Chamber of Automotive Industries (FCAI) was formed to set up a standard procedure for adopting EDI. FCAI committee members discussed business issues, ramifications and operations before negotiating with General Electric Information Services (GEIS) and Telstra Tradelink to create an EDI Value-Added Network (VAN) system. Ford was one of the earliest innovators of EDI inter-organizational network technology. In late 1987 and early 1988, the company conducted acceptance testing of EDI business transactions. Telstra developed the Tradelink software in 1988. EDI messages such as the Materials Requirements Schedule (MRS) and Advanced Shipping Notice (ASN) were implemented first, followed by other documents. Thus, by 1997, EDI use at Ford was in a mature stage. Ford aimed to streamline its business processes and optimize its supply chain management activities. Ford implemented two EDI systems and many application systems across its branches: Parts and Accessories, Original Equipment, Non-production, Purchasing, Ford Credit and Finance. Ford’s parent company in America was two to three years ahead of its Australian counterpart and supervised EDI implementation in Australia. The automotive industry remains a major segment of the Australian manufacturing sector, despite a general decline in manufacturing output in Australia. It is particularly important in Victoria, where Ford Australia, General Motors Holden and Toyota have their headquarters and principal assembly plants.
Although it is only a small part of the global motor vehicle industry, the Australian automotive industry makes an important contribution to the gross domestic product of Australia. In this research, the original manufacturers are subsidiaries of large transnational corporations based in the USA or Japan. Figure 1 demonstrates the flow of EDI transactions between Ford and Toyota (the manufacturers) and their first-tier supplier (Patent Brakes and Replacement Ltd). For example, the supplier sends an Advanced Shipping Notice (ASN) to the manufacturer before supplying the parts. At the same time, a copy of the ASN is sent to the Transport Company so that the truck driver delivers the right quantity. The truck driver also brings a copy of the ASN that was sent electronically to the manufacturer. The completed motor vehicle is sent to the finance company, which collaborates with motor vehicle dealers to arrange credit terms for selling the motor vehicles.
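The document flow described above can be expressed as a minimal, hypothetical sketch. The message names (MRS, ASN) come from the case study; the Python classes, function names and routing are invented purely for illustration and do not represent the actual Ford or GEIS systems:

```python
from dataclasses import dataclass, field

@dataclass
class Party:
    """A trading partner in the EDI network (e.g., supplier, manufacturer).
    Hypothetical model for illustration only."""
    name: str
    inbox: list = field(default_factory=list)

    def receive(self, doc):
        self.inbox.append(doc)

def send(doc_type, sender, *recipients):
    """Deliver one EDI document from the sender to each recipient."""
    for r in recipients:
        r.receive((doc_type, sender.name))

# Parties named in the case study's Figure 1
supplier = Party("Patent Brakes and Replacement Ltd")
manufacturer = Party("Ford")
transport = Party("Transport Company")

# The manufacturer issues a Materials Requirements Schedule to the supplier.
send("Materials Requirements Schedule", manufacturer, supplier)

# Before supplying the parts, the supplier sends an Advanced Shipping Notice
# to BOTH the manufacturer and the transport company, as described in the text.
send("Advanced Shipping Notice", supplier, manufacturer, transport)

# The transport company now holds the same ASN as the manufacturer,
# so the truck driver can deliver the right quantity.
```

The point of the sketch is simply that the ASN is duplicated to two recipients in one send, which is what allows the physical delivery to be reconciled against the electronic notice.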
SETTING THE STAGE
EDI is one form of business-to-business e-commerce inter-organizational system (IOS), which transmits standard business documents electronically among trading partners. EDI allows firms to fundamentally change the way they do business, thereby improving the firm’s performance and enhancing its competitive advantages (Emmelhainz, 1990). While EDI clearly provides economic benefits, it may be costly to implement, particularly when an organization lacks hardware or software compatibility. Security becomes an important issue because EDI systems do not operate unilaterally. Organizations motivated to adopt EDI must either find similarly motivated trading partners or persuade and/or coerce their existing trading partners to adopt EDI (Hart & Saunders, 1998; Webster, 1995). One key barrier to this is the lack of trading partner trust, derived from uncertainties and a lack of open communications and information sharing (Cummings & Bromiley, 1996; Doney & Cannon, 1997; Ganesan, 1994; Gulati, 1995). Despite the assurances of technological security mechanisms, trading
186 Ratnasingam
Figure 1: EDI Implementation at Ford, and Patent, Brakes and Replacement Ltd.
Transport Company
Advanced
Shipping Notice
Materials Requirements Schedule Receiving Automotive Manufacturer (Ford)
Automotive Parts
Request for Quotation
Supplier
Quotation Purchase Order & Amendment Remittance Advice and Payment
(Patent, Brakes & Replacement
Ltd) Vehicle Finance Company
Bank
partners in business-to-business e-commerce do not seem to trust the “people side” of the transactions because of uncertainties. Uncertainties reduce confidence both in the reliability of business-tobusiness transactions transmitted electronically and, more importantly, in the trading parties themselves. For example, Scala and McGrath (1993), in their broad assessment of advantages and disadvantages of EDI, identified social and organizational issues that impact organizational culture, structure and low levels of adoption. The objective of this study is to investigate the importance of trading partner trust in EDI participation (adoption, integration and use). The automotive industry provides an interesting focus for studying this topic because of the following reasons: • The automotive industry has a well-developed supplier strategy, because it was the first Australian industry to introduce EDI on a coordinated industry-wide basis. For example, the automotive industry has been using EDI since electronic data transmissions commenced in 1988. Therefore, the automotive industry had more experience than other industries in developing trading partner relationships (Helper, 1991; Mackay & Rosier, 1996). • It has been suggested that the transition to cooperative relationships between buyers and suppliers may be more difficult for automotive companies, due to high levels of complexity,
compatibility, long lead times and the ingrained adversarial supplier relationships of the past (Langfield-Smith & Greenwood, 1998). Japanese automotive companies have a long-established history of developing supplier relationships based on dependence and cooperation. In western countries such as Australia and the U.S., however, cooperative partnerships are a relatively recent phenomenon and a distinct contrast to the ad hoc relationships of the past (Helper, 1991). Choosing two automotive organizations, namely Ford Motor Company of Australia Limited and its first-tier supplier in Australia, Patent Brakes and Replacement Automotive Proprietary Limited, provides a better understanding of cooperative trading partner relationships and trust in EDI implementation.
CASE DESCRIPTION

The Ford Motor Company of Australia Limited (Ford Australia), located in Melbourne, Australia, is a subsidiary of the Ford Motor Company of Dearborn, Michigan, USA. Employing 9,000 workers, Ford Australia is the second largest manufacturing enterprise in Australia. Ford produces about 125,000 motor vehicles per year, uses approximately 8,000 local parts and 250 imported parts from its 220 parts suppliers, and is the largest consumer of locally manufactured parts. It currently runs an inventory of about 10 days' stock (although the stock level of some components replenished by JIT is smaller than this). Ford's main objective is to increase productivity and profitability by reducing costs. Patent Brakes and Replacement (PBR Ltd) is a large company with 1,100 employees and a major supplier, principally supplying original equipment (OE) to Ford and Toyota. In aggregate, PBR supplies up to 92% of component parts to the Passenger Motor Vehicle (PMV) lines and/or spare parts divisions. PBR has two branches, namely Original Equipment (OE) and After Marketing Company (AMC). PBR began adopting EDI when its manufacturers (Ford and Toyota) demanded high efficiencies in EDI operations. Table 1 presents a summary of the background information on the two cases, Ford and Patent, Brakes and Replacement Ltd.
Driving Factors for Adopting EDI

Ford's objective in adopting EDI is to streamline its business processes and contribute to more efficient transactions across the supply chain. Implementing EDI and electronic trading was expected to bring a number of benefits to the automotive industry, including improvements in general logistics, increased productivity, improved product quality, enhanced customer service and lower inventory requirements. The automotive organizations were able to eliminate manual re-keying of data, thus reaping economies of scale in time and labor savings. "EDI was seen as a tool to transmit standard structured messages electronically from a computer application in one location to another computer application in another location. Therefore, EDI is an enabling technology which allowed Ford to meet their business goals, and the analogy is the same as if one wishes to purchase a mobile telephone or a fax machine; EDI gave us competitive advantage." — Ford's EDI project leader. The driving factors that led to EDI adoption include:
• Time saved from a faster trading cycle, because trading partners do not have to re-key information; 70% of the output was treated as input into the receiving trading partner's system, contributing to savings in time and cost;
• Simplification of business processes (via automation), which eliminated the use of paper; and
• Speed, from the time savings of standardized routines and structured EDI messages, which increased productivity and thus profitability.
Table 2 presents a list of respondents who participated in the exploratory case study. Although most of them were not directly involved in EDI adoption, each had been employed for at least a decade and attended most of the meetings related to EDI implementation. The participants agreed that trust was important for EDI participation because business transactions had to be sent and received in an orderly fashion.
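The "standard structured messages" the project leader describes can be illustrated with a simplified, EDIFACT-style Advanced Shipping Notice. This is a hypothetical sketch only: the party codes, docket number and part numbers are invented, and it does not reproduce the actual message formats used by Ford or PBR.

```python
# Hypothetical sketch of a simplified, EDIFACT-style Advanced Shipping
# Notice (DESADV). Segment tags follow EDIFACT conventions, but the
# party codes, qualifiers and part numbers are invented for illustration.

def build_asn(sender, receiver, docket, lines):
    """Assemble an ASN as a string of apostrophe-terminated segments."""
    segments = [
        "UNH+1+DESADV:D:96A:UN'",        # message header
        f"BGM+351+{docket}'",            # despatch advice + docket number
        f"NAD+SU+{sender}'",             # supplier (sender) party
        f"NAD+BY+{receiver}'",           # buyer (receiver) party
    ]
    for part_no, qty in lines:
        segments.append(f"LIN+++{part_no}:IN'")  # line item: part number
        segments.append(f"QTY+12:{qty}'")        # quantity despatched
    segments.append(f"UNT+{len(segments) + 1}+1'")  # trailer: segment count
    return "".join(segments)

asn = build_asn("PBR01", "FORDAU", "DOCKET-4711",
                [("BRK-PAD-220", 400), ("BRK-DISC-310", 120)])
print(asn)
```

Because every field sits in a fixed, machine-readable position, the receiving application can load the message directly, which is what eliminated the manual re-keying described above.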
Table 1: Background Information of Ford, and Patent, Brakes and Replacement Ltd.

Background Information           Ford (Buyer)                PBR Ltd (First-Tier Supplier)
Year Implemented EDI             Mid-1980s to 1989           1987
Type of EDI Technology           EDI/VANs                    EDI/VANs
Type of Translation Software     Telstra Tradelink Software  Telstra Tradelink Software
Number of EDI Systems            2                           3
Number of Staff Operating
  EDI Systems                    2                           6
Volume of Transactions           40-60 daily                 5-10 daily
Types of Transactions            Purchase Orders,            Purchase Orders,
                                 Advance Shipping Notice,    Advance Shipping Notice,
                                 Remittance Advice           Monthly Statements,
                                                             Acknowledgment
Number of Trading Partners       350                         150
Number of Branches               5                           2
Number of Employees              9,000                       1,100
Size of Organization             Large                       Large
Stage of IT Growth               Mature                      Mature
CURRENT CHALLENGES FACING FORD AUSTRALIA AND PBR LTD

Technological Issues

Issues relating to synchronizing Advanced Shipping Notices (ASNs) with the actual physical shipment of goods were identified in EDI adoption. Ford currently requires about 95% of its parts suppliers to send an ASN in advance of the actual physical shipment, which is accompanied by a printed delivery docket. The ASN pre-loads the receiving system, which prints a 'receipt list' used to check the physical shipment. The ASN identifies the physical shipment by the delivery truck's registration number, which serves as the reference. Discussions with Ford materials management staff revealed the following technical problems with the current EDI systems:
• Use of the truck registration number effectively limits the system to parts that are delivered no more than once per day, and also causes problems when several suppliers' shipments are consolidated onto one truck;
Table 2: Interview Participants from the Two Case Studies

Title of Participant                      Organization    Years of EDI  Directly Involved in  Number of Interview  Is Trust Important
                                                          Experience    EDI Implementation    Sessions             for EDI?
Project Leader, Communications
  and Operations                          Ford Motor Co   15            Yes                   5                    Yes
Process Leadership Auditor                Ford Motor Co   10            No                    2                    Yes
General Accounting Manager                Ford Motor Co   12            Yes                   3                    Yes
IT Manager                                Ford Motor Co   12            No                    3                    Yes
Supply Chain Management, Materials
  Planning and Logistics Core Group
  Manager                                 Ford Motor Co   10            No                    2                    Yes
FCAI Chairman and EDI Coordinator         PBR Ltd         15            Yes                   5                    Yes
• Ford's leading position in the Australian automotive industry and its increasing insistence on JIT deliveries forced Ford's suppliers to locate less than a 10-minute drive from the Ford plant. Ford currently polls its EDI/VAN service every 10 minutes in order to retrieve the ASN data. As a result, for parts called up under JIT there is no guarantee that the ASN arrives before the physical shipment, causing delays and congestion on the production line; and
• The EDI/ASN process is dependent on computers at the supplier's site and on the VAN service at Ford.
Given the computer-dependent nature of automotive industry operations, the first problem can be solved by using a unique identifier for the shipment, bar-coded on the delivery docket or cartons. The second problem can only be solved within the existing framework by polling the VANs more frequently, an expensive option at a cost of $A0.50 and upwards per call.
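The proposed fix, a unique shipment identifier bar-coded on the docket, can be sketched as a simple reconciliation routine. This is an illustrative reconstruction, not Ford's actual receiving system: the shipment IDs, part numbers and data structures below are assumed.

```python
# Illustrative sketch (invented identifiers and data): reconciling
# physical shipments against pre-loaded ASNs. Keying on the truck
# registration fails when one truck carries consolidated shipments from
# several suppliers; a unique shipment ID bar-coded on the delivery
# docket disambiguates them.

asn_store = {}  # shipment_id -> ASN details, pre-loaded from EDI/VAN polls

def preload_asn(shipment_id, supplier, truck_rego, parts):
    asn_store[shipment_id] = {"supplier": supplier,
                              "truck": truck_rego,
                              "parts": parts}

def receive(shipment_id, scanned_parts):
    """Match a bar-coded docket against its pre-loaded ASN."""
    asn = asn_store.get(shipment_id)
    if asn is None:
        # The ASN lost the race with the truck (slow VAN polling)
        return "HOLD: no ASN received yet"
    if scanned_parts != asn["parts"]:
        return "DISCREPANCY: quantities differ from ASN"
    return f"RECEIVED from {asn['supplier']}"

# Two suppliers consolidated onto the same truck no longer collide,
# because each shipment carries its own identifier:
preload_asn("SHP-001", "PBR Ltd", "ABC-123", {"BRK-PAD-220": 400})
preload_asn("SHP-002", "Other Co", "ABC-123", {"HOSE-9": 50})
print(receive("SHP-001", {"BRK-PAD-220": 400}))  # RECEIVED from PBR Ltd
print(receive("SHP-003", {"CLIP-1": 10}))        # HOLD: no ASN received yet
```

The "HOLD" branch makes the second problem concrete: however the match is keyed, a JIT shipment can still outrun its ASN unless the VAN is polled more frequently, at the per-call cost noted above.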
Political Issues - Power Among Trading Partners

Power was seen as an important contextual factor in EDI adoption, because it strongly influenced both the adoption of e-commerce and the building of trust among trading partners. Power is "the capability of a firm to exert influence on another firm to act in a prescribed manner." Ford applied power when its EDI network was introduced, making it clear to its established suppliers that they should use EDI. Ford did provide its suppliers with initial training and software to run on IBM machines; suppliers with incompatible systems, or with no systems at all, were asked to find appropriate solutions as quickly as possible. Clearly, this was a situation in which Ford exercised coercive power in establishing connections, at the expense of the suppliers buying new equipment (Ratnasingam, 2000). Dependence can arise from limited supply alternatives or from an imbalance of power between suppliers and car manufacturers. Furthermore, the inconvenience of having to use Ford's system in addition to other systems for trading with other customers was another issue, especially at a time when the smaller suppliers were unaware of EDI's potential. Similarly, Hart and Saunders (1997) suggested that in most cases EDI is adopted because of pressure from the more powerful trading partner, usually the buyer. Their findings indicated that power was negatively related to the volume of EDI transactions, reflecting that while electronic networks may facilitate easier exchanges, they do not necessarily increase the frequency of business transactions. EDI affects not only the efficiency of coordination, but also power dependency and the structural aspects of inter-organizational relationships. Thus, power exists on two levels: (1) as a motive, and (2) as a behavior. It is quite clear from both the design and implementation of Telstra Tradelink that Ford did not regard its trading relationships as partnerships made on an equal basis, but as relationships involving domination and subordination. Companies that supply to Ford found the trading relationship coercive, and the strictness of using Ford's EDI system caused unnecessary expense and inconvenience. Suppliers in the automotive industry reported that doing EDI with Ford increased the costs of the trading relationship and did not reduce expenses in any way. Ford's attitude towards its trading partners was revealingly expressed by the reactions of its suppliers across Europe: "The Spanish were extremely obedient. Ford is their bread and butter. When we say 'Jump,' they jump. The Germans gave us the most trouble. Among other things, they didn't like the dedicated network." (Webster, 1995, p. 34).
Ford's main objective was to gain competitive advantage by locking its suppliers into its system, and its competitors out of it. "We felt that we were coerced to adopt EDI, although initial support and direction via software for our IBM machines was given by Ford." — PBR Ltd EDI Coordinator. Hence, the way power was used to influence trading partners determined the extent to which trust was built during EDI implementation.
Behavioral Issues – Performance Assessment

Ford possessed a set of punishments it used when suppliers did not cooperate. Suppliers' competencies, product quality, timeliness of delivery, service quality and dispute resolution were checked, and a supplier performance checklist determined whether suppliers' contracts were renewed. "Our suppliers do have to meet the standards outlined in the Suppliers' Performance Assessment. Although our suppliers have been trading with us for a long time, we usually undertake a screening test to examine their credibility, technical ability and skills. A standard of 85% and above was expected in their performance." — Ford EDI Coordinator.
Trust Issues: Lack of Cooperation Among Trading Partners

The more likely a manufacturer is perceived to be to use such punishments, the stronger its coercive sources of power. Examples of coercive sources of power an automotive manufacturer may exercise include slow delivery on vehicles, slow payment on warranty work, unfair distribution of vehicles, turndowns on warranty work, threat of termination and bureaucratic red tape. There is considerable evidence of coercion of smaller suppliers by large manufacturers to move to EDI, in order to suit the information technology and business strategies of the manufacturers. It is here that trust develops or fails: Ford can either act proactively and renew a supplier's contract, or choose to punish a supplier by terminating it. The absence of collaboration or prior consensus about the structure, function and design of these networks gave suppliers little opportunity to develop their knowledge and expertise in EDI use. In the EDI user community, this practice has been associated with the catch phrase "EDI or die," meaning that suppliers are required to use the system or the manufacturer (Ford) will not trade with them at all. It is a practice that has been particularly prevalent among large retail outlets in the United Kingdom (Webster, 1995).
Trading Partner Trust – Key Findings

The findings indicate that trust was embedded in the EDI adoption procedures and was seen as an implicit factor, because trading partners were expected to behave in a rational manner. Trading partner trust was rated high because trading partner performance significantly impacts EDI operations and systems. For instance, the EDI Value-Added-Network mailbox was shared by all trading partners, demanding confidentiality and integrity measures from every trading partner registered to use it. In some cases, participants indicated that their assessment of trust was based on their management's knowledge of EDI implementation.

Trading Partner Trust at Ford

Mayer, Davis and Schoorman (1995, p. 712) defined trust as "the willingness of a party to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the trustor, irrespective of their ability to monitor or control that other party." Trust is important for EDI operations, and the participants agreed that trading partner trust is essential. "We define trust as a level of confidence we have in our suppliers in being honest, reliable, having integrity and not taking actions that are detrimental to our business." — Ford Communications and Operations Manager.

Two Types of Trading Partner Trust

Two types of trading partner trust were identified in this study. The first type, "soft trust" or relationship trust, focuses on the trading partner relationship (that is, between a manufacturer and a supplier). "We do not only communicate using EDI, but by other means such as telephone, fax and email when there is a discrepancy. This relates to communication openness, with high trust derived from information sharing and concern.
We do not check the delivery of goods, due to the consistency in the quality of service provided by our suppliers." — Ford Supply Chain Management, Materials Planning and Logistics Core Group Manager. A prior history of trading partner relationships enabled Ford to make predictions about its suppliers' performance. The second type of trust, "hard trust" or technology trust, focuses on integrity issues with IT departments and the EDI/VAN infrastructure. Thus technology compatibility and organizational readiness were seen as important in EDI participation. In addition, the following interactions led to trading partner trust:
• Increased communication during initial EDI adoption. "Although EDI was established in the 1980s, to reflect back on our initial implementation procedures, we would still print off the order and fax the same order again. After sending the order via EDI we would call our suppliers to check if they had received it." — Ford Accounting Manager. During the early stages of EDI implementation, Ford relied heavily on the daily audit trail and other feedback mechanisms such as fax and telephone;
• Information sharing on the potential use of EDI. The Advanced Shipping Notice is sent before the parts actually arrive;
• Reliability of the trading partners (they did what they said they would do). A prior history of trading experiences enabled trading partners to depend on each other;
• Belief in trading partners' ability to perform the required task; and
• Maintenance of the confidentiality and privacy of business information. Functional acknowledgments with unique identifiers and authorization mechanisms (user IDs and passwords) were implemented.
Hence, beyond the apparent need for a cooperative relationship, trading partners formed a governance structure that brought about repeated encounters and used the passage of time to their advantage to build trust. These trust mechanisms, although arising from EDI, had much to do with trading partner interactions in the form of open communications and timely information sharing, which contributed to the smooth flow of EDI operations. Trust therefore plays a very important role in EDI for two main reasons:
• It encourages organizations to make the investments necessary for electronic information exchange, including the technical investments needed to support greater information exchange across organizational boundaries. This in turn improves inter-organizational coordination, in particular for EDI use and information sharing requiring investments in computer integration at the time of EDI adoption. It is therefore important to reinforce trust during the EDI adoption process, so that trading partners are encouraged to invest in computer integration and, over time, support expanded EDI use and information sharing; and
• It discourages opportunistic behavior, which clearly reduces the opportunity for greater information sharing over time. Trust is important here because it reduces the probability of a firm behaving opportunistically. Hence, trust mitigates risks, and by reducing risks it reinforces the opportunity to expand information sharing over time.
This study explored the impact of inter-organizational trust on EDI adoption between Ford Australia and its first-tier supplier, Patent, Brakes and Replacement Limited.
The two case studies formed an inter-organizational dyad (a manufacturer and a supplier) whose EDI participation paved the way to increased awareness of the importance of trading partner trust in EDI. Implicit factors such as power among trading partners were found to affect trust between trading partners. Most of the participants agreed that their service-level agreements need to be amended to include trading partner trust development guidelines that promote open communications and the sharing of knowledge and information (that is accurate, timely, complete and relevant), thereby avoiding privacy problems and encouraging good business practices. For example, trading partner agreements should encourage good business practices that discourage opportunistic behavior.
The Future of the Australian Automotive Industry

Australia's automotive industry is moving closer to developing one of the largest industry-wide extranets this country has seen to date (as of 1999). Under the new development, the Australian Automotive Network Exchange (AANX) project will specify and begin implementing a common TCP/IP network infrastructure for the Australian automotive industry. An FCAI committee manages the AANX project, with members including nominees of the four Australian car manufacturers (Ford, Holden, Mitsubishi and Toyota). The Federation of Automotive Product Manufacturers (FAPM), importers and suppliers were also involved in the project. Telstra and Optus were invited to participate in the initiative, but both were unable to meet the industry's requirements. Telstra, with its X.400-based Tradelink network service, is the current supplier of the EDI links connecting the Australian automotive industry, so the decision to exclude Telstra raises questions about the industry's future arrangements. The mission of the committee is to establish and govern a reliable and secure communication network capable of hosting e-commerce applications and business-to-business transactions for the Australian automotive industry. The supplier (a PBR executive) describes the AANX project as an auto industry intranet and the next stage in the development of the industry's EDI system: it will solve the main problems with doing EDI over the Internet, namely security and reliability. We are trying to duplicate precisely
what they have in America and it looks like this is the way the rest of the world is going too (i.e., Japan and Europe). EDI is now becoming tired, and both it and our various other supply-chain links need to be brought into a Web-enabled e-commerce system.
FURTHER READINGS

Emmelhainz, M.A. (1990). A Total Management Guide. NCC Blackwell.
Frey, S.C., & Schlosser, M.M. (1993). ABB and Ford: Creating value through cooperation. Sloan Management Review, Fall, 65-72.
Raman, D. (1997). The Internet and EDI: What is EDI's place on the information superhighway? Tenth International Bled Electronic Commerce Conference, 66-73.
Ratnasingam, P. (2000). The influence of power on trading partner trust in electronic commerce. Internet Research: Electronic Networking Applications and Policy, 10(1), 56-62.
Rayport, J.F., & Jaworski, B.J. (2001). Electronic Commerce. McGraw-Hill/Irwin.
Ring, P.S., & Van de Ven, A.H. (1994). Developing processes of cooperative inter-organizational relationships. Academy of Management Review, 19, 90-118.
Saunders, C., & Clark, S. (1992). EDI adoption and implementation: A focus on inter-organizational linkages. Information Resources Management Journal, 5(1), 9-19.
Senn, J.A. (1996). Capitalizing on electronic commerce: The role of the Internet in electronic markets. Getting on Board the Internet, Information Systems Management, Summer, 15-25.
Senn, J.A. (1998). Expanding the reach of e-commerce: The Internet-EDI alternative. Information Systems Management.
Sullivan, J., Peterson, R.B., Kameda, N., & Shimada, J. (1981). The relationship between conflict resolution approaches and trust: A cross-cultural study. Academy of Management Journal, 24(4), 803-815.
Sydow, J. (1998). Understanding the constitution of inter-organizational trust. In Lane, C., & Bachmann, R. (Eds.), Trust Within and Between Organizations: Conceptual Issues and Empirical Applications.
REFERENCES

Cummings, L.L., & Bromiley, P. (1996). The Organizational Trust Inventory (OTI): Development and validation. In Kramer, R.M., & Tyler, T.R. (Eds.), Trust in Organizations: Frontiers of Theory and Research. Sage Publications, Thousand Oaks, CA, 302-220.
Doney, P.M., & Cannon, J.P. (1997). An examination of the nature of trust in buyer-seller relationships. Journal of Marketing, April, 35-51.
Ganesan, S. (1994). Determinants of long-term orientation in buyer-seller relationships. Journal of Marketing, 58, April, 1-19.
Gulati, R. (1995). Does familiarity breed trust? The implications of repeated ties for contractual choice in alliances. Academy of Management Journal, 38(1), 85-112.
Hart, P., & Saunders, C. (1997). Power and trust: Critical factors in the adoption and use of electronic data interchange. Organization Science, 8(1), 23-42.
Helper, S. (1991). How much has really changed between U.S. automakers and their suppliers? Sloan Management Review, 32(4), 15-28.
Langfield-Smith, K., & Greenwood, M.R. (1998). Developing co-operative buyer-supplier relationships: A case study of Toyota. Journal of Management Studies, 35(3), 331-353.
Mackay, D., & Rosier, M. (1996). Measuring organizational benefits of EDI diffusion. International Journal of Physical Distribution & Logistics Management, 26(10), 60-78.
Mayer, R.C., Davis, J.H., & Schoorman, F.D. (1995). An integrative model of organizational trust. Academy of Management Review, 20(3), 709-734.
Ratnasingam, P. (2000). The influence of power on trading partner trust in electronic commerce. Internet Research: Electronic Networking Applications and Policy, 10(1), 56-62.
Scala, S., & McGrath, R., Jr. (1993). Advantages and disadvantages of electronic data interchange: An industry perspective. Information & Management, 25(2), 85-91.
Webster, J. (1995). Networks of collaboration or conflict? Electronic data interchange and power in the supply chain. Journal of Strategic Information Systems, 4(1), 31-42.
GLOSSARY

EDI – the computer-to-computer (application-to-application) exchange of standard formatted business documents transmitted over computer networks (Senn, 1996, p. 17).
EDI – the structured exchange of information between applications in different companies (Raman, 1997, p. 67).
IOS – Inter-Organizational Systems (IOSs): simply "an automated information system shared by two or more companies," implemented for the efficient exchange of business transactions (Cash & Konsynski, 1985, p. 134).
B2B EC – the full spectrum of electronic commerce that can occur between two organizations. Activities include purchasing and procurement, supplier management, inventory management, channel management, sales activities, payment management, and service and support (Rayport & Jaworski, 2001).
TRUST – Mayer, Davis and Schoorman (1995) defined trust as "the willingness of a party to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party."
IOT – Inter-Organizational Trust: "the confidence of an organization in the reliability of other organizations regarding a given set of outcomes or events" (Sydow, 1997, p. 35). This study defines IOT as "the confidence in the reliability of two organizations in a possibly risky situation that all trading partners involved in the action will act competently and dutifully."
BIOGRAPHICAL SKETCH

Pauline Ratnasingam is an Assistant Professor in the School of Business Administration, The University of Vermont, Burlington, Vermont. Before that she was a lecturer at Victoria University of Wellington, New Zealand. Her Ph.D. dissertation examined the importance of inter-organizational trust in electronic commerce participation (extent of e-commerce adoption and integration). Her research interests include business risk management, electronic data interchange, electronic commerce, organizational behavior, inter-organizational relationships and trust. She has published several articles in this area in conferences and refereed journals. She is an associate member of the Association for Information Systems (AIS).
Analyzing the Evolution of End User Information Technology Performance: A Longitudinal Study of a County Budget Office John Sacco and Darrene Hackler George Mason University, USA
EXECUTIVE SUMMARY This study examines how the budget office of a large county government designed and implemented end user information technology (IT) from personal computers (PCs) and local area networks (LANs) to an intranet and Web pages over a 15-year period. The initial issue was internal to the organization—moving a time-consuming budget preparation process to a smoother one, where “what if” analysis could be completed. However, more recent end user IT challenges are less internal and shaped more by the demands and expectations of parties outside of the budget office. While the evolution of IT in this budget office was distinctive, we utilize a framework to flesh out both the unique and generalizable lessons of such IT development. A stages model from the IT literature holds promise for explaining the internal successes as well as problems that arose during implementation and transition. The stages model suggests that the proliferation of IT can be directed toward productive use by recognizing IT crises and adding management control to handle the crises. However, the stages model does not readily account for significant changes in external social facets of the techno-social environment. These changing external social facets include global competition and reinventing government. The study suggests that the stages model would benefit from incorporating social-change shocks to better understand the transitions, the nature of the stages and IT performance within each stage.
BACKGROUND

The studied county budget office serves a county that is growing rapidly, with a prosperous and strong service-economy base. The county's population increased from slightly more than 650,000 in 1984 to over 950,000 in 2000. In the past 30 years, it has changed from a bedroom suburb to a service and high-tech economy. During the 1980s, revenue growth doubled (see Table 1) and met most demands, but with rapid population expansion, revenue has been strained to keep up with schools, social services and other governmental needs. Thus the budget office was under great stress. The budget office staff, while very professional, was also never large.

Copyright © 2002, Idea Group Publishing.

During the period of the study, the
Table 1: County Revenues

Year    Total Revenue $(000)
1985    958,664
1988    1,280,477
1994    1,861,560
2000    2,644,216
professional staff grew from 30 to 36, much more slowly than growth in the budget. On the surface, this small increase in staff members indicates a positive impact from the implementation and coordination of various end user IT tools. Over the years, the management information systems (MIS) and IT functions within the budget office have been formalized into a systems maintenance and applications bureau. The county also has a separate Department of Information Technology (DIT), previously called Data Processing (DP). The DP office developed traditional applications for departments, often with the assistance of outside consultants. Now, DIT has a far wider range of functions that also include public access. DIT reports to the Chief Information Officer (CIO) who is at the same top organizational level as the Chief Financial Officer (CFO). The budget office reports to that CFO, who in turn reports to an appointed County Executive. The county also has a legislative body that is the paramount elected body.
SETTING THE STAGE

Prior to the beginning of the study period in 1985, the DP office developed and guided the financial and budget preparation applications on the mainframe. These applications were the only automation tools aiding budget production. Typical of life-cycle-oriented DP offices, the queue of tasks was much longer than the resources available to the DP office. As might be expected, before the introduction of PCs, the LAN and the intranet, budget preparation was a tedious and largely manual task. The budget office staff distributed budget request forms to all county agencies. Once the agencies hand-keyed the requests into the mainframe financial program, the budget office obtained mainframe printouts, or sometimes typed forms, of agency requests. Analysts prepared large green work sheets to analyze and document the relationship between the agency requests and executive orders. With a handwritten budget in place, it was turned over to the secretarial staff for typing. The early word processors (such as Wangs) helped, but the process still culled information from many places over several weeks. Once the budget was adopted, it was again hand-keyed into the mainframe financial application so that budget execution could be monitored. Seamlessness was a dream, and transaction costs were high. In 1985 the budget process in the county began to change when the entire professional staff received blazing 286 PCs with hard drives and a software package that included spreadsheet, word processing and database applications. Two of the budget analysts, who became the gurus in the office, were instrumental in the decision to purchase PCs for the entire professional staff, and budget office executives signed off on the decision. While some elected officials worried that PCs might be a fad, it eventually became clear that PCs would play a major and continuing role in budgeting, as well as in implementing end user IT across the government.
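The "what if" analysis the spreadsheets enabled amounted to recomputing budget lines under alternative revenue assumptions, the kind of recalculation the manual green worksheets made slow. A minimal sketch follows; the agencies, figures and proportional-scaling rule are all invented for illustration and do not reflect the county's actual budget method.

```python
# Minimal "what if" sketch of the spreadsheet-style analysis that
# replaced the manual green worksheets. All figures and the
# proportional-scaling rule are invented for illustration.

requests = {"Schools": 410_000,
            "Social Services": 265_000,
            "Public Safety": 180_000}

def what_if(requests, available_revenue):
    """Scale agency requests proportionally to fit a revenue scenario."""
    total = sum(requests.values())
    factor = min(1.0, available_revenue / total)  # never scale above requests
    return {agency: round(amount * factor)
            for agency, amount in requests.items()}

# Scenario: revenue covers only 90% of the total requested
print(what_if(requests, 769_500))
```

Rerunning the scenario under a different revenue figure is one function call; on the green worksheets, every such scenario meant reworking the sheet by hand.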
By the late 1980s and early 1990s, top county executives, professional and elected, were more supportive of end user IT, seeing PCs and end user IT as a mark of good management. Starting in 1990, the budget office, under DP’s direction, installed a LAN to connect everyone in the office.1 In 1996 connectivity among other agencies became a reality with the introduction of an intranet. Around the same time, the budget office and county went online, beginning to build Internet Web pages for outside IT end users. The intranet and Internet Web pages gradually meshed, with some of the data from the intranet being displayed on the Web.
Analyzing the Evolution of End User Information Technology Performance 197
The dual deployment of the intranet and Web over time has redefined who the IT end user is: first those within the budget office, then those within county agencies and finally anyone interested in the county. As a result, one of the primary issues confronting the budget office as it moves into the 21st century is determining what information outside users should be able to access.
METHODOLOGY

Stages of IT Design and Implementation

Theories concerning reasons for successes and problems with IT and its evolution come from a stages model encased in a hierarchical milieu (Nolan, 1973, 1979; Gibson and Nolan, 1974; McKenney and McFarlan, 1982). In essence, the original stage theorists suggested that the early stages of instituting information technologies are typically isolated and experimental, with little managerial control over application development and spending. But once users and managers feel comfortable with new technologies, more organizational control and connectivity are established over subsequent design and use of these technologies. How technologies in a data-processing environment would evolve was described best by Nolan's (1979) stages of initiation, contagion, control, integration, data administration and maturity. Theorists who focused on a stages approach in an end user computing environment also envisioned uncoordinated growth followed by control and integration. In particular, Huff, Munro, and Martin (1988) anticipated that as end user computing matures, computing would become more interconnected and less stand-alone in nature. Munro and Huff (1988) reiterate that:

Over the long term…most firms will migrate toward the controlled growth state. In the controlled growth situation, the organization has in place the policies and resources to enable end users to acquire technology without significant difficulty, and to enjoy steady improvement in their ability to apply the technology through effective support programs. (p. 18)

This envisaged controlled growth state for end user computing is similar to the maturity stage predicted in the data-processing environment. The stages model is by no means without its critics. Drury (1983) did not find adequate evidence to support the idea that organizations were in only one stage at a time.
Nor did Drury find that managers reacted similarly as they dealt with the crisis of moving from an uncontrolled stage to a controlled stage. Consequently, he concluded that the stages model was a reference point to help organize IT analysis but not a model for prediction. Given our analysis, we find that the traditional stages model is somewhat rigid because of its embedded assumption of linear progression. While it is a framework meant to flesh out the common experiences an organization undergoes during IT development, the stages model does not allow for the organizational dynamics that may affect stage progression, or for the lessons to be drawn from understanding those dynamics. Thus, the model utilized in this study is anchored in the organizational dynamics—the computing and social environment—relevant to IT evolution. The section below addresses how this model encompasses these issues.
The Analytical Model

Instead of focusing solely on the literal stages, the analytical model (see Figure 1) selected for this study has two major elements. One is an enhanced stages model, characterized by four variables capturing the computing and social environment. The other comprises the IT performance measures designed to assess the success or problems of the various characteristics of our model. As with the traditional stages model, the case assesses the organizational control structure over time. Was it loosely structured and experimental, or did it exude more organizational control? Beyond managerial control, driving forces (mainly who pushed for change), driving issues (what the policy focus was) and the techno-social context are used to understand the movement between stages, as suggested in part by Friedman (1994, 1999). Techno-social context points to IT sophistication and
198 Sacco & Hackler
social expectations, such as annually balanced budgets or display of service performance to outside users on the Internet. The inclusion of external forces, such as global economic competition and reinventing government, addresses the criticism that an internal-looking IT theory is insufficient to explain the use and evolution of end user IT (Yellen, 1997).

Figure 1: Analytical Model

Computing and Social Environment for Expanded Stages Model:
• Control structure
• Driving forces
• Driving issues
• Techno-social context

Success, Problems and Evolution of End User IT Tools:
• Redundancy
• Use
• Opportunism/inventiveness
• Satisfaction
• Productivity
• Decision quality
• Coordination/connectivity
• Global access to service performance
Judging the Design and Implementation Approach for End User IT

In order to judge the success, problems and evolution of end user IT resulting from the configuration and efforts during the study, a set of performance indicators was developed to measure items such as productivity, decision quality, work quality (errors), and coordination of end user IT.
• Redundancy: Did staff members use PCs or any end user computer technology to repeat what already existed on one of the automated systems?
• Opportunism (inventiveness): Did the staff take advantage of IT to automate or improve on processes done manually?
• Use: How extensive was use among the staff?
• Personal satisfaction: Did the technology live up to expectations?
• Productivity: Was more work generated with the same or less effort?
• Decision-making quality: Were good project selections made and were jobs completed in an improved manner?
• Coordination/connectivity: Did people, data and machines work together?
• Errors: Did the technology introduce logical and computational errors not seen in manual operations?
• Global access to service performance: How much information on the budget and impacts of the budget on service performance was available to outside IT end users?
Past work such as a study by Igbaria and Nachman (1990) suggested these indicators are relatively well accepted as tools for assessing IT success and problems. They (1990, p. 74) advocated that the criteria for judging MIS success be based on system usage, user satisfaction and quality of decision-making. Like IT itself, these indicators and definitions were not static. For example, coordination in the early years meant both interpersonal coordination among stand-alone users and coordination with the mainframe. Later the definition was expanded to connectivity to account for the LAN.
Finally, connectivity was extended from internal connectivity (inside the office and government) to external connectivity (namely, use of the Internet and the Web to handle presentation of service performance data). Also, an indicator was added when it became apparent that access to internal decision making by IT end users outside the government (e-democracy) and the ability of IT end users outside the government to do business electronically (e-government) were becoming important.
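The indicator set can be read as a simple ordinal rubric applied to each study period. As a minimal illustration (the encoding and helper function below are our own, not a tool described in the case), this sketch scores a few indicators for two periods using ratings of the kind reported later in Table 4 and lists which indicators improved:

```python
# Sketch of the study's performance-indicator rubric as an ordinal scale.
# The SCALE mapping and improvements() helper are illustrative assumptions;
# the example ratings mirror values reported in Table 4.

SCALE = {"none": 0, "non-existent": 0, "low": 1, "medium": 2, "high": 3}

ratings_1985_88 = {
    "use": "low",
    "satisfaction": "low",
    "productivity": "medium",
    "coordination/connectivity": "low",
}

ratings_1993_94 = {
    "use": "high",
    "satisfaction": "medium",
    "productivity": "high",
    "coordination/connectivity": "high",
}

def improvements(before, after):
    """Return the indicators whose ordinal rating rose between two periods."""
    return sorted(
        name for name in before
        if SCALE[after[name]] > SCALE[before[name]]
    )

print(improvements(ratings_1985_88, ratings_1993_94))
# → ['coordination/connectivity', 'productivity', 'satisfaction', 'use']
```

Treating the ratings as ordinal data, rather than narrative labels, is what allows the longitudinal comparisons summarized in Table 4.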
Case Selection and the Interview Process

In the mid to late-1980s, it was apparent that end user computing, with the PC as an integral element, would be an important IT tool. As a result, several case studies were initiated at all levels of government. Very soon after the first round of studies, the county budget office was selected over five other cases for more in-depth analysis, given its greater commitment to PCs. While its assertive PC acquisition behavior might predispose it to achieve greater success than less-aggressive subjects, it provided a better return because it was a rich domain in which to investigate implementation and use. Because of the rapid growth and change in end user IT, it became apparent that a longitudinal study was in order. As with many longitudinal case studies using interviews, the desire was to include the same people during each visit. For the most part this was the case. The same lead PC person was interviewed each time except in 2000.2 On that occasion, the person interviewed in 1990 had become the IT person for the budget office and so became the appropriate person to interview again. The same person interviewed in 1988 from the DP office was interviewed in 2000 as a representative of DIT (see Table 2 for more description). To gain some baseline data, structured surveys were conducted in 1988 and 1994 comparing all IT performance indicators except global access. Table 2 summarizes the investigation's approaches. Several caveats need to be mentioned with respect to the design of the case study and its longitudinal nature. First, the case strategy permits a close look at events but suffers from lack of generalizability (Benbasat, Goldstein and Mead, 1987). In order to narrow the problems posed by lack of generalizability, important characteristics of the case are highlighted to give a sense of where the case might be most and least applicable.
The main characteristic to appreciate is the highly professional quality of the staff. The second caveat is that several data-collection techniques were used over the period of study. However, some continuity was maintained by the fact that the same person was involved in each data-collection effort in all years except 2000.
CASE DESCRIPTION

Utilizing our enhanced stages model, we found that the stages predicted by the traditional stage theorists in the data-processing environment and the end user computing environment were too static and linear. Our findings reflected the importance of organizational dynamics in understanding IT development. In addition, we found a considerable amount of overlap between the stages as well as unexpected shocks preventing attainment of the maturity stage. The stages found in our case were: 1) initiation, islands, gurus and experimentation; 2) expansion, connectivity and specialists; and 3) global horizons. The differences from the traditional stages model emerge with a deeper investigation into the stages. Only the first stage of experimentation was similar to the traditional model. However, each case stage, including the first, incorporated crises and solutions without top management playing a major role. This result is unlike the traditional stages model, in which the solution follows in a later stage. For example, in the first stage, gurus suggested solutions to overcome the crisis at hand—lack of use. In the second stage, both gurus and specialists dealt with the connectivity crisis since some of the IT specialists were former gurus. The bond between gurus and IT specialists continued in the global horizons stage, although the participation of top management and elected officials was necessary to develop solutions to meet competitive and outside demands. The overall lesson is that understanding the organizational environment and resulting dynamics was essential in characterizing the stages of IT development. For a full description of IT development, organizational dynamics cannot be ignored. What follows is a detailed description of the case during three periods of time.
Each discussion follows the analytical model with the computing and social environment of the period followed by the successes, problems and evolution associated with that particular environment.
Table 2: Interview Process

One-on-one interviews in the budget office:
• 1987-88: In-depth interview with lead PC person and six end users
• 1990: In-depth interview with same lead PC person and one colleague
• 1992: In-depth interview with same lead PC person
• 1994: In-depth interview with same lead PC person
• 1998: In-depth interview with same lead PC person
• 2000: In-depth interview with IT person interviewed in 1990

Survey instruments in the budget office:
• 1987-88: Structured survey, 17 of 30 returned
• 1990: None
• 1992: None
• 1994: Structured survey comparing situation in 1987-88 with 1994, completed by same lead PC person and same colleague from 1990 interview

One-on-one interviews outside the budget office:
• 1987-88: In-depth interviews with person in data processing office who was familiar with budget office
• 1990: None
• 1992: None
• 1998: In-depth interview with person in data processing office familiar with budget office
• 2000: In-depth interview with person in data processing office familiar with budget office (same person interviewed in 1990)
Initiation, Islands, Gurus and Experimentation: 1985 to 1988

This period is labeled "initiation, islands, gurus and experimentation." Little upper management control, the major role of gurus, and a search for productivity within a traditional balanced budget framework characterize the period. It is not far from any of the traditional stages models, except that crises (lack of use) and solutions occurred very quickly, often on the spot, in the same stage.

Computing and Social Environment

The budget office's procurement of PCs in 1985 was the starting point of the analysis. The local government's central administration left end user procurement and implementation to the operating units and only set general guidelines on hardware (IBM or PC compatibles), software (for example, WordPerfect) and training policy. The budget office realized it was under pressure to use the PCs wisely since it was one of the first offices to acquire the machines for every staff member. The office adopted only the government's standards for hardware and software; it did not establish an explicit, detailed strategy for using the PCs. Table 3 for the 1985 to 1988 period shows this as a loosely-structured approach under the control structure category, as would be indicated by work from organizational theorist Weick (1967) and stages theorists Nolan (1973) and McKenney and McFarlan (1982).
As for the driving forces category of the enhanced model, the office always had one or two staff members during this period whose avocation was PCs. While day-long training sessions were available, office gurus were the primary driving forces on design and implementation issues. Numerous studies show similar findings, echoing the importance of informal support in the early stages of end user IT deployment (Rockart and Flannery, 1983; Halloran, 1993; Lu and Wang, 1997). With respect to the driving issues of implementation, the office viewed these first PCs as a productivity device. The office staff often kept long hours, pressured by budget deadlines and the many requests for data during the year. The PCs were seen as a way to meet deadlines "without working overtime," as one IT professional stated. The techno-social context for this initial period mixed new with old. On the technical side, the notion of end user computer power via PCs was new and untested. On the social side, the demand on the budget office was still the annually balanced budget.

Table 3: Computing and Social Environment

Control structure:
• 1985-88: Loosely structured
• 1990-92: Balanced structure and experimentation
• 1993-94: Control and integration
• 1998: Control and integration with some new experimentation
• 2000: Internal control and integration faces global demand

Driving forces:
• 1985-88: Office guru and some training
• 1990-92: LAN, office guru, outside consulting, computer literacy
• 1993-94: LAN, strategic plan with outside help, DP office, office MIS person, guru
• 1998: Intranet, Internet, Web, some use of LAN, office IT person, Department of IT
• 2000: Intranet, Internet, Web, some use of LAN, office IT person, Department of IT

Driving issues:
• 1985-88: Productivity
• 1990-92: Connectivity
• 1993-94: Connectivity, planning
• 1998: Plans for service performance data on the Web
• 2000: Release of service performance data in the budget

Techno-social context:
• 1985-88: Mainframe, PCs, annually balanced budget
• 1990-92: Mainframe, PCs, LAN, annually balanced budget
• 1993-94: LAN, PCs, mainframe, annually balanced budget
• 1998: Intranet, LAN, PCs, Internet, Web, mainframe, annually balanced budget with service performance measures
• 2000: Intranet, Internet, Web, PCs, mainframe, LAN, annually balanced budget, long-term budgeting with service performance measures, the emergence of accrual accounting, global competition, reinventing government

Success, Problems and Evolution of End User IT Tools

What did the office do with this loosely structured approach, that is, reliance on individual experimentation? What impact did the driving forces (mainly the gurus), driving issues (productivity) and techno-social context (PCs and balanced budgets) have on performance? Table 4 indicates the results for the 1985-1988 period. Opportunism and inventiveness were most evident in a number of newly created PC applications that addressed functions the mainframe did not handle; this indicator received the only "high" mark in this period. One illustration was a revenue tracking system. With different
agencies handling different revenues (various taxes, fees and fines), one budget staff member consolidated these separate databases on a spreadsheet, allowing for tracking and estimating revenue for various periods. The system was institutionalized to the point that other agencies sent their estimates to this staff member in return for periodic reports on the total revenue picture. The revenue tracking system became so well known that citizens requested reports—representing an early version of the shift in the definition of IT end user from a person inside the office to include interested outside parties. Other niches that developed during this early period included systems that tracked capital spending and bond repayment. In sum, the PC allowed data collection and processing to be more inventive and decentralized. Norris (1988) found analogous opportunism and inventiveness in his case studies of PC use in government. The positive effects of PCs were “the ability to do work that had not been feasible before....[U]sers were able to examine more data, conduct more thorough analyses, and construct and evaluate a greater number of alternative courses of action with the microcomputer” (p. 143). Given the opportunistic and inventive strides just noted, it would seem that underuse of PCs would not be a problem. From the self-administered questionnaire carried out in this first visit, it appeared that more than half of the people responding said they used the PC mainly for writing memos and doing simple spreadsheets while some rarely used the PC. Consequently, those initial expectations about satisfaction and productivity were not fulfilled. Measures of satisfaction were low, and perception about productivity was only somewhat positive. The secretarial staff still used Wangs that forced them to retype the wordprocessed document. Islands of IT were readily apparent. The mainframe database, for instance, was essentially one massive file of budget and spending information. 
As a result, very little downloading occurred. Manual transfers from mainframe printouts to PC spreadsheets or wordprocessing remained the standard. The gurus saw this under-usage and made up for it with inventive applications that helped other employees to feel more comfortable with those “new” PCs. In some respects the budget office gurus and the IT specialist built a bond around the excitement of developing a LAN for the budget office.
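The revenue tracking niche described above was, at heart, a consolidation exercise: separate agency estimates pulled into one office-wide picture per period. A minimal sketch of that idea follows; the agency names, revenue sources and figures are hypothetical, and the case does not describe the actual spreadsheet's layout.

```python
# Sketch of the kind of consolidation the revenue tracking system performed:
# per-agency revenue estimates merged into period totals.
# All data below is hypothetical illustration.

from collections import defaultdict

# Each agency submits (period, revenue_source, estimate) rows.
tax_office = [("FY88-Q1", "property tax", 4_200_000), ("FY88-Q2", "property tax", 4_350_000)]
courts     = [("FY88-Q1", "fines", 310_000), ("FY88-Q2", "fines", 295_000)]
permits    = [("FY88-Q1", "fees", 120_000), ("FY88-Q2", "fees", 135_000)]

def consolidate(*submissions):
    """Total estimated revenue per period across all agency submissions."""
    totals = defaultdict(int)
    for rows in submissions:
        for period, _source, estimate in rows:
            totals[period] += estimate
    return dict(totals)

print(consolidate(tax_office, courts, permits))
# → {'FY88-Q1': 4630000, 'FY88-Q2': 4780000}
```

The design point mirrors the case: once agencies routed their estimates through one consolidator, periodic office-wide reports became a near-mechanical by-product.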
Table 4: Success, Problems and Evolution of End User IT Tools

Indicator                              1985-88       1990-92       1993-94  1998    2000
Redundancy                             Non-existent  Non-existent  Medium   Medium  Non-existent
Use of PCs                             Low           Medium        High     High    High
Opportunism/inventiveness              High          High          Medium   High    High
Satisfaction                           Low           Medium        Medium   Medium  Medium
Productivity                           Medium        High          High     High    High
Decision quality                       Medium        Medium        Medium   Medium  Medium
Coordination/connectivity              Low           Medium        High     High    High
Errors                                 Low           Low           Low      Low     Low
Service performance data               None          None          None     Low     Medium
Global access to service performance   None          None          None     Low     Medium
Expansion, Connectivity and Specialists: 1990-1992 and 1993-1994

Perhaps because of the rapid innovation and adoption of IT during the early to mid-1990s, no separation was found between expansion, connectivity and IT specialists as suggested in various stages models (Nolan, 1979; Munro and Huff, 1988). LANs, gurus, technical experts, and mainframes were all part of the picture in both 1990-1992 and 1993-1994.

Computing and Social Environment

What was the computing and social environment during this expansion, connectivity and IT specialist period? As for the control factor, end user IT design and implementation in the office moved from a loosely structured approach during the pre-1990 period, to a balanced approach in 1990-1992, and finally to one closer to control and integration in 1993-1994 (see Table 3 under control structure). The driving forces and issues during this period were intertwined. The driving force was the emergence of the budget office LAN in 1992. With the LAN, the office gurus began to move aside, supplemented by internal and external specialists and strategic planning. The resulting organizational change supported Nolan's (1997) finding that strategic planning followed periods of low control and limited planning, but expansion, connectivity and reliance on IT specialists occurred in tandem. Related to the LAN, the driving issue was connectivity. Addressing the complex issue of interconnecting the stand-alone PCs with a LAN required MIS expertise and strategic planning. Technical issues, such as adaptor cards, shared files and shared disks, required strategic planning and thus the attention of network engineers (Dantzer, 1994). As noted, the office gurus still played a role, and a more substantive one: they identified the data to be shared, the budget forms to be automated and, even more importantly, the logic of arraying the budget forms.
The techno-social context also began to change during this period, but without any direct impact on the budget office. By the early and mid-1990s, the use of the annually balanced budget as an effective managerial vehicle was under question (Ball, 1994). In the mid-1990s the county legislators and executive commissioned a study on the popular idea of "re-engineering" government. This study led to several government privatization and contracting-out programs.

Success, Problems and Evolution of End User IT Tools

An environmental transformation occurred in this period—from a loosely-structured system to an environment with more control, forward-looking strategic planning, cooperation between office gurus and IT specialists, and a focus on connectivity. This transformation brought a number of positive changes in the performance criteria for end user IT. The central indicator of IT performance for this period, connectivity, climbed from low to medium (Table 4). Many of the automation gaps between the budget estimates generated by the mainframe program and the final adopted budget were addressed by the LAN. LAN-based standardized wordprocessing software as well as common forms to prepare the final adopted budget helped to fill many of the automation gaps. Data on the LAN could be downloaded to PC spreadsheets and then uploaded to the LAN. However, the hope of using the LAN to summarize data from lower-level agency units into larger budgetary categories, such as the general fund, was not really achieved. That summarization process still took place in the mainframe financial application. The use of PCs indicator increased from low (1985-1989) to medium (1990-1992) to high (1993-1994), as indicated in Table 4.
While some staff were still experts and possessed skills above those of the average user, everyone during this period used much of the software available on the LAN and PCs compared to the baseline measure of about half of the staff in the earlier PC-only period.3 With common budget forms available to all county agencies and the budget office, productivity also improved. Once the data were on the LAN or PC, manual changes were often not needed. “What if” calculations and presentations could be done more quickly, although such analyses were still stranded in the PC spreadsheet domain. As managers realized “what if” scenarios could be automated, it became
the practice and expectation of managers to “drop” a request on the desk of a staff person with the expectation that it be addressed that day or soon after.
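Those "what if" requests amount to applying an assumed percentage change to selected budget lines and recomputing the totals, much as the analysts did in their spreadsheets. A minimal sketch follows; the line items and the 3% scenario are hypothetical, not figures from the case.

```python
# Sketch of a spreadsheet-style "what if" recalculation: apply percentage
# changes to chosen budget lines and compare totals.
# Line items and the scenario below are hypothetical illustration.

budget = {"salaries": 12_500_000, "equipment": 900_000, "contracts": 2_100_000}

def what_if(lines, changes):
    """Return a new budget with percentage changes applied to chosen lines."""
    return {
        name: round(amount * (1 + changes.get(name, 0.0)))
        for name, amount in lines.items()
    }

scenario = what_if(budget, {"salaries": 0.03})  # a 3% salary increase
print(sum(scenario.values()) - sum(budget.values()))
# → 375000
```

Automating this kind of recalculation is what turned a day of green-sheet arithmetic into a same-day answer, and thus into the managerial expectation described above.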
Global Horizons: 1998-2000

Maturity and the internal crises that play a role in the traditional stages model were transcended in this last period. While not necessarily a new paradigm, it was a shock that gives the period its title, "Global Horizons." The IT end user can now be anywhere in the world with little or no control from the target organization.

Computing and Social Environment

With respect to the control structure factor of the computing and social environment, the office's IT implementation took a new turn vis-à-vis control in the latter part of the 1990s. Until the mid-1990s, managerial and planning control was evident as technologies were adopted and applied to increase automation of the budget cycle. The goal of the driving forces and driving issues as well as the techno-social context until the mid-1990s was to make a large government operate more smoothly, with the preparation of balanced budgets as the key element. However, in the latter part of the 1990s, new techno-social phenomena emerged, affecting all facets of the computing and social environment. Control as well as driving issues and forces had to deal with internal and external demands for access and information. On the technology side of these new techno-social phenomena, the intranet emerged, which had the potential to add powerful countywide connectivity, search, and analytical capabilities to the LAN technology of the earlier period. Another technology was the use of the Internet for broadcasting information outside of the agency. On the social side, the reinventing government movement increased the importance of government's transparency to parties outside of the county who had an interest in county achievements. Intertwined with these phenomena was the shock of global competition. Governments were now expected to compete with other sectors, private and nonprofit, for the right to deliver services. Annually balanced, cash-oriented budgets no longer met the competitive challenge.
Government's foremost response was the provision of performance measures on service accomplishments realized or not realized with the dollars budgeted. The conveyed results could be quite disturbing, especially with the Internet delivering both good and bad news on service performance and accomplishment. The budget agency experienced some, but not serious, reluctance from top officials to place performance measures on the Internet. Several of the legislators and top executives were committed to providing service performance results on the county Web page, albeit in phases. To an extent, this progression to data exposure outside the agency fits Nolan's (1977) advanced stage, data administration, which focuses on obtaining and using good data. However, "data administration" was now both internal and external. The budget office's technology built a bridge to allow information to flow to outside parties, even though the flow of traffic between the budget office IT end users and external end users had many roadblocks. What an external end user requested or searched for was not always available, and deciding what should be available was not an IT question but a political one.

Success, Problems and Evolution of End User IT Tools

While the computing and social environment received major shocks, several of the performance indicators were higher (see Table 4). Most important was that global access went from low to medium. Outsiders had better access and more data, particularly service and accomplishment data. Satisfaction increased since more of the tedium associated with manual rekeying of data was gone. Because of the intranet, agencies accessed Web-enabled budget office forms and filled them out on their terminals; they were no longer reliant on the mainframe database for this budget preparation step. In Herman's (1997) eyes, "[T]he islands of data and automation that once characterized most organizations are now being united" (p. 20). This period also witnessed some resurgence in
opportunism and inventiveness. A personnel application that tracked job vacancies in agencies significantly expanded to also track the budget impact of salary rewards (pay for performance) and departing employee payoffs. With agency budget requests more fully automated and with the presence of a full-time IT staff member, multiple budget-related tasks became possible. Budget analysts commonly requested downloads from the mainframe in order to do more refined analysis of agency budget requests. With service performance data available, they more easily analyzed compliance and performance issues related to the budget. Thus, the quality of the decisions vis-à-vis the budget rose slightly from its earlier medium rating in Table 4.
CURRENT CHALLENGES/PROBLEMS FACING THE ORGANIZATION

Several challenges face the organization:
• IT theory is not adequate to predict or to guide deployment; the organization needs to be adaptive.
• The rapid and changing nature of IT has forced, and will continue to force, the organization to change its goals.
• Resource limits have forced continued use of legacy systems.
• Major social changes, especially global market forces, will place new demands on the type of data needed and access to that data.
• Managers must address external shocks in long-term IT management models.
Taken together, these challenges require the budget office and the county to behave as if they were private-sector entities in a global market, providing a wider range of information electronically to external users. In regard to the first challenge, the traditional stages model rightly notes the problems and crises of moving to effective new levels of IT deployment. But in utilizing our enhanced stages model, we find that IT development cannot be studied separately from the organizational environment and resulting dynamics. While the locus for government IT was mainly internal in the past, the new challenge is external, global and competitive. In broadening the IT stages focus from internal to external, the 15-year period covered in this study produces a number of surprises. The development of IT in the form of the Internet and the Web allows several external environmental demands to surface. The manageable dilemma of balancing experimentation and control in the deployment of PCs in 1985 is rendered more complex by global competitive markets and reinventing government. These demands require the budget office to develop and reveal new data on service performance and accomplishments to outside IT end users.
The longitudinal nature of the case study not only provides insight into the way IT was implemented, but it also points to how the rapidly changing nature of IT tools can reshape an organization’s goals and responsibilities. While external forces compelled the budget office to set and meet performance criteria and compete with private and nonprofit sectors for the right to deliver services, the IT tools made these responses possible. The evolution of end user IT tools in the office promotes better performance because more budget office employees are able to utilize IT in their jobs. The diffusion of IT skills from the gurus and the support of continued skill development by the IT specialists enable the office as a whole to find appropriate IT solutions to external challenges. However, one of the greatest challenges facing the county budget office and other organizations is the rapid pace of IT’s development, which relegates more systems to “legacy” status and hinders the integration of new systems. Obsolescence and legacy systems directly affected this budget office when it readied its mainframe budget application for Y2K compliance. Since other agencies in the county made similar choices, the amount of money and effort expended to continue the life of a legacy system now limits the office’s and the county’s ability to fully embrace an enterprise resource planning
206 Sacco & Hackler
solution. Too much has been invested to replace these Y2K-compliant legacy systems before they are fully depreciated. The nature of the public-sector organization, with accountability and responsibility to the citizenry taking precedence over leading-edge IT deployment, makes IT obsolescence and integration a significant challenge. With IT progressing and the county attempting to keep up with it, more pressure is likely to be on the horizon for governmental dissemination of information. IT deployment has moved the budget office from islands of IT with PCs, mainframes and Wangs, to established routes of connectivity with the LAN, and finally to providing a common bridge for all IT islands, agencies and even external IT end users through the Internet. However, while the common Web infrastructure exists among all, the flow of information over this interface is not without its roadblocks. The budget office, like other governmental organizations, is up against political forces, and with IT’s rapid progression, it may not be able to ignore the growing demands of outside parties for specific data. While the budget office, with the aid of the DIT, continues to work on connectivity within the budget office via more Web enabling of its still scattered data and applications, the office faces the challenge of developing an information policy that assists in the selection of what types of data should be displayed. The policy should be flexible enough to guide and improve the flow of traffic over this common Web platform. The significance of this political decision is that this budget office will not be the only government agency addressing who should be defined as the ultimate IT end user. Given the evolution of who the IT end user is, this trend also presents the budget office with a greater responsibility to ensure that the IT tools provide the means to produce and display desired data.
With less than forceful management control being found in the later periods of the case study, the question of how future implementations of IT tools are evaluated is important. Garnering feedback from internal users is a difficult task, but not nearly as daunting as determining the level of support that IT tools should provide the growing number of external end users. Just as the above-mentioned problem deserves an information policy, this policy must also address the evaluation of IT tools. The budget office and county need to know how IT tools improve end user sophistication and productivity in order to garner the greatest gains. Additionally, the office should understand how far the global market economy will push most IT policies and decisions. The global market has made counties compete not only for the right to deliver services but also for businesses and residents. The budget office is a repository of important financial data detailing the economic health of the county, and enhanced IT tools enable the budget office to meet the concerns of investors, creditors, businesses and residents. And while it may be in competition with other counties, this budget office’s award-winning status makes it and its end user IT tools the envy of many peer counties. The global economy will continue its influence, and IT tools are one way in which this budget office chooses to rise to the challenge. The enhanced stages model proposed in this study, with the addition of the techno-social environment and the concern about assessing end user IT, also combines and exposes particular challenges for managers. While internal crises are now reasonably well understood, external shocks must become part of long-term IT management models. In this case, the major external shocks were the reinventing government movement, the external demand for solid information about service performance by global capital markets and the technical availability of “world casting” service performance results.
Managers must be able to address such shocks and become sensitive to predicting subsequent shocks in order to effectively manage organizational change. Will the bond between the guru and the IT specialist break down as IT permits more centralization? Additionally, what would end user IT and IT in general look like if the so-called anti-global market and Green activists become important forces in government and industry? Will IT need to directly address environmental monitoring, scanning of work conditions and close surveillance of income and wealth? All are examples of external shocks that managers should monitor. Enormous swings in government and social policy have taken place during the past century. Those swings will continue, and they also should be anticipated in IT management and development
Analyzing the Evolution of End User Information Technology Performance 207
models. Further stages model research would benefit from keeping the stages at a higher level of abstraction instead of the literal stages that are characteristic of traditional models. Such an effort was made in this study. Rather than focusing solely on the static stages of initiation and expansion, we introduced a more abstract notion of a computing and social environment, the nature of IT development within this environment, and the resulting IT performance. Our findings suggest that the abstract approach is appropriate in this age of the continually evolving organization and rapid IT growth.
ENDNOTES
1. While other offices also installed LANs, DP decided network administration would be easier if LANs were not connected through a wide area network.
2. Some follow-up interviews were conducted in early January of 2001.
3. In viewing this period’s performance, it is important to note that new employees hired in the early 1990s were more knowledgeable about using PCs and networks than in the initial period of the study.
FURTHER READINGS
Antonelli, C. & Geuna, A. (2000). Information and Communication Technologies and the Production, Distribution and Use of Knowledge. International Journal of Technology Management, 20(1/2), 72-94.
Awad, E.M. (1997). From Data Processing to Artificial Intelligence: What Does the Future Hold? Journal of End-User Computing, 9(3), 32-35.
Brown, C.V., & Bostrom, R.P. (1994). Organization Designs for the Management of End-User Computing: Reexamining the Contingencies. Journal of Management Information Systems, 10(4), 183.
Christiaanse, E. & Huigen, J. (1997). Institutional Dimensions in Information Technology Implementation in Complex Network Settings. European Journal of Information Systems, 6(2), 77-85.
Disher, C., & Walters, R. (1998). IT Model Balances Old, New. InformationWeek, 647, 11ER-12ER.
Galletta, D.F., & Hufnagel, E.M. (1992). A Model of End-User Computing Policy: Context, Process, Content and Compliance. Information & Management, 22(1), 1-18.
Gammack, J.G. (1999). Constructive Design Environments: Implementing End-User Systems Development. Journal of End-User Computing, 11(1), 15-23.
Giorgio, G. (2000). How Should Technology Be Managed in the Post-Fordist Era? International Journal of Technology Management, 20(1/2), 1-19.
McFarlan, W.F., McKenney, J.L., & Pyburn, P. (1983). The Information Archipelago: Plotting a Course. Harvard Business Review, 61(1), 145-155.
McLean, E.R., Kappelman, L.A., & Thompson, J.P. (1993). Converging End-User and Corporate Computing. Communications of the ACM, 36(12), 79-92.
Raho, L.E., Belohlav, J.A., & Fielder, K.D. (1987). Assimilating New Technology into the Organization: An Assessment of the McFarlan and McKenney Model. MIS Quarterly, 11(1), 47-57.
Tafti, M.H., Mohammed, H.A., & Ashraf, I. (1997). Hierarchy of End User Computing Needs: An Empirical Investigation. Journal of End-User Computing, 9(4), 29-35.
REFERENCES
Ball, I. (1994). Reinventing Government: Lessons Learned from the New Zealand Treasury. The Government Accountants Journal, Fall, 19-28.
Benbasat, I., Goldstein, D., & Mead, M. (1987). The Case Research Strategy in Studies of Information Systems. MIS Quarterly, 11(3), 369-388.
Dantzer, V.H. (1994). Develop a Strategy. Computing Canada, 20(10), 54.
Drury, D.H. (1983). An Empirical Assessment of the Stages of DP Growth. MIS Quarterly, 7(2), 59-70.
Friedman, A.L. (1999). Rhythm and the Evolution of Information Technology. Technology Analysis and Strategic Management, 11(3), 375-390.
Friedman, A.L. (1994). The Stages Model and Phases of IS Field. Journal of Information Technology, 9(2), 137-148.
Gibson, C., & Nolan, R. (1974). Managing the Four Stages of EDP Growth. Harvard Business Review, 52(1), 76-88.
Halloran, J.P. (1993). Achieving World-Class End-User Computing: Making IT Work and Using IT Effectively. Information Systems Management, 10(4), 7-12.
Herman, J. (1997). Managing Intranets and Extranets. Business Communications Review, 27(8), 20-21.
Huff, S.L., Munro, M.C., & Martin, B.H. (1988). Growth Stages of End User Computing. Communications of the ACM, 31(5), 542-550.
Igbaria, M. & Nachman, S. (1990). Correlates of User Satisfaction with End User Computing: An Exploratory Study. Information and Management, 19(2), 73-82.
Lu, H.P., & Wang, J.Y. (1997). The Relationship between Management Styles, User Participation, and the System Success over MIS Growth Stages. Information and Management, 32(4), 203-213.
McKenney, J., & McFarlan, F. (1982). The Information Archipelago: Maps and Bridges. Harvard Business Review, 60(5), 109-119.
Munro, M., & Huff, S. (1988). Managing End User Computing. Journal of Systems Management, 39(12), 13-18.
Nolan, R. (1973). Managing the Computer Resource: A Stage Hypothesis. Communications of the ACM, 16(7), 399-405.
Nolan, R. (1979). Managing the Crisis in Data Processing. Harvard Business Review, 57(2), 115-126.
Norris, D. (1988). High Tech in City Hall: Use and Effects of Microcomputers in United States Local Governments. Social Science Computer Review, 6(2), 137-146.
Rockart, J., & Flannery, L. (1983). The Management of End-User Computing. Communications of the ACM, 26(10), 776-784.
Weick, K. (1967). The Social Psychology of Organizing. Reading, MA: Addison-Wesley.
Yellen, R.E. (1997). End User Computing in a Global Environment. Journal of End-User Computing, 9(2), 33-34.
BIOGRAPHICAL SKETCHES John Sacco is an Associate Professor of Government and Politics in the Department of Public and International Affairs at George Mason University in Fairfax, Virginia. His teaching and research interests include government finance and accounting, information technology management and international political economy. He has been active in developing and delivering online courses. His primary research analyzes the impact of global economic markets on government finance and financial reporting. Professor Sacco has advanced degrees in financial accounting and political science. His Ph.D. in political science is from Penn State University. He also holds a B.S. in data processing. Darrene Hackler is an Assistant Professor of Government and Politics in the Department of Public and International Affairs at George Mason University in Fairfax, Virginia. Her teaching and research interests include information technology policy and management in government and nonprofits and information technology and economic development in regional economies. Her primary research analyzes industrial location of high-technology industry and its relationship with regional and local economic development policies and information technology innovation in the non-profit sector. She received her M.A. in public policy and Ph.D. in political science and economics, with specializations in information technology, quantitative methods and public policy, from the Claremont Graduate University in Claremont, California.
Adopting IT: Food Program Sponsor Discovers It’s No Picnic 209
Adopting IT: Food Program Sponsor Discovers It’s No Picnic John M. Anderson and William H. Gwinn University of North Carolina at Wilmington, USA
EXECUTIVE SUMMARY
Small companies are often reluctant to try innovative approaches to information management because of the cost of hardware and software, the potential disruption of processes already dependent on overstressed resources and the lack of in-house expertise. This case examines the information technology (IT) implementation experience of one small nonprofit company that provides administrative services for child care providers. Like many companies of all sizes, the focal company realized it must adopt new information technologies in order to survive. The company fit the profile for small companies just entering the world of IT. It experienced the expected internal problems associated with change. And then it discovered that its size and its relationship to government oversight agencies, themselves struggling to implement IT, posed special threats to its survival.
BACKGROUND
The last half of the twentieth century saw a major movement of women out of the home and into the workforce. With that move came an increased demand for child day care that, in turn, spawned tens of thousands of family day care homes and day care centers, most of them licensed small businesses. Besides providing day care services, many of them participate in various state and federal programs aimed at subsidizing working parents, providing pre-school education to children, and improving nutrition among children of working parents. The company in this case — Quality Care, Inc. (QCI), a pseudonym — is a food program sponsor whose primary business is to administer day care homes and centers that participate in the federal government’s Child and Adult Care Food Program (CACFP). (See the list of Online Resources at the end of the case for links to Web sites related to the CACFP.) A large part of the state-licensed sponsor’s function consists of processing documents for its supervising state agency. Sponsors are compensated for their services based on a federal rate schedule keyed to the number of clients served. As a sponsor’s client list grows, the paperwork burden grows proportionately, but the marginal rates are regressive. At some point, a sponsor choosing to
Copyright © 2002, Idea Group Publishing.
210 Anderson & Gwinn
increase its revenue by adding clients must turn to information technology to process the increased paperwork at reduced cost and within mandated deadlines. When QCI’s owner made the decision to incorporate information technology into its processes, the company fit the profile for a small business just entering the world of information systems (DeLone, 1988; Nooteboom, 1988; Igbaria & Zinatelli, 1997; Soh et al., 1992):
• it could not afford to employ internal IS staff;
• it had a general lack of computer knowledge;
• it had inadequate hardware and software;
• it needed to rely on outside resources;
• it lacked financial resources and technical support;
• it had recruitment difficulties;
• it had a short-range management perspective imposed by a volatile competitive environment.
As pointed out in Taylor (1999), small businesses’ implementation challenges are often more daunting as a result of those conditions. Many of the motivators and inhibitors described by Cragg and King (1993) appear in the case. Perhaps the most pertinent to this case is the significance of the owner’s level of enthusiasm. While the usefulness of a newly implemented information system was immediately apparent to both QCI’s staff and clients, the sponsor’s staff experienced varying individual rates of acceptance, giving rise to serious internal problems. Davis’ (1989) observations with respect to perceived usefulness versus ease of use and their relative impact on acceptance are reinforced in the case. But the literature says little about the effect of discordant rates of technology implementation within and between the levels of an industry dominated by small businesses. Rates of technology implementation were different between QCI and its state oversight agency, and between the state and federal oversight agencies.
Those varying rates of implementation coupled with a lack of coordination among organizations at various levels in the industry made industry-wide adoption of information technology appear chaotic. The inevitable result was increased sponsor uncertainty.
SETTING THE STAGE
Since 1969, the U.S. Department of Agriculture (USDA) has funded the CACFP with the goal of providing nutritious meals to adults and children who are in day care facilities. By 2000, the program had reached an annual funding level of $1.7 billion and served over 2.4 million children. (See Tables 6 and 7 in the Appendix for data on the CACFP.) To administer the program, the USDA makes grants to the states that, in turn, designate administrative oversight agencies. Each state is responsible for establishing its own policies and procedures for the program’s operation, subject to administrative guidelines provided by the USDA and the enabling federal and state legislation. In the State of North Carolina, the administrative responsibility for the program rests with the Nutrition Services Section (NSS) of the Department of Health and Human Services. The NSS has a staff of 15 state employees who administer a variety of nutrition-related programs, including the CACFP. More than 5,000 day care homes and day care centers participate in one or more of the programs administered by NSS. To support them, NSS works directly with the 100 county governments, each of which has a department that deals with nutrition programs. In addition, NSS contracts with more than 40 nonprofit food program sponsors across the state for additional administrative support. Participation in the CACFP is voluntary on the part of a day care provider (a home or center), and each participating provider must choose either NSS or a sponsor for its administrative support. In its claims processing role, QCI collects and processes data on providers and their enrolled children, and submits claims for reimbursement on their behalf.
Sponsors are responsible to the state for nutrition training, food safety and reimbursement for meals served by participating homes and centers. Under the CACFP, meals and snacks served to children up to the age of 12, and which meet certain nutritional guidelines, are reimbursable. QCI provides services to over 500 homes, 200 centers and more than 14,000 children in an area covering the eastern half of the State. Each client home or center must be visited several times a year by QCI staff. While more than half of their clients are located within 75 miles of QCI, some are more than twice that distance away. Table 1 gives an overview of the homes, meals, and reimbursement amounts associated with QCI for fiscal year 2000. A day care home is typically a small family business in which the owner takes in a few children other than his/her own and provides care for a fee. Depending on a variety of qualifying requirements, the home may have as few as two children or as many as several dozen. The day care center, on the other hand, can be a nonprofit or for-profit business, and is subject to more stringent standards. A center may have hundreds of children enrolled. Day care homes participating in the program are subject to a system called “tiering” in which each provider is classified on the basis of income, school district of its children, family income for each child and other considerations. The rate of reimbursement for meals is affected by the tier classification of the day care home. Most centers include meals as part of their fees. Centers receive payments based on the type of meal served and the child or adult’s eligibility for free, reduced-price or paid meals, while shelters and after-school care programs are reimbursed at the free rate. Table 2 shows FY2000 reimbursement rates for centers. Day care homes cannot charge separate fees for meals. Higher payments (Tier I) are paid to homes in low-income areas and to low-income providers. 
Meals and snacks served to children who are eligible

Table 1: QCI Reimbursements Processed for Homes–2000
(Meal columns show meals and snacks served; reimbursement is from the USDA.)

Month        Homes   Avg. Daily    Tier I     Tier II High   Tier II Low   Total      USDA Reimbursement
                     Attendance
January      452     2,031         150,621    4,341          14,162        169,124    $164,197
February     466     2,412         164,453    5,261          14,497        184,211    178,148
March        469     2,580         189,879    7,495          14,169        211,544    206,542
April        471     2,216         157,841    6,221          12,936        176,998    173,620
May          480     2,561         187,633    6,795          16,102        210,529    204,799
June         470     2,490         189,298    7,322          14,725        211,344    213,062
July         472     2,094         154,719    7,591          21,820        184,130    183,804
August       480     2,508         172,194    9,073          25,963        207,230    199,693
September    477     2,310         150,740    7,577          23,387        181,704    173,466
October      477     2,443         164,876    10,774         23,658        199,309    191,839
November     486     2,404         158,136    10,706         22,251        191,093    184,799
December     486     2,067         143,467    10,926         18,808        173,202    170,651
Table 2: Meal/Snack Reimbursement Rates for Day Care Centers

Meal Type          Free     Reduced-price   Paid
Breakfast          $1.09    $0.79           $0.21
Lunch or Supper     1.98     1.58            0.19
Snack               0.54     0.27            0.05
Table 3: Meal/Snack Reimbursement Rates for Day Care Homes

Meal Type          Tier I    Tier II
Breakfast          $0.92     $0.34
Lunch or Supper     1.69      1.02
Snack               0.50      0.13
Table 4: Sponsor Administrative Payment Rate

Number of Homes        Monthly Rate per Home
1 - 50                 $78
51 - 200                59
201 - 1,000             46
Each one over 1,000     41
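The payment arithmetic in Tables 3 and 4 can be sketched in a few lines of Python. This is an illustration only: the function names are invented, and reading Table 4's brackets as marginal rates (the first 50 homes at $78 each, the next 150 at $59, and so on) is an assumption consistent with the regressive schedule described earlier in the case.

```python
# Illustrative sketch of the FY2000 payment arithmetic in Tables 3 and 4.
# All names here are invented for this example.

# Table 3: per-meal reimbursement rates for day care homes (dollars).
HOME_RATES = {
    "Tier I":  {"breakfast": 0.92, "lunch_or_supper": 1.69, "snack": 0.50},
    "Tier II": {"breakfast": 0.34, "lunch_or_supper": 1.02, "snack": 0.13},
}

# Table 4: monthly administrative payment, read here as marginal brackets
# (an assumption): (number of homes in bracket, rate per home).
ADMIN_BRACKETS = [(50, 78.0), (150, 59.0), (800, 46.0), (float("inf"), 41.0)]

def home_reimbursement(tier, meal_counts):
    """USDA reimbursement for one home for one month."""
    rates = HOME_RATES[tier]
    return sum(rates[meal] * count for meal, count in meal_counts.items())

def admin_payment(num_homes):
    """Sponsor's monthly administrative payment for its client homes."""
    total, remaining = 0.0, num_homes
    for size, rate in ADMIN_BRACKETS:
        in_bracket = min(remaining, size)
        total += in_bracket * rate
        remaining -= in_bracket
        if remaining <= 0:
            break
    return total

# A Tier I home serving 20 each of breakfasts, lunches and snacks:
# 20 * (0.92 + 1.69 + 0.50) = $62.20.
tier1_month = home_reimbursement(
    "Tier I", {"breakfast": 20, "lunch_or_supper": 20, "snack": 20})
```

On this marginal reading, a sponsor with 486 client homes (QCI's December figure in Table 1) would receive 50 × $78 + 150 × $59 + 286 × $46 = $25,906 for the month.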
for free and reduced-price school meals also receive higher rates of reimbursement. FY2000 reimbursement rates for homes are shown in Table 3. Sponsoring organizations receive a monthly administrative payment for each client home from the state. The schedule for such payments for FY2000 is shown in Table 4. The sponsor fee for working with day care centers is determined by the sponsor. QCI charges each center 10% of its monthly reimbursement amount. QCI is located in Riverton, population 150,000, and one of the fastest growing metropolitan areas in North Carolina. With a growing number of manufacturing plants relocating to the area and a mushrooming tourist industry, the unemployment rate is low relative to other parts of the state and the demand for day care is increasing. QCI currently employs 22 full-time and part-time workers. The owner, Kate Carson, is a former public school teacher who started the business 15 years ago with a vision to improve the quality of child nutrition. Her unwavering focus on providing prompt and effective service to homes and centers became known throughout the network of providers and fostered a steady growth of loyal clients. Kate is an energetic and creative entrepreneur. She is motivated by the challenge to improve child nutrition and child care, but at the same time runs her business from the bottom line. She spends one or two weeks out of the month at QCI and the remainder on the road visiting either client homes or centers. In 1997, Kate organized QCI into three departments: Homes, Centers and Tutoring. Each department was headed by a salaried employee. The Homes department was managed by Betty Taylor. Betty spent a number of years as a public school teacher, but left teaching for a less stressful job. One of the selling points used by Kate when she recruits is the flexibility of hours, both when to work and how many hours to work. 
Most of the employees at QCI were paid by the hour and worked there because of that flexibility of schedules. Betty began as a part-time employee and quickly moved to full-time to take on the responsibilities of department manager. Besides Betty, there were six others who were involved in training, reviews and administering day care homes. Terry Mintz managed the Centers department. Terry taught elementary school for a few years before deciding that teaching was not for her. She tried several jobs before finding QCI. Kate convinced Terry that QCI was a growing company and that there would be expanding opportunities for her. Kate had six employees who were involved with Centers activities and administration. Janice Carter was responsible for Tutoring. Janice was a student at the local university and worked half-time in the afternoons overseeing the tutoring activities. QCI serves as a broker and connects public school teachers with children who need tutoring services. In the early years, QCI depended heavily upon tutoring services – and still offers those services – but the business has grown primarily due to its role as a sponsor in the CACFP.

To comply with CACFP administrative requirements, a sponsor must collect detailed data on each home and center, the children enrolled in the homes and centers, and on the number and content of the meals and snacks served each day for each child claimed by a provider. In early 1997, QCI required its providers to complete daily entries on a mimeographed form called a menu sheet showing by child what was served and when. Meal contents were evaluated and reimbursement was made only for those meals that met nutritional requirements. During the first week of each month, all of QCI’s employees gathered at the company’s office and evaluated the menu sheets as they arrived by mail and FAX. The state required all claims to be submitted by the 8th of the month in order to be reimbursed during that month. For the daycare homes alone, QCI processed close to 1,500 menu sheets by hand. Employees stayed into the night and spent weekends on the task, running up overtime. And time spent processing menu sheets represented an opportunity loss of nearly 25% in consulting time. Processing a menu manually involved visually checking entries on the top half of the menu sheet where providers write the quantities and types of food served at each meal. If nutritional requirements were not met, the meal was disallowed. The bottom half of a menu sheet showed a seven-day attendance record for up to a dozen children. The attendance checks were tallied both by child and by meal and the sums verified. Certain meal and snack combinations were not authorized and had to be discovered by visual inspection. After completing the assessment of the individual menu sheets, the results were entered into a spreadsheet which was used to produce totals. The spreadsheet applied the appropriate rates and produced reimbursement amounts. Kate would then key the reimbursement amounts into an accounting program to print reimbursement checks. NSS reimbursed QCI for payments made to providers. 
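The tally-and-verify step described above amounts to cross-footing: recomputing the attendance totals by child (rows) and by meal (columns) and comparing them with the hand-written sums. A minimal sketch, with invented names and illustrative data:

```python
# Hedged sketch of the manual cross-check: attendance marks are tallied
# both by child and by meal, and the recomputed totals are compared with
# the totals written on the sheet. Names and data are illustrative only.

def verify_tallies(marks, claimed_by_child, claimed_by_meal):
    """marks[child][meal] is 1 if the child was marked for that meal.

    Returns True when both hand-written tallies match the recount;
    any mismatch would flag the menu sheet for review."""
    by_child = [sum(row) for row in marks]
    by_meal = [sum(col) for col in zip(*marks)]
    return by_child == claimed_by_child and by_meal == claimed_by_meal

# Three children, three meal slots (breakfast, lunch, snack):
marks = [
    [1, 1, 1],
    [0, 1, 1],
    [1, 1, 0],
]
ok = verify_tallies(marks, [3, 2, 2], [2, 3, 2])    # tallies agree
bad = verify_tallies(marks, [3, 2, 2], [2, 3, 1])   # a transcription slip
```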
In January of 1997, a recount of meals on a random sample of menu sheets revealed at least one error on 30% of the sheets. Despite a conscientious effort by the staff, human information processing was error prone. While the dollar value of errors was small, the frequency was noted by Kate’s auditors and QCI was urged to improve its menu assessment process. At that time, NSS processed claims submitted by sponsors using a combination of manual checking, spreadsheets and a centralized data processing system that supported multiple agencies in the state government. Kate knew that if the company was to grow, it would need to find a way to handle an increased paperwork load with a lower error rate. She had relied on hiring additional staff to handle growth – usually one more full-time equivalent employee per 40 homes or centers. But the introduction of tiering in 1997 produced an increased volume of paperwork for the same number of clients. Kate concluded that additional staff alone was not the answer. QCI used three desktop computers, all pre-owned Macs that had been donated by a local manufacturer. Software included spreadsheets and a popular small business accounting program to write checks and keep up with the bank account. None of the employees had significant computer experience. Kate explored the availability of computer programs to support CACFP sponsors. She discovered several, including one called MenuMinder that was being used by another sponsor about QCI’s size. MenuMinder used fairly costly OMR technology like that used in the school system for scanning grade sheets. After checking into the details of the system, however, Kate felt that the paperwork required by the system was too complicated for her providers. Her initial impressions were reinforced when she was told by two new clients that they changed sponsors because of the complicated forms used by MenuMinder. Kate had only been using a computer and spreadsheet software for two years.
She liked the fact that she could manage QCI’s finances with just a spreadsheet and a checkbook program. She was concerned about losing control if she moved more of her business processes to the computer. And she wondered how her employees would react. After all, processing the claims was a time of great social interaction, and there was a high level of satisfaction associated with taking on and overcoming the mountain of paperwork as a team.
In a January meeting with her accountant, Kate mentioned her concerns. After listening to Kate describe the problems of errors and increasing paperwork, her accountant suggested that she contact a colleague of his in the MIS department at the university.
CASE DESCRIPTION A few days after her accountant’s suggestion, Kate met with Tom Davis, a professor of MIS at the local university. It happened to be during the first week of February when menu sheets were being processed and the entire staff was on hand. Piles of file folders and papers covered all surfaces and the small offices were abuzz. QCI’s rented space in an office condominium was around 1,000 square feet, divided into four small offices and a reception area. Each of the offices had several desks and chairs and the staff was working elbow to elbow. One of the offices was also used for storage of surplus food from the food bank that was delivered to providers each month. In that first meeting, Kate described her business in detail. Tom noted what seemed to him to be key symptoms: lots of transactions; manual processing; high error rates; expected increased workload; variable work schedules; non-standard forms and so on. To him, it was a case of a company that had outgrown the convenience and flexibility of manual processes. The number of customers had increased to the level where dividing the work was causing communication and consistency problems to mushroom. The task of combining lots of different human information processors into a workable network was proving to be too complex. Over the next several months, Tom visited QCI frequently, talked with members of the staff and observed their activities. During those visits, he noticed a mixture of reactions to his presence: on one hand, he sensed a feeling of hope that something could be done to reduce the tedious manual processing; on the other hand, he sensed their uncertainty. Four months after their initial meeting, Tom reported to Kate that he believed the staff was certainly ready for change and most likely capable of adapting to new processes involving information technology. 
He and Kate discussed the options and settled on a conservative approach: since the processes were different enough between departments, one department would be chosen as the trailblazer. It would be the Homes department.
Phase I

Tom was immediately faced with the “make or buy” decision. He spent several weeks digging out information on available software designed to support CACFP sponsors. While there were several packages available, they were either simple extensions of spreadsheet applications and still too dependent on manual counting, or they required the purchase of specialized scanners and the use of complex forms. Tom recommended to Kate that they build their own system around a database using inexpensive image scanners. Building their own system would take longer, but it would give her some control over how much and in what way her business processes would have to change. Kate agreed, but to Tom’s surprise, she resisted the idea of a database environment. Tom was to build the system around a spreadsheet instead; it was a structure that Kate understood and could work with if she needed to.

In a follow-up meeting with her staff, she announced that Tom would be building a new system for homes that would make the home claim process much simpler. They were to cooperate with him and provide whatever information and help he needed. The goal was to have a prototype of the system up and running by August.

Table 5 (see Appendix) summarizes the Phase I changes that Tom implemented in the system for processing menu sheets for daycare homes. At the core of the solution was the addition of scanning and laser printing capabilities. The menu sheet shown in Figure 1 provides space for writing meal contents and an array of option bubbles for marking meals served to each child. The basic structure of the menu sheet remained the same in an effort to ease the providers’ transition to the new form.
Adopting IT: Food Program Sponsor Discovers It’s No Picnic 215
Under the old process, generic menu sheets were printed by a local print shop and provider information was filled in by the providers themselves. With the new system, a laser printer printed the menu sheets and included bar-coded and printed information on each provider at the same time. Completed sheets were scanned and computer-edited for unauthorized meals, and summary reports were printed. Third-party software that supported inexpensive image scanners was used to print and scan the menus. A file of reimbursement amounts and associated data was generated in QIF format for import into Quicken for updating checking account records.

Figure 2, below, shows the initial hardware configuration. Software included a spreadsheet for the master file, scanner software for printing and labeling menu sheets and scanning completed sheets, a word processor for quick bar-code labels, file transfer software, anti-virus software, an editor to process scanned data and produce summary data, and accounting software for checkbook maintenance and check printing.

Figure 1: The Menu Sheet
Figure 2: Hardware Configuration. Completed menu sheets pass through the scanner to the PC; the PC drives the laser printer (menus, checks, reports), writes backups to a Zip drive, and produces the QIF file.
216 Anderson & Gwinn
The initial system was introduced on schedule and run in parallel with the old manual processes. It was quickly recognized that even the new system depended to some extent on the accuracy of manual processes. Under the old manual system, much of the inconsistency among forms received from providers was unimportant; human information processors have great visual filters. Forms that were completed in various colors of ink and pencil – or were folded, spindled, and mutilated – could still be processed by people. But the scanner sheets had to be filled in with greater care and consistency.

What at first was thought to be a major obstacle – training harried daycare providers to exercise more care with their paperwork – turned out to be quite manageable. The scan form was designed to look like the old manual form. To the company’s surprise, most of the providers embraced the new form with real enthusiasm; many expressed excitement over being part of a system that used computers. In the first two monthly cycles with the new form, there were still staples, folds, incomplete erasures, line-outs and other problems that interfered with accurate scans. In addition, software and hardware adjustments were made to improve scanner speed and reliability. By the third cycle, most of the providers had adopted new habits and submitted their forms with clean and accurate markings and in good physical condition. Scanning accuracy improved sharply. After six months, scanning errors resulting from form condition and form preparation errors were virtually eliminated.

Nutritional requirements were still checked visually, and any obvious problems with the forms were corrected before scanning. Once the forms were scanned, an editor program allowed visual verification. After several cycles, it was obvious that the scanner and scanner software were accurately interpreting the forms, so complete visual editing of bubble patterns was replaced with sampling.
During the scanning process, form images were verified by the software to resolve ambiguities caused by variations in mark densities and stray marks. Image-editing software was developed for visual verification of the scanned images, to confirm that the scanner saw what it was supposed to see. Line-editing software then checked meal patterns on each of the roughly 20,000 lines. Lines found to be in conflict with the contract or the law were displayed for editing. When edits were complete, the meals were tallied and a detailed claim report was printed. In addition, the reimbursement amounts were automatically combined with information pulled from the master spreadsheet file to produce a Quicken import file used for writing the reimbursement checks. The editors also generated log files of changes made to the scanned data.

The positive effects of the system on the company were many. At the top of the list was employee morale, which jumped immediately when a large part of the stressful manual checking process disappeared. Night work and overtime were eliminated for most of the staff. The error rate dropped from around 30% to less than 1%. After several monthly cycles, the company began to receive calls from daycare providers who wished to change from another sponsor to QCI. Several said they wished to change sponsors because of QCI’s simplified paperwork.
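The edit-tally-export flow described above can be sketched in a few lines. The following is a minimal illustration only, not QCI’s actual code: the provider IDs, meal types, authorization table and reimbursement rates are hypothetical placeholders; only the QIF record layout (the `!Type` header and the `D`, `T`, `P` fields terminated by `^`) follows Quicken’s documented import format.

```python
from datetime import date

# Hypothetical per-provider contract: which meal types each provider
# is authorized to claim. Real contracts encode much more (tiers, shifts).
AUTHORIZED = {
    "P001": {"breakfast", "lunch", "snack"},
    "P002": {"lunch", "snack"},
}

# Hypothetical reimbursement rates per meal type, in dollars.
RATES = {"breakfast": 1.00, "lunch": 1.85, "snack": 0.50}

def edit_lines(scan_lines):
    """Split scanned meal lines into clean lines and conflicts.

    Each scan line is (provider_id, child_id, meal_type); lines whose
    meal type is not in the provider's contract are flagged for manual
    editing, mirroring the line-editing pass described in the case.
    """
    clean, conflicts = [], []
    for provider, child, meal in scan_lines:
        if meal in AUTHORIZED.get(provider, set()):
            clean.append((provider, child, meal))
        else:
            conflicts.append((provider, child, meal))
    return clean, conflicts

def tally(clean):
    """Tally reimbursement dollars per provider from the edited lines."""
    totals = {}
    for provider, _child, meal in clean:
        totals[provider] = totals.get(provider, 0.0) + RATES[meal]
    return totals

def to_qif(totals, when):
    """Render provider totals as QIF bank transactions (checks written,
    hence negative amounts), ready for import into Quicken."""
    out = ["!Type:Bank"]
    for provider, amount in sorted(totals.items()):
        out += [when.strftime("D%m/%d/%Y"),   # transaction date
                f"T-{amount:.2f}",            # amount (outgoing check)
                f"P{provider}",               # payee
                "^"]                          # end-of-record marker
    return "\n".join(out) + "\n"
```

For example, a line claiming breakfast from a provider authorized only for lunches and snacks lands in the conflict list for manual review, while the clean lines flow through the tally to the QIF file used for check printing.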
Phase II

Once she saw how handily the system disposed of a huge chunk of their most taxing work, Kate began to wonder about other possibilities. The prototype system delivered a file to her which she imported into her Quicken program for printing checks. She wondered if there was a better way to get the checks printed. Betty wondered if there was a way to get various lists out of the system that the staff could use on their routes. Someone mentioned that it would be nice if they could get rosters of kids. Someone else mentioned that several sponsors in Texas were using direct deposit for their providers.

In February of 1998, a year after his first meeting with Kate, Tom had completed his initial task. Kate began talking about the real heart of the business – Centers. Tom knew he would need some help. Silky Traynham was a senior IS major in Tom’s Visual Basic programming class and was anxious to get some practical experience before graduating. She was captain of the girls’ volleyball team and
maintained a dean’s list GPA. She was bright, enthusiastic and aggressive – a natural candidate to introduce to Kate along with the suggestion that QCI could benefit from an internship relationship with the university. Kate and Silky hit it off immediately. Silky started her internship working for Betty. Her initial tasks included data entry, some training of other staff in the use of the computers and developing a newsletter.

It was soon evident that Silky was not challenged, and worse, that Betty felt threatened. One month into the internship, Betty was actively reducing Silky’s interactions with other staff and limiting her assignments to manual tasks such as addressing envelopes and emptying trash cans. Betty’s relationships with other staff began to deteriorate and she soon became isolated from them. The staff liked Silky and relied upon her for help with the new systems. At the end of two months with Silky as an intern, Betty abruptly stopped coming to work. She notified Kate by telephone that she was quitting, and there was no further communication between Betty and the staff.

Kate was very upset over Betty’s departure. She made several attempts to contact Betty in an effort to persuade her to return. Betty refused to take phone calls, and her husband apologized as he hung up the telephone. It began to look like the computer-related changes were going to be a catalyst for other changes that Kate could not anticipate. As a possible short-term solution, Kate approached Silky with the idea of staying on for a while after she graduated. To Kate’s surprise, Silky announced that she was enrolling in the university’s evening MBA program and that she would be pleased to continue on as a full-time employee. Silky understood the technology and was working with Tom on extending the system’s capabilities, but she didn’t know enough about the business to manage the Homes department.
Kate decided that Silky would be in charge of all things related to the computer and the “new systems,” and would report to her and work directly with Tom on systems projects. Brooke Sneeden was moved into the Homes department manager’s job. Brooke was a very pleasant, detail-oriented professional who had also been a public school teacher. She had left the classroom to raise her family and, when the last child left home, re-entered the job market. While working for QCI on a part-time basis, she quickly impressed Kate with her skills in human relations. All of the staff knew Brooke as a quiet, optimistic people person – she was nonconfrontational, yet persuasive. She never had a negative thing to say about anyone or any situation. She and Silky worked well together. Brooke deferred to Silky on technology questions and Silky deferred to Brooke on business issues.

Over the next year, Tom and Silky extended the prototype system to include check-writing (using third-party software) and direct deposit (using bank-provided software). When the direct-deposit feature was added, the decision was made to convert the master file from a spreadsheet format to a delimited text format. Kate continued to resist the idea of a database. Besides a flurry of additional reports, several major functions were added to the system. The major addition was a module used to keep track of enrolled children. The day care homes had a total enrollment of around 9,000 children. There was no mandate from NSS to track children at that time, but the homes were required to submit information on children in order to determine the tier status for both the home and the child. Tom set up an interface for entering data on children and demonstrated it to Brooke and Kate. The initial reaction was mixed and the module was left unused. But not for long.
In early 1999, the USDA Office of the Inspector General (OIG) launched a nationwide investigation of the CACFP after a whistle-blower in California complained about problems in a sponsor organization. The investigation involved some 3,200 unannounced visits to day care homes and centers on the program. North Carolina was included in the investigation, and QCI was chosen for review because of its size – it is one of the largest sponsors in the state. A team of investigators introduced themselves at a selection of QCI’s homes and centers across the state at the same time that a team of auditors arrived at QCI’s offices. Information collected from the various sites was funneled to the team at QCI and a two-week detailed review of records was undertaken. Except for a few normal
errors in record-keeping or reporting, QCI was given a clean bill of health and privately praised by the auditors for the integrity of their systems. But the larger picture was not so bright. Inspector General Roger Viadero reported widespread fraud and abuse in the program. In some states, sponsors were found that routinely overstated the number of meals served by their providers. In some cases, claims were made for nonexistent children in fictitious day care homes. In the testimony of Thomas A. Schatz, President, Citizens Against Government Waste, before the Senate Subcommittee on Research, Nutrition, and General Legislation on September 27, 2000, it was noted that as a result of the OIG efforts, “CACFP officials have terminated 26 sponsors receiving more than $46 million annually in food and administrative funds. Of the 60 individuals charged with crimes through CACFP, 45 have been found guilty and 37 sentenced. In one particular case, the president of a Michigan day care center was sentenced to nine years in prison followed by three years of supervised release. The man was ordered to pay $13.5 million in restitution, a $10 million fine and a special assessment of more than $3,000.” In Viadero’s judgment, there were basic flaws in the structure of the administration of the program that permitted sponsors to engage in such activities as money-laundering, embezzlement, forgery and extortion. Some sponsors charged fees as high as 30% for administrative services. Others required kickbacks from homes and centers that participated in schemes to fatten their reimbursement checks.

As a result of the findings by the OIG, the states began major restructuring of their processes and procedures associated with the CACFP. In North Carolina, key personnel changes at NSS led to significant additional reporting requirements aimed at closing the loopholes in the program.
Changes in the state’s administrative requirements that went along with adjusting to revised USDA expectations were met with apprehension by sponsors. For some sponsors, the work required to process the claims under the new procedures was simply too much to accomplish manually and they balked at the prospect of computerizing. Those sponsors chose to drop out of the program. Many of their clients applied to QCI for help. Almost overnight, QCI was faced with the specter of demand increasing more rapidly than their ability to meet it. One of the new requirements called for tracking children in the homes more closely. It soon became evident that checking a child’s economic status by referring to a completed paper form was taking too much time and was too error-prone. When asked if the computer could help, Tom and Silky reintroduced the interface developed earlier and it was readily embraced. After a year as the computer support person and close to completing her MBA, Silky announced that she would be taking a job with a local software company. During that year, QCI had added several more computers and printers, and the Homes staff had become dependent upon the technology for most of the day-to-day administrative tasks. She agreed to stay in touch for a month or so to enable a transition. Kate and Tom were both disappointed and concerned about finding a replacement. Kate knew that Silky had doubled her salary by moving and that she would be in a much more technical environment where she belonged. Kate wanted another Silky, but she knew that she couldn’t compete in salary terms with technology companies for MIS graduates, especially one with an MBA. As if it were predestined, two weeks before Silky was scheduled to leave, Phyllis Dean walked in to Tom’s office at the university. She was going back to school after a divorce and would be studying information systems. 
She had experience working with computers that included the Microsoft Office suite, check-writing software, direct-deposit software, office networks, and database. She had been office manager for a large real estate firm and wanted to get into the IS field. When Phyllis said she needed to find part-time work to help pay for her school, Tom decided to talk with Kate. Kate was ready to try anyone who had experience with computers. And so Phyllis joined QCI.

The next two months at QCI were memorable. Phyllis moved into Silky’s position with a high level of self-confidence. As Silky explained the ins and outs of the QCI setup, Phyllis seemed detached and uninterested. She had done all of that before. In fact, she pointed out, she had used database and networking in her last job – certainly more sophisticated than what QCI had in place.
When Phyllis ran her first set of menus at the beginning of the month, it became apparent that things would be different with her. Instead of the friendly and cooperative attitude they were used to with Silky, the staff – including Brooke – was greeted by a tense and irritable person who preferred to be left alone to do her work. Almost immediately, the staff deferred to the new computer person. They stepped back and watched as she disposed of problems using assumptions based on experience from her previous jobs.

Within a few days after the checks were mailed to the providers, the telephone began ringing off the hook. Editing decisions made by Phyllis during the scanning process were not based on QCI policy or on NSS regulations. They were arbitrary in many cases and incorrect in most. Silky had left a manual describing the processes in detail. At many points in those processes, decisions were called for from someone with extensive knowledge of the providers and the program. That someone was not interested in helping Phyllis. When questioned by Kate and Brooke about how so many errors could occur, Phyllis was contrite and attributed her attitude and inattention to detail to her personal situation. She promised to do better.

Unfortunately, the following month’s results were not much better. That month a large number of the direct deposit transactions were incorrect – an old file had been used. Kate called Tom to inform him that she was going to fire Phyllis. QCI was back in the situation of not having a lead technical person. After a few weeks worrying about how to proceed, Kate and Tom reached the same conclusion: someone inside should be trained in IS. Looking down the road toward future development, Kate saw developing a system for the Centers department as the logical next step. Tom agreed. And the logical person to involve in the technology was Terry Mintz, the manager of the Centers department.
Terry had quickly learned the changes in the Homes systems and had been thinking about how those same kinds of applications might be used in the Centers claim process. For the next several months, Kate and Tom met during the first week of each month to discuss the directions QCI should be taking. After assessing all that had occurred with the Homes department and how dependent they were on easy access and manipulation of their data, Kate finally agreed that it was time to develop a Centers system and that a database approach should be used. Tom recommended using Access. To help speed the development process, he recommended bringing another MIS faculty member into the project. Kate agreed and Jerry Thorpe joined the team. Kate reorganized QCI and attached the technology responsibility to Terry’s job description. Terry was to work with Brooke where Homes issues were involved and would work with Tom and Jerry on the development of a Centers system. Tom would work with Terry to develop her technology skills. Jerry would work on the centers database and processing the Centers claim. To compensate Terry for the additional responsibility, Kate raised Terry’s salary by 30% to bring it in line with what she had been paying Silky. Both Kate and Tom were hopeful that Terry would grow into the job. At Tom’s suggestion, Terry began a series of computer-based training modules on database. She took books home and studied when her two children were in bed. It became apparent that Terry was the right choice for the job. In a recent meeting, Tom and Jerry outlined a strategic direction for QCI to Kate and suggested that a good first step would be to improve the IT infrastructure for the whole of QCI by moving to a local area network. As she was considering the suggestion, she pointed out that NSS was moving toward the Web and that they anticipated electronic document exchange before too long. 
Kate had an Internet service provider at home and occasionally used her personal email to communicate with Tom, but she did not see it as an integral part of her business. Tom had been encouraging Kate to get an ISP for QCI and to consider a Web site for QCI as well. Kate approved the LAN but was not quite ready for the Web.

After a month in her new role, Terry called Tom. She was wondering if they had done the right thing. The rest of the staff didn’t seem to accept her in her new role. Terry felt that the others didn’t think she knew any more about computers than they did. They weren’t asking for help in solving their computer-related problems. Kate was expecting Terry to do the job, but the others weren’t helping her.
CURRENT CHALLENGES/PROBLEMS FACING THE ORGANIZATION

Pending State Level Changes

In the summer of 2000, the NSS held the first of several workshops for its sponsors to discuss the contents of a recently issued USDA document entitled “Management Improvement Guidance: Family Day Care Home Sponsors.” The document specifies a set of standards for each of the three key programmatic areas and spells out expected performance for ongoing sponsoring organizations and their state agencies. It includes outcome-based performance measures and performance improvement plans related to the standards. The 20 sponsor standards deal with the areas of organizational management, financial management and oversight of provider operations.

QCI must review its processes in light of the new federal standards. A major concern is that the outcome of the review will be a recommendation for major changes to QCI’s information systems. QCI has just undergone the re-engineering of key business processes and is starting to think in terms of continuous improvement. The specter of yet another re-engineering effort and its impact on QCI’s staff, not to mention the cost, is certainly changing the way management thinks about the business.

In a recent letter, NSS let it be known that there would be additional workshops offered in an effort to bring sponsors into compliance with USDA expectations regarding CACFP, and that there would be significant policy and procedure changes over the next year. When asked about the content and scheduling of expected changes, the NSS reply is usually terse and noncommittal, leaving QCI and other sponsors to guess the future. Occasionally, word leaks out of NSS about impending changes. Up to now, where its own information systems needs were involved, NSS has had to rely upon a centralized information processing system used by all of the other state government agencies.
During the last audit by NSS, one of the auditors remarked that QCI was ahead of NSS when it came to information system implementation. Shortly after that visit, it was leaked that NSS is looking at automating some of its systems quickly by purchasing an existing system that is in use in several other states. There was mention of a requirement for submitting claims electronically. The possibility of NSS improving its systems raises a new issue. So far, QCI’s systems have helped respond to requests from NSS in a timely manner. Better systems at NSS will change that balance — NSS will be able to handle responses more quickly which will probably lead to more frequent requests and possibly more complicated requests. QCI’s management finds the question of hiring a full-time IS professional back on the table.
Pending Federal Level Changes

At the federal level, there is a deliberate effort underway to develop more responsive information systems to support childcare-related programs, including CACFP. A recent initiative by the Child Care Bureau of the U.S. Department of Health and Human Services Administration for Children and Families led to the development of a document called CEE-SAW – the ChildCare Electronic Environment: a System Automation Worktool. The document is a master blueprint for child care information systems and provides detailed functional requirements, process models and data models. It is a public document that the Child Care Bureau hopes will lead to the development of better information systems at all levels. QCI knows that there is pressure on NSS from the USDA and that NSS must respond or lose federal funds.
Strategic Planning Challenges

With pressure on the child care industry being exerted from both the federal and state levels, the handwriting is on the wall: in order to survive, a sponsor must not only develop information systems that meet today’s requirements, but those systems must be flexible enough to accommodate promised changes that are in the works behind the doors of the oversight agencies. It seemed obvious to QCI’s
management and Tom that there could be a more deliberate effort to coordinate the adoption of information technology by the industry. Tom recalled a similar scenario a decade and a half earlier, when the shipping industry was struggling to implement information technology. The federal oversight agency in that scenario was the U.S. Customs Service. With the final say as to what can enter and leave the United States through its ports, the Commissioner of U.S. Customs was able to motivate state port authorities, brokers, forwarders and carriers to launch industry-wide change with a simple statement: “Automate or perish!” Tom mused that the discordant rates of IT adoption in the shipping industry were tuned by threat. Given the size of the budget for programs such as CACFP and the renewed political interest in child care during the recent elections, he wondered who might be eyeing the tuning fork in Washington.

QCI’s management thinks it’s important to understand the big picture. They worry about what the oversight agencies have in store for sponsors, about how the staff will respond to mandated changes, and about providing quality administrative service to their providers. But even though QCI is one of the largest sponsors, they feel unable to influence what at times seems to be a random evolution of IT implementation in the industry. While QCI enjoys an apparent competitive advantage among sponsors because of its systems, they also realize that there is a risk of those systems becoming obsolete by mandate. What is needed, they think, is a planned and coordinated evolution of systems. Sponsors in other states have organized into associations for the purpose of exchanging information and lobbying. Indeed, there is a national sponsors’ association. But there is no sponsors’ association in North Carolina.
Feeling a bit frustrated, Kate finds herself thinking about how her decision of a few years ago to use a scanner has somehow led her to the point of considering the impact of politics on QCI’s adoption of information technologies.
FURTHER READING

Child Care Bulletin on using technology in child care (1996): http://ericps.ed.uiuc.edu/nccic/ccb/ccbmj96/ccb-mj96.html
Child Care Bureau’s Child Care Automation Resource Center: http://www.acf.dhhs.gov/programs/ccb/ta/ccarc/index.htm
Links to licensing requirements by state for day care providers: http://nrc.uchsc.edu/states.html
National Child Care Information Center (NCCIC) online database for child care statistics and demographics: http://nautilus.outreach.uiuc.edu/eric/search.asp#StateProfile
National Network on Child Care: http://www.nncc.org/states/nc.html
NCCIC list of links related to child care: http://nccic.org/links.html
North Carolina administration of CACFP: http://wch.dhhs.state.nc.us/nss/nss2/index1.htm
North Carolina Division of Child Development: http://www.dhhs.state.nc.us/dcd/
Source for statistical data on day care: 2000 Kids Count: http://www.aecf.org/
USDA CACFP page with links to statistics, management guidelines, audits, etc.: http://www.fns.usda.gov/cnd/Care/CACFP/cacfphome.htm
BIOGRAPHICAL SKETCHES

John M. Anderson is a professor in the Information Systems and Operations Management Department, Cameron School of Business at the University of North Carolina at Wilmington. He has managed information systems departments in government and education and serves as a consultant on information systems design and implementation. Dr. Anderson received his Ph.D. from North Carolina State University.

William H. Gwinn is an assistant professor in the Information Systems and Operations Management Department, Cameron School of Business at the University of North Carolina at Wilmington. He has implemented and administered information systems in the Department of Defense and education and serves as a consultant on information systems design and implementation. Dr. Gwinn received his Ph.D. in Information Systems from Texas Tech University.
APPENDIX

Table 5: Quality Care, Inc. Summary of Homes System Upgrade – Phase 1 (Homes Claim Processing)

Form Generation
  Old:     Print shop. Blank forms filled in by providers.
  New:     Laser printer. ID info filled in by computer/printer.
  Benefit: Timely. Better info for visits by reviewers. Cost savings.
  Cost:    Computer $2,000; Printer $2,500; Software $1,500.

Form Editing
  Old:     Manual. Check nutrition requirements. Count meals and get subtotals.
  New:     Check nutrition requirements by hand; computer edits for errors in meal patterns.
  Benefit: Almost total elimination of pattern errors.
  Cost:    Scanner $2,500; Software $5,000.

Form Tallying / Summary Reports
  Old:     Manual entry of subtotals into spreadsheet. Sums from spreadsheet typed into report.
  New:     Computer tallies meals and produces summary claim.
  Benefit: Almost total elimination of tally errors. Sharp reduction in staff time.
  Cost:    (Covered above.)

Amendments
  Old:     Manual preparation of amended reports.
  New:     Make changes to stored data and re-run tally and reporting system.
  Benefit: Accuracy and quick turnaround.
  Cost:    (Covered above.)

Check Writing
  Old:     Manual entry into Quicken.
  New:     Import into Quicken.
  Benefit: Reduced time and data entry errors. Auto update of accounts.
  Cost:    (Covered above.)

Data Security
  Old:     Save menu sheets and spreadsheet sums.
  New:     Save menu sheets and reports on disk.
  Benefit: Easy off-site backups maintained on Zip disk.
  Cost:    Hardware $150.
Table 6: Child and Adult Care Food Program: Average Daily Attendance Data as of February 28, 2000
State / Territory     FY 1995     FY 1996     FY 1997     FY 1998     Prelim. FY 1999
ALABAMA                33,903      34,786      36,209      37,129      37,254
ALASKA                  6,898       7,240       6,813       7,036       9,385
ARIZONA                39,021      39,504      43,628      46,636      52,456
ARKANSAS               20,381      20,654      23,663      23,096      24,707
CALIFORNIA            262,700     272,240     282,893     283,344     286,648
COLORADO               41,622      38,780      39,978      42,686      43,177
CONNECTICUT            20,375      20,404      20,544      20,485      20,949
DELAWARE               11,886      11,870      12,693      12,805      12,421
DISTRICT OF COL         4,957       4,317       4,595       4,137       3,570
FLORIDA                71,009      75,114      83,656      99,954      97,445
GEORGIA                66,870      79,192      86,929      95,373      98,703
GUAM                      574         541         659         507         326
HAWAII                 25,603       9,076       9,128       9,251       8,908
IDAHO                   6,570       6,476       6,545       6,399       6,786
ILLINOIS               75,449      77,168      82,199      96,773      98,094
INDIANA                40,917      40,660      44,155      45,889      48,003
IOWA                   28,525      28,058      27,931      27,124      27,818
KANSAS                 56,511      54,709      53,985      54,223      54,919
KENTUCKY               36,995      38,147      39,550      41,661      45,605
LOUISIANA              60,910      58,944      55,805      54,706      55,165
MAINE                  14,031      14,277      15,318      15,338      15,003
MARYLAND               42,974      52,330      53,439      51,113      50,723
MASSACHUSETTS          50,207      50,016      51,139      54,869      57,561
MICHIGAN               69,287      71,172      74,536      71,192      73,497
MINNESOTA              96,751      94,648      94,866      93,594      95,046
MISSISSIPPI            29,437      26,896      27,084      27,317      26,937
MISSOURI               42,869      42,419      44,008      45,774      47,560
MONTANA                12,713      13,832      13,002      13,875      14,164
NEBRASKA               39,298      38,813      39,143      38,533      38,413
NEVADA                  4,803       4,838       5,250       4,964       5,473
NEW HAMPSHIRE           6,814       7,109       6,762       6,765       6,806
NEW JERSEY             44,428      43,623      38,243      44,102      50,082
NEW MEXICO             45,374      42,383      44,973      43,104      42,898
NEW YORK              160,393     168,476     181,938     230,772     257,161
NORTH CAROLINA         60,670     103,182      99,763      97,511     114,399
NORTH DAKOTA           18,979      18,452      17,975      17,703      17,458
OHIO                   82,804      79,453      76,250      80,436      82,422
OKLAHOMA               42,640      43,641      45,467      45,881      43,325
OREGON                 37,529      36,697      36,490      36,041      35,290
PENNSYLVANIA           77,186      63,528      64,950      69,226      69,719
PUERTO RICO             8,538      29,668      21,547      22,745      22,679
RHODE ISLAND            7,694       6,820       6,899       7,739       9,100
SOUTH CAROLINA         24,556      26,043      27,467      28,591      30,915
SOUTH DAKOTA           12,549      12,255      12,117      11,322      11,159
TENNESSEE              36,601      37,337      37,970      41,863      43,064
TEXAS                 158,529     154,447     156,950     161,481     172,735
UTAH                   39,837      40,464      41,051      37,256      35,984
VERMONT                 8,830       9,018       8,679       7,426       7,823
VIRGIN ISLANDS          1,148       1,096       1,199       1,216       1,126
VIRGINIA               40,918      41,747      39,922      41,932      43,745
WASHINGTON             56,420      53,496      54,366      64,794      67,519
WEST VIRGINIA          10,497      10,238      11,920      16,312      15,930
WISCONSIN              49,397      50,763      51,887      52,549      54,323
WYOMING                 7,853       8,135       7,504       8,020       8,543
TOTAL               2,354,225   2,415,186   2,471,627   2,600,561   2,700,912
Note: Average daily attendance data are reported for the last month of each quarter; annual averages are the sums divided by four. Unlike participation data in the National School Lunch and School Breakfast Programs, average daily attendance is not adjusted by an attendance factor. Data are subject to revision. (Source: U.S. Department of Agriculture)
224 Anderson & Gwinn
Table 7: Child and Adult Care Food Program Data as of February 28, 2000
Fiscal   Total Participation   Meals Served (Millions)              Free+RP % of   Total Costs
Year     (Thous.)              Homes   Centers   Adult   Total      Total Meals    (Mil. $)
1969            23               —         8       —         8          78.2            1.3
1970            69               —        42       —        42          80.3            6.2
1971           154               —        81       —        81          83.5           13.4
1972           185               —       103       —       103          85.4           16.5
1973           216               —       118       —       118          87.1           20.0
1974           267               —       163       —       163          88.6           30.0
1975           375               —       224       —       224          87.6           51.0
1976           401               —       254       —       254          80.6           87.5
1977           483              19       292       —       311          82.6          124.6
1978           528              32       307       —       339          81.8          152.4
1979           598              54       327       —       382          79.8          189.6
1980           663              84       347       —       431          82.6          236.4
1981           778             168       379       —       547          91.0          339.7
1982           830             154       339       —       493          85.5          324.4
1983           920             178       358       —       536          84.4          355.8
1984           982             217       373       —       591          84.0          406.7
1985         1,043             253       387       —       640          83.7          452.1
1986         1,102             277       401       —       678          83.6          496.2
1987         1,186             309       416       —       725          83.2          547.7
1988         1,256             357       433       2       792          83.2          628.2
1989         1,367             414       448       4       866          83.5          697.0
1990         1,490             481       477       8       966          83.9          812.9
1991         1,642             543       509      11     1,063          84.5          945.1
1992         1,823             613       555      14     1,182          85.4        1,094.2
1993         1,977             668       613      17     1,298          85.4        1,225.2
1994         2,187             729       666      19     1,414          85.3        1,354.0
1995         2,354             766       721      22     1,508          85.2        1,464.1
1996         2,415             777       746      23     1,546          85.2        1,533.7
1997         2,472             775       770      26     1,572          85.3        1,571.7
1998         2,601             751       821      29     1,602          84.7        1,553.2
1999         2,701             743       863      32     1,638          84.3        1,619.4
Data are subject to revision.
1. FY 1969-75 data are for the year-round component of the Special Food Service Program.
2. Participation data represent average daily attendance with no adjustment for absenteeism. Data were collected monthly through FY 1983, and quarterly in subsequent years.
3. Total cost includes food service equipment assistance (eliminated after FY 1981) and sponsor administrative costs. Audit and startup costs are included from FY 1988 onward.
(Source: U.S. Department of Agriculture)
Reorganization of the Nevada DMV 225
Everyone’s Watching: The Remarkably Public Reorganization Of The Nevada Department Of Motor Vehicles William L. Kuechler and Dana Edberg University of Nevada-Reno, USA
EXECUTIVE SUMMARY
In 1996 the Nevada Department of Motor Vehicles and Public Safety launched the “Genesis” project, a technology-enabled reengineering endeavor. In September of 1999, after four years of planning, organizational restructuring and system development, the new system was released. To the accompaniment of great publicity, it fell dramatically short of expectations. This case provides the background necessary to understand the origins and shortcomings of the system, then focuses on the turnaround effort that took the system to a point of successful operation within a year of its going into production. The turnaround was accomplished under great pressure to retreat to the legacy system. The effort involved a synergy of manual and technical corrections to bring overall system performance to acceptable levels. The DMV now faces the formidable challenge of taking full responsibility for the long-term maintenance of a system that was designed and implemented by outside contractors.
BACKGROUND
The Nevada Department of Motor Vehicles and Public Safety (DMV) is a state-level governmental agency, but both state and federal laws establish the agency’s responsibilities. The DMV performs the following core activities: evaluates and licenses drivers; regulates the vehicle industry; registers and determines legal ownership of vehicles; collects fees and taxes associated with operating vehicles on the highways; regulates vehicle emissions; and maintains vehicle records that are used within the state and nationally. Because so many people in the United States drive, the DMV has greater interaction with the general public than any other state department.
Nevada Demographics
The demographics of the state have had a significant impact on the Genesis project and the DMV. Nevada is a geographically large (110,540 square miles), sparsely populated state (18 people per square mile) with two major population centers: Las Vegas, in the southern part of the state, and Reno/Carson City, in the north. There are approximately 2 million residents in the state. Nevada is currently the fastest growing state in the U.S., and most of that growth is in Las Vegas. Reno and Carson City, two cities approximately 450 miles north of Las Vegas, are smaller than Las Vegas and growing more slowly. The degree of population growth was unanticipated by state demographers. Projections from the early 1990s, on which government agency staffing levels and budgets were based, indicated 3.8 percent growth statewide throughout the decade; actual growth between 1995 and 2000 averaged 11 percent per year. Currently 7,000 new families per month move into Las Vegas alone.
Copyright © 2002, Idea Group Publishing.
Pre-Genesis DMV Organizational Structure
The DMV administrative offices are housed in Carson City, the state capital. These offices provide both administrative services for the department and processing-center functions such as mail-in renewals and the physical printing and mailing of vehicle titles. The primary points of public contact with the department are the 22 field offices, where drivers are tested, licenses and vehicle registrations are issued, and emission exception conditions are handled. Primary field offices are located in Carson City, Reno, Henderson (a suburb of Las Vegas) and two locations in Las Vegas. Secondary field offices are strategically located in smaller population centers. Prior to Genesis, all services provided by the DMV, with the exception of mail-in license renewal, required the physical presence of the driver or vehicle owner at one of the field offices. The functional organization chart for the pre-Genesis DMV is provided in Figure 1. Prior to Genesis, the DMV was structured in a traditional, functionally oriented “smokestack” architecture of sharply delineated divisions. Even service personnel in field offices, known as field technicians, belonged to a single division and provided only a single service, such as licensing or auto registration. Separation of function and management hierarchy continued from the service level through management at the field service offices up to the department director level (see Figure 2).
Figure 1: Pre-Genesis Organizational Structure

Governor
  DMV and Public Safety Director
    Deputy Director: Public Safety
    Deputy Director: DMV
      Driver’s Licenses: data processing; applicant screening, evaluation, remediation and punitive actions; commercial driver
      Registrations: vehicle registration and title; license plates; insurance verification; voter registration; boat licensing; handicap permitting
    Chief: Administrative Services
      Administrative Services: fiscal services; administrative hearings; personnel; research; audit services; training
      Boards and Commissions: bicycle/motorcycle advisory board; intoxication committee; advisory board
Figure 2: Flattening the DMV Management Hierarchy During Reorganization

Pre-Genesis (replicated for each department):
  Deputy Director > Function Chief > Assistant Chief > Regional Manager > Supervisor II > Supervisor I > Technician III > Technician II

Post-Genesis (replicated for field office staff and processing center):
  Deputy Director > Administrators > Managers > Team Leaders > Team Members
Table 1: Budgetary and Transaction Information for the DMV

                                 FY 1996    FY 1997    FY 1998    FY 1999*   FY 2000*
Total budget* (millions of $)         41         50         50         61         59
New licenses issued              105,682    110,755     92,130     96,405     99,962
Renewed licenses                 225,577    236,405    253,172    264,919    274,695
# vehicle registrations        1,323,943  1,387,492  1,427,586  1,540,914  1,597,774
# mail renewals processed        444,847    466,200    543,245    511,979    563,177

*Budget figures were derived from DMV departmental line items available in the Executive Budget published by the Nevada Governor’s office.
DMV Size and Objectives
Table 1 summarizes key budgetary and transaction information for the DMV, divided by fiscal year (July 1 to June 30). There are currently about 875 employee positions within the DMV. As a state agency, the DMV inherits much of its operational philosophy and culture from the larger governmental environment. Nevada is a balanced-budget state: funding for all projects must be determined prior to initiation of a project, and deficit spending is prohibited by the Nevada constitution. The legislature (Nevada’s lawmaking body) takes pride in its role as the
prudent steward of public funds, and this is reflected in the relatively conservative approach to government and project management under which all state agencies operate. The legislature is an active participant in most publicly visible projects in Nevada. Because the DMV deals directly with so much of the public, it is watched more closely than other state agencies: its administration and activities are scrutinized by both the general public and legislative oversight committees.
SETTING THE STAGE
The unexpected population increases of the early 1990s gradually overwhelmed the 1970s-era systems and organizational structure of the DMV. By 1994 Nevada residents were complaining so vigorously about long lines, long waits and unreliable service that a legislative subcommittee authorized a study to investigate those charges. The study, performed by an outside information systems consultant, Regional Management Consultants (RMC), found that the complaints were well founded. RMC held meetings with a diverse set of users, management, legislators, and customers. They discovered that waiting times in lines were typically over an hour. It was not unusual for customers to stand in line only to be told that they did not have the appropriate documentation for the service they sought and would have to return with the proper paperwork and rejoin the line at the end. In addition, data indicated that staffing the field offices for adequate service, given the current organization and information technology, would require the largest personnel budget of any state department by the year 2000. The study made it apparent to the legislature that new ways of providing DMV services needed to be explored. The stage was set for reorganization. The key goals, as recommended by the study, were to:
• Shift from a bureaucratic to a customer-centric organization.
• Increase efficiency of internal operations.
• Increase the level of service to the public through decreased service times and increased service-provision outlets.
• Decrease or hold constant the cost of service.
RMC recommended the DMV undertake a reengineering project to meet those goals (Grover et al., 1995). To understand the environment of the project, the remainder of this section describes the structure of the DMV legacy information systems, the reengineering study, and the make-buy decisions that preceded system development.
Legacy Information System Description
There is a continuum of information systems operating choices available to each department at the State of Nevada, from maintaining a completely autonomous information system to contracting with the centralized Department of Information Technology (DoIT) for information systems support. The DMV elected to maintain a relatively autonomous information system, with much of its hardware and network supported by its own in-house personnel. Its software was developed and maintained completely in-house. The legacy computerized information system used a two-tiered architecture. One or more Honeywell minicomputers at each primary field office provided data entry, data validation, and screen processing, and were connected via dedicated telecommunications lines to a DoIT mainframe in Carson City. Response times were acceptable; however, software maintenance had become increasingly problematic. The systems were developed initially in the 1970s and were composed of COBOL programs using VSAM files. Different versions of COBOL were used on the Honeywell minicomputers and the IBM mainframe. The programs were developed separately and were maintained separately for each major departmental function. There was little documentation, and making changes was a time-consuming and challenging task. Legislators became painfully aware of the situation when a simple modification to the licensing system mandated by a minor legislative change cost $60,000 and took over four months to accomplish. One programmer at the DMV ruefully referred to the systems
as being “one of the best examples of spaghetti code I’ve ever seen.” Separate development resulted in duplicate and inconsistent data among the different functions. For example, it was possible that a person’s name, address and other personal information were repeated in the driver’s license, registration, pollution control and insurance verification systems. The DMV information system (IS) department employed 26 people in 1994. Of those employees, five programmers and a supervisor handled all maintenance and system administrative functions for the DMV programs, including all telecommunications issues. Employees in the IT department were relatively traditional, experienced COBOL mainframe programmers with no background in graphical or PC-based applications.
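The stovepipe duplication described above is easy to picture with a small sketch. The records and field names below are invented for illustration (they are not taken from the actual DMV systems); the point is simply that when the same person is keyed independently into several non-communicating applications, a cross-system comparison surfaces the drift.

```python
# Hypothetical illustration of the "stovepipe" duplicate-data problem:
# the same driver's details, entered separately into four functionally
# isolated systems, can silently diverge. All data here is invented.

def find_inconsistencies(records_by_system, key_fields):
    """Compare one person's record across systems; return fields that disagree."""
    conflicts = {}
    for field in key_fields:
        # Collect each system's value for this field.
        values = {system: rec.get(field) for system, rec in records_by_system.items()}
        if len(set(values.values())) > 1:  # more than one distinct value -> drift
            conflicts[field] = values
    return conflicts

# The same driver, as stored independently by each legacy subsystem:
person = {
    "drivers_license":        {"name": "J. Q. PUBLIC",  "address": "12 ELM ST"},
    "vehicle_registration":   {"name": "JOHN Q PUBLIC", "address": "12 ELM ST"},
    "pollution_control":      {"name": "J. Q. PUBLIC",  "address": "12 ELM STREET"},
    "insurance_verification": {"name": "J. Q. PUBLIC",  "address": "12 ELM ST"},
}

conflicts = find_inconsistencies(person, ["name", "address"])
# Both fields disagree somewhere across the four systems.
```

A check of this kind presupposes a shared key linking the same person across systems, which is exactly what the separately developed, separately maintained legacy applications lacked.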
Reengineering Study Results
After the DMV received the initial results from RMC, the company hired to document and review its processes, it was clear that the current organizational structure and workflows had to be changed. The key activities recommended in the study were to:
• Combine programmatic departments into two cross-functional departments: (1) field services and (2) processing center.
• Implement a new information system that could support the combined and enhanced functionality of the new departments.
• Implement new service-provision facilities through technology, such as Web-based and point-of-emission-service registration of vehicles.
• Revamp the workflow and physical layout of the field offices to streamline and accelerate service provision.
The project, to be named “Genesis,” was expected to take seven years for complete implementation. RMC explained that the basis for Genesis was the use of technology to support the transformation of business processes. The DMV reorganization was “classic BPR” in many respects. The organization’s problems derived from typical 1970s-era “stovepipe,” non-communicating divisions for each departmental service, and the inefficiencies had been codified into the computerized support systems for each division, just as in many early, widely publicized BPR efforts (Hammer and Champy, 1993). The proposed solution was also “classic BPR”: an organizational restructuring to merge and flatten the organizational hierarchy into fewer, cross-functional divisions, and an IT development project to provide computer support for the streamlined service provision. The nature of service provision for the DMV made the IT system a crucial element. The radical nature of the change in both the organization and its information systems made change management (Clemons et al., 1995) critical to the success of the entire project (Bashein et al., 1994).
After the Director accepted the need for the project, it was clear that the first step was identifying a technology-based system that could support the newly identified requirements of the organization. Prior to embarking on a full-scale development effort, representatives of the DMV and RMC toured the motor vehicle departments of Massachusetts, Virginia and South Carolina to determine whether a system capable of meeting Nevada’s requirements already existed. Meetings were also held with other states, such as California, Washington and Oregon, to see whether an existing system was available. Several members of the legislative subcommittee on Genesis had experience with prior systems development projects and felt that a purchased system might prove faster to implement, more reliable and less expensive (Nelson et al., 1996). However, based on the requirements document, none of the systems reviewed was deemed adequate in the opinion of both the consulting group and DMV senior administration. Having decided to develop rather than purchase a system, the DMV moved to the second step: determining the general requirements for the new system. RMC followed recommended industry practices by working with a DMV project team (Bennatan, 2000). The DMV project team, consisting of two DMV project managers,
a representative of the DMV IT staff, end users and functional managers, worked with RMC to gather high-level requirements for the new system. RMC compiled a complete requirements specification for both the software and the client/server implementation platform. The third step was to identify an appropriate development organization. Neither the DMV’s IT staff nor the DoIT staff was capable of shouldering the development effort; neither group had the staff or resources available to conduct such a large-scale project. An RFP, which incorporated the requirements specification, was issued to help determine the best outsourcing choice for the DMV (Lacity et al., 1994). The RFP process was time consuming because of a change in DMV management, and two RFPs were issued before the administration chose a vendor. Ultimately, International Accounting and Consulting (IAC) was the vendor with the highest scores in all categories, and was chosen even though it had no prior experience with DMV systems. IAC was selected because of its considerable experience with graphical front-end tools and client/server development. In addition, IAC was one of the lowest cost bidders and proposed a very aggressive 18-month development timetable to fit the original schedule in the Genesis plan. The extensive software requirements document was made part of the contract between the DMV and IAC; thus fulfillment of the requirements (not necessarily a successful installation) determined the successful discharge of the contract.
CASE DESCRIPTION
The description below paints a picture of a failed development and implementation effort that was transformed into a working system. This section is divided into three parts that describe sequentially the development, implementation and transformation of the Genesis project.
Genesis Development Process
Driven by the aggressive development timetable, systems development began immediately following vendor selection in early 1998. The project effort was coordinated through three-way participation among project managers from IAC, RMC and the DMV. IAC was responsible for all systems development. RMC provided project oversight and contributed general technical knowledge as well as high-level knowledge of the DMV requirements. The DMV assigned a project manager from the user community and formed a team of key end users to serve as application domain experts. A member of the DMV IT staff was part of that team during the first few months of the project. All Genesis team members were moved to a separate facility set up just for the project in a building several blocks from the DMV administrative offices. Genesis followed a rapid application development process (Whitten et al., 2001) with extensive user involvement through formal JAD (joint application design) sessions and iterative development through a series of prototypes. IAC decided to “front-end load” the project (Abdel-Hamid and Madnick, 1989) by deploying large numbers of programmers for the initial coding effort and maintaining that high level of manpower until the project managers were comfortable that the development timetable could be met. Between 50 and 60 programmers, predominantly Indian nationals, were brought to Carson City by IAC in 1998.

Determining Detailed Design Specifications
IAC coordinated the project by identifying distinct project “tracks” (such as driver’s licensing and registration) and assigning track leaders to manage definition of the design specifications. A “framework track” developed design and coding standards, as well as reusable code modules for all other tracks. Within each track, project team members held JAD sessions to develop detailed design specifications.
Additional DMV personnel, beyond those assigned to the project teams, were called in to give their knowledge and advice about the system design when deemed necessary by the DMV project team. At least 50 different users participated in the definition of the design specifications.
Members of the DMV team noted that they saw mock-ups of screen designs daily during this time. IAC personnel would take the ideas generated during the meetings and create screen designs, which were immediately communicated to the DMV project team, who shared the designs with other users. Conspicuously absent from this process were the DMV IT staff. Little reassurance was given to the legacy staff as to their place in the new regime once the new system was completed. This was a significant omission given Genesis’ programmatic commitment to effective change management. The legacy IT staff had historically been somewhat isolated from the DMV at large (Geisler, 1997, pp. 78-81), providing necessary but little understood services. There may have been some doubt as to the ability of the mainframe-skilled group to make the transition to the client/server technology of the new system. While members of the IT staff participated in the JAD sessions for the first few months, they were slowly moved to “less important sessions in the tracks,” according to a participant. One IT staff member said that whenever he tried to participate in the sessions, he was told “to be less detailed, try and think at a higher level.” He was assigned to work on the interfaces between the DMV systems and law enforcement, insurance, smog stations and other governmental entities. He noted that he persisted in bringing up details of the interfaces that were ignored by IAC and RMC. After a while, he was no longer notified about the sessions and stopped attending.
The IT staff member believes that “IAC and RMC were trying to start with a clean slate, and I guess I kept trying to talk about the way we needed to do things to make them work.” According to the IT Department Manager, for most of the project it was only occasionally that “one of their registration team would come over and talk to our registration people or one of their DL (driver’s license) people would come over and talk to our DL people.” For the most part, however, the domain knowledge resting with the legacy DMV IT group remained untapped. This had a negative effect on DMV IT Department morale, and communication between the legacy team and the development groups became formal, desultory and guarded. After approximately six months, the development seemed to IAC project managers to be progressing adequately and the number of on-site programmers was decreased. For most of the remainder of the project, the number of programmers fluctuated between 15 and 25. During this time the DMV users were told that some of their user interface design specifications had to be changed to fit more easily within a computer window. They were also told by IAC that “the system will really be slow if we design it this way,” referring to the user interface specifications. As a result, some of the user interface was changed; in one application, the number of screens required to complete a task grew from four to eight. Screen designs and processing specifications continued to change up to the final minutes before the system went into production.

Systems Testing and Training
System testing began about six months into the project. All project participants were responsible for testing the functionality of the system. RMC worked on data conversion, IAC tested program functions, and the DMV team was responsible for the user interface and system processing. Detailed scripts were used to walk DMV users through standard transactions so that testing was consistent.
A problem for the DMV users was that the systems did not work during testing: IAC was still writing the programs that were being tested, and the testers were frustrated because the applications would not execute without crashing. The testers were told how the system was supposed to work and based many of their tests on those descriptions. RMC also struggled with its tests when data from the old system would not convert correctly to the new system. Data were in unexpected formats in the old system, and some of the field usage was unpredictable. For example, the “mother’s maiden name” field was used in the legacy system to store comments about a person’s driver’s license when there was no maiden name. Training began approximately nine months into the development effort. Training centers were established in Carson City and Las Vegas, and users were brought in from outlying areas to learn how to use the new system. The system continued to evolve during this time as changes were made to both
the interface and functionality of the system. These changes were made because the system continued to have problems. Many users were trained on a system that had errors and crashed during execution. A significant number of practical problems were identified during training and testing, and these were more difficult to correct than if they had been observed earlier in the development cycle (Jones, 1995). However, the very aggressive schedule and evolutionary development cycle made it difficult to identify those errors earlier.

Developing a System Architecture
The Genesis computerized information system architecture is shown in Figure 3. PCs replace the dumb terminals connected to the Honeywell minicomputers of the legacy system and serve as the entry devices to the system. PC LANs are placed at the field offices and at the Carson City processing center. LAN servers from field offices throughout the state feed twin RS/6000s in Carson City over T1 (1.5 Mb/s) telecommunications lines. The RS/6000s in turn connect to the IBM ES/9000 mainframe for database access through a channel connect. The PCs and LAN servers perform only data entry and editing functions. PowerBuilder® was chosen as the development system for the GUI interface and data validation functions on the PCs. The RS/6000s serve essentially as telecommunications front ends to the ES/9000. The processing programs on the ES/9000 are a mix of legacy batch applications and new applications that process field-entered data against a DB2 database using COBOL.

Transferring Knowledge Between Consultants and DMV
At the same time that IAC, RMC, and the DMV began determining detailed system design requirements, recruitment began for the “new system” DMV IT Department group who were to work
Figure 3: Architecture of the Genesis Information System

Field offices: point-of-service LANs running custom PowerBuilder® entry/edit software.
Carson City DMV offices: dual RS/6000s (AIX) serving as communications front ends and load balancing; minimal processing on these units.
DoIT offices: ES/9000 mainframe, reached via channel connect, providing central DB2 database access and CICS processing functions, plus several batch functions (basically legacy applications).
with the IAC team. Recruitment was handled entirely by advertising the positions within the state personnel system. DMV management, drawing from its own hiring experiences and those of DoIT, had determined that outside recruiting was not viable due to the substantial difference in pay scales between the state and industry. According to the individual who now heads the DMV IT Department, though he applied for the position of leader of the new team, he was never interviewed or even contacted regarding his application. It appeared to one of the DMV IT Department members that senior administration at the DMV preferred “brand new employees to be part of the brand new system.” Structural issues began to impede the knowledge transfer effort almost as soon as the in-house team was assembled. First, there was a communications gap between the DMV team and the IAC contractors: the formal British English spoken by the development programmers, with a variety of dialectal inflections, was frequently unintelligible to the Nevadans, and vice versa. Second, the contractors were under intense pressure to complete the project in the least possible time. Though teaching and some level of knowledge transfer were specified in the contract, this had little effect on the programmers “in the trenches”; they felt they had no time to teach, and so little teaching occurred.
Genesis Implementation Effort
A pilot implementation of commercial licenses and registrations was performed about three months prior to the intended go-live date for the rest of the systems in order to check the efficacy of the systems. The pilot experienced problems with invalid data conversion and program bugs, but these problems were believed to be isolated within that specific application. The targeted go-live date was rapidly approaching, and many project participants were concerned about the integrity of the new system. The go-live date of July 4, 1999, was chosen based on a balance of considerations: (1) a three-day processing window was required for the conversion, which required a holiday weekend; and (2) the next holiday weekend after July 4th was Labor Day, almost two months later. Starting a few months before the proposed go-live date, a “war room” was set up to closely monitor the progress of system development; members of RMC, IAC, the DMV and DoIT participated. Application testing with users took place continuously during this time and a considerable number of bugs were discovered. Many of the bugs were usability problems rather than outright software errors, but of such magnitude as to demand correction. The problems were fixed, but the fixes frequently led to changes in the appearance and functioning of the input screens. This had the effect that users, who were necessarily trained in shifts at different times, were trained on versions of the system that differed from each other and from the version deployed at go-live. Much consideration was given to the implementation approach for the project. The key stakeholders decided to use a “big bang” implementation approach because of issues involved with computer system connections between the DMV and other entities. It was decided that all field offices would use the new system at the time of implementation.
It would have been almost impossible to maintain database consistency with parallel systems or selected locations for implementation, so the project participants believed that the only way to implement the new system was all-at-once. This approach usually requires extensive testing to ensure minimal problems (Pressman, 2001), but the tight deadline did not leave the DMV with sufficient time for such testing. A side effect of the “big-bang” implementation decision was that the only fallback position was to reinstate the legacy system. Thus the decision was made to keep the legacy system current, and to install legacy system terminal emulation on all field office client PCs so that a return to the legacy system could be made in the event of a catastrophe with the new system.
Slipping a Much-Publicized Deadline

Only days before the 4th, the decision was made not to deploy the system at that time. Looking back over nine months, the IT Director’s assessment of the system at that point was: “No way was it even close
to ready.” Shortly after the missed go-live deadline, the existing Deputy Director resigned and was replaced. The missed deadline was highly publicized and thus problematic for all elected officials sponsoring or even generally favorable to the project. The new Deputy Director was widely viewed as a troubleshooter for the project, and had ready access to the Governor to augment her own substantial knowledge and authority. She quickly became familiar with the details of Genesis system development, and increased resources were brought to bear on development within a week. Since much of the ‘back end’ system code turned out to be legacy code, the legacy team finally became a serious player in the project. With the installation of the new Deputy Director, Genesis had a strong internal champion (Geisler, 1995, pp. 149-153) for the first time since the resignation of the original director, three years earlier. The level of hands-on management of the project also increased dramatically, and this would prove vital following the unsuccessful initial deployment of the new system.

The attitude in the war-room became more determined. RMC assumed an increased oversight role and continuously evaluated the system and the fluctuating number of bugs. In hindsight it is apparent that the “acceptability” of the system was highly subjective. What would have been considered unacceptable early in the project became increasingly acceptable as external criticism of the system increased and another possible go-live date, Labor Day 1999, approached. In the end the decision to deploy fell to the Deputy Director in close consultation with the Governor. According to one of the legacy team members, “[she] made a call to the Governor and he said ‘make it happen.’” The external political forces behind the scenes in Genesis since its inception are quite apparent in this decision process.
Going Live and Failing

The system went live over Labor Day weekend 1999. Although some initial confusion was anticipated, the first use of the system was, in the words of many high-ranking DMV employees, “disastrous.” The publicity campaign the DMV had mounted to curtail the demand for services during system ramp-up had just the opposite effect. Many people feared the new system would be unable to provide them with services in a timely fashion and came in before they actually needed to, turning concerns about system overload into a self-fulfilling prophecy. On the morning of go-live, lines at DMV offices quickly lengthened to stretch outside the buildings and wind through the parking lots.

Within 20 minutes of the start of business, the field office terminals were effectively dead. Diagnostics quickly narrowed the focus to telecommunications hardware: transactions were getting to the RS6000s but not to the ES9000 mainframe (see Figure 3). DoIT and IBM then discovered and fixed the undersized channel connect between the RS6000s and the mainframe. This major hardware problem was resolved by noon of the first day of operation of the new system. Still, transaction times for the new system remained far longer than for the legacy information systems, and it was obvious to the customers that the service personnel were very uncomfortable with the new technology. By evening, customers who had been waiting more than eight hours were turned away. Disgruntled customers contacted the media, and the following weeks brought relentless negative publicity for the slow, unresponsive DMV system. The publicity brought more customers to the DMV (those who wanted to be on television and in the papers), which contributed to longer lines. While some of the problems with the new system had been anticipated, others had not. Still other problems were known, but their severity under actual operating conditions had been underestimated.
The issues causing the most severe problems were:
• Complex screens that were difficult to understand and slow in operation.
• System design requiring many screens to complete a simple transaction.
• Last-minute software changes that introduced many minor but undocumented procedure changes. In effect, technicians faced systems on which they were untrained.
• Memory leaks that caused system crashes requiring as many as 15 PC reboots per day.
Genesis Turnaround Activities

Ongoing negative publicity led DMV administration to question whether they should return to the legacy system. That strategy was actually demanded by several angry legislators. DMV administration, however, in concurrence with several influential members of Genesis’ legislative oversight committee, decided to stay with the new system because they believed it was the only way to cope with the state’s population growth in the long term (http://www.leg.state.nv.us/70th/Interim/Statcom/Genesis/). Although well intended, this decision brought on even more intense pressure to decrease transaction times and stem the deluge of negative publicity. It became clear that the many software bugs had to be brought quickly under control.

Managing the Problems and Fixing the Software

Given the huge number of problems and finite programming resources, directing resources became a problem in itself. The mechanisms that were already in place to identify, log and classify problems into “critical” and “non-critical” bugs were made more robust. To deal with the training issues and eliminate the constant distraction of the programming staff, a three-tiered system resembling a familiar “help desk” environment quickly evolved. Error reports are now phoned to a facility in Las Vegas staffed by field technicians with long-term experience with the DMV, and with both the legacy and new systems. This tier filters out and handles training problems. True software errors, or requests to change extremely cumbersome interface elements, are forwarded electronically by the Las Vegas facility to the programming staff in Carson City.

The field technicians gained more say in prioritizing problems, and most of their issues revolved around the complexity of the entry screens. Finding a solution to this problem has proven difficult. DMV administrators want to gather more data during transactions than were gathered with the legacy system.
Additional forms and data entry must be performed to achieve that end. While simplifying some forms would be a possible solution to the entry problem, it would also preclude additional data collection. In addition, the programming framework, which helped IAC generate the interface quickly, requires that certain data be accessed hierarchically to work effectively. Some forms must be accessed before others or the system will not work. The result is that field technicians are forced to enter more data through multiple hierarchically related forms. A few forms have been condensed but most remain multiple-screen entries. IAC agreed to extend their contract and offered additional hours to fix problems and repair screens, since the DMV IT department has virtually no experience with PowerBuilder. Ultimately, however, it may be that field technicians will be forced to adjust to a system that is more complex and difficult to use than the legacy system.

Fixing Hardware/Software Compatibility Problems

The most frustrating problem for field technicians was the apparently unreliable hardware in the field offices. PCs locked up and crashed up to 15 times each day. A combined team of people from DMV, DoIT, and IAC finally pinpointed the problem as incompatible versions of the Windows operating system and PowerBuilder®. This was an unanticipated problem and necessitated hardware and software upgrades of all workstations. It took almost six months to find the error, and then another four months to complete the upgrades. During that time, the morale of the field technicians plummeted, and many of them became convinced that the Genesis project was a failure.

Workflow Changes at the Field Offices

Even while expending resources to repair the software and make it usable, the DMV put additional resources into increasing customer satisfaction.
The process modifications that in many ways produced the greatest perception of improved service involved customer routing and work flow rather than computer system reprogramming. Before Genesis, customers entering the DMV buildings found themselves in a large room rimmed with multiple service booths. There were a few signs explaining
which booths provided which services and there were no information desks. While unable to change the physical plant, the DMV radically improved service with three key elements: immediate routing of customers by free-roaming service representatives, a ‘first stop’ help desk and a service queuing system dubbed ‘Q-matic’. Customers entering the DMV are now immediately greeted by a service representative who determines the service the customer is seeking and ensures that the service is available at the site. The representative then routes customers to a multi-station help desk that verifies that the customer has all the information and documentation required for the service before a number is assigned for the next free service booth. After the help desk, customers take seats in a waiting area in the center of the facility. The Q-matic assigns numbers to customers on a first-in-first-out basis, similar to a paper number at a delicatessen. Numbers are assigned electronically to free technicians on a sequential basis. Each number is displayed in red LEDs over the free service booth and announced by a synthetic voice over a public address system. Helpful signs are placed throughout the office.

It is important to note the synergistic interaction of multiple re-engineering changes – manual and computerized. The Q-matic resource assignment system would have been much less effective without two other process changes:
1. The Genesis project computer information system design, which allows any transaction to be completed at any workstation; and
2. The cross-training of technicians, which was required to allow service booths to perform multiple functions.
Another important aspect of the re-engineering was the training received by all employees in customer service attitudes. Though very difficult to quantify, the effects of the ‘attitude adjustment’ by DMV employees have been widely reported in the press and favorably received by DMV customers.
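The Q-matic assignment logic described above amounts to a single first-in-first-out queue feeding whichever service booth frees up next. The sketch below illustrates the idea; the class, method and booth names are illustrative assumptions, not details of the actual Q-matic product.

```python
from collections import deque

class TicketQueue:
    """Minimal sketch of a deli-counter-style FIFO queue like 'Q-matic'."""

    def __init__(self):
        self.waiting = deque()   # tickets in arrival order
        self.next_ticket = 1

    def take_ticket(self):
        """Customer pulls a number on arrival."""
        ticket = self.next_ticket
        self.next_ticket += 1
        self.waiting.append(ticket)
        return ticket

    def call_next(self, booth):
        """A freed booth calls the oldest waiting ticket.
        Returns (ticket, booth) for the LED display and PA system,
        or None when nobody is waiting."""
        if not self.waiting:
            return None
        return (self.waiting.popleft(), booth)

q = TicketQueue()
for _ in range(3):
    q.take_ticket()            # three customers waiting: tickets 1, 2, 3
print(q.call_next("booth 7"))  # → (1, 'booth 7')
```

A single shared queue only works because, as the case notes, any transaction can be completed at any workstation; otherwise a separate queue per service type would be needed.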
Providing New Services

DMV management also decided to accelerate the development of computerized services such as Web-based services and alternate registration methods to alleviate crowding at the field offices. However, as in other states that have implemented alternatives to DMV office visits, usage remains slight to date.
Genesis Project Current Status

In March of 2000 the contract with RMC was prematurely terminated because the new DMV Deputy Director believed that the organization could handle all project oversight activities in-house. A month later, all 4,000 hours of the extension contract with IAC, negotiated just after deployment of the system, had been used. The DMV was suddenly on its own with respect to Genesis.

Creating a New IT Team in the DMV

The IT personnel and organizational change management tasks facing the DMV during that time were daunting: (1) to merge the DMV’s legacy and “new system development” teams into a coherent unit; (2) to structure the combined group to effectively maintain the increasingly sophisticated application; (3) to overcome programmers’ resentment of the new system; and (4) to overcome the persistent view of the field technicians that the new system was inferior to the legacy system. With the full support and constant encouragement of the new Deputy Director, rapid changes were made in the DMV IT Department: the entire department was moved to a single physical facility, a new manager was put in place, the department was reorganized to fit the new functional requirements of the system and project leaders were appointed. Training programs were established and programmers were encouraged and enabled to become familiar with the new technology of the system.

The Genesis project was intended to take seven years for full implementation, with completion scheduled in 2003. At the time of this case, the project was in its fourth year and was slightly ahead
of the scheduled implementation. However, Genesis was originally forecast to cost a total of $34 million over the seven years, and had already cost approximately $40 million. The cost overruns and negative publicity resulted in the formation of a legislative subcommittee to review the project. An independent consultant was also hired to evaluate the project; however, the results from these reviews have not yet been made fully public.

In summary, a large, difficult information systems development and implementation effort was delivered and labeled a complete failure. Through a series of manual process changes and software reprogramming efforts, public perception of the DMV has meaningfully changed and there are fewer complaints about inefficiency and poor service. Yet as the public trends toward a more positive view of the DMV and its services, members of the DMV IT Department worry about the long-term viability of the system and users grumble about the interface.
CURRENT CHALLENGES / PROBLEMS FACING THE ORGANIZATION

The most significant challenges currently facing the DMV can be categorized into technical, managerial and data integrity issues.
Technical Issues

This system will continue to evolve as state and federal requirements change, so the DMV IT staff must be capable of performing frequent modifications to all system components. Significant additional training and systems knowledge will be required before the DMV is able to fully maintain the existing functionality and grow the system as requirements evolve. No one in the DMV IT Department has much experience with PowerBuilder or LAN-based systems beyond the working experience gained on the project. Almost all of the contract programmers who did the actual work on the interface have left and are unavailable for consultation. While high-level design and requirements documentation exists, it does not always reflect the substantial number of changes made during the frenetic reprogramming immediately following go-live.

Members of the DMV IT Department are also concerned with the behavior of the system during scheduled batch updates. Some of the updates, such as yearly registration renewals, were not programmed correctly during development. Few of the batch updates were tested completely prior to go-live, so the actual performance of all programs is still to be discovered. Administration of the Genesis database, a DB2 application running on a DoIT-housed ES 9000 mainframe, is a function handled outside the DMV IT Department. Currently, as it has been since the initiation of the project, the database is administered by a single technical specialist from IBM. Although this function is effectively and capably handled, the specialist has expressed a desire to relocate at some point in the future, and it will become necessary to bring database administration in-house. Many of the technical problems originated with, or are severely exacerbated by, the lack of success of the technology transfer effort.
The need for technology transfer to the permanent staff – training in the basic technologies and experience with the actual system as designed– was understood at project inception. Devising techniques for countering the problems experienced by the DMV and many other organizations with technology transfer remains an open IS research issue.
Managerial Issues

The shift to a more complex technology requires a larger and more sophisticated in-house IT department than pre-Genesis. Each of the tiers in the system – the PC LANs, the RS 6000s and the ES 9000 mainframe – runs a different operating system and supports different programming languages and utilities. In order to effectively maintain and grow the new system, the current DMV director of IT has
estimated that 21 new technical positions are needed (the staff has already grown by 10 people over 1996 levels). However, a common misapprehension about the nature of complex information systems has arisen among some members of the Nevada legislature and the general public: the belief that an information system, once developed, can be treated as an appliance, such as an automobile or a refrigerator, providing sophisticated functionality while requiring only occasional maintenance. This perception has made obtaining funding for the proper support and growth of the system more difficult than it would otherwise be. In fact, several members of the Genesis legislative subcommittee were surprised to find that the members of the “new” IT group were being merged with the legacy group; they had assumed the new group would be disbanded when development was complete. This will be a difficult issue for DMV management to cope with during upcoming legislative budgeting sessions.

Another staffing issue with long-term implications for the success of the system is the difficulty of hiring qualified IT professionals. The new system requires client/server and GUI design and programming skills; however, industry salaries for persons with these skills are 30% higher than the state scale. Just as for the original “new system” team, the salary disparity has led to a strategy of hiring exclusively from within the state government and attempting to “grow” the required skills through training. The problems anticipated with this strategy, as experienced by similar organizations in similar situations (Knock and McQueen, 1996), are twofold. First, once trained, technical employees realize their value and leave state positions to take jobs in industry. Second, the time required to build the in-house expertise to deal with the sophisticated system already in place can jeopardize the success of the project.
Although the user interface for the system is cumbersome and resistance to it among field personnel is high, DMV IT staff eventually determined that the requirements for multiple screens originate in the system core architecture. Barring a full-system rewrite, the screen sequence is fixed and no technical interface simplification is possible. Thus a technical issue has become a management issue requiring ongoing expectation management (change management) for the field staff who must use the interface. Currently this consists largely of field supervisor “pep talks” that emphasize the flexibility and additional functionality of the new system. Eventually employee turnover will reduce the number of field staff who still recall the simpler legacy system interface.
Data Integrity Issues

A third challenge is that of data integrity – coping with the entry of conflicting or erroneous information into the database. A significant quantity of duplicate, conflicting and inaccurate data entered the new system during the conversion from the multiple legacy database systems to the new database. Additional conflicting data has resulted from relaxing the editing and data validation standards built into the original, complex screen sets, and also from an incomplete definition of required validation during the design of the system. Some of the data has been corrected by batch programs applied to the new database, but much remains to be corrected manually or by more sophisticated batch programs. Data integrity has a large impact on staffing. According to the manager of the IT Department, this is an expensive system flaw: “Five seconds of data entry by a 20K/year clerk takes six hours of 60K/year programmer time to correct.”
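The manager's figures imply a striking cost ratio, which a little arithmetic makes explicit. The sketch below assumes 2,080 working hours per year (40 hours × 52 weeks) to convert the quoted salaries to hourly rates; that conversion factor is an assumption, not a figure from the case.

```python
def hourly_rate(annual_salary, hours_per_year=2080):
    # 2,080 h/yr (40 h x 52 wk) is an assumed conversion factor
    return annual_salary / hours_per_year

clerk = hourly_rate(20_000)        # data-entry clerk at $20K/year
programmer = hourly_rate(60_000)   # programmer at $60K/year

entry_cost = clerk * (5 / 3600)    # five seconds of data entry
fix_cost = programmer * 6          # six hours of correction

# Each erroneous entry costs roughly 13,000 times what it cost to create
print(round(fix_cost / entry_cost))  # → 12960
```

The ratio is independent of the assumed hours-per-year figure, since it cancels out of the division; only the quoted salaries and times matter.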
Final Comments

In summary, Genesis remains a reorganization in process. Many of the intended structural changes to the organization – the flattening of the management hierarchy and the empowering of personnel through extensive cross-training – have been accomplished. From an organizational culture perspective, the transition from a bureaucratic, program-driven organization to a customer-centered service organization has been successfully completed. Further, the workflow reorganization at the service centers and the increasing availability of alternatives to visits to service centers have
successfully changed the perception of the DMV to that of a forward-looking, courteous and efficient organization. However, the continued success of the new divisional structure and its streamlined service provision depends completely on the success of the new computerized information system. Many of the technical problems remaining to be resolved with regard to that system are known, but others may arise as new yearly processing occurs. The primary challenges are managerial: to bring sufficient resources to bear on the technical problems, to provide ongoing funding for those resources, and to structure the resources into a stable, effective information services department within the division.
REFERENCES

http://www.leg.state.nv.us/70th/Interim/Statcom/Genesis/: Most Nevada legislative hearings, including those of the Interim Finance Committee on Genesis, are public. Transcriptions of those hearings are available at this site.

Abdel-Hamid, T. K. and Madnick, S. E. (1989). Lessons learned from modeling the dynamics of software development. Communications of the ACM, 32(12), 1426-1436.
Bashein, B. J., Markus, M. L. and Reiley, P. (1994). Preconditions for BPR success: And how to prevent failures. Information Systems Management, Spring, 7-13.
Bennatan, E. M. (2000). On Time Within Budget. New York: John Wiley & Sons.
Clemons, E. K., Thatcher, M. E. and Row, M. C. (1995). Identifying sources of reengineering failures: A study of the behavioral factors contributing to reengineering risks. Journal of Management Information Systems, 12(2), 9-36.
Geisler, E. (1997). Managing the Aftermath of Radical Corporate Change: Reengineering, Restructuring and Reinvention. Westport, Connecticut: Quorum Books.
Grover, V., Jeong, S. R., Kettinger, W. J. and Teng, J. T. C. (1995). The implementation of business process reengineering: Building a comprehensive methodology. Information Systems Management, Spring, 13-22.
Hammer, M. and Champy, J. (1993). Reengineering the Corporation: A Manifesto for Business Revolution. New York: HarperBusiness.
Jones, C. (1995). Determining software schedules. Computer, 28(2), 73-75.
Knock, N. F. and McQueen, R. J. (1996). Is re-engineering possible in the public sector? A Brazilian case study. Business Change and Re-engineering, 3(3), 3-12.
Lacity, M., Hirschheim, R. and Willcocks, L. (1994). Realizing outsourcing expectations: Incredible expectations, credible outcomes. Information Systems Management, 7-18.
Nelson, P., Richmond, W. and Seidmann, A. (1996). Two dimensions of software acquisition. Communications of the ACM, 39(7), 29-35.
Pressman, R. S. (2001). Software Engineering: A Practitioner's Approach. New York: McGraw-Hill.
Whitten, J. L., Bentley, L. D. and Dittman, K. (2001). Systems Analysis and Design Methods. New York: McGraw-Hill.
BIOGRAPHICAL SKETCHES

William L. Kuechler is an assistant professor of Computer Information Systems at the University of Nevada at Reno. He holds a BS in Electrical Engineering from Drexel University and a Ph.D. in Computer Information Systems from Georgia State University. His 20-year career in business software systems development provides insights for his research projects, which include studies of inter-organizational workflow and coordination, Web-based supply chain integration and the organizational effects of inter-organizational systems. He has published in IEEE Transactions on Knowledge and Data Engineering, Decision Support Systems, Information Systems Management, and in other international journals and conference proceedings.
Dana Edberg is an Assistant Professor of Computer Information Systems at the University of Nevada, Reno (UNR). She has an M.B.A. from UNR and a Ph.D. in Management Information Systems from Claremont Graduate University. Prior to joining UNR, she performed software engineering project management in government and industry. The major focus of her research is on the long-term relationship between users and information systems developers, but she has also published articles discussing the virtual society and global information systems. She has published in Journal of Management Information Systems, The Information Society, Information Systems Management, and in other international journals and conference proceedings.
IT Help Desk Implementation: The Case of an International Airline

Steve Clarke, University of Luton, UK
Arthur Greaves, London Borough of Hillingdon, UK
EXECUTIVE SUMMARY

This case study concerns IT help desk management within an international airline. The core of what is described relates to attempts at implementing help desk procedures in practice, and illustrates the problems of treating these both as predominantly technology systems and predominantly human systems. From the failures outlined in the case, an alternative approach is proposed, based on the application of methods drawn from an understanding of critical social theory. The practical problems and theoretical issues are discussed, and a theoretically informed framework is applied retrospectively to the case. This allows conclusions to be drawn which, it is argued, strongly support the value of a critically informed approach to human-centered IT help desk issues.
BACKGROUND

The international airline on which this case is based was formed in 1984, and operates scheduled freight and passenger air services. Sales have historically been divided fairly evenly between the United Kingdom and overseas, with transatlantic travel providing much of the overseas income. As can be seen from the tables below, both turnover and profit have climbed consistently since 1995, with the latest figures available showing turnover in excess of £900 million and profit over £100 million. The company employs almost 6,000 people – a relatively small number for such a large organisation, pointing to the efficiencies expected in its operations. The almost doubling of employees in the last five years gives some indication of the training and business continuity issues requiring ongoing consideration. With the exception of 1997, control of current assets has been strong: the company pays its debts on time and collects funds owing efficiently.

Copyright © 2002, Idea Group Publishing.
Table 1: Sales Differences (£'000)

                                   1999      1998      1997      1996        1995
                                                               (8 Months) (10 Months)
Turnover: U.K.                   491709    453107    365426     193383      228000
Turnover: Overseas               423329    369124    313030     189658      220000
Total Turnover                   915038    822231    678456     383041      448000
Profit (Loss) before Taxation    105227     75601     45172      34437       32000

[Figure: Sales and Profit, 1995-1999 (£'000)]
Table 2: Number of Employees

                       1999   1998   1997   1996   1995
Number of Employees    5913   5183   4256   3525   3405

[Figure: Number of Employees, 1995-1999]
This is clearly a successful enterprise, and that success has meant that the organisation needed to grow as the service provision expanded, placing a strain on, amongst others, the information technology (IT) department. This growth was–through the 1980s and 1990s, and in common with other major international organisations–accompanied by an expansion in the use of computers, particularly networked PCs. To facilitate this, a help desk was established to deal with user issues and problems. The evolution over time of this help desk from being a multi-skilled base of generalists to a more complex and diverse team providing support activities across a broad range of systems, forms the core of the subject of this case study.
Table 3: Control of Assets (£'000)

                      1999    1998    1997    1996    1995
Net Current Assets   73707   33263    5430   24393   38000
Working Capital      48031   50468   42019   32448   40000

[Figure: Liquidity and Working Capital, 1995-1999 (£'000)]
The Information Systems Department

The Information Systems (IS) Department was run by the IS General Manager, who had nearly 125 people working for him, of whom 47 were contractors. During the time covered by this case, there were approximately 10 unfilled posts. The department was split into four sections:
• Systems and Networking
• Applications Development
• Consultancy
• Overseas
The largest section was Systems and Networking, of which Operations formed the biggest area; this case study is mainly concerned with that area. At the time of the study, the Operations area had 42 members of staff, of whom 13 were contractors. The cost of this area was in the region of £3 million, including provision for facilities (desks, chairs and desktop computers) although not for the computer room, for which there was an additional budget of £2 million. There was a target rate for contractor staff of £25 per hour, but in practice the lowest paid contractors were paid more than this, with top rates of £55 to £60 per hour. A key objective of the IS General Manager was to move towards a charge-out rate for all work, and thus become budget neutral. The Consultancy and Application Development areas already operated in this manner.
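The gap between the £25-per-hour contractor target and the actual rates paid compounds quickly over a year. A rough sketch, assuming 1,800 billable hours per contractor per year (an illustrative figure, not one from the case):

```python
def annual_cost(hourly_rate, billable_hours=1800):
    # 1,800 billable hours/year is an illustrative assumption
    return hourly_rate * billable_hours

target = annual_cost(25)  # the stated £25/hour target
top = annual_cost(60)     # the stated £60/hour top rate

print(target, top)        # → 45000 108000
print(top / target)       # → 2.4
```

Whatever billable-hours figure is assumed, a top-rate contractor costs 2.4 times the target, so each such contractor erodes the budget-neutrality objective by more than one full target-rate headcount.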
SETTING THE STAGE

The background to this case can be traced to 1995, when external consultants were commissioned to review the help desk operation. This resulted in four documents, produced in April 1995 under the banner of ‘Help Desk Best Practice Procedures’:
• Service Request Management
• Service Reporting
• Problem Management
• Change Management
These documents outlined the role of the help desk and the help desk manager within IT. In ‘Service Reporting’, for example, there are suggestions of how to report progress to increase help desk visibility and credibility, including the use of a monthly telephone survey. However, one thing that is conspicuously absent from these studies is any mention of user meetings or user agreement. What
is also very clear from these studies is a complete absence of any evidence that the findings were implemented. Instead, the existing help desk system was replaced by a product called ‘Help Desk for Windows’ from a packaged software supply company.

Following this initial development, the situation seemed to become somewhat confused. An example of this is the release (in draft) in December 1997 of an internal document called ‘managing releases and other outages’. This document makes no mention of the earlier procedures or documents and does not appear to have been released in a final version. It is couched in very technical language, and, unsurprisingly, there is no evidence of it ever being implemented. Again, a key issue here is that the document was produced without the involvement of any help desk staff.

As the organisation continued to grow, the methods used in the help desk area became further strained, and at the start of 1998 a new IS general manager was appointed. It was this manager who commissioned a ‘Health Check’ of the current IS support service in order to recommend an appropriate way forward which satisfied the business needs and at the same time retained core strategic and business skills in-house. This case study begins with that ‘Health Check’, a report which was issued in May 1998 and which contained many recommendations for improvement. The report suggested a four-phased approach:
• Phase 1 – Quick wins, to: address those areas of the service most visible to the users; address some of the internal issues needing minimal effort to resolve; and put in place service measurement.
• Phase 2 – Infrastructure set-up, to: establish an improved support environment with better tools and knowledge base; improve processes; undertake initial preventive maintenance initiatives; and agree and implement trial service level agreements (SLAs).
• Phase 3 – Effecting the change, to: restructure the support service in order to improve the user-IS interface and service, and to offer IS staff the opportunity to develop strategic skills; put in place SLAs with the business; and establish back-to-back SLAs between the help desk and other support teams.
• Phase 4 – Continuous improvement, to: develop the proactive aspects of service delivery to improve the stability of the infrastructure and systems; further improve and adapt the service to meet business requirements; maximise the use of technology to improve the service; and ensure the cost-effectiveness of the service.
As a result of this ‘Health Check’, a contract was awarded for the first phase of a project aiming to achieve the ‘quick wins’ and to build a better working relationship with the staff in the IS area.
Phase 1 – Quick Wins

This phase did not deliver as many benefits as had been hoped for at the initial health check, and highlighted weaknesses in a number of areas:
• Help desk management weaknesses in the collection and production of statistics, as a result of no overall working pattern.
• No local knowledge in the use and development of the Help Desk for Windows product.
• Weakness in product management, identified in the rollout of the new network.
• Overall, a lack of communication between the different duties (or sub-departments) of the IT department.
All of this is a precursor to the main case to be described here, which starts at Phase 2: ‘Infrastructure Set Up’.
CASE DESCRIPTION

As a result of the weaknesses highlighted in Phase 1, it was decided that this task should now be revised to ‘Help Desk Supervisor Support’. The primary driving force for this decision was the organisation bringing in additional resources to assist the help desk supervisor in her role.
IT Help Desk Implementation
245
Help Desk Supervisor Support

The Systems and Networking Department consisted of 53 staff, of which 19 were permanent and 34 contractors. In the help desk area, morale was low, owing to concern that the help desk was about to be outsourced. Initial terms of reference for this study were drafted and agreed by the help desk supervisor. The first priority was to produce the statistics for the previous month. These had not been produced before, and there were no targets in place. Since recording of information could be achieved through the automatic call distributor (ACD) and the help desk software, it was agreed that 80% of calls received would be logged, and that a target of calls being answered within 20 seconds would be set. The results were disappointing, with only 25% of the calls answered being logged, 19% of the calls remaining unanswered and the number of calls outstanding rising by 44 to 139. What was even more disappointing was that the information pack containing the results was only issued to other IS managers – there were no plans to issue any performance statistics to users. However, it was decided to review the statistics on a weekly basis with each individual help desk analyst. The results were as follows:

Figure 1: Results of Phone Calls on a Weekly Basis (calls logged and abandoned calls, Weeks 1–4)
Table 4: Percentages of Calls Logged per Analyst

            Week 1   Week 2   Week 3   Week 4
Analyst 1     47%      44%      68%      99%
Analyst 2     52%      72%      61%      95%
Analyst 3      8%      28%      48%      98%
Analyst 4      –        –       41%      80%
Analyst 5      –       22%      22%      26%
Analyst 6     49%      71%      85%     125%
Analyst 7      –       23%      26%      72%
The number of calls logged rose steadily over the first three weeks, corresponding with a drop in the number of calls that went unanswered. The 80% target was still not being met, with a number of operators logging less than 30% of calls. It appeared that the reason for this was that the message had not been delivered in a consistent manner. For example, there were two new starters during Week 2: one of these did not log any calls even though he sat with another analyst for a day. He suggested that at his previous company they did not log calls but just solved them. In an attempt to remedy this, the help desk supervisor spoke to the analysts on the Friday of Week 3 and made it clear that improvement was essential. This led to extraordinary behaviour, with one analyst going sick and another finding calls from previous weeks that had not been logged. In addition, three analysts each took a day’s leave during the week, with the result that the number of unanswered calls also rose. The consultant produced an information pack for the current month; this was now expanded to include month-by-month comparisons and trends in addition to the monthly data. The consultant also wrote a procedure to show how the information pack was produced. Yet again, the information pack was not issued outside the IT department, although some of the graphs were placed on a notice board by the manager’s office.
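The percentages reported in the case follow from simple arithmetic on the call records. The following sketch is purely illustrative: the actual ACD and ‘Help Desk for Windows’ exports are not reproduced in the case, and the function, field names and the illustrative call counts here are hypothetical.

```python
# Illustrative sketch only: how the monthly help desk statistics in the
# case could be derived. The data layout and call counts are invented;
# the real figures came from the ACD and the Help Desk for Windows package.

def weekly_stats(calls_offered, calls_answered, calls_logged, target_logged=0.80):
    """Return the percentages tracked in the case: the proportion of
    answered calls that were logged (against the 80% target) and the
    proportion of offered calls that were abandoned."""
    logged_rate = calls_logged / calls_answered
    abandoned_rate = (calls_offered - calls_answered) / calls_offered
    return {
        "logged_pct": round(logged_rate * 100),
        "abandoned_pct": round(abandoned_rate * 100),
        "target_met": logged_rate >= target_logged,
    }

# Hypothetical month mirroring the case's reported shortfall:
# only about a quarter of answered calls logged, 19% abandoned.
month = weekly_stats(calls_offered=1000, calls_answered=810, calls_logged=202)
```

Run weekly per analyst, the same calculation yields the figures in Table 4; the over-100% value for one analyst is consistent with previously unlogged calls from earlier weeks being entered late.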
Some Early Conclusions

What are we to deduce from this situation? The results over the month proved to management that the help desk supervisor needed additional support and assistance. However, the problems were seen to be deeper than this. The overemphasis on technical issues had effectively led to the human problems being treated as secondary. Within the IT domain there is a wealth of empirical and theoretical evidence to suggest the problems likely to result from such an approach, a short summary of which is given here before moving on to the practical measures taken to improve the situation.

Technology-Based Approaches to Information Systems

The literature on information systems reveals a strong adherence to pragmatism, with little or no explicit recognition of underlying theory. Following on from this adherence to pragmatism is the treatment generally of information systems as a technical, problem-solving domain. This adherence to pragmatic problem solving leads to tensions when the system to be implemented or managed requires significant user input. Just as most of the information systems literature stresses a project management, methodological, pragmatic approach, so it also emphasises the need for discovering the requirements of users, basing this view on the observation that systems frequently fail to meet user needs (Clarke, 2001, p. 89ff). Most commonly, the incorporation of user requirements into information systems development (ISD), for example, is achieved by including a user analysis stage within the existing problem-solving approach (Clarke, 2001, p. 8), with advice on how to undertake this user analysis often addressed only weakly. The argument for an alternative to these technology-based approaches is supported by the findings from a number of studies of systems failure. Boehm (1989, p. 2) cites examples of such failures, and considers that, directly or indirectly, they contribute to as much as 50% of total systems cost.
In all of these instances, the systems development life cycle emerges, implicitly or explicitly, as the prime control element, resulting in a methodology which adheres to the functional engineering model, taking a structured, problem-solving approach: human complexity in the system is seen as something which can be analysed, and toward which a specification can be written. Beath and Orlikowski (1994), focusing on a three-volume text by Martin, mount a convincing critique of the interaction between users and systems professionals in information systems development (ISD), concluding that the commitment to user participation is revealed as ideological rather than actual, with users frequently shown to be passive rather than active participants in the process.
Through a thorough review of the information systems development literature, Lyytinen and Hirschheim (1987) make a compelling case for the argument that few information systems can be considered a success. Claims of success are, they argue, largely based on an erroneous classification of how such success should be measured, which usually focuses on the extent to which the completed system meets the requirement specification laid out in advance. Lyytinen and Hirschheim promote the notion of expectation failure, or the failure of the system to meet the expectations of the key stakeholder groups, as conveying a more pluralistically informed view, and forcing a dynamic perspective of information systems development. If technology-based approaches to information systems provide an impoverished view of the domain, perhaps the solution is to be found in human-centered methods.
People: The Human-Centered Methods

The human-centered approach to information systems has given rise to the so-called ‘soft’ methods. It is argued that traditional ‘engineering’ approaches are ‘hard’ or functionalist, being based on a view of the world which sees it as composed of deterministic, rule-based systems, in contrast to which the soft methods take an interpretivist, ideographic stance. An early attempt at incorporating human issues into what was seen as a technical domain was the ETHICS methodology. ETHICS (Mumford, 1985) was developed in the 1970s as a socio-technical methodology (Effective Technical and Human Implementation of Computer-based Systems), which follows an essentially problem-solving approach. More recently, Stowell and West (1994) have promoted the client-led design (CLD) methodology, which takes the position that, since the information system results from social interaction, participants in that interaction ought to be central to systems analysis and design. In their view, information systems development needs to be driven by interpretivism, and not, at the technical development stage, “engulfed by functionalism” (Stowell, 1991). Consequently, Stowell and West are critical of methods whereby soft, interpretative approaches such as soft systems methodology (SSM) are used to front-end a technological development, arguing that once the soft analysis is passed to the technical specialists, the benefits of that soft analysis are largely lost. Reflection on the activities carried out so far within this study, from the perspective of technology-based versus human-centered analysis, seemed to point to a need for the human-centered side of the work to be given more prominence. Considerable thought was given to this before deciding to use focus groups of relevant participants to throw light on the issues.
Focus Groups

During 1998, a number of focus groups were set up in an attempt to arrest the ever-worsening situation in help desk support. The groups identified were:
• Service directory and targets: responsible for the development of service targets.
• Support processes: responsible for the identification of tasks and activities needed within the support processes.
• Team relationships: responsible for reviewing the draft support processes and the implementation of those processes.
• Help desk/field feedback meetings: responsible for the operation and the development of the support processes.
• Support cross feedback meetings: responsible for monitoring of support processes and identification of improvements.
• User review meetings: responsible for the review of the IT information pack, service level agreement (SLA) monitoring and identification of new requirements.
The membership of each group was carefully determined, linked to individual and team skills.
Attendance and Expectations of Group Members

Attendance at the meetings varied from five to eight. Lateness was a common problem, owing largely to: a lack of time management within the IS team – the informal culture of the organisation exacerbated this; and opposition to the changes likely to flow from the meetings – this also surfaced in actions from meetings being ignored, and poor response to other requests. There were different feelings and expectations from the group participants at all of the meetings, and in particular at the process model meetings. The detailed findings below relate to the key attendees (names anonymised).

AB was the most senior participant at these meetings in terms of position and length of service, although perhaps the youngest. He had his own ideas about what needed to change, but tended not to articulate them other than to defend his position. His discipline within the IS department was Networks, and he had a reputation for not completing projects. Frequently late for meetings, AB never responded to emails or reports from the meetings. He was very polite, although he was never available at other times for informal or formal chats. While it was difficult to get a clear view of what his feelings or expectations were, he seemed to favour small personnel adjustments rather than large-scale changes. He was also aware that the new manager was looking to outsource some of the operation to reduce costs and improve quality, so was perhaps concerned about his future within the organisation.

CD worked for AB. He had no formal management training, and had a number of problems within the network team, seemingly related to his attitude to other IS colleagues. Although he appeared open and ready to accept change, he did not speak at all at the meetings and did not respond to emails or reports from the meetings.
Although informally he ventured a number of opinions and ideas as to how he would like to see the organisation changed, these were at no time formally expressed in the group meetings. The impression was that CD would like to see change, but did not expect it because of the attitudes of his manager, and therefore did not want to be seen to be formally involved.

EF also worked for AB, and had been frustrated about the lack of developments in the IS area. Her expectation from the process meeting was not only to identify what needed to change, but also to gain a new position in the organisation as ‘Change Manager’, which would report to the new IS manager. She expected the consultants to be able to implement change by removing the Help Desk supervisor and outsourcing the Help Desk. EF had trained as a manager but had little experience in this field. She seemed unhappy about the lack of commitment and involvement of others in this process.

GH worked for EF and was keen to get involved, though he did not talk in any of the sessions and preferred to put his ideas in writing to the consultants. He was aware of the tensions within the organisation and seemed to feel intimidated by management. His expectation was that change would happen and he would get a new job providing that he did not “rock the boat”. He expected the consultants to be able to change the organisation.

IJ was at the same level as AB and was responsible for Application Development. Her expectations were that the work being done by the consultants would not be implemented while AB was still employed by the organisation. She refused to attend the process meetings or send any of her staff, although she did agree to meet with the consultants separately. She supplied feedback on the reports produced following the process meetings, but did not see Applications Development as part of the Help Desk process.
Service Directory and Targets

This group consisted of the IT management team along with the consultant service manager. The purpose of this group was to draft service performance measures and targets. The idea was to set a number of performance indicators, set targets for each of these indicators and put procedures into place to monitor them. The group had three months to put these into place before they were discussed with users in an attempt to set up service level agreements. To be able to set realistic targets,
however, the aim was to implement the performance indicators as quickly as possible. The group met on a weekly basis, and initial feedback was encouraging, with all managers picking up actions. However, there were actions from two managers that were never resolved and were carried over from meeting to meeting. Also there were problems with performance indicators that could not be measured by existing software. The manager responsible for development was reluctant to spend resources or to give priority to the work needed to create new software, and to overcome this the service manager developed his own software using Microsoft Excel, at which point the focus group stopped meeting.
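The Excel tool the service manager built is not described in the case, but its underlying logic can be sketched. The following is a hypothetical illustration only: the indicator names and the measured values are invented, though the 80% logging target and the 20-second answer target are the two set earlier in the case.

```python
# Hypothetical sketch of the performance-indicator monitoring the service
# manager implemented in Excel. Indicator names and measured values are
# illustrative; the targets (80% logged, 20-second answer) are from the case.

TARGETS = {
    "calls_logged_pct": 80,    # at least 80% of calls received should be logged
    "answer_time_secs": 20,    # calls should be answered within 20 seconds
}

def review(measured):
    """Compare measured indicators against their targets; return the
    names of any indicators that missed target."""
    misses = []
    if measured["calls_logged_pct"] < TARGETS["calls_logged_pct"]:
        misses.append("calls_logged_pct")
    if measured["answer_time_secs"] > TARGETS["answer_time_secs"]:
        misses.append("answer_time_secs")
    return misses

# Illustrative week in which both indicators fall short.
week = {"calls_logged_pct": 47, "answer_time_secs": 28}
shortfalls = review(week)
```

The point of such a mechanism, whether in Excel or anywhere else, is simply that each indicator has an explicit target and a routine comparison against it, which is what the focus group had been unable to put in place through the development team.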
Support Processes

There were a number of workshops to discuss the support processes. Workshop 1 was set up to brainstorm call management and problem management. The technique used at the workshops was to look at each process from the perspectives of:
• Users
• ‘Doers’ (staff)
• IT Management
The processes would then be drafted and reviewed by a group called ‘Team Relationship’, who would also be responsible for implementation of the processes. Each department of IT was invited to send a representative to the workshops, and most ‘nominated’ the manager or supervisor. The first workshop was without representation from development or business consultants, and in fact started, because of lateness, without the IS support and network manager (AB) or the network manager (CD). Also, although those attending were representatives of their departments and the relevant skills areas, there were anomalies:
• There were no representatives invited from the field or help desk areas of the organisation. Therefore, in addition to no user representatives, there were no ‘doer’ representatives either.
• Among the representatives who had turned up, there were organisational conflicts that could have had career implications: CD and the IS support manager (EF) reported to AB, and the database administrator (GH) reported to EF.
The initial debate as to whether this was the right approach to take took place mostly between AB and EF. AB suggested that the consultant, who had previous experience, should develop processes and procedures without wasting the time of employees, while EF asked if users could be requested to make an input to process requirements rather than relying on those present identifying what they thought users would want. It was agreed that the draft processes which emerged from the workshop would be circulated to user representatives by the IT management to seek feedback, and that this would furnish input to the ‘Team Relationship’ group.
The debate continued, with a lot of requirements identified for all aspects; in fact, the management requirements were split between what the help desk manager would want and what the rest of IT management seemed to require. However, the discussion was mainly between AB and EF, with occasional comments from CD but no comments at all from GH. In the end, AB commented that the problem was with the help desk: if they were doing their job correctly, this workshop would not be necessary, and, as professionals, they all knew what the users wanted. At the end of the workshop, it was agreed that the consultant would write up the notes and send them to all attendees. After feedback from the notes, the consultant would draft the processes for call management and problem management. After the meeting, a representative from development turned up and explained that they had not come to the meeting because they felt they had nothing to offer, since both call and problem management were outside of their responsibilities. He did, however, make three points:
• How would calls from the United States affect the statistics, given the time delays?
• It took a long time for the help desk to answer the phone, and if the caller did not hang up it would not be recorded as an abandoned call. Other measures were needed and he would
document these.
• He thought it was important to conduct a customer survey to find out the views of the users, and this should be made ‘public’.
There was no feedback from the notes from this first workshop, and the other measures mentioned by the development representative were never received. Workshop 2 was scheduled to discuss change management. On this occasion a development manager (IJ) asked if she could change the meeting, and when this was not possible, asked for a private meeting. This was agreed and took place before the scheduled workshop; it was hoped that this would bring any comments made into the discussion. However, the meeting was quite negative, and the following points were made:
• It was felt that development was outside of the phase 2 review and that the problems were in the IS support and network area, and in particular with the help desk.
• There was no desire to change current working practices at the company; the IT department had been round this loop on at least two previous occasions without any differences other than the loss of time.
• There were a number of different release procedures depending on the application, and the help desk was not included in these processes.
• The help desk had refused to get involved with the ‘bug report’, which meant that users had to go to development directly, without problems being given any priority or resolution being planned.
• A tool had been implemented to control change management, but the rest of IT was not using it.
The workshop proceeded after this meeting with the same attendees as before, and with the same people, AB and EF, dominating the discussion. However, on this occasion AB was keen to keep the responsibility for change management and did not feel that the other areas of IS Support and Network, including the help desk, needed to get involved in the process. By having a project database online, everybody would know what was happening and when it was happening.
The meeting managed to develop the requirements for the three aspects, and the consultant agreed to issue the notes and seek feedback. On this occasion the comments from development would be included. Workshop 3 took place with a different group, this time including representatives from the help desk and field instead of AB and CD; development was not invited. This was an attempt to remove the negative comments from the previous two workshops and to get comments from GH. The workshop had the same purpose as the previous workshops, but in this case there was also an attempt to include some aspects of the Team Relationships, i.e.:
• Decide what asset management should cover, from a user’s, IS doer’s and manager’s viewpoint.
• Decide who best should be involved in the process.
The first part of the meeting was concerned with the control of assets. There was a very negative element when the field supervisor suggested that there were a number of users implementing their own software and hardware. It was suggested that the way around this would be to identify and publish the top 10 offending users, but the notes of the meeting were more tactfully worded to suggest that this could be considered in the future. The rest of the meeting proceeded without further negative comments, and yet again the consultant produced the notes from the workshop.
Workshops: Conclusions

A key element to emerge from all of these workshops was the extent to which the proceedings were distorted by managers taking positions of power to force through their views. This issue will be seen to be key in the conclusions to this chapter.

Team Relationships

This group met on two occasions and consisted of more junior staff than had previously been
involved. The first meeting was to discuss the implementation of the problem management process; however, not one member of the group had read or even looked at the document before the meeting. In fact, they were not aware of the change programme that was being implemented at the company, and therefore it was necessary to explain what was being done and the current status. The participants then received a presentation on the problem management process, which mainly consisted of working through the process diagram. This sparked a lot of interest, and the process was broken down into three sections: incoming problems, problem resolution and preventing problems. For each of these sections, different activities, and who should be responsible for these activities, were identified. The second meeting was arranged to discuss the implementation of change management, but attendance was disappointing. As well as no representative from the help desk, two of the people from the previous meetings did not attend. However, the people who did attend had read the change management process document, although they did not identify any changes or errors. This meeting succeeded in identifying problems with the current change management procedures, with quite a number of areas that could be improved:
• Identification of projects to be undertaken over the next six months: there is a project database, but it appears to be a speculative wish list with no plans for implementation or even project start dates.
• The need for more active project management to ensure that more direction is given.
• A need for standards.
In addition, a number of roles were identified during this process by employees. These were not seen as an increase to the current establishment, but as giving responsibility to existing members of staff. These posts are listed below.
Table 5: Part-Time Roles Identified During the Team Relationship Meetings
• Royal Blue Help Desk for Windows (HDfW) Administrator – More training was required in the use of HDfW, geared towards the company’s way of working. Training was also identified as an important requirement for the implementation of the support processes, especially in Call Management. In addition, it was identified that a new set of closure codes, and a mechanism for allocating codes, were required. It was assumed that the help desk supervisor or a HDfW Administrator could do this work.
• Configuration Manager – There was a role for a configuration manager to control asset management. This role was currently in the field team.
• Tool for Change Management – A tool to help record and control change was needed. Someone would need to evaluate, select and implement an appropriate tool.
• Standards and Procedures – During change management discussions it was identified that procedures and standards were required, as were some procedures, and perhaps a tool, for project management.
• Document Control – With standards, procedures and processes in place, it was necessary to have someone to take responsibility for document control; this should fall into the configuration management role.

Help Desk / Field Feedback Meetings

Following the support workshop, feedback meetings were held according to the following terms of reference drawn up:
• To improve information logged by all duties.
• To identify preventative maintenance or other strategies to reduce the number of incidents and problems.
• To coordinate the resolution of problems to improve cross-team cooperation and develop a service culture.
In order to:
• Provide a better service to the user.
• Help provide a support service that is responsive, knowledgeable, available, consistent and appropriate.
This first meeting was well attended, with resolutions being proposed to meet the existing problems between the two teams. It was agreed to meet two weeks later to give time for all nine actions to be completed. However, the next meeting began with participants arriving between 20 and 35 minutes late for a meeting scheduled to last one hour, and with no actions having been completed from the last meeting. New completion dates were set which were about three or four weeks away, but subsequent complaints by participants about the organisation, the management and the software being used (in fact, very similar complaints to those being raised before phase 1 of this change process) began to stall the whole operation. After a discussion with EF, who was not present at this meeting, it was decided that it was perhaps too early to hold meetings at this level, since it was apparent that the necessary ‘discipline’ was not in place. It was decided to hold meetings at the supervisor level instead.
Support Cross Feedback Meetings

These meetings were held on a weekly basis for three weeks, and although supervisors raised a number of problems, there did not appear to be much enthusiasm for resolving them. At each of the meetings, participants volunteered for actions to resolve the problems identified, but none of these actions was completed. To assist in this, the actions were issued within hours of the end of the meeting, in advance of the minutes, but there were still 27 actions outstanding. The major concern was that these meetings were ‘responsible for the monitoring of the support processes and the identification of improvements’. However, the support processes had not yet been implemented, and a lot of the problems being brought to the meeting would have been solved had the processes been in place. Despite all these difficulties, no further meetings were planned, since the timetable for phase 2 was coming to an end. The first support procedures were issued (in draft form, with a request for comments) within two weeks after the first workshop. When none were received, a follow-up email was issued asking for comments or queries about the documents, and in particular:
• Level of detail
• Flow chart format
• Omissions or lack of understanding
Still no comments were received, so it was decided to arrange one-to-one meetings with each of the people the drafts had been issued to. These meetings took place but were no more successful in eliciting comments. By the end of October, four support processes had been issued:
• Call Management
• Problem Management
• Change Management
• Asset Management
Consultation with users was raised in the workshops, but none was implemented, since it was decided that the processes should be in place before they were discussed formally with the users.
User Review Meetings

There were a number of reviews with senior management throughout phase 2, to report on progress, problems and issues. It was made clear at all of these reviews that there appeared to be a lack of commitment to change and that the help desk was seen to be the root of all problems. Another factor which became clear
through these reviews was the deep divide between development and support. Development liked to do things in a formal way – they used standards and methodologies; support liked to do things in an informal manner, without standards at all. It was recognised that phase 2 would not solve this relationship, and as a consequence, development were removed from any further involvement. However, this did not solve the problem of the implementation of the processes within the support area. It was decided to arrange a workshop in an attempt to refocus the support managers on the need for change. The workshop was arranged for the planned end of phase 2, the end of October. However, management had decided not to proceed with phase 3, but wanted to extend phase 2 for a couple more months to allow the development of service level agreements. The service manager (JS) resigned at this time; it was clear he felt that progress was not going as quickly as he would have liked and that issues had not been addressed by management.
User Review Workshop

The workshop was a full day, with the participants being the same people involved in the support process workshops, with the inclusion of the help desk supervisor (LW) in place of the DBA. The workshop was facilitated by a member of the consultancy company who had not been involved with the company before, and who had deliberately not read any of the previous documentation, including the support processes. This meant that he would go over old ground again and give the participants an opportunity to review the work that had been done. The day was divided into a number of sessions, both group and individual working sessions, and there appeared to be plenty of enthusiasm. A number of actions were identified, including some for immediate action. These actions were never completed!

Human Issues Emerging from the Meetings

The Help Desk/Field Feedback meetings were held because of tensions between these two areas, and there was an expectation that by having a formal meeting, these tensions would be relaxed. The first meeting was polite, with a number of actions identified which the members of both areas were expected to carry out. Following this meeting, however, problems began to surface, related to personal relationships within the teams. In addition, tensions emerged between the field and help desk areas as a result of different employment contracts: the help desk operatives being contractors, and the field being permanent staff. Help desk staff, for example, only worked contracted hours and took all their breaks (very important in that role), while field staff worked longer hours and very rarely took breaks. This resulted in a call for action from senior management, and when this was not forthcoming, the help desk staff “closed ranks” and began a “work to rule.”
Service Level Agreements
The aim of this exercise was to improve the service to the end user, but before this could be done, the service to be provided had to be defined. Phase 2 of this change programme was all about moving towards this goal, the first activity, for example, being to produce an IS Service Catalogue and Service Targets. Ensuring that this service could be provided required a disciplined, professional team working to deliver it in a consistent manner: a well-trained and motivated workforce, with procedures and processes which allowed them to deliver the required service. Unfortunately the processes developed were never implemented, although a number of measures and targets were in place. The next step taken was consultation with the user. The first stage was a presentation to the user group representative by the IS support manager, outlining what had been done and what was going to be done. This presentation was well received, and plans were made for further one-to-one meetings with individual users. The first of these was with flight operations, and it was clear that the service being offered fell short of what was required. The flight operations manager was not
254 Clarke & Greaves
happy with the service his department received, and the service targets confirmed his concern. The IT manager suggested that rather than receiving a budget for running support, it would be better to have users pay for it. A cost model was developed which priced the standard service, i.e., what was being provided now, and this could be supplemented by additional services provided at a charge-out rate. A process for SLA development was drawn up, with a plan to implement the agreements over the first three months of 1999, ready for the new budget period starting May 1, 1999. This did not happen either.
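The charging scheme described above, a flat fee for the standard service plus charge-out rates for extras, can be illustrated with a minimal sketch. The case does not give the airline's actual services or figures, so every name and rate below is hypothetical.

```python
# Hypothetical sketch of the SLA charge-out model: each department pays a
# flat fee for the standard service (what was already being provided),
# plus a charge-out rate for any additional services it elects to take.
# All service names and figures are illustrative, not from the case.

STANDARD_SERVICE_FEE = 10_000          # flat annual fee for the baseline service
ADDITIONAL_SERVICE_RATES = {           # charge-out rate per unit consumed
    "out_of_hours_support": 45.0,      # per hour
    "on_site_visit": 120.0,            # per visit
    "bespoke_training": 300.0,         # per session
}

def annual_sla_charge(extras):
    """Total annual charge for a department.

    `extras` maps an additional-service name to the units consumed
    over and above the standard service.
    """
    charge = STANDARD_SERVICE_FEE
    for service, units in extras.items():
        charge += ADDITIONAL_SERVICE_RATES[service] * units
    return charge

# Example: a department takes 40 hours of out-of-hours support and
# two on-site visits on top of the standard service.
print(annual_sla_charge({"out_of_hours_support": 40, "on_site_visit": 2}))
```

The design point is the one the IT manager was making: the baseline is costed once for everyone, and anything beyond it becomes a visible, per-department charge rather than an invisible draw on a central budget.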
CURRENT CHALLENGES/PROBLEMS FACING THE ORGANISATION
A number of initiatives developed during phase 2 were never implemented; these included the support processes, the service level agreements and the service measures. Phase 3 was not started, the consultancy organisation withdrawing from providing consultancy and the airline wanting a period of reflection on what had been done. The consultants had wanted an outsourcing agreement with the company, and it seemed unlikely that this would happen. In addition, during phase 2, because of a number of network problems, the company commissioned the consultants to conduct a network server health check. This highlighted problems with the setup of the servers and recommended a number of actions, some of them marked 'for immediate action'. No action was taken to implement this report. So why did these initiatives not get implemented? In a practical or pragmatic sense, there could be many reasons:
• The IS general manager was given additional duties shortly after the start of phase 2 and did not have the time to champion the change programme.
• A number of difficult staffing decisions were not made, in particular the decision not to replace the help desk supervisor, who had expressed a desire to change her role. She left the company early in 1999.
• There were culture clashes between the consultancy organisation and the company.
• The IT staff were concerned that the change programme was leading towards services being outsourced, and they resisted this.
• The change programme was not supported by all of the management team, so staff got different messages depending on whom they worked for.
• Perhaps the help desk was doing a good job as far as the users were concerned, so there was no real drive to change the service.
• The time scale was too ambitious, and the consultants were making decisions based on experience rather than on requirements and observation.
However, a more radical alternative is, rather than considering the pragmatic issues raised by the case directly, to look at the problem context afresh from an alternative perspective. The nature of the case gives clear guidance here, pointing to power, or coercion, as a key factor in the lack of success of the user-focused initiatives. In seeking a way forward, the company might consider the following questions:
1. To what extent have the company's problems been caused by an over-emphasis on technological issues?
2. Given the inadequacy of the current meetings structure to address the human side of the implementation, what could be done to improve this?
3. How has power distorted this implementation, and how might this problem be overcome?
Further questions, to be considered by the consultants and/or students of this subject, might be: What is the value of theoretical support in determining a way out of the company's problems? How might such theories be applied? The following sections pursue this theme and offer a potential way forward.
IT Help Desk Implementation
255
CRITIQUE AND A POSSIBLE APPROACH
One analysis of this case would be that the problems were initially not adequately addressed by technology-based or 'hard' analysis, so a soft approach was taken, which had limited success because coercive or power-based issues got in the way. Power in organisational intervention is a difficult issue to address, but some approaches do exist, and the rest of this chapter aims briefly to introduce these to readers who may be unfamiliar with them. The approaches are drawn from critical systems theory, and the techniques which seem most appropriate to this case fall under the general heading of 'critical analysis of the system of concern'. Such an approach is premised on the idea that, to be fully informed, a user system must be critically appraised through the views of all those involved in and affected by it. In this case, this leads to two methods:
1. Critical boundary judgments.
2. Surfacing coercive elements.
Critical Boundary Judgments
The system boundary is an issue to be settled before the intervention can proceed further. Whilst the problem of system boundaries has exercised the minds of both academics and practitioners for many years (for a summary of early work, see Jones, 1982), it is from Ulrich (1983, 1988, 1996) and Midgley (1992) that the recommendation to critically challenge what should or should not be considered part of any system is drawn. Midgley's approach is to begin with a boundary definition which is accepted as arbitrary, and to progress by "… looking for grey areas in which marginal elements lie that are neither fully included in, nor excluded from, the system definition." The critical choices made at the boundary are of truth and rightness: truth being represented by questions of what is, and rightness by questions of what ought to be. The starting point for the critique of boundary judgements within this intervention is represented by the critique of the system boundary below.
[Figure 1: System Boundary Critique – Critique of the System Boundary (Clarke and Lehaney, 2000). The figure shows a primary boundary drawn around 'The IT System' – the help desk, help desk management and IT specialists – within the IT department; a secondary boundary taking in the help desk analysts; and, in the wider system beyond, the user issues: people as part of the system, other stakeholders, and the accommodation of views.]
Critical assessment of the system boundary would be undertaken by a representative sample of participants in the system, with the approach working as detailed below and expected to yield the results outlined. An arbitrary system definition should be presented, as, for example, in the 'critique of the system boundary'. The primary boundary represents the main area of concern, whilst the secondary boundary encompasses that which is seen to be marginal to that area. Beyond this, all other issues are represented by the 'wider system'.
Table 6: Critically Heuristic Boundary Questions (after Ulrich, 1983)
1. "Is": Who is the client? Whose purposes are served by the system? Users.
   "Ought": Who ought to be the client? Users.
2. "Is": What is the purpose? Timely satisfaction of user requests.
   "Ought": What ought to be the purpose? Users happy with the level of service.
3. "Is": What is the measure of success? Unclear, but premised on technological measures.
   "Ought": What ought to be the measure? "User satisfaction."
4. "Is": Who is the decision taker? Senior managers representing each department.
   "Ought": Who ought to be the decision taker? Managers, but informed by feedback from all participants in the problem context – all involved in and affected by the system.
5. "Is": What conditions are actually controlled by the decision taker? Choice of solution.
   "Ought": What components of the system ought to be controlled by the decision taker? "Controlled" is not helpful: the decision taker should determine the approaches required to satisfy the changing needs of the participant groups.
6. "Is": What conditions are not controlled by the decision taker? None.
   "Ought": What resources and conditions ought to be part of the system's environment? Everything within the system as defined by critical boundary decisions.
7. "Is": Who is the system's designer? Unclear.
   "Ought": Who ought to be the system's designer? A complex combination of technical expertise and 'user' knowledge.
8. "Is": Who is involved as an expert, what is the nature of the expertise, and what role does the expert play? IT and consultants.
   "Ought": What kind of expertise ought to be involved, who should exercise it, and what should his/her role be? Expertise is lodged with all who participate in the system; if this is not understood, the system will fail.
9. "Is": Where is the guarantee of success – with experts, political support, etc.? With experts.
   "Ought": Where ought the guarantee of success to lie? With participants.
10. "Is": Who represents the concerns of the affected (but not involved)? Users, but poorly represented.
    "Ought": Who ought to represent these concerns? Who among the affected ought to become involved? All those involved and, with care, representatives of those affected.
11. "Is": Are the affected given the opportunity to emancipate themselves? No.
    "Ought": To what extent ought the affected to be given such an opportunity? Power is clearly getting in the way of this project; it needs to be addressed.
12. "Is": What world view underlies the system of concern? Command and control systems.
    "Ought": On what world view ought the design of the system to be based? The system as a set of communicated understandings between participants.
What immediately emerges from this
analysis is the way in which the original conceptualisation of the system, as shown within the primary boundary, is open to challenge. Once consideration is given to including the help desk analysts (initially excluded) and other users and affected parties in the wider community, the reasons for the failure of the original approach start to become clear.
Surfacing Coercive Elements
Once the system of concern is clarified, a brainstorming session (de Bono, 1977) might be set up, attended by representatives of all the key participant areas. The purpose of the session would be to enable participants in the system (those 'involved and affected') to conduct the critique on their own behalf. The system should be critiqued within the brainstorming session through a combination of Midgley's and Ulrich's approaches to boundary critique:
a) Midgley's (1992) approach of examining the margin for elements which support either the secondary boundary or the primary boundary.
b) Ulrich's (1996) approach of challenging system boundaries through 12 "critically heuristic boundary questions" which address issues of motivation, power, knowledge and legitimisation.
Our experience is that the approach outlined above, particularly the use of Ulrich's critical boundary questions, both surfaces the necessary issues and gives some guidance as to how to proceed. This reconceptualisation of the system is an important part of this kind of intervention: through it, the system can be perceived not as a clearly defined technical or organisational problem to which a solution is to be found, but as a complex interaction of all the issues involved in help desk operations and management. This has the effect of shifting the focus from technology or organisational functions to the views and ideals of the stakeholder groups involved in the system. The task becomes not one of engineering a solution to a known and agreed problem, but of studying and improving a problem situation made up of complex interacting issues. People are not only part of the system; they are the primary focus of study. To complete this analysis, suggestions of the likely outputs from this exercise are given in Table 6. What emerges may be summarised as follows:
1. The system of concern is currently seen as a set of rules to be implemented, but if this were truly a relevant approach, we would be able to identify facts, or basic truths, to be set as goals. Unfortunately, this in no way describes the system, which is actually determined by the norms of the organisation and its participants. The only way to surface norms is by asking normative questions – that is, ought questions rather than is questions.
2. Once this is understood, a scan of the "ought" answers yields some interesting results. The management task moves from one of control to one of facilitating others in the performance of their jobs: the 'right' of managers to make decisions is not challenged by this approach, but the basis on which those decisions are made is!
3. The 'power' or 'coercion' which is currently causing so many problems is seen for what it is – a distortion of the problem context. As long as it is allowed to pervade all the issues, no solution will ever be found.
CONCLUSIONS
This kind of approach offers many challenges to managers, but arguably all concern the 'true' management task, which consists not of using authority to force decisions, but of facilitating people to contribute all they possibly can to the success of the enterprise. In practice, we have come across many objections to these views, but none that could not be resolved by good managers working in a learning environment whilst increasing their own understanding of the nature of the task they face. But the real test lies with you, the readers. We argue that this reconceptualised approach works – why not try it!
FURTHER READING
Material on the 'hard-soft' debate may be found in the first instance in:
Key text: Walsham, G. (1993). Interpreting Information Systems in Organisations. Chichester, John Wiley and Sons Ltd.
Other texts/papers:
Checkland, P. B. (1978). "The Origins and Nature of Hard Systems Thinking." Journal of Applied Systems Analysis 5(2): 99-110.
Checkland, P. B. (1994). Systems Thinking, Systems Practice. Chichester, Wiley.
Jackson, M. C. (1982). "The Nature of Soft Systems Thinking: The Work of Churchman, Ackoff and Checkland." Applied Systems Analysis 9: 17-28.
Lewis, P. (1994). Information Systems Development. London, Pitman.
For those wishing to delve more deeply into critical approaches, we would suggest:
Key text: Jackson, M. C. (2000). Systems Approaches to Management. New York, Kluwer/Plenum.
Other texts/papers:
Ackoff, R. L. (1981). Creating the Corporate Future. New York, Wiley.
Clarke, S. A. and B. Lehaney, Eds. (2000). Human Centred Methods in Information Systems: Current Research and Practice. Hershey, PA, Idea Group Publishing.
Clarke, S. A. and B. Lehaney (2000). Human-Centred Methods in Information Systems Development: Boundary Setting and Methodological Choice. Challenges of Information Technology Management in the 21st Century, Anchorage, Alaska, U.S.A., Idea Group Publishing: 605-608.
Clarke, S. A. (2000). From Socio-Technical to Critical Complementarist: A New Direction for Information Systems Development. The New SocioTech: Graffiti on the Long Wall. E. Coakes, R. Lloyd-Jones and D. Willis. London, Springer: 61-72.
Flood, R. L. and M. C. Jackson (1991). Creative Problem Solving: Total Systems Intervention. Chichester, Wiley.
Hirschheim, R. and H. K. Klein (1989). "Four Paradigms of Information Systems Development." Communications of the ACM 32(10): 1199-1216.
Midgley, G. (1995). What is This Thing Called Critical Systems Thinking. Critical Issues in Systems Theory and Practice, Hull, U.K., Plenum: 61-71.
Mingers, J. (2000). "Combining IS Research Methods: Towards a Pluralist Methodology." Information Systems Research.
REFERENCES
Beath, C. M. and W. J. Orlikowski (1994). "The Contradictory Structure of Systems Development Methodologies: Deconstructing the IS-User Relationship in Information Engineering." Information Systems Research 5(4): 350-377.
Boehm, B. W. (1989). A Spiral Model of Software Development and Enhancement. Software Risk Management. Washington D.C., IEEE Computer Society Press: 26-37.
Clarke, S. A. (2001). Information Systems Strategic Management: An Integrated Approach. London, Routledge.
Clarke, S. A. and B. Lehaney (2000). Human-Centred Methods in Information Systems Development: Boundary Setting and Methodological Choice. Challenges of Information Technology Management in the 21st Century, Anchorage, Alaska, U.S.A., Idea Group Publishing: 605-608.
de Bono, E. (1977). Lateral Thinking. Aylesbury, U.K., Pelican Books, Hazell Watson & Viney Ltd.
Jones, L. M. (1982). "Defining Systems Boundaries in Practice: Some Proposals and Guidelines." Journal of Applied Systems Analysis 9: 41-55.
Lyytinen, K. and R. Hirschheim (1987). Information Systems Failures: A Survey and Classification of the Empirical Literature. Oxford Surveys in Information Technology. Oxford, Oxford University Press. 4: 257-309.
Midgley, G. (1992). "The Sacred and Profane in Critical Systems Thinking." Systems Practice 5(1): 5-16.
Mumford, E. (1985). "Defining System Requirements to meet Business Needs: A Case Study Example." The Computer Journal 28(2): 97-104.
Reynolds, G. W. (1995). Information Systems for Managers. St. Paul, MN, West.
Stowell, F. A. (1991). Client Participation in Information Systems Design. Systems Thinking in Europe (Conference Proceedings), Huddersfield, Plenum.
Stowell, F. A. and D. West (1994). "'Soft' systems thinking and information systems: a framework for client-led design." Information Systems Journal 4(2): 117-127.
Ulrich, W. (1983). Critical Heuristics of Social Planning: A New Approach to Practical Philosophy. Berne, Haupt.
Ulrich, W. (1983). The Itinerary of a Critical Approach. Berne, Haupt.
Ulrich, W. (1988). "Systems Thinking, Systems Practice, and Practical Philosophy: A Program of Research." Systems Practice 1(2): 137-163.
Ulrich, W. (1996). A Primer to Critical Systems Heuristics for Action Researchers. Forum One: Action Research and Critical Systems Thinking, Hull, UK, University of Hull, Centre for Systems Studies.
BIOGRAPHICAL SKETCHES
Steve Clarke received a BSc in Economics from The University of Kingston Upon Hull, an MBA from the Putteridge Bury Management Centre, The University of Luton, and a PhD in human-centred approaches to information systems development from Brunel University – all in the United Kingdom. He is Reader in Systems and Information Management at the University of Luton. His research interests include social theory and information systems practice, strategic planning for information systems, and the impact of user involvement in information systems development. His major current research is focused on approaches to information systems and strategy informed by critical social theory. Arthur Greaves is a Computer Auditor at the London Borough of Hillingdon. Previously he was an IT Consultant working on a number of assignments involving project management and service management. He has been working on computer systems design for more than 25 years, work reinforced by management training (an MA in Management) and management accounting (Chartered Institute of Management Accountants).
260 Paper & Tingey
Application of Tree-Based Solutions: A Case Study With INEEL David Paper and Kenneth B. Tingey Utah State University, USA
EXECUTIVE SUMMARY
This case is a study of the application of tree-based solutions to challenges at the Idaho National Engineering and Environmental Laboratory (INEEL) in developing a computerized system to meet complex yet exacting compliance requirements extending to thousands of employees in a large-scale organization. We recount the history of the project and include information on the theoretical structure of the tree-based solution used. Using primary research documentation, we take a constructivist approach to the issue of subject matter expert empowerment, a major theme of the case. Of particular interest is how the engineer in question was able to modify his work paradigm to incorporate a new role as digital content designer and overseer of the project. Additionally, the study concentrates on the overall effects of the project on other systems and working environments at the INEEL. Implications of management- and subject-matter-expert-directed system design projects using tree-based tools are considered with respect to all aspects of enterprise systems development.
BACKGROUND
The INEEL is located in the northeastern portion of the Snake River Plain in southeastern Idaho, near the foothills of the Little Lost, Lemhi, and Bitterroot Mountains. The site has a long history of federal management for military and nuclear energy purposes. During World War II, it supported the United States Naval Ordnance Plant in Pocatello, Idaho, serving as a practice site for naval gunnery up to the period of the Vietnam War. In 1949, the federal government named the site the National Reactor Testing Station, a site where prototype nuclear reactors could be designed, built and tested. Since that time, 52 "first of a kind" reactors have been built and tested at the INEEL. Projects have been conducted for the Department of Energy, the Atomic Energy Commission and the Department of Defense – particularly the U.S. Navy and the U.S. Army. The site has operated under three names. The original name, National Reactor Testing Station, was changed in 1974 to the Idaho National Engineering Laboratory. In 1997, the name was further modified to the Idaho National Engineering and Environmental Laboratory. At present, the mission of the laboratory is to lead in systems engineering and in the development and application of environmental technologies. Copyright © 2002, Idea Group Publishing.
Application of Tree-Based Solutions: A Case Study With INEEL 261
The INEEL functions under the auspices of the Department of Energy's Idaho Operations Office. Overall, the facility encompasses more than 1,000 buildings of various kinds covering 890 square miles of territory in southeastern Idaho, an area approximately 85% of the size of the state of Rhode Island. The facility encompasses a combined total of 177 miles of paved roads and public highways, 56.5 miles of electrical transmission lines and 14 miles of railroad lines linking ten major facility clusters. With administrative facilities in Idaho Falls, the INEEL employs approximately 6,000 people with varied backgrounds and professional interests, as is understandable in an institution with such a comprehensive and varied mission. The annual budget of the facility is approximately $700 million. Currently, three companies – Bechtel B&W Idaho, Argonne National Laboratory-West and Bechtel Bettis – are under contract to perform research, waste processing, and support functions for the Department of Energy at the INEEL. In its approximately 50 years of existence, the site had been managed by a large number of independent commercial contract managers. In 1994, in an effort to consolidate management of the site, the Department of Energy contracted for the first time for management of the INEEL with a single private organization – Lockheed Martin Corporation. Prior to that time, segments of the site had been under simultaneous management by five private contractors. Through consolidation of the operating contracts into one manager relationship, it was hoped that economies of scale could be gained. In addition, the Department of Energy, the Department of Defense, and the Defense Nuclear Facilities Safety Board (DNFSB) hoped to improve the safety profile of the site, its employees and the environment. The DNFSB is not a regulatory agency and is not a function of the DoD.
Under its enabling statute (Public Law 100-456), the Board is responsible for independent, external oversight of all activities in DOE's nuclear weapons complex affecting nuclear health and safety. The Board reviews operations, practices and occurrences at DOE's defense nuclear facilities and makes recommendations to the Secretary of Energy that are necessary to protect public health and safety. In the event that Board reviews disclose an imminent or severe threat to public health and safety, the Board is required to transmit its recommendations directly to the President, as well as to the Secretaries of Energy and Defense. More information about the DNFSB is available at http://www.dnfsb.gov/ (S. A. Hawke, personal communication, October 25, 1999). The INEEL believed that it could accomplish these safety improvements through consolidated safety, health and environmental compliance procedures, using the considerable combined expertise of the scientists, engineers, operations management, craft employees, managers and consultants affiliated with the INEEL. In July of 1997, information technology professionals, employees of energy contractor Lockheed Martin at the Idaho National Engineering and Environmental Laboratory, were faced with a daunting challenge. The Defense Nuclear Facilities Safety Board, an independent oversight board with congressional reporting responsibility, mandated that Lockheed Martin implement a networked, systems-based work order management solution. The mandate stipulated that the system provide definitive assurance that all applicable regulatory requirements were met by the over 6,000 employees of the nuclear waste storage and management facility.
Such compliance requirements included federal environmental and energy laws, safety and health requirements, specific nuclear provisions of the Department of Energy and the Department of Defense, other federal regulations, applicable state and local requirements, and other standards of performance of the Department of Energy and the INEEL itself. Over a period of several months, content professionals, scientists, engineers, and construction and maintenance specialists were able to document approved methods of meeting the site-wide requirements of the DNFSB, but there was serious concern over the time and manpower required to convert these requirements to machine-usable form. With only a couple of months left before the DNFSB deadline, IT managers were given 18-month estimates for completion of the system using conventional tools and methods. Even within this time frame, there was little enthusiasm for the project
by developers, due to its complexity and criticality. Given the ongoing need for content maintenance, IT management lacked personnel who could devote themselves to the program on an ongoing basis. Indeed, the multidisciplinary nature of the project underscored the challenge of maintaining system relevance through the efforts of individuals who did not have scientific, engineering, craft or operational backgrounds. IT management at the INEEL turned to a tree-based solution to meet the DNFSB deadline. This was a departure from the traditional method of systems development, in which process logic is managed by means of nested if-then-else statements or rules-based artificial intelligence tools – both of which require extensive training and experience in computer programming and an in-depth understanding of complex logic models and languages. Breakthrough performance was hoped for, based on the idea that content managers themselves could design, implement and later maintain the required content and functionality. Declaring default to the DNFSB, or organizing a new design unit of programmers, scientists and maintenance engineers, were not compelling options. If a solution were to be found, it would need to prove itself within a short period of time. Making use of relational database tools and a Java-based tree manager tool developed by an independent software developer, a Lockheed Martin engineer – the supervisor of the work order project, with no programming expertise – was charged by INEEL management with the initial design and implementation of the work order management system. This individual was tasked to design a tree or set of trees using the original requirements document, and then to cut and paste the content of the document into the resulting hierarchical model. The project succeeded admirably. The engineer in question was able to readily comprehend the tree-based model.
With some assistance from technicians, he was able to lay out the basic structure of the application within a few days. By the end of the second week, he had incorporated a complete set of complex logical elements – essentially the logic that determines whether a given project requires review and authorization by individuals or committees within the INEEL structure. The resulting application was installed, users were trained, and Lockheed Martin was able to meet the DNFSB deadline. In the meantime, the project has been enlarged on several occasions, and the engineer in question maintains the content of the system (using the tree-based software) on an ongoing basis to assure its validity and relevance.
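The contrast between nested if-then-else code and a tree-based model can be sketched as follows. This is not the INEEL system or the vendor's actual tool; the node labels, predicates and requirements are invented for illustration. The point is that the compliance logic lives in a data structure that a subject matter expert can edit by adding, moving or pasting nodes, while a single generic walk of the tree replaces hand-written conditional code.

```python
# Illustrative sketch of a tree-based requirements model. Each node holds
# a predicate over a work order plus the requirements that apply when the
# predicate matches; a walk of the tree collects everything applicable and
# whether any matching node demands committee review. Content experts
# maintain the tree itself, not conditional program logic.

class Node:
    def __init__(self, label, applies, requirements=(), review_needed=False):
        self.label = label
        self.applies = applies            # predicate: work_order dict -> bool
        self.requirements = list(requirements)
        self.review_needed = review_needed
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

def evaluate(node, work_order, found=None):
    """Collect applicable requirements and whether review is required."""
    if found is None:
        found = {"requirements": [], "review": False}
    if node.applies(work_order):
        found["requirements"].extend(node.requirements)
        found["review"] = found["review"] or node.review_needed
        for child in node.children:       # only descend where the parent matched
            evaluate(child, work_order, found)
    return found

# A tiny hypothetical tree: all work orders carry general safety rules;
# radiological work adds DOE provisions and triggers committee review.
root = Node("all work orders", lambda wo: True,
            requirements=["general safety plan"])
rad = root.add(Node("radiological work", lambda wo: wo.get("radiological"),
                    requirements=["DOE radiological control"],
                    review_needed=True))
rad.add(Node("high dose", lambda wo: wo.get("dose_rem", 0) > 1,
             requirements=["ALARA review"], review_needed=True))

result = evaluate(root, {"radiological": True, "dose_rem": 2})
print(result["review"], result["requirements"])
```

Adding a new regulatory provision under this scheme means pasting a node into the tree, much as the engineer in the case pasted requirements-document content into the hierarchical model, rather than threading another branch through nested conditionals.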
SETTING THE STAGE In the 1990s, the long-term mission of the INEEL was changed to a significant degree. The organization’s purpose was expanded to incorporate environmental compliance as a part of its mandate in addition to its engineering and nuclear energy research and development roles. Though arguably viable as a means of sustaining the defense of the country and as a means of generating electrical energy, nuclear power was controversial and problematic at that time. It was fraught with considerable negative societal and political baggage due to real and perceived risks from peaceful as well as military applications of nuclear energy. Ongoing economic benefits to Idaho and the inland northwest region of the United States as a result of the INEEL’s existence were judged to not reliably depend on nuclear energy alone. In the early 1990s as a result of initiatives at the INEEL, the Department of Energy and elsewhere, the INEEL’s mandate was extended to environmental management. To remain on the leading edge of research and implementation of the fruits of science, the INEEL was positioned to apply its resources to the related challenge of environmental science and compliance. Environmental compliance was also of importance with respect to movements within the compliance and governance community in general. The DNFSB in particular–largely due to public concerns about the safety and dependability of nuclear power generation and fuel management–set integrated safety management as a principal objective in all of the sites over which it had jurisdiction. This degree of focus was a direct response to concerns on the part of DNFSB officials with respect to its responsibility to the nation.
The INEEL has been a leading researcher, developer and consumer of IT products and services since its inception–which roughly corresponds with the history of institutional computing in general. As a leading researcher in many fields, including applied engineering, biotechnology, chemical separations and processing, earth science and environmental engineering, information science, intelligent automation and remote control, materials and structural integrity, modeling of physical systems, nuclear science, radiochemistry, sensing and diagnostics, and systems engineering, the INEEL has the personnel, the technology and the capacity to deal with networked, computerized issues at all levels. The scale of operations of the site brings significant administrative burden that has brought managers over the decades to avail themselves of emerging information processing technologies as they have become available. The INEEL has collaborated with Stanford University, the Department of Defense and other leading organizations in the creation of software development tools and programming languages that support many mission-critical functions of the U.S. government and the Department of Defense. Thus, the INEEL was positioned by the late 1990s as a leading technology-based organization with particularly strong levels of expertise in information technologies and systems development–capable of comprehensive evaluation of available technologies for system acquisition and development.
CASE DESCRIPTION
In 1997, concurrent with the addition of "Environmental" to the laboratory's name, the DNFSB issued a specific mandate that the INEEL develop and demonstrate a systems-based safety management system as part of the site's basic operating environment. The system was required to provide definitive treatment of each of approximately 15,000 annual work orders, considering all regulatory and compliance issues as they applied to those work orders. The goal of the mandated systems-based solution was to facilitate consistent interpretation of laws, policies, procedures, etc. A major objective was to eliminate the fear of noncompliant activities. The mandate included the need to implement a graded approach that would ensure cost-effectiveness in addition to compliance. Concerns revolved around not only the complexity of the task, but also the need to adapt to varied processes and a fragmented knowledge base of the issues and workable solutions at the INEEL's 6,000-employee site. Among other things, any solution would have to be easily integrated with other existing and planned systems, and it would need to be easy to use. In addition, it should require limited training while maintaining the comprehensive capabilities of the tool and the ability to serve a large segment of the working population of the site. The original DNFSB mandate stipulated, as a requirement of compliance, that the system be operable by the end of the government fiscal year (the end of September 1997). Although requirements had been established in the form of a 60-page document that had undergone extensive negotiation among INEEL subject matter experts, experts in the craft skills involved, and federal regulators, there was a decided lack of progress in converting those requirements to digital form. Estimates for completion of the project using conventional tools were in the 18-month range. Furthermore, a number of internal and external issues impacted project completion.
These included the complexity and scope of the task, and the need to build in the capacity to modify the system on an ongoing basis due to changes in regulatory requirements, operational imperatives, and new discoveries. As such, confidence was very low that the project could be adequately completed in an 18-month period. IT management at the INEEL considered many options. In addition to standard tools and technologies, systems and solutions from various vendors were reviewed, including database products, expert system tools, Excess, ARC, etc. Personnel at the INEEL, under the supervision of Wayne Simpson, also looked at various Web-based tools, traditional object-oriented tools, and other programming and design environments. None of these options appeared to offer a solution within the time frame allowed. A new option, using a tree-based approach, was then discovered that promised a very quick solution.
264 Paper & Tingey
The tree-based structure is radically different in that it gives non-technical people (end-users) the ability to map their business processes (logic) and store them in a database. Processes are stored in the database with special keys that uniquely identify each end-user's process paths. Thus, an end-user can use trees to quickly access and store the information needed to support his or her unique business logic and rules. In contrast, if-then-else and rule-based logic requires programmers to build applications in support of end-users' logic requirements. Without trees, end-users are unable to use the system to build their own logic. In addition, such applications require substantial investments in time and resources to support the system development life cycle. The strategic advantage of trees is that decision makers can get exactly the information they need to support their business logic. When programmers are involved, the time to get needed information is drastically increased. Moreover, programmers tend to know little about the business logic of the organization, which further impedes the flow of useful information to the decision maker. Although interoperability is never an easy task to achieve, the tree-based architecture is a complementary technology. It uses the Lightweight Directory Access Protocol (LDAP), the Extensible Markup Language (XML) and process manager technology to enable end-users to build and maintain their specific business logic with trees. LDAP and XML are already integrated with Oracle and other database software. For instance, Oracle promotes the ability of its 8i and 9i software to work seamlessly with major directory services such as Novell Directory Services (NDS) and with XML. IT planning is a different story. The tree-based architecture puts process planning into the hands of business managers and subject matter experts. Thus, systems-based process implementation becomes an integral part of strategic planning.
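The core idea described above–process logic stored as keyed database rows rather than compiled program code–can be sketched as follows. This is a minimal illustration in Python with SQLite; the table layout, column names, and prompts are our own assumptions for demonstration, not Baton's actual schema.

```python
import sqlite3

# Decision-tree nodes stored as database rows, each keyed to its parent,
# so an end-user's process path can be reconstructed from data alone.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE tree_nodes (
        node_id   INTEGER PRIMARY KEY,
        parent_id INTEGER,          -- NULL for the root node
        prompt    TEXT NOT NULL     -- question or instruction shown to the user
    )""")
conn.executemany(
    "INSERT INTO tree_nodes VALUES (?, ?, ?)",
    [
        (1, None, "Does the work order involve hazardous materials?"),
        (2, 1,    "Yes: attach EPA handling requirements."),
        (3, 1,    "No: apply the standard OSHA checklist."),
        (4, 2,    "Is a respirator required?"),
    ],
)

def path_to_root(node_id):
    """Walk the parent keys upward, returning the chain of prompts root-first."""
    chain = []
    while node_id is not None:
        parent_id, prompt = conn.execute(
            "SELECT parent_id, prompt FROM tree_nodes WHERE node_id = ?",
            (node_id,),
        ).fetchone()
        chain.append(prompt)
        node_id = parent_id
    return list(reversed(chain))

print(path_to_root(4))
```

Because the logic lives entirely in table rows, changing a regulatory rule means updating a record rather than recompiling an application–which is the flexibility the chapter attributes to the tree-based approach.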
Since managers can use the tools to design their own logic, they can make better use of technology and plan accordingly. In other words, trees put technology implementation in the hands of decision makers, allowing for more effective deployment of strategies and tactics, while allowing technical managers and workers to concentrate on technical tasks. Tree-based solutions to similar kinds of problems had been developed and commercialized to a limited degree by a local developer. The INEEL turned to this developer, who was in the process of designing a tree-based tool using Sun Microsystems' Java programming language and relational database technology. Although still in its beta development stages, the product was functional at that time to the extent that its features could be demonstrated to INEEL personnel. The demonstration was conducted in August of 1997, only weeks before the DNFSB deadline. With encouragement from INEEL personnel, the developer agreed to provide the product, along with training on an accelerated schedule, to the key engineer on the project. The engineer had considerable experience in the area of compliance management and had been a principal in the negotiations over requirements, but he had no background in computer programming. The scope of the project was to provide the software to support a tree-based solution to the problem posed by the DNFSB. In addition, training and assistance were to be given to the engineer so that he could be principally involved in the design and implementation of the negotiated solution that existed in document form. The engineer in question, Scott A. Hawke, describes subsequent developments in a December 1999 statement: As an Industrial Engineer and a Project Manager at the INEEL, I was assigned to develop and implement [in 1997] a Web-based program to assist maintenance job planners to identify the regulatory requirements for OSHA, EPA, etc. to incorporate into work orders. It was called the Job Requirements Checklist (JRC).
This project design was started in June 1997, and software development using Baton was started August 2, 1997. We were required to rollout the new process by October 1, 1997. I am not a software-programming engineer. I went to [the independent developer's] offices . . . for two days, received about 30 minutes informal instruction and started work. We also had one of our INEEL programmers at [the company] who assisted me to understand the lingo and translate my desire into programming possibilities. I still had to program many of the logic trees. We had the product rolled out on schedule. After six weeks of use of our JRC, we identified immediate revisions to improve user friendliness. Most of these corrections were made to our in-house developed Web interfaces to [the tree-based solution]. Since that time, senior management recognized the potential for the [tree-based] JRC package and in November gave new operability requirements for an extensive improvement to the JRC, now called the Hazard Identification and Mitigation (HIM) Checklist. The automated HIM Checklist assists the user in evaluating hazards, determining the required rigor for planning work and developing the mitigation actions necessary for the hazards control set. It consists of a series of logical questions that combine the facility hazards documentation and regulatory requirements into an output report to be used by the planning team to develop the work order package. The HIM Checklist also serves to: 1) apply a standardized, tailored approach consistent with the complexity, uncertainty, and risks associated with the proposed work; and 2) ensure that the integrated safety management process for identifying and analyzing hazards and developing and implementing controls is consistently applied to similar work. The regulatory research and process design took four months; the redesign of the . . . logic trees only took about five man weeks time, performed entirely by myself with occasional consultation with INEEL programmers. Personally I have found the use of the [tree-based] process to be the easiest part of developing the HIM process. The commands are not difficult to follow and the built-in logic trees troubleshooting process saves hours if not days to resolve logic mistakes (S. A. Hawke, personal communication, October 25, 1999). As outlined by Mr. Hawke, the outcome of this approach was startlingly successful, if not miraculous.
The key to the project’s success from a design standpoint was that it allowed Mr. Hawke to make direct transfer of the material in the document that had been created by various committees in the four-month regulatory research and design stage of the project. The nature of the tree-based design tool allowed him to employ a direct manipulation approach to the design process. Direct manipulation allowed Mr. Hawke to take advantage of his expertise to literally cut-and-paste content from the design document in the process of laying out the tree-based structure of the application. Success of the project allowed the INEEL to announce system availability by September 23, 1997, in advance of the DNFSB deadline (Hawke, 1997).
Unique Features of the Tree-Based Technology
Diagram 1: EWP
Baton. Baton is a leading process manager product using a tree-based technology. It is designed based on an original model developed at Brigham Young University, with a number of
structural and functional enhancements. Written entirely in Java and built on top of standard SQL databases, Baton is a tool for defining complex processes and linking them to other networked applications and data repositories. Process manager technology constitutes a new and different kind of software product, one whose benefits would have been difficult to fully exploit prior to the development of networks, standardized relational databases and Java. Process managers allow non-technicians (and technicians, for that matter) to define and manage complex hierarchical (tree-based) logic with a minimum of training, by means of a suite of tools that facilitate setting up a collection of logic trees. The technology makes use of Java and database technology to store this logic entirely in database records, providing an ability to manage and port functionality throughout the enterprise in a very transparent and flexible manner. Baton process manager technology provides a useful link between directory service resources and the XML document and meta-data standard (with the inclusion of certain short-cycle development projects outlined herein). Already, with Version 3.x of Baton, directory service links have been developed using Novell NDS/LDAP protocols to allow creation of directory objects and importing of critical elements from such objects. This allows Baton designers to set up users, resources and basic structural elements within a network directory. This functionality is key to the effectiveness of the OpenNet Baton product. The good news and the bad news about process manager technology is that it represents a new paradigm in computing–with all of the inherent challenges, risks and rewards of such a shift. The good news is that Baton offers managers the ability to design and build applications in support of business processes without being technology experts. The bad news is that people in the organization may not be ready for this ability.
Technology people tend to control software development for everyone in the organization. They may not react kindly to this tremendous loss of power. The ideas we present are a breakthrough because managers currently have tenuous control over their systems and are not able to make full use of them to implement logic into processes. Process manager technology introduces two paradigms into the marketplace–an end user paradigm and a design paradigm.
The design paradigm. The process manager design model is revolutionary. Like most revolutions, however, the process manager design revolution will take time to gestate. Keep in mind that process manager technology allows non-programmers to design complex logic with a minimum amount of training–similar to the time and effort required to learn and use spreadsheets or word processors. While seemingly beneficial, this development stimulates resistance from the programmer community and confuses IT managers, who typically don't know how to address the phenomenon. OpenNet management believes that process manager technology–an innovation that can readily be demonstrated technically–will lead to an entirely new and better approach to using computers and networks to support the institutional mission in organizations of all kinds. Its use results in fundamentally improved relevance of computer systems, given that logical updates can be kept current to the minute and the resulting computerized business model can come into direct alignment with the business model of the organization in question. Furthermore, by storing and managing all system logic in relational database records that can be managed and transferred to any and all network servers, process manager technology calls into question the concept of hard-coded applications.
In contrast to process manager tools, traditional functionally based software applications are inflexible, poorly integrated from a logical standpoint, expensive and time-consuming to maintain, and difficult to deploy over a distributed network. Figure 1 shows the welcome screen for the Baton product. Baton is the process manager design software that was used by the INEEL to quickly develop a logical tree-based design of the project requirements. As you can see, the interface is user-friendly and allows the user to quickly navigate to the desired component. Figure 2 shows the initial composer mode screen after the user chooses 'Composer Mode' from the welcome screen. Composer mode allows the user to begin developing the
logical tree structure for the project. Figure 3 shows the code generated by the tool. Keep in mind that the Baton tool automatically generates the code; the user is only required to design the logical trees associated with the project, and Baton does the rest. Figure 4 shows an active 'slide' in the concert mode. In the concert mode, the user is able to choose the logic involved in a specific activity or set of activities. Keep in mind that the concert mode helps the user coordinate the entire tree of activities, that is, the parent(s) and the child tree(s) of the activity currently being modeled. Although the process manager phenomenon empowers knowledgeable managers, workers, consultants and others to design powerful process capabilities and deploy them across large systems, many are slow to understand the benefits of such a tool. The reason for this may be that people are slow to accept change, especially radical change. OpenNet management (the developers of the Baton product) is of the opinion that the process manager model, by virtue of its inherent superiority, will eventually serve as the common language environment for defining processes and deploying standards-based integration. In a process manager model environment, knowledge professionals themselves will manage underlying logic and will be empowered to collaborate on a conceptual level free of technical and system limitations. Such a development should result in significantly increased usage of systems by business and non-IT people, freeing technical professionals of the burden of maintaining system logic in areas outside their fields of expertise.
The end user paradigm. For the end user, process manager technology is more like watching a slide show than using a typical software application. With each step in the decision process, the user is presented with an active 'slide.' Figure 4 shows an example of this.
Options are selected and information is entered one step at a time. The user never gets more than one slide (situation) at a time to consider, and can always back up to the previous screen to double-check a prior decision or change his or her mind.
Figure 1: Welcome Screen for Baton
Figure 2: Composer Mode Option
Figure 3: Report Setup Tool
Figure 4: Concert Mode
Use of process manager technology, then, involves answering a series of questions and entering data one step at a time. Confusion is thus minimized and important decisions are made as appropriate. There is a clear tutorial aspect to using well-designed processes with the help of Baton (OpenNet's process manager tool). Baton presents issues in context and allows for branching to any and all networked resources for needed information. Happily, the end user paradigm is in many ways easier and more intuitive for computer novices than for experienced technicians. There is much less 'screen clutter' than exists in typical programmed applications, and the basic parent text/child text format is familiar and nonthreatening.
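The one-slide-at-a-time interaction described above–a single prompt per step, with the ability to back up and change a prior decision–can be sketched as a simple state machine. This is a hypothetical Python illustration; the slide names, prompts, and branches are invented for demonstration and do not reflect Baton's actual content.

```python
# Each 'slide' pairs a single prompt with the branches the user may take.
# A history stack lets the user back up to re-check or change a decision.
slides = {
    "start":   ("Is the proposed work routine maintenance?",
                {"yes": "routine", "no": "hazard"}),
    "routine": ("Apply the standard checklist. Done.", {}),
    "hazard":  ("Does the task involve a confined space?",
                {"yes": "permit", "no": "routine"}),
    "permit":  ("A confined-space entry permit is required. Done.", {}),
}

def run(answers):
    """Drive the slide show with a scripted list of answers.
    The answer 'back' pops the history stack instead of advancing."""
    history, current = [], "start"
    for answer in answers:
        _prompt, branches = slides[current]
        if answer == "back":
            if history:
                current = history.pop()
        elif answer in branches:
            history.append(current)
            current = branches[answer]
    return current

# A user answers 'no', reconsiders, backs up, and answers 'yes' instead.
print(run(["no", "back", "yes"]))  # prints "routine"
```

Because the user only ever sees one prompt, and every transition is recorded on the history stack, double-checking a prior decision is always possible–the property the chapter highlights as making the paradigm nonthreatening for novices.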
CURRENT CHALLENGES/PROBLEMS FACING THE ORGANIZATION
Based on a cash-on-cash comparison of the project with other alternatives that would not have resulted in a functional system in less than a year-and-a-half, the project saved approximately $1.35 million. The return on investment was 5.4, based on a $240,000 investment. Although the INEEL secured tremendous cost and time savings, and full compliance with the government mandate, company-wide adoption of a tree-based application development solution has yet to become a reality. Enterprise-wide adoption of a tree-based solution for systems development projects appears to have tremendous advantages with few disadvantages. Why, therefore, is adoption not forthcoming? We think that the main obstacle to wider adoption is inertia. Systems development life cycles for complex systems at the INEEL have always required extensive manpower, resources and time. As a result, the existing information systems hierarchy has been in place for many years and the information systems bureaucracy is mature. Moreover, people's careers have been built on maintaining the "status quo" in systems development. The only incentive to change came from the external environment, that is, government mandates. We believe that adoption is not only a question of technological savvy, but also one of politics. Many information systems professionals may perceive the tree-based solution as a career-ending vehicle, while others may see it as a threat to political clout. Keep in mind that 18-month life cycles give people a tremendous amount of slack resources and time to work with. The challenge is in adopting the tree-based solution on an enterprise basis without undermining employee morale. Another obstacle is the "mystique" of technology. We believe that nontechnical people view technology as nothing short of magic. They think that computers are incredibly technical machines that take input and magically produce the desired output.
They think that programmers are antisocial beings who somehow talk to computers in cryptic languages to make them work. The challenge is in demystifying technology to make it more accessible and easier to use in support of critical business processes.
FURTHER READING
Allen, D. K. (1984). Computer-aided process planning: Software tools. In C. R. Lui & T. C. Chang (Eds.), Integrated & Intelligent Manufacturing. ASME Winter Annual Meeting.
Billo, R. E. (1998). Organizing principles for the design of classification and coding software. Journal of Manufacturing Systems, 17(6).
Boer, G. (1987). Classifying and coding for accounting operations. Montvale, NJ: National Association of Accountants.
Burke, M. (1999). Rapid knowledge formation. [Online]. Washington, DC: DARPA. Available: http://dtsn.darpa.mil/ISO/index2.asp?mode=9 ["rapid knowledge formation"] [2001, January 29].
Denna, E. L., Cherrington, J. O., Andros, D. P., & Hollander, A. S. (1993). Event-driven business solutions. Homewood, IL: Business One Irwin.
Drucker, P. F. (1998). The next information revolution. [Online]. Forbes ASAP, August 24, 1998, pp. 1-3. Available: http://www.forbes.com/asap/98/0824/046.htm/ [2000, September 27].
Gabriel, R. P. (1996). Patterns of software. New York: Oxford University Press.
Johnson, T. (1997). Time to take control. Boston, MA: Butterworth-Heinemann.
Kalyanapasupathy, V., Lin, E., & Minis, I. (1997). Group technology code generation over the Internet. [Online]. College Park, MD: Department of Mechanical Engineering and Institute for Systems Research, University of Maryland. Available: http://www.isr.umd.edu/Labs/CIM/profiles/lin/docs/gt/asme97-1c.html [2000, November 29].
Nadler, D., Shaw, R. B., Walton, A. E., & associates. (1995). Discontinuous change. San Francisco: Jossey-Bass, Inc.
Petzold, C. (1999). Code. Redmond, WA: Microsoft Press.
Ross, R. G. (1994). The business rule book: Classifying, defining, and modeling rules. Boston: Database Research Group, Inc.
REFERENCES
Baton product manual: Basic design objectives. [Online]. OpenNet Corporation. Available: http://opennet.net/Baton/BatonHelp/index.html [2001, January 26].
Hawke, S. A. (1997, September 23). New checklist beefs up maintenance planning activities. [Online]. Idaho Falls, ID: Lockheed Martin Idaho Technologies Company. Available: http://www.inel.gov/resources/newsletters/star/1997/09-23-97/19970923star2.html [2001, January 26].
Integrated safety management system. [Online]. Idaho National Engineering and Environmental Laboratory. Available: http://ism.inel.gov/ism.asp [2001, January 26].
Paper, D. (1998). BPR: Creating the conditions for success. Long Range Planning, 31(3), 426-435.
Paper, D. (1999). The enterprise transformation paradigm: The case of Honeywell's Industrial Automation and Control unit. Journal of Information Technology Cases and Applications, 1(1), 4-23.
Paper, D., & Dickinson, S. (1997). A comprehensive process improvement methodology: Experiences at Caterpillar's Mossville Engine Center. In M. Khosrowpour & J. Liebowitz (Eds.), Cases on Information Technology Management in Modern Organizations (Chapter 9). Hershey, PA: Idea Group Publishing.
BIOGRAPHICAL SKETCHES
David Paper is an associate professor at Utah State University in the Business Information Systems and Education Department. His academic credentials include a Bachelor of Arts in Computer Science from Southern Illinois University, a Master of Business Administration from Arizona State University, and a Ph.D. in Business from Southern Illinois University. He has several refereed publications appearing in journals such as Communications of the AIS, Journal of Information Technology Cases and Applications, Journal of Computer Information Systems, Long Range Planning, Creativity and Innovation, Accounting Management and Information Technologies, Business Process Management Journal and many others. He has also spent time in industry and consulting with Texas Instruments, DLS, Inc., the Phoenix Small Business Administration, the Utah State University Research Foundation, and the Utah Department of Transportation. His teaching and research interests include database management, e-commerce, business process reengineering, organizational transformation, and change management.
Kenneth B. Tingey is a doctoral student at Utah State University in the Business Information Systems and Education Department. He has over twenty-five years' experience in industry, working as a venture capital fund founder and general partner, entrepreneur, general and line manager, and executive staff assistant. He is founder, Chairman, and CEO of OpenNet Corporation, an enterprise software developer. His academic credentials include a Master's Degree in Pacific International Affairs from the University of California, San Diego, a Master of Business Administration from
Brigham Young University, a Bachelor of Arts in Music Education from Utah State University, and a Baccalaureate Major in Accounting from Brigham Young University. His professional affiliations include the Strategic Information Division of Ziff-Davis Publishing Company, the Ventana Growth Fund, and Sunrider International. In addition, he has conducted many business consulting and systems development projects on contract with direct selling companies, software development companies, and government contractors. Mr. Tingey has engaged in many enterprise-level systems development projects with special emphasis on supporting the mission of institutions by means of information processing models and information technology tools. Mr. Tingey is the author of Dual Control, a book on the need to support top-down policies and horizontal processes in a unified system environment.
272 Mann
Recognizing Runaway IS Projects When They Occur: The Bank Consortium Case Joan Ellen Cheney Mann Old Dominion University, USA
EXECUTIVE SUMMARY
Runaway projects have been a problem in information systems (IS) for quite some time. In 1988, KPMG found that 35% of their largest clients currently had a runaway project; by 1991 the percentage had increased to 60%, and over 50% of the respondents considered this to be normal (Rothfeder, 1988; Cringely, 1994). The traditional definition of a runaway is any project that grossly exceeds budget and time targets yet has failed to produce an acceptable deliverable. Given that each runaway project is a dysfunctional use of organizational resources, it is important for practitioners to be able to identify them early and react appropriately. At the same time, this case will help practitioners realize that the issues within runaway projects are complex and difficult. The case could be used in MIS courses for non-IS majors, in systems analysis and project management classes for IS majors, or in EDP auditing courses in accounting.
BACKGROUND
The Savings and Loan Environment
The case begins at the time of the Savings and Loan Crisis in the mid-1980s. During the crisis, Savings and Loan institutions were locked into fixed-period, low-interest mortgages at a time when interest rates were skyrocketing. Moreover, high interest rates had dampened the demand for real estate, making it difficult to find new borrowers. Savings and Loan managers were unable to move into new types of products because the industry was so highly regulated. Commercial banks were more competitive because they were allowed to be more innovative. There was only one way the Savings and Loans could survive; they had to streamline operations. A news item of the time summarizes the plight of Savings and Loan institutions: "Deregulation of liabilities before assets has caused a crisis of declining net worth in the savings and loan industry. S&Ls have 31% of their savings in low cost accounts, and removal of the interest rate differential on money market certificates has resulted in an increase of the
commercial bank share of this deposit from 30% to 48%. On the other hand, 58% of S&Ls' mortgages are at rates of 10% or less and slow loan growth prevents the rapid addition of new, high rate mortgages to S&L portfolios" (Jacobe et al., 1982, p. 44). Federal policy during the crisis was focused on encouraging Savings and Loan organizations to merge their assets in order to improve their ability to compete with the much more entrepreneurial commercial banks. "Under the FSLIC policy of promoting the merger of S&Ls with a net worth to assets ratio below 2%, 320 associations were merged in 1981; even with declining interest rates, 1,000 will merge in 1982, raising the policy question of whether it is desirable that community-based specialized housing lenders be absorbed by the commercial banking industry" (Jacobe et al., 1982, p. 44).
The Narrator and Alpha Thrift
The narrator of this case is an information systems auditor of one Savings and Loan organization. Soon after the Auditor joined this bank, it merged with what was thought to be a financially stronger Savings and Loan (hereafter known as Alpha Thrift). Here is the auditor's initial impression of the institution when he arrived in 1982, before the merger: I became the first EDP auditor for a major S&L and received a significant pay increase and many congratulations. The S&L, a very old institution, was thought to be the bedrock of the city's financial institutions and management was highly regarded, both by employees and the community. On the first day of employment came the news that we were losing money. Nevertheless, the EDP operation was similar to that of a previous employer and had its strong points. Everything looked promising. Then came the word that the board had opted for a merger with a similar S&L. The main attraction was that this one was making money. Not only was Alpha Thrift making money, it also had an innovative idea for helping other institutions succeed without losing their independence: "Another attraction was its involvement with four other S&L's in creating a large S&L processing center in a distant major urban area. The resulting system was to be the cutting edge of S&L EDP technology." This case describes the activities of the consortium and its participating Savings and Loans, but the emphasis is on the viewpoint of Alpha Thrift as told from the perspective of the Auditor from the Savings and Loan that merged with Alpha Thrift.
SETTING THE STAGE
The Creation of a Data Center (1978)
The history of the project went back several years to the 'golden age' of Savings and Loans, when the management of eight thrift institutions conceived the idea of such a data center. All apparently felt that their in-house systems or service bureaus were not capable of providing the technology they wanted and that pooling resources was the answer. Financing was not a problem at this time, and so they chose to locate the data center in a high-rent district of their fairly large city. It was surrounded by upscale hotels and department stores.
Key Players
The Consortium–A cooperative venture by eight of the largest Savings and Loan institutions in the country. Each institution was independent of the others and none could be considered the primary decision-making force; they were all equal partners. The Presidents of these very large Savings and Loans were each secure in their positions and were quite willing to back this innovation.
President of Alpha Thrift–One of the originators of the Consortium and one of the key people
in the data center project.
Data Center President–The person hired to run the center, who apparently received a sizable salary but was not universally liked by the leaders of the individual Savings and Loans. His responsibility was to ensure that the data center functioned effectively and efficiently.
Data Center Staff–The DP staff was made up of personnel hired from outside; no one had worked for any of the thrift institutions previously. In addition to the staff on the system development project, there were operators and data processing personnel.
S&L Conversion Teams–Each Savings and Loan had to contribute people who understood their bank's systems and how they would convert over to the data center. After conversion, these teams would be phased out.
Consortium Interactions
Decision making in the consortium was tricky from the start because there was no leader among the institutions. All decisions had to be made by consensus, but there were areas of major disagreement among the Presidents. Personality conflicts between the Data Center President and some of the Savings and Loan Presidents only added to the difficulties.
CASE DESCRIPTION
The Project Begins (1980)
The original idea was that the Data Center personnel would design and develop their own software. The plan was to create two basic systems protected by security protocols: one would handle deposits and withdrawals to and from regular accounts and IRAs; the other would handle the processing of loans. Figure 1 shows their vision of how the systems would interact. Security was very important because some of the participating banks had no security on their systems at all. It took a few years to get the project moving. In addition to finding a location for the Data Center and hiring a President, they had to hire a staff, install a computer and operating system software, and get the whole thing going. All of this technology was picked by the personnel in the Data Center, with the Presidents of the different Savings and Loans contributing broad guidelines. As stated by the Auditor, the Presidents basically said: “Here's your project, here are your limits, here is what we are trying to achieve; you have to make it happen.”
Figure 1: Schematic of System Interactions (components: Deposit System, Loan System, IRA Savings, an interface between the systems, and the surrounding Security System)
Recognizing Runaway IS Projects When They Occur 275
While the Presidents and the Data Center people were working, however, issues were already developing in the minds of the workers at the individual Savings and Loan institutions. They had not been included at all in the system design, and none were hired to work in the Data Center itself. Moreover, it took so long to reach the stage of hiring personnel for the Center and getting started that many employees regarded it 'as a joke'. Because no progress reports were ever presented to them, they wondered if it would ever get going.
Stop and Redirection (1982)
An early blow was the realization that building software from scratch would take far more resources than could be committed. After numerous lengthy meetings it was decided, instead, to buy packages from vendors and rebuild them as necessary. For the next few months, they searched for vendors with software that would be acceptable and solicited bids from them. They then analyzed the various offerings to determine which was the best match with what they were trying to accomplish. Eventually, when the Deposit, Loan and Security systems were chosen, each came from a different vendor.
Development Truly Begins (1983)
The deposit software chosen had been developed for commercial banks, so it did not function exactly as needed. The Proof of Deposit function within the deposit software was especially troublesome. Proof-of-deposit processing is quite different for commercial banks because businesses often deposit money from several checks written by their customers. These checks must each be processed separately and then reconciled against the total to be put into the business's account. In a Savings and Loan, the process is much less complex: customers do not write checks against their accounts; instead, they transfer money to their commercial bank and write checks from there. To make the software work for the Savings and Loan, they decided to remove the Proof of Deposit function altogether. Getting the system to work after 'they hacked the guts out of it' was difficult, time consuming and costly. The Loan System did not resemble the Deposit System at all, so the two systems did not interface well; middleware had to be created to allow them to interact and work together. The Loan System was also inadequate for a task common in Savings and Loans: the processing of construction loans. A construction loan is like a mortgage except that the structure to be mortgaged has not been built yet. The bank loans money to individuals or developers to build homes, malls, office buildings, and so on. The loan balance grows while the contractor incurs bills; once the structure is built, the loan is converted to a mortgage. Employees at the individual Savings and Loans had noticed that the new software did not include a construction loan function and had repeatedly complained. They were finally told that the system would never include a construction loan subsystem.
This meant that each institution would either have to handle construction loans manually or develop its own system on a PC that would interface with the Data Center system. For some Savings and Loans, this meant that a previously automated function would become manual after conversion to the new system. The Auditor summed up the opinion of the employees: “Now, here was something that was handled by all of our old systems but not being handled at all by this 'high-tech' new thing we are paying a lot of money for.” To make matters worse, Alpha Thrift decided to use its own system to handle IRA/Keogh plans, because the system from the Auditor's original Savings and Loan (before the merger) was felt to be superior to that offered by the Data Center.
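The mechanics the package lacked are simple to state: during construction the balance is built up from periodic draws plus accrued interest, and at completion that balance becomes the opening principal of an ordinary mortgage. The sketch below illustrates only the idea; the function name, monthly draw schedule, and monthly compounding are assumptions for illustration, not details from the case.

```python
def construction_loan_balance(monthly_draws, annual_rate):
    """Accumulate a construction loan balance: each month the running
    balance accrues one month of interest, then the next draw is added.
    The final balance converts to the principal of a regular mortgage."""
    monthly_rate = annual_rate / 12
    balance = 0.0
    for draw in monthly_draws:
        balance = balance * (1 + monthly_rate) + draw
    return balance

# Three monthly draws of 100,000 at a nominal 12% a year (1% a month):
principal = construction_loan_balance([100_000, 100_000, 100_000], 0.12)
print(round(principal, 2))  # 303010.0
```

The point of the sketch is that the balance grows rather than amortizes, which is why an ordinary loan-servicing package could not represent it.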
Development Becomes Troubled (1984+)
Modifications made to the systems were so extensive that it was difficult to get the customized
systems to operate properly. In at least one case, the vendor declined further support for its system as rebuilt by the Data Center. One deadline for conversion came and went, then another. The conversion team personnel from each Savings and Loan had been selected but had not yet been consulted and were just waiting to be put into action. Rumors became rampant because no one knew what was happening. At one point a particularly outspoken EDP auditor from one of the Savings and Loans raised the concern that the contributed capital had yet to generate a dollar in revenue and that the investments might have to be written down. Moreover, an audit of expense accounts revealed that the data center's operating costs were running $500,000 a month!
Major Crisis in Confidence (Summer 1984)
A serious blow came in the summer of 1984: four of the original Savings and Loans decided to withdraw from the project. Financial difficulties caused the first two withdrawals and signaled that the golden age of Savings and Loans was beginning to end. The loss of two banks meant that the remaining six had to increase their capital contributions, which caused two more banks to rethink their participation. These two decided that the project was not providing the benefits promised and that participation was becoming too expensive; they pulled out, leaving four banks. The remaining four had to split the capital costs of building the system and the cost of operating the data center. It was revealed at this time that the Data Center's prestigious accounting firm had resigned from its engagement. The accounting firm was soon replaced, but then the Data Center's EDP auditor announced his resignation as well. A final blow came when the EDP auditor of one of the remaining Savings and Loans suggested stopping the project until situations he viewed as critical problems were resolved: “At one of the quarterly meetings between the Data Center and the Savings and Loan management, one of their auditors raised what is called, in accounting, 'the going concern question'. That is, is your investment in this candidate worth what you've put in or not? If it doesn't realize any return or any revenue at all from the purpose for which it was created, should your investment maybe be written down or written off?” The reaction of the group was to grumble for a while and then drop the subject. After all, it was not EDP auditors who would make this decision. Management at this point had sunk so many resources into the project that it would lose face if it admitted that the proposed system would never function as intended.
The Auditor of Alpha Thrift brought up the going concern question at one of the staff meetings with his President. The President became very upset, because pulling out would mean writing off the investment; besides, with the looming Savings and Loan crisis, he had other problems on his mind. The situation at Alpha Thrift seemed to mirror that of the other three institutions, because they all continued their participation in the project and remained fully committed to making it work. The Auditor explains why he thought the Presidents continued to participate: “Well, I think the reason the whole thing kept on was, management wanted to save face; it was their brain child. Surely, it must work somehow. That a lot of money had been sunk into the place by this time. We kept having to make more contributions. This is when the bubble was bursting on S&Ls. They had all been into some big bad investments that they shouldn't have been into. And a lot of them were sinking.”
System Testing Problems (1986)
A second major delay was called because it was felt that the Data Center was not ready for operation. The cost of operation, moreover, continued to be very high, even though there were no users yet. First, the on-line security arrangement, which was not used as intended, was extremely difficult to administer and caused problems in on-line transmission. New programs were required to improve it, but the structure of the existing code was hard to comprehend.
Second, the on-line system repeatedly bogged down during 'stress tests'. The plan was to have on-line teller terminals linked to the system, and stress tests were conducted to see how well it handled peak usage. The stress tests involved bringing all the tellers in on a Saturday and having them fire away transactions as fast as they could. Time after time they did this, and each testing session caused the system to crash. After about seven stress tests, there didn't seem to be anything anyone could do to make the system stand up to the load. Finally, on about the eighth attempt, somebody had a vision: “Wait a minute! Tellers in real life don't do that. They have one customer, but there may not be another one right there in line. It is only after a minute or so that another customer comes up.” So they worked out a way to have a teller enter a transaction, wait 30 seconds, and then put in another one; in essence, they staggered the transactions. The system reacted much better to a realistic level of usage, and with a bit of tweaking (two more stress tests), it was ready for use. Still, each delay hurt employee morale because it was feared that the effort put into testing and preparation for conversion had gone for nothing. Many employees were beginning to doubt that the conversion would ever take place, and when management announced that they were gearing up again, many did not take it seriously. Employees began to lose faith in management. Worse, when they saw that management could not or would not change the situation, they began to fear for the safety of their organizations and their jobs.
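The difference the staggering made can be illustrated with a toy arrival model. Nothing below comes from the case itself; the teller count, service time, and gap length are hypothetical. The point is only that back-to-back submission keeps every terminal's transaction in flight at once, while realistic gaps between customers thin the concurrent load dramatically.

```python
import random

def peak_in_flight(n_tellers, duration_s, service_s, gap_s, seed=1):
    """Worst-case number of simultaneously in-flight transactions.
    Each teller repeatedly submits a transaction taking service_s
    seconds, then waits gap_s seconds before the next one (gap_s = 0
    reproduces the original fire-away stress test)."""
    rng = random.Random(seed)
    cycle = service_s + gap_s
    events = []  # (time, +1 = submit, -1 = response received)
    for _ in range(n_tellers):
        t = rng.uniform(0, cycle)  # tellers don't all start in sync
        while t < duration_s:
            events.append((t, 1))
            events.append((t + service_s, -1))
            t += cycle
    in_flight = peak = 0
    for _, delta in sorted(events):
        in_flight += delta
        peak = max(peak, in_flight)
    return peak

hammered = peak_in_flight(50, 600, service_s=2.0, gap_s=0)    # all 50 at once
staggered = peak_in_flight(50, 600, service_s=2.0, gap_s=30)  # far fewer
print(hammered, staggered)
```

With no gap, every one of the 50 simulated tellers always has a transaction outstanding; with a 30-second gap, only a handful overlap at any moment, which is the load profile the eighth test finally approximated.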
Conversion and Operation (Fall 1986)
Once the systems passed the stress tests, conversion of all the banks to the new system was completed successfully over a period of two months; in fact, after all that had happened, people were pleasantly surprised at how well it went. Users, however, were very unhappy when they began to use the systems. The Loan and Deposit Systems had very different interfaces, which made things very tough on the people working online. Departments such as Internal Audit that worked with more than one on-line application found the presence of completely different systems a burden. Service problems also seemed to plague the operation, and much fine-tuning was necessary to make things work properly. The final result of all this work and expense was an inadequate on-line system with unusually poor response time, which turned out to be related to the security software. The security system was supposed to be a single package that would encompass and protect the other systems. Apparently the problem was that it included a great deal of software that wasn't really necessary: every submitted transaction had to travel all the way to Dallas, through all this software, and back with its response. To make matters worse, every now and then no response was ever received (the transaction was lost in the security software). The teller's natural inclination, after standing and waiting without a response, was to submit the transaction again. The teller would then receive an acknowledgment (from either the first or second submission) to give to the customer. Unfortunately, both transactions were recorded on the system, and then the bank would not balance at the end of the day. What made everyone even more annoyed was that this problem was known before the conversion took place. Eventually, they figured out the characteristics of these transactions and were able to track them.
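The double-posting problem just described is, at bottom, a reconciliation between what the tellers submitted and what the host actually posted. The sketch below reconstructs that idea only; the record layout, field names, and function name are assumptions, not the Data Center's actual report.

```python
from collections import Counter

def lost_transaction_report(teller_log, host_log):
    """Each log entry is (transaction_id, amount). Transactions posted
    more than once are duplicate resubmissions after a lost
    acknowledgment; transactions submitted but never posted were
    swallowed by the security software."""
    submitted = Counter(tid for tid, _ in teller_log)
    posted = Counter(tid for tid, _ in host_log)
    amounts = dict(teller_log)
    double_posted = {tid: n for tid, n in posted.items() if n > 1}
    lost = sorted(tid for tid in submitted if tid not in posted)
    # amount to back out so the bank balances at end of day
    adjustment = sum(amounts[tid] * (n - 1) for tid, n in double_posted.items())
    return {"double_posted": sorted(double_posted), "lost": lost,
            "adjustment": adjustment}

teller_log = [("T1", 100.0), ("T2", 50.0), ("T3", 75.0)]
host_log = [("T1", 100.0), ("T2", 50.0), ("T2", 50.0)]  # T2 doubled, T3 lost
print(lost_transaction_report(teller_log, host_log))
# {'double_posted': ['T2'], 'lost': ['T3'], 'adjustment': 50.0}
```

A report of this shape supports both of the workarounds the banks adopted: backing out the double postings each evening, and flagging the lost transactions for handwritten receipts.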
They began generating Lost Transaction reports that were used to balance the bank each day; it became a standard procedure. They also instructed the tellers, if no acknowledgment was received, to create a handwritten receipt for the customer and to refrain from submitting the transaction again. Having a slow, awkward and inadequate system was bad enough, but having to pay top dollar for it was excruciating for banks in the midst of the Savings and Loan crisis. The cost of maintaining the facility was horrific. First, management had chosen a location in the high-rent district, so the rent was very high.
The facility costs were also very high because, when the technology was purchased, money was no object. It was also costing Alpha Thrift $6 million a year in data processing fees alone. By the time the Auditor's original bank merged with Alpha Thrift, there were problems, and it became clear that the merger had not produced the ideal Savings and Loan after all. Shortly after the merger was consummated, it was found that Alpha Thrift, the supposedly 'financially sound' partner in the merger, had sunk a large portion of its resources into the Data Center boondoggle, which was producing huge losses. The exorbitant cost of second-rate data processing only made things worse.
CURRENT CHALLENGES/PROBLEMS FACING THE ORGANIZATION
Selling Off the Data Center (1988)
After the Data Center had been up and running for a couple of years, one of the Savings and Loans bought out the interests of the other three. Soon after, the President of the Data Center was fired, which was not surprising: the acquiring bank had tried several times to convince the others to fire him.
Alpha Thrift Tries to Pull Out (1987-88)
At that point the President of Alpha Thrift decided enough was enough and commissioned the bank's accounting firm to conduct a search for the ideal new in-house system. The reasons for pulling out were many: the monthly fee was too high, the quality of processing was poor, and the bank still had to do its own IRA/Keogh and construction loan processing, with no prospect of improvement in sight. By then they had used the system for two and a half to three years. They hired a consulting firm, developed specifications with the idea of building their own systems, and then encouraged several vendors to submit proposals with bids. The President, however, finally decided that going in-house was too expensive and became resigned to remaining with the Data Center. When this President retired, the new president of Alpha Thrift also attempted to find an alternative solution, but decided that the bank could not afford a new in-house system and would remain with the Data Center for better or worse. It soon came to worse: the Savings and Loan that had acquired the Data Center was declared insolvent, and the Resolution Trust Corporation repossessed it. Alpha Thrift was soon sold to a bank holding company from a distant city, and many jobs were lost. Here is how the Auditor summed up the case: “It is significant that not one but four organizations buried themselves in this project. I know that two of the Savings and Loans no longer exist and I am not sure about the other two. While I feel management was always informed of difficulties, political considerations prevented any thought of withdrawal from the project. I feel this is one of the most horrible things that can happen to a company when a project goes far out of control and management paints itself into the corner.” Thus ends the case of the Banking Consortium. It took six years to build the system, such as it was (see the timeline in Figure 2). The project cost three times more than was budgeted, and almost everyone involved (except management) thought it should have been discontinued or significantly redirected. At the end of this case, we can split the banks into two groups: those that pulled out after becoming frustrated with development problems and costs, and those that remained with the project until the data center folded. This case encourages us to think about exactly how projects become runaways and how difficult it can be to 'pull the plug'. It also raises the question of exactly what constitutes failure.
Figure 2: Project Timeline. 1980-82: start; stalled by initial design conflicts and vendor selection problems. 1983-85: restart; stalled by software customization problems. 1986: restart; stalled by testing problems. Fall 1986 onward: limping conversion and operation (slow response, inconsistent interface, lost transactions, etc.).
FURTHER READING
Recent practitioner literature on runaways (in addition to the References below):
Kent, A. (1991). Stop That Runaway! Australian Accountant, 61(3), 52-55.
Simpson, R. L. (1993). Beware the Runaway IS. Nursing Management, 24(11), 33-37.
Drummond, H. (1996). Escalation in Decision-Making: The Tragedy of Taurus. Oxford: Oxford University Press.
Diamond, S. (1997). How Oil Firms Can Control Runaway Enterprise-Wide Software Systems. Oil and Gas Journal, 94(46), 25-27.
Glass, R. L. (1998). Software Runaways. Upper Saddle River, NJ: Prentice-Hall.
REFERENCES
Cringely, R. X. (1994). How to Forfeit Millions in Exchange for Nothing. Forbes ASAP, August, 60-64.
Jacobe, D., Smith, B. P., & Fahey, N. (1982, April). The Thrift Crisis: The Result of High Rates and Bungled Deregulation. Savings and Loan News, 44.
Rothfeder, J. (1988). It's Late, Costly, Incompetent — But Try Firing a Computer System. Business Week, 64-65.
BIOGRAPHICAL SKETCH Joan Ellen Cheney Mann received her Ph.D. in Management Information Systems from Georgia State University in 1996. Currently, she is an Assistant Professor at Old Dominion University in Norfolk, Virginia. Her research interests are Runaway Projects, Management of Information Systems, as well as Systems Development Methodologies. She has published in many journals and proceedings including MIS Quarterly, Internal Auditor, HICSS and AIS.
280 Wedemeijer
Long-Term Evolution of a Conceptual Schema at a Life Insurance Company Lex Wedemeijer ABP, The Netherlands
EXECUTIVE SUMMARY
Enterprises need data resources that are stable yet flexible enough to support current and new ways of doing business. However, there is little understanding of how the flexibility of a Conceptual Schema design shows in its evolution over time. This case study outlines the evolution of a highly integrated Conceptual Schema in its business environment. A gradual decline in schema quality is observed: the size and complexity of the schema increase, while understandability and consistency decrease. Contrary to popular belief, changes are found to be driven not only by 'accepted' causes such as new legislation or product innovation; other change drivers are identified, such as error correction, changing perceptions of the information need of the business, and elimination of derived data. The case shows that a real Conceptual Schema is the result of 'objective' design practices as well as the product of negotiation and compromise with the user community.
BACKGROUND
Justification
Many large application systems in government, banking, insurance and other industries are centered around a relational database. A central component is the Conceptual Schema, the linking pin between the information requirements and perceptions of 'reality' as seen by users, and the way the corresponding data are actually stored in the database. As user requirements can and will evolve over time, it must be expected that changes to the Conceptual Schema (CS) become necessary. Nevertheless, it is often assumed that superior quality of the initial design is sufficient for it to remain stable over the entire information systems lifecycle. Thus, the ability to adapt to later changes in user requirements is taken for granted, if not blatantly ignored, in most design methods. This case looks at the cumulative effects of a series of changes on the overall quality of a CS by tracing the actual evolution of one CS in its natural business environment. Although we describe the separate change steps, we do not intend to study or criticize the individual change projects or the realization of strategic targets. Our aim is to develop an overall understanding of successive changes in the CS and of its change drivers. And by taking the viewpoint of sustained system exploitation, we place the importance of initial design quality into its proper long-term perspective. To our knowledge, such cases are not available in contemporary Computer Science literature. The benefits of the case study for teaching purposes are:
• it provides students with an example of a real schema, instead of academic examples, which tend to be unrealistic and untried
• showing the evolution of a Conceptual Schema in a real business environment puts the importance of 'high-quality design practices' as taught in the university curriculum into its proper perspective.
Copyright © 2002, Idea Group Publishing.
The Company
The enterprise where this case study was conducted is a European life insurance company, or more precisely a pension fund. Pensions provide financial coverage for the old age, death and early retirement of an employer's workforce. From now on, we will refer to it as the 'Pension' company. The Pension company manages the pension benefits of over a million (former) employees and family members. The net current value of their pension benefits is in excess of US$1 billion, and the monthly payout to pensioners is over US$0.5 billion. However interesting these financial aspects, we will concern ourselves with the data management aspect, as pensions require meticulous and complicated recordkeeping.
Business Functions
Figure 1 shows the (simplified) chain of primary business functions involved. It shows how employers submit data about their workforce (usually some copy of the payroll) and pay in their financial contributions. These two inflows are carefully checked against each other. The accepted data are then transferred to Benefit Administration for further processing. All claims are processed by the Claims-and-Payments departments. The case study concerns the business function of Benefit Administration only; we will not study the integration of this business function with its neighboring functions.
Figure 1: Value Chain of Primary Business Functions (elements: employer; Pension company; submission of payroll data; financial contribution; data acquisition; accounts receivable; benefit administration; claims and payments; asset management; accounts payable)
Management Structure
The Pension company is functionally organized. There is a single Data Acquisition section and a single Benefit Administration section. The business function of Claims-and-Payments is carried out by three 'spending' departments: Old-Age Pensions is responsible for payments to old-age pensioners, Dependents-Payments takes care of payments to dependent family members upon the death of the (former) employee, and Early-Retirement Payments handles early retirements. Each department and section employs about a hundred full-time workers. An additional 100 workers are employed in several staff sections; of these, only the Information Management section is of interest, as it is responsible for safeguarding the quality of the overall CS layout. Finally, an Information Systems department of some 300 employees takes care of all hardware and software installation, exploitation and maintenance.
Daily Operations
Responsibilities and activities of the functional departments are broadly as follows:
• Data Acquisition collects data about the participants in the pension scheme (employees and dependent family members) from external sources. The main data source is employers' payrolls. Tape copies of monthly payrolls are received from all employers and matched with the pension contributions collected by the Accounts Receivable department. A second data source is the municipal registry offices ('city hall'), which are tapped for addresses and family relationships by electronic data interchange. All acquired data are first validated, then reformatted and transferred into the various Pension databases.
• Benefit Administration keeps complete records on all pension benefits. This involves recording all job switching, changes in wages and part-time factor, changes in the type of due benefit, etc. It also involves recording marriages and divorces, because pension benefits are legally joint property that has to be divided upon divorce. Most, but not all, of the data processing is fully automated.
• If a benefit is due, customer data are transferred from Benefit Administration to the particular Payments department. Their information systems are loosely coupled with the Benefit Administration systems, i.e., claim processing begins by taking out a full copy of all the benefit data.
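The cross-check between the two inflows can be pictured as a simple match of each employer's payroll-derived contribution against the payment actually received. The sketch below is illustrative only; the field names, matching rule, and tolerance are assumptions, not details of the Pension company's systems.

```python
def match_contributions(contribution_due, payments, tolerance=0.01):
    """Compare the contribution implied by each employer's payroll
    submission against the payment collected by Accounts Receivable."""
    report = {}
    for employer, due in contribution_due.items():
        paid = payments.get(employer, 0.0)
        if abs(paid - due) <= tolerance:
            report[employer] = "matched"
        elif paid < due:
            report[employer] = "underpaid"
        else:
            report[employer] = "overpaid"
    return report

contribution_due = {"Acme": 1200.0, "Bolt": 800.0}
payments = {"Acme": 1200.0, "Bolt": 750.0}
print(match_contributions(contribution_due, payments))
# {'Acme': 'matched', 'Bolt': 'underpaid'}
```

Only records that pass such a check would move on to the Benefit Administration databases; discrepancies go back to the employer.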
Information Technology and Modelling Guidelines
Our case study concerns the major information system of the Benefit Administration department. The information system uses 'proven technology', i.e., mainstream graphical user interfaces and a relational DBMS, which still dominate today's marketplace. In addition, the Information Management department formulated guidelines and best practices on Information Modelling to direct the design and maintenance of the information systems. The ones relevant to the CS are:
• Single unified view, and therefore a single point of maintenance. Benefit Administration demands a single, highly integrated database that supports all business functions in all their variants. There is no partitioning into separate modules or local databases that can be maintained independently; it is felt that such disintegration would make it hard to coordinate local changes and reintegrate them into a single consistent view. The consequence is that department-wide priority-setting and maintenance deadlines are crucial management decisions, indispensable but very time-consuming.
• High level of generalization, and therefore low maintenance. The CS should rise above the level of ad-hoc implementation features and focus on persistent properties instead. This guideline steers designers away from quick solutions (often not only quick but 'dirty' as well) towards long-term, more stable solutions.
• Snapshot data if possible, historical data where obligatory. It is typical of life insurance, and pensions in particular, that future benefits are based on the past history of the policy holder/employee. This calls for temporal capabilities that most of today's databases
are as yet incapable of delivering. Instead, temporality must be modelled explicitly into the CS, which may result in overly large and complex models. The business guideline is to try to convince users not to demand temporal data wherever possible, and so keep CS complexity down.
• Representing derived data in the CS. Apart from the temporal issue addressed by the previous guideline, an important issue is the storage of the base data for calculations, the intermediate results, and the final outcomes. These are important modelling decisions to which we will return later on.
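The snapshot-versus-history trade-off in the guideline above can be made concrete with a toy valid-time lookup. The table and attribute names below are illustrative assumptions, not taken from the Pension schema; the point is that a historical attribute carries one row per validity interval, so a benefit calculation can ask for the value that held on any past date, while a snapshot attribute keeps only the current value.

```python
from datetime import date

# One row per validity interval (a snapshot design would keep only the
# last row). The open-ended interval is closed with a sentinel date.
salary_history = [
    # (employee_id, valid_from, valid_to, annual_salary)
    ("E1", date(1990, 1, 1), date(1994, 12, 31), 30_000),
    ("E1", date(1995, 1, 1), date(9999, 12, 31), 36_000),
]

def salary_on(history, employee_id, day):
    """Valid-time lookup: the salary in force on `day`."""
    for emp, start, end, salary in history:
        if emp == employee_id and start <= day <= end:
            return salary
    return None

print(salary_on(salary_history, "E1", date(1993, 6, 1)))  # 30000
print(salary_on(salary_history, "E1", date(1996, 6, 1)))  # 36000
```

Modelling every attribute this way is what inflates the schema, which is exactly why the guideline pushes back on temporal data wherever a snapshot will do.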
Chain of Command in System Development and Maintenance
In practice, the entire system is always under (re)construction, to accommodate the latest set of new laws and legal exception rulings, changes in system and data interfaces with adjacent business functions, etc. Due to its size, complexity, broad scope and large number of users, maintenance of the information system has grown into a well-established but slow managerial process. First, there is a rather informal part in which new developments, change requests from users, problem reports, etc. are assembled onto what is called the 'maintenance stock list'. The Information Management section, in cooperation with the Information Systems department, analyzes the topics on the stock list and translates them into actual change proposals. All change proposals are submitted to the Pension management team, which has the final say on priority-setting, budgeting and resource allocation. Once a change is committed to by upper management, a formal change procedure based on ITIL standards is followed. The procedure cascades through the usual steps of information systems maintenance: design, implementation, testing and operation. The steps are coordinated with other necessary changes, e.g., user training, programming of data conversion routines, and adaptation of the standard letters and forms that the system sends out to customers. The Information Management section is responsible for outlining all CS changes early in the design phase. These specifications are forward-engineered into technical changes on the operational database (DDL statements) to be implemented by the Information Systems department. This is a matter of concern, as the technicalities of the actual change on the database level may deviate considerably from the original intentions on the conceptual level.
SETTING THE STAGE
Design Versus Maintenance
The quality of Conceptual Schema designs has received considerable attention in the literature; see for instance Lindland, Sindre and Sølvberg (1994), Kesh (1995), Shoval and Shiran (1997), and Moody (2000). Without attempting to be exhaustive, some of the more prominent CS quality aspects are:
• Simplicity: is the CS easy to understand, both for system engineers and for the user community?
• Consistency: is the way of modelling applied consistently across all of the CS at all times?
• Flexibility: once the CS is operational, how well are requirements changes accommodated?
Notice that most quality aspects of a CS can be checked at design time, the one exception being flexibility. A well-designed CS is supposed to be flexible, but this can only be checked in its later evolution. And while many approaches to obtaining a high-quality CS design have been proposed, they mostly concentrate on the initial design phase. Much less has been written on maintenance of the CS: what changes does an operational CS accommodate, and what is the impact on the overall composition and quality of the CS over time?
Flexibility of a Conceptual Schema
This case study focuses on the quality aspect of flexibility. But what is flexibility? In the absence of a well-established and generally accepted definition, we use the working definition: flexibility is the potential of the Conceptual Schema to accommodate changes in the information structure of the Universe of Discourse, within an acceptable period of time.
284 Wedemeijer
This definition seems attractive as it is both intuitive and appropriate. It assumes a simple cause-and-effect relation between 'structural change' in the UoD (Universe of Discourse) and changes in the CS. Also, it prevents inappropriate demands of flexibility on the CS by restricting the relevant environment from which changes stem to the UoD only. Based on this working definition, we can investigate CS flexibility along three dimensions:
• 'Environment', i.e., what is the driving force of the change: is the CS change justified by a corresponding change in the designated (part of the) Universe of Discourse?
• 'Timeliness', i.e., do the change driver and the corresponding CS change occur in approximately the same period of time? Sometimes a CS change is committed in anticipation, long before it is required by a change materializing in the environment. And some relevant changes may not be very urgent, and get postponed.
• 'Adaptability', i.e., is the CS changed in such a way that its quality aspects (simplicity, consistency, etc.) are safeguarded? We will not study this dimension in detail, and judge adaptability by looking only at the complexity of the overall CS lattice.
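The three dimensions amount to one assessment record per CS change. A minimal Python sketch (the class, field and method names are our own invention, not part of the case; the example values mirror change (A) as later reported in the case):

```python
# Minimal sketch: one assessment record per CS change, covering the three
# dimensions of flexibility. Class and field names are our own invention.
from dataclasses import dataclass

@dataclass
class FlexibilityAssessment:
    change: str        # which CS change is being judged
    environment: str   # driving force: is the change justified by the UoD?
    timeliness: str    # do change driver and CS change share a timeframe?
    adaptability: str  # effect on overall CS complexity

    def justified(self) -> bool:
        # A change counts as 'justified' when the UoD itself drives it.
        return self.environment == "by change in UoD"

# Example values mirror change (A) of the case study.
a = FlexibilityAssessment(
    change="E.R. BENEFIT LEVEL 1 added",
    environment="by change in UoD",
    timeliness="yes, in advance",
    adaptability="equal",
)
print(a.justified())  # True
```

Collecting one such record per change is essentially how the summary table at the end of the case is built up.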
The Case Study Approach

The CS evolution is studied by analyzing documentation collected from the Pension company over the course of time. Every time a new CS version went operational, a full copy of the documentation was obtained and time-stamped for later analysis. The case study approach is to first outline the composition of each consecutive CS and identify the dominant business changes. Next, we analyze the differences between consecutive CS versions, and finally we assess the level of flexibility in terms of the three dimensions discussed above. We decided to leave the CS as 'real' as possible, including its modelling errors, overly complex structures, etc. We could certainly have polished up the CS, but we feel that this would severely detract from our purpose: to show an example of a 'real' schema, instead of academic examples, which tend to be unrealistic and untried. And polishing up the CS would certainly affect the constructs that we want to see evolve, and thus diminish the value of the case study.
Schema Representation

Our analysis covered all the usual CS constructs, i.e., conceptual entities, relationships, attributes, and constraints. Nevertheless, we only report the evolution of the overall CS structure made up by its entities and relationships; for space reasons, we leave out the ongoing changes in attributes and constraints. Entities are represented in the diagrams by rectangles. A specialization entity is represented as an enclosed rectangle, so that the "is-a" relationship is immediately obvious. Aggregate "has-a" relationships are drawn as lines connecting the two related rectangles, and cardinality is shown using the customary "crow's foot" notation. An optional relationship, where not all instances of the member entity need to participate in the relationship, is indicated by an "O". These conventions make for compact yet easy-to-read diagrams. As usual, attributes and constraints are not depicted.
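These diagram conventions can also be written down as data structures. The following Python sketch is our own rendering (class and field names are illustrative, not part of the case documentation):

```python
# Sketch of the diagram conventions as data structures (our own rendering).
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    # Specializations are drawn as enclosed rectangles ("is-a").
    specializations: list["Entity"] = field(default_factory=list)

@dataclass
class Relationship:
    member: str             # entity on the "many" (crow's foot) side
    owner: str              # entity on the "one" side
    cardinality: str        # e.g. "1:1" or "N:1"
    optional: bool = False  # the "O" marker: participation is not mandatory

# Illustrative example: in the initial CS, BENEFIT-to-PARTICIPATION is 1:1
# (the case later reports this cardinality corrected to N:1).
benefit = Entity("BENEFIT")
r = Relationship(member="BENEFIT", owner="PARTICIPATION", cardinality="1:1")
print(r.cardinality)  # 1:1
```

A version of the CS is then just a collection of such `Entity` and `Relationship` records, which is what makes version-to-version comparison mechanical.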
CASE DESCRIPTION

Our case study concerns the 'Integrated Benefit Administration' information system. This highly integrated transaction processing system supports most of the daily business processes of the Benefit Administration department, in varying degrees of automation. Our subject of investigation is the CS at the core of this information system. In keeping with its high level of integration, the CS has grown to well over a hundred entities (not counting specializations) and is still growing. Obviously, this is not a comfortable size for our research purpose, and we therefore limit the scope of the case study to the 'pension benefit' concept. We trace how this real-world concept is perceived and represented as the Conceptual Schema evolves. Design and implementation of the system and its CS began in 1994, the system going operational at the end
Long-Term Evolution of a Conceptual Schema at a Life Insurance Company
of 1995. The case study covers the period 1996-1999, but the system is expected to run until at least 2005. The time series of CS versions included in the case study is shown in Figure 2. The time intervals between consecutive versions vary between half a year and one-and-a-half years. Actually, there were some intermediary versions, but we could eliminate them from our analysis: it was found that those intermediate versions were targeted at concepts other than 'pension benefit'; remember that we are dealing here with a highly integrated CS. In a similar fashion, we excluded the CS documentation that was collected for the initial design phase of the CS. These first attempts were considerably improved upon before the 'real' CS went operational some months later. In our opinion, these improvements don't reflect flexibility of the CS as a reaction to UoD changes, but rather progress in the designers' understanding and modelling of the UoD.

Figure 2: Time Series of the CS Versions. [Timeline showing five CS versions (January 1996, October 1996, July 1997, February 1999, September 1999) against the dominant business changes in the intervals between them: pension scheme for Early-Retirement innovated; information strategy revisited; facilities for benefit-exchange extended and information strategy revisited.]

January 1996: Initial Production Release

Figure 3: version January 1996. [Entity diagram showing Product, Customer, Insured Party, Relationship, Policy, Participation, Participation Trail, Successor, Trail Premium/Reduction, Benefit, and Benefit Premium/Reduction.]

This is the CS at the start of the evolution. It has a fairly simple structure. The core concept of
the CS is BENEFIT. It records the exact amount of pension due for a PARTICIPATION, i.e., what is due to a beneficiary under a particular pension scheme. The POLICY entity records the coverage as insured for an employee. All BENEFIT amounts are computed from PARTICIPATION TRAIL data, which basically comprises the historical details of employment and salary, adjusted (and sometimes readjusted) according to regulations. The initial production release of the CS provided exactly two types of pension benefit, recorded as the occurrences of PRODUCT. 'Regular' is the combined old-age and dependents pension that is the default insurance scheme. 'Separated' is a more peculiar phenomenon, where divorce laws have an impact on pensions. Whenever a marriage ends, the accumulated pension benefits are split among the ex-partners. The part due to the ex-partner can be partitioned off to become a separate policy for old-age benefit, with the ex as insured party. Of course, the benefits due to the other partner are reduced accordingly. Although the information guidelines advocated a single unified view, the early-retirement pension benefits weren't included in the CS. The reason is that at the time, another information system handled early-retirement pensions and payments reasonably well. It was decided to let that be, and to exclude it from the scope of the new system under design. The PARTICIPATION TRAIL entity contains the base data for the calculation of pension benefits, but the derivation itself is not supported in the CS. The applicable rules and regulations are rather complicated, and in the initial design they are completely relegated to the application level.
October 1996: Major Extension

Figure 4: version October 1996. [Entity diagram extending Figure 3 with four groups of additions labelled A-D: (A) Early-Ret. Benefit level 1; (B) Early-Ret. Benefit level 2 with Participation Trail (Early-Ret. level 2); (C) Early-Ret. Benefit level 3 with Trail for level 3; (D) Exchanged E.R. benefit and Benefit obtained by E.R. Exchange.]

Change Drivers
A major driving force in the UoD develops nine months later. Completely new facilities and benefits regarding early retirement are introduced. The old ways of handling early retirement by the Pension company, and the information system that supported those ways of doing business, become obsolete almost overnight. Two new business processes are pressed into service (other business process improvements are postponed for the time being):
• the administration of early-retirement benefits, and
• the administration of benefit exchange. When an early-retirement benefit hasn't been cashed in (the employee doesn't retire early, or dies prematurely), 'regular' pension benefits are increased in exchange.
Changes in the CS

The CS is expanded and becomes much more complex. Actually, four coherent groups of additions can be discerned in the CS:
(A) EARLY-RETIREMENT BENEFIT LEVEL 1. This is a straightforward addition.
(B) EARLY-RETIREMENT BENEFIT LEVEL 2 and its associated PARTICIPATION TRAIL FOR LEVEL 2.
(C) EARLY-RETIREMENT BENEFIT LEVEL 3 with an associated entity TRAIL FOR LEVEL 3.
(D) EXCHANGED EARLY-RETIREMENT BENEFIT and BENEFIT OBTAINED BY E.R. EXCHANGE.
Flexibility of the CS

As for the 'environment' dimension, the CS changes are all justified by the pending changes in early-retirement pensions. As for 'timeliness', the CS changes precede the real-world changes, and don't coincide with them. While the new early-retirement rules and regulations were contracted in the course of 1996, the new rules only took effect in the spring of 1997. The time lag was necessary to prepare business processes, to train personnel, to inform employers and employees of the innovation in pension benefits, etc. And perhaps most importantly: to adjust information systems. As for 'adaptability', notice how the way of modelling has now become inconsistent across the several Benefit-like entities, for no apparent reason. And the guideline to go for a 'high level of integration' in the CS is compromised by the decision not to merge the new entity EARLY-RETIREMENT BENEFIT LEVEL 2 with the semantically very close entity BENEFIT. A final observation (not visible in the schema) concerns the PRODUCT entity. The previous version held two instances, 'regular' and 'separated'. The new version adds the instance 'Early-Retirement'. Apparently, the UoD change is accommodated by changes at both the instance and the structural level. This must surely be considered an update anomaly.
July 1997: Ongoing Changes
Figure 5: version July 1997. [Entity diagram extending Figure 4 with groups of changes labelled E-H: (E) Separation, with associated ex-spouse specializations of the Benefit and Participation Trail entities; (F) Participation Trail for Benefit; (G) Participation Trail (new) and Participation Trail (new) for Benefit; (H) the corrected cardinality of the Benefit-to-Participation relationship.]
Change Drivers

The early-retirement innovation still acts as an important driving force nine months later. The business process changes that were postponed earlier on are now being implemented. The CS of our case study is affected by only one of them:
• having the legalistic peculiarities of benefit division upon divorce apply to early retirement.
There are no other material changes in the UoD. However, there is a changing perception of the UoD:
• the earlier information modelling decision not to represent derivative relationships is reversed.
The reversal has major impact on the CS, and on the application level, where existing derivation routines have to be altered and entity update routines added.
Changes in the CS

As before, the CS is expanded and becomes much more complex. Again, we can discern several coherent groups of changes, most of them being additions:
(E) To accommodate 'divorce' regulations, SEPARATION and several associated specializations and relationships are intricately woven into the CS, increasing overall complexity.
(F) The complex derivative relationship between BENEFIT and PARTICIPATION TRAIL data was absent from the initial CS. It is now modelled by way of the PARTICIPATION TRAIL FOR BENEFIT entity.
(G) The change of strategy even went one step further. Maintenance engineers came to believe that the old way of working with the PARTICIPATION TRAIL entity was 'legacy'. To prepare a graceful evolution, PARTICIPATION TRAIL (NEW) and PARTICIPATION TRAIL (NEW) FOR BENEFIT are added.
And a final change must be considered a 'correction':
(H) The cardinality of the BENEFIT-to-PARTICIPATION relationship is increased from 1:1 to N:1.
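At the database level, relaxing a 1:1 relationship to N:1, as in correction (H), typically amounts to dropping a uniqueness constraint on the foreign key. The case does not show its DDL, so the following SQLite-based Python sketch is purely illustrative (table and column names are ours); SQLite cannot drop a column constraint in place, so the table is rebuilt, much as a real maintenance release would migrate the operational data:

```python
# Illustrative sketch (our own, not from the case): relaxing a 1:1
# relationship to N:1 by removing the uniqueness of the foreign key.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE participation (id INTEGER PRIMARY KEY);
    -- 1:1: the UNIQUE constraint allows at most one benefit per participation
    CREATE TABLE benefit (
        id INTEGER PRIMARY KEY,
        participation_id INTEGER UNIQUE REFERENCES participation(id)
    );
    INSERT INTO participation (id) VALUES (1);
    INSERT INTO benefit VALUES (1, 1);
""")

# Correction (H): rebuild the table without UNIQUE, turning 1:1 into N:1.
con.executescript("""
    CREATE TABLE benefit_new (
        id INTEGER PRIMARY KEY,
        participation_id INTEGER REFERENCES participation(id)
    );
    INSERT INTO benefit_new SELECT * FROM benefit;
    DROP TABLE benefit;
    ALTER TABLE benefit_new RENAME TO benefit;
    -- now allowed: a second benefit for the same participation
    INSERT INTO benefit VALUES (2, 1);
""")
n = con.execute("SELECT COUNT(*) FROM benefit WHERE participation_id = 1").fetchone()[0]
print(n)  # 2
```

The sketch illustrates the earlier point about the chain of command: a one-line conceptual correction becomes a multi-step data migration on the operational database.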
Flexibility of the CS

As for the 'environment' and 'timeliness' dimensions, SEPARATION and its associated CS changes are justified, being a belated consequence of the early-retirement innovation. Not so for the three other changes. As for 'adaptability', the new CS elaborates on the previous CS, making it ever more complex but in a largely consistent way. Only the additions of PARTICIPATION TRAIL (NEW) and PARTICIPATION TRAIL (NEW) FOR BENEFIT are suspect, as these entities create redundancy in the CS.
February 1999: Stabilized
Figure 6: version February 1999. [Entity diagram extending Figure 5 with groups of changes labelled I and J: (I) a generalized Exchange entity subsuming the former Exchanged E.R. benefit, with associated 'exchanged benefit' specializations of the various Benefit entities; (J) Policy Attribute. A legend in the lower right-hand corner expands the abbreviated entity names, e.g. 'ER Benefit by X excha.' stands for 'benefit obtained by E.R. exchange'.]
Change Drivers

(The lower right-hand corner of Figure 6 explains the entity names that are abbreviated in the diagram.) For over a year and a half, there are no important changes in our section of the Pension company business. The business isn't at a standstill; rather, it means that the current ways of doing business in the Benefit Administration department are satisfactory. The relative quiet in the UoD is reflected in the CS: while several intermediate CS versions were implemented, we can ignore them all because they don't concern any features of our CS. Only one change is announced, to become effective as of summer 1999:
• New legislation forces all pension companies to offer their insured parties more freedom of choice for 'exchange' of pension benefits. In the 'regular' pension scheme, a dependent's benefit was a fixed proportion of the corresponding old-age benefit. A customer's freedom to exchange various kinds of pension benefits means that this proportion now becomes a variable.
Changes in the CS

The CS version of February 1999 is impacted by the upcoming change in the UoD.
(I) In response to the broadened concept of exchange, a generalized EXCHANGE entity is introduced. It subsumes the former EXCHANGED EARLY-RETIREMENT BENEFIT and impacts various other entities and/or specializations. Notice how the EXCHANGE-to-BENEFIT relationship shows 1-to-1 cardinality, whereas the subsumed EXCHANGED EARLY-RETIREMENT BENEFIT-to-BENEFIT relationship had N-to-1 cardinality.
Another minor improvement in the CS is motivated simply as 'progressive understanding', without there being a clear need or user requirement that drives the change:
(J) POLICY ATTRIBUTE is added.
Flexibility of the CS

The overall CS structure remains stable. As for 'environment' and 'timeliness', the upcoming legislation causes advance changes that are gracefully accommodated in the CS. As for 'adaptability', quality and complexity aren't much different from the previous CS version.
September 1999: Simplification

Figure 7: version September 1999. [Entity diagram showing the simplified CS, with groups of changes labelled K-P: (K) Contract, Contract Conditions and Product in Contract; (L, M) the Participation Trail variants eliminated; (N) subsumed exchanged-benefit entities dropped; (O) the Benefit obtained by Exchange relationship redirected to Policy; (P) the Trail Premium/Reduction relationship redirected to Successor.]
Change Drivers

Apart from the new legislation, there is only one business change in seven months. Even then, it is internal to the enterprise: a change of perception at the strategic management level, where a new philosophy in product engineering is professed:
• Pension products should vary across market segments and employers, instead of being uniform.
But to our surprise, we find that once again the perception of the UoD is radically reversed:
• The CS is to record original source data only, while derivative data is to be eliminated from the CS.
Changes in the CS

While the new philosophy in product engineering has little relevance for the current way of recording the benefit data, it drives a change in a 'corner' of the CS, impacting key integrity of the POLICY entity:
(K) The concept of CONTRACT and a dependent entity CONTRACT CONDITIONS are introduced. The relationship POLICY-to-PRODUCT is redirected via an intermediate PRODUCT-IN-CONTRACT entity.
The previous CS versions recorded the intricate derived relations between BENEFIT and PARTICIPATION TRAIL data, but the new CS version does away with all this. While the functionality and complexity now have to be accounted for at the application level, the pay-off at the CS level is a remarkable simplification:
(L) The preparatory entities PARTICIPATION TRAIL (NEW) and PARTICIPATION TRAIL (NEW) FOR BENEFIT are also eliminated. Notice how these entities were never really used.
(M) PARTICIPATION TRAIL FOR BENEFIT as well as PARTICIPATION TRAIL FOR EARLY-RETIREMENT LEVEL 2 are eliminated.
(N) Three subsumed EXCHANGED-EARLY-RETIREMENT-BENEFIT entities are dropped.
(O) The BENEFIT OBTAINED BY EXCHANGE-to-PARTICIPATION TRAIL relationship is short-circuited into a BENEFIT OBTAINED BY EXCHANGE-to-POLICY relationship.
Finally, one relationship is changed for which we could not ascertain a change driver. The change seems only to pave the way for a simplification in the next CS version:
(P) The TRAIL PREMIUM / REDUCTION-to-PARTICIPATION TRAIL relationship is redirected to SUCCESSOR.
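To see what 'accounting for the functionality at the application level' means once the derived relations leave the CS, consider a deliberately oversimplified derivation routine. The real benefit-calculation rules are far more complicated and are not disclosed in the case; the record layout, the accrual formula and the 1.75% rate below are all invented for illustration:

```python
# Illustrative only: with the derived relations gone from the CS, the
# application walks the raw PARTICIPATION TRAIL records and computes the
# benefit on the fly. The formula and rate here are invented, not the
# company's actual rules.
from dataclasses import dataclass

@dataclass
class TrailRecord:
    """One (adjusted) employment/salary period from the participation trail."""
    years: float
    pensionable_salary: float

def derive_benefit(trail: list[TrailRecord], accrual_rate: float = 0.0175) -> float:
    """Hypothetical accrual formula: a fixed rate per year of service."""
    return round(sum(t.years * t.pensionable_salary * accrual_rate for t in trail), 2)

trail = [TrailRecord(10, 40000.0), TrailRecord(5, 50000.0)]
print(derive_benefit(trail))  # 11375.0
```

The trade-off the case describes is visible even here: the CS becomes simpler, but every consumer of benefit amounts now depends on application code like this being correct and consistently applied.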
Flexibility of the CS

The 'environment' dimension impacts only a small part of the CS. Flexibility in the 'adaptability' dimension is evident from the many entity eliminations and the few relationships being redirected. In this case, 'timeliness' is less relevant, as the change in perception carries no significant indication of urgency. But notice how the shift of the TRAIL PREMIUM/REDUCTION-to-PARTICIPATION TRAIL relationship is in anticipation of a change to be made in the next version.
EXPERIENCES

Having described the long-term evolution of this single Conceptual Schema in considerable detail, we can now look at the overall picture in order to draw conclusions. Apart from the CS as a whole, we also look at how the core constructs are represented over time.
Stability of the CS

The first and foremost observation is that the CS has successfully adapted to past changes, while its overall composition has remained relatively stable over almost half a decade. Table 1 quantifies stability by counting the changes in the evolving CS. A fast expansion is seen from January 1996 to July 1997; this period roughly corresponds to the innovation of the pension scheme for early retirement as the CS change driver. After that, additions and deletions are more evenly balanced. During this time of relative quiet, major business developments take place, but these are accommodated without the CS expanding very much.
Table 1: Number of Changes in the Evolving CS

                    |            entity                  |          relationship
CS version          | count    addition  deletion change | count    addition  deletion change
January 1996        | 9 (+2)   0         0        0      | 11 (+2)  0         0        0
October 1996        | 16 (+2)  7         0        0      | 23 (+2)  12        0        0
July 1997           | 23 (+3)  7 (+1)    0        0      | 36 (+3)  13 (+1)   0        1
February 1999       | 24 (+8)  1 (+5)    0        0      | 38 (+8)  2 (+5)    0        3
September 1999      | 23 (+5)  4 (+3)    5 (+6)   0      | 33 (+5)  3         8 (+3)   6

Numbers in parentheses in 'entity' columns indicate specializations. Numbers in parentheses in 'relationship' columns indicate the corresponding specialization-to-generalization injective relationships.
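Addition and deletion counts of the kind reported in Table 1 can be reproduced mechanically by diffing the entity sets of consecutive CS versions. A sketch (the entity names are taken from Figures 3 and 4; the diff routine itself is our own):

```python
# Sketch: derive addition/deletion counts by diffing two consecutive
# CS versions, each represented as a set of entity names.
def diff(old: set[str], new: set[str]) -> tuple[set[str], set[str]]:
    """Return (additions, deletions) between consecutive CS versions."""
    return new - old, old - new

# Entities of the January 1996 version (specializations not counted).
jan_1996 = {"PRODUCT", "CUSTOMER", "INSURED PARTY", "POLICY", "PARTICIPATION",
            "PARTICIPATION TRAIL", "TRAIL PREMIUM/REDUCTION", "BENEFIT",
            "BENEFIT PREMIUM/REDUCTION"}

# October 1996 adds the early-retirement and exchange entities (groups A-D).
oct_1996 = jan_1996 | {"EARLY-RET. BENEFIT LEVEL 1", "EARLY-RET. BENEFIT LEVEL 2",
                       "EARLY-RET. BENEFIT LEVEL 3", "TRAIL FOR LEVEL 3",
                       "PARTICIPATION TRAIL (EARLY-RET. LEVEL 2)",
                       "EXCHANGED E.R. BENEFIT",
                       "BENEFIT OBTAINED BY E.R. EXCHANGE"}

added, deleted = diff(jan_1996, oct_1996)
print(len(added), len(deleted))  # 7 0
```

In the same spirit, relationship counts can be derived by diffing sets of (member, owner) pairs per version.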
Stability of Concepts

When inspecting each entity of the CS from the viewpoint of users, we find that real-world semantics, once modelled into the CS, doesn't evolve that much. Indeed, every entity retains its 'structure' or 'semantic content' over time, and its relationships to other entities also remain stable, as evidenced by their fixed locations in the CS diagrams. At the same time, most entities change in some way or another, the two exceptions being the CUSTOMER and BENEFIT PREMIUM/REDUCTION entities.

While relationships to higher entities are next to immutable, relationships with lower entities are much more volatile. An entity can aggregate one or two underlying entities at one time, but six or seven at another. This reflects a common practice in maintenance: when adding new entities to a CS, it is easier to create relationships to existing entities than from them. The latter operation has immediate consequences for referential integrity, and it is usually avoided.

This points at an important user requirement that is often left unspoken in change requests: the demand for compatibility, i.e., to safeguard prior investments in existing CS constructs and applications, and to keep these operable in the new design. A more outspoken form of compatibility is extensibility, where existing constructs may not be altered and changes may be implemented as additions only. The latter form is seen in several CS changes, but not all of them. The compatibility demand constitutes a strong drive towards stability, but it also restrains maintenance engineers in their search for problem solutions. They must often produce a design solution with some flaw in it, resulting in loss of quality in the long run.
Contributions of the Modelling Guidelines to CS Stability

We introduced four modelling guidelines that are relevant for engineers working on this case. We now discuss how each has affected overall CS stability.
• Single unified view, and therefore single point of maintenance. This guideline was well adhered to. Two or three other information systems have been operated in the Benefit Administration section, but their conceptual schemas were in full agreement with the single unified view expressed by our CS. All CS changes could be defined once, and propagation of these changes to the minor systems was almost trivial on all occasions. While the guideline as such doesn't address stability, it has contributed to stability by minimizing schematic discrepancies.
• High level of generalization, and therefore low maintenance. This guideline hasn't been adhered to very well. One reason is that business pressures for quick implementations often override long-term intangible goals such as this. But we think there is another reason. Consider the phenomenon of benefit exchange as introduced in 1996. At the time, it was unknown that this facility would be generalized from Early-Retirement benefits to other kinds of benefits as well. The guideline doesn't help to determine what the 'best' or 'essential' generalization is. As a result, the guideline is impractical for business usage.
• Snapshot-Data if possible, Historical-Data where obligatory. This guideline's contribution to stability is uncertain. It has primarily affected various "timestamp" attributes of entities, but on the level of the overall CS, no effects of the guideline on entities or relationships were detected.
• Representing derived data in the CS. As the guideline itself wasn't stable, it is no surprise that it hasn't contributed to stability at all.
From this brief analysis, we conclude that CS stability can't be achieved by relying on modelling guidelines alone (Schuette and Rotthowe, 1998).
Even if the guidelines are based on sound state-of-the-art theoretical arguments, they may still change over time, or prove too impractical. Or business developments may take off in a direction that isn't covered by the guidelines.
Dimensions of Flexibility of the CS

We outlined how flexibility can be assessed by considering 'environment', 'timeliness', and 'adaptability'. Table 2 summarizes our findings regarding the CS changes along these three dimensions.

As to the environment dimension, approximately half of the 16 changes in the CS could be labeled as 'justified'. The business changes have clearly been accommodated into the CS by an incremental maintenance approach, taking care that current data and applications aren't disturbed. The other changes were driven either by changes in modelling guidelines, or by maintenance considerations that don't derive from the changing UoD at all, such as error correction or changes in anticipation of future developments.

For timeliness, the UoD and CS display a joint evolution, but the timeframes of their changes don't always coincide. Sometimes there is advance warning of an upcoming UoD change and the CS can be prepared in advance. Strictly speaking, the CS then models not only the current UoD but a future UoD as well. That this way of working is not without risk is illustrated by changes (G) and (M), where designers added entities to the CS because of a predicted need, only to have them eliminated again some time later. Apparently, there is a penalty to be paid when proactive maintenance goes awry. On one occasion (E), the desired change exceeded the capacity for change and had to be postponed. And some kinds of change drivers pose no timeframe at all: the changes in modelling guidelines were accommodated in the CS as opportunity presented itself.

As to adaptability, the case shows how the demand for compatibility has a negative effect on simplicity. Semantically similar structures like BENEFIT ('regular', E.R. LEVEL 1, etc.) are added (changes (A), (B), (C)) and remain in the CS for years. These entities (but not their data content!) could have been generalized, but weren't.
As a result, maintenance that would apply on the level of the generalization must now be done on each separate specialization, which is a constant source of
Table 2: Findings Regarding CS Changes for the Three Dimensions

change in the CS                                   | environment                       | timeliness            | adaptability
(A) E.R. BENEFIT LEVEL 1 added                     | by change in UoD                  | yes, in advance       | equal
(B) E.R. BENEFIT LEVEL 2 added                     | by change in UoD                  | yes, in advance       | increases
(C) E.R. BENEFIT LEVEL 3 added                     | by change in UoD                  | yes, in advance       | increases
(D) E.R. EXCHANGE added                            | by change in UoD                  | yes, in advance       | increases
(E) SEPARATION inserted                            | by change in UoD                  | yes, but belated      | increases
(F) PARTICIPATION TRAIL FOR BENEFIT inserted       | by change in modelling guidelines | "opportunistic"       | increases
(G) PARTICIPATION TRAIL (NEW) inserted             | unjustified                       | N/A (in anticipation) | increases
(H) BENEFIT relation to PARTICIPATION corrected    | unjustified                       | N/A                   | equal
(I) EXCHANGE generalizes E.R. EXCHANGE             | by change in UoD                  | yes, in advance       | equal
(J) POLICY ATTRIBUTE added                         | unjustified                       | N/A                   | equal
(K) CONTRACT etc. introduced                       | by changing perception of the UoD | yes                   | equal
(L) PARTICIPATION TRAILS eliminated                | by change in modelling guidelines | "opportunistic"       | decreases
(M) PARTICIPATION TRAILS (NEW) eliminated          | by change in modelling guidelines | "opportunistic"       | decreases
(N) subsumed EXCHANGED-E.R.-BENEFIT dropped        | by change in modelling guidelines | "opportunistic"       | decreases
(O) BENEFIT OBTAINED BY EXCHANGE relation shifted  | unjustified                       | N/A                   | equal
(P) TRAIL PREMIUM / REDUCTION relation shifted     | unjustified                       | N/A                   | equal

'Environment' concerns how the CS change was justified. 'Timeliness' expresses whether the CS change was committed in the correct timeframe; "opportunistic" indicates that no definite timeframe applies, and "N/A" means not applicable. 'Adaptability' indicates overall complexity of the CS.
duplicate maintenance. At the same time, we have learned that the notion of CS simplicity or 'understandability' isn't as clear-cut as it may seem. System engineers and the user community grow accustomed to the overall picture, and come to understand its complex constructions. As a result, they will use the existing CS as their yardstick for simplicity: new CS proposals are measured by their difference from the familiar schema, rather than by the quality of the new CS by itself.

As they evolve over time, the overall structure of the CS and the semantics of core concepts remain relatively stable. An initial decline in schema quality is observed, as overall size and complexity increase while the level of integration, understandability and consistency decrease. Later on, schema quality remains at a constant level. An important finding is that changes in the CS aren't driven only by dominant changes in the UoD. The actual CS reflects not only the 'objective' user requirements, but also the current modelling guidelines and the subjective perceptions of maintenance engineers.
It is widely recognized that there is no 'best possible' CS once and for all. The case study suggests several corollaries. First, the case demonstrates that not only the evolving UoD acts as a change driver for changes in the CS, implying that any CS captures more than just UoD features. Second, whenever a CS is being changed, the impact of change is always kept to a minimum, in response to an implicit user demand. The case also brings out the mismatch between changes in CS semantics and the elementary changes that are provided by the (relational or Object-Oriented) data model in use. Some elementary changes, such as the simple addition or deletion of an entity, or the redirection of a relationship, are rarely seen in our case study. Most UoD change drivers cause a series of coherent changes in a whole group of entities. In view of this, we think that the demand that 'every aspect of the requirements appears only once in the schema', as formulated by Batini, Ceri and Navathe (1992, p. 140), needs revisiting. Finally, the case shows how suboptimal solutions tend to stick around: there is no drive to make a CS any better if it works well enough. The combined effect of these corollaries is that the maintenance engineer is kept from realizing a 'best possible' Conceptual Schema. Of course, a single case study isn't enough to base generally valid conclusions on. But we think that our experiences touch upon several serious problems in conceptual modelling that are in want of further research.
CURRENT CHALLENGES

Our case study has covered the period 1996-1999, and we have demonstrated how the CS has successfully accommodated the ongoing changes in its environment so far. Of course, developments haven't come to a standstill since 1999, and the CS continues to evolve in order to accommodate the changes in its environment. To name a few:
• Increasing differentiation of pension schemes across market segments and employers, instead of being uniform. Some differentiation was expected; it is why CONTRACT and PRODUCT-IN-CONTRACT were introduced in the first place. But the full impact of the change hasn't been realized yet. The real challenge is to keep track of historic data, and to do the benefit calculations according to the correct pension scheme while both the pension scheme and the participation in the scheme are changing over time.
• New variants of old-age and early-retirement pension benefits that allow more customer options. Some suggested options are voluntary participation; an arbitrary amount of yearly contribution; or a choice of investment funds with different risk and performance profiles. The business function of benefit administration is bound to become more complex as a result.
• Integration of business functions and data usage with the Claims-and-Payments departments downstream in the information value chain. The old way of working was to simply transfer all relevant data across an automated interface. While it has clearly been inefficient all along, it was satisfactory. But now the information systems of the Claims and Payments departments are approaching the end of their life cycle. The target is to merge the legacy information systems into the 'Integrated Benefit Administration' system, expanding its 'single global view' ever more.
• Conversion to the new Euro currency. It calls for extensive changes in information systems. All "amount" and "value" attributes in the database must be adjusted: both current and historic data.
The impact on applications is even larger: all calculations and derivation rules must be checked for known and unexpected currency dependencies. For instance, many cut-off values hardcoded into the software are currency-dependent.
• A final challenge facing the Pension company is the drive to ‘go online’, and deliver benefit data to the customer over the web, whenever and wherever.
The Pension company considers the overall quality and flexibility of the CS to be high enough for it to remain operative for years to come. The challenges present both the drive and the opportunity for continued CS maintenance, but how this will affect overall composition, schema quality and level of integration is a subject for further research.
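The Euro conversion task described above can be sketched in a few lines. The record layout and field names below are invented for illustration; the NLG-to-EUR rate of 2.20371 is the official fixed conversion rate for the Dutch guilder.

```python
# Sketch of the Euro conversion: every "amount"/"value" attribute,
# current and historic alike, is divided by the fixed conversion rate.
# Record layout and field names are hypothetical.
NLG_PER_EUR = 2.20371  # official fixed NLG -> EUR rate

def nlg_to_eur(amount_nlg):
    """Convert a guilder amount to euros, rounded to whole cents."""
    return round(amount_nlg / NLG_PER_EUR, 2)

def convert_records(records, money_fields):
    """Return copies of the records with all monetary fields converted."""
    return [
        {k: (nlg_to_eur(v) if k in money_fields else v) for k, v in rec.items()}
        for rec in records
    ]

benefits = [
    {"participant": 4711, "year": 1998, "amount": 25000.00},  # historic row
    {"participant": 4711, "year": 2001, "amount": 26500.00},  # current row
]
in_euro = convert_records(benefits, money_fields={"amount"})
```

Note that this only covers the stored data; as the text stresses, hardcoded cut-off values and derivation rules in application code need the same scrutiny.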
Long-Term Evolution of a Conceptual Schema at a Life Insurance Company
FURTHER READING
This case study, describing ‘real-life’ experiences with an evolving schema in an existing organization, demonstrates how several areas of conceptual modelling that current literature often approaches as independent and unrelated intermingle in practice. A general framework for the aspect of CS flexibility is developed in Wedemeijer (2001). For best practices in design, i.e., how to achieve the required quality level of the CS, one can best turn to textbooks, and it is not necessarily the latest that is best: we find Teorey (1994), Blaha and Premerlani (1998) and Elmasri and Navathe (2000) useful. Our case study came across the difficulty of recognizing similar concepts and having them merged in the CS. This is the problem of schematic discrepancies, as studied by Sheth and Kashyap (1992). Elementary transformations at the level of the Conceptual Schema have been described in Ewald and Orlowska (1993) and Batini, Di Battista and Santucci (1993). An investigation of how elementary transformations at the schema level propagate to the level of data instances is found in Lerner and Habermann (1990). The first case study of evolving systems to become widely known is Belady and Lehman (1976). A longitudinal study of an evolving Internal Schema has been reported by Sjøberg (1993). The handling of derived data in business processing is discussed in Redman (1996), which develops the concept of the information chain, as can be recognized in our case study. A taxonomy for derived data at the level of the Conceptual Schema is developed in Wedemeijer (2000). An important assumption underlying our approach is that the CS documentation is a faithful description of the operational database structure. In other words, we assume that the Internal Schema and the data stored in the DBMS are in full agreement with the Conceptual Schema.
This assumption need not always be true, as has been noticed by several authors engaged in Reverse Database Engineering (Winans and Davis, 1991; Hainaut et al., 1996). The wholesome effect of good user documentation is studied in Gemoets and Mahmood (1990).
REFERENCES FOR THE FURTHER READING SECTION
Batini C., Di Battista G. and Santucci G. (1993). Structuring primitives for a Dictionary of ER Data Schemas, IEEE Transactions on Software Engineering 19(4), 344-365.
Belady L.A. and Lehman M.M. (1976). A model of large program development, IBM Systems Journal 15(3), 225-252.
Blaha M. and Premerlani W. (1998). Object-Oriented Modeling and Design for Database Applications, Prentice Hall, Upper Saddle River, NJ.
Ewald C.A. and Orlowska M.E. (1993). A Procedural Approach to Schema Evolution, International Conference on Advanced Information Systems Engineering CAiSE’93, Paris, France, Springer Verlag series LNCS 685, 22-38.
Elmasri R. and Navathe S.B. (2000). Fundamentals of Database Systems, third edition, Addison-Wesley Longman Incorporated.
Gemoets L.A. and Mahmood M.A. (1990). Effect of the Quality of User Documentation on User Satisfaction with Information Systems, Information & Management 18(1), 47-54.
Hainaut J.-L., Henrard J., Hick J.-M., Roland D. and Englebert V. (1996). Database Design Recovery, CAiSE ’96 Advanced Information Systems Engineering, Springer Verlag series LNCS 1080, 272-300.
Lerner B.S. and Habermann A.N. (1990). Beyond schema evolution to database reorganization, Proceedings of the International Conference on OO Programming, Systems, Languages, and Applications, SIGPLAN Notices 25(10), 67-76.
Redman T.C. (1996). Data Quality for the Information Age, Artech House Publishing, Boston.
Sheth A.P. and Kashyap V. (1992). So Far (Schematically) yet So Near (Semantically), Proceedings of the IFIP Working Group 2.6 DS-5, 272-301.
Sjøberg D. (1993). Quantifying Schema Evolution, Information & Software Technology 35(1), 35-44.
Teorey T.J. (1994). Database Modeling & Design: The Fundamental Principles, second edition, Morgan Kaufmann Publishers Inc.
Winans J. and Davis K.H. (1991). Software Reverse Engineering from a currently existing IMS database to an E-R model, ER’91 Entity-Relationship Approach, 333-348.
Wedemeijer L. (2001). Defining metrics for Conceptual Schema Evolution, Proceedings of the International Conference on Data Evolution and Meta Modelling, Springer Verlag series LNCS, to appear.
Wedemeijer L. (2000). Derived data reduce stability of the Conceptual Schema, Proceedings of the 12th International Conference Intersymp2000, Lasker G.E. and Gerhardt W. (eds.), International Institute for Advanced Studies in Systems Research and Cybernetics, 101-108.
REFERENCES
Batini C., Ceri S. and Navathe S.B. (1992). Conceptual Database Design: An Entity-Relationship Approach, Benjamin/Cummings Publishing Company, CA.
Kahn H.J. and Filer N.P. (2000). Supporting the Maintenance and Evolution of Information Models, Proceedings of the IRMA-2000 International Conference, Hershey: Idea Group Publishing, 888-890.
Kesh S. (1995). Evaluating the quality of Entity Relationship Models, Information & Software Technology 37, 681-689.
Lindland O.I., Sindre G. and Sølvberg A. (1994). Understanding quality in conceptual modeling, IEEE Software, 42-49.
Moody D.L. (2000). Strategies for Improving Quality of Entity Relationship Models: A ‘Toolkit’ for Practitioners, Proceedings of the IRMA-2000 International Conference, Idea Group Publishing, 1043-1045.
Schuette R. and Rotthowe T. (1998). The Guidelines of Modelling: an approach to Enhance the Quality in Information Models, Proceedings of the 17th Entity-Relationship Approach Conference.
Shoval P. and Shiran S. (1997). Entity-relationship and object-oriented data modeling: an experimental comparison of design quality, Data & Knowledge Engineering 21, 297-315.
BIOGRAPHICAL SKETCH Lex Wedemeijer received an M.Sc. degree in pure mathematics from the State University of Groningen, the Netherlands. He works as Information Architect at ABP Netherlands. Before coming to ABP, he was project manager in systems engineering with the Dutch Royal Mail company. His interests include data administration, database modelling, business process redesign and design methodologies, and quality assurance. He is currently engaged in developing and implementing the unified Corporate Information Model for ABP.
Incentives and Knowledge Mismatch: The Deemed Failure of a BPR Project in a Large Banking Organisation
Parthasarathi Banerjee
NISTADS, India
EXECUTIVE SUMMARY
A large public bank ‘B’, in an economy now in transition to liberalization, attempted to reengineer its structure and business processes. ‘B’ has a large branch-based structure for acquiring local savings, where banking processes add little value. Value is added at the head office, through bank-based financial operations and through providing credit to industry. Appreciating that competition was sharpening, two successive chairmen and a few senior managers initiated change management. However, they could not decide which of structure, business process, strategy and technology was the driver of change, nor what sequence of changes would yield the best outcome. A consultant was appointed. However, negotiations on change management between the stakeholders and the consultant resulted in a tacit opportunistic alliance. An apparently loaded report on change resulted in only minor changes. BPR failed because processes remained unrecognized, and technology, instead of hastening change, turned out to be a new instrument of monitoring.
BACKGROUND
Indian banks have remained under strict government control since 1969, the year of nationalization. The process of liberalization, followed by a partial globalization, began around the mid-eighties and took up first industry and then finance and banking. Public banks, with large numbers of employees and a huge burden of lost assets, were under the control of several regulators. Foreign and private banks did not have these problems, and they were becoming increasingly competitive. Moreover, a liberalized economy began demanding that the public banks take up an appropriate new developmental role. The bank ‘B’ studied here was nationalized in that same year and, like other banks, it had a board with a chairman and other members nominated by the government. Several arms of government, for example the Reserve Bank of India (RBI), the Indian Banks Association (IBA) and the Ministry of Finance (MoF), have maintained control over the bank. Moreover, an industry-wide employees’ union has exercised a large degree of control over both personnel management and general management, through industry-wide negotiations as well as
through representation on the board. Banking policies, expansion of the bank, personnel policies, modernization and technology use, and even the strategy and structure of ‘B’ thus continued to languish under these controls. However, particularly since 1991, the system of controls over banks, and over ‘B’, began weakening. ‘B’ too began preparing itself to manage the changes; however, its culture and managerial system were burdened with considerable dead weight. ‘B’ was set up in 1906 by a group of businessmen of Mumbai, a city known for its business communities and for its traditional skill in handling banking and finance. British colonial rule over India prevented, through a large number of injunctions, the free growth of banks, and ‘B’ learned over several decades how to bypass and cope with preventive rules. Its structure built within itself a mechanism, and a culture as it were, for overcoming regimes of external control. This managerial structure indeed helped it overcome again, to a large degree, the control regime of the post-1969 nationalization period. Over the years the bank grew in two strategic areas: retail ‘branch-based’ banking, carried out through a large number of dispersed branches, and non-retail ‘bank-based’ banking in several financial instruments, carried out through the head office. With nearly 2,500 domestic branches, it has a large network across cities and the countryside, though most of its branches are in western India, the region also known for its business and industry. ‘B’ compares very favorably with the best and largest banks of India; for example, deposits with ‘B’ grew at 23.04% during 1997-98, while three other large nationalized banks had growth rates of 21.63%, 14.18% and 18.94%. Advances made by ‘B’ grew during the same period at 20.09%, while these three banks had advances growing at 19.79%, 14.05% and 15.59% respectively.
Growth in operating profit during 1997-98 was 20.2% for ‘B’, and it had a similar figure during 1996-97, though there was a fall in 1998-99. Only two other banks in India experienced growth in operating profit in the range of 10%. Some of the key financial ratios of ‘B’ (see Table 1) and the comparative performance of ‘B’ vis-à-vis two other large public sector banks as well as the entire banking system (see Table 2) indicate its viable and positive performance during this period.

Table 1: Key Financial Ratios of ‘B’ (figures in percent)

  Parameter                              1996-97   1997-98
  Return on average assets                  1.01      0.86
  Yield on advances                        13.17     12.20
  Average cost of deposits                  7.50      6.90
  Non-interest income to net income        30.06     31.10
  Staff cost to average working funds       2.01      1.83
Table 2: Comparative Performance of ‘B’ Deposit – 1997 Deposit – 1998 % Growth in deposit Advances – 1997 Advances – 1998 % Growth in advances Net profit, 1997-98 Figures in Rs. Million
Bank ‘B’ 319730 393390 23.04 183370 220200 20.09 3650
Bank 1 321370 391260 21.63 165230 198030 19.79 4590
Bank 2 308060 351740 14.18 140670 160430 14.05 5010
Banking system 5055990 6013480 18.94 2784010 3218130 15.59
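The growth percentages in Table 2 follow directly from the deposit and advance figures; a small check for Bank ‘B’ and the banking system:

```python
# Recompute Table 2's growth rates from the reported figures (Rs. million).
def pct_growth(old, new):
    """Percentage growth from old to new, rounded to two decimals."""
    return round((new - old) / old * 100, 2)

# (1997 figure, 1998 figure) per Table 2
deposits = {"Bank 'B'": (319730, 393390), "Banking system": (5055990, 6013480)}
advances = {"Bank 'B'": (183370, 220200), "Banking system": (2784010, 3218130)}

for name in deposits:
    print(f"{name}: deposits {pct_growth(*deposits[name])}%, "
          f"advances {pct_growth(*advances[name])}%")
```

This reproduces the 23.04% and 20.09% growth reported for ‘B’, and the 18.94% and 15.59% reported for the banking system as a whole.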
Incentives and Knowledge Mismatch
299
Typically, a nationalized bank would have to direct, through branches mandatorily located in rural areas, a minimum percentage of its total credit disbursement to the government-defined ‘priority sector’, which includes rural lending, lending to small industries, etc. The structure of a bank thus suffers from two types of constraint: first on credit management and second on branch-based competition. Mandatory opening of branches by several banks operating in the same region and offering identical services has rendered branch-based competition hazy. To define competition even in such settings, ‘B’ looked for novelty and opened five agricultural hi-tech branches and 29 branches exclusively for the small scale sector (see Figure 1). Such branch-based specialization opened up possibilities for ‘B’ to develop a reputation and brand name. Typical branch structures are revealed if we consider that the entire south India region has, out of a total of 53 branches (and two extension centers), 16 in metropolitan, 14 in small urban, 5 in semi-urban and 18 in rural areas; the size distribution was 1 super-large, 4 very large, 6 large, 29 medium and 13 small. The number of loss-making branches was 11 in 1994-95, 6 in 1995-96 and 6 in 1996-97. Looking at the sector-wise credit flow for the entire south India area in 1996-97, direct agriculture had an 11.76% share, indirect agriculture 16.98%, small scale industry 46.43%, other priority sectors 20.69%, and commercial and institutional credit (C&IC, for industry) a 57.64% share in total credit. Business profit is earned mainly from two business processes, namely credit to C&IC and non-retail financial operations, both undertaken at the head office. Such structural limits conditioned the strategies and culture of ‘B’, while at the same time imparting certain peculiarly authentic business skills to its employees.
Branch-based banking developed retail skills, and employees acquired knowledge of retail saving behavior. Through bank-based C&IC credit and financial operations undertaken from the head office, employees learned the skills of financing industry. The former skill cannot be offered by a foreign bank or by a bank engaged in financial areas alone. The branch-based and bank-based banking processes of ‘B’ have so far remained complementary. ‘B’, moreover, was saddled with the burden of huge lost assets (NPAs, non-performing assets), a large workforce and the lack of a business thrust. ‘B’ was in search of a strategy appropriate to a public bank in a liberalized economy. No wonder that the board and the managers of ‘B’ were under compulsion to transform business processes and banking strategies. In this period of transition from a strictly governed nationalized bank to a liberalized and soon-to-be globalized bank, the board of ‘B’ and its senior managers thus took stock of these assets of skills and the culture of business.
Conditioned Structure and Business Processes
Transforming a structure conditioned by constraints imposed from outside is difficult indeed. These structural constraints also imposed limits and deformations on the business process. ‘B’ shares similar branch-based and bank-based banking processes with other public banks.

Figure 1: New solutions to branch-based banking. There are 79 specialised branches: 9 recovery branches, 4 capital market branches, 29 SSI branches, 1 lease finance branch, 8 corporate banking branches, 1 housing finance branch, 5 NRI branches, 5 agriculture hi-tech branches, 7 overseas branches and 10 commercial & personal banking branches.

Near
uniformity across competing banks in detailed business processes, such as credit disbursal, retail banking and personnel policy, implied that success would depend on the quality of the intangibles in a business process. A bank competes with other banks or other financial agents either through its branches reaching out to customers or through head-office-based operational skills in financial instruments. The former can be called a ‘branch-based’, spatial, retail-customer-centric competition, and the latter a strategy-driven, financial-skill-dependent ‘bank-based’ competition undertaken primarily from the head office. These two processes are often not linked, except that funds raised through branch-based processes acquire value addition through bank-based processes. The former is employee-intensive and primarily non-value-adding, while the latter is potentially technology-intensive and adds most of the value.
Processes to Mop Up Resources at the Branch
‘B’ had to open branches in unprofitable locations or in locations with a high density of bank branches. In 1969, the year of nationalization of banks, an average branch served 69,000 people; by 1990 this figure had dropped to 12,000. Typically, a bank would thus be burdened with a very large number of small deposit accounts whose operational costs surpass any possible profits from the deposits in such accounts. In 1990 the total banking system had over 300 million such deposit accounts. ‘B’, for example, cannot keep track of such accounts: it has more than 12,000,000 advance accounts (for credit) whose value is less than a meager Rs. 25,000 (equivalent to US$550), about 7,000,000 advance accounts in the range of Rs. 25,000 to 200,000 (US$4,400), and about 75,000 such accounts above Rs. 200,000. Typically, a branch serving retail customers receives deposits in three categories: savings, constituting about 12% of total deposits; current (mainly for business), about the same; and term deposits, about 75%.

Table 3: Interest and Non-Interest Income Profile of a Region of ‘B’ (figures in Rs. 10 million)

  Operating results                               1994    1995    1996
  Total interest income                          24.06   24.01   32.68
  Total other income                              2.62    3.51    4.17
  Total expenditure as interest paid             20.66   22.08   25.28
  Total expenditure as establishment expenses     7.00    7.32   12.90
  Total expenditure as other expenses             1.00    1.77     N/A
  Total earning through transfer price           12.56   24.78   25.02
  Profit (including head office interest)         4.95   23.40   27.20
  Net profit for the year                        10.58   21.13   23.69

Part of a deposit in a branch would be lent through the zonal/regional special credit divisions
Table 4: Comparative Performance of Branches of a Region of ‘B’ (figures in Rs. 10 million)

  Deposit/Credit                        1994     1995     1996
  Metropolitan deposit collection     184.96   230.87   266.79
  Urban deposit collection             61.54    69.70    73.38
  Semi-urban deposit collection        15.41    15.27    17.91
  Rural deposit collection             24.22    31.20    37.72
  Total credit disbursed              178.85   219.71   274.76
  Credit to commercial (C&IC) sector  103.88   123.19  1588.36
and the rest would be transferred, through a transfer price mechanism, to the head office, which, while disbursing credit to profit-earning businesses, transfers part of the profit back to the region. A region would in general disburse about 65% of its deposits as credit in its region, out of which only about 20% (about 12% of total deposits) would be disbursed to profit-earning businesses; the rest of the credit goes to several weaker sections. As a result, almost 90% of the income of a region flows from ‘interest income’ (the non-value-added category), and only about 10% from non-interest income (the value-added category) (see Table 3). Business processes in a branch or a region thus strive to increase the interest-income category, and skills for deposit mobilization get developed in the branches. The skill of ‘B’ in mopping up savings is well distributed over all kinds of populations, from the metropolitan areas to the rural (see Table 4).
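The regional fund flow just described can be followed with simple arithmetic; the 65% and 20% shares are the approximate figures quoted in the text:

```python
# Rough arithmetic for a region's fund flow, normalising deposits to 100.
# The shares (65% lent locally, ~20% of that to profit-earning business)
# are the approximate figures quoted in the case.
deposits = 100.0
regional_credit = 0.65 * deposits            # lent as credit within the region
profit_earning = 0.20 * regional_credit      # reaches profit-earning business
weaker_sections = regional_credit - profit_earning

print(f"share of deposits reaching profit-earning business: "
      f"{profit_earning / deposits:.0%}")
```

That is, roughly 13 out of every 100 rupees of deposits, matching the text's "about 12% of total deposits"; the rest of the regional credit goes to the weaker sections, which is why interest income dominates a region's earnings.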
Lack of Value-Adding Specialization
However, there are other limitations imposed by the regulators. Until recently, all banks could only offer the same rate of interest. Moreover, a bank had little option to specialize in certain areas of deposit mobilization, in specialized credit disbursal, or in other forms of business such as merchant banking. The result was that inter-bank and inter-branch competition was hazy. Branch-based banking in particular could acquire specialization only along sectors, such as agriculture, but was prohibited from developing skills along business processes, such as credit management. Value-adding skills in finance, such as hedging, commodity trading, derivatives, options and futures, were missing even in the head office of ‘B’. The structure of ‘B’, with nearly 25,000 employees (see Figure 2), has been hierarchic and regional/spatial as well as functional; it has not been based on business processes. It could not be made less hierarchic, or in the extreme case ‘flat’, for several reasons, the primary one being culture. Incentives to an employee in ‘B’ have never been special pecuniary rewards (these are not legally permissible); the incentive has remained promotion across a long chain of seniority. If the career movements are lost, as happened in some foreign banks with unenviable consequences (Scott & Walsham, 1999), the incentive system breaks down.
SETTING THE STAGE
It remained a puzzle to the senior managers how to restructure the system of incentives in place. It was clear to them that the personal initiative of the middle-level managers had to be unleashed. Indian banking has remained high cost: the average operating cost of Indian banks as a percentage of assets was about 2.3 from 1990-91 to 1995-96, compared to 1.1 in China, 1.6 in Malaysia and 1.0 in
Figure 2: Structure of ‘B’: a hierarchy running from the Head Office (H.O.) through Zonal Offices (Z.O., 16 in total) and Regional Offices (R.O., 64 in total) down to the branches. The figure notes branches handling 35% of total credit; special branches include 8 corporate banking business branches and 300 personal banking business branches; retail branches total about 2,500.
Japan. The Reserve Bank of India Governor commented: “operating costs depend on labour productivity, technology, innovation and organizational effectiveness... without gaining sufficient advantage in this respect, it is difficult to think of a significant improvement in the banking system in future” (Jalan, 1999: 16). The management of ‘B’ had appreciated the need for change much earlier, beginning in 1995. The then chairman of ‘B’, Mr. Anil, knew that in change management the sequence matters most. The question was: what was the driver of change, and how could the sequence of transformations be arranged to effect the best desired outcome? A latent problem remained unquestioned, however. Successive chairmen of ‘B’ have served for an average period of one year, and a maximum of two; often, persons on the verge of retirement get this highest executive position. The board of ‘B’ also changes its composition rather frequently, at the discretion of the regulatory ministry. Middle and senior managers are regularly transferred, often within about a year, to a different functional area in another location. Long-term continuity of board-level policy, or of the directions laid out by the chairman and their continuation through the efforts of senior and middle-level managers, has remained elusive. There could not, then, have been an emphasis on individual initiative. Zeal for change would have to be inculcated in the middle and senior management through some other, non-incentive instrument. Nor could one overlook the long and complex structure of rules, which had substituted strict rule-following for individual predisposition.
Change Management Through Broad Strokes
The first initiative towards change management was taken by B’s chairman, Mr. Anil, during 1995; people attribute a turnaround in the performance of ‘B’ to his leadership. He conducted a SWOT analysis, the results of which were shared with a large number of staff members. He in fact took into confidence the association of the officers and the union of the employees, both of whom soon became active agents of transformation. A team of senior managers under his leadership formulated a turnaround strategy (the bank had been in the red two years earlier) consisting of four aspects: transformation of the organizational structure, introduction of novel organizational processes, restoration of financial health and induction of customer orientation. Mr. Anil and the managers thus did not attempt to find a single, singularly effective change instrument, such as ‘incentives reengineering’. Instead, a sweeping move, though with little clarity regarding strategy and business processes, was unleashed. Results appeared soon: ‘B’ posted profits of Rs. 500 million and Rs. 2,760 million (US$1 = Rs. 45) in the following years, 1995 and 1996. Mr. Anil created among the bank employees a vision of the future ‘B’. He bypassed several rules, set himself and other super-performers up as the ‘role’, set up performance benchmarking, and identified and then nurtured talent inside the organization by providing rewards, challenging tasks and recognition.

Table 5: Manpower and Employee Productivity of a Region of ‘B’ (figures in pure numbers)

  Manpower/Productivity                        1994    1995    1996
  Manpower: officers                            216     224     226
  Manpower: special assistants                   55      55      55
  Manpower: clerks                              555     561     544
  Manpower: sub-staff                           152     150     198
  Total manpower                                978     990    1023
  Per-employee business in deposit collection  0.29    0.35    0.39
  Per-employee business in credit advance      0.18    0.22    0.27
  Per-employee total business                  0.48    0.57    0.66

Overall, he encouraged a culture of cooperative work, risk-taking,
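The per-employee figures in Table 5 appear to be the regional business volumes of Table 4 divided by total manpower (both tables describe the same region, with business in units of Rs. 10 million); a quick cross-check:

```python
# Cross-check Table 5 against Table 4: per-employee business should equal
# regional business volume divided by total manpower for that year.
manpower = {1994: 978, 1995: 990, 1996: 1023}                 # Table 5
deposits = {1994: 184.96 + 61.54 + 15.41 + 24.22,             # Table 4 deposit rows summed
            1995: 230.87 + 69.70 + 15.27 + 31.20,
            1996: 266.79 + 73.38 + 17.91 + 37.72}
credit = {1994: 178.85, 1995: 219.71, 1996: 274.76}           # Table 4 total credit

for year, staff in manpower.items():
    per_dep = round(deposits[year] / staff, 2)
    per_cr = round(credit[year] / staff, 2)
    print(year, per_dep, per_cr)
```

This reproduces the deposit-collection figures 0.29, 0.35 and 0.39 and the credit-advance figures 0.18, 0.22 and 0.27; the 1994 total of 0.48 in Table 5 differs from 0.29 + 0.18 only by rounding.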
accountability, especially to bank customers, customer satisfaction as the end result of performance, and an innovative attitude towards ‘rules’, which, as he saw it, were to be interpreted anew but not bypassed. Customer orientation appeared as the core of the changes, though on finer analysis it appears that Mr. Anil neither understood value chains nor identified critical business processes. This set the norm of recognizing new and novel banking processes through new interpretations of those rules. This was perhaps most glaring in personnel policies. ‘B’ has remained overstaffed, and it was obvious that with new processes and restructuring of the organization there would appear serious mismatches in the skills profile. Per-employee business in a typical branch of ‘B’ remained much below even that of other developing countries, such as China (reported above). Business at a branch remains limited to deposits and advances, and from Table 5 we observe that, though at a rather slow rate, employee productivity at the branch level began appreciating from 1995 onwards. Total bank-level productivity depends, however, more on several other modes of bank-based business processes, important areas of which are treasury, foreign banking, export credit, commercial credit, etc. Branch-based processes are almost entirely interest-income dependent. To be competitive, the bank as a whole should depend more on value-adding non-interest income. So far such value addition has taken place in the head-office-centered business processes. The current question was therefore whether ‘B’ could redefine value-adding processes so as to encompass the entire length of the value chain, from the branch to the head office.
Looking back at Table 1 for ratios on ‘B’ as a whole, we observe very modest improvement in some productivity measures, such as the rise in non-interest income relative to interest income, the decrease in staff cost to average working funds, and the decrease in the average cost of deposits. A rise in branch productivity can thus be directly attributed to compliance by bank staff with higher norms of productivity, while the rise in aspects such as non-interest income relative to interest income for the overall bank is attributable more to restructuring and the induction of new value-adding business processes. Compliance by employees in the latter case is different from acquiescence to higher norms of productivity; the latter could be secured through better work management, applying standard principles of organization and method (O&M), by the team of Mr. Anil.
Culture of Risk-Taking
By 1996 there was a new chairman, Mr. Murty, perhaps even more dynamic, farsighted and less averse to taking risks. Understanding that the novel interpretation of rules initiated by Mr. Anil would not take ‘B’ far, Mr. Murty considered two alternative modes: incentives reengineering, or changing the culture of work. Reengineering of incentives, amounting to publicly declaring a new set of rules regarding novel incentives, might attract unwanted criticism and even refusal (Milgrom & Roberts, 1992). New incentives would have to be part of the written code of rules; however, many of the rules were set by the ministry, leaving little scope for changes and amendments at the individual bank level. Mr. Murty thus thought of a culture in which risks could be taken by an employee, or rules could be remodeled by the employee, towards achieving greater customer satisfaction. Such an employee would secure reward, if not within the organization then outside, in other organizations. This culture would remain unwritten and formally unannounced. The new chairman wished that culture to provide a milieu for those who wanted to ‘achieve’. An incentive system relies more on the individual propensity towards maximized self-benefit, while, as Mr. Murty understood, a culture relies more on the ‘roles’ played by individuals. A culture would encourage role differentiation and the betterment of roles on the one hand, and on the other a maximal alignment of individuals with several roles. Mr. Murty was in direct contact with debates and experiments in change management at several U.S. management schools. His close acquaintances were conducting business process reengineering (BPR) projects and value chain analyses in the U.S., and he could thus fully appreciate the knowledge gaps and incentive mismatches in his own organization. Moreover, he also knew that he would stay with ‘B’ for only about another year, by which time he would have to deliver worthy, tangible benefits so that, upon the termination of his relationship with ‘B’, he could look for opportunities as chairman of another
large and dynamic financial institution. He did not have an incentive system in place inside ‘B’ for himself, though the wider banking and financial system offered a cultural milieu for ‘doers’ such as himself. Finally, this overall milieu around 1996/97 called for examples of success in change management, and there were none in India at that time.
Sequence of Change Management Mr. Murty, however, did not have the answer to the riddle: which among ‘structure’, ‘strategy’, ‘process’, ‘technology’ and ‘individuals and roles’ –would be the best driver to effect transformation! What could be initiated first in effecting change leadership, and then how and which elements of these five aspects would form the sequence of change? Structure was saddled with regulations by outside agencies and it was beyond the powers of ‘B’ to make drastic changes in the structure. Strategy too was burdened with twin constraints – imposed banking objectives such as providing credit to the priority sector, etc., and the severe schism internal to bank structure reflected in branch-based banking versus bank-based banking. Processes as such were amiss; there were none and introduction of new cross-functional and vertical banking processes would interfere with existing structure as well as with the existing strategies. Technology for banking, though known to have been existing abroad, were not easily available in India; and moreover, serious drawbacks to technology use was put up by the absence of a communication infrastructure. Finally, individuals represented the greatest challenge since they offered both the greatest resistance to change and also the strongest agency for change. Mr. Murty and a small team of a few middle and senior managers observed, however, a common denominator to these five aspects– gaps in knowledge. These gaps can be represented on the four corners of a diamond, in which the left entries, namely ‘branch’, ‘structure’, ‘rule following’ and ‘individual’ depict the existing states of affairs in ‘B’, while the right entries, namely ‘bank’, ‘process’, ‘value maximize’ and ‘roles’ represent the desired states of affairs (see Figure 3). 
The chairman and his team understood that there were gaps in knowledge: between banking through a branch and banking through the entire bank, between knowledge of the rigidity of the existing structure and of the fluidity of the desired banking processes, and so on. Transformation of ‘B’, they felt, would involve a knowledge reengineering by which the left side of the diamond could be metamorphosed into the right side. However, as in the earlier discussion of the incentive system and Mr. Murty’s predilection for substituting a culture for the reengineering of incentives, here too Mr. Murty and his close team were undecided about the engine of change and about the sequence of knowledge reengineering needed. Could a culture provide a compromise solution to the problem of sequencing? A welcoming culture, they felt, would do away with big-bang changes: it would not require a set of new systems, such as a system of well-defined roles, a system of customer orientation that maximizes value chains, or even a set of well-defined banking processes incorporating value maximization, new roles and bank-based banking. An emphasis on culture would also assume that individual persons are the key to change management. Therefore a culture, supported and encouraged by the leadership of the chairman, would save ‘B’ from a catastrophic big-bang reengineering.
Figure 3: Diamond of Transformation. The left corners of the diamond (‘branch’, ‘structure’, ‘individual’, ‘rule-following’) depict the existing states of affairs; the right corners (‘bank’, ‘process’, ‘roles’, ‘value-maximize’) depict the desired states.
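The knowledge-gap mapping of Figure 3 can be restated as a simple lookup. The pairings below are inferred from the order in which the case lists the existing and desired states; this is an illustrative sketch, not the author's own formalization:

```python
# Illustrative only: the knowledge-reengineering targets of Figure 3,
# pairing each existing state of affairs at 'B' with its desired state.
# The pairings are inferred from the order in which the case lists them.
transformation = {
    "branch": "bank",                    # branch-based -> bank-based banking
    "structure": "process",              # rigid structure -> banking processes
    "rule-following": "value-maximize",  # rule following -> value maximization
    "individual": "roles",               # individuals -> well-defined roles
}

for existing, desired in transformation.items():
    print(f"{existing} -> {desired}")
```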
Incentives and Knowledge Mismatch
305
Such a culture-system was opportunistic as well. Mr. Murty wanted to prove his worth as a change-leader, not so much to the board and equity-holders of ‘B’ as to the stakeholders of the much larger banking-financial system (Frooman, 1999). He knew that he was to retire from ‘B’ soon, and hence his stakeholders were from the overall banking system. Soon, in 1997, he left for a prize posting with a very large public finance body, taking with him several members of his inner group of middle and senior managers. The culture, however, was too fuzzy; and since the stakeholders would appreciate a set of signals of change, Mr. Murty and his team thought of formal and ceremonial change-rituals. Mr. Murty had friends at Harvard working on BPR-related changes. He convinced the board that ‘B’ was in need of a BPR. The board consisted of a group of old professionals from audit, economics and banking, none of whom had any idea of BPR, though they were convinced that a serious transformation was needed. The board felt threatened, since the regulatory bodies had asked for major changes and set tough benchmarks on banking practices (RBI, 1991; Reddy, 1999). Signals from the private and foreign banks and from the money market made it clear that technology was leading the changes in banking practices. The board therefore approved the appointment of an international consulting firm as a consultant on BPR. With haste, and therefore without any internal preparation, Mr. Murty got the board to approve, at huge cost, the appointment of a large international consulting firm to ‘advise’ ‘B’ on strategy and on BPR. Mr. Murty left for greener pastures before the consultant submitted its ‘Report’, which arrived during the tenure of an otherwise unimpressive and short-tenured third chairman.
Technological States-of-Affairs at ‘B’
The central bank (the RBI) wanted to provide a common technological field to all domestic banks, and had been on this mission since at least the early eighties. Labor had earlier resisted any induction of computing, though with the passing years they compromised, and bank computerization proceeded piecemeal and at a halting pace. As a result, all domestic banks, including ‘B’, began with branch-based computerization and were disallowed, until about the mid-nineties, from planning information systems globally for the entire bank, with strategic objectives as the driver and with emphasis on key business areas. Some domestic software companies had by this time brought out integrated banking business solutions, perhaps better than most internationally available products. Sadly, such products could not find a place in ‘B’ and in similar public banks until the late nineties. The intervening period of computerization, from about the early eighties to about the late nineties, was based on piecemeal induction of legacy systems. Branch computerization remained un-networked. Its major features are summarized below:
1) Branch computerization was limited to functional tasks; only a few tasks were allowed to be put on computers, so that manual ledger-based entries continued alongside partial automation.
2) None of the branches understood banking as a set of processes cutting across functions and involving total banking; computers were assistants to functional islands of tasks, unrelated to processes.
3) Computerization was thus not at all targeted at bringing higher value to customers, nor at bringing the required information to the overall bank-based, centralized and profitable finance businesses; this resulted in serious weaknesses in the new profitable areas of bank-based banking.
4) Over the years, these islands spread from select branches to the regional and zonal offices and then to the head office, though they were never connected through a communication network or through business processes. These islands of deadened functionalities began establishing a ramshackle framework of a management information system, reporting generally on a quarterly basis, and often reporting not to a separate MIS department but mostly to an old-styled planning department.
306 Banerjee
5) Computerization of ‘B’, as of other public banks, could not acquire a customer focus and a customer goal in either the branch model or the bank-based model of banking, resulting in a technology-into-backyard syndrome. This also somewhat strengthened the old-styled planning, as well as the reporting on bank operations both to its own board and, as statutorily required, to the central bank, leaving the internal structural management of ‘B’ in the lurch. The system thus also failed to become an MIS.
6) Moreover, a technology that reengineers business processes and business strategies is much more than a system of hardware and associated software: it is an ethnographic tool, a live part of the working lives of employees and managers, who must employ such a set of computing tools to change the business towards achieving higher value (Christensen et al., 1998; Crabtree et al., forthcoming). The existing technology in ‘B’ was stand-alone and derelict, and could integrate neither the work processes nor the working lives or imagination of the employees and managers of ‘B’; it was thus not a systemic technology, which alone can effect a thoroughgoing change.
None of the agencies inside ‘B’, such as its chairman, senior managers or the end-users of computing (that is, middle managers and clerks), were free to import and design technology. Traditional stakeholders, represented on the board of ‘B’, had only a perception of the impending changes being brought about by technology, but were generally illiterate about it. The external stakeholders, cultivated through Mr. Murty’s cultural management, were knowledgeable about technology; sadly, they had no formal representation inside ‘B’. Technology, while available in its best form as several types of competing products, and also available as information among the public as well as with the competing private-sector banks, could not appear inside ‘B’ or in its management milieu.
Technology as such could not become the first driver of change. Beginning in the mid-eighties, ‘B’ inducted computing systems at several of its locations. The year-wise spread of computing machinery during and immediately following Mr. Murty’s tenure, and the annual rise in the amount of business conducted through these machines, are shown in Figures 4a and 4b. It appears from Figure 4 that the rise is sharper in the later years. However, the term ‘business conducted’ was used by ‘B’ primarily, it appears, for the ceremonial purposes of audit. Typically, a region or a zone would earmark part of its budget for computers and, with the support of its own EDP staff, often develop local legacy-based applications. These remained add-ons to the initial legacy applications, procured on a custom basis from external software vendors by the head office on a centralized basis. The period 1996-98 thus saw a continuation of branch automation.
Figure 4a: Number of Totally Computerized Branches (Mar-95 through a projected Mar-99)
Figure 4b: Business Covered by Computerized Branches (Mar-96 through a projected Mar-99)
CASE DESCRIPTION
The above mode of conducting the business of banking perpetuated the knowledge gaps perceived by Mr. Murty and his team as the major obstacle to transformation. The existing knowledge base of ‘B’ remained limited to the left domain of Figure 3. This knowledge, separated into islands of functional practices, does not add up to knowledge of a business process; in other words, pieces of such knowledge, even when added up, do not constitute a banking process. ‘B’ had been following a personnel policy of regularly transferring its managers across tasks on the same functional line and across jobs on multiple lines. It had also erected a long hierarchy of promotional avenues. Knowledge management implied the integration of skills and experiences, and the board of ‘B’ believed that this was accomplished through the above personnel policies. However, Mr. Murty realized that knowledge ought to relate to processes alone, and banking was but a set of processes; thus knowledge management ought to be based on processes (Braganza, Edwards & Lambert, 1999). Changing processes then appeared crucial. The change management literature has considered five factors: ‘structure’, ‘strategy’, ‘process’, ‘technology’, and ‘individual and roles’. A change or reengineering strategy can initiate the process of transformation through a key factor, the driver of change, which then takes up the other four factors in a definite sequence (Hsiao & Ormerod, 1998). In the much-discussed MIT framework of the process-centered approach (Scott-Morton, 1991), strategy is the driver, which takes up structure first, followed in parallel by both the management processes and technology (with technology sending input to the management processes); these two parallel paths then converge on individuals and roles. The Fujitsu framework (Yelton, 1994) is technology-centered.
Technology as the driver impacts first upon individuals and roles, which change structure, followed by causal changes in the management processes and finally in the strategy of the organization. These approaches, however, appear simplified and linear. A change is a series of outcomes from negotiations transacted over a long period. It involves several agencies with varied types of power and control. The initially proposed solution often takes a beating (Weerakkody, Bennett & Tagg, 1999): “...this complex nature makes the change context-dependent, and a single and all-embracing solution such as process re-engineering or organizational restructuring cannot be relied on. Therefore, there is a need to understand how organizations can manage change in a dynamic and integrative way so as to enable the negotiation of change from an existing state to a new state of equilibrium” (Hsiao & Ormerod, 1998: 28).
Negotiated Contour of Change
The international consultancy organization held rather long consultations, made frequent visits, and submitted its report after nearly a year, based, as it appears, on the MIT paradigm, though on closer inspection it appears to be a mixture of everything. The consultant presented seven ‘modules’ on strategy, organization structure, business process redesign (for credit management and for branch operations only), treasury, human resources, MIS and information technology. No wonder, given that the board had little understanding of the contemporary language of change management and that Mr. Murty was outgoing, the consultant’s appointment was treated by the board more as an event of public relations and image management than as an exercise and project of major change. The consultant was asked only to ‘suggest’ the thrusts and areas of change; it was never mandatory for them to be part of the change management process. Managing change is an act of negotiation and an act of ‘doing’; a theory or advice on it without a commitment to be part of the change activities turns out to be meaningless, and such advice fails (Weerakkody, Bennett & Tagg, 1999; Uchiyama, 1999). The background to the appointment of the consultant was interesting. Bank ‘B’ was then in need of going public to raise equity capital (Tier-1 capital). The capital market was booming, so ‘B’ did not want to lose time. At the same time there was competition, since a number of other public banks were coming out with public equity offers, and it was the first time that ‘B’ was going public. However, the profitability and other accounting figures of ‘B’ were not good enough to attract a high price (the issue was not to be at par). An image-building exercise was necessary. Mr. Murty too wanted image building, since he had yet to get the coveted position.
The consultant, with its international reputation, knew that ‘B’ was not as serious about changing over as B’s agents were about building images. The consultant found it convenient to shirk. All the agencies thus acted as opportunists, and the win-win alliance for all was too convincing. Senior managers initially felt threatened; they were apprehensive about their future in the organization, and very few of them were keen on changing the state of affairs. Apparently a compromise was reached early, and the consultant, while proposing the transfer of jobs and tasks across new functional divisions, in fact increased the number of positions for senior management. Middle managers could neither get the number of their positions enhanced nor get a faster promotion track implemented. Yet this middle group was perhaps the most active and the most zealous of all who sought change. A long career in the coming years with ‘B’, they thought, should ensure them not only a raise in pay and protection but also greater social esteem. A new ‘B’ with a strong brand presence and a definite competitive edge over others would be desirable; and so many of these managers studied the BPR literature and analyzed for themselves possible scenarios of BPR.
Consultant’s Suggestions
The consultant suggested that the key strategic services of ‘B’ should fall under corporate banking, small and medium business banking, personal banking, retail and mass-market banking, and rural and developmental banking. The consultant suggested a new functional-divisional structure (Figure 5). The suggestion was to restructure along reorganized functions that had already existed, but under a diffuse divisional structure. A matrix of the competitive position of ‘B’ versus market attractiveness (Figure 6) highlighted the strategic thrusts. It suggested that serious efforts be put into personal banking, corporate banking, and banking for medium businesses and for cooperatives. The suggested functions constituted the proposed strategy. No process defined by a value chain across functions and along the vertical line from the branch upwards was identified; strategy was identified with a reorganized functional structure. The required efforts were to be made by identifying, for the suggested areas, the existing skilled manpower, who could then be given training. ATMs, electronic banking facilities and 24-hour banking (Figure 1) were to be launched. Suggestions regarding organizational restructuring included defining clear reporting lines, ensuring that existing processes remain well controlled, roles defined and business
structures set up around target markets. Figure 5 suggests moderately different functional lines of business to be managed by respective ‘heads’. The support/control functions were again to be along functional lines managed by four heads, including a position for ‘corporate services’. ‘Corporate services’ was new and was to include the functions of planning, MIS, etc. However, the functional structure did not include any technology division or technology function. The business processes suggested for reengineering were the counter-services and cheque-clearing ‘functions’ of a branch, and credit appraisal and management. These are, however, functions, and the structure of ‘B’ was along these functional lines even prior to the consultant’s suggestions. The process redesigns, as per the suggestion, were not to involve major shakeouts in structures and existing functional lines (in fact, strengthening of reporting lines and stricter definition of roles were suggested). The consultant advised ‘B’ to reduce delays, set time-bounds, set up single-window customer service and reduce staffing. It did not, however, indicate processes of value chains.
Figure 5: Management Structure Suggested by Consultant. The board sits above the chairman, with group audit and group strategy attached. Business operations are divided among heads of corporate banking, commercial & personal banking, priority sector operations, and international banking; support, control and administration fall under a chief financial officer (CFO) and a chief credit officer (CCO).
Figure 6: Strategy and Competency as Proposed by Consultant. The matrix plots ‘B’s relative competitive position (low/medium/high) against market attractiveness for ‘B’ (low/medium/high), locating the corporate, medium business, small business, mass market, rural, personal banking and co-operatives segments. The numbered strategic thrusts were: 1. Enhance the Bank’s position in corporate banking; 2. Enhance the Bank’s capability with medium business; 3. Establish a successful personal banking operation; 4. Selective growth with co-operatives.
The branch-based and bank-based diarchy in ‘B’ was left untouched by the consultant. Mr. Murty, however, had expected the identification of business processes that would integrate the functional structures of the branch with the head office. Such a business process would also have opened up possibilities for value addition at the branch level. The two processes that the consultant did identify in fact increased the distance between the two poles.
Technology as the Ultimate Driver
Most importantly, the report did not identify the ‘driver’ of change or the sequence in which the driver would effect successive changes. It may be recalled that Mr. Murty was looking for advice on this alone. If strategy were the driver, it remained unclear how the board’s adoption of a set of strategies would translate into new processes or align the structure, that is, through what instruments the supposed driver could bring about such changes. Technology is the preferred driver for banks and financial institutions abroad. However, as discussed above, the chairman knew that the time was not ripe to adopt technology as either the driver or the major instrument. He and his team thus felt that change should be initiated with an incentives system, which would induce technology. In the succeeding period this technological-incentives structure would bring forward new technology, which would then take on the role of driver. In that case, with technology as driver, ‘B’ would be driven by the forces of the market, the supplier of technology. Mr. Murty understood that a formal incentives system could not be designed. He thus encouraged a culture of risk-taking, of role-based individual action and, consequently, of learning; the last, he knew, would not only incorporate the emergent business processes and any emergent technology but also reduce and finally erase the existing yawning knowledge gaps. They were thus looking to new technology as the ultimate driver, supported by knowledge about, and incentives for, the business processes. The consultant failed miserably on this score.
Fear of Change and Technology to Control
As often happens, uncertainty regarding change arouses fear of losing control; having understood this, the consultant suggested a rigorous and invigorated MIS while keeping the role of technology as merely an adjunct. The suggested MIS should reduce the time needed to collect information from the branches, be centralized, and collect the right information on clients, defaults, risks, etc. This centralized MIS function should be part of the finance function, while management reporting should be performed by operating divisions such as corporate, commercial and personal banking. The suggested MIS would then induce a stronger functional/divisional control structure and, remaining within the finance function, would be guided by the planning approach. The board of ‘B’ would surely like the strengthening of planning and a planned change, as would the other managers who feared loss of control and of negotiating power. Moreover, an MIS with a strong functional emphasis would, by enhancing the power of planning, reduce ‘B’s exposure to changes in the technology market. The internal agencies thus liked this idea of the consultant’s. The IT strategy suggested by the consultant had a rather limited range and a shallow focus. It named, as the most important objective, IT’s strategic role in supporting the strategic initiative of anytime-anywhere banking for personal and corporate customers. IT should, however, also address the redesigned processes, support the MIS and HR renewals, improve customer service and ensure uniformity of service, and improve the efficiency of housekeeping. IT included extended deployment of computers, integration and networking of computers, and a centralized customer database.
In line with the suggestions on centralizing the MIS and creating a centralized customer database, it was suggested that the information architecture ‘B’ might adopt would be a ‘hybrid’ of a partly ‘decentralized’ system, in which high-priority branches report their MIS directly to the head office MIS,
and a partly ‘centralized’ system, in which branch terminals report to a local hub that, in a locally centralized mode, collects the regional MIS and forwards it to the head office MIS.
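The ‘hybrid’ architecture just described can be sketched as a small routing function. The names and structure below are hypothetical, intended only to make the two reporting legs concrete:

```python
# Illustrative sketch (not from the case) of the consultant's 'hybrid'
# MIS architecture. Node names are hypothetical.

def route_report(branch: str, high_priority: bool) -> list:
    """Return the reporting path for a branch's MIS data.

    High-priority branches report directly to the head office MIS
    (the 'decentralized' leg); all others go through a regional hub
    that collates and forwards (the 'centralized' leg).
    """
    if high_priority:
        return [branch, "head_office_MIS"]
    return [branch, "regional_hub", "head_office_MIS"]

# A high-priority branch skips the regional hub entirely.
print(route_report("branch_A", high_priority=True))
# A regular branch reports via the local hub.
print(route_report("branch_B", high_priority=False))
```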
Change Management as Compromise
In short, the suggestions on IT fell far short of contemporary, comparable IT projects in large banks abroad. The suggested implementation schedule, as per the critical success path, placed ‘PC/LAN’ automation and the selection of IT systems at the beginning of the eighth month from the start of the transformation-management project, to be continued through the end of the project at the twelfth month. Interestingly, ‘establishing new senior management team’ was the first task, in the first month of the suggested schedule, followed by ‘developing business plans and marketing strategies’ and ‘defining roles and responsibilities for key positions’ in about the second month, and ‘reconfiguration of branch network’ in about the third month. These were to be followed by ‘identification of existing skills and staff reassignment’ in about the fourth month and ‘management of staff transition’ in the next month, followed by a set of parallel activities in about the seventh month: ‘implementation of revised processes at selected branches only’, ‘communication of process redesigning benefits’, ‘setting up of MIS project team’, ‘production of new information-manuals for MIS’, and ‘PC/LAN automation’, etc. The identified phases of ‘change management’ were, in order: organization structure; process redesign; human resources and change management; management information; and information technology. Interestingly, while the initial approach to transformation appeared to be driven by considerations of strategy, the implementation schedule hinted that the change was to be driven by structure. In the phase layout the process appears second, while in the implementation schedule it appears much later. On close scrutiny, several other incongruities surface.
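The schedule described above can be restated, purely for illustration, as a list of month/task pairs; the months are approximate, as the case itself hedges them:

```python
# Hypothetical restatement of the consultant's implementation schedule
# as (approximate month, task) pairs, taken from the case text.
schedule = [
    (1, "Establish new senior management team"),
    (2, "Develop business plans and marketing strategies"),
    (2, "Define roles and responsibilities for key positions"),
    (3, "Reconfigure branch network"),
    (4, "Identify existing skills and reassign staff"),
    (5, "Manage staff transition"),
    (7, "Implement revised processes at selected branches only"),
    (7, "Communicate process-redesign benefits"),
    (7, "Set up MIS project team"),
    (7, "Produce new information manuals for MIS"),
    (8, "PC/LAN automation and selection of IT systems"),  # continues to month 12
]

# The point the case makes: IT work begins only in month 8 of a
# twelve-month project, after structure and staffing changes.
it_start = next(m for m, task in schedule if task.startswith("PC/LAN"))
print(it_start)  # 8
```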
Such incongruities appear to be the result of a complex negotiation process involving the consultant team and several agencies internal to ‘B’, including its board. As a result of the negotiated outcome, conditioned as it was by an opportunistic alliance of the insiders with the consultant, process redesign emerged as an item of low value; thus relegated, the process ‘reengineering’ was neither to bring about structural changes nor to be supported by IT. The opportunistic consensus zeroed in on a senior management team, which was to ‘plan’, make the ‘budget’ and identify market thrusts based on a new centralized MIS and expanded branch-based computer support. The hybrid-IT strategy, as it was named, did not imply renewal of branch-based processes or freedom for end-user computing; instead it encouraged direct centralized access to high-priority banking information by the head-office-based planning system. Earlier, each department had its own MIS; these were now replaced by a centralized MIS Department reporting directly to the chairman and the board. Even the regional offices had previously had the freedom and competence (each having a small EDP staff) to design, develop and implement local MIS and other local customer-support applications; a degree of end-user computing was allowed, and middle-level managers enjoyed decision freedom. The result of the renewed thrust on centralized MIS was that the customer focus of banking, especially at the branches, and the process-centricity of information generation, collation and collection were lost altogether. The information system was replaced by a management information system. The former could have sustained a local, business-process-driven initiative based on end-user computing and the generation of skills and novel businesses at the local or branch level. The MIS relocated all informational activities towards upward dispatch for head-office-based monitoring, business planning and bank-based banking.
It may be recalled that Mr. Murty and his team had wanted to overcome exactly this problem at ‘B’. Only a select part of the MIS can now be accessed by the Planning Department; the complete set of information is available only to the head MIS and the chairman. The earlier MIS was in Cobol; it has now been replaced at the MIS Department by newly customized software in Sybase and FoxPro, although the branches still continue with Cobol.
CURRENT CHALLENGES/PROBLEMS
The banking environment has changed much in the meantime. Interest rate restrictions have been liberalized. Banks can hire and fire personnel and can close down unviable branches. Universal banking has not been allowed; however, banks are free to participate in financial activities. ‘B’ too has fired several hundred of its managers and closed down quite a few branches. Non-performing assets (NPA) are declining and the cost curve has stopped growing. ‘B’ has remained a consistent earner of moderate profit since the time of Mr. Anil. However, the current chairman and the board know that things are not bright for ‘B’. There have been changes in ‘B’, though. Most changes came in laterally from the market, through knowledge procured from the market by its middle managerial staff; it was more a market-led transformation than a strategic BPR. ‘B’s operations failed to amass enough cash that could have been used to buy out strategically positioned banks abroad. Size matters in banking, as does the integration of several types of banking. Such an integration of a large number of banking processes into a large corporate entity is the foremost long-term challenge ahead of ‘B’. The bank has remained a midsize entity, with several banking activities missing and the banking processes un-integrated. The turnaround exercise of BPR, its employees now feel, has failed their expectations. The BPR never laid out the processes clearly, nor did it set up a process performance measurement system (PPMS) to monitor the progress of process reengineering. It did not set clear, tangible targets for either the mid-term or the long-term. The turnaround operations have been a mix of change activities in strategic, structural, process, personnel and technology aspects. The puzzle of what constituted the driver of change remains unsolved.
The absence of a strategic position in the market, of an edge over others in productivity or market presence, and the failure to establish a dependable brand name have rendered ‘B’ unattractive in the long term. It has met its short-term targets, and its stakeholders on the board are happy with that. In the mid-term, ‘B’ offers a good prospect, though one tinged with uncertainty. The most significant gain from a BPR is that processes become visible, identifiable and synergistically integrated. A visible process makes the passage of information or signals, especially from top management, quicker and uninterrupted. Signals inside ‘B’, however, face roadblocks. Its chairman is unsure whether the goals he sets pass down untransformed, or whether the performance of the processes is reported by its MIS. ‘B’ could not regain control over its structure, and the removal of this mismatch between strategic intent and structure, although it should have been the prime target of its BPR, does not appear to have been attempted. The real challenge for ‘B’ is to acquire a large size with a complex structure that can be both globally competitive and under the complete control of its strategic intent. The next challenge is to sustain process value chains that can integrate the processes with the market; only this appears to offer a long-term perspective. Simple strategic control over the processes or functions by top management does not ensure long-term prospects, because the market is the ultimate source of innovative competence. An important aspect of any business process is to remain open to the customer, that is, the market. Increases in size, complexity and process integration might give the board and chairman of ‘B’ transparent, complete control over the organization, but they might not ensure that windows remain open to the market. Control ensures compliance with strategy, but it does not ensure creative participation by the end-users in the organization.
The third challenge is the transition from the existing status to that of a globally competitive, front-ranking bank. In fact, this BPR and the other turnaround moves have been directed towards achieving this transition. However, negotiations and the dynamic formation of opportunistic alliances between several teams inside ‘B’ and the consultant subverted the turnaround time and again. It appears that any real-life change management demands dexterity in the art of negotiation. It is a challenge to the top management of ‘B’ to become skilled negotiators of change. Complex negotiation skill alone can guide ‘B’ through the maze of change initiatives along the dimensions of strategy, structure, process, personnel and technology.
Middle managers understood that this negotiation was not reengineering. It did not involve a proper design of the information system, based as it should have been on the core competencies, on the knowledge profiles and on the peculiarities of the regulations prevailing in Indian banking. Middle managers and functional managers were involved neither in the project conceptualization nor with the consultant. The consultant and the board did not understand the design implications of the IS/MIS and of the BPR. Managers felt that the project should have given importance to incentives reengineering, to harnessing core knowledge competencies, and to bringing competitive pressures down inside the business processes through a thoroughly overhauled IS/MIS, a high degree of training in computer use followed by training in attitudinal change, better software and a better computing platform. The case of ‘B’ demonstrated the complexity, the ever-changing character and the negotiated settlements of change management. The core problems of branch banking versus bank-based banking, of gaps in knowledge and of the structure’s inability to offer an incentives system remained unsolved. Some intermediate solutions emerged in this interregnum, though through a long-drawn-out process of negotiation. The challenge of transforming branch-based banking into bank-based banking also remains. The next major short-term challenge lies in the identification and conceptualization of business processes. IT needs to be in the driver’s seat. Whatever IT infrastructure was created has been used almost entirely by the monitoring function of the now-centralized MIS. Customer orientation and customer support, and the creation of new businesses through such IT and IT-supported banking processes, thus remain dreams yet to be fulfilled.
APPENDIX

‘B’s association with the stock exchange began in 1921, when it entered into an agreement with the Bombay Stock Exchange (BSE) to manage the BSE Clearing House. Later this association gave birth to a joint venture providing depository services. Relations with industry prove crucial to bank-based financial operations. However, ‘B’ cannot sit on the board of a firm taking credit from it; it cannot access privileged information from management regarding the details of the firm’s business. As a result ‘B’ ended up with huge non-performing assets (NPA): 6.5% and 7.3% of its net advances fell in this NPA category (assets turned dead because the firm that took the credit is now financially sick) in 1997 and 1998 respectively. Even so, ‘B’ fared better than the average Indian bank. Nearly 50% of ‘eligible’ branches are ‘fully computerized’ following the BPR project. The IT part of the project was planned as a long-term initiative, to be undertaken for about four to five months initially and then continued for about four to five years. Networking for integration with the MIS also improved: from near absence, it rose to email/browser-based connectivity at the metropolitan city branches. A leased-line network along with VSAT-based connections is being inducted as part of a project sponsored by the central bank (the RBI). Some computerization projects, such as electronic funds transfer, are being coordinated by the central bank for all the domestic banks. Even now, branches of ‘B’ transfer data on floppies sent through courier services. The total time taken for this information to travel from a branch to the head office is about three to four months, of which the branch itself takes about one month and the zonal office about two weeks in collating, the rest being taken up by the central MIS and by transit. The MIS at the head office can be accessed by all deputy general managers of divisions, upon request to the chairman.
FURTHER READING

Banerjee, P. (2000). Technology and restructuring of organization and assets in an Indian bank faced with the demands from financial restructuring and globalization. In F.J. Richter (Ed.), The Asian Economic Catharsis: How Asian Firms Bounce Back from Crisis (pp. 93-114). Westport, CT: Quorum Books.
Barth, J., Nolle, D. & Rice, T. (1997). Commercial banking structure, regulation, and performance: An international comparison. Economics Working Paper, Office of the Comptroller of the Currency, Washington, DC.
Humphrey, J., Kaplinsky, R. & Saraph, P.V. (1998). Corporate Restructuring: Crompton Greaves and the Challenge of Globalisation. New Delhi: Response Books.
Khandwalla, P.N. (2001). Turnaround Excellence: Insights From 120 Cases. New Delhi: Response Books.
Kim, B.O. (1994). Business process re-engineering: Building a cross-functional information architecture. Journal of Systems Management, 45(12), 30-35.
Kotter, J.P. (1995). Leading change: Why transformation efforts fail. Harvard Business Review, 73(2), 59-67.
Slywotzky, A.J. (1996). Value Migration: How to Think Several Moves Ahead of the Competition. Boston, MA: Harvard Business School Press.
Venkataraman, N. (1994). IT-enabled business transformation: From automation to business scope redefinition. Sloan Management Review, 35(2), 73-87.
REFERENCES

Braganza, A., Edwards, C. & Lambert, R. (1999). A taxonomy of knowledge projects to underpin organizational innovation and competitiveness. Knowledge and Process Management, 6(2), 83-90.
Christensen, M., Crabtree, A., Damm, C.H., Hansen, K.M., Madsen, L., Marqvardsen, P., Mogensen, P., Sandvad, E., Sloth, L. & Thomsen, M. (1998). The M.A.D. experience: Multiperspective application development in evolutionary prototyping. Proceedings of the Twelfth European Conference on Object-Oriented Programming (ECOOP ’98), Brussels, Belgium, June, 14-41.
Crabtree, A., Nichols, D.M., O’Brien, D., Rouncefield, M. & Twidale, M.B. (forthcoming). Ethnomethodologically informed ethnography and information system design. Journal of the American Society for Information Science, 51(7), 666-682.
Frooman, J. (1999). Stakeholder influence strategies. Academy of Management Review, 24(2), 191-205.
Hsiao, R.L. & Ormerod, R.J. (1998). A new perspective on the dynamics of IT-enabled strategic change. Information Systems Journal, 8(1), 21-52.
Jalan, B. (1999). Towards a more vibrant banking system. Reserve Bank of India Bulletin, January, 1120.
Milgrom, P. & Roberts, J. (1992). Economics, Organization and Management. Englewood Cliffs, NJ: Prentice Hall.
Reddy, Y.V. (1999). Financial sector reform: Review and prospects. Reserve Bank of India Bulletin, January, 33-94.
RBI (1991). Report of the Committee on the Financial System (Narasimham Committee). Mumbai: Reserve Bank of India, November.
Scott-Morton, M. (Ed.) (1991). The Corporation of the 1990s: Information Technology and Organizational Transformation. Oxford: Oxford University Press.
Scott, S.V. & Walsham, G. (1999). Shifting boundaries and new technologies: A case study in the UK banking sector. Working Paper Series, Department of Information Systems, London School of Economics and Political Science. London: LSE.
Uchiyama, K. (1999). Reconciling the “global” and “local” by using the soft systems methodology: A case of building trust relationships in South Africa. Journal of Scientific & Industrial Research, 58(3&4), 302-320.
Weerakkody, V., Bennett, J. & Tagg, C. (1999). Implementing business process and information systems reengineering in Sri Lanka: Identifying critical success factors. Journal of Scientific & Industrial Research, 58(3&4), 160-171.
Yetton, P.W., Johnson, K.D. & Craig, J.F. (1994). Computer-aided architects: A case study of IT and strategy change. Sloan Management Review, Summer, 57-67.
BIOGRAPHICAL SKETCH

Parthasarathi Banerjee is with the National Institute of Science, Technology & Development Studies (NISTADS), New Delhi. He has previously held positions at SUNY, the Ecole Polytechnique and Tokyo University. His research interests are in the areas of knowledge and intangibles, innovation, information studies and strategic management. He has organized several symposia and conferences, edited and written books, acted as a consultant, contributed papers to refereed journals and taught management courses. His latest co-edited book, Intangibles in Competition and Cooperation, is published by Palgrave.
Risk in Partnerships Involving Information Systems Development: Lessons from a British National Health Service Hospital Trust

G. Harindranath and John A. A. Sillince
Royal Holloway College, University of London, UK
EXECUTIVE SUMMARY

This is a case study of a US$30 million project to establish a new form of rapid healthcare service delivery within the context of a highly politicised National Health Service (NHS) Hospital Trust in the United Kingdom (UK). The project involved large-scale redesign of long-established healthcare procedures and the development of sophisticated new information systems (IS) through a unique partnership between the public sector (the UK’s NHS) and a number of private sector companies (a software developer, a facilities manager, a hardware vendor and a builder). The case study concentrates on what is often one of the more important determinants of the success or failure of such partnerships in information systems development: ‘risk’.
BACKGROUND

Copyright © 2002, Idea Group Publishing.

At the core of risk is the possibility of loss, which arises whenever uncertainty exists about the outcomes of possible actions (Yates & Stone, 1992a). According to Rowe (1997): “If risk implies something unwanted or to be avoided, risk is then associated with consequences that involve losses to the risk-taker” (p. 23). It is the probability of loss which is actually described in practice, although a more thorough consideration of risk involves three important but imprecise elements: (1) the type of possible loss; (2) the significance of those losses; and (3) the uncertainty of those losses (Yates & Stone, 1992b). Ritchie and Marshall (1993) point out that there may be a high degree of interaction between risk and uncertainty in any particular decision situation, i.e., decisions involving high degrees of uncertainty are also likely to be seen as high-risk situations. Hence the need for appropriate strategies to reduce uncertainty and, by association, risk. Information systems (IS) risk management techniques are important devices for minimising unwanted problems in IS development projects (Baskerville, 1991; Boehm, 1989; Saarinen & Vepsalainen, 1993). Uncertainty within the context of IS development often arises from lack of understanding between business and IS staff (Reich & Benbasat, 1996). Uncertainty, and therefore risk, may also arise
from other factors such as multiple implementers, inability to cushion the impact of the project on others (Alter, 1979), technological complexity, the degree of novelty or structure of the system being developed, and the extent of technological change as well as project size (Zmud, 1980), among many others (Jiang et al., 2000). This case specifically focuses on the risk associated with IS development projects that involve multiple partners. In projects where several partners collaborate, uncertainty is increased not only by each partner having different objectives, but also by conflicts among their shared objectives. On the one hand there is the need for widespread diffusion of information within a consortium, which contrasts strongly with the need for secrecy and appropriability within profit-seeking companies. Conflicting objectives therefore exist when partners collaborate, making risk even more difficult to estimate (Cats-Baril & Thompson, 1995). The case reported here concerns the perceptions of risk by partners in a consortium involved in creating a new medical facility, involving redesigned working practices, the development of sophisticated information systems, and new building design. Hospitals can respond to risks in several ways. They can vary prices, change service mix, or reduce variation in resource use (Friedman & Farley, 1995). The creation of an internal market within the UK’s National Health Service is forcing hospitals to pay increasing attention to financial planning and the management of business risks. Our analysis reveals a number of different types of risk associated with IS development and consortium collaboration. Ownership risk is the risk that collaboration will lead to a loss of ownership or control of vital or valuable assets generated by a collaborative project. This type of risk depends on (i) the dependence of one partner on another; (ii) the market value of the final product to each partner; and (iii) the separateness of the partners’ markets.
Ownership risk is important in projects that involve multiple partners, because it influences commitment (if we expect to lose assets due to the collaboration, then we reduce commitment) and intimacy (ownership and control involve information, so that if we expect to lose ownership or control we will attempt to reduce information flows). Uncertainty risk is the risk borne by a partner committing itself before another commits itself. A high level of commitment from each partner in a project will make it easier for everyone to deal with this type of risk, and such commitment can only be driven by mutual trust, which in turn can be generated by an expectation of mutual benefit. Control risk arises when responsibility is given to someone for decisions which depend upon earlier decisions over which that person has no control. This type of risk is often generated within collaborative projects because a decision made by one partner may have unknown bad effects on the other partners. Such negative impacts are often unintended, and they can be minimised by employing project coordination mechanisms such as vertical authorisation and horizontal deals. These will be explained in more detail in relation to specific instances within the case study. Internal incompatibility risk is the risk of generating incompatibilities within business processes when they are redesigned or changed radically. Such incompatibilities could be generated for technical reasons (such as incompatibilities between new and existing information systems) or for organisational reasons (such as cultural differences between participants). External incompatibility risk is the risk of generating incompatibilities between business processes when redesign takes place.
Incompatibilities between business processes and between new and existing information systems may also be generated due to technical or organisational reasons.
SETTING THE STAGE

The context for this research is a major healthcare project at the CMT NHS Trust1 (henceforth referred to as either CMT or the Trust), an acute hospital in the UK National Health Service (NHS)
employing 1,300 people and with an annual turnover of around US$75 million. Funding of around US$30 million had become available, through land sales and central government money provided under a unique public-private partnership termed the Private Finance Initiative (PFI), for the Trust to establish an Ambulatory Care and Diagnostic Centre (ACAD) adjacent to CMT. Five theatres handling 20,000 cases per year were planned. This form of rapid service delivery and medical process redesign was adventurous and still at an early stage of diffusion into acute care, following successful though politically controversial experimental schemes of world-class quality in Australia, Switzerland and the USA (the Mayo Clinic). In particular, the proposed redesign of medical processes had yet to win widespread acceptance among hospital consultants. ACAD required radical changes to both business and medical processes and to social structures, because it focused on rapid throughput and computerised scheduling. Although ‘greenfield’ in being a new facility, it was sited within an existing medical complex and drew on the existing resources of that site. ACAD also required a careful relating of new systems to existing hospital functions and information systems (IS). The main top-level business processes of ACAD were: to educate and prepare the patient; to regulate and direct referrals to improve predictability; to schedule on the basis of units of time, taking power from doctors and giving it to schedulers; to use predictable flows and processes to design jobs and workloads; to develop system, machine, manpower and building use to maximise the effectiveness of jobs and clinical process flows; and to manage the patient quickly back into the community. The information system needed had to network with General Practitioners (GPs), the main hospital at CMT and other NHS care centres, and had to solve complex scheduling problems, in order to shorten patient care from typically 4-10 days to one day.
The approach not only required radical restructuring of working practices but also new technology, for example MRI and CT scanning, which enable rapid interactional diagnosis. The ACAD design also facilitated the performance of imaging-guided interventions involving, for example, clinical procedures requiring imaging and endoscopy, or imaging and surgery.
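The scale of the scheduling problem can be made concrete with back-of-the-envelope arithmetic. The five-theatre and 20,000-cases-per-year figures come from the case; the 250 working days per year is an assumption added here purely for illustration.

```python
# Rough throughput implied by the ACAD plan. The 250 working days per
# year is an assumed figure, not one given in the case study.
theatres = 5
cases_per_year = 20_000
working_days = 250  # assumption

per_theatre_daily = cases_per_year / theatres / working_days
print(per_theatre_daily)  # 16.0
```

At roughly 16 cases per theatre per working day, each theatre slot must be filled and turned over with little slack, which is why computerised scheduling based on recovery time was so central to the design.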
CASE DESCRIPTION

CMT was headed by a Chairman and also a Chief Executive, together with six Directors. A software developer, a hardware manufacturer, a facilities manager and a builder were involved as a Consortium in ACAD together with CMT. One of CMT’s directors, the Director of Contracts & Clinical Activity, was also given the role of ACAD project manager. He acted as chairman of the ACAD Steering Committee, which included the Chief Executive, Chairman and other directors of CMT, together with representatives from the other Consortium partners, the ACAD architect and the software requirements team. The user organisation was therefore ACAD; the client organisation was CMT; the project organisation was the Consortium; and the subcontractors were the Consortium members–the equipment provider, the software developer, the facilities manager and the construction company. There were various Groups within ACAD. The Design Group commissioned architects who began design work on the new building between June 1996 and January 1997 based on a master plan, intending to commence construction by April 1997. The IS Group commissioned a software requirements team to produce a requirements document by April 1997, with the intention of software development by the Consortium partner responsible after that date. The Negotiation Group dealt with contracts and agreements between the Consortium partners. One of the main problems that the Hospital and the Consortium partners encountered was that of uncertainty, and therefore risk, associated with the development of new information systems for the ambulatory care centre. This uncertainty was holding back the Consortium partners and CMT from coming to a legally binding agreement for the development of ambulatory care facilities and infrastructure. There were several processes taking place. One was building design, another was the requirements analysis for an information system for the new ambulatory care facility, and yet another
was the redesign of clinical processes and working practices to accommodate ambulatory care principles. CMT wanted to slow things down because the information systems design was still uncertain, and this was having a knock-on effect on the building design. However, Consortium members wanted to get commitment (via a contract) as early as possible in order to maximise the chance that the project would be completed and that they could recover their sunk costs. Moreover, the facilities management partner in the Consortium was supposed to play a crucial integrative role and was thus the most accountable (and hence the most vulnerable to risk), because it depended on the other partners for its own successful project completion. The greatest source of risk for the facilities management supplier was that it had responsibility for an activity that was controlled by others. This led to a series of delicate negotiations between the Consortium partners and CMT. The project was tracked from July 1996 to July 1998, and all the meetings (36 in all) of the various Groups and the Steering Committee during this period were recorded for later analysis. Problematic statements were later followed up by questioning participants, and a total of 16 interviews were conducted. The negotiations between CMT and the Consortium suggest that there were five types of risk involved, which later came to affect the very life of the ACAD Consortium.
Ownership Risk

The first type of risk that emerged from the negotiations between the Consortium and CMT was ownership risk. Ownership risk is the risk that collaboration on a project will lead to a loss of ownership or control of vital or valuable assets that might arise from that project. This type of risk is critical because it influences the extent of commitment each partner will show towards the project, as well as the extent to which partners will become intimately involved with each other. Partners may commit less if they expect to lose out on valuable assets that the project might generate, and partners may also tend to withhold information from other partners if they expect to lose ownership of such vital assets later on. CMT claimed that the principal intellectual property innovation in the ACAD project arose from clinicians’ reports on redesigning medical procedures, leading to rapid service delivery. However, other Consortium partners claimed that they provided the main source of innovation. In particular, the Consortium software partner claimed that its software design was appropriable. The implication of any one partner gaining the upper hand in such negotiations is that that partner then has a saleable product within the potentially vast ambulatory care market that could open up within the UK NHS. Of course, each partner was adding unique value to what was undeniably a productisation process. In this case, the three important partners were: ACAD, with its knowledge of potentially new medical working practices; the software developer, with its knowledge of hardware and software technologies for delivering systems for imaging, diagnostics and scheduling; and the construction company, with its knowledge of hospital design.
In terms of these three partners, the important dimensions of ownership risk were:

The Dependence of One Partner on Another

For example, the ambulatory care concept was based on minimisation of the total time a patient spent in the facility, and this was largely dependent on recovery time. So the redesign of working practices and caseload planning by ACAD depended on new scheduling software written by the software developer: “Caseload planning should be based on recovery time. This will require a new form of scheduling” (Director of Contracts & Clinical Activity, CMT). Moreover, even though the architect claimed that building design was unproblematic and certain, there were many design issues which depended upon decisions about software systems and equipment: “What are the implications of IS for building design? They must include (1) image storage and retrieval; (2) medical records (electronic or not); and (3) patient scheduling. The building itself will depend to an extent on what kind of activities IS can do” (Software Engineer, Requirements Team).
Also, software and equipment decisions were interconnected: “Most equipment procurement is based on the assumption that paperless systems will be introduced to ACAD” (Clinical Director of Imaging, CMT). Decisions about software systems also had implications for staffing levels: “IS will bring staff numbers down” (Director of Contracts & Clinical Activity, CMT).

The Market Value of the Final Product to Each Partner

All three parties could take away and reapply new knowledge, although such knowledge varied from partner to partner. ACAD could develop a new set of medical working practices which could be disseminated in manuals, training courses and university education, and its copyright over such knowledge could enable it to derive income directly from this process or to sell such rights to a specialised training company. The builder-architect could use the experience as a track record to gain further ambulatory care commissions. The software developer could reuse or modify improved versions of the code it was developing for ACAD and could sell it to other ambulatory care clients. Awareness of the market value of ambulatory care involved considering in what way it differed from other medical services already provided elsewhere, and the market value and profitability of alternative mixes of medical services in the new facility: “Someone wants cardiac stuff in ACAD. This is fantasy as the utilisation rate [of equipment] is 20% [and thus unprofitable] and there are teaching hospitals, which are good at this. So this should be out [of ACAD]... Consultation is out of ACAD except if it brings in money. If consultation is in ACAD then the facility will turn into another hospital. This is not what ACAD is about” (Director of Contracts & Clinical Activity, CMT).
One of the problems that arose during the negotiation process was that ACAD directors felt that, although each partner could potentially benefit from the large amount of new knowledge the project would create, some partners wanted to regard the project in more conventional terms, with each partner being told what to do and being fully paid for it, without the problems of negotiating with and educating each other. “There is tremendous intellectual value for the Consortium from this project. But they’re being front-ended by salesmen who just want all the answers now. CMT needs to think through all the logistics. We are prepared to educate them” (Clinical Director of Anaesthetics, CMT). “[The Consortium partner providing the building] seems more like a salesman and doesn’t seem to be interested in a ‘partnership’ approach. They may not sign a contract until the IS uncertainties are sorted out. It may take at least 12 months” (Director of Contracts & Clinical Activity, CMT).

The Separateness of the Partners’ Markets (the Substitutability of Final Products)

In the ACAD case, although working practices, software and building design were separate products, it was possible to represent them to the market in an integrated form as a ‘total solution’, giving the seller greater credibility and lower marketing costs, and giving the customer reduced complexity. Ideally, the partners should have worked together, creating this novel, total product. However, this ideal was difficult to achieve, because it was far easier for each partner to regard its product and its market as separate. For example, insufficient information was given to the software engineers about other partners’ requirements: “People here don’t appreciate the need to articulate what they require from IT. They also don’t like plans. They turn up late for meetings.
The assumption is that they are always doing something more important” (Software Engineer, Requirements Team).
Uncertainty Risk

Yet another type of risk that emerged was uncertainty risk. This is the risk borne by a partner committing itself before another does. It relates to the question of how long decisions should be
postponed. In the ACAD case, the Consortium software partner had to complete, as one of its deliverables, an IS Building Design documenting the IS-related issues identified as having a material effect upon the internal design of ACAD. The building design was to be frozen, yet until IS issues had been resolved (particularly the issue of paperless IS, and the juxtaposing of specialities to minimise patient flow), there might have been calls for architectural changes, such as space for records storage. This dimension of risk arises from the fact that some activities (theatre, imaging and exterior design) were relatively certain, for example: “Considerable certainty exists about theatre, imaging and exterior design” (Architect). Other activities were not. These included: (1) information systems’ potential to simplify process flow; (2) multi-skilling and nurse empowerment reducing process hold-ups; (3) elimination of offices, since all ACAD staff needed to be mobile, so rooms were for multiple purposes; (4) elimination of departments and beds to maximise practitioner-patient contact time and to speed up throughflow. However, one of the reasons why pressure was put on partners early on to settle the building design issues was as a way of reassuring the building Consortium: “The Consortium only gets [US$4.5 million] out of [US$30 million]. So let’s concentrate on the building and engineering of ACAD and give the Consortium a simple reassurance of their involvement” (Director of Contracts & Clinical Activity, CMT). Uncertainty risk relates to a difficult problem for the Consortium: how could process redesign occur at the same time as IS specification? “Reengineering of processes is taking place at the same time as the specification, and this is very, very difficult” (Director of Contracts & Clinical Activity, CMT).
Often, existing links between business processes and IS mean that both processes and IS need to be redesigned together, yet in practice there is more sequence than simultaneity. In the ACAD case, building design preceded IS requirements. Figure 1 in the Appendix shows how the building issues were seen to be driving the information system issues. One of the most important issues in Figure 1 was the question of the correct scope and boundaries of the new facility (Issue G): how did it differ from conventional hospital treatment, how did it relate to the existing hospital, and how did it relate to GPs? Answers to these questions would provide some answers to questions such as the degree of computerisation required and staff numbers (Issue F). The new facility was envisaged as needing a new approach to scheduling of treatment (Issue E) based upon caseload planning and recovery time (Issue D). Resolving these issues could then fix the building design more precisely (Issue B), which could then make the information requirements more certain (Issue A). This means that some activities (e.g., building design) must be done with no knowledge about other activities (e.g., IS design), increasing the risk for particular Consortium partners, who then seek to cover their risk in various ways. Despite evidence that avoidance of delays as a management concept is not widely used in the software industry (Carmel, 1995), the perception of this risk by two of the three private sector Consortium partners became clear during the case study. The components of uncertainty risk are therefore: (1) the proportion of diagram elements (the issues in Figure 1) which should be and are simultaneous; (2) the proportion of diagram elements which should be simultaneous but are not; (3) the length of time that the situation described by (2) persists.
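The three components above can be read as a simple checklist. As a purely illustrative formalization, not one offered by the case itself, one might combine them into a single exposure figure: the fraction of interdependent issues being worked out of step, weighted by how long the mismatch persists. All names and the weighting scheme below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class UncertaintyRisk:
    """Hypothetical scoring of the three uncertainty-risk components.

    The case provides no quantitative model; this sketch only makes the
    component structure concrete.
    """
    simultaneous: int      # component (1): issues that should be and are simultaneous
    not_simultaneous: int  # component (2): issues that should be simultaneous but are not
    exposure_months: float # component (3): how long the mismatch has persisted

    def score(self) -> float:
        total = self.simultaneous + self.not_simultaneous
        if total == 0:
            return 0.0
        mismatch = self.not_simultaneous / total  # share of out-of-step issues
        return mismatch * self.exposure_months    # weighted by duration of exposure

# Example: 3 of 7 linked issues (say, building design vs. IS design)
# worked out of step for six months.
r = UncertaintyRisk(simultaneous=4, not_simultaneous=3, exposure_months=6.0)
print(round(r.score(), 2))
```

On such a measure, CMT's wish to postpone commitment and the Consortium's wish for an early contract pull in opposite directions: delay lengthens the exposure term, while forcing early freezes increases the mismatch term.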
Members of a collaboration will tend to be better able to deal with uncertainty risk when commitment is high, because the expectation of a long-term mutually beneficial relationship leads to higher trust and a longer payoff horizon.
Control Risk

This type of risk arises when responsibility is given to someone for decisions which depend upon earlier decisions over which that person has no control. It is the risk of unknown bad effects of
a previous decision by another partner. Control risk can be reduced by coordination mechanisms such as vertical authorisation and horizontal deals (Nidumolu, 1995). In vertical authorisation, a superordinate person or group is given responsibility for earlier decisions and must ensure that earlier decisions help later ones. In a sense the ACAD Steering Group was supposed to ensure this consistency, although it was only as effective as the trust and group-mindedness of each of its partner members. Horizontal deals also ensure consistency, this time by spelling out responsibilities and deadlines. The ACAD case was punctuated by a number of points at which contracts were used in this way between partners. For example, the Consortium building partner was concerned that it would have to manage and operate a building that it had not designed, where IS-relevant design defects (e.g., defects relating to networking or communication facilities) might come to light later. The components of control risk are therefore: (1) the proportion of user requirements which should be and are met by the partner; (2) the proportion of user requirements which should be but are not met by the partner; (3) the length of time that the situation described by (2) persists; (4) the legal, financial and safety consequences of incorrect or incomplete user requirements determination.
Internal Incompatibility Risk

This is the risk of generating incompatibilities within business processes when they are redesigned. This partly depends upon human relations factors that suggest separating staff from each other for reasons of personal relationships or professional rivalry: “To schedule certain kinds of activity may be a bad thing. Some specialities cannot stand each other” (Chairman, CMT); or because of cultural contamination of the new facility by old ‘bad’ working practices and attitudes: “We may need totally new staff for ACAD. If you move an entire department like “Eyes” into ACAD you will get the old “Eyes” social structure and this is not good for ACAD” (Director of Contracts & Clinical Activity, CMT). Incompatibilities also arise from a poor relationship between existing and new information systems. Changing a business process causes changes to data flows and data types which are then not adequately dealt with by old information systems. In the ACAD case, images were intended to be ubiquitously available within ACAD but unavailable at the main hospital, severely curtailing any possibility of continuity between consultation (in CMT) and treatment (in ACAD). There was also the problem of how to relate new and old information systems: “Existing IS are outdated by at least ten years. They don’t provide all the information even at present” (Software Engineer, Requirements Team). The components of internal incompatibility risk are therefore: (1) the proportion of data flows within each business process which have a compatible source and destination; (2) the cost of making new information systems simpler and less functional merely to make them compatible with the existing information system; (3) the cost of improving the existing information system to make it compatible with the new information system.
External Incompatibility Risk This is the risk of generating incompatibilities between business processes, and between new and existing information systems, when process redesign takes place. Integration between ambulatory and other treatment regimes is an acknowledged problem of the radical changes to business processes brought about by the ambulatory care concept (Suber, 1996). Barrows et al. (1994) found that although ambulatory care is aimed at cost reduction, one of the significant factors raising costs was the need for compatibility with IS serving old business processes. Sources of incompatibility are technical (different data models, normal forms, data formats and data sets), design-related (unique or domain-dependent designs; Kohane et al., 1996), business-related (IS for a business process still based on old requirements), cultural (user resistance or politicisation of the IS development process within the
Risk in Partnerships Involving Information Systems Development
user organisation), financial (new IS requiring too-expensive adaptation of legacy IS), security-related (new and old IS having different security procedures) or safety-related (when different safety standards or protocols are used). In the ACAD case, communication between ACAD and GPs will either necessitate expensive modifications to upgrade GPs’ information systems or else expensive modifications to create a paper-based ACAD system able to be posted out to GPs. “Are we having paper? Will it come over the next few years? What about the boundary of ACAD and external links to GPs?” (Software Engineer, Requirements Team). Similarly, the new ACAD information system will need to connect to the existing information systems of the main hospital (CMT): “What will be the relationship between ACAD and the main hospital systems?” (Software Engineer, Requirements Team). A question raised by this relationship was: who pays for the links, and who pays for reducing the data incompatibilities? “Who pays for external links from ACAD and what’s ACAD’s boundary? My own view is that what’s most important is that CMT should fit into ACAD, not ACAD fitting into CMT. This means that CMT will incur extra costs” (Clinical Director of Anaesthetics, CMT). Another type of incompatibility arises from staff working both at ACAD and CMT, two facilities that will have different working practices and objectives: “If someone is “inside” ACAD they should not be called out to the main hospital every time there is a need” (Director of Contracts & Clinical Activity, CMT). Old information systems create many different problems, including the need to enforce consistency, the effect of raised maintenance costs, the difficulty of using information from multiple sources (Li et al., 1994), and the creation of a shared vocabulary (Barrows et al., 1994).
New system owners may wish to overlook old systems, and indeed current cost-justification schemes encourage short-term ignoring of inherited IS, information and expertise (Hinton & Kaye, 1996), yet many connections between the old and the new IS are usually necessary. It is important to identify those connections, the owners of the connected processes, and the ways in which connections are protected or ignored. Much will also depend on the owner of the old system (CMT in the case study) and how much it is prepared to spend to make it compatible or to alter or replace parts of it. This extra work may include creating intermediate systems, which can make use of old systems but which interface with the new IS (Meistrell & Schlehuber, 1996; Vanmulligan & Timmers, 1994). These three roles (new system owner, old system owner, and owner of connections between old and new systems) require some kind of agreement between the relevant parties. It may be that doctors’ expectations of a paperless ACAD (electronic prescribing, case-note taking, doctors using notepads, image and medical records storage, transmission and retrieval, JIT scheduling and complete tracking of all interventions) were too high. At the start of the project no commercial electronic scheduling system existed which was easily usable in a health context. Complete digitisation may have been too expensive, and intermediate solutions needed to be considered, partly because of the need to communicate with old CMT systems and with GPs and others who were not at that time using electronic media for all information flows. Complete electronic tracking of patient interventions may have been too expensive and may have needed to be relaxed to attend just to entry and discharge data, leaving the period between the two untracked and hence flexible.
The Consortium software partner had to complete, as one of its promised deliverables, an Information Systems and Technology Strategy, which defined a plan to ensure that the resourcing of IS in ACAD and the information, systems and technology architectures defined and agreed for ACAD “fitted with” and complemented the resourcing and corresponding architectures used by the main hospital. There would be problems of data transfer, and patient referrals from the main hospital, but options still existed. For example, interaction between the two systems could have been minimised or made compatible. This
324 Harindranath & Sillince
raises the question of which factors lead decision-makers to decide on compatibility rather than on a new stand-alone system. Another question concerns data ownership by GPs, various hospital Trusts in the UK NHS, the UK Department of Health etc., and the boundaries where data transfer is difficult. The ACAD project will also be bounded by several constraints relating to old systems. Examples of constraints are: (1) a constraint to maximise utilisation rates of shared equipment increases the amount of necessary interaction (e.g., a CT scanner will be sited within CMT and ACAD patients will use and make bookings for it); (2) a constraint to outsource equipment increases the amount of necessary interaction (e.g., ACAD will outsource sterilisation equipment, and will also store film off site). An agreement between the relevant parties would therefore need to take account of: (1) the type, access rights, security status and number of data flows between old and new IS; (2) how these are changed during the process redesign; (3) who owns the business processes affected and whether or not they are prepared to spend money to increase compatibility; (4) the implications of disputes over data ownership; (5) the consequences of different compatibility-creating options for new IS, old IS and new-old IS connection owners; (6) the implications of constraints internal to business processes on flows between new and old IS.
The components of external incompatibility risk are therefore: (1) the proportion of data flows between business processes which have compatible source and destination; (2) the cost of making new information systems simpler and less functional merely to enable them to be compatible with the existing information system; (3) the cost of the existing information system being improved to enable it to be compatible with the new information system; (4) the ease of insulating working practices from outside influences; (5) whether or not the relevant parties are prepared to pay for incompatibilities to be minimised.
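The component lists above are qualitative, and the chapter itself proposes no scoring formula. Purely as an illustrative sketch, the code below shows one way such a checklist might be rolled up into a comparable indicative score per risk type; the weights and component values are entirely hypothetical, not taken from the case.

```python
# Hypothetical sketch: rolling the chapter's risk-component checklists up
# into comparable 0..1 scores. All weights and values are invented for
# illustration; the chapter gives no quantitative model.

def risk_score(components, weights=None):
    """Weighted average of component scores, each normalised to 0..1."""
    if weights is None:
        weights = [1.0] * len(components)
    total = sum(weights)
    return sum(c * w for c, w in zip(components, weights)) / total

# External incompatibility risk, following components (1)-(5):
# (1) share of inter-process data flows with incompatible endpoints,
# (2) normalised cost of simplifying the new IS for compatibility,
# (3) normalised cost of upgrading the existing IS,
# (4) difficulty of insulating working practices (0 = easy, 1 = hard),
# (5) parties' unwillingness to pay for compatibility (0 = willing).
external_incompatibility = [0.4, 0.6, 0.7, 0.3, 0.8]

score = risk_score(external_incompatibility, weights=[2, 1, 1, 1, 2])
print(round(score, 3))  # 0.571
```

The point of such a sketch is only comparability across the five risk types: with consistent (if judgmental) component scores, a steering group could track which risks are growing over the life of a project.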
CURRENT CHALLENGES/PROBLEMS FACING THE ORGANISATION In this case, we have examined what is often one of the more important determinants of the success or failure of partnerships involving information systems development, i.e., ‘risk’. We have shown how conflicting objectives exist when partners collaborate on a project, often leading to increased uncertainty and, by implication, a perception of increased risk from the viewpoint of the various partners. We have suggested ways of assessing different types of risk associated with information systems development and consortium collaboration, and have illustrated these with a case study involving information systems development and radical changes to business processes within the context of a major UK NHS hospital. In summary, the five types of risk that became important during the life of the ACAD project were: ownership risk, the risk that collaboration will lead to a loss of ownership or control of vital or valuable assets resulting from the collaboration; uncertainty risk, the risk borne by a partner committing itself before another commits itself; control risk, which arises when responsibility is given to someone for decisions which depend upon earlier decisions over which that person has no control; internal incompatibility risk, the risk of generating incompatibilities within business processes when these are redesigned; and external incompatibility risk, the risk of generating incompatibilities between business processes during their redesign. Towards the end of the case study period, CMT’s negotiations with the Consortium had collapsed. A total lack of convergence between the views of the Consortium partners and those of CMT on the issues of risk and uncertainty surrounding the ACAD project was to blame for this failure.
At the time of writing of the case study, unable to reconcile the differing perceptions among its partners, CMT had abandoned the idea of a Consortium for the project and had negotiated separate deals with a number of companies to construct the building, supply the equipment and develop the required information systems.
An important lesson that can be drawn from this case is that the communication of IS and business executives’ perceptions should be a central concern for organisations wishing to achieve change (Lederer & Mendelow, 1986). These perceptions include cross-references between IS and business plans, mutual understanding and a congruent long-term vision between IS and business executives, and executives’ self-ratings of congruence (Reich & Benbasat, 1996). We suggest that an additional dimension of congruence will be the perception of the types of risk that we have examined in this case study. Where linkage between IS and business executives is low, we would expect business executives to be most concerned with ownership risk (which is relevant to business-related assets), uncertainty risk (which is relevant to when they should make decisions) and control risk (which is relevant to their reading of what is their ‘territory’ or span of control), and to underestimate internal and external incompatibility risk (with its relevance for incompatible data flows). Conversely, we would expect IS executives in low-linkage contexts to be most concerned with internal and external incompatibility risk, and to underestimate ownership, uncertainty and control risk. Such variations in perceptions can often derail otherwise successful IS projects. The typology of risks presented in this case study can be used in this context as an aid to communication, as well as to create better understanding between the various project participants.
ENDNOTE 1 The acronym CMT has been used to preserve the anonymity of the hospital.
REFERENCES
Alter, S. (1979). Implementation risk analysis. TIMS Studies in Management Science, 13(2), 103-119.
Barrows, R.C., Cimino, J.J. & Clayton, P.D. (1994). Mapping clinically useful terminology to a controlled medical vocabulary. Journal of the American Medical Informatics Association, 211-215.
Baskerville, R.L. (1991). Risk analysis as a source of professional knowledge. Computers and Security, 10(8), 749-764.
Boehm, B.W. (1989). Software Risk Management. Washington, D.C.: IEEE Computer Society Press.
Carmel, E. (1995). Cycle time in packaged software firms. Journal of Product Innovation Management, 12(2), 110-123.
Cats-Baril, W. & Thompson, R. (1995). Managing information technology projects in the public sector. Public Administration Review, 55(6), 559-566.
Friedman, B. & Farley, D. (1995). Strategic responses by hospitals to increased financial risk in the 1980s. Health Services Research, 30(3), 467-488.
Hinton, C.M. & Kaye, G.R. (1996). The hidden investments in information technology: The role of organisational context and system dependency. International Journal of Information Management, 16(6), 413-427.
Jiang, J.J., Klein, G. & Means, T.L. (2000). Project risk impact on software development team performance. Project Management Journal, 31(4), 19-26.
Kohane, I.S., Greenspun, P., Fackler, J., Cimino, C. & Szolovits, P. (1996). Building national electronic medical record systems via the World Wide Web. Journal of the American Medical Informatics Association, 3(3), 191-207.
Lederer, A. & Mendelow, A. (1986). Issues in information systems planning. Information & Management, 10(5), 245-254.
Li, P., Kramer, L., Pineo, S. & Kulp, D. (1994). Evolving a legacy system: Restructuring the Mendelian Inheritance in Man database. Journal of the American Medical Informatics Association, 344-348.
Meistrell, M. & Schlehuber, C. (1996). Adopting a corporate perspective on databases: Improving support for research and decision-making. Medical Care, 34(3), 91-102.
Nidumolu, S. (1995). The effect of co-ordination and uncertainty on software project performance: Residual performance risk as an intervening variable. Information Systems Research, 6(3), 191-219.
Reich, B.H. & Benbasat, I. (1996). Measuring the linkage between business and information technology objectives. MIS Quarterly, 20(1), 55-81.
Ritchie, B. & Marshall, D. (1993). Business Risk Management. London: Chapman & Hall.
Rowe, W.D. (1977). Anatomy of Risk. New York: John Wiley.
Saarinen, T. & Vepsalainen, A. (1993). Managing the risks of information systems implementation. European Journal of Information Systems, 2(4), 283-295.
Suber, R. (1996). Chronic care in ambulatory settings: Components of an integrated care system. American Behavioral Scientist, 39(6), 665-675.
Vanmulligan, E. & Timmers, T. (1994). Beyond clients and servers. Journal of the American Medical Informatics Association, 546-550.
Yates, J.F. & Stone, E.R. (1992a). The risk construct. In Yates, J.F. (Ed.), Risk-Taking Behaviour. Chichester: John Wiley, 1-23.
Yates, J.F. & Stone, E.R. (1992b). Risk appraisal. In Yates, J.F. (Ed.), Risk-Taking Behaviour. Chichester: John Wiley, 50-81.
Zmud, R.W. (1980). Management of large software development efforts. MIS Quarterly, 4(2), 45-55.
BIOGRAPHICAL SKETCHES G. Harindranath is a Senior Lecturer in Management Information Systems at the Royal Holloway School of Management, University of London. He holds a doctorate in Information Systems from the London School of Economics. Hari’s research interests include information systems management; global IS management issues including information infrastructure policy; and information technology and economic development. He is Associate Editor of the Journal of Global Information Technology Management and an Editorial Board member of the Journal of Global Information Management. He is also the Vice President of International Relations for the Information Resources Management Association. John A.A. Sillince is Professor of Management Information Systems at Royal Holloway Management School. His main research area is argumentation in management and organizations. He has theorized and carried out empirical studies of face-to-face and mediated organizational communication. He has designed and developed computer-mediated argumentation support systems. He is Coordinator of the European-funded SCALE project (Internet-based intelligent tool to Support Collaborative Argumentation-based LEarning in secondary schools).
APPENDIX
Figure 1. Partners’ Perceptions of Building and IS Issues
[Figure: a map of the partners’ perceptions, with panels A (IS issues: external links? paperless? boundary/scope of ACAD?), B (building work is the key project driver), C (theatre, imaging and external design certain), D (caseload planning), E (new type of scheduling), F (IS will bring down staff numbers; electronic imaging and medical records?) and G (low utilisation of imaging equipment; if imaging equipment is in, then flow is fast), together with notes on recovery time, the master contract, equipment procurement and working groups, reassuring the Consortium, the Consortium needing reassurance, the Consortium getting a small part of the total budget, the scope and boundary of ACAD processes, ‘not cardiac’, four criteria for inclusion of specialities, and avoiding a clash between specialities.]
328 Robra-Bissantz
A Case on Communication Management Susanne Robra-Bissantz University of Erlangen-Nuremberg, Germany
EXECUTIVE SUMMARY When Bissantz & Company GmbH, a small software-producing company, experienced rapid growth in 1997, the need for a strategic concept for its communication activities with external partners arose. By that time a research project at the University of Erlangen-Nuremberg that dealt with structures and strategies of external business communication had reached a point where strategic concepts for corporate communication had been developed. Bissantz & Company GmbH and the project team of the university decided to co-operate in a case in order to transfer the theoretical results to a practical situation. The whole concept of corporate communication, which includes, e.g., the definition of communication goals and strategies for all communication forms, proposals for the contents of messages, and media selection, was applied to the company. As a result, Bissantz & Company GmbH gained valuable insights into its communication processes. The strategic orientation of communication with all stakeholders is still visible and now forms the basis of, e.g., the structure and contents of the company’s Web site. A proposal for the use of innovative media for customer care and customer consultation was accepted and initiated the implementation of a database-supported system for all communication activities, especially those with customers.
BACKGROUND The Company Bissantz & Company GmbH is a software house, founded in 1996 as a technology spin-off from university research projects. The company specialises in solutions for demanding business data analysis. It develops tools and analysis technology, which are sold under its own labels and as OEM components. Hand in hand with academic institutions, Bissantz & Company GmbH undertakes basic research. The results are integrated into its standard software for data mining, business intelligence and data warehousing. Dr. Nicolas Bissantz and Dipl.-Inf. Michael Westphal, the CEOs and owners of the company,
Copyright © 2002, Idea Group Publishing.
Figure 1: Departments of Bissantz & Company GmbH
[Organisation chart: the managing directors, Dr. N. Bissantz and M. Westphal, head the areas Business Development; Software Development and Customer Support; Sales; Secretary; and Technical Support.]
started as members of the FORWISS’ Information Science Research Group, which is led by Prof. Dr. Dr. h.c. mult. Peter Mertens. Back then they developed the basis for data analysis automation and proved the suitability of data mining processes for enterprise control. Bissantz & Company GmbH started with three employees and two managing directors. In 1997 it experienced a rapid growth. New employees had to be hired so that today there are 15 employees and still two managing directors. The main positions in the organisation are depicted in Figure 1.
Communication Management During the last decade the technical possibilities of communication have steadily grown. Yet there is uncertainty on the suppliers’ side about the reasons for the successes and failures of new means of communication, and on the customers’ side about the requirements for successful media application. Both groups are aware that, as competition increases, there is a demand for active market research and strategic planning in the field of business communication. Communication has the power to become a strategic weapon if companies are able to describe their communication needs and to organise all forms of communication consistently and actively in order to achieve their goals. A research project at the chair of business computing of the University of Erlangen-Nuremberg (Prof. Dr. F. Bodendorf) deals with structures, developments and strategies of external business communication. The core of the project is an empirically confirmed explanatory model of media application. Among other applications, this model leads to a concept of corporate communication that takes into account the aspects relevant to decisions concerning the process of organising communication contents and media. External Business Communication The term “external business communication” is used to describe communication processes of an organisation with three main characteristics:
• It is business communication, which means that the cause of communication is always a business activity and that communication has to serve business goals.
• It is external, e.g., with customers, suppliers or other partners of the company. To distinguish between internal and external communication we use the physical limits of the company’s location.
• As a third characteristic, we only analyse one-to-one communication, which means that our understanding of external business communication does not include mass communication such as marketing communication.
An Explanatory Model of Media Application The explanatory model of media application for a message in external business communication
is based on a theoretical background derived from different fields of research. • Communication theories (Austin, 1989; Searle, 1993; Watzlawick, 1996; Schulz von Thun, 1981) provide an insight into the structure of a message. This structure can be described by the factual content, the intention of the message and the relationship between the sender and the recipient. • Marketing theories (Howard, 1969; Böcker, 1992) can explain media usage as a two-stage process of demand for communication services: At first users build a kind of evoked set of media that are generally suitable for the communication need and secondly they choose their medium according to their preferences. • Decision theories (Heinen, 1982; Diller, 1991) describe the decision for a communication medium by its goals, alternatives and the situation in which it takes place. • Theories of critical mass systems (Weiber, 1992; Rogers, 1995) analyse the barriers for the usage of new telecommunication systems. These theories lead to an explanatory model that describes the process of media selection as depicted in Figure 2. The cause of communication determines the communication need that finally leads to media usage. According to the two-stage process of demand there are two influencing areas to distinguish: the suitability of a set of media is determined by the communication need that results from the message (area: object of communication). This selection is independent of the sender. The company’s or the employee’s situation leads to the final media selection (area: subject of communication). Object of Communication. The cause of communication in companies lies in a business process that requires the delivery of information. We differentiate between structured and unstructured communication depending on the degrees of freedom when information is encoded to become a message. 
Figure 2: Explanatory Model of Media Selection
[Figure: in the object area, the cause of communication determines the communication need, which determines the suitability of media and hence a set of relevant media; in the subject area, the communication situation drives the decision among these media, leading to media application and the selected medium.]

If the cause of communication emerges regularly and traditionally and therefore leads directly to a message in a fixed form with fixed contents, such as orders, invoices, or contracts, we talk about structured communication. If the employee himself has to judge the content and intention of the message as well as his relationship to the recipient, the same cause of communication can lead to different messages. This communication is termed unstructured. The following part focuses on structured communication. Communication needs in structured communication can be described by the requirements that result from the content, the intention and the relationship between sender and recipient. Four clusters of structured communication that take place during the transaction process were found empirically:
• Contact communication (advertising and information) occurs at the beginning of the transaction process. It has to be prestigious (reflect the company’s personality), of great variety and quality. It tries to persuade the recipient and therefore must be able to transport emotions. The partners of communication are often unknown. Suitable for contact communication are letters and multimedia communication, e.g., via the Internet.
• Dialogue communication (customer advice, customer care) is needed at the beginning of the negotiation phase. It requires fast and dialogue-oriented media that make personal discussions possible, like telephone and email.
• For negotiation communication (offer, order, confirmation of order), speed is still important but it also has to offer confidentiality and the retention of legitimacy. Personal nearness is not as important as in earlier phases, but negotiation communication still needs media that lead to an agreement. A suitable medium today is EDI, especially in close and steady relations.
• The main demands of transaction communication (invoice, reminder) are confidentiality and the retention of legitimacy. Communication has to be prestigious and factual. For most companies the letter is the medium of choice.
The suitability of media for communication results from a comparison of media features and communication needs. In addition to the requirements mentioned above, the suitability of media in the business process can be analysed. Subject of Communication. The subject of structured communication is the associated field in which it takes place, namely the company with its internal situation, e.g., business goals and strategies, its external situation, e.g., the suppliers, competitors and customers, and the macro situation with the legal situation, emerging technologies, social and political influences, etc. Empirically, we found that, for example, the following characteristics of the company influence the final media decision:
• Companies where innovation is important tend to use innovative media as well.
• Companies that have a big share of business customers prefer media that offer personal communication, confidentiality and security.
• Regular relations to business partners lead to an acceptance of less secure media.
• Companies that have to perform in a market with increasing competition select media that are fast.
• Preferences of the communication partners are an important influencing factor in media selection.
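The two-stage model described above can be made concrete in code. The sketch below is only an illustration under assumed data (the media features, requirement profiles and preference orderings are invented, not taken from the research project), but it shows the shape of the process: first an evoked set of suitable media is derived from the communication need (object of communication), then the final medium is chosen according to situational preferences (subject of communication).

```python
# Illustrative sketch of the two-stage media-selection model.
# All media features, requirement profiles and preferences are
# hypothetical examples, not data from the research project.

MEDIA_FEATURES = {
    "letter":    {"fast": False, "dialogue": False, "confidential": True,  "prestigious": True},
    "telephone": {"fast": True,  "dialogue": True,  "confidential": True,  "prestigious": False},
    "email":     {"fast": True,  "dialogue": True,  "confidential": False, "prestigious": False},
    "edi":       {"fast": True,  "dialogue": False, "confidential": True,  "prestigious": False},
}

def evoked_set(required):
    """Stage 1 (object): media whose features satisfy the communication need."""
    return [m for m, f in MEDIA_FEATURES.items()
            if all(f[feature] for feature in required)]

def select_medium(required, preference_order):
    """Stage 2 (subject): the most preferred medium from the evoked set."""
    candidates = evoked_set(required)
    for medium in preference_order:
        if medium in candidates:
            return medium
    return None

# Dialogue communication needs fast, dialogue-oriented media; an
# innovation-oriented company might rank newer media first.
print(evoked_set(["fast", "dialogue"]))  # ['telephone', 'email']
print(select_medium(["fast", "dialogue"], ["email", "telephone", "letter"]))
```

The design mirrors the model’s claim that stage 1 is sender-independent (it depends only on the message’s requirements) while stage 2 carries all the situational influences, such as innovation orientation or partner preferences.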
Overview of a Concept of Communication Management Organisations are becoming increasingly aware of the fact that communication is gaining the status of a valuable, if not indispensable, management tool (van Riel, 1995). The market-oriented approach introduced below aims at a credible and consistent picture of the company. From this point of view, the growth of different forms of communication has resulted in a tendency for the sum of communication activities to amount to no more than its constituent parts. Actively planning and implementing corporate communication programmes can have a strategic influence on a company’s success. Aside from external business communication, it is possible to distinguish three basic forms of communication in organisations.
• Marketing communication covers all communication activities that are meant to increase the profits of the company. It tries to persuade customers in all stages of the buying process. Communication instruments in marketing communication are, for example, advertising and sales promotions.
• Public relations communication has to take all stakeholders of an organisation (the different social groups associated with the company) into account. It is not meant to persuade them but to avoid conflicts and promote cooperation with them.
• Internal communication consists, for example, of management communication with the employees, company newspapers or circulars. It becomes very important considering that employees often transmit the image of the organisation to the public.
A concept for corporate communication has to integrate and harmonise these communication forms by formulating common goals and communication strategies. For the different communication forms, and subsequently the different communication instruments, this leads to substrategies according to their basic orientation and partners. A management of external business communication is thereby embedded in the concept of corporate communication as a substrategy.

Figure 3: Overview of a Concept of Corporate Communication
[Figure: a situation analysis (internal: the company; external: competition, customers, suppliers; macro: influencing areas, e.g., politics) underpins a hierarchy of goals (main goal, subgoals for target groups, single goals for communication instruments) and a corresponding hierarchy of strategies (communication strategy, substrategies for communication forms, single strategies for communication instruments).]

A concept of corporate communication takes into consideration:
• the strategic orientation of the company (cost leadership, differentiation or concentration),
• the kind and structure of stakeholders of the company and
• a description of the company’s communication processes.
The market-based approach in communication management leads to the following steps:
• definition of the goals of corporate communication as well as of the goals of the different forms of communication,
• formulation of a communication strategy, the substrategies for communication forms and the single strategies for communication instruments.
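The goals and strategies described above form a simple hierarchy: a main goal refined into subgoals per target group, refined in turn into single goals per communication instrument. As a purely hypothetical sketch (the class and example goal texts are invented for illustration, though the 20% consultation goal echoes the example given later in the case), such a hierarchy might be represented as:

```python
# Hypothetical sketch of the goal hierarchy: main goal -> subgoals for
# target groups -> single goals for communication instruments.
from dataclasses import dataclass, field

@dataclass
class Goal:
    statement: str
    children: list = field(default_factory=list)

    def flatten(self):
        """All goals in the hierarchy, top-down."""
        return [self] + [g for c in self.children for g in c.flatten()]

main = Goal("Support positioning as the most innovative in the branch")
customers = Goal("Increase perceived technical competence among customers")
main.children.append(customers)
customers.children.append(
    Goal("Raise image rating for technical competence via consultation by 20%"))

print(len(main.flatten()))  # 3
```

The same tree shape would serve for the strategy side (communication strategy, substrategies, single strategies), which is the point of the concept: every instrument-level goal or strategy traces back to the one main goal.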
SETTING THE STAGE In 1997 Bissantz & Company GmbH faced the development typical of many start-up companies: tasks, activities and decisions had to be delegated. In a service-sector company, the main problem of this delegation lies in the handling of external contacts. Customer advice, sales, customer service, as well as scientific lectures at conferences and conversations with (potential) partners and offices, are tasks that had formerly been carried out only by the responsible managing director. The division of tasks among new employees naturally leads to decentralised communication with customers, partners and the public. The need for a strategic concept for communication with external partners arose, which was to include the management of the contents to be communicated as well as new IT-based communication systems that can support the structured communication of the company.
CASE DESCRIPTION The whole concept of corporate communication was applied to Bissantz & Company GmbH in cooperation with its management. Dr. Nicolas Bissantz provided the company’s business strategies in numerous discussions. After a first concept for communication management had been worked out by the members of the chair of business computing, the results were discussed, modified and finally accepted in two joint workshops. Each of the following sections is divided into a description of the theoretical concepts for communication management and their practical realisation in the company.
Communication Goals Theoretical Aspects Communication goals result from a strategic positioning of the company and its competitors (van Riel, 1995) that can be achieved by an analysis similar to a SWOT (strengths, weaknesses, opportunities and threats) analysis. We use a strategic positioning that was developed for “Marketing Image Management” by Bruhn (1995). Its axes are strengths/weaknesses and high/low relevance for the target groups. It shows the position of the company as intended by the management (corporate identity) and as observed by the public (corporate image). Important characteristics of the company can be found in the upper right quadrant. Practical Analyses and Results The strategic objective and subjective positioning of Bissantz & Company GmbH and two important competitors is depicted in Figure 4. The main communication goal of the company can be formulated from the acquired common starting points (CSPs): “Strict support of the positioning of the company as the most innovative of the branch of industry, with the highest technical competence and a strong interest in social matters. The company distinguishes itself by highest quality, flexibility and personal care for its customers.” Subgoals refine this main goal with regard to the target groups and the intended effect of communication. Single goals can be formulated for the communication instruments: “Increase of the corporate image with regard to the characteristic ‘technical competence’ through optimal consultation by 20% in the next year.”
Communication Strategies
How communication goals can be achieved by communication instruments is the subject of communication strategies. Following the fields of the basic strategic concepts of management, we propose communication strategies as depicted in Figure 5. Communication strategies are always based on business strategies such as the competitive strategy or the partner strategy. A basic decision is whether corporate communication can promote the business strategy, has to support it or at least should not counteract it. At first we make decisions for the message (the message content and its requirements regarding media characteristics) in all strategic fields. In a second step we choose the media that are suitable for these requirements, preferably on the level of communication instruments. This justifies extensive analyses of single communication activities, because message strategies can be the long-term basis for selecting media from the constantly changing offer.
Personality Strategy
Theoretical Aspects. The personality strategy orients the messages, their requirements and the media strictly to the corporate personality and the strategic positioning. It fixes the common starting points (CSPs) for all communication forms. To filter the CSPs out of the strategic positioning, we suggest a matrix with an internal and an external axis for communication, namely the potential
334 Robra-Bissantz
Figure 4: Strategic Positioning of Bissantz & Company GmbH (axes: strengths / weaknesses and high / low relevance for the target groups; characteristics with high relevance: quality, innovation, technical competence, nearness, social responsibility; further characteristics shown: flexibility, personal care, location; the figure marks the objective, subjective and intended positioning of the company as well as the objective and subjective positioning of two competitors)
Figure 5: Communication Strategies (message and media decisions on three levels: the personality strategy, the communication partner strategy and the competitive communication strategy; below them, substrategies for the communication forms external business communication, marketing communication, public relations work and internal communication; and, per type of communication, single strategies for communication instruments such as advertisement, investor relations or the company newspaper)
A Case on Communications Management
335
of the characteristic that can be derived from the strategic positioning, and the relevance of communication for the transmission of the characteristic. Once the CSPs are fixed, they can find their expression in a strategic theme for communication and in aspects of corporate design (Bruhn, 1995).
Practical Analyses and Results. Figure 6 depicts the matrix for Bissantz & Company GmbH. The characteristics in the upper right quadrant, namely innovation, social responsibility, quality and technical competence, are currently the most important CSPs. Their potential is high because they are important for customers, a strength of the company and needed for differentiation from competitors, and because for some characteristics the objective positioning is worse than intended as part of the corporate identity (see Figure 7). On the other axis, the relevance of communication for the characteristic is high because the subjective positioning is worse than the objective one, or because it is an important image criterion and communication is able to transmit it. The characteristics in the left half of the matrix are no or subordinate CSPs. The characteristics in the lower right quadrant have to be analysed separately. Personal care becomes a CSP as well, because it is a strength of the company and its importance for the customer can be enhanced by communication. The strategic theme of communication for Bissantz & Company can be formulated as: "We provide highly specialised innovative solutions for problems of data analysis." The decision for the plain but extraordinary and "young" typeface "earth" supports the company's characteristics innovation and flexibility as well as the technological area of business.
Figure 6: Relevance/Potential Matrix for Communication (axes: potential of the characteristic and relevance of communication; quadrants: CSP with innovation, quality, technical competence and social responsibility; subordinated CSP; no CSP; to be analysed separately: flexibility, personal care, nearness, location)
Figure 7: Typeface of Bissantz & Company GmbH
Competitive Communication Strategy
Theoretical Aspects. The competitive communication strategy can, for example, be derived from the competitive strategy of the company according to Porter (1980, 1985). A strategy of cost leadership calls for communication activities at their lowest possible cost, as derived from a value analysis. A big potential for cost reduction can be expected from an analysis of unstructured communication processes. A concentration strategy has a strong impact on media selection in marketing communication, as it leads to a preference for media of low range (specialised journals instead of magazines, direct mail instead of television). For external business communication, a concentration strategy may lead to the implementation of an EDI system for a small number of regular customers. In a strategy of differentiation, communication can be the differentiation advantage itself, enabling for example fast transactions or professional advice. Especially if the differentiation advantage is innovation, it can be supported by communication media, which then have to be innovative as well.
Practical Analyses and Results. The company follows a concentration strategy with a differentiation advantage (highly specialised, innovative product and consistent customer focus). This leads to a competitive communication strategy that includes, for example:
• the differentiation advantage of high-quality customer consulting and customer care,
• a technology leadership role concerning innovative media,
• a strict focus on customer service and innovation for all communication instruments,
• media of low range for all communication activities.
Communication Partner Strategy
Theoretical Aspects. Just like the competitive communication strategy, the communication partner strategy has to support superordinated partner strategies. Marketing theory introduces the main strategic alternatives of differentiated (by means of a market segmentation) and undifferentiated treatment of the customers. Communication partner strategies have to consider additional relevant groups of stakeholders (Zerfaß, 1996). If the company pursues a differentiated treatment of customers, the communication partner strategy may support this differentiation or even facilitate it by using different communication channels for different customer groups. Additionally, the communication partner strategy has to decide on a differentiated treatment of the other target groups of communication. Grunig and Repper (1992) suggest judging the stakeholder groups by their involvement, problem recognition and constraint recognition in order to build publics of high interest. If a company decides to treat these high interest publics distinctively, it has to modify the CSPs for the target groups. A
Table 1: Stakeholders of Companies

economical sphere: customer, interested person, prospective customer, dealer, supplier, cooperation partner, contact person
organisational sphere: management, employees
extra economical sphere: claimholders (e.g. science, critics, neighbours), general public (journalists, opinion leaders, citizens), politics and administration
Table 2: Analysis of Stakeholders at Bissantz & Company GmbH

stakeholder            I  T  S  Q  F  P  CSPs
customer               2  3  2  3  3  3  T, Q, F, P
prospective customer   3  3  2  3  2  3  I, T, Q, P
co-operation partner   3  3  2  3  3  2  I, T, Q, F
supplier               1  1  2  3  3  1  Q, F
employee               3  2  3  3  3  3  I, S, Q, F, P
general public         3  3  3  2  2  2  I, T, S
science                3  3  1  3  1  1  I, T, S [sic: I, T, Q]

valuation: 1 = low priority, 3 = high priority; I: innovation, T: technical competence, S: social responsibility, Q: quality, F: flexibility, P: personal care
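The pattern in Table 2 is that a characteristic becomes a CSP for a stakeholder group exactly when that group rates it with the highest priority, 3. A minimal sketch of this derivation (the rule itself is our reading of the table data, not stated as a formula in the case):

```python
# Ratings from Table 2: each stakeholder group's priority (1..3) for the six
# characteristics of Bissantz & Company GmbH.
CHARACTERISTICS = ["I", "T", "S", "Q", "F", "P"]  # innovation, technical
# competence, social responsibility, quality, flexibility, personal care

RATINGS = {
    "customer":             [2, 3, 2, 3, 3, 3],
    "prospective customer": [3, 3, 2, 3, 2, 3],
    "co-operation partner": [3, 3, 2, 3, 3, 2],
    "supplier":             [1, 1, 2, 3, 3, 1],
    "employee":             [3, 2, 3, 3, 3, 3],
    "general public":       [3, 3, 3, 2, 2, 2],
    "science":              [3, 3, 1, 3, 1, 1],
}

def csps_for(group: str) -> list[str]:
    """Adapted CSPs for a stakeholder group: all characteristics rated 3."""
    return [c for c, r in zip(CHARACTERISTICS, RATINGS[group]) if r == 3]

for group in RATINGS:
    print(group, "->", ", ".join(csps_for(group)))
```

Running the sketch reproduces the CSPs column of Table 2 for every group, e.g. T, Q, F, P for customers and I, S, Q, F, P for employees.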
hierarchic structure that derives the message contents for the target groups from the CSPs of the company via the CSPs for the publics takes into account that nobody can be excluded from communication that was initially not intended for him or her.
Practical Analyses and Results. In the field of customer-oriented strategies, the company has decided to address different market segments distinctively. The communication partner strategy can facilitate this, for example by offering specialised information only to important customers via the Internet. These customers have accounts and passwords on the Web site (www.bissantz.de) and can retrieve the newest product information and advice. The analysis of the most important stakeholders leads to the important publics and the prioritisation of two groups (cooperation partners and science) as depicted in Table 2, together with their ratings and the adaptation of the CSPs. For cooperation partners there is also a secured entrance to specialised pages on the Internet. Scientifically interested people can find information on research and development as well as publications in a category named background.
Strategies for Communication Forms
Theoretical Aspects. The substrategies for communication forms take into account that marketing communication, public relations work and external business communication can transmit different characteristics of a company. Therefore each communication form has to concentrate on a subset of the CSPs. An analysis of the communication forms (Robra-Bissantz, 2000) identifies advantages for each of them in developing certain effects. Whereas the domain of marketing communication is to transmit emotions and to persuade, public relations work rather applies personal communication for mutual agreement.
The speciality of external business communication is that it transmits characteristics mainly by "deeds": it transmits, for example, quality not by talking about quality (the content of a message, e.g. in marketing communication) but by using high-quality paper and print for information brochures and letters. An overview of the results of an analysis of the communication forms is provided in Table 3. To find out which CSPs should be assigned to which communication form, one has to decide for every single CSP whether it can be transmitted best via agreement, persuasion, information, personal nearness, content or "deed". The results of this analysis have to be connected with those depicted in Table 3. By assigning CSPs to communication forms, the company also influences the CSPs that are transmitted to each target group. Therefore, in a final combined analysis of CSPs for communication forms and CSPs for target groups, the company has to ensure that the objectives of the communication partner strategy are fulfilled.
Practical Analyses and Results. For Bissantz & Company GmbH, the CSPs can be assigned firstly to the specialities of the communication forms and then to the communication forms themselves (see Table 4 and Table 5).
Table 3: Effects of Communication Forms

effect          marketing com.  public relations  external b. com.  internal com.
agreement             1               3                 2               3
information           2               3                 3               3
persuasion            3               1                 2               3
pers. nearness        1               2                 3               3
contents              3               3                 2               2
"deeds"               1               1                 3               2

valuation: 1 = communication form is not suitable for the effect, 3 = communication form is suitable for the effect
Marketing communication is therefore suitable to transmit innovation and quality. Public relations work should additionally be used for the communication of social responsibility. The CSPs for external business communication are mainly technical competence, flexibility and personal care. The final analysis of whether the objectives of the communication partner strategies can be achieved with this assignment of CSPs to the communication forms is depicted in Table 6. The arrows indicate whether the target groups can be reached by the communication forms in the special situation of Bissantz & Company GmbH (small company, relatively small number of customers, suppliers and prospective customers, as a result of the competitive strategy). The fields of Table 6 contain those CSPs which are intended for the target group and provided by the communication form. An evaluation of all fields in one row of the table leads to a decision whether all CSPs for the target group can be transmitted by the communication forms (agreement between the strategies for communication forms and communication partners). In the case of Bissantz & Company GmbH, the only CSP that is missing for the target groups science and general public is technical competence. Therefore the company should add this CSP to the CSPs of public relations work, which is perfectly suitable to communicate this characteristic (see Table 5).
Strategies for External Business Communication
In a next step, we put the emphasis on external business communication. We suggest that external business communication should be subordinated to the main communication forms marketing communication and public relations work. Therefore the following section on theoretical aspects at first

Table 4: CSPs and Effects

CSP           agreement  information  persuasion  pers. nearness  contents  "deeds"
innovation        1           2           3             2             3        1
tech. comp.       3           3           1             3             2        3
soc. resp.        3           3           1             1             3        1
quality           1           2           3             2             3        1
flexibility       1           1           1             3             1        3
pers. care        3           2           1             3             2        3

valuation: 1 = the CSP is transmitted badly by the effect, 3 = the CSP is transmitted efficiently by the effect

Table 5: CSPs and Communication Forms

CSP           marketing com.  public relations  external b. com.  internal com.
innovation          1               2                 2               1
tech. comp.         7               3                 1               1
soc. resp.          3               0                 2               1
quality             1               2                 2               1
flexibility         4               3                 0               1
pers. care          6               3                 1               1

valuation: outcomes of the linkage of Table 3 and Table 4; 0 = the CSP is transmitted efficiently by the communication form, 7 = the CSP is transmitted badly by the communication form
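The case does not spell out the linkage formula behind Table 5. The following reconstruction (an assumption on our part) scores each CSP/communication-form pair as the total shortfall of the form against the effects the CSP needs, i.e. the sum over all effects of max(0, CSP rating − form rating); this reproduces every value in Table 5, with 0 meaning "transmitted efficiently" and higher values "transmitted badly":

```python
# Requirements per effect, in the order: agreement, information, persuasion,
# personal nearness, contents, "deeds".
EFFECTS = ["agreement", "information", "persuasion",
           "pers. nearness", "contents", "deeds"]
FORMS = ["marketing com.", "public relations",
         "external b. com.", "internal com."]

# Table 3: suitability of each communication form for each effect (1..3).
FORM_EFFECTS = {
    "marketing com.":   [1, 2, 3, 1, 3, 1],
    "public relations": [3, 3, 1, 2, 3, 1],
    "external b. com.": [2, 3, 2, 3, 2, 3],
    "internal com.":    [3, 3, 3, 3, 2, 2],
}

# Table 4: how well each effect transmits each CSP (1..3).
CSP_EFFECTS = {
    "innovation":  [1, 2, 3, 2, 3, 1],
    "tech. comp.": [3, 3, 1, 3, 2, 3],
    "soc. resp.":  [3, 3, 1, 1, 3, 1],
    "quality":     [1, 2, 3, 2, 3, 1],
    "flexibility": [1, 1, 1, 3, 1, 3],
    "pers. care":  [3, 2, 1, 3, 2, 3],
}

def linkage(csp: str, form: str) -> int:
    """Table 5 score: total shortfall of the form against the CSP's needs."""
    return sum(max(0, c - f)
               for c, f in zip(CSP_EFFECTS[csp], FORM_EFFECTS[form]))

# e.g. technical competence via marketing communication scores 7 (badly),
# flexibility via external business communication scores 0 (efficiently).
print(linkage("tech. comp.", "marketing com."))   # -> 7
print(linkage("flexibility", "external b. com."))  # -> 0
```

This also makes the textual conclusions checkable: external business communication gets the lowest scores for technical competence, flexibility and personal care, marketing communication for innovation and quality, and public relations work for social responsibility.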
Table 6: CSPs for Target Groups and Communication Forms
(Each cell contains the CSPs that are both intended for the target group and provided by the communication form. The arrows of the original table, indicating whether a communication form reaches a target group, could not be reproduced here.)

CSPs of the communication forms: marketing: inno., qual.; public relations work: soc. res., inno., qual.; external business communication: flex., pers. care, tech. comp.; internal communication: all CSPs

target group (CSPs of the group): marketing | public relations work | external business communication | agreement
customers (tech. comp., qual., flex., pers. care): qual. | qual. | flex., pers. care, tech. comp. | yes
prospective customers (inno., tech. comp., qual., pers. care): inno., qual. | inno., qual. | tech. comp., pers. care | yes
co-operation partners (inno., tech. comp., qual., flex.): inno., qual. | inno., qual. | flex., tech. comp. | yes
suppliers (qual., flex.): qual. | qual. | flex. | yes
employees (inno., qual., flex., pers. care, soc. res.): inno., qual. | inno., qual., soc. res. | flex., pers. care, inno. etc. (via internal communication) | yes
general public (inno., tech. comp., soc. res.): inno. | soc. res., inno. | (not reached) | lack of tech. competence
science (inno., tech. comp., qual.): inno., qual. | inno., qual. | tech. comp. (target group not reached) | lack of tech. competence
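The final combined analysis behind Table 6 can be sketched as a coverage check: each cell is the intersection of a target group's CSPs with a communication form's CSPs, and a group is fully served when the union of the cells of the forms that actually reach it covers all of its CSPs. The reachability assumption below (external business communication does not reach science or the general public) follows the case text on the small-company situation; employees, served by internal communication with all CSPs, are omitted for brevity:

```python
# CSPs assigned to the communication forms (from the strategy analysis).
FORM_CSPS = {
    "marketing":        {"inno.", "qual."},
    "public relations": {"soc. res.", "inno.", "qual."},
    "external b. com.": {"flex.", "pers. care", "tech. comp."},
}

# CSPs intended for each target group (from Table 2).
GROUP_CSPS = {
    "customers":             {"tech. comp.", "qual.", "flex.", "pers. care"},
    "prospective customers": {"inno.", "tech. comp.", "qual.", "pers. care"},
    "co-operation partners": {"inno.", "tech. comp.", "qual.", "flex."},
    "suppliers":             {"qual.", "flex."},
    "general public":        {"inno.", "tech. comp.", "soc. res."},
    "science":               {"inno.", "tech. comp.", "qual."},
}

# Assumption: which forms do NOT reach which groups in this small company.
UNREACHABLE = {
    "general public": {"external b. com."},
    "science": {"external b. com."},
}

def missing_csps(group: str) -> set[str]:
    """CSPs of the group that no reachable communication form transmits."""
    covered = set()
    for form, csps in FORM_CSPS.items():
        if form not in UNREACHABLE.get(group, set()):
            covered |= GROUP_CSPS[group] & csps
    return GROUP_CSPS[group] - covered

for group in GROUP_CSPS:
    gap = missing_csps(group)
    print(group, "->", "yes" if not gap else "lack of " + ", ".join(gap))
```

The sketch reproduces the agreement column of Table 6: every group is fully served except science and the general public, which lack technical competence; hence the recommendation to add this CSP to public relations work.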
covers the influences of marketing communication and public relations work on the communication instruments (communication types) of external business communication. Secondly it deals with basic strategies for the communication types of external business communication. The section that deals with the practical application of the concepts takes into account that external business communication can transmit CSPs:
• through the content of the messages, concerning the factual content, the intention and the relationship between sender and recipient,
• through the requirements on the medium that result from the contents,
• through the medium itself.
It shows the strategies for message and media decisions at Bissantz & Company GmbH.
Theoretical Aspects
Influences of Public Relations Work and Marketing Communication. Marketing communication tries to persuade customers by emotion, information or actuality in order to influence, e.g., their attitude and behaviour. It often uses an image of experiences that can be associated with the company or a product. For external business communication it is very important that the contents and media of communication types which also intend to persuade the customer (the communication type contact communication) reflect the same experiences as marketing communication. Public relations work aims at a mutual understanding between the company and its publics. A two-way symmetrical model of communication, where the company discusses potential conflicts with small groups, should be favoured. For external business communication this leads to the insight that communication types which require dialogue-oriented communication with personal nearness (the communication types dialogue communication and negotiation communication) should be influenced by the public relations work of the company.
Basic Strategies for Communication Types. Communication types of external business communication can be characterised by their requirements.
To find basic strategies, we analyse whether the fulfilment of the special requirements of the communication types can support the CSPs of external business communication. Contact communication can be used to transmit all CSPs that are connected with the image of the company, especially its personality, the high quality of its products, its creativity and design or
emotional experiences. Dialogue communication is suitable to communicate CSPs that promote the company's speed, its personal nearness or its ability to discuss and find agreements. Negotiation communication has to show the same CSPs as dialogue communication and additionally CSPs like security or confidentiality. The latter CSPs can be supported by transaction communication as well. Basic strategies for the communication types with special requirements are depicted in Figure 8.
Practical Analyses and Results
As far as the content of messages of external business communication is concerned, all communication types at Bissantz & Company GmbH should emphasise the company's technical competence and personal care. For the communication types, an analysis of the requirements leads to the following strategies:
• Contact communication of Bissantz & Company GmbH has to emphasise the personality (with the CSPs technical competence, personal care and flexibility). Furthermore it has to support marketing communication with the CSPs quality and innovation. This leads to multimedia communication of high quality and technical standard, with logos and slogans.
• With dialogue communication the company can transmit the CSPs personal care and technical competence by fulfilling this communication type's special requirements of dialogue, agreement and personal nearness. Therefore the company should use, for example, the telephone or a video-phone for dialogue communication.
• Because there are no CSPs concerning security aspects at Bissantz & Company GmbH, negotiation communication can transmit the same CSPs as dialogue communication. Because negotiation communication has additional requirements like the retention of legitimacy and confidentiality, suitable media are the video-phone and email.
As in negotiation communication, there are no special requirements resulting from the CSPs for transaction communication.
Figure 8: Basic Strategies for Communication Types
(The figure plots the requirement profiles of the communication types contact, dialogue, negotiation and transaction on a scale from 1.00 to 5.00 over the requirements processability, speed, comfort, interaction, agreement, personal nearness, retention of legitimacy, quality, confidentiality, personality, variety and emotion. Three basic strategies are marked: a "security and automation" strategy, an "agreement" strategy with public relations orientation and an "image" strategy with marketing orientation.)
Figure 9: Relevance/Potential Matrix for External Business Communication (axes: potential of communication, ranging from tradition to innovation, and relevance of external business communication; positioned communication activities: information, advertising, customer advice, customer care, offer, reminder, confirmation of order, invoice)
Concerning media application, the CSP innovation leads to a preferential use of new media. But before a company invests in new media, it should analyse whether communication, and particularly external business communication, is not only able to transmit CSPs but is also relevant for the transmission of the characteristics of the company. A portfolio (see Figure 9) shows that at Bissantz & Company GmbH it is dialogue communication (with customer care and customer advice) that should be supported by innovative media. To search for innovative media for customer advice and customer care, we suggested a brainstorming session with the management of the company. The traditional medium for customer advice and customer care is the telephone. A discussion about possible innovative media starts from this medium and takes into account the special requirements that result from the CSPs of external business communication. Possible new media for customer care and customer advice are:
• a video-phone;
• an automatic call distribution system that distributes incoming calls among different consultants, with the possibility of giving preference to important customers;
• computer-integrated telephony, which automatically provides all data about the customer with the incoming call;
• voice response systems, which allow a dialogue between the caller and the computer through, for example, speech recognition;
• call centres, where incoming calls are handled by specialised areas of the company.
All new media have different advantages, as depicted in Figure 10. Bissantz & Company GmbH prefers systems that support personal and specialised consultation to systems that offer any-time availability. A call centre is rather suitable for larger companies. This leads to the recommendation that computer-integrated telephony or the video-phone should be used for communication during customer advice and customer care.
Figure 10: Brainstorming of Innovative Media (the figure rates the telephone, video-phone, automatic call distribution, call centre, computer-integrated telephony and voice response on a scale from 0 to 5 against the requirements of customer care and customer advice, namely speed, interaction, agreement, personal nearness, quality, variety, emotion, personality, confidentiality, legality, comfort, processability, time availability and ubiquity, and against the content CSPs technical competence and personal care)
CURRENT CHALLENGES/PROBLEMS FACING THE ORGANIZATION
One important experience from the case is that the proposed concept of communication management leads to satisfying results when carried out in a company. At Bissantz & Company GmbH, several outcomes of the study could be observed. First of all, the management of Bissantz & Company GmbH generally accepted the proposal of computer-integrated telephony. The recommendation of a video-phone had to be rejected, as not enough partners of the company use this equipment. Up to the present there is no complete integration, but a comprehensive database system that every employee can use. The advantage of this solution is that the phone number, the name or the company of the calling person can all be used as an identifier. Bissantz & Company GmbH is now planning the introduction of an integrated system for its sales department. Secondly, Bissantz & Company GmbH became aware of the importance of a harmonised and differentiated corporate communication and its impact on single strategies for contents and media. The strategic orientation of communication with all stakeholders remains visible and is now the basis of, e.g., the structure and contents of the company's Web site. All communication processes, especially those with the customers, have been thoroughly analysed and structured. The database, which was meant to facilitate competent and personal consultation, now integrates all information about every single contact (via telephone, letter, email or personal contact) with every single communication partner. It contains information about conditions, offers, invoices and reminders (together with a reference to the physical file where the paper-based copies can be found), customer
consultations, hot and cold leads, the personality of customers and partners as well as the kind of relationship with them (e.g., a difficult or demanding customer). Because most employees essentially started their work with the new system, hardly any acceptance problems could be observed. It also turned out to be a good decision that no standard computer-integrated telephony system was introduced; instead, developers of Bissantz & Company GmbH created their own system. In this way, many proposals for improvements emerged from their own ranks, which additionally increased acceptance. All employees use this computer-supported telephony in their daily work. The management of Bissantz & Company GmbH reports significant time savings (compared to the time before the system was completely introduced) for all employees, which means efficient usage of expensive resources. Queries to the managing director during customer advice have hardly ever been necessary. There is hardly any case known where problems have arisen from differing statements to one customer. If a customer complains about, e.g., promised conditions (which happens about once or twice a year), every employee is able to trace every single contact with that customer by content, date and time. In case of the absence of single employees or even the management, the database contains all the information necessary to carry on the processes with customers or partners. The management and the employees consider the system, together with all further developments that were initiated by the basic analysis, a valuable and important tool for daily and strategic work. Further challenges in communication management are to improve the customer interface in a holistic approach and to include the employees' communication. New technological developments enable and call for a simultaneous use of different media that are assembled on a Web site or envisaged in multi-channel e-business strategies.
As a result, integration needs on different levels arise for communication with customers. Firstly, systems using the contact database, as described above, should be able to collect and evaluate information from all customer contacts such as emails, chat contributions or traces on the Web site. For email communication, a solution that manages incoming and outgoing emails using the database is currently being investigated. Finding methods for a stepwise automatic analysis of individual contact information to support a kind of learning relationship is still a demanding task. Secondly, there are more and more tasks at the customer interface that can be carried out by means of telecommunication. Therefore external business communication will comprise more tasks in the future, e.g., transaction settlement and services. Defining, e.g., new digital services for the customer is a task of the strategic management of the customer interface, where the digital offers, structures and technologies that are presented to the customer are investigated. In terms of communication management, new digital services lead to additional communication processes that have to be planned according to the CSPs of corporate communication. At Bissantz & Company GmbH, the next step in designing the structures of the customer interface is to build a platform that facilitates the exchange of information between customers. Prospective additional digital services are remote consulting and remote system configuration in, e.g., shared-screen applications. In times where employees often have to carry out tasks on site at customer companies, internal communication should not be restricted to one-way managerial directives. Therefore, in a next step of communication management, the communication processes of employees should be thoroughly analysed in order to work out communication types similar to those of external business communication.
Obviously, new mobile technologies form the basis of appropriate media for internal business communication. Still, the CSPs of communication with employees and the effects of communication processes between them have to be investigated prior to investments in new technologies. More decentralised structures may consequently lead to new problems, because it becomes more demanding to include all contact information in the contact database. Besides, individual employees may not be accustomed to the corporate style of communicating with customers. In these cases, incentive strategies have to be employed to reward adding new information to the database, and
employees have to be trained in personal contacts with customers. In any case, especially if Bissantz & Company GmbH experiences further growth, the integration of employees as active partners in communication management will become a demanding task.
FURTHER READING Robra-Bissantz, S. (2000). Strukturen, Entwicklungen und Strategien der externen Unternehmenskommunikation. Berlin: dissertation.de or http://www.dissertation.de/html/robrabissantz_susanne.htm.
REFERENCES
Austin, J.L. (1989). Zur Theorie der Sprechakte. Ditzingen: Reclam.
Böcker, F. (1992). Präferenzforschung. In Diller, H. (Ed.), Vahlens großes Marketinglexikon. München: Vahlen, 884-887.
Bruhn, M. (1995). Integrierte Unternehmenskommunikation. Stuttgart: Schäffer-Poeschel.
Diller, H. (1991). Preispolitik. Stuttgart: Kohlhammer.
Grunig, J.E. & Repper, F.C. (1992). Strategic Management, Publics and Issues. In Grunig, J.E. (Ed.), Excellence in Public Relations and Communication Management. Hillsdale: Lawrence Erlbaum Associates, 117-158.
Howard, J.A. & Sheth, J.N. (1969). The Theory of Buyer Behaviour. New York: Wiley.
Porter, M.E. (1980). Competitive Strategy. New York: The Free Press.
Porter, M.E. (1985). Competitive Advantage. New York: The Free Press.
Robra-Bissantz, S. (2000). Strukturen, Entwicklungen und Strategien der externen Unternehmenskommunikation. Berlin: dissertation.de.
Rogers, E.M. (1995). Diffusion of Innovations: Modifications of a Model for Telecommunications. In Stoetzer, M.-W. & Mahler, A. (Eds.), Die Diffusion von Innovationen in der Telekommunikation. Berlin: Springer, 25-38.
Schulz von Thun, F. (1981). Miteinander reden I – Störungen und Klärungen. Reinbek bei Hamburg: Rowohlt.
Searle, J.R. (1993). Sprechakte: Ein sprachphilosophischer Essay. Frankfurt: Suhrkamp.
Van Riel, C.B.M. (1995). Principles of Corporate Communication. London: Prentice Hall.
Watzlawick, P., Beavin, J.H. & Jackson, D.D. (1996). Menschliche Kommunikation: Formen, Störungen, Paradoxien. Göttingen: H. Huber.
Weiber, R. (1992). Diffusion von Telekommunikation: Das Problem der kritischen Masse. Wiesbaden: Gabler.
Zerfaß, A. (1996). Unternehmensführung und Öffentlichkeitsarbeit. Opladen: Westdeutscher Verlag.
BIOGRAPHICAL SKETCH
Susanne Robra-Bissantz was born in 1965. She studied management science at the University of Erlangen-Nuremberg with the major subjects information systems, marketing and statistics. Afterwards she worked as an assistant professor at the chair of information systems (Prof. Dr. Freimut Bodendorf). Besides research and teaching, she gained practical experience in projects with several companies. She finished her doctoral thesis on structures, trends and strategies of external business communication in 1999. She is now working on her habilitation in the field of Electronic Business Management with a special focus on the customer interface.
A Case Study of One IT Regional Library Consortium 345
A Case Study of One IT Regional Library Consortium: VALE–Virtual Academic Library Environment
Virginia A. Taylor, William Paterson University, USA
Caroline M. Coughlin, Consultant, USA
EXECUTIVE SUMMARY
Historic models of library management are being tested and modified in the digital age because of several interrelated factors. First, the importance of place, or a home library space, changes as electronic opportunities for dispersing library collections increase with IT innovation and availability. Second, the high cost of IT has made library managers more sensitive to issues of cost in general, while the ability of IT systems to provide easy access to managerial data, data previously difficult to capture, has allowed library managers to begin to differentiate costs for services based on use. As a result of these two factors, new, partially cost-focused models for delivering IT systems and information sources to library users are being developed. The new IT library regional models raise many questions about appropriate organizational and funding strategies. In this case, one strategy is examined in depth. Suggestions for alternative managerial strategies and economic models for IT regional library managers to pursue are given, based on the lessons to be gleaned from this experience and an examination of the literature describing other regional IT digital library ventures.
Copyright © 2002, Idea Group Publishing.

BACKGROUND
Today libraries are being challenged to develop digital library services, utilizing all the best information technology (IT) has to offer. These same institutions are also facing escalating costs for subscriptions to journals and indexes. Over the years, many librarians have chosen to form voluntary associations, or consortia. The majority of these ventures state as their goal the improvement of library services to users of each member library. In the past, the ability of individual libraries to pay the full cost of their use of the service being offered was not the primary issue library managers faced when building the association. It was common practice for wealthier libraries to cover the majority of the costs. Costs were not systematically reviewed, and decisions to subsidize some members were based on sentiments that favored inclusive, egalitarian models of service. This case centers on the work done by IT library professionals in New Jersey to develop a cooperative program for the digital
346 Taylor & Coughlin
distribution of information resources. VALE is an acronym for the phrase selected to describe the goal of the new New Jersey consortium: Virtual Academic Library Environment (VALE, 2001). As a not-for-profit regional library cooperative venture, VALE exists to provide electronic databases and journals to its members, a large group of academic libraries in New Jersey. It has been in existence for almost five years, and it is an example of a collaborative organizational approach to the provision of information technology-based services in more than one library.
Data Sources for Case Materials
In building this case we reviewed all VALE public records, including minutes of meetings of the 28-member VALE Task Force held since June 1997. An Executive Committee with nine members is responsible for the ongoing operation and growth of VALE, including budgetary accountability and planning for future funding. Accordingly, we also examined the minutes of their meetings held between December 1998 and November 2000. Two key members of the VALE leadership team were interviewed at length. The head of the VALE Executive Committee met with us for two hours. A leading member of the Task Force, one of the individuals responsible for introducing the idea to the state, who is also the person who has been the volunteer manager/business agent for VALE, met with us for over four hours. In addition to participating in the interviews, both of these individuals have regularly responded to requests for additional information or clarification of a source document. Each has also been willing to speak off the record about issues surrounding VALE’s future. We have been in regular contact with both individuals since our initial meetings in November 1999, in preparation for a paper on value-added measurement perspectives available to libraries, especially libraries with significant IT investments. The study yielded evidence to show the applicability of three business models to the VALE project: Stakeholder Theory, the Value-Added Model and Managerial Accounting. That paper was presented at the IRMA Conference in Anchorage in May 2000 (Taylor & Coughlin, 2000).
Issues in Consortium Formation
Agencies like VALE are being formed in many different states at this time, and the issues surrounding the formation, continuation and growth of IT library regional cooperatives are issues of significance to the future of all IT-based library services. IT regional digital library initiatives are often described as significant opportunities for the provision of innovative library services. However, it can be argued that the real significance of many collaborative IT library ventures lies in the transformation of each individual member library’s funding structure. If more than one library can share a subscription because its digital nature allows for disparate locations, all the old patterns of funding libraries can be challenged. When economic realities demand that each library in the consortium pay based on its use of the subscription, all the older models of altruistic library cooperation are also challenged. These issues are often bypassed because IT library managers tend to focus their attention on the technical problems of building systems. In many cases, including VALE, the initial challenges for the regional library IT agency are perceived by the founders to be primarily challenges of structure; infrastructure needs for hardware, software, installation and training; and funding, especially post-grant funds or matching-fund formulation. From the start of a project, decisions about membership status and rights of the individual member institutions are present as issues of selectivity. Decisions must be made about whether participants must all do the same things to the same degree and at the same price. The development of VALE illustrates one way management and economic issues are handled when the goal is finding a way to offer new digital library services in a constrained funding environment. When libraries enter the digital library age, there are many changes in the way users receive information.
These have been studied by experts such as Michael Buckland, who calls for using technology to encourage a radical re-centering of libraries towards users (Buckland, 1992). In the past libraries, like their universities, have focused on inputs. Today the focus is on other stakeholders. The end user’s perception of value received is often the cornerstone of an outcomes assessment to judge
the best resource allocation for available funds. Christine Borgman, another well-known commentator on these issues, views the future for libraries in global rather than local terms and thus appreciates the antiquated nature of most individual library governance and funding systems (Borgman, 2000). Less obvious, but equally important to study, are the ways in which the infrastructure of libraries may change in the digital library environment. The studies of Bruce Reid and William Foster are pioneering in this respect (Reid & Foster, 2000). It is also important to understand why some types of IT library-related changes can be more difficult to implement than others and bring questions of structure (Saunders, 1999) and cost (Kantor, 1995) to the forefront in new ways. As with other important questions, answering “why” often requires exploring the history of the situation. In the case of IT library regional consortia development, including VALE, understanding the historical context of both library IT efforts and consortia developments is vital. Each history significantly influences the decisions of present-day leaders in the field as they build new organizations.
SETTING THE STAGE
Because VALE is an information technology (IT) centered library consortium, it operates within the context and value structures of libraries and library consortia. The history of libraries is essentially a history of non-profit, stand-alone organizations that are willing to share their resources with other libraries at little or no charge. Consortia are latecomers in a library history that spans centuries. Over time, individual libraries have at times been willing to cooperate on the development of supplemental library services, but each library accepts as its focus a primary responsibility to deliver basic services to its own constituency within the parameters of its own budget. When there is cooperation, it is generous. The library loaning an item or giving the supplemental service to other libraries often absorbs the cost of the service being delivered. This is a matter of professional ethics. In the world of non-profits, it is believed that it is right and good for institutions to help each other. This belief applies to relationships between institutions at equivalent levels of strength as well as to relationships between stronger libraries and all other libraries and library-related agencies. Each of the above statements applies to the great majority, if not all, of the libraries established until the later part of the 20th century. Starting in the mid-19th century, improvements in communication systems and technology made it possible for libraries in the same geographical area, or libraries with similar missions, to develop voluntary associations. The typical association was established to further some form of cooperation, notably to support interlibrary loan among libraries and to help create union catalogs listing the holdings of several libraries.
The working plan of these voluntary groups was to facilitate the sharing of resources at little or no additional cost, except the contributed cost of volunteer labor or the absorbed cost of staff time devoted to an association project. Many of these voluntary associations and their attendant practices are still in existence. VALE, the IT library consortium featured in this study, is technically in this category of association. Because of the enormous changes brought about in all library operations with the advent of IT, there have been strains on the typical library cooperative venture trying to service a 21st century library, in the midst of a technological revolution, with a 19th century model of library association management.
Values in Transition
As with all revolutions, there have been both anticipated and unanticipated consequences. The traditional model of stand-alone libraries freely offering other libraries supplemental services is being dismantled even as it is still being honored as the norm. The newer models of library service assume linkages among libraries, and expect that there will be fees to pay when libraries rely on others to provide services. Fee structures are not unknown to libraries, thanks to the development of commercial vendors offering libraries services for a price. Although still controversial for some applications, the practice of paying outside agencies to accomplish certain library tasks at a set or variable price is also now
accepted as a reasonable managerial approach. This practice, often called outsourcing, has been used to purchase cataloging data and develop book collections, and with it came an acceptance of unit pricing, volume discounts and other typical pricing structures. Although the practice of paying outsiders to do library work has modestly increased the willingness of libraries to pay each other for work done, there is still a very strong tradition in librarianship that calls for libraries to share freely with each other. It is this tradition that hovers over the current generation of IT library regional systems such as VALE. In these instances, IT is both the change itself and the facilitator of related changes. IT handles enormous amounts of data efficiently, encourages standardization and yet permits flexibility, and these facts alone significantly change the way in which any library handles its core functions. IT also permits easy linkages among libraries and facilitates accounting for costs among various libraries. This ability to communicate easily changes the picture when it comes to libraries offering supplemental services or developing shared programs. Any regional IT library program is, at its creation, necessarily dependent on the state of IT innovation in general.
Intertwined Foci of National, Local and Regional IT Library Models
The IT library model developed over the past forty years has three foci: national, local and regional. From the beginning of IT activity in libraries, the foci have been intertwined in practice. The national IT-in-libraries focus came first, supported in part by research in national agencies such as the National Science Foundation, the National Library of Medicine, the National Agricultural Library and the Library of Congress. In the national IT arena, the key tasks have been the establishment of standards and the testing and development of computerized systems for processing materials. Developing a standard for entering catalog data into the computer was a task taken on by the Library of Congress in its role as the de facto national library when it created the MARC record format for machine-readable cataloging. This notable national achievement is the cornerstone of much other related IT-based database development. It represents the key first-generation national library IT effort.

First Generation IT in Local Libraries
Shortly after IT experimentation began at the national level, innovative libraries in many different localities also began to try to find library applications for computers. The local focus on IT in the first generation of automated libraries often centered on the development of experimental uses of an available mainframe computer for listing and circulating library materials. In these cases, activity occurred within the parameters of a single library or a library system located in one political district or on one campus. Size was relevant but not the determining factor. Libraries large and small, of all types, were involved in these experiments. The number of library outlets was not important.
In addition to the presence of IT interest and talent on the staff, the other key factor in the ability of a library or library system to experiment with IT was the presence of funding from the parent agency. In the wealthier locales, support for funding IT efforts occurred when the effort was well articulated and linked to the library’s goals. For the first round of libraries to receive IT funding, it sometimes meant that the parent institution or governing agency was interested in being judged a leader or innovator among its peers. For the late local IT adopters, the funding agency was either finally persuaded by the success stories of the early adopters, or shamed by its locale’s delay in implementing IT. What is essential to remember is that all these first-generation local IT implementation decisions were designed for and funded by a particular constituency. In this sense, the first-generation local IT world is really not very different from the 19th century world of wealthy stand-alone libraries with a sense of noblesse oblige to share their good fortune with poorer libraries. To use economic terminology, one could say this was a situation where stakeholders’ expectations created social pressure for IT library development. Four examples are: 1) government officials and private citizens sought to democratize information; 2) student and faculty scholars wanted convenient and timely
access to information; 3) parent universities sought improved scholarly productivity; and 4) funders such as taxing authorities, universities themselves and benefactors shared a desire for efficient resource allocation. This encouraged innovation and investment in order to satisfy stakeholder expectations.
Regional IT Efforts Begin
Soon these two strains of IT effort at the national and local level begin to infect each other, as in the case of a successful mounting of a local system using the emerging national standard of the MARC record. In the library field numerous meetings are held to share information about successes, learn from mistakes and begin to experiment with replication of code or sharing of IT resources, especially mainframe computers, both for the purpose of sharing and to attempt to achieve cost savings. Many of the efforts to share knowledge and computing power become the basis of the first generation of regional IT efforts. In the regional arena the IT players address IT development, education and linkage issues among the local institutions as well as between the local agencies and the national service providers. The regional IT library efforts are a mix of voluntary efforts via association activities and funded efforts based in agencies supported by member fees.
Questions of Cost and Payment Emerge
As national and regional IT library services developed, one key distinction emerged. Services were not necessarily freely offered; instead, the cost of providing the service was determined and charged to the participating library. Business economic models became important to library managers because cost behavior analysis requires an understanding of the fixed and variable costs involved, as well as of allocation methods (Snyder & Davenport, 1997). Often U.S. agencies funded the creation of IT library services and products as part of a research and demonstration program. In many of these situations, the options for continuation were further government funding or passing the costs along to the service’s clients after any introductory, government-sponsored grant period ended. In the same manner, regional entities might also decide to eventually charge for services that they offered freely at the beginning of the service. However, the regional agencies might not, especially if the library region was defined as having the same boundaries as the local taxing and funding authority, either a multicounty government agency or a state. In these cases governmental appropriations could be sought to fund IT services for one public institution in the area or for a group of them. If it were the latter, then the establishment of the group became the equivalent of forming a regional governmental agency. When the group included privately supported libraries as well as public ones, the need for an agency to manage the cooperative effort increased. In either case cost-benefit analysis would be a useful tool in justifying any appropriation of funds.
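The cost-behavior analysis mentioned above lends itself to a small worked example. The sketch below is purely illustrative: the member names, the fixed cost and the per-use charge are invented for this case discussion and are not drawn from VALE or any actual consortium's fee schedule.

```python
# Hypothetical cost allocation for a shared digital-library service.
# Fixed costs (server, licensing platform) are split equally among members;
# variable costs (per-search vendor fees) are charged in proportion to use.

def allocate_costs(fixed_cost, unit_cost, usage):
    """Return each member's annual charge.

    fixed_cost -- total fixed cost shared by all members
    unit_cost  -- variable cost per recorded search or download
    usage      -- dict mapping member name to its transaction count
    """
    fixed_share = fixed_cost / len(usage)
    return {member: round(fixed_share + unit_cost * count, 2)
            for member, count in usage.items()}

# Invented figures for three member libraries.
usage = {"College A": 40_000, "College B": 25_000, "College C": 5_000}
charges = allocate_costs(fixed_cost=30_000, unit_cost=0.05, usage=usage)
print(charges)  # heavy users pay more once use, not goodwill, drives the bill
```

Whether the fixed share is split equally, as here, or weighted by enrollment or collection size is exactly the kind of allocation-method choice a consortium's members would have to negotiate.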
OCLC’s Impact Defines the 2nd-Generation IT Library Systems and Regional Efforts
While it was the creation of the MARC standard for electronic bibliographic records that triggered the first-generation IT efforts, it is the availability of MARC records in disparate databases that triggers the start of the second generation of IT work in libraries. In the second generation the barriers and boundaries accepted by libraries since ancient days begin to crumble. The development of OCLC, Inc. dominates the second generation of IT in libraries and in library cooperative ventures. It offers the technological mechanism for cooperation and develops an organizational structure to pay for it. When OCLC was begun in the late 1960s, the initials stood for the Ohio College Library Center and represented a geographically based effort to harness IT to share the task of cataloging library materials. OCLC was established to allow a select group of Ohio libraries to copy the catalog record made by another member library. The MARC records of the Library of Congress were the main source, in the form of MARC tapes, freely given to OCLC to load onto a mainframe in Columbus, Ohio. Once the tapes were in the OCLC database, they could be viewed on OCLC-dedicated local terminals
350 Taylor & Coughlin
and used to develop local cataloging copy. At first OCLC was funded with national and state grants. But when the audience for the OCLC product reached beyond state borders, it was decided that the use of the file would not be free. Ohio legislators felt no need to pay for Indiana libraries, and the leadership of OCLC wanted new funds to go beyond survival funding and pay for new IT. When OCLC set its fee structures based on use, a new era was born in libraries. Over time the initials OCLC have come to mean Online Computer Library Center, and OCLC has become a major player in the library world. It is a supplier to libraries of software to manage many disparate library services, and it is the manager of an enormous bibliographic database that is the engine controlling ever-increasing volumes of shared cataloging, collection development collaborations, electronic database provision and interlibrary loan activities among libraries. The economic principle of return on investment (ROI) applies in these activities, whether or not the libraries call it by this technical term. They may simply celebrate that their collaborative ventures have increased the chances that one book, purchased by one library, will be used by other readers in other libraries. Or they may view the transaction in economic terms and see the above as leading to the growth of more knowledge and greater democratization of information, lower unit cost to deliver particular information packages and a geometric expansion of knowledge if research results are shared or published. Library managers who may be unfamiliar with the ROI term have become knowledgeable about unit costs, volume and time-of-use discounts, as well as learning how to strengthen their bargaining and purchasing power by increasing the use of volume purchases. From the start, OCLC illustrated the classic “if you build it, they will come” scenario.
As more libraries joined OCLC and purchased its cataloging services, more libraries knew what each other had and the knowledge led to an increase in interlibrary loan (ILL) activities. This increase in borrowing and lending has transformed interlibrary loan from a minor activity in libraries to a significant program area, now called resource sharing. Now, more libraries seek to borrow more frequently from each other and then, in some strategic alliances, more libraries seek to share the costs of purchasing more items. OCLC is the national center of these efforts, but it is not a monopoly. In an attempt to share power equally among OCLC members, the decision was made to provide OCLC services via regional networks and to use the networks to help govern OCLC. The reach of OCLC is enormous.
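The unit-cost and volume-discount arithmetic that library managers learned in this period can be illustrated with a hypothetical tiered price schedule. The tier boundaries and prices below are invented for the example; they do not reflect OCLC's or any vendor's actual fees.

```python
# Hypothetical tiered (volume-discount) pricing: the more records a
# consortium catalogs, the lower the marginal price per record.
TIERS = [                 # (units covered by this tier, price per unit)
    (10_000, 1.00),
    (40_000, 0.75),
    (float("inf"), 0.50),
]

def total_price(units):
    """Total charge for `units` records under the tiered schedule."""
    cost, remaining = 0.0, units
    for tier_size, price in TIERS:
        in_tier = min(remaining, tier_size)
        cost += in_tier * price
        remaining -= in_tier
        if remaining <= 0:
            break
    return cost

# Buying jointly lowers the total cost versus buying separately:
joint = total_price(60_000)         # one consortium order of 60,000 records
separate = 3 * total_price(20_000)  # three libraries ordering 20,000 each
print(joint, separate)              # 45000.0 vs 52500.0
```

The gap between the joint and separate totals is the bargaining-power effect the chapter describes: pooled volume pushes more of the purchase into the cheaper tiers.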
Third Generation of IT Regional Library Efforts
Nowadays, when a new IT effort is envisioned and created in libraries, the work is often intertwined with OCLC and one or more of its regional affiliates. OCLC demonstrates the power of incremental additions to historic practices to change the earlier resource sharing practices of libraries. But third-generation IT regional library planning is not simply a matter of paying OCLC for some service; it is a broader effort, one based on the Internet. While local libraries are still the source of decision making and funding, and national organizations are still the source of much innovative IT work, in this third generation, IT regional library groups are the linchpin of significant change efforts. Their goal is ambitious. IT regional library leaders seek to use both the resources of the library-specific IT agencies such as OCLC and the IT power of the World Wide Web. They want to build new virtual communities for information seekers in many disciplines, as well as create digital libraries and design other yet-to-be imagined library/information services. That vision is fairly clear. Less clear is the source of funds to pay for the digital programs.
Persistence of 19th Century Managerial and Economic Models
The driving force for Generation 3 IT library services is the dream of digital library service. It is a dream of the one big library in the sky, based in the future. With the technological choices now available to libraries, longstanding barriers between libraries and users, and among libraries themselves, can dissolve. A document can reside in many libraries and homes at once. It can be viewed by a number of people at the same time. Individual library IT innovators have resolved the issues of time and space as they offer
digital library services to their patrons. They spend their dollars to feature these newer services, viewing the additional costs as worth it, given the greatly increased access offered to users. But it takes enormous amounts of money to do it all, and choices must be made. Anyone familiar with the quest for distance education models that are both cost effective and quality centered is familiar with these issues. The two bodies of literature, distance education and regional IT library efforts, do not often overlap, but they are definitely parts of the same picture. Richard Katz calls the IT effort “dancing with the devil” (Katz, 1999), while Trevor Haywood considers the distance teaching effort a search for a balance between richness and reach (Haywood, 2000). The question of change is not only a search for sufficient funding for technology. It is a question of traditional approaches and their values. There is still tacit approval given to the vision of a student studying in a hushed, cathedral-like library or engaging with fellow students in a spirited dialogue in small classes led by a dynamic and dedicated professor. In both cases there is truth and myth intertwined. There is also the reality of proprietary rights. There are over 2,000 independent colleges and universities in the United States. These facts combine to make it very difficult to abandon the 19th century vision of higher education as an independent enterprise practiced on many unique, independent campuses. Technology can change that, but first must come the acceptance of technology as a key player in new campus strategies. Philip Evans may be comfortable calling his vision of the future Blown to Bits, but others are not so sure they want to be caught in the crossfire (Evans, 2000).
The promise of globally available digital libraries seems to make local library funding a bit of an oxymoron, but free-Internet ideologies notwithstanding, the reality is that cash is still needed to pay for library resources, digital or print. In 2001 over 98% of all library funding is generated locally; therefore the local library must be the base for any regional digital efforts. At the same time, managers know digital services are by nature capable of wide distribution and therefore should be supported over a wider base than one library. Finding ways to fund digital libraries on a regional basis is the key. As the various players in the library community build new resource sharing models, they deviate from some 19th century funding practices and cling to others. Third-generation models no longer assume that charity towards all, all the time, is the preferred budget model. Yet many of these models assume that subsidies towards less wealthy libraries are valid and that much volunteer work on the part of professionals is the way to accomplish agency goals. Others may still believe in the need for subsidies but think that volunteer approaches are not sufficiently dynamic to handle digital library initiatives. They build new organizations, with new sources of funding, and staff them to provide a distinct set of services to the regional library IT community.
CASE DESCRIPTION

VALE’s Models and Their Founding Rationales
There are a number of experiments in developing consortia for digital library services. One of the earliest is called OhioLINK; it serves the academic libraries of Ohio and began in 1987. In 1988 the Texas State Library began its digitally focused resource sharing effort and called it TEX SHARE. By 1994-1995 the library community begins hearing about comparable efforts in three other states. In Virginia the effort is called VIVA, or the Virtual Library of Virginia (VIVA, 2000); in Louisiana it is called the Louisiana Library Network; and in Georgia it is called GALILEO, or Georgia Library Learning Online (GALILEO, 2000). Each of these state-based consortia has been described in the literature (Potter, 1997) and has been the subject of presentations at one or more professional meetings. The web sites for VIVA, GALILEO and the other regional library IT consortia are good sources of detailed and current information about structure, budget and program emphases. The motives of the individuals who began VALE in 1997 are similar to those of the people who started VIVA and the other IT library regional cooperatives. A few key reasons are cited by many. Rapidly escalating subscription costs of print journals were a significant motivator for some
librarians in the major academic libraries to seek out cooperative ways to share subscription costs and simultaneously experiment with electronic journals (Parang & Saunders, 1994). Other librarians had a strong interest in delivering electronic text to users, either in the library or via the campus network or the Internet (Saunders, 1999). Still others were beginning to explore the advantages of group purchasing for disparately funded libraries. By 1995, most academic libraries had been members of OCLC or the competing network, the Research Libraries Information Network (RLIN). Both groups of members had, by then, a decade or so of experience dealing with unit pricing, cooperative purchasing and the cost benefits of shared cataloging. Many librarians understood the nature of the vendor-client relationship better after having participated in their institution’s negotiations for an online public access library system. The use of the Internet increased rapidly on most U.S. campuses in the mid-1990s after graphical user interfaces such as Netscape were developed, and librarians understood the impact this could have on the delivery of library services. By the late 1990s, when EDUCAUSE developed its guidelines, The EDUCAUSE Guide to Evaluating Information Technology on Campus, their premise, that access to full texts rather than ownership of library resources is what counts, was no longer a disputed issue (Burdick, 2001). What remains a problem for all the libraries and their parent institutions is the development of an equitable funding structure for regional library IT access options. Libraries are not the only agency seeking new financial models for the work that they do. As Gardner says, there is a general need for the valuation of IT in terms of strategy development, valuation and financial planning (Gardner, 2000).

Early Leadership of VALE
The leaders who developed VALE are individuals with these experiences, although not all of them have had the same ones.
For some, the 1990s have been a time of battling rising costs and declining budgets; for others it has been a time to focus on selecting their institution’s first online public access system. By 1990 most have purchased a few electronic resources in CD-ROM format and established a local network for them. For a few, it has been a time of experimentation with digitizing library collections and putting them on the Web. These are some of their individual experiences. The individuals who created VALE also have had collective experiences. Many are members of a voluntary association of academic library directors in New Jersey and have been members for several years. Others are members of the New Jersey Library Association as well as national library associations and have been active leaders of them. By virtue of residing in a particular location, they have also learned something about the nature of local and state government in their state as well as the strengths of their institutions and their colleagues. There is a reservoir of trust, flexibility and knowledge in the group of creative leaders that came together to shape an idea brought to the table by one of them at a meeting in 1997. By 1998 the group has created an organization named VALE. The mission of VALE is stated in the document called “Statement of the VALE Project,” exhibited below.
EXHIBIT ONE Statement of the VALE Project The VALE Project calls for the consortium to use matched state bond funds to develop interinstitutional information connectivity and collaborative library application projects among its members. VALE’s objective is to help institutions meet the demands of students and faculty for access to scholarly materials. Through cooperation and leveraged purchasing, and through the use of collaboration and cutting-edge technology, VALE seeks to provide a seamless network of access to shared electronic academic information resources throughout the state. This exciting concept has been enthusiastically endorsed by the New Jersey State Library, the New Jersey Library Association and the Council of College and University Library Directors of New Jersey. The VALE Project will provide a level of information access to academic resources that has been unknown in the state. VALE is a pioneer regional library IT effort. It represents compromise within this particular stakeholder group (consortia members) as well as among competing internal and external stakeholder
A Case Study of One IT Regional Library Consortium 353
groups such as scholars, funding agencies, citizens, vendors, authors and other IT professionals. When viewed in the context of innovation in regional library IT efforts, VALE can be seen as a hybrid solution, one that borrows from the technological future while still relying on current managerial and economic models. Part of this compromise is necessary given the unresolved national issues of intellectual property rights and author payments. However, there are New Jersey-specific issues that call for examination and evaluation with respect to decisions made by VALE leaders. VALE was established to offer electronic access to library materials to the state's higher education community. VALE is primarily a buying consortium, one that selects electronic databases and mounts them on a network server accessible to all members. What the end user at a given member college sees is a list of periodical indexes mounted on a local library online public access catalog. The user is then able to search the indexes and obtain one of three products: citations, abstracts and/or full-text articles from a range of periodicals. In some cases, full text is desired but is not available electronically.
Questions of Strategy with Respect to Seeking VALE's Initial Funding
The VALE leaders developed their funding strategy based on the sources and amounts of funding then available for state-supported IT efforts in the higher education community. It was believed that getting a foot in the door would give VALE the opportunity to make a case for greater funding at a later date, after the program had enjoyed some successes. Because of New Jersey's IT program guidelines, it was necessary to stress the purchase of equipment. Because of competition from other parts of the higher education community, notably the IT officers of the New Jersey state colleges, it was decided to try to avoid conflict and restrict funding requests to the limited funds perceived to be available for a program like VALE. The remaining needs, as envisioned by the activists creating VALE, would have to be funded locally within the budget parameters of the VALE members. In order to reach this agreement about how to pursue state funding for VALE, it was necessary for a few key library directors in the state to agree to take on VALE tasks within their budgets without an expectation of compensation. One director agreed to do so based on the library's historic role as the leading research library in the state, the library that was already the library of service for interlibrary loans from other state colleges. Another director agreed because the emerging state IT program was a priority of her university's president and she knew she would receive support for the VALE effort from her superiors. In many ways, the VALE funding strategy could be considered a wing-and-a-prayer effort: it relies on obtaining some funds, enjoying much generosity and expecting a well-behaved clientele of noble egalitarians.
Other States, Other Choices: Georgia and Virginia
There are only a few states with active regional library IT efforts, so it is not possible to generalize from these experiences. It is appropriate, however, to review the existing material from other programs for evidence of alternative strategies in the formative stages of regional library IT efforts. The Texas, Ohio and Louisiana Web sites are informative, and could also serve as good sources for alternative foci in developing mission statements for regional library IT networks (Texas, 2001; Ohio, 2001; Louisiana, 2001). Exhibit Two is the statement of the vision for a comparable program in another state, Georgia (GALILEO, 2001).
EXHIBIT TWO A Vision for One Statewide Library: GALILEO
Goals
• To ensure universal access to a core level of materials and information services for every student and faculty member in the University System of Georgia, regardless of geographic
354 Taylor & Coughlin
location, size of institution, or mode of instructional delivery: traditional residential, off-campus or distance learning.
• To improve information services and support through increased resource sharing among University System libraries, thus providing a greater return on investment.
• To provide the necessary information infrastructure so that all students in rural or metropolitan settings in the University System can be better prepared to function in an information society.
• To enhance the quality of teaching, research and service by providing worldwide information resources to all faculty.
• To ensure that adequate PeachNet bandwidth and state backbone are available to campuses to support library activities.
• To place the University System in the forefront of library information technology, enhancing its reputation, along with PeachNet and distance education.
GALILEO's vision statement is more ambitious than VALE's, more direct in indicating value-added outcomes for multiple stakeholder groups, and more political. In the opinion of the non-librarian author of this case, it is clearer in its explanation of the goals of the library IT effort. As such, it offers the audience of non-librarians, be they educators, politicians, students and faculty or citizens, a better opportunity to understand the dimensions of any regional library IT effort, including VALE. It also addresses subtle but real issues of prestige and fame when it seeks to position Georgia as a leader in the higher education community. This enhances the brand image of the University of Georgia as a premier institution, a strategy that appeals to university leadership. When it calls for infrastructure investments in IT sufficient for the needs of the distance education community among other cohorts, it democratizes information, reaching out to accommodate disparate groups in rural or poor communities who may be in competition for the same types of state funds.
Finally, it is clear about what the academic libraries of the state need with respect to bandwidth and related IT infrastructure issues. While the authors of this case have not had the benefit of interviewing the Georgia leaders of GALILEO directly, they have had the opportunity to observe at second hand the work of Potter, one of the key organizers of the GALILEO program (Potter, 1997). Potter's position as head of the largest academic library in the system is important, but equally important is his decision to seek substantial amounts of new money for the implementation of a clearly articulated program with many facets. Nonprofits and for-profits alike can learn valuable lessons in stakeholder theory from a study of the regionalization of library IT efforts.
Virginia's Version of VALE
A similar analysis can be made of the VIVA program in Virginia. Its goals are equally clear and broad-based in terms of reaching various political and academic constituencies. Exhibit Three is the VIVA mission statement and Exhibit Four is the member list. This is followed by a discussion of the budget, benefits, and value-added position. These can all be found on the VIVA Web site (VIVA, 2001).
EXHIBIT THREE VIVA's Mission VIVA's mission is to provide, in an equitable, cooperative and cost-effective manner, enhanced access to library and information resources for the Commonwealth of Virginia's non-profit academic libraries serving the higher education community.
EXHIBIT FOUR VIVA's Members All the libraries of the 39 state-assisted colleges and universities (at 52 campuses) within the Commonwealth of Virginia, including the six doctoral institutions, the nine four-year comprehensive
colleges and universities, and the 24 community and two-year branch colleges (at 37 campuses). Thirty-two of Virginia's independent (private non-profit) colleges and universities participate as full members where possible.
The Library of Virginia.
Discussion of VIVA's Budget, Financial Benefits and Value-Added Position
The Commonwealth of Virginia's General Assembly provided the first funding for VIVA when it appropriated $5,238,221 as requested in the 1994-96 Biennium Proposal. The approved 2000-2002 biennium budget totals $10,720,619 from all sources. In addition to resources from the General Assembly, individual institutions have supported the VIVA project in a variety of ways, most notably through donations of time by dedicated library staff. VIVA libraries take pride in knowing that significant financial benefits have accrued to members through the group purchases. As of March 31, 2000, VIVA had recorded cost avoidance of more than $32 million. This represents money saved over what would have been spent had each individual institution purchased VIVA resources. In many cases these are resources that the local colleges and universities would not have been able to purchase in an electronic form without the Commonwealth's support (VIVA, 2001). The message is clear: Virginia's academic libraries will work together; they will not pick fights among themselves and expect the legislature to solve their problems. It is equally clear that the result of receiving this new state appropriation is a commitment to demonstrating cost savings. Traditionally, libraries have been seen as cost centers for support services. VIVA helped to reposition its members on the value chain. They now are seen as enhancing raw knowledge materials with selection and distribution functions that add value for stakeholder groups. The record of increased appropriations is a telling affirmation of the success of VIVA in meeting its stated goals. Exhibit Five, the executive summary of a recent VIVA document detailing reduced purchasing costs, cost savings and value added, demonstrates these cost efficiencies (VIVA, 2001).
EXHIBIT FIVE Executive Summary: Cost Savings and Value Added Survey, 1996/97
The Virginia legislature, working with the State Council of Higher Education in Virginia and the state's academic library community, continued funding for the Virtual Library of Virginia (VIVA) for the 1996-97 fiscal year. By working together, these entities restructured the academic libraries' materials budgets to give VIVA a central budget of $1.8 million for group purchases (collections, software, ILL delivery service and training materials) for fiscal year 1996-97. The financial benefits to VIVA institutions can be measured in three ways: (1) reduced purchasing costs; (2) cost savings; and (3) value-added benefits.
1. Reduced purchasing costs: VIVA calculates financial benefits in terms of reduced purchasing costs for all purchases. During 1996-97, we calculate that VIVA purchased $6.7 million in resources for only $1.8 million. This represents a cost avoidance of approximately $5 million for fiscal year 1996-97. In the summer of 1997, for the second year in a row, VIVA institutions were asked to analyze the VIVA resources available to them in fiscal year 1996-97 and to record direct cost savings, indirect cost savings and value-added savings realized during that year. A total of 55 responses were received from the 58 VIVA institutions and their branch libraries. Respondents included 14 of the 15 state-assisted doctoral and comprehensive VIVA institutions, 25 of the 34 public two-year institutions and 16 of the 27 participating independent colleges.
2. Cost savings: The survey documented new direct cost savings of $552,188 for the 1996-97 year, a significant increase over the $330,997 recorded for the 1995-96 year. In addition, continued cost savings, calculated with an estimated 10.5% inflation rate for serials, equal $274,425 during
1996-97, for a total of $826,613 for the 1996-97 year.
3. Value added: This survey documented a financial benefit of $11,443,199 for value-added resources for the 1996-97 year. This is nearly twice the $6,121,031 of value-added benefits documented in 1995-96.
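The totals in Exhibit Five follow from simple arithmetic on the reported components. The short sketch below (purely illustrative, using the figures quoted in the exhibit) recomputes them:

```python
# Recompute the savings totals reported in VIVA's 1996-97 survey (Exhibit Five).
new_direct_savings = 552_188   # new direct cost savings documented for 1996-97
continued_savings = 274_425    # continued savings, assuming ~10.5% serials inflation
total_savings = new_direct_savings + continued_savings
print(f"Total 1996-97 cost savings: ${total_savings:,}")   # $826,613, as reported

# Reduced purchasing costs: resources worth about $6.7M bought for $1.8M.
cost_avoidance = 6_700_000 - 1_800_000
print(f"Approximate cost avoidance: ${cost_avoidance:,}")  # roughly $5 million
```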
CURRENT CHALLENGES/PROBLEMS FACING VALE
Impact of Historical Context and Politics on VALE's Initial Funding Choices
While other states requested state funds for the purchase of equipment, subscriptions and services to support a new program of digital library services in a consortium, the VALE leaders chose a different strategy. They sought funds from a state program that focused on increasing the availability of computers and related hardware on the college campuses of the state. This strategy was based on expediency and political realities. The majority of individuals in charge of IT efforts on the various college campuses in the state were more interested in receiving funds for locally based, classroom-centered projects on their own campuses. It was not in the interests of most campus IT officials to support a large request for funds from a new, state-based IT library consortium. Key support for the VALE concept came from the IT vice president on one state university campus, who made the VALE case for some funding at the state level. Because of the competition for resources available under the bond issue, it was deemed politically astute to request only hardware for the VALE project. It was believed that this approach better positioned the VALE program as one deserving of bond money support for equipment, in line with the mission of the funding agency. The initial request for state funds was only for the purchase of the servers. Economic constraints on the uses of state funds stem from the nature of the source, bonds: the State of New Jersey insisted the money be used for long-term purchases, such as equipment, that would outlive the terms of the debt. VALE leaders revisit this decision often. It remains unclear, given the political climate in New Jersey, whether a different strategy would have been wiser in the long run.
As the examples from Georgia and Virginia demonstrate, other states have funded digital library initiatives after having been lobbied directly by the state library or the flagship state university for such a program on behalf of the citizens of the state. In these cases the digital library initiative funding supports equipment, software or subscriptions, and staffing. Thus, the resulting regional library IT programs for digital library service are understood from the start as a new model of collaborative library service. As separate innovations, these programs appear to enjoy a visibility that is helpful when additional support is sought from the state legislature for expansion purposes.
Generating Additional Revenue Streams: State Funds and Member Contributions
Ongoing direct costs for VALE include vendor payments, costs associated with servicing the server, and program management costs such as negotiations with vendors. Current vendor pricing is based on student population figures. Individual members pay their share, based on subscriptions selected and user population. When participants choose partial participation, vendor negotiations and cost calculations have to be adjusted accordingly. This is an iterative, time-consuming and complex process for the negotiators involved. One of the challenges is establishing cost assessment models based on varying levels of library participation. Models must provide a clear and transparent picture of the costs involved, while hedging for contingencies. Member contributions are currently viewed in terms of journal payments; this is a matter of practice, not policy. There is nothing in the VALE membership agreement that precludes the group deciding to charge fees to accomplish other program goals, such as expanding electronic delivery options, developing vendor-based training programs for faculty and staff, or assessing subsequent changes or improvements in use of the system. VALE leadership needs to experiment with several
models for outcomes assessment, including cost studies, use studies and benefit studies. One good byproduct of the current underfunding by the State of New Jersey is that each member library understands how important it will be to make future plans on a sound financial basis if the cooperative digital initiative is to flourish. Cost studies that compare pricing and calculate savings have become more common. There is willingness to explore the principles of activity-based costing and find appropriate opportunities to apply them. At present the leaders recognize that the indirect costs of obtaining information from each member library about particular product choices are being absorbed by two institutions. These institutions are also absorbing the indirect costs of working with vendors to establish pricing schedules for a product, as well as the overhead costs of hosting the servers and the administrative costs of running a membership organization. None of these costs are currently in the VALE budget. The two state institutions most affected view the costs as community service contributions, part of the price they pay to foster a statewide commitment to a digital library initiative. In this age of rampant individualism and competition, generosity and egalitarianism are still very evident in the decisions made by VALE leadership. At times a decision means that one of the richer libraries carries a cost disproportionately. At other times, the VALE Board continues to provide an earlier generation of IT because some members cannot afford to upgrade to the currently recommended IT. VALE is an example of a 19th-century model of library cooperation with some 21st-century features. This approach is charitable, but it is also self-limiting for the VALE organization.
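The population-based cost sharing described above can be sketched as a simple pro-rata calculation. The member names, user populations and vendor price below are hypothetical, invented for illustration only; VALE's actual cost assessment models are more involved, but the sketch shows why partial participation forces recalculation:

```python
# Hypothetical pro-rata cost sharing for a consortium database license.
# Each subscribing member's share of the vendor price is proportional to
# its user population; non-subscribers pay nothing.

def member_shares(vendor_price, populations, subscribers):
    """Return each subscribing member's share of vendor_price.

    populations: dict mapping member -> user population (e.g., FTE students)
    subscribers: members that chose this product (partial participation)
    """
    total = sum(populations[m] for m in subscribers)
    return {m: vendor_price * populations[m] / total for m in subscribers}

# Invented figures for three hypothetical members.
populations = {"A": 10_000, "B": 5_000, "C": 2_500}
shares = member_shares(70_000, populations, ["A", "B", "C"])
print(shares)  # A pays 40,000; B pays 20,000; C pays 10,000

# If C opts out of this product, the remaining shares must be recalculated,
# which is one reason vendor negotiations become iterative and time-consuming.
reduced = member_shares(70_000, populations, ["A", "B"])
```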
An organization may be willing and able to give another agency one or two percent of its resources, but even the most willing organization will have difficulty giving more without clear standards and agreed-upon benefits for the monies and in-kind services given. Unless other sources of income can be found, the result appears to be stagnation for VALE. In a way, this generous gift by two universities in the state to all the other academic libraries allows the entire system of higher education in New Jersey to believe, inappropriately, that it has fully funded a modern, cutting-edge IT environment for its students and faculty. The present situation does not publicly acknowledge the social benefits of VALE's egalitarian approach, with its commitment to improve the quality of education at all colleges as a meaningful value-added contribution to domestic welfare and American competitiveness.
The Challenge of Future Funding Strategies
Perhaps the key unresolved issue is how to pay for digital library services that reach beyond the wired environs of one library. Building consortia demands a search for new sources of support beyond the local agency's budget office door. Often the search leads to the state government, since most states have a commitment to higher education. Many IT library consortia have strong proponents in the academic communities around the state, constituencies which could be mobilized for a political campaign to address funding issues ranging from incremental additions to disparate base budgets to the creation of a new, unified source of funding. While the VALE leadership's early arguments for funding were successful, these same arguments may have built-in pitfalls for the future growth of VALE. Many program management services were donated, limiting the agency's ability to ask for funding. This is the crux of a major VALE problem. When the leaders of VALE examine the program goals of the other regional IT agencies, there are only modest differences. Yet some of the other agencies have budgets that are more than ten times the VALE budget, and their member libraries have never been expected to bear the brunt of the costs. In return, the regional library IT agency is expected to demonstrate cost savings to the state funding agencies and legislature with respect to the new program. VALE lost one important chance to make this economic argument at the time it accepted token funding for equipment; it remains to be seen if it will get the chance again. Two other decisions have potentially negative impacts on VALE's future growth. First, when individual libraries agreed to join VALE, VALE promised it would not expect the library to change its
current IT system to conform to a VALE standard. Instead, VALE agreed to work with a multiplicity of systems. However, a promise by an IT agency to support all IT systems may be a promise that cannot be kept forever. Second, when VALE began, it also agreed that each library was equal when it came to receiving service from the VALE technical support group. However, it may be necessary to develop funding models that include charges for services that go beyond a core group of agreed-upon basic services. There have been conscious efforts to improve funding. One effort involved lobbying for new monies from a projected state surplus, in concert with the state library agency and the state library association, and with the tacit acceptance of the effort by the state higher education leaders. It netted a modest amount, about one-tenth of what was requested. Another effort is linked to a modest, committee-run public relations campaign designed to demonstrate the satisfaction of library users with VALE. It has generated testimony, glad tidings and photo opportunities with state officials, but no new monies from either surpluses or budgeted priorities in the state budget. VALE leaders remain hopeful that the power of the digital library concept will be great enough to move state funding decision-makers. At present the explicit VALE goal is the provision of state funds for database expansion. However, a more revolutionary VALE goal might be a modification of the current stand-alone model of library governance and organization into a model that incorporates separately funded regional library IT agencies as the preferred model for the delivery of digital library services to the citizens of New Jersey.
REFERENCES
Borgman, C. (2000). From Gutenberg to the Global Information Infrastructure: Access to Information in the Networked World.
Buckland, M. (1992). Redesigning Library Services: A Manifesto. Chicago: American Library Association.
Burdick, B. (2001). The EDUCAUSE Guide to Evaluating Information Technology on Campus. EDUCAUSE Review (March/April), 64. See also http://www.educause.edu/consumerguide
Coughlin, C. and Gertzog, A. (1992). Lyle's Administration of the College Library. Metuchen, NJ: Scarecrow, 142-144.
Evans, P. (2000). Blown to Bits: How the New Economics of Information Transforms Strategy. Boston: Harvard Business School Press.
Gardner, C. (2000). The Valuation of Information Technology: A Guide to Strategy Development, Valuation and Financial Planning. New York: Wiley.
Georgia. (2001). GALILEO information is found at http://www.galileo.peachnet.edu
Haywood, T. (2000). Defining Moments: The Tension between Richness and Reach. Information, Communication and Society, 3(4), 648.
Kantor, P. (1995). Studying the Cost and Value of Library Services: Final Report. New Brunswick, NJ: Alexandria Project Laboratory, School of Communication, Information and Library Studies, Rutgers University.
Katz, R. and Associates (1999). Dancing with the Devil: Information Technology and the New Competition in Higher Education. San Francisco: Jossey-Bass.
Louisiana. (2001). Louisiana Library Network (LOUIS) information is found at http://www.lsu.edu/ocs/louis/
Ohio. (2001). OhioLINK network information is found at http://www.ohiolink.edu/
Parang, E. and Saunders, L. (1994). Electronic Journals in ARL Libraries: Issues and Trends: A SPEC Kit. Washington, D.C.: Association of Research Libraries, Office of Management Studies.
Porter, M. (1991). The Competitive Advantage of Nations. NY: Free Press.
Potter, W. G. (1997). Recent trends in statewide academic library consortia: GALILEO in Georgia, the Louisiana Library Network, OhioLINK, TexShare in Texas and VIVA: Virtual Library of Virginia. Library Trends, 45 (Winter 1997), 416-434.
Reid, B. and Foster, W. (Eds.) (2000). Achieving Cultural Change in Networked Libraries. Brookfield, VT: Gower.
Saunders, L. (Ed.) (1999). The Evolving Virtual Library II: Practical and Philosophical Perspectives. Medford, NJ: Information Today.
Snyder, H. and Davenport, E. (1997). Costing and Pricing in the Digital Age. NY: Neal-Schuman.
Taylor, V. A. and Coughlin, C. M. (2000). "Value-Added Measurement: A Business Perspective on Budgeting and Resource Allocation Applied to the Non-Profit World of Libraries." Proceedings of the Information Resources Management Association Conference, Anchorage, AK.
Taylor, V. A. and Coughlin, C. M. (1999). "Activity-Based Costing." Proceedings of the Information Resources Management Association Conference, Hershey, PA.
Texas. (2001). TexShare network information is found at http://texshare.edu/
VALE. (2001). VALE network information is found at http://www.valenj.org
VIVA. (2001). VIVA information is found at http://www.gmu.edu/library/fen/viva/about.html/
BIOGRAPHICAL SKETCHES
Virginia A. Taylor, Ph.D., is an Associate Professor in the Department of Marketing and Management, College of Business, William Paterson University, Wayne, NJ. Her doctorate in International Business Administration is from the Fox School of Business and Management, Temple University, PA. She passed the CPA exam in NJ, and has served as founding director for an MBA program, as a national university accreditation site evaluator, and as a costing consultant to three major university libraries. Her publications focus on the design of multinational control systems, global strategy and transfer pricing, international business ethics, government and business perceptions of value, location determinants for value-added activities, decision support paradigms, the impact of information technology in the workplace and education, and active learning pedagogy.
Caroline M. Coughlin, Ph.D., is a library consultant who specializes in library planning and staffing issues. She is co-author (with A. Gertzog) of a standard text on academic librarianship. She served as director of the university library at Drew University in Madison, NJ, during the 1980s and 1990s, and was part of the founding information technology team at the university. Her experience as a professor of library science includes over a decade of full-time teaching at Emory University and Simmons College, many years of serving as a visiting lecturer at Rutgers University, the University of Washington, and the University of Alabama, and overseas assignments in Wales, Finland and Estonia. Her M.Ln. degree is from Emory University; her Ph.D. is from Rutgers University.
360 Borchers & Mills
Prudential Chamberlain Stiehl: The Evolution of an IT Architecture for a Residential Real Estate Firm, 1996-2001
Andy Borchers, Kettering University, USA
Bob Mills, Prudential Chamberlain Stiehl Realtors, USA
EXECUTIVE SUMMARY
This case describes the evolution of an IT architecture for Prudential Chamberlain Stiehl Realtors (PCSR), a 14-office, 250 sales agent real estate firm located in Southeast Michigan. Initially, the CIO of the firm concentrated on providing basic connectivity to sales agents and a simple World Wide Web (WWW) presence. Although this was accepted by users and moved the firm forward technically, management questioned the value of this technology. In the next phase of development, PCSR worked to build a "rich" set of applications that enhance the firm's relationships with clients and agents. At the end of the case, the CIO ponders future moves in technology and their impact on the firm's strategy.
BACKGROUND
Prudential Chamberlain Stiehl Realtors (PCSR) is a residential real estate brokerage operating in the upscale northern suburbs of Detroit and Flint, Michigan. The firm's roots go back to 1948, but its current organization came about through a series of mergers between several area realtors in the 1990s. In 2001 the firm has two owners who believe that control of the residential real estate market will belong to a set of very large firms and small niche players. The owners have worked aggressively to stake out a significant share of the area's real estate market and survive in an era of escalating competition and declining profit margins.
Current Operation
In 2001 the firm's operation includes 14 sales offices. The organization employs about 300 people, including 250 sales representatives and 50 support personnel. The offices are spread across a 70-mile span from north of Flint to Royal Oak, a Detroit suburb. This area comprises one of
the richest markets in the state of Michigan. Oakland County, the heart of PCSR's market, had a median income of nearly $60,000 in 1997. The area is home to over 1,000,000 people. PCSR's annual real estate sales in 2000 were approximately $600 million. Broker commissions on these sales were about $18 million per year. The firm is the largest Prudential franchisee in the state of Michigan and one of the largest in the United States. PCSR has a sizeable market share; in the Flint area, for example, nearly 40% of all home sales are through PCSR. The belief that the company needs to grow to survive is largely created by market conditions. The Southeast Michigan real estate brokerage business is highly fractured, with a small market share per broker. The sales associates who actually deal with clients (both sellers and buyers) are independent contractors paid on a commission basis. Typically, associates receive commission on a sliding scale. Associates can be compensated by only one broker, but can easily move from one broker to another.
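The sliding commission scale mentioned above can be illustrated with a marginal-rate calculation. The breakpoints and rates below are invented for illustration, not PCSR's actual schedule, but they show why high-producing associates have much to gain from additional sales:

```python
# Hypothetical sliding commission scale for a sales associate.
# The associate keeps a larger fraction of gross commission as
# year-to-date production rises (tiers and rates are invented).
TIERS = [
    (50_000, 0.50),        # first $50k of gross commission: 50% to associate
    (100_000, 0.60),       # next $50k: 60%
    (float("inf"), 0.70),  # everything above $100k: 70%
]

def associate_share(gross_commission):
    """Associate's payout under the hypothetical marginal tiers."""
    paid, prev_cap = 0.0, 0.0
    for cap, rate in TIERS:
        band = min(gross_commission, cap) - prev_cap
        if band <= 0:
            break
        paid += band * rate
        prev_cap = cap
    return paid

# 50k at 50% + 50k at 60% + 20k at 70% = 25,000 + 30,000 + 14,000 = 69,000
print(associate_share(120_000))
```

Under such a scale the associate's average retention rate rises with volume, while the broker's margin on each incremental sale falls, consistent with the margin pressure the case describes.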
Industry Composition
Although there are a number of national real estate firms, including Century 21, ReMax, Prudential and others, all of these firms franchise local operations. The national firms provide marketing and advertising support. Operation of the local business, however, is strongly controlled by local brokers and their agents. The industry has seen a reduction in the number of associates, coupled with a sharp increase in sales volume per associate. For example, during the five years ending in early 2001, nearly half of all sales associates left the Southeast Michigan market. In all likelihood this trend will continue for the foreseeable future. The cost of participating as a sales associate (nearly $4,000 per year to maintain memberships and licenses), plus fierce competition, leads weaker associates to leave the field. Strong associates, however, have much to gain by selling more and more homes, given the sliding commission scales used to compensate them. This environment has created intense competition among brokers to recruit effective sales associates. High-selling associates can easily change brokers if they see the potential for higher return or better support. This has forced brokers to pay higher commissions to associates and has hurt their profit margins. The competition for sales associates has had other effects besides increased commissions. Brokers are under pressure to provide sales associates with more support. Services such as office facilities, marketing and technology have become points of comparison that sales associates now use to compare brokers. As a class of goods, homes have several unique characteristics. First, home purchases are typically the single largest purchase made by a consumer. Unlike other major purchases, such as automobiles, each home is unique and the "market" for a home is made through a bid-and-counterbid process.
While comparable properties may be known in the community, putting a home on the market and seeing what a buyer will pay for it is the only true way to determine value.
Purchase Process
The process of buying a home can be viewed as a series of steps taken by buyer and seller. In most cases sellers are simultaneously in the process of buying as well.
1. Education – In this phase the buyer asks, “What can I afford?” “What features do I want in a house?” and “What is available in the market?” The seller asks, “What is my house worth?”
2. Listing – In this phase the seller arranges to list his property with a broker.
3. Matching – In this phase one or more sales associates work to match buyers and sellers. Associates arrange open houses and visits by prospective purchasers.
4. Negotiation – In this phase buyers make offers and sellers accept, counter or reject these offers.
5. Financing – In this phase the buyer obtains financing for the home.
6. Closing – In this final phase the deal is legally executed.
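The matching step lends itself to simple automation, which is where much of the technology discussed later in the case is aimed. As a minimal, hypothetical sketch (the field names and criteria are assumed, not taken from PCSR's systems), a buyer's saved criteria can be run against available listings:

```python
# Hypothetical sketch of step 3 (matching): filter listings against a
# buyer's saved criteria. Field names and criteria are assumed; listing
# details are drawn loosely from the case's figures.
def match_listings(buyer, listings):
    return [
        mls_id
        for mls_id, price, bedrooms, city in listings
        if price <= buyer["max_price"]
        and bedrooms >= buyer["min_bedrooms"]
        and city in buyer["cities"]
    ]

buyer = {"max_price": 160_000, "min_bedrooms": 3, "cities": {"Waterford"}}
listings = [
    ("21175450", 149_900, 3, "Waterford"),   # the Figure 2 listing
    ("21000774", 339_900, 4, "Clarkston"),   # the Figure 6 listing (beds assumed)
]
print(match_listings(buyer, listings))  # ['21175450']
```

The same comparison run in reverse, from a new listing against stored buyer searches, is what the case's activity reports later call a "reverse prospect match."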
362 Borchers & Mills
This list is comparable to Crowston’s (2000, pp. 4-6) analysis. There are several opportunities for technology to speed up or alter this process. Crowston (2000) identifies real estate as having strong potential for e-markets as it is an “information-intensive and information-driven industry; transaction-based, with high value and asset-specificity; with many market-intermediaries (agents and brokers who connect buyers and sellers rather than buying or selling themselves); and experiencing on-going information technology (IT) related changes.” He suggests that Web technology could damage the information monopoly that real estate agents have enjoyed and lead to disintermediation. Indeed, real estate brokers are pure intermediaries (Crowston, 2001). They rarely own the properties that they sell. Their position is at risk if sellers can find lower cost alternatives to market their homes. Further, he warns that such disintermediation could undermine profit margins and ultimately end the broker’s business.
PCSR’s Strategy
PCSR, however, is operating in strong opposition to this analysis. Instead of letting new Internet sites neutralize its market presence, PCSR is using technology to enhance its position. PCSR’s approach focuses on using technology to build a “rich” mix of services for buyers, sellers and sales associates that justifies the cost of a broker in a home sale. The firm believes that “service sells” and that this strategy will succeed in the face of efforts to disintermediate and commoditize the industry. This case outlines the evolution of technology at PCSR and the impact of technology on the firm’s competitive position.
SETTING THE STAGE
Historically, technology was not a competitive issue within the real estate market. All brokers belonged to a local association called the Multiple Listing Service (MLS). Computer systems within the office were limited to a few dumb terminals that associates shared. The type, performance, contents and number of terminals were controlled by the MLS, not the broker. This resulted in all brokers having essentially the same technology. The introduction of the PC and the evolution of the World Wide Web in the late 1990s changed this environment dramatically. The introduction of PCs allowed sales associates and customers to automate and speed up the home buying process. Several vendors introduced low-cost tools for sales associates, brokers and related firms. A review of available real estate software shows several categories:
• Appraisal – Software that automates the appraisal process.
• Closing – Software that automates the closing process.
• Mortgage – Software that helps identify mortgage options for buyers.
• Investment Analysis – Software that helps in valuing real estate.
• Multiple Listing – MLS (Multiple Listing Service) software that provides realtors with information about available properties.
• Property Management – Software to automate the management of rental properties.
• Relocation – Cost tracking and reporting software for customers who are relocating.
With the WWW came two well-established Web sites that continue to list large numbers of properties: Homeadvisor.com and Realtor.com. These sites help consumers quickly identify properties of interest in a given locale. However, these sites rarely list the address of actual properties. Customers are given the name of the listing agent and must contact them to view the home. Of further concern, the information listed at these two sites is often out of date. Acceptance of computer technology has varied significantly among real estate agents, however.
In 2001 the average PCSR sales associate, for example, is in his or her early 50s. Although some have embraced computer technology, others have not. Further, they are serving home purchasers who have an average age of just 38. Increasingly, PCSR’s younger customers have PCs at home and are capable
WWW users. Two other changes also impacted the situation. The first occurred when the MLS dropped the limitations on the number of PCs or terminals in a sales office. This seemingly small change brought competition to the technology area because now a broker could make access to the MLS available to all of his associates. The second change was the MLS allowing agents to have access through their home PCs. This changed the atmosphere of the real estate offices by changing the sales associates’ perception that access to the MLS was granted by the broker. The amount of time sales associates spent in the office dropped dramatically. The dependence sales associates had on their brokers also dropped, and with it their commitment to a single broker. These changes challenged two beliefs among sales associates in the industry. First, it was considered normal for 30 sales associates to share two or three PCs. Unlike many other industries where each individual has his own computer, sales associates were used to shared PCs. Second, sales associates expected the broker to deliver new technology. Computer systems were viewed as the responsibility of the broker, and most sales associates expected that relationship to carry forward.
CASE DESCRIPTION
Bob Mills, VP of Finance and CIO for Prudential Chamberlain Stiehl Realtors, has been responsible for the technology focus of the company from the mid-1990s through the present. In meeting with the case writer in early 2001, he reflected back on the firm’s early efforts to build an infrastructure for basic connectivity.
Initial Implementation
“It was early 1996, right after the merger of our two owners, Dan and Jerry, that we really started getting serious about technology,” Bob explains. “At that time I was really working in a vacuum. Dan and Jerry are not technically proficient, and never will be. It’s not their focus in the business. However, we were getting a lot of pressure from the sales associates to give them the most current tools. Dan and Jerry did believe this was an important area to compete on, and one we could make a difference on.” “I have always struggled to get good input,” Bob continues. “It is a unique industry, often with material differences from market to market. I find people outside the industry don’t understand the structure. For example, my customer is really the real estate sales associate, not the person buying a house. An experienced sales associate can go to work for any broker in town. They are in very high demand. Because they are independent contractors, we do not tell them what to do. Each one is an independent business. The only way I can increase the brokers’ revenue is to attract more sales associates.” There were several other important issues to deal with in early 1996. The company did not employ any IT staff, and with pressure on profit margins, the owners wanted to avoid staff additions if possible. Each office had three or four PCs. All were standalone with modem dial-up to the MLS system. Top producing, as well as younger, sales associates bought their own PCs to use at home. Individual agent PCs within the office were still rare. “I knew that I had to get more input from the customer, our sales associates,” Bob explains. “So we created a technology advisory council made up of two associates from each office. I also invited the branch managers to attend to get some management involvement. Response to the council was very enthusiastic. We meet once a month for two hours and attendance has always been excellent.
The feedback we get from this group has been invaluable.” “We had these great ideas coming out of the advisory council, but we needed some way of testing them,” Bob continues. “So we set up a test branch and started trying out several new ideas. We learned a lot of valuable lessons in a very short period of time. We tested the concept of sales associate rental machines, server-based applications, automating several key processes, and digital circuit connections to the MLS and Internet. We also learned several important lessons on what not to do. I quickly realized that having a server in each sales branch and having a lot of applications loaded on each desktop was not going to be compatible with our goal of minimizing internal IT staff.” Using these lessons learned during 1996, Bob Mills began to put together a long-range IT plan to make PCSR a technology leader in its marketplace. “The goals I focused on were quite simple,” explains Bob. “We wanted to offer the best services to our associates, create a dependence by the sales associates on our IT services, keep maintenance and staff requirements to a minimum, and position ourselves to automate some processes that would make us a leader. We also knew that we would have to get the sales associates to participate in the cost, which had proven very difficult in past projects.” “By the end of 1996, I was selling a three-year plan to management,” says Bob. “It is very difficult when you are working with such a long-range perspective because you have to stay away from specific details. However, I felt we had so far to go that we had to take a long-range approach. Actually, you have to sell your plan to everyone: users, suppliers and management. They are all critical to making it work.” “My approach was to explain the vision, which included a WAN (Wide Area Network) connecting all offices, applications delivered from a centralized location, all connections to be reliable digital circuits (no modems), full sharing of data between offices, and centralized administration. Then I would explain some benefits, like lower cost, location independence, and concentration and specialization of some processes that people could relate to.
The general game plan was to build the infrastructure in year one, gain sales associate participation in year two, and automate several key processes and integrate with suppliers in year three.” “The hardest part of selling the plan to management was the lack of deliverables in the first year,” explains Bob. “Our schedule had us installing a lot of wire, NICs (Network Interface Cards), routers and other infrastructure equipment. Everyone would keep asking, ‘And what do we get for this?’ I would have to keep explaining that it was a foundation and that nothing else could be done without it. We got through it, but it took a lot of patience from everyone.” The owners of Prudential Chamberlain Stiehl Realtors did accept the three-year plan and work began in early 1997. The early work of installing local area networks in the sales branches went smoothly. A joint project was started with the largest MLS to install a frame relay network between the company’s sales offices, the MLS and the Internet. The timing was fortuitous, as the MLS was installing a new system at this time that was plagued with modem problems. While Prudential’s competitors struggled to make the new dial-up system work, Prudential’s associates enjoyed fast, trouble-free access to the MLS system. “It was kind of humorous,” recalls Bob Mills. “I was worried about not having a deliverable of any significance in the first year of the plan, and suddenly we had a big one. In fact, it created a problem: management did not feel we were moving fast enough with the frame relay rollout.” The success of the frame relay connections also created a new dilemma. The sales associates were used to having the same MLS service from home and the office. Now they had a faster and more reliable connection at the office. Requests started coming in from the associates to hook up their own computers to the PCSR network or to rent desktops for their personal use in the office.
This type of participation was a goal of Bob’s IT plans and supported the owners’ goals for recruitment and retention of associates. However, it was decided to hold to the plan of rolling out the company-wide WAN before adding the associates on a large scale. The decision created some conflict during 1997, but by year-end all of the offices were connected to the WAN. The goal for 1998 was to gain agent participation. Management decided to offer two options: sales associates could either pay to hook up their own computer to the network, or the firm would provide a turn-key rental unit. “The first option was easy to provide,” explains Bob Mills. “We are just providing a connection method to the MLS or the Internet that’s better than a modem connection.” This approach was made available first in the sales offices and immediately became popular with the sales associates who had
their own portable computers. The second option proved much more challenging to fulfill. “I really wanted to avoid having to install the half-dozen Windows-based applications on each desktop we put out there,” explained Bob. “You have to understand we are talking about offices that are an hour-and-a-half drive apart, and users who are not very sophisticated. I knew we had to find a different approach to keep our administrative costs down.” The approach that Bob settled on was a product by Citrix called Winframe. It allows for the delivery of Windows-based programs over a dial-up or WAN connection, treating the desktop like a dumb terminal. This technology allowed Bob to centrally control software applications and data resources and minimize the need to visit client PCs in the field. The Winframe solution provided associates with a number of tools. With this solution agents could:
1. Connect to the MLS using Telnet to view listings.
2. Use Bressers and PACE, two CD-ROM databases of property data.
3. Perform file transfers. To circumvent file protection problems, the Winframe solution allowed users to run Word 97 or Excel to open a file. Users could then save the file to a shared directory.
4. Use on-line forms (purchase agreement, listing agreement, etc.). These on-line forms became a common way for sales associates to process transactions.
5. Communicate via e-mail with customers and fellow sales associates.
The new server was installed during the first quarter of 1998, but a large acquisition put everything on hold. “Just as we were ready to start converting the Windows applications to the new server, we bought four new offices,” Bob explained. “We had to go back and do infrastructure work on the new locations before we could roll out the new system.
The acquisition put us into the third quarter before we could continue with the rollout.” The Winframe system was successful in meeting the objectives of providing firm-wide connectivity, access to shared data and a platform for process automation. New desktops could be configured with a minimum of administration and maintenance, and the system was opened up to agent hookups and rentals in the fourth quarter of 1998. “We spent the last three months adding sales associates to the network,” states Bob. “I am really happy
Figure 1: Network Configuration – 1998 Winframe Solution
[Diagram: PCSR-owned sales offices connect over frame relay to the Citrix Winframe server at Troy headquarters and to the Multiple Listing Service (via Telnet); associates’ homes (agent-owned or rented PCs) and customers’ homes reach the PCSR Web server over the public Internet.]
Figure 2: MLS Listing – Via Telnet
Address: 875 N ADAMS   City/Township: Waterford   County: Oak   Post Office: Waterford   Zip Code: 48320
Price: $149,900   Total Finish Area: 1290   Status: Active   MLS #: 21175450
Style: Ranch   Aprx. Yr Blt: 1965   Bedrooms: 3   Baths Total: 2   Basement: Y   Garage: 2 Car, Detached
Acreage: 0   Lot Size: 70X126   Paved Road: Y   Water Front:   Design: 1 Story   Basement: Part. Finished, Yes
Exterior: Brick   Fireplace: Y   HVAC: Steam, 2+Window AC   Fuel: Gas   Waste: Sewer Sanitary   Water: Mun. Water
Other Bldgs:   Homestead: Y   Winter Taxes: $3,909.00
Remarks: Ranch Home With Character And Charm. Newer Kitchen, Neutral Decor, Natural Fireplace In Living Room, Hardwood Floors Throughout, Partially Finished Basement. 12x9 Sunroom Off 2nd Bedroom. Covered Porch Off Dining Room. Newer Sewer Line Into House. Home Warranty.
Rooms (Location, Size): Kitchen: 1, 12X12   Dining: 1, 13X12   Living Room: 1, 20X12   Family Room:   Bedroom #1: 12X12   Bedroom #2: 15X09   Bedroom #3: 14X09   Bedroom #4:   Special Rooms:
Misc. Exterior: Porch, Fenced, Outside Lights
Misc. Interior: Stove, Refrigerator, Dishwasher, Disposal, Cable TV
For more information call:
Information herein deemed reliable but not guaranteed and may be incomplete.
Figure 3: Winframe Menu
Figure 4: Pace CD Display
with the response, but it has been a zoo. We now have over 50 of our 250 sales associates paying to use our network, and the number is climbing fast. We have even figured out how to give these associates access from home through the Internet, creating our own VPN.” A VPN is a Virtual Private Network. VPNs give the appearance of a private network while using the public Internet to keep costs low.
Evaluating the Initial Implementation
In early 2001 Bob Mills took stock of his network configuration and future business plans. Over the prior three years, PC equipment, the Citrix Winframe server and the Internet had become an integrated part of his firm. Bob’s IT staff remains limited to his part-time efforts and two part-time college students. This is essential to meet the firm’s need for “lean” administration. Indeed, IT costs are viewed as overhead and a drag on bottom-line profits. The interplay of IT strategy and business strategy at PCSR is an interesting example. At first, IT was hardly a driver for the firm’s strategy. Mergers fueled growth and survival for the firm in an increasingly hostile market. The owners viewed IT as providing two key advantages: support of operating processes (such as sales agreements, closing documents, etc.) and a recruiting tool to draw sales agents from competing brokers. As the IT infrastructure matured and competition increased, the owners increasingly looked to IT for rich information content and as a differentiator for the firm. These areas proved much more challenging to provide than the initial efforts at achieving connectivity.
A Second Implementation
A new effort began in early 2001 to build custom applications that increase PCSR’s service to associates, sellers and customers. It was not enough to merely present information to agents, as real estate information is increasingly available for little or no cost. Instead, PCSR had to increase the value of its information to all stakeholders. With respect to agents, Bob wanted them so involved in using the technology that they could not think of working without it. This dependence was important if PCSR was to retain agents, and hence sales revenue.
Figure 5: 2001 Configuration
[Diagram: the Multiple Listing Service sends a 70,000-record download over frame relay to Troy headquarters, which houses an Oracle database, the Citrix Winframe server and the PCSR Web server; PCSR-owned sales offices connect over frame relay, while associates’ homes (agent-owned PCs) and customers’ homes connect through the public Internet.]
With this effort Bob installed a new hardware configuration. At the center of this configuration is an Oracle database server that receives 70,000 records daily from the multiple listing services that support PCSR’s service area. PCSR is developing customized applications that process this data into useful information for agents. Bob also has acquired a Linux-based Web server that presents information to users from the database server. The Citrix Winframe server remains in use to support sales associates. With this data, Bob staked out the following functionality:
• For buyers, Bob wants to provide a “rich” set of information about PCSR listings, including a virtual tour of PCSR properties. Further, he wants to offer customers who are willing to sign a customer agreement the ability to search the MLS by themselves on the WWW, including the ability to see property addresses, without having to contact a PCSR agent.
• For sellers, Bob wants to provide information about what PCSR has done to sell their properties. Given the size of commissions (typically $20,000 on a $400,000 home), sellers need to know that PCSR is actively working to sell their home.
• For PCSR agents, Bob wants to project the image that they have the most up-to-date technology. This is essential to retain them. Tools include CMA (Comparable Market Analysis), which allows the agent to do a market analysis on-line, access to on-line property data and MLS, the existing
Figure 6: Home Owner’s Activity Report – MLS # 21000774, Randy Goodie, Tuesday, January 30, 2001
HOME OWNER’S ACTIVITY REPORT – ACTIVE
29753 Main, Clarkston, MI 48346   $339,900
39387 W. Thirteen Mile, Suite 100, Farmington Hill, MI 48331   Phone: (248) 999-9999   Email: [email protected]
Activity Date / Activity / Feedback Information:
01/29/01  Progress Report – e-mailed activity update to Beth
01/29/01  Showing – Home showing from 4:30 pm to 5:30 pm
01/29/01  Reverse Prospect Match – reverse prospect match and brochure follow up completed
01/29/01  Showing – Nice home. One of the cleanest that they’ve seen. Priced too high for the sq footage. They are looking for something bigger.
01/28/01  Showing – Home showing from 2:00 pm to 4:00 pm
01/28/01  Showing – Home showing from 2:00 pm to 4:00 pm
01/28/01  Advertised – Ad # 1 was published in the Detroit News & Free Press, Section: DS, using format: Detroit News & Fr DS, number of lines: 9, price published: 339,900
01/27/01  Showing – purchased another home in same square mile for about the same price
01/27/01  Showing – buyers thought home was priced too high and was not large enough for them / basement only being partial was a large deterrent
01/26/01  Showing – Home showing from 1:00 pm to 2:30 pm
01/25/01  Showing – Unable to contact agent for comment.
01/24/01  Progress Report – written status mailed to home
01/23/01  Showing – customer had no interest
01/22/01  Reverse Prospect Match – reverse prospect match and brochure follow up completed
Winframe applications, and attractions for buyers, such as the virtual tour. With all of the new applications, agents’ work habits will be tied to technology. Virtually all of their activities, from looking for a property to reporting on sales activities to closing, involve the use of PCSR’s technology.
• For non-PCSR agents, Bob wants to communicate with them in the hope of interesting them in PCSR. After a house showing by a PCSR agent of a non-PCSR-listed property, it is typical to provide feedback to the listing agent. PCSR wants to use this opportunity to e-mail non-PCSR agents.
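The CMA tool mentioned above amounts to pricing a subject property from recent comparable sales. The sketch below is a deliberately simplified version of the idea: real CMA tools make feature-by-feature adjustments, and the comparables here are hypothetical except the first, which is the Figure 2 listing.

```python
def cma_estimate(subject_sqft: float, comps: list) -> int:
    """Estimate a listing price as subject square footage times the
    median price per square foot of comparable sales (simplified CMA)."""
    ppsf = sorted(price / sqft for price, sqft in comps)
    n = len(ppsf)
    median = ppsf[n // 2] if n % 2 else (ppsf[n // 2 - 1] + ppsf[n // 2]) / 2
    return round(subject_sqft * median)

# (sale price, finished square feet); the first comp is the Figure 2
# listing, the other two are hypothetical.
comps = [(149_900, 1290), (162_000, 1400), (138_500, 1150)]
print(cma_estimate(1300, comps))
```

Using the median rather than the mean keeps one unusual sale from skewing the estimate, which matters in a bid-and-counter-bid market where, as the case notes, listing is the only true test of value.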
Figure 7: PCSR Main Web Page
Figure 8: Agent’s Home Page
Figure 9: PCSR Listing
Figure 10: Virtual Tour
Evolving Markets
As Bob is introducing this new functionality, competition in the real estate market remains fierce. In 2001 PCSR has approximately the same number of agents as three years ago, but has only been able to maintain this number by acquiring new offices. A number of weaker competitors have dropped out of the market, but the remaining firms are challenging to compete with. Meanwhile, the growth of the Internet has been nothing less than spectacular. The stock prices of Internet start-up firms such as Amazon.com soared on unrealistic expectations up until mid-2000 and then crashed in early 2001 when reality set in. Expectations continue to run high that virtually all products will be bought and sold on the Internet. With the growth of e-commerce comes the commonly accepted wisdom that customers are “king” and all firms seemingly face commoditization of their markets and declining prices. The reality of the Internet revolution, however, is far more complex than this. As Wilder (1998) identifies, there are numerous myths and realities. Moving from traditional business relationships to Web-enabled commerce is neither easy nor cheap. Not everyone is making the move, nor does the Web inevitably lead to disintermediation. In real estate, for example, early Web implementations by many players focused on only limited aspects of the business, such as financing and display of information on properties. To truly upset the traditional sales channel, Web infomediaries need to provide an “integrated, single source, highly personalized experience” (Hagel, 1999, p. 61). Such integration is not evident in any current Web presence. Bob’s strategy can be viewed in the context of Evans and Wurster’s (2000) classic richness versus reach analysis. According to Evans (p. 24), firms typically face a tradeoff of “richness” and “reach.” For example, a firm can employ a “rich” strategy of using commissioned sales representatives.
However, such a strategy is normally restricted by the ability of representatives to make sales calls. Firms, alternatively, can employ high “reach” strategies with lower “richness.” For example, firms can choose to advertise on TV or radio. The growth of information technology, and in particular the Web, challenges this tradeoff. With the WWW the tradeoff line can shift upward and to the right, due to the “explosion of connectivity” and “dissemination of standards” (Evans, p. 31). Bob’s strategy is to work to the upper left of the tradeoff line with “rich” technology support and the firm’s traditional strong personal sales support. Other competitors have used the Web to focus on increasing “reach,” primarily in obtaining sales leads. They are working on the lower right part of the tradeoff line.
CURRENT CHALLENGES/PROBLEMS FACING THE ORGANIZATION
As Bob reflected on the state of the firm’s IT infrastructure in early 2001, he turned back to the situation at the end of 1998. At that point Dan and Jerry, the owners, were pleased that their sales
Figure 11: Richness/Reach Tradeoff
[Diagram: the richness/reach tradeoff line, with richness on the vertical axis and reach on the horizontal axis.]
associates considered themselves technical leaders, but felt there should be more value for their IT spending. Bob’s bosses wanted to know then if this IT spending would lead to hiring more sales associates and improved business processes. These questions continue to haunt Bob in 2001. As quickly as PCSR implements technology and improves business operations, management senses a renewed level of competition that has to be responded to. Can PCSR ever achieve and maintain a competitive advantage in the real estate market through IT? Bob is amazed at the progress the company has made in achieving connectivity throughout the organization and how it has begun to build applications. Even with the acquisitions of new offices, the game plan is basically on track. But as he looks back over the past several years, he has a sinking feeling that this was the easy part. Bob knows that he needs to add new applications to leverage his infrastructure and fully use and share information throughout the firm. In 2001 Bob also has several management issues to attend to. First, his efforts to build applications based on Oracle and Web technology have proven very difficult to manage. Given that he has no full-time development staff, all development work is outsourced to contract programmers. At this point, Bob is budgeting programming effort and cost, but he has little idea how to manage development of any given project to a completion date. Should Bob continue custom development? Or should he search for third-party software? If he simply buys third-party software, what advantage will he have over his competitors? Second, Bob wants to combine voice and data to get the most out of his communication dollars. But whom should Bob turn to to provide integrated voice and data? Many of his potential suppliers focus on one or the other.
Third, Bob’s IT staff is limited to a couple of part-time employees who configure PCs and the network, but as the firm’s infrastructure grows, the maintenance and support get harder to handle. Should Bob consider outsourcing his network support needs? If so, what type of firm should he look at: a local VAR or systems support firm, or a support organization out of a national computer firm? Bob also has to consider how to relate to his peers on the management team. He has delivered the best technology in the industry, but they do not know how to use it to recruit new agents, the single biggest selling point of the technology to management. Bob is confident his implementation offers the best services to the sales associates in the marketplace, but it does not seem to be helping to recruit sales associates. What, Bob wonders, should he say to his peers?
Figure 12: Web Usage Report – Usage Statistics for prudential-michigan.com
Summary Period: Last 12 Months
Figure 12: (continued)
Summary by Month

Month       Daily Avg                                   Monthly Totals
            Hits   Files  Pages  Visits  Sites      KBytes     Visits   Pages     Files     Hits
May 2001    28786  16388  12899    296     451      198793       593     25798     32777    57572
Apr 2001    19060  12138   8696    372    2750     3145099     11177    260888    364166   571826
Mar 2001    18017  10890   7626    389    2936     3134009     12063    236422    337618   558533
Feb 2001    18140  11445   8364    398    2774     2585424     11150    234212    320475   507929
Jan 2001    12367   7962   5402    308    2510     1823284      9555    167475    246829   383394
Dec 2000     6825   4168   2845    181    1587     1069249      5633     88209    129236   211589
Nov 2000    10975   7442   4726    266    2045     1689554      7984    141786    223288   329254
Oct 2000    10874   7798   4506    277    2074     1688964      8594    139694    241738   337101
Sep 2000    11294   8277   4537    280    2105     1748531      8427    136133    248314   338842
Aug 2000    11688   8558   4473    281    2074     1753287      8728    138672    265313   362341
Jul 2000     8807   6627   3093    268    2116     1256423      8310     95887    205437   273044
Jun 2000    12731   8435   3986    286    2143     1356680      8598    119582    253074   381931
Totals                                            21449298    100812   1784758   2868265  4313356
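The daily-average columns in the report are simply the monthly totals divided by the number of days in the month, truncated to an integer. This can be checked against the full months in the report (May 2001 is a partial month, so it would not reconcile this way):

```python
import calendar

def daily_average(monthly_total: int, year: int, month: int) -> int:
    """Reproduce Webalizer's daily average: monthly total divided by
    the number of days in the month, truncated to an integer."""
    days = calendar.monthrange(year, month)[1]
    return monthly_total // days

# Hits figures taken from the usage report above.
print(daily_average(383_394, 2001, 1))   # January 2001 -> 12367
print(daily_average(211_589, 2000, 12))  # December 2000 -> 6825
```

Both results match the report's daily-average hits for those months, which is a useful sanity check when comparing traffic across months of different lengths.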
Generated by: Webalizer Version 2.00
Bob has more tactical matters to consider. Web usage statistics show a respectable hit rate, but Bob wants to double the hit rate on the site. At the same time, Bob wonders, “What is the value of a hit?” “How much does the Web help me reach buyers versus how much does the Web help me satisfy sellers and sales associates?” Finally, at a more strategic level, Bob has to consider the future of the real estate market in Southeast Michigan. First, who will master the Web-enabled real estate market? Will strong regional operators like his firm continue to largely control the business? Or will national firms develop Web strategies and reduce the power of brokers? Or will “pure play” Internet firms invade the industry? Having read of the impact of the Web on other industries, such as travel, Bob is concerned. Echoing Evans and Wurster (2000): How vulnerable is the “venerable” PCSR in this market? Bob wonders how his IT infrastructure will support the firm in coming years. The real estate industry is sure to undergo change. Continued consolidation of brokers is highly likely. Can his infrastructure help the firm adapt to market changes? What new technologies, such as wireless, should he adopt to support the firm’s strategic needs?
FURTHER READING
Afuah, A. and Tucci, C. (2000). Internet Business Models and Strategies: Text and Cases. New York: McGraw Hill.
Crowston, K. and Wigand, R. (no date). Real Estate War in Cyberspace: An Emerging Electronic Market? http://www.isworld.org [2001, January 13].
Crowston, K., Sawyer, S. and Wigand, R. (2001). Investigating the Interplay Between Structure and Information and Communications Technology in the Real Estate Industry. Information, Technology and People, 15(2).
D’Aveni, R.A. and Gunther, R. (1994). Hypercompetition: Managing the Dynamics of Strategic Maneuvering. New York: Free Press.
Evans, P. and Wurster, T. (1997). The New Economics of Information. Harvard Business Review.
Evans, P. and Wurster, T. (2000). Blown to Bits: How the New Economics of Information Transforms Strategy. Boston: Harvard Business School Press.
Hagel, J. and Singer, M. (1999). Net Worth. Boston: Harvard Business School Press.
Stanfill, J. (2000). How Brokers Can Counter the Risks of Disintermediation by Embracing Leveraging Technology Trends. Real Estate Issues, 24(4).
Turban, E., Lee, J., King, D. and Chung, H.M. (2000). Electronic Commerce: A Managerial Perspective. Upper Saddle River, New Jersey: Prentice Hall.
Wilder, C. (1998, December 7). Myths and Realities. Information Week.
REFERENCES
Crowston, K. and Wigand, R. (no date). Real Estate War in Cyberspace: An Emerging Electronic Market? http://www.isworld.org [2001, January 13].
Crowston, K., Sawyer, S. and Wigand, R. (2001). Investigating the Interplay Between Structure and Information and Communications Technology in the Real Estate Industry. Information, Technology and People, 15(2).
Wilder, C. (1998, December 7). Myths and Realities. Information Week.
BIOGRAPHICAL SKETCHES
Andy Borchers, DBA, is an Associate Professor of Information Systems at Kettering University in Flint, MI. He spent 21 years working for General Motors and Electronic Data Systems before turning to full-time teaching in 1997. His teaching and research interests are varied and include database management, electronic commerce and management of information systems organizations. Andy earned a Bachelor of Industrial Administration from Kettering University, an MBA from Vanderbilt University and a DBA from Nova Southeastern University.
Robert Mills is CFO and CIO of Prudential Chamberlin Stiehl Realtors. He holds an MSIS from Lawrence Technological University.
Seaboard Stock Exchange’s Emerging E-Commerce Initiative
Linda V. Knight and Theresa A. Steinbach, DePaul University, USA
Diane M. Graf, Northern Illinois University, USA
EXECUTIVE SUMMARY
While Seaboard Stock Exchange remains one of the top stock exchanges in the United States, its relative position in the world is slipping. E-commerce is threatening the organization by accelerating the rate of disintermediation and the entrance of new competitors into Seaboard’s market. Against this backdrop, Seaboard’s e-commerce initiative has emerged. Tension between control and experimentation surfaces as the association attempts to incorporate emerging technology while maintaining its traditional way of doing business. The organization struggles to merge new technology with existing IT strategy while internal entrepreneurs strive to shape a Web development methodology and define an appropriate role for standards and controls in an emerging technology environment.
BACKGROUND
In the year 2000, Seaboard Stock Exchange is recognized worldwide as among the leading exchanges of its type in the United States. The exchange was founded in the mid-19th century by a group of eight businessmen meeting informally in a garden outside of Rock Island, New Hampshire, to buy and sell local stocks. For decades, Seaboard remained, in the words of one manager, “a sleepy little regional exchange,” until the mid-1970s, when it made a strategic decision to become a national exchange. Since that time, Seaboard has expanded to approximately 500 traders handling a wide variety of stock and bond products. The exchange continues its heritage of face-to-face trading in an open arena to this day. Traders meet on an open floor and use hand signals to convey the quantity of a particular stock or bond that they would like to buy or sell. Bids to buy and offers to sell are made by open outcry. When the highest bid meets the lowest offer, the two traders each write down the trade on cards that ‘runners’ carry off the floor to be keyed into a computer system. Although Seaboard remains one of the top stock exchanges in the United States, its relative position in the world is slipping. As Exhibit 1 shows, trading volume, which had been relatively steady from the earliest days, took a major swing upward in the 1970s when Seaboard decided to “go national.”
Copyright © 2002, Idea Group Publishing.
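The matching rule described above — a trade occurs when the highest bid meets the lowest offer — can be sketched in a few lines of Python. This is only an illustration of the open-outcry rule, not Seaboard's systems; the function name and the convention of printing the trade at the offer price are assumptions for the example:

```python
# Minimal sketch of the open-outcry matching rule: a trade occurs when
# the highest outstanding bid meets (or crosses) the lowest offer.
# Illustrative only; the real floor uses hand signals, cards, and runners.

def match_trade(bids, offers):
    """Return (price, remaining_bids, remaining_offers); price is None
    if the best bid does not reach the best offer."""
    if not bids or not offers:
        return None, bids, offers
    best_bid, best_offer = max(bids), min(offers)
    if best_bid >= best_offer:
        bids, offers = list(bids), list(offers)
        bids.remove(best_bid)
        offers.remove(best_offer)
        # Assumed convention for this sketch: trade prints at the offer.
        return best_offer, bids, offers
    return None, bids, offers

price, rest_bids, rest_offers = match_trade([101.5, 101.0, 100.25], [101.25, 102.0])
print(price)  # 101.25 -- the highest bid (101.5) crossed the lowest offer
```

When the best bid is below the best offer, no trade occurs and both books are returned unchanged, mirroring traders waiting on the floor for a counterparty.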
This dramatic growth continued until 1998. However, volume has been dropping for the last two years. While the worldwide total stock trading market has been growing, Seaboard’s relative market share has declined, as has the price of a seat on the Seaboard Stock Exchange. Declines in Seaboard’s trading volume mean that its member/traders, who charge customers for each trade they execute, are earning less. This situation has triggered a drop in the price of a seat on the exchange. Declines in the price of a Seaboard Stock Exchange seat mean that when members decide to retire or cash out, they reap far fewer dollars than previously (Arvedlund, 2000). The price of a Seaboard seat has dropped from a record high of $905,000 in the late 1980s to $322,000 in late 2000. Seaboard’s membership is concerned about these declines, and has recently moved to trim staff in order to cut costs. These cost-cutting measures can be expected to increase cash flow for the organization when it comes to paying its bills to support the trading floor; however, they will not put dollars in the pockets of its owners, simply because of the way the exchange is structured. As Exhibit 2 shows, the exchange’s 500 members are active owners of the not-for-profit association. The members own their own seats, have trading rights on the exchange, and manage the organization through the system of committees shown in Exhibit 3. Members also elect the Board of Directors and President. Further, since they are present every trading day, they play an active role in the day-to-day operations of the organization. The exchange employs a staff of about six hundred employees, one-third of whom are in the Information Technology department. The primary responsibility of all staff is to support the trading floor and keep it running smoothly. According to Vance Fernandez, Vice President of Information Technology, “Working at Seaboard means having five hundred bosses.
Staff at all levels, including the President, will drop everything if a member calls with a request.” For example, when a member decided that he wanted a statistical analysis of recent energy stock prices, Fernandez immediately reassigned two top people from his most critical project to work on the member’s special request. Members earn their incomes primarily from commissions on trades or occasionally from making wise trades for their private accounts. Since Seaboard is a not-for-profit association of its members, any exchange income beyond that needed to meet costs is banked for future expenses, rather than being paid out as dividends or profits to the member/owners. Profits that had been saved in prior years are now being depleted as Seaboard’s annual income drops below that needed to maintain its fixed costs. Seaboard’s current economic dilemma, and the increasing domestic and foreign competition, are causing its members to reconsider their organization’s structure. The members have organized a committee to study the possibility of becoming a publicly held for-profit corporation (Wall Street and Technology, 1999).

[Exhibit 1: Seaboard Stock Exchange Trading Volume, 1925–2000. Note that years prior to 1995 are in 5-year increments; the vertical axis runs from 0 to 350,000,000.]

Exhibit 2: Seaboard Stock Exchange Organizational Structure
[Organization chart: Approximately five hundred owner/members rule the exchange through committees and elect the Chairman and Board of Directors and the President. Reporting to the President are the V.P. of I.T. (Vance Fernandez), the V.P. of Marketing, the V.P. of Corporate Communications (Paula Reese), and other Vice Presidents. Within I.T. are an Application Development Director and an Operations Director (Roger Fields); the Manager of Web Development (Karen Greene) reports to the Operations Director. The Marketing Manager (Todd Lawson) reports to the V.P. of Marketing. Other departments: Legal, Financials, Trading Floor Operations, Physical Plant.]

Exhibit 3: Seaboard Stock Exchange Governing Committees
• Executive Committee
• Finance & Audit Committee
• Floor Procedures Committee
• Human Resources Committee
• Marketing Committee
• Membership & Admission Committee
• Nominating & Elections Committee
• Product Development Committee
• Strategic Planning Committee
• Technology & Automation Committee
SETTING THE STAGE
Strategic Use of Technology at Seaboard
Seaboard is among the top floor-based exchanges in the United States in terms of technology. Seaboard’s trading floor was considered state of the art when it was totally rebuilt in 1995 at a cost of approximately $36 million. The floor is supported by an extensive network of fault-recovery mainframe computers, an ATM network capable of carrying voice and data, and a massive telephone communication system. This advanced technology supports the traditional open outcry trading system by bringing orders to the floor, and facilitating reporting of trades. Since 1996, Seaboard has offered electronic trading (Morgan and Perkins, 2000). However, traders, who are also the exchange’s owners, have not supported extending electronic trading beyond evening hours and nontraditional products that do not compete with the traditional trading floor (Osterland, 1998). Industry analysts who support Internet-based electronic trading note that it provides “unprecedented connectivity” at minimal cost (Wall Street and Technology, 2000). It is estimated within the industry that an electronic trade costs less than half as much as a traditional trade to execute. Proponents of traditional trading floors, on the other hand, praise their ability to provide unbiased price discovery. Compared to electronic trading, traditional trading floors, it is argued, provide buyers and sellers with the ability to have their trader or broker use his or her judgment and market insight to make the trade at the best possible price. Not all industry experts agree (Tsang, 1999; Mosser and Codding, 2000). Countries without a stock trading tradition, particularly in Europe and Asia, have opened exchanges that are totally electronic from the first trading day. Thus a small developing country like Singapore with a brand new stock exchange sometimes has a more modern wireless technology infrastructure than Seaboard or its traditional United States counterparts.
Such foreign exchanges are not direct competitors because they are trading different stocks. However, all exchanges compete for the same investor dollars. Additionally, some competing United States exchanges have begun moving into other parts of the world through partnerships (Wagley, 2000). A further threat to existing exchanges comes from electronic communications networks (ECNs) (Latimore, 1999). ECNs, using Internet technologies, connect buyers directly to sellers without any intermediaries. In December 1998, the Securities and Exchange Commission (SEC) passed Regulation ATS, which grants Alternative Trading Systems, such as NexTrade™, the ability to become full-fledged securities exchanges (McAndrews and Stefanadis, 2000). By cutting prices, ECNs captured 30% of all NASDAQ trading volume in just three years, and are now targeting other stock market segments (Carroll, Lux and Schack, 2000). Seaboard’s members generally view electronic trading, ECNs and new high-tech foreign exchanges as encumbered by an inability to leverage the trading insights that Seaboard members employ to get their customers the best prices.
Information Technology Department at Seaboard
The Information Technology Department at Seaboard is divided into two areas, Application Development and Operations. The Operations area includes the data center, network support, and microcomputer support teams. The Application Development area, responsible for all new system development and maintenance programming, is divided into four teams: floor support, order processing, regulatory systems and financial systems. Application Development makes regular use of consultants from top-name consulting firms, particularly for the early stages of new system development, and when employing new technologies for the first time. The area has defined a strict methodology for how new development is to be approached, with ample upfront planning and an
emphasis on approvals at strategic points in the development process. This methodology, the Planned System Development Process, or PSDP, is depicted in Exhibit 4. Programmers joke about the PSDP, but grudgingly agree that it does a good job of ensuring that new projects do not incur unexpected member criticism late in the development process.
CASE DESCRIPTION
Early Web Development Efforts
Seaboard’s first experience with Web development came in 1993, when a small group of microcomputer installation technicians became interested in the Internet. As one member of the group explained, “We didn’t have any experience building systems of any kind, but we had extra time available to experiment.” Most of the group’s first site consisted of profiles and personal Web pages of the PC specialists themselves, coupled with a loose collection of pages about the mission, services and policies of the microcomputer support group. During this same time period, one of the exchange’s members, Sam Butler, used free software to construct a public Bulletin Board System. Butler’s BBS made historical financial data on the most heavily traded stocks available to the general public 24/7. Looking back on that time from the year 2000, Butler said, “I was once a small trader myself, and I appreciate the position of the little guy. If we can help him out with some free data, why not? Of course, I thought making this data available would be good for Seaboard’s image, too.” Roger Fields, the IT Operations Director, recalled later that this time period “gave people a chance to try out the technology.” Interest in the World Wide Web was fueled by the development of the Mosaic™ browser in 1993. As Exhibit 5 shows, by 1994 the business press as a whole was discovering the Internet. Seaboard’s president picked up on the trend, issuing a directive to create a Web presence for the organization by the end of 1994. The exchange president stated privately at the time, “Although there is no member support for using the Internet for electronic trading, the technology still might be important someday in this business.” He then authorized the funding necessary for two microcomputer specialists to officially spend five to ten percent of their time on this project.
The two developers, anxious to get a site working as soon as possible, concentrated their efforts upon “begging” various departments for content, and then translating that content into HTML as quickly as possible. In the year 2000, when Fields looked back on the site, he described it as “a basic form of brochureware, with clickable links for the organization’s mission, history and goals.” This site, shown in Exhibit 6, combined with a somewhat expanded version of the original intranet and BBS, represented the organization’s second-generation e-commerce effort. By mid-1995, two microcomputer technicians were officially given the title of Web Developer, and allowed to spend 50 percent of their time on Web development, with the other 50 percent going
Exhibit 4: Planned System Development Process (PSDP)
System Planning: Work Order Request Form; Dept. Manager Signature; Dept. VP Signature; Computer Requirements Committee Authorization; Preliminary Investigation; Feasibility Report
System Analysis: Data Requirements; Business Processes; Interface Requirements; Communications Requirements; Systems Requirement Report; Dept. VP Signature
System Design: Database Schema; Application Schema; Interface Schema; System Architecture; System Design Specification; Dept. VP Signature; Dept. Manager Signature
System Implementation: Documentation Review; Coding Review; Quality Assurance Review; Test Plan & Testing; File Conversion Plan & Conversion; Production Changeover
System Support: Work Order Request; Dept. Manager Signature; Dept. VP Signature; Computer Requirements Committee Authorization
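Read as a workflow, the PSDP in Exhibit 4 is a sequence of phases, each gated by required deliverables and sign-offs. A minimal Python sketch of how such a gate check could be modeled is below; the phase and step names come from the exhibit, but the data structure and function are illustrative assumptions, not Seaboard's actual tooling:

```python
# The PSDP phases from Exhibit 4, modeled as an ordered gate checklist.
# Structure and function names are illustrative only.
PSDP = {
    "System Planning": [
        "Work Order Request Form", "Dept. Manager Signature",
        "Dept. VP Signature", "Computer Requirements Committee Authorization",
        "Preliminary Investigation", "Feasibility Report"],
    "System Analysis": [
        "Data Requirements", "Business Processes", "Interface Requirements",
        "Communications Requirements", "Systems Requirement Report",
        "Dept. VP Signature"],
    "System Design": [
        "Database Schema", "Application Schema", "Interface Schema",
        "System Architecture", "System Design Specification",
        "Dept. VP Signature", "Dept. Manager Signature"],
    "System Implementation": [
        "Documentation Review", "Coding Review", "Quality Assurance Review",
        "Test Plan & Testing", "File Conversion Plan & Conversion",
        "Production Changeover"],
    "System Support": [
        "Work Order Request", "Dept. Manager Signature",
        "Dept. VP Signature", "Computer Requirements Committee Authorization"],
}

def next_incomplete_step(completed):
    """Return the (phase, step) of the first gate not yet satisfied, or None."""
    for phase, steps in PSDP.items():
        for step in steps:
            if (phase, step) not in completed:
                return phase, step
    return None

# With all of System Planning signed off, the next gate falls in System Analysis.
done = {("System Planning", s) for s in PSDP["System Planning"]}
print(next_incomplete_step(done))  # ('System Analysis', 'Data Requirements')
```

The sequential, signature-heavy shape of this checklist is exactly what Butler later objects to: no step can begin until every earlier gate has been cleared.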
Exhibit 5: Business Press Coverage of E-Commerce
Number of ABI/Inform abstracts mentioning Internet, WWW, e-commerce or e-business, by year:

Year           1990  1991  1992  1993  1994  1995  1996  1997  1998  1999  2000
Business Week     0     0     0     2    20    88   197   147   179   370   266
Forbes            0     1     1     3    11    50   103    87   105   196    74
Fortune           0     0     0     3    15    28    58    75   100   202   173
to their traditional microcomputer support duties. Fields brought in an outside design team to improve the “look and feel” of the site, which up until that time had no graphic artists associated with it. He recalled that “These three consultants were not very technical; they used a WYSIWYG editor. Their strength was on the creative end.” Both the Web developers and the external consultants now began to build relationships with departments elsewhere in the organization, particularly marketing and corporate communications. As Fields explains it now, “When the Web developers first sought relationships with these departments, they were just looking for data for the Web site. As the relationships developed, they realized that these other departments could help us establish a vision for the site. At that point, the Web developers consciously began to seek input on the purpose and goals of the site. That’s also when the first efforts were made to profile potential users of the site by discussing with other departments who was likely to visit the site and why.”
Expansion of the E-Commerce Initiative
In early 1996, Sam Butler became interested in expanding Internet use at Seaboard. Butler had owned a seat at the exchange for 27 years. He was also an entrepreneur, having started several highly successful small financial service firms during that same timeframe. In each case, once Butler had built up his business, he “sold the company and moved on to new challenges.” Butler describes himself as “someone who likes to read about technology, and experiment with new ideas.” In the 1980s, when Seaboard’s statistical analysis was partially hand-generated and partially produced by the mainframe in the form of monumental stacks of computer printouts, Butler began doing his own number-crunching through PC spreadsheet software. When Mosaic and GUI interfaces became prominent in 1995, Butler decided to replace the BBS that he had created in 1993, providing Seaboard’s members with a simpler GUI interface to his online data. Using an $88,000 budget that he had been given by the exchange president, Butler hired one technician, Chester Bromwell, to work by his side. Butler then spent the remainder of his seed money on some small hardware purchases. For most of his hardware needs, Butler refurbished old equipment being discarded by other exchange departments. Most of his software, including the Apache Web server, was shareware or freeware that he downloaded from the Internet. Since there was no available office space, Butler and Bromwell set up shop in a hallway on the path to the restrooms. Butler correctly reasoned that this location would give him contact with most IT
Exhibit 6: Second Generation Seaboard Consumer Web Site
employees at least once a day. Butler’s project generated widespread interest among IT personnel. It was not unusual to find programmers and networking experts standing or even sitting on the floor in the hall, chatting with Butler and Bromwell about Internet technologies and the future of the Internet at Seaboard. Fields recalls appreciating the hallway’s synergistic generation of ideas and encouraging participation by his staff, “as long as their regular duties were attended to.” Fields’ boss, Vance Fernandez, has a similar recollection. “It was a time of great excitement in the industry, and Sam brought that excitement to our area. There still was no real member interest in the Internet or its potential for electronic trading, outside of Sam of course, so I couldn’t officially support such development, but I was willing to do what I could unofficially. As a whole, the IT line staff were as curious as Sam about the new technology, and freely gave their time and expertise to Sam’s project. There was great excitement and positive energy in Sam’s hallway.” The one exception was Butler’s disagreement with the Director of Application Development concerning the Planned System Development Process. In Butler’s words, “The PSDP was and still is totally unworkable. No wonder those people in IT can never seem to get anything implemented. I didn’t have time for that nonsense, so frankly I just didn’t do it. We didn’t need plans or approvals. We needed action.” In mid-1996, Butler and Bromwell implemented StockScene.com. The site was designed to provide members with time-delayed quotes, individualized trading reports and a chat room. In addition, it included content from ten outside industry sources. Revenue was generated from advertising, coupled with online sales of current exchange information to the members and the public. Butler did not see security as a major issue since many of his users were members. 
He selected Netscape’s Credit Card System™ for payment because it was easy to use and readily available. As Internet technology
advanced, so did Butler’s ambition for StockScene. In late 1996, he built ‘SS (Seaboard Stock Exchange) Internet Radio’ with Internet-only access. As StockScene.com grew in importance, other departments at Seaboard began to take notice. By late 1996, the Legal Department began setting policies governing the sale of real-time exchange data, and it also set up an approval process for advertisements on any exchange site. In Butler’s words, “Legal was killing the advertising on the site. By the time they approved an ad, the client had lost interest in placing it on our site.” About the same time, the Marketing Department became concerned about the wisdom of providing free data to members when similar data was being sold to third parties who then repackaged and marketed it, sometimes to the same members. In Butler’s view, “I had built a great system for member use, and now these staff departments were nitpicking it. It became increasingly difficult to get anything through the approvals. When I couldn’t innovate anymore, I lost interest and turned my site over to the IT department.” At the start of 1997, Fields’ Web developers took over the maintenance and further development of Butler’s project. While Butler had been developing his Web site throughout 1996, Fields’ Web development initiative had also moved forward. In early 1996, Karen Greene transferred into the microcomputer support group from a Seaboard user department where she had automated some traditionally manual functions using Macintosh computers. While Greene’s job was to maintain approximately 250-300 Macintoshes for Seaboard, she found that “things pretty much ran themselves and I had a lot of free time.” Greene used the free time to teach herself about Internet tools and technologies. 
“The other microcomputer technician and I were just experimenting without any official approval, although Roger (Fields) did know what we were doing.” Greene describes the site developed at that time as “primarily cat and baby pictures, but we were learning the technology.” Greene’s first Web page was a list of all the departments at Seaboard, which she produced using WordPerfect. Although she was aware of Seaboard’s earlier Web development efforts, Greene sees those primarily as either consumer-oriented or member-oriented, and views her first page as “the beginning of Seaboard’s intranet, the first Seaboard site really designed for regular employee use.” As Greene’s internal site grew, it began to attract the attention of others elsewhere in the organization. Paula Reese, Vice President of Corporate Communications, and Greene began communicating about the potential of Internet technologies. Greene told Reese that she did not think that either the intranet that Greene had developed or the earlier consumer-oriented site looked sufficiently professional. Reese readily agreed with Greene’s evaluation. Reese’s vision was to create a brand image for Seaboard on the Internet. According to Reese, “I wanted to reach out to those who don’t understand the stock market. In my mind, visual appeal and ease of navigation were as important to less informed users as informational content.” Together, Reese and Greene began to fine-tune the public site, SeaboardStocks.com, for both content and presentation. Reese was not content with improving the look of the consumer site. In her words, “My vision was to create a one-stop-shop for market information. I wanted Seaboard to be the first place consumers thought of when they wanted market information. So, I started to build relationships with search engines, journals, and newsletters, with the goal of enticing them to link to Seaboard on their sites.” This approach caught the attention of Seaboard’s other senior-level management.
Largely because of Reese’s efforts, in late 1996 Greene was promoted to Manager of Web Development, a newly created full-time position. She reported directly to Fields, and was expected to develop an annual strategic plan for the site, prepare a budget and supervise the other Web developer. At first, Greene’s activities still centered on begging other departments for content, but over the next several years, this situation changed. As Exhibit 7 shows, Internet usage was steadily rising, and Seaboard employees also increased use of their intranet during this time. By 1998, Greene obtained permission to hire an experienced graphic artist and Web designer as the second member of her team.
Increasing Formalization
In 1998, interested employees formed a Web Initiative Committee (WIC). The committee,
[Exhibit 7: Internet Host Growth. Internet Domain Survey host count, January 1991 through January 2001 (Source: Internet Software Consortium, http://www.isc.org/). Series plotted: Old Domain Survey, Adjusted Count, New Domain Survey; vertical axis 0 to 120,000,000 hosts.]
composed entirely of volunteer staff members, was considered unofficial and, since it was not composed of members, was not part of the governing structure of the exchange. The size of the committee varied from 10 to 15 staff members, depending on who opted to attend meetings. The group’s focus was to discuss new items of interest in the industry and how Web technology might be employed in those new areas, as well as how new Internet technologies might impact the industry and the exchange’s Web presence. The group began to formulate strategy and set policies as seemed appropriate based upon the results of their discussions. As Greene recalls, “The only area we really steered clear of was electronic trading, which is really up to the members here. Other than that, we examined all aspects of the business and what the Internet could do for us.” Slowly, Sam Butler’s ‘shoot from the hip’ development methodology was modified. Over the period from 1996 to 1998, a new development methodology emerged for use within Greene’s area. Small projects were still done on request, with no real documentation or approval process, but larger ones began to follow a new procedure, as outlined in Exhibit 8. Fields later saw the new procedure as a version of the PSDP, “modified for Internet time.” By 1997, three sites, SSManifest.com (the staff Intranet), StockScene.com (a members-only site) and SeaboardStocks.com (the public site) were being maintained separately, each with its own “look and feel.” The three sites generated a combined total of more than one million hits per day, and according to Greene, “Content management was becoming unwieldy. 
Much of our subject matter was identical on all three sites, yet changes were cumbersome because they had to be done separately for each of the sites.” Throughout the second half of 1998, Reese and Fields worked, with the support of Fernandez, to gain a line item in the 1999 budget for content management and workflow software for Greene’s group (Knorr, 2000). Ownership of the project to convert Seaboard’s Web sites to content management software ultimately fell to the marketing area, through a series of events described below.
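The content-management problem Greene describes — identical subject matter maintained separately on all three sites — is the classic case for single-sourcing: store each item once and render it per site. A minimal Python sketch of the idea follows; the site names come from the case, but the content item, templates, and `publish` function are invented for illustration and do not describe Seaboard's actual software:

```python
# Single-source content sketch: one content store, three per-site renderings,
# so a change is made once rather than three times. Templates are invented.
content = {"market-hours": "Trading floor hours and holiday schedule."}

site_templates = {
    "SeaboardStocks.com": "<h1>{title}</h1><p>{body}</p>",      # public site
    "StockScene.com":     "<h2>{title}</h2><div>{body}</div>",  # members-only site
    "SSManifest.com":     "* {title} *\n{body}",                # staff intranet
}

def publish(item_id, title):
    """Render one stored content item into each site's own look and feel."""
    body = content[item_id]  # single source of truth
    return {site: tpl.format(title=title, body=body)
            for site, tpl in site_templates.items()}

pages = publish("market-hours", "Market Hours")
# One edit to `content` now propagates to all three renderings.
print(pages["SeaboardStocks.com"])
```

Each site keeps its own “look and feel” through its template, while the cumbersome triple maintenance Greene complains about disappears.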
An E-Commerce Business Plan
In 1999, two different approaches were taken to Seaboard’s e-commerce business plan, one through the IT department, and one originating with the Board of Directors and led by the Marketing department. In the IT area, Seaboard’s network infrastructure was being outsourced to a leading provider of Internet infrastructures. As part of the agreement for network services, this firm bundled in e-commerce consulting services. Fields was highly instrumental in negotiating this contract, and anxious to work closely with the well-regarded consultants. A subgroup of the Web Initiative
Exhibit 8: Web System Development Process
Pre-Production: Work Order Request Form; Site Assessment; Rough Schedule; Request for Client Design Specs; Creative Strategy Formulation; Present Creative
Creative Production: Technical Guidelines (Creative Production); Flow Chart; Creative Sign-Off; Requestor Approves Creative
Technical/Final Steps: Technical Guidelines (Hosting); Coding Report; Quality Assurance Report; Legal Review & Approval; Publish Site
Post-Production: Maintenance; Analysis Report; Final Site Review
Committee, led by Fields and Greene, began an in-depth study of the organization’s e-commerce opportunities. They began examining all aspects of Seaboard’s business, not just the competitive marketplace, but also the internal value chain, for opportunities to use Internet technologies for long-range competitive advantage (Rayport and Sviokla, 1995). As possible projects were identified, they were rated in terms of potential value and ease of installation, and prioritized. As of mid-2000, this study was continuing. Meanwhile, by mid-1999, senior-level management was becoming increasingly interested in the impact of technology. As Vance Fernandez describes it, “The Board’s primary technology concern was, as it had been since the 1960s, the issue of support for the trading floor.” Nonetheless, the Board of Directors was aware of the impact of electronic trading on other exchanges. They issued an official statement that, “Although competitors’ electronic trading has reduced trading volume for some other exchanges in different market segments, we do not believe that electronic trading by other exchanges will negatively affect Seaboard’s volume in the foreseeable future.” The board also addressed Seaboard’s growing Internet presence by forming a committee of members to investigate the wisdom of making data available free of charge on the Internet. As one board member explained, “How can we expect to continue to sell this data to third-party repackagers when we are giving it away free to their customers?” Several members, who had not shown any interest in the Internet previously, questioned the Board of Directors in mid-1999 about future plans. Referring to articles in the popular press (see Exhibit 5), they wanted to know what Seaboard’s Internet business plan was. In response to this query, the Board determined that an “e-commerce business plan” for the exchange should be developed. 
Both budget and staff resources were provided for this effort, which was tied to Reese and Fields’ initiative to gain approval for content management software. At this point, Seaboard’s marketing department became very interested in ownership of the project. In the words of Todd Lawson, a manager in the marketing department, “We in marketing saw the potential of the site, not for just simple corporate communications, but for marketing. Besides, we knew the federal regulations that the site needed to meet and corporate communications didn’t. We were the right ones for the job.” Lawson was placed in charge of the emerging project to align the three sites and bring them together under a single content management software umbrella. Lawson took the initiative in forming a second subgroup of the Web Initiative Committee, the Web Marketing Group (WMG). The group was composed mainly of marketing, marketing research and customer service staff. In Lawson’s words, “We decided that if we were going to move forward, then we needed a smaller group made up only of those of us who dealt regularly with
386 Knight, Steinbach & Graf
customers.” The WMG, led by Lawson, developed a project plan for the content management installation and conversion efforts. This plan was constructed without reference to any development methodology, marking the end of the Web System Development Plan’s use at Seaboard. The IT department, including specifically Greene’s Web development area, was not involved during these initial planning stages, and corporate communications played just a minor role, since neither area was included in the WMG. However, because Greene and Reese were both involved in the WIC, they did receive periodic progress reports. In January 2000, the process of aligning the consumer, member and employee Seaboard sites began with a market research study. The Web Marketing Group held focus groups with members of the exchange, and determined that the content of the three Seaboard sites was good, but that the Web pages were poorly arranged, non-intuitive and difficult to navigate. Based on this, the WMG determined that the next step was to interview design firms. Three design firms were chosen by the WMG to present their concepts. The competing firms’ presentation sites were judged by the WMG on creativity, navigation, organization and cost. According to Lawson, “Of the three we brought in, one was too heavy on the creative side, another had no process in place. The firm we chose had the best combination of price, experience and structure.” One of the outcomes of the marketing department initiative was a determination that there should be cohesion between the three sites and that they should all be accessible through a single gateway. This was consistent with the earlier conclusions of Reese and Fields, when they gained approval for content management software. The Web structure shown in Exhibit 9 was designed by the chosen Internet design firm. This firm provided the artwork and the coding for the top two layers, before delivering the site to Greene for lower level development and implementation. 
In July 2000, contact with the design firm ceased, and Greene’s IT area took over the production side of the project, while marketing’s Lawson retained ownership of the initiative. Greene warned Lawson that the changes in the site’s “look and feel” were dramatic, and it seemed to her that some users would feel disoriented and would complain. Lawson, however, was concerned primarily with long-range improvements to the site. When the new combined Web site was installed, there was considerable negative reaction from members, internal staff users and external users. Most of the complaints centered on change, along with the fact that old bookmarks no longer worked in the new navigation scheme, and that the search function was not yet fully available. Over time, however, users adjusted to the new navigation and the search feature was fully implemented. Now maintenance and enhancements are determined by the WMG. Lawson and Greene keep track of outstanding maintenance projects through an electronic spreadsheet and communicate as needed through email.
CURRENT CHALLENGES/PROBLEMS FACING THE ORGANIZATION
Seaboard’s e-commerce innovators view the future differently. Karen Greene stresses the progress that has been made, “Look how far we have come. We have a huge Web presence, and we are making maximum internal use of Internet technologies. We have content management software, a growing Web development group with its own budget and at least recognition that we can use a different development methodology for the Web, although we don’t really have one in place right now. That is a lot of progress for a conservative organization in just seven years.” Overall, Todd Lawson agrees that much progress has been made, “The WMG has done an excellent job with the Seaboard Web presence thus far. We just need to keep developing its marketing potential, while ensuring that we do not compete with our traditional product base.” Roger Fields views an e-business plan, like the one his WIC subgroup is working on, as central to the future. In Fields’ words, “We have a great opportunity here, with these consultants, to really look at the entire business from the standpoint of leveraging technology. I’d like to see this e-business plan completed and used. That is my hope for the future.” Vance Fernandez, however, thinks the
Seaboard Stock Exchange’s Emerging E-Commerce Initiative
387
Exhibit 9: Newly Designed Web Structure, SeaboardStocks.com
A Welcome & Disclaimer page leads to the Home Page, which links to six main sections:
• News: Headlines; Press Releases
• Market Information: Ticker Quotes; StockScene
• About Us: Tour the Floor; Members; History; FAQ; SSManifest
• Events: Calendar; Conferences & Workshops; Seminars & Educational
• Library: Online Publications; Regulations; Links of Interest
• Share Your Views: Chat Central
emphasis on e-commerce is misplaced, “All the e-commerce development we have done is minor in impact compared to what Internet-based electronic trading could do.” As for Sam Butler, he is planning his retirement, “I want to enjoy my grandchildren. I am too old to play the role of innovator here anymore. But I worry—who will lead the technology initiatives at Seaboard in the future?” Paula Reese, no longer directly involved with the Seaboard Web site, continues to watch the Internet activity at both Seaboard and its competitors’ sites. In Reese’s words, “I believe that Seaboard’s Web site is now impressive. With added graphics, it provides users with a virtual tour of the exchange floor, timely financial information and useful financial links. At long last, the financial news industry is raving about Seaboard.com. That is very rewarding to me personally. However, when I look back over the years, I see a lot of things I’d like to change. I can’t help but believe that we could have gone faster, farther. When I consider what some of Seaboard’s competitors are doing on the Internet, I see missed opportunities. We should be expanding into retail services: e-quotes, e-payments, even e-trading. I just don’t see that happening here anytime soon.” Reese has begun talking with executive placement firms.
FURTHER READING
Beath, C. M. (1991). Supporting the information technology champion. MIS Quarterly, 15(3), 355-372.
Beer, M. and Nohria, N. (2000). Cracking the code of change. Harvard Business Review, 78(3), 133-141.
Bhide, A. (1994). How entrepreneurs craft strategies that work. Harvard Business Review, 72(2), 150-161.
Cash, J. I., Jr. (1994). A call to disorder. InformationWeek, 498, 80.
Cash, J. I., Jr. (1994). The art of the possible. InformationWeek, 504, 112.
Christensen, C. (1997). The innovator’s dilemma: When new technologies cause great firms to fail. Boston, MA: Harvard Business School Press.
Christensen, C. and Overdorf, M. (2000). Meeting the challenge of disruptive change. Harvard Business Review, 78(2), 66-76.
Clair, C. (2000). Several exchanges becoming for-profit ventures. Pensions and Investments, 28(24), 61.
Evans, P. and Wurster, T. (1999). Getting real about virtual commerce. Harvard Business Review, 77(6), 84-94.
Gallaugher, J. (1999). Challenging the new conventional wisdom of net commerce strategies. Communications of the ACM, 42(7), 27-29.
Ghosh, S. (1998, March/April). Making business sense of the Internet. Harvard Business Review, 76(2), 126-135.
Henderson, J. C. and Venkatraman, N. (1999). Strategic alignment: Leveraging information technology for transforming organizations. IBM Systems Journal, 38(2-3), 472-484.
Rahman, S. M. and Raisinghani, M. (2000). Electronic commerce: Opportunity and challenges. Hershey, PA: Idea Group Publishing.
Roberts, T. L., Gibson, M. L. and Fields, K. T. (1999, July-September). System development methodology implementation: Perceived aspects of importance. Information Resources Management Journal, 12(3), 27-38.
Scharl, A. (2000). Evolutionary Web development. London: Springer.
Sherrell, L. B. and Chen, L. (2001, April). The W life cycle model and associated methodology for corporate Web site development. Communications of the Association for Information Systems, 5(7).
TradeNet. http://www.tradeweb.com/abouttradeweb/Introduction.htm. Accessed August 7, 2001.
Venkatraman, N. (2000, Spring). Five steps to a dot.com strategy: How to find footing on the Web. Sloan Management Review, 41(3), 15-28.
Wall Street and Technology (2000, November). Evolving exchanges. 18(11), 14-20.
REFERENCES
Arvedlund, E. E. (2000). New bottom line. Barron’s, 80(12), 26-27.
Carroll, M., Lux, H. and Schack, J. (2000). Trading meets the millennium. Institutional Investor, 34(1), 36-53.
Knorr, E. (2000). Content management crossfire. CIO Magazine, http://www.cio.com/archive/120100_et_pundit.html. Accessed August 7, 2001.
Latimore, D. (1999, November-December). Of markets and mania. Financial Executive, 15(6), 24-27.
McAndrews, J. and Stefanadis, C. (2000). The emergence of electronic communications networks in the U.S. equity markets. Current Issues in Economics & Finance, 6(12), 1-5.
Morgan, C. and Perkins, S. (2000). Electronic exchanges emerge transforming the equities business. Wall Street and Technology, 18(10), 72-74.
Mosser, M. and Codding, J. (2000, June). Gentlemen, start your exchanges. Futures, 29(6), 76-80.
Osterland, A. (1998, September 7). Electronic trading: A hue and cry in the pits. Business Week, 3594, 82.
Rayport, J. F. and Sviokla, J. J. (1995). Exploiting the virtual value chain. Harvard Business Review, 73(6), 75-85.
Tsang, R. (1999). Open outcry and electronic trading in futures exchanges. Bank of Canada Review, 150(3), 21-39.
Wagley, J. (2000). The battle for overseas listings: The NYSE/Nasdaq duel for listings in Europe and Asia is getting interesting…and more competitive. Investment Dealers’ Digest, 16(21), 16-21.
Wall Street and Technology (2000). eCommerce in the U.S. fixed income markets. http://www.wallstreetandtech.com/story/electronicTrading/WST20000817S0001. Accessed August 7, 2001.
Wall Street and Technology (1999). The great auction-issuing share and enticing IT talent. 17(11), 30-36.
APPENDIX A: HISTORY OF THE WORLD WIDE WEB
• 1989: Tim Berners-Lee and colleagues at CERN propose the World Wide Web.
• 1990: First commercial provider of dial-up service, The World.
• 1991: First version of HTML.
• 1993: First widely-adopted graphical browser, Mosaic.
• 1994: New York Times estimates there are 20 million Internet users. The World Wide Web Consortium (W3C), an international group of industry and academic representatives, forms to help ensure commonality.
• 1999: eGlobal Report estimates there are 130.6 million Internet users.
BIOGRAPHICAL SKETCHES
Linda V. Knight is Associate Dean of DePaul University’s School of Computer Science, Telecommunications and Information Systems. She is also Associate Director of CTI’s Institute for E-Commerce. She conducts research and teaches in the area of e-commerce business strategy, development and implementation, and lectures on the topic of e-commerce curricula. An entrepreneur and IT consultant, she has held industry positions in IT management and quality assurance management. In addition to a Ph.D. in computer science from DePaul University, Dr. Knight holds a B.A. in mathematics and an M.B.A., both from Dominican University.
Theresa A. Steinbach is an Instructor at DePaul University’s School of Computer Science, Telecommunications and Information Systems. She conducts research and teaches in the area of traditional and e-commerce systems analysis and design. As owner of an IT consulting firm, she has provided turnkey solutions for small and medium size enterprises in the financial services, municipal government, and health care industries. Ms. Steinbach is currently completing her Ph.D. in computer science at DePaul University. She holds a B.A. in mathematics, an M.B.A. in quantitative economics, and an M.S. in information systems from DePaul University.
Diane M. Graf is a Visiting Assistant Professor in Northern Illinois University’s College of Business, Operations Management and Information Systems Department, teaching both undergraduate and graduate courses in IT management and strategy. Her past administrative positions in both education and business support her teaching and research interests in information technology and group collaboration. In addition to an Ed.D. in management information systems from Northern Illinois University, Dr. Graf holds a B.S. and M.S. in business education from Northern Michigan University.
390 Burgess & Darbyshire
Added Value Benefits of Application of Internet Technologies to Subject Delivery
Stephen Burgess and Paul Darbyshire
Victoria University, Australia
EXECUTIVE SUMMARY
The application of Internet technologies to distance education is widely discussed in the literature. This case applies Porter’s ‘added value’ theory of IT use to Internet technologies employed as a supplement to traditional classroom subject delivery. Most of the reported advantages of online course and subject delivery relate to cost savings in terms of efficiency, flexibility and/or convenience for the students. The case study examines a range of subjects taught in the School of Information Systems at Victoria University, Australia. Each subject uses Internet technologies for different ‘added value’ benefits. Subject coordinators comment upon the use of the Internet technologies for both academic and administrative aspects. Students are surveyed to determine the value of Internet technologies from their perspective. Student responses indicated that the applications were perceived to be at least ‘useful’, and the findings supported Porter’s theory. The challenge for the faculty is to demonstrate to faculty staff the ‘business’ benefits of adopting Internet technology for teaching. The case studies have shown that student use of Internet technologies tends to be higher where the coordinator actively encourages it.
SETTING THE STAGE
The application of Internet technologies to distance education is widely discussed in the literature; however, the overwhelming majority of educators use the Internet to supplement existing modes of delivery. Importantly, the Internet is providing a number of ‘added value’ supplemental benefits for subjects and courses delivered using this new, hybrid teaching mode. This case study examines a range of subjects taught in the School of Information Systems at Victoria University, Melbourne, Australia. The case study involves the examination of four separate subjects (two undergraduate and two postgraduate) offered by the school. Each subject uses Internet technologies (as a supplement to traditional teaching methods) in a different way, for different ‘added value’ benefits. Subject coordinators comment upon the ‘added value’ provided to them and to the School by the use of the Internet technologies in both the academic and administrative aspects of subject delivery. Students of the subjects are surveyed to determine the value of the application of the Internet technologies from their viewpoint.
Copyright © 2002, Idea Group Publishing.
Information Technology: Efficiency and Added Value
There are a number of reasons for using IT in organisations today (O’Brien, 1999):
• For the support of business operations. This is usually to make the business operation more efficient (by making it faster, cheaper and more accurate). Typical uses of IT in this way are to record customer purchases, track inventories, pay employees and so forth. Most uses of IT in this area are internal to the organisation.
• For the support of managerial decision making. To assist with decisions such as whether to add or delete lines of merchandise, expand the business or employ more staff. IT supports this by allowing more sophisticated cost-benefit analyses, providing decision support tools and so forth.
• For the support of strategic advantage. This final reason for using IT is not as well known as the other two (especially in small businesses). It refers to the use of Porter’s three generic strategies (low-cost producer, differentiation and niche market provider) as a means of using information technology to improve competitiveness by adding value to products and services. By their very nature, such systems need to refer to forces external to the organisation (customers and sometimes competitors).
It has been recognised for a number of decades that the use of computers can provide cost savings and improvements in efficiency in many organisations. Michael Porter (refer to publications such as Porter (1980) and Porter and Millar (1985)) has generally been credited with recognising that the capabilities of information technology extend further, providing organisations with the opportunity to add value to their goods. Value is measured by the amount that buyers are willing to pay for a product or service.
Porter and Millar (1985) identify three ways that organisations can add value to their commodities or services (known as generic strategies for improving competitiveness):
• Be the lowest cost producer (an organisation produces a product or service of similar quality to competitors, but at a lower cost). This strategy allows a business to charge a lower price than competitors and make a larger profit by increasing market share, or charge the same price and make a higher profit per unit sale (Kling and Smith, 1995).
• Produce a unique or differentiated good (providing value in a product or service that a competitor cannot provide or match, at least for a period of time). It is hoped that customers will consider the goods unique, and that they will be prepared to pay a premium price for this added value (Kling and Smith, 1995). If an organisation is the first to introduce a particular feature, it may gain a competitive advantage over its rivals for a period of time. When another organisation matches that feature, the feature may no longer confer a competitive advantage, but it will still add value, in the consumers’ eyes, to the organisation’s own products. Some ways in which information technology can be used to differentiate between products and/or services are (Sandy and Burgess, 1999):
  • Quality: product or service traits (such as durability) that provide a degree of excellence when compared with the products or services of competitors.
  • Product support: the level of support provided for the product or service. This can include information on how to use the product, product replacement/return strategies, and so forth.
  • Time: buyers will pay more for a product that is provided/delivered quickly, or will choose a product of similar price and quality that is available now over a competitor’s product that is not currently available.
• Provide a good that meets the requirements of a specialised market.
With this strategy, an organisation identifies a particular niche market for its product. The advantage of targeting such
a market is that there may be less competition than the organisation is currently experiencing in a more general market. The specific characteristics used to target the niche market could be regarded as another type of differentiation. In summary, a competitive advantage occurs when a business achieves consistently lower costs than rivals or differentiates a product or service from competitors (Goett, 1999). The three generic strategies are an integral component of other tools that Porter and Millar (1985) describe to help an organisation gain a sustainable competitive advantage. The first of these, the value chain, consists of “the linked set of functions or processes that enable businesses to deliver superior value to customers and therefore achieve superior performance” (MacStravic, 1999, p. 15). The second tool, the five competitive forces model, recognises that competitive advantage is often achieved only by defending the firm against five competitive forces (the bargaining power of suppliers, the bargaining power of customers, jockeying amongst rivals, the threat of new entrants to the industry and the threat of substitute products or services) or by influencing those forces in the organisation’s favour (Earl, 1989). Are Porter’s concepts, introduced some two decades ago, still useful today? MacStravic (1999) describes an application of the value chain to health care. Kling and Smith (1995) use Porter’s five competitive forces model to identify strategic groups in the U.S. airline industry. In relation to Internet applications, Allgood (2001) uses Porter’s three generic strategies to show how a stock broker offering Internet-based share trading differentiates its services from a traditional stock broker, and argues that stock brokers will need to use information technology in innovative ways to differentiate their services and maintain market share. There are a number of critics of Porter’s theories. 
The major concern is that these days it is impossible to achieve a sustainable competitive advantage of the kind Porter described in the 1980s. Businesses are more likely to use the theories to seize opportunities for temporary advantage. Whatever the length of the advantage, the basic theories can still be applied (Goett, 1999). The authors contend that because businesses can still achieve competitive advantage through producing goods at a lower cost, and because customers are still willing to pay a premium price for differentiated goods that are better quality, delivered faster, provide extra support and so forth, Porter’s three generic strategies are as valid today as they were two decades ago.
Aspects of Course and Subject Delivery
There are two overall aspects to course and subject delivery: the educational and the administrative components (Darbyshire & Wenn, 2000). Delivery of the educational component of a subject to students is the primary responsibility of the subject coordinator. This task is the most visible from a student’s perspective. The administration tasks associated with a subject form a major component of subject coordination, but these responsibilities are not immediately obvious or visible to the students. However, if the administration tasks are performed poorly or inefficiently, the effects become immediately apparent to subject tutors and students alike, and can be the source of much discontent. It is essential that all aspects of subject administration be carried out as efficiently as possible, so as not to distract the student from their goal, which is to learn. In both instances, IT solutions can be employed to either fully or partially process some of these tasks. Given the complex and often fluid nature of the education process, it is rare that a fully integrated solution can be found to adequately service both aspects of subject delivery. Most solutions are partial, in that key components are targeted by IT solutions to assist the subject coordinator in the process.
Administrative Tasks
There are a number of administrative tasks associated with subject coordination to which IT solutions can be applied. These include (Byrnes & Lo, 1996; Darbyshire & Wenn, 2000):
• Student enrolment
• Assignment distribution, collection and grading
• Grades distribution and reporting
• Informing all students of important notices
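As a purely hypothetical sketch of how the first of these tasks might be partially automated, the local enrolment chores described in this case (compiling per-subject mail lists and deriving user account names) could be scripted against a CSV export from a central enrolment system. The file layout, field names and username scheme below are invented for illustration and are not taken from the case:

```python
# Hypothetical sketch: automating local enrolment tasks (mail-list
# compilation and account-name generation) from an assumed CSV export
# of a central enrolment system. All field names are invented.
import csv


def build_subject_lists(enrolment_csv_path):
    """Group student email addresses by subject code, preserving file order."""
    mail_lists = {}
    with open(enrolment_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            mail_lists.setdefault(row["subject_code"], []).append(row["email"])
    return mail_lists


def make_username(student_id):
    """Derive a login name from a student ID (an assumed 's1234567' scheme)."""
    return "s" + student_id


if __name__ == "__main__":
    # Assumes an 'enrolments.csv' file with subject_code and email columns.
    for subject, addresses in sorted(build_subject_lists("enrolments.csv").items()):
        print(subject, len(addresses), "students")
```

Such a script illustrates the "partially automated and user driven" approach the authors describe: the central system remains authoritative, and the subject coordinator only post-processes its exports.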
Most universities have a student enrolment system administered at the institute level. However, at the subject coordinator level there are often local tasks associated with enrolment, such as user account creation and compilation of mail lists. Some of these tasks can be automated, or in fact partially automated and user driven (Darbyshire & Wenn, 2000). Many academics implement their own solutions to many of these small tasks. As the written assignment remains the basic unit of assessment for the vast majority of educators, there have been many initiatives to computerize aspects of this task. Notable initiatives include Submit at Wollongong University, New South Wales, Australia (Hassan, 1991), NetFace at Monash University, Victoria, Australia (Thompson, 1988), ClassNet at Iowa State University, USA (Boysen & Van Gorp, 1997) and TRIX at Athabasca University, Canada (Byrnes & Lo, 1996). There are now many techniques to distribute student grades and other reports via the Internet. These range from email to password-protected Web-based database lookup. Hames (2000) reports on the use of the Internet in distributing reports and grades to middle school students in Lawrenceville, Georgia. Notice boards and sophisticated managed discussion systems can be found in many systems (WBT Systems, 1997). For example, products such as TopClass, Learning Space, Virtual-U, WebCT, Web Course in a Box, CourseInfo and First Class all contain integrated messaging facilities (Landon, 1998).
Educational Tasks
Many of the tasks viewed as ‘educational’ tasks can also employ IT solutions in order to gain perceived benefits. Some of these are:
• Online class discussions
• Learning
• Course outline distribution
• Seminar notes distribution
• Answering student queries
Just how many of these are actually implemented will depend on a number of factors, such as the amount of face-to-face contact between lecturers and students. 
However, using the Internet for many of these can address the traditional problems of students misplacing handouts and staff running out of available copies. Discussion management systems are being integrated into many Web-based solutions. These usually take the form of threaded discussions, which are easily rendered as a series of Web pages. Other tools can include chat rooms or listserv facilities. Answering student queries can take place in two forums: either as part of a class discussion via a managed discussion list, or privately. Private discussions online are usually best handled via an email facility, or in some instances store-and-forward messaging systems may replace email. Implementing IT solutions to aid the actual learning process is difficult. These can range from intelligent tutoring systems (Ritter & Koedinger, 1995; Cheikes, 1995) to facilitated online learning (Bedore et al., 1998). However, the major use of IT solutions in the learning process is usually a simple and straightforward use of the Web to present hypertext-based structured material as a supplement to traditional learning.
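To illustrate why threaded discussions are "easily rendered as a series of Web pages": a thread is simply a tree of messages, each reply referencing its parent, and rendering it is a short recursive walk. The sketch below (an assumption for illustration, with invented field names; it emits an indented text outline, where a real system would emit nested HTML) is not taken from any system named in the case:

```python
# Illustrative sketch of a threaded discussion as a message tree.
# Each message dict carries an invented set of fields: id, parent,
# author and subject. Replies are grouped under their parent and
# rendered depth-first with indentation.
from collections import defaultdict


def render_thread(messages):
    """Render messages as an indented outline, replies nested under parents."""
    by_parent = defaultdict(list)
    for m in messages:
        by_parent[m["parent"]].append(m)
    lines = []

    def walk(parent_id, depth):
        for m in by_parent[parent_id]:
            lines.append("  " * depth + m["author"] + ": " + m["subject"])
            walk(m["id"], depth + 1)

    walk(None, 0)  # top-level messages have no parent
    return "\n".join(lines)


if __name__ == "__main__":
    msgs = [
        {"id": 1, "parent": None, "author": "lecturer", "subject": "Week 3 assignment"},
        {"id": 2, "parent": 1, "author": "student_a", "subject": "Re: Week 3 assignment"},
        {"id": 3, "parent": 2, "author": "lecturer", "subject": "Re: Re: Week 3 assignment"},
    ]
    print(render_thread(msgs))
```

The same walk, with each message emitted as a linked page or a nested HTML list, is all a basic Web-based discussion board requires.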
Using Internet Technologies to Improve Efficiency and Add Value
With the recent explosion in Internet usage, educators have been turning to the Internet in attempts to gain benefits by the introduction of IT into the educational process. The benefits sought
from such activity depend on the driving motivation of the IT solution being implemented. This is further complicated by external influences on the various stakeholders. There are three main stakeholders in the education process (we consider only education at the university level in this case): the student, the academic and the university itself. Over the past decade or so, with economic rationalization, there has been an increasing tendency to view an educational institution such as a university as a business. This seems abhorrent to some, and educationalists resist it, arguing that education cannot be treated as a business lest we lose too many of the intangible results. Yet external influences seem to be dictating otherwise. This has a definite influence on the roles of the stakeholders, and thus on the value of the benefits gained from introducing IT into this process. While many may not perceive a university as a business, it is nonetheless possible to match the current uses of the Internet in tertiary education with traditional theory about why firms use IT. Internet technologies used for the learning process itself target the student as the main stakeholder. While the motivation may be the enhancement of the learning process to achieve a higher quality outcome, we can loosely map this to the ‘support of managerial decision making’ concept identified earlier. Such technologies allow educators to obtain a far more sophisticated analysis of an individual student’s learning progress, and thus provide them with decision support on courses of action to take to influence this process. Technology solutions that target the academic as the stakeholder, such as Central Point (Darbyshire & Wenn, 2000), implement improvements or efficiencies that can be mapped to the ‘support of business operations’ previously identified. 
Improvements or efficiencies gained from such implementations are usually in the form of automated record keeping and faster processing time, ultimately resulting in lower costs in terms of academic time, and added value to the students. By default, the university also becomes a stakeholder in the implementation of either of the above types of technology enhancement. Benefits gained by students and staff from such uses of technology also translate ultimately into lower costs for the institution, or the provision of more and/or better quality information. The benefits of such systems can be mapped onto the ‘support of strategic advantage’ concept (as Porter’s low cost and differentiation strategies), previously identified as a reason for using technology in business. Most of the reported advantages of this type, from online course and subject delivery, relate to cost savings in terms of efficiency, flexibility and/or convenience for the students. These represent the traditional added value benefits of lower cost and faster access to course and subject materials. While there may be some dollar cost savings to the university, these are usually only slight, and can be offset against other resources used in developing or purchasing the technology solutions. Most of the added value lies in the flexibility such solutions offer to both staff and students. For postgraduate students, or for part-time students who have daytime jobs, online courses or courses supplemented with online material offer the flexibility to study or obtain material at a time convenient to them (Bedore, 1998). For full-time students still studying in a traditional mode, the use of Internet technologies to supplement teaching also provides some convenience and flexibility in obtaining course materials, missed notes, or in contacting the course instructor and other students. 
There is also added value in terms of flexibility for academic staff in distributing class material, sending notices and contacting students. The successful implementation of technologies to provide IT support for both the educational and administrative aspects of subject delivery will clearly benefit the university. If these institutions are to regard themselves as businesses, then the successful use of IT in subject delivery could give a university a strategic advantage over other universities, which it would regard as its business competitors. In terms of comparison with Porter’s three generic strategies, if universities can lower the costs of their subject delivery, then they gain two advantages: they can offer their courses at a lower
Added Value Benefits of Application of Internet Technologies to Subject Delivery 395
price to students, or they can use the benefits gained from increased margins in other areas. In terms of differentiation, by being able to offer value added subject delivery as an ‘improved product’ over their competitors, the client base can be expanded. The differentiation can come in various forms, such as flexible delivery times, increased access to resources and so forth.
BACKGROUND: VICTORIA UNIVERSITY
Victoria University is a relatively new university, formed as recently as 1992 from the merger of two former Institutes of Technology. The recent addition of the Western Melbourne Institute of Technical and Further Education, which was previously involved primarily with industrial training, has made Victoria University a large dual-sector university catering for a wide range of tertiary students from the central and western suburbs of Melbourne. As a new and rapidly changing technological institution, Victoria University has been very conscious of the need to continually re-evaluate its curriculum and to look for new ways of best providing for the needs of its students and the business community it serves in a cost-effective manner. The Faculty of Business and Law, of which the School of Information Systems is a part, has pioneered a project through which it hopes to encourage all faculty members to use Internet technologies to support the delivery of courses (refer to ‘Current Challenges Facing the Organisation’ later). The School of Information Systems has a full range of undergraduate and postgraduate courses relating to business computing. The school offers a range of subjects: some are quite technical in nature, while others are much more generally business-related. On average, students tend to mix electives across both ends of this spectrum.
CASE DESCRIPTION
At Victoria University, there are no ‘fully online’ courses at the moment. There is developmental work currently proceeding on such courses, but adoption is slow, because until the adoption of Central Point (refer below) there was no centralized driving force behind the adoption of the Internet as a teaching tool in the faculty. There are, however, many good initiatives to supplement traditional on-ground courses with material available on the Internet. This material takes the form of lecture notes, useful links, supplementary material and examples, assignment distribution, managed discussions, assignment submission, notice boards and grades distribution. Initially, such activity was centered on a few people, but now there are faculty-wide initiatives to get as many academic staff as possible online with supplementary material. A range of subjects within the School of Information Systems was selected to gauge the effectiveness of the initiatives in relation to business practice. All of the subjects chosen in the case study use the Central Point system (Darbyshire, 1999; Darbyshire & Wenn, 2000) as the vehicle for providing the initiatives. The Central Point system is discussed briefly in a later section. This case study incorporates two undergraduate subjects (Information Technology Management and Object Oriented Systems) and two postgraduate subjects (Management Information Systems and Building Small Business Systems). Each subject runs for one ‘semester’ of 13 weeks’ duration (39 contact hours in total). Each subject coordinator has used Internet technologies in a different way to supplement their traditional teaching methods. Each expected a range of benefits to themselves and to students. For each subject, coordinators outline why they have implemented Internet technologies in such a way, and what types of benefits they perceive to have occurred from the implementation.
When presented with the major reasons why businesses adopt IT, all agreed that their motivation was the equivalent of supporting their day-to-day business operations. Finally, students from each subject were surveyed to determine whether the ‘added value’ benefits from their viewpoint matched the expectations of their subject coordinators. At the conclusion of each of the subjects chosen for this case study, students were asked to complete a standard survey form
396 Burgess & Darbyshire
consisting of the following nine questions:
1. Have you accessed details about your assignments from the subject Web site?
2. Have you accessed your subject outline from the subject Web site?
3. Have you followed an external link via your subject Web site?
4. Have you used the Web site in class?
5. Have you accessed the Web site outside of class?
6. Have you contacted your lecturer by email during the semester?
7. Have you contacted your tutor by email during the semester?
8. Have you submitted an assignment by email during the semester?
9. Have you submitted an assignment online via the subject Web site during the semester?
For each question, the student could respond “Yes”, “No” or “N/A” if the feature in question was not applicable to the subject. If the student responded “Yes”, they were further asked to indicate approximately how many times they had used the feature, and then to gauge its effectiveness on a five-point Likert-type scale, using the choices ‘Very Useful’, ‘Useful’, ‘OK’, ‘Not Useful’ and ‘Useless’. Provision was also made for the student to enter their subject code. These responses were converted to an ‘effectiveness’ ratio by arbitrarily assigning scores of 4, 3, 2, 1 and 0 to the range of responses from ‘Very Useful’ to ‘Useless’ and comparing the total against the maximum possible score if every response had been ‘Very Useful’. For example, three responses of ‘Very Useful’, ‘Useful’ and ‘OK’ would return an effectiveness ratio of 75% (determined by [4+3+2=9] divided by [3 x 4=12]). The survey was placed online at http://busfa.vu.edu.au/dbtest/page2.html, and students were asked to complete it within a few days of the completion of the subject. Responses were hence fully anonymous, and upon completion the data from each survey was automatically emailed to the authors. The results of the surveys for the individual subjects are shown in the case study results.
In some cases, if a particular feature was not used within a subject, the students’ responses to that question would be N/A. These responses are not shown in the individual survey results. Each of the four subjects involved in the case study, and the corresponding survey results, are now discussed.
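The scoring scheme described above can be sketched in a few lines of code. This is an illustrative reconstruction only (the original survey simply emailed raw responses to the authors); the function name is ours:

```python
# Sketch of the survey 'effectiveness ratio' calculation described above.
# Each response on the five-point scale is scored from 4 ('Very Useful')
# down to 0 ('Useless'); the ratio compares the total score against the
# maximum possible if every response had been 'Very Useful'.

SCORES = {
    "Very Useful": 4,
    "Useful": 3,
    "OK": 2,
    "Not Useful": 1,
    "Useless": 0,
}

def effectiveness_ratio(responses):
    """Return the effectiveness ratio (0.0-1.0) for a list of responses."""
    if not responses:
        return 0.0
    total = sum(SCORES[r] for r in responses)
    maximum = 4 * len(responses)  # as if every response were 'Very Useful'
    return total / maximum

# The worked example from the text: 'Very Useful', 'Useful' and 'OK'
# score 4 + 3 + 2 = 9 out of a possible 3 x 4 = 12, i.e. 75%.
ratio = effectiveness_ratio(["Very Useful", "Useful", "OK"])
print(f"{ratio:.0%}")  # 75%
```

The same calculation underlies every “Effectiveness Ratio” column in the tables that follow.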
Information Technology Management

Subject Description
This subject typically runs over two to four campuses at any one time, the number of enrolments ranging between 150 and 350 per semester. It is a first-year subject for students studying the Bachelor of Business (Information Systems). This subject provides an introduction to information management and the management of information technology. This is achieved by introducing concepts relating to:
• The changing nature of information and information technology (IT).
• Information management in the Internet age.
• Managing the information technology resource and projects within the organisation (emphasising the smaller to medium-sized business).
The main subject topics are:
• Information and Management. Organisational information and its interaction with technology. Technology developments.
• Information Management in Internet-worked Enterprises. Formulating information technology strategy and its linkage with business strategy.
• Application of models to link information technology and corporate strategy. Establishing information technology projects.
• Managing technological change and applications of information technology (small business context).
Subject Coordinator View
Figure 1 represents the homepage for the subject, Information Technology Management, for Semester 2, 2000.

Figure 1: Information Technology Management Homepage

In such a large subject, the major benefits from such a Web site are economies of scale. Lectures are conducted on multiple campuses at different times in large lecture theatres, so there is little flexibility available in subject delivery. The main features of the Web site are:
• Provision of assignment details, past examination papers and a schedule of class times. One printed copy of each of these is handed out to students at the start of the semester. The idea of putting the information on the Internet is to reduce the need to reprint a copy when a student misplaces it, thus saving time and money for the institution.
• Links to extra information about the subject on the Internet. It was felt that some students might like to access extra materials to support their study as a means of improving their understanding of the subject material.
• Lecturer contact details. Many lecturers travel between campuses and are not always directly accessible by telephone. This feature is placed on the Web site to encourage students to contact lecturers by email, which is location independent.
Educational benefits expected were mainly in the area of flexibility. Administrative benefits expected were flexibility and cost savings (in terms of time), particularly in the area of assignment submission and collection, and the distribution of material.

Student View
Student responses are tabulated in Table 1. In all, there were 55 responses to the survey from students at three campuses.

Summary
The features used by students generally received an effectiveness rating of Useful or Very
Table 1: Student Survey Results: Information Technology Management

Internet Feature                    Use    Effectiveness Ratio
Accessed assignment details         87%    82%
Accessed subject outline            75%    84%
Used external links                 75%    74%
Accessed Web site in class          96%    84%
Accessed Web site out of class      93%    88%
Contacted lecturer by email         28%    85%
Contacted tutor by email            42%    91%
Submitted assignment by email       24%    87%
Useful. Most of the students (more than nine out of ten, on average) accessed the Web site during class (in tutorials) and outside of class, to perform information gathering (accessing assignment details and the subject outline) and to examine external links for extra information. In such a large subject, it was always more likely that the students’ first contact point would be with the tutor, with whom they would have a more personal relationship, rather than the lecturer. Interestingly, one in four respondents submitted an assignment by email during the semester. This was not instigated as a formal method of assignment submission, but was allowed by tutors if the student requested it. The use of the Web site by students ‘out of class’ was very encouraging, as was the number of students that accessed external links. There were definite benefits to students and staff from the information provision services, as expected. The contact with tutors and lecturers by email was also encouraging, as it was not extensively promoted in class. Finally, the level of submission of assignments by email was unexpectedly high, as it was not formally announced at all. The challenge in this subject was encouraging students to adopt some of the features of the software. It seemed that while many were happy to read material placed on the Internet, most were reluctant to use the submission features of the software. When questioned, many said that they simply didn’t trust the software to deliver their material to the academic coordinator. It seemed that the students still preferred to physically hand their work to the coordinator, as they weren’t sure that the coordinator actually received it when it was submitted electronically. This is a challenge for the software designers: to provide functionality that addresses these concerns, such as a facility for a student to see their submitted work online.
Object Oriented Systems

Subject Description
This subject runs over two campuses each semester, and the total number of enrolments ranges between 45 and 75 students per semester. This is a final-year subject for students studying the Bachelor of Business (Information Systems). This subject provides an introduction to Object Oriented programming and design using the Java programming language. The main concepts covered during the course of this subject are:
• The nature of Objects
• Object Oriented analysis and design
• Object Oriented Programming
The main topics covered during this subject are:
• Objects
• Object Oriented Design using UML
• Encapsulation
• Introduction to Java
• Java Objects
• Java Event Programming
• Using Java GUI objects: the AWT
• Using JDBC to interface Java to a Database
Subject Coordinator View
The homepage for this subject is shown in Figure 2. This is not an overly large subject, but it is taught across two campus locations and does contain some postgraduate and part-time students. The main point in supplementing the course delivery with Internet functionality in this subject was to gain flexibility for academic staff and students. The main features of the Web site include:
• Detailed lecture notes and supplementary lecture material. It was hoped that by including detailed notes here, students could print them out and use the lecture to concentrate on the material rather than treating it as just a note-taking exercise.
• Provision of administration material, such as assignments, course guides and notice boards. By placing such material here, timely notices can be sent to students and assignments distributed at any time throughout the week. Also, material such as course guides can be easily obtained by students at any time, and academic staff do not have to act in a pseudo-secretarial role by continually passing out such material.
• Assignment submission and collection. Using this Web site, students can submit their assignments either through a Web page or via email as an attachment. Thus, flexibility is introduced for all students, particularly part-time students who don’t want to make a special trip in just to drop off an assignment. There is also flexibility for staff in collecting the assignments, as this can be done at any time online.
No real educational benefits were expected in this case, apart from flexibility of access to material placed on the Web. For the administrative benefits, it was hoped to gain flexibility for students and staff, and there was also an expectation of some time savings on the administration side. For instance, the provision of administration material online and the assignment submission facility offer a degree of automation that requires little or no academic coordination.
Assignments would be time and date stamped, so there was no dispute as to submission times.

Student View
For this subject, there were 22 survey submissions. The results are displayed in Table 2.

Figure 2: Object Oriented Systems Homepage
Table 2: Survey Results: Object Oriented Systems

Internet Feature                    Use    Effectiveness Ratio
Accessed assignment details         100%   91%
Accessed subject outline            68%    90%
Used external links                 73%    91%
Accessed Web site in class          73%    92%
Accessed Web site out of class      86%    95%
Contacted lecturer by email         95%    90%
Contacted tutor by email            95%    93%
Submitted assignment by email       95%    96%
Submitted assignment by Web site    59%    90%
Summary
From the survey of this component of the case study, there was a very high percentage use of the Web site’s features. However, all the features of the Web site were actively promoted by the subject coordinator. The lowest feature use was assignment submission through the Web site. It seemed that most students were more comfortable submitting assignments as email attachments, even though there was no immediate confirmation of submission as there was with the Web site submission. There was also very high user satisfaction with the features, as evidenced by the effectiveness ratios (nothing under 90%). The highest feature use was in the area of assignment submission and contact with the subject lecturer. The expected time savings did not eventuate. In fact, as a result of this case study we found that quite the reverse was true. With the Web site, students had much flexibility in contacting their tutor or lecturer. This generated many emails, particularly in relation to assignment problems when submission time was close. Many of these queries would not have been generated without electronic access to the tutor. In that case, due to the delay, the student would either get help elsewhere or save some questions for a face-to-face visit. With Internet-based access, students could quickly generate an email for the tutor as problems arose. This increased the workload of the tutors significantly. The main challenge in this subject was the realization of some of the added value benefits in terms of time savings for the academic coordinator. There was certainly added value achieved from the students’ perspective, in terms of cost, time and flexibility. However, the increased access to the coordinator by the students nullified any time savings gained. Such problems could most likely be overcome with a more disciplined approach to the use of the technology.
Management Information Systems

Subject Description
This subject typically runs over one or two campuses at any one time, the number of enrolments ranging between 100 and 200 per semester. It is a first-year subject for students studying the Master of Business Administration, and an elective often chosen by other Master of Business students. This subject aims to introduce students to a broad range of topics relating to the field of information systems, to give an appreciation of the advantages computers confer in the management of information, and to provide an introduction to how information systems are built. Theoretical issues are reinforced through laboratory work that leads to the design and implementation of small systems.
Upon completion of the subject, students will have a management perspective of the task of building and maintaining information systems applicable to any organisation. The subject covers a selection of topics relating to: the effective management and use of technology, the concept of information and how it can be managed, how information technology can be used to assist in managing information, and elements of systems development. Specific topics covered are:
• Management, information and systems
• Problem solving and decision making
• Process modelling, function charts, data-flow diagrams (DFD)
• Databases and data modelling (ER diagrams)
• Types of information systems
• Data access: building database front ends
• Project management
• Innovation and the management of technological change
• Use of IT in small business
• Data communications
• Electronic commerce
• Strategic applications of IT

Subject Coordinator View
Figure 3 represents the homepage for the subject, Management Information Systems, for Summer Semester 2000-2001 (southern hemisphere). This is also a relatively large subject, so the major benefits from the Web site are similar to those for Information Technology Management. The subject has typically been conducted in three-hour seminars combining tutorial rooms and computer laboratories, so there is some flexibility available in subject delivery. The main features of the Web site are:
• Provision of assignment details, subject information, and a schedule of class and presentation times. As with Information Technology Management, one printed copy of each of these is handed out to students at the start of the semester. The result is to reduce the need to reprint a copy when a student misplaces it, thus saving time and money for the institution.

Figure 3: Management Information Systems Homepage
• Links to extra information about the subject on the Internet (out of screen in Figure 3). For students who wish to access extra materials to support their study.
• Lecturer contact details. To encourage email contact with lecturers.
• Web Discussion List. As a means of introduction to the subject, the students’ first assignment is to read two or more articles on a related computing topic and to answer a discussion question posted by the lecturer on a Web-based bulletin board. After this they are to read the contributions of other students and comment upon those. Students are broken up into groups of about four to six for the assignment. Refer to Figure 4 and Figure 5 for typical screens from the Web Discussion List. The use of this feature has the advantage that students can read the articles and submit their opinions at any time, provided that they have access to the Internet. For the lecturer it provides a more structured form of dialogue to assess than the ‘live’ version that could be conducted in class.

Figure 4: Sample Messages Posted to the Web Discussion List
Figure 5: A Sample Message and Reply Screen for the Web Discussion List
• Live demonstrations during lectures. The lectures for the topics of Internet Business Research and Communication and Electronic Commerce are conducted in computer laboratories, where students are encouraged to browse ‘live’ to various sites as they are being used as examples. This provides the benefit of students seeing actual examples of what is being discussed, with the option of revisiting the sites at a later date if they require further reinforcement. The sample lecture screen for Electronic Commerce is shown in Figure 6.
There was one main educational benefit expected, and this was related to the stimulation of discussion expected from the use of the messaging system. It was hoped that some synergy would develop from emerging discussions, and further discussion would ensue. Flexibility in access to material was also an expected advantage. Some administrative benefits were expected, and it was hoped that some time savings would result as a consequence of the students using the technology for assignment submission.

Student View
For this subject, there were 24 survey submissions. The results are displayed in Table 3.
Figure 6: Electronic Commerce ‘Lecture’ Web Page for Management Information Systems
Table 3: Survey Results: Management Information Systems

Internet Feature                    Use    Effectiveness Ratio
Accessed assignment details         88%    68%
Accessed subject outline            75%    79%
Used external links                 50%    81%
Accessed Web site in class          88%    86%
Accessed Web site out of class      75%    88%
Contacted lecturer by email         63%    90%
Submitted assignment by email       38%    92%
Summary
It is interesting that the effectiveness ratio for accessing assignment details was the lowest for all of the features over all of the subjects in this case study. This could be because the assignments were quite detailed, and only a summarised version of them was made available on the Web site. The fact that 88% of respondents accessed the Web site in class showed that some of them must have contributed to the assignment discussion list out of class. For a master’s-level subject, the use of external links was disappointing, especially as some of these were to online databases of journals that could be used to gather material for the research paper. Most of the students were quite familiar with the use of email, and many of them contacted lecturers throughout the semester. The challenge in this subject was getting the students to make use of the Internet features available. Many chose not to use the technology, and the effectiveness ratio for this subject was low. More active encouragement was needed, and in fact is planned for future semesters.
Building Small Business Systems

Subject Description
This subject runs on one campus at any one time, the number of enrolments ranging between 20 and 30 per semester. It is an elective subject for students studying the Master of Business (Information Systems) and the Master of Business Administration, as well as being an elective chosen by other Master of Business students. This subject introduces the student to a broad range of topics relating to the field of information technology and small business. Topics covered include:
• Determining small business IT needs
• Selecting applications for small business: business processes
• Selecting hardware and operating systems for small business
• Networking for small business
• Building small business applications
• Office suite programming
• Sharing data with other applications
• Use of automated input devices
• Calling other office suite applications
• Automating applications across packages

Subject Coordinator View
Figure 7 represents the homepage for the subject, Building Small Business Systems, for Summer Semester 2000-2001 (southern hemisphere). This is a relatively small subject, typically conducted in three-hour seminars combining tutorial rooms and computer laboratories, so there is some flexibility available in subject delivery. The main features of the Web site are:
• Links to small business research sources on the Internet, to assist students in the preparation of their research assignments.
• Downloads. There are a number of sample files used by students in the subject. This facility provides a convenient way for students to access these files as needed, in class or at home.
• Lecturer contact details. To encourage email contact with lecturers.
No real educational benefits were anticipated. However, flexibility was cited as an advantage.
Some time savings were anticipated, and flexibility for the subject coordinator in distributing material and collecting assignments was cited as the main administrative benefit expected.
Figure 7: Homepage for Building Small Business Systems
Student View
For this subject, there were only 12 survey submissions. The results are displayed in Table 4.

Summary
The use of Internet technologies was strongly encouraged in this subject. This is reflected in the levels of usage shown in Table 4. All students used the Web site in and out of class, and all contacted the lecturer at some stage in the semester. Three out of four students submitted assignments by email. In the other subjects, there was little difference between the effectiveness ratios for accessing the Web site in and out of class. The results for this subject show that accessing the Web site outside of class was less effective. The reasons for this are unclear and need to be explored further. The major challenge in this subject has traditionally been to get students to make use of the technology. This subject is part of the masters by coursework degree, and many of the students are full-time overseas students. There has always been a reluctance to use the technology, mostly based on a lack of familiarity. To encourage use, components of the technology were made compulsory, in the sense that part of a student’s score was allocated to their use of some Internet features. This form of active encouragement seemed to give the students confidence in using the Internet for the non-compulsory features.

Table 4: Survey Results: Building Small Business Systems

Internet Feature                    Use    Effectiveness Ratio
Accessed assignment details         83%    92%
Accessed subject outline            75%    83%
Used external links                 83%    75%
Accessed Web site in class          100%   88%
Accessed Web site out of class      100%   69%
Contacted lecturer by email         100%   100%
Submitted assignment by email       75%    100%
Case Study Summary
The ‘post’ implementation views of the subject coordinators were largely in accordance with their ‘pre’ implementation expectations: the implementations had delivered the ‘value added’ as expected. Student responses indicated that they had used the ‘compulsory’ aspects of the Internet technology applications extensively, and the optional aspects to varying degrees. All of the applications were seen as either ‘very useful’ or ‘useful’ by the students. It seems apparent that Internet technologies can be used to supplement traditional subject delivery in a number of ways, leading to a number of ‘business’ type benefits for both institution and students. The subjects in the case study indicate that the use of Internet technologies by students seems to be higher where it is openly encouraged by the coordinator and where the students are already familiar with the technology.
CURRENT CHALLENGES FACING THE ORGANISATION: THE FACULTY OF BUSINESS AND LAW AT VICTORIA UNIVERSITY AND THE CENTRAL POINT PROJECT
The previous section indicated some of the benefits available from the use of Internet technologies to assist with subject delivery. Two major problems arise when considering the adoption of these technologies by Victoria University. The first has been that developments were ad hoc: they occurred on the basis of individual interest by faculty members, not through a coordinated approach. As such, there was never a standard interface to be followed, nor templates to be used. Work was continually being reinvented. Formatting of information (such as course outlines) occurred in a number of different ways. There were faculty members who did not even know the technology was available, or who did not know how to use it if they did know it was available. The Central Point project began as a series of Web pages in 1996 to help two staff members coordinate the distribution of lecture material for subjects taught over multiple campuses. Since then, Central Point has evolved into an interface for subject Web sites (Darbyshire & Wenn, 2000). Central Point is the tool used to deliver the Internet functionality in all the subjects in this case study. The original design goal of the Central Point site was to “alleviate pressure on the subject administrator”. Central Point is now a series of Web pages and databases designed to provide basic administration functionality to academic subject sites that “hook into” it. The interface uses Cold Fusion scripts and Microsoft Access databases to provide this functionality. Central Point was not designed to impose any type of format or structure on subject sites. Quite the opposite, in fact, as academics generally tend to oppose the imposition of any such structure. Instead, academics are free to develop their own Web sites for their subjects, and then “hook into” the Central Point site.
This is done by a Central Point administrator making a subject entry in one of the Central Point databases. The subject’s Web site is then displayed as a ‘clickable’ link on the Central Point homepage (see Figure 8), and the subject coordinator is automatically set up with a subject notice board, a Web-based assignment box for assignment submission and access (Darbyshire, 1999), a facility for recording subject grades on the Web, and the ability to manage subject teams and divert assignment submissions to tutors. These facilities can be accessed by staff and students from the Central Point home page. Thus students do not have to remember multiple home pages for different subjects, just one central link. Hence the name: Central Point. Over time, the Faculty of Business and Law came to see the development of Central Point as an opportunity to coordinate the efforts of faculty staff in their use of Internet technologies to assist with subject delivery. In Semester 2 of 2000, the faculty adopted Central Point as the standard Web tool to tie together
Added Value Benefits of Application of Internet Technologies to Subject Delivery 407
all the subject Web sites being developed, and to provide the basic functionality mentioned above. The homepage of the Central Point system is shown in Figure 8; it is still being developed by the faculty to increase the functionality provided to subject Web sites and to personalize the site for students logging into the system. The homepage is at http://www.business.vu.edu.au/cpoint/index.html. Research is also underway to increase functionality, and hence achieve the accelerated benefits discussed by Parker and Benson (1988), through the application of agent technology (Darbyshire & Lowry, 2000).
The challenge for the Faculty of Business and Law is to demonstrate the 'business' benefits to faculty staff of adopting Central Point as their gateway to using the Internet to support their own subject delivery. The case studies have shown that if the subject coordinator actively encourages the use of the technology, then the students will use it. In addition, Central Point provides a single entry point for students to access the Web sites of subjects within the faculty. Standard information, such as subject outlines, is provided in standard formats. Training courses in MS FrontPage are being run for faculty members who are not used to setting up Web pages.
Once the majority of academic staff in the Faculty of Business have adopted Internet technology, further challenges will present themselves. At this time we should begin to see requests for further development, driven by the desire for accelerated benefits that follows the initial adoption of a technology (Parker & Benson, 1988). If further initiatives are not explored at this point, the Faculty could miss an opportunity to capitalize on any momentum towards Internet technology acceptance. The adoption of further complex technology to assist in subject delivery will need to be handled carefully.
Support mechanisms will need to be put in place to assist both faculty and students in the event of difficulty. This is an area often overlooked when large-scale adoption of technology is considered. Currently, both governmental and Faculty policy dictate the duration and structure of courses to some degree. As the adoption of Internet technology progresses, many benefits will be perceived in moves towards complete online delivery. Courses taught in this mode do not necessarily fare well when the same rigid structures are imposed on them as on on-ground courses. At this point, major challenges will be faced in the flexibility of the faculty in accommodating any such moves.
Figure 8: Central Point Homepage
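The "hook-in" mechanism described above can be sketched in a few lines. The production system used Cold Fusion scripts over a Microsoft Access database; the Python/SQLite sketch below is only an illustration of the idea, and all table, column, subject and coordinator names are hypothetical:

```python
import sqlite3

# In-memory stand-in for the Central Point subject registry
# (schema and names are illustrative, not the real system's).
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE subjects (
    code TEXT PRIMARY KEY,
    title TEXT NOT NULL,
    site_url TEXT NOT NULL,
    coordinator TEXT NOT NULL)""")

def register_subject(code, title, site_url, coordinator):
    """An administrator 'hooks' an independently built subject site
    into Central Point by adding one row to the registry."""
    db.execute("INSERT INTO subjects VALUES (?, ?, ?, ?)",
               (code, title, site_url, coordinator))

def homepage_links():
    """Render the 'clickable' subject links on the homepage;
    registration alone is enough to make a site appear."""
    rows = db.execute(
        "SELECT code, title, site_url FROM subjects ORDER BY code")
    return "\n".join(f'<a href="{url}">{code}: {title}</a>'
                     for code, title, url in rows)

register_subject("BCO1102", "Information Systems for Business",
                 "http://www.business.vu.edu.au/bco1102/", "P. Darbyshire")
print(homepage_links())
```

The key design point carried over from the case is that the subject site itself is untouched: only a registry row is added, so academics keep full control over their own pages.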
408 Burgess & Darbyshire
ENDNOTES
1 http://www.educationau.edu.au/archives/olcs/case25.htm
2 http://classnet.cc.iastate.edu/
3 http://oc1.itim-cj.ro/~jalobean/CMC/resource9.html
REFERENCES
Alexander, S. (1995). 'Teaching and Learning on the World Wide Web', Proceedings of AusWeb'95, http://www.scu.edu.au/sponsored/ausweb/ausweb95/papers/education2/alexander/ [22/5/1999].
Akerlind, A. and Trevitt, C. (1995). 'Enhancing Learning Through Technology: When Students Resist the Change', Proceedings of ASCILITE'95, http://www.ascilite.org.au/conferences/melbourne95/smtu/papers/akerlind.pdf
Allgood, B. (2001). 'Internet-Based Share Dealings in the New Global Marketplace', Journal of Global Information Management, 9(1), 11.
Bedore, G. L., Bedore, M. R. and Bedore, G. L. Jr. (1998). Online Education: The Future Is Now, Socrates Distance Learning Technologies Group, Academic Research and Technologies.
Boysen, P. and Van Gorp, M. J. (1997). 'ClassNet: Automated Support of Web Classes', Proceedings of the 25th ACM SIGUCCS Conference for University and College Computing Services, Monterey, California, USA.
Byrnes, R. and Lo, B. (1996). 'A Computer-Aided Assignment Management System: Improving the Teaching-Learning Feedback Cycle', http://www.opennet.net.au/cmluga/byrnesw2.htm [12/2/1999].
Cheikes, B. A. (1995). 'GIA: An Agent-Based Architecture for Intelligent Tutoring Systems', Proceedings of the CIKM'95 Workshop on Intelligent Information Agents.
Darbyshire, P. (1999). 'Distributed Web Based Assignment Submission and Access', Proceedings of the Information Resources Management Association Conference (IRMA '99), Hershey, USA.
Darbyshire, P. and Lowry, G. (2000). 'An Overview of Agent Technology and its Application to Subject Management', Proceedings of the Information Resources Management Association Conference (IRMA 2000), Alaska, USA.
Darbyshire, P. and Wenn, A. (2000). 'A Matter of Necessity: Implementing Web-based Subject Administration', in Managing Web Enabled Technologies in Organizations, Idea Group Publishing, Hershey.
Earl, M. J. (1989). Management Strategies for Information Technology, Prentice Hall, Cambridge.
Goett, P. (1999). 'Michael E. Porter (b. 1947): A Man With a Competitive Advantage', The Journal of Business Strategy, 20(5), 40-41.
Hames, R. (2000). Integrating the Internet into the Business Education Program, Alton C. Crews Middle School, http://www.crews.org/media_tech/compsci/pospaper.htm [26/1/2001].
Hassan, H. (1991). 'The Paperless Classroom', Proceedings of ASCILITE '91, University of Tasmania, Launceston, Australia.
Kling, J. A. and Smith, K. A. (1995). 'Identifying Strategic Groups in the U.S. Airline Industry: An Application of the Porter Model', Transportation Journal, 35(2), 26.
Landon, B. (1998). On-line Educational Delivery Applications: A Web Tool for Comparative Analysis, Centre for Curriculum, Transfer and Technology, Canada, http://www.ctt.bc.ca/landonline/ [10/10/1998].
MacStravic, S. (1999). 'The Value Marketing Chain in Health Care', Marketing Health Services, 19(1), 14-19.
McNaught, C. (1995). 'Overcoming Fear of the Unknown! Staff Development Issues for Promoting the Use of Computers in Learning in Tertiary Education', Proceedings of ASCILITE'95, http://www.ascilite.org.au/conferences/melbourne95/smtu/papers/mcnaught.pdf
O'Brien, J. A. (1999). Management Information Systems: Managing Information Technology in the Internetworked Enterprise, 4th ed., Irwin/McGraw-Hill.
Parker, M. M. and Benson, R. J. (1988). Information Economics: Linking Business Performance to Information Technology, Prentice-Hall.
Porter, M. E. (1980). Competitive Strategy, New York: Free Press.
Porter, M. E. and Millar, V. E. (1985). 'How Information Gives You Competitive Advantage', Harvard Business Review, 63(4), 149-160.
Ritter, S. and Koedinger, K. R. (1995). 'Towards Lightweight Tutoring Agents', Proceedings of AI-ED 95, the World Conference on Artificial Intelligence in Education, Washington, D.C.
Sandy, G. and Burgess, S. (1999). 'Adding Value to Consumer Goods via Marketing Channels through the Use of the Internet', CollECTeR'99: 3rd Annual CollECTeR Conference on Electronic Commerce, Wellington, New Zealand, November.
Thompson, D. (1988). WebFace Overview and History, Monash University, http://mugca.cc.monash.edu.au/~webface/history.html [2/1/1999].
WBT Systems (1997). Guided Learning: Using the TopClass Server as an Effective Web-Based Training System, WBT Systems White Paper, http://www.wbtsystems.com
BIOGRAPHICAL SKETCHES
Stephen Burgess is a Senior Lecturer in the Department of Information Systems at Victoria University, Melbourne, Australia. Stephen lectures widely to undergraduate IS and Masters degree students, teaching subjects such as Information Technology Management, Building Small Business Systems and Information Management. His research interests include the use of the Internet in small business, small business use of technology, and the application of Internet technologies. Stephen's current research project is the provision of an automated tool to help small businesses decide on a Web-based implementation strategy.
Paul Darbyshire is a Lecturer in the Department of Information Systems at Victoria University, Melbourne, Australia. He lectures in object-oriented systems and C and Java programming, and has research interests in the application of Java and Web technologies to the support of teaching. His current research is into the use of the Web for university subject management and the use of AI techniques for the development of second-generation Web-based teaching support software.
410 Manning & Sarker
Enterprise Information Portal Implementation: Knowledge Sharing Efforts of a Pharmaceutical Company1 Alison Manning and Suprateek Sarker Washington State University, USA
EXECUTIVE SUMMARY This case study provides a detailed account of the formation of a knowledge management (KM) division within a multinational pharmaceutical company, and the subsequent undertaking of the first major KM project, which involved the implementation of a portal software technology. Specific issues discussed include rationale for replacing the existing intranet with portal technology, selection of the portal, justification for this selection, challenges in organizing and linking documents, as well as the social and behavioral factors influencing the implementation. A number of dilemmas and tradeoffs are presented with respect to each of the issues.
Copyright © 2002, Idea Group Publishing.
COMPANY BACKGROUND
PharmaCo is a large multinational drug research company conducting a broad spectrum of pharmaceutical research. PharmaCo has successfully marketed drugs in several disparate therapeutic areas such as arthritis, cancer, diabetes and schizophrenia. While the patent on PharmaCo's highest revenue-generating drug is soon to expire, the company still has other, more modest revenue-generating drugs on the market. With five different promising drugs at various stages of clinical trials, PharmaCo's drug pipeline outlook is viewed as reasonably favorable. Nevertheless, despite the promising pipeline, PharmaCo anticipates that investor favor will decline when the patent on its strongest drug product expires.
Founded in 1962, PharmaCo today is a well-known corporation, the result of a merger of two medium-sized pharmaceutical companies in 1989 and the subsequent acquisition of a small biotech company in 1996. As a result of the merger and the acquisition, PharmaCo is a complex organization with various offices throughout the United States as well as international sites in Canada, Latin America, Europe and Japan. Research and development (R&D) efforts take place in all locations, whereas business functions that support the overall organization, such as human resources, finance, accounting, information technology, marketing, and legal, are centralized in Tucson. While most of the business in the company is conducted in English, the scientists performing research in international locations tend to use both English and their native languages (as necessary) in their official communications.
PharmaCo is in a very competitive industry with numerous players vying for increased market share. Generally, market share is calculated from a drug or therapeutic-area perspective, rather than from a vertical industry perspective. Since the industry is highly fragmented, even the largest players, those considered Tier 1 companies, hold only 3-5% overall market share. While biotech companies are seen as part of the competition for PharmaCo, the medium to large pharmaceutical companies are believed to be its primary source of competition. The largest five players are viewed as Tier 1 companies, while mid-sized ones are seen as Tier 2 companies. The Tier 2 companies are often considered more innovative, and some of them attempt to use technology strategically in order to climb into a Tier 1 spot. PharmaCo is viewed by its competitors and industry analysts as a Tier 2 company with an approximate market share of 2%. However, PharmaCo has a 70% market share in the schizophrenia therapeutic area, due to its largest revenue-generating drug. PharmaCo's second-largest revenue-generating drug is minor in comparison, with only a 30% market share in the arthritis therapeutic area. This lower market share is attributed to the more severe side effects of PharmaCo's arthritis drug compared to those associated with the drug manufactured by its direct competitor.
R&D is the core business focus of PharmaCo, as is characteristic of most companies in the pharmaceutical industry. A strong drug pipeline is essential to the success of drug research companies; however, the investment in both time and money to develop this pipeline is substantial. The time from the discovery of a molecule or compound to the launch of a marketable drug averages 9 to 13 years.
In fact, the typical cost to bring a drug from its R&D infancy to FDA approval is approximately $500 million. The cost to bring a drug to market is so high that revenues from only 3 out of 10 drugs meet or exceed the average cost of research and development (Appendix 1). The drug development cycle is relatively consistent among pharmaceutical companies and includes the stages of: 1) discovery (research and development), 2) clinical trial phases I/II/III/IV, 3) FDA approval, and 4) manufacturing and marketing (Appendix 2). To reduce the time taken to bring a particular drug to market, pharmaceutical companies have to reduce the time taken in the discovery and clinical trial phases. Discovery, in particular, takes an enormous amount of time and effort before a compound can reach the clinical trials stage, and many individuals from various departments within a pharmaceutical company are involved in this phase. Since the drug development process is so time-intensive and expensive, pharmaceutical companies strive to reduce this time in order to obtain a competitive advantage.
SETTING THE STAGE
Formation of the KM Group
In an effort to reduce the drug development cycle, PharmaCo recently created a Knowledge Management (KM) group within the Information Technology (IT) department. Dan Kramer, the Chief Technology Officer (CTO) of PharmaCo, was the driving force behind the creation of the KM group. Dan's rationale for the formation of the KM group was that researchers, as well as employees in other business units, would benefit (in terms of research and decision-making) from increased access to both internal and external information. Dan believed that this improved access to information would lead to the reduction in time to bring a drug to market by improving communication among different divisions and business units, as well as by improving the information flow throughout the drug development cycle. Dan stated, "Improved access to information will help to reduce the duplication of research and work efforts which are so common within an organization of this size and help to reduce the drug development cycle time."
Dan Kramer faced a challenging situation because the importance of technology to the organization was a relatively new concept for PharmaCo. The Chief Technology Officer (CTO) was a newly created position in the organization, reporting directly to the Chief Executive Officer (CEO). In addition to the CTO, the Chief Operating Officer (COO) and Chief Financial Officer (CFO) also report to the CEO. The Vice Presidents of Research & Development, Global Management, U.S. Operations, Marketing & Sales, and Manufacturing report to the COO. Overall, Dan Kramer felt that the CEO was his greatest ally in emphasizing the importance of technology and information issues throughout the corporation. The CEO believed that technology could be a differentiating factor in the continued success of the company and would aid in making PharmaCo a Tier 1 contender. Unfortunately, the COO was not always as supportive. The COO believed that technology was useful and essential, but that it cut into spending critical to the core of the business, research and development: money spent on technology was money that would not be spent on research and laboratory equipment. The CFO was less opposed to technology-related spending, since the departments reporting to him, such as accounting, finance and legal, also drew money away from the core of the business (Appendix 3); as a result, the CFO was more understanding of the difficulty of presenting a case to the COO. The CEO would often act as a mediator in determining corporate priorities, since he had the broadest perspective. The CTO usually had to present a value proposition as well as the costs and benefits of implementing certain technologies. Certain technologies were easier to justify and validate, such as backup and recovery systems, whereas for other technologies it was more difficult to define the benefits using concrete numbers.
Technologies that had been in existence for a significant amount of time usually had histories to justify expenditure, whereas the return on investment of emerging and cutting-edge technologies was more difficult to ascertain. In situations where the value was difficult to prove, Dan was most successful when he presented a business case scenario.
To form the group that would be put in charge of PharmaCo's KM initiative, Dan selected employees from within the IT department as well as from the library information center. Dan felt it was important to put together a group that was technologically capable and also trained in issues relating to information organization and access. To oversee the KM effort, David Allen, a manager within the IT department, was chosen to be the Director of Knowledge Management (DKM). David realized that the distinction between data, information and knowledge was often blurred, and therefore formulated his own view of the role of the KM initiative. He explained to his group that he hoped to provide data and information in such a manner as to assist in the personal conversion of data and information (processed, meaningful data) into knowledge, thereby enabling PharmaCo employees to be innovative and to make better decisions: "It is not just how we can provide information to the user, but how it can be presented to transform the employee into a knowledge worker."
Within the KM group, a technology implementation team and a content management team were formed to address these two aspects of the initiative. A former network and database administrator, Charles Heenan, was chosen to lead the technology implementation team as the Director of Knowledge Management Technology, reporting to David (the DKM). In addition to Charles' firm grasp of IT issues, his engaging personality and leadership traits made him an ideal candidate for the position.
In order to fill the leadership role for the content management team, David selected Maile Ohye from the Library Information Center to become the Director of Content Management. As with Charles, Maile was chosen to fill this position because of her leadership skills as well as her proven success in addressing information access and organizational issues within PharmaCo. In staffing the budding group, Charles and Maile hired both internal and external candidates to fill important roles in both teams within the KM group. The hiring process took approximately two months, and a team of an additional three members was assembled (Appendix 4).
While Dan Kramer’s job was to provide overall direction to the KM group, most of the planning was undertaken by David, Charles, and Maile. David would keep Dan abreast of the progress and status of the KM group. Dan had significant trust in David’s abilities since David had proved himself within the IT department over the past few years. While Dan primarily had to be a champion at a chief executive level in the organization, David had to be a champion at a departmental level. Many of the KM projects were dependent upon the cooperation of individual departments. David perceived that some departments, especially those involved in research, seemed reluctant to share their internal documents. Researchers were rewarded financially based on patents filed. The rewards were particularly significant if a drug that was developed in their group was brought to market. Such an incentive system could prove to be a hurdle that the KM group had to deal with. In addition, researchers were resentful of employees hired in support roles such as human resources and IT, since they were taking money away from their research. As a result, David’s role as a spokesperson of the KM initiative to different PharmaCo departments was critical to the success of the KM project. David realized that he would have to be patient and diplomatic in gradually changing different stakeholders’ attitudes towards the role of knowledge management in the organization.
CASE DESCRIPTION
The newly formed Knowledge Management group began meeting to identify a strategy for improving the current information infrastructure within PharmaCo. The group spent considerable time formulating a mission statement that would support the company's mission as well as add value to the organization as a whole (Appendix 5). The time invested proved invaluable in laying the foundation for an implementation strategy, and also provided an opportunity for the group to work closely as a unit. Developing the ability to work cohesively was particularly important, since some members of the KM group had earlier seemed apprehensive of each other, which could hinder the group's ability to make decisions cohesively and rapidly.
Consistent with its identified mission, the KM unit decided that an enterprise-wide, all-encompassing framework needed to be established to achieve the primary goals of supporting all business functions and facilitating the exchange of information. To deliver these goals, the PharmaCo KM team wanted to deliver one Web-based interface that would provide "one-stop shopping" to accommodate the information needs of the organization and to support its business functions. With these goals in mind, the PharmaCo KM group began to evaluate the portal of the existing corporate intranet, as well as explore other possible portal solutions that could enable the KM initiative. To help explain their value proposition to the executives within PharmaCo, the KM group members felt that they needed to define at a basic level what a portal is.
The KM group decided that "a portal is an application that provides a personalized and adaptive interface enabling people to discover, track, and interact with other people, applications, and information relevant to their interests." The portal would provide a single gateway to personalized information for making informed business decisions, and would unlock internally and externally stored information irrespective of its physical location. In addition, the KM group wanted to facilitate the use of virtual teams for collaboration and information sharing, which would help in making more efficient decisions. Unfortunately, portal technology was an emerging, cutting-edge technology; there was no published study outlining the costs and benefits or return on investment of a portal implementation. As a result, Dan had to build a business case from scratch to support his push for knowledge management efforts in the company using a portal. Dan had learned that a cross-departmental group working on a particular drug in clinical trials had obtained certain information that led them to believe that the drug would not be approved by the FDA. This led to the dissolution of the group. However, six months after the group had been disbanded, more information surfaced which prompted the company to reverse its decision regarding the drug. When the group started meeting again, 60% of the project documents could not be located. As a result, the time for this drug to reach market increased
significantly, thereby causing PharmaCo significant loss of potential revenue. Dan highlighted two important lessons from this case: 1) the storage of internal documents at a central location would allow seamless sharing and would enable collaboration; and 2) the efficient sharing would reduce the time taken to market a drug. Dan had realized that selling this business case to the members of the top management would be difficult, and had therefore prepared his case very carefully. When this portal proposal was brought to the other chief executives’ attention, despite a lack of concrete financial information, they agreed that there was a benefit that could be realized through the portal implementation.
Evaluating the Current State of Affairs
Prior to the KM initiative, PharmaCo had implemented a small-scale intranet portal with some basic functionality: 1) links to the various Web pages of the divisions within PharmaCo; 2) basic search features to search the full text of Web pages on the company's intranet; and 3) a search engine for employee contact information. Additionally, human resources (HR) material on benefits and 401(k) information, HR procedures, organizational charts and leave request forms had been made available through the portal. While this met the basic functionality requirements, the intranet was very limited in what it could offer. Documents were available on the intranet only if a webmaster had manually added them earlier; if a document had not been added, it was doubtful whether an employee would be able to locate it. Since data had to be added manually, documents became out of date quickly, and there was no easy way to determine when a document needed to be updated. Moreover, the KM group felt that additional information existed, both internally created content and secondary information, that could be brought into the portal. The new portal technology would also ensure that documents were updated as they were changed or modified. To aid in accessing documents, a very basic taxonomy (or categorization scheme) had been created manually, with someone in IT identifying where to place each document in the taxonomy, posting it on the intranet and creating a link to it. While the existing portal and taxonomy were not very effective, the KM group felt it would be advantageous to build on the foundation of this intranet system: retain some of the functionality currently available, and improve the existing infrastructure to include additional KM-related capabilities.
The group members determined that they wanted to retain the concept of a corporate portal intranet environment that could be accessed by all employees. Overall, David, Charles and Maile decided that the enterprise portal should stretch across PharmaCo's organizational divisions and its applications. PharmaCo's KM goal would be to establish one horizontal architecture, customizable to the needs of individuals throughout the company. In addition, vertical portals, or vortals, would be tailored to address functional requirements utilizing a specific tool or technology, such as specialized human resource support packages. To improve access to internal data, knowledge management services, including indexed search, would be available both from the enterprise portal and from within the vertical applications. In evaluating PharmaCo's current intranet portal, the team concluded that its weakest aspect was the limited range of information it could organize or access. David, Charles and Maile felt that the intranet needed to include more information beyond links to division Web sites, and to provide searching capability beyond a basic full-text search of intranet Web pages.
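The "one horizontal architecture plus vortals" idea described above amounts to sharing KM services, such as indexed search, behind a common interface. A minimal sketch follows; the class names, the single-term index, and the sample document are all hypothetical, not PharmaCo's actual design:

```python
class IndexedSearch:
    """A KM service shared by the enterprise portal and every vortal,
    so search behaves identically wherever it is invoked."""
    def __init__(self):
        self._index = {}  # term -> set of document ids

    def add(self, doc_id, text):
        for term in text.lower().split():
            self._index.setdefault(term, set()).add(doc_id)

    def search(self, term):
        return sorted(self._index.get(term.lower(), set()))

class EnterprisePortal:
    """Horizontal portal: one entry point for all employees."""
    def __init__(self, search_service):
        self.search = search_service.search

class HRVortal:
    """Vertical portal tailored to one function (here HR), reusing
    the same KM search service rather than reimplementing it."""
    def __init__(self, search_service):
        self.search = search_service.search

svc = IndexedSearch()
svc.add("doc-401k", "401(k) enrollment procedure for new employees")
portal, vortal = EnterprisePortal(svc), HRVortal(svc)
print(portal.search("enrollment"))  # same hits from either portal
```

The design choice this illustrates is composition: vortals add function-specific tools on top of shared services instead of duplicating them, which keeps search results consistent across the horizontal and vertical views.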
Envisioning the KM System
To identify methods of improving information access, David, Charles and Maile evaluated which types of information needed to be shared across the organization, as well as what means were suitable for providing access to these types of information. They determined that in
addition to the intranet Web pages, they wanted to provide access to documents created internally, such as word-processed documents, which were not available on the intranet. Additionally, they wanted to incorporate a push strategy to deliver current information to employees through the company portal, such as news feeds relevant to PharmaCo and to the pharmaceutical industry. By broadening the focus of information, the PharmaCo KM team members knew that they would be creating additional complexities that would need to be considered when evaluating their technological options for KM implementation.
Charles was primarily responsible for evaluating the formats of information, and the kinds of technologies that would be needed to encompass the variety of formats available within PharmaCo. Upon his initial evaluation, Charles determined that the company needed a system that could support internal and external Web pages, word processing documents, presentation slides, portable document format (PDF) files and spreadsheets, and that could cut across disparate operating systems. In addition, the data contained within Oracle and Access databases throughout the organization needed to be included. Much of the data already in databases was difficult to share with all employees, since it was not available in a Web format. Charles explained: "By taking a Web-based portal approach, the data will be made accessible to employees. [This] had not been possible with prior technologies." To add further complexity, the new portal needed to be able to handle a Lotus Notes environment and to obtain information from a Documentum document management system. Charles wondered whether it would be at all feasible to create (or identify) a system that could successfully manage the wide variety of document types within PharmaCo.
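One common way to cope with such format variety, offered here only as a sketch and not as PharmaCo's actual design, is a registry of per-format text extractors behind a single interface, so the indexing pipeline never cares which tool produced a document. The file names and the crude HTML stripping below are illustrative:

```python
import re

# Hypothetical extractor registry: each supported format contributes
# one function that turns a raw file into indexable plain text.
EXTRACTORS = {}

def extractor(*extensions):
    """Decorator that registers a function for one or more extensions."""
    def register(fn):
        for ext in extensions:
            EXTRACTORS[ext] = fn
        return fn
    return register

@extractor(".html", ".htm")
def extract_html(data):
    return re.sub(r"<[^>]+>", " ", data)  # crude tag stripping

@extractor(".txt", ".csv")
def extract_text(data):
    return data

def to_plain_text(filename, data):
    """Dispatch on file extension; unknown formats (e.g. .doc, .pdf,
    which need real parsers) fail loudly instead of silently."""
    ext = filename[filename.rfind("."):].lower()
    if ext not in EXTRACTORS:
        raise ValueError(f"no extractor for {ext}")
    return EXTRACTORS[ext](data)

print(to_plain_text("memo.html", "<p>Phase II trial update</p>"))
```

Adding a new format then means registering one more extractor, without touching the search or taxonomy code downstream.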
While Charles initially thought of utilizing a database system to organize the information, he realized that there would be problems in adopting a database approach for this project. A database assumes a predefined structure for the data being stored and accessed. Since 90% of the data that PharmaCo's KM team wanted to incorporate into the portal was unstructured, a database solution would be too time-consuming, costly, and difficult to maintain, and most likely infeasible. In addition, the content team or the employees themselves would be responsible for adding structure, such as author, title and subject headings, to each document. This would be too large a task for the content team to handle, and having the end users add this information would sacrifice consistency in how documents were treated and detract from creating a consistent terminology throughout the repository.
As the Director of Content Management, Maile was responsible for evaluating the information available within PharmaCo as well as the external content that would be provided to employees. Maile believed that the internal data was of prime importance, and focused her efforts on evaluating methods of making this information more accessible to employees across the organization. After spending considerable time evaluating the documents, Maile realized that the sheer number of documents was going to be the largest issue facing the PharmaCo content management team. At this point, Maile estimated that PharmaCo had internally created approximately 800,000 documents, and this number continued to increase every day. In addition, the content management team also wanted to include external information, such as market research reports and pharmaceutical/chemistry research articles, which would increase the number of documents to 1.2 million.
Given the sheer number of documents, Maile questioned how effectively the current search functionality could handle information retrieval, and knew that a sophisticated search engine was critical to the success of PharmaCo’s expanded information access plan. Additionally, Maile knew that only some of the documents contained properties or meta-tags (i.e., author, title, subject) to describe them. As a former librarian, Maile knew that these meta-tags or properties were central to improving the end-users’ ability to find information. Maile stated: While it is important to provide title search functionality, it is essential to allow the user to search the full-text of documents as well as subject headings which accurately describe the document, in order to make the search meaningful. This is our biggest challenge in
416 Manning & Sarker
improving access to information within a portal environment. The meta-tags would help to power additional search functionality and improve the quality of search results. Unfortunately, Maile determined that very little had been done to assign subjects to describe the documents, and knew it would be a formidable task to add subjects manually to the individual documents.
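The combined full-text and meta-tag search that Maile describes can be sketched in a few lines. Everything below (the sample documents, field names and query helper) is invented for illustration; it is not PharmaCo's actual data or PortalOne's API.

```python
# Minimal sketch of combined full-text and meta-tag (property) search.
# Documents and fields are illustrative stand-ins only.

docs = [
    {"title": "Q3 Compound Screening", "author": "Lee",
     "subject": "chemistry", "text": "screening results for lead compounds"},
    {"title": "Trial Protocol Draft", "author": "Ortiz",
     "subject": "clinical trials", "text": "protocol for phase II trial"},
]

def search(docs, text_term=None, **meta):
    """Return titles of docs matching a full-text term AND any meta-tag filters."""
    hits = []
    for d in docs:
        if text_term and text_term.lower() not in d["text"].lower():
            continue  # full-text filter
        if any(d.get(k, "").lower() != v.lower() for k, v in meta.items()):
            continue  # meta-tag (property) filter
        hits.append(d["title"])
    return hits

print(search(docs, text_term="protocol"))   # match on full text
print(search(docs, subject="chemistry"))    # match on a subject meta-tag
```

A search that can combine the two, as Maile notes, only works when the subject meta-tags exist and are accurate, which is exactly what PharmaCo's documents lacked.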
Portal Acquisition—To Make or To Buy

The Knowledge Management group debated whether to develop the portal internally or to use outside vendors to provide the portal application. The deciding factor in acquiring the portal solution from a vendor was the KM group’s concern over future funding of the project, given an anticipated decline in stock price when PharmaCo’s highest revenue-generating drug’s patent expired. PharmaCo was willing to allocate money to the Knowledge Management group to develop a portal at this moment, but the KM team knew that this money would not be perpetually available. They were uncertain as to whether the portal implementation could be completed within the time during which the funding was guaranteed, if they were to create the portal themselves. Another point in favor of utilizing a vendor’s solution was that the KM group did not believe that the company had the employee resources to commit to a project of this magnitude. Many KM group members felt that an outside vendor would provide them with the time and skill-related advantages needed to do a full-scale portal implementation. However, some team members seemed convinced that the PharmaCo situation was sufficiently unique to warrant the development of the technology in-house, and this caused some tension in the team. With the passing of each day, the group felt the pressure to make decisions quickly in order to spend the allocated budget while it remained.
Selecting the Vendor

Eventually, David, Charles and Maile made the decision to acquire the solution from an outside portal vendor. Even though Dan had a preference for a solution developed in-house, he reluctantly agreed to support the KM team’s recommendation due to his respect for David and the KM group. At this point, the KM team members began to research the various portal vendors, and compare the vendors’ offerings with their current and anticipated needs for a PharmaCo corporate portal. They mapped out what components they would like to include in their EIP (Enterprise Information Portal) to support PharmaCo’s knowledge management. They decided that six features were essential to the success of the portal: 1) full-text search, 2) categorization or taxonomy, 3) personalization, 4) virtual communities or collaboration, 5) business process applications support, and 6) applications support (Appendix 6). There were over 100 portal companies providing very similar offerings; consequently, the KM group decided to focus on those companies whose products incorporated all the features PharmaCo required. After screening numerous portal vendors, PharmaCo invited two vendors to provide a proof of concept. One of the vendors, PortalOne, was a pre-IPO firm that had achieved a top market position in the portal space, but had very little name recognition outside of the portal market. PortalOne’s portal offered applications that could support business functions, launch applications from the portal, and provide full-text search and categorization, while adding personalization features. In essence, PortalOne provided all of the requirements PharmaCo had established for their initial portal solution. While the KM group was very satisfied with the proof of concept shown by PortalOne, they decided to bring in another vendor to make sure that they explored multiple options.
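The screening step, keeping only vendors whose products cover all six required features, amounts to a simple set check. The vendor names and feature sets below are hypothetical stand-ins, not the companies PharmaCo actually evaluated.

```python
# Sketch of feature-based vendor screening: a vendor survives only if
# its product covers every required feature. Vendors here are invented.

REQUIRED = {"full-text search", "categorization", "personalization",
            "virtual communities", "business process support",
            "applications support"}

vendors = {
    "VendorA": set(REQUIRED),                            # covers everything
    "VendorB": {"full-text search", "personalization"},  # partial coverage
}

# Keep only vendors whose feature set is a superset of the requirements.
shortlist = [name for name, feats in vendors.items() if REQUIRED <= feats]
print(shortlist)
```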
Dan heavily influenced the decision to bring in a large player in the software and database market, Oasis, to demonstrate its solution. David, Charles and Maile were very doubtful of Oasis’ capability to provide a portal suitable to their needs. Similar to PortalOne, Oasis provided support for business functions. However, unlike PortalOne, Oasis’ portal did not incorporate data access and organization, nor was such functionality currently under development. Oasis nevertheless claimed that it could develop a portal that would suit PharmaCo’s needs,
Enterprise Information Portal Implementation
417
including information access and retrieval. PharmaCo would be the beta site that Oasis would use to develop its portal solution. Given that Oasis had a substantial presence in the database market, and enjoyed a good reputation among technical professionals (including those within PharmaCo), there was a lot of pressure on David from Dan to pursue the portal solution proposed by Oasis, despite skepticism from the KM group. Dan felt comfortable choosing technology from Oasis, a vendor with a solid reputation, and felt there was some risk involved in purchasing technology from a pre-IPO software company such as PortalOne. In addition, Oasis was willing to give a price break since PharmaCo would act as a test bed for the development of the code for its portal technology. David, Charles and Maile believed that the political pressure was so strong that they would be required to purchase the solution from Oasis even though this offering did not fit PharmaCo’s KM needs. In response to this perceived pressure, they referred back to the goals and objectives of the KM initiative, and compared the features of both products with respect to the desired end result. Charles and Maile documented their findings, and David presented them to Dan (summary table in Appendix 7). In addition, David was able to convince Dan that they would be unable to meet their deadlines if they had to wait for Oasis to develop the features instead of implementing PortalOne’s solution, which already had those features in place. After an extremely tense meeting, Dan conceded to David, Charles and Maile’s push for PortalOne.
Enterprise-Wide Portal Implementation—Hardware & Software Acquisition/Installation

In order to prepare for the implementation of the portal application, Charles placed an order for database servers as per PortalOne’s technical requirements for the portal. Each server was required either to support different features of the portal or to store data properties. Charles knew that the company would need to invest in more servers for the storage of data properties, given the volume and variety of data to be processed. The hardware was a substantial initial investment, and would also involve an on-going investment for upgrades or to accommodate additional data. Once the hardware was received, PortalOne representatives installed the software. It soon became apparent that Charles and other members of the KM team would need training in at least two areas: how to maintain the portal, and how to add functionality to the portal. PortalOne offered the basic framework, with the ability to customize the portal by adding applications that PortalOne referred to as “widgets.”
Implementation Issues

Charles and Maile had not been able to plan carefully how they wanted to set up the portal, so they were not able to fully utilize PortalOne initially. They were, however, able to set up some initial applications essential for supporting the organization, such as e-mail, a calendar/meeting system, and word processing and spreadsheet applications. In addition, Charles started developing business process support applications within human resources, such as employee leave and employee benefits tracking; he also started to formulate ideas for supporting procurement. As Charles continued to work toward customizing the portal for PharmaCo, Maile tackled the numerous issues stemming from PharmaCo’s immense data repository. Maile realized that she and her content managers would need to develop a short-term as well as a long-term plan for improving access to PharmaCo’s data. Initially, the KM group members had decided that they wanted to implement an enterprise-wide portal with access to all information for all business groups. However, it gradually became clear to the team that this goal could not be realized immediately. The search functionality within PortalOne could be applied over many functional areas. However, given the constraints in terms of the number of servers that could be purchased by PharmaCo at this time, the search could only be applied to 500,000 documents before reaching the maximum indexing capability of the server. Another server for document indexing would need to be
purchased and configured in order to process 500,000 additional documents. Given this immediate constraint, the content management group chose to develop vertical portals (i.e., “vortals”) for specific business functions, which could eventually be expanded in scope to incorporate all business functions with one enterprise-wide portal view. In determining its first focus area, the KM group chose the business function that had the largest value proposition for the company: the management of documents created by the researchers within the discovery stage. Following the Discovery documents, the next vortal would focus on incorporating documents created by researchers in clinical trials, and this would be followed by the implementation of a third vortal with focus on sales and marketing data. Though the implementation strategy to sequentially create the vortals seemed satisfactory, the KM group needed to create the first vortal and obtain initial feedback. This feedback from end-users would be used to determine how future vortals would be developed.
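The hardware constraint driving the vortal strategy can be made concrete with a quick calculation. Only the document counts and the 500,000-document-per-server cap come from the case; the rest is a sketch.

```python
# Back-of-the-envelope check behind the vortal decision: each index
# server was capped at 500,000 documents, so indexing the whole
# repository takes multiple servers. Figures are from the case.
import math

docs_internal = 800_000    # internally created documents
docs_total = 1_200_000     # including external market/research content
per_server = 500_000       # indexing capacity of one server

servers_internal = math.ceil(docs_internal / per_server)  # 2 servers
servers_total = math.ceil(docs_total / per_server)        # 3 servers
print(servers_internal, servers_total)
```

With a single server covering well under half the repository, scoping the first rollout to one business function's documents was the pragmatic choice.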
Organizing and Linking the Documents for Easier Access

In order to index the documents and to support the search functionality within the PortalOne portal, the data needed to be crawled or “spidered,” as with many existing Internet search engine technologies. Essentially, the PortalOne crawler points to different file systems and data locations, and obtains a copy of the text of each document. With a copy of the text of each document, the data contained within each document is indexed and can then be referenced within the full-text search feature. In order to include the appropriate Discovery data, Charles had to identify which computers corresponded to the researchers working within the Discovery stage, and then he pointed the PortalOne crawler to those computers. Additionally, Charles pointed the PortalOne crawler to some of the databases that contained information for the Discovery researchers. Besides storing a copy of the text, the crawl also obtained the metadata properties as well as the location of each document, and then stored this information on the data index server. The meta-tags were utilized to create a data card, similar to a library catalog card, for each document. In addition to searching by full text, users could search utilizing the meta-tag properties, such as title and author. The meta-tags added search functionality that was not previously available within the PharmaCo intranet. While this gave the end-user more options, many of the tags describing the documents were reported to be inaccurate, and this posed problems for the KM team that were difficult to resolve. While it was important to have a reliable and relevant search engine, the search component (discussed above) addressed the needs of only those users who were experts at finding information. Thus, in order to accommodate the novice end-user, the KM group sought to implement a taxonomy that would provide a hierarchical view of their data set.
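The crawl-and-index workflow just described can be sketched as follows. This is a simplified stand-in for PortalOne's crawler, not its actual implementation; the paths and "data card" fields are illustrative.

```python
# Sketch of a crawl: walk file locations, copy out each document's
# text, and record a "data card" (location plus basic properties)
# for the index. A real crawler would also read embedded metadata
# such as author and subject, and handle binary formats.
import os

def crawl(root):
    index = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, errors="ignore") as f:
                text = f.read()
            index.append({           # the "data card" for this document
                "location": path,    # where the original lives
                "title": name,       # stand-in for a real title property
                "text": text,        # copied text for full-text indexing
            })
    return index
```

Each card could then be enriched with author and subject meta-tags where the source documents carry them, which is precisely where PharmaCo's inaccurate and missing tags became a problem.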
PortalOne offered a taxonomy feature, but it was difficult to create and maintain. Each category in the taxonomy needed to have a folder created for it, and the documents that matched the SQL query created for that particular category would be attached to the folder. However, in order to use this feature successfully, the content managers needed considerable time and expertise. PortalOne suggested utilizing the expertise of its partner TextFinder, a company whose core product focused on categorization development within intranet environments. TextFinder and PortalOne had integrated the TextFinder taxonomy within the PortalOne portal, and this solution helped to address some of the content managers’ concerns. Instead of utilizing SQL queries, TextFinder conducted a linguistic analysis of the data, and extracted phrases from the documents. A category would be created, and the phrases related to the concept would be latched to it. In addition to latching the concepts and documents to the categories, TextFinder would create meta-tags that specified the category or categories to which a document could be latched. These meta-tags could be utilized as subject headings to help describe the document, and provide controlled vocabulary. Controlled vocabulary had long been a strategy used in library catalogs and indices to ensure consistent terminology for describing information. With this approach, PharmaCo
could develop its own thesaurus. Because meta-tags describing the subject of documents had not previously been available, this approach immediately appealed to the content management group. In addition, TextFinder was able to add its metadata to the data card created by PortalOne. This provided additional metadata properties that could be used to enhance the capability of the search engine within PortalOne. Maile had been concerned with the lack of meta-tags describing the subject of each document, and had felt that it would be impossible to manually create the correct tags for every document. PortalOne, in association with TextFinder, showed promise in being able to automate some of the generation of subject meta-tags. Even though it was clear that creating a taxonomy was going to take considerable time and effort, Charles and Maile felt that the benefits merited the time commitment of the content management team. To begin the process of building a taxonomy for the discovery stage in the drug development cycle, Maile began researching thesauri that had already been developed in the area of chemistry. Once she identified the thesaurus that seemed to fit the content matter, she contacted the appropriate government organization to obtain permission to use the thesaurus within their portal (which was readily granted). The content management team thereafter transformed the thesaurus into a hierarchical structure suitable for categorization. Besides focusing on chemistry, the content management team felt it important to develop a controlled vocabulary for all of the PharmaCo drugs, both those that had been marketed and those that were at various stages in the drug development cycle. Contrary to initial expectations, it was not a straightforward process to develop a controlled vocabulary for their drugs. When a molecule is discovered, it is assigned a PharmaCo number for reference.
Later, a common name and/or a generic name are also given to the molecule. As the drug enters clinical trials, additional names are used to describe it. Once the drug receives FDA approval, still more names are assigned as it is marketed. A marketed drug may have multiple names if it is marketed in different countries, and also if it is used in multiple therapeutic areas. Overall, the same drug may have between one and 30 names. This complexity posed severe problems in developing consistent terminology for the meta-tags. After discussing this issue with some of their contacts in Discovery, the KM team members decided to refer to each drug by its common name. They then utilized the synonym capabilities within TextFinder to map the numerous drug names onto the preferred term. Unfortunately, a comprehensive list of the different names of the drugs had not yet been developed within PharmaCo. Thus, the content management team had to undertake a lot of internal research to develop a list of the drug names before the implementation could proceed.

Initial Employee Reactions

In order to obtain feedback from representative stakeholders, David, Charles and Maile decided to create a test group to pilot their initial version of a Discovery vortal with the PortalOne portal software. This test group consisted of representatives from various groups within Discovery. David spent considerable time and effort encouraging the management and members of the collaborating divisions in Discovery to access the newly created vortal. David allowed the managers within Discovery to share the location of the new portal with their employees, and also provided the option for a manager to sign his or her group up to automatically view the new portal when a Web browser was launched by one of their employees. Therefore, when the KM group rolled out the Discovery vortal, 1,000 of the 10,000 employees in Discovery used the new PortalOne interface.
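The synonym handling described above, mapping a drug's many names onto one preferred term, can be sketched as follows. The drug names are invented, and this is not TextFinder's actual interface.

```python
# Sketch of controlled-vocabulary normalization: many names for the
# same drug (internal number, generic name, country-specific brand
# names) all resolve to one preferred term before tagging.
# All names below are invented examples, not real PharmaCo drugs.

SYNONYMS = {
    "pc-1042": "examplamab",     # hypothetical internal PharmaCo number
    "examplamab": "examplamab",  # preferred (common) name
    "zentrovir": "examplamab",   # hypothetical US brand name
    "zentrovax": "examplamab",   # hypothetical EU brand name
}

def preferred_term(name):
    """Map any known drug name to its controlled-vocabulary term."""
    return SYNONYMS.get(name.lower(), name)

print(preferred_term("Zentrovir"))  # resolves to the preferred term
```

The hard part for PharmaCo was not this lookup but compiling the synonym table itself, since no comprehensive list of drug names existed.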
For the most part, the stakeholders’ feedback was positive and included the following comments: Being able to access more internal information is invaluable, and having it available in a browse-able structure and via search is an improvement over the previous intranet options… The online communities are helping with better project management... It saves time to only have to go to one Web site to access all the technical requirements
I need to complete my job. I like not having to open up separate applications to access my e-mail and calendar… However, some interesting reactions to the portal did emerge. The KM group had (surprisingly) not considered the issue of sensitive data being viewed by everyone within the organization, and no thought had been given to setting up security profiles for users. This was an oversight with major implications for the rollout timeframe of the portal. While it was “technically” possible to address security issues, security profiles needed to be established and implemented for each employee. With the complex inter-linking of data using PortalOne, it was not at all obvious to the implementation team which users should be granted access to which information, in which context. Coming up with the security profiles was clearly going to be a time-consuming process that should have been considered during the first stages of implementation. Now the KM team would be at least a couple of months behind schedule in presenting the final Discovery vortal to all Discovery employees. David had promised Dan a rollout date that was no longer attainable. With his credibility on the line, Dan was very irritated by the delay, and he formally communicated his displeasure to David, Charles and Maile for not addressing this issue earlier. Another surprising reaction to the portal technology stemmed from the way the researchers conducted their work. While the KM group had placed a priority on providing access to as much information as possible, the researchers were concerned by this approach. In PharmaCo, researchers are primarily rewarded and recognized for their contributions when a patent is successfully filed as a result of their work.
By sharing information about their research, the researchers felt that other researchers within the company could inappropriately utilize the information available within the portal to apply for a patent, even though the research had been completed in a different group.
CURRENT CHALLENGES/PROBLEMS FACING THE ORGANIZATION

Recognizing the clash between the collectivist assumptions of the portal environment and the reward systems encouraging individualism as well as inter-group competition in the company, Charles and Maile realized that there could be a strong negative reaction to the portal. Charles and Maile now had to evaluate whether to continue with their initial plan of including all information within the portal, or to eliminate certain information (such as that related to Discovery). They were fully aware that eliminating Discovery-related information would mean that most of the proposed benefits of the KM system would not be realized. Some issues that continued to linger in their minds were: Would the researchers eventually be able to adjust to a new openness in sharing information? If so, the portal would prove to be a very powerful tool for the organization. If not, the scope of the portal’s capabilities would be severely limited. How could Charles, Maile and their other team members facilitate the adoption of this tool? Charles, David and Maile met to discuss the observed reactions and behaviors of the researchers regarding the technology-enabled ability to share and access data across the lines of different research groups. From a “rational” knowledge management perspective, they realized that opening the door to information discovery was very powerful; however, they also realized how institutional and other social factors could inhibit the growth of the portal. Around this time, the concerns of many researchers regarding the open availability of the data and lack of security had also come to the attention of the CTO, Dan Kramer, and this worried him. He asked Charles, David and Maile to present a list of pros and cons that needed to be considered in deciding on the fate of the portal implementation project. Dan also asked for suggestions regarding possible ways of addressing the problems that had come to light.
David, Charles and Maile were able to arrive at some conclusions to address the problems. Either content would have to be restricted to secondary data, or tight security would have to be created in order to restrict the exchange of information. Another approach that could be used would involve
allowing each individual research group to establish its own community within which knowledge could be shared using PortalOne. Unfortunately, these were not optimal solutions. As Dan himself began to consider the tradeoffs associated with the implementation of the portal, he wondered if he could continue to support the direction the knowledge management team had chosen, and if the benefits of the portal would outweigh the costs in the long run.
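The third option above, letting each research group share knowledge only within its own community, can be sketched as a simple visibility rule. The group names and data structures are invented for illustration; this is not PortalOne's actual security model.

```python
# Sketch of per-group communities: a user sees only the documents
# belonging to communities he or she is a member of. Names invented.

communities = {
    "oncology":   {"docs": ["onc-results.doc"], "members": {"alice", "bob"}},
    "cardiology": {"docs": ["card-notes.doc"],  "members": {"carol"}},
}

def visible_docs(user):
    """Documents visible to a user: only those in his/her communities."""
    return [doc
            for comm in communities.values() if user in comm["members"]
            for doc in comm["docs"]]

print(visible_docs("alice"))  # sees only oncology documents
```

The cost of this design is visible in the sketch itself: a non-member sees nothing, so cross-group discovery, the main promise of the portal, is lost.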
BIOGRAPHICAL SKETCHES

Alison Manning is employed as a Software Consultant with one of the software vendors involved in the project described in the case. She recently completed her MBA at Washington State University with an emphasis in Management Information Systems. In addition to holding an MBA, Alison Manning also has a Master’s in Library and Information Science, and was employed as a Business Reference Librarian at Washington State University. Suprateek Sarker is an Assistant Professor of Information Systems at Washington State University, Pullman. His research interests include virtual teamwork, ERP systems, computer-mediated learning and qualitative research methods.
APPENDIX 1
Source: Pharmaceutical Research and Manufacturers of America. Pharmaceutical Industry, 2000. http://www.phrma.org/publications
APPENDIX 2
Source: Pharmaceutical Research and Manufacturers of America. Pharmaceutical Industry, 2000. http://www.phrma.org/publications
APPENDIX 3
PharmaCo organizational chart. Roles shown in the original figure: Chief Executive Officer; Chief Financial Officer; Chief Operating Officer; Chief Technology Officer; Chief Counsel (legal); VP of Research & Development; Director of Information Technology; Director of Accounting; VP of Global Management; Director of Knowledge Management; VP of US Operations; Director of Library and Information Services; VP of Marketing & Sales; VP of Manufacturing.
APPENDIX 4
Knowledge Management group organizational chart:
Chief Technology Officer: Dan Kramer
    Director of Knowledge Management: David Allen
        Director of Knowledge Management Technology: Charles Heenan
            Portal Technology Analyst
        Director of Content Management: Maile Ohye
            Content Manager
            Content Manager
APPENDIX 5
Mission Statement: The PharmaCo Knowledge Management division will develop and support the infrastructure to facilitate better decision-making, streamline business processes and encourage collaborative behavior via information exchange, resulting in a competitive advantage by leveraging PharmaCo’s corporate knowledge. The Knowledge Management division seeks to provide an opportunity to reduce drug development cycle time by improving access to PharmaCo’s knowledge assets. The support of employee collaboration as well as the support of an expert-driven organization is central to the Knowledge Management infrastructure directed towards improving decision-making and organizational responsiveness.
APPENDIX 6
Portal Platform Requirements (features surrounding the end-user portal platform): Full-text Search; Categorization; Personalization; Virtual Communities (collaboration); Business Process Support; Applications Support (e.g., word processing, e-mail).
APPENDIX 7
Criteria Evaluation for Portal Selection
Criteria: Full-Text Search (Documentum crawls, Oracle crawls, Lotus Notes crawls); Categorization; Personalization; Business Process Support; Applications Launch; Virtual Communities.
PortalOne: met all nine criteria; customization required: Minimal.
Oasis: met four of the nine criteria; customization required: Excessive.

APPENDIX 8
Sample Categorization (figure not reproduced).
Design and Implementation of a Wide Area Network: Technological and Managerial Issues Rohit Rampal Portland State University, USA
EXECUTIVE SUMMARY

This case deals with the experience of a school district in the design and implementation of a wide area network. The problems faced by the school district that made the WAN a necessity are enumerated. The choice of hardware and software is explained within the context of the needs of the school district. Finally, the benefits accruing to the school district are identified, and the cost of the overall system is determined.
BACKGROUND

The organization is a reasonably small school district with about 2,700 students, in a town named X that comprises four villages (A, B, C and D). There are five schools in the district, the Board of Education that oversees those schools, and the Bus Garage. The buildings that house these seven entities are spread over the four villages, and the distance between locations is more than ten miles in some cases. While the school district is not exceedingly wealthy, it does have access to state and federal grants for bringing schools online. What is not covered by the grants has been funded out of the municipal budget, with virtually no opposition from the Board of Finance, setting a stage whereby the town maintains an aggressive approach to deploying technology in its schools.
SETTING THE STAGE

The Town of X needed to connect the computers and systems at the seven locations in order to share information and resources. The district is spread out over the four villages that make up the town (A, B, C and D), and its seven locations within that area are separated by as much as ten miles. The faculty, staff and students within the school district needed the ability to communicate with each other and with the rest of the world via electronic mail. Accessing information via the World Wide Web was needed to keep up with the changing world, both for the faculty and the students. They needed to share files without printing and distributing hard copies. There was a distinct need for getting access to the Internet into the classroom. The schools in the school district had computer labs,
428 Rampal
and computer-based education was a priority, but the computers could not connect to the Internet. The students, teachers and administrators in the school district needed to run collaboration applications like groupware, and various administrative applications that required connectivity to a central database. Given the geographical distances involved, a wide area network seemed to be a good solution that would take care of most of the needs of the school district. An example of this need for connectivity and bandwidth is the Phoenix database, a school administration package used by the Town X school district and many other districts. Phoenix is a large and complex database written for Microsoft Access 2.0. It is clunky and painful to deploy in many respects, and it does not run on the newer versions of Access, so Access 2.0 must be installed for the database to run; but the package serves its purpose and is popular. Phoenix has modules for maintaining student records (attendance, grades, discipline, etc.), class and bus schedules, payroll for faculty and staff, cost accounting and a number of other functions needed to run a school district. The ability to run this software product across a network is indispensable in this environment, as it allows each school to handle its own administrative tasks independently. In order for the Phoenix software to run properly over the network, adequate bandwidth is a requirement. When the project was envisioned, direct fiber over the distances involved was not a viable option (the distances between two consecutive schools could run to more than ten miles). A wide area network, using some form of leased transmission lines, was considered the most efficient way to connect the school district and create a cohesive enterprise. Such an enterprise could communicate efficiently and act in concert, providing connectivity and services to the faculty, staff, students and even the citizens of the Town X school district.
CASE DESCRIPTION

A Look at the Network

The Town X wide area network is described from two perspectives. The first perspective gives an overview of the network and the services that it delivers. This view primarily describes the software. The second perspective explains how these services are delivered. This perspective provides a discussion of the hardware and the network protocols used in the wide area network deployed.
Network Services

Just as a desktop computer requires an operating system (OS) to run, a network needs a network operating system (NOS). For the Town X school district, the NOS chosen was Microsoft’s Windows NT Server 4.0. There were a number of reasons for choosing Windows NT 4.0, but the overriding one was that the network administrator was familiar with the software. The NOS resides on a single computer known as the Primary Domain Controller (PDC). The school district network is configured as a single NT domain network; that is, a user needs to log on only once to the network in order to reach all resources he/she is allowed to access. The NT Server OS can also be configured to run as a standalone server to host other services on the network. There may be a number of such servers in a domain. In the case of the Town X school district WAN, there are two other NT servers that serve as hosts for additional services.
The Role of Domain Controller(s)

While the PDC, of which there can be only one, hosts the Security Accounts Manager (SAM) database, it is the Backup Domain Controllers (BDCs) that maintain a copy of this SAM database and actually perform the authentication of logins to the domain. So when a user turns on a client machine and enters a user name and password, the “request” is directed not to the PDC but to a BDC, provided a BDC is present. For the Town X school district, each location has a BDC. At login, the user name is compared against the SAM database, and those privileges and accesses
Design and Implementation of a Wide Area Network
429
that were assigned to that user's group or groups are granted to that user. (A user may belong to one or more groups.) Some of the pre-defined groups in the NT domain include "domain users", "administrators", "print operators" and a wide variety of others. In a LAN running Windows NT, access is normally granted by group membership and not on an individual basis. This practice makes management of the domain much easier. In addition, a user may be assigned a home directory on the server, access to which is normally restricted to that user only. Accounts in the NT domain are created on the PDC and then copied to the BDCs by replication. The BDC also mirrors most of the other functions of the PDC, so that in the event the PDC fails, a BDC can be "promoted" to PDC status until the primary machine is fixed or replaced. This redundancy enables fault tolerance and is an essential part of smooth network operation.
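The group-based access model just described can be sketched in a few lines of Python. This is a toy illustration: all user, group and permission names below are hypothetical, and NT's actual SAM database is of course not a Python dictionary.

```python
# Toy sketch of NT-style group-based access control: users get the
# union of the permissions of every group they belong to.

GROUP_PERMISSIONS = {
    "domain users": {"read_shared"},
    "print operators": {"read_shared", "manage_printers"},
    "administrators": {"read_shared", "manage_printers", "manage_accounts"},
}

USER_GROUPS = {
    "smith": {"domain users"},
    "jones": {"domain users", "print operators"},
}

def permissions_for(user):
    """Union of the permissions of every group the user belongs to."""
    perms = set()
    for group in USER_GROUPS.get(user, set()):
        perms |= GROUP_PERMISSIONS.get(group, set())
    return perms

def can(user, permission):
    """Access is decided by group membership, never per individual."""
    return permission in permissions_for(user)
```

For example, `can("jones", "manage_printers")` holds because jones belongs to "print operators", while `can("smith", "manage_accounts")` does not; granting a right means editing a group, not touching each user.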
Internet and Proxy Service At the Town X High School, the BDC also hosts two other critical packages, namely Microsoft's Internet Information Server (IIS) and the Proxy Server. The IIS machine connects both to the Internet and to the internal network, and it hosts the Web site for the Town X school district. In order to limit the number of IP addresses needed, the network in the Town X school system uses private IP addresses, which means these internal IP numbers cannot be routed to the Internet. So the IIS server needs two network cards: one with a private address to connect to the internal network, and one with a public address to connect to the Internet. If the IIS server had only the public IP address, computers on the internal network could reach it only by going out to the Internet via the proxy server and addressing it by its public IP address. Proxy service allows the administration and monitoring of Internet usage as well as logging of inbound and outbound traffic. More importantly, the Proxy Server allows the filtering out of objectionable materials from the office or academic environment. The school district was so concerned with restricting access to objectionable material that it augmented the filtering function with a third-party product called Web Sense. Web Sense is a subscriber service that downloads, on a daily basis, an updated list of URLs that serve objectionable material; it then works closely with Proxy Server to block access to these URLs. Once the Proxy Server has been installed on the network, the client machines are configured to "point" at it for their access to the Internet. The server caches a configurable amount of content (100 MB – 200 MB is common) as it is accessed from the Web and periodically compares its cached material to the remote material, keeping it updated.
This way the frequently accessed material is kept readily available locally, speeding up Internet access for the user, and freeing up bandwidth for other more desirable uses.
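The caching behavior described above can be modeled with a short sketch. This is an illustrative toy, not MS Proxy Server's actual algorithm; the TTL value and the fetch callback are assumptions.

```python
import time

# Illustrative toy model of the proxy's caching idea: serve cached
# content while it is "fresh," re-fetch when it goes stale.

class CachingProxy:
    def __init__(self, fetch, ttl_seconds=300):
        self.fetch = fetch      # callback that retrieves a URL remotely
        self.ttl = ttl_seconds  # how long cached content counts as fresh
        self.cache = {}         # url -> (content, fetched_at)

    def get(self, url, now=None):
        now = time.time() if now is None else now
        entry = self.cache.get(url)
        if entry is not None:
            content, fetched_at = entry
            if now - fetched_at < self.ttl:
                return content, "hit"    # served locally: no WAN traffic
        content = self.fetch(url)        # missing or stale: go remote
        self.cache[url] = (content, now)
        return content, "miss"
```

A hit returns the local copy and consumes no WAN bandwidth; once an entry ages past the TTL the proxy re-fetches it, which is the "compares its cached material to the remote material periodically" step in simplified form.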
DHCP and WINS The Town X school district network uses DHCP and WINS to locate computers on the network. Dynamic Host Configuration Protocol (DHCP) is a dynamic TCP/IP addressing method that assigns each machine a TCP/IP address when it joins the network. Windows Internet Name Service (WINS) provides NetBIOS host name to IP address resolution for WINS clients. WINS increases network efficiency by reducing the number of network broadcasts needed to resolve a NetBIOS host name to an IP address. A NetBIOS name is a unique computer name that is entered in the identification tab of a computer's network control panel; it is the name that shows up in the Network Neighborhood. Names are generally mnemonic, identifying either the primary user of the computer (smith, librarian) or its location (Library12, Lab3). On a TCP/IP network, however, the DHCP server assigns each computer a numeric IP address. A WINS server therefore maintains a periodically updated database that matches the NetBIOS name with the IP address for each computer on the network.
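The interplay of DHCP (address assignment) and WINS (name-to-address lookup) can be illustrated with a toy model. The subnet is one of the district's private ranges; the computer names are hypothetical examples in the case's naming style, and real DHCP/WINS servers of course handle leases, renewals and replication that this sketch omits.

```python
import ipaddress

# Toy model of DHCP handing out addresses from a pool, with a
# WINS-style table mapping NetBIOS names to the assigned IPs.

class TinyDhcpWins:
    def __init__(self, subnet):
        # DHCP pool: all host addresses in the subnet, handed out in order
        self.free = list(ipaddress.ip_network(subnet).hosts())
        self.names = {}  # NetBIOS name -> assigned IP (the WINS role)

    def register(self, netbios_name):
        ip = self.free.pop(0)           # DHCP: next available address
        self.names[netbios_name] = ip   # WINS: record name -> IP
        return ip

    def resolve(self, netbios_name):
        """Name lookup from the table, with no network broadcast."""
        return self.names.get(netbios_name)
```

With `lan = TinyDhcpWins("10.0.10.0/24")`, registering "Library12" yields the first host address, 10.0.10.1, and `lan.resolve("Library12")` later returns that address from the table rather than by broadcasting on the wire.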
430 Rampal
Mail Services The e-mail service in the Town X domain is provided by a dual-server MS Exchange 5.5 configuration, while the e-mail client software used is MS Outlook 98, a 32-bit Windows application. The two-server configuration was chosen to enable greater security by separating student and administrative accounts while providing fault tolerance through information store replication. So if one e-mail server goes down, the second can take over until the first server comes back up. The protocols standardized for the students, teachers and administrators to send and receive electronic mail are SMTP and POP3 respectively. SMTP is the de facto standard for sending mail, while POP3 was chosen as the e-mail client protocol as it limits the amount of incoming mail that stays on the e-mail server. One of the reasons for choosing MS Exchange as the e-mail server was the ease of use and the management tools available. As with other Microsoft products (like MS Proxy Server) that are part of Microsoft's Back Office suite, MS Exchange provides a number of tools for tracking usage and monitoring performance. Since e-mail tends to become indispensable in an enterprise, keeping it online requires proper allocation of resources, and monitoring is critical to this process.
Server and Desktop Management The network management software chosen for the Town X school district was MS Systems Management Server (SMS), yet another member of the Back Office family. With the client portion of the software installed on the workstations, network administrators can remotely control, manage and troubleshoot desktops and servers. SMS can be used to inventory and collect data, distribute software and upgrades, and monitor server and network traffic, all from remote management stations.
Desktop Configuration A conscious decision was made to configure each desktop in the domain in a standard fashion, presenting a consistent look and feel throughout. A standard computer has the MS Office Professional suite, a World Wide Web browser, and MS Outlook 98 for e-mail and group scheduling. Other software needed by individual users is installed on a case-by-case basis. All the network clients are configured to use the DHCP and WINS servers as well as the MS Proxy Server. Finally, each machine is configured to log on to the NT domain and access the appropriate logon scripts, user profiles, home directories if applicable, shared applications and other network resources.
THE DESIGN OF THE WIDE AREA NETWORK The Town X domain involves servers and desktop computers spread over seven buildings at five sites within the town. The Board of Education (BOE) and Town X High School, which are adjacent to each other, are connected by a fiber optic backbone allowing transfer of data at 100 Mbps. The Main Distribution Frame (MDF) at Town X High School is the network core. The remaining sites are connected by means of a wide area network implemented over frame relay connections at T-1 speed (1.544 Mbps). Another frame relay connection at T-1 speed links the network to a local Internet Service Provider (ISP), which in turn links it to the Internet.
Frame Relay vs. Point-to-Point A considerable amount of thinking went into the decision to use Frame Relay as the connection technology for this WAN. Traditionally, WANs were created using point-to-point technologies. The difference between Frame Relay and point-to-point is that point-to-point is implemented over a dedicated copper line from one site to a phone company digital switch and from that switch to a second
site. In order to connect the entire Town X school district network in this fashion, each site on the network would need five such connections, one to each of the other sites. Since point-to-point connections are leased on a per-mile monthly fee (about $40 per mile in this case), the costs add up quickly. Frame relay, on the other hand, requires only one connection per site. That connection is leased at a set fee and goes to what is called a "frame relay cloud," a group of digital switches which, on request, forms a connection to another such cloud where the object of that request (the machine being accessed) is connected. So the T-1 speed in the case of the Town X school district WAN is actually "burstable to T-1" speed: if there are simultaneous connections to more than one other site on the school district network, the bandwidth is split among them. In spite of that, frame relay was found to be the more cost-effective and efficient way to connect at the time the network was designed. This was true even when compared to some of the lower cost technologies like ISDN or modems; the problem was that neither of these was sufficiently robust to carry the kind of traffic generated by the school district. At one time in the past, the school district network was actually connected by 56 Kbps frame relay, but the administrative database program, Phoenix, which is extensively used, required a much higher bandwidth, so the connection was increased to a T-1.
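The cost trade-off can be made concrete with back-of-the-envelope arithmetic. The $40-per-mile point-to-point rate and the $400 monthly per-site frame relay fee are figures from the case; the ten-mile link length is a hypothetical round number based on the distances mentioned.

```python
# Back-of-the-envelope comparison of monthly line charges for a full
# mesh of leased point-to-point lines vs. one frame relay connection
# per site.

def point_to_point_monthly(distances_miles, rate_per_mile=40):
    """Full mesh: one dedicated leased line per pair of sites."""
    return sum(d * rate_per_mile for d in distances_miles)

def frame_relay_monthly(n_connections, fee_per_connection=400):
    """Frame relay: one flat-fee connection per site into the cloud."""
    return n_connections * fee_per_connection

# Six sites meshed point-to-point need 6*5/2 = 15 links; frame relay
# needs just six connections into the cloud.
mesh_cost = point_to_point_monthly([10] * 15)  # 15 links x 10 mi x $40
cloud_cost = frame_relay_monthly(6)
print(mesh_cost, cloud_cost)  # 6000 2400
```

Under these assumptions the mesh costs $6,000 a month against $2,400 for frame relay, and the gap widens as sites are added, since mesh links grow quadratically while frame relay connections grow linearly.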
The Main Distribution Frames At the heart of each school in the Town X district WAN is an MDF built around 3Com's Corebuilder 3500 Layer 3 switch. The "Layer 3" refers to the fact that the Corebuilder provides not only 100 Mbps port switching but also routing at "wire speed." (The backplane of the Corebuilder routes packets as quickly as the network electronics can deliver them; Cisco's router did not have this capacity, as it could only route at the speed of its processor.) The Corebuilder design is also fully scalable by means of plug-in "blades," each with fiber or Ethernet ports, so the design allows for expansion using technology not yet available. The servers at each site connect to the network through the TX (Ethernet) blades, while the Ethernet switches, both at the MDF and the various IDFs (intermediate distribution frames), connect by fiber to the fiber blades. An added feature of the 3Com Layer 3 product is that its routing capacity allows for the creation of Virtual LANs (VLANs) by segmentation of a LAN into IP subnets. Since routing is a managed activity, this feature is used to provide extra security on the network by blocking traffic between the student and administrative subnets (via a process known as "packet filtering"). For each LAN, three VLANs were set up so that students and administrators could be denied access to each other's areas without being denied access to common resources such as the servers themselves. In the absence of Layer 3 switching this packet filtering can also be done at the routers, but doing so would slow down the network.
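The effect of the packet-filtering rules can be sketched as a toy filter. The subnet numbers below are hypothetical stand-ins, not the district's actual allocations, and a real Layer 3 switch applies such rules in hardware rather than per-packet in software.

```python
import ipaddress

# Toy packet filter in the spirit of the VLAN rules described above:
# the student and administrative subnets cannot reach each other, but
# both can still reach common resources such as the servers.

STUDENT = ipaddress.ip_network("10.0.10.0/24")
ADMIN = ipaddress.ip_network("10.0.20.0/24")

def allow(src, dst):
    src = ipaddress.ip_address(src)
    dst = ipaddress.ip_address(dst)
    # Drop traffic between student and admin VLANs in either direction;
    # everything else (including traffic to the server VLAN) passes.
    if src in STUDENT and dst in ADMIN:
        return False
    if src in ADMIN and dst in STUDENT:
        return False
    return True
```

So a student machine can still reach a server at, say, 10.0.30.1, but a packet addressed to an administrative desktop is dropped at the switch.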
Switched Port Hubs The Town X school district network computers connect to their respective distribution frames (DFs) via 3Com's Superstack 3300 24-port Ethernet Switch; in the High School alone, there are 15 of these. (Appendix A has full details on hardware, software, labor and costs for the entire project.) Switched hubs were chosen for this implementation because of their speed. One advantage of switched port hubs is that the bandwidth is not shared, unlike in ordinary hubs. A second advantage is that packets are delivered only to the port the destination computer is attached to, instead of being broadcast to every computer connected to the hub.
Connecting to the WAN The Town X High School LAN connects to the WAN through a Cisco 2500 Router and a Motorola CSU/DSU. A router passes data packets from one IP subnet to another, and the CSU/DSU
acts somewhat like a modem: it translates the frame relay protocol from the telco loop (telephone company line) into TCP/IP (Transmission Control Protocol/Internet Protocol). In this case there are two connections through the router to two separate CSU/DSUs. One connection comes from the Internet server and routes a genuine Internet IP address to the ISP. The other comes from the TX blade on the Corebuilder and routes a private or internal IP address to the other LANs in the domain, where it is picked up by each LAN's respective CSU/DSU and routed to its Corebuilder. There too the IP addresses are private and are not routed to the outside world.
TCP/IP Addressing in the Town X Domain The IP addresses of machines directly connected to the Internet never start with 10, as that number is reserved for private networks. All the machines at the Middle School have an IP address starting with 10.0.10. At the High School and BOE the addresses are 192.168.*.*, which are also reserved for private networks. At the Mem School they are 10.0.9.*, and so on. The routers meanwhile have addresses of 10.0.11.*, 10.0.8.* and so on, using a convention where the .1 is on the WAN side and the .2 is on the LAN side; this helps the network engineers keep track when they are configuring the network. The only real-world IP address in the whole domain is the one that gets routed from the IIS server to the ISP. The ISP has given the Town X school district network a quarter of a class C address: in this case the IP addresses 209.R.S.128 through 209.R.S.191, with the .128 and .191 being the network and broadcast addresses respectively. The number of IP addresses available to the school district was not sufficient, making the use of private IP addresses for computers on the internal networks a necessity.
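The subnet arithmetic can be verified with Python's standard `ipaddress` module. Here 203.0.113.0/24, a reserved documentation range, stands in for the case's masked 209.R.S block; the arithmetic is identical.

```python
import ipaddress

# The internal ranges used in the district are private by definition:
assert ipaddress.ip_address("10.0.10.5").is_private
assert ipaddress.ip_address("192.168.1.20").is_private

# A quarter of a class C is a /26: 64 addresses, the first being the
# network address and the last the broadcast address, leaving 62
# usable host addresses.
block = ipaddress.ip_network("203.0.113.128/26")
print(block.network_address)     # 203.0.113.128
print(block.broadcast_address)   # 203.0.113.191
print(block.num_addresses)       # 64
print(len(list(block.hosts())))  # 62
```

The .128 network address and .191 broadcast address fall out of the /26 mask exactly as the case describes for the district's 209.R.S.128–191 allocation.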
Problems in Implementation Problems in the implementation of an IT solution can come in many forms, from technical/technological to financial and political. The wide area network was designed and implemented for the school district of a small town. The cost of networking the individual locations averaged about $125,000 (for a total cost of about $750,000). While the WAN had a fixed initial cost, it also had a recurring monthly cost. The WAN cabling was done by the local telephone company, which brought the frame relay connections to the individual locations at a cost of $1,200 each. There was also a monthly line charge of $400 per location, and Town X has six such connections. For a small municipality that was a considerable financial burden. One of the things that helped in Town X was a state-sponsored infrastructure grant program established specifically for the purpose of bringing schools online. There was also federal grant money available for these purposes. The Town X school district spent a lot of time and effort to make a very good case for getting the state and federal funds. Approximately 90 percent of the cost of implementing the WAN was covered by state and federal grants. What wasn't covered by grants was funded out of the municipal budget, with virtually no opposition even from the Board of Finance. Even though the remaining 10 percent of the cost was a significant amount of money for a small town, the Board of Education kept the Board of Finance involved and informed at all points in time. So in spite of the ever-present danger of financial problems, the WAN project managed to get through the implementation stage without any major hiccups. This was primarily due to a lot of work up front by the BOE members, who made sure that everyone on the town council was committed to the project.
The WAN project was overseen by an implementation team, which included a representative from the Board of Finance, the district technology coordinator and a Superintendent’s representative, who was also the grant writer for the Board of Education. These three spent a lot of time with representatives of the contractor to set up the implementation plan. Good planning ensured that the
contractor was aware of the school schedules and had the appropriate resources available when needed during the implementation. The implementation team, along with administrative and technical representatives from the contractor, met regularly to coordinate their efforts and keep the project on track. The meetings were more frequent during those phases of the implementation when there was a lot of activity, and less frequent during the slower phases. The meetings helped ensure that all the parties involved were aware of the situation at all points in time, and any deviations from the plan were discussed and then agreed upon. This approach worked quite well, and the project was completed essentially on schedule, taking roughly two months for the cabling of the various locations and the configuration of the network electronics. Most of the work was done in the summer or after school hours, with the school custodians providing the cablers and technicians access to the buildings, often until 11:00 at night. On the administrative side, the major headaches were in accounting for and coordinating the grant and municipal funding and allocating costs to each budget as appropriate. The contractor had prior experience working with school districts and on projects based on similar state and federal grants, so the contractor's representatives were able to provide input that helped administer the grants properly. There was one unexpected snag when asbestos was discovered in one of the wings of the High School, and work there had to be halted while the asbestos was removed. However, thanks to the comprehensive implementation plan, resources could be shifted to other school sites while the asbestos abatement took place. The fact that the affected area in the High School didn't include any of the wiring closets helped, as it did not significantly impede the phased completion of the project.
Benefits to the District Some of the benefits in this case are immediately apparent. Running administrative applications like Phoenix from a central location is not just a convenience; it represents significant savings in man-hours, and therefore money, to the town. The fact that each school can now enter data into the scheduling system, maintain student records and perform other administrative tasks without having to send hard copies of data to the BOE is a significant improvement in efficiency. Packet filtering at the routers and Layer 3 switches restricts access to the administrative side of the network while allowing the use of less sensitive services by others, thereby increasing the overall utility of the project. Another benefit is centralized Internet access. Having this access through a central point gives the network administrator the ability to establish an Internet use policy and enforce it without having to do so on a location-by-location basis. As mentioned before, a third-party product called Web Sense is used to implement URL filtering of potentially offensive materials (the use of this filtering is a politically charged issue, but it is common in a number of school districts). The use of the Internet in the classroom is becoming more prevalent. The students and teachers use the Internet to research current topics and get access to information that was not readily available before the WAN was set up. With Internet access the district is beginning to make many services available to the citizens in the community at large. School schedules, lunch menus and library card catalogues are all posted for access through the Internet. Students and faculty can retrieve their internal and/or external e-mail through the Web, whether on or off site. Eventually other services, such as distance learning, may be implemented in Town X.
CURRENT CHALLENGES / PROBLEMS FACING THE ORGANIZATION The Town X school district is considering expanding the scope of the wide area network. One of the locations in the school district, the Bus Garage, was not connected to the WAN due to lack of planned funding. This has resulted in perpetuating some of the inefficiencies that the WAN was
supposed to eliminate. Since the garage is not networked, the bus availability information and the bus schedules have to be transported manually from the garage to the BOE offices and back. A number of classrooms in the schools are still not connected to the network, which generates a feeling of inequity among the teachers and students assigned to the non-networked classrooms. While the connectivity is good in most locations, the High School seems to hog most of the available bandwidth on the WAN, and the ability to access information from the Internet seems to slow down while the High School is in session. There also seem to be some problems with the filtering service, as there are cases of objectionable material being found on some of the computers in the High School labs. Some of the lab computers have been used to serve up copyrighted material, while others run bandwidth-intensive applications that bog down the network.
ACKNOWLEDGMENT This case was researched on site in Town X and through many hours of conversation with the contractor's Senior Systems Engineer, the designer of this particular network.
FURTHER READING Stallings, W. (2001). Business Data Communications (4th ed.). NJ: Prentice Hall. Comer, D. (1999). Computer Networks and Internets (2nd ed.). NJ: Prentice Hall.
BIOGRAPHICAL SKETCH Rohit Rampal is Assistant Professor of Management Information Systems at the School of Business Administration, Portland State University, OR. He received his Ph.D. from Oklahoma State University. He has previously worked in the College of Business Administration, at the University of Rhode Island. His areas of research include telecommunications, information systems in manufacturing, virtual enterprises, forecasting, and neural networks. He has previously published in the International Journal of Production Research, and the Encyclopedia of Library and Information Science.
APPENDIX A: PROJECT COSTS

High School and Board of Education (BOE)

Cabling
  Classroom Drops (4 data/1 voice): 64 @ $450 = $28,800 (connect classrooms to each other and the Internet)
  Office Drops (2 data/1 voice): 30 @ $400 = $12,000 (connect offices to each other and the Internet)
  6-strand fiber between MDF and 5 IDFs: 5 @ $5,200 = $26,000 (connect all wings of the school together)
  Cabling Total: $66,800

Network Electronics
  3Com Superstack 3300 24-port Ethernet Switch: 15 @ $2,200 = $33,000 (connect servers to classrooms and offices)
  3Com Corebuilder 3500 Layer 3 Switch: 3 @ $8,000 = $24,000 (Layer 3 switch to connect fiber and add network security)
  3Com Corebuilder 3500 6-Port Fiber Blade: 7 @ $4,800 = $33,600 (connect all wings to servers)
  3Com Corebuilder 3500 6-Port TX Blade: 6 @ $3,000 = $18,000 (connect all servers to the network)
  Network Electronics Total: $108,600

Additional Network Hardware
  4-foot Category 5 patch cables: 316 @ $3 = $948 (patch rooms to network electronics)
  15-foot Category 5 line cords: 316 @ $4 = $1,264 (patch computers to network)
  APC 1000 Net UPS Rack Mount: 6 @ $829 = $4,974 (protect servers from faulty power and power outages)
  Fiber patch cables: 36 @ $62 = $2,232 (connect fiber to network electronics)
  Additional Network Hardware Total: $9,418

Network Servers
  LAN & WAN file servers: 2 @ $8,500 = $17,000 (NT server/primary DNS, mail server, backup domain/web server; built-in POP & SMTP mail, DNS service, WWW server, AppleShare file & print services)

Network Software
  MS BackOffice: 3 @ $1,400 = $4,200 (server operating system: NT, Mail, Web, Management)
  Windows BackOffice Client License: 300 @ $60 = $18,000 (license for clients to access servers)
  Network Software Total: $22,200

Labor (Hardware & Software Installation, Training & Preparation)
  Cabling: included
  Network hardware & hub installation: 30 @ $100 = $3,000 (installation and configuration)
  Network server install & configuration: 60 @ $100 = $6,000 (installation and configuration)
  Desktop configuration: 316 @ $75 = $23,700 (installation and configuration)
  Training: 40 @ $75 = $3,000 (administration of network)
  Labor Total: $35,700

High School and BOE Total: $259,718
APPENDIX A CONTINUED

Middle School

Cabling
  Classroom Drops (2 data): 68 @ $200 = $13,600 (connect classrooms to each other and the Internet)
  Cabling Total: $13,600

Network Electronics
  3Com Superstack 3300 24-port Ethernet Switch: 6 @ $1,450 = $8,700 (connect servers to classrooms and offices)
  3Com Corebuilder 3500 Layer 3 Switch: 1 @ $8,000 = $8,000 (Layer 3 switch to connect fiber and add network security)
  3Com Corebuilder 3500 6-Port TX Blade: 2 @ $4,800 = $9,600 (connect all servers to the network)
  Network Electronics Total: $26,300

Additional Network Hardware
  4-foot Category 5 patch cables: 136 @ $3 = $408 (patch rooms to network electronics)
  15-foot Category 5 line cords: 136 @ $4 = $544 (patch computers to network)
  APC 1000 Net UPS Rack Mount: 3 @ $829 = $2,487 (protect servers from faulty power and power outages)
  Additional Network Hardware Total: $3,439

Network Servers
  LAN & WAN file servers: 3 @ $8,500 = $25,500 (built-in POP & SMTP mail, DNS service, WWW server, AppleShare file & print services)

Network Software
  MS BackOffice: 3 @ $1,400 = $4,200 (server operating system: NT, Mail, Web, Management)
  Windows BackOffice Client License: 136 @ $60 = $8,160 (license for clients to access servers)
  Network Software Total: $12,360

Labor (Hardware & Software Installation, Training & Preparation)
  Cabling: included
  Network hardware & hub installation: 10 @ $100 = $1,000 (installation and configuration)
  Network server install & configuration: 60 @ $100 = $6,000 (installation and configuration)
  Desktop configuration: 136 @ $75 = $10,200 (installation and configuration)
  Training: 30 @ $75 = $2,250 (administration of network)
  Labor Total: $19,450

Middle School Total: $100,649
APPENDIX A CONTINUED

Mem School

Cabling
  Classroom Drops (4 data/1 voice): 27 @ $450 = $12,150 (connect classrooms to each other and the Internet)
  Office Drops (2 data/1 voice): 15 @ $400 = $6,000 (connect offices to each other and the Internet)
  6-strand fiber between MDF and 2 IDFs: 2 @ $5,200 = $10,400 (connect all wings of the school together)
  Cabling Total: $28,550

Network Electronics
  3Com Superstack 3300 24-port Ethernet Switch: 5 @ $2,200 = $11,000 (connect servers to classrooms and offices)
  3Com Corebuilder 3500 Layer 3 Switch: 1 @ $8,000 = $8,000 (Layer 3 switch to connect fiber and add network security)
  3Com Corebuilder 3500 6-Port Fiber Blade: 1 @ $4,800 = $4,800 (connect all wings to servers)
  3Com Corebuilder 3500 6-Port TX Blade: 1 @ $3,000 = $3,000 (connect all servers to the network)
  Network Electronics Total: $26,800

Additional Network Hardware
  4-foot Category 5 patch cables: 138 @ $3 = $414 (patch rooms to network electronics)
  15-foot Category 5 line cords: 138 @ $4 = $552 (patch computers to network)
  APC 1000 Net UPS Rack Mount: 3 @ $829 = $2,487 (protect servers from faulty power and power outages)
  Fiber patch cables: 4 @ $62 = $248 (connect fiber to network electronics)
  Additional Network Hardware Total: $3,701

Network Servers
  LAN & WAN file servers: 3 @ $8,500 = $25,500 (NT server/primary DNS, mail server, backup domain/web server; built-in POP & SMTP mail, DNS service, WWW server, AppleShare file & print services)

Network Software
  MS BackOffice: 3 @ $1,400 = $4,200 (server operating system: NT, Mail, Web, Management)
  Windows BackOffice Client License: 108 @ $60 = $6,480 (license for clients to access servers)
  Network Software Total: $10,680

Labor (Hardware & Software Installation, Training & Preparation)
  Cabling: included
  Network hardware & hub installation: 20 @ $100 = $2,000 (installation and configuration)
  Network server install & configuration: 60 @ $100 = $6,000 (installation and configuration)
  Desktop configuration: 138 @ $75 = $10,350 (installation and configuration)
  Training: 40 @ $75 = $3,000 (administration of network)
  Labor Total: $21,350

Mem School Total: $116,581
APPENDIX A CONTINUED

Cen School

Cabling
  Classroom Drops (4 data/1 voice): 34 @ $450 = $15,300 (connect classrooms to each other and the Internet)
  Office Drops (2 data/1 voice): 15 @ $400 = $6,000 (connect offices to each other and the Internet)
  6-strand fiber between MDF and 2 IDFs: 2 @ $5,200 = $10,400 (connect all wings of the school together)
  Cabling Total: $31,700

Network Electronics
  3Com Superstack 3300 24-port Ethernet Switch: 7 @ $2,200 = $15,400 (connect servers to classrooms and offices)
  3Com Corebuilder 3500 Layer 3 Switch: 1 @ $8,000 = $8,000 (Layer 3 switch to connect fiber and add network security)
  3Com Corebuilder 3500 6-Port Fiber Blade: 2 @ $4,800 = $9,600 (connect all wings to servers)
  3Com Corebuilder 3500 6-Port TX Blade: 2 @ $3,000 = $6,000 (connect all servers to the network)
  Network Electronics Total: $39,000

Additional Network Hardware
  4-foot Category 5 patch cables: 166 @ $3 = $498 (patch rooms to network electronics)
  15-foot Category 5 line cords: 166 @ $4 = $664 (patch computers to network)
  APC 1000 Net UPS Rack Mount: 3 @ $829 = $2,487 (protect servers from faulty power and power outages)
  Fiber patch cables: 4 @ $62 = $248 (connect fiber to network electronics)
  Additional Network Hardware Total: $3,897

Network Servers
  LAN & WAN file servers: 3 @ $8,500 = $25,500 (NT server/primary DNS, mail server, backup domain/web server; built-in POP & SMTP mail, DNS service, WWW server, AppleShare file & print services)

Network Software
  MS BackOffice: 3 @ $1,400 = $4,200 (server operating system: NT, Mail, Web, Management)
  Windows BackOffice Client License: 136 @ $60 = $8,160 (license for clients to access servers)
  Network Software Total: $12,360

Labor (Hardware & Software Installation, Training & Preparation)
  Cabling: included
  Network hardware & hub installation: 20 @ $100 = $2,000 (installation and configuration)
  Network server install & configuration: 60 @ $100 = $6,000 (installation and configuration)
  Desktop configuration: 166 @ $75 = $12,450 (installation and configuration)
  Training: 40 @ $75 = $3,000 (administration of network)
  Labor Total: $23,450

Cen School Total: $135,907
440 Seco, Guzmán, Segura, Fernández & Morillo
An Experience of Software Process Improvement Applied to Education: The Personal Work Planning Technique D. Antonio de Amescua Seco, Javier García Guzmán, María Isabel Sánchez Segura, Paloma Martínez Fernández and Juan Llorens Morillo Carlos III University of Madrid, Spain
EXECUTIVE SUMMARY This case describes the use of the Personal Work Planning (PWP) technique as a time management tool for student practicals on the Software Engineering II course at Carlos III University. It analyzes the results obtained and presents the methodology used to implement activities associated with the Personal Work Planning technique in an academic institution. In addition, an empirical study was carried out to determine the level of student satisfaction after using this technique. This case study concludes that many students realized the usefulness of PWP for their assignments. This technique is of invaluable help to lecturers who wish to improve the course curriculum and the practicals.
BACKGROUND
Carlos III University of Madrid, founded in 1989, is a public university with an enrollment of 13,342 students and a staff of 1,231 (910 lecturers and 321 administrative staff). It offers a total of 32 degree courses and has three faculties: Social Sciences and Law; Humanities, Documentation and Communication; and the Engineering School. The Engineering School offers studies in Industrial Engineering, Computer Sciences and Telecommunications, among others. The university teaching staff is divided into 16 departments, which organize research and lectures in their respective areas. Carlos III University opened with modern, flexible and multidisciplinary curricula and, from the beginning, made measuring and controlling its basic processes an integral part of its policy. Every year, a University Quality Committee implements Improvement Programs and carries out a Teaching Quality Evaluation Program for all degree courses. Every semester, students complete questionnaires to evaluate both courses and lecturers; for the Computer Science degree, the evaluation has been positive. At present, the questionnaires consist of 13 questions: one assesses student interest in the subject matter, eight assess the lecturer, and the remaining four assess the practicals and the time dedicated to the course. Copyright © 2002, Idea Group Publishing.
This experiment was based on an evaluation of the Computer Science course. The aim was to determine the effort students dedicated to developing their practicals. With real data, it becomes possible to design better courses in which the total number of course hours correlates with the time dedicated to practicals, and thus to take a more realistic approach to achieving the course objectives.
SETTING THE STAGE
The main objective of software process improvement (SPI) is to increase the quality of the products and services that a software company provides by improving the quality of its production processes. One of the first research centers for software process improvement was the Software Engineering Institute (SEI) at Carnegie Mellon University, in particular under Watts Humphrey. The SEI developed the Capability Maturity Model (CMM) (Paulk et al., 1993) based on Humphrey's software process improvement principles (Humphrey, 1989). SPI has since been codified in the ISO 15504 standard (ISO, 1997), an international reference model for software process improvement. However, experience with SPI has shown that improvement programs often fail because of the bureaucratic nature of the improvement designs and because they require human and material resources that many organizations cannot provide.
Research has shown that one of the success factors of a software project is the capability of the personal software process used by its software engineers. Watts Humphrey defined the main purpose of the Personal Software Process (PSP) as the continuous improvement of the individual activities a software engineer performs within a software project. The Software Engineering curriculum must, therefore, help software professionals to do their job well; that is, to design and develop high-quality software products at the agreed cost and on schedule. This will increase the quality of the products and services provided by software companies. In his description of the PSP, Humphrey (1997) introduces software process principles to teach students the disciplined personal practice needed to produce high-quality work. In order to develop quality software systems, students must learn to plan and control their work.
Previous case studies (Lisack, 2000) concluded that many students failed to recognize the benefits of such a process and felt that it only took time away from programming. The authors of this experiment firmly believe that the PSP should be taught across the curriculum, with different aspects corresponding to different courses: for example, individual task management would be taught during a software project management course, and software error prevention and detection during a software development course. For this reason, the techniques proposed in the PSP for planning individual work were extracted and modified; this is how the Personal Work Planning (PWP) technique came about. PWP was taught during a software project management course in Information Systems, a specialization of Computer Science. PWP techniques help to improve the individual process used by software engineers, since previously registered personal process performance can be retrieved to organize a new task and to estimate its size, duration and effort. Consequently, software engineers can reduce the time spent on re-work resulting from poor task organization, which means that software products can be delivered on time while satisfying the previously determined quality levels.
Personal Work Planning Technique
In this case, a plan is the document that describes how a specific project is to be developed: what is to be done, when, and with what time and effort. The main contents of the plan were the tasks to be accomplished, their starting and finishing dates and the time needed to execute them. Time could be spent on different work categories, where a work category is a set of tasks of the same type; for instance, attending classes, studying or doing exercises.
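The plan contents just described can be sketched as a tiny data model. This is purely illustrative: the study used Excel spreadsheets, and every class and field name below is an assumption, not part of the original technique.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the plan described above: tasks with dates and
# effort, grouped into work categories (class attendance, studying, ...).

@dataclass
class Task:
    name: str
    category: str          # e.g. "class", "study", "exercises", "practical"
    start: str             # planned starting date, e.g. "23/03/00"
    finish: str            # planned finishing date
    estimated_hours: float
    actual_hours: float = 0.0

@dataclass
class Plan:
    project: str
    tasks: list = field(default_factory=list)

    def hours_by_category(self):
        """Summarize estimated hours per work category."""
        totals = {}
        for t in self.tasks:
            totals[t.category] = totals.get(t.category, 0.0) + t.estimated_hours
        return totals

plan = Plan("Software Engineering II practicals")
plan.tasks.append(Task("Requirements document detailed reading", "practical",
                       "23/03/00", "29/03/00", 6.0))
plan.tasks.append(Task("Attend lectures", "class", "23/03/00", "29/03/00", 4.0))
print(plan.hours_by_category())  # {'practical': 6.0, 'class': 4.0}
```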
In this experiment, a personal work planning technique derived from the PSP was selected to teach students the importance of using a disciplined method to manage the time and effort each individual spent on a software project. The following steps, proposed in the PSP, help students prepare a reliable and effective plan for their personal work on a software project:
• Start with an explicit statement of the work to be done and check to ensure that it is what the customer requested.
• Break up projects that require more than a few days' work into smaller tasks and estimate each task separately. The added detail will improve precision and will most likely improve accuracy as well.
• Base estimates of this work on historical data from previous work done.
• Record estimates and compare them with actual results.
The tasks for Personal Work Planning can be seen in greater detail in Diagram 1. We followed most of Humphrey's recommendations, but with some modifications in the specification of this technique. The lecturers prepared an initial planning chart for students (see Table 3). This chart was used to estimate the time needed to develop their practicals. The main contents of the chart were:
• A work breakdown structure (WBS) of the practicals, which included all the weekly tasks for the semester.
• A suggested set of task categories students could spend their time on, in order to facilitate the process of planning and summarizing the weekly time. The task categories were: class attendance, studying, doing exercises, practicals and other.
The daily registration table and the weekly activity balance (see Table 6 and Table 4) are the same as those proposed in the PSP.

[Diagram 1: PWP Technique — flowchart: define the scope of the work → determine the tasks needed to do the job → estimate size, effort and duration → perform the planned tasks → collect data on task performance → do a weekly balance of the tasks performed → if it is not the last week, replan the needed tasks and repeat.]

Lecturers introduced a new step at the end of the process, modifying the personal work planning process proposed in the PSP, to help the students learn how to plan and estimate time for their assignments. Every week, each group doing the experiment would use the current week's experience to refine the plan for the following week's estimations.
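Humphrey's record-and-compare cycle can be illustrated with a short sketch. Correcting a new estimate by the historical actual-to-estimated ratio is a simplification introduced here for illustration (the PSP defines more elaborate estimation methods), and the function name is an assumption.

```python
def corrected_estimate(raw_estimate, history):
    """Adjust a raw estimate using the historical actual/estimated ratio.

    history: list of (estimated_hours, actual_hours) pairs for similar,
    already completed tasks. With no history, the raw estimate is returned.
    """
    if not history:
        return raw_estimate
    total_est = sum(e for e, _ in history)
    total_act = sum(a for _, a in history)
    return raw_estimate * (total_act / total_est)

# Example: past tasks were underestimated by 25% on average, so a new
# 8-hour estimate is corrected upwards.
history = [(4.0, 5.0), (8.0, 10.0)]  # (estimated, actual)
print(corrected_estimate(8.0, history))  # 10.0
```

Recording each week's estimates and actuals is exactly what makes such a correction possible, which is why PWP insists on the daily registration and weekly balance tables.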
Objectives
The purpose of this experiment was to find out if the Personal Work Planning technique was a valid tool for lecturers to design and plan a good undergraduate degree course. The objectives were:
• To observe the level of student satisfaction when using this technique.
• To observe the effect of this technique on student grades.
• To determine whether the findings of this technique provided useful data to improve courses.
Participants The experiment was done by Software Engineering II students in their third year of Information Systems, a specialization of Computer Science. The curriculum comprises basic software project management concepts and techniques. These students already have knowledge of Structured Analysis and Database Design, which are essential for the practicals. The course is offered in both day and evening classes, but we decided that the day class would be the pilot group and the evening class would be the control group. All the students in both pilot and control classes were divided into groups of three for the practicals. The pilot group was made up of 23 groups, with a total of 69 students; the control class had 18 groups, with a total of 54 students. Both classes had to do the same practicals but only the pilot groups used the Personal Work Planning technique. Each pilot group was considered a unit for the purpose of this experiment. Two lecturers per class and the coordinator of Software Engineering II supervised the classes. These same lecturers taught the practicals for both day and evening classes.
CASE DESCRIPTION
Over the last few years, Information Systems students have been complaining about the workload. Lecturers have also observed a decrease in the quality of the students' work, caused by the somewhat chaotic system used for the practicals. We decided it would be convenient to use PWP so that:
• students could learn to develop their practicals from a feasible plan, since experience in planning leads to progressive improvement of the plan itself;
• students could develop the practical aspect of the course methodically by using the plan that they themselves had prepared;
• the proposed practicals would be easier for students to handle, so products could be submitted on time;
• the quality of the work would improve. Many mistakes were the result of working too quickly, making serious mistakes in the process, or a lack of attention to detail.
The lecturers at Carlos III were, and are, driven by a culture of continuous improvement in the quality of the courses. For five weeks, they defined and planned the procedure to test the benefits of PWP in the field of education. They also designed the support materials that students had to use for the experiment (see Appendix). The norms and the procedures defined for this experiment are described below.
Norms Students involved in the pilot project had to faithfully record the data that indicated the real time taken to carry out each task. Therefore, they needed to plan their assignment thoroughly.
Experiment Development
The management practices included in PWP are also used in software project management, so we decided to introduce them in Software Engineering II. Students worked in groups of three for the practicals, which involved software project analysis, estimation, organization and planning, in addition to basic tracking tasks. At the beginning of the experiment, lecturers provided students from both day and evening classes with the specifications for developing the software project, and deliberately divided the work into small, easily planned tasks spread out over nine weeks. They prepared a table (Appendix, Table 3) called "tasks distributed by weeks", so, to begin with, students already had a task guide divided into weeks for estimating the hours necessary for each weekly task. In order to verify the objectives proposed, we decided that only one class should use PWP for their practicals; although PWP techniques were not explicitly taught, they formed part of the procedure for the practicals. In the pilot class, groups were told how to fill in the table of tasks distributed by weeks (Table 3), the table of daily registration (for students' use only) and the table of weekly balance (Table 6), as well as how to send this information to lecturers. These tables were prepared on an Excel spreadsheet and distributed to the pilot groups as follows:
• Students had to fill in Table 3 with estimated times before starting the practicals, as they had no previous experience. This table was then sent to the lecturers by e-mail. Every week for the next nine weeks, using the knowledge acquired, each pilot group revised the initial planning in order to reassign times to the tasks which had not been completed. An updated table was forwarded to the lecturers.
• The pilot groups recorded the time spent on practicals in the daily registration table. This information was then used to complete the table of weekly balance (Table 6).
It is important to emphasize that the table of weekly balance was filled in after completing the tasks assigned to each week; therefore, this information represented the real time spent on each task. During the nine weeks, the control groups used Table 3 only to follow what had to be done each week, and developed the tasks corresponding to the current week. At the end of the semester, the pilot groups were given a questionnaire (see Appendix, Questionnaire 1) to find out their degree of satisfaction with the PWP technique. This questionnaire evaluated eight statements on a scale ranging from disagree (1) to completely agree (4); students could also choose "Don't Know." Every student was expected to fill in the questionnaire, although it was not obligatory. To complete the experiment, lecturers evaluated:
• data collected from the table of tasks distributed by weeks (Table 3) and the table of weekly balance (Table 6), and
• data collected from the satisfaction questionnaires (Questionnaire 1).
Diagram 2 shows the experiment activities using UML notation (Booch et al., 1999).
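The weekly revision step described above (reassigning time to the tasks which had not been completed) might look like the following in code. This is a sketch under assumed names; the groups actually worked with Excel sheets.

```python
def weekly_replan(planned, completed, actuals):
    """Reassign time for tasks not yet completed.

    planned:   dict task -> remaining estimated hours
    completed: set of task names finished this week
    actuals:   dict task -> hours actually spent this week
    Returns the revised plan for the following weeks.
    """
    revised = {}
    for task, estimate in planned.items():
        if task in completed:
            continue  # finished: drop the task from the plan
        spent = actuals.get(task, 0.0)
        # Carry forward the unfinished work; never below zero.
        revised[task] = max(estimate - spent, 0.0)
    return revised

plan = {"DFD beta version": 6.0, "Data model construction": 5.0}
plan = weekly_replan(plan, completed={"DFD beta version"},
                     actuals={"Data model construction": 2.0})
print(plan)  # {'Data model construction': 3.0}
```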
Results Results Obtained From the Tracking Sheets After completing the practicals, the data (see Table 3 and Table 6) were analyzed to identify the evolution of the 23 pilot groups in their time estimations. The final number of groups evaluated was 19. Groups 8, 12, 13 and 23 were excluded because they submitted insufficient or unmanageable data. While 17.4% of the groups did not understand the PWP techniques, 82.6% finished the experiment
[Diagram 2: Experiment Activities Diagram — UML activity diagram with swimlanes for the Pilot Class, the Lecturers and the Control Class. The lecturers design and plan the experiment (5 weeks) and select the pilot and control groups (1 week); both classes form groups (1 week), and the pilot class receives specific instructions for filling in the tracking sheets. Then, for 9 weeks, the pilot class fills in and sends the table of tasks distributed by weeks, develops the current week's tasks, fills in and sends the table of weekly balance, and revises the table of tasks distributed by weeks, while the lecturers receive the students' data and the control class develops the current week's tasks only. After the last week, the pilot class fills in the satisfaction questionnaires and the lecturers analyse the data and compare the results (5 weeks).]
successfully. The relationship between the estimated average time (EAT) for all the groups and the real average time (RAT) invested is shown in Table 1. The average EAT was 11.25 hours and the average RAT was 10.23 hours. The data from Table 1 are plotted in Graph 1, which also includes the averages of the EAT and RAT data. Graph 2 reveals serious errors in estimation (not unusual with inexperienced people). They
Table 1: Estimated and Real Average Time for the 19 Groups Evaluated

                               Week1  Week2  Week3  Week4  Week5  Week6  Week7  Week8  Week9
Estimated Average Time (EAT)   12.17   9.16   8.42   6.26   9.91  21.32   7.97  10.28  16.18
Real Average Time (RAT)         8.66  11.65  14.10   8.05  13.03   7.91  10.41   8.21   9.65
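As a quick cross-check, the overestimation percentages quoted in the surrounding text can be recomputed from the weekly values in Table 1 (an illustrative script, not part of the original study):

```python
# Weekly averages from Table 1 (hours), Weeks 1-9.
eat = [12.17, 9.16, 8.42, 6.26, 9.91, 21.32, 7.97, 10.28, 16.18]
rat = [8.66, 11.65, 14.10, 8.05, 13.03, 7.91, 10.41, 8.21, 9.65]

# Weeks in which the groups overestimated (EAT > RAT).
over = [week for week, (e, r) in enumerate(zip(eat, rat), start=1) if e > r]
print(over)                              # [1, 6, 8, 9]
print(round(100 * len(over) / 9, 1))     # 44.4 (% of weeks overestimated)

# Excluding week 6 (the METRICA2 outlier discussed in the text):
no6 = [(e, r) for w, (e, r) in enumerate(zip(eat, rat), start=1) if w != 6]
over6 = sum(1 for e, r in no6 if e > r)
print(round(100 * over6 / len(no6), 1))  # 37.5
```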
[Graph 1: Relationship Between Estimated and Real Average Time — weekly averages of the groups; line plot of hours dedicated per week (Week1 to Week9) for the Estimated Average Time (EAT) and the Real Average Time (RAT), together with the overall averages of the EAT and RAT data.]
overestimated the time needed in 44.4% of the cases. In Graph 1, we can observe that the average EAT is greater than the average RAT. Also in Graph 1, the relation between EAT and RAT does not follow a pattern. This may be because the study of METRICA2 (Metrica, 1993), a full software development methodology which was assigned in week 6 (see Table 3), was a difficult task for the students. If we eliminate the data from week 6, we get Graph 3, where a clear tendency over the remaining weeks can be observed. Graph 4 shows the difference between EAT and RAT without the data from week 6: in 37.5% of the cases, the estimated times were greater than the real times, and in 62.5% of the cases, real times were greater than estimated times.

Results Obtained from the Level of Student Satisfaction
Once the students had concluded the practicals using the PWP technique, they were given a questionnaire with eight statements to find out their degree of satisfaction (see Appendix, Questionnaire 1). The results obtained appear in Table 3. Each one of the questions was dealt with independently and the data obtained appear in Graph
[Graph 2: Difference Between EAT and RAT — hours dedicated; weekly difference between estimated and real average time for Week1 to Week9.]
[Graph 3: Relationship Between Estimated and Real Average Time Less Data From Week 6 — weekly averages of the groups without week 6; hours dedicated for EAT and RAT.]
[Graph 4: Difference Between EAT and RAT Less Data From Week 6 — hours dedicated; weekly difference between estimated and real average time for the weeks without week 6.]
Table 3: Results Obtained From the Satisfaction Questionnaires

             Value 1:   Value 2:      Value 3:   Value 4:           Don't
Statement    Disagree   Indifferent   Agree      Completely Agree   Know
1                   1         14            36         10              0
2                   1         19            26         15              0
3                   2         22            23         12              2
4                   1         19            32          9              0
5                   5         15            28         13              0
6                   6         17            24         14              0
7                   6         25            21          7              2
8                   3          9            26         19              4
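The modal response per statement can be recomputed directly from the counts in Table 3 (an illustrative script, not part of the original study; "Don't Know" answers are excluded):

```python
# Response counts per statement from Table 3, columns Value 1..Value 4.
counts = {
    1: [1, 14, 36, 10], 2: [1, 19, 26, 15], 3: [2, 22, 23, 12],
    4: [1, 19, 32,  9], 5: [5, 15, 28, 13], 6: [6, 17, 24, 14],
    7: [6, 25, 21,  7], 8: [3,  9, 26, 19],
}
# For each statement, find the value (1..4) with the highest count.
modes = {s: c.index(max(c)) + 1 for s, c in counts.items()}
print(modes)
# Value 3 (Agree) is the mode for every statement except Statement 7,
# where Value 2 (Indifferent) narrowly leads (25 vs. 21).
```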
5. Value 3 (Agree) was the modal response for seven of the eight statements; only for Statement 7 was Value 2 (Indifferent) chosen slightly more often.
Problems Encountered During the Experiment
In order to carry out this experiment correctly, we had to solve several problems. They were related to misunderstandings of some aspects of the task and the technique to be used, as well as to issues of planning.
First, because of the difficulty students experienced with planning the first week, an entire two-hour class had to be dedicated to it. For their practicals, students found that the time they had estimated for class attendance and the relevant exercises corresponded not to the week in question but to the following one. It was decided that students should write in the time they estimated even though there was no correlation between the two, since it was time they were going to use and had to account for. It should be borne in mind that this information was not included in the conclusions. In addition, under class and exercise times, students included some hours that were not directly related to Software Engineering II but to other courses taught at the same time. This was considered acceptable because they could have used that time for the practicals. Finally, students were able to modify the initial planning of the tasks (estimated planning) while
[Graph 5: Satisfaction Questionnaire Results — number of students per response value (Value 1: Disagree, Value 2: Indifferent, Value 3: Agree, Value 4: Completely Agree, and Don't Know) for each of the eight statements.]
the project was in progress. However, this was only feasible if the changes were made before the tasks had been completed. It was thought that, based on similar tasks already completed, the students themselves would provide the information needed to re-estimate the times. It would seem that this was not done properly, because the tasks differed radically from week to week.
CURRENT CHALLENGES/PROBLEMS FACING ORGANIZATIONS

Conclusions from the Levels of Student Satisfaction
After analyzing the questionnaires in detail, we came to the following conclusions:
• Students were able to track and control the state of the assignments of their practicals.
• The efficiency of team members doing the practicals improved in many cases.
• The PWP technique was well accepted for use in the academic world, as well as for its application in the real world.
Conclusions from the Tracking Sheets
From a detailed analysis of the tracking sheets, we can conclude that the PWP technique was well accepted among the students. It must be added, however, that students had difficulty learning to use the technique, because the activities were completely different from week to week and they did not know how to use the experience gained from previous activities to estimate the time needed to develop the ones that followed. At the beginning of the semester, the times estimated for the activities were far greater than the real times spent carrying them out. As the semester progressed, the difference between estimated and real time narrowed, until estimated times dropped to just below the real times needed for the tasks. This may occur because, as their knowledge in a particular area increases, students tend to
overestimate their capacity to work and thereby make shorter time estimates for their tasks. However, this tendency was distorted in the sixth week of the semester, as students had to select and customize an unknown software development method. The workload for that week became heavy and the students overestimated the time they needed. In the future, lecturers will reduce the workload for that week, allowing students to use an already tried and tested methodology.
Graph 6 shows the grades awarded to the students upon completion of their practicals. The results show that the average grades of the two classes were practically identical: the control class obtained an average of 7.05, while the pilot class had an average of 6.98. On the other hand, the number of students who exceeded 8.0 was greater in the pilot class. This means that the effort invested in the PWP technique paid off: in many cases, the same products were obtained but with better quality. Lastly, it must be stated that, using the PWP technique, the students delivered their work on time and with acceptable quality levels. We can therefore conclude that applying the PWP technique does not imply a substantial increase in each student's workload. There was a general sense of satisfaction from achieving the proposed tasks and a clear understanding of the benefits of these techniques while the work was in progress.
The final conclusion is that this technique is suitable for both students and lecturers. It also offers a valuable tool for analyzing what happens during the semester, and for designing and planning better courses. In order to apply the PWP technique successfully, as described in this case, it is important that lecturers consider the following issues:
• The practicals must be possible to develop, without many difficulties, in a period of time corresponding to a semester.
• The practicals must be decomposed into small weekly tasks.
• Some of these tasks must be similar, so that students can use their accumulated experience to estimate future work.
Students must also understand that recording the time dedicated to their practicals is not intended to control their work; the goal of this data gathering is to acquire the experience needed to estimate the effort of future practical tasks correctly.
[Graph 6: Distribution of Grades in Both Classes — % frequency of grades (0 to 10) for the pilot and control classes.]
FUTURE TRENDS We plan to repeat this experiment on courses in which problems with practicals have been detected. That is why we are going to present the results of this experiment to academics in charge of degree programs, such as deans and heads of departments. We are preparing training material on the Personal Work Planning technique and how to implement it for lecturers. A program will be designed to institutionalize this practice in our university if further results confirm the success of this experiment.
ACKNOWLEDGMENTS The authors would like to thank the students of Software Engineering II (99/00) for their invaluable contribution and interest in carrying out this research.
FURTHER READING
Disney, A. M. and Johnson, P. M. (1999). A critical analysis of PSP data quality: Results from a case study. Empirical Software Engineering, 4(4).
Hilburn, T. B. (1999). PSP metrics in support of software engineering education. Proceedings of the 12th Conference on Software Engineering Education and Training. IEEE Computer Society, Los Alamitos, CA, USA, 135-136.
Humphrey, W. S. (1995). Introducing the personal software process. Annals of Software Engineering, 1, 311-325. Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA, USA.
Brennan, L. L. (2000). Learning technology management while teaching technology management: A trial of distance learning in higher education. Annals of Cases on Information Technology, Vol. 2.
Miller, J. and Mingins, C. (1998). Putting the practice into software engineering education. Proceedings of the 1998 International Conference on Software Engineering: Education and Practice. IEEE Computer Society, Los Alamitos, CA, USA, 200-208.
Mingins, C., Miller, J., Dick, M. and Postema, M. (1999). How we teach software engineering. JOOP, 11(9), 64-66, 74.
Sauer, L. D., Lindquist, T. E. and Cairney, J. (1999). Tracking personal processes in group projects. Proceedings of the Twenty-Third Annual International Computer Software and Applications Conference. IEEE Computer Society, Los Alamitos, CA, USA, 364-369.
The Personal Software Process (PSP) was developed by the Software Engineering Institute (SEI). Canadian site: http://www.cs.usask.ca/grads/vsk719/academic/856/project/node3.html
Software Engineering Institute (SEI): http://www.sei.cmu.edu/tsp/ — Building High Performance Teams Using Team Software Process (TSP) and Personal Software Process (PSP).
East Tennessee State University's Personal Software Process Studio home page: http://wwwcs.etsu.edu/psp/
Personal Software Process (PSP) for Engineers. Spanish site: http://esi.es/Training/Catalog/pspeng_desc.html
Personal Software Process. Educational site: http://psp.distance.cmu.edu/
REFERENCES
Batini, C. (1992). Conceptual Database Design.
Booch, G., Rumbaugh, J. and Jacobson, I. (1999). The Unified Modeling Language. Addison Wesley.
Humphrey, W. S. (1999). Introduction to the Personal Software Process. SEI Series in Software Engineering. Addison Wesley.
Humphrey, W. S. (1989). Managing the Software Process. Massachusetts: Addison Wesley Publishing Company.
ISO (1997). ISO/IEC Std. 15504: Software Process Improvement and Capability Determination. ISO/IEC JTC 1.
Lisack, S. K. (2000). The Personal Software Process in the classroom: Student reactions (an experience report). Thirteenth Conference on Software Engineering Education and Training. IEEE Computer Society, Los Alamitos, CA, USA, 322.
METRICA versión 2: Metodología de planificación y desarrollo de sistemas de información (1993). Instituto Nacional de las Administraciones Públicas.
Paulk, M. C., Garcia, S. M., Chrissis, M. B. and Bush, M. (1993). Capability Maturity Model for Software, Version 1.1. Technical Report CMU/SEI-93-TR-25. Software Engineering Institute, Carnegie Mellon University.
United Kingdom Software Metrics Association (UKSMA) (1998). MK II Function Point Analysis Counting Practices Manual, Version 1.3.1.
Yourdon, E. (1989). Modern Structured Analysis. Prentice Hall.
BIOGRAPHICAL SKETCHES
D. Antonio de Amescua Seco holds a Ph.D. in Computer Science and is a Full Professor in the Computer Science Department of Carlos III University of Madrid. He worked as a researcher at the Polytechnic University of Madrid from 1983 to 1991, and from 1991 to date at Carlos III University of Madrid. He has also worked in a public company (Iberia Airlines) as a software engineer and in a private company (Novotec Consultores) as a software engineering consultant. His research interests include new software engineering methods, with results published in papers and congresses. He was the research project leader for the development of the Information System Development Methodology for the Spanish Administration and participated in other projects sponsored by the European Union.
Javier García Guzmán has a degree in Computer Science Engineering. He is an Assistant Lecturer at Carlos III University of Madrid and has six years of experience as a software engineer and consultant, especially in software project and process management and software engineering methods. He was a researcher in the project for the development of the Information System Development and Maintenance Methodology for the Spanish Administration. He has also participated in agreements between Carlos III University and Spanish companies to develop information systems and to evaluate software development methods. He has taught in different training programs related to software engineering, and is the author of several books and papers in software engineering congresses.
María-Isabel Sánchez-Segura has a degree in Computer Science Engineering and a Master's in Software Engineering and Knowledge Engineering. She is an Assistant Lecturer at Carlos III University of Madrid.
She is a member of the “Virtual Environments Group” of the UPM, where she has researched in several projects related to the definition of Software Technology Transition Packages for Spanish enterprises and the development of Virtual Spaces for Individual and Collective Presence and Interaction (ESPRIT Program). Her research interests include software engineering, technology transfer and multi-user virtual environments. She has several publications and communications in international congresses. Paloma Martínez Fernández received her degree in Computer Science from the Universidad Politécnica of Madrid in 1992. Since 1992 she has been working as an assistant lecturer in the Department of Computer Science at Universidad Carlos III of Madrid. In 1998 she obtained her Ph.D. in Computer Science from the Universidad Politécnica of Madrid. She currently teaches Advanced Databases and Natural Language Processing. She has worked on several European and national research projects on Natural Language Processing, advanced database technologies, knowledge-based systems and software engineering.
An Experience of Software Process Improvement Applied to Education 453
APPENDIX

Table 3: Table of Tasks Distributed by Weeks

Header fields of the form: Name of course; Group Nº; Components; Practical Nº; Date; Work to be done.

For each task the form records an Estimated Time and the actual time spent per week, broken down into Class, Study, Exercises and Practical work, with Total time rows at the bottom (all time cells are zero in the blank template). The weekly columns cover nine periods: from 23/03/00 to 29/03/00, from 30/03/00 to 05/04/00, from 06/04/00 to 12/04/00, from 13/04/00 to 19/04/00, from 27/04/00 to 03/05/00, from 04/05/00 to 10/05/00, from 11/05/00 to 17/05/00, from 18/05/00 to 24/05/00, and from 25/05/00 to 01/06/00.

Tasks:
• Specification Requirements document, detailed reading
• Intermediate-level Data Flow Diagrams, first approach (Yourdon, 1989)
• Intermediate-level Data Flow Diagrams, beta version
• Data model construction
• Intermediate-level Data Flow Diagrams, final version
• Rest of the Data Flow Diagrams
• Entity/Relationship model (Batini, 1992)
• Check consistency between the Process Model and the Logical Model
• Refine the Excel sheet that automates the MK II estimation method (UKSMA, 1998), using the features of the current project
• Count the Inputs, Outputs and Entities for each of the logical transactions identified
• Assign values to each of the 19 features of the MK II estimation method, for each logical transaction
• Write the practical work report, including the process model, the data model and the estimation with the description of the feature values
• Read the section on organization, planning and tracking of practical work
• Re-estimate if needed
• Métrica Versión 2.0, detailed reading (METRICA, 1993)
• Build the Work Breakdown Structure, Product Breakdown Structure and Resource Breakdown Structure
• Build the Gantt diagram, using the latest estimation done
• Document the Project Plan
• Build the Project Tracking cases
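For context, the MK II counting tasks listed above can be sketched in a few lines of code. This is an illustrative sketch, not the students' Excel sheet: the weights are the industry-average values from the UKSMA MK II counting practices manual (0.58 per input, 1.66 per entity reference, 0.26 per output), and the example transaction counts are invented.

```python
# Illustrative sketch of an MK II Function Point count (UKSMA, 1998).
# Weights are the industry-average values from the counting practices
# manual; the transaction counts below are invented for illustration.
W_INPUT, W_ENTITY, W_OUTPUT = 0.58, 1.66, 0.26

def mk2_function_points(transactions):
    """transactions: list of (inputs, entity_references, outputs) tuples,
    one per logical transaction; returns the total unadjusted size."""
    return sum(W_INPUT * ni + W_ENTITY * ne + W_OUTPUT * no
               for ni, ne, no in transactions)

# Three hypothetical logical transactions of a small system.
size = mk2_function_points([(4, 2, 3), (2, 1, 5), (6, 3, 2)])
```

Counting the inputs, outputs and entity references per logical transaction, as in the task list, is what drives the raw size; the 19 features mentioned in the tasks then adjust that size.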
454 Seco, Guzmán, Segura, Fernández & Morillo
Table 4: Daily Registration Table
Table 5: Table of Weekly Balance
Questionnaire 1: Questionnaire on Personal Work Planning Techniques

(Each statement is rated on a scale from 1 to 4, or “Don't know”, with space for comments.)

• Using the technique learnt, I was able to develop a clearer vision of the tasks to be completed; therefore, the time and resources that had to be assigned to each were foreseen.
• With this technique, the level of control over the time estimated and the changes for each task was greater.
• With this technique, the work team could do the tasks more realistically.
• With this technique, the students succeeded in improving their own time estimations by applying previous experience acquired in similar activities.
• With this technique, the efficiency of the team members has improved.
• With this technique, the behavior of the team and the performance of each member improved the tracking of the practicals.
• I would use these techniques in other courses of my studies.
• I would use these techniques in different projects during my professional career.
456 Yermish
SEIU Local 36 Benefits Office: The Y2K Crisis and Its Aftermath Ira Yermish St. Joseph’s University, USA
EXECUTIVE SUMMARY
This case describes how a service organization approached the Y2K compliance issue and how a complex decision-making process led to near operational disaster. We will see how software vendor relations can be complicated by vendor viability and technological innovations. Another issue we will explore concerns opportunities for personal growth and expanded responsibility in a small-business environment. We will see how turnover creates stresses, particularly in an organization of this size. Changes in focus from developing in-house expertise to more extensive use of outside support will be examined. Finally, we will examine the relationships that exist among the various service suppliers and how data is interchanged between these suppliers and the organization. While standards exist in other fields, none have been adopted here. Case readers will be encouraged to formulate up-to-date management strategies that address these issues head-on.
BACKGROUND
The Service Employees International Union (SEIU) represents workers in the janitorial and service fields. The union is active politically on a national level, as can be seen on the homepage of its Web site (see Figure 1, www.seiu.org). Despite a general downward trend in union membership, SEIU continues to grow, particularly in metropolitan areas with large immigrant populations. Local 36 of SEIU is located in Philadelphia and the surrounding Delaware Valley region. The Union Local has about 3,000 full-time and 300 part-time workers, primarily in the janitorial, window cleaner and maintenance trades. Over the years this membership has been quite stable. The Union Office is responsible for developing the Union membership, for negotiating contracts and for addressing member/contractor disputes. The Benefits Office is separate from the Union Office and provides the administrative support for the health, welfare, insurance and pension benefits negotiated as part of collective bargaining agreements with various employer groups. Though the two offices are in the same building in downtown Philadelphia, they are separated by more than just 13 floors. The Union Office is under pressure to provide more benefits for the members at a lower cost to the employers. The Benefits Office seeks to maintain these benefits in a cost-effective manner for the long-term stability of the underlying funds.
Copyright © 2002, Idea Group Publishing.
Figure 1: SEIU International Homepage
Table 1 presents some statistics about the Benefits Office operations. Membership is divided into two basic categories, full-time and part-time, since benefits vary by this classification. Contributions for benefits are made by the employers monthly, based upon hours worked and other criteria. Membership dues contributions are passed on through the Benefits Office to the Union Office. The Benefits Office maintains funds in the form of investments and CDs. The interest and growth of the portfolio represent a significant part of the operational income. Membership benefits include medical, dental, vision, legal and pension. The expenses for these are divided into two classes: third party and self-insured. For the third-party benefits, regular payments are made to the insurers, and they are responsible for payments to the providers. In the case of self-insurance (e.g., vision), claims are processed and paid by the Benefits Office directly. Administrative expenses represent the other major area of expenses. The large losses estimated for 2001 will be reduced with major changes in some of the benefits provided. This is required to handle reductions in the member contributions negotiated in recent contracts.

Figure 2 describes the relationships among the various entities. Contractor Groups are made up of a number of Contractors (employers). SEIU negotiates with a number of contract groups, particularly groups of contractors that provide building maintenance for their own buildings or for a number of other buildings. There are several hundred contractor/employers included in the contractor groups. The Union Office is a regional local branch of the International Union that represents the Membership (and Dependents) in the Union. In traditional terms, the grouping of employers in a contract group and the grouping of employees in a union provide a balance of negotiating power. In addition to establishing fair wage rates, the collective bargaining agreements provide for the employer collection of union dues and the establishment of welfare, loss-of-time, pension, insurance and other benefits. The Union Benefits Office is responsible for the administration of the benefits negotiated above. The establishment of a separate entity for administration resolves certain conflict-of-interest problems that may arise. The Union's role is organizing and defending workers' rights under the collective bargaining agreements. The Benefits Office role provides for efficient processing of benefits transactions. As can be seen from Figure 2, the Benefits Office is central.
1. They are responsible for implementing the benefits included in the agreements between the Union and the Contractor Groups.

Table 1: SEIU Local 36 Statistics ($ x 1,000)

                                    1997      1998      1999      2000   2001 Est.
Full Time Members                  2,800     2,700     2,800     3,150     3,000
Part Time Members                    350       350       320       320       320
Total Members                      3,150     3,050     3,120     3,470     3,320
Contribution Income             $ 12,558  $ 12,556  $ 13,322  $ 14,492  $ 11,442
Interest Income                    1,352     1,432     1,044       938     1,267
Total Income                    $ 13,910  $ 13,988  $ 14,366  $ 15,430  $ 12,709
Third Party Benefits Expenses   $  7,365  $  6,797  $  7,825  $  9,043  $ 10,326
Self-Insured Benefits Expenses     3,652     3,841     4,399     7,762     6,130
Total Benefits Expenses         $ 11,017  $ 10,638  $ 12,224  $ 16,805  $ 16,456
Administrative Expenses            1,219     1,281     1,217     1,005     1,057
Net Gain/(Loss)                 $  1,674  $  2,069  $    925  $ (2,380) $ (4,804)
Figure 2: Entity Relationships (Contractor Groups and their Contractors/Employers, Union Office, Union Benefits Office, Membership and Dependents, Third Party Insurers and Benefit Providers, with information flows labeled A through E)

2. They are responsible for collecting the membership dues and employer benefits contributions from the employers on a regular basis. Included in this function is an audit process that attempts to guarantee that all appropriate employees have been accounted for and their dues and benefits contributions collected.
3. They are responsible for identifying how benefits should be funded, either through internal self-insurance or through Third Party Insurers. In this process the Benefits Office provides eligibility lists to the insurers and tracks claims and benefit utilization.
4. They process claims from Benefit Providers (e.g., doctors) for services to the members.
5. They provide information to the Membership and their dependents on benefit eligibility, claims processing and other benefit issues.

Consider the process cycles important to the Benefits Office. Every couple of years, depending upon the contractor group, the Benefits Office provides significant support (flow A in Figure 2) for the contract negotiation process. The Union wants to provide as many benefits to the membership as possible. Given the nature of this worker base, it is very difficult for employers to make large contributions to the health and welfare benefits of their employees. During this negotiation process, the Benefits Administrator calls upon a complex planning model developed by a consultant for predicting the impacts of changes in benefit rates and expenses. Once the contracts are in place, the Benefits Office is responsible for the monthly collection of payroll deductions and welfare contributions from the employers (flow B). This is one of the primary
Figure 3: Benefits Office Organization Chart (Benefits Administrator: Mike Ragan (1); Claims Manager: Linda McCollough (8); Contributions Manager: Chrissy Cobryn; Pension Manager: Tina Soncini; IT and Audit Manager: Mark Ranieri; Accounting Manager: Claire Longeway; Outside Consultants: IT consultant, benefits consultants, fund auditors, legal support)
functions of the BASYS information system. It prints monthly contribution report forms for each of the employers and then handles the deposit of these contributions to the various benefit funds. From these contributions, the BASYS system calculates member benefit eligibility on a daily basis. As new members join the union after an initial period with the employer, their demographic and dependent information is added to the system. Contributions are placed into funds to support the expenses to be incurred against these benefit areas. Given the nature of the timing of these flows, the Benefits Administrator has latitude in investing these funds to earn interest income. The outside auditors and legal support are responsible for seeing that these funds are invested appropriately. Members deal with the Benefits Office (flow C) on a varying basis depending upon need. When a member or a dependent needs to use a benefit (e.g., medical, dental or legal), the member contacts or visits the office to submit a claim. In some cases (e.g., hospitals and prescriptions) these are handled directly through the Third Party Insurers. The Claims Department is responsible for determining the eligibility and limits of the claim and for working with the primary Benefit Providers (flow D) or the Third Party Insurers (flow E) to handle payments directly to the providers, depending on whether they are part of existing contracts. One of the complexities of this process is communicating eligibility information from the BASYS system to the various providers. Without an industry-standard format for these transfers, each provider requires its files to be formatted differently. In addition, the claims utilization data provided
Table 2: Software Applications

Package                            Functions                                      Source                    Support Mode
BASYS Benefits System              Contributions, Claims, Pensions,               BASYS, Inc.               Monthly maintenance fee
                                   Benefit Eligibility                                                      to primary vendor
Peachtree Accounting               General Ledger, Accounts Payable               Peachtree Software, Inc.  Internal IT support
Lotus 1-2-3                        Planning and Modeling                          Lotus                     Internal IT support
Excel                              Financial Worksheets                           Microsoft                 Internal IT support
WordPerfect                        Word Processing                                Corel                     Internal IT support
Word                               Word Processing                                Microsoft                 Internal IT support
Access                             Database                                       Microsoft                 Internal IT support
Claims Analysis System             Claims Analysis                                External consultant       External consultant
Employer Audit System              Integrates contributions with employer data    External consultant       External consultant
Eligibility Export System          Extracts eligibility data from BASYS and       External consultant       External consultant
                                   exports it to Third Party Insurers
Fund Planning Models               Projects contributions and fund utilization    External consultant       External consultant
Miscellaneous Graphics Packages    Reporting and graphing operations              Off-the-shelf             Internal IT support
Miscellaneous Backup and           Computer and Internet operations               Off-the-shelf             Internal IT support /
Communications Packages                                                                                     HW vendor support
also varies in format from provider to provider. To support these functions, the Benefits Office is organized in a traditional functional structure (Figure 3). The Benefits Administrator is the primary executive, responsible for all internal operations as well as all high-level external relations. The operation is divided into five main areas: claims processing, contributions posting and tracking, pension operations, internal accounting, and information technology support and employer audit. Most of the personnel are found in the claims processing area. Given the highly technical nature of some of these operations, the Benefits Office makes extensive use of outside consultants for fund audits (as opposed to employer audits), benefit projections, and software design and development. The Benefits Office's basic information technology is shown in Figure 4. All users are connected to the network through an Ethernet connection. The primary support operation is provided by a standard benefits administration package supplied by BASYS, Inc. This is the only application on the SUN server. All other operations are installed on the Windows NT server. In addition to the standard office automation products (Lotus 1-2-3, WordPerfect, Excel and Word), there are a number of other standard and custom programs. Table 2 lists these packages, their functions, source and mode of support.
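The eligibility-export problem described above, one extract but a different file layout for every insurer, can be sketched as a table of per-provider formatters. This is an invented illustration: the provider names, field layouts and widths are assumptions, not SEIU's actual export formats.

```python
# Hypothetical sketch of per-provider eligibility exports: with no
# industry-standard format, each insurer gets its own layout.
# Provider names and field layouts here are invented for illustration.
import csv
import io

def export_fixed_width(members):
    # e.g., an insurer wanting fixed-width records: SSN(9) NAME(20) ELIG(1)
    return "\n".join(
        f"{m['ssn']:<9}{m['name']:<20}{'Y' if m['eligible'] else 'N'}"
        for m in members)

def export_csv(members):
    # e.g., an insurer wanting comma-separated values with a header row
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["ssn", "name", "eligible"])
    writer.writeheader()
    writer.writerows(members)
    return buf.getvalue()

# One export routine per third-party insurer.
EXPORTERS = {"insurer_a": export_fixed_width, "insurer_b": export_csv}

def export_eligibility(provider, members):
    return EXPORTERS[provider](members)
```

Every new insurer adds another formatter to maintain, which is why the text flags the lack of a standard format as a complexity.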
SETTING THE STAGE
Mark Ranieri, the Manager of Information Technology and Employer Audits, packed up the DEC Alpha server that had been sitting idle in his office since he arrived six months before, in June of 2000. The high bid he received for it on eBay.com was only $1,200, a far cry from the $27,000 invoice price. This
Figure 4: Benefits Office Systems Architecture (a Windows NT network server hosting Peachtree Accounting, the Claims Analysis System, the Employer Audit System, the Eligibility Export System and the Planning Models; a SUN SOLARIS SparcServer running the BASYS system; 10 Mbps Ethernet; a Cisco Systems Internet router with a DSL connection; four network printers; and 20 PC workstations running terminal emulation, Word/WordPerfect, Excel/Lotus 1-2-3 and Access)
was a sad end to a process that at one time was thought to present answers to many questions, including a resolution of the infamous Y2K problem. Here, so soon after his college graduation, he found himself in a position of responsibility, but not always as busy as the popular press might suggest. Back in 1978 the benefits administrator of SEIU Local 36, Joe Courtney, contacted the Union's law firm for a recommendation about how to implement a computerized system for their operations. Up to that point the benefits operations were completely manual, with just some simple word processing equipment. The law firm recommended a consulting firm, MagnaSystems, Inc., and its principal, Ira Yermish, who had done some work for them. This firm helped the Union identify and negotiate a contract with a software company, Benefit Systems, Inc., located in Baltimore, Maryland, that had a standard package for supporting operations such as SEIU's. After demonstrations and discussions, the package, written in a version of BASIC and running on a minicomputer under the PICK operating system, was installed, and slowly most of the regular processes were converted to the new system. Joe Courtney's background was interesting. After high school and vocational training, he rose through the ranks of the pipe-fitters union. Along the way he attended seminars and training programs to gain the management skills needed to advance within union administration. His management style was paternal and caring. He led with a strong but understanding voice. He was still “Mr. Courtney”
when he retired in 1996. One of his management practices was to use outside expertise whenever he felt he didn't have the requisite experience. At his retirement he gratefully acknowledged these sources and their contributions to his success. After the installation of the Benefit Systems package, the relationship with MagnaSystems continued. In 1983 they developed a package to support the employer audit functions. The package was written in LOTUS 1-2-3 for the “portable” COMPAQ computer. Additional software was written to extract contribution data from the Benefit Systems package to be matched against the employers' records. Millions of dollars were recovered thanks to this package over the 15 years of its active life. Later, Joe Courtney asked the consultants to develop a planning model, again based in LOTUS 1-2-3, to forecast fund balances and the effects of changes in benefits. This model was even used in contract negotiating sessions with the contractor groups. Another issue that he wanted to resolve concerned problems with third-party administrators, in particular Blue Cross and Blue Shield, in their claims payment processes. There was some evidence that providers were over-billing and the third-party administrators were just passing these charges along, failing to check for problems. The consulting firm built a claims analysis program in FoxPro, again using membership eligibility data extracted from the Benefit Systems package and claims data submitted on tape by the third-party administrators. This package made it easy to identify duplicate billings and payments to ineligible members. It was also used to identify providers with patterns of dubious practices. Over the years the relationship with Benefit Systems was mixed. Courtney was always thinking of new ways to manipulate the data in the system, but Benefit Systems was very hesitant to change its system to meet the needs of only one of its clients.
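The duplicate-billing and ineligible-payment checks performed by the claims analysis package can be sketched as follows. This is a minimal Python illustration of the idea, not the FoxPro code; the field names and the matching key are assumptions.

```python
# Hypothetical sketch of the two core checks described in the text:
# duplicate billings, and payments made for members who were not eligible.
# Field names are invented for illustration.
from collections import Counter

def find_duplicate_claims(claims):
    """Flag claims submitted more than once for the same member,
    provider, service date and procedure."""
    def key(c):
        return (c["member_id"], c["provider_id"],
                c["service_date"], c["procedure"])
    counts = Counter(key(c) for c in claims)
    return [c for c in claims if counts[key(c)] > 1]

def find_ineligible_payments(claims, eligible_ids):
    """Flag claims for members absent from the eligibility extract."""
    return [c for c in claims if c["member_id"] not in eligible_ids]
```

Cross-matching the insurers' claim tapes against the eligibility extract in this way is what made over-billing patterns visible.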
There was a fairly hefty monthly maintenance charge for the software, and customizations were, when finally approved, very expensive. But, on the whole, the package met the basic needs of recording contributions, calculating eligibility for the various benefits, and tracking and paying internally funded claims. Its general ledger and accounting functions were weak, so a standalone package, Peachtree, was selected to implement those functions. Also, a bank processed and printed pension checks. Along the way a couple of hardware upgrades were made as the technology improved and the needs grew. Ultimately, Benefit Systems became BASYS, Inc., and the system migrated to a SUN system, using the UNIDATA database system as the core for the BASIC code. This proprietary environment with its character-based interface did the job without much flash. The role of information technology management over Courtney's tenure was interesting. When the first audit program was established, he hired a recent college graduate with a good understanding of information technology and accounting, who was responsible for conducting the employer audits with the new software. After a number of turnovers in this position, one incumbent, John Matekovic, demonstrated a real interest in labor and benefits issues. He attended a number of seminars and courses and earned his certification in benefits administration. He hired an assistant who was primarily responsible for the external audits while he concentrated on internal information technology and operational issues. Though he knew that Joe Courtney was going to retire shortly, he also knew that his experience wouldn't be adequate for him to become Courtney's replacement. Faced with this, John found a position with the consulting firm of Towers-Perin, which had provided actuarial and benefits consulting to SEIU for many years. His replacement, Chris O'Brien, started immediately after graduation from the same school as Matekovic. O'Brien faced a similar issue.
The auditing function was straightforward. The process had become very efficient over the years. Under his direction, the consulting firm developed a new audit system, based on ACCESS and Visual Basic, that handled the audit process even more efficiently. Audits that took several days using the old system could be completed in hours with the new one. The internal computer operations were smooth, with just the occasional PC or network problem to be resolved. Again, this gave an employee the flexibility to explore personal growth; in his case this was law school. Finally, in the fall of 2000, O'Brien, with just a few months of law school remaining, joined
an Internet start-up firm specializing in government and union technological issues. O'Brien's departure was not taken with equanimity by Ragan. O'Brien's subordinate, Ranieri, with just six months of experience, moved into his office and took over his roles. When Mike Ragan arrived in 1996, the operation was running smoothly. Except for O'Brien, all of the operational and clerical management had been at SEIU for many years. They understood the basic operations thoroughly. Their clerical experience met the organizational needs well. Ragan's background was very different from his predecessor's. With a management degree from the University of Pennsylvania and experience as an officer in the Air Force, Ragan had served as the administrator of the Building Operators Union for a number of years. In that position he was able to use his own programming experience to develop a number of internal systems for his organization. He was a competitive rower and continued his relationship with the sport by heading a number of regattas on the Schuylkill River. What would be his mark on the organization that he now managed?
CASE DESCRIPTION
In early 1998, the SEIU Benefits Office faced a dilemma prompted in part by the pending Y2K problem. Relations with BASYS, the primary software supplier, had grown sour for a number of reasons. Ragan felt that they weren't responsive enough to the changes in the application that he requested. The BASYS management was concerned that their staff was tied up with the difficult Y2K conversions and other developments. Next, Ragan had stopped paying the monthly maintenance fees because, he argued, they seemed like “money down the drain.” Though these fees entitled SEIU to all of the software updates, SEIU would have to pay extra to have them installed on its system. The system was running adequately, and he was willing to pay for emergency repairs on an hourly basis. From a technological point of view, there was some concern about the software. It was based upon a very mature technology that was growing more and more out of step with mainstream technologies. The underlying database (UNIDATA) was developed with a non-relational structure. The software was written in an obscure dialect of BASIC (UNIBASIC), originally developed 20 years before. The software did not, and could not, take any advantage of Windows or other graphical user interface (GUI) facilities. Though users were accessing the system through an Ethernet local area network (LAN), the PC was really acting only as a “dumb terminal” to this application. Ragan faced the following three alternatives:
1. Agree to pay the fees to BASYS necessary to bring the system up to current standards and re-establish a long-term arrangement with the vendor.
2. Design and implement a custom in-house system.
3. Investigate a number of alternative systems that had been developed using current software technologies.
The first option seemed to Ragan like “throwing good money after bad.” The second option was too much of a gamble given the short time period ahead.
Internal staff capabilities were inadequate, and the consultant indicated that this option was extremely risky. It seemed that there was no choice but to explore the third option. Ragan and O'Brien began their search for an appropriate system. A couple of options were identified, and one in particular seemed interesting. A California company, Information Concepts, Inc. (ICI), was developing a “next-generation” Windows-based benefits management system. They had experience with a mainframe-based system that they were re-architecting for a client-server environment. The real draw for Ragan was Tom Dowling, who was the original designer of the Benefit Systems product. Dowling was now a principal at ICI, responsible for East Coast sales. Dowling prepared the proposal and assured Ragan that their system would serve the needs of SEIU. Ragan also asked Tim McGuckin, a benefits consultant whom Ragan had used at the Building Operators Union, for his advice. He seemed satisfied that the system would meet their needs. Ragan and O'Brien went out to California, saw a demonstration of the system and were impressed. Ragan, realizing that things
had to move fast, signed the contract in June of 1998. The cost of the hardware and software, modifications, installation and training was nearly $250,000. Though very expensive, there didn't seem to be a real choice. The timeline would be tight, but Dowling assured Ragan that it was well within ICI's capabilities. There was one major change that was required for the system. SEIU had negotiated an interesting arrangement with Amerihealth to handle a mixed claims processing environment. With traditional Blue Cross, all the claims were entered by Blue Cross, examined and then paid. With the Amerihealth system, the claims would be entered at SEIU and then transmitted electronically to Amerihealth for evaluation. When the evaluation was completed, an electronic transmission would be sent back to SEIU for actual check writing. This would lower the costs of claims processing significantly. Unfortunately, this capability was absent from the ICI system and would require additional programming. Because of the timing, this additional programming could not wait, so McGuckin offered to supply temporary programming to meet the needs until the ICI programming was completed. During the summer and autumn of 1998, representatives of ICI came to review the operations at SEIU and to start the training of the operations management in the claims, contributions and pension areas. The computer system was delivered and installed. For several months thereafter progress was not apparent, though the invoices for work appeared regularly. In December of 1998 Ragan grew concerned that, after the training sessions during the summer, there had been no progress on actually getting the system implemented. Some data entry of pension information was completed, but ICI couldn't get the system to print checks on the network laser printers. Pension processing seemed to be the easiest function, and things just weren't happening. At this point the computer consultant, Ira Yermish, was brought in to review the situation.
Except for a brief review of proposals, he had been “out of the loop” on this contract; McGuckin was providing the consulting support for the project. Yermish examined the contract in detail and discovered that ICI was not required to provide conversion processing. In other words, transferring the existing data from the BASYS system to the ICI system was not included in the contract. McGuckin did not have the ability to provide this support, so Yermish offered his services to make the conversion. This cost was not expected and was not insignificant. After a serious meeting with ICI principals, the delays were explained and ICI assured Ragan that the conversion would be completed on time. During the next few months, Yermish tried repeatedly to get complete conversion database details. Finally, in March of 1999, these details were finalized and Yermish developed the conversion programs. A couple of months passed; test data was prepared by Yermish, but ICI couldn't get this data loaded into their system. Still, there was little progress in getting any part of the system working. Seeing no progress, Ragan contacted their attorney, and a meeting was set up with Ragan, O'Brien, Yermish, McGuckin and the attorney, Paul Bomze. After reviewing the situation it appeared clear that there were significant problems at ICI. Over $100,000 had already been paid to ICI. The invoices for custom programming kept appearing. There was an expensive computer sitting idle at SEIU. There was no visible progress on the installation, and obvious panic. The first step taken at this point was to contact BASYS. Could they actually make the upgrades to their system in time for the millennium change? Except for the Amerihealth processing, which McGuckin's software was already handling, they could. It would not be inexpensive, but it was possible. Once this option was clear, the decision was made to terminate the contract with ICI. Immediately the lawyers went to work.
ICI claimed that their contract was still in force and that the balance of over $200,000 in costs was due. SEIU claimed that ICI had failed to deliver on its promises. In discussions with the principals at ICI, it came out that Dowling had made the contract without the home office's consent to the details. He had sold a project that could not be delivered for the price in the contract. Cash flow problems at ICI forced them to work on more lucrative projects. The case was scheduled for arbitration, but ICI backed out at the last minute, and both parties agreed that the contract was voided and that no cash needed to flow in either direction. In December of 1999, BASYS upgraded the server at SEIU and successfully implemented the changes to their software. Except for a couple of minor problems, the software ran properly on January
3, 2000. In August of 2000, the modifications to handle the Amerihealth transactions were installed at SEIU. The software still looks like the old character-based software; work is progressing at BASYS on a Web-based front end, leapfrogging any direct Windows interfaces.
CURRENT CHALLENGES In the spring of 2000, the panic was over. Relations with BASYS improved markedly. It was time to regroup and examine the long-range information technology issues. Ragan met with O’Brien and Yermish to review options for future developments. These included:
• Upgrade of the 18-year-old audit system
• Major revisions to the planning models
• Implementation of EDI processes for eligibility transfer
• Upgrade of the claims analysis software
• Establishment of a Web site for members and employers
The audit system was built in 1983 around LOTUS 1-2-3 and the original Microsoft BASIC. With a few “band-aids” it survived into 2000. One of the major weaknesses of the original program was its limit on audit size. With O’Brien and his assistant Chris Donoflio, Yermish designed a new audit system using ACCESS and EXCEL. Over the course of the next few months Yermish implemented this system, and O’Brien and Donoflio tested it on several small audits. As the testing progressed, they made important suggestions for improvements. Finally, with the completion of a major audit involving hundreds of employees, the system was deemed complete in December of 2000, though changes will be needed as new contracts arise with more complex eligibility requirements. As time goes on, the question of application support looms ever larger. The planning models were built for the former administrator, Joe Courtney. They were standalone LOTUS spreadsheets based on quarterly totals collected from various accounting and operational data in the organization. The models could hold one year of historical data (by quarters) and could project fund balances five years into the future. The presentation of the model data is an important part of each quarterly trustees meeting. Ragan, however, felt that the models needed significant changes to reflect his plans for future operations.
He wants the models to extract data automatically from the operational systems (Peachtree accounting and the BASYS system). He wants to be able to review more historical data on a monthly rather than a quarterly basis. He also wants more flexibility in the model calculations, but how does he ensure that this flexibility doesn’t create more opportunities for failure through end-user error? One of the issues not handled effectively within the BASYS system has been the transfer of eligibility information to the third-party insurance administrators. The BASYS system calculates eligibility based upon contributions and contract specifications. The problem is that each of the third-party entities has a different format for eligibility transfer. Yermish implemented a system to extract the data from the BASYS system and translate it into the appropriate format. Unlike other industries, there is no established standard for this data. There have been attempts in the industry to provide such standards (e.g., HL7), but these have failed to gain universal acceptance. Perhaps the new movement towards XML will provide the springboard for this process. While waiting for a standard, the Benefits Office must get the data moved efficiently among the providers. There are a number of design and operational alternatives, and they aren’t sure which one makes the most sense right now. The claims analysis software was developed in the early 1990s to identify problems with claims. Like the eligibility data, claims data is submitted in a different format by each of the third-party operations. To date, the SEIU system treats each of these as a separate database. There is a movement towards disease management, where the Benefits Office provides direct insurance for specific conditions or diseases. To prepare for this, the system has to integrate the claims data by member. Again, there is no standard for the presentation of claims data, and inconsistencies within the databases must be resolved.
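The translation step described above can be pictured as a per-provider formatting layer over one extracted record format. The sketch below is illustrative only: the provider names, field layouts and the dispatch table are invented for the example and are not taken from the actual SEIU system.

```python
# Illustrative sketch: one extracted eligibility record, several
# provider-specific output formats. Providers and layouts are hypothetical.

def to_fixed_width(rec):
    # Hypothetical provider A: fixed-width member id (9 chars) + status (1 char)
    return f"{rec['member_id']:<9}{rec['status']:<1}"

def to_csv(rec):
    # Hypothetical provider B: comma-separated fields
    return ",".join([rec["member_id"], rec["status"], rec["effective_date"]])

# Dispatch table mapping each third-party administrator to its formatter.
FORMATTERS = {"provider_a": to_fixed_width, "provider_b": to_csv}

def export_eligibility(records, provider):
    """Translate extracted records into one provider's transfer format."""
    fmt = FORMATTERS[provider]
    return [fmt(r) for r in records]
```

The point of the design is that adding a new administrator only means adding one formatter function, leaving the extraction step untouched.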
SEIU Local 36 Benefits Office: The Y2K Crisis and Its Aftermath
Finally, the Benefits Office, in conjunction with the Union Office, wants to develop a Web site to better serve the membership. The details of the design and functionality of this site have not been finalized, though an arrangement has been made with a Web design firm, 7 Pixels Interactive. One of the critical design issues concerns the nature of the membership: a large percentage of the members do not read English. What functions should be provided for members and employers? Ragan and Ranieri have a number of choices as to how to implement these functions:
1. Contract with BASYS to have reports and procedures included within their standard system.
2. Contract with an outside consulting firm to extract the data and load it into a working data warehouse, in a manner similar to the processes that have been used in the past.
3. Develop more in-house programming and support expertise to take complete control over the projects.
As the year 2001 unfolds, Ragan and Ranieri see that there is much work to be done. The Y2K crisis is behind them, but they are not done building the information technology infrastructure appropriate for a modern organization. The big question is how it should be done and by whom. How do they effectively integrate systems provided by a number of sources with different data formats and specifications? They must consider how to manage these complex projects without straining their resources or losing focus on what the Benefits Office is responsible for: managing member benefits.
FURTHER READING
Applegate, L.M. et al. (1999). Corporate Information Systems Management (fifth edition). Irwin/McGraw-Hill.
Ripkin, K.M. and Salyes, L.R. (1999). Insider Strategies for Outsourcing Information Systems. Oxford University Press.
Sloane, A.A. and Witney, F. (2000). Labor Relations (tenth edition). Prentice-Hall.
Weiss, D. (1996). Beyond the Walls of Conflict: Negotiating for Unions and Management. Irwin Publishing.
BIOGRAPHICAL SKETCH Ira Yermish is an Assistant Professor of Management and Information Systems at St. Joseph’s University in Philadelphia. His teaching and research areas include systems analysis and design, database management, data communications, information resource management, and strategic management. In addition to his current academic activities, he is an active management and information systems consultant. Dr. Yermish earned degrees from Case Western Reserve University, Stanford University and the University of Pennsylvania. His early professional experience included positions with Univac, Control Data, RCA Corporation, the Institute for Scientific Information, and as founder of MagnaSystems, Inc. When not teaching or consulting, Dr. Yermish is an avid cyclist, runner, photographer, choral singer and arts board member.
468 Mornar, Fertalj, Kalpic & Krajcar
Credit Card System for Subsidized Nourishment of University Students Vedran Mornar, Krešimir Fertalj, Damir Kalpic and Slavko Krajcar University of Zagreb, Croatia
EXECUTIVE SUMMARY In Croatia, the Ministry of Science and Technology is the major provider of funds for higher education. There are four universities, each consisting of a number of relatively independent and dislocated faculties and academies. In 1997, the Ministry engaged the authors to computerize the system for subsidized nourishment of university students. The initial plan was to establish a simple credit card system. Faced with political and technical infrastructure difficulties, the authors had to develop a heterogeneous distributed database scheme and a proprietary replication mechanism capable of exchanging high volumes of data over a slow network or over dial-up networking. The system has achieved full functionality at the largest university in Croatia, and it is ready to be installed in the rest of the country. Although developed under a tight budget, it has significantly improved the organization, and it provides valuable data for conducting the policy of subsidized nourishment.
BACKGROUND In Croatia, no tuition is required of the majority of university students. Moreover, the state provides subsidized housing for some of them and, for all who are free of tuition, it subsidizes nourishment in contract restaurants. The funding and responsibility lie with the Ministry of Science and Technology. The Ministry itself had computer equipment and islands of computerization, which still could not be called an integral information system. Faculties and academies vary in size, from a few dozen students to thousands, and their levels of organization and computerization vary just as widely. Some have computerized most of their administrative functions (Kalpic & Mornar, 1994), while some do not even have a connection to the Croatian Academic and Research Network (CARNet) (Pale et al., 1992); in other words, they do not have Internet access. The annual amount offered to students in the Zagreb area was US$37,780,000. Out of this amount, the students consumed the value of US$13,423,000. Precise data about all expenditures are stored in the database. The approximate amount of consumed subvention in the whole country is nearly US$20 million, or 160 million HRK in the local currency. Copyright © 2002, Idea Group Publishing.
Table 1: Number of Faculties and Students in Croatia for Subsidized Nourishment

University                          | Number of faculties and academies | Number of scheduled subsidized students | Number of contract restaurants
University Zagreb                   | 35                                | 51,662                                  | 35 + 7
University Rijeka                   | 13                                | 2,734                                   | 7
University Split                    | 14                                | 6,906                                   | 16
University Osijek                   | 11                                | 3,373                                   | 9
Colleges                            | 38                                | 7,239                                   | -
Independent faculties and academies | 11                                | 3,907                                   | -
Sum                                 | 122                               | 75,821                                  | 35 (existent) + 39 (planned)
SETTING THE STAGE The system of providing subsidized nourishment to university students had long been based on coupons. At the beginning of every month, regular students were allowed to buy a certain quantity of coupons that they could exchange for meals worth roughly four times what they had paid for the coupons. In other words, they paid a quarter of the price for every meal, the rest being a subvention from the Ministry of Science and Technology. It had been noticed that this system was not performing satisfactorily, for several reasons. A considerable number of misuses and frauds were perceived, from trading coupons from one person to another, to using the coupons for purchasing goods like cigarettes, detergents and even lawnmowers. In addition, the students were required to buy coupons in advance at designated offices, which involved superfluous administrative work, and the students wasted their time waiting in queues. In this system there were three principal players: the Ministry, the students and the student restaurants. Some restaurants allegedly cooperated with the students in abusing the purpose of the subvention. Without having a meal, a student could sell his/her coupons to the restaurant at a price that was higher than the nominal coupon value but less than the respective subvention. By delivering the coupons to the Ministry, the restaurant collected the subvention for a meal that was never consumed; the subvention money missed its purpose and ended up in the student’s pocket and on the restaurant’s account. In the previous system, each individual was entitled to buy an amount of coupons that depended on the place of residence. Students residing in their place of study could buy an amount that allowed them one complete meal a day; students residing outside their place of study could buy twice as much. Initially, the new system was meant to give the students daily rights to receive one or two meals at a subsidized price.
That would have been rather simple to implement, but the Students Council did not accept it. They insisted on monthly amounts, which were eventually approved, making the system considerably more complicated. Now each student has an account to which a certain sum is credited at the beginning of the month. The amount on the account can be changed at any time if the student’s level of rights changes and, of course, after the student has consumed a meal in a restaurant. Wishing to see the funding properly applied and to eliminate or minimize abuse, the Ministry decided in 1997 to start the development of a computerized credit card system for that purpose. The card was named X-card, as an acronym of the Croatian term for the student identification card. The X-card was conceived not only for use at the student restaurants, but also for other purposes, like getting discounts at museums, at public transport companies, etc. The ultimate goal was to eventually replace what is now called the
Index, a small booklet used to prove the status of a student (Kalpic & Mornar, 1994). The faculty administration fills it in with enrolled courses, and professors write down the grades. The X-cards were intended to be valid throughout the whole state territory. The present population of university and high school students totals about 76,000. Roughly 25,000 students enroll every year in 73 faculties within four universities. The X-card is issued to a student when he/she enrolls in a university. Validity of the card expires at the end of the academic year, and it is extended by regular enrollment in the next academic year. Initial assumptions and requirements for the future system were rather simple: a central database of students was to be established. The faculties and academies keep the records of their students. They were supposed to be permanently connected to CARNet, which should make it easy for them to maintain the data. The faculties were to collect only the basic information about their students, assign levels of rights, make a digital photo and connect it to the student’s record. The student identification cards were to be printed from the database. The restaurants were to be provided with point-of-sale devices, also connected to the central database. The students were to use the identification cards at restaurants, where a check would be made as to whether they had consumed their daily quota. Unused rights would not be preserved for future use. Reports for the Ministry would be produced from the database. This simplified organization is presented in Figure 1. The faculty equipment was devised as in Figure 2, while the restaurant equipment is in Figure 3. The students objected to these simple initial assumptions, regarding them as a loss of their rights. To appeal to the students, the system had to grow in complexity, almost to resemble a banking business today. The other difficulty arose from the low availability and performance of CARNet at the majority of sites.
The scissors in Figures 2 and 3 symbolize the quite probable loss of connection during operation. The necessity that the students enjoy their rights had the highest priority: it is inconceivable that students should not receive their meals because of some error in the transmission lines or a loss of connection with the central database. The typical pattern of gradually increasing complexity during development was present (Brooks, 1975). The authors had to develop a heterogeneous distributed database scheme and a proprietary replication mechanism capable of exchanging high volumes of data over a slow network or over dial-up networking.
CASE DESCRIPTION After the Ministry had made the decision to computerize the subsidized students’ nourishment, it required quick implementation and discarded the idea of an elaborate project specification.
Figure 1: System Organization [The figure shows the faculties, the students’ restaurants and the Ministry of Science and Technology, each connected to the central database.]
Figure 2: Faculty Equipment [faculty equipment connected over CARNet; the scissors symbolize possible loss of connection]
Figure 3: Restaurant Equipment [restaurant equipment connected over CARNet; the scissors symbolize possible loss of connection]
The authors were engaged for its realization, and a sort of extreme programming (Kent, 1999, 2000; Juric, 2000) was the only eligible technique. The information technology infrastructure at the faculties and restaurants was being discovered during implementation. Similarly, the business rules were unveiled and modified throughout the development.
Decisions About Hardware and Software The first dilemma about the X-card was the choice between a magnetic and a chip card. The chip card offered the tempting possibility of containing more information; it could also be programmed to contain the student’s rights, thus eliminating the need for online validation at the restaurants. After some consideration, the magnetic card was chosen for the following reasons: the card itself is cheaper, the devices are cheaper, it eliminates the need for periodical reprogramming and, most important, it is faster. At lunch time the queues at the student restaurants are of considerable size, and every saved second of service time per student is precious. Reading the magnetic card, together with the system response time, adds just about one second to the total service time, which is now about 12 seconds, including the cashier’s work and the printing of a detailed bill. As the student data include the picture and signature, and considering also the availability at the restaurants’ sites, the obvious choice of operating system on the client side was a 32-bit version of Microsoft® Windows. Taking into account that applications at the restaurants should run permanently, independent of the availability of the network and the server, the client environment had to comprise a local database. The faculties were at first meant to work online, utilizing CARNet. To have uniform software on both types of client sites, Microsoft Access was chosen, because it comprises a local database and a royalty-free run-time version, and it offers a productive development environment. It turned out to be a wise choice for the faculties as well, since practice has shown that a vast majority of them chose to work offline and periodically exchange data with the server, due to a slow or even unavailable connection to CARNet.
Having had some unfavorable experience with ODBC drivers for Informix®, which would otherwise have been the preferred choice for the DBMS on the server side, the authors chose Microsoft SQL Server for compatibility with the clients and because of its relatively low server hardware requirements. Thus, the server runs under Microsoft Windows NT. Initially, there was a requirement to acquire a digital picture of every student (Netraveli & Haskell, 1995; Pennebaker & Mitchell, 1993; Wallace, 1991), to be printed on his/her X-card. A consumer-grade digital camera (Kawamura, 1998), the Olympus® 420-L, was chosen because of its relatively good quality at an affordable price. Its macro lens mode enabled the authors to cope with a later request to store the student’s digitized signature together with the other data: the signature is simply photographed, eliminating the need to purchase an additional scanner and to write additional software as an interface to it.
Digital Camera Interface As mentioned above, the Olympus 420-L was chosen primarily for its price/performance ratio. Unfortunately, the accompanying software was appropriate for the casual home user, but not for the intended production use. It enables the user to manipulate the camera through a relatively elaborate interface and to preview and download pictures to files on the hard disk. To eliminate the possibility of assigning a wrong picture to a person, the authors wanted to take and download the pictures from a proprietary database interface. They hoped that the enclosed TWAIN driver (Lindley, 1994) would be callable from the program, but an inappropriate user screen interface is activated by the call to the driver. The only solution was to build a proprietary low-level communication interface, which would have been a rather simple task had the camera protocol been available. The camera communicates with the computer over a standard RS232 port. However, all efforts to acquire the camera protocol from Olympus were futile, so the authors searched the Internet, where an unofficial protocol had been published. The author of this protocol had acquired knowledge of it by scanning the
RS232 port during the camera operation. As a result, an ActiveX object was programmed in Visual Basic (Williams, 1997), exposing the properties CommPort and Baudrate and the methods Initialize, NormalMode, HQMode, FlashAuto, FlashForceOn, FlashForceOff, FlashRedEyePrevention, GetNumberOfPicturesRemaining, GetNumberOfPicturesTaken, GetFlashMode, GetPictureQuality, EraseSinglePicture, EraseAllPictures, LensMacro, LensNormal, ReleaseShutter and Download, thus producing a complete set of commands required to drive the camera from any application. The picture is stored in the database as a 300x300 pixel color JPEG. The signature is converted to black and white and stored as a 400x70 JPEG. Since the camera resolution is 640x480, a simple user-friendly picture editor component was programmed as an ActiveX object, called from the database interface, allowing the user to preview the picture, repeat the photographing if necessary and cut the desired area out of the complete picture. This editor also provides some basic editing capabilities (rotate, contrast, brightness, etc.), reads the picture from a file, stores the picture to a file and acquires the picture from a TWAIN source. Recently the model 420-L has been replaced by the compatible model 830-L, which supports resolutions of 640x480 and 1280x900 pixels. Acquiring the photograph and the signature proceeds by simple mouse clicking. The initial compulsory student data from Figure 4 are enhanced with the student’s photograph and signature, as in Figure 5, and are immediately available for visual inspection, so that some very unpleasant errors, such as a shift that would connect a series of pictures to the wrong persons, are highly improbable. Using the data from the database, X-cards are produced as in Figure 6; the reverse side contains a magnetic stripe with a standardized card identification record.
Figure 4: Compulsory Basic Student Data
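In spirit, the ActiveX object wraps each camera command behind one method of a driver class. The sketch below mirrors that design in Python with a pluggable transport; since the byte-level protocol was unofficial, the command tokens here are placeholders invented for illustration, not the real Olympus protocol, and only a few of the listed methods are shown.

```python
# Sketch of the camera-command wrapper described above. The real component
# was an ActiveX object in Visual Basic talking over RS232; here the
# transport is pluggable and the command tokens are hypothetical.

class CameraInterface:
    def __init__(self, transport, comm_port=1, baud_rate=19200):
        self.transport = transport   # any object with send(token) -> response
        self.comm_port = comm_port   # mirrors the CommPort property
        self.baud_rate = baud_rate   # mirrors the Baudrate property

    def _command(self, token):
        # Every exposed method reduces to one command on the serial line.
        return self.transport.send(token)

    def initialize(self):
        return self._command("INIT")

    def release_shutter(self):
        return self._command("SHUTTER")

    def get_number_of_pictures_taken(self):
        return int(self._command("COUNT"))

    def download(self, index):
        # Returns raw image bytes for one stored picture.
        return self._command(f"DOWNLOAD {index}")

class FakeTransport:
    """Stand-in transport for exercising the wrapper without a camera."""
    def __init__(self):
        self.log = []
    def send(self, token):
        self.log.append(token)
        if token == "COUNT":
            return "3"
        return b"" if token.startswith("DOWNLOAD") else "OK"
```

Separating the command set from the transport is what made the original object reusable from any application: the database front end only ever calls named methods, never raw port I/O.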
Database Logical Design The production database currently consists of 92 relations. For simplicity, an abridged logical design in the form of an E-R model is presented here. To facilitate the connection between the textual description and the E-R model in Diagram 1, the entity-set names in the following text are written with capital initial letters, while the relationships are referenced in italics. The central entity set is the Card. It is connected by otherwise rare 1:1 relationships with the respective BLOBs, Signatures
Figure 5: Student Data Enhanced by Photograph and Signature
Figure 6: Specimen of a Student’s X-Card (Reverse Side Contains Magnetic Stripe)
and Photographs. The Card is issued by an Educational Institution, where the Student is enrolled. The student enjoys one of the currently possible 24 Levels of Rights, covered by the subsidizer, who is predominantly but not exclusively the Ministry of Science and Technology. The student can receive messages from the system and obtains a monthly amount of money as subvention. The Ministry approves of the company running the restaurants, which lie within a possible hierarchy of prices for the meals and drinks to be served. Payment proceeds over points of sale (POS), whose keyboards are configured for faster data entry. A POS device prints the bill consisting of the consumed meals and drinks. In case of a large amount on the Bill, an Authorization is required. Any Misuse of a Card is evidenced. As mentioned previously, the low performance and insufficient availability of CARNet forced most of the users to work offline. Offline working unavoidably required the system to be distributed among local databases and a central database server. The faculties involved in the system are numerous, dislocated and diverse in size. As their enrollments span from several dozen to several thousand students, it would probably be overkill to apply Oracle® or Informix® to maintain a database of several hundred records on a single-user PC. Microsoft Access performs quite well in the small networks that are sufficient to maintain the data even at large faculties. Of course, it would not be a smart decision to assign to Microsoft Access the task of handling the central database comprising all the faculties and containing millions of records. There is unfortunately no replication mechanism between Access and SQL Server (Amaru, 1995), so the authors had to devise one, particularly because the replication process is not simple and there can be several owners of a single record in the database (Anderson et al., 1998; Breitbart & Korth, 1997; Goldring, 1994; Thompson, 1997).
Diagram 1: Abridged Logical Database Design [E-R diagram relating the entity sets Card, Signature, Photograph, Educational Institution, Student, LevelOfRights, Subsidizer, MonthlyAmount, Message, Misuse, Restaurant, POS, Bill, Authorization and MealDrink through the relationships described above]
Each table on the client side, which has to be replicated to the server, gets an UpdatedOnServer flag. If a client is working online and the changes to a record are successfully updated on the server,
the flag is set to True. If the client is working offline, or if the connection to the server breaks in the middle of an online transaction, which is not a rare case, the flag is set to False. Deletions of records are tracked in a separate table, TDeleted, where all the deletions that were done offline, or that were not successfully executed on the server, are stored as SQL statements. This table is maintained by program code because Access lacks triggers. Fortunately, it is possible to write a universal function, which must simply be called from any delete event (Fertalj & Kalpic, 1999). Since the data must be replicated from server to client as well, an additional attribute called SerialNumber is added to each table on the server that must be replicated to the clients. This is a long integer representing a monotonically increasing transaction number that is set by an insert or update trigger. A timestamp column could have been used here (Clifford et al., 1997), but a sequential number was chosen instead, for simplicity and for better control, partly because the clients lack an adequate data type. Deletions are stored in the server’s version of the TDeleted table, again as SQL statements. Each record in the TDeleted table bears its own transaction number.
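The bookkeeping just described can be sketched compactly: a per-record UpdatedOnServer flag on the client, deletions logged as SQL text in TDeleted, and a monotonically increasing SerialNumber stamped by the server on every insert or update. The in-memory dictionaries below are stand-ins for the actual Access and SQL Server tables; only the flag and counter logic is taken from the text.

```python
# Sketch of the change-tracking scheme: UpdatedOnServer flag (client),
# TDeleted log of deletions as SQL statements, SerialNumber counter (server).

class ClientTable:
    def __init__(self):
        self.rows = {}        # primary key -> record dict
        self.tdeleted = []    # deletions kept as SQL statement text

    def upsert(self, pk, record, server_ok):
        # False when the client is offline or the server update failed.
        record["UpdatedOnServer"] = server_ok
        self.rows[pk] = record

    def delete(self, table, pk):
        self.rows.pop(pk, None)
        # The universal delete-event function (Access has no triggers):
        self.tdeleted.append(f"DELETE FROM {table} WHERE pk = {pk!r}")

    def pending(self):
        # Records that still need replication to the server.
        return [pk for pk, r in self.rows.items() if not r["UpdatedOnServer"]]

class ServerTable:
    def __init__(self):
        self.rows = {}
        self.highest_txn = 0  # source of SerialNumber values

    def upsert(self, pk, record):
        # What the insert/update trigger does on the server side.
        self.highest_txn += 1
        record["SerialNumber"] = self.highest_txn
        self.rows[pk] = record
```

A client replication pass then only needs `pending()` and `tdeleted`; the server only needs its `highest_txn` watermark to answer "what changed since your last exchange?".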
Replication Process When a client wishes to replicate with the server, the following procedure is executed, described also by Diagram 2.
On the client side:
P1: All records from the TDeleted table are unloaded to an ASCII file.
P2: For each table that must be replicated to the server, an ASCII file is produced, each line containing the primary key value as well as all the attribute values of records with a False value of the UpdatedOnServer flag. The attributes can be of any data type, including binary large objects (BLOBs); BLOBs are stored in a separate binary file.
P3: All ASCII files, together with the binary file containing BLOBs, are compressed with a standard ZIP algorithm (Nelson & Gailly, 1996).
P4: The compressed file is sent by file transfer protocol (FTP) to the server. When required, the client program initiates the dial-up connection automatically.
P5: The client introduces itself over a TCP/IP socket (Comer, 1988; Stevens, 1994) to a service running on the server, also submitting the client’s highest transaction number (CHTN) that was previously successfully collected from the server. Initially, an attempt was made to communicate over DCOM (Eddon & Eddon, 1998; Rubin & Brain, 1999; Sessions, 1997; Wang et al., 1997), but frequent blocking over a slow connection forced the authors to revert to more elementary mechanisms.
On the server side:
P6: The service invokes the server program.
P7: The server program expands the ZIP file.
P8: Deletions from the client’s TDeleted table are executed on the server.
P9: For each table that must be replicated to the server, the values from the corresponding ASCII files are stored on the server. If a record with the given primary key exists, the record is updated with the new attribute values; otherwise, a new record is added to the table.
P10: At this point, the server’s highest transaction number (SHTN) is memorized.
P11: All records from the server’s TDeleted table having transaction numbers between CHTN and SHTN are unloaded to an ASCII file. Deletions produced on the server during the unloading process and the subsequent steps will be collected during the next replication process for that client.
P12: All records from the server tables that have to be replicated to the client, having transaction numbers between CHTN and SHTN, are unloaded to ASCII files. This includes unloading BLOBs to a separate binary file.
P13: All ASCII files, together with the binary file containing BLOBs, are compressed with a standard ZIP algorithm.
Diagram 2: The Replication Process [flowchart of steps P1-P20: the client program unloads TDeleted, the tables and the BLOBs, compresses and sends the ZIP file, and submits its CHTN; the server program executes the client’s deletions, loads the tables, memorizes SHTN, and unloads and returns its own changes; the client checks for a new program version, terminates the connection, applies the deletions and new data, and sets CHTN to SHTN]
478 Mornar, Fertalj, Kalpic & Krajcar
P14: The compressed file is sent by FTP to the client.
On the client side:
P15: The server is checked for a new version of the client software, using a Remote Data Object connection. If a newer version of the client software exists, it is downloaded by FTP to the client.
P16: The dial-up connection is terminated.
P17: The client program expands the ZIP file.
P18: Deletions from the server’s TDeleted table are executed on the client side.
P19: For each table that must be replicated to the client, the values from the corresponding ASCII files are stored in the local database. If a record with the given primary key exists, the record is updated with the new attribute values; otherwise, a new record is added to the table. Finally, CHTN is set to SHTN.
If there is a break in the transmission, or if any error occurs on the client or the server side, CHTN remains unchanged and the procedure can be repeated later. There exist two separate client programs: one for collecting students’ data at the faculties, the other a point-of-sale (POS) program at the restaurants. Both are capable of working online and offline, and they can exchange data in the previously described manner.
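Stripped of the file transfer and compression, the watermark logic of steps P5 and P10 through P19 reduces to: the client presents its CHTN, the server snapshots SHTN and replies with every row whose transaction number lies between them, and the client applies the rows idempotently before advancing CHTN. The sketch below uses in-memory dictionaries as stand-ins for the databases; it illustrates only the watermark exchange, not the real ASCII/ZIP/FTP plumbing.

```python
# Sketch of the CHTN/SHTN watermark exchange from the replication procedure.
# Each server row carries a SerialNumber set by an insert/update trigger.

def server_changes_since(server_rows, chtn):
    """P10-P12: snapshot SHTN, then unload rows with CHTN < SerialNumber <= SHTN."""
    shtn = max((r["SerialNumber"] for r in server_rows.values()), default=0)
    changed = {pk: r for pk, r in server_rows.items()
               if chtn < r["SerialNumber"] <= shtn}
    return shtn, changed

def client_apply(client_rows, changes):
    """P19: update an existing record by primary key, otherwise insert it."""
    for pk, rec in changes.items():
        client_rows[pk] = dict(rec)   # idempotent upsert: safe to repeat

# One round trip with illustrative data:
server = {"A1": {"v": 1, "SerialNumber": 1},
          "A2": {"v": 5, "SerialNumber": 3}}
client = {"A1": {"v": 1, "SerialNumber": 1}}
chtn = 1

shtn, delta = server_changes_since(server, chtn)
client_apply(client, delta)
chtn = shtn   # advance the watermark only after a successful round trip
```

Because the upsert is idempotent and CHTN advances only at the very end, a broken transmission simply causes the same interval to be replayed on the next exchange, which is exactly the recovery behavior the text describes.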
System Maintenance The software became operational under considerable deadline pressure, so it had to be delivered without proper testing. Moreover, the rules of the whole system have been changing all the time in production, and are still changing. The business rules did not change only numerically, e.g., in the percentage or the overall amount of subvention; the very model of subsidizing has been changing. To illustrate the difference between the project specification at the start and the current situation, let us just mention that the Ministry envisaged only two classes of student rights, differing only according to the student’s domicile. Students housed and supported locally by their families would enjoy the right to a single meal per day, while the others would have the right to all meals. In practice, it turned out that the students fell not within the expected two, but within 24 different classes of rights, e.g., certified sportsmen, military scholarship grantees, victims of war, etc., with combinations among these categories. Some of the required program changes can be effectuated on the server side, but due to the unavailability of a stable Internet connection, a classical separation of the business rules into their own layer on the server was impossible. Unfortunately, a considerable number of business rules had to be programmed into the client software. Therefore, a procedure had to be devised for updating the client executables as well as the structure of the local databases. The problem was solved in a very simple manner: at every data exchange, a client, which stores its own version number locally, queries the server for a new version of the program. A new version of the program might require a change in the local database structure, a specific update of client-side data and/or the registration of new custom controls.
Therefore, a small setup program must be executed upon download; it performs the aforementioned tasks and records the new version number in the client's database. The changing business rules from time to time require simpler tasks to be performed on the client's local data, which can be done by means of simple SQL statements. Therefore, a set of SQL statements can be sent together with the normal data exchange. These sets are grouped into three categories: sets executed on all clients, sets executed on each client at a particular site, and sets executed on a single machine only. In operation, a significant problem arose from unresolved ownership of, and responsibility for, the computer equipment and its maintenance. It was well known in advance that after a certain number of printed bills, read magnetic stripes and so forth, the equipment would have to be replaced. The proper reaction was lacking until it became obvious that the whole system was endangered.
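The three categories of maintenance SQL sets can be dispatched with a simple filter on the receiving client. A sketch in Python, where the scope tags, field names and set layout are illustrative assumptions rather than the system's actual format:

```python
import sqlite3

def run_maintenance_sets(conn, sql_sets, my_site, my_machine):
    """Execute the maintenance SQL statement sets that target this client.

    Each set carries a scope: 'all' runs on every client, 'site' runs on
    every client at one site, 'machine' runs on a single machine only.
    """
    for s in sql_sets:
        applies = (
            s["scope"] == "all"
            or (s["scope"] == "site" and s["site"] == my_site)
            or (s["scope"] == "machine" and s["machine"] == my_machine)
        )
        if applies:
            with conn:  # each set is applied atomically
                for stmt in s["statements"]:
                    conn.execute(stmt)
```

A client at site "PMF" on machine "pos-01" would thus execute the 'all' sets, the 'site' sets tagged "PMF", and the 'machine' sets tagged "pos-01", skipping everything else.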
Credit Card System For Subsidized Nourishment Of University Students
479
Degradation of the system would cause a significant problem for the Ministry, because the vast majority of students have accepted the X-card system as a much better and more favorable solution than the former coupons. This is partly due to the increased student rights that were introduced together with the X-card system. That was a political decision, perhaps crucial for the system's acceptance. Otherwise, it was expected that some persons who lost their privileges or their opportunities for unauthorized earnings, not to be mentioned explicitly here, would spread unrest within the student population and compromise the new system at the very beginning. The system gives complete information regarding all transactions related to the subsidized nourishment. On the system Web site, http://www.cap.fer.hr, every student can identify him or herself and, after password authentication, obtain a report on all of his or her expenditures within a specified period. We include an English translation of a sample of such a report. The same information is available to the Ministry, which can thus review and/or control spending habits; improper spending or behavior by students or cashiers can be spotted, and it remains a political decision to act accordingly. The best and quickest insight into the population's behavior is obtained by displaying the statistical distribution. The chart in Figure 7 represents the real-life distribution: the abscissa is the average daily consumption per student, expressed in local currency; the ordinate is the percentage of the student population with this average daily consumption. An anomaly at the end of the right tail reveals improper behavior by some students to whom, for certain reasons, practically unlimited consumption rights had been granted. The same information, displayed as a cumulative distribution, appears in Figure 8. The latter demonstrates even better that the students' behavior can mostly be regarded as normal.
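The two charts can be derived from per-student averages by a simple binning pass. A sketch, assuming the 5-HRK bin width suggested by the axis labels; the input data in the usage example is invented:

```python
from collections import Counter

def consumption_distribution(averages, bin_width=5.0):
    """Bin per-student average daily consumption (HRK/student) and return
    (share, cumulative share) of the student population per bin."""
    counts = Counter(int(a // bin_width) for a in averages)
    n = len(averages)
    dist, cum, running = {}, {}, 0.0
    for b in sorted(counts):
        share = counts[b] / n
        running += share
        lo = b * bin_width
        label = f"{lo:.0f}:{lo + bin_width - 1:.0f}"  # e.g. "5:9", as on the axes
        dist[label] = share
        cum[label] = running
    return dist, cum
```

Plotting `dist` gives a chart of the kind shown in Figure 7; plotting `cum` gives the cumulative view of Figure 8, in which a long right tail stands out as a late approach to 100%.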
Table 2: Overview of Expenditures from January 1-31, 2001

John Smith, 004746587
You made altogether 19 expenditures.
You spent altogether 436,01 HRK of subventions by the Ministry.
You paid altogether 155,98 HRK to the restaurant cashiers.
Overall amount (regular price): 591,99 HRK

Student's Feeder LTD., Zagreb
Bill No: 417574, 25.01.2001 10:48:28
    Rice                           4.17
    Vienna steak                  16.21
    Cake                           3.07
    Chicken soup                   3.15
    Beans salad                    1.94
    Porridge                       3.26
    Juice                          4.00
    Altogether HRK:               35,80
    Subvention by the Ministry:   26,37
    To be paid in cash:            9,43
480 Mornar, Fertalj, Kalpic & Krajcar
Table 2: Overview of Expenditures from January 1-31, 2001 (continued)

Feed the Students Inc., Zagreb
Bill No: 416369, 24.01.2001 10:45:12
    Menu 01                       16.70
    Cake                           3.07
    Juice                          4.00
    Altogether HRK:               23,77
    Subvention by the Ministry:   17,51
    To be paid in cash:            6,26
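The figures in the two sample bills are internally consistent: the item prices sum to the regular price, which in turn splits into the Ministry's subvention and the cash remainder. A quick check in Python, using the amounts from the bills above:

```python
def check_bill(items, total, subvention, cash):
    """Verify a bill: the items sum to the regular price, and the price
    splits into the Ministry's subvention plus the cash remainder."""
    assert round(sum(items), 2) == total
    assert round(subvention + cash, 2) == total

# Bill 417574 (25.01.2001)
check_bill([4.17, 16.21, 3.07, 3.15, 1.94, 3.26, 4.00], 35.80, 26.37, 9.43)
# Bill 416369 (24.01.2001)
check_bill([16.70, 3.07, 4.00], 23.77, 17.51, 6.26)
```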
[Web form: select one of the displayed time periods (current month, preceding month, or a from-to range in dd.mm.yy format) and press Dohvat to fetch the bills.]
Figure 7: Distribution of the Average Daily Consumption in November 2000
[Chart: percentage of students (ordinate, 0-15%) versus the amount of the average daily consumption in HRK per student (abscissa, 0-150 HRK in 5-HRK bins).]
Figure 8: Cumulative Distribution of the Daily Consumption in November 2000
[Chart: percentage of students with average daily consumption below the abscissa value (ordinate, 0-100%) versus the amount of the average daily consumption in HRK per student (abscissa, 0-150 HRK in 5-HRK bins).]
CURRENT CHALLENGES/PROBLEMS FACING THE ORGANIZATION
One of the main goals of the system, to minimize misuse of the right to consume subsidized meals, was not achieved completely. Shortly after the system was introduced, some X-cards became available on the black market, sold by students who do not really need them because they regularly eat at their parents' homes. This kind of misuse could easily be spotted at the restaurants, but the cashiers have no time to check a consumer's identity against the photograph. Strict identity control would also be humiliating for the vast majority of the student population, who are not prone to fraud. Nevertheless, for the time being, the number of such misuses and the damage done are considered negligible compared to the cost of hiring additional people to enforce identity control at the restaurants. Moreover, this type of misuse demonstrates the urgent need to extend the scope of X-cards. The described abuse will most probably vanish as X-cards become indispensable for other student activities, such as identification at examinations, in libraries, in public transportation and services, etc. Misuse of a lost card is also possible to a certain limited extent, because some of the restaurants work offline, causing a delay of several hours between marking a card inactive and downloading this update to the clients. Offline work at the faculties required an enormous additional amount of effort to maintain data consistency, because transactions made offline frequently had to be cancelled afterwards (e.g., a request for an X-card when another X-card had already been requested or issued at another faculty) or even modified, because of a specific request of the investor that each student get a unique ID which includes the ID of the faculty where he/she first enrolled. Most of the bugs and problems that we experienced could be attributed to the offline work.
The improper status of the faculty administration causes problems that should be resolved after implementation of the Information System for Higher Education. The software component of this information system is currently being developed by developers belonging to the same group as the authors of this case. A local version of this software has already been in use for nearly ten years at the Faculty of Electrical Engineering and Computing in Zagreb (Kalpic & Mornar, 1994).
It might be interesting to mention that the Ministry recently attempted to reduce the subvention cost by limiting the consumption of articles of minor nutritive value, such as sodas and some of the more luxurious cakes, which can be regarded as a kind of inappropriate consumption. They also intended to curb the consumption structure: it would not be allowed to buy, e.g., 30 packages of milk shake instead of a complete daily meal. They envisaged the introduction of vegetarian food as an option. The program version containing these restrictions has been prepared for installation, but the Government postponed these measures indefinitely, fearing student protests.
Financial Aspects
The whole project was developed under rather scarce financial conditions. Neither money nor time allowed for the development of a proper project plan prior to programming. The system complexity was heavily underestimated in both aspects, functional and technical. When it became clear that a simple online solution would not be possible, the authors did not demand the acquisition of sophisticated commercial software, but provided their own solutions to circumvent the deficiencies of the encountered information technology infrastructure and of the inexpensive programming tools applied.
Organizational Aspects
The development of the described system was assigned to a research and teaching group at the Faculty of Electrical Engineering and Computing (Kalpic, Baranovic & Fertalj, 1997). The system was designed by the "use of great designers" (Brooks, 1987) and developed by the use of "better and fewer people" (Boehm, 1983). A single person, in addition to his regular and rather heavy load of university educational activities, performed most of the system design and the crucial distributed database development. The group leader gave some support, especially in the early project phase. The system designer received help from a highly competent colleague and from an engineer experienced in Visual Basic. Two engineers in charge of the local student administration system helped in building the interface to the new system. A programmer provided substantial support throughout the development. As the system approached installation, a group of three computing engineers was established for on-site implementation and continuous maintenance; they developed most of the reports and the Web site. Table 2 shows the workload within the development team. There was also a high-level project leader, who was not a member of the development team. He took care of the contract, gave presentations and suggestions for acceptance of the system, and organized hardware acquisition as well as hardware and software maintenance. He also advised the Ministry regarding the business rules. Chart 1 shows the workload of the development team over the four years of the project. The high-level project leader is not included, nor is the constant workload of regular maintenance carried by the group of three engineers, which started in 1999.
Table 2: Workload of the Development Team

Person                                                     Time span of engagement   Workload [h]   Percentage of the whole workload
Principal system designer and developer (PhD)              27.10.1997-15.12.2000     2936           33.16
Group leader (PhD)                                         22.12.1997-04.10.2000     40             0.45
Professional help (PhD)                                    20.11.1997-10.05.1999     27             0.30
Development engineer (BSc)                                 27.10.1997-23.12.1998     1524           17.21
Two engineers, local student administration system (BSc)   17.11.1997-02.11.1998     10             0.11
Programmer                                                 23.10.1997-29.11.2000     4318           48.76
Sum                                                                                  8855           99.99
Table 3: Distribution of Activities

Activity                 Workload [h]
Programming & Design     4592
Maintenance              1592
Miscellaneous            960
Users                    490
Organization             327
Education                248
Testing                  223
Installation             170
Meetings & Discussion    167
Documenting              86
Sum                      8855
CONCLUSION
Although not perfect, the system described in this case now works stably. Most of the trouble during the introductory phase can be attributed to the offline work and to the unclear and volatile definitions of the business rules. At the moment of writing this case (January 2001), the program for data collection had been installed at 122 high or higher education institutions. Data had been collected for 153,300 students, and 121,400 students had been photographed, producing 3.05 GB of BLOB data. The POS program had been installed in 35 places, where 12,552,000 bills containing 51,665,000 items were produced in 27 months of work. All of the records still reside in the database for yearly reporting, but a shift to data warehousing and archiving of old records is planned.
Chart 1: Workload Over Time
[Bar chart: the development team's workload in hours (0-5000) in each of the four years of the project.]

FURTHER READING
Interested readers can obtain rather comprehensive live information about the X-card system and its organization from the site http://www.cap.fer.hr. However, this information is only in Croatian, because the site is aimed at the local students. Some information in English is available from the other parties involved in this project: the Ministry of Science and Technology (http://www.znanost.hr), the Faculty of Electrical Engineering and Computing (http://www.fer.hr) and the Computer Science Group at the Department of Applied Mathematics of this faculty (http://www.zpm.fer.hr).
REFERENCES
Amaru, C. (1995). SQL Server bundles replication. Datamation, 41(11), 61.
Anderson, T., Breitbart, Y., Korth, H.F. & Wool, A. (1998). Replication, consistency, and practicality: Are these mutually exclusive? Proceedings of the ACM SIGMOD International Conference on Management of Data, Seattle, WA, USA, June 1-4, 484-495.
Boehm, B. (1983). Seven basic principles of software engineering. The Journal of Systems and Software, 3(1), 3-24.
Breitbart, Y. & Korth, H.F. (1997). Replication and consistency: Being lazy helps sometimes. Proceedings of the Sixteenth ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems, Tucson, AZ, USA, May 11-15, 173-184.
Brooks, F. (1975). The Mythical Man-Month. Reading, MA: Addison-Wesley.
Brooks, F.P. (1987). No silver bullet: Essence and accidents of software engineering. IEEE Computer, 20(4), 10-19.
Clifford, J., Dyreson, C., Isakowitz, T., Jensen, C.S. & Snodgrass, R.T. (1997). On the semantics of 'now' in databases. ACM Transactions on Database Systems, 22(2), 171-214.
Comer, D.E. (1991). Internetworking With TCP/IP: Principles, Protocols, and Architecture. Prentice Hall.
Eddon, G. & Eddon, H. (1998). Inside Distributed COM. Microsoft Programming Series, Microsoft Press.
Fertalj, K. & Kalpic, D. (1999). An object based software development method. Proceedings of the 21st International Conference on Information Technology Interfaces, Pula, Croatia, June 15-18, 469-474.
Goldring, R. (1994). A discussion of database replication technology. Info DB Journal, 1(8).
Juric, R. (2000). Extreme programming and its development practices. Proceedings of the 22nd International Conference on Information Technology Interfaces, Pula, Croatia, June 13-16, 97-104.
Kalpic, D. & Mornar, V. (1994). Student administration system. European review conference proceedings "University-Enterprise Information Systems," Graz, Austria, September 15-16, 124-131.
Kalpic, D., Baranovic, M. & Fertalj, K. (1997). How to organise a university based R&D and teaching group in computing? A case study. Proceedings of the World Multiconference on Systemics, Cybernetics and Informatics, Caracas, July 7-11. Orlando, FL: International Institute of Informatics and Systemics, 2, 174-181.
Kawamura, S. (1998). Capturing images with digital still cameras. IEEE Micro, 18(6), 14.
Beck, K. (1999). Embracing change with extreme programming. Computer, October 1999, 70-77.
Beck, K. (2000). Extreme Programming Explained. Reading, MA: Addison-Wesley.
Lindley, C.A. (1994). Image acquisition using TWAIN. Dr. Dobb's Journal of Software Tools, 19(10), 76, 78-80, 98-100.
Nelson, M. & Gailly, J.L. (1996). The Data Compression Book. New York: M&T Books.
Netravali, A.N. & Haskell, B.G. (1995). Digital Pictures: Representation, Compression and Standards. New York: Plenum Press.
Pale, P., Bulat, D., Maric, I., Simicic, L. & Vujanovic, V. (1992). Concept and development of CARNet. Proceedings of the 14th International Conference on Information Technology Interfaces, Pula, Croatia, September 15-18, 265-272.
Pennebaker, W.B. & Mitchell, J.L. (1993). JPEG Still Image Data Compression Standard. New York: Van Nostrand Reinhold.
Rubin, W. & Brain, M. (1997). Understanding DCOM. PTR Prentice-Hall.
Sessions, R. (1997). COM & DCOM: Microsoft's Vision for Distributed Objects. John Wiley & Sons.
Stevens, W.R. (1994). TCP/IP Illustrated, Volume 1 — The Protocols. Addison-Wesley.
Thompson, C. (1997). Database replication. DBMS Journal, 10(5), 76.
Wallace, G.K. (1991). The JPEG still picture compression standard. Communications of the ACM, 34(4), 30-44.
Wang, Y., Damani, O.P. & Lee, W. (1997). Reliability and availability issues in distributed component object model (DCOM). Proceedings of the Fourth International Workshop on Community Networking Processing, IEEE, Atlanta, GA, USA, September 11-12, 59-63.
Williams, A. (1997). Visual Basic 5 and ActiveX controls. Dr. Dobb's Journal of Software Tools, 22(3), 74.
BIOGRAPHICAL SKETCHES
Vedran Mornar is an Associate Professor of Computer Science at the Faculty of Electrical Engineering and Computing, University of Zagreb, Croatia, where he currently teaches several graduate and undergraduate computing courses. He graduated and received his PhD in Computer Science at the same university. As a Fulbright scholar, he studied at the University of Southern California, Los Angeles for an academic year. His professional interest is in the application of operational research in real-world information systems, and in database design, development and implementation. He is an editor of the international journal Computing and Information Technology.
Krešimir Fertalj is an Assistant Professor of Computer Science at the Faculty of Electrical Engineering and Computing at the University of Zagreb, Croatia, where he received his BS, MS and PhD degrees in Computer Science. Since graduating in 1988, he has been working at the Department of Applied Mathematics, where he currently teaches several computing courses. His main research interests include software engineering, CASE tools, database design and applications development. Professionally, he develops information systems supported by database systems. He has been an information systems developer and consultant in a number of software development projects for business, industry, administration and other institutions. He speaks and writes English and German.
Damir Kalpic received his PhD at the institution now called the Faculty of Electrical Engineering and Computing in Zagreb. Since graduating in 1970, he has been employed there and currently serves as vice dean. His professional interest is the application of computers in different fields. He develops information systems supported by database systems and extended with mathematical models stemming from operational research. In this field of activity, the scientific, professional and educational group that he leads offers consultancy, education and software development for business, industry and administration. He can communicate in English, German, Italian, Spanish, French and Portuguese.
Slavko Krajcar received his PhD degree at the Faculty of Electrical Engineering and Computing, University of Zagreb, in the field of distribution network planning, in 1988. At the same institution, he is now an associate professor and the Faculty dean. He lectures in undergraduate and graduate courses in power engineering. He is the author and co-author of more than 60 R&D papers, the majority of which have been implemented in practice. He is chairman of the Executive Board of the Croatian Academic and Research Network (CARNet), chairman of the Organizing Committee of the international conference "Information Technology Interfaces" and the director of the international journal Computing and Information Technology.
Designing a First-Iteration Data Warehouse
487
Designing a First-Iteration Data Warehouse for a Financial Application Service Provider Nenad Jukic Loyola University-Chicago, USA Tania Neild InfoGrate Incorporated, USA
EXECUTIVE SUMMARY
This case study describes the efforts behind designing the first iteration of an evolutionary, iterative enterprise-wide data warehouse for AIIA Corp., a financial application service provider. The study demonstrates the importance of the following steps during a data-warehousing project: a well-defined mission, effective requirement collection, detailed logical definitions, and an efficient methodology for source systems and infrastructure development. AIIA is a financial distributor that offers separately managed account and investment products with practice management services to financial advisors, through a Web-based portal that can also be configured and private-labeled for the advisors to use with their clients. Unlike most companies, AIIA offers the advisors a hybrid of investment information and technology solutions, both designed with an open architecture.
Copyright © 2002, Idea Group Publishing.

BACKGROUND
AIIA, the company described in this case, was established on the idea of seizing changes in the following three areas of the financial industry:
1. Distribution/Channel
2. Operations (or the Business Model)
3. Manufacture (or Products)
Each is discussed here as it relates to the opportunity that the company filled.
1. Distribution/Channel. In the past ten years, there has been a substantial migration of brokers away from the institutional brokerage houses (where they turned over large commissions to their wire house) to smaller, independent shops that are fee based. There are now over 20,000 registered independent advisors (RIAs), and this new market is growing each year. While some of these advisors are grouped into regional consortiums or independent broker dealers (IBDs), the market is still relatively fragmented and distributed. Without the tools, research, and products of their former companies, the advisors have little infrastructure in place to reach and service their clients.
2. Operations (or the Business Model). The second main change was the growing acceptance of the application service provider (ASP) business model. Applications could be "leased" for use over the Web on a monthly basis. For the users, this lowers the up-front cost, reduces maintenance costs, and mitigates risk, allowing new companies to enter a market previously unreachable.
3. Manufacture (or Products). As mutual funds became mainstream, new separately managed account products became more palatable to those with $800,000 to $8,000,000 in investable assets (note that the inefficiencies of the mutual fund are not as significant for smaller investments totaling less than $800,000, and for those with more than $8,000,000 there are other advanced products available). When investors own a mutual fund, they own a slice of a fund in which, while managed according to some style or investment philosophy, the specific stocks are generally unknown. For clients with multiple investments, only transparency of the funds can ensure that they are not over-allocated to a particular stock or sector. Additionally, mutual funds have an inherent tax injustice: if one buys a fund today and tomorrow it sells a stock with a large capital gain, then he/she realizes the tax consequences of the gain without the appreciation in the asset. For an investor with substantial tax planning issues, the mutual fund is problematic. Separately managed accounts retain the efficiencies of a mutual fund, allowing the manager to pool assets together and thereby gain the same institutional transaction pricing and the ability to manage and monitor the collective assets according to a model portfolio defined by a style expert.
However, separately managed accounts also allow the investor to own, see and tailor their account to handle the particular tax and asset allocation nuances of the financial picture. Together, these three trends opened the door to a host of new companies, one of which is AIIA Corp. (see Figure 1). There are companies that provide technology/applications to the advisors, and there are others that offer separately managed account products. Some companies charge monthly for the technology, some adhere to a transaction-oriented model and others have embraced the fee-based "assets under management" approach. AIIA is designed to offer all of the comforts of the advisors' former brokerage house, both investment products and applications, through a fee-based revenue model.

Figure 1: AIIA's Separately Managed Account Market Space
[Diagram: numbered flows among AIIA, the custodian, the portfolio managers, the investors and the advisor.]

For example, if Investor I places $1,000,000 into a separately managed account with Manager X via the AIIA platform, and X buys 10 different securities in the portfolio for I, then I pays a fraction of a percentage of the $1,000,000 rather than $Y per manager transaction, as is typically the case with many well-known investment products (such as those provided by Schwab or Fidelity). The unique hybrid of technology and investment products provides "one-stop shopping" for the advisor. An interesting twist on this situation is the pivot point between the old and the new. While the independence of the advisors is new, their core needs are the same. And while the creation of separately managed accounts is new, traditional investment products are still practical for average investors. Therefore, the new company must back-fill to satisfy the older offerings while embracing the new products and technology. Typically, companies are either converting with the marketplace, going from old to new offerings, or they are new and focused predominantly on the new solution set. To handle this delicate dynamic, AIIA has formed an operational and technical glue between managers, custodians, and other traditional and relevant investment and technology providers, while building new core value-added offerings.
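The pricing difference in this example can be made concrete. In the sketch below, the 0.25% annual fee rate and the $75 per-trade charge are purely illustrative assumptions, not AIIA's or anyone's actual prices:

```python
def annual_cost_fee_based(assets, annual_fee_rate):
    """Fee-based model: a fraction of assets under management per year."""
    return assets * annual_fee_rate

def annual_cost_transactional(trades_per_year, fee_per_trade):
    """Transaction-oriented model: a flat charge per manager transaction."""
    return trades_per_year * fee_per_trade

# Investor I: $1,000,000 with Manager X, at a hypothetical 0.25% annual fee
fee_based = annual_cost_fee_based(1_000_000, 0.0025)
# versus, say, 100 manager trades a year at a hypothetical $75 each
transactional = annual_cost_transactional(100, 75)
```

Which model is cheaper for the investor depends entirely on asset size and trading frequency; the point of the fee-based approach is that the cost tracks assets under management rather than activity.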
SETTING THE STAGE
From its beginning nearly two years ago, AIIA's value proposition was its combined offering of business applications and investment products to the newly fragmented financial advisor marketplace. Therefore, the CEO and visionary of the company embraced other founders and executive members who were leaders in marketing/selling to the new advisor marketplace, in crafting leading-edge investment products or in delivering robust application solutions. With deep expertise in each area, the company's reach has grown faster than projected. While still privately held and on its C round of funding, AIIA has already attracted approximately 2,000 RIAs and 50 IBDs, has commitments for $1 billion in assets under management (AUM), and holds $9 billion in AIIA's clients' AUM. Figure 2 shows the organizational structure of the company.

Figure 2: AIIA Organization Chart
[Diagram: the CEO oversees Investments (Research, Advisory), Distribution (Sales, Marketing) and Technology (Service, Development, Operations).]

While AIIA features a distinct blend of old-world investments and cutting-edge technology, each core area of its business is data-intense. The Sales and Marketing team requires deep analytics to understand this new marketplace. The Investment officers must provide thorough research on the market, managers and products in order for the advisors or the company itself to make informed recommendations. The Operations and Technology department must integrate with advisory application offerings and interface with multiple custodians and portfolio accounting platforms, each with its own formats and methods for handling different types of accounts, transactions and products. Therefore, the mining of their data into knowledge was critical, and even before the data was collected, techniques for processing and leveraging it were considered. An open architecture was the key to the platform. As each solution was integrated or built, the ability for the solution to permit easy data exchange was at the forefront of the decision. While the extract-transform-load (ETL) process typically presents its own unique challenges for a data-warehousing project, AIIA's open architecture eliminated the usual complexities of the extraction and load parts of the process. Physical integration of the systems was a prerequisite. Flexibility and logical integration became the primary issues, allowing AIIA to concentrate on the mission of the data warehouse rather than on the "how" of the data warehouse.
CASE DESCRIPTION
Introduction
AIIA decided to draw on rich and varied data sources (both internal and external) to build a data warehouse, in order to turn information into meaningful knowledge and, in turn, to convert that knowledge into profit. AIIA's data-warehousing project complies with the definition of a data warehouse as a separate physical repository, typically maintained apart from the organization's operational databases, used for consolidating data that has been organized to facilitate analytical processing and strategic decision support (Chaudhuri & Dayal, 1997). Unlike most other information systems initiatives, a data warehouse implementation is an iterative process whereby each analytical requirement is built upon the prior system iterations. Therefore, the first iteration of the data warehouse must be both flexible, to allow future analytics to be added, and structured, to minimize development ambiguity in future iterations. This case describes AIIA's effort to develop a first data warehouse iteration that satisfies those requirements.
Identification of Data Requirements
During the identification of data requirements stage, a series of interviews was conducted with AIIA managers and employees from all of the departments shown in Figure 2 (including the CEO). The need to analyze data capturing customer interactions, as well as financial data, was repeatedly expressed by members of each department in the initial interviewing process. Therefore, a subsequent round of interviews focused on fiscal (monetary) and customer interaction management (CIM) monitoring analytics as the areas of requirement collection for the first iteration of the data warehouse. Consequently, the decision was made that the foundation of the data warehouse should be built upon the need to monitor and analyze the fiscal and CIM effectiveness of both AIIA and its advisors. Two main missions of the AIIA data warehouse were defined:
M1. Leveraging the development initiatives by pinpointing effective product and service offerings.
M2. Increasing revenue and profit margins by allowing AIIA and its clients to understand the most promising customer opportunities.
As will be shown, the project continued to refer back to these two main missions (M1, M2) throughout the various stages. Based on the missions defined above, the conducted interviews and subsequent analysis, AIIA decided initially to monitor its performance through two types of analysis:
• Monitoring and analyzing the fiscal information about AIIA's advisors and their clients'/investors' accounts (herein called Fiscal Analysis or FA). This analysis is intended for AIIA as an organization, where all advisors and accounts can be analyzed, and for the individual advisors, where an advisor can analyze only his or her clients' accounts.
• Monitoring and analyzing contacts between AIIA and its advisors, and between its advisors and their clients/investors (herein called Customer Interaction Management Analysis or CIMA).
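The FA requirement lends itself to a dimensional (star schema) layout: a fact table of account measures keyed to conformed dimensions. A minimal sketch using SQLite from Python, in which every table and column name is an illustrative assumption rather than AIIA's actual schema:

```python
import sqlite3

# Hypothetical star schema for Fiscal Analysis: four dimension tables
# and one fact table whose measures are transaction amount and revenue.
STAR_SCHEMA = """
CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, day INTEGER, month INTEGER, year INTEGER);
CREATE TABLE dim_advisor (advisor_key INTEGER PRIMARY KEY, name TEXT, region TEXT);
CREATE TABLE dim_manager (manager_key INTEGER PRIMARY KEY, name TEXT, style TEXT);
CREATE TABLE dim_account (account_key INTEGER PRIMARY KEY, account_no TEXT, product TEXT);
-- one row per transaction; each foreign key points at one dimension
CREATE TABLE fact_transaction (
    date_key    INTEGER REFERENCES dim_date,
    advisor_key INTEGER REFERENCES dim_advisor,
    manager_key INTEGER REFERENCES dim_manager,
    account_key INTEGER REFERENCES dim_account,
    amount      REAL,
    revenue     REAL
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(STAR_SCHEMA)
```

A CIMA fact table would follow the same pattern, with contact duration and mode as measures and attributes, sharing the date/time and advisor dimensions with the fiscal facts.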
In order to perform FA, the transactions, balances, and revenues associated with accounts had to be analyzed. Basic manager, product, and security information was also required. This information represented the dimensions for FA. Given that this information varies over time, the date/time dimension was also considered as another basic building block. In order to perform CIMA, the information about instances of various contacts (phone, e-mail,
Designing a First-Iteration Data Warehouse
Web site, etc.) between AIIA, advisors and investors had to be analyzed. Again, basic money manager, product and security information was required, and it represented the dimensions for CIMA. Additionally, specific information about the duration and mode of contact was also required. Of course, the number and frequency of contacts vary with time, so the date/time dimension was considered as another critical dimension.

Logical Data Definitions

The pivotal step in any data warehouse project (particularly one built off an open source architecture) is identifying and understanding the appropriate data. As the size and complexity of a data warehouse iteration can quickly become unmanageable, the goal was to integrate only the necessary data. A logical model was created to aid in the process of understanding the selected data, verifying that all of the required data was available, and disregarding the extraneous data elements. The logical model is the combination of clearly defined logical definitions and conceptual graphical diagrams (which show the relationships within the data captured by the logical definitions). The list of all logical data definitions identified, through interviews and the analysis of the existing underlying operational systems, as necessary for FA and CIMA is shown in this section. While some of the terms may appear to be common for a particular business unit, they have been described for cross-department clarity. Often terms are used loosely, and these definitions are meant to be the standard within the data warehouse system. Ambiguous or subjective definitions of basic building blocks can lead to miscommunication, erroneous data warehouse implementation and faulty information. Following is the list of the logical data definitions:
Advisor: a financial advisor who is an AIIA client. Advisors can have multiple Clients/Investors, for whom they direct Accounts to be handled by a specific Manager according to a specific Style.
Investor: an individual or organization that has one or more Accounts overseen by an Advisor.
Class: refers to a category of investment (e.g., domestic equity, fixed income, global equity, etc.).
Style: refers to the investment method of the financial Product offered by Managers (e.g., Large-Cap Growth, Large-Cap Value, Small-Cap Growth, etc.).
Manager: a financial expert whose investment Products are featured by AIIA. A Manager can offer more than one Product. Each Product is associated with one Manager. A Manager can have several Classes, and within each Class a certain Style (or styles).
Product: AIIA Advisors place investment assets in a Product. Each Product is offered by a Manager. Managers can offer more than one Product.
Account: an individual investment account owned by a single Investor. An Account is associated with exactly one AIIA Product. However, an Account can have a discrepancy with the associated Product; in other words, the list and percentages of securities in the Product and the Account do not have to match. In addition to the tax and accounting requirements, this Product customization requires that transaction-level details per Account per Security be maintained (as depicted later by the Security Transaction Fact in Figure 3).
Custodian: a financial institution that physically hosts each Account. Each Account has one Custodian, and a Custodian can host multiple Accounts.
Contact Item: the item that is the topic of a contact between AIIA and its Advisor or Investor. This item can be a Manager, Product, Security, Account or Other (e.g., a news story on the Web site, a technical question, etc.).
Item Type: a type associated with every Contact Item, indicating whether the Contact Item is a Manager, Product, Security, Account or Other.
Item Category: used to divide all Item Types (and consequently all Contact Items) into two categories: MPSA (Manager/Product/Security/Account) or Other.
Contact Mode: indicates the mode of contact (e-mail, Web, phone, etc.). Each Contact Item can be accessed via various Contact Modes.
492 Jukic & Neild
Department: indicates to which Department an AIIA Contact Handler belongs.
Sub-Department: indicates to which Sub-Department (e.g., Service), and consequently which Department (e.g., Service is a sub-department of the Distribution Department), an AIIA Contact Handler belongs.
AIIA Contact Handler: an AIIA process that supports contact between AIIA and the Advisors and/or Managers and/or Investors. For example, an AIIA Contact Handler could be a person, a Web site, or an automated phone system.
Date, Month, Quarter, Year: all members of the dimension Time/Date. A Time instance belongs to a particular Date, which belongs to a particular Month, which belongs to a particular Quarter, which belongs to a particular Year.
The logical model is illustrated at the highest level in Figures 3 and 4, which use the dimensional modeling notation given in Kimball et al. (1998). In particular, Figure 3 illustrates the principal dimensions and facts for FA, and Figure 4 illustrates the principal dimensions and facts for CIMA. The following is the description of the dimensions and facts that were identified as necessary for FA and CIMA and used in the dimensional models shown in Figures 3 and 4:
Dimension 1 – ACCOUNTS: Advisors can have many Investors, who can have many Accounts. In addition, one Account is associated with one Custodian and one Product (Custodians and Products can have many Accounts). An Advisor and Investor can have Accounts across a number of Custodians.
Dimension 2 – PRODUCTS: Managers can offer many Products. A Class of Products can have a number of Styles of Products. Consequently, a Product belongs to one Style and one Class. In addition, a Manager can offer Products of different Classes and Styles.
Dimension 3 – SECURITY: a financial security (e.g., a stock).
Dimension 4 – TIME: depicts that the Year is composed of Quarters, which are composed of Months, which are composed of individual Dates.
Dimension 5 – CONTACT HANDLERS: Departments can have a number of Sub-Departments, which contain AIIA Contact Handlers.
Dimension 6 – CONTACT ITEM: a Contact Item Category can have a number of Contact Item Types, which contain a number of Contact Items.
Dimension 7 – CONTACT MODE: Contact Items can be accessed via various Contact Modes (Web, phone, e-mail, etc.).
Fact 1 (in Support of M1) – BALANCE/HOLDING: refers to the monetary value of a certain Security within a certain Account (associated with a certain Product) at a certain Date.
Fact 2 (in Support of M2) – REVENUE: refers to the monetary value of revenue generated by a certain Account (associated with a certain Product) in a certain Quarter. The amount of revenue is calculated based on the Account Manager's fee schedule with AIIA and the balance of the account throughout the Quarter.
Fact 3 (in Support of M1) – SECURITY TRANSACTION: refers to the event of an Investor's assets being added to or removed from the balance of a certain Security (including cash) within a certain Account (associated with a certain Product).
Fact 4 (in Support of M2 as it relates to expenses) – ADVISOR CONTACT: refers to an instance of a recorded contact, which occurred at a certain Time via a certain Mode, between an Advisor and an AIIA Contact Handler regarding a certain Contact Item. This fact stores the nature of the contact (e.g., routine, emergency, positive feedback, negative feedback, etc.), the duration of the contact, and whether the Advisor or AIIA initiated the contact.
Fact 5 (in Support of M2 as it relates to expenses) – INVESTOR CONTACT: refers to an instance of a recorded contact, which occurred at a certain Time via a certain Mode, between an Investor and an AIIA Contact Handler regarding a certain Contact Item. This fact stores the nature of the contact, the duration of the contact, and whether the Investor or AIIA initiated the contact.
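The FA star schema described above can be sketched as relational DDL. The table and column names below follow the case's dimensional model, but the exact types and layout are assumptions for illustration, not AIIA's actual implementation; SQLite is used only to keep the sketch self-contained.

```python
import sqlite3

# Hypothetical DDL for the FA star schema (one fact table shown).
# Names follow the case's dimensional model; types are assumptions.
DDL = """
CREATE TABLE time_dim (
    time_key   INTEGER PRIMARY KEY,
    full_date  TEXT,
    month      INTEGER,
    quarter    INTEGER,
    year       INTEGER
);
CREATE TABLE product_dim (
    product_key  INTEGER PRIMARY KEY,
    product_id   TEXT,
    product_name TEXT,
    class        TEXT,   -- e.g., 'domestic equity'
    style        TEXT,   -- e.g., 'Large-Cap Growth'
    manager_id   TEXT,
    manager_name TEXT
);
CREATE TABLE account_dim (
    account_key    INTEGER PRIMARY KEY,
    account_id     TEXT,
    advisor_id     TEXT,
    advisor_name   TEXT,
    investor_id    TEXT,
    investor_name  TEXT,
    custodian_id   TEXT,
    custodian_name TEXT
);
CREATE TABLE security_dim (
    security_key  INTEGER PRIMARY KEY,
    security_id   TEXT,
    security_name TEXT
);
-- Fact 1: balance/holding per security, account, product and date
CREATE TABLE balance_fact (
    time_key      INTEGER REFERENCES time_dim,
    product_key   INTEGER REFERENCES product_dim,
    account_key   INTEGER REFERENCES account_dim,
    security_key  INTEGER REFERENCES security_dim,
    dollar_amount REAL,
    unit_amount   REAL
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(DDL)
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)
```

The CIMA schema of Figure 4 would follow the same pattern, with the two contact fact tables additionally referencing contact item, contact mode, and contact handler dimensions.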
Figure 3: FA Dimensional Model
Fact tables:
• F1 Balance (Holding) Fact Table: Time Key (FK), Product Key (FK), Account Key (FK), Security Key (FK), Dollar Amount, Unit Amount, …
• F2 Revenue Fact Table: Time Key (FK), Product Key (FK), Account Key (FK), Dollar Amount (calc.), …
• F3 Security Transaction Fact Table: Time Key (FK), Product Key (FK), Account Key (FK), Security Key (FK), Dollar Amount, Time of Day, Buy/Sell Flag, …
Dimension tables:
• D1 Account Dimension: Account Key (PK), AccountID, AdvisorID, Advisor Name, InvestorID, Investor Name, CustodianID, Custodian Name
• D2 Product Dimension: Product Key (PK), Product ID, Product Name, Class, Style, Manager ID, Manager Name
• D3 Security Dimension: Security Key (PK), Security ID, Security Name
• D4 Time Dimension: Time Key (PK), Year, Quarter, Month, Full Date

Figure 4: CIMA Dimensional Model
Fact tables:
• F4 Advisor Contact Fact Table: Time Key (FK), Contact Item Key (FK), Account Key (FK), Contact Mode Key (FK), Contact Handler Key (FK), Advisor ID, Time of Day, Duration, Nature, Initiated By, …
• F5 Investor Contact Fact Table: Time Key (FK), Contact Item Key (FK), Account Key (FK), Contact Mode Key (FK), Contact Handler Key (FK), Investor ID, Time of Day, Duration, Nature, Initiated By, …
Dimension tables:
• D1 Account Dimension and D4 Time Dimension: as in Figure 3
• D5 Contact Handler Dimension: Handler Key (PK), Handler ID, Handler Name, Department Name, Sub-department Name
• D6 Contact Item Dimension: Contact Item Key (PK), Contact Item ID, Contact Item Type, Contact Item Category
• D7 Contact Mode Dimension: Contact Mode Key (PK), Contact Mode ID, Contact Mode Name
Applications and User Prioritization of Data Needs

As mentioned in the introduction, AIIA's missions for the data warehousing system are:
M1. Leveraging the development initiatives by pinpointing effective product and service offerings.
M2. Increasing revenue and profit margins by allowing AIIA and its clients to understand the most promising customer opportunities.
As stated in the identification of requirements, FA and CIMA were selected as the two types of analysis that the data warehouse will provide. Therefore, in the first iteration of the data warehouse, the applications will support FA and CIMA in the context of the two stated goals (M1, M2). The following is a representative list of reports and queries that focus on this stated mission.
Fiscal Analysis

While the most common queries for FA concern accounts by advisor and balances within one account, examples of other desired FA calculations and roll-ups include:
• List the managers whose products are used by less than 10% of advisors. (M1)
• Find the top/bottom 10 most profitable accounts. (M2)
• Find the top/bottom 10 most profitable advisors. (M2)
• Find the top/bottom 10 securities by the amount of holdings in all accounts. (M1)
• Compare the holdings in all accounts between domestic and international equity for the last four quarters, and then roll it up for the whole last year. (M1)
• For each month within the past two years, list the manager whose product attracted the most newly created accounts. (M1 & M2)
• Compare revenue generated by advisors from different territories. (M2)
• Compare the list of the 10 most profitable advisors with the list of the 10 advisors whose accounts have the highest cumulative AIIA transactional cost. (M2)
• List the top 15 securities by the amount of holdings across all accounts. (M1)
In addition to the AIIA internal analysis (as illustrated by the examples listed above), individual advisors will be able to perform FA as well, within their own accounts. Individual advisors will be able to get answers to queries such as:
• For each month within the past two years, list the manager whose product attracted the most of my investors' accounts. (M1)

Customer Interaction Management Analysis

While the most common queries for CIMA concern contacts by advisor and by investor, examples of other desired CIMA calculations and roll-ups include:
• Compare the list of the 10 advisors who make the most phone calls to AIIA with the list of the 10 advisors who make the fewest phone calls to AIIA. (M1)
• Find the top 10 products that generate the most contacts by Advisors. (M2)
• Calculate and compare the ratio of contacts via phone vs. contacts via the Web for advisors for each of the past six months.
(M2 as expenses relate to profit)
• Find out which day of the month, for each of the last 12 months, had the most phone-call contacts. (M2)
The AIIA data warehouse will also allow for combined FA-CIMA calculations and roll-ups, such as:
• Compare the list of the 10 advisors with the highest cumulative duration of incoming phone contacts with the list of the top 10 most profitable advisors, for each of the past four quarters. (M2)
• Calculate and compare the ratio of contacts via phone vs. contacts via the Web for the top 10 revenue-producing advisors for each of the past six months. (M2)
The examples of analysis listed within this section demonstrate the power of the data warehouse (such analysis would not be available on an on-demand basis without a data warehouse) and build a case for the project. It is at this stage (once the possibilities for analysis are illustrated) that most of the constituencies within the organization realize the value of the data-warehousing initiative.
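Queries of this kind are straightforward roll-ups over the fact tables. As a minimal illustration, the following sketch answers a query in the spirit of "find the top most profitable accounts" against a toy revenue fact table; the schema and the sample figures are invented for illustration and are not AIIA data.

```python
import sqlite3

# Toy revenue fact table with invented figures, used to illustrate a
# top-N profitability roll-up of the kind listed in the case.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE revenue_fact (
    account_key INTEGER, quarter TEXT, dollar_amount REAL)""")
conn.executemany(
    "INSERT INTO revenue_fact VALUES (?, ?, ?)",
    [(1, "2001Q4", 1200.0), (2, "2001Q4", 3400.0),
     (3, "2001Q4", 800.0),  (2, "2001Q3", 2900.0)])

# Top 2 accounts by total revenue across all quarters
top = conn.execute("""
    SELECT account_key, SUM(dollar_amount) AS revenue
    FROM revenue_fact
    GROUP BY account_key
    ORDER BY revenue DESC
    LIMIT 2""").fetchall()
print(top)  # [(2, 6300.0), (1, 1200.0)]
```

The other roll-ups in the list follow the same shape: group by a dimension attribute (advisor, security, month), aggregate a fact measure, then order and limit.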
Data Warehouse Project Scope and Implementation

Regardless of the logical data definitions or applications, a data warehouse is only as good as the data loaded into it. A data warehouse reflects the data loaded into it: if complete and clean data is loaded, the data warehouse will be complete and clean; if not all of the required data is loaded, or if incorrect data is loaded, the data warehouse will be incomplete and incorrect. As with all information systems, the old adage "garbage in, garbage out" applies to data warehouses as well. A prerequisite for any data warehouse is complete and correct data. Potential data sources (shown in Figure 5) were identified. Given that AIIA is only two years old, its systems were designed and implemented to be open. Therefore, the ETL (extract, transform, load) process was reduced to mostly the "T" step, in which data files were converted from one layout to another. Also, given that the data repositories were relatively new, that dedicated database and system monitoring capabilities were in place, and that sound database methods (foreign keys, integrity constraints, domain value restrictions) were used, the source data was of high quality. In addition, given the nature of the financial market and the need for accuracy of the investment data, the Operations department was made responsible for daily reconciliation and cleaning of the underlying data sources, eliminating the need for cleaning the data during the ETL process. The following describes the content of the underlying data sources.
Account Management Database: contains all necessary account-related fiscal information (e.g., information about accounts, investors, securities, balances, transactions, custodians, generated revenues, etc.).
Investment Research Database: contains all product-related fiscal information (e.g., information about products, managers, etc.).
Customer Interaction Management System: the FAQ MGMT database contains information about FAQs (Frequently Asked Questions) and is one of the sources for the Contact Item class. The Web Traffic, Contact Center, and Contact Management databases are used as sources for each of the three fact tables for CIMA, as well as sources for the Contact Item and Contact Handler classes. The Financial Planning system contains financial planning and asset-allocation applications that state investor demographic objectives.
Market Research: contains news and market information presented to AIIA's clients via a Web site as a part of the service provided. This is one of the sources for the Contact Item class.
Even though some of the underlying data sources were not yet implemented at the beginning of the data warehouse design process, AIIA felt that this was not detrimental to the process of conceptually designing the data warehouse. In fact, the synchronized, simultaneous design processes for the data warehouse and its underlying operational systems mutually influenced each other into adopting standardized design approaches. Consequently, this reduced the amount of time and effort needed for the implementation phases of the data-warehousing project. While the underlying source data was clean, there was still a need to convert the data from one file format to another. Many of the sources overlap in content. For example, there are multiple custodial data feeds that all provide account transaction and balance information. After a couple of custodial formats were translated into the data warehouse format, the others followed a similar logic and were relatively easy to convert. In other cases, some of the sources, like the contact center source, covered unique sets of information, and the transformation logic was distinct. Regardless of the breadth or overlap of the source data, ETL tools (provided by Sagent, Informatica and InfoGrate) were used to streamline and hold the metadata and transformation logic.
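The "mostly T" step described above can be sketched as a simple layout conversion. The feed fields (acct, sec, side, amt, trade_date) and the warehouse column names below are assumptions invented for illustration; the real custodial feeds and the Sagent/Informatica/InfoGrate tooling are not reproduced here.

```python
import csv
import io

# Hypothetical custodial feed in one layout; field names are assumed.
CUSTODIAL_FEED = """acct,sec,side,amt,trade_date
A-1001,IBM,B,5000.00,2001-11-05
A-1001,MSFT,S,1250.50,2001-11-05
"""

def transform(feed_text):
    """Convert one custodial CSV layout into warehouse-format rows."""
    rows = []
    for rec in csv.DictReader(io.StringIO(feed_text)):
        rows.append({
            "account_id":    rec["acct"],
            "security_id":   rec["sec"],
            "buy_sell_flag": "BUY" if rec["side"] == "B" else "SELL",
            "dollar_amount": float(rec["amt"]),
            "full_date":     rec["trade_date"],
        })
    return rows

warehouse_rows = transform(CUSTODIAL_FEED)
print(warehouse_rows[0]["buy_sell_flag"])  # BUY
```

Each additional custodial format would get its own small mapping of this kind, which is why, as the case notes, later formats were relatively easy to add once the first couple were in place.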
The primary key to a successful data warehouse is the ability to use the combination of hardware and multi-tiered servers in any one of a number of ways and to keep the configuration as flexible as possible over time to address the changing profile of the data warehouse. When source data changes, warehouse views need to be maintained so that the two remain consistent (Labio et al., 1999). In addition, as the warehouse continues to expand, both business needs and the requirements relating
Figure 5: The Data Sources. The AIIA Fiscal-CRM Data Warehouse is fed by: the Customer Relationship Management sources (Web Traffic, Contact Center (call, Web), Contact Management, FAQ MGMT, and Financial Planning); Market Research; Investment Research; and Account Management (Statement ODS plus custodial feeds, including FID., State Street, Dreyfus, Charles Schwab, DST, and CUST).
to the technological infrastructure will continue to change as well. Addressing this change through the implementation of an open-system, multi-tiered format is critical to the success of the data warehouse. For example, one of the standard configurations for a Decision Support System (DSS) application calls for a database server coupled with a "fat" client running a DSS presentation and query tool, which may or may not connect back to an intermediary query-processing server or "engine." However, AIIA opted for a newer approach: creating applications based upon browser-type technologies, to reduce the load on the client workstations and provide open connectivity to the data warehouse. This type of application requires a Web server, a query engine and a database server between the client and the data warehouse, and it provides the computational component, leaving the client as a display device. A three-tier data warehouse architecture was developed. Figure 6 illustrates the three-tier architecture with three shades of gray. The dark gray tier contains the presentation services, through which users access the data; these services include the user interface. The middle tier contains the processing services, in which large user requests and data manipulation are executed. The light gray tier contains the data services, in which the data is stored and maintained.
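The view-maintenance concern raised above (Labio et al., 1999) can be illustrated with a toy example: rather than recomputing a materialized balance from scratch whenever source data changes, a delta is propagated into the stored aggregate. This is purely an illustrative sketch of incremental maintenance, not AIIA's actual mechanism.

```python
from collections import defaultdict

# Toy incremental view maintenance: a materialized per-account balance
# is updated by applying source-side deltas, instead of being rebuilt.
class BalanceView:
    def __init__(self):
        self.totals = defaultdict(float)   # account_id -> balance

    def apply_delta(self, account_id, change):
        """Propagate one source-side change into the materialized view."""
        self.totals[account_id] += change

view = BalanceView()
view.apply_delta("A-1001", 5000.0)   # initial load
view.apply_delta("A-1001", -1250.5)  # later source update
print(view.totals["A-1001"])         # 3749.5
```

Real incremental maintenance must also handle deletions, late-arriving facts and concurrent loads, which is where the performance issues studied by Labio et al. arise.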
CURRENT CHALLENGES/PROBLEMS FACING THE ORGANIZATION

The scope of this case study was to present AIIA's efforts during the conceptual design phase for the first iteration of the enterprise-wide data warehouse. Successful completion of this phase enabled AIIA to proceed with its data-warehousing project with a clear vision of the future benefits and the efforts required. The stages subsequent to the design phase involved their own complex and labor-intensive issues, but due to the successful completion of the conceptual phase described in this case study, the potential for a "wasted effort and extra expense" scenario (Mattison, 1996) was minimized and the implementation phase was straightforward.
Figure 6: AIIA's 3-Tier Data Warehouse Architecture. The presentation tier consists of user PCs in Investment Management, Technology/Operations, and Sales/Marketing, along with PCs and laptop computers connected via the Web. The process tier consists of the application server, and the data tier consists of the data server hosting the Fiscal-CRM Data Warehouse.
Currently, the AIIA data-warehousing system is used by the initial core of users. Most of these initial users were, in some fashion, involved with the creation of the system. Therefore, they were quite familiar with the system from its inception, and they did not require formal training. Since the user population is expected to grow significantly and expand outside the self-reliant core, one of the pending tasks for AIIA is developing an adequate end-user education and support strategy. Some preliminary steps addressing that issue, such as developing education-focused documentation, have already been undertaken. Another challenge facing the organization is maintaining the data warehouse and managing its growth. Because AIIA is a relatively new company in which the data warehouse was developed in parallel with the operational systems, the likelihood of changes and additions to the structure of the systems underlying the data warehouse is higher than in the typical data-warehousing project, where the data warehouse collects data from mature systems. Consequently, structural changes in the data warehouse itself are probable. In acknowledgment of this fact, AIIA has delayed the transition from the data-warehousing development team to the data-warehousing growth and maintenance team. This transition will eventually involve downsizing the number of members devoted to the project, and at this point AIIA feels that this step would be premature. Finally, AIIA still has to evaluate the accomplishment of the stated missions for the warehouse (detecting effective product and service offerings and understanding the most promising revenue opportunities) in order to determine whether an appropriate and justifiable ROI has been attained. This evaluation will be done gradually, once the system has been used for an amount of time significant enough to evaluate its impact (or lack thereof).
FURTHER READINGS
Adamson, C. & Venerable, M. (1998). Data Warehouse Design Solutions. New York: John Wiley & Sons, Inc.
Agosta, L. (2000). The Essential Guide to Data Warehousing. Prentice Hall.
Barquin, R. & Edelstein, H. (1997). Building, Using, and Managing the Data Warehouse. Prentice Hall.
Bischoff, J. & Alexander, T. (1997). Data Warehouse: Practical Advice from the Experts. Prentice Hall.
Inmon, W.H. (1996). Building the Data Warehouse. New York: John Wiley & Sons, Inc.
Inmon, W.H., Welch, J.D., & Glassey, K.L. (1997). Managing the Data Warehouse. New York: John Wiley & Sons, Inc.
Inmon, W.H., Rudin, K., Buss, C.K., & Sousa, R. (1999). Data Warehouse Performance. New York: John Wiley & Sons, Inc.
Mattison, R. (1996). Data Warehousing: Strategies, Technologies and Techniques. McGraw-Hill.
REFERENCES
Chaudhuri, S. & Dayal, U. (1997). An Overview of Data Warehousing and OLAP Technology. SIGMOD Record, 26(1), 65-71.
Kimball, R., Reeves, L., Ross, M., & Thornthwaite, W. (1998). The Data Warehouse Lifecycle Toolkit. New York: John Wiley & Sons, Inc.
Labio, W., Yang, J., Cui, Y., Garcia-Molina, H., & Widom, J. (1999). Performance Issues in Incremental Warehouse Maintenance. Technical Report, Stanford University.
Mattison, R. (1996). Data Warehousing: Strategies, Technologies and Techniques. McGraw-Hill.
BIOGRAPHICAL SKETCHES Nenad Jukic is an Assistant Professor at the Information Systems and Operations Management Department of the School of Business Administration at Loyola University Chicago. He received his BS in Electrical Engineering and Computer Science from the University of Zagreb, Croatia. He received his Master’s and Ph.D. degrees in Computer Science from the University of Alabama. His research has focused on the areas of database management, e-business, data warehousing, and systems integration. Tania Neild is President and Director of Research and Technology at InfoGrate Incorporated, a data integration tools and services provider. With a National Physical Science Consortium 6-Year Full Doctorate Scholarship, she graduated from Northwestern University with a Ph.D. in Computer Engineering, with a concentration in heterogeneous database integration. She earned her Master of Computer Sciences from the University of Maryland where she concentrated in software specification, and her BA from Emory University, majoring in mathematics and computer science.
Reengineering the Selling Process in a Showroom Jakov Crnkovic State University of New York at Albany, USA Goran Petkovic and Nebojsa Janicijevic University of Belgrade, Yugoslavia
EXECUTIVE SUMMARY

The case presented chronicles the reengineering efforts of a small Yugoslavian showroom wholesaler. Following an initial period of success, the company subsequently became unable to deliver the promised level of quality and service. A team of consultants was engaged, who recommended business-process reengineering in order to help improve performance. The strategy they devised for the company involved replacing functional specialists with case managers. While the strategy was successfully implemented, it was not followed by appropriate changes in information technology, thus limiting the effectiveness of the entire process. The goals of this case are threefold. The authors seek: 1) to help the reader understand the current situation; 2) to develop a swift fix strategy; and 3) to outline tactical and strategic plans for future development. Readers will be able to review several working prototypes1 of information subsystems designed to support the suggested reengineering process.
BACKGROUND

The background of this case is the rapidly changing business environment in Yugoslavia2 during the period of transition towards a (relatively) free-market economy. Recent changes in the business environment have led to new opportunities for individual investment and real possibilities for entrepreneurship. We hope that this case will be of interest to a variety of readers, not only because of renewed interest in the region3, but also because the company discussed in it is a small wholesaler operating in the showroom business setting. There are many similar organizations all over the world, particularly in Eastern Europe and the Far East. By increasing export-import revenues, these relatively small enterprises are helping the global economy become truly global. The case describes Wissol, a Yugoslavian company that has continued growing despite turbulence in both the economy and the geographical region. The case describes several aspects of
the firm, ranging from organizational structure, human resources and information systems to warehousing, local (onsite) and situational logistics, transportation and distribution.
Yugoslavian Trade (1990-2001)

The legacy of a planned, centralized economy, along with hyperinflation and a huge economic crisis during the 1990s, helps account for the obsolete structure of Yugoslavian trade. Small companies predominate, and their number is exceedingly large in wholesaling. Thus, in 1994, there was a 3:1 ratio between the number of wholesalers and retailers, yet the total sales volume in wholesaling was only 1.6 times greater than in retailing. Small wholesalers do not have sufficient market power, adequate sales and warehousing capacities, or satisfactory professional knowledge of trade. However, in the newly opened private sector, market laws have started to work and competition has greatly increased. New companies seek to become dominant in smaller geographical areas or in more specialized consumer-goods markets ("niches"). The most interesting situation can be observed in the packaged food and consumer goods market. Companies in these markets base their business on showroom sales and are among the most important types of wholesalers. Globally, these companies are reminiscent of the catalog and showroom retailers that, together with discount houses, hindered the development of traditional department stores over the last 50 years4. However, there are several key characteristics that distinguish these merchants from wholesalers and showroom retailers in developed markets. The following characteristics should also be taken into account in order to more fully understand the context and the problems new wholesalers face during periods of abrupt growth:
• New wholesalers have small capacities. They do not pursue real cash-and-carry trade, since they lack sufficient space for both supplies and the free traffic of buyers.
• The showroom represents a breakthrough solution and involves a smaller facility for exhibited samples.
Buyers select merchandise on the basis of exhibited samples, while the logistics department collects the ordered goods and prepares them for delivery.
• Unlike retailers, wholesalers do not go to the expense of printing catalogs.
• A showroom usually contains a sales department. The salespersons provide information to customers, help them with their order forms, or even fill out the order forms themselves.
• Aside from the salespersons, there are two more types of functional specialists involved in order processing: administrative clerks, who prepare the documentation, and logistics clerks, who prepare an order for delivery.
• Unlike their Western European counterparts, showroom merchants in Yugoslavia may assume the additional basic functions of a distributor. In the beginning, this happens on an ad hoc basis; in time, however, constant transportation lines are established. There is only a rudimentary level of specialized delivery for local markets (expensive and often unreliable)5. Government-owned post offices are traditionally inefficient and do not deliver bigger parcels door-to-door. Foreign companies like UPS, FedEx and DHL charge high tariffs and will only take parcels designated for export.
• A phone-ordering system is being developed in order to better serve repeat customers. This happened spontaneously at first; it was later formalized when certain salespersons were appointed to receive telephone orders. The idea of phone ordering, however, is not well established among the general population; a good, inexpensive 800-number system is lacking.
• The Yugoslavian market has also been characterized by shortages of selected merchandise due to economic sanctions. As a consequence, demand overheated when articles became available. Since the import/export business was so limited, there was ample room for the black market and "gray" economy.
• The banking system has also had serious problems.
All credits and even credit-card purchases were virtually eliminated during the early nineties, but started to resurface by the end of the decade.
• Diversification develops in the course of performing additional ancillary functions. Certain wholesalers finance production or retailing. Others offer to arrange merchandise in retailers' facilities. Distributors for foreign companies enter into joint promotional activities or gather and distribute market information6.
The increasing number and strength of small, independent retailers and catering services primarily accounts for the increase in showroom wholesalers. Showroom wholesalers in Yugoslavia cater to buyers who would typically be served by cash-and-carry merchants in more developed economies. The share of small and independent retailers is likely to remain significant, at least for a time7. Evidence from Western Europe suggests that small merchants and catering services will represent an important market for the emerging showroom merchants in Yugoslavia during the next stage of economic development.
SETTING THE STAGE

Big Problems for Small Wholesalers

Local merchants, organized by territories8, dominate the distribution channels. The four most significant types of merchants are:
• Wholesalers – mostly specialized in particular commodity groups.
• Retailers – local supermarket chains which, in turn, supply local merchants.
• General merchandise wholesalers – these mostly use showrooms as the point of sale; they frequently play the role of producers for tax reasons, or of retailers in order to collect cash more promptly. Wissol belongs to this group.
• Producers – those who sell their own goods and the goods they receive as compensation9 for their products.
The Wissol Company is privately owned and has been in business since 1993. It leases space for a showroom, offices and warehouses (a typical choice for showroom businesses around the globe). In 1997, Wissol had 51 regular employees and around 80 part-time employees, most of whom worked in the warehouse performing simple manual tasks. By the beginning of 2001, Wissol had more than 155 employees, making it one of the top competitors in the Yugoslav packaged food market. Table 1 (see Appendices) lists some of these competitors, a small number of which achieve higher sales volume than Wissol.
When the firm was created, it was organized along the lines of a retail showroom company. The showroom model is very popular in many businesses, including jewelry showrooms, car showrooms (very popular overseas and gaining popularity in the U.S.) and specialized shops for home/office remodeling (e.g., “plumbing showrooms”). The showroom model has also become increasingly popular in the wholesale sector and on the WWW. Some of the more widely known retail showroom chains, like Service Merchandise or Luria’s, are currently going through a restructuring process.
Luria’s, one of Florida’s leading catalog showroom retailers, is shifting its format from showrooms to special discount stores, specifically “superstores.” It is even discontinuing its annual catalog, a major symbol of the retail showroom business.
Wissol adopted a functional means of unit grouping on the basis of work assignment and job similarity, resulting in the following business functions: purchasing, sales, warehousing, arranging, administration, and finance and accounting. The functional model of unit grouping corresponds well to Wissol’s age, size and business concept in the present stage of its lifecycle. Yet its organizational structure, presented in Figure 1 (see Appendices), must be altered in order to eliminate a number of intractable problems.
Wissol’s information systems operate at the transaction processing level and incorporate key elements of management information systems (more or less typical for small, new companies). The accounting system appears to be somewhat better developed than the other systems, particularly those supporting marketing and decision-making.
502 Crnkovic, Petkovic & Janicijevic
Complete IS support is outsourced. Note that the IS functions are not shown in Figure 1 even though, with the current level of design, they should fit under the Finance and Accounting department heading.
Wissol’s sales method is based on selecting goods and filling orders in the showroom. The typical buyer is either a caterer or a small retailer whose store area varies from 500 up to 5,000 sq. feet (known in the U.S. as “convenience stores”). Using bar codes and quantities, Wissol’s administrative personnel prepare and print invoices and delivery lists in the showroom. The documents are then transferred to the warehouse, where the buyer must collect the goods. The collected merchandise is then loaded into the buyer’s vehicle (not always the same day) or shipped by a company truck when deliveries are scheduled in the buyer’s neighborhood.
If clients can count on prompt delivery, they may also decide to use the added convenience of telephone ordering. Employees in the sales department receive buyers’ orders by phone, oversee the preparation of delivery lists and invoices by administrative personnel and transfer the documents to the warehouse. This procedure is identical to the one used in the showroom; the only difference is that the goods must be loaded onto Wissol trucks and transported to buyers (transportation is partially outsourced).
Aside from the showroom, Wissol also sells its merchandise through sales representatives (outside or “field” sales). This especially applies to selected imported merchandise; these field sales cover a much larger territory than the showroom’s gravitational area. Figure 2 presents Wissol’s fundamental business processes. Our research focuses on showroom sales (“inside sales” in Figure 2), yet at this point it should be emphasized that, due to active growth in the first and third business processes, the pace of showroom sales is slowing down.
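The showroom procedure described above is a strictly sequential chain of handoffs in which any downstream error (an out-of-stock item, a wrong product code) sends the customer back to the start. The following minimal sketch (hypothetical step names and function, not Wissol's actual system) makes that restart-on-error behavior explicit:

```python
# Hypothetical sketch of the original showroom order flow described in the case.
# An error discovered at any downstream step forces a restart of the whole chain.

STEPS = [
    "select merchandise and write order",       # customer + salesperson
    "prepare and print invoice/delivery list",  # administrative clerk
    "carry documents to warehouse",             # administrative clerk
    "collect merchandise in warehouses",        # collectors / storekeepers
    "control and load merchandise",             # controller
]

def process_order(order, fails_at=None):
    """Run an order through the chain; count full passes until it succeeds."""
    attempts = 0
    while True:
        attempts += 1
        for step in STEPS:
            if step == fails_at:
                break          # e.g., goods out of stock or a miscoded product
        else:
            return attempts    # every step succeeded on this pass
        fails_at = None        # assume the error is corrected before the retry

print(process_order("order-1"))                                           # 1
print(process_order("order-2", fails_at="control and load merchandise"))  # 2
```

Even a single late failure doubles the customer's trip through the chain, which is exactly the "P" (problem) loop the case identifies.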
CASE DESCRIPTION

Initial Challenges Facing the Firm

During Wissol’s first few years in business, the growth of its sales volume was dramatic. The convenience of selling through showrooms (areas over 3,000 sq. feet), together with a well-balanced assortment of merchandise (around 6,000 SKUs), favorable prices, a delivery service and a phone-ordering system garnered Wissol popularity among small merchants. After this initial success, however, the firm was not able to maintain the level of service promised at the start; it was not able to keep up with its own success. The owner (and general manager) decided to engage two consultants from Belgrade’s Business School Research Institute10. Their initial diagnosis (problem assessment) was based on the vast difference between organizational models for retail and wholesale showroom businesses; other problems stemmed from the very rudimentary level of information technology support. The suggested cure was a general reengineering of the organization and of the information flow. Following a case presentation at the Institute, research began on a complete redesign of Wissol’s business and information systems.
[Figure 2: Fundamental processes in the firm. The original diagram is a matrix mapping the fundamental processes (1. Stock and assortment providing; 2. Showroom selling, i.e., inside sales; 3. Outside sales) and the support processes (human resource management, financial flows management, information flows management) onto the organizational units involved: buying, showroom, sales representatives, warehouse and transport.]
The increase in sales volume caused additional organizational difficulties, felt most strongly in the showroom and the warehouses. The showroom receives documentation for both standard customer orders and phone orders simultaneously; the temporal concentration of orders in busy periods posed a special problem. Additionally, increased buying followed increased sales, and the storage of goods became more complicated and time-consuming. The same warehouse workers, using the same in and out doors and corridors, filled invoices and stored goods; they were organized into two shifts whose hours overlapped substantially. The workers were assigned to designated warehouses. The goods are sorted by origin and content and are stored in the following warehouses: chemical goods; dry packaged goods; beverages; frozen goods. Each warehouse manager controls his own crew and is responsible for the supplies in his warehouse.
The warehouses became incapable of processing the received orders. This was especially dangerous because clients in the showroom or in front of the warehouses waited longer than they had previously; the time needed for invoice printing and merchandise collection doubled. The basic operations (i.e., showroom and warehouse activities) are presented in Figure 3. Certain errors or problems with documents can be catastrophic; they can cause both clients and employees to wait nervously and needlessly. In fact, if an error occurs or the merchandise listed in the invoice is out of stock (for whatever reason), the buyer is forced to return to the showroom and repeat the same series of actions. These arrangements caused personnel to expend a maximum amount of effort11. A great deal of redundant work (e.g., shifting documents from one department to another) and weak error prevention caused jams in client processing12.
The most serious problems were:
• Unorganized reception of customers in the showroom: customers enter the showroom by themselves and a salesperson “chases” after them. An accounts receivable clerk, meanwhile, “chases” debtors. On occasion, administrative personnel are in the process of printing out an invoice when the accounts receivable clerk intervenes. As a result, the salesperson is “blocked,” serving a debtor whose order cannot be processed.
• Customers who come alone and order goods in the showroom generally stay there too long. They spend most of the time waiting to collect their invoices; they then proceed to the warehouse.
• Telephone orders (the customer must have ordered previously or be able to describe his or her needs, since there are no catalogs) are very time-consuming for salespersons. Irritated customers are left waiting while salespersons walk from shelf to shelf, receiving orders from absentee buyers.
• Problems connected with ordering out-of-stock merchandise are particularly bad. Customers frequently fill out orders for goods on display in the showroom but missing from the warehouse; this indicates bad coordination. A customer who receives a printed invoice for nonexistent merchandise must repeat the entire process; naturally, this provokes a great deal of dissatisfaction.
• Sometimes products are coded incorrectly: customers or sellers write down an erroneous product code in the showroom. The wrong goods are collected and, if the customer does not want them, the invoice must be destroyed and the transaction must start all over again.
• There are no catalogs. Even customers who know exactly what they want have to complete order forms in the showroom, instead of completing forms in advance.
• Problems occur when insolvent customers try to place an order.
• There is neither organized collection nor delivery of goods. Delivery is held up while the customer waits needlessly in the warehouse.
• There is neither centralized, uniform logistics management nor adequate transportation management. Since transport is not organized in the logistics department, logistics costs and transport duration are both greatly increased.
The most serious problems stem from the fact that the selling process involves far too many specialized organizational components. Within each business function, the tendency is to complete a discrete portion of the work for one client, perform it entirely and then hand the client off to the next functional specialist. Administrative clerks in the showroom tend to conclude the selling process and the preparation of documents, after which the client proceeds to the warehouse. The current sequence of steps can cause difficulties not only during the selection of goods but even after delivery. There is an inefficient division of labor between salespersons and those responsible for invoices in the showroom. The reverse arrows, labeled with the letter P (for Problem) in Figure 3, illustrate this problem: they indicate that a customer must start all over again if an error is found in the documents. To accelerate the traffic of clients through the showroom, customer solvency checks are performed simultaneously (indicated by the arrow and letter S in Figure 3). Ideally, however, these should be sequential rather than concurrent actions. The consequences of the present system are lost time for salespersons and embarrassing public scenes in the showroom. An additional difficulty is caused by diversification, i.e., performing new functions in the distribution channel (transportation for customers). Such growth could overwhelm an already burdened logistics department and result in greater confusion and delay. Each organizational sector has attempted to solve these problems by itself. As Figure 3 indicates, they are all active participants in a single order preparation process. Yet no one person is clearly designated to oversee order processing from start to finish, which causes additional delays for irate customers.
Suggested Steps Towards a Solution: Reengineering

Merely restructuring or reorganizing the showroom or warehouse could not have solved the problems beleaguering Wissol; broader functional changes were needed. Those changes could have been achieved either by rationalizing the order preparation and fulfillment process or by process reengineering. From the IS point of view, there were obvious basic problems in the areas of inventory management, the client database, HR and transportation system management.
In order to survive, small Yugoslav wholesalers must learn how to achieve a sustainable competitive advantage over rivals. Business process reengineering is a viable solution, since it enables companies to achieve a competitive edge by working from a process organizational perspective. A majority of authors agree that, under turbulent conditions, a company achieves sustainable competitive advantage in the market by developing superior processes, not structures. A company can be understood as a network of processes; each process, in turn, can be seen as a sequence of activities that transform input into output. By adopting this view, Wissol was able to change its organizational perspective from function-oriented to process-oriented. This change was part of a broader shift of the organizational paradigm from “strategy-structure-systems” to “purpose-processes-people.”
This was not, however, the end of reengineering at Wissol. The radical redesign of business processes also implies changing the fundamental assumptions upon which these processes are based. Assumptions are made about the goal of the process, its users and the technology destined to be used in the process. The inefficiency of business processes frequently stems from the fact that they are based on obsolete assumptions, which must be identified and altered before meaningful changes can be made.
Since organizational culture consists of many such assumptions, changing organizational culture becomes an integral part of the reengineering process. Alterations in the content and course of activities and operations may thus cause changes not only to individual jobs but to the whole organizational structure as well. The organizational structure is adapted by creating organizational units that encompass the entire business process, not just selected steps. Although this often threatens the achieved level of specialization, the advantages of structuring on the basis of processes are manifold. We might also add that motivation tends to improve when employees’ tasks become more varied, complex and, as a result, more satisfying.
[Figure 3: Showroom selling process (primary design as the firm started); employee titles are in bold in the original. The flow is: the customer enters the showroom; the customer contacts the salesperson (salesperson in the showroom); selecting the merchandise and writing the order (showroom salesperson); transferring the order to the invoice department (showroom salesperson), with a solvency check (accounts receivable clerk) performed simultaneously (arrow S); invoice and delivery list filling (administrative clerk); carrying documents to the warehouse (administrative clerk); receipt and distribution of the delivery lists to the respective warehouse parts (warehouse manager); collecting the invoiced merchandise in warehouses 1 through 4 (collector/storekeeper); controlling the merchandise (controller), with missing goods listing (controller); loading the invoiced merchandise (collector/storekeeper/controller); transportation and delivery (controller and driver). Reverse arrows labeled P mark points where an error forces the customer to start over.]
Changes in business processes will not be possible without changes in the planning and control system, in the performance measurement system and, most importantly, in information systems. Information systems play an enabling role in reengineering; they directly influence the assumptions upon which business processes are built.
The reengineering process at Wissol followed the program of changes proposed by the authors13 and based on the existing literature in the field of reengineering. The process consists of eight steps and one additional proviso, viz., that the process must be closely managed from start to finish.
Steps 1 and 2 are: start from the top, and get the strategy right.
Step 3 is: identify core business processes. At Wissol, reengineering was initiated to help improve the ORDER PREPARATION AND FULFILLMENT PROCESS. It was designed on the basis of around 10 basic assumptions (defined in Step 3), a majority of which were proven to be false in Steps 4 and 5. The following are examples of these assumptions:
• It is neither necessary nor possible to contact customers the moment they enter the showroom.
• The customer solvency check is a “shameful thing” and should be performed without the customer’s knowledge.
• A customer cannot select goods and prepare an order by himself/herself; a salesperson must do it for him/her.14
• The salesperson receiving an order by phone must first locate the merchandise, and then read its code.
Step 4: Develop deep process knowledge. The basic assumptions noted in Step 3 led to serious problems in order preparation and fulfillment. Among the more severe were:
• The customer solvency check is performed simultaneously rather than sequentially (after the sales process). The solvency check is not always performed very discreetly15.
• Salespersons are very busy because they must prepare the entire order for each customer; customers must try to “catch” a salesperson.
• Salespersons walk around the showroom with headsets in order to read product codes; they waste time and irritate customers waiting for their orders to be prepared.16
Step 5: Identify opportunities for improvement. Recommendations are outlined below; compare these with the assumptions identified in Step 3 and the observations made in Step 4.
• The process of recognizing customers upon entrance to the showroom can be very short.
• The solvency check is a regular and legitimate act, which can be performed in an unobtrusive way.
• A customer is capable of selecting merchandise and preparing an order form himself/herself (with the assistance of a salesperson only if necessary).
• Salespersons can prepare a phone order simply by checking the stock list on the computer screen, without seeing the product on the shelf17.
• Customers can prepare an order ahead of time and come to the showroom to have it processed and picked up; otherwise, they can order over the phone.
Based on these steps, a new process of order preparation and fulfillment was designed (Figure 4). This leads us to Step 6: Identify world-class, best-of-breed and customer requirements. Readers are invited to expand on these ideas, particularly from the IS/IT point of view.
1. Buyers’ reception and the solvency check. Upon entry, the showroom manager greets buyers and directs them towards an unoccupied sales representative in the showroom. In this way, the showroom manager acts as a host in the Wissol showroom; he/she is the first point of contact for buyers. The showroom manager also performs solvency checks (in cooperation with an accounts receivable clerk) and intervenes when necessary18.
2. Selecting the goods and order preparation.
This activity is performed in three possible ways: by phone19, using previously prepared catalogs20 and directly in the showroom.
[Figure 4: The reengineered business process: 1. Buyers’ reception and the solvency check (showroom manager); 2. Selecting the goods and order preparation (salesperson); 3. Preparing an invoice and transferring it to the warehouse (salesperson); 4. Preparing the merchandise delivery lists in certain warehouses (logistics manager); 5. Collecting the merchandise by specification (logistics crew); 6. Printing the final invoice and solving the problems (logistics manager); 7. Loading and transportation of the merchandise (logistics crew).]
2.1 Orders received by phone are taken by a “sitting” salesperson who has at his/her disposal a computer containing the stock price list and product codes; he/she does not walk around the showroom. He/she sends the prepared invoice by computer to the logistics manager in the warehouse.
2.2 Buyers who know exactly what they want can be offered previously prepared forms. Buyers complete the forms before they come to the Wissol showroom. When they arrive, they immediately give the forms to a “sitting” sales representative21 for processing.
2.3 Most orders involve a direct selection of merchandise in the showroom. For this type of ordering, automatic code scanning with a laser gun could be introduced. When customers come to Wissol, they receive a laser gun to scan the codes of the merchandise while walking around the showroom. By pressing a button on the device, the customer scans the code of the desired goods, enters the quantity on the bar code device and thus prepares an order automatically. In an unobtrusive way, salespersons accompany buyers while they stroll around the showroom, offering help in handling the laser guns and providing information about the merchandise. If buyers do not want to, or cannot, manage the laser gun, a salesperson handles it for them.
3. Preparing an invoice and transferring it to the warehouse. Salespersons still prepare the initial invoices22, yet these are no longer printed in the showroom. They are sent electronically to the central warehouse, to the logistics manager’s terminal. The logistics manager phones (or sends an instant message to) the showroom when the merchandise is ready for pick-up.
4. Preparing the merchandise delivery lists in certain warehouses. On the basis of invoices, delivery lists are prepared and printed in a specific warehouse. The logistics manager or his assistant performs this function.
Warehouse workers (controllers, collectors and transportation workers) are organized into three-member teams rather than by warehouse23.
5. Collecting the merchandise by specification. The warehouse work teams go from one warehouse to another and collect the merchandise specified in the invoice. Once the client’s merchandise has been collected, the team leader reports to the logistics manager to discuss problems or necessary improvements, e.g., changes to the current invoice.
6. Printing the final invoice and resolving any disputes. When the merchandise has been collected, the logistics manager prints the final invoice and delivers it to the buyer. If any error has occurred, the logistics manager will proceed to the showroom to resolve the problem with the customer.
7. Loading and transportation of the merchandise. The logistics manager entrusts the transportation manager with the loading and transportation of goods after the final invoice is delivered to the buyer. If buyers transport their own merchandise, the transportation manager organizes the loading of goods into the buyer’s vehicle; otherwise, transportation takes place according to schedule.
The next two steps of the reengineering process, Step 7: Create a new process design and Step 8: Implement the new process, will be discussed together in the next section; they are currently being implemented (late 2000).
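The redesigned flow moves the solvency check to the front, replaces paper handoffs with an electronic invoice transfer and gives each half of the process a single owner. The sketch below (hypothetical names and data; not Wissol's actual system) models that division of responsibility:

```python
# Hypothetical sketch of the reengineered flow: solvency is checked first,
# stock is verified on screen before invoicing, and invoices travel
# electronically from the showroom owner to the fulfillment owner.
from dataclasses import dataclass, field

@dataclass
class Order:
    customer: str
    lines: dict                     # product code -> quantity
    status: str = "received"
    log: list = field(default_factory=list)

def prepare(order, solvent_customers, stock):
    """Order preparation, owned by the showroom manager."""
    if order.customer not in solvent_customers:
        order.status = "rejected"   # insolvency caught before any selection
        return order
    # The "sitting" salesperson checks stock on screen instead of walking shelves.
    missing = [c for c, q in order.lines.items() if stock.get(c, 0) < q]
    if missing:
        order.status = "needs revision"
        order.log.append(f"out of stock: {missing}")
        return order
    order.status = "sent to logistics"  # electronic transfer, no paper carried
    return order

def fulfill(order, stock):
    """Order fulfillment, owned by the logistics manager."""
    if order.status != "sent to logistics":
        return order
    for code, qty in order.lines.items():  # one team collects across warehouses
        stock[code] -= qty
    order.status = "final invoice printed"
    return order

stock = {"4001": 10, "4002": 0}
ok = fulfill(prepare(Order("shop-A", {"4001": 2}), {"shop-A"}, stock), stock)
print(ok.status)   # final invoice printed
```

Because out-of-stock and insolvency problems are caught during preparation, the restart loops of the original design never reach the warehouse.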
CURRENT CHALLENGES

The redesign of order preparation and fulfillment was built upon completely new assumptions; this meant that changes were also necessary in a) the organizational structure and b) the company’s IS.
Reengineering the order preparation and fulfillment process at Wissol necessitated certain changes in the organizational structure of the company. The most significant changes in structure and job design were:
• Creating the new position of showroom receptionist.
• Drawing up separate job descriptions (and training requirements) for “sitting” (in front of a terminal or PC) and mobile salespersons.
• Appointing logistics and transportation managers capable of overseeing the entire goods collection and transportation processes.
• Changing the job description of warehouse workers: instead of being specialists (controllers, collectors, carriers), they become universal workers, organized by teams rather than warehouses and given more flexible hours.
• Eliminating warehouse managers’ responsibility for the collection and delivery of merchandise; they are responsible only for organizing, receiving24 and keeping goods in the warehouse.
• Architecturally restructuring warehouses in order to form three separate zones in each: reception, warehousing and delivery.
As a result of these changes, Wissol’s organizational structure moved closer towards the process model; positions of authority and control in management were established. The showroom manager oversees the order preparation process in the showroom, while the logistics manager controls the entire order fulfillment process. Furthermore, by creating logistics teams, the entire process of merchandise collection is streamlined and rationalized. Wissol management, with the help of the reengineering team, was able to help employees abandon those assumptions identified as a source of problems.
The first step was to make employees aware of these assumptions; since most of them were subconscious or implicit, they could not be changed until they were first exposed and acknowledged to be problematic. In the meantime, alternative ideas consistent with the envisioned changes in corporate culture were proffered. The corporation’s new vision of order preparation and fulfillment was successfully communicated to employees, who accepted the new assumptions upon which the redesigned process was based.
The most challenging step was building IS support. It should have been established during the initial stages, but it was not. The complete IS function is outsourced, including maintenance of the existing applications and systems development. Currently, Wissol has more than 40 networked PCs (using the Novell NetWare 4.1 network operating system); the majority of programs run under the DOS database package Clipper (with very limited networking options). Maintenance and small improvements are added on a daily basis. The software company is aware that, given Wissol’s current technology, it cannot improve Wissol’s information system. It suggested that Wissol switch to Oracle (yet even the software company in charge is not quite ready for this step, and it wanted to use this job as a test case). This would require a major overhaul of the company’s hardware, but significant improvements in IS support would certainly result. Unfortunately, the owner has not given his blessing; he mistakenly believes that IS cannot be profitable. His assumption is based on a typical return on investment (ROI) perspective and an inability to appreciate intangible benefits. He further doubted the software company’s ability to install Oracle (for the first time ever) at Wissol.
With this case, the authors are trying to help solve this problem by building a prototype of a comprehensive IS (in Access, Java or VB) which, if properly designed, could help persuade the CEO. The system can be demonstrated in a local area network environment. The next logical step, after placing a good inventory and accounting system on the local area network, will be to add selected options for implementation on the Internet. Typically, this will start with a Web page covering basic marketing ideas. Another approach would be to build a new IS using an intranet, which would be useful if the company decided to go into e-commerce. Strategic decision-makers consider this a likely possibility. Yet, given the current level of infrastructure, they expect Wissol to enter into B2B endeavors with vendors and foreign partners before getting into e-business with local customers. The wholesale firm provides a great model for developing e-commerce B2B applications.
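The core of any such prototype is an inventory and invoicing database that refuses to issue an invoice for out-of-stock goods, the single failure that caused the worst delays in the old process. The sketch below (an illustrative schema with hypothetical product data, not Wissol's actual system or the students' Access prototypes) shows the idea with an in-memory SQLite database:

```python
# A minimal sketch of the kind of LAN inventory/invoicing prototype the text
# proposes; schema and data are illustrative assumptions.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE product (code TEXT PRIMARY KEY, name TEXT, price REAL, on_hand INTEGER);
    CREATE TABLE invoice (id INTEGER PRIMARY KEY AUTOINCREMENT, customer TEXT);
    CREATE TABLE invoice_line (invoice_id INTEGER REFERENCES invoice(id),
                               code TEXT REFERENCES product(code), qty INTEGER);
""")
con.executemany("INSERT INTO product VALUES (?,?,?,?)",
                [("4001", "flour 1kg", 0.60, 120), ("4002", "sugar 1kg", 0.75, 0)])

def create_invoice(customer, lines):
    """Refuse the invoice up front if any line exceeds stock on hand."""
    for code, qty in lines:
        row = con.execute("SELECT on_hand FROM product WHERE code=?", (code,)).fetchone()
        if row is None or row[0] < qty:
            return None                  # out of stock: caught before printing
    cur = con.execute("INSERT INTO invoice (customer) VALUES (?)", (customer,))
    inv = cur.lastrowid
    for code, qty in lines:
        con.execute("INSERT INTO invoice_line VALUES (?,?,?)", (inv, code, qty))
        con.execute("UPDATE product SET on_hand = on_hand - ? WHERE code=?", (qty, code))
    return inv

print(create_invoice("shop-A", [("4001", 10)]))   # 1
print(create_invoice("shop-B", [("4002", 5)]))    # None
```

Because the stock check and the stock decrement live in one routine, the showroom terminal and the logistics manager's terminal always see consistent quantities on the shared database, which is the main argument such a demonstration would make to the owner.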
FURTHER READINGS
Armistead, C. & Rowland, P. (1996). Managing Business Processes. New York: John Wiley & Sons.
Bowersox, D.J. & Cooper, M.B. (1992). Strategic Marketing Channel Management. New York: McGraw-Hill.
Crnkovic, J. & Holstein, W.K. (1995). Information systems: Necessity or luxury in changing economies. Journal of Information Systems, 5, 119-135.
Crnkovic, J. & Petkovic, G. (1998). New trends in retailing: E-commerce. Nova Trgovina, 50(1-2), 116.
Crnkovic, J., Holstein, W. & Mohan, L. (1998). Designing EIS: Three cases from retailing. In Management and Marketing in the Global Environment, 391-406. Belgrade: NICEF Press.
Crnkovic, J. (1999). EIS building blocks in e-business. In Strategic Issues in Transformation of Big Companies in the Global Environment, 33-40. Belgrade: NICEF Press.
De Looff, L.A. (1997). Information Systems Outsourcing Decision Making: A Managerial Approach. Hershey, PA: Idea Group Publishing.
Gaston, S.J. (1997). Getting the Right Systems at the Right Price. New York: John Wiley & Sons.
Holstein, W.K. & Crnkovic, J. (1999). EIS design issues: The special case of e-commerce. In IRMA 10th International Conference Proceedings. Hershey, PA: Idea Group Publishing.
Johannessen, J., Olsen, B. & Olaisen, J. (1997). Organizing for innovation. Long Range Planning, 30(1).
Kalakota, R. & Whinston, A.B. (Eds.) (1997). Readings in Electronic Commerce. Reading, MA: Addison-Wesley.
Knights, D. & Willmott, H. (Eds.) (2000). The Reengineering Revolution? Critical Studies of Corporate Change. London: SAGE.
Maddison, R. & Darnton, G. (1996). Information Systems in Organizations: Improving Business Processes. London; New York: Chapman & Hall.
Mason, J.B., Mayer, M.L. & Ezell, H.F. (1991). Retailing (4th ed.). Homewood, IL: Irwin.
Pearlson, K.E. (2001). Managing and Using Information Systems. New York: John Wiley & Sons.
Tilanus, B. (Ed.) (1997). Information Systems in Logistics and Transportation. Oxford; Tarrytown, NY: Pergamon.
Whalen, T. & Wright, D. (1999). Business process reengineering for the use of distance learning at Bell Canada. Annals of Cases on Information Technology: Applications and Management in Organizations, 1. Hershey, PA: Idea Group Publishing.
ACKNOWLEDGMENTS
The authors selected several good IS prototypes designed by their students; special thanks go to those who put in extra effort to improve their initial solutions: Irena Khachatryan, Adam Rubin, Rachelle Freedman, Kim Goldberg, Inderjit Kaur and Mike Dixon. The authors would also like to thank Mehdi Khosrow-Pour (ACIT senior editor) and the review panel for their insight and helpful suggestions.
APPENDIX

Table 1: The major competitors in packaged food wholesaling in Yugoslavia. Data for January-June 1997; all amounts are expressed in thousands of Yu-dinars. The index expresses goods turnover relative to Wissol (Wissol = 1.00).

Company | Goods turnover | Index | Total turnover | Profit | Net worth | Total assets | Number of employees | Type of activity
C-MARKET | 483,219 | 5.27 | 493,849 | 37 | 198,296 | 378,738 | 3,227.21 | Retailing and wholesaling
PEKABETA | 310,107 | 3.38 | 311,047 | 148 | 274,779 | 342,654 | 1,727.76 | Retailing and wholesaling
RODIĆ M&B | 257,903 | 2.81 | 259,541 | 1,895 | 26,235 | 146,659 | 923.30 | Wholesaling and production
DELTA | 120,559 | 1.31 | 120,931 | 4,534 | 21,117 | 196,335 | 279.24 | Wholesaling
IMPEX-PROMET | 105,378 | 1.14 | 104,879 | 1,626 | 15,435 | 42,032 | 66.00 | Retailing and wholesaling
WISSOL | 91,697 | 1.00 | 92,951 | 2,038 | 2,765 | 36,109 | 51.00 | Wholesaling and production
JUGOŠEĆER | 81,386 | 0.89 | 81,464 | -822 | 27,044 | 100,030 | 35.39 | Wholesaling
DISKOMERC | 74,298 | 0.81 | 74,487 | 1,138 | 9,009 | 30,323 | 458.79 | Wholesaling
GRANEXPORT | 63,197 | 0.69 | 91,015 | 800 | 98,339 | 164,786 | 86.49 | Retailing and wholesaling
VRBAS | 61,516 | 0.67 | 61,516 | 24 | 33,467 | 59,219 | 309.50 | Wholesaling
Figure 1: Wissol's Organizational Chart (general manager; support staff manager; commercial department; finance and accounting department manager; wholesale manager; sales department with sales manager; purchasing with purchasing manager; warehouse with warehouse manager; and staff positions including wholesale clerk, purchasing clerk, assistant, accounts receivable clerk, accountant, administrative clerk, discount book clerk, collector, invoice clerk, controller, showroom operator, store keeper, forklift driver, transportation worker and driver)
Reengineering the Selling Process in a Showroom
511
ENDNOTES
1 Over 100 juniors at the School of Business, University at Albany, New York (SUNY), working in groups, have developed 23 prototypes in Microsoft Access. Three of them have been improved and are available to readers who adopt the case.
2 According to the laws and business practices of the former socialist Yugoslavia, ownership of small retail shops, small job-shops, farms, restaurants, etc. was permitted only if the number of employees was five or fewer (not counting members of the immediate family). A true market economy first became possible in the mid-1980s, and Yugoslavia was poised to become one of the leading new economies in the region. After a promising start, progress was halted because of erroneous political decisions, which resulted in war and the imposition of international sanctions. Though the economy has been devastated, there are still entrepreneurs with interesting ideas, modest investments and vision. The company discussed in this case was established in 1993. The consultancy team was formed in 1997 and worked well into 2000. After recent elections (October 2000), it was expected that Yugoslavia would be reintegrated into international (e.g., UN, IMF) and regional (European) organizations.
3 We are presenting this case in 2001, now that there is a chance for lasting peace along with renewed interest in investment and development in the region.
4 Berman, B. & Evans, J.R. (1995). Retail Management. Englewood Cliffs, NJ: Prentice Hall.
5 The authors suggested this type of business expansion plan to the Yugoslavian Railway system in 1998, but management declined. Yugoslavian Airlines has door-to-door service, but companies like Wissol do not need air transport to cover small geographical areas.
6 Sanctions have been lifted, and more import (and eventually export) activities are anticipated.
7 Dawson, J. (1998). Trade and competition orientated developments in the EEA. In B. Maricic & G. Petkovic (Eds.), Management and Marketing Challenges in Global Environment. Belgrade, Yu: NICEF Press.
8 We estimate Wissol's area of gravity to be around 5,000 square miles.
9 A compensation business exists because of high inflation or when a firm's financial assets are frozen; goods are traded between companies rather than sold on the market. The idea is based on barter arrangements in foreign exchange (made by mutual agreement), but on a domestic level companies frequently end up with unwanted merchandise or even spare parts. The general premise is that it is better to get something rather than nothing, but the companies affected are not the ones making the decisions. Typically, compensation schemes are endorsed by politicians relying on justifications like "the common good" or "higher interests."
10 At the same time, Wissol's owner was the main source of information for the consultants. In addition, researchers visited the company many times, received documentation and conducted interviews with employees prior to and during the re-engineering process. Business cooperation between the owner and the Institute had two peak points. The first was in 1997-98, when researchers from the Institute performed systems analysis and designed the reengineering process, which was implemented in 1998. An IS was not implemented at that time. After two years (at the beginning of 2000), Wissol called back the consultants and decided to focus on IS and information technology. This second call was the main reason why the authors wanted to share the case with readers of this Journal.
11 There is very high unemployment across Europe, and particularly in Yugoslavia. There is a tendency to keep any job (to go the "extra mile" if needed) and to learn the business (to pick up "tricks") in order to be able to go into business for oneself (particularly in the service industry).
12 During one of the authors' visits to the company, they spotted one very busy person making calls and working at a desk in the showroom for several hours. When asked who this efficient employee was, the manager answered that this person was not an employee, but an unhappy customer making business calls while waiting for his order to be straightened out.
13 See Hammer, M. & Champy, J. (1993). Reengineering the Corporation. London: Nicholas Brealey; and Hammer, M. & Stanton, S. (1995). The Reengineering Revolution. London: Harper Collins.
14 The existence of an IS/IT model (at the level of transaction processing) is assumed.
15 Ideally, this should be done prior to filling an order (a good IS must exist).
16 Please compare this assumption to the current practice of any retail or wholesale showroom in your area.
17 If the firm's IS has the MIS or DSS level.
18 Several other options are available. If there is a waiting line, operations management theory shows that using one line (customers wait for the first available salesperson, with or without being issued "serving numbers") is more efficient than using multiple lines. If there is a good IS and the customer feels comfortable using it, he/she can make an initial invoice on the terminal provided (security issues must be resolved in advance).
19 Possible expansion to e-business.
20 There are various opinions about printing catalogs. They are expensive because they have a very limited life span. Wissol sells approximately 6,000 articles and has around 1,500 customers; it would need to print at least 2,000 catalogs, and the cost of doing so would not be trivial. Making on-line catalogs is a better solution, but it is a serious programming and data-entry task. In discussion with the consultants, Wissol's owner made an interesting point: given the current market situation, it may not be advisable to advertise that you actually have "hard to find" articles in stock.
21 The showroom receptionist or manager should be able to help expedite this process.
22 Customer data should be added automatically onto the invoice after retrieval from the Customer table (if a good database system is in place).
23 A similar retail practice is used at Sears (USA), making the waiting time for pick-up less than a minute.
24 If supported by a good IS, this should not be a problem.
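The single-line claim in endnote 18 can be illustrated with a small simulation. The arrival and service rates, the two-clerk setup and the function name below are arbitrary illustrative values, not data from the Wissol case:

```python
import random

def simulate(n_customers=20000, arrival_rate=1.8, service_rate=1.0,
             servers=2, shared_queue=True, seed=42):
    """Average waiting time for n_customers served by `servers` clerks.

    shared_queue=True : one line, the first free clerk takes the next customer.
    shared_queue=False: each customer picks a clerk's line at random and stays.
    """
    rng = random.Random(seed)
    free_at = [0.0] * servers              # time at which each clerk becomes free
    t, total_wait = 0.0, 0.0
    for _ in range(n_customers):
        t += rng.expovariate(arrival_rate)       # next arrival time (Poisson process)
        service = rng.expovariate(service_rate)  # this customer's service duration
        if shared_queue:
            k = min(range(servers), key=lambda i: free_at[i])  # earliest-free clerk
        else:
            k = rng.randrange(servers)           # random line, no jockeying
        start = max(t, free_at[k])
        total_wait += start - t
        free_at[k] = start + service
    return total_wait / n_customers

one_line = simulate(shared_queue=True)
many_lines = simulate(shared_queue=False)
print(f"single shared line : {one_line:.2f}")
print(f"one line per clerk : {many_lines:.2f}")
```

Under heavy load the pooled line produces a markedly lower average wait than independent per-clerk lines, consistent with standard queueing results.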
BIOGRAPHICAL SKETCHES
Jakov (Yasha) Crnkovic is an Associate Professor of Management Science and Information Systems at the Business School, State University of New York at Albany, New York. He received Ph.D., MS and BS degrees from the University of Belgrade, Yugoslavia, with additional studies in Switzerland and Great Britain. His major research interests are in Decision Support Systems, System Simulation and IS/IT Education. He has published more than a dozen textbooks and over 60 journal and conference papers, and has participated in many research and consulting projects in the area of IS/IT applications for various businesses and governmental organizations in Yugoslavia and the U.S. E-mail:
[email protected]

Goran Petkovic is an Associate Professor at the Faculty of Economics, University of Belgrade, Yugoslavia. He received Ph.D., MS and BS degrees from the University of Belgrade, with postdoctoral studies in Great Britain and France. His research interests are in the areas of Decision-making, Marketing and Management of firms in the trade industry, Strategic Marketing and Human Resources Information Systems. He has authored or co-authored seven books and over 20 journal and conference papers. His consulting activities include marketing, management and organization of retail, wholesale and production companies in Yugoslavia. E-mail:
[email protected] Nebojsa Janicijevic is an Associate Professor, Department of Management, Faculty of Economics, University of Belgrade, Yugoslavia. He received Ph.D., MS and BS degrees from the University of Belgrade, with additional studies in Boston, USA. His research interests are in the areas of Organization and Management, Entrepreneurship and Human Resources. He authored or coauthored several books and many journal and conference papers. His consulting activities include organization, management and HRM for many companies in Yugoslavia.
Leveraging IT and a Business Network 513
Leveraging IT and a Business Network by a Small Medical Practice
Simpson Poon, Charles Sturt University, Australia
Daniel May, Monash University, Australia
EXECUTIVE SUMMARY
Although many medical information technologies require significant financial investment and are often out of reach of small medical practices, it is possible, through careful alignment of IT and customer strategy together with a network of strategic alliances, to exploit IT effectively. In this case we present a small cardiology consultancy that engaged in strategic planning in its attempt to leverage IT expertise to attain competitive advantage. We propose that, through a network of alliances, a relatively small medical enterprise can benefit from its limited IT investment. The case study indicates the importance of a team of champions with both IT and medical knowledge, and of the notion of mutual benefit. We also discuss some of the issues faced by all participants in this alliance relationship. The objectives of this case are to provide readers the opportunity to:
1. Discuss how a small medical practice can leverage skills, expertise and opportunities within a strategic alliance to enhance its competitive advantage without heavy up-front financial investment.
2. Explore how small businesses in the professional and knowledge-based industries can gain strategic advantage through IT.
3. Understand the pros and cons of strategic alliances, and potential issues related to building trust, consolidating relationships among members and managing the risks of such alliances on an ongoing basis.
4. Consider the plausibility of business transformation by moving from one industry (specialised cardiology services) to another (medical informatics).
Copyright © 2002, Idea Group Publishing.

BACKGROUND
Although specialised cardiology was often considered a fairly exclusive profession, it was not without competition. In specialised medical consultancies, medical technology was playing an
increasingly critical role in determining the future success of their operations. Across all dimensions of health care, technology held the promise of enabling clinicians to provide more accurate diagnoses and engage more effectively in the clinical process (Achrol, 1997; Slack, 1997; van Bemmel and Musen, 1997). In addition to medical technology, information technology (IT) had also become an important support for medical professionals seeking to be more effective. IT applications in the medical field varied greatly, the most commonly seen being office information systems (Collen, 1995). Such systems helped to achieve effective information management in hospitals, clinics and other medical establishments. Based on the principles of management information systems, office information systems could support effective retrieval and cross-referencing of medical records, including patient histories, past treatments and prescriptions, among other functions. Although they played an important role, these systems often worked in isolation and were not effectively integrated with other medical and clinical technologies to support better decision-making (Slack, 1997). Additionally, significant investment in cost and effort had to be made in the purchase or development of IT infrastructure, and in its deployment, maintenance and administration (Charvet-Protat and Thoral, 1998; Lock, 1996; Staggers and Repko, 1996). In general, a small medical practice1 faced the challenge common to many small-and-medium enterprises (SMEs): it did not have the resources or financing of a large enterprise to enable it to expand and compete at will (Eden et al., 1997). Thus, while IT was a powerful tool, investing in IT and related technology could also be a significant burden when there was insufficient business volume to justify the investment. However, despite its cost, IT could be the fulcrum upon which the SME's competitive advantage was leveraged.
Some examples of IT as leverage:
(a) human resources: an alternative to hiring additional administrative staff was to let staff perform their own scheduling, supported by commonly accessible group diaries;
(b) time management: electronic questionnaires could be used to elicit information from patients prior to consultation, enabling clinicians to focus on key diagnoses and reducing the pressure of increasingly limited consultation times;
(c) patient management: electronic systems could reach into the home of the patient to further improve the clinical management process and empower the patient; and
(d) customer service: clinicians could offer enhanced service to customers, providing more timely and informative diagnoses, and access to records and information.
SETTING THE STAGE
With advances in medical imaging, bioinformatics and advanced diagnosis techniques (van Bemmel and Musen, 1997), there was vast potential for IT to further enhance the quality of health care and to deliver medical services to the patient on demand, independent of location and time (Mitchell, 1999). To achieve this vision, it was essential to have a strategy that effectively integrated the various aspects of health informatics, such as information management, diagnostic technology and patient management. Figure 1 illustrates a conceptual framework of how these components come together. In the following sections, we present the case of a small medical practice specialising in echocardiology that had created a strategic vision integrating health services, technology and marketing, fuelled by the use of IT and medical technologies and coupled with an innovative and entrepreneurial mindset. By considering the medical practice as a service entity driven by knowledge as its primary resource, this cardiology practice had embraced an IT infrastructure, and planned a knowledge management (KM) infrastructure, as a competitive tool to achieve its strategic objectives. In doing so, it had encountered challenges that were a function of its size, and sought to overcome them while retaining the flexibility and advantages of being a small enterprise.
CASE DESCRIPTION Eastern Cardiology Services (ECS)2 was established in 1997 by Dr. Jeff Curtin, a cardiology specialist with expertise in the subspeciality of echocardiography. After working in the U.S. for over
Figure 1: An Integrated Framework for IT, Medical Technology & Patient Management (information management, diagnostic technology and patient management combine to support e-health services, remote diagnosis, remote patient support, continuity of care, decision-making and health maintenance)
Figure 2: The Organisational Chart of ECS (Dr. Jeff Curtin, Director and Consulting Cardiologist; two medical technicians; three administrative assistants; one IT specialist)
a decade in consultancy and research, Dr. Curtin returned to Australia for personal reasons. Having been involved with the local public hospital and a group of cardiologists at a private hospital, Dr. Curtin saw an opportunity to create a specialist consultancy in cardiology and echocardiography. Although there were existing providers of such services, Dr. Curtin perceived an opportunity to create a niche by exploiting his expertise in echocardiography, adopting new models of patient management and business practice, and using state-of-the-art medical and information technology. At the time of this write-up, Dr. Curtin was the only qualified echo-cardiologist at ECS. He was assisted by two medical technicians who were qualified to operate all the diagnostic equipment. In addition, ECS had a few administrative assistants and an IT specialist, a graduate of a local e-commerce research centre with which Dr. Curtin had collaborated earlier. As a small practice entering an existing market, ECS needed to create a competitive advantage that would distinguish it from its competitors. Dr. Curtin saw that this could be achieved through the effective use of medical and information technology in its service delivery, at a breadth and depth not undertaken by the competitors. However, the main constraint on this strategy was the lack of in-house IT expertise at ECS. Dr. Curtin addressed this by partnering with an interstate university, leveraging the expertise of an associate professor and a doctoral student who could use ECS as a subject for industry-based research. ECS, jointly with the university, had applied for financial support to work on IT projects. In this way, ECS was able to take advantage of IT expertise (in the doctoral
student), not only to aid in maintaining the infrastructure but also in formulating strategic IT direction as ECS evolved. Figure 3 illustrates the stakeholder relationships between ECS and its alliance members, and their respective interests. For the alliance to work effectively, a clear understanding of the mutual benefits to the different stakeholders was important. From the perspective of the interstate university, it was important to build a research profile that included industry collaboration. ECS provided such opportunities through the involvement of the doctoral student (working in-residence), funding contributions and participation in research grant applications. Similarly, the local e-commerce research centre was interested in developing industry-based research links. Pursuant to this, an Honours-year undergraduate project team had been working with ECS on prototyping an Interactive Intelligent Questionnaire (see Figure 5), an adaptive electronic questionnaire that formed part of ECS's strategic IT initiative. ECS was also exploring other follow-up projects with its allies. Through these existing connections, the network of relationships had grown to involve a local research centre in electronic commerce (e-commerce) and a large interstate advanced technologies and health research centre. By combining these relationships, ECS had been able to gain access to expertise that would otherwise fall outside the domain of its core competence. More importantly, it was the sharing of funding opportunities and joint publicity that had created a win-win situation. For example, as the Australian Research Council (www.arc.gov.au), a national research funding body, was focusing on linkage research between universities and the commercial sector, such an alliance network would be ready to take advantage of it.
CURRENT CHALLENGES/PROBLEMS FACING THE ORGANISATION After considering how network alliances could contribute to the success of ECS, it is now important to reflect on how to align IT and strategic orientation to achieve internal success within ECS.
Figure 3: Stakeholder Relationships in the Alliance (Eastern Cardiology Services at the centre, seeking research performance, industry-based research and industry-based partnership; an interstate university seeking research performance, grants and industry-based partnership; an interstate research centre and a local e-commerce research centre seeking industry-based research and partnership; and a PhD student seeking research degree completion and industry-based research)
After all, it was the objective of achieving ECS's competitive advantage through IT deployment that drove the development of this strategic alliance. In order to have a focused IT strategy, it was essential to start by examining the firm's strategic orientation. Aligning business strategy with IT strategy has been described as critical to successful IT deployment (Dvorak et al., 1997). For small practices like ECS in particular, there were scant resources for arriving at an effective strategy through the expensive trial and error that large corporations can often afford. Although an alliance might be in place, the only way ECS could ensure maximum benefit from it was to play an active role in the strategic alignment process. Such alignment had only been possible through the clear championship of Dr. Curtin, who had been keeping an eye out for valuable partnerships in both the medical and IT fields. While input and support had been sought from different stakeholders within ECS, the strategic alignment process had progressed only because it was clearly driven at the decision-maker level. The strategic orientation of ECS was guided by its mission: that each of its patients would experience the best medical and personal treatment that could be delivered. In achieving this, the strategic vision identified four key goals, summarised in Figure 4.
Achieving Quality
In the diagnostic and consultation services that it provided, ECS sought to deliver world-class medical care, consistently benchmarking its services against international standards. This goal was fundamental to the existence of ECS: quality service was the key product that ECS saw itself as delivering. In pursuit of this strategic vision, ECS sought to form strong strategic partnerships with research institutions both locally and nationally to exploit IT in support of its service delivery. The quality benchmarks were accuracy, timeliness, customer satisfaction and consistency. Accuracy was of extreme importance because inaccurate information could be a matter of life and death to patients. To stay competitive, ECS needed not only to provide accurate diagnoses, but also the ability to record, retrieve and transfer information accurately. A robust IT system could give ECS the information management capability to keep medical records highly accurate. Timeliness was another important quality attribute. To provide timely information, ECS had to make sure patients' records were correctly filed and easily retrievable. In a paper-based filing system this was difficult, sometimes impossible. ECS wanted to ensure that, with the deployment of an IT system interfaced to some of its specialised cardiology equipment, outputs from that equipment could be stored without the need for physical printing and filing.

Figure 4: Strategic Vision of ECS (achieving quality, satisfying customer needs, continuous development and improvement, and integrated medicine and "wellness", supported by an information technology and knowledge management infrastructure)
Customer satisfaction was as important in the medical sector as in other service-oriented sectors. Given ECS's newly established status, providing quality services to earn a high customer satisfaction rate was one of its more sustainable competitive advantages. Customer satisfaction in the medical sector could be more complex than in other sectors because patients were often facing difficult decisions and circumstances due to their illness. ECS aimed to provide top-class patient care by viewing every patient as a unique individual who deserved the best kind of care from all staff members in ECS, from the cardiologist to the administration staff. Using IT and a Customer Relationship Management (CRM) system, ECS was trying to monitor customer feedback and respond early to areas needing attention. Delivery of consistent medical services was also a key factor in maintaining a high customer satisfaction rate. ECS was planning to deploy IT (a patient management and a CRM system) to ensure its services were consistent with its philosophy. Internal guidelines on medical procedures and practices were in place so that, as ECS grew further, patients might encounter different doctors but would receive the same consistent services.
Continuous Development and Improvement
As ECS grew, it would need to adapt and evolve with new business directions, needs and constraints. It would not get everything right initially, despite its efforts to attain the best possible levels of quality and service. Thus, a key goal of ECS was to embody continuous development and improvement across the organisation. This also included the idea of a living corporation (de Geus, 1997) that was able to renew and replenish itself in a sustainable fashion. To achieve this, ECS saw partnering with specialists in local academic institutions as a strategic move. By leveraging the intellectual support of graduate students, ECS could continuously improve its services via clever deployment of IT. Since IT development could involve high cost, and ECS had neither the expertise nor the budget to do it commercially, working with graduate students was a good solution. Graduate students could work on their projects in a focused and relevant manner while producing commercial-quality outcomes. As ECS had set up arrangements with local academic institutions to make this relationship a continuing one, it had secured a continuous development process.
Integrated Medicine and Wellness
ECS believed that its medical focus should combine traditional and complementary approaches for a total perspective of medical care: integrated medicine. This also included a view of medical care as maintaining patient wellness, where the patient's health and well-being was maintained (Zollman and Vickers, 1999), rather than disease management, where the focus was on resolving the disease (Hunter and Fairfield, 1997). The strategic vision was notable in that the key goals were supported and driven by an infrastructure that combined IT and KM. Like many other medical practices, ECS relied on IT as critical infrastructure. However, ECS took this to the next level by leveraging the use of IT in a highly relevant manner, wielding it as a critical tool in achieving each of the strategic goals and in differentiating its service delivery from that of other cardiology practices. In addition, ECS had taken a view of itself as a knowledge enterprise (Drucker, 1957), focusing on the development of a framework that viewed the key resources of the organisation as its intangible assets and intellectual capital (Stewart, 1997; Sveiby and Lloyd, 1997). While information is distinct from knowledge, it forms an important component of the knowledge generation process (Nonaka, 1991). Thus, ECS was seeking to combine IT and KM as tools for competitive advantage and for achieving its mission. Figure 4 also indicates that this core IT/KM infrastructure was iterative, constantly being developed and refined as its limitations became apparent.
Aligning Corporate and IT Strategy ECS had taken its first steps in implementing the aligned strategy as described above. An initial set of initiatives was identified pursuant to the strategy.
Interactive Intelligent Questionnaire
This initiative involved the development of an interactive electronic questionnaire that could be deployed across the Internet or in the waiting room of ECS, allowing patients to enter information about their medical condition and summarising that information for the clinician to aid in diagnosis. Deployed over the intranet, the questionnaire could adaptively discard irrelevant questions (based on current answers and known information about the patient) so that the patient only had to answer the relevant questions. The underlying IT used to build the interactive intelligent questionnaire was a combination of artificial intelligence (AI) and Web-based technology. The Web was used as the delivery and access system, while AI techniques were used to build the 'engine' of the questionnaire. An illustration of how this works is shown in Figure 5. The initiative was undertaken in collaboration with a local e-commerce research group, which provided a team of Honours-year Computer Science students to develop a prototype. The motivation behind the project was to 'value-add' the increasingly limited time available to the clinician during consultation (Sellu, 1998). The project was recently completed and was nominated as one of the best Honours-year projects in a statewide IT competition. As a result of this initiative, ECS distilled lessons and concepts from its experience, using them to drive the next set of steps in its strategic vision. ECS had also started focusing on how it could manage the sources and channels of information that formed part of its business. Key initiatives that were part of the information management strategy are outlined in the following sections.
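The adaptive skipping described above, in which earlier answers make later questions irrelevant, can be sketched as a simple rule-driven engine. The questions, guard rules and function names below are illustrative assumptions, not the actual ECS prototype:

```python
# Minimal sketch of an adaptive questionnaire engine: each question may carry
# a guard predicate over the answers collected so far; questions whose guard
# fails are skipped, so the patient only answers relevant questions.
# (Hypothetical questions and rules; not the actual ECS system.)

QUESTIONS = [
    ("smoker",        "Do you smoke? (y/n)",                     None),
    ("pack_years",    "Roughly how many pack-years?",
                      lambda a: a.get("smoker") == "y"),
    ("chest_pain",    "Have you had chest pain recently? (y/n)", None),
    ("pain_exertion", "Does the pain worsen on exertion? (y/n)",
                      lambda a: a.get("chest_pain") == "y"),
]

def run_questionnaire(answer_source):
    """Walk the question list, skipping questions whose guard predicate is
    false for the answers gathered so far. `answer_source(key, prompt)`
    supplies an answer (keyboard input, Web form, patient record...)."""
    answers = {}
    for key, prompt, guard in QUESTIONS:
        if guard is None or guard(answers):
            answers[key] = answer_source(key, prompt)
    return answers

def summarise(answers):
    """One-line summary of the collected answers for the clinician."""
    return "; ".join(f"{k}={v}" for k, v in answers.items())

# Example: a non-smoking patient with exertional chest pain never sees the
# pack-years question.
canned = {"smoker": "n", "chest_pain": "y", "pain_exertion": "y"}
result = run_questionnaire(lambda key, prompt: canned[key])
print(summarise(result))   # smoker=n; chest_pain=y; pain_exertion=y
```

In a deployment like the one the case describes, the `answer_source` callback would be backed by a Web form, and the guard rules would encode the clinical knowledge supplied by the cardiologist.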
Figure 5: Conceptual Diagram of the Interactive Intelligent Questionnaire System (the patient, from a home PC or a PC in the ECS waiting room, connects over the Web via dial-up modem, ADSL or LAN using TCP/IP to a secure Web server hosting the interactive intelligent questionnaire application, accessible via SSL connections and fronted by a firewall/proxy server; the questionnaire engine draws on the patient's record, and the doctor's PC accesses the results over a LAN, infrared or Bluetooth connection, or via remote access)
Deployment of Digital ECG Management Systems
Digital ECG management systems were medical equipment that allowed the storage, editing and retrieval of ECG diagrams. Compared with traditional approaches to such diagnosis, these systems gave the cardiologist the option to file and transfer information electronically, eventually via the Internet. This meant that Dr. Curtin could eventually forward patients' ECGs to other specialists over secured connections without any paper-based delivery. In addition, the cardiologist could analyse changes in a patient's cardiological history and use software tools to do so effectively. This would be a major competitive advantage because cardiologists could be better supported in identifying irregularities in patients' heart activities.
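The kind of longitudinal comparison described above can be illustrated with a minimal record store. The record fields, the QT-interval measure and the 15% alert threshold are hypothetical choices for the sketch, not details of any actual ECG product:

```python
from dataclasses import dataclass

# Minimal sketch of a digital ECG archive: electronic storage replaces the
# paper file, so a patient's history can be retrieved and compared at once.
# Field names and the 15% alert threshold are illustrative assumptions.

@dataclass
class EcgRecord:
    patient_id: str
    taken_on: str               # ISO date, e.g. "2001-03-15"
    heart_rate_bpm: float
    qt_interval_ms: float

class EcgArchive:
    def __init__(self):
        self._records = {}      # patient_id -> list of EcgRecord

    def store(self, rec):
        self._records.setdefault(rec.patient_id, []).append(rec)

    def history(self, patient_id):
        """All records for a patient, oldest first."""
        return sorted(self._records.get(patient_id, []),
                      key=lambda r: r.taken_on)

    def flag_changes(self, patient_id, threshold=0.15):
        """Flag consecutive visits where the QT interval moved by more than
        `threshold` (relative change): the sort of irregularity a
        cardiologist would want highlighted automatically."""
        hist = self.history(patient_id)
        flags = []
        for prev, cur in zip(hist, hist[1:]):
            change = abs(cur.qt_interval_ms - prev.qt_interval_ms) / prev.qt_interval_ms
            if change > threshold:
                flags.append((prev.taken_on, cur.taken_on, round(change, 3)))
        return flags

archive = EcgArchive()
archive.store(EcgRecord("P-001", "2000-11-02", 72, 400.0))
archive.store(EcgRecord("P-001", "2001-05-17", 88, 470.0))
print(archive.flag_changes("P-001"))   # [('2000-11-02', '2001-05-17', 0.175)]
```

Serialising such records for secure transfer to another specialist would then be a matter of exporting them over an encrypted connection, as the case envisages.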
Providing Access to Procedures, Documents and Resources via the Intranet
As part of the corporate Web site initiative, ECS was aiming to build an intranet on its existing local area network to support internal access to, and transfer of, procedures, documents and resources. As part of the Quality Assurance strategic vision, making information and documents available on the intranet meant staff members could work more effectively with Dr. Curtin and patients. Traditionally, documents such as operating and administrative procedures were not readily available, and when new staff members were recruited they had to go through a time-consuming induction process run by an existing staff member, which decreased the productivity of both the existing and the new staff members. With the intranet in place, it was hoped that much of this relatively static information would be available online for ready access.
Contact Management System
As part of the CRM initiative, it was envisaged that ECS would implement a turn-key CRM system to manage, and interface with, existing databases containing information on vendors, pharmaceutical companies and partnering clinicians. As the business of ECS started to expand, the ability to interact with supporting firms and medical practices was considered critical to ECS's continuing success. Although at the time of this write-up ECS already had databases, built using PC-based software, to maintain such information, it was the ability to link these databases using an integrated approach that would make a difference. Ultimately, ECS hoped to be part of the supply-chain management networks of vendors and pharmaceutical companies so as to enjoy fast delivery responses. With the CRM fully implemented, ECS would be able to interact efficiently with upstream members of the value chain (e.g., vendors and partnering clinicians) and downstream stakeholders (e.g., patients) by streamlining information flow in both directions.
Corporate Web Site and Branding
ECS’s Web site formed an important part of its strategic vision. The site was customer-focused, aiming to provide information in breadth and depth to support patients and clinicians in learning about cardiology, health maintenance and ECS. In particular, the health maintenance dimension of the site focused on maintaining patient wellness and other aspects of medicine. The Web site would increasingly be used as a location-independent primary source of information to support continuing patient education; the objective was to turn the site into a service delivery point (Simsion, Bowles & Associates, 1998) rather than limit it to information provision. Eventually, the Web site would be a one-stop shop through which all stakeholders, both upstream and downstream of the value chain and both internal and external to ECS, could interact with ECS and the necessary systems. For ECS, a critical part of building the customer relationship was winning the mindshare of the customer. As mentioned, achieving quality was one of the strategic visions; ECS had adopted a
Leveraging IT and a Business Network 521
focused strategy of building a brand in the customer’s mind (Ries and Ries, 1998), founded on trust in the quality of the medical and personal service that ECS delivered, and reinforced through a distinct identity (i.e., targeted design of office, stationery and service culture). This also fit the knowledge management (KM) perspective, in which the brand was classed as a key intangible asset in the marketplace that must be managed (Sveiby, 1997). Although branding in the medical sector, particularly in service delivery, differed from, say, the retail sector, it was word-of-mouth and patients’ experiences that counted most. ECS believed quality and branding went hand-in-hand. To achieve quality based on customer satisfaction, IT as a support infrastructure was considered very important. For example, one potential initiative was to post letters of gratitude onto the Web site, with RealPlayer clips allowing potential patients to share the experiences of existing patients. Providing the latest developments in echocardiology was also considered useful for educating patients and their families about ECS’s treatment philosophy and approaches. Given ECS’s uniqueness in capturing this niche market, consolidating patients’ confidence was considered the most critical part of the branding exercise.
The Way Ahead
As the health sector became increasingly dependent on IT, the survival of small specialist firms was often determined by how well they could adopt technology and use it effectively (Chandra et al., 1995). As large health service providers leveraged heavily off IT to provide better services, competitive pricing and efficient operations, the only way for a small practice to be sustainable was through personalisation and customisation, filling the void created by the ‘production-line’ style of customer service characteristic of large medical practices. As such, leveraging both IT and the human resources within its business network was crucial. ECS embodied the challenges faced by small medical practices and SMEs trying to be competitive in an increasingly complex marketplace. This was especially acute for ECS, which was endeavoring to be innovative, adopting ambitious and unconventional initiatives that distinguished it from similar medical enterprises. While small practices comprised the mainstay of care in primary care models of health care (van Bemmel and Musen, 1997), they lacked the breadth and depth of resources that large-scale medical institutions enjoyed. ECS had sought to offset this weakness by leveraging the skills and resources of external stakeholders and exploiting its small size. ECS was a work in progress. In just over a year after commencing its strategic vision, it had managed to build a network of partners and a portfolio of projects that it had initiated. From these partners and projects, new nodes in the network were continuing to emerge, as were possibilities for collaboration. As it progressed, the challenges faced by an organisation such as ECS were:
Continuing to Maintain the Interests of Stakeholders
As ECS relied on its network of stakeholders, it needed an intimate understanding of the needs, interests and direction of development of all its stakeholders (both synergistic and competing interests). This also included building and maintaining the relationship with its most important stakeholder: the customers. Forging ‘stakeholder intimacy’ was a process of continuous improvement that ECS had to maintain in order to position itself for the future. The benefits gained by the stakeholders, particularly the academic community and ECS, were continuous and long term. Given that the Australian Research Council (ARC) had, at the time of writing, been emphasising the importance of ‘linkage’ research between academia and industry, the collaborative relationship was strategically well positioned for future opportunities. One of the ARC’s stated guiding principles was: Encourage and increase partnerships between universities, research institutions, government, business and the wider community at the local, national and international level. (http://www.arc.gov.au/strat_plan/guiding.htm)
522 Poon & May
Eventually, the relationship formed between the local e-commerce research centre, the interstate university and other research institutes would become a platform from which to pursue further support from both the public and private sectors.
Collaboration Between Stakeholders
Working jointly with other alliance members had proven to be an effective way to minimise resource demands and maximise benefits. Ongoing relationships of this kind would evolve: as the relationships between ECS and its stakeholders matured, further opportunities for collaboration between alliance members would also emerge (augmenting the model in which ECS was the sole hub of contact). While this brought more opportunities to bear, it also added complexity for ECS in managing and minimising any risk to the relationships. There might come a time when ECS was no longer the centre of this alliance relationship, but a worthy follower. As this case was being written, ECS had been proposing to the participating institutions extension projects based on the interactive intelligent questionnaire. Three projects were proposed:
• Development of a prototype Clinical Information Management Systems customisable interface to help staff interact with back-end medical information systems based on a set of preferences.
• Development of a Clinical Application Service Provider platform to serve as the middleware between the client software and the back-end processing server, so that clinics and specialists working with ECS could share patient records and information securely without the need to exchange paper-based records.
• Exploration of the feasibility of secure document transaction technologies between ECS and its partnering clinics to test the concept of secure patient data exchange.
ECS’s role as project champion was of critical importance: without its willingness to participate regardless of differences in philosophy and timeline, it would have been difficult to attain the success achieved to date. At the same time, the institutions’ willingness to work to a tighter schedule and on real-life projects was also critical to the success of this relationship.
To ensure the success of the extension projects, additional stakeholders, such as those from other segments of the medical field and commercial software developers, would likely have to be brought in.
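The secure patient data exchange explored in the third extension project can be illustrated with a minimal sketch. Everything below is hypothetical: the case never specifies ECS’s design, so the envelope format, function names and shared-key scheme are invented for illustration. The idea is simply that a record is serialised and signed with a shared secret, so the receiving clinic can detect tampering before accepting the document.

```python
import base64
import hashlib
import hmac
import json

# Hypothetical sketch only: ECS's actual secure-document design is not
# described in the case. A shared-secret HMAC guards integrity and origin.

def package_record(record, shared_key):
    """Serialise a patient record and sign it with a shared secret."""
    payload = base64.b64encode(json.dumps(record, sort_keys=True).encode())
    signature = hmac.new(shared_key, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "signature": signature}

def verify_record(envelope, shared_key):
    """Return the record if the signature checks out, else None."""
    payload = envelope["payload"].encode()
    expected = hmac.new(shared_key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, envelope["signature"]):
        return None  # tampered with, or signed under a different key
    return json.loads(base64.b64decode(payload))
```

In a real deployment the clinics would also exchange records over an encrypted channel; a signature alone guards integrity and origin, not confidentiality.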
Changing the Model as ECS Grows
ECS’s model for collaboration and strategy must be dynamic rather than static. Complacency, and a lack of awareness of the need to adapt, could be a fatal error against competitors in a changing, regulated health care marketplace. Firms like ECS must be prepared to alter the models driving the trajectory of their enterprise along the technological, tactical and strategic dimensions. ECS itself was likely to expand in size, using its own resources and critical mass, while continuing to grow its network of relationships. These changes were likely to force it to question how best to reposition its current strategy and leverage its relationships and increased resources. If ECS represented the beginning of a trend, then small specialist practices would increasingly leverage off their business alliances and technology to gain a competitive edge. This was particularly true of entrepreneurial CEOs of SMEs who, as in the case of ECS, span the disciplines of health care, commerce and technology. The diffusion of IT into the health sector at the small-medical-practice level would create an interesting balance of power, as already seen in the business sector. In health care, one of the most complex domains, stakeholders increasingly sought to innovate yet were also reliant on each other to bolster core competencies that they lacked. The high ground would belong to those who could manage their core vision and the lattice of networks upon which they would depend.
What Has Happened Since the First-Round Project Was Completed?
The bursting of the dot-com bubble in 2000 also affected the direction of ECS. Although a project such as the Interactive Intelligent Questionnaire had demonstrated the potential to become a full-blown product with high demand, attracting funding to develop it to such a stage had proven difficult. The collaboration with the universities, employing postgraduate students to develop the Questionnaire into a functional prototype, was a great success; however, neither Dr. Curtin nor the university partners managed to attract sufficient funding to commercialise the product. At the time this was written, Dr. Curtin was still keen to work with venture capital investors and large medical information systems vendors to breathe life into the product, but given Dr. Curtin’s busy schedule (his first appointment usually started at 8.00 am), this had proven difficult. Although one of the Honours-year project students joined ECS after graduation, the current environment of ECS (average salary, lack of an established IT environment and long working hours) might make retaining staff difficult. Fresh graduates would often look for challenges and an environment that could give them maximum exposure to the latest IT systems. To achieve Dr. Curtin’s visions, experienced IT staff would be critical to success. As this case was written up, both financial and incentive issues were yet to be resolved. Due to these constraints, many of ECS’s IT projects had difficulty growing beyond the prototype stage. While these prototypes were in workable condition, they risked being superseded by commercial products if they were not given the opportunity for further development. The lack of continuing commercial funding might also mean a cooling of the relationships between ECS and the universities, given the lack of support to engage further postgraduate students.
Once this happened, it might be difficult to revitalise them.
Reality Check
As this write-up was completed, Dr. Curtin was spending most of his time building his medical practice to maintain cash flow. The reality was that there would always be further collaborations, but if the medical practice failed, it would have a major impact on Dr. Curtin. Consequently, he made his medical practice the priority. That said, Dr. Curtin was as keen as ever to maintain an ongoing interest in the Honours-year projects of the local university, hoping another bright group of students would take one of the prototypes further. He also kept in close contact with those who had since left the project team (e.g., ex-Honours students) and with ex-collaborators (e.g., the Director of the local e-commerce centre). This might be one of the best ways to maintain a business network, particularly when mutual trust has already been developed. As with many small businesses, the ultimate survival factor is ‘cash flow’.
ENDNOTES
1. This small medical practice belongs to the micro-business category, which in Australia is commonly defined as having 10 or fewer full-time members of staff.
2. Names of the organisation and the stakeholders have been changed.
REFERENCES
Achrol, R. S. (1997). Changes in the theory of interorganizational relations in marketing: Toward a network paradigm. Journal of the Academy of Marketing Science, 25(1), 56-71.
Chandra, R., Knickrehm, M., et al. (1995). Healthcare’s IT mistake. The McKinsey Quarterly, (3), 91-100.
Charvet-Protat, S. and Thoral, F. (1998). Economic and organizational evaluation of an imaging network (PACS). Journal de Radiologie, 79(12), 1453-9.
Collen, M. F. (1995). A History of Medical Informatics in the United States, 1950 to 1990. American Medical Informatics Association. de Geus, A. (1997). The Living Company. Harvard Business Review, 75 (2), 51-59. Drucker, P. (1957). Landmarks of tomorrow. New York, Harper & Row. Dvorak, R. E., E. Holen, et al. (1997). Six principles of high-performance IT. The McKinsey Quarterly (3), 164-177. Dyer, J. H. and H. Singh (1999). The relational view: Cooperative strategy and sources of interorganizational competitive advantage. Academy of Management Review 23 (4), 660-679. Eden, L., E. Levitas, et al. (1997). The production, transfer and spillover of technology: Comparing large and small multinationals as technology producers. Small Business Economics 9 (1), 53-66. Hunter, D. and G. Fairfield (1997) Disease management. British Medical Journal 315 (7099), 50-3. Lock, C. (1996). What value do computers provide to NHS hospitals? British Medical Journal 312 (7043), 1407-10. Mitchell, J. (1999). The Unstoppable Rise of E-Health. Department of Communications, Information Technology and the Arts, Commonwealth of Australia. Nonaka, I. (1991). The knowledge-creating company. Harvard Business Review, November-December. Packard, D. (1995). The HP Way: How Bill Hewlett and I Built Our Company. New York, HarperBusiness. Ries, L. and A. Ries (1998) The 22 Immutable Laws of Branding: How to Build a Product or Service into a World-Class Brand, Harpercollins. Sellu, D. (1998). Have we reached crisis management in outpatient clinics? British Medical Journal (316) 635. Simsion Bowles & Associates (1998). Online Service Delivery: Lessons of Experience. Melbourne, Victoria, State Government of Victoria. Slack, W. (1997). Cybermedicine: how computing empowers doctors and patients for better health care. Jossey-Bass. Staggers, N. and K. Repko (1996). Strategies for successful clinical information system selection. Computers in Nursing 14 (3), 146-7, 155. Stewart, T. A. (1997). 
Intellectual Capital: The New Wealth of Organizations. New York, Doubleday. Sveiby, K. E. (1997). The New Organizational Wealth: Managing and Measuring Knowledge-Based Assets. San Francisco, Berrett-Koehler Publishers, Inc. Sveiby, K. E. and T. Lloyd (1988) Managing knowhow: add value ... by valuing creativity. London, Bloomsbury. van Bemmel, J. H. and M. A. Musen, Eds. (1997). Handbook of Medical Informatics, Springer-Verlag. Yoffie, D. B. and M. A. Cusumano (1999) Judo strategy: The competitive dynamics of Internet time. Harvard Business Review 77 (1), 70-81. Zachary, G. P. (1994). Show-Stopper!: The Breakneck Race to Create Windows NT and the Next Generation at Microsoft, Free Press. Zollman, C. and A. Vickers (1999). Users and practitioners of complementary medicine. British Medical Journal 319 (7213), 836-8.
BIOGRAPHICAL SKETCHES
Simpson Poon is Chair Professor of Information Systems at Charles Sturt University, Australia. He is also a visiting professor at the University of Hong Kong. Dr. Poon earned his Ph.D. in Information Systems from Monash University, Australia. He was the Founding Director, and is currently an Honorary Fellow, of the Centre of E-Commerce and Internet Studies at Murdoch University, Australia. Dr. Poon has been an e-business consultant and has worked with both government and business organisations in Australia and Asia. He has published widely in the area of e-business in both academic and professional journals. Dr. Poon can be reached at
[email protected] or
[email protected]. Daniel May is a PhD candidate in the Department of Computer Science and Software Engineering, Monash University, Australia. He can be reached at
[email protected].
526 Raisinghani
Systems Design Issues in Planning and Implementation: Lessons Learned and Strategies for Management Mahesh S. Raisinghani University of Dallas, USA
EXECUTIVE SUMMARY
Telecommunications Company (TC) [company identity is concealed] produced a sales management application through internal and contract resources. This application, the Schedule Graph (SG) System, was designed to automate the sales schedule process, previously a paper-and-pencil process. The system was designed and implemented in a matter of months to reduce cost and deliver an application that was long overdue. The project had been proposed for years, but funding issues had routinely delayed initiation. The sales development organization worked on the design and development of this application for approximately six months. The application was released with numerous software, hardware and network problems. The effects on the customer community, the information systems department and other stakeholders were sharp and far-reaching. This case study investigates the lessons learned from this application and the implications for theory and practice. It can help information systems managers, academicians and students learn from the successes and pitfalls of other organizations in information systems development and management.
BACKGROUND
TC is a Fortune 100 sales company in the telecommunications industry with 50 regional sales offices across the country. TC employs thousands of people in both domestic and international operations; however, the user base for the SG System is approximately 150 employees nationwide. TC wanted to automate a sales scheduling process. Previously, directory sales representatives had been scheduled into markets, or canvasses, by a paper-and-pencil process. This process was very time consuming and led to tremendous frustration among sales managers. The burdensome manual process took valuable time away from sales coaching and the selling activities that produced revenue dollars. The sales calendar spans 12 months and is typically updated on a weekly basis, as personnel and markets can change rapidly in this business. Copyright © 2002, Idea Group Publishing.
Systems Design Issues in Planning and Implementation
527
SETTING THE STAGE
The sales managers had been requesting an automated solution for years in an effort to end what had rapidly become an administrative job rather than a sales job. The sales organization conducted business and processed sales online through a ‘Paper-less Automated Sales System.’ While the system itself was paper-less, the output, as hard-copy reports, was paper-intensive in spite of the on-line capabilities. While the automation was highly desirable and the efficiencies could not be argued, financial considerations and constraints continued to push the project to the back burner for several years. The sales organization, in a time of declining sales, realized it was time to redirect its focus back into the development of its employees in an effort to strengthen its sales position. This meant that the manual processes needed to be removed by investing in process automation. An application needed to be developed to automate the sales schedule process. Time and money were significant factors in the development of the SG System.
CASE DESCRIPTION
In-house application developers as well as contract resources were involved in the design and development of the scheduling system. Rapid Application Development (RAD) was used to produce the SG System. The systems development life cycle was significantly compressed to save time and money on a project that had been long awaited in the customer community. Many applications produced under the RAD framework are developed in isolation, since isolation contributes to speed to market. The deliverables and outcomes of RAD are the same as for the traditional Structured Development Life Cycle (SDLC): a systems development plan, which includes the application being developed, a description of the user and business requirements for the application, logical and physical designs for the application, and the application’s construction and implementation, with a plan for its continued maintenance and support. However, the traditional SDLC is indifferent as to the specific tools and techniques used to create and modify each of these deliverables; RAD puts a heavy emphasis on the use of computer-based tools to support as much of the development process as possible in order to enhance the speed of development. In this case study, the focus is on the use of RAD instead of SDLC. However, the SG System suffered from not involving other organizational business units. This is noted as a significant drawback of RAD, because the traditional development stages allow greater overall business understanding, as speed is not the primary concern (Hoffer et al., 1998). In fact, David E.Y. Sarna (Eskow, 1997), who is well known for his work with RAD, argues that network planning and monitoring are important issues that can often be overlooked when development takes place in isolation.
Specifically, for optimal application and system performance, servers must have adequate memory and processing capacity, and a Redundant Array of Independent Disks (RAID) is recommended for back-up and data security. After the SG System was released to production, significant time was spent rewriting the application, deploying upgraded hardware, integrating network technology and developing a support staff to maintain the application. Additionally, the project and development costs increased significantly with the activities required to stabilize the system. Approximately $350,000 was spent on additional hardware and maintenance, and almost $550,000 was invested in application coding changes so the system would run more efficiently. Although $900,000 was not a significant percentage of TC’s overall budget, proactive systems planning could have minimized or eliminated this expense and caused less frustration for the approximately 150 SG System users nationwide. It is also important to note that such additional maintenance cost is not uncommon in applications developed using the RAD methodology, due to lack of attention to internal controls (e.g., inconsistent internal
designs within and across systems), lack of scalability designed into the system and lack of attention to later systems administration (e.g., the system was not integrated into the overall enterprise data model and lacked system recovery features). In contrast, Office Depot jeopardized short-term profitability in 1996 when the company decided to delay most of its new application development for almost a year. The decision was made in an effort to stabilize an application portfolio before it was released. While the company suffered from the absence of short-term gains, the additional time spent on development has had a long-term positive effect on its bottom line due to the better quality of the applications produced (Hoffman, 1998). This is a sharp contrast to TC’s results with the release of SG. As noted in the SG discussion above, speed was the primary motivator in the development process. In the long run, neither time nor money was saved, given the rewrites and additional hardware acquisitions expended in an effort to rescue a system that was the product of poor planning and design. The current system architecture of the SG System is illustrated in Figure 1 and described in the following section.
Current System Overview
Purpose: On-line interactive tool for sales force market scheduling
Software: In-house vendor product
Tools: Oracle Developer 2000: Oracle, Procedure Builder, SQL*Plus & SQL*Net
Client: Sales Managers
Database: Oracle version 8i
Figure 1: Current SG System Architecture (an application server with Oracle DBMS, a print server connected to the shared laser printer, and user workstations)
System Architecture
DEC Server: The Oracle database resides in a DEC cluster running VMS version 6.3.
Oracle Database: Users are connected to the Oracle database via TCP/IP and SQL*Net.
User PC Configuration:
• Processor: Pentium
• Memory: 64 MB
• Network: Token Ring or Ethernet
• User Interface: Windows 95/98/2000
• Communications: OnNet 1.1 FTP TCP/IP & SQL*Net 2.1.4C
Application Server: The application is distributed from local Banyan servers to the desktop.
Network Protocol: The application generates two significant types of network traffic. Banyan traffic is generated from the PC to the local Banyan server, and SQL*Net traffic is generated across the Wide Area Network (WAN) for database connectivity.
There are a number of considerations that must be reviewed when new software is being developed, and a number of technical and customer groups that must be involved in the process.
• Network: The network will likely play a part in any new application or system developed today. In the case of SG, the network implications were not investigated before the system was released to production, and the system was not written to run efficiently across the network. When the application was released, it was determined that network response was too slow because the application created too much traffic across the wide and local area networks. The application required extensive bandwidth for customers to access and update the schedule graphs, and the client hardware was not powerful enough to mask the network response issues. To correct the situation, the application was rewritten and eventually loaded and launched from the client’s PC. The network services group created a special distribution utility to load software updates from the server to the desktop: the client checks the server at log-in and, if there is a new version of the software, downloads the changes to the desktop.
This process should have been in place before the application was released, rather than added afterwards as damage control.
• Hardware Considerations: Applications today are for the most part more sophisticated, with GUI interfaces and Web-enabled technology that dictate greater desktop processing power, memory, hard drive space, etc. With this in mind, it is important to understand what equipment your client community will be using to access the application you are designing. If an organization’s customers have not yet upgraded to Pentium processors, you will need to address this in the design, or possibly in the project budget, as a recommendation for new equipment may be appropriate. In the case of SG, the customer community had been accustomed to working on “dumb” terminals (DEC VT420 & VXT2000) and had only recently been given recycled personal computers ranging from Intel 80386 models to low-end Pentiums. When SG was released, customers with 486 machines could use the application, but with very poor response, and those with 386 machines could not access the application at all because they did not have the minimum configuration. Needless to say, this caused significant customer issues and dissatisfaction in the client community. To correct the situation, the computer equipment group replaced the customers’ equipment with Pentium II and III model computers. Again, a situation that could have been avoided with proper planning and requirements investigation.
• Customer and Supplier Involvement: Customer involvement is very important in the design of any new system. This is especially true with rapid application development, as the development time is shorter. The need for customer input and involvement is significantly higher because there is no time for misunderstandings or multiple reworks of the requirements or the code design.
Additionally, a strong partnership with customers will likely yield important feedback and help the information technology (IT) organization produce quality software through a better understanding of customer needs. Increasingly, organizations collaborate to complement their core competencies. New product development for TC, for example, is often a collaborative process, with customers and suppliers contributing complementary knowledge and skills. Information technology facilitates interorganizational learning, since organizations collaborate closely through virtual integration. The role of information technology in lower and higher levels of interorganizational learning, cognitive and affective trust, and virtual and humanistic interorganizational collaboration should be leveraged (Scott, 2000).
• Planning – Application Testing & Design: The notion of ‘write once, run anywhere’ may or may not yet be realistic in the software industry, when many software vendors (Java is a good example) do not offer the same templates, training and features (Kobielus, 1998). However, planning is an important means of getting there. Without proper planning before, during and after the design and testing phases, there is little chance of implementation success. Quality, in the form of meeting customer needs, must be planned into the design of any system software. The software must be functional and easy to use from the customers’ frame of reference. Additional concerns include the speed at which the software performs given customer equipment and interfaces. Anticipating future user needs and allowing flexibility in the design models are further examples of issues that should be addressed in the planning process to help ensure quality results. Thorough testing, from both a technical perspective and a customer perspective, is also vital and requires planning to achieve a successful implementation.
Thorough testing of the technology, including data integrity and product integration, provides key quality metrics. Additionally, customers must be involved in the testing process to verify the functionality and utility of the system.
• Maintenance & Support: Before a system is released to production, there should be a clearly defined support organization to sustain the application and resolve any technical or training issues. In the case of SG, a support organization was neither established nor trained, and no formal documentation was in place describing the system design. This was a difficult obstacle to overcome. It was almost six months before a technical review document was distributed and system overview sessions were held for key support groups, including operations, production control, network operations and help desk personnel. These activities should be part of the project plan for the release of any new software or system, and should be performed long before the system is released, as doing so will dramatically increase the success of the project and the satisfaction of the customer community.
The field of information technology moves very quickly, and it can be difficult to determine which new technologies corporate America will embrace based on stability and business needs. Ravichandran and Rai (2000) identify top management leadership, a sophisticated management infrastructure, process management efficacy and stakeholder participation as important elements of a quality-oriented organizational system for software development. Businesses typically adopt a standard, as it is easier to implement, control and maintain. Over the next five years, “adaptive architectures” will be a primary design point for enterprise-wide technical architectures (Meta Group, Inc., 1998). The adaptive nature will allow more rapid change to be made to business applications and processes with reduced impact.
It is predicted that by the end of next year, half of Global 2000 businesses will cross-train technologists and business strategists (META Group, Inc., 1998). This merging of skills should produce better information technology products and services, which are a strong source of profit. Global markets have expanded significantly with Internet commerce, opening many more business opportunities. The Internet is, and will remain, a driving force for years to come (Stevenson, 1995). Microsoft went through a reengineering process two years ago as part of a business strategy to
Systems Design Issues in Planning and Implementation
531
dominate the Web as it has the desktop (Porter, 1997). Finally, as a result of business process reengineering and corporate transformation driven by the digital economy, process improvement has become an important area. From a systems development perspective, it is important to keep in mind that the issue is not process itself; it is what programmers are asking process to do, and where to apply it. Armour (2001) points out three laws of software process that are relevant in the context of this case study: 1) Process only allows us to do things we already know how to do. 2) People can only define software processes at two levels: too vague and too confining. 3) The last type of knowledge to consider as a candidate for implementation into an executable software form will be the knowledge of how to implement knowledge into an executable form.
Current Challenges/Problems Facing the Organization In the last decade, there has been a great deal of attention to and discussion of Business Process Redesign, which is largely tied to the Total Quality Movement (Lee, 1995). Work processes in many large corporations have been under executive scrutiny to improve quality and customer satisfaction. Continuous improvement in all business processes is the corporate goal. Business processes have two important characteristics: 1) internal and external customers, and 2) cross-functional boundaries (Davenport, 1993). Developing new software is a good example of a high-impact business process. Relating this to the design of SG, there should have been internal and external customers and cross-functional involvement in the development process. Controlling business processes is important to their success and is also a key to instituting incremental quality improvements. Information technology organizations have started to take on a customer advocate role as a means of controlling business and customer processes. “IT capabilities should support business processes, and business processes should be in terms of the capabilities IT can provide” (Davenport and Short, 1994). Adherence to business processes maximizes efficiency and positive results. Competition in the information technology field is fierce, specifically with respect to software development, and this has had a strong impact on information technology organizations. Many system vendors are offering additional services to corporate clients, including multi-platform support, application development and integration services, as a means of increasing revenue beyond hardware sales. This is positive for users, because it will likely drive prices down and create more product choices for customers. The supplier, however, will only gain competitive advantage if the product is innovative, quality rich, service friendly and reasonably priced.
Hardware suppliers are recognizing the importance of service and emphasizing this with customers (Vijayan, 1997). Several focal points will determine the survival of specialized vendors or intra-company resources, and they can be narrowed down to the following: 1) quality (features, ease of use, tools, etc.); 2) service (performance, ability to enhance, etc.); and 3) price (Anderson et al., 1997). Competition can be tempered with sound strategy and careful planning. Technology is a key ingredient in any corporate growth strategy (Way, 1998). With aggressive planning covering tools, requirements, design, testing and so on, the business units can be aligned to supply quality software on demand. Among the key network operating system companies, Microsoft has a long-term plan to fully integrate its development environments. Microsoft believes the result will reduce software development time and product prices because of the efficiencies gained (e.g., training time and money) (Gaudin, 1997). There is an inherent difficulty surrounding software development: the nature of software is complex and unstable. Software, despite successful implementations, requires change over time, whether the change stems from a customer request or a vendor recommendation. “I believe the hard part of building software to be the specification, design and testing of this conceptual construct, not the labor of representing it and testing the fidelity of the representation” (Brooks, 1987). Realizing that complexity is an inherent and unavoidable obstacle, it becomes all the more important to focus on planning and designing the appropriate technology to avoid unnecessary updates. A well-designed system will lead to greater stability and reduced maintenance costs.
532 Raisinghani
The release of SG was a challenging experience for the technical team as well as the customer community. How can future software releases be improved based on this experience? • Focus on Quality – Quality is an overriding business strategy for most global companies. Customer satisfaction is a key success factor for application groups developing software in-house. If customers are not satisfied with the quality of the software produced, they can very easily find a contract vendor to develop whatever software they need. As previously mentioned, competition is fierce in the IT environment and customers have many choices, which makes quality a distinguishing success factor. • Importance of Planning – While there is no easy cookbook for software development, planning is an essential ingredient of a successful software implementation. Proper planning will help minimize rework and contribute to quality and customer satisfaction. Planning will also provide more opportunity for reuse of models, code, and tests. • Understand Your Customer – Extensive customer involvement is very important in the design and development of any new system, particularly one developed under RAD. In the case of SG, customers should have been more involved in the requirements, design and implementation phases. • Initiate Cross-Functional Involvement – Successful software development requires participation from multiple information technology groups as well as customer groups. A business process should be in place to coordinate the involvement of all appropriate groups. It is suggested that within the next few years, “Architectural Webification” will replace ‘legacy-to-client/server migration’ as the dominant design structure.
In the next few years, it is predicted that enterprise-wide technical architectures will have quality measures for ‘time-to-implementation’ and total cost of ownership to benchmark: 1) logical horizon (linking any user to any node); 2) object heterogeneity (the ability to share any information, service or process with objects from multiple platforms); and 3) systemic utility (scalability, portability, interoperability, availability, manageability, design transparency, and extensibility). It is expected that many Global 2000 companies will use a ‘software factory’ model to implement new application systems (META Group, Inc., 1998). This approach will require software developers to focus on assembly and reuse as opposed to a craftsman approach. With the strong influence of the Internet and electronic commerce on information technology, and more specifically their impact or potential impact on software development, systems designers and developers need to critically evaluate whether this medium will eliminate “middleware” issues and reduce development costs. Due to a constantly changing environment, a poor understanding of users’ needs and preferences, and a lack of willingness to modify existing organizational structures and decision models, the full economic potential of Web Information Systems (WIS) has not been realized. Reference models for WIS have to integrate a conceptual data model and a navigational model and, by choosing a system-specific optimal level of abstraction, should be equally applicable to structured as well as unstructured information. For that reason, Martin (1996) introduces the term “intelligent evolution”. With special emphasis on corporate business behavior, he compares three types of evolution with classic Darwinian evolution based on the survival of the fittest: a. Internal (r)evolution during the pre-deployment phase: first-order evolution modifies a product or service (the WIS) within a predesigned process and corporate structure; second-order evolution modifies the process, methodology, or fundamental design of work (the WIS methodology). b. External evolution: third-order evolution considers factors outside the corporation (e.g., relationships with customers, other companies, governmental institutions, standardization committees, etc.).
One of the problems in modeling and developing hypermedia applications is the strong interdependency between the presentation (user interface) and the representation (explicit structuring) of published information. Many meta models and design methods for traditional client-server architectures lack the object types necessary to model this interdependency and are better suited to highly structured segments. It is therefore pertinent to design WIS with a close eye on the architecture of currently deployed systems and to compare them with each other.
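The representation/presentation interdependency described above is commonly managed by keeping published information as structured data and generating the user interface from it separately, so either side can change without the other. A minimal sketch in Python follows; the document structure and template are illustrative assumptions, not part of the case.

```python
# Sketch of decoupling representation (explicit structure) from
# presentation (user interface) in a Web information system.
# The data model and template below are illustrative assumptions.

from string import Template

# Representation: explicit structure, independent of any rendering.
article = {
    "title": "Systems Design Issues",
    "sections": ["Planning", "Testing", "Maintenance & Support"],
}

# Presentation: a template that can be redesigned without touching the data.
page = Template("<h1>$title</h1><ul>$items</ul>")


def render(doc, template):
    """Generate the user interface from the structured representation."""
    items = "".join(f"<li>{s}</li>" for s in doc["sections"])
    return template.substitute(title=doc["title"], items=items)


html = render(article, page)
print(html)
```

Because the template is the only place HTML appears, a redesign of the site swaps the template while the content model, and anything else that consumes it, stays untouched; this is the separation that the traditional client-server meta models discussed above fail to capture.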
FURTHER READING DeCori, B. (1997). “Bridging the gap,” AS/400 Systems Management, 25(7), 47-50. McGee, M.K. (1997). “E-commerce applications in three months,” Information Week, (643), 121-122. Hibbard, J. (1998). “Time crunch,” Information Week, (691), 42-52. Bull, K. (1998). “Making magic,” InfoWorld, 20(25), 33. Wilde, C. (1997). “Do-it-yourself service,” Information Week, (659), 87-90.
REFERENCES Anderson, J., Gallagher, S., Levitt, J., and Jurvis, J. (1997). “Technologies worth watching: Groupware, suites, security, data marts, and the ever-popular Web will be hot this year,” Information Week, (612), January, 29-37. Armour, P.G. (2001). “The laws of software process,” Communications of the ACM. Brooks, F., Jr. (1987). “No silver bullet: Essence and accidents of software engineering,” IEEE Computer, 20(4), 10-19. Eskow, D. (1997). “Dealing with the aftershock of a new SAP implementation,” Datamation, 43(4), 105-108. Gaudin, S. (1997). “Visual Studio takes big step,” Computerworld, 31(10), 2. Hoffer, J., George, J. and Valacich, J. (1998). Modern Systems Analysis & Design, 486-497, 767, 780-781, 806-807, 814-815. Hoffman, T. “Office Depot endures app dev delays to ensure tech future,” Computerworld, 32(11), 1, 97. Kobielus, J. (1998). “Write once, run anywhere: An impractical ideal,” Network World, 15(9), 45. Lee, T. (1995). “Workflow tackles the productivity paradox,” Datamation. Martin, J. (1996). Cybercorp: The New Business Revolution. New York: Amacom. META Group, Inc. (1998). “Enterprise Architecture Strategies,” White Paper. (http://www.metagroup.com/newwhos.nsf/InterNotes/Link+Pages/eas+-+trends) Porter, P. (1997). “Microsoft gets with the Web,” Software Magazine, 17(7), 101. Ravichandran, T. and Rai, A. (2000). “Quality management in systems development: An organizational system perspective,” MIS Quarterly, 24(3). Scott, J.E. (2000). “Facilitating interorganizational learning with information technology,” Journal of Management Information Systems, 17(2), 81-114. Stevenson, D. (1995). “Positioning enterprise architecture,” ISWorldNet, June 1995. Vijayan, J. (1997). “Hardware vendors profit from integration push,” Computerworld, 31(30), 28-29. Way, P. (1998). “The direct route to America,” Insurance & Technology, 23(1), 35-36.
BIOGRAPHICAL SKETCH Mahesh S. Raisinghani is a faculty member at the Graduate School of Management, University of Dallas, where he teaches MBA courses in Information Systems and E-Commerce, and serves as the Director of Research for the Center for Applied Information Technology. He is also the President and CEO of Raisinghani and Associates International, a diversified global firm with interests in software
consulting and technology options trading. As a global thought leader on e-business and global information systems, he serves as the local chair of the World Conference on Global Information Technology Management and the track chair for E-Commerce Technologies Management, as well as a world representative for the Information Resources Management Association. He has published in numerous leading scholarly and practitioner journals, presented at leading international scholarly conferences, and recently published his book E-Commerce: Opportunities and Challenges. He is the invited editor of a special issue of the Journal of Electronic Commerce Research on intelligent agents in e-commerce. Dr. Raisinghani was also selected by the National Science Foundation, after a nationwide search, to serve as one of the panelists on its Information Technology Research Panel and Electronic Commerce Research Panel. He serves on the editorial review boards of leading information systems publications and is included in the millennium edition of Who’s Who in the World, Who’s Who Among America’s Teachers and Who’s Who in Information Technology.
Index 535
A
administrative functions 468
administrative responsibility 169
adoption process 163
Agricultural Bank of China 156
anti-virus software 215
Apache Web server 381
application functionality 125
application portfolio 528
artificial intelligence 519
ATM network 379
automated encoding 33
automated imaging technology 29
automated systems 198
automated teller machine 150
automatic call distributor 245

B
bandwidth 434
bank 297
banking community 150
banking system 142
banking technology 152
budget 205, 427, 432
business communication 328
business functions 41
business operation 391
Business Process Redesign 531
business processes 79, 297, 404, 416
business strategies 190
business-to-business e-commerce 184

C
call management 252
change management 307
Chinese culture 146
Chinese government 141
client/server architecture 44
client/server technology 106, 230
client-led design 247
clinical management process 514
collaboration 317
commercial contract managers 261
Common Gateway Interfaces 44
communication 411
communication tools 86
communication types 339
communications technology 166
competition 361
competitive advantage 43, 391, 513, 518
competitors 391
computer integration 192
computer technology 362
computer-based tools 527
Conceptual Schema 280
connectivity 313, 365
consultancy 513
cooperative processing 107
corporate communications 386
cost-benefit analyses 391
county government 195
culture 90, 254, 305
customer interaction management 490
Customer Relationship Management 518
customer satisfaction 518
customer service 131

D
data center 273
data collection 202
data conversion 283
data flow diagrams 134
Data Gathering and Checklist Navigation 23
data processing 282
data processing personnel 274
data resources 280
data warehouse 487
data-processing environment 197
database 473
database management 108, 120
database technology 264
day care 210
decision-making process 456
deregulation 74
development cycle 411
digital library 346
direct deposit transactions 219
distributed database 482
distribution channels 487, 501
Dynamic Host Configuration Protocol 429

E
e-business 343
e-commerce 185, 376, 385
e-democracy 198
e-government 198
EDI adoption 184
EDP auditing 272
educational equipment 58
educational technologists 58
electronic access 353
electronic commerce 43, 111, 532
electronic data 184
electronic data interchange 76, 282
electronic journals 352
electronic libraries 62
electronic mail 427
electronic trading 379
engineering 103, 120
enterprise resource planning 109
enterprise-wide data warehouse 487
entity relationship diagram 86
environmental surveillance 20
environmental technologies 260
expense accounts 276
Extended Markup Language 264

F
fiber optic 430
file protection 365
file transfer software 215
financial industry 487
fiscal analysis 490
flow chart 252
food program 209
Ford Motor Company of Australia Limited 187
forecasting 33
foreign exchange 146
Fourth Generation Language 108
free market economy 499
funding strategies 345

G
Genesis Project 225
global economy 206
government 260, 297
government oversight agencies 209
graphical modeling 86

H
help desk 241
homelessness 162
hospital consultants 318
human capital investment 151
human resources 154, 179
human-centered 241
hypertext 60

I
Idaho National Engineering and Environmental Laboratory 260
image scanners 214
incompatibility risk 317
information access 417
information flow 204, 411
information management requirements 168
information requirements 106
information systems software 103
integrated health services 514
intellectual capital 518
Interactive Electronic Technical Manual 16
interlibrary loan 350
international airline 241
International Telecommunications Union 124
Internet 196
Internet shopping 75
Internet technologies 390
investment account 491
investment products 489
IP addresses 432
IS development 317
IS support 253
IT deployment 201, 206
IT help desk 241
IT/IS system 103
item processing 30

J
Java programming language 264, 398
joint application design 230
just-in-time manufacturing 77

K
knowledge management 410, 514

L
LAN-based systems 237
legacy systems 205
legislative oversight committees 228
library management 345
Library of Congress 348
local area networks 195, 364

M
maintenance 225, 379, 478, 527, 531
maintenance and troubleshooting 58
management information system 305, 401
manufacturing 108
manufacturing process 77
market economy 142
medical information technologies 513
medical practices 513
medical technology 514
mobile users 44
Mosaic™ browser 380
motivators 210
multi-line financial services 29
Multiple Listing Service 362
Multipoint Control Units 124

N
National Science Foundation 348
natural language processing 12
network of alliances 513
Nevada Department of Motor Vehicles 225
nonprofit company 209
not-for-profit organization 162

O
office automation 461
Online Computer Library Center 350
online courses 394
OpenNet management 267
operating funds 35
operating system 107
operational database 283
organizational restructuring 229
ownership risk 319

P
pension benefits 286
People’s Bank of China 142
personal growth 456
Personal Software Process 441
Personal Work Planning 440
personnel management 297
pharmaceutical 84, 410, 520
phone ordering system 500
point-of-sale systems 41, 475
Polish Public Administration 131
population expansion 195
population growth 226
portal technology 410
poverty 162
production planning 81
productivity 202
project leaders 236
project management 401
protocol suite 125

Q
quality 517

R
real estate market 360
real-time communication 122
reengineering 47, 131, 225, 297, 302, 499
regulatory 265
relational databases 266, 280
replication process 476
restructuring 308
retailers 501
retrieval 514
risk 321
risk management 316
risk-taking 303, 310
rule-based communication 122
runaway projects 272

S
savings and loans 273
scanner software 215
scheduling 514
secure transfer 123
service economy 195
service organization 456
service performance 198
service staff 385
shared application 126
shared workspace 119, 122
social factors 165
software applications 107
software designers 398
software development 264, 266
software engineering 440
software process improvement 441
speech recognition 12
SQL 476
stock exchange 376
stock trading market 377
strategic advantage 394
strategic alliances 513
strategic planning 264
strategy 297, 300, 307, 323, 337, 384, 499, 504
student satisfaction 449
subject delivery 392
subsidized nourishment 469
systems engineering 260

T
technology adoption 141, 152
technology transfer 151
technology-enabled reengineering 225
telecommunications 229, 526
time management 440
Total Quality Movement 531
tourist industry 212
TQM method 133
trading partner 184, 186
training 60, 174
training sessions 201
tree-based 260
tree-based structure 265
trust 184, 191
turnover 456

U
unemployment 162
union 456
user interface 107, 113
user training 283

V
value-added-network 191
value-adding skills 301
Virtual Academic Library Environment 346
virtual communities 416
Virtual Private Network 368
voice interactive technology 12, 15

W
Web development 376, 386
Web Discussion List 402
Web technology 373
Web-based portal 487
Web-based services 236
Web-based solutions 393
Web-based technology 519
Web-based user-interface 84
Web-enabled commerce 372
Web-enabled technology 529
wholesalers 501
wide area network 44, 431
Windows Internet Name Server 429
wireless technology 14
work quality 198

X
XML 264

Copyright © 2002, Idea Group Publishing.